A few months from now, the draft of the Digital India Act, expected to be the most significant piece of IT legislation in India, will be out for discussion. The Act will have a huge impact on how citizens use technology and online services, and social media companies like Twitter and Facebook may be held accountable for whatever is posted on their sites.
According to Union IT Minister Rajeev Chandrasekhar, the government is reviewing the “safe harbour” clause in the Information Technology Act, 2000, which provides legal immunity to platforms against content shared by their users.
During a presentation in Bengaluru, the minister asked if “online intermediaries should be entitled to safe harbour at all”. He said the platforms for which the safe harbour concept was applied back in the 2000s have now “morphed into multiple types of participants and platforms on the internet, functionally very different from each other, and requiring different types of guardrails and regulatory requirements”.
Social media, cloud computing, metaverse, blockchain, cryptocurrency, deep fakes and doxxing, all will be covered under the new Act, which will replace the decades-old IT Act, 2000.
What is ‘safe harbour’?
*One of the most debated issues is ‘safe harbour’ for social media intermediaries. The concept affects social media, e-commerce and AI-based platforms.
*Under the safe harbour principle, an online platform such as Facebook or Twitter cannot be held accountable for content posted on it by users. The government is debating whether such platforms should continue to have zero liability for what users post.
*The safe harbour provision is laid down in Section 79 of the IT Act, 2000. It states that “an intermediary shall not be liable for any third-party information, data, or communication link made available or hosted by him”.
*But safe harbour is conditional. Section 79 states that it will not apply if the intermediary “fails to expeditiously” take down a post or remove particular content even after the government flags that the information is being used to commit an unlawful act.
*Who is responsible for taking down harmful content online while still ensuring that free speech is protected?
*The government believes there should be no free pass to social media companies and ‘safe harbour’ cannot be an excuse to let harmful posts remain. Experts say safe harbour has often led to a lack of content moderation, inadequate fact-checking, and content violations on platforms.
*Last year, the government mandated, through the IT Rules, 2021, that social media platforms must appoint a Chief Compliance Officer (CCO), a Resident Grievance Officer (RGO), and a Nodal Contact Person.
*Under the new Digital India law, each intermediary category will be subject to new regulations with a heavy focus on fact-checking to prevent misinformation or misuse of data.
*These platforms will now be held accountable for any content violations or cybercrimes that occur on their websites.
*The government says the “weaponisation of misinformation” will not be allowed.
*That extends to deep fakes, which use a form of artificial intelligence called deep learning to put words in people’s mouths, make a person appear to star in a favourite movie, and more. Many point out that deep fakes make it easy to impersonate someone and violate their privacy.
*The other practices in focus are doxxing, a form of online harassment in which someone’s personal details, such as their name, address and job, are publicly revealed, and phishing, online attacks that steal user data, including login credentials and credit card numbers.