Mainstream Weekly


Mainstream, VOL LVII No 20 New Delhi May 4, 2019

Is the ‘Code of Ethics’ for Social Media an Effective Deterrent?

Saturday 4 May 2019

by T. Sadashivam and Shahla Tabassum

The Lok Sabha elections of 2019 may well be called the first ‘Digital Election’, because half of our electorate are internet users, and almost all of them use social media (Facebook, Twitter, Instagram, etc.), including the messaging application WhatsApp. In this scenario, the biggest concern is how to prevent abuse or misuse of social media and its influence on the election. In this regard, the Election Commission of India (ECI) asked the internet and social media companies and the IAMAI (Internet and Mobile Association of India) to come out with a voluntary ‘Code of Ethics’ on the lines of the Model Code of Conduct. As a result, these stakeholders announced a ‘Code of Ethics for the General Elections 2019’ on March 20, 2019. Earlier, in October 2013, the ECI had issued instructions to candidates and political parties on the use of social media in election campaigning; however, there was then no active role for the social media platforms in partnership with the ECI of the kind we see in the 2019 general elections. The voluntary ‘Code of Ethics’ on the part of the social media platforms is a good beginning. However, there remain some challenges which the present ‘Code of Ethics’ does not address.

Lack of Verified Accounts

There exist a large number of fake accounts on social media (especially Facebook and Twitter), and these companies have been unable to stop or remove the accounts from which hate speech, fake news and misinformation are spread. Instead, they follow an elitist approach while verifying users’ accounts. For instance, Twitter verifies the accounts of politicians, film stars, sports personalities and the like, but not those of the common Twitterati, that is, ordinary people who use Twitter. So the question arises: why this elitist approach? To stop fake news and the rest, it would be better to verify the accounts of all users. This would help in knowing who is spreading hate speech, fake news or misinformation. A research study by MIT researchers (Sinan Aral, Soroush Vosoughi and Deb Roy), published in the journal Science in 2018, found that false or fake news spreads ‘farther, faster, deeper, and more broadly’ than real news. Analysing 1,26,000 true and false stories on Twitter between 2006 and 2017, the researchers found that fake stories were 70 per cent more likely to be retweeted than true news, and that it took true news six times longer than fake news to reach 1500 people. (Desikan, Shubashree 2018)

WhatsApp Still Out of Control

Although, through the use of Artificial Intelligence and human content reviewers, fake news and propaganda on Facebook and Twitter can be identified, what about WhatsApp? Its end-to-end encryption makes it easy to spread fake news quickly, with almost no monitoring; even for law enforcement agencies and for WhatsApp itself, it is very difficult to stop. A grim example is the mob-lynching incidents triggered by fake news spread on the platform, which led to many deaths in India. Similarly, it will not be easy to effectively implement, on the WhatsApp platform, Section 126 of the Representation of the People Act, 1951, which prohibits advertising and campaigning on television and other electronic media during the 48 hours before voting, the so-called ‘Silent Period’. To its credit, WhatsApp last year (2018) took out advertisements in several daily newspapers in India to spread public awareness about fake news, and it has restricted message forwarding on its platform to five chats at a time. However, these initiatives have not had much visible positive impact.

Social Media Expenditure still Exclusive

This time, the expenditure on social media by candidates and political parties will be included in their overall expenditure accounts. However, the question remains about the large number of Facebook and Instagram pages which act as proxies for candidates or political parties. According to Alt News (a fact-checking organisation), between February 7 and March 2, 2019, there were 126 Facebook or Instagram pages which declared that they had no direct link with any political party, yet ran advertisements (related to politics and issues of national importance) supporting a particular party, spending more than Rs 77 lakh. Furthermore, the report highlighted another problem: many ads ran without a ‘Disclaimer’, so it is unclear who (whether an individual or an organisation) sponsored them. (Chaudhuri, Pooja 2019)

Difficulty in Removing Objectionable Content within the Time Limit

It is doubtful that objectionable content posted on the social media platforms will be removed within three hours of a violation being reported by the ECI under Section 126 of the Representation of the People Act, 1951. Past experience shows delays on the part of the social media platforms in taking down objectionable content. This happened even with the recent deadly attacks on two mosques in New Zealand that were live-streamed on Facebook: by the time the video was removed, it had been viewed about 4000 times and copied and posted on other platforms, and some users reposted it with modifications so as to avoid detection. The Global Internet Forum to Counter Terrorism identified 800 different versions of the attack video. Within 24 hours Facebook removed 1.5 million videos, of which 1.2 million were blocked at the point of upload, while another 300,000 bypassed the filtering system and were manually removed by moderators, a failure rate of 20 per cent. (Waterson, Jim 2019) In this situation, the question arises: how effective will the three-hour time limit be for the social media platforms, and what options are left to them if objectionable content is repeatedly posted by users? Are they ready to take punitive action against users who misuse their platforms for election or political purposes? The ‘Code of Ethics’ says nothing about how to control persons other than candidates and political parties who post content relating to the election campaigning of political parties and candidates.

Overall, we see that the voluntary ‘Code of Ethics’ mentions the Sinha Committee recommendations. This Committee was set up on January 8, 2018 to review and suggest changes to the Model Code of Conduct, Section 126 and other sections of the Representation of the People Act, 1951; it submitted its report to the ECI on January 10, 2019. Unfortunately, the report is not available in the public domain, apart from a few recommendations published in the newspapers. This is regrettable, because the committee was set up precisely in the wake of the rapid expansion of the media, especially social media, in the country, and the right time to make the full report public would be before the first digital general elections begin. Keeping out fake news, hate speech and misinformation in order to maintain a free and fair 2019 general election is a big challenge ahead for the social media platforms. Only time will tell how far such efforts succeed.

References

1. Chaudhuri, Pooja (2019), ‘Alt News Analysis: Pro-BJP pages account for 70 per cent of ad spending made public by Facebook’, Alt News, March 9, retrieved from https://www.altnews.in/alt-news-analysis-pro-bjp-pages-account-for-70-of-ad-spending-made-public-by-facebook/, on March 20, 2019.

2. Desikan, Shubashree (2018), ‘When truth loses’, The Hindu, March 21, Delhi.

3. Waterson, Jim (2019), ‘Facebook removed 1.5m videos of New Zealand attack in first 24 hours’, The Guardian, March 17, retrieved from https://www.theguardian.com/world/2019/mar/17/facebook-removed-15m-videos-new-zealand-terror-attack, on March 22, 2019.

Dr T. Sadashivam is an Assistant Professor, Department of Public Administration, Pachhunga University College (PUC) (the only constituent college of the Mizoram Central University), Aizawl. He can be contacted at e-mail sadajmi[at]gmail.com

Dr Shahla Tabassum is an Assistant Professor, Department of Political Science, Zakir Husain Delhi College Evening (ZHDCE), University of Delhi. She can be contacted at e-mail shahlajmi[at]gmail.com
