How targeted misinformation works on Indian social media

If those with massive resources can robotically flood the medium with propaganda, then how democratic is social media?

September 26, 2020 04:30 pm | Updated September 27, 2020 12:04 pm IST

Facebook’s engineers in California monitor Indian election-related content in April 2019

The 2014 Maharashtra Assembly elections were just around the corner. Arjun Sharma*, a student in Mumbai, was looking for some quick money when a friend approached him with a lucrative offer. The job required Sharma to create several seemingly authentic Twitter accounts.

“I created profiles and made them look real. For instance, to act as a cricket lover, I would make cricket-related posts and use a relevant profile picture,” he says. Within weeks, he was handling some 15 accounts he had created. He worked with a few other people. “A guy gave us fodder for daily posts,” he says. The accounts actively posted content about social and civic issues, and were directed against the BJP and its candidate Ashish Shelar, who was then contesting against the Congress’ Baba Siddique for the Bandra West seat. Shelar won by over 25,000 votes. Sharma was paid in cash for his stint.

Six years on, organised misinformation is more than just a digital issue. It is an extension of our very society, reflecting its lopsided keel. Even as social media posits itself as a democratic tool, it is worth questioning if its reach is indeed egalitarian, as those with massive resources robotically flood the medium with propaganda, leaving only an insignificant slice free for independent voices. When it comes to policy implementation by social media platforms, the concept of fostering equal speech — moving beyond just free speech — gets muddled, says Raheel Khursheed, former head of News, Politics and Government for Twitter India. Those with more resources enjoy the power to steer the online discourse, while the individual voice, even if free, gets buried in the noise.

Society’s worst

A still from the docudrama The Social Dilemma

In the new Netflix documentary, The Social Dilemma, Tristan Harris, ethicist, computer scientist and former Google employee, articulates a fundamental issue — the existential threat to society isn’t technology itself, but its ability to bring out the worst in society. Changing this challenges the very business model of the platforms, which is focussed on monetisation of information. Actually, says Harris in the film, “It’s a disinformation-for-profit business model.” “If I want to manipulate an election, I can now go into a conspiracy theory group on Facebook, and find people who believe the world is completely flat... and I can tell Facebook (to) give me 1,000 users like that.”


A study done by Devesh Kumar, a professional coder and developer, is telling. In September 2019, he started mapping replies to some popular tweets and categorised them into three types — tweets by the user’s followers or other verified accounts, tweets by popular troll accounts, and tweets by unknown accounts. The third group interested him the most. Using scraping tools like Twint and Twitter’s own API, he prepared a database of 2.3 lakh accounts with heavy political content. He found that 89% of the accounts were created between June and July 2013, less than a year before the 2014 Lok Sabha election. Some 74% of these accounts had about 5,000 posts, averaging one tweet every 20 minutes; 18% had tweeted more than 60,000 times, averaging one tweet every eight minutes.
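Kumar has not published his pipeline, but the account-level metrics he cites are easy to recompute from such a dataset. The sketch below is a minimal Python illustration, assuming profile records have already been collected with a tool like Twint or Twitter’s API; the field names, handles and figures in it are hypothetical.

    from collections import Counter
    from datetime import datetime

    # Hypothetical record shape: each entry holds the profile metadata that a
    # scraper such as Twint, or Twitter's own API, can return for an account.
    accounts = [
        {"handle": "cricketfan_21", "created_at": "2013-06-14", "tweet_count": 61250},
        {"handle": "proud_voter99", "created_at": "2013-07-02", "tweet_count": 5030},
        # ... roughly 2.3 lakh such records in Kumar's database
    ]

    def creation_month(account):
        """Bucket an account by the month it was created, e.g. '2013-06'."""
        return account["created_at"][:7]

    def avg_minutes_per_tweet(account, as_of="2019-09-01"):
        """Average gap between tweets over the account's lifetime, in minutes."""
        created = datetime.fromisoformat(account["created_at"])
        lifetime_minutes = (datetime.fromisoformat(as_of) - created).total_seconds() / 60
        return lifetime_minutes / max(account["tweet_count"], 1)

    # Distribution of creation months (Kumar found 89% clustered in June-July 2013)
    print(Counter(creation_month(a) for a in accounts).most_common(5))

    # Posting intensity of each account
    for a in accounts:
        print(a["handle"], round(avg_minutes_per_tweet(a)), "minutes per tweet on average")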

Networks to manipulate

Kumar’s second study sheds light on information manipulation. Tweet speed — the total number of tweets divided by the account’s age — is key here. A political account that tweets 600-700 times in less than a day appears unnatural, says Kumar. His research underscored what he calls ‘seed accounts’ — profiles that swamp the platform with specific content through their networks to manipulate information. Of the 1,22,023 accounts analysed for Congress-related content, 147 — or 0.1% — fell into the category of seed accounts. For BJP-related content, 7% of the 2,71,579 accounts studied fell into the same category. While seed accounts for the Congress worked in a linear manner, the BJP’s were branched and functioned in clusters, providing more penetration. Some were even region-specific. While most suspected BJP seed accounts were followed by prominent party leaders, this was hardly seen with Congress accounts, says Kumar.
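The tweet-speed metric itself is simple arithmetic: an account’s total number of tweets divided by its age. Below is a minimal sketch of how unusually fast accounts might be shortlisted as candidate seed accounts; the cut-off is an assumption made for illustration, not a threshold reported in the article.

    from datetime import datetime

    def tweet_speed(total_tweets, created_at, as_of="2019-12-31"):
        """Tweet speed as Kumar describes it: total tweets divided by account age (in days)."""
        age_days = (datetime.fromisoformat(as_of) - datetime.fromisoformat(created_at)).days
        return total_tweets / max(age_days, 1)

    # An account that posts 650 times within a day of being created looks unnatural.
    print(tweet_speed(650, "2019-12-30"))   # -> 650.0 tweets per day

    # Illustrative shortlist of candidate 'seed accounts'; the cut-off is assumed.
    SPEED_CUTOFF = 200.0

    def is_candidate_seed(total_tweets, created_at):
        return tweet_speed(total_tweets, created_at) >= SPEED_CUTOFF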

Social networking has an insidious and global reach

This research is hardly representative of the complete picture, but it throws up questions about the very fabric of social media. Take the case of the Deepika Padukone-starrer Chhapaak. The movie, released two days after the actor’s much-publicised visit to Jawaharlal Nehru University to express solidarity with students there protesting against the Citizenship (Amendment) Act (CAA), faced a huge backlash that foreshadowed its fate.


Soon after Deepika’s visit, #BoycottChhapaak started trending on Twitter, with users accusing the filmmakers of giving a ‘false’ Hindu identity to the man who attacked Laxmi Agarwal, the acid attack survivor on whom the movie is based. Alt News exposed the fake claim and carried a report that said: “Several accounts that spread misinformation about the film were anonymous. The followers of these users range from a few hundred to several thousand, and together their reach is significant.” As per the report, over 80 of the Twitter users who shared targeted misinformation about the movie were followed by Prime Minister Narendra Modi.

Pratik Sinha, founder of Alt News, says that the targets of bot accounts on social media platforms are often individuals. “They typically delegitimise the individual. When such accounts are followed by the Prime Minister, the issue becomes far more serious.”

 

A 2018 study by the BBC, titled ‘Duty, Identity, Credibility: Fake news and the ordinary citizen in India’, found that right-leaning messages on Twitter were disseminated in a much more organised manner. The political messaging by these accounts appeared to focus on an anti-minority stance, an inflated sense of Hindu power and supremacy, pride in the Vedic age and traditions, or India’s superiority over the West.

Twitter is not the only platform rife with targeted misinformation. A survey by Social Media Matters and the Institute for Governance, Policies, and Politics, published in April 2019, found that over 50% of Indians surveyed had received fake news on social media platforms ahead of the 2019 Lok Sabha election — with Facebook and WhatsApp leading as the mediums of circulation. About 41% of respondents said the misinformation they came across took the form of ‘hurtful comments about a particular community’.

Is it correct to term such organised groups ‘bots’? The term is often misinterpreted. A bot account is not necessarily one operated by a computer programme; it may also be part of a syndicate of profiles, run by real people, that publishes targeted content. Kumar points to an interesting Twitter trend: “A lot of political trends start between 2 a.m. and 4 a.m. when activity remains light on Twitter, so a base network is created to set the tone for the day, and it can then become organic.”

‘Share’ psychology

While research into bot accounts and dissemination of targeted content remains deeply incomplete due to its complex nature, what also eludes scholars is the psychology behind what people share, and why, in politically charged societies.

A study published in the Proceedings of the National Academy of Sciences of the United States of America indicates that moral-emotional words in messages increased their dissemination by about 20% for each additional word. According to social psychologist Jonathan Haidt, moral-emotional feelings can be categorised into four families: the ‘other-condemning’ family (contempt, anger, disgust), the ‘self-conscious’ family (shame, embarrassment, guilt), the ‘other-suffering’ family (compassion and sympathy), and the ‘other-praising’ family (gratitude, awe, and elevation).
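Read multiplicatively, that 20% lift compounds with every additional moral-emotional word. The quick illustration below works out the arithmetic; the numbers are an extrapolation, not figures from the paper.

    # If each moral-emotional word lifts expected dissemination by roughly 20%,
    # the effect compounds multiplicatively with the number of such words.
    def expected_reach_multiplier(n_words, per_word_lift=0.20):
        return (1 + per_word_lift) ** n_words

    for n in range(5):
        print(n, round(expected_reach_multiplier(n), 2))
    # 0 -> 1.0, 1 -> 1.2, 2 -> 1.44, 3 -> 1.73, 4 -> 2.07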

Of the tweets analysed by Kumar between September and December 2019, 33% featured words like ‘Pakistan’, ‘Pak’ and ‘Islamic Republic of Pakistan’. Other popular words were ‘Hindustan’, ‘Hindutva’, ‘NaMo’, ‘Narendra Modi’, ‘Shri Ram’ and ‘Ram Mandir’. From January 1 to August 31 this year, words targeting Islam formed the bulk of the content for 30% of the accounts analysed. Terms linked to PM Modi took up 20%, those in support of the CAA and NRC formed 14%, and Ayodhya and Babri Masjid-related words figured in 11% of the tweets studied.
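The keyword shares Kumar reports amount to straightforward term counting over the scraped tweets. A minimal sketch follows; the keyword groups and sample tweets are illustrative, not Kumar’s actual lists.

    # Illustrative keyword groups; substring matching keeps the sketch short,
    # a real pipeline would tokenise and handle Hindi/Devanagari variants too.
    KEYWORD_GROUPS = {
        "pakistan": ["pakistan", " pak "],
        "modi": ["namo", "narendra modi"],
        "ayodhya": ["shri ram", "ram mandir", "babri masjid", "ayodhya"],
    }

    def group_shares(tweets):
        """Fraction of tweets mentioning at least one keyword from each group."""
        counts = {group: 0 for group in KEYWORD_GROUPS}
        for text in tweets:
            lower = text.lower()
            for group, words in KEYWORD_GROUPS.items():
                if any(w in lower for w in words):
                    counts[group] += 1
        total = max(len(tweets), 1)
        return {group: n / total for group, n in counts.items()}

    sample = ["Ram Mandir bhoomi pujan today", "NaMo leads again", "Weather is lovely"]
    print(group_shares(sample))   # -> pakistan 0.0, modi 0.33, ayodhya 0.33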

Sinha says messages with images and videos get huge traction on social media. A study published in the Harvard Kennedy School’s Misinformation Review analysed politically oriented WhatsApp groups in India in the period before the 2019 Lok Sabha polls and found that 10% of the images circulated were known misinformation.

Far from its rallying cry of being a democratic tool, social media now seems like a well-oiled machine roaring away for those who wield the most power in real societies. Do more resources, then, warrant more control over the space? More importantly, are organisations doing enough to re-empower the individual? According to the latest Twitter Transparency Report, covering July-December 2019, the platform saw a 52% increase in global spam reports compared to the previous reporting period.

Far from benign

Khursheed says that between 2014 and now, Twitter has moved far from its benign existence in India. “It is very different now in terms of its reach, capability, and how many people use the platform,” he says. But the problems in the space, in India, need to be tackled with a focused lens, he says.

“In the Indian context, the tendency is to treat it as a monolith. Take the case of Europe. Every country, like Italy, Spain and France, has an individual team. In India, even though it is so diverse, there is a single team sitting in Delhi. My key criticism of social media platforms is that while they tend to think global, they are incredibly obsessed with their headquarters, which are in San Francisco, and even policy changes cater to those markets,” says Khursheed.

In response to a query, Twitter told The Hindu, “By using technology and human review in concert, Twitter proactively tackles attempts at platform manipulation and mitigates them at scale by actioning millions of accounts each week for violating our policies in this area.” Facebook was unavailable for comment.

Organised misinformation is more than just a digital issue. It is an extension of our very society.

 

On June 26 this year, the shares of both Facebook Inc. and Twitter Inc. plunged over 7% after consumer goods company Unilever joined a host of other giants in boycotting advertising on the platforms due to their inaction on hate speech. “The issue of hate speech in India started way, way, before the U.S.,” Khursheed points out. “If our regular society is not equitable and broad-based, how can we expect social media to not reflect those iniquities?”

India cannot afford to sweep the digital problem under the carpet any longer. An egalitarian society cannot exist in isolation; it needs to be enabled by the very people it seeks to represent. The same holds true for social media. It is, after all, an extension of our physical world. Or, at least, that is what it promises to be.

(*Name changed on request)
