Any reconceptualization of the Internet requires rethinking incentives not only for content creation but also for content discovery and amplification. Academic and civic attention has been devoted to platforms’ amplification algorithms. But in countries such as India, content is amplified in closed messaging apps such as WhatsApp even in the absence of platform-enabled amplification. The lack of centralised moderation makes information consumers the primary line of defence against low-quality information. An implicit assumption in this framing is that people are highly susceptible to misinformation.
Social media and other informational platforms can be considered networked social systems in which truth functions as a public good. Information sharing can also be driven by social, legal, and financial incentives. Keeping these in mind, we aimed to understand whether incentivizing people to share “good,” factual content and disincentivizing the sharing of misinformation, either through micropayments or through social feedback, reduced the sharing of misinformation.
Tattle, in partnership with Monk Prayogshala, executed this project. It was funded by Grant for the Web.
First, participants responded to a demographic form, a short questionnaire, and a bot test. Those who fulfilled the inclusion criteria and consented to participate were invited to take part in the experiment. Participants were shown a total of 25 messages: five on the first day (baseline) and ten on each of the two days after that. The order of the messages was randomised and counterbalanced. That is, all participants saw all five types of messages across the three days: plausible, implausible, true, false, and ‘wholesome.’ For more information on how this content was created, you can refer to this blog. After this, participants completed a Qualtrics survey that included post-task questions.
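To make the allocation concrete, the design above (five message types, 25 messages split 5/10/10 over three days, in randomised order) can be sketched as follows. This is an illustrative sketch only: the function name, the per-participant seeding scheme, and the equal count of five messages per type are our assumptions, not the study's actual implementation, and full counterbalancing of order across participants would require a more elaborate scheme (e.g. a Latin square).

```python
import random

# Message types from the study design (illustrative sketch;
# counts per type and seeding are assumptions, not the authors' code).
MESSAGE_TYPES = ["plausible", "implausible", "true", "false", "wholesome"]

def build_schedule(participant_id, messages_per_day=(5, 10, 10), seed=0):
    """Assign 25 messages (5 of each type) across 3 days in random order."""
    # Deterministic per-participant RNG so each participant gets a
    # reproducible but different ordering.
    rng = random.Random(1000 * seed + participant_id)
    pool = [t for t in MESSAGE_TYPES for _ in range(5)]  # 25 messages total
    rng.shuffle(pool)
    schedule, start = [], 0
    for n in messages_per_day:
        schedule.append(pool[start:start + n])
        start += n
    return schedule
```

For example, `build_schedule(7)` returns three lists of 5, 10, and 10 message types, with each of the five types appearing exactly five times overall.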
Results showed that incentivization, regardless of type, encouraged people to share more true information, although the type of incentive (monetary or social) did not influence participants’ sharing behavior on the platform. Although we did not achieve the result we were expecting, we discovered interesting insights into how people react to posts using emojis and how certain individual characteristics influenced their engagement with posts. The results show that people with conservative political beliefs were more likely to react to posts (using the happy, sad, and/or disgust emojis). Men and women did not differ in their sharing behavior; however, older individuals were more likely to share posts than younger participants. One’s political ideology was also related to the sharing of true, plausible, implausible, and wholesome posts, as well as to whether they chose to ‘read more’ about a given post. We also found that individuals who chose to ‘read more’ about a post were more likely to share it.
Results from the follow-up survey (conducted after participants completed all three days of the study) showed that individuals spent time contemplating the posts shown to them and thought about the consequences for their friends/family, others, and people they disliked, if the messages were true. On average, individuals in both the social- and financial-incentive groups were aware of how they gained and lost their followers/money. However, it was interesting to note that more individuals from the social-incentive group thought about their incentives (gaining/losing followers) while sharing messages than individuals from the financial-incentive group did (gaining/losing money).
We aim to continue doing this kind of research on incentives and content creation and sharing. Anybody who is interested in this research area, keen to learn more about the field, or willing to support us in any other way can contact us.