Tattle’s workshops on Denormalising Online Abuse, which address language-based abuse and slur words rampant in the social media ecosystem, have provided an opportunity to better understand the social behaviours of young people online. These workshops form a pedagogical track of the Uli project, for which we built a browser plugin that redacts slur words to make online spaces safer. After conducting a series of these workshops with young women in academic institutions in Delhi, we were invited to conduct them at Kumaraguru College of Liberal Arts and Sciences in Coimbatore, Tamil Nadu, in January this year.
Over three days, we conducted workshops with students of Visual Communications, Psychology, Sports Sciences, Economics, Business Management, Commerce and Tamil Creative Writing. The two-hour interactive workshop is designed in three segments – understanding online behaviours, explaining forms of abuse, and denormalising language-based abuse.
Students from Psychology and Tamil Creative Writing engaged most actively with the workshop. Students from other departments, however, were more comfortable discussing sensitive topics in smaller groups than speaking up individually. Within these conversations, a clear gender divide emerged. Young women described having to limit their online presence or tactfully navigate unsolicited advances from strangers on social media to keep themselves safe. Young men, by contrast, tended to highlight the more positive dimensions of social media and largely did not have to contend with gendered abuse. A majority of male students also considered toxic behaviour in online gaming to be normal, while women largely disagreed.
In another instance, when we asked students to share their opinions on reporting tools on social media platforms, the gender divide was apparent yet again: men largely did not see the need to use them, while women used them regularly.
In an activity where we asked students to share an incident of online abuse they had heard of or witnessed, the responses ranged widely: national cybercrime helplines proving unhelpful in times of need, social media reporting mechanisms being vague and unresponsive, hacking that culminated in financial fraud, morphed images, and the recent case of a man in Kerala dying by suicide after a viral video alleged that he had been sexually harassing a woman on a bus. These responses underscore the high-stakes forms of online abuse that young people are exposed to daily.
The workshop also opened a brief but important dialogue around gender sensitivity and online toxicity at the intersection of marginalised communities. Our insight from interactions with over 340 students who participated in these workshops was that there is a resistance, or rather a discomfort, in speaking of caste-, religion- or class-based marginality. Gender-based marginalisation, however, drew strong and varied opinions from both men and women, with some participants questioning whether such abuse is overhyped and others firmly asserting its severity. This tension offered a revealing glimpse into how young people privately imagine and negotiate the boundaries of online abuse. At the same time, there was considerable inhibition while discussing and denormalising slur words in the Tamil language.
The range of Tamil slur words from the Uli dataset – crowdsourced from a wide variety of academics, researchers, social activists, journalists and linguistic experts – was met with a mixture of shock and nervous chuckles, as participants had not expected these linguistic abuses to be decoded in a classroom setting. A critical aspect of our workshop is to denormalise and decode the language-based abuse and hate speech rampant on social media, walking students through what these slur words signify, whom they target (primarily marginalised communities) and why it matters that they be denormalised. This exercise was significantly challenging, as it was the first time participants had discussed abuse in an academic institution. The professors observing these workshops, however, found it an inherently useful exercise.
Mental health emerged as a recurring and necessary thread throughout these sessions, reflecting its growing urgency as a lens through which to understand young people's experiences on social media and in online gaming spaces.
In the final activity, we divided the participants into smaller groups and tasked each with strategising content moderation for a social media platform of their choosing, to safeguard a marginalised community they wanted to protect. A majority of the groups chose to protect children from online abuse; one group, led mainly by women, chose men’s mental health. Their proposals centred on restricting abusive language and imagery, which opened up a broader discussion on whether restriction is an effective or sufficient response to online harm.
Overall, the discussions were deeply meaningful and helped us engage with students in a context-based way; they came from different parts of southern India, including Tamil Nadu, Kerala, Hyderabad, Andhra Pradesh and Karnataka. What these three days in Coimbatore reaffirmed is that conversations about online abuse cannot be one-size-fits-all. The cultural, linguistic, and gendered contexts that students bring into the room fundamentally shape what they are willing to name, what they are able to hear, and what they are ready to challenge. Their diversity enriched these discussions considerably, surfacing regional and community-specific nuances.
Perhaps most importantly, the workshop demonstrated that young people are not passive consumers of online toxicity. They observe it, are affected by it, and, when given the tools and the space, are more than capable of thinking critically about how to address it. The goal of the Uli project is to continue building tools and spaces, in classrooms and in communities, that make online spaces safer for young people, so that the work of denormalising online abuse goes beyond the workshop.
