We have come a long way since the 2014 Gamergate scandal. Since then, the targeting of women and marginalized groups online has been widely discussed, documented and reported on. As a result, in recent years, scores of actors, including international organizations, journalists, newsrooms, civil society organizations, academics and others, have engaged in coordinated efforts to identify ways to address these attacks and their impact on journalists.

And while there is certainly greater awareness of the harassment women journalists face online, that awareness has not curbed the sheer scale of attacks. In fact, online harassment and abuse of women journalists have become “more visible” and “more coordinated” in recent years, according to expert reports.

Among the most common forms of targeting are “direct or indirect threats of physical or sexual violence, offensive messages, and targeted harassment (often in the form of “pile on”, i.e., with multiple perpetrators coordinated against an individual), and privacy violations (such as stalking, non-consensual sharing of intimate images and “doxing”, i.e., publishing private information, such as the target’s home address).”

Much of this targeting takes place on social media platforms. In this day and age, as journalists, we are heavily reliant on these platforms, whether for research, audience engagement, keeping up with the news cycle, exposure for our work and so on. As such, platforms have been at the heart of discussions “about the need to develop tailored and effective solutions to address the problem of gender-based harassment and abuse on their platforms.” As threats continue to escalate and the number of journalists targeted online grows, the necessity for platforms to do more when journalists face online abuse has become even more evident.

So, what is missing?

A human task force

Experts and journalists who have spent recent years advocating with platforms for more coordinated action and measures against online harassment and abuse note that platforms now better understand the scale of harassment and threats online; however, they have so far failed to invest in the (human) resources needed to tackle harassment on their platforms effectively. We have yet to see platforms take steps to create a human task force that spans geographical divides, speaks multiple languages and is aware of local political, social and cultural sensitivities.

Instead, they continue to rely on third-party actors, such as civil society organizations and international non-governmental organizations, as “gatekeepers” between targeted users and the platforms.

In fact, this has been a recurring problem not only for journalists but also for activists, human rights defenders and other marginalized groups. For these groups, it has often been virtually impossible to reach and communicate with real individuals at these platforms when there is a problem. All too often, it takes an intervention by a third party with direct contacts at these platforms and companies to get an issue addressed and a specific threat resolved. And even then, it is the responsibility of targeted users to know where to seek help.

AI and the language issue

In addition, platforms’ overreliance on machine-learning algorithms raises several human rights concerns, particularly because these algorithms are shielded from any external review, negating the principles of transparency and accountability. As a report by the Electronic Frontier Foundation puts it, “civil society and governments have been denied access to the training data or basic assumptions driving the algorithms, and there has never been any sort of third-party audit of such technology.”

These AI systems are biased and do not necessarily scale better than human beings, especially when they have not been adapted to languages other than English. Yet again, it becomes crucial that platforms engage individuals with contextual knowledge, an understanding of political and social contexts, and command of local languages, who can act as shields, offer better protection and keep humans in the loop.

Restricted moderation practices

There is also a lack of action, or outright refusal, to take down content that genuinely threatens the safety of journalists. For example, when a news outlet reports to a platform that its journalists are being doxed, the platform may decline to remove the reported posts because, based on an automated decision, it does not consider them in violation of its community standards. A human task force could ensure that such decisions are made by an individual or a team aware of the political context and the kind of targeting the journalists are facing.

Platforms must also take a more proactive role in identifying and labeling the types of threats individuals can face on their services. Depending on whether an attack is coordinated or targets an individual in isolation, platforms should offer different mechanisms to mitigate it and assist the targeted user.

While all major platforms’ terms of service or community standards prohibit online attacks and harassment, as well as impersonation and other digital security attacks, the success rate, effectiveness and enforceability of such rules have repeatedly been questioned. For instance, “Facebook says it will remove harassment like threats and direct attacks against public figures; however, in recently leaked internal documents, the Guardian found that Facebook allows public figures to receive some kinds of targeted harassment, including ‘calls for their death’.”

Commitments

In March 2021, PEN America called on Facebook, Twitter and Instagram to “implement product design changes addressing online abuse.” Among the proactive and reactive measures recommended were shields “that enable users to proactively filter abusive content”; a documentation feature “that allows users to quickly and easily record evidence of abuse”; an “SOS button” to activate instantly in cases of severe abuse; and several others. In July 2021, the World Wide Web Foundation presented platforms and companies with 11 prototyped features that they could explore further to ensure better safety mechanisms.

In July 2021, Facebook, Google, TikTok, and Twitter committed themselves to better tackling the abuse of women on their platforms. While the list of commitments, such as “offering more granular settings (who can see, share, comment or reply to posts); offering users the ability to track and manage harassment reports; enabling greater capacity to address context and or language, etc,” is impressive, its success and its usability across countries and languages remain to be seen and evaluated.

A few months later, Facebook announced it would classify journalists, activists, and rights defenders as “involuntary” public figures, extending to them protections closer to those it offers private individuals rather than public figures.

In March of this year, coinciding with International Women’s Day, Google’s Jigsaw unit announced the launch of Harassment Manager, which for now is compatible with Twitter. “It is debuting as source code for developers to build on, then being launched as a functional application for Thomson Reuters Foundation journalists in June [2022],” reported The Verge. Also in March of this year, the Meta Journalism Project, in partnership with ICFJ, launched a free digital security course for journalists. “This represents Meta’s ongoing commitment to support the safety of journalists and those who defend human rights around the world, the course will equip them with a strong foundation in digital security,” said Anjali Kapoor, Director of News Partnerships at Meta Asia Pacific.

What’s next

With greater scrutiny comes greater responsibility, and the platforms, aware of their growing role in ensuring the safety of their users, are taking steps. But whether these steps are enough is an important question to ask. Changes at the policy level are important, but so are changes in the design and solution mechanisms offered. There have been plenty of recommendations made to the platforms on what they can and should do. Offering training on digital security is an excellent initiative in terms of raising awareness and educating journalists about their safety, but a strong account password is not, and will not be, enough to prevent harassment or targeting. If anything, it indicates a lack of awareness and understanding of the threats journalists face today. There are already numerous courses, manuals and online workshops on digital safety offered to journalists. What journalists, and specifically women journalists, need are better tools that effectively mitigate the risks of being targeted on these platforms: when their profession is on the line, when they receive death and rape threats, or when they are targeted by inauthentic accounts. Speaking their language and understanding the contexts within which journalists operate are important. Offering an immediate response, rather than delayed decisions over whether or not reported content with profane language violates community standards, is what is needed.

Finally, platforms that understand the role they play in countries with repressive regimes, where they are often the only lifeline for documenting human rights abuses, publishing news and calling for accountability and transparency, can do much more. The question is, will they?
