The Global Internet Forum to Counter Terrorism: regulating expression online, without accountability?


An article in Slate looks at the workings of the Global Internet Forum to Counter Terrorism (GIFCT), a global, informal body set up in 2017 by Facebook, Microsoft, Twitter and YouTube in response to government pressure to 'do something' about hate speech and extremist content online. Are its methods for regulating online speech transparent, accountable and under democratic control? The excerpt below centres on the shared hash database through which member platforms coordinate removals; an illustrative sketch of how such hash-matching works follows the excerpt.


The Future of Free Speech Online May Depend on This Database (Slate, link, emphasis added):

"The GIFCT has gone largely unnoticed by the public since it was established in 2017 by Facebook, Microsoft, Twitter, and YouTube. One reason is that what it does is complicated. (Another may be its “terrible acronym that no one can remember or pronounce,” says Daphne Keller, director of the Program on Platform Regulation at Stanford’s Cyber Policy Center.) Yet while onlookers pay close attention to Facebook’s Oversight Board, or its civil rights audit, or Twitter’s warning labels on President Donald Trump’s tweets, the GIFCT is making some of the most consequential decisions in online speech governance without the scrutiny of the public.

The GIFCT started as an industry response to pressures from governments, and especially from European Union legislators, to assume greater responsibility in countering terrorism online after attacks in Paris and Brussels. The basic goal was to coordinate content removal across different services. So together, the tech giants set content norms by creating a hash database of what they consider “violent terrorist imagery and propaganda.” This database, which serves as a kind of blacklist, helps other member sites—many with modest content teams—moderate their own platforms... Other than this basic structure, however, the inner workings of the GIFCT are opaque. Even as the coalition transitions to an independent organization, we still don’t know how individual platforms use the database. Are uploads blocked immediately? Do the platforms check each piece of content? It’s unclear.

(...)

Many of the problems with the GIFCT’s arrangement lie in its opacity. None of the content decisions are transparent, and researchers don’t have access to the hash database. As Keller recently laid out, the GIFCT sets the rules for “violent extremist” speech in private, so it defines what is and isn’t terrorist content without accountability. That’s a serious problem, in part because content moderation mistakes and biases are inevitable. The GIFCT may very well be blocking satire, reporting on terrorism, and documentation of human rights abuses.

(...)

Now, those concerns are compounded by the risks of extralegal censorship. Spurred by the Christchurch Call, a political summit of countries and tech companies to combat terrorist content online after the New Zealand mosque shootings, the GIFCT announced in 2019 that it would overhaul its internal structure. One component of that, the Independent Advisory Committee, includes government officials from Canada, France, Japan, Kenya, New Zealand, the United Kingdom, and the United States. Again, in theory the inclusion of government would make the GIFCT’s work more democratically accountable. Yet it could also be an opportunity, as Emma Llansó, the director of the Free Expression Project at the Center for Democracy and Technology, said, for governments to have “just as much power as they usually do and none of the accountability.”
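
To make the mechanism the article describes more concrete, here is a minimal Python sketch of a hash-and-lookup flow of the kind a shared hash database implies. It is an illustration under stated assumptions, not GIFCT's implementation: the real database reportedly uses proprietary perceptual hashes (fingerprints that can match visually similar media, not only byte-identical files), its contents are not public, and, as the article stresses, it is unknown how platforms act on a match. The names used here (`shared_hash_database`, `check_upload`) are hypothetical.

```python
import hashlib

# Illustrative only. GIFCT-style systems reportedly use perceptual hashes
# (e.g. PhotoDNA-like fingerprints that survive re-encoding); the simple
# cryptographic hash below only matches byte-identical files.

def content_hash(content: bytes) -> str:
    """Return a hex digest identifying this exact byte sequence."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical shared blocklist: hashes of items member platforms have
# flagged as "violent terrorist imagery and propaganda".
shared_hash_database = {
    content_hash(b"example of previously flagged content"),
}

def check_upload(content: bytes) -> bool:
    """Return True if an upload matches a hash in the shared database.

    What a platform does with a match (block the upload outright, queue
    it for human review, or merely log it) is precisely the detail the
    article notes is not publicly known.
    """
    return content_hash(content) in shared_hash_database

if __name__ == "__main__":
    print(check_upload(b"example of previously flagged content"))  # True
    print(check_upload(b"some unrelated upload"))                  # False
```

The sketch also makes the article's accountability point visible: whatever bytes are hashed into `shared_hash_database` become unpostable across every member platform, with no mechanism shown (or known to exist) for an affected user or researcher to inspect why.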

