Sunday, December 11, 2022

Designing Public Interest Tech to Fight Disinformation

Our research lab organized a series of talks with NATO on how to design public interest infrastructure to fight disinformation globally. Our collaborator Victor Storchan wrote this great piece on the topic:
Disinformation has become one of the preeminent threats and a global challenge for democracies and modern societies. It is now entering a new era where the challenge is twofold: it has become both a socio-political problem and a cyber-security problem. Both aspects have to be mitigated at a global level, but they require different types of responses.
Let’s first give some historical perspective.
  • Disinformation didn’t emerge with the automation and social media platforms of our era. In the 1840s, Balzac was already describing how flattering or denigrating reviews spread through Paris to promote or undermine publishers of novels and owners of theaters. However, innovation, and AI in particular, has given threat actors the technological capability to scale the creation of misleading content.
  • More recently, in the 2000s, technologists were excited about the ethos of moving fast and breaking things. People were essentially saying: “let’s iterate fast, let’s ship quickly, and let’s think about the consequences later.”
  • In the 2010s, with the rise of deep learning and its increasing adoption in industry, a new tension emerged between velocity and validation. It was not about the personal philosophy of individual stakeholders asking to go “a little bit faster” or “a little bit slower,” but rather about the cultural and organizational context of most organizations.
  • Now, AI is entering the era of foundation models. Large language models power consumer-facing tools like search engines and recommendation systems. With generative AI, we can turn audio or text into video at scale, very efficiently. Foundation-model technology is simultaneously becoming more accessible to users, cheaper, and more powerful than ever. That means better AI for achieving complex tasks, solving math problems, and addressing climate change. However, it also means cheap fake-media generation tools and cheap ways to propagate disinformation and target victims.

This is where we are today. Crucially, disinformation is not only a socio-political problem but also a cyber-security problem. Cheap deepfake technology has been commoditized, enabling targeted disinformation in which people receive specific, personalized disinformation through different channels (online platforms, targeted emails, phone calls). It will become ever more fine-grained. It has already started to affect people’s lives, emotions, finances, health, etc.

The need for a multi-stakeholder approach as close as possible to AI system design. The way we mitigate disinformation as a cyber-security problem is tied to the way we deploy large AI systems and the way we evaluate them. We need new auditing tools and third-party auditing procedures to make sure that deployed systems are trustworthy and robust to adversarial threats and toxic content dissemination. As such, AI safety is not only an engineering problem: it is a multi-stakeholder challenge that can only be addressed if non-technical parties are included in the loop of how we design the technology. Engineers have to collaborate with cognition experts, psychologists, linguists, lawyers, journalists, and civil society in general. Let’s give a concrete example: mitigating disinformation as a cyber-security problem means protecting the at-risk user and possibly helping the affected user recover. That may require access to personal, and possibly private, information to create effective counter-arguments. As a consequence, it implies arbitrating a tradeoff between privacy and disinformation mitigation that engineers alone cannot decide. We need a multi-stakeholder framework to arbitrate such tradeoffs when building AI tooling, as well as to improve transparency and reporting.


The need for a macroscopic multi-stakeholder approach. Similarly, at a macroscopic level, there is a need for profound global cooperation and a coalition of researchers to address disinformation as a global issue. We need international cooperation at a very particular moment, as the world order is being reorganized. We are living in a moment of great paradox: new conflicts are emerging and structuring the world, and at the same time disinformation requires international cooperation. At the macroscopic level, disinformation is not just a technological problem; it is one additional layer on top of poverty, inequality, and ongoing strategic confrontation. It adds to the international disorder and amplifies the other layers. As such, we also need a multi-stakeholder approach bringing together governments, corporations, universities, NGOs, the independent research community, etc. Very concretely, Europe has taken legislative action with the Digital Services Act (DSA) to regulate harmful content, but it is now clear that regulation alone cannot analyze, detect, and identify fake media. In that regard, the Christchurch Call to Action summit is a positive first step but has not yet led to systemic change.

The problem of communication. However, communication between engineers, AI scientists, and non-technical stakeholders generates a lot of friction. These worlds don’t speak the same language. Fighting disinformation is not only a problem of resources (access to data and access to compute power); it is also a problem of communication, where we need new processes and tooling to redefine the way we collaborate, in alliance, to fight disinformation. These actors are collaborating in a world where it is becoming increasingly difficult to understand AI capabilities and, as a consequence, to put in place the right mechanisms to fight adversarial threats like disinformation. It is more and more difficult to really assess AI’s progress. This is what Gary Marcus calls the demoware effect: technology that performs well in a demo but not in the real world. It confuses not only political leaders but also engineers (as with Blake Lemoine at Google). Many leaders assume capabilities AI does not have and struggle to monitor it. Let us give two reasons behind this. First, technology is increasingly a geopolitical issue, which does not encourage more transparency and accountability. Second, the information asymmetry between the private and public sectors, and the gap between the reality of the technology deployed in industry and the perception of public decision-makers, have grown considerably, at the risk of focusing the debate on technological chimeras that distract from the real societal problems posed by AI, like disinformation and the ways to fight it.
