The latest report to warn about the societal risks of artificial intelligence (AI) documents extensive research on human rights online.
Freedom on the Net 2023: The Repressive Power of Artificial Intelligence assesses internet freedom in 70 countries, accounting for almost 90 percent of global internet users.
Its headline findings are depressing but perhaps not surprising:
- Global internet freedom declined for the 13th consecutive year.
- Attacks on free expression grew more common around the world.
- Generative AI threatens to supercharge online disinformation campaigns.
- AI has allowed governments to enhance and refine their online censorship.
What offers constructive hope, however, is the report’s assessment of regulatory models (or the lack thereof) and its suggestion of a road map, largely built on recent steps taken by the European Union (EU) and the United States.
The EU’s General Data Protection Regulation is a key foundation for the draft Artificial Intelligence Act, which “would tailor obligations based on the level of risk associated with particular technologies.”
The US effort, the “Blueprint for an AI Bill of Rights”, focuses on principles for AI design, use, and deployment, but to date it depends on voluntary commitments by companies.
The report notes that business decisions over the research period alone, notably by the new owner of X (formerly Twitter), cast doubt on the effectiveness of self-regulation and make the case for responsible oversight more acute.
The report’s data and recommendations highlight the continued need for governments, companies, and civil society to work together on a rights-based approach that first and foremost protects freedom of expression and access to information. Regulation should be grounded in human rights, transparency, and independent oversight.
The recommendations also address the need to defend “information integrity” but acknowledge that this requires a long-term solution that prioritizes independent media and educated communities:
> A whole-of-society approach to fostering a diverse and reliable information space entails supporting independent online media and empowering ordinary people with the tools they need to identify false or misleading information. Civic education initiatives and digital literacy training can help people navigate complex media environments. Governments should also allocate funding to develop detection tools for AI-generated content, which will only become more important as these tools grow more sophisticated and more widely used. Finally, democracies should scale up efforts to support independent online media through financial assistance and innovative financing models, technical support, and professional development support.
Reading through the report’s data and recommendations, though, one wonders what would ever convince authoritarian governments (which seem to be growing in number) and globally powerful tech companies to cede their control. And certainly, the global picture can feel overwhelming.
But the power of democratic advocates and activists can be seen at local and national levels – and these are the real building blocks of change. As the report itself notes with several examples, “Digital activism and civil society advocacy drove real-world improvements for human rights during the coverage period.”
These are the positive cases that we need to work together to multiply.
Photo: Dmitry Demidovich