Digital technologies can make things both better and worse for people who are already vulnerable and marginalized in our societies.
On the one hand, the digital world provides new opportunities for communication and learning. On the other, online life can be full of harassment and threats, just like life in the “real” world.
Vulnerability and marginalization take many forms, as do the oppression, prejudice, and exclusion that accompany them. In this session we will look at the experiences of ethnic and racial minorities, Indigenous peoples, women, people with disabilities, the LGBTQ+ community, children, and others. These groups often struggle with gaps in digital literacy, discriminatory algorithms, online harassment, and violations of privacy.
In this session you will learn how to advocate for fairer digital experiences for marginalized people and groups by addressing issues like equitable access to digital technologies and protection from online harm.
Keywords

Outcomes

After this session you will be able to:
- Understand how digital technologies and platforms can contribute to marginalization.
- Explain how vulnerable and marginalized groups face both risks and opportunities in going online.
- Take steps to counter threats against marginalized groups and help create inclusive online spaces.
Understanding the Risks
Women

Women face challenges online, including cyber harassment, gender-based discrimination, and unequal access to digital resources and opportunities. Going online can expose women to new forms of harassment and violence, like trolling (intentionally provoking or annoying someone online) and doxing (publishing private information online with the intent to cause harm).
Gender-based violence is abuse and violence directed toward someone based on their gender identity or perceived gender identity. Unlike biological sex, gender is a social construct that encompasses identity, behaviours, and societal expectations that vary across cultures.
These digital risks have consequences. The International Telecommunication Union (ITU) reports that, around the world, these dangers cause women to retreat from digital spaces and often take a toll on their wellbeing.
Source: ITU, the United Nations agency for digital technologies.
Children

Children are vulnerable to online dangers, like grooming, cyberbullying, and exposure to inappropriate content. This puts them at risk of emotional and psychological trauma, as well as problems at home and school. Rapid changes in social media, gaming, and AI-related technologies make it hard for parents and teachers to keep pace with the experiences of children online.
Young people face special risks. Their critical thinking skills are still developing, and they often spend time in digital spaces designed primarily for adult users. In the majority of online spaces, adults can contact children and young people with no special restrictions; this puts young people at risk of crime and other forms of abuse. A report from Finland, for example, indicates that 62% of Finnish children have been contacted by an adult online. Children’s voices are often left out of policy debates, and their rights are not well protected by laws covering digital technologies.
Source: Pelastakaa Lapset (Save the Children), a global humanitarian organization funded by a mix of government grants, individual donations, corporate partnerships, and multilateral institutions.
Indigenous people

Indigenous peoples face unique risks in digital environments but are often not considered in the development of digital technologies. Today, AI and big data often replicate colonial patterns of exploitation, especially when it comes to Indigenous languages and culture. The distinctiveness of Indigenous knowledge and values is not honoured by digital technologies, which are often driven by profit and competition rather than collective responsibility and reciprocity. As a result, digital tools can reinforce historical injustices and widen existing divides.
The rapid expansion of digital technologies, including AI and algorithmic decision-making, has also raised concerns about data sovereignty and the exploitation of Indigenous identities. Companies and multilateral organizations have pushed for universal infrastructure access. Some Indigenous communities, however, stress the need to determine the level and type of access on their own terms in order to preserve their culture. They highlight that accessibility should be community-led rather than imposed. Global bodies like the UN have called for Indigenous-led innovation and governance in AI, emphasizing the principle of “nothing about us without us.”
Source: United Nations Department of Economic and Social Affairs
Disability

Digital technologies can help people with disabilities by improving accessibility, communication, and independence in daily life. People with disabilities, however, also face distinct challenges in digital environments. The people and companies designing technologies often overlook basic matters of accessibility. Even though everyone can benefit from inclusive design, digital spaces are often harder for people with disabilities to access and use. Digital elements such as videos without captions, pictures without descriptive text, and poor colour contrast make being online harder than it needs to be for people with disabilities.
Nearly everyone needs digital tools for communication and learning, yet these tools are often not built to meet everyone’s needs. International frameworks like the UN Convention on the Rights of Persons with Disabilities increasingly call for inclusive digital design. Implementation, however, remains uneven.
Source: UN Office of the High Commissioner for Human Rights and Better Internet for Kids, an initiative of the European Union to create an internet tailored to the specific needs and vulnerabilities of children with disabilities.
Racism

Just like in the physical world, racial and ethnic minorities face very real threats in digital spaces. Online platforms and spaces often amplify existing inequalities because of biased algorithms, discriminatory content moderation, and surveillance practices. From predictive policing tools to facial recognition systems, racialized communities are disproportionately affected. At the same time, these groups are under-represented in the design and governance of these same technologies.
Online platforms can be hostile places. Hate speech, racial slurs, and algorithmic amplification of harmful stereotypes contribute to psychological distress and social exclusion. Despite claims of increased objectivity, AI can actually amplify existing problems. For example, a landmark study found that commercial facial recognition systems misclassified dark-skinned women up to 34.7% of the time, compared to just 0.8% for light-skinned men.
Source: Nature (“The Unseen Black faces of AI Algorithms”), a widely respected, peer-reviewed journal founded in 1869.
Onward

This section has covered just some of the basics about how marginalized people have different (and difficult) experiences online. But there’s more. It’s time to hear from experts in the field and go deeper with a case study. Click the button below to move on.