The metaverse is shaping up to be a racist hellscape. It doesn’t have to be that way
Marginalized people often suffer the most harm from unintended consequences of new technologies. For example, the algorithms that automatically make decisions about who gets to see what content, or how images are interpreted, suffer from racial and gender biases. People who have multiple marginalized identities, such as being Black and disabled, are even more at risk than those with a single marginalized identity.
Problems are already surfacing. Avatars, the graphical personas people can create or buy to represent themselves in virtual environments, are being priced differently based on the perceived race of the avatar, and racist and sexist harassment is cropping up in today’s pre-metaverse immersive environments.
Ensuring that this next iteration of the internet is inclusive and works for everyone will require that people from marginalized communities take the lead in shaping it. It will also require regulation with teeth to keep Big Tech accountable to the public interest. Without these, the metaverse risks inheriting the problems of today’s social media, if not becoming something worse.
The historical relationship between race and technology leaves me concerned about the metaverse. If the metaverse is meant to be an embodied version of the internet, as Zuckerberg has described it, does that mean that already marginalized people will experience new forms of harm?
Facebook and its relationship with Black people
The general relationship between technology and racism is only part of the story. Meta has a poor relationship with Black users on its Facebook platform, and with Black women in particular.
In 2016, ProPublica reporters found that advertisers using Facebook’s advertising portal could exclude groups of people from seeing their ads based on those users’ race, or what Facebook called an “ethnic affinity.” This option received a lot of pushback because Facebook does not ask its users their race, which meant that users were being assigned an “ethnic affinity” based on their engagement on the platform, such as which pages and posts they liked.
In other words, Facebook was essentially racially profiling its users based on what they do and like on its platform, creating the opportunity for advertisers to discriminate against people based on their race. Facebook has since updated its ad targeting categories to no longer include “ethnic affinities.”
However, advertisers can still target people based on their presumed race through race proxies: combinations of user interests that allow race to be inferred. For example, if an advertiser sees from Facebook data that you have expressed an interest in African American culture and the BET Awards, it can infer that you are Black and target you with ads for products it wants to market to Black people.
According to a recent Washington Post report, Facebook knew its algorithm was disproportionately harming Black users, but chose to do nothing.
A democratically accountable metaverse
In an interview with Vishal Shah, Meta’s vice president of metaverse, National Public Radio host Audie Cornish asked: “If you can’t handle the comments on Instagram, how can you handle the T-shirt that has hate speech on it in the metaverse? How can you handle the hate rally that might happen in the metaverse?” Similarly, if Black people are punished for speaking out against racism and sexism online today, how will they be able to do so in the metaverse?
Ensuring that the metaverse is inclusive and promotes democratic values rather than threatens democracy requires design justice and social media regulation.