Can the tech industry hit the human rights refresh button?
On 4 July, the Human Rights Council will hear a report with recommendations on protocols and standard-setting in new technologies. We speak to the head of the team engaging with business and pushing for greater awareness of the human rights implications of the fast-evolving sector.
Amazon, Microsoft and Spotify have become household brands, providing services to millions of customers worldwide. Recently, however, instead of being in the spotlight for an awaited product launch, the three tech giants have been facing the music for violating privacy regulations.
Microsoft is expected to be fined $400 million by the Irish Data Protection Commission over LinkedIn's advertising practices, while Amazon has already agreed to pay $31 million for privacy violations involving Alexa and Ring cameras. Spotify has also been fined for breaching users' data access rights.
But such violations have become so common and frequent that, when the penalties are small, they fly under the radar. And yet privacy is a fundamental human right recognised in the Universal Declaration of Human Rights, affecting individual autonomy, safety and freedom of expression.
Scott Campbell, a human rights expert, began his career in the field in Central Africa. Five years ago, the former UN high commissioner for human rights Zeid Ra’ad Al Hussein asked him to make a big career shift and move to Silicon Valley to examine the tech industry dynamics closely. He now runs the Human Rights Office’s work on digital technology and human rights, leading a team of five people, with a mandate to promote a safe online civic space.
In July 2021, the Human Rights Council asked the UN Human Rights Office to prepare a report on how human rights can positively contribute to technical standard-setting processes for new and emerging digital technologies. With countries just starting down the long and uncharted path of regulating AI, adding human rights safeguards may boost user confidence in these emerging and disruptive technologies.
Geneva Solutions: Can you anticipate some of the findings of the report that your office will present on 4 July before the Human Rights Council?
It’s a new but crucial area of work for us, given the impact of technical standards on a wide range of human rights. How internet protocols are developed will affect the security of communications, including how secure tools such as WhatsApp are. And how technical standards and protocols around AI are developed has huge implications not only for the right to privacy but also for our freedom of expression and access to information.
How have you been working on the issue of new technologies with standard-setting organisations, such as the International Telecommunication Union (ITU), the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC)?
Most technical standard-setting organisations don’t have extensive experience in human rights. There’s a huge communication and language gap to fill between human rights experts and tech experts. The tech experts want to know what difference human rights make to technical standard-setting, while the human rights community insists that technical standards do impact human rights.
A transparency and information gap also exists. Even if some of the standard-setting organisations claim to be open to civil society, there is often a lack of clarity around how the different processes actually work. Sometimes there are fees to join a process or to access information (ITU charges membership fees of several thousand Swiss francs), or the documentation is not easily available. Therefore, a number of barriers and hurdles need to be overcome. This year, we held expert consultations with the ISO, IEC and ITU, and with several fantastic human rights NGOs and academics that really understand this stuff. We just finalised a report, scheduled to be presented to the Human Rights Council on 4 July, with recommendations on the next steps to move this super important question forward.
Do you think that one day companies could be required to ensure their technical standards and protocols comply with human rights obligations before companies implement them in their products?
It will still require a sea change in how the standard-setting organisations view human rights and the importance of doing what they can to integrate human rights into their different processes. Governments need to live up to their obligations, and companies likewise need to go beyond just thinking about how to get their standards adopted to advance their bottom line.
The UN secretary-general António Guterres has proposed that countries agree next year on a Global Digital Compact. Your office suggested that human rights should be an overarching theme of the compact rather than a designated section of it. How is that technically possible?
I think there’s a very natural link between all the different elements of the Global Digital Compact and human rights when it comes to providing us with a common approach to joining forces for governments and the private sector. The connectivity issue is crucial. Having a better-connected world could lead to a world where the Sustainable Development Goals are achieved, poverty greatly reduced and human rights respected.
How can human rights principles be incorporated into a multibillion-dollar digital industry that has tended towards profiling users for hyper-targeted commercial and political advertising, posing privacy threats and sometimes compromising the integrity of political processes?
This is a huge challenge and a crucially important one. We engage with businesses directly, working to have them understand and actually apply the UN Guiding Principles on Business and Human Rights. Furthermore, we push governments to draft legislation that essentially obliges companies to meet their responsibilities. We launched a flagship initiative, which we call the B-tech project, with a group of 13 companies, among them some of the world’s biggest: Apple, Microsoft and Meta. We are in a pilot phase, as the initiative is limited mostly to companies in the northern hemisphere, but we are expanding our reach through the recently established Africa initiative.
Nonetheless, getting companies to fundamentally change their business model is a bit like turning an oil tanker: a difficult and slow shift.
One of Elon Musk’s first decisions, when he bought Twitter, was to remove the tech and human rights team. Volker Türk, the UN high commissioner for human rights, wrote to him expressing his concerns. Did Musk ever respond?
The high commissioner did not receive a response from Mr Musk, which was disappointing, as we thought he had an opportunity to use the platform, which plays an incredibly important role across the globe, in a way that could really advance freedom of expression and access to information. Twitter’s human rights team was doing excellent work and was a member of our B-tech project, aligning the company with the UN Guiding Principles on Business and Human Rights. Twitter also used to have a very progressive, forward-leaning policy on access to data. Recently, we have seen reports of sharp upticks in online content that promotes hatred and racial discrimination and attacks individuals in ways that could be quite dangerous.
What are you most concerned about regarding the human rights implications of deploying generative AI such as ChatGPT?
I think we need to temper some of the hysteria around the existential threat that AI may pose, while still taking generative AI and its risks extremely seriously. Our office is engaged in a number of closed-door and open conversations with the major companies to see how best we can put in place human rights guardrails immediately, to protect the broad range of human rights at increased risk from the rapid development of generative AI.
Massive amounts of misinformation and disinformation are an immediate, perhaps obvious, risk, with a huge potential impact on democracy, on people’s right to access information, and on their physical safety and security. Generative AI has an incredible capacity to support campaigns that incite violence, discrimination or hatred, and that can have a real impact in the offline world. We very much hope that our initiative will contribute to the development of guardrails that can be used more broadly by the different companies globally that will be jumping into the generative AI race.