Beyond "The Social Dilemma":
Can human rights due diligence help the tech sector?
It feels like tech companies are once again under siege. Netflix's recent documentary "The Social Dilemma" shines a light on the harmful consequences of the algorithms powering social media platforms, from privacy concerns to user addiction to the proliferation of misinformation in elections. The public commentary the documentary has generated is a reminder of the often uneasy relationship that tech companies have with human rights.
Tech can undeniably be a force for good, helping to create a more sustainable and democratic future. But of late, ethical scandals in the tech industry have been receiving more airplay – and the stakes are rising for those who get it wrong. In the US, tech giants are being sued by a human rights firm on behalf of Congolese families who say their children were killed or seriously injured while mining the cobalt used in the companies' smartphones and laptops. In the UK, Uber's ongoing four-year legal battle over the app's impact on workers' rights has taken it to the Supreme Court, with similar claims being brought in The Netherlands in 2020.
Human rights due diligence (HRDD) remains a lesser-known and underutilised tool in the tech sector, one that could help prevent and address potential harm to people linked to digital technologies, as well as the related business risk.
What are the big risks to human rights in the tech sector?
Privacy concerns, quite rightly, attract a lot of attention from regulators and consumers alike. But the tech sector's activities can also impact other important human rights:
- Discrimination (AI, algorithms and big data): Systems built with these technologies can be biased, depending on who builds them, how they are developed and how they are used (a simple illustration follows this list). There are particular concerns that automated decision-making can lead to discriminatory outcomes in sectors such as consumer banking, healthcare, policing and employment. Regulators are taking action – in Finland, a credit agency was hit with a conditional EUR 100,000 fine because its algorithmic scoring system for granting loans online showed a bias towards men over women.
- Democratic freedoms: The private sector's interactions with governments and law enforcement can be particularly high risk. These include requests for tech companies to provide 'back door' access to encryption software, the misuse of surveillance powers, overbroad law enforcement requests for user data, and content restrictions that are disproportionate or fail to respect the right to freedom of opinion and expression. The public outrage over the 2018 Cambridge Analytica scandal, and the accompanying fall in Facebook's share price, showed the costs of being linked to the manipulation of democratic rights.
- Workers' rights: Potentially invasive biometric and surveillance technologies are increasingly deployed by businesses in the workplace, particularly in the wake of the changed working conditions caused by the COVID-19 pandemic (see our separate briefing). Tech developments and AI have catalysed the automation of jobs, from self-service checkouts in supermarkets to AI call centre responses to 'robot only' factories. We anticipate further scrutiny of companies' impacts on labour rights as more workers are deskilled by AI and robotics, or reshuffled into positions that are themselves likely to be automated over time.
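To make the discrimination risk concrete, the toy Python sketch below shows how a scoring model trained on historically skewed lending decisions can simply reproduce that skew, and how a basic disparate-impact check can surface the problem before deployment. All data, groups and figures here are invented for illustration; this does not depict any real lender or the Finnish system mentioned above.

```python
# Hypothetical illustration only: synthetic data, invented approval rates.
# It shows how a model trained on biased historical decisions inherits the bias.
import random

random.seed(0)

# Synthetic "historical" lending decisions. Applicants in groups A and B are
# equally creditworthy (same income distribution), but past decisions approved
# group A far more often, ignoring the merit signal entirely.
history = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    income = random.gauss(50_000, 10_000)        # merit signal, same for both
    approve_rate = 0.7 if group == "A" else 0.4  # the historical bias
    history.append((group, income, random.random() < approve_rate))

def train(records):
    """A naive 'model': score each group by its historical approval rate.
    Training on biased labels bakes the bias straight into the scores."""
    totals, approvals = {}, {}
    for group, _, approved in records:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + approved
    return {g: approvals[g] / totals[g] for g in totals}

model = train(history)

# A simple disparate-impact check (the 'four-fifths rule' used in some
# fairness audits): flag the model if one group's approval rate falls
# below 80% of another's.
ratio = model["B"] / model["A"]
print(f"approval score A: {model['A']:.2f}, B: {model['B']:.2f}")
print(f"impact ratio B/A: {ratio:.2f}"
      + (" <- potential disparate impact" if ratio < 0.8 else ""))
```

The point is not the specific metric. An HRDD process would prompt exactly this kind of pre-deployment testing, alongside scrutiny of the training data and of who is affected by the model's outputs.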
Tech firms should be concerned not only with the impact of their direct activities, but also with any issues associated with their subsidiaries, business partners and supply chains – whether the human rights risks arise from the use of technologies or from other business practices, such as mining minerals for components or factory conditions. In the UK, recent cases have indicated an evolution towards broader potential legal liability for companies, under tortious duty of care principles, for harms caused to victims by the acts or omissions of separate but related entities (e.g. a company's subsidiary, or a purchaser of the company's goods).
Can a rights-based approach help businesses better manage risk?
Over the last 15 years, international standards have developed to help businesses prevent and address the risk of adverse human rights impacts linked to their activities. The UN Guiding Principles on Business and Human Rights (UNGPs) provide an authoritative global framework for how businesses can embed respect for human rights in their practices – and in so doing reduce the risks to their own businesses.
A key expectation of business under the UNGPs is to conduct iterative human rights due diligence. The HRDD process requires a business to assess actual and potential human rights impacts across its operations, products or services, and business relationships. Businesses can prioritise the most severe impacts on people. They are expected to act on and track their findings, communicate how impacts are being addressed, and put in place processes to remedy any adverse impacts they cause or to which they contribute.
The UNGP framework, and the HRDD process, can help tech companies strengthen their governance:
- Broad risk assessment and tracking: Undertaking HRDD can help businesses look beyond a blinkered focus on privacy and identify a broader range of human rights issues that can have damaging financial and reputational consequences – including those arising from business models or the end use of digital technology.
- An internationally accepted, technology-neutral standard: The UNGPs enjoy broad support and adoption across government, business and civil society. They provide a global, principled approach that can be implemented by any company – and one increasingly expected by investors and economic organisations (e.g. the OECD). Conducting HRDD is a credible and defensible approach to risk mitigation, particularly as the regulatory landscape applicable to digital technologies continues to evolve. In litigation, it may also help a company show that it took reasonable steps to discharge any duty of care found to be owed to individuals affected by its activities.
- Adaptable to change: The OHCHR, through its 'B-Tech project', has championed the UNGPs as "grounded in a model of 21st Century governance that is well-suited to the pace, uncertainty and complexity of today’s technological advancement". HRDD is an ongoing, iterative process that companies can apply across borders, with stakeholders throughout the value chain, to address risks as they are identified.
- Sustainable business models: HRDD allows businesses to build more sustainable and resilient business models by embedding the assessment of human rights impacts into processes before technologies are brought to market – for example, helping to predict the negative impact of online short-term rental platforms on the right to housing for poorer residents. HRDD can give businesses confidence – and demonstrate to consumers – that they are seeking to act responsibly.
HRDD is not new, but we predict that its importance and application in the tech sector will only grow as more governments legislate to mandate it (see our May 2020 briefing).
Our team would be happy to discuss how HRDD can complement existing data and digital ethics policies, strengthen your business's governance frameworks, and help your business take action to tackle key human rights risks.
This article was written by Louise Brown, Senior Associate, London and Adam Hunter, Trainee, London.