AI and Human Rights: A Call for Responsible Innovation

Be smart about human rights due diligence in the era of AI 

If the human rights impact of your business’s development or deployment of artificial intelligence (AI) is not yet on your radar, it should be. Policymakers are turning their attention to the role of businesses as agents of change in global efforts to protect human rights.

As AI technologies evolve and become more integral to business operations across a broad range of sectors, companies must consider how to use AI responsibly. Fortunately, national and international guidance and regulation on human rights due diligence (HRDD) offer a credible framework for these discussions. Here, we give you some background on HRDD expectations as well as key developments in this area, including some specific to AI.  

Global expectations: Business respect for human rights  

It’s been well over a decade since the UN Human Rights Council endorsed the Guiding Principles on Business and Human Rights (UNGPs) in 2011. The UNGPs are a groundbreaking initiative aimed at addressing adverse impacts on society arising from business activities.  

The principles highlight the responsibility of all businesses to respect human rights. Businesses are expected to maintain HRDD processes to identify, prevent, mitigate, and account for negative impacts on human rights that are linked to their business activities – even if states do not fulfil their own human rights obligations.

Global progress: Glass half full or half empty? 

In the years since their introduction, the UNGPs have become the primary international framework on business and human rights, informing the development of many subsequent mechanisms, regulations, guidance papers, and standards. Despite this progress, violations of human rights are reported in all parts of the world, particularly in the Global South. In addition, new risks to privacy and freedom of expression have emerged with the explosive growth in AI-supported technologies.

Communication and accountability 

With the responsibility for corporate HRDD long established, why are abuses still happening? Lack of communication and accountability may be contributing factors. The most vulnerable populations – such as children, Indigenous Peoples and traditional communities, migrants, and minorities – are often not involved in law making, and basic due diligence for new projects may not include their input. When human rights violations occur, knowingly or not, the impacts commonly go unaddressed and unremediated because the parties involved accept little or no accountability.

The most recent Corporate Human Rights Benchmark, released by the World Benchmarking Alliance in November 2023, found that the average pace of improvement in corporate respect for human rights remains too slow, with particular room for improvement in the engagement of rights holders in human rights due diligence processes. Although over 60 percent of the companies that were evaluated had a due diligence process in place to protect human rights, fewer than one third of them interacted with the people whose rights were at stake. This lack of engagement can make well-meaning due diligence less effective. 

Developments in 2024 

The necessity for companies to undertake HRDD, whether general or specific to AI, has been a hot topic on policymaker agendas in 2024: 

  • In April, authorities in Taiwan released voluntary draft guidelines for businesses to respect human rights in their supply chains, and in June published guidelines on the use of AI in the financial sector that include human rights-related recommendations. 
  • In May, the Organisation for Economic Co-operation and Development issued a revised recommendation on AI that strengthens its responsible stewardship and human rights-related principles by addressing AI-amplified misinformation and risks arising from the misuse of AI.  
  • In July, the European Union’s Corporate Sustainability Due Diligence Directive, which includes human rights components, entered into force, giving member states two years to incorporate its provisions into national laws. 
  • In September, the USA, the UK, and the European Union were among the signatories of the Council of Europe Framework Convention on Artificial Intelligence, a legally binding international treaty aimed at ensuring AI systems respect human rights, democracy, and the rule of law.  
  • In December, EU member states are due to transpose the Corporate Sustainability Reporting Directive (CSRD) into their national legislation. CSRD requirements will apply from 2025.

The UN Forum on Business and Human Rights in November (see our LinkedIn post on the forum) included a session on human rights impacts related to the procurement and deployment of AI and the steps companies and states should take to protect human rights in this context. Feedback from that session and an associated call for inputs by the UN Working Group on Business and Human Rights will inform a report on AI and the UNGPs to the Human Rights Council in June 2025.  

Stay informed 

As AI becomes more prevalent, it creates additional challenges for protecting human rights. Therefore, we anticipate that regulators, likely with guidance from international organizations, will move to require companies to conduct thorough due diligence to reduce potential risks and negative impacts on society.  

Do you want to know more about what your institution can do with respect to human rights due diligence or what gaps may exist in your current processes? Please reach out to us at ECOFACT. 

Stay informed

ECOFACT’s ambition is to be a catalyst in the transition towards a sustainable economy. We write, organize events, and develop products and services. Be the first to know.