Policy Outlook

Artificial Intelligence & Corporate Responsibility

Artificial intelligence (AI) is reshaping entire industries, and the financial sector is no exception. The number of fintech companies keeps growing, and big tech companies such as Amazon and Google have started to provide financial services. Last week, the Financial Times published an article on the “AI race” in the financial sector, predicting that competition from AI-driven systems “could soon matter enormously — helping to determine the future winners in finance and the next big set of regulatory risks”.

From a regulatory and corporate responsibility point of view, AI is not a lawless field: standard setters have already laid down certain expectations for the technology, and these continue to evolve.

Key regulatory frameworks:

  • OECD Principles on Artificial Intelligence: Aim to guide governments, organizations, and individuals in designing and running AI systems in a manner that puts people’s best interests first and ensures that designers and operators can be held accountable for the technology’s appropriate functioning.
  • EU Ethics Guidelines for Trustworthy Artificial Intelligence: The Guidelines put forward seven key requirements that should be evaluated and addressed throughout an AI system’s life cycle: 1) human agency and oversight; 2) technical robustness and safety; 3) privacy and data governance; 4) transparency; 5) diversity, non-discrimination, and fairness; 6) societal and environmental well-being; and 7) accountability.
  • Hong Kong Monetary Authority Circular on the High-Level Principles on Artificial Intelligence: Calls on banks to be “ethical, fair and transparent”. Principle 8 states that “banks should ensure that AI-driven decisions do not discriminate or unintentionally show bias against any group of consumers.”

Where to start:

We suggest that financial institutions consider the following key questions when entering into business relationships with companies active in the technology sector, as well as when integrating AI technology into financial products or services:

  • Are there internal policies that clearly prohibit uses of AI (e.g. in product customization, targeting, servicing, or assistance) that violate international human rights law (e.g. the rights to privacy and freedom of expression)?
  • What internal processes exist to ensure that AI design and engineering choices incorporate human rights considerations?
  • Are AI technologies subject to regular audit programs and verification processes designed to ensure that human rights are not violated?
  • How can external stakeholders (e.g. clients) seek remediation for violations (e.g. discrimination) arising from the use of AI technologies?

By using these questions as a starting point, financial institutions can frame their approach to, and expectations of, AI from a corporate responsibility point of view. In doing so, it is crucial to keep an eye not only on the latest technological developments but also on how the regulatory landscape is evolving. For this, you can count on the Policy Outlook team: we can support your institution in addressing this and other topics pertaining to corporate responsibility and sustainable finance.

Sign up for updates

ECOFACT’s ambition is to be a catalyst in the transition towards a sustainable economy. We write, organize events, and develop products and services. Be the first to know.