On April 21, 2021, the European Commission (EC) presented its proposal for a European regulation on Artificial Intelligence (AI). The objective is to protect citizens from harmful AI: practices identified as posing an “unacceptable risk,” including facial recognition in public places and social scoring, are banned outright. So-called high-risk AI practices surrounding essential private and public services will have to be demonstrated as safe, while limited-risk systems (such as chatbots) will be subject to transparency obligations so that users can make informed decisions about their use.
As part of this regulation, companies will be required to assess AI risks and incorporate explainability. In the event of non-compliance, fines may reach six percent of a company’s annual turnover.
It’s a first-of-its-kind initiative set to strongly influence AI development and deployment standards globally, much as the EU General Data Protection Regulation (GDPR) began to shape data protection practices worldwide when it became binding in 2018.
While the new regulations are first and foremost focused on users of AI systems in the EU, the crossover into the UK is inevitable. The regulations also apply to any company, based anywhere, that sells products and services into the EU, as well as to providers and users of AI systems whose output is used in the EU.
Beyond that, the UK is doing extensive work on AI auditing and explainability frameworks via the Information Commissioner’s Office, while the government recently announced a new strategy to make the UK a global center for the development, commercialization, and adoption of responsible AI. The Financial Conduct Authority (FCA) and the Alan Turing Institute are also working on a year-long collaboration on AI transparency in relation to machine learning in financial markets.
With so much crossover between the users of products and services in the EU and the UK, and with all signs pointing to increased AI regulation in the UK in the near future, what should businesses be doing to prepare so that they both comply and continue to drive innovation in AI?
What does the EU regulation say?
The EC separates the wheat from the chaff. “Beneficial” AIs are defined as those that automate tasks or provide decision-making information to improve plant productivity, reduce costs, model climatic events, organize transport services, improve health services, or anticipate breakdowns. All of these applications are sources of benefit and performance for companies.
However, the European regulation intends to ban those AIs deemed non-beneficial, including those used for indiscriminate surveillance, facial recognition, manipulation of human behavior, or rating people based on their actions.
Other use cases that may be possible sources of discrimination, such as in certain recruitment processes, will need to be assessed or brought into compliance to avoid fines under the new regulation. These may include AI used in the selection processes of educational establishments, AI that intervenes in the dispatching of emergency services or the assessment of creditworthiness, as well as any AI-driven systems that support decision making in the judicial system.
AI risk assessment and governance: What does it mean for business?
The EC intends to keep risks in check by requiring companies to assess their AI systems to determine their risk level before commercializing them on the European market. While the evaluation methods are not yet clearly defined, companies will need to assess their systems and follow regulatory guidance for compliance on both the probability of harm and the severity of risk.
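To make this concrete, here is a minimal sketch, not an official methodology, of how a probability-times-severity assessment could be encoded. The scales, thresholds, and tier names below are hypothetical; in the proposal itself, risk tiers are assigned by use case, not computed from a score.

```python
from dataclasses import dataclass

# Hypothetical 1-5 ordinal scales; the regulation does not prescribe these values.
PROBABILITY_SCALE = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "frequent": 5}
SEVERITY_SCALE = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "critical": 5}

@dataclass
class AISystemAssessment:
    name: str
    probability: str  # how likely the harm is to occur
    severity: str     # how serious the harm would be if it occurred

    def risk_score(self) -> int:
        # Classic risk-matrix product: probability x severity (1..25).
        return PROBABILITY_SCALE[self.probability] * SEVERITY_SCALE[self.severity]

    def risk_tier(self) -> str:
        # Illustrative thresholds only; actual tiers come from the regulation's use-case lists.
        score = self.risk_score()
        if score >= 15:
            return "high"
        if score >= 6:
            return "limited"
        return "minimal"

if __name__ == "__main__":
    chatbot = AISystemAssessment("customer-support-chatbot", "possible", "minor")
    print(chatbot.name, chatbot.risk_score(), chatbot.risk_tier())
    # -> customer-support-chatbot 6 limited
```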
The notion of model explainability must also be taken into account, as companies will need to detail the functioning of the algorithm behind a decision, both to the authorities and to users. Data governance is obviously not a new concept: as long as data has been collected and stored, companies have needed some level of policy and oversight for its management. These policies will have to evolve to include AI, which is why the EC is proposing a new regulatory framework for AI that is distinct from but complementary to GDPR.
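As one illustration of the kind of artifact such documentation might draw on, here is a minimal sketch of a model-agnostic feature-importance report using scikit-learn’s permutation importance. The dataset and model are placeholders, and a real compliance explanation would go well beyond a ranked feature list.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data standing in for a real decision-making system's inputs.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: how much held-out accuracy drops when a feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# A simple, human-readable explanation report: top drivers of the model's decisions.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for feature, importance in ranked[:5]:
    print(f"{feature}: {importance:.3f}")
```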
Understanding exactly how best to assess AI risks, and how to leverage the assessment outputs to address those risks, is also essential. One of the advantages that data science, machine learning, and AI platforms bring is the ability to centralize data efforts, allowing model risk validation and processes to scale as well.
A centralized effort helps ensure AI risks are treated not as standalone technical anomalies but as core business issues. However, this also means that governing AI risks will require an update to traditional corporate risk and compliance expertise.
At its core, AI risk assessment is the translation of core principles (e.g., explainability) into risk metrics and scores. These metrics cover not only technical elements of the AI lifecycle but also business and organizational elements.
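A minimal sketch of what that translation might look like in practice, assuming hypothetical principle names, weights, and a 0-to-1 scoring convention, none of which are prescribed by the proposal:

```python
# Hypothetical mapping of governance principles to measurable risk metrics.
# Scores are normalized to 0.0 (no concern) .. 1.0 (maximum concern).
PRINCIPLE_WEIGHTS = {
    "explainability": 0.3,   # can decisions be explained to users and authorities?
    "data_governance": 0.25, # lineage, quality, and access controls on training data
    "robustness": 0.25,      # behavior under drift, edge cases, adversarial inputs
    "human_oversight": 0.2,  # escalation paths and override mechanisms
}

def overall_risk(scores: dict[str, float]) -> float:
    """Weighted aggregate across principles, covering technical and organizational elements alike."""
    return sum(PRINCIPLE_WEIGHTS[p] * scores[p] for p in PRINCIPLE_WEIGHTS)

# Example assessment of one AI system in the enterprise inventory.
credit_scoring_model = {
    "explainability": 0.7,   # black-box model, only partial post-hoc explanations
    "data_governance": 0.3,  # documented lineage, some gaps in consent tracking
    "robustness": 0.5,       # tested on drift, not on adversarial inputs
    "human_oversight": 0.2,  # analysts review every automated rejection
}
print(f"overall risk: {overall_risk(credit_scoring_model):.2f}")  # -> overall risk: 0.45
```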
Overall, identifying risks for all AI systems across the enterprise, and following the established assessment and compliance processes consistently for each of them, is likely to be the main challenge for organizations. Ensuring that these efforts do not slow down AI development and its successful embedding in key business processes will be another significant challenge, especially when we know how much is still to be done in this space. An organized, centralized approach to both, with auditability and governance features, will help in navigating the new regulatory environment.
Innovation still top of the agenda
It’s worth noting that under the new proposal, EU Member States will have the opportunity to promote innovation by giving start-ups and SMEs the freedom to test and develop AI systems under lightened regulatory constraints before they are commercialized.
In terms of paving a way for the future of AI regulation, perhaps it’s best to look at it this way: if a law is restrictive, it often also has the merit of preventing harmful side effects. As they embark on their AI journeys, European companies have every interest in setting up AI governance now, and UK companies will no doubt follow suit. Regardless of where companies are located, if they dedicate themselves to creating an informed strategy around AI risk and governance, they can not only comply with the new regulations but also derive the full benefits of AI to drive business growth and innovation.
Paul-Marie Carfantan, AI Governance Solution Manager, Dataiku