How to mitigate risks when implementing AI

By Hannah Pettit, commercial lawyer and data protection expert, Ashfords

The Artificial Intelligence (AI) boom shows no sign of slowing down, as governments around the globe debate new legislation aimed at protecting society from the potential harms of AI while still encouraging innovation.

The EU’s AI Act, which came into force on 1 August 2024, is the world’s first piece of comprehensive AI legislation. Yet despite most of its provisions not applying until August 2026, the legislation is already being criticised as a threat to innovation and as unlikely to keep pace with the speed at which AI technologies are developing.

Many small and medium-sized businesses are already using AI technology, but even where the necessary technical expertise is present, it can be difficult to lift the lid on third-party AI products and decipher their mechanics. Before deploying any third-party developed AI product, you should conduct an AI risk assessment to understand and mitigate the risks.
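Some organisations capture the output of such an assessment in a simple, structured risk register. The sketch below is a minimal illustration in Python; the risks, the likelihood-times-impact scoring scale and the escalation threshold are all hypothetical, not a prescribed methodology.

```python
# Minimal sketch of an AI risk register: each risk is scored as
# likelihood x impact on a 1-5 scale (an illustrative scheme only).
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Staff paste client personal data into a public AI tool", 4, 5,
         "Usage policy plus input screening"),
    Risk("Provider trains its model on our prompts", 3, 4,
         "Contractual opt-out or private instance"),
    Risk("Biased output in CV screening", 2, 5,
         "Bias testing and human review of decisions"),
]

REVIEW_THRESHOLD = 12  # risks scoring at or above this need sign-off (illustrative)
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    flag = "ESCALATE" if risk.score >= REVIEW_THRESHOLD else "monitor"
    print(f"[{flag}] score={risk.score:>2} {risk.description} -> {risk.mitigation}")
```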

How will you use the AI?

The first question to consider when completing an AI risk assessment is how the AI will be used within your business. You should set clear parameters around how the AI can be used and what data can be input into it. Once data has been entered into an AI system, it can be difficult to extract, so setting clear rules up front can reduce the risk of sensitive information falling into the wrong hands.
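One practical way to enforce rules of this kind is to screen prompts for obviously sensitive patterns before they ever reach a third-party system. The Python sketch below is purely illustrative: the two patterns shown (email addresses and UK National Insurance numbers) are a small sample of what a real deployment would screen for, and send_to_ai_provider is a hypothetical stand-in for whichever client library your provider actually supplies.

```python
import re

# Illustrative patterns only; real deployments typically use a dedicated
# PII-detection service rather than a handful of regexes.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK National Insurance number": re.compile(
        r"\b[A-CEGHJ-PR-TW-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b", re.I
    ),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit(prompt: str) -> None:
    findings = screen_prompt(prompt)
    if findings:
        # Block the request and say why, rather than letting personal
        # data leave the organisation.
        raise ValueError(f"Prompt blocked, contains: {', '.join(findings)}")
    send_to_ai_provider(prompt)  # hypothetical call to your provider's client

def send_to_ai_provider(prompt: str) -> None:
    print("Prompt sent:", prompt)

if __name__ == "__main__":
    submit("Summarise our returns policy in plain English.")  # allowed
    # submit("Draft a letter to jane.doe@example.com")        # would be blocked
```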

Does the supplier’s reputation precede them?

A good supplier can be the difference between the success and failure of a commercial relationship, and the third-party provider of an AI model is no different. It is crucial to carry out due diligence on both the third-party provider of any AI technology you are considering and on the technology itself. Most businesses will be heavily reliant on the provider’s knowledge of its own AI product, so finding third parties that provide clear technical documentation and instructions for responsible use of the AI is important.

Transparency is the golden ticket

You need to understand what is happening to the data you input into any third-party AI system. Some providers will enable you to opt out of your organisation’s data being used to train the model, whilst others will offer private instances deployed specifically for your organisation. However, the latter often comes at a significant cost.
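How these options are exercised varies from provider to provider: some offer them contractually, others as account or API settings. The sketch below is entirely hypothetical configuration, intended only to show the kinds of choices worth recording in one place; it does not reflect any real provider’s settings.

```python
# Hypothetical deployment record, not any real provider's API: a single
# place to capture the data-use choices agreed with the provider.
from dataclasses import dataclass

@dataclass(frozen=True)
class AIDeploymentConfig:
    provider: str
    training_opt_out: bool      # provider may not train on our inputs
    private_instance: bool      # dedicated deployment for our organisation
    retention_days: int         # how long the provider retains prompts
    contractual_reference: str  # clause recording these commitments

config = AIDeploymentConfig(
    provider="ExampleAI Ltd",       # hypothetical supplier
    training_opt_out=True,
    private_instance=False,         # often a significant extra cost
    retention_days=30,
    contractual_reference="MSA schedule 2, clause 4.3",  # illustrative
)

assert config.training_opt_out, "Do not deploy without an agreed opt-out"
```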

Where you are the controller of any personal data used with AI technology, you will need to provide the relevant individuals with transparent information about how their personal data will be processed. You can only do this if you understand how the AI model works and how it processes personal data, which is why the transparency of any third-party AI providers you work with is key: it is what allows you, in turn, to comply with your own transparency obligations.

Robust security measures

Another key part of supplier due diligence will be to assess the technical security measures implemented by your AI provider. You should review and assess any security documentation that the provider makes available and ensure that they are contractually obliged to comply with the necessary security standards during the term of your agreement.

You should also look at whether the provider has security accreditations or certifications, and whether they carry out appropriate vulnerability testing and independent third-party security audits.
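These checks lend themselves to a simple, repeatable checklist. The Python sketch below is one illustrative way to record them; the questions and evidence shown are placeholder examples, not an exhaustive or authoritative security standard.

```python
# Illustrative supplier security due-diligence checklist; the questions and
# evidence fields are examples, not an exhaustive standard.
checklist = [
    {"question": "Holds ISO 27001 or equivalent certification?",
     "answer": True, "evidence": "Certificate on file"},
    {"question": "Regular independent penetration testing?",
     "answer": True, "evidence": "Annual third-party pen test summary"},
    {"question": "Security obligations fixed in the contract?",
     "answer": False, "evidence": "Clause still under negotiation"},
    {"question": "Vulnerability disclosure and patching policy published?",
     "answer": True, "evidence": "Public security page"},
]

open_items = [item["question"] for item in checklist if not item["answer"]]
if open_items:
    print("Outstanding before signature:")
    for question in open_items:
        print(" -", question)
```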

International personal data transfers

With many technology providers operating on a global basis, it is important to understand where your AI provider will be processing any personal data. Are you engaging an overseas provider, which means that you are making a restricted international transfer of personal data? Alternatively, does your chosen AI provider engage third parties in other countries as part of its IT infrastructure, meaning that it is making a restricted transfer?

There are prescriptive rules for cross-border transfers of personal data. You will need to ensure that appropriate safeguards are implemented to protect any personal data that is being processed in locations that are not considered to have adequate data protection laws.
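As a simple illustration of the mapping exercise involved, the sketch below checks each location in a hypothetical provider’s processing chain against a list of destinations treated as adequate. Both lists are abbreviated examples only, not a statement of any country’s current adequacy status.

```python
# Abbreviated, illustrative list only; check the current UK adequacy
# regulations before relying on any destination's status.
ADEQUATE_DESTINATIONS = {"UK", "Ireland", "Germany", "Japan", "New Zealand"}

# Hypothetical processing chain disclosed by a provider: the provider
# itself plus the sub-processors in its IT infrastructure.
processing_locations = {
    "ExampleAI Ltd (hosting)": "Ireland",
    "ExampleAI Ltd (support)": "United States",
    "LogCo (sub-processor)": "India",
}

for party, country in processing_locations.items():
    if country in ADEQUATE_DESTINATIONS:
        print(f"{party}: {country} - adequacy basis available")
    else:
        print(f"{party}: {country} - safeguards needed (e.g. IDTA/SCCs)")
```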

Data subject rights

You will also need to assess how you will comply with requests from data subjects to exercise their rights under data protection laws in respect of the AI processing.

The use of personal data with AI can make it challenging to comply with data subject requests, such as subject access requests or requests for the deletion or rectification of data. However, where you are the controller of personal data, you will still be required to comply with these requests when you receive them.

As just one example, when an AI model is trained on personal data that you provide, it can memorise that data as part of the learning process. This can make it difficult to fully erase the data from the model in response to a data deletion request.
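Where you fine-tune or retrain a model on your own data, one partial mitigation is to keep a record of erasure requests and exclude those individuals’ records from every future training run. The sketch below is a minimal illustration with hypothetical record and field names; note that it does nothing, by itself, about data an already-trained model may have memorised.

```python
# Minimal sketch: exclude deletion-requested individuals from future
# training runs. Field names are hypothetical. This does NOT erase what an
# already-trained model may have memorised; that generally requires
# retraining or specialised "machine unlearning" techniques.
training_records = [
    {"subject_id": "cust-001", "text": "Complaint about delivery times"},
    {"subject_id": "cust-002", "text": "Request to change billing address"},
    {"subject_id": "cust-003", "text": "Praise for support team"},
]

deletion_requests = {"cust-002"}  # IDs with outstanding erasure requests

clean_dataset = [record for record in training_records
                 if record["subject_id"] not in deletion_requests]

print(f"{len(training_records) - len(clean_dataset)} record(s) excluded")
# clean_dataset is what gets passed to the next fine-tuning run
```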

Preventing bias and discrimination

If the dataset that an AI system is trained on is biased, then the consequence will be a biased AI system. As an example, if you are using an AI tool to analyse CVs and it has been trained on a biased dataset, it is likely to yield discriminatory and biased results.

It is therefore important to assess the risk of discrimination and bias, based on how the model has been developed and trained, and how you implement the results.

Where biases are ingrained in the AI model, they are difficult to rectify retrospectively, even by introducing a layer of human intervention.
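One widely used first check in recruitment contexts is to compare selection rates across groups, for instance against the ‘four-fifths’ rule of thumb drawn from US employment practice. UK law sets no fixed numerical threshold, so the 0.8 figure below is purely illustrative, and the sketch assumes you can label each screened CV with a group and an outcome.

```python
# Compare selection rates across groups from an AI CV-screening tool.
# The 0.8 ("four-fifths") threshold is a US rule of thumb used here purely
# as an illustrative tripwire; it is not a UK legal standard.
from collections import defaultdict

# Hypothetical screening outcomes: (group, shortlisted?)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, shortlisted in outcomes:
    totals[group] += 1
    selected[group] += shortlisted

rates = {group: selected[group] / totals[group] for group in totals}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} [{flag}]")
```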

Human intervention

AI can efficiently and effectively reduce workload and streamline internal processes, but it is still important to manually check the accuracy of AI-generated output. We have all heard horror stories about content created by generative AI being used without its accuracy being confirmed. To avoid reputational damage or liability for your organisation, it is therefore important to set expectations about when AI-assisted tasks will require manual human input.
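In practice, many teams encode those expectations as a simple routing rule: outputs that the tooling flags as low confidence, or that fall into sensitive categories, go to a person before use. The sketch below is a generic illustration; the confidence score, threshold and category names are assumptions about your own tooling, not features of any particular product.

```python
# Route AI output to human review when confidence is low or the task is
# sensitive. The threshold and category list are illustrative assumptions.
SENSITIVE_CATEGORIES = {"legal advice", "hr decision", "financial figures"}
CONFIDENCE_THRESHOLD = 0.85

def needs_human_review(category: str, confidence: float) -> bool:
    return category in SENSITIVE_CATEGORIES or confidence < CONFIDENCE_THRESHOLD

tasks = [
    ("marketing copy", 0.93),
    ("hr decision", 0.97),      # sensitive: always reviewed
    ("meeting summary", 0.61),  # low confidence: reviewed
]

for category, confidence in tasks:
    route = "human review" if needs_human_review(category, confidence) else "auto-publish"
    print(f"{category} (confidence {confidence:.2f}) -> {route}")
```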

For further information on how to make responsible decisions when deploying AI technology, contact Hannah Pettit at [email protected].