Wolfsberg outlined five main principles for responsible AI/ML:
- Legitimate Purpose
- Proportionate Use
- Design and Technical Expertise
- Accountability and Oversight
- Openness and Transparency
In this article, we comment on each principle with smartKYC’s interpretation and opinion.
Legitimate Purpose
“Financial institutions’ programmes to combat financial crimes are anchored in regulatory requirements, and a commitment to help safeguard the integrity of the financial system, while reaching fair and effective outcomes,” Wolfsberg, 2022.
Under GDPR, organisations that carry out identity checks and hold potentially sensitive information about customers must be completely transparent about what happens to that data after use. Organisations running these checks must have a legitimate purpose and must perform and document a Legitimate Interests Assessment (LIA), ensuring that the ethical and operational risks of their approach are assessed in order to protect the customer.
Also, be wary of KYC tools whose vendors believe they are covered under GDPR simply because everything they find about individuals is in the public domain. Public availability does not exempt them from performing a proper assessment of legitimate purpose.
Proportionate Use
“Financial institutions should ensure that, in their development and use of AI/ML solutions for financial crimes compliance, they are balancing the benefits of use with appropriate management of the risks that may arise from these technologies,” Wolfsberg, 2022.
It is important not simply to ‘hoover up’ everything in an attempt to monitor effectively: information should only be saved if it is pertinent, and institutions should implement a programme that regularly validates their use of AI/ML. Applied well, this benefits KYC screening because the analyst sees only what is relevant to the task, rather than being overwhelmed with information.
For instance, during adverse media screening a human analyst could read and absorb a great deal of information about a client that is irrelevant for KYC screening purposes and could be deemed an invasion of their privacy: sexual orientation, children’s names and so on. With AI tools such as smartKYC, the irrelevant and personal information is disregarded, and only information pertinent to the job at hand is presented to the human for review.
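To make the idea concrete, the filtering described above can be sketched as a classifier that passes through only risk-relevant snippets. This is an illustrative toy only, not smartKYC’s actual pipeline: real adverse media screening uses multilingual NLP models, and the topic list below is a hypothetical placeholder.

```python
# Toy relevance filter for adverse media snippets.
# Hypothetical topic list; a production system would use semantic models,
# not keyword matching.
RISK_TOPICS = {
    "fraud", "bribery", "money laundering", "sanctions", "embezzlement",
}

def is_kyc_relevant(snippet: str) -> bool:
    """Return True only if the snippet mentions a financial-crime topic."""
    text = snippet.lower()
    return any(topic in text for topic in RISK_TOPICS)

def screen(snippets: list[str]) -> list[str]:
    """Keep only risk-relevant snippets; personal details are dropped."""
    return [s for s in snippets if is_kyc_relevant(s)]
```

For example, `screen(["X was charged with fraud in 2019", "X has two children"])` keeps only the first snippet: the personal detail never reaches the analyst.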
Design and Technical Expertise
“Financial institutions should carefully control the technology they rely on and understand the implications, limitations, and consequences of its use to avoid ineffective financial crime risk management,” Wolfsberg, 2022.
Financial institutions need people who understand what is happening in the AI/ML space. When considering any AI application, it is important to challenge the type of AI used: if you cannot explain how the AI produced its insights from a given data set, you may be relying too heavily on the technology. With SaaS AI, this awareness is all the more important because of its third-party nature.
Teams involved in the creation, monitoring and control of AI/ML should be composed of staff with the appropriate skills and the diverse experience needed to identify bias in the results. AI is a program, and is therefore only as good as the data fed into it; the fact that data has been ingested does not mean it is accurate or correct.
Accountability and Oversight
“FIs are responsible for their use of AI/ML, including for decisions that rely on AI/ML analysis, regardless of whether the AI/ML systems are developed in-house or sourced externally,” Wolfsberg, 2022.
Beyond understanding the technology they use, financial institutions and their staff need to understand that they are responsible for the decisions made on the basis of AI/ML. Test your AI solutions properly, and ensure there are people continually reviewing the system and remaining sceptical about the data, so that problems are caught if something does go wrong.
This can be even harder to monitor when using third-party machine learning tools, which can be something of a black box by their very nature.
Openness and Transparency
“FIs should be open and transparent about their use of AI/ML, consistent with legal and regulatory requirements,” Wolfsberg, 2022.
Again, it is important to be aware of any uncertainty about how a program reached a particular conclusion. The danger of data falling into the wrong hands is real, and institutions need to be open about what information is being obtained and used to make their decisions. The term ‘AI’ is broad and often poorly defined, which makes this openness all the more important.
At smartKYC, we believe it is of the utmost importance to have explainable AI that clearly shows how decisions were made.
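One simple way to support explainability, sketched here under our own assumptions rather than as a description of smartKYC’s internals, is to record alongside every screening decision the evidence that produced it, so a human reviewer can always see why an alert was raised:

```python
# Illustrative sketch: attach triggering evidence to every screening
# decision. The Decision structure and decide() helper are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Decision:
    subject: str
    alert: bool
    # Source excerpts that triggered the alert; empty when no alert.
    evidence: list[str] = field(default_factory=list)

def decide(subject: str, findings: list[tuple[str, bool]]) -> Decision:
    """findings: (source excerpt, is_risk_relevant) pairs from upstream screening."""
    evidence = [excerpt for excerpt, relevant in findings if relevant]
    return Decision(subject=subject, alert=bool(evidence), evidence=evidence)
```

A reviewer querying a flagged client then sees not just “alert: yes” but the exact excerpts behind it, which is the essence of an explainable, auditable decision trail.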
Financial organisations cannot ignore artificial intelligence and machine learning solutions, but they should adopt a measured and prudent process both for adopting third-party applications and for developing their own.
Although AI/ML solutions can relieve companies of vast quantities of manual labour, they should not be thought of as a complete replacement for humans, but rather as an enhancement to the existing workforce.
It is necessary to keep regulatory requirements and the organisation’s core values in mind when considering using an AI/ML solution for financial crime and risk management.
smartKYC’s adverse media screening software is the world’s most advanced multilingual semantic search engine, machine-reading all online media content for potential negative news about your clients, improving KYC processes and reducing risk. If you’re interested in learning more about smartKYC’s industry-leading multilingual NLP and how it can transform the efficiency and effectiveness of your KYC operations, book your demo today.