Are Your AI Models Compliant?
The topic of AI compliance from a regulatory standpoint has been covered extensively. For highly regulated industries like banking, insurance, and healthcare, the potential penalties when AI models exhibit bias or violate other compliance standards run high.
One prominent example came in 2019, when Apple's credit card algorithms were found to have inherent gender bias, leading to men receiving higher credit limits than women with the same attributes in the same household. Another example is that of self-driving cars, which, for fairly obvious reasons, have come under increased scrutiny from governing bodies such as the National Highway Traffic Safety Administration and the Department of Transportation. In this case, a non-compliant AI model could result in serious injury or even loss of life.
There are two sides to the term non-compliant: there's the concept of failing to meet regulatory requirements, and then there's another definition, in which non-compliant means failing to act in accordance with a wish or command. It's that latter meaning we'll explore here. In these situations, you might not be violating laws, but the consequences can still be significant in terms of business impact.
Looking at the problems of non-compliance
Non-compliant AI can sometimes be benign: the models make incorrect decisions in ways that are annoying or frustrating but don't have an outsize impact. Perhaps an office worker says, "It's too hot in here" and the chatbot interprets that as "Turn up the temperature." Sure, it's a little irksome, but it's not going to cause major problems in the long run.
But a less benign example of non-compliance could result in a company missing profit objectives or losing money, or it might create a PR nightmare. Facebook's AI mislabeling people of color and Zillow losing billions of dollars after using AI to purchase homes for flipping are two examples that come to mind. In Zillow's case, the company's algorithms failed to account for fluctuations in consumer needs and pricing during the COVID-19 pandemic, and they couldn't accurately predict future prices or the speed of sales. As a result, Zillow ended its Zillow Offers program, losing an estimated $380 million.
AI and ML models are great when they do what you have set them up to do, but when that doesn't happen, the technology can be a liability. And ensuring that your models will, in fact, do what you want them to do isn't always easy.
Underlying challenges
AI and ML are still relatively new technologies. A significant number of enterprises haven't even started the process of getting AI models running in production, and for those that have, the field is still so new that we've barely scratched the surface in terms of the potential harm an AI model can do. Many companies aren't fully aware of this issue, or they don't see it as an issue at all.
To ensure accurate model behavior, start by clearly defining the business requirements. What are the key business metrics and compliance requirements that must be met? Identify the various sub-tasks and the owners for those tasks. Unlike traditional corporate compliance and audit requirements, which are relatively well-defined, AI/ML governance requires extra layers that aren't yet well understood. Therefore, it's important to define these tasks and owners at a granular level, as in the sketch below.
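To make that granularity concrete, here is a minimal sketch of one way a team might record governance tasks alongside their owners, metrics, and review cadences. Every task name, owner, and cadence shown is hypothetical and would need to be tailored to your organization; the point is the structure, not the specifics.

```python
from dataclasses import dataclass

@dataclass
class GovernanceTask:
    """One granular AI/ML governance task with a named owner.

    All names, metrics, and cadences below are illustrative assumptions,
    not prescriptions.
    """
    name: str                 # what the task checks
    owner: str                # accountable person or team
    business_metric: str      # the business outcome this task protects
    compliance_check: str     # the internal or external requirement it satisfies
    review_cadence_days: int  # how often the task is re-audited

# A hypothetical task registry for a credit-scoring model
tasks = [
    GovernanceTask(
        name="Bias audit across protected attributes",
        owner="Model Risk Team",
        business_metric="approval-rate parity",
        compliance_check="fair-lending policy",
        review_cadence_days=30,
    ),
    GovernanceTask(
        name="Production drift monitoring",
        owner="ML Engineering",
        business_metric="prediction accuracy vs. baseline",
        compliance_check="internal model-performance SLA",
        review_cadence_days=7,
    ),
]

for task in tasks:
    print(f"{task.name} -> {task.owner} (reviewed every {task.review_cadence_days} days)")
```

Even a simple registry like this forces the right conversation: who owns each check, what business metric it protects, and how often it actually runs.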
Use caution when considering the tools you'll use for AI governance. Many vendors focus on explaining technical metrics from an academic standpoint. Terms such as "responsible AI," "explainability," and "observability" are thrown around freely, but stay focused on the key issues. Don't let vendors trap you in technical ML metrics that only data scientists can appreciate.
Teams across the company should begin to build AI knowledge and competencies. AI is most often seen as a black box and the purview of highly technical data scientists and machine learning engineers, but that idea isn't sustainable. It's like saying, "Only my IT people need to understand what the security vulnerability means." Casual users don't need to know the nitty-gritty details of cybersecurity, but they do need to understand how to avoid well-known phishing tactics to protect themselves. Likewise, teams don't need to become AI experts, but they do need to understand enough to be enablers of, not hindrances to, the process.
Governance is king
Model governance is a must-have once your organization has one or more AI/ML models in use. It reduces risk and increases the odds of a model delivering on its initial promise. Because both the reward potential and the stakes are so high, use the best practices outlined above to ensure you are doing everything possible to remain in compliance with your own policies as well as any external regulations.
Written by Victor Thu.