- Take a deep dive into what AI bias means in the insurance industry
- Read three ways Nuon’s AI is configured to eliminate the risks of negative AI bias
- For more on AI bias, check out Richard’s previous article on the topic
At Nuon AI we love to engage in a little blue-sky thinking, and lately we’ve been contemplating the challenge of eliminating AI bias. It’s an interesting topic to explore and debate hypothetically, but it’s also a very practical one for us: we’ve put strategies in place to eliminate bias from our own AI. Read on to find out how we’ve approached this important subject.
If you’re new to the topic, AI bias is when software unexpectedly produces results that discriminate on the basis of a protected characteristic, such as gender or religion. If you want a primer, you can read my earlier article on AI bias.
Here’s the big question – how can we reduce, or even eliminate, bias in AI?
Maybe philosophy is the answer. If we teach the AI moral philosophy it could learn to ask itself the big questions and reach new, ground-breaking conclusions…or it might just tumble into a spiral of rationality.
So, that’s a non-starter.
When using AI in insurance tech to help draw profitable conclusions, bias has to be an inherent part of the system. The challenge stems from the fact that we’re trying to distinguish between “good” bias and “bad” bias. We could eliminate bias entirely if all the insurance companies got together and agreed a flat fee that everyone pays regardless of their individual profile.
Although I think that would be considered anti-competitive and that… *Googles quickly* … would result in five years in prison…
AI bias can be dangerous… but is it necessary?
There’s actually some truth in that last notion. AI bias in insurance tech IS a necessary part of the system because, without it, every quote would be identical.
Nuon AI’s software is designed, broadly speaking, to help insurance companies deliver quotes that maximise conversions and profit. It couldn’t do that without the AI being programmed to spot trends, and subsequently learning to lean into them.
We generally use the word “bias” to describe prejudice that is unfair, but the simplest definition is just preferring or selecting a particular thing. This leaves us no choice but to get our hands dirty and actually try to tease out the differences between bias that is desirable, and bias that is undesirable.
And the undesirable kind of AI bias is very dangerous indeed.
Our society is rightly quick to call out anyone who is perceived to be prejudiced or bigoted. There are plenty of examples of AI bias that have embarrassed people or businesses (Google “Amazon AI Bias” to learn about a classic example). But being left red-faced is the best-case scenario. Careers have been ended and corporate reputations irreparably damaged by accusations of discrimination.
And in the case of insurance, certain types of discrimination are illegal.
However, it is entirely possible for an insurer using an AI to generate quotes to systematically give higher quotes to categories against which it is illegal to discriminate, without actually basing those decisions on the category itself. In other words, the results are showing bias, but the AI itself isn’t biased.
For instance, historically, women have enjoyed lower motor insurance premiums. However, since 2012 it has been illegal for insurers to discriminate on the grounds of gender, which means it can no longer be used as a factor when generating a quote. Nevertheless, ten years later… men still tend to pay more on average for their premiums.
Is that because insurance companies are ignoring the rules? No. It’s because there are other factors that allow the quoting algorithms to reach similar conclusions even when the software doesn’t know the gender of the customer.
Men have historically paid higher premiums because, presumably, the data showed that men are more likely to make a claim. But that doesn’t tell you why that’s the case. Could it be because men make up a slightly higher percentage of drivers? Is it to do with driving style or distractions?
Even if you remove gender from the quoting algorithm, the software can still reach a similar conclusion via other data. You can’t easily remove that kind of AI bias. And even if you could, it wouldn’t necessarily be desirable. If you go down that route, you’d eventually be in the aforementioned place where every single applicant is getting exactly the same quote and everyone in the insurance industry is in prison for the next five years.
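To make that concrete, here’s a toy sketch in Python. The data, field names and pricing rule are all invented for illustration (this is not our pricing model), but it shows how a proxy variable such as annual mileage can recreate a gender gap in quotes even when gender is never an input:

```python
# Toy illustration of proxy bias: the pricing rule never sees gender,
# yet average quotes still differ between the two groups because a
# correlated variable (annual mileage) sneaks the pattern back in.
import random
from statistics import mean

random.seed(42)

def synth_applicant():
    gender = random.choice(["M", "F"])
    # Purely for illustration: assume mileage is distributed differently
    # between the two groups.
    mileage = random.gauss(9000 if gender == "M" else 7500, 2000)
    return {"gender": gender, "annual_mileage": max(mileage, 0)}

def quote(applicant):
    # The quote only looks at mileage - gender is never used.
    return 300 + 0.02 * applicant["annual_mileage"]

applicants = [synth_applicant() for _ in range(10_000)]
by_gender = {"M": [], "F": []}
for a in applicants:
    by_gender[a["gender"]].append(quote(a))

for g, quotes in by_gender.items():
    print(g, round(mean(quotes), 2))
# The averages differ even though gender was never an input to quote().
```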
3 strategies for reducing AI bias
Where does all of that leave us in terms of reducing AI bias in the Nuon AI software? We use three central strategies to manage bias.
We start by acknowledging that a certain level of AI bias is inevitable and that it isn’t necessarily indicative of unwanted discrimination.
From that point we ensure the AI is prevented from producing results that could be viewed as illegal or at the very least unfair. The goal is to be fair, stay within the law and protect customers, while still producing results that are profitable for the insurance companies we work with.
Strategy one: Don’t add discriminatory data in the first place
For Nuon AI’s software to work, we need our insurance customers to send us applicant data. But we specifically ask them to exclude any data points that relate to areas of discrimination, such as gender and ethnicity. If we don’t even add this information into the mix, we don’t have to worry about this data unintentionally tainting the end results. An AI can’t easily discriminate if it doesn’t even have the data to do so.
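As a rough sketch of what that looks like in practice (the field names here are hypothetical, not our actual schema), the idea is simply to strip protected-characteristic fields out of a record before it goes anywhere near the pricing model:

```python
# Minimal sketch of strategy one: remove protected-characteristic fields
# from an applicant record before it is used for quoting.
PROTECTED_FIELDS = {"gender", "ethnicity", "religion", "sexual_orientation", "disability"}

def sanitise_applicant(record: dict) -> dict:
    """Return a copy of the record with protected fields removed."""
    return {k: v for k, v in record.items() if k not in PROTECTED_FIELDS}

raw = {"age": 34, "postcode_area": "SW1", "gender": "F", "annual_mileage": 8200}
print(sanitise_applicant(raw))
# {'age': 34, 'postcode_area': 'SW1', 'annual_mileage': 8200}
```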
Strategy two: The data we receive is “hashed”
When our insurance customers send us data it can be hashed, making it unreadable by humans. Hashing data is akin to encrypting information, except there isn’t a function to decrypt it again later. The AI can still work with it because it doesn’t need to comprehend the 1s and 0s it’s working with to perform a calculation; it is looking for patterns in the data, not particular values.
In practice this means that, even if we wanted to (which we don’t), we can’t introduce any human bias into the equations.
Not only does this prevent unintentional bias, it demonstrates to regulators and the public that we take discrimination avoidance seriously.
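Here’s a simplified illustration of the idea, using a generic one-way hash rather than our production pipeline. Identical inputs always map to the same opaque token, so the patterns survive, but nobody can reverse a token back into the original value:

```python
# Minimal sketch, assuming a simple key/value applicant record: hash each
# field value so humans can't read it, while the AI can still spot that
# two applicants share the same (opaque) value.
import hashlib

def hash_value(value: str, salt: str = "example-salt") -> str:
    """One-way hash of a field value; there is no function to reverse it."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

record = {"occupation": "teacher", "postcode_area": "SW1", "vehicle": "hatchback"}
hashed = {k: hash_value(v) for k, v in record.items()}
print(hashed)
# Two applicants with the same occupation still share the same token,
# so the model can find the pattern without anyone reading the raw data.
```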
Strategy three: We perform counterfactual checking tests
Counterfactuals are outcomes that didn’t happen, but could have occurred under the right conditions. As an additional line of defence, counterfactual testing allows us to introduce bias in a purely theoretical, simulated environment and see whether the outcome would have changed.
For example, we could request two identical sets of data from the customer, where the only difference is that one is made up entirely of men and the other of women, and then test the AI on each separately.
If AI bias is absent, the end result will be the same. In the unlikely event that AI bias has crept in, the end results will show a difference.
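A stripped-down version of that check might look like the sketch below. The quote_engine here is a stand-in for the real pricing model, and the tolerance is an invented parameter for illustration:

```python
# Minimal sketch of a counterfactual check: quote two copies of the same
# applicants, identical except for gender, and compare the average results.
from statistics import mean

def counterfactual_gender_check(applicants, quote_engine, tolerance=0.01):
    quotes_m = [quote_engine({**a, "gender": "M"}) for a in applicants]
    quotes_f = [quote_engine({**a, "gender": "F"}) for a in applicants]
    gap = abs(mean(quotes_m) - mean(quotes_f))
    # Passes only if the average quotes are (near) identical.
    return gap <= tolerance, gap
```

If the engine genuinely ignores gender, the gap is zero and the check passes; any measurable gap flags bias that has crept in and needs investigating.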
If the FCA ever come knocking because they believe our AI might be biased, a counterfactual test would allow us to prove that any unusual outcomes are coincidental and not a result of discrimination.
The future of Nuon AI
AI in its modern form has been around since the 1950s, which, in technology terms, still makes it a rather young technology. AI bias is just one of the many ongoing challenges we face, and at some point in the future the industry may experience breakthroughs that make this article seem quaint.
For now we can only deal with the challenges that are in front of us. Today, Nuon AI helps insurance companies improve the profitability of their new business activities, but the next step is to turn our AI to the claims process.
This will create a whole new raft of challenges because, for a start, claims involve individuals, which makes hashing the data a non-starter. Making connections between claims and pricing without introducing bias will be a significant undertaking.
Personally, I think Nuon is going to do very well with this next step in our evolution. But then that could be my bias.