IBM Creates Critical Tool To Detect AI Bias

Bias could turn future AIs into very dangerous things: it corrupts the process and almost certainly leads to results that can't be trusted. This is particularly worrying as AIs advance through successively more capable forms, because the farther you go down that evolutionary path the more autonomous the AI becomes, and a powerful AI that can't be trusted could easily be a company killer. Few companies have the focus and breadth of skills to take on a task like this, and fortunately IBM is one of them. Their Cambridge-based lab is the tip of the spear when it comes to battling bias and, this week, they announced some impressive advancements.

Let’s talk about that this week.


Bias, with regard to an AI, grows more dangerous the more the related system is relied upon, and it can be introduced in three areas, which aren't mutually exclusive (all three could introduce bias at once). The first, and most obvious place to look, is the quality of the data: samples that are too small, that leave out critical data sets, or that organize the data in a way that causes it to be forgotten or misinterpreted. Second, the AI algorithms themselves could be biased (and given they are created by humans, and humans tend to be biased, this could be especially problematic). Finally, the individual receiving the output from an AI may themselves be biased, and may misinterpret the data or the weights placed on the identified results.
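To make the first of those three areas concrete, here is a minimal sketch (plain Python, no external libraries, not IBM's actual tooling) of one common data-quality check: the "disparate impact" ratio, which compares the rate of favorable outcomes between two groups in a training set. The 0.8 threshold (the "four-fifths rule") is a widely used convention, and the loan-approval records below are hypothetical, purely for illustration.

```python
def disparate_impact(records, group_key, outcome_key, privileged, favorable):
    """Ratio of favorable-outcome rates: unprivileged group / privileged group."""
    def rate(is_privileged):
        # Select records belonging (or not) to the privileged group.
        group = [r for r in records if (r[group_key] == privileged) == is_privileged]
        if not group:
            return 0.0
        return sum(1 for r in group if r[outcome_key] == favorable) / len(group)

    priv_rate = rate(True)
    return rate(False) / priv_rate if priv_rate else 0.0


# Hypothetical loan-approval data: group A is approved 3/4 of the time,
# group B only 1/4 of the time.
data = [
    {"group": "A", "decision": "approve"},
    {"group": "A", "decision": "approve"},
    {"group": "A", "decision": "approve"},
    {"group": "A", "decision": "deny"},
    {"group": "B", "decision": "approve"},
    {"group": "B", "decision": "deny"},
    {"group": "B", "decision": "deny"},
    {"group": "B", "decision": "deny"},
]

di = disparate_impact(data, "group", "decision", privileged="A", favorable="approve")
print(round(di, 3))   # 0.25 / 0.75 -> 0.333
print(di >= 0.8)      # False: below the four-fifths rule, flag for review
```

A ratio well below 1.0, as here, doesn't prove the data is biased, but it is exactly the kind of automated signal a bias-detection tool raises so a human can investigate the sample before a model is trained on it.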

Bias is relatively easy to introduce but very difficult to identify and remove, largely because the people using the system may share the same biases and thus not recognize that it is performing poorly. Bias could result in friendly-fire accidents in defense deployments, erroneous diagnoses from doctors relying on a biased system, and Smart Cities that are more insane than smart, putting citizens at risk, particularly where autonomous cars are running.

The identification and elimination of bias is therefore critical for any firm deploying an AI system.

To read the complete article, CLICK HERE

NOTE: This column was originally published in the .
