Artificial Intelligence Can Now Explain Its Own Decision-Making
Tasmin Lockwood
People are scared of the unknown. So, naturally, one reason artificial intelligence (AI) hasn't yet been widely adopted may be that the rationale behind a machine's decision-making remains unknown.
How can decisions be trusted when people don't know where they come from? This is the so-called black box of AI, and it is something that needs to be cracked open. As technology plays an increasingly important role in day-to-day life and reshapes roles within the workforce, the ethics behind algorithms have become a hotly debated topic.
Medical practitioners are expected to be among the first to benefit greatly from AI and deep learning, which can rapidly scan images and analyze medical data. But those decision-making algorithms will only be trusted once people understand how their conclusions are reached.
Leading thinkers warn that algorithms may reinforce their programmers' prejudices and biases, but IBM has a different view.
IBM claims to have made strides in breaking open the black box of AI with a software service that brings transparency to AI.
IBM is attempting to provide insight into how AI makes decisions, automatically detecting bias and explaining itself as decisions are being made. The technology also suggests additional data to include in the model, which may help neutralize future bias.
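IBM has not published the interface of this service, so the following is only a generic sketch of what per-decision explanation can look like, written with scikit-learn. The data, feature names, and explain helper are all illustrative assumptions, not IBM's method; the sketch simply ranks each feature's contribution to a single linear-model prediction, the simplest form of the "why" output described above.

```python
# A minimal, hypothetical sketch of per-decision explanation for a linear
# model. Assumes: pip install scikit-learn numpy. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["age", "blood_pressure", "cholesterol"]  # hypothetical features
X = np.array([[50, 130, 210], [35, 110, 180], [62, 150, 240], [41, 120, 190]])
y = np.array([1, 0, 1, 0])  # hypothetical outcomes

model = LogisticRegression().fit(X, y)

def explain(x):
    """Rank features by their contribution (coefficient * value) to one decision."""
    contributions = model.coef_[0] * x
    order = np.argsort(-np.abs(contributions))
    return [(feature_names[i], contributions[i]) for i in order]

patient = np.array([55, 140, 220])
print("Prediction:", model.predict([patient])[0])
for name, c in explain(patient):
    print(f"  {name}: {c:+.2f}")
```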
IBM previously deployed AI for decision support with IBM Watson, which provided clinicians with evidence-based treatment plans and incorporated automated care management and patient engagement into tailored plans.
Experts were quick to mistrust the model because it didn't explain how its decisions were made. Watson aided in medical diagnosis and reinforced doctors' decisions, but the promising technology was never going to replace the doctor. When Watson's analysis agreed with the doctor's, it served as reinforcement. When Watson differed, it was assumed to be wrong.
But the company’s latest innovation, which is currently unnamed, appears to tackle Watson’s shortfalls. Perhaps naming it Sherlock would be fitting.
Transparency matters not only in decision-making itself: records of a model's accuracy, performance, and fairness must also be easy to trace and recall for customer service, regulatory, or compliance reasons, such as GDPR compliance.
Alongside the announcement of this AI, IBM Research also released an open-source AI bias detection and mitigation toolkit, offering tools and resources to encourage global collaboration on addressing bias in AI.
This includes a collection of libraries, algorithms, and tutorials that will give academics, researchers, and data scientists the tools and resources they need to integrate bias detection into their machine learning models.
While other open-source resources have focused solely on checking for bias in training data, the IBM AI Fairness 360 toolkit claims to check for and mitigate bias in AI models.
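The toolkit is published on GitHub as AIF360. Below is a minimal sketch of its basic workflow, using a tiny synthetic hiring dataset of my own (an illustrative assumption, not one of IBM's tutorials): measure a fairness metric on the data, then apply one of the toolkit's mitigation algorithms and measure again.

```python
# Minimal AI Fairness 360 sketch: detect, then mitigate, dataset bias.
# Assumes: pip install aif360 pandas. The hiring data below is synthetic.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'hired' is the favorable binary outcome.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.4, 0.7, 0.6, 0.3, 0.8, 0.2],
    "hired": [1, 1, 0, 1, 0, 0, 1, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                             protected_attribute_names=["sex"])

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Detect: difference in favorable-outcome rates between groups (0 is fair).
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("Before mitigation:", metric.statistical_parity_difference())

# Mitigate: reweigh training examples so both groups carry equal influence.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_fair = rw.fit_transform(dataset)

metric_fair = BinaryLabelDatasetMetric(dataset_fair,
                                       unprivileged_groups=unprivileged,
                                       privileged_groups=privileged)
print("After mitigation:", metric_fair.statistical_parity_difference())
```

Statistical parity difference compares favorable-outcome rates between the two groups, so a value near zero after reweighing indicates the mitigation worked on this data.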
"IBM led the industry in establishing trust and transparency principles for the development of new AI technologies. It’s time to translate principles into practice. We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making."
— David Kenny, IBM’s SVP of Cognitive Solutions.
What could this mean for medical practitioners? The new technology may open an array of problems with its implementation, as policy still has to catch up with the tech. Who is liable for a wrong diagnosis: the doctor or the AI? After a proven track record of correct diagnoses, how does a person go against the software? How is a gut feeling justified?