IBM… AI with Trust and Transparency
There are numerous reasons for IT industry vendors' interest in and focus on artificial intelligence (AI) solutions and services. Though AI has long been a goal for scientists and engineers of every stripe, advancements in foundational technologies have finally made it commercially viable. Equally important, AI can help solve increasingly complex, thorny problems, making it applicable in numerous technical, industry, workplace and consumer scenarios.
In other words, effective AI-based solutions can be developed, and there's plenty of work to be done. While that's great news, critical AI-related trust and transparency issues have never been more important for vendors to address or their customers to understand. That's especially true as vendors bring new AI solutions and services online.
Recently in New York City, IBM executives outlined the state of the company's AI efforts, the critical roles trust and transparency play in that process, and the next steps needed to bring those projects in line with the company's multi-cloud strategy and vision.
Following those events, IBM announced new Trust and Transparency capabilities on IBM Cloud that automatically detect bias and explain how AI makes decisions as those decisions are being made. The capabilities can be applied to models built in a range of machine learning frameworks and AI build environments, including IBM Watson, TensorFlow, SparkML, AWS SageMaker, and AzureML.
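To give a flavor of what automated bias detection involves, the sketch below computes a common fairness metric, the disparate impact ratio, over a set of model predictions. This is a hypothetical illustration of the general technique, not IBM's actual API; the function name, data, and threshold are all assumptions.

```python
# Illustrative sketch of a disparate-impact bias check, the kind of test
# fairness tooling can run on model outputs. Hypothetical example only;
# not IBM's actual API or implementation.

def disparate_impact(predictions, groups, privileged, favorable=1):
    """Ratio of favorable-outcome rates: unprivileged group / privileged group.

    A ratio near 1.0 suggests parity; values below ~0.8 are a common
    rule-of-thumb flag for potential bias (the "80% rule").
    """
    priv = [p for p, g in zip(predictions, groups) if g == privileged]
    unpriv = [p for p, g in zip(predictions, groups) if g != privileged]
    rate_priv = sum(1 for p in priv if p == favorable) / len(priv)
    rate_unpriv = sum(1 for p in unpriv if p == favorable) / len(unpriv)
    return rate_unpriv / rate_priv

# Hypothetical loan-approval predictions (1 = approved) split by a
# protected attribute with groups "A" (privileged) and "B".
preds = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = disparate_impact(preds, groups, privileged="A")
# Group A approval rate is 0.75, group B is 0.25, so the ratio is ~0.33,
# well under the 0.8 rule-of-thumb threshold and would be flagged.
```

Production systems go further, monitoring such metrics continuously at decision time and generating explanations for individual predictions, but the underlying arithmetic is as simple as this.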
Let’s consider the current and future state of IBM AI, along with why the company is focusing so much attention on related trust and transparency issues and solutions.
NOTE: This column was originally published in the Pund-IT Review.