that could heal diseases once thought incurable. Someday, it may even help us get around in self-driving taxis, buses, trains and airplanes.

It's important to remember that people have always struggled to trust new technology at first. One reason companies paid people to operate elevators back when they were new was so that riders could trust that the elevator would bring them to the correct floor. The operators weren't strictly necessary; they were there to build trust.
One successful way I've found to build that trust is by showing people the data and rationale behind an AI's decision-making, effectively creating a transparent AI. When my teams engage with a hospital system and begin implementing our platform in a hospital ICU, we ask for a year's worth of data from their systems, which can include information like heart rate, blood pressure checks, IV drip settings and more. We feed all of that information into our system, and then, when we begin presenting our AI technology to physicians and nurses, we show them how the system would have made decisions with past patients.
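To make the idea concrete, here is a minimal sketch of that retrospective "replay" step: historical vitals are run back through a decision function so clinicians can inspect what the system would have recommended for each past patient. The field names, thresholds and the simple rule-based stand-in for the model are all illustrative assumptions, not the platform's actual logic.

```python
from dataclasses import dataclass

@dataclass
class VitalsSnapshot:
    """One historical record pulled from the hospital's systems (illustrative fields)."""
    patient_id: str
    heart_rate: int           # beats per minute
    systolic_bp: int          # mmHg
    iv_drip_rate_ml_hr: float

def recommend(v: VitalsSnapshot) -> str:
    """Stand-in decision rule; a real system would use a trained model."""
    if v.systolic_bp < 90 and v.heart_rate > 110:
        return "increase_fluids"
    if v.systolic_bp > 160:
        return "flag_hypertension"
    return "no_change"

def replay(history: list[VitalsSnapshot]) -> list[tuple[str, str]]:
    """Run the decision function over past records so clinicians can see
    what it *would* have recommended, without touching live patients."""
    return [(v.patient_id, recommend(v)) for v in history]

history = [
    VitalsSnapshot("p1", heart_rate=120, systolic_bp=85, iv_drip_rate_ml_hr=50.0),
    VitalsSnapshot("p2", heart_rate=72, systolic_bp=118, iv_drip_rate_ml_hr=30.0),
]
for patient_id, action in replay(history):
    print(patient_id, action)
```

Because the replay is offline, clinicians can compare each recommendation against what was actually done for that patient, which is where the trust-building conversation starts.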
The problem with AI currently is not necessarily what it's doing but that no one understands how it does what it does. Providing a frame of reference, as well as showing the math behind the recommendations and the ramifications of implementing those suggestions, is what builds trust. This approach gives non-engineers easy-to-understand insight into why the AI engine acts the way it does as it makes decisions.
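"Showing the math" can be as simple as breaking a recommendation's score into per-feature contributions, so a nurse or physician can see which vital signs pushed the system toward an alert. The sketch below uses a linear risk score for that purpose; the weights, feature names and alert threshold are made-up assumptions for illustration, not the product's actual model.

```python
# Illustrative transparent scoring: each feature's contribution to the
# risk score is computed and displayed next to the final recommendation.
WEIGHTS = {"heart_rate": 0.02, "systolic_bp": -0.03, "lactate": 0.5}
BASELINE = 1.0
THRESHOLD = 2.0  # alert when the score exceeds this (arbitrary for the sketch)

def explain_score(features: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return the total risk score plus each feature's additive contribution."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BASELINE + sum(contributions.values())
    return score, contributions

score, parts = explain_score({"heart_rate": 115, "systolic_bp": 88, "lactate": 3.1})
print(f"risk score = {score:.2f} (alert if > {THRESHOLD})")
# Show the largest drivers first, so the "why" is obvious at a glance.
for name, contribution in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:<12} contributes {contribution:+.2f}")
```

A linear breakdown like this is the simplest form of the idea; more complex models need dedicated attribution methods, but the goal is the same: every recommendation arrives with its reasons attached.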