Should we hold back on AI to make sure it is transparent? Is there a way to balance the exciting progress AI enables with the visibility needed to make sure AI is not doing something wrong or immoral? AI is something of a black box: what happens inside is a mystery not only to those using it, but increasingly to its developers as it grows more sophisticated. Let's look at some of the aspects of this issue.
The Case for More AI Visibility:
How can we allow a black box to make important decisions? Can we really trust AI to make ethical decisions fairly? With conventional programs, we can read the coded logic, see the paths that can be taken, and observe the outcomes. With Decision Model and Notation (DMN), the decision logic and its results are similarly visible. With AI, we cannot see the possible paths, the reasoning, or the alternatives it chose among. Who can make sense of the inner workings of AI? Whom do we hold responsible for the outcomes and ethics of AI? Would you trust AI with your life? AI is not flawless, so let's watch it closely.
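For contrast, here is what transparent, rule-based decision logic looks like: the kind of logic DMN formalizes as decision tables. Every path is visible in the source and every outcome traces to a specific rule. This is a minimal sketch; the loan rules and thresholds are hypothetical, chosen only for illustration.

```python
# A transparent, rule-based decision: every path is readable in the source.
# The rules and thresholds below are hypothetical, for illustration only.

def loan_decision(credit_score: int, debt_ratio: float) -> str:
    """Return an auditable decision with an explicit reason attached."""
    if credit_score < 600:
        return "DECLINE: credit score below 600"
    if debt_ratio > 0.45:
        return "DECLINE: debt-to-income ratio above 45%"
    if credit_score >= 720:
        return "APPROVE: prime credit"
    return "REFER: manual review"

# Each outcome can be traced back to the exact rule that produced it:
print(loan_decision(750, 0.30))  # APPROVE: prime credit
print(loan_decision(650, 0.50))  # DECLINE: debt-to-income ratio above 45%
```

A neural model making the same call offers no such trace: the "rules" are spread across millions of learned weights, which is precisely the visibility gap at issue.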
The Case for Full Speed Ahead on AI:
Why should we delay the benefits of AI while we wait for complete transparency? In many tasks, AI algorithms are already more accurate than their human counterparts. AI can detect illnesses faster and can assist doctors with treatment plans. While some decisions are life-impacting, many are not life-critical. Many AI inferences and actions can be logged and leveraged. AI should be tested like any other software against a large number of scenarios. While test beds for AI are difficult and sometimes nearly impossible to build, over time near-perfect coverage can be approached.
Since AI will be involved in many decisions going forward, transparency will only grow as an issue. If you want to hear more about decision management, AI-driven or not, please sign up for a free webinar by clicking here. I believe we can strike the balance by using AI to mine the audit logs and actions generated by AI activity. Let AI watch AI.
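The "let AI watch AI" idea can be made concrete: record every model decision in an audit log, then run a second, simpler process over the log to flag statistical outliers for human review. The sketch below is an assumption of how such a pipeline might look; the log fields, the confidence scores, and the 1.5-sigma flagging threshold are all illustrative choices, not a prescribed design.

```python
# Minimal sketch of "AI watching AI": log each model decision, then audit
# the log for anomalies. Field names and the 1.5-sigma threshold are
# assumptions for illustration, not a standard.
import statistics

decision_log = []  # in practice this would be a durable audit store

def log_decision(model_id: str, inputs: dict, score: float, outcome: str):
    """Append one model decision to the audit log."""
    decision_log.append({"model": model_id, "inputs": inputs,
                         "score": score, "outcome": outcome})

def audit(log, threshold_sigma=1.5):
    """Flag decisions whose confidence score is a statistical outlier."""
    scores = [entry["score"] for entry in log]
    mean, stdev = statistics.mean(scores), statistics.pstdev(scores)
    return [e for e in log
            if stdev and abs(e["score"] - mean) > threshold_sigma * stdev]

# Simulate five decisions; the last confidence score is anomalous.
for s in [0.91, 0.89, 0.90, 0.92, 0.12]:
    log_decision("model-a", {}, s, "approve" if s > 0.5 else "refer")

print(len(audit(decision_log)))  # prints 1 -- only the 0.12 outlier is flagged
```

A real deployment would use a more capable auditor (drift detection, fairness metrics, a second model), but the shape is the same: the primary model acts, the watcher reads only the log, and humans review what the watcher flags.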