We Must Lead in Responsible AI Research

In a recent survey of business leaders in the financial sector, a surprising number of respondents cited two obstacles to the adoption of AI. One was a lack of evidence that AI projects have produced the expected results; the other was suspicion about the trustworthiness of AI. Compared with the same survey from five years ago, the level of trust in AI has not increased. In fact, it has decreased. Why?

As an AI scientist working to improve AI trustworthiness (attributes such as reliability, privacy preservation, explainability, transparency, fairness, and the ability to keep a human in the loop when necessary), I sense that far more research, development, and governance are needed in this area to restore public confidence in AI. The need is urgent.

In less than a week, we saw the CEO of OpenAI, Sam Altman, terminated by the company's board of directors, which cited his lack of transparency; hired by Microsoft to lead a new AI development business; and then reinstated by the OpenAI board, possibly influenced by the large number of employees who complained and threatened to leave the company. The board had reacted to an initial letter warning of a new, powerful AI capability that needed more safety consideration before being released to the public.

All this drama, which unfolded in a matter of five days, should be a warning sign for the rest of us. It is a microcosm of the potential chaos, confusion, and lack of sensible, responsible, and safe handling of AI systems that affect our lives in disruptive ways. These events have undoubtedly further eroded public confidence and trust in AI and in the companies that lead its development. How do we protect the public from potentially reckless and chaotic decisions in this industry that could harm people?

President Biden’s recently released Executive Order on AI is a great step toward establishing ethical, trustworthy, and safe AI standards and practices. It mandates standards for AI best practices, safety, fairness, privacy protection, trustworthiness, and accountability for AI vendors and users. While it applies to the government’s adoption of AI, it can be a guiding light for other corporations seeking to strengthen their AI governance. But these measures may not be actionable enough to protect us from the next AI product that could cause public harm.

We need an independent national laboratory, established by the US Government, to test and validate AI products for safety, reliability, and trustworthiness. We need a testing lab similar to Underwriters Laboratories (UL) to test AI products and offer certification that helps the public make informed decisions about using them.

As I look to the future, the need for better guardrails and safety controls for AI will only grow. AI is a powerful force and will become even more powerful. It will shift how we work, live, and interact with one another. The proliferation of AI throughout the fabric of our lives, businesses, and work will introduce new opportunities and challenges.

The focus on responsible AI cannot be left to the federal government or a dozen or so leading AI companies. It must be everyone’s goal. For society as a whole to benefit from safe and trustworthy AI, I call on every company to establish its own AI strategy and framework for stronger AI governance.