How to build an inclusive future in the time of AI
In 1996, as a graduate student in the Logic Group at Stanford, I wrote my PhD thesis on artificial intelligence (AI), titled Integrating Specialized Procedures into Proof Systems. I have remained a student of AI for most of my adult life, deeply influenced by the all-time great thinkers – McCarthy, Minsky, Kay, Sutherland, the list goes on. For many of us who have studied this topic deeply for many years, the applications of AI we see today barely scratch the surface of the possible. Even as applications grow more sophisticated – from fraud detection to autonomous vehicles, knowledge management to natural language processing – we are still very far from Marvin Minsky’s “society of mind”.
Infosys’ recent research shows that most large companies are deploying, or planning to deploy, AI technologies. Businesses expect these technologies to bring disruption and, with it, growth and opportunity – for themselves, their employees, their customers and other stakeholders. Yet for many, the potential for disruptive change also brings fear and uncertainty, compounded by tumultuous geopolitical events: Brexit, the US presidential election, demonetization in India, cybersecurity threats, the refugee crisis, global terrorism and more.
In this uncertain environment, it comes as no surprise that employees worry about the future of their jobs and about their privacy. These concerns persist despite employers’ attempts to address them: 80% of the companies we surveyed plan to retrain and redeploy affected employees, and many say they carefully consider ethics, privacy and data protection as part of their AI efforts.
Like each technological disruption before it, AI will require humans to adapt – but this time at a speed and on a scale previously unknown in human history. And as we have done many times before, we must evolve our faculties and tools and move alongside it. We must achieve a kind of symbiosis between minds and machines, with machines amplifying and actualizing the thoughts and ideas of the human brain, freeing it from mundane and repetitive cognitive tasks. The brain thus unleashed can do the kinds of things no AI will ever do – like seeing what is not there and imagining what it could be.
People have the right to be concerned about the irresponsible use of technology, and leaders have the opportunity – indeed the imperative – to assuage those fears through empathy, action, education and communication. During the World Economic Forum’s Global Future Council on AI & Robotics, held recently in the United Arab Emirates, my peers in the AI community and I identified four fundamental areas where leaders must act now to shape an inclusive and safe future.
The road to 2020 will be significant not only for the development of AI technologies, but also for the strategies that will govern our interaction with them for years to come. As leaders, we have a responsibility to reimagine education, employment and social frameworks, and to work diligently to bring everyone along with us into the new reality. The opportunity to transcend the boundaries of our imagination with technologies such as AI is almost limitless, and the fear around this is natural. But it would be a great tragedy to allow the forces of fear and negativity to overwhelm the great potential before us for the purposeful and humane advancement of the human race.
SOURCE: World Economic Forum