| by Eva Gudbergsdottir

Olaf Groth MAIPS ’93, Hult International Business School professor of strategy, innovation, and economics. (Credit: Rob Ellis)

As systems running on artificial intelligence (AI) govern greater parts of our daily decisions and interactions, we might be in for some “wobbly stretches of human-machine interaction.”

In Solomon’s Code: Humanity in a World of Thinking Machines (Pegasus, 2018), Olaf Groth MAIPS’93, Hult International Business School professor of strategy, innovation, and economics, and Mark Nitzberg, executive director of UC Berkeley’s Center for Human-Compatible Artificial Intelligence, offer a broad examination of how AI shapes human values, trust, and power from a global perspective. The authors do not belong to the camp of doomsday predictors who are sounding alarms of a dystopian future where humans have lost control—as long as care is taken to manage the wobbles, and to build a solid foundation for human-machine interactions. We asked Groth to explain.

Artificial intelligence is at the same time underhyped and overhyped.

Q: What do you think people are missing in the current debate about AI and its impact on our lives today?

A: Artificial intelligence is at the same time underhyped and overhyped. There is an under-awareness of both the tremendous transformative potential of AI for humans and societies and of just how pervasive this technology already is—it is already in our cars, in our Internet search engines, in our phones, in our online shopping and social media, in our home thermostats, on our highways, in our courtrooms and hospitals. So the train is leaving the station, and the average person is not that interested in the intricacies of the technology, such as algorithms; they just want it to be easy to use and to work. We are all at the mercy of brilliant minds who are advancing technology but who have not always thought through the unintended consequences for our lives. Clearly, we need to develop a better view of the different aspects of human decision making when we talk about new initiatives. First, we need to establish what we want; then we can start really discussing the setting of rules and regulations. It is not enough to deal with it within the scientific community or to have business lead the way. To have a real debate about the opportunities and challenges, we need a multi-stakeholder forum that includes governments and civil society.

We test technology systems constantly for integrity of the coding, but we never test the effects of algorithms across society.

Q: What role do consumers have in making sure our digital future serves us, rather than the other way around?

A: That is a tough one. We need to become better informed. But most people are really busy with their lives, and they might not really know where to start. There is also the conflict between helping AI systems become more useful to consumers while simultaneously guarding their privacy. It is the duty of policy makers and innovators to make things more palatable to consumers, and some of them are working on ways to do that. We test technology systems constantly for integrity of the coding, but we never test the effects of algorithms across society. We do this for drug, airplane, and car safety, so why not for code that impacts millions of people’s behaviors at a time?

In addition, because algorithms, data, and top AI talent flow across borders easily, we need transparency through an ethics-driven “Fortune 500 AI Readiness” (FAIR) index and a multi-stakeholder-driven “digital Magna Carta” that frames responsible opportunity for the cognitive society. To that end, we’re in dialogue with members of the House of Lords in the U.K., the chancellery and foundations in Germany, and some corporations in the U.S. and Europe. But it’s wickedly complex, because AI touches everything. So it’s all hands on deck, and we invite all globalists in the MIIS community to join our efforts.
