“We need to be careful what we optimize our AI systems for”

Interview

How do we preserve our humanity in a world of intelligent machines? AI researcher Mark Nitzberg on the need to build AI models that are safe for humans and make explainable decisions – and why standards and oversight are key. Our fellow Ekaterina Venkina interviewed the Executive Director of the Center for Human-Compatible AI (CHAI) at UC Berkeley for RedaktionsNetzwerk Deutschland (RND).

Mark Nitzberg is Executive Director at the Center for Human-Compatible AI (CHAI) at UC Berkeley. He co-authored the book “The AI Generation: Shaping Our Global Future With Thinking Machines” with Olaf Groth.

Mr. Nitzberg, you write that artificial intelligence is bringing about a “brave new world of machine meritocracy”. What will this world look like?

Mark Nitzberg: The simple answer is: It is already here. You see, advanced computing is already integrated at every level of human endeavor. We have these highly interconnected systems. And they reach from central computing centers all the way into everyone's pocket. Into your pocket, if you have a cell phone.

Our digital world is one of massive computing power, network synergies, and extremely rapid technological advances. What is fundamentally new about the changes we are currently experiencing?

There have been major transformations before. The development of air transport, the arrival of the first automobiles. But this one is broader, and it is affecting so many sectors. Half of the world’s population looks at a small screen as soon as they wake up in the morning. And it’s not the same as a newspaper. Algorithms determine what you see first and what you see next. They’re optimizing for something, and not necessarily for your best interest. The idea is for the system to find content that suits your preferences.

But ultimately, as you’ve seen with recent revelations by Facebook whistleblower [Frances Haugen, ed.], it’s optimizing for the company. You are a secondary consideration. Their primary concern is: How can they use you and change you to make the company more successful?

We all know the concept of the “digital native”. Are “AI natives” next?

I think that's already true. We already depend on algorithms. They get us from A to B. They connect us with an answer to a question. They help us decide what to have for our next meal. They let us know what is on our friends’ minds. And these systems learn from how we respond to them – this is called reinforcement learning. It’s kind of a man-versus-machine thing, and the machine always wins. So we need to be very careful what we’re optimizing for. And that’s what I spend my time on at CHAI.
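To make the term concrete: below is a minimal sketch, in Python, of the kind of feedback loop described here. The item names and engagement numbers are purely illustrative and are not drawn from any real platform; the point is only that the loop learns to maximize whatever single metric it is given, and nothing else.

import random

# Illustrative epsilon-greedy loop: the only "reward" is engagement.
# Nothing here asks whether the content serves the user's interest.
ITEMS = ["news", "outrage", "cats"]
value = {item: 0.0 for item in ITEMS}   # running estimate of reward per item
count = {item: 0 for item in ITEMS}

def simulated_engagement(item):
    means = {"news": 30, "outrage": 90, "cats": 60}  # made-up averages
    return random.gauss(means[item], 10)

for step in range(10_000):
    if random.random() < 0.1:
        item = random.choice(ITEMS)        # occasionally explore
    else:
        item = max(ITEMS, key=value.get)   # otherwise exploit the best-known item
    reward = simulated_engagement(item)
    count[item] += 1
    value[item] += (reward - value[item]) / count[item]

print(max(ITEMS, key=value.get))  # converges on whatever maximizes engagement

Swap engagement for any other single metric and the same loop will optimize that metric just as relentlessly, which is exactly why the choice of objective matters.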

As early as 2003, Oxford futurist Nick Bostrom warned of a “superintelligence”, the possible emergence of an AI that would surpass human intelligence in every way ...

It’s a thought experiment, which was conceived a long time ago. We are going all the way back to the Golem, four thousand years ago. When you look at the advances, they are not about individual intelligence. It’s not about creating one single superintelligent machine. It’s about intelligence as a distributed system, which may be different from our traditional notion of intelligence.  We have billions of years of evolution built into our brain, but the larger context is that when billions of people are connected with certain kinds of algorithms, it enables a superintelligence at a large scale.

And that’s already been achieved in certain ways. It’s already been shown that systems that are optimizing for simple objectives can have very broad, unintended consequences for society as a whole.  We need to find a way to measure these unintended, negative consequences, and then feed them back into the system to ensure that our systems and their objectives are adapting in the way we want them to.
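One way to read “measure the unintended consequences and feed them back into the system” in engineering terms is to treat the measured harm as a penalty inside the objective the system maximizes. A minimal sketch, with entirely hypothetical scores and metric names:

# Hypothetical scores for three candidate pieces of content.
# "engagement" is what a naive system maximizes; "measured_harm" stands in for
# whatever negative side effect we have learned to measure and quantify.
candidates = {
    "item_a": {"engagement": 0.9, "measured_harm": 0.7},
    "item_b": {"engagement": 0.6, "measured_harm": 0.1},
    "item_c": {"engagement": 0.4, "measured_harm": 0.0},
}

WEIGHT = 1.0  # how heavily the measured harm counts against engagement

def naive_objective(s):
    return s["engagement"]

def corrected_objective(s):
    return s["engagement"] - WEIGHT * s["measured_harm"]

print(max(candidates, key=lambda c: naive_objective(candidates[c])))      # item_a wins
print(max(candidates, key=lambda c: corrected_objective(candidates[c])))  # item_b wins

The hard part, of course, is not the arithmetic but deciding what counts as harm, how to measure it, and who sets the weight.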

At the Center for Human-Compatible AI, you work on human-machine adaptation in AI. What do you hope to achieve with this?

Until about five years ago, we had not even begun to consider how to build systems that are demonstrably aligned with our interests as individuals and as a society. But let’s say you want a system to affect people’s driving behavior. How do you ensure that it’s safe? What we’re trying to do at CHAI is rebuild AI on a new foundation to improve safety, create standards, and establish oversight bodies for algorithms, as we have in other areas of engineering.

In the foreword to your book, Admiral James Stavridis writes that advanced cognitive technologies are both a panacea and a Pandora’s box …

We’ve never built anything at the scale that we are now building. This is the largest engineered system that humans have ever created. And if you’re building something really powerful, it is very hard to anticipate everything it might do, and when it’s in the hands of an adversary ... Admiral Stavridis went on to write another book on this: “2034: A Novel of the Next World War”. In the wrong hands, so much power could be used to cause damage akin to what we worry about in the nuclear age. At CHAI, we can build systems that are demonstrably safe. But then, there will be people who will not use those safety measures, and we need to mitigate those risks.

Machine learning systems are often a sort of “black box”, but the market seems to demand more predictable and transparent solutions ...

This is a very important question. We know how to put machine learning systems together; we feed data into them, and they make decisions. But we can’t really explain why they make certain decisions in certain situations. There’s a line of research in this area called “explainable AI”. We’re not going to just scrap these systems, because they offer an almost magical efficiency. But we do need to find a way to ensure that when something goes wrong, when the behavior causes harm, we can account for it.
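“Explainable AI” covers many techniques. One of the simplest model-agnostic ones is permutation importance: shuffle one input at a time and see how much the black box’s accuracy drops. A minimal sketch in Python with NumPy, where black_box_predict is a hypothetical stand-in for any opaque model:

import numpy as np

rng = np.random.default_rng(0)

def black_box_predict(X):
    # Stand-in for an opaque model; here it secretly relies only on feature 0.
    return (X[:, 0] > 0.5).astype(int)

# Synthetic data: three input features, labels actually driven by feature 0.
X = rng.random((1000, 3))
y = (X[:, 0] > 0.5).astype(int)

baseline = (black_box_predict(X) == y).mean()

# Permutation importance: shuffle each feature and measure the accuracy drop.
for j in range(X.shape[1]):
    X_shuffled = X.copy()
    X_shuffled[:, j] = rng.permutation(X_shuffled[:, j])
    drop = baseline - (black_box_predict(X_shuffled) == y).mean()
    print(f"feature {j}: accuracy drop {drop:.3f}")

# Feature 0 shows a large drop; features 1 and 2 show essentially none, which
# gives an account of what the model relies on without opening the box.

Real explainability research goes far beyond this, but the underlying idea is the same: make a system’s behavior accountable even when its internals are not directly readable.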

The other question is: Do we pursue research in systems that are so powerful that they can cause great damage? There are areas of science where there’s real danger, such as gene editing. At Berkeley, the CRISPR system allows us to modify genes; Jennifer Doudna is doing research on this. I think we’re always going to be fascinated with powerful systems that are sharper than us. We’re not going to stop the research, but we will want to institute standards there. And there are also some areas where I think we do need to draw a hard line. One of them, and my colleague Stuart Russell and I are in agreement on this, is lethal autonomous weapons: the fact that we are able to build them doesn’t mean that we should allow it to be done.

Where do you see the biggest challenges in implementing AI? The potential that it might lead to social engineering? That jobs might be optimized away and workers will need to be retrained?

The issue of jobs has been with us ever since the very beginnings of automation. We had to make a lot of societal changes to protect ourselves. I think there’s a certain positive aspect to automating drudgery.  At CHAI, we don't know how many jobs will be eliminated. Certainly, tasks are being eliminated, but not necessarily entire positions. In Norway, every company must pay their workers to be trained for their next job in the context of their present job. I think that’s part of the formula.

You write about the fundamental development of AI via the three Cs – “Cognition, Consciousness, and Conscience”. “Consciousness” is about AI impacting our lives, our society, and our economy. “Conscience” is about ethical determinations, for example, establishing an AI charter ...

I think the three Cs were an absolutely terrific outline for a book. We are now in an era where a lot of cognitive tasks are carried out with AI tools. “Consciousness” is more of a philosophical consideration. “Conscience” ... I believe that we’re seeing the first really serious AI regulations, apart from this idea of an AI Bill of Rights in the US, which is really a set of principles that will guide us. China released some pretty serious regulations last month. It shows that the notion of “conscience” is being translated into actual laws.

Will future advances in artificial intelligence change the balance of power in the world?

The next breakthrough in AI? I don’t think that gives an advantage to the country that discovers it. There are these big language models: China has Wu Dao 2.0, which uses 1.75 trillion parameters. The US model GPT-3 “only” has 175 billion. The problem is that training the Chinese model basically required the equivalent of three full days’ worth of power output from the Three Gorges Dam. And since China is struggling with power shortages, they don’t want to just waste it.

Where do Europe and Germany stand in this race?

Germany and Europe are playing a very important role in addressing the ethical and societal challenges of the rise of AI. German politicians were among the first to come to Silicon Valley and to the AI lab at Berkeley with questions about standards for autonomous driving. We had visits from the German Ministry of the Economy and from a small group of German Members of Parliament. They’ve been engaging actively with large corporations and with universities. Germany has to do what it does really well. And among other things, that’s manufacturing and engineering. So it needs to focus on AI applications in those areas. Also, the US and Europe both need to work out how to remain independent in the production of next-generation AI chips. And that is a transatlantic kind of work.

What is your most important takeaway from years of working with AI?

It's important not to sleepwalk into situations where we’re giving up our own space. We need to let our thoughts evolve rather than look to a screen or to a system to do the thinking for us.

This article was first published by the Heinrich-Böll-Stiftung Washington, DC office. A German version of this interview was first published by Redaktionsnetzwerk Deutschland (RND) on December 14, 2021. Research for this article was made possible with the support of the Heinrich-Böll-Stiftung Washington, DC.