Why AI is a threat to democracy – and what we can do to stop it

Amy Webb, a futurist, NYU professor, and award-winning author, has spent much of the past decade researching how people and organizations approach artificial intelligence. “We’ve reached a fever pitch in all things AI,” she says. Now it’s time to step back and see where it’s all heading.

That is the task of her new book, The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity, in which she takes a bird’s-eye view of the trends that, she warns, have put the technology’s development on a dangerous path. In the US, Google, Microsoft, Amazon, Facebook, IBM, and Apple (the “G-MAFIA”) are hamstrung by the relentless short-term demands of a capitalist market, making long-term, considered planning for AI impossible. In China, Tencent, Alibaba, and Baidu are consolidating and mining massive amounts of data to feed the government’s authoritarian ambitions.

If we don’t change this trajectory, Webb argues, we could be headed straight for catastrophe. But there is still time to act, and a role for everyone to play. MIT Technology Review sat down with her to discuss why she is worried and what she thinks we can do about it.

The following Q&A has been condensed and edited for clarity.

You say we are currently seeing a convergence of worrying technological, political, and economic trends. Can you walk us through the technological trends?

If you talk to researchers working in the field, they will tell you that it will be a very long time before we see many of the promises that have been made about AI: things like fully autonomous vehicles, flawless recognition, or artificial general intelligence (AGI), systems capable of cognition and more humanlike thinking.

From my point of view, scanning the horizon for the day we have some sort of walking, talking machine, or a machine with a disembodied voice that makes autonomous decisions, somewhat misses the point. We are already seeing billions of small improvements that compound over time and lead to systems that can make many decisions for us, independently and simultaneously.

The DeepMind team, for example, has been working hard to teach machines how to beat people at games. They have made considerable progress in areas such as hierarchical reinforcement learning and multi-task learning. The latest version of its AlphaGo algorithm, AlphaZero, is capable of learning to play three games simultaneously without a human in the loop. That’s a pretty big jump. There is also a whole new field of generative adversarial networks, which can now generate human faces that look very, very realistic.
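
To make “adversarial” concrete: the technique Webb names pits two networks against each other, a generator that fabricates samples and a discriminator that tries to tell them apart from real data. Below is a minimal, illustrative sketch in PyTorch on toy one-dimensional data; every name and number here is our own choice for illustration, a far cry from the large-scale face-generation systems she describes.

```python
# Minimal GAN sketch: generator G learns to mimic "real" data drawn from N(3, 0.5).
import torch
import torch.nn as nn

# Generator: maps 8-dim random noise to a fake 1-dim sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores a 1-dim sample as real (1) or fake (0).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data: mean 3.0, std 0.5
    fake = G(torch.randn(64, 8))           # generator's current forgeries

    # Train D to separate real from fake (detach so G isn't updated here).
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Train G to fool D into labeling its forgeries as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# After training, generated samples should cluster near 3.0.
print(G(torch.randn(5, 8)).detach().squeeze())
```

The same adversarial pressure, scaled up to image-sized networks and photo datasets, is what produces the realistic synthetic faces mentioned above.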

These advancements are not as sexy or exciting as what we have been promised about AGI. But if you take the 40,000-foot view, you can see that we are headed toward a situation where systems will make choices for us. And we must stop and ask what happens when those systems set human strategy aside in favor of something totally unfamiliar to us.

What about the political and economic trends? Can you describe the ones that most concern you?

In the United States, ideas can flow and spread unencumbered. That is how Silicon Valley was founded. It has cultivated both competition and innovation, and that’s how we got AI, alongside other kinds of technologies.

In the US, however, we also have a tragic lack of foresight. Instead of creating a grand strategy for AI or for our long-term futures, the federal government has stripped funding from science and tech research. So the money has to come from the private sector. But investors expect a certain rate of return. That is a problem. You cannot schedule R&D breakthroughs when you are working on fundamental technology and research. It would be great if the big tech companies had the luxury of working really hard without having to stage an annual conference to show off their latest and greatest whiz-bang thing. Instead, we now have countless examples of bad decisions made by somebody in the G-MAFIA, probably because they were working fast. We are starting to see the negative effects of the tension between doing research that is in the interest of humanity and making investors happy.

That would be bad enough, right? But this is all happening at the same time that an enormous amount of power is being consolidated in China. China has a sovereign wealth fund focused on basic AI research. They are throwing huge amounts of money at AI. And they have a totally different idea than the US when it comes to privacy and data, which means they have far more data that can be mined and refined. With a central authority, it is super easy for the government to test and build AI services using data from 1.3 billion people. And that’s just within their own country.

Then there is the Belt and Road Initiative, which looks like a traditional infrastructure program but is partly digital. It is not just about building roads and bridges; it is also about building 5G networks, laying fiber-optic cable, and mining and refining data abroad. The spread of these technologies poses a risk to people who care about things like freedom of expression and Western democratic ideals.

Why should we strive for Western democratic ideals?

It’s a great question. I have lived in China, in Japan, and of course in the United States. And you could look at the state of our country right now and at what is happening in China and wonder: is one really worse than the other? China’s social scoring system sounds bizarre and horrible to Americans, but what many people don’t realize is that self-reporting and monitoring behavior in villages and communities has always been part of Chinese culture. The social credit score simply automates that. So yes, it’s a great question.

I would say that if I looked at the idealized version of Chinese communism and the idealized version of Western democracy, I would choose Western democracy, because I think there is a better opportunity for the free flow of ideas and for everyday people to succeed. I think incentivizing people for individual and personal achievement is a great way to lift up a society and help us reach our individual potential.

Given the direction the world is being led in with AI today, is that a fair comparison? Should we compare the idealized versions of Chinese communism and Western democracy, or the worst versions of both?

That’s a great question, because you could argue that parts of the AI ecosystem are already eroding our Western democratic ideals in a really negative way. Obviously, everything that happened with Facebook serves as an example. But also look at what’s going on with the anti-vaxxer community. They spread totally false information about vaccines and basic science. Our American tradition says: free speech, platforms are platforms, we have to let people express themselves. The challenge is that algorithms are making editorial choices about content that lead to people making very bad decisions and to children getting sick.

The problem is that our technology has become increasingly sophisticated, but our thinking about what free speech is and what a free-market economy looks like has not advanced with it. We tend to fall back on very basic interpretations: free speech means all speech is free unless it runs afoul of defamation law, and that’s the end of the story. That is not the end of the story. We need to start a more sophisticated and intelligent conversation about our current laws, our emerging technology, and how we can bring the two to meet in the middle.

In other words, you believe we can evolve from where we are now toward a more idealized version of Western democracy, and you would rather have that than idealized Chinese communism.

Yes, I am confident that it is possible. My biggest concern is that we all wait, that we drag our heels, and that it takes a true catastrophe to make people take action, as if where we have already arrived is not catastrophic. The fact that measles is back in Washington state is a catastrophic outcome. So is what happened with our elections. Regardless of which side of the political spectrum you are on, I cannot imagine anyone today thinking the current political climate is good for our future.

So I absolutely believe that there is a way forward. But we have to come together and bridge the gap between Silicon Valley and DC, so that we can all steer the boat in the same direction.

What do you recommend that governments, companies, universities, and individual consumers do?

The way AI is being developed is a problem, and we all have a stake in it. You, me, my dad, my neighbor, the guy at the Starbucks I’m walking past right now. So what should everyday people do? Be more aware of who is using your data and how. Take a few minutes to read work written by smart people and figure out what it is we’re really talking about. Before you sign your life away and start sharing photos of your children, do so in an informed way. If you’re okay with what that means and what it could mean later on, fine, but have that knowledge first.

Companies and investors can’t expect to keep rushing products to market; that sets us up for problems down the road. They can do things like rethink their hiring processes, significantly step up their efforts to improve inclusivity, and make sure their staffs are more representative of what the real world looks like. They can also put on the brakes. Any investment made in an AI company or project should also include funding and time for monitoring things like risk and bias.

Universities must create room in their programs for hybrid degrees. They should encourage CS students to study comparative literature, world religions, microeconomics, cultural anthropology, and similar courses in other departments. They should champion dual-degree programs in computer science and international relations, theology, political science, philosophy, public health, education, and the like. Ethics should not be taught as a standalone class, something to simply check off a list. Schools should instead encourage professors to weave complicated discussions of bias, risk, philosophy, religion, gender, and ethics into their courses.

One of my biggest recommendations is the formation of GAIA, what I call the Global Alliance on Intelligence Augmentation. At the moment, people around the world have very different attitudes and approaches when it comes to collecting and sharing data, what can and should be automated, and what a future with more generally intelligent systems might look like. So I think we should create some kind of central organization that can develop global norms and standards, some kind of guardrails, to imbue AI systems not just with American or Chinese ideals but with worldviews that are much more representative of everybody.

Above all, we have to be willing to think about this on much longer horizons, not just five years from now. We need to stop saying, “Well, we can’t predict the future, so let’s not worry about it right now.” It’s true, we cannot predict the future. But we can certainly do a better job of planning for it.

An abridged version of this story originally appeared in our AI newsletter, The Algorithm. Sign up here for free to have it delivered directly to your inbox.
