It was once fashionable to worry about the prospect of superintelligent machines taking over the world. The past year showed that AI can cause all sorts of harm long before that ever happens.

The latest AI methods excel at perceptual tasks such as image classification and speech transcription, but the hype and excitement about these skills have obscured how far we really are from building machines as smart as we are. Six controversies from 2018 stand out as warnings that even the smartest AI algorithms can misbehave, and that careless deployment can have serious consequences.

1. Self-driving cars

After a fatal accident involving one of Uber’s self-driving cars in March, investigators found that the company’s technology had failed catastrophically in a way that could easily have been prevented.

Carmakers such as Ford and General Motors, newcomers such as Uber, and a myriad of startups are rushing to commercialize a technology that, despite its immaturity, has already attracted billions of dollars in investment. Waymo, a subsidiary of Alphabet, has made the most progress; it rolled out the first fully autonomous taxi service in Arizona last year. But even Waymo’s technology is limited, and autonomous cars cannot yet drive everywhere in all conditions.

What to look out for in 2019: regulators in the US and elsewhere have so far taken a hands-off approach for fear of stifling innovation. The US National Highway Traffic Safety Administration has even signaled that existing safety rules may be relaxed. But pedestrians and human drivers never signed up to be guinea pigs. Another serious accident in 2019 could change regulators’ attitude.

2. Political manipulation by bots

In March, news broke that Cambridge Analytica, a political consulting firm, had exploited Facebook’s data-sharing practices to influence the 2016 US presidential election. The resulting uproar showed how the algorithms that decide which news and information surface on social media can be gamed to amplify misinformation, undermine healthy debate, and isolate citizens holding different views from one another.

Testifying before Congress, Facebook CEO Mark Zuckerberg promised that AI itself could be trained to spot and block malicious content, even though it is still far from being able to understand the meaning of text, images, or video.
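To make that promise concrete, here is a minimal sketch of what “training AI to spot content” means in its simplest form: a bag-of-words text classifier. The example posts and labels below are invented purely for illustration, and real moderation systems are vastly more elaborate.

```python
# Toy content classifier: the simplest version of "training AI to flag posts".
# The posts and labels are invented; this is an illustrative sketch only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "miracle cure the doctors don't want you to know about",    # flagged
    "shocking secret plot revealed, share before it's deleted", # flagged
    "city council approves new bike lanes downtown",            # benign
    "local library extends weekend opening hours",              # benign
]
labels = [1, 1, 0, 0]  # 1 = flag for review, 0 = benign

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(posts, labels)

print(clf.predict(["secret miracle cure they don't want you to see"]))  # likely [1]
```

Note that the model matches surface word patterns rather than meaning, which is precisely the limitation described above.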

What to look out for in 2019: Zuckerberg’s promise will be tested during elections in two of Africa’s biggest countries: South Africa and Nigeria. The long run-up to the 2020 US election has also begun, and it may inspire new kinds of AI-driven misinformation technology, including malicious chatbots.

3. Algorithms for peace

An AI peace movement took shape last year when Google employees discovered that their employer was supplying technology to the Pentagon for classifying drone imagery. Employees feared this could be a fateful step toward providing technology that automates deadly drone strikes. In response, the company declined to renew its contract for Project Maven, as the effort was called, and drew up an AI code of ethics.

Academics and industry heavyweights have backed a campaign to ban autonomous weapons. But military use of AI is only gaining momentum, and other companies, such as Microsoft and Amazon, have shown no qualms about helping.

What to look out for in 2019: while Pentagon spending on AI projects increases, activists hope that a preemptive treaty banning autonomous weapons will emerge from a series of UN meetings scheduled for this year.

4. A surveillance face-off

AI’s superhuman ability to identify faces has led countries to deploy surveillance technology at a remarkable pace. Facial recognition also lets you unlock your phone and automatically tags photos of you on social media.

Civil liberties groups warn of a dystopian future. The technology is a formidable way to invade people’s privacy, and bias in training data makes it likely to automate discrimination.

In many countries – especially China – facial recognition is widely used for police and government surveillance. Amazon sells the technology to American immigration and law enforcement agencies.

What to look out for in 2019: facial recognition will spread to vehicles and webcams, and it will be used to track your emotions as well as your identity. But we may also see some preliminary regulation this year.

5. Fake it till you break it

A proliferation of “deepfake” videos last year demonstrated how easy it has become to make fake clips using AI. That has meant fake celebrity porn, lots of weird movie mashups, and, potentially, virulent political smear campaigns.

Generative adversarial networks (GANs), which pit two dueling neural networks against each other, can conjure extraordinarily realistic but completely fabricated images and video. Nvidia recently showed how GANs can generate photorealistic faces of any race, gender, and age.
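For the curious, here is a minimal sketch of that adversarial training loop, assuming PyTorch and using toy one-dimensional data in place of images; the network sizes and hyperparameters are illustrative, not anything Nvidia used.

```python
# Minimal GAN: two dueling networks, a generator G and a discriminator D,
# here learning to mimic a 1-D Gaussian instead of images of faces.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))           # generator's attempt at fakes

    # The discriminator learns to tell real samples from fakes.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # The generator learns to fool the discriminator.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print(f"generated mean≈{samples.mean():.2f}, std≈{samples.std():.2f}")  # approaches 3.0 and 0.5
```

The same duel, scaled up to millions of parameters and trained on photographs, is what produces the fabricated faces described above.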

What to look out for in 2019: as deepfakes improve, people will probably be duped by them this year. DARPA will test new methods of detecting them. But because those methods also depend on AI, it will be a game of cat and mouse.

6. Algorithmic discrimination

Bias was discovered last year in countless commercial tools. Vision algorithms trained on unbalanced data sets failed to recognize women or people of color; hiring programs fed historical data were shown to perpetuate discrimination that already exists.
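A simple way to see how an unbalanced data set produces this effect is to audit a model’s accuracy separately for each group. The sketch below uses synthetic data and invented group labels purely for illustration, assuming scikit-learn.

```python
# Sketch: a model trained mostly on one group performs worse on another.
# Data and group labels are synthetic, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two-class data for one group; `shift` moves its clusters in feature space."""
    X = np.vstack([rng.normal(shift, 1.0, (n, 2)),
                   rng.normal(shift + 2.0, 1.0, (n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

# Group A dominates the training set (95%); group B is underrepresented (5%).
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=4.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Audit: evaluate each group on fresh, equal-sized test sets.
for name, shift in [("A", 0.0), ("B", 4.0)]:
    Xt, yt = make_group(500, shift)
    print(f"group {name} accuracy: {model.score(Xt, yt):.2f}")
# Group A scores far higher than group B.
```

Per-group error rates like these are the starting point for the bias-detection and mitigation methods discussed in the outlook below.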

The AI field’s own lack of diversity is tied to the problem of bias, and makes it harder to fix. Women hold no more than 30% of industry jobs and fewer than 25% of teaching roles at top universities. There are also comparatively few Black and Latino researchers.

What to look out for in 2019: expect new methods for detecting and mitigating bias, and algorithms that can produce unbiased results from biased data. The International Conference on Learning Representations (ICLR), a major AI conference, will be held in Ethiopia in 2020 because African researchers studying problems of bias might have trouble getting visas to travel to other regions. Other events may move as well.
