Sundar Pichai, writing on Google's blog, defined Artificial Intelligence (AI) this way: “AI is computer programming that learns and adapts.” He went on to list some of the amazing things one could do with AI, like using sensors to predict wildfires, monitor cattle hygiene, diagnose diseases and more.

He also listed seven principles that, he said, will guide Google's work in the field of AI. These principles include sensitivity to people's privacy and a commitment to upholding standards of scientific excellence.

Even as we stand at the cusp of a revolution likely unprecedented in human history, there's an elephant in the room that many people are looking away from: ethical issues.

What are the ethical dilemmas associated with AI?

Against the seven principles that Google claims it will follow, there are some serious questions that AI researchers, policymakers and people in general must face. These questions are based on universal ethics and have far-reaching implications for virtually every aspect of human life.

These questions on the ethics of AI involve a variety of issues, but they always include the concept of individual liberty, the idea of a protective state and whether there's a growing contradiction between the two. AI and ethics, therefore, span the entire spectrum of individualism versus collectivism.

It's critical that we discuss and address questions on the ethics of artificial intelligence now, because very soon it might be too late. Here are the top ethical questions in artificial intelligence:

1. How can we minimize or eliminate bias in AI algorithms?

Maybe the correct question to ask would have been “Can we?” rather than “How can we?”

To begin with, artificial intelligence is created by humans, who are prone to strong prejudices. After the algorithm is designed, the machine is fed data to keep sharpening its ‘intelligence’.

The problem is that if the data fed to it is racist in nature – showing, for example, more black criminals than white ones – the machine will learn from the wrong kind of data, and its output will remain, at best, contentious.
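
To make the mechanism concrete, here is a minimal, hypothetical sketch of how skewed historical labels get baked into a model's predictions. The data is synthetic and the use of scikit-learn is an assumption for illustration; this is not drawn from any real deployment.

```python
# A minimal sketch (synthetic data, hypothetical scenario) of how bias in
# training data propagates into a model's predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One feature is a protected attribute (group 0 or 1); the other is a
# legitimate underlying risk score.
group = rng.integers(0, 2, size=n)
risk = rng.normal(size=n)

# Biased historical labels: group 1 was labelled "high risk" more often
# even at the same underlying risk level.
label = (risk + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

model = LogisticRegression().fit(np.column_stack([group, risk]), label)

# The model reproduces the bias: identical risk score, different prediction.
print(model.predict_proba([[0, 0.0], [1, 0.0]])[:, 1])
```

At identical risk scores, the model assigns a higher predicted probability to group 1 simply because the historical labels did: the prejudice in the data becomes the prejudice of the model.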

The fact that most developments in AI come from the private sector complicates matters. (China is an apparent exception, because the Chinese government is very serious about AI, but that's another story.) The private sector isn't as answerable to the general populace as a government department, which may be subject to intense scrutiny. This lack of transparency is worrying.

2. How secure will AI be?

In some ways, this question is a derivative of the earlier one. Because developments in AI are mostly powered by the private sector, the degree of security could ultimately become a function of their business interests.

Businesses built in the digital economy haven't always proven reliable when it comes to self-restraint and self-regulation. Facebook's violation of its users' data privacy is often cited, correctly, as an example of how corporate greed soon overtakes enthusiastic corporate missions to be a force for good.

There are three questions of major significance that one must address in light of the security issues in AI.

One: What paradigms should corporations use to decide what levels of security are adequate?

Two: What is the optimal degree of policing for these corporations?

Three: What checks and balances can corporations put in place to ensure the technology they develop doesn't end up with malicious actors?

Remember, in today's world we're talking about multinationals that may face conflicting requirements from two different governments.

3. How can we stop AI from being deceptive?

Could AI be collecting information about you and keeping you in the dark?

In 2018, a story in The New York Times reported this: “Facebook empowered Apple to hide from Facebook users all indicators that its devices were asking for data. Apple devices also had access to the contact numbers and calendar entries of people who had changed their account settings to disable all sharing, the records show.”

Earlier, in 2014, Google acquired one of the world's most important AI labs, DeepMind. Something similar to Google's Project Dragonfly happened: there were deep concerns about the direction Google could lead DeepMind in. DeepMind has created neural networks that play video games the way humans do, and its AlphaGo beat Go world champion Lee Sedol in 2016 (the game of Go is considered far more complex than chess).

An important announcement was made when Google took charge of DeepMind: an external review board was to be set up to ensure the lab's research did not end up in military applications. Today, no one knows whether the board even exists, let alone whether it makes any decisions.

The examples from Facebook and Google tell a similar story of bent morals: commercial interests have frequently overtaken noble projects, and there's no reason to think AI will not end up the same way.

Without enough oversight and a tight policy framework, AI can be used deceptively.

4. What can we do to stop AI from being malicious?

One risk is AI falling into the wrong hands.

The other is even graver: what if the AI is designed with malicious intent to begin with?

Consider the self-flying selfie camera developed by Skydio. Using 13 cameras for visual tracking, the flying robot, called R1, manages itself; at launch, you have to tell it which person or object R1 must follow (you “tell” R1 through a specially designed app that can run on any smartphone).

That’s it.

One click on the app and the robot figures out the rest. It reads the surroundings, decodes the obstacles, locks onto its target and begins following.

The dangerous part is that you don't even have to buy it; you can rent it for just $40 a day. For the price of one pack of Parents’ Choice ‘best value’ diapers, you can spy on anyone, anywhere, for a full day.

While emerging technology can be breathtakingly exciting, it’s a serious mistake to launch the product commercially without understanding all the risks involved.

5. How far can you trust something that’s largely unregulated?

Even though it's made to sound as if you can buy a gun in the US as easily as a can of Coke, that's not entirely true. There's some paperwork and a background check involved in buying guns, as EuroNews has reported.

The Skydio camera (mentioned above) is just one of the many, many AI-powered devices you can buy as easily as any commodity on the open market. No questions asked.

Ironically, it has been employees of these organizations, not outsiders, who have opposed deals that could put AI to military use (autonomous weapons, for instance).

For instance, employees of at least one company wrote an open letter to their CEO, questioning the stance, wisdom and policy of working with the military.

Protests are sometimes successful (Google pulling out of a Pentagon project and out of Project Dragonfly). Sometimes they aren't (Clarifai, the company to which the above open letter was addressed, is going ahead with its business with the US military).

In the absence of detailed, strict and practical regulations, there's no way of knowing what is cutting-edge and what is abominable.

6. Is there any way AI can be kept from being used for political vendetta?

China is using AI in some of the most innovative ways to bring about justice and stability. For instance, the 300 million cameras in China track the movement of people, enforce traffic discipline and deliver better, more efficient governance.

Critics are (rightly) wary of the way AI could be used by authoritarian governments like China.

What keeps single-party governments – like China’s – from using AI to silence the voices that are unpleasant to the government?

With the help of face recognition technologies, for instance, China is able to not only track the whereabouts of “notorious elements” but also make traveling and buying air-tickets extremely difficult for people who are on the government’s blacklist.

The use of AI to contain and effectively suppress political dissidents is one of the major risks emerging in China, but that's not to suggest other countries (their ruling parties, to be more specific) are in any way immune to the temptation to abuse AI and unleash a witch-hunt.

7. What about the odd risk of “If we don’t, others surely will”?

There's an equally strong and logical argument that companies within Europe and the US must also be kept from misusing artificial intelligence for monetary gain; after all, it's not just China that must be controlled, right?

While this argument is perfectly rational, there’s a class of people who oppose the idea of excessively regulating the AI industry in Europe or the US.

This is their principal argument: when you hold back European or American companies with red tape, China isn't going to wait. So, effectively, you run the risk of China overtaking every other country in artificial intelligence.

This is called the “If we don't, others surely will” mentality, and it definitely holds some water. A defiant China could jeopardize a great many things if it controls a technology that's effectively banned in other countries.

Conclusion

There are no two opinions about it: the ethical questions in artificial intelligence are far too many and far too compelling to be taken lightly. All emerging technologies come with associated risks, and AI is no exception.

More dialogue, more openness and international cooperation are probably going to work best for AI. We can only hope that developments in AI do not outpace the political will to develop the correct regulations.

Featured image: Photo by Evan Dennis on Unsplash