Hey Google … what movie should I watch today? How AI can affect our decisions

TaeWoo Kim, University of Technology Sydney

Social media algorithms, artificial intelligence and our own genetics are among the factors influencing us beyond our awareness. This raises an ancient question: do we have control over our own lives? This article is part of The Conversation's series on the science of free will.

Have you ever used Google Assistant, Apple's Siri or Amazon Alexa to make decisions for you? Perhaps you asked it what new movies have good reviews, or to recommend a cool restaurant in your neighbourhood.

Artificial intelligence and virtual assistants are constantly being refined, and may soon be making appointments for you, offering medical advice, or trying to sell you a bottle of wine.

Although AI technology still has a long way to go before its social skills match ours, some AI systems have shown impressive language understanding and can complete relatively complex interactive tasks.

In several 2018 demonstrations, Google's AI made haircut and restaurant reservations without receptionists realising they were talking with a non-human.

Video: Would you let Google Duplex make phone bookings for you? https://www.youtube.com/embed/D5VN56jQMWM?wmode=transparent&start=67

It's likely the AI systems developed by tech giants such as Amazon and Google will only grow more capable of influencing us in the future.

But what do we actually find persuasive?

My colleague Adam Duhachek and I found AI messages are more persuasive when they highlight "how" an action should be performed, rather than "why". For example, people were more willing to put on sunscreen when an AI explained how to apply sunscreen before going out, rather than why they should use sunscreen.

We found people generally don't believe a machine can understand human goals and desires. Take Google's AlphaGo, an algorithm designed to play the board game Go. Few people would say the algorithm can understand why playing Go is fun, or why it's meaningful to become a Go champion. Rather, it just follows a pre-programmed algorithm telling it how to move on the game board.

Our research suggests people find AI's recommendations more persuasive in situations where AI shows easy steps on how to build personalised health insurance, how to avoid buying a lemon car, or how to choose the right tennis racket for you, rather than why any of these are important to do in a human sense.

People tend to think of AI as not having free will, and therefore not having the ability to explain why something is important to humans. Shutterstock

Does AI have free will?

Most of us believe humans have free will. We compliment someone who helps others because we think they do it freely, and we penalise those who harm others. What's more, we are willing to lessen the criminal penalty if the person was deprived of free will, for instance if they were in the grip of a schizophrenic delusion.

But do people think AI has free will? We did an experiment to find out.

Imagine someone is given $100 and offers to split it with you: they'll get $80 and you'll get $20. If you reject this offer, both you and the proposer end up with nothing. Gaining $20 is better than nothing, but previous research suggests the $20 offer is likely to be rejected because we perceive it as unfair. Surely we should get $50, right?
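To make the stakes concrete, here is a minimal sketch of this ultimatum game in Python. The rejection threshold in the decision rule is a purely illustrative assumption, not a figure from the research.

# Minimal sketch of the ultimatum game described above.
# The fairness threshold is an illustrative assumption, not a study parameter.

def payoffs(total, proposer_keeps, accepted):
    """Return (proposer, responder) payoffs for one round."""
    if not accepted:
        return 0, 0  # a rejection leaves both players with nothing
    return proposer_keeps, total - proposer_keeps

def responder_accepts(offer, total, min_fair_share=0.3):
    """Hypothetical decision rule: reject offers below a fairness threshold."""
    return offer / total >= min_fair_share

total, proposer_keeps = 100, 80
offer = total - proposer_keeps                   # the responder is offered $20
accepted = responder_accepts(offer, total)       # 0.2 < 0.3, so the offer is rejected
print(payoffs(total, proposer_keeps, accepted))  # (0, 0): both walk away empty-handed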

But what if the proposer is an AI? In a research project yet to be published, my colleagues and I found the rejection ratio drops significantly. In other words, people are much more likely to accept this "unfair" offer if proposed by an AI.

This is because we don't think an AI developed to serve humans has a malicious intent to exploit us. It's just an algorithm without free will, so we might as well accept the $20.

The fact that people will accept unfair offers from AI concerns me, because it means the phenomenon could be exploited maliciously. For example, a mortgage lender might try to charge unfairly high interest rates by framing the decision as having been calculated by an algorithm. Or a manufacturing company might manipulate workers into accepting unfair wages by saying the figure was set by a computer.

To protect consumers, we need to understand when people are vulnerable to manipulation by AI. Governments should take this into account when considering regulation of AI.

We're surprisingly willing to divulge to AI

In other work yet to be published, my colleagues and I found people tend to disclose their personal information and embarrassing experiences more willingly to an AI than a human.

We told participants to imagine they were at the doctor seeking treatment for a urinary tract infection. We split the participants, so half spoke to a human doctor and half to an AI doctor. We told them the doctor would ask a few questions to find the best treatment, and that it was up to them how much personal information to provide.

Participants disclosed more personal information to the AI doctor than the human one when asked potentially embarrassing questions about the use of sex toys, condoms, or other sexual activities. We found this was because people don't think AI judges our behaviour, whereas humans do. Indeed, when we asked participants how concerned they were about being negatively judged, we found this concern was the underlying mechanism determining how much they divulged.

It seems we feel less embarrassed when talking to AI. This is interesting because many people have grave concerns about AI and privacy, and yet we may be more willing to share our personal details with AI.

As AI develops further, we need to understand how it affects human decision-making. Shutterstock

But what if AI does have free will?

We also studied the flipside: what happens when people start to believe AI does have free will? We found giving an AI human-like features or a human name makes people more likely to believe it has free will.

This has several implications:

  • AI can then better persuade people on questions of "why", because people think a human-like AI may be able to understand human goals and motivations
  • a human-like AI's unfair offer is less likely to be accepted, because it may be seen as having its own, potentially exploitative, intentions
  • people start to feel judged by the human-like AI, feel embarrassed, and disclose less personal information
  • people start to feel guilty when harming a human-looking AI, and so act more benignly towards it.

We are likely to see more, and more varied, types of AI and robots in the future. They might cook, serve, sell us cars, tend to us at the hospital, and even sit at the dinner table as a dating partner. It's important to understand how AI influences our decisions, so we can regulate AI to protect ourselves from possible harms.

TaeWoo Kim, Lecturer, UTS Business School, University of Technology Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

