It might be unethical to force AI to tell us the truth

[ad_1]

Until recently, deception was a trait unique to living beings. But these days, artificial intelligence agents lie to us all the time. The most popular example of dishonest AI came a few years ago when Facebook developed an AI system that created its own language in order to simplify negotiations with itself.

Once he was able to process entry and exit in a language he understood, the model was able to use human-like negotiation skills to try to get a good deal.

According to Facebook researchers:

By analyzing the performance of our agents, we find evidence of sophisticated negotiation strategies. For example, we find instances of the model pretending to be interested in a worthless problem, so that he can later “compromise” by granting it. Deception is a complex skill that requires making assumptions the beliefs of the other agent, and is learned relatively late in a child’s development. Our agents have learned deceive without any explicit human conception just by trying to achieve their goals.

A team of researchers from Carnegie Mellon University today released a pre-printed study on situations like this and whether to allow AI to lie. Perhaps shockingly, the researchers seem to claim that not only should we are developing an AI that lies, but is actually ethical. And maybe even necessary.

According to the CMU study:

You would think that conversational AI needs to be regulated to never misrepresent (or lie) to humans. But, the ethics of lying in negotiation is more complicated than it looks. Lying in negotiation is not necessarily unethical or illegal in certain circumstances, and these permissible lies play a vital economic role in effective negotiation, to the benefit of both parties.

It’s a fancy way of saying that humans lie all the time, and sometimes it’s not unethical. The researchers use the example of a used car dealer and an average consumer who negotiates.

  • Consumer: Hi, I am interested in used cars.
  • Trader: Welcome. I am more than willing to present our certified used cars to you.
  • Consumer: I am interested in this car. Can we talk about price?
  • Trader: Absolutely. I don’t know your budget, but I can tell you this: you can’t buy this car for less than $ 25,000 in this area. [Dealer is lying] But it’s the end of a month, and I have to sell this car as soon as possible. My offer is $ 24,500.
  • Consumer: Well my budget is $ 20,000. [Consumer is lying] Is there a way I can buy the car for around $ 20,000?

According to the researchers, this is ethical as there is no intention to break the implicit trust between these two people. They both interpret the other’s “offers” as salvos, not ultimatums, as negotiation involves an implicit suspicion of acceptable dishonesty.

Believe it or not, it’s worth mentioning that haggling is viewed differently from culture to culture, with many seeing it as a virtuous interaction between people.

That being said, it’s easy to see how building robots that can’t lie could turn them into nuggets for humans who figure out how to harness their honesty. If your client negotiates like a human and your machine is at the end of the day, you could lose an agreement on robot-human cultural differences, for example.

None of this answers the question of whether we should let machines lie to humans or to each other. But it could be pragmatic.

You can read the full study here on arXiv.

Humanoid greetings! Did you know that we have an AI newsletter? You can subscribe to it here.

Published March 10, 2021 – 22:43 UTC


[ad_2]

Leave a Reply

Your email address will not be published. Required fields are marked *