How Deepfakes could help implant false memories in our minds

The human brain is a complex and miraculous thing. As far as we can tell, it is the epitome of biological evolution. But it doesn’t come with any preinstalled security software. And that makes it ridiculously easy to hack.

We like to imagine the human brain as a giant neural network that speaks its own language. When we talk about developing brain-computer interfaces, we’re usually talking about some sort of transceiver that interprets brain waves. But the point is, we’ve been hacking the human brain since time immemorial.

Think of the actor who draws on a sad memory to evoke tears, or the detective who uses reverse psychology to extract a confession from a suspect. These examples may seem less extraordinary than, say, the memory eraser from Men in Black, but the end result is essentially the same: we’re able to modify the data our minds use to establish basic reality. And we’re really good at it.

Background

A team of researchers from universities in Germany and the UK today published preprint research detailing a study in which they successfully implanted and removed false memories in test subjects.

According to the team’s paper:

Human memory is fallible and malleable. In forensic circles in particular, this poses a challenge because people may mistakenly remember events with legal implications that never happened. Despite an urgent need for remedies, however, research into whether and how rich false autobiographical memories can be reversed under realistic conditions (i.e., using reversal strategies that can be applied in real settings) is practically non-existent.

Basically, it’s relatively easy to implant false memories. Getting rid of them is the hard part.

The study was conducted on 52 subjects who agreed to let the researchers attempt to plant a false childhood memory in their minds over several sessions. After a while, many of the subjects began to believe the false memories. The researchers then asked the subjects’ parents to claim the fake stories were true.

The researchers found that adding a trusted person made it easier to both integrate and remove false memories.

Per the paper:

The present study therefore not only replicates and extends previous demonstrations of false memories, but crucially documents their reversibility after the fact: using two ecologically valid strategies, we show that rich but false autobiographical memories can largely be undone. Importantly, the reversal was specific to false memories (i.e., it did not occur for true memories).

False memory implantation techniques have been around for some time, but there hasn’t been much research into reversing them. That means this paper doesn’t come a moment too soon.

Enter Deepfakes

There aren’t many positive use cases for implanting false memories. But luckily most of us don’t really have to worry about being the target of a mind control plot that involves slowly being tricked into believing a false memory over the course of multiple sessions with the complicity of our own parents.

Yet that’s almost exactly what happens on Facebook every day. Everything you do on the social network is recorded and encoded to build a detailed picture of exactly who you are. That data is used to determine which ads you see, where you see them, and how often they appear. And when a member of your trusted network makes a purchase through an ad, you’re more likely to start seeing those ads yourself.
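To make the mechanics concrete, here is a minimal sketch of how that kind of behavioral targeting can work. Everything in it is an illustrative assumption: the event types, the weights, and the interest_score function are invented for this example and do not reflect any real platform’s algorithm.

```python
# Hypothetical sketch of behavioral ad targeting. The event types,
# weights, and scoring rule are invented for illustration and don't
# reflect any real platform's algorithm.
from collections import Counter

# Each recorded action nudges the user's per-category interest profile.
EVENT_WEIGHTS = {
    "viewed_ad": 1.0,
    "clicked_ad": 3.0,
    "friend_purchased": 5.0,  # social proof from a trusted contact
}

def interest_score(events: list[tuple[str, str]]) -> Counter:
    """Aggregate logged (event_type, product_category) pairs into
    per-category scores that decide which ads a user sees next."""
    scores: Counter = Counter()
    for event_type, category in events:
        scores[category] += EVENT_WEIGHTS.get(event_type, 0.0)
    return scores

# A single purchase by a trusted friend outweighs several passive views.
log = [
    ("viewed_ad", "cookies"),
    ("viewed_ad", "hot_chocolate"),
    ("friend_purchased", "hot_chocolate"),
]
print(interest_score(log).most_common(1))  # [('hot_chocolate', 6.0)]
```

Note how heavily the sketch weights the trusted-network signal: that single design choice is the whole reason a friend’s purchase changes what you see.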

But we all know that already, don’t we? Of course we do; you can’t go a day without seeing an article about how Facebook, Google, and all the other big tech companies are manipulating us. So why do we put up with it?

Well, that’s because our brains are better at adjusting to reality than we give them credit for. The moment we learn that an ad was targeted at us specifically, we start to believe the targeting says something true about who we are.

A team of Harvard researchers wrote about this phenomenon in 2016:

In a study we conducted with 188 undergraduates, we found that participants were more interested in purchasing a Groupon for a restaurant advertised as fancy when they thought the ad had been targeted to them based on specific websites they had visited during an earlier task (browsing the web to create a travel itinerary), versus when they thought the ad was targeted based on demographics (their age and gender) or not targeted at all.

What does this have to do with deepfakes? It’s simple: if we’re so easily manipulated by tiny bits of exposure to little ads in our Facebook feed, imagine what could happen if advertisers started hijacking the faces and personalities of the people we trust.

For example, you might not be planning to buy Grandma’s Cookies products anytime soon. But if it were your own grandmother telling you how delicious they are in the ad you’re watching… you might.

Using existing technology, it would be trivial for a big tech company to determine, for example, that you’re a student who hasn’t seen your parents since last December. With that knowledge, deepfakes, and the data it already has on you, it wouldn’t take much to generate targeted ads featuring your deepfaked parents telling you to buy hot chocolate or something like that.
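As a sketch of how cheap that kind of inference could be, here is a hypothetical audience-segmentation rule in Python. The profile fields, the cutoff date, and the misses_home function are all invented for illustration; real ad platforms are far more elaborate, but the underlying logic is no harder than this.

```python
# Hypothetical audience-segmentation rule: flag users who plausibly
# haven't been back to their hometown since a cutoff date. The profile
# fields and the rule itself are invented for illustration.
from datetime import date

def misses_home(profile: dict, cutoff: date = date(2020, 12, 31)) -> bool:
    """True if the user is tagged as a student and has no hometown
    check-in logged after the cutoff."""
    if "student" not in profile.get("tags", []):
        return False
    checkins = profile.get("hometown_checkins", [])
    return all(day < cutoff for day in checkins)

user = {
    "tags": ["student"],
    "hometown_checkins": [date(2020, 12, 24)],  # last trip home
}
if misses_home(user):
    # A real system could pair this segment with synthesized "family"
    # ad creative, which is exactly the danger described above.
    print("eligible for family-themed ad creative")
```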

But false memories?

It’s all fun and games when the stakes merely involve a social media company using AI to convince you to buy goodies. But what happens when it’s a bad actor breaking the law? Or, worse, what happens when it’s a government not breaking the law?

Police use a variety of techniques to solicit a confession. And law enforcement is generally not required to tell the truth while doing so. In fact, it’s perfectly legal in most places for cops to outright lie to get a confession.

One popular technique is to tell suspects that their friends, family, and any co-conspirators have already told the police that they know the suspect committed the crime. If you can convince people that those they respect and care about believe they’ve done something wrong, it becomes much easier for them to accept it as fact.

How many law enforcement agencies around the world currently have an explicit policy against using manipulated media to solicit confessions? Our guess would be: close to zero.

And this is just one example. Imagine what an autocratic or iron-fisted government could do on a large scale with these techniques.

The best defense …

It’s good to know there are already methods we can use to root out these false memories. As the European research team discovered, our brains tend to let go of false memories when challenged, but cling to real ones. That makes our minds more resistant to attack than we might think.

However, it leaves us perpetually on the defensive. Currently, our only defense against AI-assisted false memory implantation is either to see it coming or to get help after it has happened.

Unfortunately, the unknown unknowns make that a terrible security plan. We simply can’t anticipate every way a bad actor might exploit the loophole that makes our memories easier to edit when someone we trust is helping the process along.

With deepfakes and enough time, you can convince someone of almost anything, as long as you can find a way to get them to watch your videos.

Our only real defense is to develop technology that sees through deepfakes and other AI-manipulated media. With brain-computer interfaces set to hit consumer markets in the next few years, and AI-generated media becoming less and less distinguishable from reality by the minute, we’re nearing a technological point of no return.
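What might that defensive technology look like in practice? Below is a minimal, hypothetical sketch of frame-level deepfake screening in Python. The model file fake_detector.pt is a placeholder (no public model is implied), and the assumption that it returns a single “fake” logit per frame is ours; real detectors also need face cropping, resizing, and normalization.

```python
# Hypothetical sketch of frame-level deepfake screening. The model
# file and its output convention (one "fake" logit per frame) are
# assumptions; real detectors also crop faces, resize, and normalize.
import cv2    # pip install opencv-python
import torch  # pip install torch

model = torch.jit.load("fake_detector.pt")  # placeholder classifier
model.eval()

def fake_probability(video_path: str, stride: int = 30) -> float:
    """Average the classifier's 'fake' probability over sampled frames."""
    cap = cv2.VideoCapture(video_path)
    probs, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:  # sample roughly one frame per second
            # HWC uint8 BGR -> 1x3xHxW float tensor in [0, 1]
            x = torch.from_numpy(frame).permute(2, 0, 1).float() / 255
            with torch.no_grad():
                probs.append(torch.sigmoid(model(x.unsqueeze(0))).item())
        idx += 1
    cap.release()
    return sum(probs) / len(probs) if probs else 0.0

# Flag anything the (hypothetical) model finds more likely fake than real.
if fake_probability("suspect_clip.mp4") > 0.5:
    print("likely manipulated media")
```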

Just as the invention of the gun enabled those unskilled in sword fighting to win a duel, and the creation of the calculator gave those who struggle with mathematics the ability to perform complex calculations, we may be on the cusp of an era in which psychological manipulation becomes a push-button endeavor.

Published March 23, 2021 – 19:13 UTC

