Scientists made an AI that reads your mind so it can generate portraits you’ll find attractive


A team of researchers recently developed artificial intelligence that uses an individual’s personal preferences to generate portraits of attractive people who don’t exist.

Computer-generated beauty, it turns out, is in the AI of the beholder.

The big idea: Scientists at the University of Helsinki and the University of Copenhagen today published an article detailing a system in which a brain-computer interface transmits data to an AI system, which interprets that data and uses it to steer an image generator.

According to a press release from the University of Helsinki:

Initially, the researchers gave a generative adversarial network (GAN) the task of creating hundreds of artificial portraits. The images were shown, one at a time, to 30 volunteers, who were asked to pay attention to the faces they found attractive while their brain responses were recorded by electroencephalography (EEG).

The researchers analyzed the EEG data with machine learning techniques, connecting individual EEG data through a brain-computer interface (BCI) to a generative neural network.

Once the user’s preferences were interpreted, the machine then generated a new set of images, adjusted to be more appealing to the individual whose data it was trained on. Upon examination, the researchers found that 80% of the personalized images generated by the machine withstood the attractiveness test.
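To make the loop concrete, here is a minimal Python sketch of how such a system could be wired together. It is not the authors' code: the `generator` and `eeg_attractiveness_score` functions are invented stand-ins for the pretrained GAN and the EEG/BCI decoding step, and the latent-space averaging is just one simple way to nudge the generator toward a viewer's preferences.

```python
# Minimal sketch of the closed loop described above, not the authors' implementation.
# Hypothetical pieces: `generator` maps a latent vector to a face image, and
# `eeg_attractiveness_score` decodes a preference score from EEG recorded while
# the viewer looks at an image. Both are stubbed with placeholders here.

import numpy as np

LATENT_DIM = 512          # typical GAN latent size; an assumption
N_CANDIDATES = 240        # "hundreds of artificial portraits"

def generator(z: np.ndarray) -> np.ndarray:
    """Placeholder for a pretrained GAN generator (latent -> image)."""
    return np.tanh(z)     # stand-in; a real GAN would return pixel data

def eeg_attractiveness_score(image: np.ndarray) -> float:
    """Placeholder for the BCI step: decode a preference score from the
    viewer's EEG response to `image`."""
    return float(np.random.rand())   # stand-in for the decoded response

# 1) Generate candidate portraits from random latent vectors.
latents = np.random.randn(N_CANDIDATES, LATENT_DIM)
images = [generator(z) for z in latents]

# 2) Show each image and record an EEG-derived preference score.
scores = np.array([eeg_attractiveness_score(img) for img in images])

# 3) Summarize the viewer's preference as a point in latent space
#    (average of the latents behind the best-liked images), then
#    generate new, personalized portraits near that point.
top = np.argsort(scores)[-20:]            # indices of the 20 best-liked candidates
preferred_center = latents[top].mean(axis=0)

personalized = [
    generator(preferred_center + 0.3 * np.random.randn(LATENT_DIM))
    for _ in range(10)
]
```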

Background: Sentiment analysis is a big deal in AI, but this is a little different. Typically, machine learning systems designed to gauge human sentiment use cameras and rely on facial recognition, which makes them unreliable for use with the general public at best.

But this system relies on a direct link to our brain waves, which means it should be a fairly reliable indicator of positive or negative sentiment. In other words, the basic idea seems pretty solid: you look at an image you find pleasing, and an AI tries to make more images that trigger the same brain response.

Quick take: You could hypothetically extrapolate the potential uses of such an AI all day long and never settle on whether it is ethical. On the one hand, there is a treasure trove of psychological insight to be gleaned from a machine that can discern what we like about a given image without relying on us to consciously understand it.

But, on the other hand, given what bad actors can do with just a tiny amount of data, it’s frightening to think of what a company like Facebook (which is currently developing its own BCIs) or a political influence machine like Cambridge Analytica could do with an AI system that knows how to bypass someone’s conscious mind and appeal directly to the part of their brain that likes things.

You can read the entire paper here.

Published March 5, 2021 – 21:11 UTC
