Snowmen can’t walk — but your Tesla thinks they can


Imagine driving down a city street. You go around a curve and suddenly see something in the middle of the road ahead. What should you do?

Of course, the answer depends on what that “something” is. A torn paper bag, a lost shoe or a tumbleweed? You can roll over it without thinking, but you’ll definitely swerve around a bunch of broken glass. You’ll probably stop for a dog standing in the road, but head straight for a flock of pigeons, knowing the birds will fly away. You can plow through a pile of snow, but veer around a carefully constructed snowman. In short, you will quickly determine which actions are best suited to the situation – what humans call “common sense.”

Human drivers aren’t the only ones who need common sense; its absence in artificial intelligence (AI) systems will likely be the biggest obstacle to the large-scale deployment of fully autonomous cars. Even today’s best self-driving cars struggle with objects on the road. Perceiving “obstacles” that no human would ever stop for, these vehicles are apt to slam on the brakes unexpectedly, catching other motorists by surprise. Being rear-ended by a human driver is the most common type of accident involving self-driving cars.

Cars Need Common Sense To Be Trusted

The challenges of autonomous vehicles are unlikely to be resolved by giving cars more training data or explicit rules on what to do in unusual situations. To be trustworthy, these cars need common sense: a vast knowledge of the world and the ability to adapt that knowledge to new circumstances. While today’s AI systems have made impressive strides in areas ranging from image recognition to language processing, their lack of a solid foundation of common sense makes them vulnerable to unpredictable and inhuman errors.

Common sense is multi-faceted, but one essential aspect is the mostly tacit “basic knowledge” that humans share – the knowledge we are born with or acquire by living in the world. This includes broad knowledge of the properties of objects, animals, other people, and society in general, and the ability to apply that knowledge flexibly in new situations. You can predict, for example, that while a pile of glass in the road won’t fly away as you approach, a flock of birds likely will. If you see a ball bounce in front of your car, you know it could be followed by a child or a dog running to retrieve it. From this perspective, the term “common sense” seems to capture exactly what today’s AI cannot do: use general knowledge about the world to act outside of prior training or pre-programmed rules.

Human learning vs machine learning

Today’s most successful AI systems use deep neural networks: algorithms trained to spot patterns in statistics gleaned from large collections of human-labeled examples. This process is very different from how humans learn. We seem to come into the world with innate knowledge of certain basic concepts that helps bootstrap our path to understanding – including notions of discrete objects and events, the three-dimensional nature of space, and the very idea of causality itself. Humans also seem to be born with nascent concepts of sociality: babies can recognize simple facial expressions, they have ideas about language and its role in communication, and they have rudimentary strategies for enticing adults into communication. Such knowledge is so basic and immediate that we are not even aware we have it, or that it forms the basis for all future learning. A big lesson from decades of AI research is how hard it is to teach machines such concepts.
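To make the contrast concrete, here is a minimal sketch of the supervised-learning recipe described above, in which a network passively adjusts its weights to match human-provided labels. It uses PyTorch with toy random data, and the class names are hypothetical; real systems train on millions of images, but the recipe is the same.

```python
# A minimal sketch of supervised deep learning: a network fits the
# statistics of human-provided labels. Purely illustrative.
import torch
import torch.nn as nn

# Toy "labeled examples": 100 random 32x32 RGB images, each tagged with
# one of 3 hypothetical classes (e.g. "dog", "snowman", "paper bag").
images = torch.randn(100, 3 * 32 * 32)
labels = torch.randint(0, 3, (100,))

model = nn.Sequential(nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # penalize label mismatches
    loss.backward()                        # statistics flow back as gradients
    optimizer.step()                       # nudge weights toward the labels
```

Nothing in this loop explores, asks questions, or forms theories about the world; it only fits the statistics of the labels it is given.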

In addition to their innate knowledge, children also manifest innate urges to actively explore the world, figure out the causes and effects of events, make predictions, and enlist adults to teach them what they want to know. Concept formation is closely tied to children’s developing motor skills and awareness of their own bodies – babies seem to start wondering why other people grab objects at about the same time they can do it themselves. Today’s advanced machine learning systems start out as blank slates and function as passive, disembodied learners of statistical patterns; in contrast, common sense in infants develops through innate knowledge combined with embodied, social, active learning oriented toward creating and testing theories of the world.

The history of implementing common sense in AI systems has largely focused on cataloging human knowledge: manually programming, crowdsourcing, or web-mining common-sense “assertions,” or building computer representations of stereotypical situations. But all of these attempts face a major, perhaps fatal, obstacle: much (perhaps most) of our basic intuitive knowledge is unwritten, tacit, and not even part of our conscious awareness.


DARPA (the US Defense Advanced Research Projects Agency), one of the leading funders of AI research, recently launched a four-year program on “Foundations of Human Common Sense” that takes a different approach. It challenges researchers to create an AI system that learns from “experience” in order to attain the cognitive abilities of an 18-month-old baby. It may seem odd that matching the abilities of a toddler is considered a grand challenge for AI, but it reflects the gap between AI’s success in specific, narrow domains and more general, robust intelligence.

According to developmental psychologists, basic knowledge in infants develops on a predictable timetable. For example, around the age of two to five months, babies manifest knowledge of “object permanence”: if one object is blocked by another, the first object still exists, even though the baby can no longer see it. At this age, babies are also aware that when objects collide, they do not pass through each other but change their motion; they likewise know that “agents” – entities with intentions, such as humans or animals – can change the motion of objects. Between nine and 15 months, infants develop a basic “theory of mind”: they understand what another person can and cannot see, and, by 18 months, can recognize when another person needs help.

Since babies under 18 months old cannot tell us what they are thinking, some cognitive milestones must be inferred indirectly. This usually involves experiments that test for “violation of expectations.” A baby watches one of two staged scenarios, only one of which conforms to common-sense expectations. The theory is that a baby will look longer at a scenario that violates its expectations, and indeed babies tested this way do look longer when the scenario does not make sense.

In DARPA’s Foundations of Human Common Sense challenge, each team of researchers is tasked with developing a computer program – a simulated “common-sense agent” – that learns from videos or virtual reality. DARPA’s plan is to evaluate these agents by running experiments like those performed on infants and measuring each agent’s “violation-of-expectation” signal.
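By analogy with looking time, an agent’s surprise can be read off its prediction error. The sketch below shows one way such a signal might be scored; the tiny hand-built “frames” and the scoring function are hypothetical illustrations, not DARPA’s actual evaluation protocol.

```python
# A hypothetical scoring of a "violation-of-expectation" signal: the
# agent's prediction error plays the role of an infant's looking time.
import numpy as np

def surprise(predicted: np.ndarray, observed: np.ndarray) -> float:
    """Mean squared error between what the agent expected to see
    and what the staged scenario actually shows."""
    return float(np.mean((predicted - observed) ** 2))

# Toy 8x8 "video frames" for an object-permanence test: the agent
# predicts the ball re-emerging to the right of an occluder.
predicted = np.zeros((8, 8))
predicted[4, 6] = 1.0          # ball expected to reappear here

plausible = predicted.copy()   # scenario A: ball reappears (expected)
impossible = np.zeros((8, 8))  # scenario B: ball has vanished (impossible)

# A common-sense agent should be more surprised by the impossible
# scenario, just as an infant looks longer at it.
assert surprise(predicted, impossible) > surprise(predicted, plausible)
```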

It won’t be the first time AI systems have been evaluated on tests designed to measure human intelligence. In 2015, one group showed that an AI system could match a four-year-old’s performance on an IQ test, which led the BBC to report that “AI had the IQ of a four-year-old.” More recently, researchers at Stanford University created a reading-comprehension test that became the basis of New York Post reporting that “AI systems beat humans in reading comprehension.” These claims are misleading, however. Unlike humans who do well on the same tests, each of these AI systems was specifically trained in a narrow domain and lacked the general abilities the tests were designed to measure. As New York University computer scientist Ernest Davis warned, “The public can easily conclude that, because an AI program can pass a test, it has the intelligence of a human passing the same test.”

I think it’s possible – if not likely – that something similar will happen with the DARPA initiative. It could produce AI programs specially trained to pass DARPA’s tests for cognitive milestones while lacking any of the general intelligence that gives rise to those milestones in humans. I suspect there is no shortcut to common sense, whether via an encyclopedia, training videos or virtual environments. To develop an understanding of the world, an agent needs the right kind of innate knowledge, the right kind of learning architecture, and the ability to actively grow up in the world. It would need to experience not only physical reality, but also the social and emotional aspects of human intelligence that cannot truly be separated from our “cognitive” abilities.

Although we have made remarkable progress, the artificial intelligence of our time remains narrow and unreliable. To create a more general and trustworthy AI, we may need to take a radical step backwards: design our machines to learn more like babies, instead of training them to succeed against particular benchmarks. After all, parents do not directly train their children to exhibit “violation of expectations” signals; how infants behave in psychology experiments is simply a side effect of their general intelligence. If we can figure out how to get our machines to learn like children, then perhaps, after a few years of curiosity-driven physical and social learning, these young “common-sense agents” will finally grow into teenagers sensible enough to be trusted with the car keys.

Published in association with the Santa Fe Institute, a strategic partner of Aeon.

This article was originally posted to Aeon by Melanie Mitchell and has been republished under Creative Commons.



Published March 11, 2021 – 09:19 UTC


