Stanislas Dehaene on the Neuroscience of Focused Learning
Imagine arriving at the airport just in time to catch a plane. Everything in your behavior betrays the heightened concentration of your attention. Your mind on alert, you look for the departures sign, without letting yourself be distracted by the flow of travelers; you quickly scroll through the list to find your flight. Advertisements all around call out to you, but you do not even see them—instead, you head straight for the check-in counter. Suddenly, you turn around: in the crowd, an unexpected friend just called out your first name. This message, which your brain considers a priority, takes over your attention and invades your consciousness . . . making you forget which check-in counter you were supposed to go to.
In the space of a few minutes, your brain went through most of the key states of attention: vigilance and alertness, selection and distraction, orientation and filtering. In cognitive science, “attention” refers to all the mechanisms by which the brain selects information, amplifies it, channels it, and deepens its processing. These are ancient mechanisms in evolution: whenever a dog reorients its ears or a mouse freezes up upon hearing a cracking sound, they’re making use of attention circuits that are very close to ours.
Why did attention mechanisms evolve in so many animal species? Because attention solves a very common problem: information saturation. Our brain is constantly bombarded with stimuli: the senses of sight, hearing, smell, and touch transmit millions of bits of information per second. Initially, all these messages are processed in parallel by distinct neurons—yet it would be impossible to digest them in depth: the brain’s resources would not suffice. This is why a pyramid of attention mechanisms, organized like a gigantic filter, carries out a selective triage. At each stage, our brain decides how much importance it should attribute to such and such input and allocates resources only to the information it considers most essential.
Selecting relevant information is fundamental to learning. In the absence of attention, discovering a pattern in a pile of data is like looking for the fabled needle in a haystack. This is one of the main reasons behind the slowness of conventional artificial neural networks: they waste considerable time analyzing all possible combinations of the data provided to them, instead of sorting out the information and focusing on the relevant bits.
It was only in 2014 that two researchers, Canadian Yoshua Bengio and Korean Kyunghyun Cho, showed how to integrate attention into artificial neural networks. Their first model learned to translate sentences from one language to another. They showed that attention brought in immense benefits: their system learned better and faster because it managed to focus on the relevant words of the original sentence at each step.
Very quickly, the idea of learning to pay attention spread like wildfire in the field of artificial intelligence. Today, if artificial systems manage to successfully label a picture (“A woman throwing a Frisbee in a park”), it is because they use attention to channel the information by focusing a spotlight on each relevant part of the image. When describing the Frisbee, the network concentrates all its resources on the corresponding pixels of the image and temporarily removes all those which correspond to the person and the park—it will return to them later. Nowadays, any sophisticated artificial intelligence system no longer connects all inputs with all outputs—it knows that learning will be faster if such a plain network, where every pixel of the input has a chance to predict any word at the output, is replaced by an organized architecture where learning is broken down into two modules: one that learns to pay attention, and another that learns to name the data filtered by the first.
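The two-module idea described above—one module that learns where to look, another that processes what the first selects—can be sketched with a few lines of code. The following is a minimal, illustrative sketch of dot-product attention (a simplified variant of the mechanism Bengio and Cho's translation model introduced; the toy "image region" vectors and names are invented for the example): relevance scores are turned into weights that sum to one, and the output is dominated by the inputs whose keys match the query.

```python
import math

def softmax(scores):
    """Convert raw relevance scores into positive weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    """Dot-product attention: weight each value by how well its key matches the query."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    # Weighted sum: the result is dominated by the values whose keys match the query.
    dim = len(values[0])
    output = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return output, weights

# Toy "image regions": the query resembles the first key, so attention
# concentrates its resources on the first value and largely ignores the rest.
keys = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
query = [3.0, 0.0]  # strongly matches the first key

output, weights = attend(query, keys, values)
print(weights)  # the first weight is close to 1: the "spotlight" is on region 0
```

The key design point is that the weights are learned quantities in a real system: the network is trained to produce queries and keys that put high weight on the relevant region, which is exactly the "learning to pay attention" module the text describes.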
Attention is essential, but it may result in a problem: if attention is misdirected, learning can get stuck. If I don’t pay attention to the Frisbee, this part of the image is wiped out: processing goes on as if it did not exist. Information about it is discarded early on, and it remains confined to the earliest sensory areas. Unattended objects cause only a modest activation that induces little or no learning. This is utterly different from the extraordinary amplification that occurs in our brain whenever we pay attention to an object and become aware of it. With conscious attention, the discharges of the sensory and conceptual neurons that code for an object are massively amplified and prolonged, and their messages propagate into the prefrontal cortex, where whole populations of neurons ignite and fire for a long time, well beyond the original duration of the image.
Such a strong surge of neural firing is exactly what synapses need in order to change their strength—what neuroscientists call “long-term potentiation.” When a pupil pays conscious attention to, say, a foreign-language word that the teacher has just introduced, she allows that word to deeply propagate into her cortical circuits, all the way into the prefrontal cortex. As a result, that word has a much better chance of being remembered. Unconscious or unattended words remain largely confined to the brain’s sensory circuits, never getting a chance to reach the deeper lexical and conceptual representations that support comprehension and semantic memory.
This is why every student should learn to pay attention—and also why teachers should pay more attention to attention! If students don’t attend to the right information, it is quite unlikely that they will learn anything. A teacher’s greatest talent consists of constantly channeling and capturing children’s attention in order to properly guide them.
Attention plays such a fundamental role in the selection of relevant information that it is present in many different circuits in the brain. American psychologist Michael Posner distinguishes at least three major attention systems:
1. Alerting, which indicates when to attend, and adapts our level of vigilance.
2. Orienting, which signals what to attend to, and amplifies any object of interest.
3. Executive attention, which decides how to process the attended information, selects the processes that are relevant to a given task, and controls their execution.
These systems massively modulate brain activity and can therefore facilitate learning, but also point it in the wrong direction. Let us examine them one by one.
The first attention system, perhaps the oldest in evolution, tells us when to be on the watch. It sends warning signals that mobilize the entire body when circumstances require it. When a predator approaches or when a strong emotion overwhelms us, a whole series of subcortical nuclei immediately increases the wakefulness and vigilance of the cortex. This system dictates a massive and diffuse release of neuromodulators such as serotonin, acetylcholine, and dopamine. Through long-range axons with many spread-out branches, these alerting messages reach virtually the entire cortex, greatly modulating cortical activity and learning. Some researchers speak of a “now print” signal, as if these messages directly tell the cortex to commit the current contents of neural activity into memory.
Animal experiments show that the firing of this warning system can indeed radically alter cortical maps. The American neurophysiologist Michael Merzenich conducted several experiments in which the alerting system of mice was tricked into action by electrical stimulation of their subcortical dopamine or acetylcholine circuits. The outcome was a massive shift in cortical maps. All the neurons that happened to be activated at that moment, even if they had no objective importance, were subject to intense amplification. When a sound, for instance, a high-pitched tone, was systematically associated with a flash of dopamine or acetylcholine, the mouse’s brain became heavily biased toward this stimulus. As a result, the whole auditory map was invaded by this arbitrary note. The mouse became better and better at discriminating sounds close to this sensitive note, but it partially lost the ability to represent other frequencies.
It is remarkable that such cortical plasticity, induced by tampering with the alerting system, can occur even in adult animals. Analysis of the circuits involved shows that neuromodulators such as serotonin and acetylcholine—particularly via the nicotinic receptor (sensitive to nicotine, another major player in arousal and alertness)—modulate the firing of cortical inhibitory interneurons, tipping the balance between excitation and inhibition. Remember that inhibition plays a key role in the closing of sensitive periods for synaptic plasticity. Disinhibited by the alerting signals, cortical circuits seem to recover some of their juvenile plasticity, thus reopening the sensitive period for signals that the mouse brain labels as crucial.
What about Homo sapiens? It is tempting to think that a similar reorganization of cortical maps occurs every time a composer or a mathematician passionately dives into their chosen field, especially when their passion starts at an early age. A Mozart or a Ramanujan is perhaps so electrified by fervor that his brain maps become literally invaded with mental models of music or math. Furthermore, this may apply not only to geniuses, but to anyone passionate in their work, from a manual worker to a rocket scientist. By allowing cortical maps to massively reshape themselves, passion breeds talent.
Even though not everyone is a Mozart, the same brain circuits of alertness and motivation are present in all people. What circumstances of daily life would mobilize these circuits? Do they activate only in response to trauma or strong emotions? Maybe not. Some research suggests that video games, especially action games that play with life and death, provide a particularly effective means of engaging our attentional mechanisms. By mobilizing our alerting and reward systems, video games massively modulate learning. The dopamine circuit, for example, fires when we play an action game. Psychologist Daphné Bavelier has shown that this translates into rapid learning. The most violent action games seem to have the most intense effects, perhaps because they most strongly mobilize the brain’s alerting circuits. Ten hours of gameplay suffice to improve visual detection, refine the rapid estimation of the number of objects on the screen, and expand the capacity to concentrate on a target without being distracted. A video game player manages to make ultra-fast decisions without compromising his or her performance.
Parents and teachers complain that today’s children, plugged into computers, tablets, consoles, and other devices, constantly zap from one activity to the next and have lost the capacity to concentrate—but this is untrue. Far from reducing our ability to concentrate, video games can actually increase it. In the future, will they help us remobilize synaptic plasticity in adults and children alike? Undoubtedly, they are a powerful stimulant of attention, which is why my laboratory has developed a whole range of educational tablet games for math and reading, based on cognitive science principles.
Video games also have their dark side: they present well-known risks of social isolation, time loss, and addiction. Fortunately, there are many other ways to unlock the effects of the alerting system while also drawing on the brain’s social sense. Teachers who captivate their students, books that draw in their readers, and films and plays that transport their audiences and immerse them in real-life experiences probably provide equally powerful alerting signals that stimulate our brain plasticity.
The second attention system in the brain determines what we should attend to. This orienting system acts as a spotlight on the outside world. From the millions of stimuli that bombard us, it selects those to which we should allocate our mental resources, because they are urgent, dangerous, appealing . . . or merely relevant to our present goals.
The founding father of American psychology, William James (1842–1910), in his The Principles of Psychology (1890), best defined this function of attention: “Millions of items of the outward order are present to my senses which never properly enter into my experience. Why? Because they have no interest for me. My experience is what I agree to attend to. Only those items which I notice shape my mind.”
Selective attention operates in all sensory domains, even the most abstract. For example, we can pay attention to the sounds around us: dogs move their ears, but for us humans, only an internal pointer in our brain moves and tunes in to whatever we decide to focus on. At a noisy cocktail party, we are able to select one out of ten conversations based on voice and meaning. In vision, the orienting of attention is often more obvious: we generally move our head and eyes toward whatever attracts us. By shifting our gaze, we bring the object of interest into our fovea, which is an area of very high sensitivity in the center of our retina. However, experiments show that even without moving our eyes, we can still pay attention to any place or any object, wherever it is, and amplify its features. We can even attend to one of several superimposed drawings, just like we attend to one of several simultaneous conversations. And there is nothing stopping you from paying attention to the color of a painting, the shape of a curve, the speed of a runner, the style of a writer, or the technique of a painter. Any representation in our brains can become the focus of attention.
In all these cases, the effect is the same: the orienting of attention amplifies whatever lies in its spotlight. The neurons that encode the attended information increase their firing, while the noisy chattering of other neurons is squashed. The impact is twofold: attention makes the attended neurons more sensitive to the information that we consider relevant, but, above all, it increases their influence on the rest of the brain. Downstream neural circuits echo the stimulus to which we lend our eyes, ears, or mind. Ultimately, vast expanses of cortex reorient to encode whatever information lies at the center of our attention. Attention acts as an amplifier and a selective filter.
“The art of paying attention, the great art,” says the philosopher Alain (1868–1951), “supposes the art of not paying attention, which is the royal art.” Indeed, paying attention also involves choosing what to ignore. For an object to come into the spotlight, thousands of others must remain in the shadows. To direct attention is to choose, filter, and select: this is why cognitive scientists speak of selective attention. This form of attention amplifies the signal which is selected, but it also dramatically reduces those that are deemed irrelevant. The technical term for this mechanism is “biased competition”: at any given moment, many sensory inputs compete for our brain’s resources, and attention biases this competition by strengthening the representation of the selected item while squashing the others. This is where the spotlight metaphor reaches its limits: to better light up a region of the cortex, the attentional spotlight of our brain also reduces the illumination of other regions. The mechanism relies on interfering waves of electrical activity: to suppress a brain area, the brain swamps it with slow waves in the alpha frequency band (between eight and twelve hertz), which inhibit a circuit by preventing it from developing coherent neural activity.
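The twofold effect of biased competition—amplifying the attended signal while squashing its competitors—has a standard computational form: divisive normalization with an attentional gain. The sketch below is a toy illustration of that idea, not a model of the actual cortical circuitry; the gain value and the equal-strength inputs are arbitrary choices made for the example. Because every input's response is divided by the total, boosting one input's drive automatically reduces the others' share.

```python
def biased_competition(drives, attended, gain=4.0):
    """Divisive normalization with an attentional gain: multiplying the
    attended input's drive by `gain` raises its share of the total
    response while automatically suppressing the competing inputs."""
    biased = [d * (gain if i == attended else 1.0) for i, d in enumerate(drives)]
    total = sum(biased)
    return [b / total for b in biased]

stimuli = [1.0, 1.0, 1.0]  # three equally strong competing inputs
responses = biased_competition(stimuli, attended=0)
print(responses)  # the attended input wins most of the shared resource
```

Without attention each input would receive an equal third of the response; with the gain applied, the attended input's share rises while the unattended shares fall below their baseline, which is the "better lit spotlight, dimmer surroundings" trade-off the text describes.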
Paying attention, therefore, consists of suppressing the unwanted information—and in doing so, our brain runs the risk of becoming blind to what it chooses not to see. Blind, really? Really. The term is fully appropriate, because many experiments, including the famous “invisible gorilla” experiment, demonstrate that inattention can induce a complete loss of sight. In this classic experiment, you are asked to watch a short movie where basketball players, dressed in black and white, pass a ball back and forth. Your task is to count, as precisely as you can, the number of passes of the white team. A piece of cake, you think—and indeed, 30 seconds later, you triumphantly give the right answer.
But now the experimenter asks a strange question: “Did you see the gorilla?” The gorilla? What gorilla? We rewind the tape, and to your amazement, you discover that an actor in a full-body gorilla costume walked across the stage and even stopped in the middle to pound on his chest for several seconds. It seems impossible to miss. Furthermore, experiments show that, at some point, your eyes looked right at the gorilla. Yet you did not see it. The reason is simple: your attention was entirely focused on the white team and therefore actively inhibited the distracting players who were dressed in black . . . gorilla included! Busy with the counting task, your mental workspace was unable to become aware of this incongruous creature.
The invisible gorilla experiment is a landmark study in cognitive science, and one which is easily replicated: in a great variety of settings, the mere act of focusing our attention blinds us to unattended stimuli. If, for instance, I ask you to judge whether the pitch of a sound is high or low, you may become blind to another stimulus, such as a written word that appears within the next fraction of a second. Psychologists call this phenomenon the “attentional blink”: your eyes may remain open, but your mind “blinks”—for a short while, it is fully busy with its main task and utterly unable to attend to anything else, even something as simple as a single word.
In such experiments, we actually suffer from two distinct illusions. First, we fail to see the word or the gorilla, which is bad enough. (Other experiments show that inattention can lead us to miss a red light or run over a pedestrian—never use your cell phone behind the wheel!) But the second illusion is even worse: we are unaware of our own unawareness—and, therefore, we are absolutely convinced that we have seen all there is to see! Most people who try the invisible gorilla experiment cannot believe their own blindness. They think that we played a trick on them, for instance by using two different movies. Typically, their reasoning is that if there really was a gorilla in the video, they would have seen it. Unfortunately, this is false: our attention is extremely limited, and despite all our good will, when our thoughts are focused on one object, other objects—however salient, amusing, or important—can completely elude us and remain invisible to our eyes. The intrinsic limits of our awareness lead us to overestimate what we and others can perceive.
The gorilla experiment truly deserves to be known by everyone, especially parents and teachers. When we teach, we tend to forget what it means to be ignorant. We all think that what we see, everyone can see. As a result, we often have a hard time understanding why a child, despite the best of intentions, fails to see, in the most literal sense of the term, what we are trying to teach him. But the gorilla sends a clear message: seeing requires attending. If students, for one reason or another, are distracted and fail to pay attention, they may be entirely oblivious to their teacher’s message—and what they cannot perceive, they cannot learn.
From How We Learn by Stanislas Dehaene, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2020 by Stanislas Dehaene.
Stanislas Dehaene is the director of the Cognitive Neuroimaging Unit in Saclay, France, and the professor of experimental cognitive psychology at the Collège de France. He is the author of Reading in the Brain and more recently of How We Learn.