Book: “Behave: The Biology of Humans at Our Best and Worst”

by Robert M. Sapolsky 

Why do we do the things we do?

More than a decade in the making, this game-changing book is Robert Sapolsky’s genre-shattering attempt to answer that question as fully as perhaps only he could, looking at it from every angle. Sapolsky’s storytelling concept is delightful but it also has a powerful intrinsic logic: he starts by looking at the factors that bear on a person’s reaction in the precise moment a behavior occurs, and then hops back in time from there, in stages, ultimately ending up at the deep history of our species and its evolutionary legacy.
And so the first category of explanation is the neurobiological one. A behavior occurs–whether an example of humans at our best, worst, or somewhere in between. What went on in a person’s brain a second before the behavior happened? Then Sapolsky pulls out to a slightly larger field of vision, a little earlier in time: What sight, sound, or smell caused the nervous system to produce that behavior? And then, what hormones acted hours to days earlier to change how responsive that individual is to the stimuli that triggered the nervous system? By now he has increased our field of vision so that we are thinking about neurobiology and the sensory world of our environment and endocrinology in trying to explain what happened.

Sapolsky keeps going: How was that behavior influenced by structural changes in the nervous system over the preceding months, by that person’s adolescence, childhood, fetal life, and then back to his or her genetic makeup? Finally, he expands the view to encompass factors larger than one individual. How did culture shape that individual’s group, and what ecological factors millennia old formed that culture? And on and on, back to evolutionary factors millions of years old.

The result is one of the most dazzling tours d’horizon of the science of human behavior ever attempted, a majestic synthesis that harvests cutting-edge research across a range of disciplines to provide a subtle and nuanced perspective on why we ultimately do the things we do…for good and for ill. Sapolsky builds on this understanding to wrestle with some of our deepest and thorniest questions relating to tribalism and xenophobia, hierarchy and competition, morality and free will, and war and peace. Wise, humane, often very funny, Behave is a towering achievement, powerfully humanizing, and downright heroic in its own right.


Why the French Don’t Show Excitement

Not only is ‘Je suis excité’ not the appropriate way to convey excitement in French, but there seems to be no real way to express it at all.

BBC Travel

  • Emily Monaco

Julie Barlow: “The French don’t appreciate in conversation a kind of positive, sunny exuberance that’s really typical of Americans.” Credit: Norbert Scanella/Alamy.

When I was 19 years old, after five years of back-and-forth trips that grew longer each time, I finally relocated officially from the United States to France. Already armed with a fairly good grasp of the language, I was convinced that I would soon assimilate into French culture.

Of course, I was wrong. There’s nothing like cultural nuance to remind you who you are at your core: my Americanness became all the more perceptible the longer I remained in France, and never more so than on the day a French teacher told me his theory on the key distinction between those from my native and adopted lands.

“You Americans,” he said, “live in the faire [to do]. The avoir [to have]. In France, we live in the être [to be].”

The moment he said it, it made perfect sense. I thought back to my life in New York, where every moment was devoted to checking tasks off a perpetual to-do list or planning for the days, weeks and years to come. In France, however, people were perfectly contented to just be.


Writer Emily Monaco was told that the key difference between Americans and the French is that the French ‘live in the être.’ Credit: Anna Berkut/Alamy.

During two-hour lunch breaks, they sat at sidewalk cafes and watched the world pass them by. Small talk was made up not of what they did for a living, but where they had recently been on holiday. Women working at the post office chatted lazily with one another as the queue ticked slowly forward, enjoying the company of their co-workers while I impatiently waited to buy stamps so that I could fulfil my self-assigned obligation of sending postcards home.

I wanted very badly to blend in and live in the être, but it was harder than it looked. It seemed that no matter what I did, I exposed myself as an American. I smiled too much. I spoke too loudly. And I got excited way too often.

I knew before moving that the French word ‘excité’ was verboten. It is one of the first ‘false friends’ that a student of the language becomes aware of. Most French learners can recall the day that a classmate first uttered the phrase ‘Je suis excité’ (which literally translates as ‘I am excited’) only to have their teacher hem and haw uncomfortably before explaining that the word excité doesn’t signal emotional but rather physical excitement. A better translation of the phrase Je suis excité into English would be ‘I am aroused’.


In France, people are perfectly content just to be. Credit: Jeff Gilbert/Alamy.

French doesn’t have the excited/aroused lexical pair that English does, so one word does both jobs. Excité technically denotes excitement both “objective (a state of stimulation) and subjective (feelings),” according to Olivier Frayssé, professor of American Civilization at Paris-Sorbonne University, but the physical sensation is the one most often implied. “If ‘aroused’ existed, it would be unnecessary to interpret ‘excité’ this way,” he explained.

Our cooler, calmer, more reticent sides come out when we’re speaking French

Anglophones, meanwhile, blessed with both words, are free to use ‘excited’ as we please – which we (particularly Americans) do with reckless abandon. We’re excited for our weekend plans, for the summer holiday, to get home after a long day of work and relax in front of our favourite Netflix show. But English speakers who live in France have no way to express this sentiment in the language of our adopted country. As opposed to other false friends – like ‘Je suis pleine’, which means not ‘I’m full’, as its literal translation suggests, but ‘I’m pregnant’, forcing Francophones to use phrases like ‘J’ai assez mangé’ (‘I’ve eaten enough’) or ‘je suis repu’ (‘I am sated’) – not only is ‘Je suis excité’ not the appropriate way to convey excitement, but there seems to be no real way to express it at all.

“I usually say ‘Je suis heureuse’ [‘I’m happy’] or ‘J’ai hâte de’ [‘I’m looking forward to’],” one bilingual friend said. Neither quite captures the intensity of excitement, but it seems these are the best substitutes that French has to offer.

“I think it’s safe to say I express excitement often and outwardly,” said bilingual Australian Dr Gemma King, who teaches French language and cinema at the Australian National University in Canberra, noting that when she speaks French, it is another story entirely. “My students and I often joke that our cooler, calmer, more reticent sides come out when we’re speaking French,” she said.

This is not, then, a mere question of translation, but rather a question of culture. Like other untranslatable terms, such as Japan’s shinrin-yoku (the relaxation gained from being around nature) or the Australian Aboriginal dadirri (deep, reflective listening), it seems as though the average French person doesn’t need to express excitement on a day-to-day basis.


Not only is ‘Je suis excité’ not the appropriate way to convey excitement in France, but there seems to be no real way to express it at all. Credit: Rostislav Glinsky/Alamy.

For Julie Barlow, Canadian co-author of The Story of French and The Bonjour Effect, this is largely due to the implied enthusiasm in the word ‘excited’, something that’s not sought after in French culture. She notes that Francophone Canadians, culturally North American rather than French, find work-arounds such as ‘Ça m’enthousiasme’ (‘It enthuses me’).

“[The French] don’t appreciate in conversation a kind of positive, sunny exuberance that’s really typical of Americans and that we really value,” Barlow explained. “Verbally, ‘I’m so excited’ is sort of a smile in words. French people prefer to come across as kind of negative, by reflex.”

My French husband agrees.

“If you’re too happy in French, we’re kind of wondering what’s wrong with you,” he said. “But in English, that’s not true.”

For some, however, it’s not necessarily negativity that the French seek, but reserve.

“I think there is something cultural about the greater level of reservation French people tend to show in everyday conversation,” Dr King said. “From my perspective, it doesn’t mean they show less enthusiasm, but perhaps less of an emotional investment in things they are enthusiastic about.”

Indeed, those who are unable to show the proper emotional detachment within French society can even be perceived as somehow deranged, something exemplified by the pejorative labelling of former President Nicolas Sarkozy as ‘l’excité’, due to the zeal he showed in public appearances.


The average French person does not need to express excitement on a day-to-day basis. Credit: Shaun A Daley/Alamy.

American Matt Jenner lived in France for several years and is bilingual. For him, it is not necessarily a matter of the French not being able to express their excitement, but rather that English speakers – and Americans in particular – tend to go overboard. The American public, he says, has been trained “to have a fake, almost cartoonish view on life, in which superficial excitement and false happiness are the norm.” By comparison, he notes, in France, “excitement is typically shown only when it is truly meant.”

Authenticity has been important to the French since the Revolution, according to Brice Couturier at France Culture. “The Ancien Régime, indeed, had cultivated a culture of the court and of salons, based on the art of appearances and pleasing,” he said. “This culture implied a great mastery of the behavioural codes of the time, as well as an ability to conceal one’s true emotions.”

Excitement is typically shown only when it is truly meant

In reaction, Couturier continued, the French revolutionaries fought back against these masks and this hypocrisy – something that the French maintain today by expressing their emotions as truthfully as possible to avoid appearing inauthentic.

This tendency was something that irked me when I first noticed it: French friends saying that a dish they tried in a restaurant was just ‘fine’, or shrugging nonchalantly when I asked if they were looking forward to their holiday. Their attitude struck me as unnecessarily negative. But on our first joint visit to the US, my husband opened my eyes to the somewhat forced hyperbole of American excitement. After our server cheerfully greeted us at a restaurant, he asked if she was a friend of mine; he could think of no other reason why her welcome would be so enthusiastic.

“I used to judge Americans because I thought they were always too ecstatic, always having disproportionate reactions,” he told me years later, though now, he added, “I feel like I have two worlds in my head, one in French and one in English. I feel like the English world is a lot more fun than the French one.”

After 11 years of living in France, my innate desire to say “Je suis excitée” has faded. But I still fixate on the idea that the French live in the être.


The French express their emotions as truthfully as possible to avoid appearing inauthentic. Credit: Kathy deWitt/Alamy.

When we were first dating, my husband used to watch me buzzing around like a busy bee, making plans for the future. He, meanwhile, was able to find not excitement, but contentment, in nearly everything. His frequent motto, whether we were drinking rosé in the sunshine or just sitting in a park, was: “on est bien, là” – we are good, here.

Excitement, after all, has a forward-thinking connotation, a necessary suggestion of the future. Ubiquitous in Anglophone culture, where we are often thinking about imminent or far-off plans, about goals and dreams, this is far less present among French people who, on the contrary, tend to live more in the moment. It’s not necessarily that they don’t think of the future but that they don’t fixate on the future. They consider it, cerebrally, but their emotions are in the present.


While English speakers often fixate on the future, French people tend to live more in the moment. Credit: Ian Shaw/Alamy.

“Life in France places you happily in the present tense,” Paris-based author Matthew Fraser told The Local, “unlike in Anglo-Protestant countries where everything is driving madly towards the future.”

Life in France places you happily in the present tense

The excitement that drives Anglophones to action, motivating us and driving us to look ahead is not nearly as present in France. But joie de vivre and contentment in simple pleasures certainly are. And when one is living in the moment, there’s no need to think about – or get excited about – what’s next.

This article was originally published on November 5, 2018, by BBC Travel, and is republished here with permission.

Marcus Aurelius on Embracing Mortality and the Key to Living with Presence

“The longest-lived and those who will die soonest lose the same thing. The present is all that they can give up, since that is all you have, and what you do not have, you cannot lose.”

Brain Pickings

  • Maria Popova

“When you realize you are mortal you also realize the tremendousness of the future. You fall in love with a Time you will never perceive,” the great Lebanese poet, painter, and philosopher Etel Adnan wrote in her beautiful meditation on time, self, impermanence, and transcendence. It is a sentiment of tremendous truth and simplicity, yet tremendously difficult for the mind to metabolize — we remain material creatures, spiritually sundered by the fact of our borrowed atoms, which we will each return to the universe, to the stardust that made us, despite our best earthly efforts. Physicist Alan Lightman contemplated this paradox in his lyrical essay on our longing for permanence in a universe of constant change: “It is one of the profound contradictions of human existence that we long for immortality, indeed fervently believe that something must be unchanging and permanent, when all of the evidence in nature argues against us.”

Two millennia earlier, before the very notion of a universe even existed, the Roman emperor and Stoic philosopher Marcus Aurelius (April 26, 121–March 17, 180) provided uncommonly lucid consolation for this most disquieting paradox of existence in his Meditations (free ebook) — the timeless trove of ancient wisdom that gave us his advice on how to motivate yourself to get out of bed each morning, the mental trick for maintaining sanity, and the key to living fully.

Eons before the modern invention of self-help, the Stoics equipped the human animal with a foundational toolkit for self-refinement, articulating their recipes for mental discipline with uncottoned candor that often borders on brutality — an instructional style they share with the Zen masters, whose teachings are often given in a stern tone that seems berating and downright angry but is animated by absolute well-wishing for the spiritual growth of the pupil.

It is with this mindset that Marcus Aurelius takes up the question of how to embrace our mortality and live with life-expanding presence in Book II of his Meditations, translated here by Gregory Hays:

The speed with which all of them vanish — the objects in the world, and the memory of them in time. And the real nature of the things our senses experience, especially those that entice us with pleasure or frighten us with pain or are loudly trumpeted by pride. To understand those things — how stupid, contemptible, grimy, decaying, and dead they are — that’s what our intellectual powers are for. And to understand what those people really amount to, whose opinions and voices constitute fame. And what dying is — and that if you look at it in the abstract and break down your imaginary ideas of it by logical analysis, you realize that it’s nothing but a process of nature, which only children can be afraid of. (And not only a process of nature but a necessary one.)

Art from Duck, Death and the Tulip by Wolf Erlbruch, an uncommonly tender illustrated meditation on life and death.

In a sentiment Montaigne would echo sixteen centuries later in his assertion that “to lament that we shall not be alive a hundred years hence, is the same folly as to be sorry we were not alive a hundred years ago,” Marcus Aurelius rebukes our pathological dread of death by demonstrating how it ejects us from the only arena on which life plays out — the present. Long before Rilke made the countercultural, almost counterbiological observation that “death is our friend precisely because it brings us into absolute and passionate presence with all that is here, that is natural, that is love,” he adds:

Even if you’re going to live three thousand more years, or ten times that, remember: you cannot lose another life than the one you’re living now, or live another one than the one you’re losing. The longest amounts to the same as the shortest. The present is the same for everyone; its loss is the same for everyone; and it should be clear that a brief instant is all that is lost. For you can’t lose either the past or the future; how could you lose what you don’t have?

Remember two things:

1) that everything has always been the same, and keeps recurring, and it makes no difference whether you see the same things recur in a hundred years or two hundred, or in an infinite period;

2) that the longest-lived and those who will die soonest lose the same thing. The present is all that they can give up, since that is all you have, and what you do not have, you cannot lose.

Art by Sydney Smith from Sidewalk Flowers by JonArno Lawson — a lyrical illustrated invitation to living with presence.

He concludes by summarizing the basic facts of human life — a catalogue of uncertainties, crowned by the sole certainty of death — and points to philosophy, or the love of wisdom and mindful living, as the only real anchor for our existential precariousness:

Human life.

Duration: momentary. Nature: changeable. Perception: dim. Condition of Body: decaying. Soul: spinning around. Fortune: unpredictable. Lasting Fame: uncertain. Sum Up: The body and its parts are a river, the soul a dream and mist, life is warfare and a journey far from home, lasting reputation is oblivion.

Then what can guide us?

Only philosophy.

Which means making sure that the power within stays safe and free from assault, superior to pleasure and pain, doing nothing randomly or dishonestly and with imposture, not dependent on anyone else’s doing something or not doing it. And making sure that it accepts what happens and what it is dealt as coming from the same place it came from. And above all, that it accepts death in a cheerful spirit, as nothing but the dissolution of the elements from which each living thing is composed. If it doesn’t hurt the individual elements to change continually into one another, why are people afraid of all of them changing and separating? It’s a natural thing. And nothing natural is evil.

Complement this portion of the altogether indispensable Meditations with psychoanalyst Adam Phillips on what Freud and Darwin taught us about how to live with death, neurologist Oliver Sacks on gratitude, the measure of living, and the dignity of dying, and philosopher, comedian, and my beloved friend Emily Levine on how to live with exultant presence while dying, then revisit two other great Stoic philosophers’ strategies for peace of mind: Seneca on the antidote to anxiety and Epictetus on love, loss, and surviving heartbreak.

This article was originally published on May 20, 2019, by Brain Pickings, and is republished here with permission.

The Real Trick to Staying Young Forever

As more people live longer, the case for intergenerational relationships is getting stronger.


  • Jenny Anderson

Photo by Reuters/David Mdzinarishvili.

People are living longer today. But how do we make sure those extra years are good ones?

For people in wealthy countries, it’s a question of increasing urgency. In 2019, for the first time ever, there were more Americans over age 60 than under age 18. One in three babies born in the UK will live to see their 100th birthday, according to the UK’s Office for National Statistics. These demographic shifts raise the question of how the 60-plus set will find purpose and meaning in their second and third acts of life—elements which are key to happiness.

Author and social entrepreneur Marc Freedman has one idea. “The real fountain of youth is in the same place it’s always been,” he said at a longevity conference in London. “The real fountain of youth is the fountain with youth.” In other words: Spending time with kids and young adults.

The architecture of Western societies today keeps generations apart from one another. A recent UK parliamentary inquiry (pdf) into intergenerational connection found that levels of segregation between retirees and young adults had roughly doubled between 2001 and 2018; during the same period, children’s chances of living near someone over 65 fell from 15 to 5 percent.

That’s a shame, according to Freedman. As he writes (paywall) in the Wall Street Journal, “for all the hand-wringing about the graying of America, the needs and assets of the generations fit together like pieces of a jigsaw puzzle.” That’s why, in places from Singapore to London to Cleveland, Ohio, organizations are working to try to help the young and old forge connections.

The Case for Intergenerational Friendships

In 1938, amidst the dark days of the Great Depression, scientists began tracking the health of 268 Harvard sophomores. They hoped the study, which would eventually track those men, their children, and their wives, as well as a sizable control group, for more than 80 years, would reveal what factors constituted a healthy, long, and happy life. Dubbed the Harvard Study of Adult Development (and often referred to as the Grant study), it is one of the world’s longest studies of adult life, rich with data about their physical and mental health.

Its main finding? Relationships deeply affect people’s physical and mental health—including relationships with younger generations. George Vaillant, the psychiatrist who led the study for decades, found that those in middle age or older who invest in nurturing the next generation were three times as likely to be happy as those who fail to do so. “Biology flows downhill,” he said.

Intergenerational friendships provide huge benefits to young people, too. In one of the most famous longitudinal studies (pdf) to look at resilience—why some kids thrive under adversity and others do not—a team of mental health workers, pediatricians, public health nurses, and social workers followed the development of nearly 700 children from Kauai, Hawaii, from age one to age 40.

After four decades, it became clear that the children who faced great adversity but thrived were those who had the support of one caregiver—often a grandparent or an older member of the community. “Children who succeeded against the odds had the opportunity to establish, early on, a close bond with at least one competent, emotionally stable person who was sensitive to their needs,” the study notes. “Much of this nurturing came from substitute caregivers, such as grandparents, older siblings, aunts, and uncles. Resilient children seemed to be especially adept at ‘recruiting’ such surrogate parents.”

“Every child needs at least one adult who is irrationally crazy about him or her,” Freedman said, citing child psychologist Urie Bronfenbrenner. Friendships between the young and old are also a natural fit: Older generations have time and love to give; younger ones are very much in need of both. Parents meanwhile, are often bereft of time, and in search of any support they can get.

It Wasn’t Always This Way

Freedman points out that generations weren’t always so stratified. Parents, children and grandparents used to live and work together. But hard-fought advancements, including the introduction of child labor laws and universal education, meant that the family unit was increasingly separated. Meanwhile, as older people started living longer, retirement communities sprang up to house them, with the aspiration of turning the graying years into “golden” ones. (When Sun City, the first retirement community in the US, opened in 1960 in the Arizona desert, 100,000 people turned up, producing what Freedman describes as “the largest traffic jam in state history.”)

Clearly, moving children from factories to schools was a good idea, and retirement communities are a necessity with so many adult children living far from their parents. But the upshot is that there are few structures that bring together generations. Interestingly, the two largest groups in society who report being loneliest are the young and the old.

That separation has also led to more ageism, as old age becomes associated not with experience and wisdom, but with higher health-care costs, sickness, and loneliness. It was not always so, Laura Carstensen, founder of the Stanford Center on Longevity and a professor of psychology at Stanford, said at the inaugural Longevity Forum in London on Nov. 5. In 1900, 25 percent of children died before age five, and many more before age 12. “Death wasn’t strongly associated with old age but with young life.”

Bringing the Old and Young Together

Freedman, in his 2018 book How to Live Forever: The Enduring Power of Connecting the Generations, explains that it doesn’t have to be this way. Freedman himself started Experience Corps in the US in 1996 to help older people teach younger children to read. Independent research showed the kids learned more, teachers welcomed the help, and the “tutors” were happier. Today, he’s the CEO and founder of an organization that connects retiring adults with internships at nonprofits as a way to provide talent to organizations, including those that might not already be looking at the above-65 set.

In his book, Freedman looks at a range of other initiatives. Singapore, for example, is investing $3 billion to build “a cohesive society with intergenerational harmony,” including co-locating eldercare and childcare facilities “to maximize opportunities for inter-generational interactions.”

For his research, Freedman also visited Judson Manor, a senior living community in Cleveland which hosts an artist-in-residence program, providing free housing for graduate music students who in turn perform for the residents and join in meals and other activities. According to Freedman’s Wall Street Journal piece, “When one young violist living at Judson became engaged in 2014, she asked her 90-something neighbor to be part of the wedding party.”

There are other efforts to bridge the generational divide to help combat loneliness. In Norfolk, Friend In Deed runs the Little Visitors scheme, where young mothers with babies visit seniors in the Badgers Wood Care home. The Cares Family, a charity with branches in London, Manchester, and Liverpool, creates social gatherings for young professionals and older people such as dance parties, cooking clubs, podcasting clubs, and dim sum lunches. The charity also has an outreach program to help connect those in need with services, and a one-on-one matching service to connect people in their homes. Research shows these programs are effective at combatting loneliness and improving social connectedness.

“When older people and younger people share time, there’s magic,” Alex Smith, founder and CEO of the Cares Family, said at the longevity conference. “Younger people get a sense of pause from their everyday lives, a sense of connection to another generation and a different set of life experiences, as well as a sense of community.”

The program helps change the way both younger and older people experience the cities in which they live. While many young people come to places like London or Manchester for the diverse cultural and economic opportunities, they often end up alone, or with only people like themselves. Meanwhile, older people, eager to explore what is new in their cities—like new restaurants or cocktail clubs—may feel those spaces are off-limits to them; younger people can introduce them to those spaces. “It’s a way for older people to reclaim their city through physical space and shared storytelling,” he added.

While there is a popular narrative that older generations have robbed younger ones of future prosperity, Smith says the generations have more in common than not. Almost eight in 10 people between 18 and 24 and the over-65s want life to slow down, and social care for older people remains the second-highest concern for 18-to-34-year-olds. The issue is not whether they have anything in common, but how to connect them.

One of the most important things about intergenerational friendships is that they serve as a reminder that there is no predetermined lifestyle that accompanies getting older. “Aging itself is malleable; it is a moving target,” says Carstensen. “We can influence how it unfolds.” Freedman, meanwhile, thinks we can help aging happen in a way that connects people together, rather than isolating them. “We can fix it,” he said. “And in the process it can fix us.”

This article was originally published on November 8, 2018, by Quartz, and is republished here with permission.



How Are Native American Artists Envisioning the Future? | Zocalo Public Square • Arizona State University • Smithsonian

Top panel of Indian Church by Mary Sully. Image courtesy of Philip J. Deloria.

Moderated by Manuela Well-Off-Man, Chief Curator, Museum of Contemporary Native Arts

800 Wilshire Blvd.
Los Angeles, CA 90017
Paid parking is available for $6 after 4 PM in the Joe’s Parking garage at 746 S. Hope Street.
Metro: 7th St./Metro Center

Native American artists have long used explorations of the future as a way to reflect on the present. Contemporary Native artists, from the Mohawk sci-fi multimedia artist Skawennati to the Navajo photographer Will Wilson, are using innovative techniques to create visual art, literature, comics, and installations to build on that tradition and reframe it in a modern context. Often described as “Indigenous Futurisms,” this movement has reconsidered science fiction’s colonialist narratives in ways that place the Native American experience at their heart. What are the inspirations for this wave of futuristic work? How does it build on the many traditions of Native American art forms? And to what extent does this art suggest ideas for addressing civilizational threats like climate change, plagues, inequality, and mass violence? Harvard historian and Becoming Mary Sully: Toward an American Indian Abstract author Philip J. Deloria, visual and performance artist Suzanne Kite, and Aja Couchois Duncan, writer and Sweet Land librettist, visit Zócalo to explore the future through the art of today.

Buddhist Perspectives for Times of Anxiety and Conflict with Santikaro

Theosophical Society

Discontent rumbles violently through our society. Many troublesome conflicts frighten and bedevil us, and forces around us and within us scheme to arouse negative emotions. We absorb much of that and recycle it within us. How may we cool our discontents and conflicts? This talk will explore possibilities for how Buddhist teachings can contribute to the reduction of our inner and outer anxieties and conflicts. Santikaro has been practicing and teaching this path of meditation along with broader Buddhist teachings that inform it for more than three decades. He adapts his teacher’s meditation guidance to the needs of Americans today, especially here in the Midwest. With his wife Jo Marie, he looks after Kevala Retreat, a refuge for silence and contemplation in southwest Wisconsin.

How We Pay Attention Changes the Very Shape of Our Brains

Stanislas Dehaene on the Neuroscience of Focused Learning

VIA VIKING By Stanislas Dehaene

January 30, 2020

Imagine arriving at the airport just in time to catch a plane. Everything in your behavior betrays the heightened concentration of your attention. Your mind on alert, you look for the departures sign, without letting yourself be distracted by the flow of travelers; you quickly scroll through the list to find your flight. Advertisements all around call out to you, but you do not even see them—instead, you head straight for the check-in counter. Suddenly, you turn around: in the crowd, an unexpected friend just called your first name. This message, which your brain considers a priority, takes over your attention and invades your consciousness . . . making you forget which check-in counter you were supposed to go to.

In the space of a few minutes, your brain went through most of the key states of attention: vigilance and alertness, selection and distraction, orientation and filtering. In cognitive science, “attention” refers to all the mechanisms by which the brain selects information, amplifies it, channels it, and deepens its processing. These are ancient mechanisms in evolution: whenever a dog reorients its ears or a mouse freezes up upon hearing a cracking sound, they’re making use of attention circuits that are very close to ours.

Why did attention mechanisms evolve in so many animal species? Because attention solves a very common problem: information saturation. Our brain is constantly bombarded with stimuli: the senses of sight, hearing, smell, and touch transmit millions of bits of information per second. Initially, all these messages are processed in parallel by distinct neurons—yet it would be impossible to digest them in depth: the brain’s resources would not suffice. This is why a pyramid of attention mechanisms, organized like a gigantic filter, carries out a selective triage. At each stage, our brain decides how much importance it should attribute to such and such input and allocates resources only to the information it considers most essential.

Selecting relevant information is fundamental to learning. In the absence of attention, discovering a pattern in a pile of data is like looking for the fabled needle in a haystack. This is one of the main reasons behind the slowness of conventional artificial neural networks: they waste considerable time analyzing all possible combinations of the data provided to them, instead of sorting out the information and focusing on the relevant bits.

It was only in 2014 that two researchers, Canadian Yoshua Bengio and Korean Kyunghyun Cho, showed how to integrate attention into artificial neural networks. Their first model learned to translate sentences from one language to another. They showed that attention brought in immense benefits: their system learned better and faster because it managed to focus on the relevant words of the original sentence at each step.

Very quickly, the idea of learning to pay attention spread like wildfire in the field of artificial intelligence. Today, if artificial systems manage to successfully label a picture (“A woman throwing a Frisbee in a park”), it is because they use attention to channel the information by focusing a spotlight on each relevant part of the image. When describing the Frisbee, the network concentrates all its resources on the corresponding pixels of the image and temporarily removes all those which correspond to the person and the park—it will return to them later. Nowadays, any sophisticated artificial intelligence system no longer connects all inputs with all outputs—it knows that learning will be faster if such a plain network, where every pixel of the input has a chance to predict any word at the output, is replaced by an organized architecture where learning is broken down into two modules: one that learns to pay attention, and another that learns to name the data filtered by the first.
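The mechanics behind such attention modules can be sketched in a few lines of code. The sketch below is a minimal, hypothetical illustration of soft attention in Python (it is not the actual Bengio-Cho translation network, nor any real captioning system; the vectors and the dot-product scoring are invented for the example): each input receives a relevance score, a softmax turns the scores into weights that sum to one, and the output is a weighted average that concentrates resources on the highest-scoring input.

```python
import numpy as np

def softmax(x):
    # Subtract the max before exponentiating, for numerical stability.
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys, values):
    """Soft attention: score each input against the query,
    normalize the scores, and return a weighted average."""
    scores = keys @ query        # relevance of each input to the query
    weights = softmax(scores)    # the "spotlight": weights sum to 1
    return weights @ values, weights

# Three toy inputs, each a 2-d vector; the query resembles input 0.
keys = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
values = keys.copy()
query = np.array([4.0, 0.0])

context, weights = attend(query, keys, values)
# Most of the weight lands on the first input; the others are squashed.
```

The same principle scales up: in a captioning network, the values would be features of image regions, and a learned scoring function would replace the fixed dot product.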

Attention is essential, but it may result in a problem: if attention is misdirected, learning can get stuck. If I don’t pay attention to the Frisbee, this part of the image is wiped out: processing goes on as if it did not exist. Information about it is discarded early on, and it remains confined to the earliest sensory areas. Unattended objects cause only a modest activation that induces little or no learning. This is utterly different from the extraordinary amplification that occurs in our brain whenever we pay attention to an object and become aware of it. With conscious attention, the discharges of the sensory and conceptual neurons that code for an object are massively amplified and prolonged, and their messages propagate into the prefrontal cortex, where whole populations of neurons ignite and fire for a long time, well beyond the original duration of the image.

Such a strong surge of neural firing is exactly what synapses need in order to change their strength—what neuroscientists call “long-term potentiation.” When a pupil pays conscious attention to, say, a foreign-language word that the teacher has just introduced, she allows that word to deeply propagate into her cortical circuits, all the way into the prefrontal cortex. As a result, that word has a much better chance of being remembered. Unconscious or unattended words remain largely confined to the brain’s sensory circuits, never getting a chance to reach the deeper lexical and conceptual representations that support comprehension and semantic memory.

This is why every student should learn to pay attention—and also why teachers should pay more attention to attention! If students don’t attend to the right information, it is quite unlikely that they will learn anything. A teacher’s greatest talent consists of constantly channeling and capturing children’s attention in order to properly guide them.

Attention plays such a fundamental role in the selection of relevant information that it is present in many different circuits in the brain. American psychologist Michael Posner distinguishes at least three major attention systems:

1. Alerting, which indicates when to attend, and adapts our level of vigilance.
2. Orienting, which signals what to attend to, and amplifies any object of interest.
3. Executive attention, which decides how to process the attended information, selects the processes that are relevant to a given task, and controls their execution.

These systems massively modulate brain activity and can therefore facilitate learning, but also point it in the wrong direction. Let us examine them one by one.


The first attention system, perhaps the oldest in evolution, tells us when to be on the watch. It sends warning signals that mobilize the entire body when circumstances require it. When a predator approaches or when a strong emotion overwhelms us, a whole series of subcortical nuclei immediately increases the wakefulness and vigilance of the cortex. This system dictates a massive and diffuse release of neuromodulators such as serotonin, acetylcholine, and dopamine. Through long-range axons with many spread-out branches, these alerting messages reach virtually the entire cortex, greatly modulating cortical activity and learning. Some researchers speak of a “now print” signal, as if these messages directly tell the cortex to commit the current contents of neural activity into memory.

Animal experiments show that the firing of this warning system can indeed radically alter cortical maps. The American neurophysiologist Michael Merzenich conducted several experiments in which the alerting system of mice was tricked into action by electrical stimulation of their subcortical dopamine or acetylcholine circuits. The outcome was a massive shift in cortical maps. All the neurons that happened to be activated at that moment, even if they had no objective importance, were subject to intense amplification. When a sound, for instance, a high-pitched tone, was systematically associated with a flash of dopamine or acetylcholine, the mouse’s brain became heavily biased toward this stimulus. As a result, the whole auditory map was invaded by this arbitrary note. The mouse became better and better at discriminating sounds close to this sensitive note, but it partially lost the ability to represent other frequencies.

It is remarkable that such cortical plasticity, induced by tampering with the alerting system, can occur even in adult animals. Analysis of the circuits involved shows that neuromodulators such as serotonin and acetylcholine—particularly via the nicotinic receptor (sensitive to nicotine, another major player in arousal and alertness)—modulate the firing of cortical inhibitory interneurons, tipping the balance between excitation and inhibition. Remember that inhibition plays a key role in the closing of sensitive periods for synaptic plasticity. Disinhibited by the alerting signals, cortical circuits seem to recover some of their juvenile plasticity, thus reopening the sensitive period for signals that the mouse brain labels as crucial.

What about Homo sapiens? It is tempting to think that a similar reorganization of cortical maps occurs every time a composer or a mathematician passionately dives into their chosen field, especially when their passion starts at an early age. A Mozart or a Ramanujan is perhaps so electrified by fervor that his brain maps become literally invaded with mental models of music or math. Furthermore, this may apply not only to geniuses, but to anyone passionate in their work, from a manual worker to a rocket scientist. By allowing cortical maps to massively reshape themselves, passion breeds talent.

Even though not everyone is a Mozart, the same brain circuits of alertness and motivation are present in all people. What circumstances of daily life would mobilize these circuits? Do they activate only in response to trauma or strong emotions? Maybe not. Some research suggests that video games, especially action games that play with life and death, provide a particularly effective means of engaging our attentional mechanisms. By mobilizing our alerting and reward systems, video games massively modulate learning. The dopamine circuit, for example, fires when we play an action game. Psychologist Daphné Bavelier has shown that this translates into rapid learning. The most violent action games seem to have the most intense effects, perhaps because they most strongly mobilize the brain’s alerting circuits. Ten hours of gameplay suffice to improve visual detection, refine the rapid estimation of the number of objects on the screen, and expand the capacity to concentrate on a target without being distracted. A video game player manages to make ultra-fast decisions without compromising his or her performance.

Parents and teachers complain that today’s children, plugged into computers, tablets, consoles, and other devices, constantly zap from one activity to the next and have lost the capacity to concentrate—but this is untrue. Far from reducing our ability to concentrate, video games can actually increase it. In the future, will they help us remobilize synaptic plasticity in adults and children alike? Undoubtedly, they are a powerful stimulant of attention, which is why my laboratory has developed a whole range of educational tablet games for math and reading, based on cognitive science principles.

Video games also have their dark side: they present well-known risks of social isolation, time loss, and addiction. Fortunately, there are many other ways to unlock the effects of the alerting system while also drawing on the brain’s social sense. Teachers who captivate their students, books that draw in their readers, and films and plays that transport their audiences and immerse them in real-life experiences probably provide equally powerful alerting signals that stimulate our brain plasticity.


The second attention system in the brain determines what we should attend to. This orienting system acts as a spotlight on the outside world. From the millions of stimuli that bombard us, it selects those to which we should allocate our mental resources, because they are urgent, dangerous, appealing . . . or merely relevant to our present goals.

The founding father of American psychology, William James (1842–1910), in his The Principles of Psychology (1890), best defined this function of attention: “Millions of items of the outward order are present to my senses which never properly enter into my experience. Why? Because they have no interest for me. My experience is what I agree to attend to. Only those items which I notice shape my mind.”

Selective attention operates in all sensory domains, even the most abstract. For example, we can pay attention to the sounds around us: dogs move their ears, but for us humans, only an internal pointer in our brain moves and tunes in to whatever we decide to focus on. At a noisy cocktail party, we are able to select one out of ten conversations based on voice and meaning. In vision, the orienting of attention is often more obvious: we generally move our head and eyes toward whatever attracts us. By shifting our gaze, we bring the object of interest into our fovea, which is an area of very high sensitivity in the center of our retina. However, experiments show that even without moving our eyes, we can still pay attention to any place or any object, wherever it is, and amplify its features. We can even attend to one of several superimposed drawings, just like we attend to one of several simultaneous conversations. And there is nothing stopping you from paying attention to the color of a painting, the shape of a curve, the speed of a runner, the style of a writer, or the technique of a painter. Any representation in our brains can become the focus of attention.

In all these cases, the effect is the same: the orienting of attention amplifies whatever lies in its spotlight. The neurons that encode the attended information increase their firing, while the noisy chattering of other neurons is squashed. The impact is twofold: attention makes the attended neurons more sensitive to the information that we consider relevant, but, above all, it increases their influence on the rest of the brain. Downstream neural circuits echo the stimulus to which we lend our eyes, ears, or mind. Ultimately, vast expanses of cortex reorient to encode whatever information lies at the center of our attention. Attention acts as an amplifier and a selective filter.

“The art of paying attention, the great art,” says the philosopher Alain (1868–1951), “supposes the art of not paying attention, which is the royal art.” Indeed, paying attention also involves choosing what to ignore. For an object to come into the spotlight, thousands of others must remain in the shadows. To direct attention is to choose, filter, and select: this is why cognitive scientists speak of selective attention. This form of attention amplifies the signal which is selected, but it also dramatically reduces those that are deemed irrelevant. The technical term for this mechanism is “biased competition”: at any given moment, many sensory inputs compete for our brain’s resources, and attention biases this competition by strengthening the representation of the selected item while squashing the others. This is where the spotlight metaphor reaches its limits: to better light up a region of the cortex, the attentional spotlight of our brain also reduces the illumination of other regions. The mechanism relies on interfering waves of electrical activity: to suppress a brain area, the brain swamps it with slow waves in the alpha frequency band (between eight and twelve hertz), which inhibit a circuit by preventing it from developing coherent neural activity.
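As a rough illustration, biased competition can be mimicked with a toy divisive-normalization model (a sketch in the spirit of computational accounts such as Reynolds and Heeger's normalization model of attention, not a description of real circuitry; the numbers are invented): each stimulus's drive is scaled by an attentional gain and then divided by the pooled activity of all stimuli, so boosting one representation automatically suppresses its competitors.

```python
import numpy as np

def biased_competition(drive, gain):
    """Scale each stimulus's drive by an attentional gain, then
    normalize by the pooled activity of all stimuli."""
    excitation = drive * gain
    return excitation / excitation.sum()

drive = np.array([1.0, 1.0, 1.0])   # three equally strong stimuli

# No attentional bias: each stimulus gets an equal share of resources.
baseline = biased_competition(drive, np.array([1.0, 1.0, 1.0]))

# Attend to stimulus 0: its share grows, and the others shrink,
# even though their physical drive has not changed.
attended = biased_competition(drive, np.array([4.0, 1.0, 1.0]))
```

The key property is that the normalization makes attention a zero-sum game: lighting up one region of the map necessarily dims the rest.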

Paying attention, therefore, consists of suppressing the unwanted information—and in doing so, our brain runs the risk of becoming blind to what it chooses not to see. Blind, really? Really. The term is fully appropriate, because many experiments, including the famous “invisible gorilla” experiment, demonstrate that inattention can induce a complete loss of sight. In this classic experiment, you are asked to watch a short movie where basketball players, dressed in black and white, pass a ball back and forth. Your task is to count, as precisely as you can, the number of passes of the white team. A piece of cake, you think—and indeed, 30 seconds later, you triumphantly give the right answer.

But now the experimenter asks a strange question: “Did you see the gorilla?” The gorilla? What gorilla? We rewind the tape, and to your amazement, you discover that an actor in a full-body gorilla costume walked across the stage and even stopped in the middle to pound on his chest for several seconds. It seems impossible to miss. Furthermore, experiments show that, at some point, your eyes looked right at the gorilla. Yet you did not see it. The reason is simple: your attention was entirely focused on the white team and therefore actively inhibited the distracting players who were dressed in black . . . gorilla included! Busy with the counting task, your mental workspace was unable to become aware of this incongruous creature.

The invisible gorilla experiment is a landmark study in cognitive science, and one which is easily replicated: in a great variety of settings, the mere act of focusing our attention blinds us to unattended stimuli. If, for instance, I ask you to judge whether the pitch of a sound is high or low, you may become blind to another stimulus, such as a written word that appears within the next fraction of a second. Psychologists call this phenomenon the “attentional blink”: your eyes may remain open, but your mind “blinks”—for a short while, it is fully busy with its main task and utterly unable to attend to anything else, even something as simple as a single word.

In such experiments, we actually suffer from two distinct illusions. First, we fail to see the word or the gorilla, which is bad enough. (Other experiments show that inattention can lead us to miss a red light or run over a pedestrian—never use your cell phone behind the wheel!) But the second illusion is even worse: we are unaware of our own unawareness—and, therefore, we are absolutely convinced that we have seen all there is to see! Most people who try the invisible gorilla experiment cannot believe their own blindness. They think that we played a trick on them, for instance by using two different movies. Typically, their reasoning is that if there really was a gorilla in the video, they would have seen it. Unfortunately, this is false: our attention is extremely limited, and despite all our good will, when our thoughts are focused on one object, other objects—however salient, amusing, or important—can completely elude us and remain invisible to our eyes. The intrinsic limits of our awareness lead us to overestimate what we and others can perceive.

The gorilla experiment truly deserves to be known by everyone, especially parents and teachers. When we teach, we tend to forget what it means to be ignorant. We all think that what we see, everyone can see. As a result, we often have a hard time understanding why a child, despite the best of intentions, fails to see, in the most literal sense of the term, what we are trying to teach him. But the gorilla carries a clear message: seeing requires attending. If students, for one reason or another, are distracted and fail to pay attention, they may be entirely oblivious to their teacher’s message—and what they cannot perceive, they cannot learn.



From How We Learn by Stanislas Dehaene, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2020 by Stanislas Dehaene.

Stanislas Dehaene

Stanislas Dehaene is the director of the Cognitive Neuroimaging Unit in Saclay, France, and the professor of experimental cognitive psychology at the Collège de France. He is the author of Reading in the Brain and more recently of How We Learn.

(Inspired by Suzanne Deakins, H.W., M.)

THRIVE: What On Earth Will It Take?

THRIVE Movement. To stay informed and learn about THRIVE II, subscribe to the Thrive Movement mailing list. If you value what is presented in this movie, please visit the Thrive Movement website, where you can support the project by making a donation. You will also find more in-depth information on each of the subjects discussed in the movie, learn about Critical Mass initiatives supported by Thrive, and connect with others who are waking up and taking action.

Film Synopsis: THRIVE is an unconventional documentary that lifts the veil on what’s REALLY going on in our world by following the money upstream — uncovering the global consolidation of power in nearly every aspect of our lives. Weaving together breakthroughs in science, consciousness and activism, THRIVE offers real solutions, empowering us with unprecedented and bold strategies for reclaiming our lives and our future.

(Contributed by Steve Hines.)

As Virus Spreads, Anger Floods Chinese Social Media

The sheer volume of criticism of the government, and the sometimes clever ways that critics dodge censors, are testing Beijing’s ability to control the narrative.

In Beijing on Sunday, riders wearing protective masks cycle on a nearly empty street that is normally busy with tourists. Credit: Kevin Frayer/Getty Images
By Raymond Zhong

Published Jan. 27, 2020

SHANGHAI — Recently, someone following the coronavirus crisis through China’s official news media would see lots of footage, often set to stirring music, praising the heroism and sacrifice of health workers marching off to stricken places.

But someone following the crisis through social media would see something else entirely: vitriolic comments and mocking memes about government officials, harrowing descriptions of untreated family members and images of hospital corridors loaded with patients, some of whom appear to be dead.

The contrast is almost never so stark in China. The government usually keeps a tight grip on what is said, seen and heard about it. But the sheer amount of criticism — and the often clever ways in which critics dodge censors, such as by referring to Xi Jinping, China’s top leader, as “Trump” or by comparing the outbreak to the Chernobyl catastrophe — have made it difficult for Beijing to control the message.

In recent days, critics have pounced when officials in the city of Wuhan, the center of the outbreak, wore their protective masks incorrectly. They have heaped scorn upon stumbling pronouncements. When Wuhan’s mayor spoke to official media on Monday, one commenter responded, “If the virus is fair, then please don’t spare this useless person.”

The condemnations stand as a rare direct challenge to the Communist Party, which brooks no dissent in the way it runs China. In some cases, Chinese leaders appear to be acknowledging people’s fear, anger and other all-too-human reactions to the crisis, showing how the party can move dramatically, if sometimes belatedly, to mollify the public.

Such criticism can go only so far, however. Some of China’s more commercially minded media outlets have covered the disease and the response thoroughly if not critically. But articles and comments about the virus continue to be deleted, and the government and internet platforms have issued fresh warnings against spreading what they call “rumors.”

“Chinese social media are full of anger, not because there was no censorship on this topic, but despite strong censorship,” said Xiao Qiang, a research scientist at the School of Information at the University of California, Berkeley, and the founder of China Digital Times, a website that monitors Chinese internet controls. “It is still possible that the censorship will suddenly increase again, as part of an effort to control the narrative.”

When China’s leaders battled the SARS virus in the early 2000s, social media was only just beginning to blossom in the country. The government covered up the disease’s spread, and it was left to journalists and other critics to shame the authorities into acknowledging the scale of the problem.

Today, smartphones and social media make it harder for mass public health crises to stay buried. But internet platforms in China are just as easily polluted with false and fast-moving information as they are everywhere else. During outbreaks of disease, Beijing’s leaders have legitimate reason to be on alert for quack remedies and scaremongering fabrications, which can cause panic and do damage.

China’s premier, Li Keqiang, center, visiting a supermarket in Wuhan on Monday. Credit: Agence France-Presse — Getty Images

In recent days, though, Beijing seems to be reasserting its primacy over information in ways that go beyond mere rumor control. At a meeting this past weekend between Mr. Xi and other senior leaders, one of the measures they resolved to take against the virus was to “strengthen the guidance of public opinion.”

Wang Huning, the head of the Communist Party’s publicity department and an influential party ideologue, was also recently named deputy head of the team in charge of containing the outbreak, behind only China’s premier, Li Keqiang.

Chinese officials seem to recognize that social media can be a useful tool for feeling out public opinion in times of crisis. WeChat, the popular Chinese messaging platform, said over the weekend that it would crack down hard on rumors about the virus. But it also created a tool for users to report tips and information about the disease and the response.

Internet backlash may already have caused one local government in China to change course on its virus-fighting policies. The southern city of Shantou announced on Sunday that it was stopping cars, ships and people from entering the city, in a policy that echoed ones in Wuhan. But then word went around that the decision had led people to panic-buy food, and by the afternoon, the order had been rescinded.

Nowhere has the local government been the target of more internet vitriol than in Hubei Province, where Wuhan is the capital.

After the Hubei governor, Wang Xiaodong, and other officials there gave a news briefing on Sunday, web users mocked Mr. Wang for misstating, twice, the number of face masks that the province could produce. They circulated a photo from the briefing of him and two other officials, pointing out that one of them did not cover his nose with his mask, that another wore his mask upside down and that Mr. Wang did not wear a mask at all.

On Monday, social media users were similarly unrelenting toward Wuhan’s mayor, Zhou Xianwang.

During an interview Mr. Zhou gave to state television, commenters in live streams unloaded on him, with one writing: “Stop talking. We just want to know when you will resign.”

Top authorities may be deliberately directing public anger toward officials in Hubei and Wuhan as a prelude to their resigning and being replaced. Many other targets within the Chinese leadership seem to remain off limits.

This month, as news of the coronavirus emerged but Mr. Xi did not make public appearances to address it, people on the social platform Weibo began venting their frustration in veiled ways, asking, “Where’s that person?”

Masks offer a visible reminder of China’s struggle with the coronavirus. A Chinese couple took a selfie while overlooking the Forbidden City in Beijing on Sunday. Credit: Kevin Frayer/Getty Images

But even those comments were deleted. So some users started replacing Mr. Xi’s name with “Trump.” As in, “I don’t want to go through another minute of this year, my heart is filled with pain, I hope Trump dies.”

Other people hungering to express frustration have taken to the Chinese social platform Douban, which has been flooded recently by user reviews for “Chernobyl,” the hit television series about the Soviet nuclear disaster.

“In any era, any country, it’s the same. Cover everything up,” one reviewer wrote on Monday.

“That’s socialism,” wrote another.

Some Chinese news outlets have been able to report incisively on the coronavirus. The influential newsmagazine Caixin has put out rigorous reporting and analysis. The Paper, a digital news outlet that is overseen by Shanghai’s Communist Party Committee, published a chilling video about a Wuhan resident who couldn’t find a hospital that would treat him and ended up wandering the streets.

Mr. Xiao, the Chinese internet expert, said the central authorities long gave such outlets special leeway to cover certain topics in ways that official media cannot. But the outlets should not be viewed as independent of the government, he said, calling their coverage “planned and controlled publicity” from the authorities.

Even outside the digital realm, it is not hard to find people in China who remain unsure of whether to trust what their government is telling them about the outbreak.

Chen Pulin, a 78-year-old retiree, was waiting outside a Shanghai hospital recently while his daughter was inside being tested for the virus. When word of the disease first began trickling out, he immediately had doubts about whether officials were being forthcoming about it.

“Even now, the government seems to be thinking about the economy and social stability,” Mr. Chen said. “Those things are important, but when it comes to these infectious diseases, stopping the disease should come first.”

Li Yuan contributed reporting from Hong Kong. Claire Fu, Lin Qiqing and Wang Yiwei contributed research.