Battle Hymn of the Republic – Mormon Tabernacle Choir

The Tabernacle Choir at Temple Square. Purchase “Battle Hymn of the Republic” from the album “Spirit of America”:
  • Amazon: http://amzn.to/QJDqpy
  • iTunes: http://bit.ly/XeY4RR
  • Deseret Book: http://bit.ly/Wqktw3
  • LDS Store: http://bit.ly/Qj9f72

The Mormon Tabernacle Choir and Orchestra at Temple Square perform “Battle Hymn of the Republic” composed by William Steffe with lyrics by Julia Ward Howe and arranged by Peter J. Wilhousky.

Lyrics:

Mine eyes have seen the glory of the coming of the Lord;
He is trampling out the vintage where the grapes of wrath are stored;
He hath loosed the fateful lightning of His terrible swift sword:
His truth is marching on.

Gloria! Glory! Glory! Hallelujah!
Gloria! Glory! Glory! Hallelujah!
Gloria! Gloria! Glory! Glory! Hallelujah! Gloria!
His truth is marching on!

I have seen Him in the watch-fires of a hundred circling camps; (Truth is marching, truth is marching, truth is marching)
They have builded Him an altar in the evening dews and damps; (Truth is marching, truth is marching, truth is marching)
I can read His righteous sentence in the dim and flaring lamps, (Truth is marching, truth is marching, truth is marching)
His day is marching on! (Truth is marching, truth is marching, truth is marching)

Glory! Glory! Hallelujah!
Glory! Glory! Hallelujah!
Glory! Glory! Hallelujah!
His truth is marching on.

In the beauty of the lilies, Christ was born across the sea,
With a glory in His bosom that transfigures you and me:
As He died to make men holy, Let us live to make men free,
While God is marching on.

Glory! Glory! Hallelujah!
Glory! Glory! Hallelujah!
Glory! Glory! Hallelujah!
His truth is marching on!

Glory! Glory! Hallelujah!
Glory! Glory! Hallelujah!
Glory! Glory! Hallelujah!
His truth is marching on!

Amen! Amen!

Episode 4315. May 27, 2012

Just Read the Book Already

Digital culture doesn’t have to make you a shallow reader. But you have to do something about it.

Slate|getpocket.com

  • Laura Miller

Illustration by Doris Liou.

Not long ago, a cognitive neuroscientist decided to perform an experiment on herself. Maryanne Wolf, an expert on the science of reading, was worried—as perhaps you have worried—that she might be losing the knack for sustained, deep reading. She still bought books, she writes in Reader, Come Home: The Reading Brain in a Digital World, “but more and more I read in them rather than being whisked away by them. At some time impossible to pinpoint, I had begun to read more to be informed than to be immersed, much less to be transported.” Despite having written a popular book, Proust and the Squid: The Story and Science of the Reading Brain, celebrating, among other things, the brain’s neuroplasticity—that is, its tendency to reshape its circuitry to adapt to the tasks most often demanded of it—Wolf told herself that it wasn’t the style of her reading that had changed, only the amount of time she could set aside for it. Nevertheless, she felt she owed the question more rigorous scrutiny. Hence the informal experiment.

Wolf resolved to allot a set period every day to reread a novel she had loved as a young woman, Hermann Hesse’s Magister Ludi. It was exactly the sort of demanding text she’d once reveled in. But now she discovered to her dismay that she could not bear it. “I hated the book,” she writes. “I hated the whole so-called experiment.” She had to force herself to wrangle the novel’s “unnecessarily difficult words and sentences whose snakelike constructions obfuscated, rather than illuminated, meaning for me.” The narrative action struck her as intolerably slow. She had, she concluded, “changed in ways I would never have predicted. I now read on the surface and very quickly; in fact, I read too fast to comprehend deeper levels, which forced me constantly to go back and reread the same sentence over and over with increasing frustration.” She had lost the “cognitive patience” that once sustained her in reading such books. She blamed the internet.

She’s not the first—either to notice this falling off or to sound the alarm at it. Her dilemma is the same one that prompted Nicholas Carr to write his 2010 book The Shallows: What the Internet Is Doing to Our Brains. Whenever he tried to read anything substantial, Carr wrote, “I get fidgety, lose the thread, begin looking for something else to do. … The deep reading that used to come naturally has become a struggle.” Reader, Come Home’s chapters are written in the form of letters—Wolf’s attempt to strike an intimate tone—but for all her adoration of literature, this is a writer who lives most of her professional life in the realm of academia and policymaking, an environment that has left its mark on her prose. Her sentences sometimes resemble a stand full of broken umbrellas through which the reader must forage in search of a workable statement: “Many changes in our thinking owe as much to our biological reflex to attend to novel stimuli as to a culture that floods us with continuous stimuli with our collusion.”

In her defense, unlike, say, her fellow neuroscience popularizer and Proust fan, the disgraced Jonah Lehrer—a pleasure to read if you don’t know about the plagiarism and fabrication—Wolf is a serious scholar genuinely trying to make the world a better place. Reader, Come Home is full of sound, if hardly revelatory, advice for parents—read to your small children instead of handing them an iPad with an “enhanced” e-book on it that can read itself aloud while you check your smartphone—and considered policy recommendations. A good third of the book concerns how educators can improve the deep-reading skills of children in various age groups while also promoting their digital literacy. Wolf’s goal, she insists, is not to bemoan the lost idylls of print reading, but to build “biliterate” brains in children who are “expert, flexible code-switchers,” with “parallel levels of fluency” in both the skimming, “grasshopper” style of reading fostered by digital media and the immersive, deep, reflective reading associated with print books. Wolf may wax lyrical on the subject of print, but she is no Luddite. One of her many projects is an initiative that distributes carefully designed digital teaching devices to “nonliterate children in remote parts of the world” where they have no books and no parents or other adults able to instruct them.

But if Wolf is impassioned about the importance of deep reading, she doesn’t always seem fully cognizant of the forces arrayed against it. She seems to be responding to the digital culture of nearly a decade ago, when parents still thought it was cutting-edge for schools to issue tablets to students, not the 2018 in which parents worry that their kids have become “addicted” to digital devices. She views online reading as if it mostly consists of news consumption, decrying the way the medium pressures writers to condense their work into snackable content and makes readers impatient with anything long and chewy. But “reading” doesn’t necessarily describe what many people are doing online anymore, whether they’re teenagers checking Instagram, seniors debating politics on Facebook, 10-year-olds playing video games, or toddlers being lulled into docility by robotically disturbing YouTube videos. Sure, we read Twitter and Facebook, but not in the way we read even so fragmented a text as a newspaper.


Maryanne Wolf. Photo by Rod Searcey.

As in Proust and the Squid, Wolf refers back to a famous story from Phaedrus, in which Socrates cautioned against literacy, arguing that knowledge is not fixed but the product of a dialogue between the speaker and listener—that the great weakness of a text is that you can’t ask it questions and make it justify its conclusions. Wolf uses the story to point out the futility of rejecting a powerful new communication technology like the book, but she doesn’t seem to have noticed that the internet more closely resembles Socrates’ ideal than the printed page does. Social media and comments sections are more like conversations than they are like books or print journalism. In her paeans to deep reading and its power to engender critical thinking, “wherein different possible interpretations of the text move back and forth, integrating background knowledge with empathy and inference with critical analysis,” Wolf argues that good readers learn to weigh their acquired knowledge against the text, testing it. But an internet in which serious ideas are presented and evaluated by writers and readers who debate them publicly and in good faith would be just as much a boon to critical thinking.

Or at least that was the promise, long ago, back when digital utopianism still had a leg to stand on, before even the people who have made fortunes selling us this stuff started going on digital detoxes and raising their kids tech-free. When Carr published the article on which The Shallows was based, he drew a lot of fire from technology boosters for his fuddy-duddy obsession with books. Why, Clay Shirky argued, couldn’t Carr recognize that the form of “literary reading” he lamented had had its day and was being replaced by something different but of even greater value because it is so much more democratic? The reason no one’s reading War and Peace is, Shirky asserted, because it’s “too long, and not so interesting.” Instead of mourning the loss of the “cathedral” reading experience offered by a great 19th-century novel, we should be adapting to the “bazaar” culture of the internet. If the medium trains our supremely adaptable brains to work differently, well, maybe that’s because they need to work that way to take advantage of “the net’s native forms.”

Shirky’s was only the sauciest form of an argument I heard whenever I mentioned to my techno-utopian friends that I identified with Carr’s distress. Concentration had become more difficult even for me, a professional reader and lifelong lover of books. Now it seems utterly nuts that someone could insist both that technology is an unstoppable force that cannot be directed or corrected and also that everything will work out great in the end, but that was standard operating procedure among the tech commentariat as recently as eight years ago. That large corporations might manipulate and exploit these changes for profit (let alone that hostile foreign governments might tamper with the much-vaunted “wisdom of crowds” to influence a U.S. election) never seemed to occur to them—or if it did, it didn’t bother them much. Freed from what Shirky deplored as the “impoverished access” of the past, we were, he assured us, poised for “the greatest expansion of expressive capability the world has ever known.” In the dark days of 2018, all of us are fully aware of what it’s like to be bombarded with the expanded expressive capability of the internet. Forget War and Peace—nothing could be as uninteresting as 99 percent of the stuff people post online. The medium remains the message, and in the case of the wide-open internet, that means the medium is ever more cacophonous and indiscriminate, its democratic qualities as much a bug as a feature. One of the reasons that digital readers skim is not because of some quality inherent in screens, as Wolf seems to think, but because so much of what we find online is not worth our full attention.


Photo from Harper.

In Reader, Come Home, Wolf arrives bearing a bit more evidence in defense of deep reading, and specifically the reading of fiction, which some researchers have found to increase empathy. The reader of a novel imagines herself into the psyche of a person who may be very different from herself, and this offers a qualitatively different experience from watching a visual, dramatic medium like film, where she witnesses the character’s experience from the outside. But there isn’t a lot of this research, partly because experiments involving an activity as complex and idiosyncratic as reading are hard to devise. Most of us, like Wolf, will have to resort to more ad hoc investigations. Wolf, struggling through Magister Ludi, found that she did eventually get her groove back, falling into the rhythms of Hesse’s prose. This made her feel that she was “home again,” hence the title of her book.

Reading about Wolf’s experiment reminded me of the weekend last year when I sunk into the velvet folds of Jennifer Egan’s Manhattan Beach, a novel that is not at all slow but rather slowing. It demonstrated an uncanny power to still my jittering mind and return me to the long, trancelike reading bouts of my early 20s, when I devoured fat Henry James novels for fun. How much more satisfying this was than the facile and addictive consumption of social media that takes up way too much of my time! I went looking on my shelves for a copy of Egan’s novel, wondering if I could find on a randomly accessed page some trace of that magic. Finally I remembered that I’d read Manhattan Beach in PDF form, on my iPad. The screen didn’t lessen the potency of the novel’s spell. It would be nice if Wolf caught up with the times and stopped fretting about e-books and the notion that they inhibit this kind of immersion.

Perhaps the answer to Wolf’s worries is neither so complicated nor so apocalyptic as technology’s champions and Cassandras have made it out to be, but largely a matter of willpower and common sense. Most adults do know what’s good for them. Like hitting the gym instead of the couch or having the salade niçoise instead of fries, you pick the less tempting option because in the long run it will make you feel better. As the programmers put it, garbage in, garbage out.

This article was originally published on August 16, 2018, by Slate, and is republished here with permission.

How Does a Caterpillar Turn into a Butterfly?

To become a butterfly, a caterpillar first digests itself. But certain groups of cells survive, turning the soup into eyes, wings, antennae and other adult structures.

Scientific American|getpocket.com

  • Ferris Jabr

Photo by barbaraaaa / Getty Images.

As children, many of us learn about the wondrous process by which a caterpillar morphs into a butterfly. The story usually begins with a very hungry caterpillar hatching from an egg. The caterpillar, or what is more scientifically termed a larva, stuffs itself with leaves, growing plumper and longer through a series of molts in which it sheds its skin. One day, the caterpillar stops eating, hangs upside down from a twig or leaf and spins itself a silky cocoon or molts into a shiny chrysalis. Within its protective casing, the caterpillar radically transforms its body, eventually emerging as a butterfly or moth.

But what does that radical transformation entail? How does a caterpillar rearrange itself into a butterfly? What happens inside a chrysalis or cocoon?

First, the caterpillar digests itself, releasing enzymes to dissolve all of its tissues. If you were to cut open a cocoon or chrysalis at just the right time, caterpillar soup would ooze out. But the contents of the pupa are not entirely an amorphous mess. Certain highly organized groups of cells known as imaginal discs survive the digestive process. Before hatching, when a caterpillar is still developing inside its egg, it grows an imaginal disc for each of the adult body parts it will need as a mature butterfly or moth—discs for its eyes, for its wings, its legs and so on. In some species, these imaginal discs remain dormant throughout the caterpillar’s life; in other species, the discs begin to take the shape of adult body parts even before the caterpillar forms a chrysalis or cocoon. Some caterpillars walk around with tiny rudimentary wings tucked inside their bodies, though you would never know it by looking at them.

Once a caterpillar has disintegrated all of its tissues except for the imaginal discs, those discs use the protein-rich soup all around them to fuel the rapid cell division required to form the wings, antennae, legs, eyes, genitals and all the other features of an adult butterfly or moth. The imaginal disc for a fruit fly’s wing, for example, might begin with only 50 cells and increase to more than 50,000 cells by the end of metamorphosis. Depending on the species, certain caterpillar muscles and sections of the nervous system are largely preserved in the adult butterfly. One study even suggests that moths remember what they learned in later stages of their lives as caterpillars.

Getting a look at this metamorphosis as it happens is difficult; disturbing a caterpillar inside its cocoon or chrysalis risks botching the transformation. But Michael Cook, who maintains a fantastic website about silkworms, has some incredible photos of a Tussah silkmoth (Antheraea pernyi) that failed to spin a cocoon. You can see the delicate, translucent jade wings, antennae and legs of a pupa that has not yet matured into an adult moth—a glimpse of what usually remains concealed.

Ferris Jabr is a contributing writer for Scientific American. He has also written for the New York Times Magazine, the New Yorker and Outside.

This article was originally published on August 10, 2012, by Scientific American, and is republished here with permission.

Why your brain is not a computer

 Photograph: Artem Burduk/Getty Images/iStockphoto

For decades it has been the dominant metaphor in neuroscience. But could this idea have been leading us astray all along?

By Matthew Cobb

Thu 27 Feb 2020 (theguardian.com)

We are living through one of the greatest of scientific endeavours – the attempt to understand the most complex object in the universe, the brain. Scientists are accumulating vast amounts of data about structure and function in a huge array of brains, from the tiniest to our own. Tens of thousands of researchers are devoting massive amounts of time and energy to thinking about what brains do, and astonishing new technology is enabling us to both describe and manipulate that activity.


We can now make a mouse remember something about a smell it has never encountered, turn a bad mouse memory into a good one, and even use a surge of electricity to change how people perceive faces. We are drawing up increasingly detailed and complex functional maps of the brain, human and otherwise. In some species, we can change the brain’s very structure at will, altering the animal’s behaviour as a result. Some of the most profound consequences of our growing mastery can be seen in our ability to enable a paralysed person to control a robotic arm with the power of their mind.

Every day, we hear about new discoveries that shed light on how brains work, along with the promise – or threat – of new technology that will enable us to do such far-fetched things as read minds, or detect criminals, or even be uploaded into a computer. Books are repeatedly produced that each claim to explain the brain in different ways.

And yet there is a growing conviction among some neuroscientists that our future path is not clear. It is hard to see where we should be going, apart from simply collecting more data or counting on the latest exciting experimental approach. As the German neuroscientist Olaf Sporns has put it: “Neuroscience still largely lacks organising principles or a theoretical framework for converting brain data into fundamental knowledge and understanding.” Despite the vast number of facts being accumulated, our understanding of the brain appears to be approaching an impasse.

In 2017, the French neuroscientist Yves Frégnac focused on the current fashion of collecting massive amounts of data in expensive, large-scale projects and argued that the tsunami of data they are producing is leading to major bottlenecks in progress, partly because, as he put it pithily, “big data is not knowledge”.

“Only 20 to 30 years ago, neuroanatomical and neurophysiological information was relatively scarce, while understanding mind-related processes seemed within reach,” Frégnac wrote. “Nowadays, we are drowning in a flood of information. Paradoxically, all sense of global understanding is in acute danger of getting washed away. Each overcoming of technological barriers opens a Pandora’s box by revealing hidden variables, mechanisms and nonlinearities, adding new levels of complexity.”

The neuroscientists Anne Churchland and Larry Abbott have also emphasised our difficulties in interpreting the massive amount of data that is being produced by laboratories all over the world: “Obtaining deep understanding from this onslaught will require, in addition to the skilful and creative application of experimental technologies, substantial advances in data analysis methods and intense application of theoretic concepts and models.”

There are indeed theoretical approaches to brain function, including to the most mysterious thing the human brain can do – produce consciousness. But none of these frameworks are widely accepted, for none has yet passed the decisive test of experimental investigation. It is possible that repeated calls for more theory may be a pious hope. It can be argued that there is no possible single theory of brain function, not even in a worm, because a brain is not a single thing. (Scientists even find it difficult to come up with a precise definition of what a brain is.)

As observed by Francis Crick, the co-discoverer of the DNA double helix, the brain is an integrated, evolved structure with different bits of it appearing at different moments in evolution and adapted to solve different problems. Our current comprehension of how it all works is extremely partial – for example, most neuroscience sensory research has been focused on sight, not smell; smell is conceptually and technically more challenging. But the ways that olfaction and vision work are different, both computationally and structurally. By focusing on vision, we have developed a very limited understanding of what the brain does and how it does it.

The nature of the brain – simultaneously integrated and composite – may mean that our future understanding will inevitably be fragmented and composed of different explanations for different parts. Churchland and Abbott spelled out the implication: “Global understanding, when it comes, will likely take the form of highly diverse panels loosely stitched together into a patchwork quilt.”


For more than half a century, all those highly diverse panels of patchwork we have been working on have been framed by thinking that brain processes involve something like those carried out in a computer. But that does not mean this metaphor will continue to be useful in the future. At the very beginning of the digital age, in 1951, the pioneer neuroscientist Karl Lashley argued against the use of any machine-based metaphor.

“Descartes was impressed by the hydraulic figures in the royal gardens, and developed a hydraulic theory of the action of the brain,” Lashley wrote. “We have since had telephone theories, electrical field theories and now theories based on computing machines and automatic rudders. I suggest we are more likely to find out about how the brain works by studying the brain itself, and the phenomena of behaviour, than by indulging in far-fetched physical analogies.”

This dismissal of metaphor has recently been taken even further by the French neuroscientist Romain Brette, who has challenged the most fundamental metaphor of brain function: coding. Since its inception in the 1920s, the idea of a neural code has come to dominate neuroscientific thinking – more than 11,000 papers on the topic have been published in the past 10 years. Brette’s fundamental criticism was that, in thinking about “code”, researchers inadvertently drift from a technical sense, in which there is a link between a stimulus and the activity of the neuron, to a representational sense, according to which neuronal codes represent that stimulus.

The unstated implication in most descriptions of neural coding is that the activity of neural networks is presented to an ideal observer or reader within the brain, often described as “downstream structures” that have access to the optimal way to decode the signals. But the ways in which such structures actually process those signals is unknown, and is rarely explicitly hypothesised, even in simple models of neural network function.

MRI scan of a brain. Photograph: Getty/iStockphoto

The processing of neural codes is generally seen as a series of linear steps – like a line of dominoes falling one after another. The brain, however, consists of highly complex neural networks that are interconnected, and which are linked to the outside world to effect action. Focusing on sets of sensory and processing neurons without linking these networks to the behaviour of the animal misses the point of all that processing.

By viewing the brain as a computer that passively responds to inputs and processes data, we forget that it is an active organ, part of a body that is intervening in the world, and which has an evolutionary past that has shaped its structure and function. This view of the brain has been outlined by the Hungarian neuroscientist György Buzsáki in his recent book The Brain from Inside Out. According to Buzsáki, the brain is not simply passively absorbing stimuli and representing them through a neural code, but rather is actively searching through alternative possibilities to test various options. His conclusion – following scientists going back to the 19th century – is that the brain does not represent information: it constructs it.

The metaphors of neuroscience – computers, coding, wiring diagrams and so on – are inevitably partial. That is the nature of metaphors, which have been intensely studied by philosophers of science and by scientists, as they seem to be so central to the way scientists think. But metaphors are also rich and allow insight and discovery. There will come a point when the understanding they allow will be outweighed by the limits they impose, but in the case of computational and representational metaphors of the brain, there is no agreement that such a moment has arrived. From a historical point of view, the very fact that this debate is taking place suggests that we may indeed be approaching the end of the computational metaphor. What is not clear, however, is what would replace it.

Scientists often get excited when they realise how their views have been shaped by the use of metaphor, and grasp that new analogies could alter how they understand their work, or even enable them to devise new experiments. Coming up with those new metaphors is challenging – most of those used in the past with regard to the brain have been related to new kinds of technology. This could imply that the appearance of new and insightful metaphors for the brain and how it functions hinges on future technological breakthroughs, on a par with hydraulic power, the telephone exchange or the computer. There is no sign of such a development; despite the latest buzzwords that zip about – blockchain, quantum supremacy (or quantum anything), nanotech and so on – it is unlikely that these fields will transform either technology or our view of what brains do.


One sign that our metaphors may be losing their explanatory power is the widespread assumption that much of what nervous systems do, from simple systems right up to the appearance of consciousness in humans, can only be explained as emergent properties – things that you cannot predict from an analysis of the components, but which emerge as the system functions.

In 1981, the British psychologist Richard Gregory argued that the reliance on emergence as a way of explaining brain function indicated a problem with the theoretical framework: “The appearance of ‘emergence’ may well be a sign that a more general (or at least different) conceptual scheme is needed … It is the role of good theories to remove the appearance of emergence. (So explanations in terms of emergence are bogus.)”

This overlooks the fact that there are different kinds of emergence: weak and strong. Weak emergent features, such as the movement of a shoal of tiny fish in response to a shark, can be understood in terms of the rules that govern the behaviour of their component parts. In such cases, apparently mysterious group behaviours are based on the behaviour of individuals, each of which is responding to factors such as the movement of a neighbour, or external stimuli such as the approach of a predator.
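
As a concrete illustration of that point, the short sketch below simulates a toy shoal in which each fish follows only two local rules, drifting toward its neighbours and fleeing a nearby predator; the group-level scattering emerges without any individual representing it. The rules, constants and coordinates are invented for illustration and are not drawn from any specific model in the literature.

```python
# Toy illustration of "weak emergence": group behaviour (a shoal scattering from
# a predator) falls out of simple per-fish rules. All numbers are made up.
import random

def step(fish, predator, cohesion=0.05, flee=0.5, flee_radius=3.0):
    """Each fish nudges toward the shoal centre and away from a nearby predator."""
    cx = sum(x for x, _ in fish) / len(fish)
    cy = sum(y for _, y in fish) / len(fish)
    moved = []
    for x, y in fish:
        dx, dy = cohesion * (cx - x), cohesion * (cy - y)   # rule 1: drift toward neighbours
        px, py = x - predator[0], y - predator[1]
        if (px * px + py * py) ** 0.5 < flee_radius:        # rule 2: flee a close predator
            dx, dy = dx + flee * px, dy + flee * py
        moved.append((x + dx, y + dy))
    return moved

random.seed(0)
shoal = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
for _ in range(5):
    shoal = step(shoal, predator=(0.0, 0.0))

# No individual fish "knows" the shoal is scattering; the group-level pattern
# emerges from the two local rules above.
spread = max(x for x, _ in shoal) - min(x for x, _ in shoal)
print(f"horizontal spread of the shoal after fleeing: {spread:.2f}")
```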

This kind of weak emergence cannot explain the activity of even the simplest nervous systems, never mind the working of your brain, so we fall back on strong emergence, where the phenomenon that emerges cannot be explained by the activity of the individual components. You and the page you are reading this on are both made of atoms, but your ability to read and understand comes from features that emerge through atoms in your body forming higher-level structures, such as neurons and their patterns of firing – not simply from atoms interacting.

Strong emergence has recently been criticised by some neuroscientists as risking “metaphysical implausibility”, because there is no evident causal mechanism, nor any single explanation, of how emergence occurs. Like Gregory, these critics claim that the reliance on emergence to explain complex phenomena suggests that neuroscience is at a key historical juncture, similar to that which saw the slow transformation of alchemy into chemistry. But faced with the mysteries of neuroscience, emergence is often our only resort. And it is not so daft – the amazing properties of deep-learning programmes, which at root cannot be explained by the people who design them, are essentially emergent properties.

Interestingly, while some neuroscientists are discombobulated by the metaphysics of emergence, researchers in artificial intelligence revel in the idea, believing that the sheer complexity of modern computers, or of their interconnectedness through the internet, will lead to what is dramatically known as the singularity. Machines will become conscious.

There are plenty of fictional explorations of this possibility (in which things often end badly for all concerned), and the subject certainly excites the public’s imagination, but there is no reason, beyond our ignorance of how consciousness works, to suppose that it will happen in the near future. In principle, it must be possible, because the working hypothesis is that mind is a product of matter, which we should therefore be able to mimic in a device. But the scale of complexity of even the simplest brains dwarfs any machine we can currently envisage. For decades – centuries – to come, the singularity will be the stuff of science fiction, not science.

A related view of the nature of consciousness turns the brain-as-computer metaphor into a strict analogy. Some researchers view the mind as a kind of operating system that is implemented on neural hardware, with the implication that our minds, seen as a particular computational state, could be uploaded on to some device or into another brain. In the way this is generally presented, this is wrong, or at best hopelessly naive.

The materialist working hypothesis is that brains and minds, in humans and maggots and everything else, are identical. Neurons and the processes they support – including consciousness – are the same thing. In a computer, software and hardware are separate; however, our brains and our minds consist of what can best be described as wetware, in which what is happening and where it is happening are completely intertwined.

Imagining that we can repurpose our nervous system to run different programmes, or upload our mind to a server, might sound scientific, but lurking behind this idea is a non-materialist view going back to Descartes and beyond. It implies that our minds are somehow floating about in our brains, and could be transferred into a different head or replaced by another mind. It would be possible to give this idea a veneer of scientific respectability by posing it in terms of reading the state of a set of neurons and writing that to a new substrate, organic or artificial.

But to even begin to imagine how that might work in practice, we would need both an understanding of neuronal function that is far beyond anything we can currently envisage, and would require unimaginably vast computational power and a simulation that precisely mimicked the structure of the brain in question. For this to be possible even in principle, we would first need to be able to fully model the activity of a nervous system capable of holding a single state, never mind a thought. We are so far away from taking this first step that the possibility of uploading your mind can be dismissed as a fantasy, at least until the far future.


For the moment, the brain-as-computer metaphor retains its dominance, although there is disagreement about how strong a metaphor it is. In 2015, the roboticist Rodney Brooks chose the computational metaphor of the brain as his pet hate in his contribution to a collection of essays entitled This Idea Must Die. Less dramatically, but drawing similar conclusions, two decades earlier the historian S Ryan Johansson argued that “endlessly debating the truth or falsity of a metaphor like ‘the brain is a computer’ is a waste of time. The relationship proposed is metaphorical, and it is ordering us to do something, not trying to tell us the truth.”

On the other hand, the US expert in artificial intelligence, Gary Marcus, has made a robust defence of the computer metaphor: “Computers are, in a nutshell, systematic architectures that take inputs, encode and manipulate information, and transform their inputs into outputs. Brains are, so far as we can tell, exactly that. The real question isn’t whether the brain is an information processor, per se, but rather how do brains store and encode information, and what operations do they perform over that information, once it is encoded.”

Marcus went on to argue that the task of neuroscience is to “reverse engineer” the brain, much as one might study a computer, examining its components and their interconnections to decipher how it works. This suggestion has been around for some time. In 1989, Crick recognised its attractiveness, but felt it would fail, because of the brain’s complex and messy evolutionary history – he dramatically claimed it would be like trying to reverse engineer a piece of “alien technology”. Attempts to find an overall explanation of how the brain works that flow logically from its structure would be doomed to failure, he argued, because the starting point is almost certainly wrong – there is no overall logic.

Reverse engineering a computer is often used as a thought experiment to show how, in principle, we might understand the brain. Inevitably, these thought experiments are successful, encouraging us to pursue this way of understanding the squishy organs in our heads. But in 2017, a pair of neuroscientists decided to actually do the experiment on a real computer chip, which had a real logic and real components with clearly designed functions. Things did not go as expected.

The duo – Eric Jonas and Konrad Paul Kording – employed the very techniques they normally used to analyse the brain and applied them to the MOS 6507 processor found in computers from the late 70s and early 80s that enabled those machines to run video games such as Donkey Kong and Space Invaders.

First, they obtained the connectome of the chip by scanning the 3510 enhancement-mode transistors it contained and simulating the device on a modern computer (including running the games programmes for 10 seconds). They then used the full range of neuroscientific techniques, such as “lesions” (removing transistors from the simulation), analysing the “spiking” activity of the virtual transistors and studying their connectivity, observing the effect of various manipulations on the behaviour of the system, as measured by its ability to launch each of the games.

Despite deploying this powerful analytical armoury, and despite the fact that there is a clear explanation for how the chip works (it has “ground truth”, in technospeak), the study failed to detect the hierarchy of information processing that occurs inside the chip. As Jonas and Kording put it, the techniques fell short of producing “a meaningful understanding”. Their conclusion was bleak: “Ultimately, the problem is not that neuroscientists could not understand a microprocessor, the problem is that they would not understand it given the approaches they are currently taking.”
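
To give a concrete sense of what a “lesion” analysis of this kind involves, here is a minimal sketch that applies the same knock-one-component-out logic to a made-up Boolean circuit (a one-bit adder) rather than to the real MOS 6507 simulation; the gate names, wiring and test procedure are illustrative assumptions, not details from Jonas and Kording’s study. It shows how easy it is to tabulate which outputs each lesion disrupts, and how little such a table by itself says about how the circuit actually works.

```python
# Toy "lesion" study in the spirit of the microprocessor experiment.
# The circuit is a made-up one-bit full adder, not the MOS 6507: five
# "transistors" (gates) and two behaviours (the sum and carry outputs).
from itertools import product

GATES = {                                   # gate -> (operation, input wires)
    "g1": ("xor", ("a", "b")),
    "g2": ("and", ("a", "b")),
    "g3": ("xor", ("g1", "cin")),
    "g4": ("and", ("g1", "cin")),
    "g5": ("or",  ("g2", "g4")),
}
OUTPUTS = {"sum": "g3", "carry": "g5"}

def run(inputs, lesioned=None):
    """Evaluate the circuit, optionally forcing one gate to 0 (the 'lesion')."""
    values = dict(inputs)
    for gate, (op, srcs) in GATES.items():  # gates listed in topological order
        x, y = (values[s] for s in srcs)
        out = {"xor": x ^ y, "and": x & y, "or": x | y}[op]
        values[gate] = 0 if gate == lesioned else out
    return {name: values[g] for name, g in OUTPUTS.items()}

# Lesion each gate in turn and tabulate which behaviours it disrupts.
test_inputs = [dict(zip(("a", "b", "cin"), bits)) for bits in product((0, 1), repeat=3)]
for gate in GATES:
    broken = {
        name
        for inp in test_inputs
        for name, val in run(inp, lesioned=gate).items()
        if val != run(inp)[name]
    }
    print(f"lesion {gate}: disrupts {sorted(broken) or 'nothing'}")
```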

This sobering outcome suggests that, despite the attractiveness of the computer metaphor and the fact that brains do indeed process information and somehow represent the external world, we still need to make significant theoretical breakthroughs in order to make progress. Even if our brains were designed along logical lines, which they are not, our present conceptual and analytical tools would be completely inadequate for the task of explaining them. This does not mean that simulation projects are pointless – by modelling (or simulating) we can test hypotheses and, by linking the model with well-established systems that can be precisely manipulated, we can gain insight into how real brains function. This is an extremely powerful tool, but a degree of modesty is required when it comes to the claims that are made for such studies, and realism is needed with regard to the difficulties of drawing parallels between brains and artificial systems.

An Atari 2600 console, which contained a MOS 6507 microprocessor. Current ‘reverse engineering’ techniques cannot deliver a proper understanding of an Atari console chip, let alone of a human brain. Photograph: Radharc Images/Alamy

Even something as apparently straightforward as working out the storage capacity of a brain falls apart when it is attempted. Such calculations are fraught with conceptual and practical difficulties. Brains are natural, evolved phenomena, not digital devices. Although it is often argued that particular functions are tightly localised in the brain, as they are in a machine, this certainty has been repeatedly challenged by new neuroanatomical discoveries of unsuspected connections between brain regions, or amazing examples of plasticity, in which people can function normally without bits of the brain that are supposedly devoted to particular behaviours.

In reality, the very structures of a brain and a computer are completely different. In 2006, Larry Abbott wrote an essay titled “Where are the switches on this thing?”, in which he explored the potential biophysical bases of that most elementary component of an electronic device – a switch. Although inhibitory synapses can change the flow of activity by rendering a downstream neuron unresponsive, such interactions are relatively rare in the brain.

A neuron is not like a binary switch that can be turned on or off, forming a wiring diagram. Instead, neurons respond in an analogue way, changing their activity in response to changes in stimulation. The nervous system alters its working by changes in the patterns of activation in networks of cells composed of large numbers of units; it is these networks that channel, shift and shunt activity. Unlike any device we have yet envisaged, the nodes of these networks are not stable points like transistors or valves, but sets of neurons – hundreds, thousands, tens of thousands strong – that can respond consistently as a network over time, even if the component cells show inconsistent behaviour.
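
For a concrete, if deliberately simplified, picture of that difference, the sketch below contrasts a binary switch with a textbook sigmoid rate model whose output varies continuously with its input; it is my own illustration under those stated assumptions, not a model taken from the article.

```python
# Illustrative contrast between a binary switch and an analogue, rate-style unit.
# The sigmoid rate model is a standard simplification chosen for clarity,
# not a claim about how any real neuron computes.
import math

def transistor_switch(voltage, threshold=0.7):
    """A binary switch: the output is a single bit, on or off."""
    return 1 if voltage >= threshold else 0

def rate_neuron(input_current, gain=4.0, bias=2.0):
    """A graded unit: the firing rate varies smoothly with input (sigmoid rate model)."""
    return 1.0 / (1.0 + math.exp(-(gain * input_current - bias)))

for x in (0.0, 0.3, 0.5, 0.7, 0.9):
    print(f"input {x:.1f} -> switch: {transistor_switch(x)}   neuron rate: {rate_neuron(x):.2f}")
```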

Understanding even the simplest of such networks is currently beyond our grasp. Eve Marder, a neuroscientist at Brandeis University, has spent much of her career trying to understand how a few dozen neurons in the lobster’s stomach produce a rhythmic grinding. Despite vast amounts of effort and ingenuity, we still cannot predict the effect of changing one component in this tiny network that is not even a simple brain.

This is the great problem we have to solve. On the one hand, brains are made of neurons and other cells, which interact together in networks, the activity of which is influenced not only by synaptic activity, but also by various factors such as neuromodulators. On the other hand, it is clear that brain function involves complex dynamic patterns of neuronal activity at a population level. Finding the link between these two levels of analysis will be a challenge for much of the rest of the century, I suspect. And the prospect of properly understanding what is happening in cases of mental illness is even further away.

Not all neuroscientists are pessimistic – some confidently claim that the application of new mathematical methods will enable us to understand the myriad interconnections in the human brain. Others – like myself – favour studying animals at the other end of the scale, focusing our attention on the tiny brains of worms or maggots and employing the well-established approach of seeking to understand how a simple system works and then applying those lessons to more complex cases. Many neuroscientists, if they think about the problem at all, simply consider that progress will inevitably be piecemeal and slow, because there is no grand unified theory of the brain lurking around the corner.


There are many alternative scenarios about how the future of our understanding of the brain could play out: perhaps the various computational projects will come good and theoreticians will crack the functioning of all brains, or the connectomes will reveal principles of brain function that are currently hidden from us. Or a theory will somehow pop out of the vast amounts of imaging data we are generating. Or we will slowly piece together a theory (or theories) out of a series of separate but satisfactory explanations. Or by focusing on simple neural network principles we will understand higher-level organisation. Or some radical new approach integrating physiology and biochemistry and anatomy will shed decisive light on what is going on. Or new comparative evolutionary studies will show how other animals are conscious and provide insight into the functioning of our own brains. Or unimagined new technology will change all our views by providing a radical new metaphor for the brain. Or our computer systems will provide us with alarming new insight by becoming conscious. Or a new framework will emerge from cybernetics, control theory, complexity and dynamical systems theory, semantics and semiotics. Or we will accept that there is no theory to be found because brains have no overall logic, just adequate explanations of each tiny part, and we will have to be satisfied with that. Or –

This is an edited extract from The Idea of the Brain by Matthew Cobb, which will be published in the UK by Profile on 12 March, and in the US by Basic Books on 21 April, and is available at guardianbookshop.com


Meet the Woman Bringing Social Justice to Astrology

Chani Nicholas is transforming horoscopes from quips about finding true love and stumbling into financial good fortune to pointed calls to action.

Rolling Stone|getpocket.com

  • Ariana Igneri

Chani Nicholas is bringing social justice to online astrology. Photo by Mel Melcon/Los Angeles Times/Getty.

Chani Nicholas doesn’t care for the hulking Alex Katz painting, depicting a trio of suited white men, hanging behind the front desk of the Langham hotel in New York. It reminds her of the patriarchy, she tells me one rainy, starless night in February, as we take the elevator up to her hotel suite and sit on the couch. We’re wrapping up a conversation about privilege, gender equality and the zodiac when Nicholas, who’s become popular on Instagram as a kind of social-justice astrologer, notices a different art piece hovering behind her. This one, she likes. The painting, titled “Mona,” portrays a woman who bears a striking resemblance to Nicholas – dark hair with tight curls, sharp brown eyes, a strong jawline. She compares it to the painting in the lobby. “The hotel staff must’ve known not to put me in a room with a bunch of weird guys on the wall,” she says. “I’m basically an angry feminist who just happens to be into astrology and healing.”

Nicholas, in her 40s, is transforming horoscopes from generalizations about finding true love and stumbling into financial good fortune to pointed calls to action with a left-leaning, social-justice agenda. Based in Los Angeles, she has more than 100,000 followers on Instagram and a blog with as many as one million monthly readers. She weaves activism into the majority of her writing, appealing to a generation particularly interested in issues like racism, sexism and gun control.

Nicholas used November 2017’s mercury retrograde to urge her followers to contact the FCC prior to its vote on net neutrality. She wrote about the new moon in Scorpio representing the need to heal during the initial wave of sexual assault accusations in Hollywood. She’s posted about DACA and the border wall and has even been promoting an online tool called FreeFrom, which was started by her wife Sonya Passi to help victims of domestic violence understand how to pursue financial compensation. In fact, last October, Nicholas raised $40 thousand for FreeFrom by offering her followers a chance to win one of five free astrology readings, with the majority of people donating just $5 to $10. “I love that we can use my platform to create wealth and work for folks that need access,” Nicholas says.

Nicholas’s popularity owes as much to the Internet’s ability to foster communities around niche interests as it does to the current political and social landscape. And for those who’ve tapped into this unique convergence, it’s good business. The psychic services industry, which includes astrology and palmistry, among other services, is worth $2 billion annually, per data from industry analysis firm IBIS World. It grew by two percent between 2011 and 2016, but Nicholas, who offers astrology workshops through her website for $38 to $48 a pop, says she’s seen demand for her classes increase by more than half in the last year alone. Things are becoming so busy that Nicholas recently posted a job listing for a personal assistant on her Instagram story. The annual salary for a six-hour, five-day workweek is $60,000.

Lately, Nicholas has been tied up working on her first book, which was released in January 2020 through HarperCollins. Like her online writing, Nicholas’s book explains how to interpret the stars’ movements from a political perspective, which is exactly what Anna Paustenbach, Nicholas’s editor at HarperCollins, says makes her work unique. “Chani’s social justice angle is both timely and timeless,” Paustenbach says. “Right now, there’s a distrust among young people in a lot of things, like religion and government, but Chani’s astrology is helping them find a sense of purpose and belonging.”

Nicholas’s work helped Jen Richards and Laura Zak, creators of the 2016 Emmy-nominated web series HerStory, find exactly that. They discovered Nicholas on Instagram while they were starting production of the show, which explores the dating lives of trans and queer women. “Chani’s work connects people to themselves,” Zak says. “I think a testament to the strength of an expression of art is its ability to resonate with people, and to help them heal.” Zak used to work at V-Day, Eve Ensler’s global nonprofit to end violence against women and girls, and sees both HerStory and Nicholas’s astrology as “artivism.” “We weren’t just trying to make entertainment,” Richards says. “We were trying to enact social change.”

Richards, a trans activist known for her role as Allyson Del Lago on the fifth season of Nashville, got hooked on Nicholas because of the way her horoscopes combined astrological concepts with ideas like intersectional feminism and international politics. Although Richards grew up surrounded by astrologers, numerologists and reiki masters, she says that their teachings were always divorced from material reality. “The fact that Chani brought these spheres together was a revelation,” Richards says. “They never should’ve been separated in the first place.”


Devout followers of Nicholas posit that her brand of astrology is unlike anything they’ve seen before. But Nicholas Campion, an author and historian of astrology, explains that astrologers have pushed a progressive agenda throughout history. Campion points to Nicholas Culpepper, an astrological herbalist, who went against the establishment in the 1640s because he believed that medical knowledge belonged to the people rather than exclusively to physicians. Around the time of the English Civil War, Culpepper wrote astrological predictions in favor of overthrowing the Crown, including a prediction that the eclipse of 1652 would usher in the rise of republicanism in Europe. “He didn’t care if his predictions were true or not,” Campion says, “so long as they encouraged the enemies of monarchy.”

While Nicholas is by far the most dominant voice in the astrological community fusing politics with the zodiac today, there are others who occasionally do so as well. There’s Barry Perlman, who’s written about Mars in the context of queer politics, and the AstroTwins, who write regularly for Refinery29, ELLE and their own website, Astrostyle. Following the 2016 presidential election, the AstroTwins, Ophira and Tali Edut, wrote a piece suggesting that the outcome of the vote could be attributed to the fact that the month before “aggressive Mars and powermonger Pluto were both in Capricorn, the sign that rules the patriarchy.” In the post, the Eduts urged their readers to stand up against the man in the Oval Office. “It’s time to lean in, to raise our voices, to protest and fight like hell when the new powers that be try to strip our rights away,” they wrote. “Because from the looks of it, they will.”

Nicholas attributes her interest in social justice to her politicized education both in Canada and San Francisco. She had her first full astrology reading when she was 12, after her parents went through a divorce, and she found solace in her step-grandmother, who was a reiki master. When Nicholas was 20, she went to school for what she says might now be called “feminist counseling,” learning how to work with victims of domestic violence. “It was this small cohort of students and teachers from diverse backgrounds,” Nicholas says, noting that the program consisted entirely of women. “It really radicalized me, very young.” Two years later, Nicholas moved to Los Angeles to pursue a career in acting. She found the industry toxic and tried out a range of odd jobs instead, waiting tables, teaching yoga, counseling and reading her friends’ charts. Eventually, Nicholas enrolled in the California Institute of Integral Studies to complete her BA degree, where she immersed herself in the work of authors like bell hooks, an activist recognized for her writing on the intersection of race, capitalism and gender. The program, Nicholas says, made her think about systems of oppression and what she calls, “healing justice,” asking questions like, “Who gets to heal? Who gets the time? Who gets the resources?”

Nicholas talks about her horoscopes as providing access to restorative measures for all sorts of people, especially to those who might not be able to afford expensive therapy sessions or week-long yoga retreats. She’s careful to distance herself from the self-help gurus of the 1980s as well as more modern wellness brands, like Gwyneth Paltrow’s company Goop. To Nicholas, healing shouldn’t exclusively belong to the elite, and it shouldn’t assume everyone is coming from the same set of life experiences. “When you’re struggling to take care of your kids, pay the bills and worrying about whether your child is gonna get shot by the police, a f*cking rose quartz isn’t going to help you,” she says.


Not only is the Internet making previously privileged spaces more accessible, it’s also launching astrology into the mainstream. But Jenna Wortham, a follower of Nicholas and a technology and culture reporter at The New York Times, is hesitant to call astrology’s current status a resurgence. “I think the Internet is really good at helping like-minded individuals find each other and affirm each other,” she says. “I know a lot of people in my life who don’t give a shit about astrology and think that my interest in star signs is ludicrous and laughable, but I don’t have to talk to them,” she says.

In fact, Campion, the historian, says that astrology might not be any more popular than it ever was. The only thing that’s changed, he says, is the technology. “There was a boom in the Thirties when horoscope columns began,” he says. “You could argue that they created an interest in astrology, but I’d argue that they became an instant success because they appealed to an existing interest.”

No matter what you call it, astrology’s moment wouldn’t be what it is without the Internet. Wortham thinks that the millennial interest in astrology has to do with the correction of an imbalance, in which people are looking at their relationship to technology and finding it, at least to a degree, unnatural. Because social media and the Internet require people to externalize so much of their lives, people are looking for ways to be more introspective, she says. “In the same way that we’re like, ‘What’s the quality of the food that we’re eating?’ we’re now like, ‘How are we living? Is there a better way to live?’”

In 2017, Wortham went through a difficult breakup and decided to switch neighborhoods in Brooklyn. She had recently connected with Nicholas, who was intently advising her to move into a new apartment, build out her creative space and to do it quickly. The planets were shifting, she said. There’d be turbulence. Wortham would have to watch her finances. “I took Chani’s advice, and I made it happen,” says Wortham, who was recently accepted into the MacDowell Colony fellowship for writing, whose alumni include authors like James Baldwin, Alice Walker and Toni Morrison. “When I think back on it, I don’t think it would’ve been as easy for me to manage all the influxes of opportunity had my house not been in order.” Nicholas’s guidance, Wortham says, helped her affirm whether she was doing the right thing. “It’s cool feeling like there’s something correlating in the cosmos and on the earth,” she says.

That correlation, Nicholas contends, is inextricably political – and she has no qualms about expressing her point of view in her horoscopes. “I’m not neutral about the things that happen that I find to be unjust,” Nicholas says. “Maybe that makes me not the best astrologer, but if you want that, then you have to go to someone else.”

And some of her followers have. “People reach out saying, ‘I’ve been following you for years, and I’m so upset that this week you decided to bring your politics into this,’” says Sonya Passi, Nicholas’s wife and manager. “I’m like, ‘What have you been reading? What did you think Chani was talking about all this time?’”

Nicholas’s response to her disgruntled fans is simple. “I’m a stranger writing something for a million people,” she says. “Don’t take it too seriously. If it helps you heal or navigate through our current crises of humanity, great. If it doesn’t fit you, move on.”

This article was originally published on June 1, 2018, by Rolling Stone, and is republished here with permission.

10 Breakthrough Technologies 2020

  • February 26, 2020 (technologyreview.com)

Here is our annual list of technological advances that we believe will make a real difference in solving important problems. How do we pick? We avoid the one-off tricks, the overhyped new gadgets. Instead we look for those breakthroughs that will truly change how we live and work.

  1. Unhackable internet
  2. Hyper-personalized medicine
  3. Digital money
  4. Anti-aging drugs
  5. AI-discovered molecules
  6. Satellite mega-constellations
  7. Quantum supremacy
  8. Tiny AI
  9. Differential privacy
  10. Climate change attribution

We’re excited to announce that with this year’s list we’re also launching our very first editorial podcast, Deep Tech, which will explore the people, places, and ideas featured in our most ambitious journalism. Have a listen here.

Unhackable internet

Later this year, Dutch researchers will complete a quantum internet between Delft and The Hague. Illustration by Yoshi Sodeoka.

An internet based on quantum physics will soon enable inherently secure communication. A team led by Stephanie Wehner, at Delft University of Technology, is building a network connecting four cities in the Netherlands entirely by means of quantum technology. Messages sent over this network will be unhackable.

Unhackable Internet
  • Why it matters: The internet is increasingly vulnerable to hacking; a quantum one would be unhackable.
  • Key players: Delft University of Technology
    Quantum Internet Alliance
    University of Science and Technology of China
  • Availability: 5 years

In the last few years, scientists have learned to transmit pairs of photons across fiber-optic cables in a way that absolutely protects the information encoded in them. A team in China used a form of the technology to construct a 2,000-kilometer network backbone between Beijing and Shanghai—but that project relies partly on classical components that periodically break the quantum link before establishing a new one, introducing the risk of hacking.

The Delft network, in contrast, will be the first to transmit information between cities using quantum techniques from end to end.

The technology relies on a quantum behavior of atomic particles called entanglement. Entangled photons can’t be covertly read without disrupting their content.
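
To make that eavesdropping point concrete, here is a deliberately crude, purely classical simulation of the sift-and-compare idea behind entanglement-based key distribution. It is an illustration only, not the Delft protocol: the pair outcomes, measurement bases, and interceptor below are stand-ins modeled with ordinary random numbers.

```python
# Toy classical simulation of entanglement-style key sifting (illustrative only;
# a real quantum link cannot be reproduced faithfully with classical randomness).
import random

def run_link(n_pairs=10_000, eavesdropper=False):
    alice_key, bob_key = [], []
    for _ in range(n_pairs):
        bit = random.randint(0, 1)       # correlated outcome carried by the entangled pair
        a_basis = random.randint(0, 1)   # Alice picks a measurement basis at random
        b_basis = random.randint(0, 1)   # Bob picks his own basis, independently
        a_bit, b_bit = bit, bit
        if eavesdropper and random.randint(0, 1) != a_basis:
            # An interceptor measuring in the wrong basis disturbs the pair,
            # randomizing Bob's outcome and leaving a statistical fingerprint.
            b_bit = random.randint(0, 1)
        if a_basis == b_basis:           # keep only rounds where the bases match ("sifting")
            alice_key.append(a_bit)
            bob_key.append(b_bit)
    errors = sum(a != b for a, b in zip(alice_key, bob_key))
    return errors / len(alice_key)

print(f"error rate on a clean link:      {run_link():.3f}")                    # ~0.00
print(f"error rate with an eavesdropper: {run_link(eavesdropper=True):.3f}")   # ~0.25
```

Comparing a random sample of their sifted bits tells the two endpoints whether anyone was listening; in this toy model an interceptor pushes the error rate from near zero to about 25 percent.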

But entangled particles are difficult to create, and harder still to transmit over long distances. Wehner’s team has demonstrated it can send them more than 1.5 kilometers (0.93 miles), and it is confident it can set up a quantum link between Delft and The Hague by around the end of this year. Ensuring an unbroken connection over greater distances will require quantum repeaters that extend the network.

Such repeaters are currently in design at Delft and elsewhere. The first should be completed in the next five to six years, says Wehner, with a global quantum network following by the end of the decade.

Russ Juskalian

Hyper-personalized medicine

Novel drugs are being designed to treat unique genetic mutations.

Illustration by Julia Dufossé.

Here’s a definition of a hopeless case: a child with a fatal disease so exceedingly rare that not only is there no treatment, there’s not even anyone in a lab coat studying it. “Too rare to care,” goes the saying.

Hyper-personalized Medicine
  • Why it matters: Genetic medicine tailored to a single patient means hope for people whose ailments were previously incurable.
  • Key players: A-T Children’s Project
    Boston Children’s Hospital
    Ionis Pharmaceuticals
    US Food & Drug Administration
  • Availability: Now

That’s about to change, thanks to new classes of drugs that can be tailored to a person’s genes. If an extremely rare disease is caused by a specific DNA mistake—as several thousand are—there’s now at least a fighting chance for a genetic fix.

One such case is that of Mila Makovec, a little girl suffering from a devastating illness caused by a unique genetic mutation, who got a drug manufactured just for her. Her case made the New England Journal of Medicine in October, after doctors moved from a readout of her genetic error to a treatment in just a year. They called the drug milasen, after her.

The treatment hasn’t cured Mila. But it seems to have stabilized her condition: it has reduced her seizures, and she has begun to stand and walk with assistance.

Mila’s treatment was possible because creating a gene medicine has never been faster or had a better chance of working. The new medicines might take the form of gene replacement, gene editing, or antisense (the type Mila received), a sort of molecular eraser, which erases or fixes erroneous genetic messages. What the treatments have in common is that they can be programmed, in digital fashion and with digital speed, to correct or compensate for inherited diseases, letter for DNA letter.
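
As a rough illustration of what “antisense” means at the sequence level, the sketch below builds the reverse-complement oligo for a stretch of target mRNA, which is the basic pairing logic such drugs exploit. The sequence is invented for the example, not milasen’s, and real oligos use modified chemistries that this toy ignores.

```python
# Toy illustration of the antisense idea: an oligo that is the reverse complement
# of a stretch of target mRNA, so it base-pairs with (and masks) that stretch.
# The target sequence below is made up, not the actual milasen target.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(mrna_fragment: str) -> str:
    """Return the reverse-complement RNA oligo for a target mRNA fragment."""
    return "".join(COMPLEMENT[base] for base in reversed(mrna_fragment))

target = "AUGGCUUACGGA"      # hypothetical stretch of a faulty transcript
print(antisense(target))     # -> UCCGUAAGCCAU
```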

How many stories like Mila’s are there? So far, just a handful.

But more are on the way. Where researchers would have once seen obstacles and said “I’m sorry,” they now see solutions in DNA and think maybe they can help.

The real challenge for “n-of-1” treatments (a reference to the number of people who get the drug) is that they defy just about every accepted notion of how pharmaceuticals should be developed, tested, and sold. Who will pay for these drugs when they help one person, but still take large teams to design and manufacture?
Antonio Regalado

Digital money

The rise of digital currency has massive ramifications for financial privacy.

Last June Facebook unveiled a “global digital currency” called Libra. The idea triggered a backlash and Libra may never launch, at least not in the way it was originally envisioned. But it’s still made a difference: just days after Facebook’s announcement, an official from the People’s Bank of China implied that it would speed the development of its own digital currency in response. Now China is poised to become the first major economy to issue a digital version of its money, which it intends as a replacement for physical cash.

Digital money
  • Why it matters: As the use of physical cash declines, so does the freedom to transact without an intermediary. Meanwhile, digital currency technology could be used to splinter the global financial system.
  • Key players: People’s Bank of China
    Facebook
  • Availability: This year

China’s leaders apparently see Libra, meant to be backed by a reserve that will be mostly US dollars, as a threat: it could reinforce America’s disproportionate power over the global financial system, which stems from the dollar’s role as the world’s de facto reserve currency. Some suspect China intends to promote its digital renminbi internationally.

Now Facebook’s Libra pitch has become geopolitical. In October, CEO Mark Zuckerberg promised Congress that Libra “will extend America’s financial leadership as well as our democratic values and oversight around the world.” The digital money wars have begun.
Mike Orcutt

Anti-aging drugs

Drugs that try to treat ailments by targeting a natural aging process in the body have shown promise. Illustration by Yoshi Sodeoka.

The first wave of a new class of anti-aging drugs has begun human testing. These drugs won’t let you live longer (yet) but aim to treat specific ailments by slowing or reversing a fundamental process of aging.

Anti-aging drugs
  • Why it matters: A number of different diseases, including cancer, heart disease, and dementia, could potentially be treated by slowing aging.
  • Key players: Unity Biotechnology
    Alkahest
    Mayo Clinic
    Oisín Biotechnologies
    Siwa Therapeutics
  • Availability: Less than 5 years

The drugs are called senolytics—they work by removing certain cells that accumulate as we age. Known as “senescent” cells, they can create low-level inflammation that suppresses normal mechanisms of cellular repair and creates a toxic environment for neighboring cells.

In June, San Francisco–based Unity Biotechnology reported initial results in patients with mild to severe osteoarthritis of the knee. Results from a larger clinical trial are expected in the second half of 2020. The company is also developing similar drugs to treat age-related diseases of the eyes and lungs, among other conditions.

Senolytics are now in human tests, along with a number of other promising approaches targeting the biological processes that lie at the root of aging and various diseases.

A company called Alkahest injects patients with components found in young people’s blood and says it hopes to halt cognitive and functional decline in patients suffering from mild to moderate Alzheimer’s disease. The company also has drugs for Parkinson’s and dementia in human testing. 

And in December, researchers at Drexel University College of Medicine even tried to see if a cream including the immune-suppressing drug rapamycin could slow aging in human skin.

The tests reflect researchers’ expanding efforts to learn if the many diseases associated with getting older—such as heart diseases, arthritis, cancer, and dementia—can be hacked to delay their onset.
Adam Piore

AI-discovered molecules

Scientists have used AI to discover promising drug-like compounds.

The universe of molecules that could be turned into potentially life-saving drugs is mind-boggling in size: researchers estimate the number at around 10^60. That’s more than all the atoms in the solar system, offering virtually unlimited chemical possibilities—if only chemists could find the worthwhile ones.

Now machine-learning tools can explore large databases of existing molecules and their properties, using the information to generate new possibilities. This could make it faster and cheaper to discover new drug candidates.
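
The generative models themselves are far too large to show here, but the database-screening half of the idea can be sketched: score candidate molecules against simple property cutoffs and keep the plausible ones. The snippet below uses the open-source RDKit toolkit on a handful of arbitrary example molecules with Lipinski-style thresholds; it is a stand-in for illustration, not any company’s actual pipeline.

```python
# Crude stand-in for the property-screening step: filter candidate molecules
# (as SMILES strings) with simple drug-likeness cutoffs using RDKit.
# The candidates and thresholds are arbitrary examples.
from rdkit import Chem
from rdkit.Chem import Descriptors

candidates = ["CCO", "CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1", "CCN(CC)CC"]

def looks_drug_like(smiles: str) -> bool:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                               # skip unparsable SMILES
        return False
    return (Descriptors.MolWt(mol) <= 500         # Lipinski-style weight cutoff
            and Descriptors.MolLogP(mol) <= 5)    # and lipophilicity cutoff

print([s for s in candidates if looks_drug_like(s)])
```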

In September, a team of researchers at Hong Kong–based Insilico Medicine and the University of Toronto took a convincing step toward showing that the strategy works by synthesizing several drug candidates found by AI algorithms.

AI-discovered molecules
  • Why it matters: Commercializing a new drug costs around $2.5 billion on average. One reason is the difficulty of finding promising molecules.
  • Key players: Insilico Medicine
    Kebotix
    Atomwise
    University of Toronto
    BenevolentAI
    Vector Institute
  • Availability: 3-5 years

Using techniques like deep learning and generative models similar to the ones that allowed a computer to beat the world champion at the ancient game of Go, the researchers identified some 30,000 novel molecules with desirable properties. They selected six to synthesize and test. One was particularly active and proved promising in animal tests.

Chemists in drug discovery often dream up new molecules—an art honed by years of experience and, among the best drug hunters, by a keen intuition. Now these scientists have a new tool to expand their imaginations.
David Rotman

Satellite mega-constellations

We can now affordably build, launch, and operate tens of thousands of satellites in orbit at once.

Illustration by Julia Dufossé.

These constellations are made up of satellites that can beam a broadband connection to internet terminals. As long as these terminals have a clear view of the sky, they can deliver internet to any nearby devices. SpaceX alone wants to send more than 4.5 times as many satellites into orbit this decade as humans have launched since Sputnik.

Satellite mega-constellations
  • Why it matters: These systems can blanket the globe with high-speed internet—or turn Earth’s orbit into a junk-ridden minefield.
  • Key players: SpaceX
    OneWeb
    Amazon
    Telesat
  • Availability: Now

These mega-constellations are feasible because we have learned how to build smaller satellites and launch them more cheaply. During the space shuttle era, launching a satellite into space cost roughly $24,800 per pound. A small communications satellite that weighed four tons cost nearly $200 million to fly up.

Today a SpaceX Starlink satellite weighs about 500 pounds (227 kilograms). Reusable architecture and cheaper manufacturing mean we can strap dozens of them onto rockets to greatly lower the cost; a SpaceX Falcon 9 launch today costs about $1,240 per pound.
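
A quick back-of-envelope check, using only the figures quoted above, shows how dramatic the shift is on a per-satellite basis:

```python
# Rough comparison of launch costs using the per-pound figures quoted above.
starlink_weight_lb = 500
shuttle_cost_per_lb = 24_800       # space shuttle era
falcon9_cost_per_lb = 1_240        # SpaceX Falcon 9 today

print(f"shuttle era: ${starlink_weight_lb * shuttle_cost_per_lb:,}")   # $12,400,000
print(f"Falcon 9:    ${starlink_weight_lb * falcon9_cost_per_lb:,}")   # $620,000
print(f"about {shuttle_cost_per_lb / falcon9_cost_per_lb:.0f}x cheaper per pound")  # about 20x
```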

The first 120 Starlink satellites went up last year, and the company planned to launch batches of 60 every two weeks starting in January 2020. OneWeb will launch over 30 satellites later this year. We could soon see thousands of satellites working in tandem to supply internet access for even the poorest and most remote populations on the planet.

But that’s only if things work out. Some researchers are livid because they fear these objects will disrupt astronomy research. Worse is the prospect of a collision that could cascade into a catastrophe of millions of pieces of space debris, making satellite services and future space exploration next to impossible. Starlink’s near-miss with an ESA weather satellite in September was a jolting reminder that the world is woefully unprepared to manage this much orbital traffic. What happens with these mega-constellations this decade will define the future of orbital space.

Neel V. Patel

Quantum supremacy

Google has provided the first clear proof of a quantum computer outperforming a classical one. Illustration by Yoshi Sodeoka.

Quantum computers store and process data in a way completely different from the computers we’re all used to. In theory, they could tackle certain classes of problems that even the most powerful classical supercomputer imaginable would take millennia to solve, like breaking today’s cryptographic codes or simulating the precise behavior of molecules to help discover new drugs and materials.

Quantum supremacy
  • Why it matters: Eventually, quantum computers will be able to solve problems no classical machine can manage.
  • Key players: Google
    IBM
    Microsoft
    Rigetti
    D-Wave
    IonQ
    Zapata Computing
    Quantum Circuits
  • Availability: 5-10+ years

There have been working quantum computers for several years, but it’s only under certain conditions that they outperform classical ones, and in October Google claimed the first such demonstration of “quantum supremacy.” A computer with 53 qubits—the basic unit of quantum computation—did a calculation in a little over three minutes that, by Google’s reckoning, would have taken the world’s biggest supercomputer 10,000 years, or 1.5 billion times as long. IBM challenged Google’s claim, saying the speedup would be a thousandfold at best; even so, it was a milestone, and each additional qubit will make the computer twice as fast.
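
The “1.5 billion times” figure is easy to sanity-check from the numbers above, taking “a little over three minutes” as roughly 200 seconds:

```python
# Sanity check on the speedup Google claimed for its 53-qubit machine.
seconds_per_year = 365.25 * 24 * 3600
supercomputer_estimate = 10_000 * seconds_per_year   # Google's 10,000-year estimate, in seconds
sycamore_runtime = 200                               # "a little over three minutes", in seconds

print(f"{supercomputer_estimate / sycamore_runtime:.2e}")   # ~1.6e+09, on the order of 1.5 billion
```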

However, Google’s demo was strictly a proof of concept—the equivalent of doing random sums on a calculator and showing that the answers are right. The goal now is to build machines with enough qubits to solve useful problems. This is a formidable challenge: the more qubits you have, the harder it is to maintain their delicate quantum state. Google’s engineers believe the approach they’re using can get them to somewhere between 100 and 1,000 qubits, which may be enough to do something useful—but nobody is quite sure what.

And beyond that? Machines that can crack today’s cryptography will require millions of qubits; it will probably take decades to get there. But one that can model molecules should be easier to build.

Gideon Lichfield

Tiny AI

We can now run powerful AI algorithms on our phones.

Illustration by Julia Dufossé.

AI has a problem: in the quest to build more powerful algorithms, researchers are using ever greater amounts of data and computing power, and relying on centralized cloud services. This not only generates alarming amounts of carbon emissions but also limits the speed and privacy of AI applications.

Tiny AI
  • Why it matters: Our devices no longer need to talk to the cloud for us to benefit from the latest AI-driven features.
  • Key players: Google
    IBM
    Apple
    Amazon
  • Availability: Now

But a countertrend of tiny AI is changing that. Tech giants and academic researchers are working on new algorithms to shrink existing deep-learning models without losing their capabilities. Meanwhile, an emerging generation of specialized AI chips promises to pack more computational power into tighter physical spaces, and train and run AI on far less energy.

These advances are just starting to become available to consumers. Last May, Google announced that it can now run Google Assistant on users’ phones without sending requests to a remote server. As of iOS 13, Apple runs Siri’s speech recognition capabilities and its QuickType keyboard locally on the iPhone. IBM and Amazon now also offer developer platforms for making and deploying tiny AI.
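
One of the shrinking techniques alluded to above is post-training quantization, which stores a model’s weights as 8-bit integers instead of 32-bit floats. The sketch below shows the idea in PyTorch on a throwaway toy model; it is illustrative only, not how Google, Apple, or Amazon actually build their on-device assistants.

```python
# Minimal sketch of dynamic quantization in PyTorch: the Linear layers' weights
# are stored as int8, shrinking the model and speeding up CPU inference.
import torch
import torch.nn as nn

model = nn.Sequential(            # stand-in for a much larger deep-learning model
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 256)
print(quantized(x).shape)         # same interface as the original model
```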

All this could bring about many benefits. Existing services like voice assistants, autocorrect, and digital cameras will get better and faster without having to ping the cloud every time they need access to a deep-learning model. Tiny AI will also make new applications possible, like mobile-based medical-image analysis or self-driving cars with faster reaction times. Finally, localized AI is better for privacy, since your data no longer needs to leave your device to improve a service or a feature.

But as the benefits of AI become distributed, so will all its challenges. It could become harder to combat surveillance systems or deepfake videos, for example, and discriminatory algorithms could also proliferate. Researchers, engineers, and policymakers need to work together now to develop technical and policy checks on these potential harms.
Karen Hao

Differential privacy

A technique to measure the privacy of a crucial data set.

In 2020, the US government has a big task: collect data on the country’s 330 million residents while keeping their identities private. The data is released in statistical tables that policymakers and academics analyze when writing legislation or conducting research. By law, the Census Bureau must make sure the data can’t be traced back to any individual.

But there are tricks to “de-anonymize” individuals, especially if the census data is combined with other public statistics.

Differential privacy
  • Why it matters: It is increasingly difficult for the US Census Bureau to keep the data it collects private. A technique called differential privacy could solve that problem, build trust, and also become a model for other countries.
  • Key players: US Census Bureau
    Apple
    Facebook
  • Availability: Its use in the 2020 US Census will be the biggest-scale application yet.

So the Census Bureau injects inaccuracies, or “noise,” into the data. It might make some people younger and others older, or label some white people as black and vice versa, while keeping the totals of each age or ethnic group the same. The more noise you inject, the harder de-anonymization becomes.

Differential privacy is a mathematical technique that makes this process rigorous by measuring how much privacy increases when noise is added. The method is already used by Apple and Facebook to collect aggregate data without identifying particular users.
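
At its core the idea is simple enough to sketch in a few lines: add noise drawn from a Laplace distribution, with the scale set by the query’s sensitivity divided by the privacy parameter epsilon. The numbers below are illustrative; the Census Bureau’s production system is far more elaborate.

```python
# Minimal sketch of the Laplace mechanism behind differential privacy.
import numpy as np

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy by adding Laplace noise."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

true_count = 1_284                                # e.g. residents in some census block
print(private_count(true_count, epsilon=1.0))     # small noise, weaker privacy guarantee
print(private_count(true_count, epsilon=0.1))     # large noise, stronger privacy guarantee
```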

But too much noise can render the data useless. One analysis showed that a differentially private version of the 2010 Census included households that supposedly had 90 people.

If all goes well, the method will likely be used by other federal agencies. Countries like Canada and the UK are watching too.
Angela Chen

Climate change attribution

Researchers can now spot climate change’s role in extreme weather. Illustration by Yoshi Sodeoka.

Ten days after Tropical Storm Imelda began flooding neighborhoods across the Houston area last September, a rapid-response research team announced that climate change almost certainly played a role.

Climate change attribution
  • Why it matters: It’s providing a clearer sense of how climate change is worsening the weather, and what we’ll need to do to prepare.
  • Key players: World Weather Attribution
    Royal Netherlands Meteorological Institute
    Red Cross Red Crescent Climate Centre
    University of Oxford
  • Availability: Now

The group, World Weather Attribution, had compared high-resolution computer simulations of worlds where climate change did and didn’t occur. In the former, the world we live in, the severe storm was as much as 2.6 times more likely—and up to 28% more intense.
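
The arithmetic behind a figure like “2.6 times more likely” is straightforward once the two simulation ensembles exist: compare how often an event of that severity shows up in each. The counts below are invented for illustration (chosen to reproduce the 2.6 ratio), not World Weather Attribution’s actual data.

```python
# Illustrative attribution arithmetic with made-up ensemble counts.
events_with_warming = 130       # simulated runs matching the storm's severity, current climate
events_without_warming = 50     # matching runs in the counterfactual, no-warming climate
n_runs = 10_000                 # simulations in each ensemble

p_with = events_with_warming / n_runs
p_without = events_without_warming / n_runs

print(f"probability ratio: {p_with / p_without:.1f}x more likely")      # 2.6x
print(f"fraction of attributable risk: {1 - p_without / p_with:.2f}")   # 0.62
```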

Earlier this decade, scientists were reluctant to link any specific event to climate change. But many more extreme-weather attribution studies have been done in the last few years, and rapidly improving tools and techniques have made them more reliable and convincing.

This has been made possible by a combination of advances. For one, the lengthening record of detailed satellite data is helping us understand natural systems. Also, increased computing power means scientists can create higher-resolution simulations and conduct many more virtual experiments.

These and other improvements have allowed scientists to state with increasing statistical certainty that yes, global warming is often fueling more dangerous weather events. 

By disentangling the role of climate change from other factors, the studies are telling us what kinds of risks we need to prepare for, including how much flooding to expect and how severe heat waves will get as global warming becomes worse. If we choose to listen, they can help us understand how to rebuild our cities and infrastructure for a climate-changed world.
James Temple

Chinese Rover Discovers It’s Sitting on 39 Feet of Moon Dust

“That’s a lot of regolith.”

VICTOR TANGERMANN February 27, 2020 (futurism.com)

Researchers at the Chinese Academy of Sciences in Beijing have started analyzing data collected by the country’s Yutu-2 Moon rover’s ground-penetrating radar. The instrument peered 40 meters (131 feet) below the lunar surface — and found it was sitting on top of a mountain of fine dust.

China’s Chang’e 4 lander touched down on the far side of the Moon in January 2019, becoming the first man-made object to do so. Shortly after, it deployed the rover Yutu-2 from its belly. The rover has been exploring the South Pole-Aitken basin, the largest and oldest crater on the Moon, ever since.

Using high-frequency radar to look beneath the surface, it found that it was sitting on top of 12 meters (39 feet) of fine Moon dust, as detailed in a paper published in the journal Science Advances on Wednesday.
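
For a sense of how a radar echo becomes a depth figure, ground-penetrating radar converts an echo’s two-way travel time into depth using the wave speed in the material, which depends on its relative permittivity. The sketch below uses a generic ballpark permittivity of 3 for lunar regolith and a made-up echo time; neither value comes from the Science Advances paper.

```python
# Converting a ground-penetrating-radar echo time to depth:
# depth = c * t / (2 * sqrt(relative permittivity)).
import math

C = 3.0e8                            # speed of light in vacuum, m/s

def gpr_depth(two_way_time_ns: float, rel_permittivity: float = 3.0) -> float:
    t = two_way_time_ns * 1e-9       # nanoseconds to seconds
    return C * t / (2 * math.sqrt(rel_permittivity))

# Under these assumed values, an echo arriving after ~139 ns corresponds to
# roughly the 12-meter fine-dust layer described above.
print(f"{gpr_depth(139):.1f} m")     # -> 12.0 m
```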

“That’s a lot of regolith,” David Kring, senior scientist at the Lunar and Planetary Institute in Houston who was not involved in the research, told The New York Times. “That’s food for thought.”

The fine particles were likely the result of many small meteorite collisions and a ton of solar radiation, as New Scientist reports.

Below the dust, between 12 and 24 meters (39 and 79 feet), the rover spotted larger rocks, likely what’s left of larger asteroid and meteorite impacts. Further below that, the rover detected alternating layers of fine and coarser soil.

Most noteworthy is the striking difference between the new readings and the ones taken at Chang’e 3’s landing site on the near side, where measurements suggested the lander had touched down on top of a dense lava layer buried below the surface, the remains of a volcanic event.

“The subsurface structure at the Chang’e 4 landing site is more complex, and suggests a totally different geological context,” Yan Su, co-author from the Chinese Academy of Sciences, told New Scientist.

READ MORE: China’s rover has discovered what lies beneath the moon’s far side [New Scientist]

More on the mission: China Claims Its Moon Rover Found a Colorful “Gel-Like” Substance