Coleridge the philosopher

Though Coleridge is far more often remembered as a poet, his theory of ideas was spectacular in its originality and bold reach

Detail of Moonlight Landscape (1785) by Joseph Wright of Derby. Courtesy the Ringling Museum of Art, Florida State University

Peter Cheyne is an associate professor teaching British literature and culture, and philosophy and literature, at Shimane University in Japan, and a visiting fellow in philosophy at Durham University in the UK. He is the author of Coleridge’s Contemplative Philosophy (2020) and co-editor of The Philosophy of Rhythm (2019), and is currently working on a multi-author book titled Living Ideas: Coleridge and Other Idealists on Life and Matter.

Edited by Sam Dresser

19 April 2021 (aeon.co)

Samuel Taylor Coleridge (1772-1834) stands tall in the cultural pantheon for his poetry. It’s less well known that in his own lifetime, and in the decades following his death, this canonical poet had an equal reputation as a philosopher. The published works containing much of his philosophical prose include The Statesman’s Manual (1816), which set out his theory of imagination and symbolism; Biographia Literaria (1817), one of the great and founding works of literary criticism; The Friend (1818), which includes his philosophical ‘Essays on the Principles of Method’; Aids to Reflection (1825), where he expounds his religious philosophy of transcendence; and On the Constitution of the Church and the State (1829), which presents his political philosophy.

The effect of those last two books was so impressive that John Stuart Mill named Coleridge as one of the two great British philosophers of the age – the other being Jeremy Bentham, Coleridge’s polar opposite. His thinking was also at the root of the Broad Church Anglican movement, a major influence on F D Maurice’s Christian socialism, and the main source for American Transcendentalism. Ralph Waldo Emerson visited Coleridge in 1832, and John Dewey, the leading pragmatist philosopher, called Coleridge’s Aids to Reflection ‘my first Bible’.

Yet philosophical fortunes change. The almost-total eclipse of British idealism by the rise of analytic philosophy saw a general decline in Coleridge’s philosophical stock. His philosophy languished while his verse rose. Coleridge’s poetry resonated with the psychedelia of the 1960s and a general cultural shift that emphasised the value of the imagination and a more holistic view of the human place within nature. Today, Coleridge is far more often remembered as a poet than a philosopher. But his philosophy was spectacular in its originality and syntheses.

Although Coleridge wrote poetry throughout his life, his energies were increasingly channelled towards philosophy. Drawing from neo-Platonism, the ingenious but difficult transcendental idealism of Immanuel Kant, and the even obscurer intricacies of post-Kantians such as J G Fichte and F W J Schelling, his philosophy was undoubtedly of the difficult metaphysical kind, very much at odds with practically minded British empiricism. Lord Byron spoke for many when he described Coleridge:

Explaining Metaphysics to the nation –
I wish he would explain his Explanation.

Yet the British empiricism of John Locke, David Hume and David Hartley was itself at odds, Coleridge pointed out, with a deeper heritage of British thought. ‘Let England be,’ he pronounced, ‘Sidney, Shakespeare, Spenser, Milton, Bacon, Harrington, Swift, Wordsworth’, who represent the idealising and proto-romantic tradition that he identified as ‘the spiritual platonic old England’. Coleridge rallied that ‘spiritual platonic’ tradition to oppose the philosophies of empiricists and hard-headed expounders of ‘common-sense’ such as Samuel Johnson, Erasmus Darwin, Hume, Joseph Priestley, William Paley and William Pitt, ‘with Locke at the head of the Philosophers and [Alexander] Pope of the Poets’.

Without denigrating commercial and industrial success, Coleridge argued that the haste for economic improvement led to a decline in culture, tradition and spiritual wellbeing. Identifying ‘civilisation’ with the forces of economic and technological progression, and ‘cultivation’ with the deeper roots of spiritual connection, tradition and permanence, he warned of producing a society that was ‘varnished rather than polished; perilously over-civilised, and most pitiably uncultivated!’ This concern with cultivation was an important tenet in what Mill called the ‘Germano-Coleridgian’ school, which examined what the empiricists, utilitarians and materialist mechanists tended to overlook: the historical development and the socially and psychologically significant meanings embedded in religion, tradition and cultural symbolism.

Mill’s recognition of this difference foreshadows what would become the analytic-Continental divide between Anglophone philosophy, focused on discrete analysis meant to clarify problems, and the more historically and theoretically ambitious, synthesising approaches that began with those philosophers following Kant, such as Schelling and G W F Hegel. This ‘Germano-Coleridgian’ approach was in stark contrast to British utilitarianism, which reduced ethics to Bentham’s principle of utility. In the culture wars of his day, Coleridge championed cultural and spiritual concerns, and opposed the ethical elevation of sensual pleasure, and the reduction of that and everything else to base matter.

Coleridge similarly sided with the supporters of the spiritual and transcendent against those who maintained the reality of the material and immanent only. In this way, he took part in the ‘Pantheism Controversy’ that raged primarily across German philosophy in the late 18th and early 19th centuries. Coleridge argued for the transcendence of God rather than holding, with Baruch Spinoza, that God is a wholly immanent power identified with the natural world. Characteristically of Coleridge, however, he didn’t dismiss Spinozistic arguments, but adopted parts of them to fit within what he saw as a wider whole. ‘Spinoza’s is … true philosophy,’ he wrote, ‘but it is the Skeleton of the Truth.’ It needed to be fleshed out in order to let ‘the dry Bones live’.

Coleridge’s thinking provides a bridge between materialist and dynamic views in the sciences

This inclusive attitude is one of the strengths of Coleridge’s approach, which grew from his celebrated powers of synthesis. Seeing polarised debates as revealing an interdependent whole, he tried to embrace the views of his philosophical opponents, rather than simply dismiss them. He saw dichotomous or binary thinking (B versus C) as merely disputative, whereas a broader trichotomy (B versus C within a broader unity of A) presented a unified whole as the higher ideal that fierce yet dependent polar opposition imperfectly represents. The view of a higher union of opposites leads to reasoning, while binary thinking leads merely to arguing.

Beyond the ‘cultivating’ merits of Coleridgean synthesis, it’s also valuable to delve into the content of his philosophy. Over the past 15 years, philosophers have been attending to what Anna Marmodoro calls ‘the metaphysics of powers’ and, since Albert Einstein’s theories of relativity and the later quantum theory, most philosophers and physicists agree that forces and fields of force are more fundamental than matter, which is no longer held to be the atomistic ne plus ultra it was often thought to be. Notably, Isaac Newton refused to reduce the force of gravitation to something that is itself material, leaving it as one of those dark mysteries that we must simply observe and accept without fully understanding.

Without denying physical matter, Coleridge contended against what he saw as abject materialism, which reduced all qualities to quantity and collapsed physical forces into matter. On this point, history now sides with Coleridge against the materialists, and philosophers sympathetic with the intent of materialism now generally identify not with ‘materialism’ but with ‘physicalism’, or the view that the fundamental components of the Universe are whatever physics will eventually conclude they are. Current thinking in quantum physics construes these elements as fundamental forces – a view Coleridge himself advanced.

An understanding of Coleridge’s thinking, then, provides insight into the beginnings of the analytic-Continental divide and a bridge between materialist and dynamic (powers-based) views in the sciences. It also illuminates Coleridge’s poetry as the expression of a unified view of the world not as mere matter clumping together in happy coincidence, but as the evolution of the power of ideas in a world of dynamically forged syntheses that resound back to the powers from which these creative forces arose.

‘In Xanadu did Kubla Khan / A stately pleasure-dome decree …’ So begins Coleridge’s poem on the power of words and imagination in physical and poetic creation. Echoing how the mighty potentate creates with an imperious fiat, the inspired poet, we’re told, could ‘build that dome in air’ in an explosively constructive fusion of opposites – ‘[t]hat sunny dome! those caves of ice!’ It’s a creation with more astonishing magic than even the worldly power of the Khan could muster:

And all should cry, Beware! Beware!
His flashing eyes, his floating hair!
Weave a circle round him thrice,
And close your eyes with holy dread,
For he on honey-dew hath fed,
And drunk the milk of Paradise.

Though unpublished till 1816, Kubla Khan was written between 1797 and 1799, around Coleridge’s annus mirabilis of 1797-98, when he also wrote his supernatural poem The Rime of the Ancient Mariner, the daemonic Christabel, and some of the greatest of what he called his ‘Meditative Poems in Blank Verse’. One of those poems, the sublime ‘Frost at Midnight’, describes the beauties of nature – ‘lakes and shores / And mountain crags’ – as incarnations of the divine word, being ‘The lovely shapes and sounds intelligible / Of that eternal language, which thy God / Utters’. That poem ends on the achingly beautiful, mysterious note of ‘the secret ministry of frost’ that will, if the night gets colder, hang up the thaw-drops from the eaves ‘in silent icicles, / Quietly shining to the quiet Moon.’

Coleridge’s interconnecting themes are: the power of the creative word, in both worldly and poetic construction, echoing the divine word; nature as the living alphabet of God, only dimly understood in human knowledge; ideas as metaphysical essences and powers that pre-exist the physical world; and the notion of the earthly reflecting the ideal, as the icicles shine to the moon, itself reflecting the otherwise unseen light, at night, of an unseen sun. These are all themes that Coleridge developed in his philosophical writings until his death in 1834.

As a young man, Coleridge drew much from David Hartley’s associationist theory of mind. Like Hartley, young Coleridge wanted to trace the paths from root nerves and stimuli to an ever-increasing and sublime spirituality. This became entwined for him with a longer-lasting respect for the philosophy of Spinoza, who saw mind and matter as the only attributes we can perceive of the infinite being he called deus sive natura (God or nature). Aged 22, Coleridge declared:

I am a complete Necessitarian – and understand the subject as well almost as Hartley himself – but I go farther … and believe the corporeality of thought – namely, that it is motion.

Associationists viewed the mind as built up from immediate sensations, whose traces then recall and modify each other in constructing maps of experience – a kind of mental atomism. Although Coleridge would soon give up materialistic psychology, he retained associationism as a theory of how the animal and human mind begins to get organised. He accepted what he saw as its ‘half-truth’ in explaining much of mental activity at the levels of sensation, desire and the early awakenings of understanding. The essentially deterministic theory, however, left little if any room for human freedom. How could this theory of an automatic, irrational, desire-centred mind work so well at explaining the elementary functions of thought and perception yet utterly contradict the experiences, indeed the very possibility, of freedom, responsibility and the pursuit of higher purposes? His answer would allow him to go beyond the notion of the ‘corporeality of thought’ while staying with the theory of it as ‘motion’, as he developed a view of mind as arising out of the interplay of opposed energies and functioning in a system of dynamics, or elementary forces.

It’s only when we’re self-consciously aware of ideas that we’re fully awake

Before delving deeper into Coleridge’s ‘polar philosophy’, we need a clearer picture of what he meant by ‘ideas’. For him, polar opposition derives from the energy of ideas conceived subjectively, as ‘universal ideas’, or objectively, as ‘cosmic laws’. His universal ideas relate to moral truths, history and the ‘humane sciences’, while the cosmic laws refer to the laws of nature and the physical sciences. Coleridge’s notion of ‘ideas’ is akin to the Platonic ideas, such as Goodness, Truth, Beauty, Justice and so on. From 1818 onwards, he gave a number of lists of such ideas, including:

the Ideas of Being, Form, Life, the Reason, the Law of Conscience, Freedom, Immortality, God!
… ideas, (NB not images) as the theorems of a point, a line, a circle, in Mathematics; and of Justice, Holiness, Free-Will, &c in Morals.

and:

eternity … Will, Being, Intelligence, and communicative Life, Love, and Action … without change, without succession.

The ability to intuit and behold transcendent ideas, he argued, is what proves that ‘we are born with the god-like faculty of Reason’, adding that ‘it is the business of life to develop and apply it’, since these ideas:

constitute … humanity. For try to conceive a man without the ideas of God, eternity, freedom, will, absolute truth, of the good, the true, the beautiful, the infinite. An animal endowed with a memory of appearances and of facts might remain. But the man will have vanished, and you have instead a creature, ‘more subtile than any beast of the field, but likewise cursed above every beast of the field …’

Coleridge became fascinated with the notion of universal truth as a realm of ‘eternal verities’ that originate and endure in some kind of cosmic reason. This ‘reason’ he saw as underlying the fabric of the Universe and corresponding to both the universal Logos of Heraclitus and the divine Logos of St John. While Heraclitus is known for his view of a world in such constant flux that we can’t step into the same river twice, he’s also the philosopher who conceived of a universal Logos, the all-encompassing order that allows a coherent and rational reality to exist from what would otherwise be a swirling chaos. The Logos of St John is the Word that was with God in the beginning, which was and is God. It’s the spiritual heart of reality that entered into its own creation by becoming flesh, becoming the light of the world, if only the darkness could comprehend it.

To Coleridge, these notions of Logos became united as the living mind in which the ideas as truths and powers reside. In some ways similar to the grand systems of Schelling and Hegel, these ideas gradually become realised through human thoughts and actions via inspiration, imagination and contemplation. Coleridge defined the imagination in its fundamental sense as ‘the living Power and prime Agent of all human Perception, and as a repetition in the finite mind of the eternal act of creation in the infinite I Am.’ In this view, human artistic creativity, scientific discovery and philosophical insight share in an attenuated form of the original, divine power of creation by virtue of being able to attend in imagination to ideas, or symbols of ideas, of ultimate reality. Everything that exists owes its being to the ideas. Nature, though charged with ideas, is sleeping; animal life, sleepwalking; with most of human life in a slightly higher state of dreaming. It’s only when we’re self-consciously aware of ideas that we’re fully awake. As Coleridge described the sway of ideas in 1827, ‘all live in their power – the Idea working in them’, but only ‘the Fewest among the Few … live in their Light’.

All phenomena in nature and human history are symbolic appearances that reunite a clash of elementary forces

Conceiving of these ultimate and eternal powers as ‘ideas’ subjectively (as fundamental to mind) and as ‘laws’ objectively (as fundamental to world), Coleridge placed at the heart of his philosophy a theory of powers beyond the human mind but accessible to it in contemplation, imagination and in vague intuitions. These ‘living and life-producing Ideas’ were ‘essentially one with the germinal causes in Nature’. In an intriguing consequence of his theory of ideas, he didn’t dismiss physical matter as mere appearance or abstract concept with no corresponding reality beyond subjective experience. Rather, as Coleridge saw it, matter is a synthesis that arises out of the opposition of the fundamental forces of existence. It’s the elementary forces that are primal, and the matter that arises out of them is the efflorescence in which we take part, only dimly aware that we’re ‘connected with master-currents below the surface’.

Broader and deeper than any idealism that would do away with matter as an illusion or an abstraction, Coleridge retained it within his system, much as he’d done with associationism in the theory of mind. Thus, as he wrote in 1817, he thought it essential to:

consider matter as a Product – coagulum spiritûs [the coagulation of spirit], the pause, by interpenetration, of opposite energies – … while I hold no matter as real otherwise than as the copula [or synthesis] of these energies, consequently no matter without Spirit, I teach on the other hand a real existence of a spiritual World without a material.

From what Coleridge described as ‘the universal Law of Polarity’ follows the actualisation of all subsequent existence in the form of metaphysical powers and forces of nature. Coleridge’s cosmology, like Schelling’s metaphysics, was part of a post-Kantian movement of organic philosophy of nature that saw itself as opposed to atomism and associationism, and which was very much a metaphysics of powers that found deep consonance with Coleridge’s developing view of mind and the senses. From the principle of polar opposition springs not only history, but all matter and all phenomena. In 1818, Coleridge defined this law as follows:

Every Power in Nature and in Spirit must evolve an opposite, as the sole means and condition of its manifestation: and all opposition is a tendency to re-union.

In a lineage from Kant, to Fichte, then Schelling, and soon to be furthered by Hegel, the principle of polar opposition was made into a tripartite logic, which Fichte was the first to describe as the progression through thesis, antithesis and synthesis, itself becoming a new thesis, thus continuing the evolution. Coleridge developed this into a ‘pentadic’ (or five-fold) logic, adding the ‘prothesis’, the originary idea that thesis and antithesis manifest as opposite poles, and the ‘indifference point’, the midpoint between them. For Coleridge, synthesis is the resolution of opposed forces into the material phenomena of experience. All phenomena in nature and in human history are symbolic appearances that reunite a deeper clash of elementary forces.

To Coleridge, the opposition of reason to sense was a fundamental polarity in the mind that demonstrates the polar dynamics of the cosmos. Following Kant, he construed reason as essentially free, guided by truth and higher values rather than impulses and associations. This tug-of-war between reason and sense stretches the mind both ‘up’, into abstract truths and the realm of freedom, compassion and humanity, and ‘down’, into sensuality, self-interest and the realm of nature. The lower mind is necessary for the higher, which depends on the former for nutrition, physical safety and the basics of society. Nonetheless, the dynamic is marked by a hierarchy: sense can evolve towards reason, but the truths of reason aren’t similarly transformed by sense and basic impulses. The middle position, generated by the opposed dyad of sense ‘below’ and reason ‘above’, is the understanding, which is partly a reflection in the human mind of the universal reason (or Logos) yearning after science, art and social progress, and partly the rationally self-interested schemer that sets itself to satisfy our natural desires. With this dynamic, Coleridge felt he’d eventually marshalled those half-truths of associationism by showing that they can hold sway only in the lower mind.

With no foothold in the mind’s higher levels, the main principles now switch from paths of pleasure and the mechanics of contiguity to freedom, creativity and the pursuit of ideas. This switching over happens at the crux, the crucial point in the centre of Coleridge’s model of mind, where our lives are balanced between sensation stretching down into nature and reason stretching up into the ideas. Everything that happens in human and natural history occurs between these poles, with the familiar parts of our lives clumped around the ordinary understanding at the middle, where we find comfort in concepts supported by sensations ‘below’ and stirred on by ideas ‘above’.

Coleridge’s theory of ideas led to a philosophy where the notion of matter itself was retained but reframed in a way that opposed the mechanical view that saw the Universe as nothing more than a network of mere matter. In the most thoroughgoing materialist accounts, even energy and forces are supposed to be reducible to matter. Against this, Coleridge developed a philosophy of ideas as powers that saw matter arise from opposed forces, forces arise from powers, powers and laws as the objective side of ideas, and ideas as residing eternally in cosmic reason, or Logos, the mind of God.

Coleridge’s philosophy of ideas countered the view of the Universe as ‘an immense heap of little things’

His philosophy gained a comprehensiveness beyond psychology and philosophy of mind as his enquiries progressed into cosmology and the metaphysics of matter. Throughout his life, Coleridge searched for a unified view of reality that was at once bodily and spiritual. As he wrote in a letter in October 1797:

frequently all things appear little – all the knowledge, that can be acquired, child’s play – the universe itself – what but an immense heap of little things? – I can contemplate nothing but parts, & parts are all little – ! – My mind feels as if it ached to behold & know something great – something one & indivisible – and it is only in the faith of this that rocks or waterfalls, mountains or caverns give me the sense of sublimity or majesty! – But in this faith all things counterfeit infinity!

Over the course of the next three-and-a-half decades, Coleridge developed his philosophy of ideas that countered the view of the Universe as ‘an immense heap of little things’ and replaced it with a cosmos of ideas, powers and forces that give rise to the material world. In this way, he ended up providing his alternative to the mechanistic materialism, expounded to varying degrees by Galileo, Descartes, Locke and Newton, that he saw as removing too many ‘positive properties’ from the world, which then, abstracted into mere ‘figure and mobility’, becomes ‘a lifeless Machine whirled about by the dust of its own Grinding’.

Rather than reject associationism in the mind and materialism in the cosmos outright, Coleridge disavowed their abject extremes while managing to embrace what many would have too easily dismissed as the ‘enemy’ perspective. Mill commended Coleridge’s ‘catholic and unsectarian … spirit’ as ‘less extreme in its opposition’ than the materialist positions because ‘it denies less of what is true in the doctrine it wars against’. The poet-philosopher was applauded for correcting what he saw as dangerous ‘half-truths’ by retaining them within a broader, balanced ambit. Coleridge didn’t even fully reject utilitarianism, because here too he sought what was true in it, and realised that it warranted a limited place within the whole. His approach, in his words, embraced inclusion, not exclusion:

Exclude Utility? No. My System of Moral Philosophy neither excludes nor rests on it: were it for this reason only that it includes it.

On either side of the issues he encountered, Coleridge corrected such half-truths by retaining what was valuable in them within a broader, balanced overview:

My system … is the only attempt I know ever made to reduce all knowledges into harmony. It … shows … how that which was true in the particular in each … became error, because it was only half the truth.

He persuaded many of his empiricist and utilitarian British contemporaries of the dangers of understanding everything mechanistically, including mind and humanity itself. With these methods, Coleridge achieved not only an astonishingly broad and holistic philosophy of great intellectual richness and scope, but also forged a brilliant synthesis within the culture wars of his time, which we could well heed today.


The radical impact of seeing Alzheimer’s as a second childhood


Photo by Gonzalo Fuentes/Reuters

Han Yu is a professor in the Department of English at Kansas State University, where she teaches scientific and technical communication. Her latest book is Mind Thief: The Story of Alzheimer’s (2021), a comprehensive and engaging history of Alzheimer’s that illuminates our efforts to understand the disease.

Edited by Sally Davies

19 April 2021 (psyche.co)

In some times and places, life is seen as a one-way expedition from birth to death. We progress linearly and don’t look back. In other times and places, life is circular, a never-ending round trip. We live, die, and live again.

In Buddhism, this cyclicality is known as Saṃsāra; in Taoism, Lunhui. Both are intricate concepts rooted in ancient theologies and scripts, but one doesn’t need to be religious to appreciate the circularity of life. To many, it’s simply a way of being. It says that, when we fight our way into this world as crying infants, we know nothing, have nothing, not even an awareness of the self. Over time, we see, we hear, and we learn. We form the concept of ‘I’, what this ‘I’ likes and dislikes, and over time we come to live for this ‘I’. Eventually, we die, or rather our flesh dies. We give back the things we learned, and once again become a clean slate, perhaps reborn in another body.

The idea that existence is cyclical can bring us a certain composure in the face of life’s tribulations and vulnerabilities. It can help us appreciate the words of the ancient Greek playwright Aristophanes that ‘old men are children twice over’, as well as William Shakespeare’s dictum that old age is a ‘second childishness’. As the Taoist thinker Zhuang Zi said: ‘方生方死’ or ‘Fang sheng fang si’ – at the moment of life is death; at the moment of death is life. The beginning is the end; the end is the beginning.

Such ancient wisdom and literary musing find a close translation in modern neurological research into dementia and Alzheimer’s disease: retrogenesis, meaning backward (retro-) beginning (-genesis). It was proposed by Barry Reisberg, a psychiatrist at the New York University School of Medicine, as a model to make sense of the progressive decline in Alzheimer’s. In 1999, Reisberg defined retrogenesis as ‘the process by which degenerative mechanisms reverse the order of acquisition in normal development’. In other words, the deterioration of a patient with Alzheimer’s follows, in reverse order, a child’s normal development. What a child learns first in this world, a patient loses last; what a child learns last, the patient loses first. The beginning is the end; the end is the beginning.

This is not just philosophical contemplation. A body of scientific evidence supports the retrogenesis model. In Alzheimer’s, we generally witness the gradual decline of two kinds of abilities: cognitive and functional. The former refers to capacities such as attention, memory, language and orientation. The latter refers to abilities to perform daily living tasks from dressing and eating to shopping and handling finances. Multiple tools have been developed to assess the extent of these skills in patients with Alzheimer’s, such as the mini-mental state exam (MMSE) for cognitive abilities and the interview for deterioration in daily living activities in dementia (IDDD) as a measure of functional decline.

When researchers in 2012 used these tools to compare the cognitive and functional abilities of 148 elderly participants and 181 children, they found much to support the retrogenesis model. Patients with Alzheimer’s who were at the very early stage of the disease obtained cognitive MMSE scores comparable with those achieved by children around the ages of six and seven. As the disease progressed, patients’ MMSE scores gradually dropped to the level of a five-year-old and, then, below that of a four-year-old. As for their functional abilities, patients with mild to moderate Alzheimer’s had IDDD scores comparable with those obtained by children aged seven to four, respectively. Patients with severe Alzheimer’s obtained IDDD scores below the level of a four-year-old.

Even in emotional development, there tends to be a reverse relationship between childhood and Alzheimer’s

Reisberg had also mapped the sequence of development in childhood and in Alzheimer’s, and the correlation is striking. At the age of 12 and above, children have the ability to hold a job, as working people at the very early stage of Alzheimer’s would insist on doing. At the age of eight to 12, children can be trusted to handle simple finances, as people with mild Alzheimer’s also do. At the age of five to seven, children can be counted on to select proper clothing for themselves, and people with moderate Alzheimer’s likewise determine their own fashion preferences. Around the age of four, children start to brave the toilet independently; people with moderately severe Alzheimer’s often retain that ability, too. At the age of 15 months, children can speak five to six words, and people with severe Alzheimer’s are equally fluent. Younger than 15 months, infants exhibit reflexes important for survival, for example, turning their faces toward a gentle stroking hand or sucking whatever touches the roofs of their mouths. These reflexes are lost in adulthood but regained in severe Alzheimer’s.

Even in emotional development, which can’t be charted with the same precision, there tends to be a reverse relationship between childhood and Alzheimer’s. For example, children aged between two and five tend to use temper tantrums and verbal outbursts to make their frustration known. Patients at the corresponding stages of their second childhood resort to the same approach.

Why would this reverse correlation happen? What, if any, mechanisms in the brain account for it? In childhood and early adulthood, brain development is not a homogeneous process. It starts in the so-called ‘primitive’ regions – those that allow basic functions such as processing visual information (the visual cortex) or sensing touch and warmth (the somatosensory cortex). The more evolved or higher-order brain regions mature later, such as those that enable reasoning and problem solving (the frontal lobe). In Alzheimer’s, brain damage happens to follow a reverse order. Patients first lose their higher-order abilities such as memory, language and reasoning, but hold on to their visual, sensory and motor skills for a longer time.

Researchers speculate that what lies behind this first-in, last-out pattern is axons – the long, slender fibres extended from neurons, like the brain’s electrical wires. Axons conduct electrical impulses away from a neuron to be received by other neurons so that information can be relayed and processed in the brain. Just as wires must be insulated for their protection and proper function, axons are coated in a sheath of protein and fatty substances called myelin. This myelin sheath doesn’t develop simultaneously across all of the brain’s axons. It grows first around axons that are critical for a child’s early survival: for example, axons that enable sensory and motor functions. Over the years, these axons accumulate thicker myelin, which, like a thick coating on a wire, makes them less susceptible to damage.

By contrast, because babies don’t need language and self-control to survive, axons involved in higher-order abilities don’t develop myelin until later, and possess only a thin coat of protection. These are the axons more liable to damage in old age. When that happens, signal-exchange between neurons slows down or becomes blocked, abnormal protein deposits called tangles spread and choke neurons to death, the corresponding brain functions are lost, and childhood is reversed.

As with all contemporary theories that try to explain Alzheimer’s, the retrogenesis model has its problems: for, when we really scrutinise them, our first and second childhoods aren’t exactly mirror images of one another. In children, the ability to perform basic activities such as dressing themselves and complex activities such as handling finances develop simultaneously. That is, with increasing age, children tend to have progressively higher overall functional abilities. By contrast, in Alzheimer’s, functional decline clearly discriminates. Patients lose the ability to perform complex tasks first, while basic abilities hang on for a while longer, sometimes persisting into the late stage of the disease when patients should have been, according to retrogenesis, no more capable than infants.

A further example concerns language, which isn’t always a predictable, straightforward story in reverse. According to the retrogenesis model, patients with late-stage Alzheimer’s, like infants, can utter only a few words. But in reality, a great deal of individual variation exists: some patients can produce only one word; others, up to 252 words. In some cases, patients with severe Alzheimer’s dementia retain the ability to communicate humour, irony and sarcasm, which are metalinguistic abilities developed late in childhood, and thus should have been lost sooner.


Moreover, pathologically speaking, multiple hypotheses exist on the ultimate cause of degeneration in the Alzheimer’s brain: the accumulation of the protein fragment beta-amyloid, the spread of the abnormal protein tau, the dysfunction of brain metabolism. In other words, various factors other than myelin play a role in the Alzheimer’s brain.

So retrogenesis is just that: one of several competing theories. While evidence supports its broad pattern, its details could do with correction. Yet you don’t need to follow retrogenesis religiously in order to glean some sense of peace and grace toward Alzheimer’s, just as you don’t need to be a Buddhist or Taoist to take comfort in the cyclicality of life. And professionals, the psychologists, therapists and social workers, don’t need the retrogenesis model to be strictly true in all respects before they envision activities and care programmes that accommodate patients’ changing abilities while also protecting their sense of pride and wellbeing.

Just as a newborn thrives on your tender voice – and the soft hugs and kisses that come with it – without having to understand the ‘I love you’ that you whisper, your grandpa, who no longer understands the words you utter, will too. Just as you praise a toddler for trying to help around the house and ignore the mess he makes in the process, you might want to compliment your mother for her lovely singing, not fret about her no-longer-lovely cooking. Just as you let 10-year-olds play, run and exercise while secretly admiring their unfailing energy, take comfort in the fact that the patients with Alzheimer’s under your care might want to walk and wander – and do respect their natural desire. Rather than antipsychotic medication or physical restraints, engage them in simple games and exercises to reduce agitation or even pair them with young children who are at similar stages of development for mutual joy and satisfaction. Ultimately, focus on what they, young or old, have, not on what they haven’t got from this world or what they’ve returned to the world.

No, none of this will change the fact that Alzheimer’s is a cruel disease. None of this will bring someone ‘back’. But in the absence of an Alzheimer’s cure, we can at least change the way we think and approach the disease. If we’re willing to give up our stubborn belief that life must be progressively linear, then when our parents or grandparents – or, indeed, when we – start to fade away, we might accept it as the beginning of a new cycle. In this second childhood, we don’t lose the ability to be content; we just lose the obsession to seek contentment. When our conscious mind drifts, we’re not so much losing something we’re entitled to as returning to the way we first came into this world. When the ones we care about deeply enter their second childhood, we’re not so much losing them as gaining another opportunity to love, praise and accept as they approach life’s end and its beginning. Fang sheng fang si. The beginning is the end; the end is the beginning.

On the Link Between Great Thinking and Obsessive Walking

VIA HARPER

From Charles Darwin to Toni Morrison, Jeremy DeSilva Looks at Our Need to Move

By Jeremy DeSilva

April 19, 2021 (lithub.com)

Moreover, you must walk like a camel, which is said to be the only beast which ruminates when walking.
–Henry David Thoreau, “Walking,” 1861
*

Charles Darwin was an introvert. Granted, he spent almost five years traveling the world on the Beagle recording observations that produced some of the most important scientific insights ever made. But he was in his twenties then, embarking on a privileged, 19th-century naturalist’s version of backpacking around Europe during a gap year. After returning home in 1836, he never again set foot outside the British Isles.

He avoided conferences, parties, and large gatherings. They made him anxious and exacerbated an illness that plagued much of his adult life. Instead, he passed his days at Down House, his quiet home almost twenty miles southeast of London, doing most of his writing in the study. He occasionally entertained a visitor or two but preferred to correspond with the world by letter. He installed a mirror in his study so he could glance up from his work to see the mailman coming up the road—the 19th-century version of hitting the refresh button on email.

Darwin’s best thinking, however, was not done in his study. It was done outside, on a lowercase d–shaped path on the edge of his property. Darwin called it the Sandwalk. Today, it is known as Darwin’s thinking path. Janet Browne, author of a two-volume biography of Darwin, wrote:

As a businesslike man, he would pile up a mound of flints at the turn of the path and knock one away every time he passed to ensure he made a predetermined number of circuits without having to interrupt his train of thought. Five turns around the path amounted to half a mile or so. The Sandwalk was where he pondered. In this soothing routine, a sense of place became preeminent in Darwin’s science. It shaped his identity as a thinker.

Darwin circled the Sandwalk as he developed his theory of evolution by means of natural selection. He walked to ponder the mechanism of movement in climbing plants and to imagine what wonders pollinated the fantastically shaped and colorful orchids he described. He walked as he developed his theory of sexual selection and as he accumulated the evidence for human ancestry. His final walks were done with his wife Emma as he thought about earthworms and their role in gradually remodeling the soil.

In February 2019, I had the meta-experience of walking Darwin’s thinking path to think about how walking helps you think. It was school vacation in London, and I had to compete with families arriving in droves to see where Darwin had lived and worked. The desk in his study is still cluttered with books, letters, and small specimen boxes containing pinned insects. Hanging from a nearby chair is his black jacket, black bowler hat, and a wooden walking stick. The stick has a helical design like a crawling tendril and looks freshly polished. The bottom of the walking stick, however, is well worn—evidence of miles on the Sandwalk.

I walked out the back kitchen of the cream-colored home, passed the green trellis and vine-covered columns holding up Darwin’s back porch, crossed the beautifully groomed garden, and entered the Sandwalk. I was alone. The day was cool and blustery. Gray clouds hung low on the horizon and moved swiftly overhead, dropping an intermittent drizzle. Occasional breaks in the clouds allowed the sun to peek through, making the raindrops flicker.

I could hear planes from the nearby London Biggin Hill Airport and the hum of a lorry traveling along A233. But those modern sounds were fleeting. It was easy to imagine that it was 1871 and that I was taking a walk with Darwin himself. I could hear the chatter of gray squirrels but tuned them out as well since they are an invasive North American species introduced into England in 1876.

I stacked five flat flints at the entrance for the five laps I would take and began my walk, first along the meadow and then counterclockwise into the woods. The Sandwalk is alive. Starlings and crows fly overhead, filling the air with their trills and gurgles. Ivy inches up the thick trunks of alder and oak trees toward the sunlight. Underfoot, fungi decompose wet leaves, emitting the smell of fresh earth. I picked up a clump of cockleburs just off the path, and their hooks pulled on the folds of my hand and latched to my jacket. With each step the gravel crunched, and my shoes occasionally slipped on damp stones made smooth by thousands of footsteps, including some taken by Darwin himself.

Down House is not a place of magic, nor is it a place of worship. Looping the Sandwalk one flint at a time did not endow me with the wisdom to continue my scientific pursuits. It turns out, any walk outdoors has the potential to unlock our brains. The Sandwalk just happened to be where the unlocking of one 19th-century brain helped change the world and our place in it.

But why? Why does walking help us think?

You are undoubtedly familiar with this situation: You’re struggling with a problem—a tough work or school assignment, a complicated relationship, the prospects of a career change—and you cannot figure out what to do. So you decide to take a walk, and somewhere along that trek, the answer comes to you.

The nineteenth-century English poet William Wordsworth is said to have walked 180,000 miles in his life. Surely on one of those walks he discovered his dancing daffodils. French philosopher Jean-Jacques Rousseau once said, “There is something about walking which stimulates and enlivens my thoughts. When I stay in one place I can hardly think at all; my body has to be on the move to set my mind going.”

Ralph Waldo Emerson’s and Henry David Thoreau’s walks in the New England woods inspired their writing, including “Walking,” Thoreau’s treatise on the subject. John Muir, Jonathan Swift, Immanuel Kant, Beethoven, and Friedrich Nietzsche were obsessive walkers. Nietzsche, who walked with his notebook every day between 11 am and 1 pm, said, “All truly great thoughts are conceived by walking.” Charles Dickens preferred to take long walks through London at night. “The road was so lonely in the night, that I fell asleep to the monotonous sound of my own feet, doing their regular four miles an hour,” Dickens wrote. “Mile after mile I walked, without the slightest sense of exertion, dozing heavily and dreaming constantly.” More recently, walks became an important part of the creative process of Apple cofounder Steve Jobs.


It is important to pause and reflect on these famous walkers. They are all guys. Little has been written about famous women who regularly walked. Virginia Woolf is one exception. She apparently walked quite a bit. More recently, Robyn Davidson trekked with her dog and four camels across Australia and wrote about it in her book Tracks. In 1999, Doris Haddock, an 89-year-old grandmother from Dublin, New Hampshire, walked 3,200 miles from coast to coast to protest United States campaign finance laws.

Historically, however, walking has been the privilege of white men. Black men were likely to be arrested, or worse. Women just out for a walk were harassed, or worse. And, of course, rarely in our evolutionary history was it safe for anyone to walk alone.

Perhaps it is a coincidence that so many great thinkers were obsessive walkers. There could be just as many brilliant thinkers who never walked. Did William Shakespeare, Jane Austen, or Toni Morrison walk every day? What about Frederick Douglass, Marie Curie, or Isaac Newton? Surely the astoundingly brilliant Stephen Hawking did not walk after ALS paralyzed him. So walking is not essential to thinking, but it certainly helps.

*

Marilyn Oppezzo, a Stanford University psychologist, used to walk around campus with her Ph.D. advisor to discuss lab results and brainstorm new projects. One day they came up with an experiment to look at the effects of walking on creative thinking. Was there something to the age-old idea that walking and thinking are linked?

Oppezzo designed an elegant experiment. A group of Stanford students were asked to list as many creative uses for common objects as they could. A Frisbee, for example, can be used as a dog toy, but it can also be used as a hat, a plate, a bird bath, or a small shovel. The more novel uses a student listed, the higher the creativity score. Half the students sat for an hour before they were given their test. The others walked on a treadmill.

The results were staggering. Creativity scores improved by 60 percent after a walk.

A few years earlier, Michelle Voss, a University of Iowa psychology professor, studied the effects of walking on brain connectivity. She recruited 65 couch-potato volunteers aged 55 to 80 and imaged their brains in an MRI machine. For the next year, half of her volunteers took 40-minute walks three times a week. The other participants kept spending their days watching Golden Girls reruns (no judgment here; I love Dorothy and Blanche) and only participated in stretching exercises as a control. After a year, Voss put everyone back in the MRI machine and imaged their brains again. Not much had happened to the control group, but the walkers had significantly improved connectivity in regions of the brain understood to play an important role in our ability to think creatively.

Walking changes our brains, and it impacts not only creativity, but also memory.

In 2004, Jennifer Weuve of Boston University’s School of Public Health studied the relationship between walking and cognitive decline in 18,766 women aged 70 to 81. Her team asked them to name as many animals as they could in one minute. Those who walked regularly recalled more penguins, pandas, and pangolins than the women who were less mobile. Weuve then read a series of numbers and asked the women to repeat them in reverse order. Those who walked regularly performed the task much better than those who didn’t. Even walking as little as 90 minutes per week, Weuve found, reduced the rate at which cognition declined over time. Because cognitive decline marks the earliest stages of dementia, walking might help ward off that neurodegenerative condition.

But correlation does not equal causation. Otherwise, one could interpret graveyards as places where giant stones fall from the sky and kill unsuspecting, mostly elderly, people. Perhaps the arrow of causality was pointing in the wrong direction. Maybe mentally active people were simply more likely to go for a walk. Researchers had to dive deeper.

___________________________________________________

Excerpted from First Steps: How Upright Walking Made Us Human by Jeremy DeSilva. Excerpted with the permission of Harper. Copyright © 2021 by Jeremy DeSilva.

Rediscovering the Scientist-Priest Who Radically Changed Our View of the Universe

Guido Tonelli on the Intuition of Georges Lemaître

By Guido Tonelli

April 19, 2021 (lithub.com)

In 1917 Albert Einstein, developing the consequences of his general theory of relativity, postulated a homogeneous, static, spatially curved universe. Mass and energy warp space-time, and would tend to make it collapse into a point, but if you add to the equation a positive term that compensates for this tendency towards contraction, the system remains in equilibrium. The beginning of modern cosmology is ushered in with this maneuver. To avoid the catastrophic ending of the universe, the inevitable result if only gravity were present, an arbitrary term was invented. Still held captive by the millennia-old prejudice that the universe is stable and enduring, Einstein forcefully introduced the “cosmological constant,” a kind of positive vacuum energy that tends to push everything outwards, counterbalancing the gravitational pull and guaranteeing the stability of the whole.

Today, now that we know the universe is made up of 100 billion galaxies, it is shocking to realize that scientists in the first two decades of the last century, among them some of the most brilliant minds of all time, were still convinced that it consisted solely of the Milky Way. The slow concentric movement of the bodies belonging to this galaxy suggested a universe that was a stationary, harmonious and ordered system. Soon afterwards this picture was called into question by new kinds of observation, but a radical break with the old conceptions was also anticipated by the brilliant intuition of a young Belgian scientist.

In 1927 Georges Lemaître was a 33-year-old Catholic priest with a degree in astronomy from the University of Cambridge, and in the process of completing his PhD at the Massachusetts Institute of Technology. He is among the first to grasp that Einstein’s equations can also describe a dynamic universe, a system of constant mass but one that is expanding—with a radius, that is, which gets bigger with the passage of time. When he presents this idea to his older and much more established colleague, Einstein’s response is shockingly negative: “Your calculations are correct, but your physics is abominable.” So deeply rooted is the prejudice which for millennia had conceived of the universe as a stationary system that even the most elastic and imaginative mind of the period rejects the idea that it can be expanding, and that as a consequence of this expansion it must have had a beginning.

It would take years of discussion and fierce argument before this extraordinarily novel idea was generally accepted by scientists, and a great deal more time would have to pass before it became public knowledge.


The key to its success is suggested by Lemaître himself, in an article in which he proposed his new theory, backed up with measurements of the radial speed of extra-galactic nebulae.

At the time, the attention of astronomers was concentrated on those peculiar objects resembling clouds, which they conceived of as groups of stars aggregated together with agglomerations of dust or gas. Today we know that they are in fact galaxies, each containing billions of stars, but the telescopes of the time were not sufficiently powerful to show them in much detail.

In order to calculate the speed at which a star or any other luminous body is moving, astronomers had long known how to use the Doppler effect. The same phenomenon that we notice with sound waves from an ambulance siren can be observed with light waves. When the source recedes, the frequency of the waves we receive is reduced: the pitch of the siren drops as the ambulance speeds away. In the same way, the color of visible light shifts towards red as its source recedes from us. By analysing the spectrum of frequencies emitted by a celestial body, we can measure this shift towards red, precisely the so-called red shift, and work out from it the radial speed with which the body is receding from us.
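The arithmetic behind this is compact enough to sketch. The following is a generic illustration of the Doppler relation for light, assuming recession speeds far below the speed of light; the wavelengths used are illustrative, not drawn from any historical measurement.

```python
# Radial ("recession") velocity from redshift, valid for v << c.
# z = (observed wavelength - emitted wavelength) / emitted wavelength
# v ≈ c * z   (non-relativistic approximation)

C_KM_S = 299_792.458  # speed of light in km/s

def redshift(lambda_observed: float, lambda_emitted: float) -> float:
    """Fractional shift of a spectral line towards the red."""
    return (lambda_observed - lambda_emitted) / lambda_emitted

def radial_velocity_km_s(lambda_observed: float, lambda_emitted: float) -> float:
    """Recession speed implied by a (small) redshift."""
    return C_KM_S * redshift(lambda_observed, lambda_emitted)

# Illustrative example: the hydrogen-alpha line is emitted at 656.3 nm.
# If a nebula's spectrum shows that line at 658.5 nm, the line is
# redshifted, and the nebula is receding from us.
z = redshift(658.5, 656.3)
v = radial_velocity_km_s(658.5, 656.3)
print(f"z = {z:.5f}, v ≈ {v:.0f} km/s")
```

A blueshift (approaching source) simply comes out as a negative velocity under the same formula.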

But it was not easy to measure how far away these formations were, or consequently to determine whether they were situated within our galaxy or not. The solution was discovered by Edwin Hubble, a young astronomer working at the Mount Wilson Observatory in California, equipped with what at the time was the world’s most powerful telescope.

The technique employed was based on the study of Cepheids, pulsating stars of variable luminosity or brightness. Hubble begins his work just a few years after the death of Henrietta Swan Leavitt, one of the first American women astronomers, a young scientist who had contributed enormously to this field and received, as is so often the case, no appropriate recognition. At the beginning of the twentieth century it was considered unthinkable that a woman could use a telescope, and the few young women who did enter science were typically relegated to subordinate roles. Leavitt was entrusted with the role of human “computer,” a wholly secondary and badly paid job: her task consisted of examining, one after another, thousands of photographic plates containing images taken through telescopes, and recording the characteristics of stars and other celestial objects. She was assigned, in particular, the task of measuring and cataloguing the apparent brightness of these stars.

The young astronomer focused her studies on the stars with variable luminosity belonging to the Small Magellanic Cloud, a nebula which at the time was thought to be part of our own galaxy. It was Leavitt’s incisive observation that the brightest of these stars were also those with the longest pulsation period. Once this correlation was established, an estimate of the absolute brightness of a star could be obtained, which in turn would allow its distance to be measured. The brightness of an object falls off as the inverse square of its distance from the observer, so once its absolute brightness is known, one need only measure the apparent brightness to calculate the distance.
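The inverse-square reasoning can be made concrete. This is a minimal sketch of the standard-candle idea in arbitrary units, not a reconstruction of Leavitt’s or Hubble’s actual calculations: given a source of known absolute luminosity, the measured flux pins down the distance.

```python
import math

def flux(luminosity: float, distance: float) -> float:
    """Apparent brightness (flux) of a source: inverse-square law."""
    return luminosity / (4 * math.pi * distance**2)

def distance_from_flux(luminosity: float, observed_flux: float) -> float:
    """Distance to a 'standard candle' of known absolute luminosity."""
    return math.sqrt(luminosity / (4 * math.pi * observed_flux))

L = 1.0  # absolute luminosity, arbitrary units

# A source moved twice as far away appears four times fainter...
assert math.isclose(flux(L, 2.0), flux(L, 1.0) / 4)

# ...and the relation inverts cleanly: recover the distance from the flux.
d = 7.5
assert math.isclose(distance_from_flux(L, flux(L, d)), d)
```

For Cepheids, the period-luminosity correlation supplies the absolute luminosity; the inverse-square law then does the rest.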

Leavitt measured the relation between luminosity and period in the Cepheid variables of the Small Magellanic Cloud, and by hypothesizing that the stars were largely at the same distance, she was able to construct a scale of intrinsic luminosity, starting from the visible ones recorded on the plates.

Thanks to the incredible intuition of this brilliant young astronomer, we have at our disposal standard candles, that is to say light sources of known intensity, through which it is possible to deduce an absolute measure of distance.

This is what Hubble did when he used the Cepheids of the Andromeda nebula to reach the conclusion that these celestial bodies are too far away to be part of our Milky Way.

Lemaître was familiar with the first measurements made by Hubble, which not only placed these nebulae beyond our galaxy but also endowed them with an impressive speed of recession. His theory of an expanding universe made it possible to explain these unprecedented observations, as long as it was accepted that an enormous system was involved, immensely bigger than anything previously supposed. A gigantic structure in which there are countless galaxies similar to our own, with everything inclined to move away from everything else.

After having placed the Earth at the centre of the universe for thousands of years, and having reluctantly accepted that it is just one of the many bodies that rotate around the Sun, a further, final illusion suddenly crumbles. The solar system and our beloved Milky Way have no special position. We are an insignificant component of an anonymous galaxy—just one among the myriad of others to be found throughout the universe. And as if this was not enough, the entire system changes over time. Like all material objects it had a point of origin, and it will in all probability also have an end.

Lemaître’s intuition, confirmed by Hubble’s measurements, provided the basis for nothing less than a new vision of the world. In his original article, written in French, the astronomer-priest had gone so far as to predict a relationship of strict proportionality between distance and the speed at which astronomical objects recede. If his idea about the expanding universe was right, the more distant galaxies would have to move away from us at higher speeds, and would consequently exhibit a greater red shift. And this is precisely the result that Hubble obtained as his catalogue of observations grew in complexity and richness. But for a long time, Lemaître’s intuition was ignored because the Belgian journal in which he’d published his article had such a limited circulation. For this reason, until very recently, the scientific world had always referred to this correlation as “Hubble’s law.”
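The strict proportionality Lemaître predicted, now written as the Hubble–Lemaître law v = H0 × d, can be sketched numerically. The value of H0 used below (roughly 70 km/s per megaparsec) is a modern round figure for illustration, far more precise than anything available to Lemaître or Hubble.

```python
H0 = 70.0  # Hubble constant, km/s per megaparsec (modern round value)

def recession_velocity_km_s(distance_mpc: float) -> float:
    """Hubble-Lemaitre law: recession velocity v = H0 * d."""
    return H0 * distance_mpc

# A galaxy twice as distant recedes twice as fast: the signature of
# uniform expansion, with no privileged centre anywhere.
for d in (10, 100, 500):
    print(f"{d:>4} Mpc -> {recession_velocity_km_s(d):>8.0f} km/s")
```

This is exactly the pattern Hubble’s growing catalogue of redshifts revealed: greater distance, greater redshift, in direct proportion.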

Thanks to careful work of historical reconstruction, the contribution of the Belgian scientist has finally been recognized. It took almost a century, but today the relation that made it possible to establish the essentially dynamic nature of the universe is called, appropriately, the “Hubble–Lemaître law.” In the early 1930s, faced with large amounts of experimental data, Einstein also ended up abandoning his initial scepticism. Legend would have us believe that when reluctantly admitting that the Belgian priest and the American astronomer were right, the eminent scientist regretted his previous failure to understand, remarking that the cosmological constant “had proved to be the biggest blunder I have made in my life.”

Starting from an initial state in rapid expansion, there was no need to introduce this ad hoc correction, and so the cosmological constant disappeared for many decades from the fundamental equation of cosmology. By an irony of sorts, however, there would be a further reversal in the second half of the twentieth century when the discovery of dark energy caused the term that had so tormented its inventor to be reintroduced.

It was Lemaître once more who was the first to speculate that the expansion of the universe could actually be accelerating, and who, not by chance, kept Einstein’s cosmological constant in the equation, albeit reduced to a much lower value. Lemaître described the birth of the universe as a process that had taken place some time between 10 and 20 billion years ago, starting from an elementary state which he called the “primeval atom.” His hypothesis drew together the most advanced scientific theories of the period and the numerous mythological narratives that made everything originate from a kind of cosmic egg. More than that, it established the connection between microcosm and macrocosm that would prove so very fruitful in the coming decades.

From the outset, the formulation of this groundbreaking theory produced a great deal of perplexity. In truth, world opinion was otherwise engaged: the Wall Street Crash of 1929, the emergence of Fascism and Nazism in Europe, the many indications that the entire planet was about to descend into another global war. But even in scientific contexts where there was interest, the skepticism directed at the new cosmological theory was extremely strong. A good number of the most eminent scientists of the age refused even to countenance the idea of a beginning to space-time, or a birth of the universe. The problem lay in the fact that it bore a terrible resemblance to the biblical Genesis, and to the creation theory advocated by many religions. And if this wasn’t bad enough, the first proponent of the theory happened to be a priest as well as a scientist, and a Roman Catholic one at that.

The idea of an eternal universe, of an uncreated and everlasting stationary state, had first been supported by Aristotle, and it still fascinated many scientists. One of the best known of these was Fred Hoyle, a British astronomer who simply considered the theory proposed by Lemaître to be utterly repugnant. Hoyle remained convinced by his own ideas right up until his death in 2001. In 1949, in a program made for BBC radio, it was Hoyle who coined the term “Big Bang”—a description he intended as derogatory. Ironically, the image of a great explosion that Hoyle had used with the intention of ridiculing the new cosmological theory ended up penetrating so deeply into the collective imagination that it contributed significantly to its success.


One of the bastions of the most tenacious opposition to the theory was provided by Soviet science. For decades, Soviet scientists stigmatized the Big Bang as a pseudoscientific and idealistic theory that hypothesized a form of creationism—far too similar to the religious kind not to be deeply suspect. It mattered little, for them, that Lemaître had scrupulously separated science from faith, to the extent of reacting with horror when in 1951 Pope Pius XII could not resist the temptation of referring to the Big Bang described by scientists as resembling the biblical moment of Creation. It was an attempt by the Pope to provide a sort of scientific basis for creationism, to reinforce the rational basis of faith—and Lemaître strongly objected to it.

It was experimental results, once again, that would determine the definitive success of Big Bang theory. Among the theoretical developments of the new cosmological theory there had been, in the 1950s, the prediction of a radiation diffused throughout the universe: fossil waves, relics of the moment when photons irrevocably separated from matter, still fluctuating around us. These very weak electromagnetic waves, stretched over billions of years by the expansion of space-time, would give the interstellar void a characteristic temperature of a few degrees kelvin.

The stunning discovery that confirmed this was made almost by chance in 1964 by the American astronomers Arno Penzias and Robert Wilson. The pair had been working for weeks to mend an antenna they hoped to use for radioastronomical observations in the microwave region, but they had failed to eliminate an annoying signal that seemed to be coming from every direction at once. At first they had assumed that it was interference caused by a radio station transmitting in the vicinity of the laboratory; then they had thought it might be electromagnetic disturbance connected with various activities in nearby New York. After checking even that a pair of pigeons that had nested in the antenna—leaving a coating of whitish dielectric material, also known as pigeon poo—were not responsible, they stopped searching and published their results in a short letter. The discovery of cosmic microwave background (CMB) radiation emanating from all directions, and the observation that the universe had a temperature of a few degrees kelvin, that is to say around –270 degrees Celsius, sealed the success of the new theory, now indisputable. Penzias and Wilson had effectively recorded the echo of the Big Bang, the mother of all catastrophes, the primal event, the proof that everything had begun 13.8 billion years ago.

__________________________________


Excerpted from Genesis: The Story of How Everything Began by Guido Tonelli, translated by Erica Segre and Simon Carnell. Published by Farrar, Straus and Giroux in April 2021. Copyright © Giangiacomo Feltrinelli Editore, Milano. Translation copyright © 2020 by Simon Carnell and Erica Segre. All rights reserved.

Guido Tonelli

Guido Tonelli is an Italian particle physicist who played a key role in the discovery of the Higgs boson, the so-called God particle, which earned François Englert and Peter Higgs the 2013 Nobel Prize in Physics. For his contributions to the field, Tonelli was made a commendatore of the Order of Merit of the Italian Republic in 2012 and was awarded the Enrico Fermi Prize from the Italian Physical Society and the $3 million Special Breakthrough Prize in Fundamental Physics. He is a professor of general physics at the University of Pisa and a visiting scientist at the European Organization for Nuclear Research (CERN).

Recovering Sex Addict Assures Friends They Can Still Fuck Around Him

Friday 12:25PM (theonion.com)

EDMOND, OK—Explaining how he doesn’t want his newfound abstinence to infringe on their having a good time, Doug Chandler, a recovering sex addict, assured his group of friends at a party Friday that they can still fuck around him. “You guys should totally feel free to have sex while I’m around,” said Chandler, explaining that, while he himself would not partake, his inner circle shouldn’t feel weird about stuffing each other’s holes or bringing themselves to orgasm in his presence. “If you guys decide to bone, I might just dry hump a couch cushion or jack it on my own, but please, don’t let me deter you from pounding away at each other. I promise I’m not judging you guys or anything, I just gotta do what’s best for me.” At press time, Chandler woke up on the couch drenched in bodily fluids after relapsing that night.

My Cancer Journey — 4/19

Nedhenry April 19, 2021 Medium.com

I haven’t written anything in a while and thought I’d give you an update. I had my last of 6 chemo infusions last Friday. The fatigue has gotten much worse over time, they tell me, due to the cumulative effects of the chemo. They also tell me I will get my energy back as the drugs leave my system over the rest of this month and most of May. I will get a full PET CT scan on May 24th and meet the oncologist on May 26th to get the final verdict. I am feeling pretty good about the cancer mostly due to the interim scan done after the third infusion. And I have to hope my energy level returns so I can get back to some exercise and movement again. The neuropathy is still present in my left foot but PT seems to be helping, albeit very slowly. So I do the PT at home as much as I can and keep trying to move those toes again. So that’s the medical update.

I know when I get into a stream of consciousness flow my mind can sometimes go to places I don’t really control. It just sort of flows out. I got into one of those deep discussions that brought me to a self-diagnosis of a specific psychological issue with myself. I went too far in that discussion but I am still processing my life (and my neurosis) through this cancer time. I want to have the courage to be open about my life — all of it — good and bad — but I don’t know that that can happen at the present time. One reason for this is what I now know is something very real: chemo brain fog. Besides the fatigue, I can tell that my thinking is not nearly as sharp or clear as it is normally. I think this will wear off as the drugs leave my body. It has affected memory, cognition and mental stamina. So for the last few weeks I confess I have mostly been passing time. Back to watching too much TV but also trying to get outside on the deck to enjoy the gorgeous spring weather we’ve had here in Atlanta. I do get the occasional visitor on the deck and am still getting help from neighbors and friends with shopping and small chores so I am not yet getting out in the population. Even though I have been vaccinated for Covid, they still do not know the efficacy of the vaccine for cancer patients with compromised immunity. So I am choosing to err on the side of caution until I build back more strength.

I have heard from several of you — some who I did not even know read this — with some personal and empathic messages and emails. I haven’t gotten back to all of you yet — mostly due to the constant fatigue but perhaps a touch of laziness that has crept back into my routine. Right now, as I recover, I am letting myself pretty much do what I feel like doing and eat what I feel like eating and just get through this aggressive chemo treatment and see where I am on the other side of it. My goal has been to get done with it any way I can and begin the recovery from chemo, and that process began with the last treatment last Friday. So hopefully my mind will get clearer and my energy level will increase and I can come up with something interesting to write. I am still looking for the insights from cancer and I know there is gold to mine. Cancer will be with me one way or another for the rest of my life. I think I will be a cancer survivor and will end up at the cancer center for scans on a regular basis from now on. That’s the best case. I don’t want to ponder the worst case.

I know this is short but I wanted to let those of you who are following me know what was going on. There is Light at the end of the tunnel and I can see it.

White Right: Meeting the Enemy (Emmy Award Winning Documentary)

An Our Life documentary in which Emmy award-winning film-maker Deeyah Khan meets US neo-Nazis and white nationalists face-to-face, and attends America’s biggest and most violent far right rally in recent years. Khan, who has received death threats in the past after advocating diversity and multiculturalism, tries to get behind the violent ideology in a bid to understand the personal and political reasons behind the apparent resurgence of far right extremism in the US. This film was first broadcast on 11 Sep 2017. Our Life brings you fascinating stories of social interest from around the world. You can discover award-winning documentaries, films and groundbreaking reports that capture the complexities of our daily life, with stories that will entertain, inspire and inform. Content distributed by ITV Studios.