J.G. Ballard: My Favorite Books

The renowned English writer reflects on the literature that shaped his imagination.


By: J.G. Ballard (thereader.mitpress.mit.edu)

J. G. Ballard (1930-2009) was a colossal figure in English literature and an imaginative force of the 20th century. Alongside seminal novels — from the notorious “Crash” (1973) to the semi-autobiographical “Empire of the Sun” (1984) — Ballard was a sought-after reviewer and commentator, publishing journalism, memoir, and cultural criticism in a variety of forms. The following essay, in which Ballard reflects on the evolving impact of literature throughout his life, is excerpted from a new volume that collects the most significant short nonfiction of Ballard’s 50-year career.


As I grow older — I’m now in my early 60s — the books of my childhood seem more and more vivid, while most of those that I read 10 or even five years ago are completely forgotten. Not only can I remember, half a century later, my first readings of “Treasure Island” and “Robinson Crusoe,” but I can sense quite clearly my feelings at the time — all the wide-eyed excitement of a seven-year-old, and that curious vulnerability, the fear that my imagination might be overwhelmed by the richness of these invented worlds. Even now, simply thinking about Long John Silver or the waves on Crusoe’s island stirs me far more than reading the original text. I suspect that these childhood tales have long since left their pages and taken on a second life inside my head.

This essay is excerpted from the book “J.G. Ballard: Selected Nonfiction, 1962-2007,” edited by Mark Blacklock.

By contrast, I can scarcely recall what I read in my 30s and 40s. Like many people of my age, my reading of the great works of Western literature was over by the time I was 20. In the three or four years of my late teens I devoured an entire library of classic and modern fiction, from Cervantes to Kafka, Jane Austen to Camus, often at the rate of a novel a day. Trying to find my way through the grey light of postwar, austerity Britain, it was a relief to step into the rich and larger-spirited world of the great novelists. I’m sure that the ground-plan of my imagination was drawn long before I went up to Cambridge in 1949.

In this respect I differed completely from my children, who began to read (I suspect) only after they had left their universities. Like many parents who brought up teenagers in the 1970s, it worried me that my children were more interested in going to pop concerts than in reading “Pride and Prejudice” or “The Brothers Karamazov” — how naive I must have been. But it seemed to me then that they were missing something vital to the growth of their imaginations, that radical reordering of the world that only the great novelists can achieve.

I now see that I was completely wrong to worry, and that their sense of priorities was right — the heady, optimistic world of pop culture, which I had never experienced, was the important one for them to explore. Jane Austen and Dostoyevsky could wait until they had gained the maturity in their 20s and 30s to appreciate and understand these writers, far more meaningfully than I could have done at 16 or 17.

In fact I now regret that so much of my reading took place during my late adolescence, long before I had any adult experience of the world, long before I had fallen in love, learned to understand my parents, earned my own living and had time to reflect on the world’s ways. It may be that my intense adolescent reading actually handicapped me in the process of growing up — in all senses my own children and their contemporaries strike me as more mature, reflective and more open to the possibilities of their own talents than I was at their age. I seriously wonder what Kafka and Dostoyevsky, Sartre and Camus could have meant to me. That same handicap I see borne today by those people who spend their university years reading English literature — scarcely a degree subject at all and about as rigorous a discipline as music criticism — before gaining the experience to make sense of the exquisite moral dilemmas that their tutors are so devoted to teasing out.

The early childhood reading that I remember so vividly was largely shaped by the city in which I was born and brought up. Shanghai was one of the most polyglot cities in the world, a vast metropolis governed by the British and French but otherwise an American zone of influence. I remember reading children’s editions of “Alice in Wonderland,” “Robinson Crusoe,” and Swift’s “Gulliver’s Travels” at the same time as American comics and magazines. Alice, the Red Queen and Man Friday crowded a mental landscape also occupied by Superman, Buck Rogers and Flash Gordon. My favorite American comic strip was Terry and the Pirates, a wonderful Oriental farrago of Chinese warlords, dragon ladies and antique pagodas that had the added excitement for me of being set in the China where I lived, an impossibly exotic realm for which I searched in vain among Shanghai’s Manhattan-style department stores and nightclubs. I can no longer remember my nursery reading, though my mother, once a schoolteacher, fortunately had taught me to read before I entered school at the age of five.

There were no cheerful posters or visual aids in those days, apart from a few threatening maps, in which the world was drenched red by the British Empire. The headmaster was a ferocious English clergyman whose preferred bible was “Kennedy’s Latin Primer.” From the age of six we were terrorized through two hours of Latin a day, and were only saved from his merciless regime by the Japanese attack on Pearl Harbor (though he would have been pleased to know that, sitting the School Certificate in England after the war, I and a group of boys tried to substitute a Latin oral for the French, which we all detested).

Once home from school, reading played the roles now filled by television, radio, cinema, visits to theme parks and museums (there were none in Shanghai), the local record shop and McDonald’s. Left to myself for long periods, I read everything I could find — not only American comics, but Time, Life, the Saturday Evening Post and the New Yorker. At the same time I read the childhood classics — “Peter Pan,” the Pooh books and the genuinely strange William series, with their Ionesco-like picture of an oddly empty middle-class England. Without being able to identify exactly what, I knew that something was missing, and in due course received a large shock when, in 1946, I discovered the invisible class who constituted three-quarters of the population but never appeared in the Chums and Boys’ Own Paper annuals.

Later, when I was seven or eight, came “The Arabian Nights,” Hans Andersen and the Grimm brothers, anthologies of Victorian ghost stories and tales of terror, illustrated with threatening, Beardsley-like drawings that projected an inner world as weird as the surrealists’. Looking back on my childhood reading, I’m struck by how frightening most of it was, and I’m glad that my own children were never exposed to those gruesome tales and eerie colored plates with their airless Pre-Raphaelite gloom, unearthly complexions and haunted infants with almost autistic stares. The overbearing moralistic tone was explicit in Charles Kingsley’s “The Water-Babies,” a masterpiece in its bizarre way, but one of the most unpleasant works of fiction I have ever read before or since. The same tone could be heard through so much of children’s fiction, as if childhood itself and the child’s imagination were maladies to be repressed and punished.

The greatest exception was “Treasure Island,” frightening but in an exhilarating and positive way — I hope that I have been influenced by Stevenson as much as by Conrad and Graham Greene, but I suspect that “The Water-Babies” and all those sinister fairy tales played a far more important part in shaping my imagination. Even at the age of 10 or 11 I recognized that something strangely morbid hovered over their pages, and that dispersing this chilling miasma might make more sense of the world I was living in than Stevenson’s robust yarns. During the three years that I was interned by the Japanese my reading followed a new set of fracture lines.

The 2,000 internees carried with them into the camp a substantial library that circulated from cubicle to cubicle, bunk to bunk, and was my first exposure to adult fiction — popular American bestsellers, Reader’s Digest condensed books, Somerset Maugham and Sinclair Lewis, Steinbeck and H. G. Wells. From all of them, I like to think, I learned the importance of sheer storytelling, a quality which was about to leave the serious English novel, and even now has scarcely returned.

Arriving in England in 1946, I was faced with the incomprehensible strangeness of English life, for which my childhood reading had prepared me in more ways than I realized. Fortunately, I soon discovered that the whole of late 19th- and 20th-century literature lay waiting for me, a vast compendium of human case histories that stemmed from a similar source. In the next four or five years I stopped reading only to go to the cinema.

The Hollywood films that kept hope alive — “Citizen Kane,” “Sunset Boulevard,” “The Big Sleep” and “White Heat” — seemed to form a continuum with the novels of Hemingway and Nathanael West, Kafka and Camus. At about the same time I found my way to psychoanalysis and surrealism, and together this hot mix fueled the short stories that I was already writing and strongly influenced my decision to read medicine.

There were also false starts, and doubtful acquaintances. “Ulysses” overwhelmed me when I read it in the sixth form, and from then on there seemed to be no point in writing anything that didn’t follow doggedly on the heels of Joyce’s masterpiece. It was certainly the wrong model for me, and may have been partly responsible for my late start as a writer — I was 26 when my first short story was published, and 33 before I wrote my first novel. But bad company is always the best, and leaves a reserve of memories on which one can draw for ever.

For reasons that I have never understood, once my own professional career was under way I almost stopped reading altogether. For the next 20 years I was still digesting the extraordinary body of fiction and non-fiction that I had read at school and at Cambridge. From the 1950s and 1960s I remember “The White Goddess” by Robert Graves, Genet’s “Our Lady of the Flowers,” Durrell’s “Justine” and Dalí’s “Secret Life,” then Heller’s “Catch-22” and, above all, the novels of William Burroughs — “The Naked Lunch” restored my faith in the novel at a time, the heyday of C. P. Snow, Anthony Powell and Kingsley Amis, when it had begun to flag.

Since then I’ve continued on my magpie way, and in the last 10 years have found that I read more and more, in particular the 19th- and 20th-century classics that I speed-read in my teens. Most of them are totally different from the books I remember. I have always been a voracious reader of what I call invisible literatures — scientific journals, technical manuals, pharmaceutical company brochures, think-tank internal documents, PR company position papers — part of that universe of published material to which most literate people have scarcely any access but which provides the most potent compost for the imagination. I never read my own fiction.

In compiling my list of 10 favorite books I have selected not those that I think are literature’s masterpieces, but simply those that I have read most frequently in the past five years. I strongly recommend Patrick Trevor-Roper’s “The World through Blunted Sight” to anyone interested in the influence of the eye’s physiology on the work of poets and painters. “The Black Box” consists of cockpit voice-recorder transcripts (not all involving fatal crashes), and is a remarkable tribute to the courage and stoicism of professional flight crews. My copy of the Los Angeles “Yellow Pages” I stole from the Beverly Hilton Hotel three years ago; it has been a fund of extraordinary material, as surrealist in its way as Dalí’s autobiography.

  • “The Day of the Locust,” Nathanael West
  • “Collected Short Stories,” Ernest Hemingway
  • “The Rime of the Ancient Mariner,” Samuel Taylor Coleridge
  • “The Annotated Alice,” ed. Martin Gardner
  • “The World through Blunted Sight,” Patrick Trevor-Roper
  • “The Naked Lunch,” William Burroughs
  • “The Black Box,” ed. Malcolm MacPherson
  • “Los Angeles Yellow Pages”
  • “America,” Jean Baudrillard
  • “The Secret Life of Salvador Dalí,” Salvador Dalí

(“The Pleasure of Reading,” 1992)


James Graham “J.G.” Ballard (1930-2009) was a British author and journalist. Best known for his dystopic works of science fiction, his novels include “Crash” (1973) and “High-Rise” (1975). His semi-autobiographical novel “Empire of the Sun” (1984) was adapted by Steven Spielberg into the 1987 film of the same name. This essay is excerpted from the collection “J.G. Ballard: Selected Nonfiction, 1962-2007.”

How We Sort the World: Gregory Murphy on the Psychology of Categories

“Every category is a simplification to some degree; it throws away information about the thing.”

By: The Editors (thereader.mitpress.mit.edu)

The minute we are born — sometimes even before — we are categorized. From there, classifications dog our every step: to school, work, the doctor’s office, and even the grave.

Despite the vast diversity and individuality in every life, we seek patterns, organization, and control. Or, as cognitive psychologist Gregory Murphy puts it: “We put an awful lot of effort into trying to figure out and convince others of just what kind of person someone is, what kind of action something was, and even what kind of object something is.”

Gregory L. Murphy is the author of “Categories We Live By: How We Classify Everyone and Everything”

In his new book “Categories We Live By,” Murphy, a professor emeritus at New York University who has spent a career studying concepts and categories, considers the categories we create to manage life’s sprawling diversity. Analyzing everything from bureaucracy’s innumerable categorizations to the minutiae of language, his book reveals how these categories are imposed on us and how that imposition affects our everyday lives.

In the interview that follows, Murphy discusses his lifelong interest in studying concepts and categories — “the glue that holds our mental world together,” as he’s described them — the impact of his research on his worldview, and the complex relationship between language and categorization across cultures.


What got you interested in studying concepts and categories, and how did this eventually lead to your new book?

I started studying categories in my first year of graduate school. The psychology of categories was fairly new and an exciting area then, because of the important discoveries by Eleanor Rosch and her students (some reviewed in the book). It was clear that the traditional way of studying how people learn categories was no longer appropriate, and we had to develop new techniques and ask new questions. So, it was a great time to be in the field.

But, as happens in many scientific fields, the work began to get increasingly technical and specialized. By this century, much of the work relied on mathematical models and computer simulations in order to test different theories. This is a normal and generally useful progression, but it meant that the average person couldn’t read the work or benefit from the field’s discoveries.

We often feel that once we determine the thing’s category, then all questions will be answered about it.

As I became a “senior figure” (ahem) in the field, I began to think that to some degree we’ve lost the forest for the trees, so I wanted to write about some of the important things we have learned about categories over the past 50 years, many of which are interesting or useful (and some both). Not all of this comes from psychological research; there have also been important contributions from philosophy, linguistics (especially semantics), and anthropology. I also wanted to get beyond theoretical investigations and talk about a lot of concrete examples, from the life-changing to the trivial. I learned a lot from doing research into some of these examples and thought others would find these cases interesting as well.

I want to get back to some of those examples. But first, how do you think your work has changed the way you view the world?

I think it has made me more skeptical of arguments about identities, of people and all kinds of things. We put an awful lot of effort into trying to figure out and convince others of just what kind of person someone is, what kind of action something was, and even what kind of object something is. We often feel that once we determine the thing’s category, then all questions will be answered about it: The person is qualified or unqualified; it’s the right thing to do or the wrong thing; the object must be made out of wood. But division into categories is often arbitrary — not completely, but in some respects. And every category is a simplification to some degree; it throws away information about the thing. If you call me an academic, that is no doubt true, but that doesn’t include a lot of other information about me, nor do I correspond exactly to your stereotype of an academic. (OK, I actually do, but a lot of academics don’t.)

There are a number of different ways to make categories, and they don’t always agree with one another. At some point, we have to make a principled decision about what the category is and why that is the best way to think about it, because the world isn’t pre-divided into nice categories that we simply have to notice.

That’s not to say that disputes over categorization aren’t important. They may be necessary, especially in rule-making contexts like the law or in organizations. But you have to take the entire process with a grain of salt and recognize the limits of what the category can tell you.

You suggest that any human culture that had language must have also had categories. How do words and their meanings influence the way we categorize and understand the world around us?

The relationship between language and categories is pretty complex. One thing to keep in mind is that every content word essentially specifies a category, of all the things that can be called by that word. So, a verb like jump specifies a category of actions that we would describe using that verb. A noun like cat specifies a category of the animals we would call “cat.” However, word meanings are often very complex and extended. They can include very different things.

An obvious example is homonyms, when two different and unrelated words happen to have the same name. From the point of view of the child learning a language, these are confusing, and one of the joys of parenthood is being forced to explain why the flying bat is called the same thing as a baseball bat, why we hear through ears but eat ears of corn, how a calf can refer both to a baby cow and to the muscles on the bottom of the leg, and so on. Children soon figure out that “that’s just the way it is” and have to go along with the crazy word meanings that adults have given to them. But from the perspective of the Whorfian hypothesis — which proposes that the language we use influences and shapes our thought processes and worldview — all such cases are troubling. I don’t think that English speakers believe that baseball bats are in any way similar to the mammal, even though they have the same name. There are limits on how much language can shape our thoughts.

Of course, most words are not homonyms, but many very common words have a lot of different, though related, meanings. So, the word church can be used to refer to a religious organization but also to a building (“The church burned down.”). Surely organizations and buildings aren’t the same kind of thing. So, language is an imperfect clue to what categories we have. Research comparing similar words of different languages confirms this: The categories picked out by different languages can be very different, even when people’s categories are quite similar.

Language is an imperfect clue to what categories we have.

Still, we have to admit that word meanings can help or hinder the learning about categories. If you’re a new skier, hearing people use words like popcorn or powder to describe different kinds of snow may help you learn what different kinds of snow there are and how they affect your skiing. But it seems pretty clear that this vocabulary developed historically in response to the categories that skiers had already figured out. The words didn’t come first, and then people learned about snow later. If there is a category that is important to us, society tends to find a label to refer to it.

How much do categories differ across languages and cultures? Do you believe certain languages inherently foster different categorization systems?

It’s hard to give a specific answer to this. In some respects, the categories of different cultures are surprisingly similar. In other respects, they’re (not-so-surprisingly) different. In the basic, common categories of everyday life, different cultures seem to have pretty similar categories. The differences may be in more social and abstract categories. There’s actually a field of study — ethnobiology — that compares the categories of the natural world across cultures, and to some degree, people pick out the same categories, or at least the same kinds of categories even in very different cultures. Of course there are differences when parts of the natural world occur in one place but not in another. I don’t know the different kinds of birds of South Asia, and I would imagine that the people of South Asia don’t generally know the different American sparrows or wild mammals. That is not very surprising. However, when people in different parts of the world make categories of plants or animals, they tend to make about the same kinds of categories, often based on the biological genus: categories like oaks, sparrows, trout, or roses.

Anthropologists who have studied nonindustrial societies often find that their categories of plants and animals are pretty similar to ours at a middle level. The differences are often found in the more specific and more abstract categories. An abstract category like arthropods is something that scientists may find useful and that you could be taught in your biology class, but it’s not something that people who are directly in contact with animals find at all useful. Arthropods don’t look very similar, don’t do the same behaviors, aren’t all edible, and so on. Knowing about crabs is useful; knowing that crabs and millipedes are in the same category… not so much. Similarly, people often know about very specific categories only when they are actually used in agriculture, hunting, or for some specific purpose. If you raise corn, you will want to know which corn is good to plant, which is only good for animal fodder or is decorative, and so on. So, you’ll have categories of different kinds of corn to help distinguish them. If you don’t raise corn, then you’ll probably just see it all as corn.

So, for most natural categories, the effect of culture is in providing focus and expertise to develop more categories of the things you’re interested in. Occasionally, that focus may result in categories that are specifically designed for a single purpose, like edible fish that can be caught in the local rivers. This is not a “natural” category, and it wouldn’t be known to people who don’t fish or don’t live in the area, but it could be very useful to the subset of people who rely on fishing or engage in it as a hobby. We may all form categories like that in our mini-cultures of special interests and activities.

The second part of your book is organized around specific case studies exploring categories ranging from critical (medicine, race, mortality) to trivial (peanut butter, almond milk, parking regulations). We could probably dedicate separate discussions to each of those, but since it’s not feasible to cover any of them comprehensively here, can you share the broader insights these diverse case studies collectively offer about the nature of categories?

Theories of categories are useful only insofar as they tell us about real examples. If they can only be applied to artificial and simplified situations (as in many psychology experiments), they aren’t of much use. So, my goal was to show how the different aspects of categorization play out in real examples. Some of these are outright silly, such as whether a burrito is a sandwich, or the famous case of whether Pringles are potato chips. Others are extremely important to the people involved and to society as a whole, such as racial or legal categories.

What’s interesting to me is that the same issues arise for the trivial and serious categories, such as the difficulty of making a definition that works, beliefs that categories have essences, and problems of fuzzy boundaries. Sometimes the law steps in, to tell us just how much peanut something must have to be sold as peanut butter or exactly what criteria must be met for someone to be declared dead. But such rules still allow for unclear cases, and often people don’t know or don’t follow those rules in their everyday lives. So, there are at least two categories: the official one based on the law and the one that most people actually use.

Is there anything else you think our readers should know?

I had the idea early in my career that science might step in and tell us what the answers are, at least for some categories. However, many scientific categories have the same problems as our everyday categories do. The case of Pluto is one that many people know. During the debate over whether Pluto is a planet, it came to light that there was no definition of what a planet is. Oops! And I discovered, to my dismay (which I share in the book), that there is little agreement in biology over how to determine what a species is. The problem, I argue, is not in science or in our human institutions, but in the world, which is extraordinarily complex. There are many different valid ways of dividing up the entities in the world, and we have to figure out which ones are useful to us. That’s why understanding how and why we make categories is itself an important goal. The world doesn’t give us categories for free — we have to make them ourselves.


Gregory L. Murphy is Professor Emeritus of Psychology at New York University. He is the author of “The Big Book of Concepts” and “Categories We Live By.”

Tarot Card for February 29: The Three of Cups

The Three of Cups

The Lord of Abundance is a warm and joyous card, which indicates a rare and precious type of love – a love which, once experienced, reminds us of the richness of shared emotion and commitment.

It is also a card which refers to the wellspring of fertility, whether spiritual or material. Here we see the first seeds sown of a bright and bountiful harvest. Accordingly, the card will sometimes come up to indicate high days of celebration – like weddings or other intimate celebrations of love.

The emotional quality represented by this card is deep and unusual – indicating the love felt not only by lovers, but also the love between close friends, or family. These relationships are gifts, which need to be cared for with great respect and gratitude.

The Lord of Abundance offers one word of warning – this type of love cannot be created, nor engineered. When it occurs in our lives we are lucky and blessed. Some people spend a lifetime looking for such depth of emotion. And sometimes, people try to pretend it exists where it does not. So when you raise this card in a reading be aware that you are fortunate indeed!

Neural Correlates of Liberalism and Conservatism in a Post-communist Country


Jan Kremláček,1,* Daniel Musil,2 Jana Langrová,1 and Martin Palecek2


Abstract

A previous experiment showed a strong correlation between conservatism/liberalism and brain activity linked to an error response (r = 0.59, p < 0.001) in the USA political environment. We re-ran the experiment on a larger and age-homogeneous group (n = 100, 50 females and 50 males, aged 20–26 years) in the Czech Republic, a European country with a different sociocultural environment and history. We did not find a relationship between the brain activity connected to conflict monitoring and self-reported conservatism/liberalism orientation (ρ = −0.11, p = 0.297) or conservatism/liberalism validated against the USA agenda (ρ = −0.01, p = 0.910). Rather than simply replicating the previous study, we thus tested the hypothesis under a different socio-cultural context. Our results support a view of self-reported or validated conservative or liberal attitudes as a complex behavioral pattern. Such a pattern cannot be determined with statistical significance using a simple Go-NoGo detection task without accounting for confounding factors such as age and socio-cultural conditions. Sufficiently powered studies are warranted to evaluate this neuro-political controversy.

Keywords: error related negativity, political attitude, liberalism, conservatism, neuropolitics

Introduction

Political orientation significantly affects behavior and decision making at the individual and social level. There are increasing attempts, within political neuroscience, to describe the factors that shape political orientation across anthropological lines (Jost et al., 2014). Neuroscientific methods with objective measurements can verify or generate hypotheses relating political orientation to a conservative stance, characterized by closedness and a holding to tradition, vs. a liberal orientation, associated with openness and bringing about a change in the order (Jost and Amodio, 2012). The number of studies advocating a strong correlation between neurocognitive settings and political preference continues to increase over time (for review, see Schreiber, 2017).

A recent study (Amodio et al., 2007), hereinafter referred to as Am2007, demonstrated that a person’s self-reported political attitude may be closely linked to a neural correlate accompanying a repeated error response, the error related negativity (ERN), in a simple laboratory detection task. The authors found a statistically significant relationship between the amplitude of the ERN and self-evaluation on the liberalism/conservatism (L/C) axis. They demonstrated that participants who presented a higher degree of liberalism in their answers had a larger ERN amplitude, and this fact was interpreted as a higher sensitivity to incentives for change to established rules.

The ERN represents a negative component of event related potentials (ERPs) culminating above the medial-frontal cortex, about 50 ms after the moment of an error response (Gehring et al., 2012). Probable neural origins of the ERN are localized in the frontomedial cortex, the anterior and rostral cingulate cortex, and the adjacent supplementary motor cortex (Iannaccone et al., 2015), as was also demonstrated by intracortical recordings (Brázdil et al., 2005). The ERN, characterized as a response to conflict, is behavior- and context-dependent. It changes with several factors, such as personal characteristics, psychopathology, age, the condition of neurotransmitters (diet, previous experiences) or the relationship to the task being performed (Gehring et al., 2012; Larson et al., 2014). As is the case for other ERPs, the ERN is sensitive to the various aforementioned factors; however, its behavioral interpretation is difficult, as an indistinguishable ERN change may be caused by different conditions. We therefore found the results explaining political ideology by way of the ERN (Amodio et al., 2007; Jost and Amodio, 2012; Weissflog et al., 2013; Jost et al., 2014) appealing, and we decided to test whether they are robust with respect to different sociocultural environments. We attempted to replicate the results of the previous Am2007 study in the Czech Republic — a democratic country with a communist history (1948–1989), in which self-evaluation of L/C need not correspond to the general political orientation. To account for different environmental and historical conditions, we calibrated political preferences against the values associated with a typical agenda of US liberalism and conservatism, in addition to the self-report. We expected to verify the extent to which the original hypothesis of a correlation between neurological setting and political preference is specific to self-reported L/C and the US environment.

More at: https://www.ncbi.nlm.nih.gov/

Concentrate

By Heather Williams, H.W., M. (with permission)

February 5, 2024 (TheProsperos.org)

Can you focus your attention on the present moment for one minute?

CONCENTRATE = to focus one’s powers, efforts or attention; to bring or direct toward a common center or objective

QUESTION: Can you focus on the present for one minute?

STORY: Focusing our attention is the key to opening the door to our Higher Capacities. Are you aware that your Higher Capacities are within you and available now? Right now I need to access my higher capacities because I am packing up all my studio stuff for our move to Tennessee. I’m lifting heavy books, picture frames, large drawing papers, tables and artist equipment as I attempt to fit them properly into a variety of boxes. I’m on box #52 right now! I am 77 years old and all this physical labor is draining my energy. Just to let you know how I access my Higher Capacities – I stop and take a break and listen to my body without judging, worrying or trying to change anything. I just focus my attention, relax any tension, listen, feel and love the Divine Energy that I AM. And then I move forward with gratitude. Wherever you are right now – feel gratitude for the Divine Energy that you are.

QUOTES

“Energy flows where attention goes.” ~ Tony Robbins

“Do not dwell in the past, do not dream of the future, concentrate the mind on the present moment.” ~ Buddha

“Our task is to find teaching methods that continually engage the whole human being. We would not succeed in this endeavor if we failed to concentrate on developing the human sense of art.” ~ Rudolf Steiner

“The Western day is indeed nearing when the inner science of self-control will be found as necessary as the outer conquest of nature. This new Atomic Age will see men’s minds sobered and broadened by the now scientifically indisputable truth that matter is in reality a concentrate of energy.” ~ Paramahansa Yogananda

EXERCISE

STOP.

Sit quietly. Assume an erect posture.

Sense the breath.

Sit calmly and focus your attention on the Present moment.

Notice any tightness or tension in your body.

Relax. Breathe.

Get your pen and paper and write words or draw lines expressing yourself, paying attention to the Divine Energy that you are.

Take charge of your most valuable asset: your attention.

Move forward into your new day focusing your attention on the Divine Energy that is always flowing through you.

Richard Lewis, Acerbic Comedian and Character Actor, Dies at 76

Richard Lewis (1947-2024)

After rising to prominence for his stand-up act, he became a regular in movies and TV, most recently on “Curb Your Enthusiasm.”

The comedian Richard Lewis in 2014. He was among the best-known names in a generation of comedians who came of age during the 1970s and ’80s. Credit…Michael Schwartz/WireImage

By Clay Risen

Feb. 28, 2024 (NYTimes.com)

Richard Lewis, the stand-up comedian who first achieved fame in the 1970s and ’80s with his trademark acerbic, dark sense of humor, and who later parlayed that quality into an acting career that included movies like “Robin Hood: Men in Tights” and a recurring role as himself on HBO’s “Curb Your Enthusiasm,” died on Tuesday at his home in Los Angeles. He was 76.

His publicist, Jeff Abraham, said the cause was a heart attack. Mr. Lewis announced last year that he had Parkinson’s disease.

Mr. Lewis was among the best-known names in a generation of comedians who came of age during the 1970s and ’80s, marked by a world-weary, sarcastic wit that mapped well onto the urban malaise in which many of them plied their trade.

After finding success as a comedian in New York nightclubs, he became a regular on late-night talk shows, favored as much for his tight routine as for his casual, open affability as an interviewee. He appeared on “Late Night With David Letterman” 48 times.

And he was at the forefront of the boom in stand-up comedy that came with the expansion of cable television in the late 1980s.

Mr. Lewis performing as a stand-up in Las Vegas in 2005. He called himself “the Prince of Pain.” Credit…Ethan Miller/Getty Images


Albert Einstein as Serial Killer and Misogynist: One Context of Discovery

Paul Austin Murphy


Published in Paul Austin Murphy’s Essays on Philosophy

Feb 18, 2024

(i) Introduction
(ii) Sokal’s Sex Life and Kripke’s Schooldays
(iii) Robert P. Crease on the Envy, Rivalry and Anger of Scientists
(iv) Paul Davies on Newton’s Religious Context of Discovery
(v) Rupert Sheldrake Against the Context-of-Discovery Distinction
(vi) Slavoj Žižek Against the Context-of-Discovery Distinction

Many readers will have noticed the numerous social-media memes which have Albert Einstein’s words embedded within them. Relevantly, most of these memes aren’t actually about his scientific theories and views, or even about science itself. Instead, most of them are posted to defend the view that Einstein was religious, or spiritual, or a socialist, or this, or that, or the other. Other memes concentrate on Einstein’s private life.

More particularly, users of Facebook and social media generally might also have noted how often spiritual idealists (see here) and New Agers quote a handful of passages from German and Austrian physicists, mainly spoken (or written) in the first three decades of the 20th century. The relevant point is that these much-quoted scientists rarely made an effort to tie their non-scientific views to their actual physical (i.e., technical) theories. What’s more, these physicists hardly referred to “Eastern thought” and spiritual matters in the first place. Hence the very few passages which spiritual commentators, New Agers, etc. quote and embed in their memes.

Erwin Schrödinger is a good example of all this.

He actually went out of his way to disconnect his interest in (loosely called) Eastern religion from his actual technical physics. [See note 1.]

New Agers, on the other hand, do the opposite of this.

Such people go out of their way to connect — specifically — quantum physics to their prior spiritual beliefs.

So are all these (as it’s put in philosophy) contexts of discovery important to the scientific theories of particular scientists?

Indeed, are they contexts of discovery at all?

Sokal’s Sex Life and Kripke’s Schooldays

Jean Bricmont and Alan Sokal

In extreme terms, it doesn’t matter if the scientist discussed (or memed) was also, say, a serial killer, a Nazi, a neoliberal, a narcissist, etc. In Einstein’s particular case, it doesn’t matter that he was (according to Metro newspaper) a “misogynist” and “neanderthal”.

Yet, to take just one example, the French critic and writer Philippe Sollers was very interested in contexts of discovery. Or at least he was interested in the sex life and personal psychology of a mathematician and physicist.

Philippe Sollers’ words are to be found in this article.

So now take the American mathematician and physicist Alan Sokal and the physicist and philosopher Jean Bricmont, and their words (from the book Intellectual Impostures) on Philippe Sollers’ words on… well, Sokal and Bricmont themselves:

“[] Philippe Sollers asserts [] that our private lives ‘merit investigation’: ‘What do they like? What paintings do they have on their walls? What are their wives like? How are those beautiful abstract statements translated in their daily and sexual lives?’

“Well! Let’s concede once and for all that we are arrogant, mediocre, sexually frustrated scientists, ignorant in philosophy and enslaved by a scientistic ideology (neoconservative or hard-line Marxist, take your pick).”

In any case, how did Alan Sokal react to Philippe Sollers’ words?

In the following way:

“But please tell us what this implies concerning the validity or invalidity of our arguments.”

All that said, popular-science writers often become very fixated on biographical detail. Perhaps they do so for two related, and obvious, reasons:

(1) To popularise science and scientists
(2) To help sell their books.

Let’s now take a rather less sexy context of discovery.

Saul Kripke

The American philosopher and logician Saul Kripke was once honest enough to admit (in this video) that his initial interest in Ludwig Wittgenstein was solely down to who taught him at university when he was a student. (He mentions “three faculty members” particularly.) Of course, alongside the fact that his teachers had an interest in Wittgenstein (specifically the “late Wittgenstein” of the Philosophical Investigations) would have been the fact that Kripke actually developed an independent interest in what Wittgenstein wrote. Having said that, Kripke also confesses that he didn’t at first see the importance of Wittgenstein or his Philosophical Investigations. Indeed, he didn’t “develop [his] own take on what [Wittgenstein] was doing until 1962 and 1963”…

Then again, Kripke was still only 22 in 1962…

But who cares about all this biographical detail!

Well, a lot of people do.

Indeed, there’s nothing wrong with that.

The question is what relevance does it have to Kripke’s actual philosophical ideas in, say, logic, metaphysics, philosophy of mind, etc?

The context of Kripke’s discovery of Wittgenstein, in this case, will be of no interest at all to those strict philosophers who’re solely interested in the context of justifying Kripke’s analysis of Wittgenstein.

Thus, if biography and context really are so important when it comes to Sokal’s arguments (or to the scientific theories of scientists), then take the following letter (see here), which Einstein wrote to his wife in 1919:

“You will make sure:
– that my clothes and laundry are kept in good order;
– that I will receive my three meals regularly in my room;
– that my bedroom and study are kept neat, and especially that my desk is left for my use only.

“You will renounce all personal relations with me insofar as they are not completely necessary for social reasons. Specifically, You will forego:
– my sitting at home with you;
– my going out or travelling with you.

“You will obey the following points in your relations with me:
– you will not expect any intimacy from me, nor will you reproach me in any way;
– you will stop talking to me if I request it;
– you will leave my bedroom or study immediately without protest if I request it.

“You will undertake not to belittle me in front of our children, either through words or behaviour.”

Have these words ever appeared in any social-media memes?

That said, Einstein’s letter to his wife (of the time) has indeed been tackled by journalists, and by some science writers too.

The point here is that if contexts of discovery can be used in positive ways, then they can be used in negative ways too.

More relevantly, if positive contexts of discovery can be tied to actual scientific theories and ideas, then so too can negative ones.

So can we pick and choose contexts of discovery according to taste?

Now let’s just pretend that Einstein was a serial killer, or a neanderthal, or a misogynist — or perhaps all three at once.

How would, say, the historian of science and philosopher Robert P. Crease deal with these possibilities?

Robert P. Crease on the Envy, Rivalry and Anger of Scientists

Robert Crease isn’t just interested in contexts of discovery: he actually ties the scientific theories of physicists to their (as it were) extra-curricular activities and beliefs…

Or at least he seems to!

Crease believes that such scientific theories actually embody aspects of the (as it were) biographical details of the scientists who created them.

Robert P. Crease

So take this passage from Crease:

“[I]n Einstein [] we can see glimpses of what lies beyond the standard model: an account of science in which character and personal feeling are not marginal to the scientific process, not a prelude to a person’s scientific labours, but what sustains them and carried them forward.”

Of course it can still be asked if Crease is actually arguing that all this “character and personal feeling” is somehow embodied in scientists’ scientific theories.

So if it is, then how is it so?

What’s more, what is Crease actually pitting himself against?

Crease is pitting himself against what he (ironically) calls “the standard model [of] [m]ost histories of science”. Crease writes:

“It emphasises the collective and impersonal dimension, and downplays the experiences of specific individuals. The principle structural ingredients are discoveries, instruments, measurements, and theories.”

Is this true?

Do historians really “downplay[] the experiences of specific individuals”?

Not in the cases I’ve read.

Indeed, if we move away from historians of science, popular-science writers certainly don’t!

So perhaps that’s the very distinction Crease is making: the distinction between historians of science and popular-science writers. (Crease has himself written such a popular-science book — the one these quotes come from.)

Moreover, even if what Crease says about historians of science is true, then what are we to make of all these experiences of specific individuals from a scientific point of view?

Of course, much — very much! — has been made of them from all sorts of other points of view.

However, what relevance do the specific experiences of specific scientists have to their specific scientific theories and ideas?

In more detail, Crease also tells us that

“[e]nvy, rivalry, anger, disbelief, conviction, stress, hope, despair, dejection — all can be found in the documents”.

Has any historian of science argued that scientists don’t experience envy, rivalry, anger, disbelief, conviction, stress, hope, despair, dejection? Has any scientist himself ever argued this about his fellow scientists?

Like the physicist and popular-science writer Paul Davies (to be discussed in a moment), Robert Crease believes that all of this experience becomes embodied in the actual scientific theories of scientists…

Or at least I think that’s what he believes.

As already stated, it’s hard to see what Crease is getting at otherwise.

In any case, Paul Davies certainly does believe this.

Paul Davies on Newton’s Religious Contexts of Discovery

The popular-science writer and physicist Paul Davies goes much further too.

For example, Davies is keen to stress Isaac Newton’s religious beliefs, and how they influenced (or even determined) his actual physics. (See Davies’s ‘Taking Science on Faith’ for the New York Times.)

Thus, if this is true (in this case at least), then the context of discovery can’t be separated from the context of justification at all. That’s because Davies is making a direct link between Newton’s scientific theories and his religious beliefs.

On the other hand, if that link between contexts of discovery and actual scientific theories isn’t there, then (at its crudest) it wouldn’t make the slightest bit of difference to Newton’s scientific theories and ideas whether he too was a serial killer, believed in pink goblins, was a Christian fundamentalist, or stole all his ideas from Leibniz.

Of course, much has also been made of Newton’s (as it were) religious credentials by other people.

For example, spiritual-but-not-religious people and New Agers have made much of Newton’s alchemy, Biblical prophecies, chronologies, fixation with numbers, interpretations of the Bible, and his takes on the philosopher’s stone and sacred geometry.

What’s more, their biographical and historical detail about Newton may well be largely correct!

Rupert Sheldrake Against the Distinction

Say that Einstein was a serial killer or a misogynist.

Many scientists (as it were) get around all this with the distinction they make between science itself and (flesh and blood) scientists.

Philosophers attempt a similar job with their own distinction between the context of discovery and the context of justification.

However, some critics of science (e.g., postmodernists, poststructuralists, religious and spiritual people, some Marxists, psychoanalysts, Jungians, etc.) have argued that this distinction is phoney. And it’s phoney largely because they see it as an idealisation and a simplification.

Take the related case of the scientist, writer and parapsychology researcher Rupert Sheldrake.

It can be assumed that Sheldrake will be aware of this context-of-discovery/context-of-justification distinction. However, it can also be assumed that he doesn’t really buy it — at least not unquestioningly. (I doubt that anyone accepts it unquestioningly. I don’t.)

Take the following passage:

“To this day, scientists pretend that they are rather like disembodied minds. Unlike other human activities, science is supposed to be uniquely objective. Scientific papers are conventionally written in an impersonal style, seemingly devoid of emotions. Conclusions are meant to follow from facts by a logical process of reasoning, such as that which might be followed by a computer, if machines with sufficient artificial intelligence could ever be constructed. Nobody is ever seen doing anything, methods are followed, phenomena observed, and measurements are made, preferably with instruments. Everything is reported in the passive voice. Even schoolchildren learn this style, and practise it in their laboratory notebooks: ‘a test tube was taken…’

“All research scientists know that this process is artificial; they are not disembodied minds, uninfluenced by emotion.”

So what if scientists did believe that they have “disembodied minds”? Where would that lead us? Would it impact on the attitude we have to their actual scientific theories, and to science generally?

Moreover, do any of Sheldrake’s psychological analyses of scientists (even if genuinely insightful) matter? (Readers may now ask: Matter to whom? Matter in which respects?)

In any case, isn’t this simply Sheldrake’s biased interpretation of what scientists believe? Indeed, even if (most? many? some?) scientists do believe that they have a monopoly on what people call “the objective facts”, then that still wouldn’t entail a commitment to believing that their minds need to be disembodied in order to access those objective facts.

The philosopher Žižek (who uses the words “objective truth-values”) is also against the distinction. However, he never actually uses the technical terms “context of discovery” and “context of justification”.

Slavoj Žižek Against the Distinction

Firstly, Žižek tells us that the “standard distinction” is between

“the social or psychological conditions of a scientific invention and its objective truth-value”.

Žižek has a problem with this division (or distinction).

He continues:

“The least one can say about it is that the very distinction between the (empirical, contingent sociopsychological) genesis of a certain scientific formation and its objective truth-value, independent of the conditions of this genesis, already presupposes a set of distinctions (between genesis and truth-value, etc.) which are by no means self-evident.”

Let’s firstly comment on certain terms which Žižek uses, and which are questionable.

Take his words “objective truth-value”.

Surely one can make a distinction between the context of discovery (or Žižek’s “genesis”) and the context of justification, and still not have a strong (or even any) commitment to objective truth-values.

For one, what does “objective” even mean in this context?

The words “scientific invention” also (to use Žižek’s own words) “presuppose[] a set of distinctions” which Žižek himself is making. In this case, he appears to believe that scientific theories (or even experimental findings) are little (or even nothing) more than inventions.

Of course, “invention” is a loaded term. Nonetheless, Žižek is on fairly strong ground here because some quantum theorists (who’re also physicists) stress this.

Take “quantum Bayesianism” (QBism) and the position of Christopher Fuchs.

Fuchs believes that “quantum states represent observers’ personal information, expectations and degrees of belief”. More relevantly, Fuchs believes that this

“allows one to see all quantum measurements events as little ‘moments of creation’, rather than as revealing anything pre-existent”.

Now what could be more (as it were) constructionist, and, more relevantly, biographical than stressing (scientific) “moments of creation”?

Note:

(1) See Walter Moore’s excellent biography: Schrödinger: Life and Thought. Moore goes into much detail on Schrödinger’s interest in Schopenhauer, Vedanta, etc.


Written by Paul Austin Murphy

Editor for Paul Austin Murphy’s Essays on Philosophy

MY PHILOSOPHY: https://paulaustinmurphypam.blogspot.com/ My Flickr Account: https://www.flickr.com/photos/193304911@N06/

Book: “A City Cannot Be a Work of Art: Learning Economics and Social Theory From Jane Jacobs”


Sanford Ikeda

This open access book connects Jane Jacobs’s celebrated urban analysis to her ideas on economics and social theory. While Jacobs is a legend in the field of urbanism and famous for challenging and profoundly influencing urban planning and design, her theoretical contributions – although central to her criticisms of and proposals for public policy – are frequently overlooked even by her most enthusiastic admirers. This book argues that Jacobs’s insight that “a city cannot be a work of art” underlies both her ideas on planning and her understanding of economic development and social cooperation. It shows how the theory of the market process and Jacobs’s theory of urban processes are useful complements – an example of what economists and urbanists can learn from each other. This Jacobs-cum-market-process perspective offers new theoretical, historical, and policy analyses of cities, more realistic and coherent than standard accounts by either economists or urbanists.

(Goodreads.com)

Consciousness, sexuality, androgyny, futurism, space, the arts, science, astrology, democracy, humor, books, movies and more