Spiritual teacher Deepak Chopra shows you how to be your true self.
By Deepak Chopra (Oprah.com)
It’s important to be yourself. We’re all told that, and it’s true—we know the damage done by being false to ourselves and to others. But I’d like to suggest that “being yourself” goes much deeper. Most people don’t know how much wisdom and power reside in the self, which is not the everyday self that gets mixed up with all the business of life, but a deeper self, which I call, for simplicity’s sake, the true self.
The true self isn’t a familiar term to most people, although it is close to what religion calls your soul, the purest part of yourself. But religion depends upon faith, and that’s not the issue here. You can actually test if you have such a true self. How? You know that sugar is sweet because you can taste it. Likewise, the true self has certain qualities that belong to it the way sweetness belongs to sugar. If you can experience these qualities, repeat them, learn to cultivate them and finally make them a natural part of yourself, the true self has come to life.
The trick is distinguishing what is your true self and what is not. If we had a switch that could turn off the everyday self and turn on the true self, matters would be much simpler. But human nature is divided. There are moments when you feel secure, accepted, peaceful and certain. At those moments, you are experiencing the true self. At other moments, you experience the opposite, and then you are in the grip of the everyday self, or the ego-self. The trouble is that both sides are convincing. When you feel overwhelmed by stress, crisis, doubts and insecurity, the true self might as well not exist. You are experiencing a different reality colored by the state of your mind.
At those dark, tough moments, try to get some outside perspective about what is happening. The qualities of the everyday self and the true self are actually very different:
1. The true self is certain and clear about things. The everyday self is swayed by countless outside influences, leading to confusion.
2. The true self is stable. The everyday self shifts constantly.
3. The true self is driven by a deep sense of truth. The everyday self is driven by the ego, the unending demands of “I, me, mine.”
4. The true self is at peace. The everyday self is easily agitated and disturbed.
5. The true self is love. The everyday self, lacking love, seeks it from outside sources.
Look at the qualities of the true self: self-reliant, evolutionary, loving, creative, knowing, accepting and peaceful. Whenever anyone is in crisis, whether the problem is a troubled marriage or difficulties at work or over money, they will make the best decisions if they utilize these qualities.
Sadly, we are more likely to be driven by selfishness, panic, uncertainty, impulsiveness, survival instincts and other qualities associated with the ego-self. That’s how society trained us. We measure our worth by our achievements and possessions. Money and status feed the ego, and society rewards those who play the game of getting and spending with skill and drive.
But look at the faulty choices millions of people make. They choose material rewards in the hope that money can buy happiness, or at least all the nice trappings of a happy life. They plunge into careers that offer success but end up with little inner fulfillment. Doesn’t it make sense instead that the foundation for every choice should be the true self? The true self understands what you really want and what you really need to be joyful. It creates a much stronger, more expansive foundation for your life than any foundation the ego-self can provide, since the ego-self is rooted in fear and insecurity.
Once you begin to recognize and encourage the qualities of the true self, your life will begin to change. You’ll make better choices. You’ll expand your awareness. You’ll discover and encourage your purpose. You’ll challenge yourself to meet new goals.
The greatest spiritual secret in the world is that every problem has a spiritual solution, not because every prayer is answered by a higher power, but because the true self, once discovered, is the source of creativity, intelligence and personal growth. No external solution has such power. The true self is the basis for being deeply optimistic about how life turns out and who you really are, behind the screen of doubt and confusion. The path to it isn’t simply inspiring; it’s the source of solutions that emerge from within.
By Charlie Huenemann, Professor of Philosophy at Utah State University
Our identity is not just about our internal memories, beliefs, hopes and fears. We are made up equally of our environment, of things outside ourselves – of how we react to people and places, for example. With this in mind, the dream of uploading ourselves to the cloud has a fatal flaw. We can upload our inner selves to the cloud, but we are nothing without the outside world in which we live and the people in it, writes Charlie Huenemann.
We will turn to the possibility of uploading our ethereal souls to supercomputers in just a moment, but first let’s talk about keys and locks. A key is an ingenious little device with a handle (or bow) and a blade with some particular series of cuts in it. One could provide a very precise mathematical description of those cuts, and one might wish to do so because that specific series of cuts — exactly those, in that sequence — is what makes a key that particular key and no other. For this reason, we might well suppose that the shape of the key is the essence of the key, so that one need not look beyond the key itself to find its essence.
But this isn’t quite right. For there is a reason each key has the shape it has, and that reason has to do with some specific lock. That lock has some complementary configuration (often involving some number of little pegs of various lengths) which will allow a cylinder to turn only if the key, with its corresponding shape, is inserted into the keyhole. That is why each key has the shape it has: because of its relationship to some lock. If the lock is lost, the key’s shape doesn’t really matter anymore. It might as well have a different shape, for it is now just a decorative piece of metal, perhaps to be hung on a necklace or glued onto a piece of art. What was essential to it is no longer essential, for its relationship to other things in the world has been altered. It is not its intrinsic shape, but its relation to other things – or the lack of such relations – that determines what the essence of a particular key is.
Okay, now to selves and consciousness. Many scientists and philosophers have taken an approach to selves that resembles the first approach we took to keys. What is essential to a self is internal to that self: our memories, our beliefs, our attitudes and desires and hopes and fears, and so on. These are the cuts in our blade, and for each person the cuts are here or there, deeper or shallower, and it is these differences that make us so different from one another. Some scientists and philosophers hope that they will find some kind of explanatory correlation between the cuts of our blades and the configurations of our neurons, believing that there must be a clear connection between the two. Other scientists and philosophers (well, mainly philosophers) believe no such connection can be found, because we are dealing with very different sorts of shapes: shapes of the subjective or semantic or meaningful sort, as opposed to shapes of the more geometrical sort. These philosophers will argue that even if we were to know the full architecture of some particular brain, we still could not possibly know whether it was the brain of a cobbler or a prince. The only way to know whose brain it is would be to be it, somehow. Then we would know it “from the inside”— an inside we can never see with our eyes, no matter how acute our vision.
But these scientists and philosophers are forgetting about locks. Just as keys have the shapes they have because of the locks they fit, people have the selves they have because of the lives they fit. My memories and beliefs are shaped by what I have experienced, but they are also tuned to the people I ordinarily meet, what I take to be their expectations of me, and networks of obligations and responsibilities I negotiate on a daily basis. My attitudes, desires, hopes, and fears are quite fluid, adapting to my circumstances and the attitudes of others around me. I am the particular self I am because of my on-going, changing relationships to people around me, as well as to the culture, economics, and politics of my time and place.
Indeed, this is where the key-and-lock analogy breaks down, for keys and locks are relatively stable over time. Lives are rivers in constant change and flow. We fit into them because we are able to shape-shift as the circumstances of life require. A better picture would be of a lock undergoing continuous change, reflected in a corresponding and continuous change in the shape of the key; but at that point one wonders why we would want to employ the key-and-lock analogy in the first place. Still, it’s handy, so let’s go with it.
Just as a key’s shape loses its meaningfulness when its lock is taken away, a human self unravels when its life is taken away. In a powerful and insightful essay, Lisa Guenther explores the effects of solitary confinement on a human being and establishes that without a social world to plug into, a human being undergoes a torturous loss of self; even one’s sense of reality becomes unhinged when there is no one else around to confirm it or push back against it. As Guenther writes, “our ‘here’ is intertwined with their ‘there’”, and who we are is intertwined with who they are. That is to say, more simply, that the key loses its shape when there is no corresponding lock.
We are not islands, as John Donne wrote:
No man is an island entire of itself; every man
is a piece of the continent, a part of the main;
if a clod be washed away by the sea, Europe
is the less, as well as if a promontory were, as
well as any manner of thy friends or of thine
own were; any man’s death diminishes me,
because I am involved in mankind.
And therefore never send to know for whom
the bell tolls; it tolls for thee.
“Any man’s death diminishes me”: and not just out of deeply sympathetic sorrow or because of our common plight, but because of the fact that my being consists in his, in part; his seeing me lends me something that can be seen. As Descartes should have said, cogitamus, ergo sumus: “We think, therefore we are”.
We might use this insight to understand what we look for as we try to expose instances of deception. If I suspect Alice or Bob to be imposters, I may well ask them things only Alice or Bob would know; but as we know from watching spy movies, imposters will excel at recounting such trivia. What will be more telling, and nearly impossible for any imposter to achieve, are the ways in which Alice and Bob should interact with us, and how they should reply to our jokes or respond to our stories. The distinctive ways in which Alice and Bob should change in response to our changes is what will tell us whether we really have Alice and Bob, or some imposters. What is telling is the way they fit into our lives, and how we fit into theirs. Alice and Bob will have to be absolute wizards at picking locks, if they are to succeed.
Now I fear that this is going to make being uploaded into the cloud a decidedly difficult affair, and not merely because of difficulties in coding. The usual strategy assumes that moving a self into the cloud is “only” a matter of figuring out the program that governs the behavior of one’s neural net, and reinstalling that program on some other machine’s components. It is philosophically the same as having a key copied, even though the blade may have something like 100 billion cuts. But without the corresponding lock, the key will be lost. A neural net, installed on whatever substrate, will not capture a self unless that neural net is giving and taking in a larger network of neural nets, a virtual archipelago of selves defining their boundaries through incessant bickering and chattering negotiations. No angel without its choir, as it were.
We cannot go anywhere alone, it turns out, and this will include any trips to the cloud. To paraphrase the songwriting wit Tom Lehrer, we must all go together when we go; or at the very least we must travel with enough companions to allow for decent conversation. For if there is no we, there will be no me.
On April 18th, 2021, the Sun and Mercury are conjunct at 29° Aries, and Mercury transforms from a Morning Star into an Evening Star.
Each planet has two phases: Morning Star (when the planet rises before the Sun) and Evening Star (when the planet rises after the Sun).
These two distinct “star phases” are very important and reveal insights that we cannot get from the sign or house placement alone.
Let’s say your Mercury is in Gemini. People with Mercury in Gemini are known for their curiosity and fresh approach to life.
However, if Mercury is an Evening Star, then Mercury has slightly different qualities, which may complement or even contradict the qualities of the sign.
The planetary star phases and synodic cycles were very important concepts in traditional astrology. Back in the day, our ancestors got all their information from the most important source: the sky itself.
Modern astrology has somehow ‘forgotten’ about planetary cycles, but fortunately, we are slowly bringing this concept back, giving it the importance it deserves.
The Venus star type – the Venus Morning Star and Venus Evening Star – is by far the most popular topic, but ALL planets have Morning and Evening phases.
Every time a planet is conjunct the Sun, it transforms either from a Morning star into an Evening Star or vice versa.
Let’s Take Mercury
We have two types of Mercury-Sun conjunctions: one when Mercury is retrograde, and one when Mercury is direct.
When Mercury is retrograde, it transforms from an Evening Star into a Morning Star.
When Mercury is direct, it is so fast that it overtakes the Sun, transforming from a Morning Star into an Evening Star.
The current Mercury cycle started on February 8th, 2021 with the Mercury retrograde-Sun conjunction at 20° Aquarius.
On April 18th, 2021 we have the 2nd Sun-Mercury conjunction of the cycle – this time Mercury is direct. We are now in the middle of the current Mercury cycle.
Just as the lunar cycle has its Full Moon, the Sun-Mercury conjunction on April 18th, 2021 is the “Full Moon” – or “Full Mercury” – part of the cycle.
Mercury now starts to rise after the Sun, transforming from a Morning Star into an Evening Star. Mercury becomes less daring and curious and more efficient and contemplative.
At the time of the conjunction, you will receive important insights (Mercury) about yourself and your purpose in life (the Sun). This is a time of alignment and clarity.
If you’ve been feeling confused about whether or not to take on an opportunity, or start a new project, the Sun-Mercury cazimi-conjunction will bring you the clarity and confidence you need.
From April 18th, 2021 until the end of the current Mercury cycle on May 30th, Mercury is an Evening Star.
Mercury Evening Star is less interested in dreaming and strategizing, and more interested in implementation and follow-through.
Your Natal Mercury Star Phase
Mercury conjunct Sun and the Mercury cycle also influence your natal chart.
We are all born during a specific phase of the Mercury cycle, and this phase, or this Mercury star type, influences our thinking and communication style.
To find out what your Mercury star type is, look whether Mercury is before or after the Sun in your natal chart.
Mercury can only be found in the sign before the Sun, the same sign with the Sun, or in the sign after the Sun. So if you’re a Virgo Sun, your Mercury can only be in Leo, Virgo or Libra.
If you’re a Virgo Sun and your Mercury is in Leo, you’re a Mercury Morning Star, because Leo comes before Virgo. If your Mercury is in Libra, you’re a Mercury Evening Star, because Libra follows Virgo.
If your Mercury is in the same sign as the Sun, look at the degree. If Mercury is at an earlier degree than the Sun, it is a Morning Star. If it is at a later degree, it is an Evening Star.
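If you want to automate this check, the rule above is easy to encode. What follows is a minimal Python sketch, not part of the original article: it assumes you already know the Sun’s and Mercury’s ecliptic longitudes in degrees (0–360) from a natal chart, and the function name mercury_star_phase is purely illustrative.

# Minimal sketch (illustrative, not from the article) of the rule above:
# Mercury at an earlier zodiacal degree than the Sun = Morning Star,
# at a later degree = Evening Star. Inputs are assumed to be ecliptic
# longitudes in degrees (0-360) read off a natal chart.
def mercury_star_phase(sun_longitude: float, mercury_longitude: float) -> str:
    """Classify natal Mercury as a Morning Star or an Evening Star."""
    # Signed separation wrapped into [-180, 180) so the zodiac's 360/0
    # boundary (late Pisces vs. early Aries) is handled correctly.
    separation = (mercury_longitude - sun_longitude + 180.0) % 360.0 - 180.0
    if separation < 0:
        return "Morning Star"   # Mercury earlier in the zodiac: rises before the Sun
    if separation > 0:
        return "Evening Star"   # Mercury later in the zodiac: rises after the Sun
    return "Cazimi (exact conjunction with the Sun)"

# Example from the text: a Virgo Sun at 10 degrees Virgo (160) with Mercury
# at 25 degrees Leo (145) gives a Morning Star; Mercury at 2 degrees Libra
# (182) would give an Evening Star.
print(mercury_star_phase(160.0, 145.0))  # Morning Star
print(mercury_star_phase(160.0, 182.0))  # Evening Star

The wrap-around step only matters for charts where the Sun sits at the very end or very beginning of the zodiac; for any other placement, a simple comparison of the two degrees gives the same answer.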
What does it mean for a planet to be a Morning Star or an Evening Star?
Mercury Morning Star has Gemini-like qualities, while Mercury Evening Star has Virgo-like qualities.
Mercury Morning Star is more independent, more curious, innovative, but may lack experience.
Mercury Evening Star is more organized, more fluent, more experienced, but of course, may lack the enthusiasm and the curiosity of Mercury morning star.
The 4 Mercury Star Phases
There are two subtypes for each Mercury star phase, so we have a total of 4 different Mercury star phases. Each of these types will, of course, influence our personality in distinct ways.
If Mercury is Morning Star and Retrograde, then the person is extremely curious, has a beginner’s mind and a vivid imagination. Famous Mercury Retrograde Morning Star people are Steve Jobs, J.R.R. Tolkien, George Lucas, Princess Diana, and Lady Gaga.
If Mercury is Morning Star and Direct, it is close to the Sun, so it ‘borrows’ Solar qualities. It is idealistic, intuitive and visionary. People with this star type have an abstract, rather than a concrete, fact-based mind. They can quickly process information and be very mentally prolific, but may lack attention to detail. Famous Mercury Direct Morning Stars are Michelangelo, Mark Twain, Lewis Carroll, and Maya Angelou.
If Mercury is an Evening Star and Direct, Mercury is now familiar with the “Solar” agenda, and is concerned with follow-through and implementation. People with this star type are efficient and have a strong sense of purpose. Elon Musk, Martin Luther King, Mother Teresa, Gandhi and Oprah are all Mercury Direct Evening Stars.
If Mercury is an Evening Star and Retrograde, it is contemplative and artistic. People with this star type can become icons in their area of excellence. Examples of Mercury Retrograde Evening Stars: Gabriel Garcia Marquez, Salvador Dali, and Madonna.
Magic mushrooms have a long and rich history. Now scientists say they could play an important role in the future, with their active ingredient a promising treatment for depression.
The results from a small, phase two clinical trial have revealed that two doses of psilocybin appear to be as effective as the common antidepressant escitalopram in treating moderate to severe major depressive disorder, at least when combined with psychological therapy.
“I think it is fair to say that the results signal hope that we may be looking at a promising alternative treatment for depression,” said Dr Robin Carhart-Harris, head of the centre for psychedelic research at Imperial College London and a co-author of the study.
Carhart-Harris said psilocybin was thought to work in a fundamentally different way to escitalopram. While both act on the brain’s serotonin system, he said escitalopram seemed to work by helping people tolerate stress more easily.
“With a psychedelic it is more about a release of thought and feeling that, when guided with psychotherapy, produces positive outcomes,” he said, adding participants given psilocybin had often reported feeling they had got more fully to the root of why they were depressed.
However, Carhart-Harris cautioned against seeking out magic mushrooms – a class A drug in the UK – for DIY treatment.
“That would be an error of judgment,” he said. “We strongly believe that the … psychotherapy component is as important as the drug action.”
Over the six-week trial, 30 out of 59 adults with moderate to severe major depressive disorder were randomly allocated to receive two 25mg doses of psilocybin three weeks apart – a dose that Carhart-Harris said was high enough to produce the kind of experiences often described as existential or even “mystical”. The day after the first dose of psilocybin, this group began a daily placebo.
The other 29 participants were given two very low, or “inactive”, doses of psilocybin three weeks apart. This was to ensure any differences in outcomes between the groups would not simply be down to the expectation of being given psilocybin. The day after the first dose of psilocybin, this group began a daily dose of escitalopram, the strength of which increased over time.
Each psilocybin session – which lasted six hours, including a three- to four-hour “trip” for those on the high dose – was supervised by at least two mental health professionals, with the participants lying on their back, propped up by pillows and listening to emotionally evocative neoclassical music.
All participants received psychological therapy the day after a psilocybin session, as well as a phone or video call in the week after the first dose.
The results, published in the New England Journal of Medicine, reveal that after six weeks both groups showed, on average, a decrease in the severity of depressive symptoms, according to scores from a questionnaire completed by the participants. However, this reduction did not differ significantly between the two groups.
“Psilocybin therapy, as we predicted, works more quickly than escitalopram,” said Carhart-Harris.
He added that results from other scales were “tantalisingly suggestive of potential superiority of psilocybin therapy” not only for depression but other aspects of wellbeing. He warned the findings were not conclusive as the team did not take into account the number of comparisons being made.
However, the team noted that 57% of patients in the high-dose psilocybin group were judged to be in remission for their depression by the end of the six weeks, compared with 28% in the escitalopram group, while neither group had serious side-effects.
While the team said the results were promising, others said the study was not big enough to draw firm conclusions. Among other limitations, the majority of participants were white, middle-aged, highly educated and male, while participants may have been able to guess which treatment they received and there was no group given a placebo and therapy alone.
Anthony Cleare, professor of psychopharmacology at King’s College London, said the study provided “some of the most powerful evidence to date that psychedelics may have a role to play in the treatment of depression”.
However, he said far more data was needed before drugs such as psilocybin could be used outside of research, adding they would not replace existing treatments for depression.
In the UK, Samaritans can be contacted on 116 123 or email jo@samaritans.org. You can contact the mental health charity Mind by calling 0300 123 3393 or visiting mind.org.uk
For an empire that collapsed more than 1,500 years ago, ancient Rome maintains a powerful presence. About 1 billion people speak languages derived from Latin; Roman law shapes modern norms; and Roman architecture has been widely imitated. Christianity, which the empire embraced in its sunset years, remains the world’s largest religion. Yet all these enduring influences pale against Rome’s most important legacy: its fall. Had its empire not unravelled, or had it been replaced by a similarly overpowering successor, the world wouldn’t have become modern.
This isn’t the way that we ordinarily think about an event that has been lamented pretty much ever since it happened. In the late 18th century, in his monumental work The History of the Decline and Fall of the Roman Empire (1776-1788), the British historian Edward Gibbon called it ‘the greatest, perhaps, and most awful scene in the history of mankind’. Tankloads of ink have been expended on explaining it. Back in 1984, the German historian Alexander Demandt patiently compiled no fewer than 210 different reasons for Rome’s demise that had been put forward over time. And the flood of books and papers shows no sign of abating: most recently, disease and climate change have been pressed into service. Wouldn’t only a calamity of the first order warrant this kind of attention?
It’s true that Rome’s collapse reverberated widely, at least in the western – mostly European – half of its empire. (A shrinking portion of the eastern half, later known as Byzantium, survived for another millennium.) Although some regions were harder hit than others, none escaped unscathed. Monumental structures fell into disrepair; previously thriving cities emptied out; Rome itself turned into a shadow of its former grand self, with shepherds tending their flocks among the ruins. Trade and coin use thinned out, and the art of writing retreated. Population numbers plummeted.
But a few benefits were already being felt at the time. Roman power had fostered immense inequality: its collapse brought down the plutocratic ruling class, releasing the labouring masses from oppressive exploitation. The new Germanic rulers operated with lower overheads and proved less adept at collecting rents and taxes. Forensic archaeology reveals that people grew to be taller, likely thanks to reduced inequality, a better diet and lower disease loads. Yet these changes didn’t last.
The real payoff of Rome’s demise took much longer to emerge. When Goths, Vandals, Franks, Lombards and Anglo-Saxons carved up the empire, they broke the imperial order so thoroughly that it never returned. Their 5th-century takeover was only the beginning: in a very real sense, Rome’s decline continued well after its fall – turning Gibbon’s title on its head. When the Germans took charge, they initially relied on Roman institutions of governance to run their new kingdoms. But they did a poor job of maintaining that vital infrastructure. Before long, nobles and warriors made themselves at home on the lands whose yield kings had assigned to them. While this relieved rulers of the onerous need to count and tax the peasantry, it also starved them of revenue and made it harder for them to control their supporters.
When, in the year 800, the Frankish king Charlemagne decided that he was a new Roman emperor, it was already too late. In the following centuries, royal power declined as aristocrats asserted ever greater autonomy and knights set up their own castles. The Holy Roman Empire, established in Germany and northern Italy in 962, never properly functioned as a unified state. For much of the Middle Ages, power was widely dispersed among different groups. Kings claimed political supremacy but often found it hard to exercise control beyond their own domains. Nobles and their armed vassals wielded the bulk of military power. The Catholic Church, increasingly centralised under an ascendant papacy, had a lock on the dominant belief system. Bishops and abbots cooperated with secular authorities, but carefully guarded their prerogatives. Economic power was concentrated among feudal lords and in autonomous cities dominated by assertive associations of artisans and merchants.
The resultant landscape was a patchwork quilt of breathtaking complexity. Not only was Europe divided into numerous states great and small, those states were themselves split into duchies, counties, bishoprics and cities where nobles, warriors, clergy and traders vied for influence and resources. Aristocrats made sure to check royal power: the Magna Carta of 1215 is merely the best-known of a number of similar compacts drawn up all over Europe. In commercial cities, entrepreneurs formed guilds that governed their conduct. In some cases, urban residents took matters into their own hands, establishing independent communes managed by elected officials. In others, cities wrung charters from their overlords to confirm their rights and privileges. So did universities, which were organised as self-governing corporations of scholars.
Councils of royal advisers matured into early parliaments. Bringing together nobles and senior clergymen as well as representatives of cities and entire regions, these bodies came to hold the purse strings, compelling kings to negotiate over tax levies. So many different power structures intersected and overlapped, and fragmentation was so pervasive that no one side could ever claim the upper hand; locked into unceasing competition, all these groups had to bargain and compromise to get anything done. Power became constitutionalised, openly negotiable and formally partible; bargaining took place out in the open and followed established rules. However much kings liked to claim divine favour, their hands were often tied – and if they pushed too hard, neighbouring countries were ready to support disgruntled defectors.
This deeply entrenched pluralism turned out to be crucial once states became more centralised, which happened when population growth and economic growth triggered wars that strengthened kings. Yet different countries followed different trajectories. Some rulers managed to tighten the reins, leading toward the absolutism of the French Sun King Louis XIV; in other cases, the nobility called the shots. Sometimes parliaments held their own against ambitious sovereigns, and sometimes there were no kings at all and republics prevailed. The details hardly matter: what does is that all of this unfolded side by side. The educated knew that there was no single immutable order, and they were able to weigh the upsides and drawbacks of different ways of organising society.
Across the continent, stronger states meant fiercer competition among them. Ever costlier warfare became a defining feature of early modern Europe. Religious strife, driven by the Reformation, which broke the papal monopoly, poured fuel on the flames. Conflict also spurred expansion overseas: Europeans grabbed lands and trading posts in the Americas, Asia and Africa, more often than not just to deny access to their rivals. Merchant societies spearheaded many of these ventures, while public debt for funding constant war spawned bond markets. Capitalists advanced on all fronts, lending to governments, investing in colonies and trade, and extracting concessions. The state, in turn, looked after these vital allies, protecting them from rivals foreign and domestic.
Hardened by conflict, the European states became more integrated, slowly morphing into the nation-states of the modern era. Universal empire on a Roman scale was no longer an option. Like the Red Queen in Through the Looking-Glass, these rival states had to keep running just to stay in place – and speed up if they wanted to get ahead. Those that did – the Dutch, the British – became pioneers of a global capitalist order, while others laboured to catch up.
Nothing like this happened anywhere else in the world. The resilience of empire as a form of political organisation made sure of that. Wherever geography and ecology allowed large imperial structures to take root, they tended to persist: as empires fell, others took their place. China is the most prominent example. After the first emperor of Qin (he of terracotta-army fame) united the warring states in the late 3rd century BCE, monopoly power became the norm. Whenever dynasties failed and the state splintered, new dynasties emerged and rebuilt the empire. Over time, as such interludes grew shorter, imperial unity came to be seen as ineluctable, as the natural order of things, celebrated by elites and sustained by the ethnic and cultural homogenisation imposed on the populace.
China experienced an unusual degree of imperial continuity. Yet similar patterns of waxing and waning can be observed around the world: in the Middle East, in South and Southeast Asia, in Mexico, Peru and West Africa. After the fall of Rome, Europe west of Russia was the only exception, and remained a unique outlier for more than 1,500 years.
This wasn’t the only way in which western Europe proved uniquely exceptional. It was there that modernity took off – the Enlightenment, the Industrial Revolution, modern science and technology, and representative democracy, coupled with colonialism, stark racism and unprecedented environmental degradation.
Was that a coincidence? Historians, economists and political scientists have long argued about the causes of these transformative developments. Even as some theories have fallen by the wayside, from God’s will to white supremacy, there’s no shortage of competing explanations. The debate has turned into a minefield, as scholars who seek to understand why this particular bundle of changes appeared only in one part of the world wrestle with a heavy baggage of stereotypes and prejudices that threaten to cloud our judgment. But, as it turns out, there’s a shortcut. Almost without fail, all these different arguments have one thing in common. They’re deeply rooted in the fact that, after Rome fell, Europe was intensely fragmented, both between and within different countries. Pluralism is the common denominator.
If you side with those scholars who believe that political and economic institutions were the basis for modernising development, western Europe is the place for you. In an environment where bargaining trumped despotism and exit options were plentiful, rulers had more to gain from protecting entrepreneurs and capitalists than from fleecing them. Size also mattered: only in moderately sized countries could commercial interests hope to hold their own against aristocratic landlords. Smaller polities enjoyed greater capacity for inclusion, not least by means of parliamentary deliberations. The better medieval legacies of pluralism survived, the more such states developed in close engagement with organised representatives of civil society. International competition rewarded cohesion, mobilisation and innovation. The more governments expected from their citizens, the more they had to offer in return. State power, civic rights and economic progress advanced together.
But what if Europeans owed their later preeminence to the ruthless oppression and exploitation of colonial territories and plantation slavery? Those terrors too grew out of fragmentation: competition drove colonisation while commercial capital greased the wheels. Geography as such played second fiddle. It has been said that the Europeans rather than the Chinese reached the Americas first simply because the Pacific is much wider than the Atlantic. Yet successive Chinese empires failed to seize even nearby Taiwan until the late 17th century, and never showed much of an interest in the Philippines, let alone more distant Pacific islands. That made perfect sense: for an imperial court in charge of countless millions of people, such destinations held little appeal. (The Ming ‘treasure fleets’ that were dispatched into the Indian Ocean didn’t make any sense at all and were soon shut down.)
Large empires were generally indifferent to overseas exploration, and for the same reason. It was small, geographically peripheral cultures – from the ancient Phoenicians and Greeks to the Norse, Polynesians and Portuguese – that had the most to gain from striking out. And so they did. Had Europeans not sailed forth with reckless abandon, there would have been no colonies, no Bolivian silver, no slave trade, no plantations, no abundant cotton for the Lancashire mills. Capitalising on military skills honed by endless war, European powers escaped the perpetual stalemate on their own continent by exporting violence and conquest across the globe. Separated by entire oceans from the imperial heartlands, colonised populations could be squeezed much harder than would have been feasible back in Europe. Over time, much of the world turned into a subordinate periphery that fuelled European capitalism.
Yet brute force alone would have taken Europe only so far. Useful knowledge also played a vital role. There was no hope of transforming industry and medicine without dramatic advances in science and engineering. That posed a serious challenge: what if new insights and ways of doing things clashed with hallowed tradition or religious doctrine? Innovators had to be able to follow the evidence wherever it led, regardless of how many toes they stepped on in the process. That turned out to be a hard slog in Europe, as incumbents of all stripes – from priests to censors – were determined to defend their turf. However, it was even harder elsewhere. China’s imperial court sponsored the arts and sciences, but only as it saw fit. Caged in a huge empire, dissenters had nowhere else to go. In India and the Middle East, foreign-conquest regimes such as the Mughals and the Ottomans relied on the support of conservative religious authorities to shore up their legitimacy.
Europe’s pluralism provided much-needed space for disruptive innovation. As the powerful jostled for position, they favoured those whom others persecuted. The princes of Saxony shielded the heretic Martin Luther from their own emperor. John Calvin found refuge in Switzerland. Galileo and his ally Tommaso Campanella managed to play off different parties against each other. Paracelsus, Comenius, René Descartes, Thomas Hobbes, John Locke and Voltaire headline a veritable who’s who of refugee scholars and thinkers.
Over time, the creation of safe spaces for critical enquiry and experimentation allowed scientists to establish strict standards that cut through the usual thicket of political influence, theological vision and aesthetic preference: the principle that only empirical evidence counts. In addition, intense competition among rulers, merchants and colonisers fed an insatiable appetite for new techniques and gadgets. Thus, while gunpowder, the floating compass and printing were all invented in distant China, they were eagerly embraced and applied by Europeans vying for control over territory, trade and minds.
Paired with commercial expansion, political fragmentation also encouraged a change in societal values. In imperial states, coalitions of large landowners, military men and clerics usually called the shots. Such elite groups eyed merchants, artisans and bankers with suspicion and disdain: after all, weren’t farming, war and prayer much more honourable pursuits than profiting from markups and interest? For bourgeois attitudes to thrive, and for capitalists to enjoy protection from predatory intervention, these traditional snobberies had to lose their grip on the popular imagination. Smaller states that were deeply immersed in commercial operations led the way: first the city-states of Italy and the Hanseatic League, then the Netherlands and Britain.
In the end, once the Italian Renaissance had run its course, it was precisely those parts of western Europe where the legacies of Roman rule had faded most thoroughly, or where Rome had never held sway at all, that spearheaded political, economic and scientific progress: Britain, the Low Countries, northern France and northern Germany. It was there that Germanic traditions of communal decision-making survived the longest and that the Reformation precipitated yet another break from Rome. It was there that social values changed most profoundly, modern commercial capitalism took root, and science and industrial technology flourished. But that was also where the fiercest wars of the era were being hatched and fought.
We might well be forgiven for finding this combination of fracture, violence and growth baffling or even implausible. Wouldn’t it have been preferable to lead peaceful lives in a large and stable empire rather than on a continent where people were constantly at each other’s throats? Only if we think in the short term. Large-scale empire was indeed an extremely effective way of organising agrarian societies: by providing limited governance, it ensured a degree of peace and order while largely staying out of most people’s lives. Even taxes were generally quite modest. Designed to cater to the needs of a small ruling class and drawing heavily on the services of local elites, empires were relatively easy to build and cheap to maintain. But they came with built-in limitations: on liberties, on innovation, on sustainable growth.
Why was that? Influenced by Orientalising tropes about Asian societies, Western scholars used to think that, in traditional empires, human development was held back by despotism. We now know that this was at best a small part of the story. To be sure, ambitious rulers sometimes contrived to wreak considerable damage; but for the most part they preferred a laissez-faire approach. Empires tended to be quite detached from civil society: notorious for the sporadic exercise of despotic power, the ability to deal with their subjects unconstrained by what we now call the rule of law, they often scored low in terms of infrastructural power – their ability to shape people’s lives.
Faced with the challenges of holding on to huge territories, central authorities prized stability above all. As we saw, their empires reflected this priority by encouraging conservatism and reinforcing the status quo. They also empowered the ruler’s allies to prey on the weak, while sheer scale made the idea of political representation a nonstarter. At the same time, limited managerial capacities exposed such empires to secession and invasion, which threatened to undo the economic growth that had been achieved. China, which was repeatedly laid low by warlords, peasant uprisings and assaults from the steppe, is the best-known but by no means the only example.
In post-Roman Europe, by contrast, the spaces for transformative economic, political, technological and scientific development that had been opened up by the demise of centralised control and the unbundling of political, military, ideological and economic power never closed again. As states consolidated, intracontinental pluralism was guaranteed. When they centralised, they did so by building on the medieval legacies of formalised negotiation and partition of powers. Would-be emperors from Charlemagne to Charles V and Napoleon failed, as did the Inquisition, the Counter-Reformation, censorship, and, at long last, autocracy. That wasn’t for want of trying, of attempts to get Europe back on track, so to speak, to the safety of the status quo and universal rule. But the imperial template, once fashioned by ancient Romans, had been too thoroughly shattered to make this possible.
This story embraces a grimly Darwinian perspective on progress – that disunion, competition and conflict were the principal selection pressures that shaped the evolution of states, societies and frames of mind; that it was endless war, racist colonialism, crony capitalism and raw intellectual ambition that fostered modern development, rather than peace and harmony. Yet that’s precisely what the historical record shows. Progress was born in the crucible of competitive fragmentation. The price was high. Bled dry by war and ripped off by protectionist policies, even Europeans took a long time to reap tangible benefits.
When they finally did, unprecedented inequalities of power, wealth and wellbeing began to divide the world. Racism made Western preeminence seem natural, with toxic consequences to the present day. Fossil fuel industries polluted earth and sky, and industrialised warfare wrecked and killed on a previously unimaginable scale.
At the same time, the benefits of modernity were disseminated around the world, painfully unevenly yet inexorably. Since the late 18th century, global life expectancy at birth has more than doubled, and average per-capita output has risen 15-fold. Poverty and illiteracy are in retreat. Political rights have spread, and our knowledge of nature has grown almost beyond measure. Slowly but surely, the whole world changed.
None of this was bound to happen. Even Europe’s rich diversity need not have produced the winning ticket. By the same token, transformative breakthroughs were even less likely to occur elsewhere. There’s no real sign that analogous developments had begun in other parts of the world before European colonialism disrupted local trends. This raises a dramatic counterfactual. Had the Roman Empire persisted, or had it been succeeded by a similarly overbearing power, we would in all likelihood still be ploughing our fields, mostly living in poverty and often dying young. Our world would be more predictable, more static. We would be spared some of the travails that beset us, from systemic racism and anthropogenic climate change to the threat of thermonuclear war. Then again, we would be stuck with ancient scourges – ignorance, sickness and want, divine kings and chattel slavery. Instead of COVID-19, we would be battling smallpox and plague without modern medicine.
Long before our species existed, we caught a lucky break. If an asteroid hadn’t knocked out the dinosaurs 66 million years ago, our tiny rodent-like ancestors would have had a hard time evolving into Homo sapiens. But even once we had gotten that far, our big brains weren’t quite enough to break out of our ancestral way of life: growing, herding and hunting food amid endemic poverty, illiteracy, incurable disease and premature death. It took a second lucky break to escape from all that, a booster shot that arrived a little more than 1,500 years ago: the fall of ancient Rome. Just as the world’s erstwhile apex predators had to bow out to clear the way for us, so the mightiest empire Europe had ever seen had to crash to open up a path to prosperity.
We’re on the hunt for life – what do we do when we find it?
May 11, 2016 (theconversation.com)
Author
Kelly C. Smith, Associate Professor of Philosophy & Biological Sciences, Clemson University
Disclosure statement
Kelly C. Smith does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
NASA’s chief scientist recently announced that “…we’re going to have strong indications of life beyond Earth within a decade, and I think we’re going to have definitive evidence within 20 to 30 years.” Such a discovery would clearly rank as one of the most important in human history and immediately open up a series of complex social and moral questions. One of the most profound concerns is about the moral status of extraterrestrial life forms. Since humanities scholars are only just now beginning to think critically about these kinds of post-contact questions, naïve positions are common.
Take Martian life: we don’t know if there is life on Mars, but if it exists, it’s almost certainly microbial and clinging to a precarious existence in subsurface aquifers. It may or may not represent an independent origin – life could have emerged first on Mars and been exported to Earth. But whatever its exact status, the prospect of life on Mars has tempted some scientists to venture out onto moral limbs. Of particular interest is a position I label “Mariomania.”
Should we quarantine Mars?
Mariomania can be traced back to Carl Sagan, who famously proclaimed:
If there is life on Mars, I believe we should do nothing with Mars. Mars then belongs to the Martians, even if the Martians are only microbes.
Chris McKay, one of NASA’s foremost Mars experts, goes even further to argue that we have an obligation to actively assist Martian life, so that it not only survives, but flourishes:
…Martian life has rights. It has the right to continue its existence even if its extinction would benefit the biota of Earth. Furthermore, its rights confer upon us the obligation to assist it in obtaining global diversity and stability.
To many people, this position seems noble because it calls for human sacrifice in the service of a moral ideal. But in reality, the Mariomaniac position is far too sweeping to be defensible on either practical or moral grounds.
Streaks down Martian mountains are evidence of liquid water running downhill – and hint at the possibility of life on the planet. NASA/JPL/University of Arizona, CC BY
A moral hierarchy: Earthlings before Martians?
Suppose in the future we find that:
There is (only) microbial life on Mars.
We have long studied this life, answering our most pressing scientific questions.
It has become feasible to intervene on Mars in some way (for instance, by terraforming or strip mining) that would significantly harm or even destroy the microbes, but would also be of major benefit to humanity.
Mariomaniacs would no doubt rally in opposition to any such intervention under their “Mars for the Martians” banners. From a purely practical point of view, this probably means that we should not explore Mars at all, since it is not possible to do so without a real risk of contamination.
Beyond practicality, a theoretical argument can be made that opposition to intervention might itself be immoral:
Human beings have an especially high (if not necessarily unique) moral value and thus we have an unambiguous obligation to serve human interests.
It is unclear if Martian microbes have moral value at all (at least independent of their usefulness to people). Even if they do, it’s certainly much less than that of human beings.
Interventions on Mars could be of enormous benefit to humankind (for instance, creating a “second Earth”).
Therefore: we should of course seek compromise where possible, but to the extent that we are forced to choose whose interests to maximize, we are morally obliged to err on the side of humans.
Obviously, there are a great many subtleties I don’t consider here. For example, many ethicists question whether human beings always have higher moral value than other life forms. Animal rights activists argue that we should accord real moral value to other animals because, like human beings, they possess morally relevant characteristics (for instance, the ability to feel pleasure and pain). But very few thoughtful commentators would conclude that, if we are forced to choose between saving an animal and saving a human, we should flip a coin.
Simplistic claims of moral equality are another example of overgeneralizing a moral principle for rhetorical effect. Whatever one thinks about animal rights, the idea that the moral status of humans should trump that of microbes is about as close to a slam dunk as it gets in moral theory.
On the other hand, we need to be careful since my argument merely establishes that there can be excellent moral reasons for overriding the “interests” of Martian microbes in some circumstances. There will always be those who want to use this kind of reasoning to justify all manner of human-serving but immoral actions. The argument I outline does not establish that anyone should be allowed to do anything they want to Mars for any reason. At the very least, Martian microbes would be of immense value to human beings: for example, as an object of scientific study. Thus, we should enforce a strong precautionary principle in our initial dealings with Mars (as a recent debate over planetary protection policies illustrates).
For every complex question, there’s a simple, incorrect answer
Mariomania seems to be the latest example of the idea, common among undergraduates in their first ethics class, that morality is all about establishing highly general rules that admit no exception. But such naïve versions of moral ideals don’t long survive contact with the real world.
By way of example, take the “Prime Directive” from TV’s “Star Trek”:
…no Star Fleet personnel may interfere with the normal and healthy development of alien life and culture…Star Fleet personnel may not violate this Prime Directive, even to save their lives and/or their ship…This directive takes precedence over any and all other considerations, and carries with it the highest moral obligation.
Hollywood’s version of moral obligation can be a starting point for our real-world ethical discussion.
As every good trekkie knows, Federation crew members talk about the importance of obeying the prime directive almost as often as they violate it. Here, art reflects reality, since it’s simply not possible to make a one-size-fits-all rule that identifies the right course of action in every morally complex situation. As a result, Federation crews are constantly forced to choose between unpalatable options. On the one hand, they can obey the directive even when it leads to clearly immoral consequences, as when the Enterprise refuses to cure a plague devastating a planet. On the other hand, they can generate ad hoc reasons to ignore the rule, as when Captain Kirk decides that destroying a supercomputer running an alien society doesn’t violate the spirit of the directive.
Of course, we shouldn’t take Hollywood as a perfect guide to policy. The Prime Directive is merely a familiar example of the universal tension between highly general moral ideals and real-world applications. We will increasingly see the kinds of problems such tension creates in real life as technology opens up vistas beyond Earth for exploration and exploitation. If we insist on declaring unrealistic moral ideals in our guiding documents, we should not be surprised when decision makers are forced to find ways around them. For example, the U.S. Congress’ recent move to allow asteroid mining can be seen as flying in the face of the “collective good of mankind” ideals expressed in the Outer Space Treaty signed by all space-faring nations.
The solution is to do the hard work of formulating the right principles, at the right level of generality, before circumstances render moral debate irrelevant. This requires grappling with the complex trade-offs and hard choices in an intellectually honest fashion, while refusing the temptation to put forward soothing but impractical moral platitudes. We must therefore foster thoughtful exchanges among people with very different conceptions of the moral good in order to find common ground. It’s time for that conversation to begin in earnest.