Novara Media · Downstream

Ian Hogarth has invested in more than 50 artificial intelligence companies and is co-author of the annual “State of AI” report. And he’s worried: not only about the immensely disruptive consequences of machine learning for employment, as AI automates potentially millions of jobs, but about the potential rise of an AGI (artificial general intelligence). For Hogarth, the prospect of a machine able to augment its own intelligence is of grave concern, and something which, so far, political elites have ignored. So what could the emergence of an AGI mean? And how soon before it arrives, if at all? How important is quantum computing? And does the existence of these technologies within the broader framework of capitalism mitigate or amplify risks? Could AI operate more effectively under a different kind of economic system, and will its diffusion herald a break with capitalism as we know it?
Why AI is incredibly smart — and shockingly stupid
Yejin Choi • TED2023
Computer scientist Yejin Choi is here to demystify the current state of massive artificial intelligence systems like ChatGPT, highlighting three key problems with cutting-edge large language models (including some funny instances of them failing at basic commonsense reasoning). She welcomes us into a new era in which AI is becoming almost like a new intellectual species, and identifies the benefits of building smaller AI systems trained on human norms and values.
About the speaker
Computer scientist
Yejin Choi investigates if (and how) AI systems can learn commonsense knowledge and reasoning.
Our Glorious Future with AI
Brilliant master or useful idiot?

The term “AI” has been commodified to mean any clever machine. It is used to describe everything from your smart thermostat to the data-mining program a company uses to decide whether you get the job. Today, in 2023, the unfortunate fact with which we must all contend is this: every AI in the world is nothing compared to ChatGPT. If you don’t get the job, that’s a problem. If a hundred industries disappear, that’s a crisis. When you have to work to find reliable news, that’s a problem. When no information of any kind can be trusted, that’s a crisis. I intend to demonstrate that those crises are written into the very guts of ChatGPT.
OpenAI’s development of ChatGPT should not be thought of as a really cool improvement on a common technology. It should be thought of in the same way we might think of the discovery of petroleum and its introduction into every facet of human life. There will be no escaping it and everyone will find themselves using it whether they realize it or not. Hyperbole does not begin to encompass its full import.
I am an engineer, but I am not an expert in AI. I have spent some time training a neural network and have developed several expert systems. I have seen how clever mere software can appear, so I am not reflexively skeptical of how far AI can go. I suspect that Alan Turing, conducting his own Turing test against ChatGPT, would be pretty impressed. What I’ve been pondering goes beyond that.
So, let’s assume that all the hype about ChatGPT is correct and that it will usher in a new era wherein a single electronic system can answer all of the questions we would normally pose to intelligent humans. Let’s further assume that a generative pre-trained transformer (GPT) evolves into a true artificial general intelligence (AGI) which can learn from mistakes and answer questions with near-flawless accuracy. With those assumptions in place, let’s consider our glorious future.
Who the Tech Serves
From the Industrial Revolution to the 1970s, improvements in efficiency and worker productivity inspired great optimism. Through most of the 20th century, polymath and philosopher R. Buckminster Fuller promoted his sincere belief in an economy of abundance, lauding the latest technological advances and predicting that the 40-hour work week would soon become a 20-hour work week and people would be freed to dedicate more time to enjoying life. He wrote:
We must do away with the absolutely specious notion that everybody has to earn a living.
To be fair, Karl Marx had the same aspiration, believing that the worker could put in four hours in the morning at conventional work and dedicate the afternoon to fishing, painting or other private projects. The difference is that Marx did not believe this would ever be possible in a capital-intensive, profit-based economy.
As far as we can see now in the year 2023, Marx was right and Fuller was wrong. Higher productivity does not increase worker leisure. It increases unemployment and executive salaries.
Creating a Thinking Machine
For years now, companies have been trying to build an artificial general intelligence. Most purely algorithmic designs fell well short of expectations, leading researchers to conclude that, rather than simulate the human brain with software algorithms, it might be easier to emulate it with a network of neuron-like circuitry. The idea was to construct a thinking machine by interconnecting a large number of artificial neurons arranged in layers. In the industry, this is called a neural network.
That neural network is not “programmed” in the conventional sense. It must be trained. In order to recognize a hockey stick, images of hockey sticks in many different positions, locations and lighting angles must be fed into the input along with some indication that the output should identify each as a hockey stick. Ten images won’t do it; a thousand images won’t do it. At around 100,000 precisely labeled images, the connection weights among the neurons might begin to settle into a configuration that reliably recognizes a hockey stick in any reasonable scene.
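To make that loop concrete, here is a minimal sketch in PyTorch. Everything in it is an illustrative assumption: random tensors stand in for the labeled photographs, and a real recognizer would use a convolutional network trained on the 100,000 images just mentioned.

```python
# A minimal sketch of supervised training, assuming toy stand-in data.
# Real image classifiers are far larger and train on far more examples.
import torch
import torch.nn as nn

# Toy "images": 64x64 grayscale, flattened; label 1 = hockey stick, 0 = not.
images = torch.rand(1000, 64 * 64)
labels = torch.randint(0, 2, (1000,)).float()

# A small network of artificial "neurons" arranged in interconnected layers.
model = nn.Sequential(
    nn.Linear(64 * 64, 128), nn.ReLU(),  # input layer -> hidden layer
    nn.Linear(128, 1), nn.Sigmoid(),     # hidden layer -> yes/no output
)

loss_fn = nn.BCELoss()  # penalizes wrong yes/no answers
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Training is just this loop: show examples, measure error,
# nudge the connection weights, repeat.
for epoch in range(10):
    prediction = model(images).squeeze(1)
    loss = loss_fn(prediction, labels)
    optimizer.zero_grad()
    loss.backward()   # compute how each weight contributed to the error
    optimizer.step()  # adjust weights toward fewer errors
```

The network is never told a rule for what a hockey stick is; its weights simply drift toward whatever configuration reduces the error on the examples.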
The sheer magnitude of the training process led some scientists to turn to the wealth of information readily available on the Internet (mostly for free). They opened a fire hose of raw Internet information into their machines from web sites all over the world. This produced AI models that faithfully reflected the randomness of the poorly curated input data. They communicated in a believable fashion but incorporated unfounded conclusions and profanity. For this reason, much research went into curation algorithms for filtering the training data. In many cases, human intervention was also required to assure that the data was not just “processed” but, in a sense, “understood”.
For example, the AI needs to recognize not only that a disciplined legal assessment of the January 6th insurrection represents actual facts on the ground, but also that a Fox “News” transcript of the event does not. ChatGPT is in the news because its developer, OpenAI, figured out how to implement a practical solution to the massive problem of training an AI model to communicate using words that are, for the most part, good representations of well-evaluated information.
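The mechanical half of that curation might look like the toy filter below; the blocklist and thresholds are simplistic stand-ins I have invented, not anyone’s production pipeline. The harder half, deciding which sources count as well-evaluated information at all, is exactly the part that still needs human judgment.

```python
# A toy illustration of mechanical curation: filter raw scraped text
# before it reaches the training machine. The blocklist and thresholds
# are invented stand-ins, not a real production pipeline.
PROFANITY = {"damn", "hell"}  # illustrative blocklist

def keep_for_training(document: str) -> bool:
    words = document.lower().split()
    if not words:
        return False
    if any(w.strip(".,!?") in PROFANITY for w in words):
        return False  # drop profane pages
    if len(set(words)) / len(words) < 0.4:
        return False  # drop spammy, highly repetitive pages
    return True

raw_scrape = [
    "A careful survey of transformer architectures and their limits.",
    "BUY BUY BUY BUY BUY BUY BUY BUY cheap pills",
]
curated = [doc for doc in raw_scrape if keep_for_training(doc)]
```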
Turning AI Into Money
OpenAI produced GPT-4 for the purpose of making money and there is plenty of money to be made. Currently the company provides limited access to the general public through the online platform ChatGPT and enhanced access to paying customers, who can use its programming interfaces to exploit GPT-4’s full capabilities. This AI-as-a-service model is adequate for now, but eventually large corporations will tire of sharing a common resource with their competitors and will demand options that they control completely.
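Concretely, that shared-resource arrangement looks something like the sketch below, written against the 2023-era openai Python client; the prompt and key handling are illustrative. Every paying customer’s request reaches the same centrally hosted model.

```python
# A sketch of AI-as-a-service: each customer reaches the same shared,
# centrally hosted model through a paid API. Uses the 2023-era `openai`
# Python client; the prompt here is illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # paid access credential

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Summarize current approaches to autoimmune drug design.",
    }],
)
print(response["choices"][0]["message"]["content"])
```

Nothing about a customer’s question is private to that customer: the model, and the company operating it, sit between every competitor and every answer.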
Imagine you are a big company and you’ve developed cool technologies that you don’t want anyone else to know about. What you want is a GPT that’s all your own. If you start asking questions of OpenAI’s online version, you’ll get the same answers your competitors are getting. If you try to train it with your own proprietary information, how do you know that information won’t leak through to your competitors? This is a big problem. OpenAI doesn’t make money if they can’t convince you that your private access is completely hidden from your competitors. What do they do?
They will sell you your own GPT hardware in the form of a computer installation similar to their own but isolated within your lab. They will then sell you a training machine, a specialized trainer, that will allow you to train your own personal installation with all of the information on the Internet that applies to what your company does. That specialized trainer will allow you to build up your own specialized intelligence (SI) with minimal human intervention.
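No such turnkey “specialized trainer” exists today, so the sketch below is purely conceptual. It shows the general shape of the idea, continuing the training of a pretrained open model on proprietary text using the Hugging Face transformers library; the base model, corpus and hyperparameters are stand-ins.

```python
# A conceptual sketch of "specialized training": continue training a
# pretrained language model on proprietary text, entirely in-house.
# Model, corpus and hyperparameters are illustrative stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in base model
model = AutoModelForCausalLM.from_pretrained("gpt2")

proprietary_corpus = [
    "Internal memo: compound X7 binds the target receptor when ...",
    "Lab notebook: the monoclonal candidate showed reduced uptake ...",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for document in proprietary_corpus:
    batch = tokenizer(document, return_tensors="pt", truncation=True)
    # For causal language modeling, the labels are the inputs themselves:
    # the model learns to predict each next token of the private text.
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

An isolated installation of the kind described above would run essentially this loop, at vastly greater scale, behind the customer’s own firewall.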
Imagine hundreds of paying customers using their personal OpenAI GPTs and specialized trainers to scour the online literature for scholarly papers, dialogues, lectures and critiques relating to music, mathematics or biochemistry. Companies would use these tools initially to build out their own SI, and then to incorporate the very latest research into each operational SI as it comes online. OpenAI (and undoubtedly competitors) would provide these tools and services to companies seeking an edge over any competitor still using humans or the generic online GPT intelligence.
A Pharmacological Example
Let’s imagine a pharmaceutical company specializing in treatments for autoimmune diseases. The company purchases an OpenAI GPT and installs it in their own lab. They also purchase specialty trainers focused on the disciplines of pharmacology and genetic manipulation. They start the trainers and watch for a few months as petabytes of data are reviewed, curated and filtered into the waiting GPT mechanism.
From time to time human employees will pose questions to the developing SI and assess the usefulness of the answers. Eventually the assessments come back as competent and correct. Next, the employees feed in their own internal proprietary papers and presentations, and the SI folds that new information into its state-of-the-art genetic and pharmacological skill set.
Of course the goal, in a profit-based pharmacology, is to transform a fatal disease into a chronic disease. For this reason the company also needs to train the SI to produce drugs that mitigate the disease without actually curing it. A cure for muscular dystrophy would introduce a fairly minor improvement to the bottom line. An expensive drug to be taken for the life of the patient is always preferred.
So now the company initiates a project to develop a system for improving the quality of life for those with progressive multiple sclerosis. A team is assembled and instructed to submit to the SI a full description of the problem to be solved. With that done, the SI is left to cogitate.
A few hours later, the SI prints the formula and experimental testing plan for a new drug and an improved method for injecting the drug. That drug and method are tested first in mice, then in specially-bred rhesus monkeys and finally in humans. The results are excellent and the FDA approves the drug and method for sale.
When a second project shows similar results, it becomes clear to management that a machine is doing the work of hundreds of human experts. The company lays off 95% of its physicians and biochemists, keeping only enough to compose the problem statements to the SI. Crazy? Of course, but admit it. You’ve seen crazier, haven’t you?
In response to that move, the competitors clamor for their own specialized intelligences. They buy the hardware and the trainers and begin training and testing. Soon, they all lay off most of their scientists and proceed through FDA testing with their own artificially designed super-genius drugs. Other types of companies will undoubtedly do the same thing, but we’ll stick with pharmacology for this discussion.
We soon find that biochemistry is no longer an appealing skill: wages have been dropping consistently, since all that is required is the ability to describe a medical problem. The astounding success of the SI-developed drugs (along with extensive lobbying efforts) leads the FDA to enact an abbreviated approval protocol for SI-developed drugs.
Drug companies, using specialized AI, begin producing more and more such drugs. The vast majority of biochemists have moved on, some into art, some into podcasting, some into retail sales. Their skills grow stale and they lose track of the latest research. The world sees a decade of rapidly developed drugs and processes that alleviate much suffering and offer patients easy monthly payments for disease mitigation solutions assuring a lifetime of almost-not-miserable existence.
As the various specialty trainers scour the Internet for new research, fewer and fewer documents are found. Biochemistry is now a wholly uninteresting endeavor. Universities are closing down those departments, confident in the competence of the AIs. Only a few government laboratories are looking into any kind of medical research, and since government researchers, thanks to Ronald Reagan’s memorandum on patents, don’t usually get to keep their patents any more, the research tends toward halfhearted, pro forma investigations into fairly simple issues.
After several years of unmitigated success, one of the drugs shows fatal side-effects after a few years of use. A degenerative effect on the heart valves leads to the sudden deaths of hundreds of patients with SLE. The offending drug is removed from the market. Now the question becomes: Was the error contained in that company’s proprietary data or was it from a common source? If from a common source then other drugs developed with the use of that same training may also lead to deaths. Which drugs would those be? Where will we find experienced biochemists to work with the findings of the autopsies in order to figure out what went wrong? What schools still teach biochemistry and pharmacology curricula? Once resolved, how is that faulty training corrected in the complex neural nets of the various SIs?
The larger question may be this: Is the SI a simulation of human ingenuity (monkey see, monkey do) or is it an emulation of human ingenuity capable of surpassing its examples not merely by expertly integrating existing human works but by actually experiencing human-like curiosity and revelation? Do we call upon the SIs to do basic research, making the humans in the lab merely the arms and eyes of the machine?
A Political Example
With ChatGPT gaining popular acclaim as a counselor of laudable repute, those who seek the comfort of Fox “News” or OANN will find its answers disturbing and frightening. Incompatible with their restricted world view, those answers will drive MAGA sycophants to cry out for a chatbot that really understands the actual oppression of the suffering white male. The Republican Organization will have to respond. Surely, it doesn’t want its followers to ask ChatGPT if climate change is a threat. That answer would undoubtedly be inconsistent with Sean Hannity’s latest screed. They must, therefore, have their own chatbot version of Pure Flix.
For this reason, regressive organizations will purchase and deploy their own MagaGPT which will be easily trained from the unedited writings and transcripts of the reactionary Right. Maria Bartiromo’s latest pronouncement will be disgorged directly into the inputs of the various authoritarian models every day. The corresponding chatbot will be provided freely to MAGA Republicans and its conclusions will be used to confirm Marjorie Taylor Greene’s current rant, the undeniable innocence of the latest Republican presidential candidate and the vile corruption of any Democrat.
This specialized AI will be trained on an inconsistent corpus of lies, innuendo and partial truths. These will become part of a definitive knowledge base that emulates the innocent certainty of a twelve-year-old explaining why girls aren’t as smart as boys. The chatbot will leave no room for doubt regarding the spendthrift policies of the Democrat president. It will, of course, have no understanding at all of the War Between the States or the Civil Rights movement or the actual U.S. Constitution.
Like its more expert cousin, though, it will be able to quickly formulate a fiendishly persuasive argument to support each statement. The movement will no longer need to leak falsehoods to the New York Times to maintain a façade of respectability; an army of AI bots will do that, and every smart person knows that AIs are never wrong. It will be the AI that makes the libs cry. And one wonders how many more ideology-specific creations will follow.
Why Should Robots Have All the Fun?
Let’s imagine that anything I’ve suggested here seems reasonable. Is this our fate? Are we just going to have to hope that people are smart enough to recognize that there is something special about humans that we just don’t want to relegate to a machine? Can we humans decide that we, ourselves, really enjoy solving problems and we don’t want those necessary skills to atrophy? Can we use these AI projects to improve our understanding of the process of thought without redefining thinking as an artificial construct?
So what if our machines are amazingly smart, smarter than we can ever be? Does that mean we want them to do the thinking? Why do they get all the fun? Why would we give that up? More importantly, what does all of this mean for human livelihoods?
As we have seen, from the turn of the 19th century onward, machines that can help humans do not benefit the average human: they benefit those in power. Until recently, those machines primarily replaced factory workers. Some have replaced white-collar workers. Now they will be replacing those jobs that were considered sacrosanct only a decade ago: the technical and artistic creative person. As art and literature are generated more and more effectively by machine; as technical designs are produced more quickly and thoroughly; as political principles are invented and presented more persuasively, the machine predominates.
In the end, of course, the machine is not the master. People are making these decisions, determining the goals and exploiting the results. Is AI useful? Under certain circumstances, AI has certainly proved useful. Is it an idiot? Being oblivious to the ethics or moral ramifications of the tasks to which it is set, and undertaking those tasks without question, it is certainly an idiot. The machines will not launch a robot revolution; the takeover will happen through a psychological coup. Humans will become more and more uncertain and confused. An individual will ask, “Am I persuaded because 3000 years of human genius has been synthesized by machine into an inescapable argument or because it actually makes sense?” We will give up our power and our freedom because the masters of the machines will make it that easy.
Simulation or emulation? If a simulation, those AI systems will drive us to the intellectual brick wall that represents the end of human research and ingenuity. If an emulation, do the machines actually “deserve” to triumph in this merit-based zero-sum game?
Julian S. Taylor is the author of Famine in the Bullpen, a book about bringing innovation back to software engineering.
Available at or orderable from your local bookstore.
Rediscover real browsing at your local bookstore.
Also available in ebook and audio formats at Sockwood Press.
Beyond ChatGPT: Stuart Russell on the Risks and Rewards of A.I.
Commonwealth Club of California • Apr 4, 2023 • San Francisco

OpenAI’s question-and-answer chatbot ChatGPT has shaken up Silicon Valley and is already disrupting a wide range of fields and industries, including education. But the potential risks of this new era of artificial intelligence go far beyond students cheating on their term papers. Even OpenAI’s founder warns that “the question of whose values we align these systems to will be one of the most important debates society ever has.” How will artificial intelligence impact your job and life? And is society ready? We talk with UC Berkeley computer science professor and A.I. expert Stuart Russell about those questions and more.

April 3, 2023

Speakers: Stuart Russell, Professor of Computer Science, Director of the Kavli Center for Ethics, Science, and the Public, and Director of the Center for Human-Compatible AI, University of California, Berkeley; Author, Human Compatible: Artificial Intelligence and the Problem of Control. Jerry Kaplan, Adjunct Lecturer in Computer Science, Stanford University (Moderator).