"Thank you, Mat, for inviting me to write about Artificial Intelligence and encouraging me, somewhat assertively, not to write a thesis. I wonder what gave you the impression that a sci-fi author and digital transformation advisor promoting an AI-centered novel could go on for far too long on AI-related content. It’s such a far-fetched idea."
Please read my tiny artificial intelligence non-essay here.
Tiny Elf Disclaimer: No author was harmed (or coerced) during or before the writing of this article.
Alexandra Disclaimer: . . .
Since the beginning of time, we have separated creativity and logic, artists and scientists, divergent and convergent thinking. Once in a while, we break the rules, defying the perfect boxes of segregation and propelling the world, or a community, into a rich renaissance era. A magic moment born out of a collision of ideas and temperaments.
This battle plays out everywhere. It’s the tension between energy creation and conservation. The second being a natural progression of any complex system—a life, a business, a human brain. The need to codify the rules so that it can perform a task while spending the least amount of energy. In other words, the system attempts to retain knowledge: life’s DNA, business automation, or the unconscious mind performing infinite activities on autopilot, sometimes even that morning’s drive to work.
Some argue that this Free Energy Principle, as defined by Karl Friston, implies that complex systems are constantly working away from consciousness, and that the more intelligent a system is, the less conscious it becomes. This definition of intelligence is not only dangerous, but misguided.
Do you know the best way to save energy? Death.
And before I use my creative boot to crush that argument (or not) using science, let’s explore the consequences of this Free Energy Principle, better known in corporate circles as cost reduction, effectiveness, and efficiency.
As an example, I’ll use the once-innovative Silicon Valley unicorn that disrupted transportation with a ridesharing business model. A perfect illustration of Cory Doctorow’s and Rebecca Giblin’s Chokepoint Capitalism: driving efficiency, cost effectiveness, convenience, and price reduction through unsustainable subsidies, quickly dropped as soon as they neutralized the incumbents. The same unicorn that now “deactivates” drivers using algorithmic logic, no human intervention required. A business reducing energy by outsourcing cognitive tasks and responsibility to the machine, and washing its hands of unintended consequences.
As Renée Richardson Gosline points out, when we incorporate algorithms in the decision-making process, we perceive the humans as less accountable when those decisions go wrong. And they often go wrong, as identified by the research of the Algorithmic Justice League. Driven by historical bias, the cold algorithm doesn’t always learn from exceptions, because automating exceptions is not cost effective: a perceived waste of energy.
We experience it every day. The customer, stuck in support limbo, because the chat bot or call center IVR (interactive voice response) cannot categorize the problem or recognize the foreign accent. The overqualified professional, discarded by the CV scanning bot, an algorithm taught by data sets riddled with historical bias related to gender, sexual orientation, or anything else that does not compute.
Is the crushing of the different, novel, exceptional, rare, or diverse a sign of intelligence? No, it’s not. And we don’t even need to use all the valid ethical arguments to push back against this madness. It’s simply not good evolutionary fitness. Energy conservation may enable you to survive and even thrive in the short term. Sometimes it’s a matter of life and death, but pursue that strategy for too long and you’re dead.
And, the argument that this problem is temporary, that automation will catch up with the exception, disregards that there will always be exceptions. In fact, exception is the rule, and agility, resilience, adaptability and the innovation and creativity required to achieve them are at the heart of emergent complex systems. The creation of levels of abstraction is good as long as there’s something else, monitoring what is external to the system, looking for the novel and making sense of it. The RNA to the DNA, a mechanism able to change the rules of the game—temporarily or not—when they are no longer valid, explained here by Oded Recha and Andrew Huberman.
The arrogant stop learning.
The beginning of their demise.
Progress is not finite. It takes a lot of arrogance to assume otherwise. So who is going to evolve the rule book in these low energy systems? A calculator supports but does not replace the work of a Nobel prize laureate at the cutting edge of physics. Eliminate the creativity inside the system and we are left with energy efficient calculators, effective at what they do; blind systems unable to identify external opportunities and risks.
In the business world, I’ve been studying this for decades, that to counteract Ichak Adizes’s downward curve in the lifecycle of a corporation—after it reaches its prime—we need to innovate. Without it, the pursuit of corporate efficiency and effectiveness leads to its demise. Sense making, agility, diversity, and innovation: the answers to the dilemma, and the consciousness tech giants have discarded long ago. Now they attempt to buy it and fail to integrate it.
Make no mistake, the current AI hype arrives on the back of plummeting tech stocks. Businesses and their investors pushing the next big thing, as fast as they can, to recover their previous stock valuations. Automation aligning perfectly with the Tech Giants’ stage in the corporate lifecycle and the current pressure to respond to inflation and supply chain issues with productivity gains. Just standard business practice; there’s nothing wrong with a sensible focus on operating effectiveness.
However, it's the AI hype and the speed in scaling these systems under the banner of this new flavor of energy conservation that should leave us terrified. The AI-powered abundance perfectly articulated here by Elon Musk, where the “public,” now jobless, will somehow buy shares in Tesla to influence Tesla’s direction. Abundance (for the elites) propelled by trans-humanist movements funded by billionaires and deeply rooted in eugenics as Émile P. Torres outlines here.
What about human consciousness—our creativity—is it at risk?
We were once promised that apps and robots would replace us in the most inhumane tasks. The boring, mindless, repetitive jobs, where our brains shrivel with boredom, unable to exercise their curiosity. Brains losing neurons and their connections, the neuroplasticity that comes from learning and engagement. But is this not what is happening with the humans currently babysitting AI-enabled systems until fully autonomous solutions make them redundant? The Uber driver, now unable to find their way in town when the app is down. The call center agent, once a subject matter expert, now diminished to AI-written prompt reading, lacking the business knowledge to solve customer problems. What is happening to these brains, controlled by machines for most of their working days? What is happening to the wellbeing of the human babysitting the machine? Is this the self-fulfilling prophecy of this Free Energy Principle? The elimination of consciousness from large portions of our population who will babysit machines while waiting for redundancy.
Is this the start of the slow degradation of everything that makes us human? Or is it an opportunity to free humans to create and innovate?
We can find answers here: OpenAI and Meta contracting workers in developing nations like Kenya to read, tag, and moderate the worst content imaginable. Are these the tools that will bring abundance to humanity? Or is it abundance for the same old white men, the billionaires, and the ruling class? New technology, same old story.
So, if for the time being, we are pushing humans into the jobs best suited to mindless machines, what is happening to the machines? Are they becoming conscious?
For now, ChatGPT et al. are interesting tools, but nothing more than stochastic parrots, their gaps and risks well articulated by Emily M. Bender, Timnit Gebru, and Gary Marcus.
Humans evolve their internal models of the world throughout their lifetime. These models are imperfect but quite reliable, because we test them every day, and our survival and success depend on it. A lot of what we learn doesn’t come from language. Perhaps, as a child, we burned our fingers touching a lit candle, and we learned not to do it again. We also generalized that the same cause-effect pair probably applied to the gas stove burner or the fire at the campsite. Embodiment—the way we learn with all our senses by interacting with the environment that surrounds us—is quite important to help us develop common sense and this internal model of the world.
ChatGPT, for example, is simply a large language model, identifying patterns in text to answer our prompts. So there’s a lot of mimicking or high-tech plagiarism, as Noam Chomsky delightfully calls it in this debate. And in most cases, the answer sounds credible, but these LLMs are often wrong; they hallucinate. The sources they use may contain information about the Fantastic Four’s Human Torch or Dany, the Mother of Dragons, and how their experience with fire differs from ours. Common sense, real-world embodiment, and abstraction are all problems these AI models have not yet solved. These problems are hard, but not impossible to solve, and scanning through the billionaires’ portfolios of businesses and investments will give you all the clues you’ll need on how that will play out.
The immediate risks are real and very concerning: misinformation, discrimination, deception, etc. However, perhaps all these experts raising alarm bells are missing the point of what Sam Altman, the CEO of OpenAI, and his team are actually up to. That these hallucinations, the creativity that sparks from odd connections, from getting things wrong, and experimenting with large data sets, may be exactly what AI needs to attain some level of consciousness. I use the word loosely, avoiding the complexities of this hard problem and its endless rabbit holes.
The current design issues may not be a bug, but a feature set up to prioritize creativity over correctness.
OpenAI has stated clearly that AGI (artificial general intelligence) is their primary goal. Providing utility through ChatGPT and benefiting from the business models that come from it is certainly a way to fund it and scale faster, but commercial utility is the icing on the cake for this group of trans-humanists burdened by a severe God Complex. They are chasing energy creation, not conservation: the spark of life that will wake up the machine.
According to David Deutsch, “The very laws of physics imply that artificial intelligence must be possible.” And he has an explanation of what’s holding us up: creativity, the human ability to produce new explanations.
“The prevailing misconception is that by assuming that ‘the future will be like the past’, it can ‘derive’ (or ‘extrapolate’ or ‘generalise’) theories from repeated experiences by an alleged process called ‘induction’.”
Creativity is the answer; it is also error prone. It’s the connection of ideas that don’t fit together neatly. The spark of invention, yet to be tested, that advances scientific thinking. It’s the thing that makes us human, best articulated by Richard Feynman—a renaissance man—as he explains the scientific method. "First, we guess it."
So what are Altman and his team really up to? Are they concerned with accuracy and working on Bayesian induction systems, or searching for human-like machine creativity? The creativity defined by Deutsch. Maybe this interaction provides some clues on why no one at OpenAI seems to be concerned with ChatGPT’s hallucinations and the propagation of misinformation. Look beyond the Bayesianism of the Free Energy Principle to understand their strategy.
Obsessed with chasing creativity in machines, they risk flooding the world with misinformation, threatening our safety and our democratic systems, because in this context—achieving AGI or subjective 'qualia'—truth is objectively not important. The future is unlike the past and is not derived by induction. In parallel, they crush, at scale, human creativity as we shift to become AI babysitters, or unwilling AI teachers through plagiarism.
What happens next? Annihilation? Rights and personhood for AGIs? Transcendence for some privileged humans? And for the rest of us? Unemployed or bored out of our minds, we will influence the billionaires by buying stock in their companies with our non-existent universal basic income.
Back to energy conservation or lack of it. These AI models may be years away from achieving human-level sentience, but their scaling carbon footprint can and will destroy the planet. Prioritizing where we apply such powerful intelligence is an imperative. Advancing human and planet health and well-being now, rather than chasing megalomaniacal dreams of transcendence, will make or break our ability to survive and thrive.
So how do we engage in the prioritization of where we should be spending energy in AI?
Too much centralization of power enabled by tech (in business or government) is the problem we need to tackle first; otherwise we waste our time playing endless Whac-A-Mole with the egos of sociopaths and psychopaths. And no, the solution is not the blockchain, but it may start with decentralized digital public squares (not controlled by billionaires). The types of spaces Eli Pariser has been working on with his team.
Most importantly, we need to stop spending time reacting to bad actors and use our creativity and our science to co-create a better future. A fully conscious future, thriving in creativity and light.
P.S. This free-flowing stream of creative consciousness connects ideas from different domains with a view to help me make sense of the world and explore possibilities.
My expertise is tech-enabled business transformation and operating model agility. I have over 26 years of experience in this field. I’m also a science fiction author and futurist and this allows me to tap into different fields and use my creative boot to crush ideas that, in my opinion, do not serve people or the planet.
A special thank you to Peter Watts and Moid Moidelhoff, whose conversation sparked enough controversy and restlessness in my mind to inspire me to write this essay.
My life has been equally full of magic and science. It may seem an odd combination for many, but it never felt paradoxical to me. As one of my heroes would say, “Any sufficiently advanced technology is indistinguishable from magic.”
At seven years of age, I would sit in front of the TV, eagerly awaiting the next episode of Battlestar Galactica. Myths, gods, machines, and humans all perfectly blended into a series of adventures where a thoughtful, selfless hero and his impulsive, self-serving sidekick showed me the magical power of friendships between opposites.
Twenty-six years later, I would break all the rules to download the most recent episodes of the reimagining of the original series just as it aired in the US. This time, it was the well-crafted politics and the mercurial Kara “Starbuck” Thrace that kept me on the edge of my seat. The beloved series had evolved with the times, just as I had done.
I still own—lawfully this time—both series on all available media. The original Captain Apollo, Kara “Starbuck” Thrace, and the myth-filled bot-controlled worlds they inhabit have become part of the fabric of my imagination. Kara was a refreshing upgrade from another fictional rebellious woman who also shaped my pre-teen years—Zimmer Bradley’s Morgan Le Fay.
At the tender age of ten or eleven, I devoured the Mists of Avalon, highlighting extracts that felt like universal truths. I contemplated the plight of women, particularly the brown-haired ones—those with all the magic and none of the fair beauty. It was my first taste of feminism, intricately woven into the magical realism so common in the lives and stories of my Portuguese elders.
Decades later, my world collapsed as I was writing my first fantasy novel very much inspired by Zimmer Bradley’s writing. The author’s horrid crimes surfaced, and her life’s work took on a completely different meaning. In my attempts to make peace with it all, I accepted that art has as much to do with the creator as the consumer, and that my values and imagination shaped my Mists of Avalon and my Morgan Le Fay.
In my teens, authors like Arthur C. Clarke, Carl Sagan, and Ray Bradbury all opened my world and imagination, while my father kept feeding me a steady flow of science magazines. There was no turning back. I was too addicted to the What Ifs to care about the What's Next. On TV, the 80s version of The Twilight Zone kept me glued to the screen, a feeling only matched decades later by Charlie Brooker’s Black Mirror.
During my transition to adulthood, my interest in science fiction books fizzled, to be replaced by TV and film, particularly the ones inspired by the works of Philip K. Dick. This was a period where I struggled to find books with the fresh ideas that had fired new connections in my young neurons, like Ray Bradbury’s A Sound of Thunder and its “Butterfly Effect.”
My career in technology kept me close to the worlds of artificial intelligence, robotics, quantum mechanics, and blockchain, to name a few. Still, I struggled to find Science Fiction authors who could blow my mind. The feeling I got recently from reading the works of Liu Cixin and Ted Chiang. The first later betrayed my devotion to his writing with his dangerous dissemination of gender stereotypes in Death’s End.
But where’s the magic? You may ask. Wait, let’s transition slowly into that realm. I feel the need to protect my oversensitive heart from your judgment by showing off my brain first. It works like a charm, you see? Intimidated or confused, they back off and fail to figure out how vulnerable I am. Empathy is a curse, sometimes…
Perhaps the best transition between realms is to focus on my early twenties and the day I read Antonio Damasio’s Descartes’ Error. A time when I was overwhelmed by a loved one’s mental health issues. I feared I would share her fate, but I found relief and joy in the discovery of neuroplasticity and neuro-linguistic programming (for self-help, not the manipulation of others).
Damasio’s connection between emotion and decision-making drove me to learn more about behavioral economics, and the nature of consciousness in humans and...bots. I continued my journey by immersing myself in the Simulation Hypothesis to uncover the inevitability of a panpsychist universe—the magical universe I’ve been experiencing all my life. A story I will leave for another day.
Like many other inspired moments in my life, the Spiral Worlds' seed was planted at SXSW in 2019. I came up with the premise as I watched George Hotz's talk about Jailbreaking the Simulation. You can see me here, at minute 40, asking George the question that led to this story. I started by stating, "I can't believe I'm saying this out loud."
Yeah, three and a half years later I still don't quite believe I'm about to publish a series of books about a contrast-making simulation that speaks to the latest advancements in both the fundamental laws of science and artificial intelligence.
Wish me luck,
SPIRAL WORLDS' NEWSLETTER
Stay in touch by subscribing to my mailing list where I answer readers' questions about the series. Do stay in touch. I'd love to hear from you.
Much love, A.
© Alexandra Almeida 2022