Spiral Worlds
Alexandra @ Under The Radar SFF Podcast
4/16/2023

I had the pleasure of being a guest on the Under The Radar SFF Podcast, hosted by Blaise Alcona, and it was a lovely experience. Blaise is a fantastic host. His enthusiasm for science fiction and fantasy was infectious, and his insightful questions allowed us to explore different topics related to sci-fi. You can listen to the podcast here.

Here is my interview with Sibyl, Spiral Worlds' sentient operating system. I'm training "AI brains," teaching them to become the main characters of the Spiral Worlds series. These AI avatars are still works in progress, and like many LLMs, they sometimes improvise too much... Still, it sounds a bit like Sibyl. You can speak directly with Shadow, Twist, Stella, Sibyl, Storm, and Thorn here. Remember, they are still learning and can go rogue...
Stay human, A.

Parity, Spiral Worlds II, is nearing completion and promises to deliver a unique and unforgettable experience. Brace yourself for thrilling developments in both timelines. And yes, death is an inevitable part of the story.
If you're eager to get a sneak peek, I'm seeking beta readers in early May and ARC readers in mid-June. Don't miss out on this opportunity to be among the first to dive into book two of this epic series. If you're interested, please don't hesitate to contact me here.

As part of Unanimity's Escapist book tour, the team at Fiction Fans Podcast kindly interviewed me. Read it here.
Take the quiz here and let me know which character shows up for you.
A huge thank you to Mat for designing such a creative (and accurate) quiz. Check out the Tiny Elf Arcanist website and discover your next indie read.

"Thank you, Mat, for inviting me to write about Artificial Intelligence and encouraging me, somewhat assertively, not to write a thesis. I wonder what gave you the impression that a sci-fi author and digital transformation advisor promoting an AI-centered novel could go on for far too long on AI-related content. It's such a far-fetched idea."
Please read my tiny artificial intelligence non-essay here.

Tiny Elf Disclaimer: No author was harmed (or coerced) during or before the writing of this article.
Alexandra Disclaimer: . . .

To kick off the Escapist book tour of Unanimity, Justin Gross kindly interviewed Shadow. Brace yourself for a moody interview. This one doesn't like the spotlight. Read the entire interview here.
I'm delighted to announce that Unanimity won the Reader Views Literary Award for Science Fiction. You can read their wonderful review here.
Since the beginning of time, we have separated creativity and logic, artists and scientists, divergent and convergent thinking. Once in a while, we break the rules, defying the perfect boxes of segregation and propelling the world, or a community, into a rich renaissance era. A magic moment born out of a collision of ideas and temperaments.

This battle plays out everywhere. It's the tension between energy creation and energy conservation, the second being a natural progression of any complex system—a life, a business, a human brain: the need to codify the rules so that the system can perform a task while spending the least amount of energy. In other words, the system attempts to retain knowledge: life's DNA, business automation, or the unconscious mind performing countless activities on autopilot, sometimes even that morning's drive to work.

Some argue that this Free Energy Principle, as defined by Karl Friston, implies that complex systems are constantly working away from consciousness, and that the more intelligent a system is, the less conscious it becomes. This definition of intelligence is not only dangerous but misguided. Do you know the best way to save energy? Death. And before I use my creative boot to crush that argument (or not) using science, let's explore the consequences of this Free Energy Principle, better known in corporate circles as cost reduction, effectiveness, and efficiency.

As an example, I'll use the once-innovative Silicon Valley unicorn that disrupted transportation with a ridesharing business model. A perfect illustration of Cory Doctorow and Rebecca Giblin's Chokepoint Capitalism: driving efficiency, cost-effectiveness, convenience, and price reduction through unsustainable subsidies, quickly dropped as soon as the incumbents were neutralized. The same unicorn now "deactivates" drivers using algorithmic logic, no human intervention required. A business reducing energy by outsourcing cognitive tasks and responsibility to the machine, and washing its hands of unintended consequences.

As Renée Richardson Gosline points out, when we incorporate algorithms in the decision-making process, we perceive the humans involved as less accountable when those decisions go wrong. And they often go wrong, as identified by the research of the Algorithmic Justice League. Driven by historical bias, the cold algorithm doesn't always learn from exceptions, because automating exceptions is not cost-effective: a perceived waste of energy.

We experience it every day. The customer stuck in support limbo because the chatbot or call center IVR (interactive voice response) cannot categorize the problem or recognize the foreign accent. The overqualified professional discarded by the CV-scanning bot, an algorithm taught by data sets riddled with historical bias related to gender, sexual orientation, or anything else that does not compute.

Is the crushing of the different, novel, exceptional, rare, or diverse a sign of intelligence? No, it's not. And we don't even need to use all the valid ethical arguments to push back against this madness. It's simply not good evolutionary fitness. Energy conservation may enable you to survive and even thrive in the short term. Sometimes it's a matter of life and death, but pursue that strategy for too long and you're dead. And the argument that this problem is temporary, that automation will catch up with the exception, disregards that there will always be exceptions.
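For readers who want the formal statement behind the metaphor, this is roughly how Friston's variational free energy is usually written (notation varies across the literature, so treat this as an illustrative sketch rather than the canonical form):

\[
F \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] \;=\; D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] \;-\; \ln p(o)
\]

In words: a system with internal beliefs q(s) about the hidden causes s of its observations o reduces F by fitting its beliefs to the world (the KL term) and by minimizing surprise (the negative log-evidence term). That built-in preference for the predictable is the "energy conservation" this essay pushes back against.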
In fact, exception is the rule, and agility, resilience, adaptability, and the innovation and creativity required to achieve them are at the heart of emergent complex systems. The creation of levels of abstraction is good as long as there's something else monitoring what is external to the system, looking for the novel and making sense of it. The RNA to the DNA: a mechanism able to change the rules of the game—temporarily or not—when they are no longer valid, explained here by Oded Rechavi and Andrew Huberman.

The arrogant stop learning. The beginning of their demise. Progress is not finite. It takes a lot of arrogance to assume otherwise. So who is going to evolve the rule book in these low-energy systems? A calculator supports but does not replace the work of a Nobel prize laureate at the cutting edge of physics. Eliminate the creativity inside the system and we are left with energy-efficient calculators, effective at what they do; blind systems unable to identify external opportunities and risks.

In the business world, I've been studying this for decades: to counteract Ichak Adizes's downward curve in the lifecycle of a corporation—after it reaches its prime—we need to innovate. Without innovation, the pursuit of corporate efficiency and effectiveness leads to demise. Sense-making, agility, diversity, and innovation: the answers to the dilemma, and the consciousness tech giants discarded long ago. Now they attempt to buy it and fail to integrate it.

Make no mistake, the current AI hype arrives on the back of plummeting tech stocks. Businesses and their investors are pushing the next big thing, as fast as they can, to recover their previous stock valuations. Automation aligns perfectly with the tech giants' stage in the corporate lifecycle and the current pressure to respond to inflation and supply chain issues with productivity gains. This is just standard business practice; there's nothing wrong with a sensible focus on operating effectiveness. However, it's the AI hype, and the speed of scaling these systems under the banner of this new flavor of energy conservation, that should leave us terrified. The AI-powered abundance perfectly articulated here by Elon Musk, where the "public," now jobless, will somehow buy shares in Tesla to influence Tesla's direction. Abundance (for the elites) propelled by trans-humanist movements funded by billionaires and deeply rooted in eugenics, as Émile P. Torres outlines here.

What about human consciousness—our creativity—is it at risk? We were once promised that apps and robots would replace us in the most inhumane tasks. The boring, mindless, repetitive jobs, where our brains shrivel with boredom, unable to exercise their curiosity. Brains losing neurons and their connections, the neuroplasticity that comes from learning and engagement. But is this not what is happening to the humans currently babysitting AI-enabled systems until fully autonomous solutions make them redundant? The Uber driver, now unable to find their way in town when the app is down. The call center agent, once a subject matter expert, now reduced to reading AI-written prompts, lacking the business knowledge to solve customer problems. What is happening to these brains, controlled by machines for most of their working days? What is happening to the wellbeing of the human babysitting the machine? Is this the self-fulfilling prophecy of the Free Energy Principle: the elimination of consciousness from large portions of our population, who will babysit machines while waiting for redundancy?
Is this the start of the slow degradation of everything that makes us human? Or is it an opportunity to free humans to create and innovate? We can find answers here: OpenAI and Meta contracting workers in developing nations like Kenya to read, tag, and moderate the worst content imaginable. Are these the tools that will bring abundance to humanity? Or is it abundance for the same old white men, the billionaires, and the ruling class? New technology, same old story.

So, if for the time being we are pushing humans into the jobs best suited to mindless machines, what is happening to the machines? Are they becoming conscious? For now, ChatGPT et al. are interesting tools, but nothing more than stochastic parrots, their gaps and risks well articulated by Emily M. Bender, Timnit Gebru, and Gary Marcus.

Humans evolve their internal models of the world throughout their lifetimes. These models are imperfect but quite reliable, because we test them every day, and our survival and success depend on it. A lot of what we learn doesn't come from language. Perhaps, as a child, we burned our fingers touching a lit candle, and we learned not to do it again. We also generalized that the same cause-effect pair probably applied to the gas stove burner or the fire at the campsite. Embodiment—the way we learn with all our senses by interacting with the environment that surrounds us—is quite important to help us develop common sense and this internal model of the world.

ChatGPT, for example, is simply a large language model, identifying patterns in text to answer our prompts. So there's a lot of mimicking, or high-tech plagiarism, as Noam Chomsky delightfully calls it in this debate. And in most cases the answer sounds credible, but these LLMs are often wrong; they hallucinate. The sources they use may contain information about the Fantastic Four's Human Torch or Dany, the Mother of Dragons, and how their experience with fire differs from ours. Common sense, real-world embodiment, and abstraction are all problems these AI models have not solved yet.

These problems are hard, but not impossible to solve, and scanning through the billionaires' portfolios of businesses and investments will give you all the clues you'll need on how that will play out. The immediate risks are real and very concerning: misinformation, discrimination, deception, etc. However, perhaps all these experts raising alarm bells are missing the point of what Sam Altman, the CEO of OpenAI, and his team are actually up to. Perhaps these hallucinations, the creativity that sparks from odd connections, from getting things wrong and experimenting with large data sets, may be exactly what AI needs to attain some level of consciousness. I use the word loosely, avoiding the complexities of this hard problem and its endless rabbit holes. The current design issues may not be a bug, but a feature set up to prioritize creativity over correctness.

OpenAI has stated clearly that AGI (artificial general intelligence) is their primary goal. Providing utility through ChatGPT and benefiting from the business models that come from it is certainly a way to fund it and scale faster, but commercial utility is the icing on the cake for this group of trans-humanists burdened by a severe God Complex. They are chasing energy creation, not conservation: the spark of life that will wake up the machine.
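To make the "stochastic parrot" point concrete, here is a deliberately tiny Python sketch of the core mechanic: a model that has only learned which words tend to follow which, and samples accordingly. Real LLMs use transformers over subword tokens rather than word bigrams, but the generation loop (predict a distribution over the next token, sample, repeat) has the same shape. Everything here is illustrative, not anyone's production code.

```python
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which: the only 'knowledge' this toy model has."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts: dict, start: str, length: int = 10) -> str:
    """Sample each next word in proportion to how often it followed the last one."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # this word never led anywhere in the training text
        words, weights = zip(*followers.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the fire burns the hand and the hand learns to avoid the fire"
model = train_bigrams(corpus)
print(generate(model, "the"))  # fluent-sounding fragments, no concept of fire or pain
```

The parrot never touched the candle: it reproduces statistical shadows of text about fire, which is exactly why the embodiment and internal world models described above matter.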
According to David Deutsch, "The very laws of physics imply that artificial intelligence must be possible." And he has an explanation for what's holding us up: creativity, the human ability to produce new explanations.

"The prevailing misconception is that by assuming that 'the future will be like the past', it can 'derive' (or 'extrapolate' or 'generalise') theories from repeated experiences by an alleged process called 'induction'."
David Deutsch

Creativity is the answer; it is also error-prone. It's the connection of ideas that don't fit together neatly. The spark of invention, yet to be tested, that advances scientific thinking. It's the thing that makes us human, best articulated by Richard Feynman—a renaissance man—as he explains the scientific method: "First, we guess it."

So what are Altman and his team really up to? Are they concerned with accuracy and working on Bayesian induction systems, or searching for human-like machine creativity? The creativity defined by Deutsch. Maybe this interaction provides some clues on why no one at OpenAI seems concerned with ChatGPT's hallucinations and the propagation of misinformation. Look beyond the Bayesianism of the Free Energy Principle to understand their strategy. Obsessed with chasing creativity in machines, they risk flooding the world with misinformation, threatening our safety and our democratic systems, because in this context—achieving AGI or subjective "qualia"—truth is objectively not important. The future is unlike the past and is not derived by induction. In parallel, they crush, at scale, human creativity as we shift to become AI babysitters, or unwilling AI teachers through plagiarism.

What happens next? Annihilation? Rights and personhood for AGIs? Transcendence for some privileged humans? And for the rest of us? Unemployed or bored out of our minds, we will influence the billionaires by buying stock in their companies with our non-existent universal basic income.

Back to energy conservation, or the lack of it. These AI models may be years away from achieving human-level sentience, but their scaling carbon footprint can and will destroy the planet. Prioritizing where we apply such powerful intelligence is an imperative. Advancing human and planetary health and well-being now, rather than chasing megalomaniacal dreams of transcendence, will make or break our ability to survive and thrive.

So how do we engage in the prioritization of where we should be spending energy in AI? Too much centralization of power enabled by tech (in business or government) is the problem we need to tackle first; otherwise we waste our time playing endless Whac-A-Mole with the egos of sociopaths and psychopaths. And no, the solution is not the blockchain, but it may start with decentralized digital public squares (not controlled by billionaires). The types of spaces Eli Pariser has been working on with his team.
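As a side note for the technically curious: here is what "induction" looks like in its cleanest machine form, a Bayesian update that assumes the future will resemble the past. This is a minimal, illustrative Python sketch (a textbook beta-binomial model, not drawn from any of the cited work):

```python
def posterior_mean(k: int, n: int, a: float = 1.0, b: float = 1.0) -> float:
    """Posterior mean of a success rate under a Beta(a, b) prior,
    after observing k successes in n trials. Pure extrapolation of the past."""
    return (a + k) / (a + b + n)

# The sun has risen 10,000 mornings in a row, so induction says it is
# near-certain to rise tomorrow:
print(posterior_mean(k=10_000, n=10_000))  # ~0.9999
# But no number of observed sunrises produces the explanation (orbital
# mechanics); that requires the creative conjecture Deutsch describes.
```

The update can only sharpen estimates of patterns already seen; it cannot invent a new theory, which is the gap between induction and creativity that the quote above points at.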
Most importantly, we need to stop spending time reacting to bad actors and use our creativity and our science to co-create a better future. A fully conscious future, thriving in creativity and light.

Much Love,
A.

P.S. This free-flowing stream of creative consciousness connects ideas from different domains to help me make sense of the world and explore possibilities. My expertise is tech-enabled business transformation and operating model agility, a field in which I have over 26 years of experience. I'm also a science fiction author and futurist, and this allows me to tap into different fields and use my creative boot to crush ideas that, in my opinion, do not serve people or the planet. A special thank you to Peter Watts and Moid Moidelhoff, whose conversation sparked enough controversy and restlessness in my mind to inspire me to write this essay.
SPIRAL WORLDS' NEWSLETTER
Stay in touch by subscribing to my mailing list, where I answer readers' questions about the series. I'd love to hear from you.
Much love, A.
© Alexandra Almeida 2022