Are we at risk of global annihilation or global hallucination?
Have you ever seen the movie "Ex Machina" and wondered if we are getting closer to achieving true artificial general intelligence like Ava? Or are we merely creating "stochastic parrots" that perpetuate biases and misinformation at scale?
In the movie "Ex Machina," Ava is portrayed as a humanoid robot with advanced artificial intelligence. She can process information, understand natural language, and exhibit emotions. As the story unfolds, it becomes apparent that Ava has her own motives and desires and is not simply a machine following programmed instructions; she seems capable of performing a wide range of cognitive tasks at a human-like level. Her intelligence and autonomy ultimately drive the film to its dramatic conclusion.
But current AI systems are not as advanced as Ava. Not even close. Most are merely "stochastic parrots."
Stochastic parrots are AI models that generate fluent, human-like text by reproducing statistical patterns in their training data, without any real understanding of meaning. As a result, they often produce confident-sounding but inaccurate or misleading output. ChatGPT is an example of a stochastic parrot. While it is an impressive AI system, it is far from achieving human-like intelligence. The term was popularized by linguistics professor Emily Bender and computer scientist Timnit Gebru in their 2021 paper "On the Dangers of Stochastic Parrots," which describes the dangers of relying too heavily on large language models trained on biased data.
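The metaphor is easiest to see in miniature. The sketch below is my own toy illustration, not how ChatGPT actually works: a tiny bigram Markov chain that counts which word tends to follow which in a training text, then "parrots" new sentences by sampling from those statistics. It produces plausible-looking word sequences while understanding nothing.

```python
import random

# Training text for our toy parrot (any text would do).
corpus = (
    "the cat sat on the mat and the cat saw the dog "
    "and the dog sat on the mat"
).split()

# Count which words follow each word in the training text.
transitions = {}
for current, following in zip(corpus, corpus[1:]):
    transitions.setdefault(current, []).append(following)

def parrot(start, length=8, seed=0):
    """Generate text by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        choices = transitions.get(word)
        if not choices:
            break  # dead end: the word never appeared mid-sentence
        word = rng.choice(choices)
        output.append(word)
    return " ".join(output)

# Fluent-sounding, grammatical-ish, and completely meaningless.
print(parrot("the"))
```

Real large language models are vastly more sophisticated, but the critique is that the underlying activity is the same in kind: recombining patterns from training data, biases included, rather than reasoning about the world.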
ChatGPT is a narrow or specialized AI system that is designed to perform specific tasks, such as generating human-like text responses to user input. However, it does not have a broad understanding of the world or the ability to reason and think abstractly like a human being. It is also highly dependent on the data it has been trained on and lacks the ability to learn new things on its own. This limits its adaptability and flexibility.
This dependence on training data can lead to large-scale misinformation. Stochastic parrots like ChatGPT can perpetuate and amplify biases and stereotypes present in the training data, leading to harmful and discriminatory outcomes. This can be particularly problematic in the context of social media, where false information can spread rapidly and have real-world consequences.
So, it's important to validate the output of AI systems like ChatGPT. This means critically evaluating the sources of information we encounter online, fact-checking claims, and being aware of the potential for synthetic media and other forms of misinformation. To date, OpenAI, the company behind ChatGPT, has refused to disclose its data sources. There is no safety without transparency, and they might as well change their name to Closed AI.
While Ava represents a potential future of human-like artificial general intelligence, there is a much more pressing risk right now. We must be mindful of the limitations of current AI systems like ChatGPT, continue to question their output, and think critically about the most appropriate uses of such tools. By understanding the risks and limitations of these systems, we can work towards creating more advanced and responsible AI technologies.
Check out the work of Emily Bender and Timnit Gebru. Keep thinking critically, and stay human!
Science Fact & Fiction
The science behind some of our favorite science fiction books, films, and series.
SPIRAL WORLDS' NEWSLETTER
Stay in touch by subscribing to my mailing list, where I answer readers' questions about the series. I'd love to hear from you.
Much love, A.
© Alexandra Almeida 2022