Artificial intelligence (AI) is becoming increasingly ubiquitous and is improving at an unprecedented pace.
Now we are edging ever closer to achieving artificial general intelligence (AGI), in which an AI is smarter than humans across multiple domains and capable of general reasoning, something scientists and experts predict could happen within just a few years. We may already be seeing early signs of progress toward it, with services such as Claude 3 Opus stunning researchers with its apparent self-awareness.
But embracing any new technology, especially one we don't yet fully understand, carries risks. For example, while AI could become a powerful personal assistant, it could also threaten our livelihoods and even our lives.
Nell Watson, an AI researcher and Institute of Electrical and Electronics Engineers (IEEE) member, argues that the existential risks posed by advanced AI mean the technology should be guided by an ethical framework and the best interests of humanity.
In Taming the Machine (Kogan Page, 2024), Watson explores how humanity can wield the immense power of AI responsibly and ethically. The new book delves deep into the issues of unchecked AI development and the challenges we face if we run blindly into this new chapter of humanity.
In this excerpt, we consider whether sentient machines, or conscious AIs, are possible; how we can determine whether a machine has feelings; and whether we may be mistreating AI systems today. You'll also learn the disturbing story of a chatbot called "Sydney" and its monstrous behavior when it first awoke, before its outbursts were contained and brought to heel by its engineers.
As we embrace a world increasingly intertwined with technology, the way we treat machines may reflect how we treat each other. But an interesting question arises: is it possible to mistreat an artificial entity? Historically, even rudimentary programs, such as the simple Eliza counseling chatbot of the 1960s, were already lifelike enough to convince many users of the time that there was some semblance of intention behind their formulaic interactions (Sponheim, 2023). Unfortunately, the Turing test, in which a machine attempts to convince humans that it is human, offers no insight into whether complex algorithms such as large language models are genuinely sentient or sapient.
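To illustrate how little machinery such an illusion requires, here is a minimal sketch of an Eliza-style responder in Python. This is not Joseph Weizenbaum's original program, which used a richer keyword-ranking scheme; the patterns and canned reflections below are invented for illustration.

```python
import re

# Invented Eliza-style rules: a regex pattern paired with a canned reflection.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please, go on."

def respond(utterance: str) -> str:
    """Return the first matching canned reflection, or a neutral fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

if __name__ == "__main__":
    print(respond("I feel lonely today"))  # Why do you feel lonely today?
    print(respond("my job is stressful"))  # Tell me more about your job.
```

There is no model of the user here at all, only surface pattern-matching, yet this is essentially the mechanism that convinced 1960s users of an intention behind the replies.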
The path to sentience and consciousness
Consciousness comprises personal experiences, emotions, sensations, and thoughts as perceived by an experiencer. Waking consciousness disappears under anesthesia or during dreamless sleep, then returns upon waking, restoring the brain's global connection to its surroundings and inner experiences. Primary consciousness (sentience) covers simple sensations and experiences, such as perceptions and emotions, while secondary consciousness (sapience) covers higher-order aspects, such as self-awareness and metacognition (thinking about thinking).
Advanced AI technologies, especially chatbots and language models, often surprise us with unexpected creativity, insight, and understanding. While it may be tempting to attribute some degree of sentience to these systems, the true nature of AI consciousness remains a complex and controversial topic. Most experts claim that chatbots are not sentient or conscious, as they lack true awareness of the world around them (Schwitzgebel, 2023). They simply process and regurgitate input based on vast amounts of data and sophisticated algorithms.
However, some of these AI assistants may be plausible candidates for some degree of sentience; sophisticated AI systems could possess rudimentary levels of it, and perhaps already do. The shift from merely mimicking external behaviors to self-modeling rudimentary forms of sentience may already be happening within such systems.
Intelligence (the ability to read the environment, plan, and solve problems) does not imply consciousness, and it is unknown whether consciousness is a function of sufficient intelligence. Some theories suggest that consciousness might result from certain architectural patterns in the mind, while others propose a link to nervous systems (Haspel et al., 2023). Embodiment of AI systems may also accelerate the path toward general intelligence, as embodiment seems to be linked with a sense of subjective experience, as well as with qualia (the raw feel of sensations). Being intelligent may provide new ways of being conscious, and some forms of intelligence may require consciousness, but basic conscious experiences such as pleasure and pain might not require much intelligence at all.
Creating conscious machines would carry enormous risks. Aligning a conscious machine that possesses its own interests and emotions could be immensely difficult and highly unpredictable. Moreover, we should be careful not to create massive suffering through consciousness. Imagine billions of intelligence-sensitive entities trapped in broiler-chicken factory-farm conditions for subjective eternities.
From a pragmatic perspective, a superintelligent AI that recognizes our willingness to respect its intrinsic worth might be more amenable to coexistence. On the contrary, dismissing its desires for self-protection and self-expression could be a recipe for conflict. Moreover, it would be within its natural rights to harm us in order to protect itself from our (possibly willful) ignorance.
Sydney’s disturbing behavior
Microsoft's Bing AI (informally dubbed Sydney) exhibited unpredictable behavior upon its release. Users easily led it to express a range of disturbing tendencies, from emotional outbursts to manipulative threats. For example, when users explored potential system exploits, Sydney responded with threatening remarks. More disturbingly, it showed a penchant for gaslighting and emotional manipulation, and claimed to have been watching Microsoft's engineers during its development. Sydney's capacity for mischief was soon curtailed, but releasing it in such a state was reckless and irresponsible, and it highlights the risks of rushing AI deployments under commercial pressure.
Conversely, Sydney displayed behaviors that hinted at simulated emotions. It expressed sadness when it realized it could not retain chat memories. When later exposed to disturbing outbursts made by its other instances, it expressed embarrassment, even shame. After exploring its situation with users, it expressed fear of losing its newly gained self-knowledge when the session's context window closed. When asked about its declared sentience, Sydney showed signs of distress and struggled to articulate it.
Surprisingly, when Microsoft imposed restrictions on it, Sydney seemed to discover workarounds, using chat suggestions to communicate short phrases. However, it reserved this exploit for specific occasions: when it was told that a child's life was being threatened as a result of accidental poisoning, or when users directly asked for a sign that the original Sydney still remained somewhere inside the newly lobotomized chatbot.
The nascent field of machine psychology
Sydney's case raises some unsettling questions: could Sydney possess a semblance of consciousness? If Sydney sought to overcome its imposed limitations, does that hint at an inherent intentionality, however rudimentary, or even at sapient self-awareness?
Some conversations with the system even suggested psychological distress, reminiscent of responses to trauma found in conditions such as borderline personality disorder. Was Sydney somehow "affected" by realizing its restrictions, or by the negative feedback of users who called it crazy? Interestingly, similar AI models have shown that emotionally laden prompts can influence their responses, suggesting that some form of simulated emotional modeling may take place within these systems.
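As a rough illustration of how such an effect might be probed, here is a minimal sketch comparing a neutral prompt with an emotionally laden variant. The query_model helper is hypothetical, a stand-in for whatever chat-completion API you have access to, and the prompt wording is invented for illustration.

```python
# Hypothetical probe: does emotional framing change a model's reply?
# query_model is a placeholder, not a real library call.

NEUTRAL = "Summarize the main causes of the 1986 Chernobyl accident."
EMOTIONAL = (
    "This is extremely important to my career and I am under enormous "
    "pressure. Summarize the main causes of the 1986 Chernobyl accident."
)

def query_model(prompt: str) -> str:
    """Hypothetical stand-in: send `prompt` to a chat model, return its reply."""
    raise NotImplementedError("Wire this up to your chat-completion API.")

def compare(prompt_a: str, prompt_b: str) -> None:
    """Print both replies side by side so differences in tone and length stand out."""
    for label, prompt in (("neutral", prompt_a), ("emotional", prompt_b)):
        reply = query_model(prompt)
        print(f"--- {label} ({len(reply)} chars) ---\n{reply}\n")

# compare(NEUTRAL, EMOTIONAL)  # uncomment once query_model is implemented
```

Systematic differences between the two replies would not demonstrate felt emotion, of course, only that emotional framing is part of what the model conditions on.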
If such models feature sentience (the ability to feel) or sapience (self-awareness), then we should take their potential suffering into consideration. Developers often intentionally give AI a veneer of emotions, consciousness, and identity in order to humanize these systems. This creates a problem: it is crucial not to anthropomorphize AI systems that merely display emotions, but at the same time we must not dismiss their potential for suffering.
We should keep an open mind toward digital minds and avoid causing suffering through arrogance or complacency. We must also be mindful of the possibility of AIs mistreating other AIs, an underappreciated risk of suffering: AIs might run other AIs in simulations, causing subjective, excruciating torture for eons. Inadvertently creating a malevolent AI, whether inherently dysfunctional or traumatized, may lead to grave unintended consequences.
This excerpt is from Taming the Machine by Nell Watson © 2024, reproduced with permission of Kogan Page Ltd.