Artificial general intelligence (AGI), whether theorized or perhaps already realized, is a frequent topic of conversation in a world where people talk to machines daily. There is, however, an inherent problem with the term AGI, and it is rooted in perception. Assigning “intelligence” to a system instantly anthropomorphizes it, fostering the sense that a human-like mind is working behind the scenes. That notion of a mind, in turn, deepens the impression that some single entity is doing all of this human-level thinking.
This problematic perception is further exacerbated by the fact that large language models (LLMs) such as ChatGPT, Bard, and Claude make a mockery of the Turing test. They certainly seem very human, so it’s no surprise that people turn to LLMs as therapists, friends, and lovers (sometimes all at once, and sometimes with disastrous results). Does the humanness of their predictive abilities amount to some kind of general intelligence?
By some estimations, important aspects of AGI have already been achieved by the LLMs mentioned above. In a recent Noema article, Blaise Agüera y Arcas (a vice president and researcher at Google Research) and Peter Norvig (a computer scientist at the Stanford Institute for Human-Centered AI) argue that today’s most advanced LLMs have reached “a threshold that previous generations of AI and supervised deep learning systems have not been able to manage. Decades from now, they will be recognized as the first true examples of AGI.”
For OpenAI and other companies, AGI is still on the horizon. “We believe our research will eventually lead to artificial general intelligence,” its research page declares, “a system that can solve human-level problems.”
Whether early forms of AGI already exist or are still years away, businesses looking to leverage these powerful technologies are likely to create miniaturized versions of it. A business needs a technology ecosystem that can mimic human intelligence with the cognitive flexibility to solve increasingly complex problems. This ecosystem must use and coordinate existing software to understand everyday tasks, contextualize massive amounts of data, learn new skills, and work across a wide range of domains. LLMs by themselves can only perform part of this work; they seem most useful as part of a conversational interface that lets people talk to the broader technology ecosystem. Leading companies are already pursuing strategies that move in this direction, toward something we might call organizational AGI.
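To make the idea of an LLM acting as a conversational front end to existing software more concrete, here is a minimal sketch of such an orchestration layer. All of the names in it (the `Skill` class, the example skills, `route_request`) are hypothetical illustrations invented for this article, not any real product’s API, and the intent classifier is a keyword stand-in for what would normally be a language-model call.

```python
# Sketch of an LLM-fronted orchestration layer. Names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Skill:
    """A reusable unit of work the technology ecosystem can perform."""
    name: str
    description: str
    run: Callable[[str], str]


def book_time_off(request: str) -> str:
    # In a real system this would call HR software on the user's behalf.
    return "Time-off request submitted for approval."


def summarize_report(request: str) -> str:
    # In a real system this would pull and condense an actual document.
    return "Summary: revenue up, churn flat."


SKILLS: Dict[str, Skill] = {
    "book_time_off": Skill("book_time_off", "File a PTO request", book_time_off),
    "summarize_report": Skill("summarize_report", "Summarize a document", summarize_report),
}


def classify_intent(utterance: str) -> str:
    """Stand-in for the LLM: map a natural-language request to a skill name."""
    text = utterance.lower()
    if "time off" in text or "vacation" in text:
        return "book_time_off"
    return "summarize_report"


def route_request(utterance: str) -> str:
    """The conversational interface: classify, then delegate to existing software."""
    skill = SKILLS[classify_intent(utterance)]
    return skill.run(utterance)


print(route_request("I'd like to take some time off next Friday"))
```

The point of the sketch is the shape, not the code: the model interprets the request, and existing software does the work.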
Organizational AGI? (eye roll)
There are good reasons to be wary of yet another bit of jargon tossed onto the slush pile of AI terminology. But regardless of what you call the end result, organizations are already using LLMs as an interface layer. They have built ecosystems where users can converse with software through channels such as rich web chat (RWC), concealing the machinations going on behind the scenes. This is difficult work, but the rewards are huge: rather than jumping between apps to get something done, customers and employees can simply ask the technology to do the task for them. Removing tedious tasks from people’s lives delivers immediate, tangible benefits. There are also long-term benefits to a burgeoning ecosystem in which employees and customers interact with digital teammates that can leverage all forms of data across the organization to perform automations. It is an ecosystem that begins to take the form of a digital twin.
McKinsey describes a digital twin as “a virtual replica of a physical object, person, or process that can be used to simulate its behavior to better understand how it works in real life.” The firm goes on to say that digital twins inside an ecosystem like the one I’ve described can create an enterprise metaverse: “an immersive virtual environment that replicates and connects every aspect of an organization to optimize simulations, scenario planning and decision making.”
With regard to what I said earlier about anthropomorphizing technology: the digital teammates in this kind of ecosystem are abstract, but I think of them as intelligent digital workers, or IDWs. An IDW is akin to a collection of skills. These skills come from shared libraries and can be adapted and reused in many ways. Skills allow LLMs to leverage the information accumulated within an organization, including by mining unstructured data such as emails and recorded calls.
This data becomes more meaningful thanks to graph technology, which indexes the skills, systems, and data sources. A graph is more than just a list: it captures how these elements relate to and interact with one another. Representing and analyzing relationships is one of graph technology’s core strengths, and in a network of IDWs, understanding how the various components link together is critical to efficient orchestration and data flow.
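As a toy illustration of why a graph beats a flat list here, the sketch below indexes a few skills, systems, and data sources as labeled edges and then walks the graph to find everything a given skill can reach. The node and relationship names are invented for this example.

```python
# Toy graph of skills, systems, and data sources. All names are invented.
from collections import defaultdict, deque

# (source, relationship, destination) triples describing the ecosystem.
EDGES = [
    ("summarize_report", "depends_on", "crm_system"),
    ("crm_system", "stores", "customer_emails"),
    ("book_time_off", "depends_on", "hr_system"),
    ("hr_system", "stores", "employee_records"),
]

graph = defaultdict(list)
for src, rel, dst in EDGES:
    graph[src].append((rel, dst))


def reachable_data(skill: str) -> list:
    """Breadth-first walk: every system and data source a skill can touch.

    A flat list of components cannot answer this question, because the
    answer lives in the relationships between components.
    """
    seen, queue, found = {skill}, deque([skill]), []
    while queue:
        node = queue.popleft()
        for _rel, dst in graph[node]:
            if dst not in seen:
                seen.add(dst)
                found.append(dst)
                queue.append(dst)
    return found


print(reachable_data("summarize_report"))  # ['crm_system', 'customer_emails']
```

A production system would use a proper graph database, but the principle is the same: orchestration queries are traversals over relationships.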
Generative tools such as LLMs and graph technology can work together to propel the transition toward digital twinhood, or organizational AGI. A twin encompasses every aspect of the business: events, data, assets, locations, people, and customers. A digital twin may begin with low fidelity and a limited view of the organization, but the more of the organization’s interactions and processes it captures, the higher its fidelity becomes. At that point, the organization’s technology ecosystem does more than model its current state; it can also autonomously adapt and respond to new challenges.
In this sense, every part of the organization becomes part of an intelligence united toward common goals. To my mind, it mirrors the nervous system of a cephalopod. As Peter Godfrey-Smith writes in his book Other Minds (2016, Farrar, Straus and Giroux), “In an octopus, the majority of neurons are in the arms themselves, nearly twice as many in total as in the central brain. The arms have their own sensors and controllers. They have not only the sense of touch, but also the capacity to sense chemicals, to smell or taste. Each sucker on an octopus’s arm may have 10,000 neurons to handle taste and touch. Even an arm that has been surgically removed can perform various basic motions, like reaching and grasping.”
Does this sound messy?
A world full of self-aware brands would certainly be a busy one. Gartner predicts that by 2025, generative AI will be a workforce partner for 90% of companies worldwide. That doesn’t mean all of these companies will be moving rapidly toward organizational AGI, however. Generative AI, and LLMs in particular, cannot meet an organization’s automation needs on their own. Giving every employee access to GPTs or Copilot won’t move the needle much in terms of efficiency. It might help people write better emails faster, but it takes significant effort to make LLMs a reliable resource for user queries.
LLMs’ hallucinations are well documented, and training them to provide reliable information takes enormous effort. Jeff McMillan, chief analytics and data officer at Morgan Stanley (MS), told me it took his team nine months to train GPT-4 on more than 100,000 internal documents. This work began before the launch of ChatGPT, and Morgan Stanley had the advantage of working directly with OpenAI. The result is a personal assistant that the investment bank’s advisors can chat with, one that taps a large portion of Morgan Stanley’s collective knowledge. “Now you’re talking about connecting it to every system,” McMillan said of building the kind of ecosystem organizational AI requires. “I’m sure that’s the direction things are going.”
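One common way to ground a model in internal documents is retrieval: before the model answers, fetch the most relevant documents and include them in the prompt. The sketch below shows that generic pattern with a crude word-count similarity score; it is not Morgan Stanley’s actual pipeline, and the documents and scoring are toy examples.

```python
# Generic retrieval sketch for grounding an LLM in internal documents.
# This is a toy illustration, not any firm's real pipeline.
import math
from collections import Counter

DOCS = {
    "pto_policy.txt": "employees accrue paid time off monthly and submit requests online",
    "trading_hours.txt": "equity trading desk hours are 9:30 to 16:00 eastern",
}


def score(query: str, doc: str) -> float:
    """Crude relevance: cosine similarity over raw word counts."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return overlap / norm if norm else 0.0


def retrieve(query: str) -> str:
    """Return the best-matching document; its text would be fed to the
    model alongside the question so the answer is grounded in it."""
    return max(DOCS, key=lambda name: score(query, DOCS[name]))


print(retrieve("how do I submit a paid time off request"))  # pto_policy.txt
```

Real deployments replace the word-count score with learned embeddings and a vector index, but the control flow (retrieve, then generate) is the same.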
Companies like Morgan Stanley that are already laying the groundwork for so-called organizational AGI hold a significant advantage over competitors still deciding how to integrate LLMs and adjacent technologies into their operations. So rather than a world full of self-aware organizations, we are more likely to see a handful of market leaders in each industry.
This is relevant to AGI in the broader sense because these intelligent organizations will need to interact with other intelligent organizations. It’s hard to say how deep the information sharing between these elite organizations might run, but over time, such interactions could well play a role in bringing about AGI writ large, a.k.a. the Singularity.
Ben Goertzel, founder of SingularityNET and widely credited with popularizing the term AGI, makes a convincing case that AGI should be decentralized, relying on open-source development, decentralized hosting, and mechanisms through which interconnected AI systems can learn from and teach one another.
As SingularityNET’s DeAGI manifesto puts it: “The most straightforward way to achieve this seems to be for AGI to ‘grow up’ in the context of serving and being guided by all of humanity, or as good an approximation as can be mustered.”
There is a danger in AGI emerging from the competitive activities of commercial companies. As Goertzel pointed out, “It begs the question [of] who owns and controls these potentially creepy and configurable human-like robot assistants … and whether they sell things to people and brainwash them into corporate-government-media advertising orders, or rather, to what extent they are fundamentally motivated to help people.”
There is a strong argument that allegiance to profit will override the promise these technologies hold for humanity as a whole. Oddly enough, the Skynet scenario in The Terminator (wherein the system becomes self-aware, determines that humanity is a major threat, and exterminates all life) presumes a system siloed within a single company and programmed with a survival instinct that prioritizes its own existence at any cost. This suggests we need to be especially careful about developing these systems in environments where profit is paramount above all else.
Perhaps most important is the idea that we must keep this technology in human hands, ensuring that the myriad technologies associated with AI are used only in ways that benefit humanity as a whole, that don’t exploit marginalized groups, and that don’t propagate synthetic bias at scale.
Whatever it is, it’s ultimately a human issue.
I took some of these ideas about organizational AGI to Jaron Lanier, co-creator of VR technology as we know it and Microsoft’s “Octopus” (Office of the Chief Technology Officer, Prime Unifying Scientist). He told me that my terminology was a mess and that my ideas didn’t match his perception of the technology. Even so, it felt like we agreed on the core aspects of these technologies.
“I don’t think of AI as creating new beings. I think of it as a collaboration between people,” Lanier said. “That’s the only way to think about using it well … To me, it’s all a form of collaboration. The sooner we see that, the sooner we can design useful systems … For me, there are only people.”
In that sense, AGI is yet another tool, not unlike the stones our ancestors used to crack open nuts. It is a manifestation of our ingenuity and our aspirations. Will we use it to crush every nut on the planet, or to figure out how to grow enough nuts for everyone to enjoy? The trajectory we set now matters enormously.
“We are in the Anthropocene. We are in an era where our actions affect everything in our biological environment,” Blaise Agüera y Arcas, author of the Noema article, told me. “The Earth is finite, and if we don’t have a sense of solidarity that allows us to see the whole Earth as part of our own body, we’re going to be in a mess.”
Josh Tyson is the co-author of Age of Invisible Machines and director of creative content at OneReach.ai. He co-hosts two podcasts: Invisible Machines and N9K.