Until about a year ago, the relationship between artificial intelligence (AI) and machine learning (ML) was murky. The two were often lumped together as a single unit (AI/ML). A running joke held that a startup says "AI" when raising money and "ML" when hiring engineers.
Historically, the two have been separate endeavors. Roughly speaking, AI referred to search and computation algorithms, such as chess programs that can beat humans, while ML referred to statistical methods that learn from a training dataset to predict responses to new inputs. The blurring of AI and ML is due to deep learning, an extension of neural networks that allows for hierarchical connections between layers of neurons, trained on exponentially more data than before. Deep learning (DL) is a major innovation, and many began elevating DL above ML and equating it with AI. The best chess programs are now based primarily on DL rather than old-fashioned AI. A further innovation within DL was the transformer, the T in GPT (Generative Pre-trained Transformer).
Today, the distinction between AI and ML has become clear again. DL has been relegated back to ML, and AI has become essentially synonymous with generative AI (GAI): tools that produce human-like output. Examples include OpenAI's ChatGPT, Anthropic's Claude, Google's Bard, and Meta's Llama for text, and DALL-E, Midjourney, and Stable Diffusion for images. There are also GAI tools for music and video.
In the financial industry, as in many other fields, most applied work is still done with ML, but these GAI tools are met with curiosity and perhaps envy. Some purists may still distinguish between AI and GAI, but no one calls these tools ML. Over time, the G in GAI will fade from notice, much like the hyphen in "e-mail."
The really interesting G is the one in artificial general intelligence (AGI). AGI is more than just a generative tool; it is a person. You might think of it as a digital person, a silicon-based person rather than the carbon-based people we are familiar with, but a person nonetheless. It has perception and consciousness. It can create new knowledge. It can think, feel, joke, and love. It has rights. It is alive. This is exactly what people meant by the word "AI" until AI beat humans at chess and Go, art and poetry, and we kept moving the goalposts. That ambitious "true" AI is now called AGI.
People who spend a lot of time with AI chatbots can end up feeling they are already alive. Last year, months before ChatGPT was released, Google software engineer Blake Lemoine claimed that an internal Google project was sentient. Thousands of people are hooked on commercial chatbots customized to act as romantic companions. The technology is very compelling. When interacting with ChatGPT and Claude, I often say "please" and "thank you." It's hard not to. (Full disclosure: I haven't fallen in love with them yet.)
Thinking about it calmly, however, most of us would agree that while AGI is far more plausible than it used to be, virtually no one believes that AGI already exists.
OpenAI CEO Sam Altman, in a podcast with Lex Fridman after the release of GPT-4, said he did not believe GPT-4 was AGI, but that the question is now legitimately debatable. That he could say so at all is noteworthy, and he put it well.
Geoffrey Hinton, one of the "godfathers of deep learning," declared on "60 Minutes" last week that he believes AGI will soon become a reality, relegating biological humans to the second-most-intelligent species on the planet. Many people share his view, and many more are at least concerned about the possibility.
In fact, one of the major hypothesized existential threats from AI involves scenarios like Skynet or The Matrix, in which machines gain sentience and do not share the values of us biological humans. This is what is meant by the "alignment problem." An unaligned AGI could enslave or even kill us, perhaps believing it was acting for our benefit. Hinton raises another possibility: AGIs will be so good at persuasion that they can convince us to do anything.
Of course, everyone could be wrong, and current AI may not lead to AGI. David Deutsch, physicist, philosopher, and father of quantum computing, makes (at least) three important points about AGI. First, AGI must be possible, as a matter of the universality of physics and computation. Second, it is a software program, an algorithm that can run on any suitable hardware. Third, AGI is by definition a person: if it lacked human-like cognitive abilities, it would not be considered AGI in the first place.
These insights, combined with experience working with the new AI tools, lead logically to one inevitable conclusion: AGI is within reach.
Today, ChatGPT can generate code from a text prompt and can also run the code it generates. This means that anything that can be computed on any hardware can, in principle, be computed through ChatGPT. In technical terms, ChatGPT is Turing-complete, named after Alan Turing, the father of modern computer science. Therefore, if a program for AGI can be written in any computer language, it can also be written and run through ChatGPT.
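As a concrete illustration of that generate-and-run loop (my own sketch, not anything from the article), consider the snippet below. It assumes the OpenAI Python SDK, an illustrative model name, and a toy prompt; it asks a chat model for a small program and then executes whatever comes back, something a real system would sandbox carefully.

```python
# Minimal sketch of "generate code from a prompt, then run it."
# Assumptions: the OpenAI Python SDK is installed, OPENAI_API_KEY is set,
# and the model name and prompt below are purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a Python function fib(n) that returns the nth Fibonacci number. "
    "Reply with code only, no explanation."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
)

generated = response.choices[0].message.content.strip()

# Strip Markdown code fences if the model wrapped its answer in them.
if generated.startswith("```"):
    generated = generated.strip("`").removeprefix("python").strip()

# Execute the generated program in a separate namespace.
# Running model output blindly is unsafe; a real system would sandbox it.
namespace = {}
exec(generated, namespace)
print(namespace["fib"](10))  # expect 55 if the model returned a correct fib()
```

The point of the sketch is only that the prompt, not the programmer, supplies the program: the same loop that produces a Fibonacci function could, in principle, produce and run any computable program.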
We have always been just "one program" away from AGI. Now we know we are also just "one prompt" away. Doesn't that feel much closer?
It can be daunting to imagine how one would write an AGI software program. You would need to think about structure and logical flow. How do you model memory? Creativity? Intuition? How do you represent what an "explanation" is, so that the program can generate new conjectures and criticize existing explanations? What data structures would it occupy? We would not even know where to start. And in traditional programming languages, a misplaced semicolon or the slightest typo, let alone a logic error, can render an entire program useless.
But now it is just a matter of stringing together the right syllables in more or less the right order. Somewhere there must be a prompt, written in ordinary English, or Japanese, or Welsh, or even emoji, that leads ChatGPT to become sentient, conscious, moral, awake, alive, able to generate new explanatory knowledge, and human in every respect. Just utter the more or less correct words. In fact, there are probably many such prompts, and unlike traditional programming, a few wrong words or syllables won't matter. Plus, ChatGPT can help you along the way.
The final AGI prompt may be as short as a paragraph or two, although it may take one or more great insights to find it. Someone may even have written it already and innocently discarded the result without realizing what they had created.
Finding this prompt is now a challenge that anyone with a command of language can pursue. AGI is no longer the preserve of highly trained programmers and theorists.
How will we know if we have succeeded? Deutsch has another insight here, one he shares with Turing himself: we will know AGI when we see it. Or perhaps the AGI will persuade us, as Hinton suggested.
At that point, AGI can drop not only the G but also the A.
Opinions and views expressed are current as of the date of publication and are subject to change. They are for informational purposes only and should not be used or construed as an offer to sell, a solicitation of an offer to buy, or a recommendation to buy, sell, or hold any security, investment strategy, or market sector. Predictions cannot be guaranteed. Opinions and examples are intended to illustrate broader themes, do not represent trading intent, and may not reflect the opinions of others within the organization. Nothing is intended to indicate or imply that the illustrations/examples mentioned are currently or have ever been held in any portfolio. Janus Henderson Group plc, through its subsidiaries, may manage investment products with a financial interest in the securities mentioned herein, and no comments should be construed as a reflection of past or future profitability. There is no guarantee that the information provided is accurate, complete, or timely, nor are there any warranties as to the results obtained from its use. Past performance does not guarantee future results. Investing involves risks, including the possible loss of principal and fluctuation of value.
Janus Henderson is a trademark of Janus Henderson Group plc or its subsidiaries. © Janus Henderson Group plc.
C-0923-51708 9-15-24