Mon. Dec 23rd, 2024
Australian Research Lab Trying To Tame The Beast Called AI

“Modern AI is amazing,” says Simon Lucey. “It’s remarkable. None of us thought we’d reach the heights we’re at now. But at the end of the day, it’s still a glorified ‘if this, then that’ machine.”

This is an honest assessment from the Director of the Australian Institute for Machine Learning (AIML) and Professor in the School of Computer and Mathematical Sciences at the University of Adelaide, whose research interests span computer vision, machine learning, and robotics.

“This is a technology based on brute force,” he says.

AIML is Australia’s first research institute dedicated to machine learning. It was established in early 2018 from the Australian Centre for Visual Technologies (ACVT) with funding from the South Australian Government and the University of Adelaide.

Simon Lucey. Credit: AIML

The institute has contributed hundreds of millions of dollars to the university’s research revenue and helped it rise to No. 2 for computer vision research in CSRankings, an international ranking of computer science institutions.

AIML’s work spans a wide range of interests, but its three main pillars are machine learning, artificial intelligence, and computer vision.

AIML has approximately 200 “members” consisting of leading academics, postdoctoral researchers, graduate students, scholarship recipients, a full-stack engineering team, and a small team of professional staff.

AIML is one of Australia’s largest machine learning research groups, and claims to be among the best in the world for computer vision.

The institute has partnered with Microsoft and worked with Amazon, but funding from the South Australian Government has allowed it to establish its own engineering team. Local companies can take advantage of “free” engineering time to build high-tech industrial software solutions.

One of its big clients was Rising Sun Pictures. AIML built AI tools used on films such as Elvis and many Marvel movies.

Inside AIML. Credit: AIML

University of Adelaide research students and AIML engineers used their spare time to compete in the 2022 Learn-to-Race Autonomous Racing Virtual Challenge, beating more than 400 international competitors to take first and second place in their categories.

Self-driving car milestones will be achieved as AI allows vehicles to understand their surroundings.

The AIs that have taken the world by storm in recent years, such as ChatGPT, Google Bard, and Alexa, are called Large Language Model (LLM) systems.

Currently, the focus seems to be on LLMs that absorb valuable knowledge from across the internet and package it all into algorithms. And doing this requires a huge amount of computing power.

LLMs can write an essay on Shakespeare’s A Midsummer Night’s Dream because they have read everything there is to read on the subject. They can then combine those details to create new variations and, on request, imitate the different writing styles they have seen.

They can translate between very different languages, because they can scour an internet’s worth of examples and average out the most commonly applied similar phrases.

They can access your entire internet history, interpolate your preferences and habits, and adjust your searches, social media, and ad feeds accordingly.

Yet machine translators still produce hilarious and embarrassing mistakes, because AI cannot understand context or nuance.

And AI content generators are often fooled by misinformation or make false connections.

“It just doesn’t understand,” Lucey says. “There’s no logic.”

AI is taught in a completely different way from human children. And that may be part of the problem.

“We don’t learn how to read by reading trillions of pages of text, but that’s how we currently teach AI to read.

“Similarly, we don’t need to study billions of images to learn to recognize what we’re looking at.”

And while memorizing the internet gives powerful systems like ChatGPT enormous pattern-recognition ability, it doesn’t create understanding.

“What’s really surprising is that ChatGPT 4 can do all these amazing things and yet it can’t multiply.”

“That’s because it learns by rote, by memorization, which is incredible.

“But it doesn’t understand what it sees. It can’t understand the rules behind it.”

However, it is precisely rote learning that much AI research relies on.


“So there’s this hypothesis, and I think a lot of companies are counting on it: if you just get enough data and enough computing, something called ‘emergence’ will happen. Somehow these machines will become more intelligent.

“But there are problems with that hypothesis. First, it’s very inefficient. It’s very expensive. It also requires collecting a huge amount of data.

“It’s almost limited to national superpowers and large multinational corporations. And they are, in some ways, already facing the limits of that process.”

Essentially, machine intelligence is a series of step-by-step instructions. If this, then that.

“Decades ago, people discovered that there were many intelligent tasks that could be programmed, such as baking a cake,” says Lucey.
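Lucey’s “if this, then that” description can be sketched as a toy rule-based program. This is purely illustrative (the function name and states are invented for this example, not AIML code): every behaviour is an explicit hand-written rule, with no learning involved.

```python
# A toy "if this, then that" machine: intelligence expressed as a fixed
# sequence of hand-written rules, using Lucey's cake-baking example.
# Illustrative sketch only; names and states are invented.
def next_baking_step(state: str) -> str:
    """Return the next instruction for a given state of the cake."""
    if state == "ingredients gathered":
        return "mix the batter"
    elif state == "batter mixed":
        return "pour into tin and bake"
    elif state == "baked":
        return "cool and serve"
    else:
        # Any state the rules don't cover falls through to a default --
        # the brittleness Lucey describes when AI meets the unfamiliar.
        return "gather ingredients"

print(next_baking_step("batter mixed"))  # -> pour into tin and bake
```

The point of the sketch is the failure mode: any situation the programmer did not anticipate hits the catch-all branch, which is one way to picture why rule-following systems fail on inputs they have never seen.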

ChatGPT 4 can do all these amazing things, but it can’t multiply.

Simon Lucey

The trick behind LLM AI is that it tries to memorize every recipe in its entirety.

It bundles all the examples it has seen so far into an algorithm. And the more it sees, the more variations the algorithm can encompass.

“That’s what we’re seeing with ChatGPT right now,” Lucey explains. “Between versions 2 and 3, the algorithm itself is essentially the same. The only things that have changed are the amount of data and the computing power used to scale it.”

Big data. Big compute. A lot of money.

“Only a couple of companies in the world, like OpenAI and DeepMind, can afford to create models at this scale. And it’s only going to get harder.”

However, despite this brute-force big data approach, LLM algorithms have yet to yield reliable fully autonomous vehicles.

“If you look at humans, we memorize a lot of things. But there’s also a lot we can generalize in some way. You don’t have to spend trillions of hours behind the wheel to learn to drive a car.

“We definitely make mistakes. But if a kid runs out onto the road, we know we have to stop.”

“When AI encounters something it has never seen before, it can fail disastrously,” Lucey explains.

“And in many ways, that’s the big gap between human and machine intelligence.”

Lucey says new approaches are needed to generate different types of intelligence, tailored to learning in specialized environments. The added benefit is that this avoids the need for expensive “big data, big compute” technology.

That’s what AIML is trying to leverage.

Space exploration is one example.

There is no internet-wide source of raw material to fill algorithms with trends and averages about life and activity on the Moon. So a rover must be able to adapt quickly, learning from its own experience and surroundings without access to a supercomputer.

This requires a new skill: the ability to reason.

Machines need to make “big picture” decisions, he says.

Here, many small pieces of information trigger different neural pathways to produce a coherent, if not perfect, picture, forming expectations from the available facts.

Reasoning.

“And that’s where we make broader thinking decisions,” he added.

New deep learning techniques aim to emulate these neural pathways.

“This is a doorway into AI,” Lucey says. “It’s not just that we can’t compete with the big guys, and it’s not just because it’s cheap and easy for Australia. It’s because this is the only way to solve some of the toughest problems out there.”