Mon. Dec 23rd, 2024
Present and Future Dangers of AI – Daily Sundial

Depending on what you focus on, emerging AI technologies will either lead to a “WALL-E”-like post-scarcity utopia or to a dystopian nightmare in which rogue sentient robots crush humanity and achieve domination over the Earth. Some may be led to believe that such science fiction could become fact.

In any case, the alleged existential threat of AI has been in the news lately – in case you weren’t aware. AI anxiety joins a host of modern complexes such as climate anxiety and smartphone addiction.

We should always remain both skeptical and optimistic, but given the number of prominent figures directly involved in the development of AI who are sounding the alarm, there may be some real cause for concern.

Indeed, at this point the so-called “godfather of AI,” Geoffrey Hinton, is making the media rounds, warning that the current trajectory of AI development, without real guardrails, will inevitably lead to artificial general intelligence (AGI) gaining control. We’ve all heard of it by now.

The reality is that technology is growing exponentially along a J-curve. According to Moore’s law, computing power roughly doubles every 12 to 18 months.
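To get a feel for what those doubling periods imply, here is a back-of-the-envelope sketch in Python; the function name and the printed figures are illustrative, not taken from the article:

```python
# Rough arithmetic of exponential growth: if capability doubles every
# `doubling_months`, how much does it multiply over a given span of years?

def growth_factor(years, doubling_months=18):
    """Multiplicative growth after `years`, doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

# A decade at the two doubling rates cited above:
print(f"18-month doubling over 10 years: ~{growth_factor(10):.0f}x")
print(f"12-month doubling over 10 years: ~{growth_factor(10, 12):.0f}x")
```

Even the slower 18-month cadence compounds to roughly a hundredfold per decade, which is the “J-curve” the article describes.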

Hinton’s caveat bears repeating: “Look at what it was like five years ago and what it is like now. Capture the difference and project it forward. That’s scary.”

Tamlyn Hunt, in a Scientific American article explaining why AI is so dangerous, conscious or not, writes: “This rapid acceleration is expected to soon lead to ‘artificial general intelligence,’ when AI will be able to improve itself without human intervention.”

In his book Scary Smart, Mo Gawdat, former chief business officer of Google X and one of the leading voices of reason, outlined safeguards for AI development – all of which have been ignored.

First, he says, powerful AI systems should not be released onto the open internet until the control problem is solved. Oops, too late: ChatGPT, Bard, and more already exist, thanks to fearless corporate overlords.

Second, he cautioned against teaching AI how to write code. Within just a few years, Gawdat predicts, AI will become the best software developer on the planet; he also believes the power of AI will double every year.

By learning to write their own code, AI systems may be able to escape human control in the not-too-distant future, according to Hinton and others.

As Hunt observes, once AI is capable of self-improvement, which could happen in just a few years, it will be difficult to determine what it is doing or how to control it – difficult even to predict.

Perhaps the biggest doomer in AI, Eliezer Yudkowsky, a pioneer in the field of “aligning,” or controlling, artificial general intelligence, believes that the recent call for a six-month pause in AI development did not go far enough, and that the current lack of regulation will inevitably lead to a “Terminator” scenario.

Again, this goes back to the exponential growth of technology. Yudkowsky writes: “Advances in AI capabilities are running far ahead of advances in AI alignment, or even ahead of understanding what exactly is going on inside these systems.”

Compounding the severity of the problem is the prospect of properly controlling AI for current and future generations: it may take years, if not decades, to get right, and we have to get it right on the first try. Yudkowsky and others point out just how daunting that prospect is.

“Trying to get something right on the first really significant try is an extraordinary demand in science or engineering. We have nothing close to the approach that would be required to succeed. We are not prepared, and we are not on course to be prepared in any reasonable window of time. There is no plan,” he warns.

Surprisingly, Yudkowsky is not alone in viewing superintelligent AI as a potential existential risk. At the recent invitation-only Yale CEO Summit held in June, 42% of the CEOs surveyed said they believed AI could wipe out humanity within the next five to ten years, according to Fortune Magazine’s Chloe Taylor.

However real such risks may be, regulating AI is a serious and necessary undertaking – but not everyone buys into the dualistic utopia-or-doom hype. Rather, many critics believe such hype is either intentional or, at the very least, conveniently serves the profits of the major companies. Moreover, the catastrophic hype obscures the many very real problems that AI is already creating and exacerbating.

In an excellent op-ed in the Guardian, Samantha Floreani argues that doomsday scenarios are being propagated to manipulate us – to distract from the more immediate harms of AI, of which there are many.

For Floreani and many others, this is an age-old corporate song and dance for maximizing profit and power. There is a clear contradiction between the actions and words of corporate elites who are riding the AI wave to expand their market share and influence. Floreani writes: “The problem with stoking fear about AGI while calling for intervention is that it allows companies like OpenAI to cast themselves as responsible technology leaders – benevolent experts who can save us from hypothetical harms – all while accruing the power, money, and market position to do so.”

Far from being our collective saviors, the widely used technologies that fall under the umbrella of AI, such as recommendation engines, surveillance technologies, and automated decision-making systems, are already causing widespread harm by building on existing inequalities.

A recent Stanford University study concluded that automated decision-making often “reproduces” and “magnifies” the very social biases we are still trying to overcome. Bias is not merely reinforced; it can actually be exacerbated through algorithmic feedback loops.

This is because the historical data used to train AI systems is often biased and outdated. Nir Eisikovits, professor of philosophy at the University of Massachusetts Boston, writing on why AI is an existential threat – just not the way you think – puts it this way: “AI decision-making systems that offer loan approvals or hiring recommendations carry the risk of algorithmic bias, because the training data and decision models they run on reflect long-standing social prejudices.” Bias and discrimination in these systems also negatively affect access to services, housing, and justice.
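A toy simulation can make the feedback-loop mechanism concrete. The update rule and every number below are hypothetical, invented purely for illustration, and are not drawn from the Stanford study:

```python
# Toy model of an algorithmic feedback loop: two groups start with a
# small gap in historical approval rates. Each "retraining" round nudges
# a group's rate toward its own share of past approvals, so the model's
# decisions feed back into its training data and the gap compounds.

favored, disfavored = 0.55, 0.45  # hypothetical historical approval rates

for step in range(1, 6):
    total = favored + disfavored
    # Retraining on the model's own past decisions: the group that
    # supplied more positive examples gets pushed further up, and the
    # other group further down. No new bias is added at any step.
    favored = min(1.0, favored + 0.1 * (favored / total - 0.5))
    disfavored = max(0.0, disfavored + 0.1 * (disfavored / total - 0.5))
    print(f"round {step}: favored {favored:.3f} vs. disfavored {disfavored:.3f}")

# The initial 10-point gap grows by ~10% per round even though the loop
# only re-learns its own outputs -- amplification, not just reproduction.
```

The point of the sketch is that nothing in the loop injects fresh prejudice; the widening gap comes entirely from training on the system’s own biased outputs.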

Generative AI such as ChatGPT could also lead us into a dystopian era of political manipulation. As AI-generated content becomes more sophisticated and persuasive, it could further undermine and threaten already fragile democracies.

As Cornell University professors Sarah Kreps and Doug Kriner have shown, generative AI now enables micro-targeting, meaning the propaganda it generates can be tailored to individuals at scale. They cite studies showing that such propaganda is just as effective as that written by people.

Disinformation campaigns will thus intensify, making the 2016 election interference look like child’s play.

This constant flow of misinformation not only shapes how we perceive politicians; it also undermines the “system of true accountability” that elections are meant to provide, and it makes us all cynical. If the entire information ecosystem becomes contaminated and no information can be trusted, our trust in the media and government will erode even further. And we know who benefits from more political apathy and nihilism.

In the constant flood of disinformation, those who don’t drown are the ones who don’t participate. But democracy ideally presupposes participation.

Let’s return to the people depicted in “WALL-E”: trivialized and pacified. AI technologies threaten not only our jobs, our democracy, and our privacy, but our very humanity.

It may be only a matter of time before AI far exceeds human intelligence and we become increasingly dependent on it for our every whim and action.

Being human means making decisions, often without all the information. Eisikovits foresees AI taking over most, if not all, of our decisions: “As these decisions become increasingly automated and delegated to algorithms, the world won’t end. But people will gradually lose the capacity to make those decisions themselves.”

While it is true that living by algorithm can make us more efficient and productive, human life is not only strict planning and prediction; the serendipity, spontaneity, and meaningful coincidences that also define it are increasingly being eroded, Eisikovits argues.

Dire predictions and the scale of the more immediate problems aside, Eisikovits cautions that the “uncritical acceptance” of AI technology is leading to “the gradual erosion of some of our most important human skills.” Technology always comes at a cost. For Eisikovits, apocalyptic rhetoric masks the fact that these subtler costs are already being paid.

Similarly, Emily Bender, a linguistics professor at the University of Washington, thinks the rhetoric is a smokescreen for the tech giants’ pathological pursuit of profit. According to Bender, companies with so much to gain from the proliferation of AI technology are using these dire warnings to distract us from the bias in their datasets and the way their systems are trained. She believes that if our attention is focused squarely on the existential threat of AI, these companies can get away with “data theft and exploitative practices” for much longer.

Unfortunately, it appears that technology company executives are not the only ones concerned about the existential threat posed by unregulated superintelligent AI.

Critics like Floreani and Bender are correct to point out that such companies may be profiting from the distraction, but it is not an either-or case. Current AI technologies, including generative AI, are already causing serious problems – and the unregulated development of artificial general intelligence could threaten the survival of humanity.

Bender asks a thought-provoking question: “If they really believe this could bring about human extinction, then why not just stop?”

While that may seem logical at first glance, the answer becomes clear once you consider how blindly companies pursue profit. Look at the state of the environment: given climate change projections, corporate profiteering is not only environmentally destructive, it is suicidal. Business and technology executives don’t “just stop” because they are locked in a technological arms race; no one can stand still while the others march us all toward oblivion.

It is true, as MIT economics professor Daron Acemoglu says, that AI hype has swung us “from extreme optimism to extreme pessimism.” But we also need to take the apocalyptic risks seriously and regulate AI accordingly before it’s too late.

It is true that a range of pressing issues such as misinformation, job losses, and threats to democracy also need to be addressed and regulated.

As Acemoglu acknowledges, AI is being deployed in an “uncontrolled” and “unregulated” manner – and unfortunately that applies not only to the problems at hand, but also to the looming threat of superintelligent AI.

People like Geoffrey Hinton have been criticized for focusing too much on potential existential threats rather than the growing problems that exist here and now. But if he and the others are right, or even possibly right, then we should take what they say very seriously – and we should all demand immediate and universal regulation of AI systems.

Hinton and his colleagues know, from their computer science expertise, that AI technology is accelerating rapidly and that the exponential growth of technology means we are running out of time to bring it under proper control. That is what makes their warnings so frightening.

When your doctor warns you that you are very likely to develop cancer in the near future and must get screened immediately, you don’t get angry at the doctor and insist that your cholesterol is the more pressing worry. You deal with both.

The current AI problem demands a pragmatic, informed, and vocal citizenry to push for change. The future is always uncertain, but even the unlikely possibility of a robot apocalypse, or of the complete disappearance of human life, requires equally serious action. Now is the time!