Mon. Dec 23rd, 2024
AI Is Basically a “Labor Replacement Tool”

Welcome to This Week in AI, Gizmodo’s weekly deep dive into what’s happening in artificial intelligence.

For many months now I’ve been harping on the same point: in certain respects, the artificial intelligence tools currently on offer are primarily good at one thing, and that is replacing human employees. Much of the so-called “AI revolution” is really a revolt by corporations against the general public, leveraging new technology to reduce headcount across companies. The biggest sellers of AI are quite open about this; they have acknowledged time and time again that new forms of automation will lead to human work being replaced by software.

We were reminded of that again this week when Mustafa Suleyman, co-founder of Google’s DeepMind, sat down for an interview with CNBC. Suleyman was in Davos, Switzerland, attending the World Economic Forum’s annual gathering, where AI was reportedly one of the most popular topics of conversation. During the interview, news anchor Rebecca Quick asked Suleyman whether AI would “massively replace humans in the workplace.”

The tech CEO’s answer: “I think in the long term, over many decades, we’re going to have to think very seriously about how we integrate these tools, because if left entirely to the market, these are fundamentally labor-replacing tools.”

And there it is. Though Suleyman makes this sound like a hazy future hypothetical, the “labor replacement” he describes is clearly already underway. The technology and media industries, which are uniquely exposed to the threat of AI-related job losses, saw large-scale layoffs last year, just as AI was “coming online.” In the first few weeks of January alone, established companies like Google, Amazon, YouTube, and Salesforce announced further aggressive layoffs, explicitly linked to expanding AI adoption.

The general consensus in corporate America seems to be that companies should use AI to run leaner teams, powered by a small cadre of AI-savvy experts. These AI professionals will become an increasingly sought-after class of worker, since they offer companies the chance to reorganize their corporate structures around automation and become more “efficient.”

For businesses, the benefits are clear. You don’t have to pay a software program a salary or provide it with health benefits. A software program doesn’t take six months of leave to care for a new baby, and it doesn’t launch a union campaign in the break room because it’s unhappy with working conditions.

The billionaires touting this technology make vague rhetorical gestures toward something like a universal basic income as a cure for the worker displacement that will inevitably occur. Only a fool would think these are anything more than empty promises designed to keep a restless lower class at bay. The truth is that AI is a technology created by and for business owners the world over. This week’s frenzy in Davos, where the world’s wealthy fawned over it like Greek peasants discovering Promethean fire, is just the latest reminder of that.

Photograph: Stefan Vermes/Bloomberg (Getty Images)

Question of the day: What is OpenAI’s excuse for becoming a defense contractor?

The short answer to that question is “not a very good one.” This week, it was revealed that the influential AI organization was working with the Department of Defense to develop new cybersecurity tools. OpenAI had previously promised not to participate in the defense industry. Now, after a quick edit to its terms of service, the billion-dollar company is fully committed to developing new toys for the world’s most powerful military. When confronted about this rather drastic change of direction, the company’s response was essentially ¯\_(ツ)_/¯. “We used to have an essentially blanket ban on the military, so a lot of people thought that would ban a lot of these use cases, which people think are very consistent with what we want to see in the world,” a company spokesperson told Bloomberg. I’m not sure what that means, but it doesn’t seem particularly convincing. And of course, it’s not just OpenAI; many companies are currently rushing to sell their AI services to the defense community. What this means is that a technology hailed as the “most innovative” in recent decades will inevitably be sucked into the American military-industrial complex. Considering that other countries are already doing the same with AI, I suspect this is just the beginning.

More headlines this week

  • FDA approves new AI-powered device to help doctors spot signs of skin cancer. The Food and Drug Administration approved something called the DermaSensor, a handheld device doctors can use to scan patients for signs of skin cancer. The device leverages AI to perform a “rapid assessment” of areas of skin and evaluate whether they look healthy. While there are plenty of silly uses for AI out there, experts say AI may actually prove quite useful in the medical field.
  • OpenAI is establishing connections with higher education. OpenAI is trying to extend its tentacles into every layer of society, and higher education is the most recent area to be compromised. This week, the organization announced a partnership with Arizona State University. As part of the deal, ASU will get full access to ChatGPT Enterprise, the business-tier version of the company’s chatbot. ASU also plans to build “personalized AI tutors” that students can use for help with their schoolwork. The university is additionally planning a “prompt engineering course” to teach students how to ask questions of chatbots. How convenient!
  • The internet is already full of AI-generated nonsense. A new report from 404 Media found that Google’s algorithms are boosting AI-generated content from a large number of questionable websites. According to the report, these sites are designed to scrape content from other, legitimate websites and repackage it algorithmically. The whole scheme revolves around automating content output to generate advertising revenue, and this regurgitated crap is promoted into search results by Google’s news algorithm. “The presence of AI-generated content on Google News suggests Google may not be ready to manage its news service in the age of consumer-access AI,” Joseph Cox wrote.