New York Times vs. Microsoft. Plus, Oren Etzioni on AI
An AI-created image in Microsoft Designer based on the prompt, “Generate an image that reflects the rise of AI and what’s next for the field in 2023.”

“The biggest surprise was that AI used to be narrowly defined…and when you moved from one application to another, you had to start everything from scratch. Here we are. …It’s almost like the love child of a search engine like Google, with the somewhat limited but very real intelligence of an AI system that you can talk to about anything.”

Oren Etzioni, AI2 Technical Director. (AI2 photo)

Computer scientist and entrepreneur Oren Etzioni, this week’s guest on the GeekWire Podcast, reflects on the past year in AI and talks about what’s next.

Etzioni has been an AI leader for decades. He is a professor emeritus at the University of Washington, a board member of the Allen Institute for AI (AI2), technical director of the AI2 Incubator, and a venture partner at Madrona Venture Group.

Among other roles, he advises the White House on policy issues as a member of the National AI Research Resources Task Force, and he is working on a project to combat AI-related misinformation in the upcoming election.

In the first segment of the show, my colleague John Cook and I discuss this week’s big AI news: The New York Times’ lawsuit against Microsoft and OpenAI over the use of its articles in GPT-4 and other AI models.

Listen below, or subscribe to the GeekWire Podcast on Apple Podcasts, Google Podcasts, Spotify, or wherever you listen, and keep reading for key takeaways.

Highlights of Oren Etzioni’s comments, edited for clarity and length:

AI evolution over the past year: “That was a real wake-up call. …I would like to say that our overnight success was decades in the making. And many of us have been aware of the potential of AI for some time. I don’t think any of us expected how quickly or how dramatically it would come in the form of ChatGPT or something like that. But we all saw it coming. We knew it. Now we know that literally the rest of the world is catching up, and that includes politicians, and that includes children, and that includes teachers. It’s changing every aspect of society.”

The role of AI in our work and lives: “The word copilot itself may not be appropriate. Maybe it’s assistant. And how well you delegate to the assistant is often just as important as its abilities. You can delegate poorly, you can underspecify. …This is where you might find your work monotonous, find things you don’t want to do, and AI can help you a lot with those, I think.”

AI “Toothbrush Test”: “What’s going to happen next in 2024 is what someone once called the toothbrush test. So how many times a day do you use that technology? Most people brush their teeth two to three times. So I think in 2024, we’re going to see an explosion of AI passing the toothbrush test. You’ll see us using these tools two, three, 10, even 20 times a day. And I’m not even talking about the implicit use, when you’re doing voice recognition in your car or when the Google search engine uses it to re-rank things. What I’m talking about is us interacting with AI systems, whether for music, art, or work. On average, I think it’s about 10 times a day.”

The need for a strong open source model: “Consolidation of power in AI is a big risk. And we’ve seen some of that in top companies as well. The power to counter that is, first of all, the open source model. It’s similar to what we saw with operating systems: billions of dollars were spent on Windows, but there was also Linux, which was championed by the open source movement. So a Linux for language models, a Linux for AI. I hope that becomes a reality, and I also think the government has a role to play in making resources available.”

Risk of AI-powered misinformation: “We’ve already seen this in previous elections, but now it’s cheaper and easier to do with generative AI. I’m worried about the impact on the November election: the primaries, the election itself, the potential for distrust. And I am determined to do something about this problem to ensure that generative AI does not become the Achilles heel of democracy.”

What it takes to combat AI misinformation: “I think there needs to be strong regulation. I think there needs to be education. People need to understand how to critically evaluate what they’re hearing, especially on social media. …Additionally, you need watermarks, authentication, provenance, so you know where things come from. In addition to that, you also need the ability to detect. So when you watch a video or hear audio, you must be able to ask: Was this manipulated? Is this automatically generated by AI? With these elements in place, I think there is hope for building a robust system. Without them, I think there are some big risks.”

Outlook for AI startups: “Some people have a perception that it’s difficult to launch a successful startup at this point because these large-scale models require huge amounts of computing power. Nothing could be further from the truth. I think we’re in a moment of disruption, but disruption creates a lot of opportunity.”

Advice for aspiring computer scientists: “One is to study the fundamentals. The basic ideas of mathematics, statistics, computer science — those have not changed. These are the building blocks on which we build modern technology. …

“The second thing I would say is follow your passion. A lot of times people are worried about the future, or trying to gamble on the future: ‘To get that job, you need to study this, you need to do that.’ You are young and the world is changing rapidly. Follow your passion, enjoy the process of education, learn what you need to learn, enjoy things, and these things will take care of themselves.”