Mon. Dec 23rd, 2024
AI of the Week: Do Shoppers Actually Want Amazon's GenAI?

Keeping up with an industry as rapidly changing as AI is a challenge. So until AI can do it for you, here’s a quick recap of recent stories in the world of machine learning, as well as notable research and experiments that we couldn’t cover on our own.

This week, Amazon announced Rufus, an AI-powered shopping assistant trained on the e-commerce giant’s product catalog and information from around the web. Rufus lives within Amazon’s mobile app and helps you find products, compare them, and get recommendations on what to buy.

Shoppers can do extensive research at the beginning of a shopping trip, asking questions such as “What should I consider when buying running shoes?” or comparisons such as “What is the difference between trail running shoes and road running shoes?” “…Rufus significantly improves how customers find and discover the best products to meet their needs,” Amazon wrote in a blog post.

That’s all great. But my question is: who is actually asking for this? Really?

I’m not convinced that GenAI, especially in chatbot form, is a technology the general public is interested in or even thinking about, and research supports this. Last August, Pew Research Center found that among Americans who had heard of OpenAI’s GenAI chatbot ChatGPT (18% of adults), only 26% had tried it. Usage varies by age, of course, with younger people (under 50) more likely than older people to report having used it. But the fact remains that the majority of people don’t know about, or have no interest in using, what is probably the most popular GenAI product.

GenAI has well-known problems, including a tendency to fabricate facts, infringe on copyright, and spout bias and toxicity. Amazon’s previous attempt at a GenAI chatbot, Amazon Q, struggled mightily, with sensitive information leaked within its first day of release. But I would argue that the biggest problem with GenAI right now is that, at least from a consumer perspective, there are few universally compelling reasons to use it.

Indeed, GenAI tools like Rufus can help with specific, narrow tasks: shopping by occasion (e.g., finding winter clothes), comparing product categories (e.g., the difference between lip gloss and oil), and surfacing top recommendations (e.g., gifts for Valentine’s Day). But does that meet the needs of most shoppers? A recent poll from e-commerce software startup Namogoo suggests it doesn’t.

Namogoo asked hundreds of consumers about their online shopping needs and frustrations and found that product images are the most important contributor to a good e-commerce experience, followed by product reviews and descriptions. That makes sense. Respondents ranked search fourth and “simple navigation” fifth; remembering preferences, information, and shopping history came in second to last.

The takeaway is that people usually shop with a product already in mind; search is an afterthought. Perhaps Rufus will change that calculus. I’m inclined to think not, especially if the rollout is rocky (and there’s a good chance it will be, given the reception of Amazon’s other GenAI shopping experiments), but stranger things have happened.

Here are some other notable AI stories from the past few days.

  • Google Maps experiments with GenAI: Google Maps is introducing GenAI features to help you discover new places. The feature leverages large language models (LLMs) to analyze the 250+ million locations on Google Maps and contributions from 300+ million Local Guides, then surfaces suggestions based on what you’re looking for.
  • GenAI tools for music and more: In other Google news, the tech giant released GenAI tools for creating music, lyrics, and images, and brought one of its more capable LLMs, Gemini Pro, to Bard chatbot users around the world.
  • New open AI models: The Allen Institute for AI, a nonprofit AI research institute founded by the late Microsoft co-founder Paul Allen, has released several GenAI language models that it claims are more “open” than others, and, importantly, licensed in a way that lets developers freely use them for training, experimentation, and even commercialization.
  • FCC moves to ban AI calls: The FCC has ruled that the use of voice cloning technology in robocalls is fundamentally illegal, proposing to make it easier to prosecute operators of these scams.
  • Shopify launches image editor: Shopify is releasing a GenAI media editor to enhance product images. Sellers can choose from seven styles or enter a prompt to generate a new background.
  • GPTs in chat: OpenAI is driving adoption of GPTs, third-party apps that leverage its AI models, by enabling ChatGPT users to invoke them in any chat. Paid ChatGPT users can bring a GPT into the conversation by typing “@” and selecting it from the list.
  • OpenAI partners with Common Sense: In an unrelated announcement, OpenAI said it is teaming up with Common Sense Media, a nonprofit organization that reviews and ranks the suitability of various media and technologies for children, to collaborate on AI guidelines and educational materials for parents, educators, and young people.
  • Autonomous browsing: The Browser Company, which makes Arc Browser, aims to build AI that surfs the web for you and bypasses search engines to get results, Ivan wrote.

More machine learning

Does an AI model know what is “normal” or “typical” for a given situation, medium, or utterance? In a sense, large language models are uniquely suited to identifying which patterns are most like other patterns. And that is indeed what researchers at Yale University found in a study of whether AI can identify the “typicality” of one thing among a group of others. For example, given 100 romance novels, which are the most and least “typical” given what the model has stored about the genre?

Interestingly (and frustratingly), Professors Balázs Kovács and Gaël Le Mens had been working on their own model, a variant of BERT, for years, and were just about to publish it when ChatGPT came along. In many ways it replicated exactly what they had been doing. “It’s okay to cry,” Le Mens said in a news release. The good news is that both the newer AI and their older, fine-tuned model suggest this kind of system really can identify what is typical and atypical within a dataset, a finding that could prove useful down the line. They point out that while ChatGPT supports their thesis in practice, its closed nature makes it difficult to work with scientifically.
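The write-up doesn’t spell out the mechanics, but one common way to operationalize “typicality” is to embed every item and score each one by how close it sits to the group’s centroid in embedding space. Here is a minimal sketch of that idea in Python; the embedding model and the cosine-to-centroid score are assumptions for illustration, not the authors’ exact method.

```python
# Minimal typicality sketch: score each text by cosine similarity to the
# centroid of its group's embeddings. Illustrative only; not the Yale
# authors' method, and the embedding model chosen here is an assumption.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

def typicality_ranking(texts):
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embs = model.encode(texts, normalize_embeddings=True)  # shape (n, d), unit-norm rows
    centroid = embs.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    scores = embs @ centroid                               # cosine similarity to the centroid
    order = np.argsort(-scores)                            # most "typical" first
    return [(texts[i], float(scores[i])) for i in order]

if __name__ == "__main__":
    novels = [
        "Their eyes met across a crowded ballroom in Regency London.",
        "The quarterly dragon audit was due by midnight, and the forms were on fire.",
        "She never expected to fall for her childhood rival at the reunion.",
    ]
    for text, score in typicality_ranking(novels):
        print(f"{score:.3f}  {text}")
```

On this view, the most “typical” romance novel is simply the one whose representation best matches the average of the genre, which is roughly the intuition the study probes.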

Scientists at the University of Pennsylvania took on another slippery concept to quantify: common sense. They asked thousands of people to rate how “commonsensical” statements such as “you get what you give” and “don’t eat food past its expiration date” were. Unsurprisingly, although patterns emerged, “few beliefs were recognized at the group level.”

“Our findings suggest that each person’s idea of common sense may be unique to them, making the concept less common than one might expect,” said co-lead author Mark Whiting. Why is this in an AI newsletter? Because, like nearly everything else, it turns out that something as “simple” as the common sense an AI is expected to eventually have is anything but simple. Quantifying it in this way, though, could help researchers and auditors determine how much common sense an AI has, or which groups and biases it aligns with.

Speaking of bias, many large language models are fairly indiscriminate about the information they take in, meaning that with the right prompts they may respond in ways that are offensive, inaccurate, or both. Latimer is a startup aiming to change that with a model intended to be more inclusive by design.

There aren’t many details about its approach, but Latimer says its model uses retrieval-augmented generation (thought to improve responses) along with exclusively licensed content and data sourced from many cultures not typically represented in these training sets. So when you ask it about something, the model won’t reach back to some 19th-century book to answer you. We’ll know more when Latimer releases further information.
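Latimer hasn’t shared implementation details, but retrieval-augmented generation generally means embedding a query, pulling the closest passages from a curated corpus, and handing them to the model as context so answers are grounded in those sources. Below is a bare-bones sketch of that pattern; embed() is a stand-in and the prompt format is invented, so treat it as an outline of the technique rather than anything Latimer actually runs.

```python
# Generic retrieval-augmented generation (RAG) outline, not Latimer's stack:
# retrieve the corpus passages most similar to a query, then build a prompt
# that asks the model to answer from those passages. embed() is a stub.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: deterministic pseudo-random vector per text. Replace with a real encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Return the k passages whose embeddings are closest to the query's."""
    q = embed(query)
    scores = [float(q @ embed(doc)) for doc in corpus]  # cosine similarity of unit vectors
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

def build_prompt(query: str, corpus: list[str], k: int = 3) -> str:
    """Assemble a context-grounded prompt; the actual LLM call is out of scope here."""
    context = "\n---\n".join(retrieve(query, corpus, k))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
```

The inclusivity angle lives in the corpus: swap in licensed material from underrepresented cultures, and the retrieved context, not just the base model’s training data, shapes the answer.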

Image credits: Purdue/Bedrich Benes

One thing AI models can definitely do, however, is grow trees. Fake trees, that is. Researchers at Purdue’s Institute for Digital Forestry (which is where I want to work, so give me a call) have created an extremely compact model that realistically simulates tree growth. This is one of those problems that looks easy but actually isn’t: sure, you can simulate tree growth well enough for a game or a movie, but what about serious scientific research? “AI seems to be here to stay, but so far we have mostly seen great success in modeling 3D geometries that have nothing to do with nature,” said lead author Bedrich Benes.

Their new model is only about a megabyte, which is extremely small for an AI system. Then again, DNA is even smaller and denser, and it encodes the entire tree from root to bud. Although the model still works in the abstract and is by no means a perfect simulation of nature, it shows that the complexity of tree growth can be encoded in a relatively simple model.

Finally, a robot developed by researchers at the University of Cambridge can read braille faster than a human, with 90% accuracy. Why, you ask? It isn’t actually intended for visually impaired people to use; the team decided this was an interesting and easily quantifiable task for testing the sensitivity and speed of robotic fingertips. If it can read braille just by skimming over it, that’s a good sign. You can learn more about this interesting approach here, or watch the video below.