Mon. Dec 23rd, 2024
AI Of The Week: AI Ethics Continues To Be Left Behind

Keeping up with an industry as rapidly changing as AI is a challenge. So until AI can do it for you, here’s a quick recap of recent stories in the world of machine learning, as well as notable research and experiments that we couldn’t cover on our own.

The news cycle has finally (finally!) quieted down a bit in the AI industry ahead of the holiday season. But that's not to say there was a shortage of things to write about, which was both a blessing and a curse for this sleep-deprived reporter.

A particular headline from the Associated Press caught my eye this morning: "AI image generator trained on explicit photos of children." The gist of the story is that LAION, a dataset used to train generative AI image models such as Stable Diffusion and Imagen, contains thousands of images of suspected child sexual abuse. The Stanford Internet Observatory, a Stanford-based watchdog group, worked with anti-abuse charities to identify the illegal material and report the links to law enforcement.

Now, LAION, a non-profit organization, has removed the training data and has pledged to remove any offending material before republishing it. However, this incident highlights how little consideration is being given to generative AI products as competitive pressures increase.

Thanks to the proliferation of no-code AI modeling tools, it has become incredibly easy to train generative AI on any dataset imaginable. Bringing such a model to market is a boon for startups and tech giants alike. However, lower barriers to entry create a temptation to abandon ethics and accelerate market entry.

Ethics is difficult. That can’t be denied. To take this week’s example, combing through the thousands of problematic images in LAION doesn’t happen overnight. And ideally, developing AI ethically requires collaboration with all relevant stakeholders, including organizations representing groups that are often marginalized and adversely affected by AI systems.

There are many examples in the industry of decisions to release AI being made with shareholders, rather than ethicists, in mind. Take, for example, Bing Chat (now Microsoft Copilot), Microsoft's AI-powered chatbot on Bing, which at launch compared a journalist to Hitler and insulted their appearance. As of October, ChatGPT and Bard, Google's ChatGPT competitor, were still giving outdated and racist medical advice. And the latest version of OpenAI's image generator, DALL-E, shows evidence of Anglocentrism.

Suffice it to say that harm is being done in the pursuit of AI superiority, or at least Wall Street's notion of AI superiority. Perhaps there is some hope on the horizon with the passage of the EU's AI regulations, which impose fines for failing to adhere to certain AI guardrails. But the road ahead is indeed a long one.

Here are some other notable AI stories from the past few days.

AI predictions for 2024: Devin shares his predictions for AI in 2024, including how AI will impact the US primary election and what’s next for OpenAI.

Against pseudo-human acts: Devin has also written an article suggesting that AI be prohibited from imitating human behavior.

Create music with Microsoft Copilot: Copilot, Microsoft's AI-powered chatbot, can now compose songs through an integration with GenAI music app Suno.

Facial recognition banned at Rite Aid: U.S. drugstore giant Rite Aid has been banned from using facial recognition technology for five years after the Federal Trade Commission found that its "reckless use of facial surveillance systems" humiliated customers and "compromised confidential information."

The EU offers up compute resources: The EU is expanding a plan, originally announced in September and kicked off last month, to support homegrown AI startups by providing them with access to processing power for model training on the bloc's supercomputers.

OpenAI gives its board new powers: OpenAI is expanding its internal safety processes to fend off the threat of harmful AI. A new "safety advisory group" will sit above the technical teams and make recommendations to leadership, and the board has been granted veto power.

Q&A with UC Berkeley's Ken Goldberg: In our regular Actuator newsletter, Brian speaks with Ken Goldberg, a professor at the University of California, Berkeley, startup founder, and accomplished roboticist, about humanoid robots and broader trends in the robotics industry.

CIOs take it slow with gen AI: Ron writes that, while CIOs are under pressure to deliver the kinds of experiences people see when they use ChatGPT online, most are taking a deliberate, cautious approach to adopting the technology in the enterprise.

News publishers sue Google over AI: A class action lawsuit filed by several news publishers accuses Google of "siphon[ing] off" news content through anticompetitive means, in part via AI technologies such as Google's Search Generative Experience (SGE) and its Bard chatbot.

OpenAI inks deal with Axel Springer: Speaking of publishers, OpenAI has signed an agreement with Axel Springer, the Berlin-based owner of publications such as Business Insider and Politico, to train its generative AI models on the publisher's content and to add recently published Axel Springer articles to ChatGPT.

Google brings Gemini to more places: Google has integrated its Gemini models with more of its products and services, including the Vertex AI managed AI development platform and AI Studio, the company's tool for creating AI-based chatbots and other experiences along those lines.

More machine learning

Certainly the wildest (and easiest to misinterpret) research of the last week or two has to be life2vec, a Danish study that uses countless data points in a person's life to predict what that person is like and when they will die. Roughly!

Visualization of life2vec mapping of various related life concepts and events.

The study doesn't claim oracular accuracy (say that three times fast, by the way), but rather aims to show that if our lives are the sum of our experiences, those paths can be extrapolated somewhat using current machine learning techniques. Between upbringing, education, work, health, hobbies, and other metrics, one can reasonably predict not just whether a person is, say, introverted or extroverted, but also how these factors may affect life expectancy. We're not quite at "pre-crime" levels yet, but you can bet insurance companies can't wait to license this work.

Another big claim comes from CMU scientists, who created a system called Coscientist, an LLM-based assistant for researchers that can perform much of the lab drudgery autonomously. It is currently limited to certain domains of chemistry, but just like scientists, models like these will be specialists.

Lead researcher Gabe Gomes told Nature: "The moment I saw a non-organic intelligence be able to autonomously plan, design and execute a chemical reaction that was invented by humans, that was amazing. It was a 'holy crap' moment." Basically, it uses an LLM like GPT-4, fine-tuned on chemistry documentation, to identify common reactions, reagents, and procedures and carry them out. So there's no need to tell a lab tech to synthesize four batches of some catalyst: the AI can do it, and you don't even need to hold its hand.
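For the curious, the core pattern is easy to picture. Below is a minimal sketch, not CMU's actual code, of the plan-then-execute loop such an assistant implies: an LLM turns a synthesis goal into discrete steps, and each step is dispatched to stand-in instrument functions. The model name, prompt format, and instrument functions are all assumptions for illustration.

```python
# A minimal sketch of an LLM plan-then-execute loop (illustrative only).
# Assumes the openai Python SDK (>=1.0) and an OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

# Stand-ins for lab hardware a real system would control.
def heat(params):     print(f"[heater]  {params}")
def stir(params):     print(f"[stirrer] {params}")
def dispense(params): print(f"[pump]    {params}")

INSTRUMENTS = {"heat": heat, "stir": stir, "dispense": dispense}

def plan_and_run(goal: str) -> None:
    """Ask the LLM for a step-by-step plan as JSON, then execute each step."""
    prompt = (
        "You control a lab with instruments: heat, stir, dispense. "
        'Return only a JSON list of steps, each {"instrument": ..., "params": ...}, '
        f"to accomplish: {goal}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    # A real system would validate the plan before touching hardware.
    steps = json.loads(reply.choices[0].message.content)
    for step in steps:
        INSTRUMENTS[step["instrument"]](step["params"])

plan_and_run("Prepare four batches of a copper catalyst solution.")
```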

Google's AI researchers had a productive week as well, diving into a few interesting frontier areas. FunSearch may sound like Google for kids, but it is actually short for "function search," and, like Coscientist, it can help make mathematical discoveries. Interestingly, to prevent hallucinations, it (like everything else these days) uses a matched pair of AI models, much like the "old" GAN architecture: one theorizes, the other evaluates.

FunSearch isn't going to make any groundbreaking new discoveries, but it can take what's already out there and refine or reapply it in new places, so a function used in one domain but unknown in another might end up improving an industry-standard algorithm.
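To make that theorize-and-evaluate split concrete, here is a toy sketch, not Google's actual code: hand-written bin-packing heuristics stand in for LLM-proposed functions, and a deterministic evaluator scores each candidate, keeping whichever measurably works best.

```python
# Toy "propose / evaluate" loop in the spirit of FunSearch (illustrative only).
# The proposer (an LLM in the real system; fixed candidates here) suggests
# heuristics; the evaluator scores them on an online bin-packing task.
import random

CAPACITY = 1.0

def evaluate(heuristic, n_items=200, seed=0):
    """Score a bin-choice heuristic: fewer bins used -> higher score."""
    rng = random.Random(seed)
    bins = []
    for _ in range(n_items):
        item = rng.uniform(0.1, 0.7)
        fits = [i for i, load in enumerate(bins) if load + item <= CAPACITY]
        if fits:
            best = max(fits, key=lambda i: heuristic(item, bins[i]))
            bins[best] += item
        else:
            bins.append(item)
    return -len(bins)  # negative bin count, so larger is better

# Stand-ins for proposed candidate functions (item size, current bin load) -> priority.
candidates = {
    "first_fit": lambda item, load: 0.0,                        # any open bin
    "best_fit":  lambda item, load: load,                       # fullest bin that fits
    "worst_fit": lambda item, load: -load,                      # emptiest bin
    "tight_fit": lambda item, load: -(CAPACITY - load - item),  # minimize leftover space
}

# The evaluator keeps only what measurably works; a real proposer would then be
# prompted with the best programs so far and asked for new variations.
scores = {name: evaluate(fn) for name, fn in candidates.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} bins used: {-score}")
```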

StyleDrop is a handy tool for anyone looking to replicate a specific style in generated images. The trouble, as the researchers see it, is that if you have a style in mind (say, "pastels") and describe it in words, the model has too many sub-styles of "pastels" to draw from, so the results are unpredictable. StyleDrop lets you provide an example of the style you have in mind, and the model bases its work on that; it's basically super-efficient fine-tuning.

Image credits: Google

The blog post and paper show that it's fairly robust, applying the style of any image, whether a photo, painting, cityscape, or cat portrait, to other types of images, and even to the alphabet (which is notoriously difficult for some reason).

Google is also playing the generative video game with VideoPoet, which uses an LLM base (like everything else these days... what else are you going to use?) to perform a range of video tasks: turning text or images into video, extending or stylizing existing video, and so on. As every project here makes clear, the challenge is not simply producing a series of images that relate to one another, but making them coherent over longer periods (say, more than a second) and through large movements and changes.

Image credits: Google

VideoPoet seems to have moved the ball forward, but as you can see, the results are still pretty weird. That's how these things progress, though: first they're inadequate, then they're weird, then they're uncanny. Presumably they leave uncanny behind at some point, but no one has really gotten there yet.

On the practical side, Swiss researchers are applying AI models to snow measurement. Normally you would rely on weather stations, but these can be few and far between, and we have all this lovely satellite data, right? So the ETHZ team took public satellite imagery from the Sentinel-2 constellation, but as lead researcher Konrad Schindler says, "You can't immediately tell how deep the snow is just by looking at the white area on a satellite image."

So they fed in country-wide terrain data from their federal office of topography (the Swiss equivalent of the USGS) and trained the system to estimate snow depth based not just on the white bits in the imagery, but also on ground-truth data and tendencies such as melt patterns. The resulting technology is being commercialized by ExoLabs, which I'm about to contact to learn more.
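In spirit, that pipeline boils down to supervised regression over per-pixel features. Below is a minimal sketch under that assumption, with randomly generated stand-ins for Sentinel-2 reflectance, terrain data, and station measurements; it is not ETHZ's actual code.

```python
# Minimal sketch: combine satellite reflectance with terrain features and fit a
# regressor against ground-truth snow depth. All data here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

# Stand-in per-pixel features: optical/near-IR bands plus topography.
reflectance = rng.uniform(0, 1, size=(n, 4))     # placeholder Sentinel-2 bands
elevation = rng.uniform(200, 4000, size=(n, 1))  # meters above sea level
slope = rng.uniform(0, 45, size=(n, 1))          # degrees
X = np.hstack([reflectance, elevation, slope])

# Synthetic "ground truth" snow depth loosely tied to elevation and brightness,
# standing in for the station measurements a real system would train on.
depth = 0.002 * elevation[:, 0] + 2.0 * reflectance[:, 0] + rng.normal(0, 0.3, n)
depth = np.clip(depth, 0, None)

X_train, X_test, y_train, y_test = train_test_split(X, depth, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out pixels:", round(model.score(X_test, y_test), 3))
```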

A word of caution from Stanford, though: as powerful as applications like the ones above are, note that neither involves much in the way of human bias. When it comes to health, that suddenly becomes a big problem, and health is one area where a ton of AI tools are being tested. Stanford researchers have shown that AI models propagate "old medical racial tropes." GPT-4 doesn't know whether something is true or not, so it can and does parrot old, disproved claims about groups, such as that Black people have lower lung capacity. Nope! Stay on your toes when using any kind of AI model in health and medicine.

Finally, here's a short story written by Bard, with a shooting script and prompts, rendered by VideoPoet. Watch out, Pixar!