Image Credits: Microsoft
Keeping up with an industry as rapidly changing as AI is a challenge. So until AI can do it for you, here’s a quick recap of recent stories in the world of machine learning, as well as notable research and experiments that we couldn’t cover on our own.
This week in the AI space, Microsoft announced a new standard PC keyboard layout with a “Copilot” key. You heard right. From now on, Windows machines will replace the right Control key with a dedicated key to launch Copilot, Microsoft’s AI-powered assistant.
We imagine this move is intended to signal the seriousness of Microsoft’s investment in the race for consumer (and by extension, business) AI supremacy. This is the first time in nearly 30 years that Microsoft has changed the Windows keyboard layout. Laptops and keyboards with Copilot keys are expected to ship as early as late February.
But is it much ado about nothing? Do Windows users really want AI shortcuts, or is this just Microsoft putting its stamp on the AI era?
Microsoft has revealed that it is bringing “Copilot” functionality to nearly all of its products, new and old. With flashy keynotes, sophisticated demos, and now a dedicated AI key, the company is putting a spotlight on its AI technology and betting on it to drive demand.
That demand isn’t certain, though. To be fair, several vendors have managed to turn AI into viral hits. Take OpenAI, the creator of ChatGPT. Generative art platform Midjourney, too, is reportedly profitable and has yet to take in a dime of outside capital.
They’re the exceptions, however. Most vendors have been weighed down by the costs of training and running cutting-edge AI models and have had to raise increasingly large amounts of capital to survive. Anthropic is a prime example: its latest round raised $750 million, bringing its total raised to more than $8 billion.
Microsoft, along with chip partners AMD and Intel, expects, likely correctly, that AI processing will increasingly move from expensive data centers to local silicon, commoditizing AI in the process. Intel’s new lineup of consumer chips includes cores custom-designed to run AI workloads, and new data center chips, including Microsoft’s own, could make training models cheaper than it is today.
But there are no guarantees. The real test will be whether Windows users and business customers, who have been bombarded with what amounts to advertising for Copilot, are willing to shell out big bucks for the technology. If they aren’t, it might not be long before Microsoft has to redesign the Windows keyboard again.
Here are some other notable AI stories from the past few days.
- Copilot is now on mobile: In more Copilot news, Microsoft has quietly brought a Copilot client to Android and iOS, as well as iPadOS.
- GPT Store: OpenAI announced plans to launch a store for GPTs, custom apps built on its text-generating AI models (such as GPT-4), within the next week. The GPT Store was first announced last year during DevDay, OpenAI’s inaugural developer conference, but its launch was pushed back in December, almost certainly because of the leadership shake-up that took place in November, shortly after the initial announcement.
- OpenAI shrinks regulatory risk: In other OpenAI news, the startup is looking to reduce its regulatory risk in the EU by routing much of its overseas business through an Irish entity. Natasha writes that the move will reduce the ability of some of the region’s privacy watchdogs to act unilaterally on concerns.
- Robot training: Google’s DeepMind Robotics team is exploring ways to help robots understand exactly what humans want from them, Brian writes. The team’s new system can manage a fleet of robots working together and suggest tasks the robots’ hardware can actually accomplish.
- New Intel company: Intel has spun out a new platform company, Articul8 AI, with backing from Boca Raton, Florida-based asset manager and investor DigitalBridge. As an Intel spokesperson explains, Articul8’s platform “provides AI capabilities that keep customer data, training, and inference within the enterprise security perimeter.” This is an attractive prospect for customers in highly regulated industries such as healthcare and financial services.
- The shadowy fishing industry, exposed: Satellite imagery and machine learning are providing new, more detailed information about the maritime industry, particularly the number and activity of fishing and transport vessels at sea. It turns out there are far more of them than publicly available data suggests, according to a new study published in Nature by the Global Fishing Watch team and collaborating universities.
- AI-powered search: Perplexity AI, a platform that applies AI to web search, has raised $73.6 million in a funding round valuing the company at $520 million. Unlike traditional search engines, Perplexity offers a chatbot-like interface where users can ask questions in natural language (e.g., “Do we burn calories while sleeping?” or “Which country gets the fewest visitors?”).
- Clinical notes, automatically generated: In more funding news, Paris-based startup Nabla raised an impressive $24 million. The company, which has a partnership with The Permanente Medical Group, a division of U.S. healthcare giant Kaiser Permanente, is building an “AI co-pilot” that automatically takes notes and generates medical reports for doctors and other clinical staff.
More machine learning
You may remember some interesting work from last year showing that small changes to an image can cause a machine learning model to misidentify it, for instance mistaking a photo of a dog for a photo of a car. This is done by adding “perturbations”, tiny changes to the image’s pixels, in patterns that only the model can perceive. Or at least, patterns that were thought to be perceptible only to the model.
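For the curious, here’s a minimal sketch of the classic fast gradient sign method, one common way such perturbations are generated. It’s a generic illustration, not the specific procedure used in the work discussed here, and `model`, `image`, and `label` stand in for any PyTorch classifier and its input.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=2 / 255):
    """Fast gradient sign method: nudge each pixel slightly in the
    direction that most increases the classifier's loss. The change is
    nearly invisible to people but can flip the model's prediction.

    Assumes `model` is a PyTorch image classifier, `image` is a
    (1, C, H, W) tensor with values in [0, 1], and `label` is the
    true class index.
    """
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([label]))
    loss.backward()
    # Step each pixel by epsilon in the direction of the loss gradient's sign.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```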
Experiments by Google DeepMind researchers show that when a photo of a flower is perturbed to make it look more cat-like to an AI, people are also more likely to describe the image as more cat-like, even though it doesn’t obviously look any more like a cat. The same goes for other common objects, such as trucks and chairs.
Why? How? The researchers don’t really know, and the participants all felt like they were just picking at random (in fact, the effect is reliable, though barely above chance). It seems we’re more perceptive than we think, and that has implications for safety and other measures, because it suggests subliminal signals could propagate through images without anyone noticing.
Another interesting experiment on human cognition was published this week by MIT, using machine learning to help elucidate the brain’s language-understanding system. Basically, simple sentences like “I walked to the beach” require little brain power to decipher, while complex or confusing ones, like “in that aristocracy they effect a gruesome revolution,” elicit more widespread activation as measured by fMRI.
The researchers compared activation readings from people reading a variety of such sentences with the activations the same sentences produced in a large language model, then trained a second model that learns how the two activation patterns correspond. That model was able to predict whether new sentences would tax human cognition. It may sound a bit esoteric, but trust us, it’s very interesting.
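As a rough illustration of the general recipe (not MIT’s actual pipeline), you could represent each sentence by a language model’s internal activations and fit a simple regression mapping those features to measured activation scores. The sentences and fMRI values below are hypothetical stand-ins.

```python
import numpy as np
import torch
from sklearn.linear_model import Ridge
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModel.from_pretrained("gpt2").eval()

def sentence_features(sentence: str) -> np.ndarray:
    """Mean-pool the LM's final hidden layer over the sentence's tokens."""
    with torch.no_grad():
        out = lm(**tok(sentence, return_tensors="pt"))
    return out.last_hidden_state.mean(dim=1).squeeze(0).numpy()

# Hypothetical paired data: sentences and the average brain activation
# (arbitrary units) each one elicited in the fMRI readings.
sentences = [
    "I walked to the beach.",
    "In that aristocracy they effect a gruesome revolution.",
]
fmri_scores = np.array([0.2, 0.9])

# The "second model": a regression from LM activations to brain activation.
X = np.stack([sentence_features(s) for s in sentences])
mapper = Ridge(alpha=1.0).fit(X, fmri_scores)

# Predict how taxing a new sentence is likely to be.
print(mapper.predict(sentence_features("The cat sat quietly.")[None, :]))
```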
Whether machine learning can mimic human cognition in more complex settings, such as interacting with computer interfaces, is still an open question, but there’s plenty of research worth watching. This week that includes SeeAct, a system created by researchers at Ohio State that works by painstakingly grounding an LLM’s interpretations of possible actions in real examples.
Basically, when you ask a system like GPT-4V to make a reservation on a website, it understands what the task is and that it should click the “Make a reservation” button, but it doesn’t actually know how to do it. By improving how the model perceives the interface, using explicit labels and knowledge of the world, the researchers got much better results, even if the agent still only succeeds a fraction of the time. These agent models have a long way to go, but expect plenty of big claims this year. You heard it here first.
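To make the grounding idea concrete, here’s a hedged sketch of one way it can work: enumerate the page’s interactive elements, present them to the model as a labeled menu, and map its one-label answer back to a clickable element. The `Element` class and `ask_llm` callable are hypothetical placeholders, not SeeAct’s actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class Element:
    label: str       # e.g. "[3]"
    role: str        # e.g. "button"
    text: str        # visible text
    selector: str    # CSS selector used to actually click it

def build_prompt(task: str, elements: list[Element]) -> str:
    """Show the model an explicit, labeled menu of possible actions."""
    menu = "\n".join(f"{e.label} {e.role}: {e.text!r}" for e in elements)
    return (
        f"Task: {task}\n"
        f"Interactive elements on the page:\n{menu}\n"
        "Reply with the single label of the element to act on next."
    )

def choose_element(task: str, elements: list[Element], ask_llm) -> Element:
    """`ask_llm` is a placeholder for any text-in/text-out LLM call."""
    answer = ask_llm(build_prompt(task, elements)).strip()
    by_label = {e.label: e for e in elements}
    # Fall back to the first element if the model answers off-menu.
    return by_label.get(answer, elements[0])
```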
Then check out this interesting solution to a problem you probably didn’t know existed but that makes perfect sense. Self-driving ships are a promising area of automation, but in rough seas it’s difficult to keep a vessel on course. GPS and gyroscopes can only do so much, visibility may be poor, and, more importantly, the systems that manage routing aren’t very sophisticated. Get it wrong and a ship can miss its target or take a major detour, wasting fuel, which is a big problem if the vessel is battery-powered. We’d never thought of that!
Korea Oceanographic University (another thing we learned about today) proposes a more robust pathfinding model built on simulating ship motion with a computational fluid dynamics model. The researchers argue that a deeper understanding of wave action and its impact on hulls and propulsion could significantly improve the efficiency and safety of autonomous maritime transport. It might even make sense on human-piloted vessels, where the captain doesn’t really know the optimal angle of attack for a particular squall or wave shape.
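As a back-of-the-envelope illustration of why wave awareness matters for routing (our sketch, not the paper’s model), imagine planning over a grid where each step’s fuel cost depends on the heading relative to the local waves. A CFD-derived ship-motion model would supply the real cost function, stubbed out here as `added_resistance`.

```python
import heapq
import math

def added_resistance(heading, wave_dir, wave_height):
    """Toy stand-in for a CFD-derived cost: head seas cost more fuel."""
    relative = math.cos(heading - wave_dir)  # 1 = into the waves, -1 = following seas
    return 1.0 + 0.5 * wave_height * (1.0 + relative)

def plan_route(grid_waves, start, goal):
    """Dijkstra over a grid where grid_waves[y][x] = (wave_dir_rad, wave_height_m).
    Assumes the goal is reachable."""
    h, w = len(grid_waves), len(grid_waves[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1), (1, -1), (-1, 1)]
    while pq:
        d, (x, y) = heapq.heappop(pq)
        if (x, y) == goal:
            break
        if d > dist.get((x, y), math.inf):
            continue
        for dx, dy in moves:
            nx, ny = x + dx, y + dy
            if not (0 <= nx < w and 0 <= ny < h):
                continue
            wave_dir, wave_height = grid_waves[ny][nx]
            heading = math.atan2(dy, dx)
            step = math.hypot(dx, dy) * added_resistance(heading, wave_dir, wave_height)
            if d + step < dist.get((nx, ny), math.inf):
                dist[(nx, ny)] = d + step
                prev[(nx, ny)] = (x, y)
                heapq.heappush(pq, (d + step, (nx, ny)))
    # Walk back from the goal to reconstruct the chosen route.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```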
Finally, if you want a good summary of the past year’s big advances in computer science (which in 2023 overlapped heavily with ML research), check out Quanta’s excellent year-in-review.