From School Bans to Sam Altman Drama: AI's Biggest Developments

The artificial intelligence (AI) industry got off to a strong start in 2023, as schools and universities grappled with students using OpenAI’s ChatGPT to help with homework and essays.

Less than a week into the year, New York City public schools banned ChatGPT, which had been released to great fanfare just weeks earlier — a move that set the stage for much of the discussion around generative AI in 2023.

As the buzz grew around Microsoft-backed ChatGPT and rivals like Google’s Bard AI, Baidu’s Ernie chatbot, and Meta’s LLaMA, so did questions about how to make use of a powerful new technology that had become accessible to the general public overnight — and growing doubts about whether it should be used at all.

AI-generated images, music, video, and computer code created by platforms such as Stability AI’s Stable Diffusion and OpenAI’s DALL-E opened up exciting new possibilities, while also stoking concerns about misinformation, targeted harassment, and copyright infringement.

In March, a group of more than 1,000 signatories, including Apple co-founder Steve Wozniak and billionaire tech entrepreneur Elon Musk, signed an open letter calling for a temporary halt to the development of AI.

Although no moratorium occurred, governments and regulators began rolling out new laws and regulations to put guardrails on the development and use of AI.

Although many questions about AI remain unresolved heading into the new year, 2023 will be remembered as a major milestone in the field’s history.

Drama at OpenAI

After ChatGPT gained more than 100 million users in 2023, its developer OpenAI returned to the headlines in November when the board abruptly fired CEO Sam Altman, saying he had not been “consistently candid in his communications with the board.”

The Silicon Valley startup did not provide details on why Altman was fired, but his ouster is widely believed to have stemmed from an internal ideological conflict between safety and commercial concerns.

Altman’s firing set off five days of highly public drama, during which OpenAI staff threatened to quit en masse and Altman was briefly hired by Microsoft, before he was reinstated and the board was replaced.

As OpenAI tries to move past the drama, the questions raised during the turmoil still apply to the industry as a whole — including concerns that AI could become too powerful, develop too quickly, or fall into the wrong hands, and how to balance the pursuit of profit with the safe launch of new products.

Sam Altman was briefly fired from OpenAI [File: Lucy Nicholson/Reuters]

In a Pew Research Center survey of 305 developers, policymakers, and academics conducted in July, 79 percent of respondents said they were either more concerned than excited about the future of AI, or equally concerned and excited.

Despite AI’s potential to transform sectors from healthcare to education to media, respondents expressed concerns about risks such as mass surveillance, government and police harassment, job loss and social isolation.

Sean McGregor, founder of the Responsible AI Collaborative, said 2023 highlighted the hopes and fears surrounding generative AI, as well as the deep philosophical divisions within the field.

“Most encouragingly, it is shining a light on the social decisions that engineers make, though it is concerning that many of our colleagues in the technology industry seem to view such attention negatively,” McGregor told Al Jazeera, adding that AI should “address the needs of the people most affected.”

“I still feel largely positive, but the next few decades will be difficult as we realise that the AI safety debate is just a flashy technological version of an old societal challenge,” he said.

Future legislation

In December, European Union policymakers agreed on comprehensive legislation to regulate the future of AI, capping a year of efforts by national governments and international organizations such as the United Nations and the G7.

Key concerns include the sources of information used to train AI algorithms, much of which is collected from the internet without regard to privacy, bias, accuracy, or copyright.

The EU bill would require developers to disclose their training data and comply with local laws, place restrictions on certain types of use, and open the door to user complaints.

Similar efforts are underway in the United States, where President Joe Biden issued a comprehensive executive order on AI standards in October, and in the United Kingdom, which hosted an AI Safety Summit in November that brought together 27 countries and industry stakeholders.

China is also taking steps to regulate the future of AI, announcing interim rules for developers requiring them to submit a “security assessment” before releasing their products to the public.

The guidelines also restrict AI training data and prohibit content deemed to “advocate terrorism,” “violate social stability,” “overthrow the socialist regime,” or “damage the country’s image.”

On the global stage, 2023 also saw the signing of the first preliminary international agreement on AI safety, with 20 signatories including the US, the UK, Germany, Italy, Poland, Estonia, the Czech Republic, Singapore, Nigeria, Israel, and Chile.

AI and the future of work

Questions about the future of AI are also pervasive in the private sector, where its use has already prompted class action lawsuits in the US from writers, artists, and media outlets alleging copyright infringement.

Concerns about AI replacing jobs were the driving force behind a months-long strike in Hollywood by the Screen Actors Guild and Writers Guild.

Goldman Sachs predicted in March that generative AI could replace 300 million jobs through automation and affect at least two-thirds of current jobs in Europe and the United States in some way, though it could also drive a boom in productivity.

Some are trying to temper the more catastrophic predictions.

The United Nations’ labour agency, the International Labour Organization, said in August that generative AI is more likely to augment than replace most jobs, listing office work as the occupation most at risk.

The year of “deepfakes”?

2024 will be a big test for generative AI, with new apps coming to market and new laws coming into effect against a backdrop of global political turmoil.

Over the next 12 months, more than 2 billion people will vote in elections across a record 40 countries, including geopolitical hotspots such as the United States, India, Indonesia, Pakistan, Venezuela, South Sudan and Taiwan.

Online misinformation campaigns are already a regular feature of many election cycles, and AI-generated content is expected to make matters worse as fake information becomes increasingly difficult to distinguish from the real thing and easier to replicate at scale.

AI-generated content, including “deepfake” images, is already being used to stoke anger and confusion in conflict zones such as Ukraine and Gaza, and has even featured in hotly contested election campaigns like the US presidential race.

Last month, Meta told advertisers it would ban political ads created using AI on Facebook and Instagram, while YouTube announced it would require creators to label realistic-looking AI-generated content.