Women in AI: Irene Solaiman, Head of Global Policy, Hugging Face

To give female academics and others focused on AI their well-deserved and overdue time in the spotlight, TechCrunch is launching a series of interviews highlighting notable women who have contributed to the AI revolution. As the AI boom continues, we’ll publish several pieces throughout the year, highlighting key work that often goes unrecognized. Read more profiles here.

Irene Solaiman began her career in AI as a researcher and public policy manager at OpenAI, where she led a new approach to the release of GPT-2, a predecessor of ChatGPT. After nearly a year as an AI policy manager at Zillow, she joined Hugging Face as head of global policy. Her responsibilities there range from building and leading the company’s AI policy globally to conducting sociotechnical research.

Solaiman also advises the Institute of Electrical and Electronics Engineers (IEEE), the professional association for electronics engineering, on AI issues, and is a recognized AI expert at the Organisation for Economic Co-operation and Development (OECD).

Irene Solaiman, Head of Global Policy, Hugging Face

In short, how did you get started in AI? What attracted you to the field?

Completely non-linear career paths are common in AI. My interest began in the same way that many teenagers with awkward social skills find their passion: through science fiction media. I originally studied human rights policy and then took a course in computer science because I saw AI as a way to address human rights and build a better future. The ability to conduct technical research and lead policy in an area with many unanswered questions and unexplored paths continues to make my job exciting.

What work (in the AI field) are you most proud of?

I am most proud of when my expertise resonates with people across the AI field, especially my writing on release considerations in the complex landscape of AI system releases and openness. Seeing my paper on an AI release gradient frame technical deployment prompt discussions among scientists, and be used in government reports, is affirming and a good sign that I’m working in the right direction. Personally, some of the work I’m most excited about is on cultural value alignment, which is dedicated to ensuring that systems work best for the cultures in which they are deployed. With my wonderful co-author and now dear friend, Christy Dennison, working on a Process for Adapting Language Models to Society was a heartfelt project (and many hours of debugging) that has shaped the safety and alignment work done today.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

I have found, and am still finding, my people: from working with amazing company leadership who care deeply about the same issues I prioritize, to great research co-authors with whom I can start every working session with a mini therapy session. Affinity groups are hugely helpful for building community and sharing tips. Intersectionality is important to highlight here; my communities of Muslim and BIPOC researchers are continually inspiring.

What advice would you give to women looking to enter the AI field?

Have a support system whose success is your success. In youth terms, I believe this is a “girl’s girl.” The same women and allies I entered this field with are my favorite coffee dates and late-night panicked calls ahead of a deadline. One of the best pieces of career advice I’ve ever read was from Arvind Narayanan on the platform formerly known as Twitter, establishing the “Liam Neeson Principle”: not being the smartest of them all, but having a particular set of skills.

What are the most pressing issues facing AI as it evolves?

Since the most pressing issues themselves evolve, the meta answer is international coordination for safer systems for everyone. People who use and are affected by systems, even within the same country, have different preferences and ideas about what is safest for them. And the issues that arise depend not only on how AI evolves, but also on the environment in which it is deployed; safety priorities and definitions of capability vary by region. For example, a more highly digitized economy faces a greater threat of cyberattacks on critical infrastructure.

What issues should AI users be aware of?

Technical solutions rarely, if ever, address risks and harms holistically. While there are steps users can take to increase their AI literacy, it is important to invest in a multitude of safeguards for risks as they evolve. For example, I’m excited about more research into watermarking as a technical tool. We also need coordinated guidance from policymakers, especially on the distribution of generated content on social media platforms.

What is the best way to build AI responsibly?

With the people affected, and by constantly reevaluating our methods for assessing and implementing safety techniques. Both beneficial applications and potential harms constantly evolve and require iterative feedback. The means by which we improve AI safety should be examined collectively as a field. The most popular evaluations for models in 2024 are much more robust than those I was running in 2019. Today, I’m much more bullish about technical evaluations than about red-teaming. I find human evaluations extremely useful, but as more evidence arises of the mental burden and disparate costs of human feedback, I’m increasingly bullish about standardizing evaluations.

How can investors more effectively promote responsible AI?

They already are! I’m glad to see many investors and venture capital firms actively engaging in safety and policy conversations, including through open letters and congressional testimony. I’m eager to hear more from investors’ expertise on what stimulates small businesses across sectors, especially as we see more AI use in fields outside the core technology industries.