Former National Security Agency director and retired Gen. Paul Nakasone will join the board of directors of AI company OpenAI. Announced Thursday afternoon, the appointment also places him on the board's Safety and Security Committee.
This high-profile addition is likely intended to satisfy critics who believe OpenAI is moving faster than is wise for its customers, and perhaps humanity, offering models and services without properly assessing or locking down risks.
Nakasone has decades of experience with the Army, Cyber Command and the NSA, so regardless of how you feel about the practices and decision-making of those organizations, he certainly can’t be accused of lacking expertise.
As OpenAI establishes itself as an AI provider not only to the tech industry but also to government, defense and large corporations, this kind of institutional knowledge will be valuable, both for the company itself and as a solace to anxious shareholders. (The connections he brings to state and military organizations will no doubt be welcome, too.)
“OpenAI’s dedication to its mission closely aligns with my own values and experience as a public servant,” Nakasone said in a press release.
That certainly seems true. Nakasone and the NSA recently defended the practice of buying data of questionable origin to feed their surveillance networks, arguing that there is no law against it. OpenAI, for its part, has simply taken, rather than bought, large amounts of data from the internet, arguing when it is caught that there is no law against that, either. The two seem aligned on the principle of asking forgiveness rather than permission.
The OpenAI release also states:
“His insights will also contribute to OpenAI’s efforts to better understand how AI can be used to strengthen cybersecurity by rapidly detecting and responding to cybersecurity threats. We believe that AI could have significant benefits in this area for many institutions that are prone to cyberattacks, such as hospitals, schools, and financial institutions.”
So this is a play for a new market as well.
Nakasone will join the board’s Safety and Security Committee, which is “responsible for making recommendations to the full board of directors on significant safety and security decisions regarding OpenAI’s projects and operations.” It remains to be seen what this newly created body will actually do and how it will operate: several senior staff working on safety (in the sense of AI risk) have left the company, and the committee itself is in the midst of a 90-day evaluation of the company’s processes and safeguards.