OpenAI and IBM chiefs voice concerns over the unchecked power of artificial intelligence.
OpenAI CEO Sam Altman and IBM’s Chief Privacy and Trust Officer Christina Montgomery made a clear case before US Senators on Tuesday, highlighting the urgency of establishing regulatory frameworks for artificial intelligence (AI) technologies.
The technologies raise serious questions surrounding ethics, law and national security, Altman told the Senate.
During the three-hour Senate Judiciary Subcommittee hearing, Altman acknowledged AI’s potential to address some of the world’s most pressing issues.
Yet, he also cautioned about AI’s profound capacity to disrupt society in unforeseeable ways, emphasising that stringent government regulations are essential to curb potential risks.
“My deepest fear is that the tech industry could cause significant harm to the world. If AI technology goes astray, the repercussions could be severe,” Altman said.
Meanwhile, Montgomery argued for risk-based regulation focused on the applications of AI rather than its development, an approach she termed “precision regulation”.
The Senate’s capacity to tackle this regulatory challenge was openly debated during the hearing. Previous political stalemates and intense lobbying by major tech companies have made it difficult to establish necessary safeguards for issues like data security and child protection on social media.
The Senators highlighted that the legislative process is often sluggish compared to the rapid advancement of technology.
Chairing the panel, Senator Richard Blumenthal demonstrated AI’s potential for deception by presenting an AI-generated recording that perfectly mimicked his own voice.
Blumenthal urged AI developers to cooperate with regulatory authorities to impose new restrictions, while acknowledging that Congress has yet to enact sufficient protections for existing technologies.
“Congress has a choice now, just as it had when social media emerged. We failed to rise to the occasion with social media. Now, we are obligated to act preemptively on AI before the threats materialise,” he said, drawing parallels with the social media revolution.
Altman and several senators also proposed the creation of a new regulatory body with jurisdiction over AI and other emerging technologies.
However, NYU professor Gary Marcus, who also testified at the hearing, cautioned that such an agency could be captured by the very industry it aims to regulate.
The conversation also broached subjects like AI-propagated disinformation, biases in AI models, potential threats to democracy and concerns over international rivals such as China gaining the upper hand in AI capabilities.
Concerns over AI Misuse
OpenAI’s groundbreaking work on generative AI, which can produce human-like images, audio and text, also featured prominently in the discussion.
Despite these remarkable developments, concerns persist over the unpredictable nature of these AI models and their potential misuse.
While the current administration has proposed some non-binding guidelines for AI, including the “Blueprint for an AI Bill of Rights”, the pleas of Altman and Montgomery highlight the pressing need for more robust and binding regulations.
Altman suggested that AI models of a certain sophistication level should be registered and licensed before deployment, after passing a series of tests.
Montgomery, on the other hand, proposed that AI products should make clear when users are interacting with a machine, and stressed the role of IBM’s AI ethics board in providing internal oversight, a function Congress has yet to take on.
Reflecting on the dynamic between innovation and regulation, Montgomery said, “Although innovation tends to outpace government regulation, the window of opportunity for the government to assert its regulatory role in the AI landscape hasn’t closed yet.”