Recently, Sam Altman, the CEO of OpenAI and co-founder of Worldcoin, testified before Congress alongside NYU professor Gary Marcus and IBM's chief privacy and trust officer, Christina Montgomery.
This marked Altman's first official appearance before Congress, providing an opportunity for senators to inquire about OpenAI's stance on AI regulation.
The session, organised by the Senate Judiciary Privacy, Technology, & the Law Subcommittee, was deemed "historic" by Senator Dick Durbin of Illinois. The primary focus was to gain insights into the potential threats posed by generative AI models like ChatGPT and to explore how lawmakers should approach regulating this industry.
Altman's comments during the hearing, which were described as sincere and genuine by both congressional members and fellow speaker Gary Marcus, took some senators by surprise. He advocated for the establishment of a federal oversight agency with the power to issue and revoke development licenses. Altman also expressed his belief that creators should be compensated when their work is used to train an AI system.
Furthermore, he agreed that consumers who suffer harm from AI products should have the right to sue the developers.
Altman brushed off questions regarding the recent "AI pause" letter, which called for a six-month moratorium on deploying systems more powerful than GPT-4 (the AI system behind ChatGPT). He clarified that OpenAI had spent an extended period evaluating GPT-4 before deployment, and that the company had no plans to release another model within the next six months. Gary Marcus, one of the signatories of the pause letter, conceded that he agreed more with the letter's spirit than with its specifics. He urged Congress to consider global oversight alongside federal regulation—an idea Altman supported.
Throughout the hearing, the three guest speakers generally aligned on most topics, including support for privacy protections, increased government oversight, third-party auditing, and the urgency of regulating the industry. However, IBM's Christina Montgomery voiced dissent, disagreeing with the need for a new federal agency to enforce AI regulations. Instead, she favoured a targeted approach utilising existing regulatory bodies to focus enforcement on specific use cases.
While all speakers acknowledged the potential harm posed by AI and the necessity of safety interventions, Marcus emphasised that nobody currently comprehends or can predict the extent of harm existing AI products can cause. He called for a cautious approach accompanied by greater transparency. The speakers and members of Congress also agreed on the need for a national privacy law in the United States, similar to those in Europe. However, Altman disagreed with the idea that consumers should have the ability to opt out of including their publicly available web data in training data sets.
Additionally, Altman stopped short of ruling out an ad-based version of OpenAI's GPT products, despite asserting earlier in the hearing that OpenAI adheres to consumer privacy standards and does not build user profiles for serving tailored advertisements. Senator Cory Booker of New Jersey, drawing on his experience working with decentralised finance and Web3 companies, raised concerns about centralisation and monopolisation. Marcus responded by warning of the risk of concentrating control over public perception in a small number of leading AI companies with the resources to rival Microsoft, Google, and Amazon.
Altman, highlighting his Worldcoin project, which combines decentralised cryptocurrency assets on the Ethereum blockchain with iris-scanning technology for identity authentication, clarified that OpenAI merely provides a platform. He emphasised that the democratisation of OpenAI's products occurs when developers, companies, and end-users adapt the GPT API for "fantastic" applications.
The Senate hearing provided valuable insights into the perspectives of industry leaders regarding AI regulation, emphasising the need for careful consideration of potential risks and the importance of transparency and privacy safeguards.
As we move forward, it is crucial for policymakers, industry leaders, and the public to heed these warnings while embracing the opportunities artificial intelligence presents. By balancing innovation with accountability, we can navigate this transformative technology and shape a future where AI is harnessed for the betterment of humanity.