
Tech leaders call for a pause on the development of AI

A group of over 2,600 tech leaders and researchers has signed an open letter cautioning against the development of advanced artificial intelligence.

The authors of the letter believe the consequences of such technology could be profound, for better or for worse, and are urging a temporary pause on further AI development. Signatories include prominent figures such as Elon Musk and Steve Wozniak, as well as various AI CEOs, CTOs, and researchers.


The United States-based think tank Future of Life Institute (FOLI) published the letter on March 22, calling on all AI companies to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. The institute warned that human-competitive intelligence could pose significant risks to society and humanity if it is not planned for and managed with appropriate care and resources.


The latest iteration of OpenAI's artificial intelligence-powered chatbot, GPT-4, was released on March 14 and is reportedly ten times more advanced than the original version of ChatGPT. FOLI claims there is an "out-of-control race" among AI firms to develop ever more powerful AI that nobody, not even its creators, can fully understand, predict, or control. Among its top concerns are whether machines could flood information channels with propaganda and untruth, and whether they will automate away all employment opportunities.


GPT-4 has already passed some of the most rigorous US high school and law exams, scoring in the 90th percentile.


FOLI takes these concerns a step further, suggesting that the entrepreneurial efforts of AI companies may eventually lead to an existential threat.


“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk the loss of control of our civilisation?” the letter asks.

“Such decisions must not be delegated to unelected tech leaders,” the letter added.


The institute agreed with OpenAI co-founder and CEO Sam Altman, who recently called for an independent review process to be put in place before training future AI systems. Altman's February 24 blog post emphasised the importance of preparing for the development of artificial general intelligence (AGI) and, ultimately, artificial superintelligence (ASI).


However, not all AI experts agree with this sentiment. SingularityNET CEO Ben Goertzel, responding to a Twitter post by Gary Marcus, author of Rebooting AI, argued that large language models (LLMs) are unlikely to become AGIs, and suggested that research and development should instead be slowed in other areas, such as bioweapons and nuclear weapons.

In addition to large language models, there are concerns about AI-powered deepfake technology, which has been used to create convincing image, audio, and video hoaxes, and some worry that AI-generated artwork may infringe copyright in certain cases. Galaxy Digital CEO Mike Novogratz recently expressed shock at the amount of regulatory attention given to cryptocurrencies compared with artificial intelligence, stating during a shareholders' call on March 28 that the government has it completely upside-down.


FOLI has argued that if a pause on AI development is not enacted quickly, governments should step in and institute a moratorium. The institute recommends a public and verifiable pause that includes all key actors.

March 30, 2023