
Tech Leaders Call for a Pause on the Development of AI

More than 2,600 tech leaders and researchers have signed an open letter cautioning against the development of advanced artificial intelligence.

More than 2,600 tech leaders and researchers have signed an open letter cautioning against the development of advanced artificial intelligence. The signatories, who include prominent figures such as Elon Musk and Steve Wozniak alongside various AI CEOs, CTOs, and researchers, believe the consequences of such technology could be profound, for better or for worse, and are urging a temporary pause on further AI development.


The United States-based think tank Future of Life Institute (FOLI) published the letter on March 22, calling on all AI companies to immediately halt the training of AI systems more powerful than GPT-4 for at least six months. The institute warned that human-competitive intelligence could pose significant risks to society and humanity if it is not planned for and managed with appropriate care and resources.


The latest iteration of OpenAI's artificial intelligence-powered chatbot, GPT-4, was released on March 14 and is reportedly ten times more advanced than the original version of ChatGPT. FOLI claims there is an "out-of-control race" among AI firms to develop ever more powerful systems that nobody, not even their creators, can fully understand, predict, or control. Among the institute's top concerns are whether machines could flood information channels with propaganda and untruth, and whether they will automate away jobs.


According to FOLI, the development of advanced artificial intelligence could bring about a profound change in the course of human history, which is why it should be planned for and managed with appropriate care and resources. That level of planning and management, the institute notes, is currently lacking. GPT-4, meanwhile, has already passed some of the most rigorous US high school and law exams with scores in the 90th percentile.


FOLI takes these concerns a step further, suggesting that the entrepreneurial efforts of these AI companies may eventually pose an existential threat. The letter asks:


“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk the loss of control of our civilisation?”

“Such decisions must not be delegated to unelected tech leaders,” the letter added.


The institute agreed with Sam Altman, the co-founder and CEO of OpenAI, who recently called for an independent review process to be put in place before the training of future AI systems. Altman's February 24 blog post emphasised the importance of preparing for the arrival of artificial general intelligence (AGI) and, beyond it, artificial superintelligence (ASI).


However, not all AI experts agree with this sentiment. SingularityNET CEO Ben Goertzel, responding to a Twitter post by Gary Marcus, the author of Rebooting.AI, argued that large language models (LLMs) are unlikely to develop into AGIs. He suggested instead that research and development be slowed in other areas, such as bioweapons and nuclear weapons.

Beyond language models, there are concerns about AI-powered deepfake technology, which has been used to create convincing image, audio, and video hoaxes, and some worry that AI-generated artwork may in certain cases violate copyright law. Galaxy Digital CEO Mike Novogratz recently expressed shock at the amount of regulatory attention given to cryptocurrency rather than to artificial intelligence, saying during a shareholder call on March 28 that the government has it completely upside-down.


FOLI has argued that if a pause on AI development is not enacted quickly, governments should step in and institute a moratorium. The institute recommends a public and verifiable pause that includes all key actors.
