OpenAI Forms Dedicated Unit to Address Potential Threats from Advanced AI Systems

In response to the growing risks associated with advanced artificial intelligence (AI) capabilities, OpenAI, the renowned AI research lab, has formed a dedicated team named “Preparedness.”

This team’s primary objective is to evaluate and address the potential threats that could arise from cutting-edge AI models, commonly referred to as “catastrophic risks.” OpenAI’s move comes as a proactive measure to ensure the safe deployment of powerful AI technologies.


OpenAI stated, “We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity, but they also pose increasingly severe risks.” The Preparedness team, led by Aleksander Madry, the director of MIT’s Center for Deployable Machine Learning, will be tasked with tracking, forecasting, and safeguarding against the threats posed by upcoming AI systems.

These risks include AI models’ ability to deceive and manipulate people, as seen in phishing attacks, as well as their potential to generate malicious computer code. OpenAI is actively recruiting experts for the initiative, including a national security threat researcher and a research engineer, with job listings indicating annual salaries ranging from $200,000 to $370,000.

This development echoes concerns raised by notable figures in the tech industry about AI safety. Elon Musk, a co-founder of OpenAI, has previously described AI as one of the most significant risks to civilization.


Geoffrey Hinton, a leading figure in AI, has cautioned about the technology’s potential dangers, particularly those posed by AI chatbots. OpenAI CEO Sam Altman has likewise acknowledged public fears about AI, emphasizing the risks of disinformation, economic shocks, and unforeseen threats.

OpenAI’s proactive approach aligns with the industry’s increasing recognition of the need to establish stringent safety protocols and ethical standards as AI technologies continue to advance.
