
OpenAI creates team for 'catastrophic' AI risks. Here's why it concerns us

OpenAI, a company now at the forefront of the field of artificial intelligence, recently announced the creation of a new team with the specific task of evaluating and mitigating the potentially catastrophic risks associated with advancing AI technologies. This move highlights the growing importance of proactively addressing the challenges and dangers that generative models can bring, especially when they are not properly monitored.

Risk identification by OpenAI

As artificial intelligence rapidly evolves, new risks emerge that could have severe impacts on a global scale. For this reason, OpenAI's new preparedness team will dedicate itself to tracking, evaluating, forecasting, and protecting against these potential problems. Among the risks identified are nuclear threats, a theme that highlights the urgency of responsible action in the field of AI.

In addition to nuclear threats, the team will also focus on chemical, biological, and radiological dangers, as well as on the phenomenon of AI self-replication, that is, the ability of an artificial intelligence system to replicate itself autonomously. Other areas of focus include AI's ability to deceive humans and threats to cybersecurity.

Benefits and challenges of advanced AI

OpenAI recognizes that frontier AI models, those that surpass the capabilities of the most advanced existing models, have the potential to benefit all of humanity. And no, contrary to what you might think, artificial intelligence is not here to steal our jobs but rather to create them.

However, with great power comes great responsibility. It is therefore essential to develop risk-informed policies that make it possible to evaluate and monitor AI models, ensuring that the benefits outweigh the associated risks.


Read also: DeepMind has the solution to prevent AI risks

The leadership of this new team has been entrusted to Alexander Madry, a leading figure in the field of machine learning. Currently on leave from his role as director of MIT's Center for Deployable Machine Learning, Madry will coordinate efforts to develop and maintain a risk-informed development policy.

Towards a global priority

OpenAI's CEO, Sam Altman, is no stranger to concerns about the potentially catastrophic risks of AI. He has suggested that AI technologies should be taken as seriously as nuclear weapons, highlighting the need for a collective effort to mitigate the extinction risks associated with AI. This new team represents a concrete step towards realizing this vision, underscoring OpenAI's commitment to the responsible management of artificial intelligence.

Gianluca Cobucci

Passionate about code, languages, and human-machine interfaces. Everything that is technological evolution interests me. I try to share my passion with the utmost clarity, relying on reliable sources rather than hearsay.
