
OpenAI announces new strategies to prevent the misuse of AI

Under the direction of Aleksander Madry, the OpenAI team will evaluate the potential of artificial intelligence to help create threats such as chemical weapons. (Illustrative image)

OpenAI, the artificial intelligence company behind ChatGPT, has laid out its plans to stay ahead of what it believes could be serious risks posed by the technology it develops, such as allowing bad actors to learn how to build chemical and biological weapons.

OpenAI's "Preparedness" team, led by MIT artificial intelligence professor Aleksander Madry, will hire AI researchers, national security experts and policy professionals to monitor the company's technology, test it continually and warn the company if it believes any of its AI capabilities are becoming dangerous.

The team sits between OpenAI's "Safety Systems" group, which works on existing problems such as racial biases being instilled in artificial intelligence, and the company's "Superalignment" team, which studies how to ensure that AI does not harm humans in an imagined future where the technology has entirely overtaken human intelligence.

The increasing presence of AI in everyday fields poses security challenges that OpenAI intends to address through a dedicated team. (Illustrative image)

The popularity of ChatGPT and advances in generative artificial intelligence have sparked debate in the technology community about how dangerous the technology may be. Earlier this year, prominent AI leaders from OpenAI, Google and Microsoft warned that the technology could pose an existential risk to humanity, on par with pandemics or nuclear weapons.

Other AI researchers said that focusing on those big, scary risks allows companies to divert attention from the harmful effects the technology is already causing.


A growing group of AI business leaders say the risks are exaggerated and that companies should keep developing the technology to help improve society and make money doing it.

OpenAI has staked out a middle ground in this debate in its public positioning. Its chief executive, Sam Altman, believes there are serious long-term risks inherent in the technology, but says attention should also go to fixing existing problems. According to Altman, regulation meant to avert the harmful effects of artificial intelligence should not make it harder for smaller companies to compete. At the same time, he has pushed the company to commercialize its technology and raise money to fund faster growth.

AI regulation, according to OpenAI, should protect society without hindering innovation at startups. (iStock)

Madry, a veteran artificial intelligence researcher who directs MIT's Center for Deployable Machine Learning and co-leads the MIT AI Policy Forum, joined OpenAI earlier this year. He was part of a small group of OpenAI leaders who resigned when Altman was fired by the company's board in November. Madry returned to the company when Altman was reinstated five days later.

OpenAI, which is governed by a nonprofit board whose mission is to advance artificial intelligence and make it useful to all of humanity, is in the process of selecting new board members after three of the four directors who fired Altman stepped down as part of his return.

Despite the leadership "turmoil," Madry said he believes OpenAI's board takes seriously the AI risks he is researching. "I realized that if I really want to shape how AI impacts society, why not go to a company that is actually doing it?"


The Preparedness team is hiring national security experts from outside the world of artificial intelligence who can help the company understand how to deal with big risks. OpenAI is beginning conversations with organizations such as the National Nuclear Security Administration, which oversees nuclear technology in the United States, to ensure the company can adequately study the risks of AI, Madry said.

The debate over the future of AI and its security is heating up due to the popularity of platforms like ChatGPT. (Illustrative image)

The team will monitor how and when AI can instruct people to hack computers or build dangerous chemical, biological and nuclear weapons, beyond what people can find on the internet through regular searching. Madry is looking for people who "really think, 'How can I mess with this set of rules? How can I be most ingenious in my evildoing?'"

The company will also allow "qualified, independent third parties" outside OpenAI to test its technology, it said in a blog post published Monday. Madry said he does not agree with the debate between AI "doomers," who fear the technology has already attained the ability to surpass human intelligence, and "accelerationists," who want to remove all barriers to AI development.

"In my opinion, this framing of acceleration and deceleration is very simplistic. AI has a lot of upsides, but we also have to do the work to ensure the upsides are realized and the downsides are not."

(c) 2023, The Washington Post
