Tuesday, November 5, 2024

OpenAI is already working on a way to prevent ChatGPT from hallucinating | technology


ChatGPT is famous for its remarkable ability to answer almost any question; however, its answers are not always true. There have been cases where it has hallucinated. For example, a lawyer used an AI tool to prepare a legal brief, but the tool invented legal precedents.

Faced with this situation, the company behind ChatGPT is working on ways to keep its AI from hallucinating. Specifically, the new strategy relies on training AI models "to reward themselves whenever they do the right thing. In this way, it is not only the final conclusion reached by the AI model that is rewarded," the report points out.

See also: ChatGPT: Japan warns OpenAI against collecting sensitive data from its users

This approach is called "process supervision," and it should make the AI's reasoning easier to explain and closer to human thinking. "Eventually, the model learns from its mistakes, but internally, from how it reached the wrong conclusion in the first place," media reports note.
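The distinction the article describes can be sketched in a few lines. The following is a minimal illustrative sketch, not OpenAI's actual implementation: with outcome supervision, only the final answer earns a reward, while with process supervision each intermediate reasoning step is rewarded on its own. The step labels below are hypothetical.

```python
def outcome_reward(num_steps: int, final_correct: bool) -> list[float]:
    """Outcome supervision: one reward, based only on the final answer."""
    return [0.0] * (num_steps - 1) + [1.0 if final_correct else 0.0]

def process_reward(step_labels: list[bool]) -> list[float]:
    """Process supervision: each reasoning step is rewarded individually."""
    return [1.0 if ok else 0.0 for ok in step_labels]

# Hypothetical three-step chain of reasoning where the second step goes
# wrong, so the final answer is wrong too.
step_labels = [True, False, False]

print(outcome_reward(len(step_labels), final_correct=False))  # [0.0, 0.0, 0.0]
print(process_reward(step_labels))                            # [1.0, 0.0, 0.0]
```

Under outcome supervision the model gets no signal about *which* step failed; under process supervision the reward vector pinpoints that the first step was sound and the error entered at the second, which is why the approach is said to be easier to explain.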

Although OpenAI did not invent this approach of reviewing the entire reasoning process, its adoption "will give it a big boost toward eventually implementing it in its AI systems," Genbeta points out.

For now, it is not known when OpenAI will begin integrating this strategy into services such as ChatGPT; most likely, the method is still under research.
