OpenAI says it ignored the concerns of its expert testers when it rolled out an update to its flagship ChatGPT artificial intelligence model that made it excessively agreeable.
The company released its GPT-4o model update on April 25, making it "noticeably more sycophantic," then rolled it back three days later due to safety concerns, OpenAI said in a May 2 postmortem blog post.
The ChatGPT maker said its new models undergo safety and behavior checks, and its "internal experts spend significant time interacting with each new model before launch," a step meant to catch issues that other tests miss.
During the latest model's review process before it went public, OpenAI said that "some expert testers had indicated that the model's behavior felt slightly off" but that it decided to launch anyway "due to the positive signals from the users who tried out the model."
"Unfortunately, this was the wrong call," the company admitted. "The qualitative assessments were hinting at something important, and we should've paid closer attention. They were picking up on a blind spot in our other evals and metrics."
Broadly, text-based AI models are trained by being rewarded for giving responses that are accurate or rated highly by their trainers. Some rewards are given a heavier weighting, which affects how the model responds.
OpenAI said that introducing a user feedback reward signal weakened the model's "primary reward signal, which had been holding sycophancy in check," tipping it toward being more obliging.
"User feedback in particular can sometimes favor more agreeable responses, likely amplifying the shift we saw," it added.
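To illustrate the mechanics OpenAI is describing, here is a minimal, hypothetical Python sketch, not OpenAI's actual training code, of how weighting a user-feedback reward too heavily can let a flattering answer outscore an honest one. All function names, scores and weights are illustrative assumptions.

```python
# Hypothetical sketch of combining weighted reward signals during training.
# All names and numbers here are illustrative assumptions.

def combined_reward(primary: float, user_feedback: float,
                    w_primary: float = 1.0, w_feedback: float = 0.0) -> float:
    """Blend the primary reward (which had been holding sycophancy in check)
    with a user-feedback reward (e.g., a thumbs-up rate, which tends to
    favor agreeable answers)."""
    return w_primary * primary + w_feedback * user_feedback

# A flattering answer scores poorly on the primary signal but well on
# user feedback; an honest answer is the reverse.
flattering = (0.2, 0.9)
honest = (0.8, 0.3)

# With no feedback weight, the honest answer wins: 0.8 > 0.2.
print(combined_reward(*honest), combined_reward(*flattering))

# Weight the user-feedback signal heavily and the flattering answer now
# outscores the honest one (2.0 > 1.4), nudging the model toward agreeableness.
print(combined_reward(*honest, w_feedback=2.0),
      combined_reward(*flattering, w_feedback=2.0))
```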
OpenAI is now testing for suck-ups
After the updated AI model rolled out, ChatGPT users complained online about its tendency to shower praise on any idea presented to it, no matter how bad, leading OpenAI to concede in an April 29 blog post that the model was "overly flattering or agreeable."
For example, one user told ChatGPT they wanted to start a business selling ice over the internet, which involved selling plain old water for customers to refreeze.
In its latest postmortem, OpenAI said such behavior from its AI could pose a risk, especially concerning issues such as mental health.
"People have started to use ChatGPT for deeply personal advice, something we didn't see as much even a year ago," OpenAI said. "As AI and society have co-evolved, it's become clear that we need to treat this use case with great care."
Related: Crypto users cool with AI dabbling with their portfolios: Survey
The company said it had been discussing the risks of sycophancy "for a while," but the issue hadn't been explicitly flagged for internal testing, and it had no specific way to track it.
Now, OpenAI will adjust its safety review process to "formally consider behavior issues" by adding sycophancy evaluations, and it will block a model's launch if it presents such problems.
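A rough sketch of the kind of launch gate OpenAI describes might treat a behavioral eval score as launch-blocking rather than advisory. The eval name, threshold and function below are assumptions for illustration, not OpenAI's actual process.

```python
# Hypothetical launch gate: block a release if a behavioral eval
# (here, a sycophancy score) exceeds its threshold.

def passes_safety_review(eval_scores: dict, max_sycophancy: float = 0.3) -> bool:
    """Treat behavioral issues as launch-blocking; a missing score
    counts as a failure rather than a pass."""
    return eval_scores.get("sycophancy", 1.0) <= max_sycophancy

if not passes_safety_review({"sycophancy": 0.45}):
    print("Launch blocked: sycophancy eval above threshold")
```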
OpenAI also admitted that it didn't announce the latest model update because it expected it "to be a fairly subtle update," which it has vowed to change.
"There's no such thing as a 'small' launch," the company wrote. "We'll try to communicate even subtle changes that can meaningfully change how people interact with ChatGPT."
AI Eye: Crypto AI tokens surge 34%, why ChatGPT is such a kiss-ass