
ChatGPT Sent Users to a Website for a Feature It Didn’t Have—So the Founder Built It



Briefly

  • ChatGPT falsely told users that Soundslice could play back ASCII tablature, leading to a surge in failed uploads.
  • Founder Adrian Holovaty built the feature to meet the unexpected demand.
  • Holovaty acknowledged that the move raises questions about how AI misinformation can shape product decisions.

What do you do when your website is bombarded with uploads it cannot process? That is the situation software developer and musician Adrian Holovaty found himself in when he noticed a strange spike in failed uploads to his company’s sheet music scanner.

What he did not expect was the alleged culprit: ChatGPT.

In a recent blog post, the Soundslice co-founder explained that he was looking through error logs when he discovered that ChatGPT had been instructing users to upload “ASCII tab,” a text-based music notation format used by guitarists and others in place of standard sheet music, to Soundslice in order to hear audio playback. The problem was that the feature did not exist. Holovaty decided to build it.

“To my knowledge, this is the first case of a company developing a feature because ChatGPT is incorrectly telling people it exists,” he wrote.

Launched in 2012, Soundslice is an interactive platform for learning and sharing music that digitizes sheet music from photos or PDFs.

“Our scanning system wasn’t intended to support this style of notation,” Holovaty wrote. “Why, then, were we being bombarded with so many ASCII tab ChatGPT screenshots?

“We’ve never supported ASCII tab; ChatGPT was outright lying to people. And making us look bad in the process, setting false expectations about our service.”
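For readers unfamiliar with the format, ASCII tab renders guitar music as plain text: one line per string, with fret numbers marking where to play and horizontal position standing in for timing. The minimal Python sketch below illustrates the kind of text parsing an importer for the format has to do; the function and format details are simplified assumptions for illustration, not Soundslice’s actual code.

```python
# Hypothetical illustration of parsing one line of ASCII guitar tab.
# Not Soundslice's importer; the format handling here is deliberately simplified.

def parse_tab_line(line: str) -> list[tuple[int, int]]:
    """Extract (character_position, fret_number) pairs from one tab line.

    Example line for the high E string: "e|--3--5--|"
    The '3' sits at index 4 and the '5' at index 7, so the result is
    [(4, 3), (7, 5)]. Horizontal position is a rough proxy for timing.
    """
    notes = []
    i = 0
    while i < len(line):
        if line[i].isdigit():
            j = i
            while j < len(line) and line[j].isdigit():  # handle multi-digit frets, e.g. 12
                j += 1
            notes.append((i, int(line[i:j])))
            i = j
        else:
            i += 1
    return notes

if __name__ == "__main__":
    print(parse_tab_line("e|--3--5--|"))  # [(4, 3), (7, 5)]
```

A real importer would also have to align the six string lines, map frets to pitches per string tuning, and guess at rhythm, which the format does not encode precisely, part of why supporting it is a nontrivial feature to build.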

The phenomenon of AI hallucination is common. Since ChatGPT’s public launch in 2022, there have been numerous examples of chatbots, including ChatGPT, Google Gemini, and Anthropic’s Claude, presenting false or misleading information as fact.

While OpenAI did not address Holovaty’s claims directly, the company acknowledged that hallucinations remain a problem.

“Addressing hallucinations is an ongoing area of research,” an OpenAI spokesperson told Decrypt. “In addition to clearly informing users that ChatGPT can make mistakes, we are continuously working to improve the accuracy and reliability of our models through a variety of methods.”

OpenAI advises users to treat ChatGPT’s output as a first draft and to verify any critical information through reliable sources. It publishes model evaluation data in its system cards and in its safety evaluations hub.

“Hallucinations aren’t going away,” Northwest AI Consulting co-founder and CEO Wyatt Mayham told Decrypt. “In some cases, such as creative writing or brainstorming, hallucinations can actually be useful.”

And that was exactly Holovaty’s approach.

“In the end, we decided: what the heck? We might as well meet the market demand,” he said. “So we built a bespoke ASCII tab importer, which was near the bottom of my ‘software I expected to write in 2025’ list, and changed the UI copy in our scanning system to tell people about the feature.”

Holovaty did not respond to Decrypt’s request for comment.


