AI Still Doesn’t Understand the Word ‘No,’ MIT Study Finds



In brief

  • AI still struggles to understand negation, posing risks in critical domains such as healthcare.
  • An MIT-led study found that vision-language models, in particular, cannot reliably interpret negative statements.
  • Experts warn that AI’s failure to process “no” and “not” could lead to real-world mistakes.

AI can diagnose disease, write poetry, and even drive cars, yet it still struggles with a simple word: “no.” That blind spot could have serious consequences in real-world applications, such as AI built for healthcare.

According to a new study led by MIT researchers, in collaboration with OpenAI and the University of Oxford, this failure to understand “no” and “not” can have profound consequences, especially in medical settings.

Negation (for example, “no fracture” or “not enlarged”) is a critical linguistic function, especially in high-stakes settings such as healthcare, where misinterpretation can result in serious harm. The study shows that current AI models, such as ChatGPT, Gemini, and LLaMA, often fail to process negative statements correctly, defaulting instead to positive associations.

The underlying problem is not just a lack of data; it is how AI is trained. Most large language models are built to recognize patterns, not to reason logically. That means they can interpret “not good” as still somewhat positive, because they associate “good” with positivity. Experts argue that unless models learn to reason through logic, rather than merely mimic language, they will keep making subtle yet dangerous mistakes.
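To make the failure mode concrete, here is a minimal, hypothetical Python sketch, not code from the MIT study: a purely associative bag-of-words scorer rates “not good” as positive because it only sees the word “good,” while a crude rule that flips the weight of any word preceded by a negator gets it right. The word weights, sentences, and negator list are illustrative assumptions.

```python
# Minimal sketch (not from the MIT study): how a purely associative,
# bag-of-words scorer can misread negation. Word weights, sentences,
# and the negation rule below are illustrative assumptions.

ASSOCIATION_WEIGHTS = {"good": 1.0, "great": 1.2, "bad": -1.0, "fracture": -0.8}
NEGATORS = {"no", "not", "without"}

def associative_score(text: str) -> float:
    """Sums word-level associations, ignoring negation entirely."""
    return sum(ASSOCIATION_WEIGHTS.get(w, 0.0) for w in text.lower().split())

def negation_aware_score(text: str) -> float:
    """Flips the sign of a word's weight when a negator immediately precedes it."""
    words = text.lower().split()
    score = 0.0
    for i, w in enumerate(words):
        weight = ASSOCIATION_WEIGHTS.get(w, 0.0)
        if i > 0 and words[i - 1] in NEGATORS:
            weight = -weight
        score += weight
    return score

if __name__ == "__main__":
    for sentence in ["the result is good", "the result is not good", "no fracture"]:
        print(f"{sentence!r:28} associative={associative_score(sentence):+.1f} "
              f"negation-aware={negation_aware_score(sentence):+.1f}")
```

Running the script shows the associative scorer giving “the result is not good” a positive score, the same failure mode the researchers describe at far larger scale.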

“AI is very good at generating responses that resemble what it has seen during training. But it’s really bad at coming up with something genuinely new or outside its training data,” Franklin Delehelle, lead research engineer at the zero-knowledge infrastructure company Lagrange Labs, told Decrypt. “So if the training data lacks strong examples of saying ‘no’ or expressing negative sentiment, the model may struggle to produce that kind of response.”

In the study, researchers found that vision-language models, which are designed to interpret images and text, show an even stronger bias toward affirming statements, often failing to distinguish between positive and negative captions.

“Through synthetic negation data, we offer a promising path toward more reliable models,” the researchers said. “While our synthetic-data approach improves negation understanding, challenges remain, especially with fine-grained differences in negation.”
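As a rough illustration of what synthetic negation data might look like, and emphatically not the study’s actual pipeline, the hypothetical sketch below rewrites affirmative image captions into negated counterparts that could serve as hard negatives during fine-tuning; the findings list and caption template are invented for the example.

```python
# Illustrative sketch only, not the researchers' actual pipeline: one simple
# way to build synthetic negation examples by rewriting affirmative captions
# into negated counterparts usable as hard negatives during fine-tuning.
# The findings list and caption template are assumptions for this example.

from typing import List, Tuple

FINDINGS: List[str] = ["fracture", "nodule", "pleural effusion"]

def affirmative_caption(finding: str) -> str:
    """Caption asserting that the finding is present."""
    return f"a chest x-ray showing a {finding}"

def negated_caption(finding: str) -> str:
    """Caption asserting that the same finding is absent."""
    return f"a chest x-ray showing no {finding}"

def build_contrastive_pairs(findings: List[str]) -> List[Tuple[str, str]]:
    """Pairs each affirmative caption with its negated hard negative."""
    return [(affirmative_caption(f), negated_caption(f)) for f in findings]

if __name__ == "__main__":
    for positive, negative in build_contrastive_pairs(FINDINGS):
        print(f"positive: {positive}")
        print(f"negated:  {negative}\n")
```

In a real setup, each pair would be matched with an image and fed into a contrastive or caption-ranking objective, so the model is explicitly penalized for treating “no fracture” like “a fracture.”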

Despite steady progress in reasoning, many AI systems still struggle with human-like reasoning, especially when dealing with open-ended problems or situations that require deeper understanding or “common sense.”

“All LLMs, which is what we now commonly call AI, are influenced by their initial prompt. When you interact with ChatGPT or similar systems, the system isn’t only using your input. There is also an internal, or ‘system,’ prompt that the company has set in advance, one that you, the user, have no control over,” Delehelle told Decrypt.

Delehelle highlighted one of AI’s core limitations: its reliance on patterns found in its training data, a constraint that can shape, and sometimes distort, how it responds.

Kian Katanforoosh, an assistant professor of deep learning at Stanford University and founder of the skills intelligence company Workera, said the challenge with negation stems from a fundamental flaw in how language models work.

“Negation is deceptively complex. Words like ‘no’ and ‘not’ flip the meaning of a sentence, but most language models aren’t flipping logic; they’re predicting what sounds likely based on patterns,” Katanforoosh told Decrypt. “That makes them prone to missing the point when negation is involved.”

Katanforoosh also pointed out, echoing Delehelle, that the way AI models are trained is the fundamental problem.

“These models are trained to associate, not to understand. So when you say ‘not good,’ they still strongly associate the word ‘good’ with positive sentiment,” he explained. “Unlike humans, they don’t always override those associations.”

Katanforoosh warned that the inability to interpret negation precisely is not merely a technical flaw; it can have serious real-world consequences.

“Understanding negation is fundamental to comprehension,” he said. “If a model can’t reliably grasp it, you risk subtle but critical errors, especially in use cases like legal, medical, or HR applications.”

And while it might seem tempting to fix the problem with more training data, he argued that the solution lies elsewhere.

“The fix isn’t more data, it’s better reasoning. We need models that can handle logic, not just language,” he said. “That’s the frontier right now: bridging statistical learning with structured thinking.”

Edited by James Rubin

