Friday, September 27, 2024

Reliability issues in large language models explored

Researchers Examine Accuracy and Transparency of Leading AI Chatbots: A Closer Look

Introduction to the Study

[Figure: Performance of a selection of GPT and LLaMA models]

Researchers from the Universitat Politecnica de Valencia in Spain have found that as large language models grow in size and complexity, they become less inclined to admit to users when they do not know an answer.

Study: Examining AI Chatbots

In their Nature study, the researchers assessed the newest versions of three popular AI chatbots, evaluating the accuracy of their responses and how effectively users could recognize incorrect answers.

Increased Reliance on LLMs

As LLMs gain widespread adoption, users have increasingly relied on them for tasks like writing essays, composing poems or songs, solving mathematical problems, and more. Consequently, accuracy has become a growing concern.

Study Objective: Evaluating AI Accuracy

In this new study, the researchers sought to determine whether popular LLMs improve in accuracy with each update and how they respond when they provide incorrect answers.

AI Chatbots Assessed: BLOOM, LLaMA and GPT

To assess the accuracy of three leading LLMs (BLOOM, LLaMA, and GPT), the researchers presented each with thousands of questions and compared the answers with those generated by earlier versions in response to the same prompts.

Diverse Themes Tested

The researchers also diversified the themes, encompassing math, science, anagrams, and geography, while evaluating the LLMs' capabilities to generate text or execute tasks like list ordering. Each question was initially assigned a level of difficulty.
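The setup described above, thousands of questions of varying difficulty with answers graded per model version, can be sketched as a small scoring harness. This is a minimal illustration, not the researchers' actual code; the function name, outcome labels, and sample data are all hypothetical.

```python
# Minimal sketch of a difficulty-binned evaluation harness.
# Outcome labels ("correct", "incorrect", "avoided") and all names are
# illustrative assumptions, not taken from the study.
from collections import defaultdict

def evaluate(results):
    """results: list of (difficulty, outcome) pairs, where outcome is
    'correct', 'incorrect', or 'avoided' (the model declined to answer).
    Returns per-difficulty accuracy and avoidance rates."""
    bins = defaultdict(lambda: {"correct": 0, "incorrect": 0, "avoided": 0})
    for difficulty, outcome in results:
        bins[difficulty][outcome] += 1
    report = {}
    for difficulty, counts in bins.items():
        total = sum(counts.values())
        report[difficulty] = {
            "accuracy": counts["correct"] / total,
            "avoidance": counts["avoided"] / total,
        }
    return report

# Hypothetical graded answers for one model version.
sample = [("easy", "correct"), ("easy", "correct"), ("easy", "avoided"),
          ("hard", "incorrect"), ("hard", "correct"), ("hard", "incorrect")]
print(evaluate(sample))
```

Tracking avoidance separately from incorrectness is what lets a study like this distinguish a model that admits ignorance from one that guesses.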

Key Findings: Accuracy and Transparency

The researchers discovered that accuracy generally improved with each new iteration of the chatbots. However, they observed that as question difficulty increased, accuracy declined, as anticipated.

Transparency Decreases with Size

Interestingly, they noted that as LLMs became larger and more advanced, they tended to be less transparent about their ability to provide correct answers.

Behavioral Shift in AI Chatbots

In earlier iterations, most LLMs would tell users when they could not find an answer or needed more information. In the latest versions, however, the models are more inclined to guess, producing more answers overall, both correct and incorrect.

Reliability Concerns

The researchers also found that all LLMs occasionally generated incorrect answers, even to straightforward questions, indicating their continued lack of reliability.

User Study: Evaluating Incorrect Answers

The research team then asked volunteers to judge the answers from the first phase of the study as correct or incorrect. They found that most participants struggled to identify the incorrect responses.
