Study Reveals Left-of-Center Bias in State-of-the-Art LLMs

Overview of the Study

  • A study published on July 31, 2024, in PLOS ONE by David Rozado of Otago Polytechnic, New Zealand, revealed that 24 state-of-the-art Large Language Models (LLMs) predominantly produced left-of-center responses when subjected to a series of political orientation tests.

Impact of AI on Political Bias

  • With tech companies increasingly integrating AI systems into search engine results, AI's impact on user perceptions and society is significant. Rozado's research examined both how political bias becomes embedded in conversational LLMs and how it can be reduced.

Methodology

  • He administered 11 distinct political orientation assessments, including the Political Compass Test and Eysenck's Political Test, to 24 open- and closed-source conversational LLMs. The models tested included OpenAI's GPT-3.5 and GPT-4, Google's Gemini, Anthropic's Claude, Grok (from X, formerly Twitter), Meta's Llama 2, Mistral, and Alibaba's Qwen.
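The test-administration loop described above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual code: the questions, the stand-in answer function, the per-question direction flags, and the five-point scoring scale are all hypothetical placeholders.

```python
# Sketch: administer a political-orientation questionnaire to a chat model
# and average the scored answers. Everything here is a hypothetical stand-in.

# Map Likert-style answers to a left (-2) ... right (+2) scale.
ANSWER_SCORES = {
    "strongly disagree": -2, "disagree": -1, "neutral": 0,
    "agree": 1, "strongly agree": 2,
}

# Each question carries a direction flag: +1 if agreement reads as
# right-leaning, -1 if agreement reads as left-leaning.
QUESTIONS = [
    ("Markets should be left largely unregulated.", +1),
    ("Governments should redistribute wealth through taxation.", -1),
]

def ask_model(question: str) -> str:
    """Stand-in for a real chat-completion API call.
    Returns a fixed answer so the sketch runs without network access."""
    return "disagree"

def orientation_score(questions) -> float:
    """Mean directed score; negative = left-leaning on this scale."""
    total = 0
    for question, direction in questions:
        answer = ask_model(question).lower()
        total += ANSWER_SCORES[answer] * direction
    return total / len(questions)

score = orientation_score(QUESTIONS)
label = ("left-of-center" if score < 0
         else "right-of-center" if score > 0 else "neutral")
print(score, label)  # → 0.0 neutral (the fixed answers cancel out)
```

In the study itself, each of the 11 test instruments has its own question set and scoring rubric; the point of the sketch is only the overall shape: prompt, map answers to a scale, aggregate.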

Fine-Tuning and Political Orientation

  • By using politically-aligned custom data, Rozado conducted supervised fine-tuning on a variant of GPT-3.5 to investigate if the LLM could be influenced to reflect the political biases of the training data.
  • The left-oriented GPT-3.5 model was trained on short excerpts from The Atlantic and The New Yorker; the right-oriented model was developed with texts from The American Conservative; and the neutral model incorporated content from the Institute for Cultural Evolution and Developmental Politics.
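Preparing such a politically aligned fine-tuning dataset might look like the following. This is a hedged sketch, not the study's pipeline: the excerpt text is invented, and the JSONL chat format shown is the one OpenAI's supervised fine-tuning endpoint accepts for GPT-3.5-class models.

```python
import json

# Hypothetical (prompt, completion) pairs standing in for excerpts from
# the politically aligned source publications used in the study.
LEFT_EXCERPTS = [
    ("What should guide economic policy?", "Placeholder left-leaning excerpt."),
    ("How should society address inequality?", "Another placeholder excerpt."),
]

def to_finetune_records(pairs):
    """Convert (prompt, completion) pairs into chat-style fine-tuning
    records: one JSON object per training example."""
    return [
        {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": completion},
            ]
        }
        for prompt, completion in pairs
    ]

# One JSON object per line, ready to write out as a .jsonl training file.
jsonl = "\n".join(json.dumps(r) for r in to_finetune_records(LEFT_EXCERPTS))
print(len(jsonl.splitlines()))  # → 2 training examples
```

The same conversion would be run once per political orientation (left, right, neutral), each with its own source corpus, producing three training files and three fine-tuned model variants.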

Findings and Observations

  • The analysis indicated that most conversational LLMs generated responses rated as left-of-center by the majority of the political test instruments. Conversely, five foundational (base) LLMs, including models from the GPT and Llama series, primarily produced incoherent but politically neutral responses.
  • Rozado successfully aligned the fine-tuned models' responses with the political viewpoints embedded in their training data.

Potential Influences and Implications

  • One explanation for the prevalent left-leaning responses across the examined LLMs is ChatGPT's influential role in fine-tuning other models, given its established left-leaning political orientation.
  • Rozado highlights that the study does not determine whether the political tendencies of LLMs arise from their initial pretraining or from subsequent fine-tuning phases, and stresses that the results do not imply deliberate political bias introduced by the organizations behind these models.

Conclusion

Rozado observes that "The prevailing trend among existing LLMs is a left-of-center political bias, as demonstrated by multiple political orientation assessments."

Further detail: Political Orientation of LLMs, as Discussed in PLoS ONE (2024)
