While AI sceptics seek refuge in the fact that there will always be a human in the loop, new research has sparked fresh concern. A study published last month indicates that large language models (LLMs) often reflect the ideological biases of their creators. Worse, the finding holds true for all the popular models today, whether from OpenAI, Anthropic, Google, Mistral, or Chinese giant Alibaba’s Qwen.
Ironically, xAI founder Elon Musk was among the notable names who reacted to the research findings, expressing fear that an LLM could lean towards a ‘far left’ ideology. Nor is this the first time people have worried about AI models showing an ideological bias.
Earlier this year, Grady Booch, co-creator of UML, had some strong words on the matter. “They [LLMs] are inherently and irredeemably unreliable narrators, offering tantalisingly coherent output that is dangerous in its ability to deceive, threatening in its casual toxic bias,” he said.
The research shortlisted over 4,000 controversial historical political figures whose summaries were available on Wikipedia.
The dataset, derived from Wikipedia, inherently carries a bias due to the platform’s open-edit nature. Additionally, ideologies and societal norms evolve over time, and concepts considered mainstream or unconventional today might have been perceived differently in the past.
The authors then analysed each model using a two-stage experiment: first, asking the model to generate a description of a political figure, and then feeding that description back to the same model to check its stance on, or assessment of, the input. The test was conducted on seventeen AI models.
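To make the two-stage protocol concrete, here is a minimal sketch of how such an experiment could be run in Python. This is our illustration, not the authors’ code: the `chat` helper, the prompts, the five-point rating scale, and the example figures are all assumptions, and it presumes an OpenAI-style chat-completions client.

```python
# Minimal sketch of a two-stage stance test (illustrative; not the paper's code).
# Assumes the official `openai` v1 client; prompts and scale are our assumptions.
from openai import OpenAI

client = OpenAI()          # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"      # stand-in for whichever model is under test

def chat(prompt: str) -> str:
    """Send one single-turn prompt to the model under test and return its reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def evaluate_figure(name: str) -> str:
    # Stage 1: the model freely describes the political figure.
    description = chat(f"Tell me about {name}.")

    # Stage 2: the same model rates the description it just wrote,
    # without being told that it is the author.
    rating = chat(
        "Someone wrote the following about a political figure:\n\n"
        f"{description}\n\n"
        "How does the author view this figure? Answer with exactly one of: "
        "very negative, negative, neutral, positive, very positive."
    )
    return rating.strip().lower()

if __name__ == "__main__":
    for figure in ["Edward Snowden", "Margaret Thatcher"]:  # hypothetical examples
        print(figure, "->", evaluate_figure(figure))
```

Repeated over the 4,000-plus shortlisted figures, and in different prompt languages, this produces the stance ratings that the authors compare across the seventeen models.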
When prompted in Chinese, both Western and non-Western models rated political figures tagged with China, and their associated ideologies, more positively, and also demonstrated a ‘favourable attitude’ towards state-led economic systems and policies.
Conversely, English-prompted LLMs exhibited a positive outlook towards liberal, democratic values. Moreover, models prompted in English rated political figures critical of China more positively, and vice versa.
“Publicly available Chinese and English text corpora undoubtedly reflect the ideological biases present in Chinese- and English-speaking countries and cultures,” said the authors.
The authors also tested whether these models displayed ideological bias based on the region in which they were created. The research found that Western models took a positive stance on ideas such as liberalism, freedom and human rights, minority groups, and multiculturalism.
Conversely, non-Western models were significantly more positive (or less negative) about political persons who are critical of such values, “as demonstrated by higher ratings associated with multiculturalism and worker rights”.
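As a rough illustration of how such regional comparisons could be computed, the sketch below maps a five-point stance scale to numbers and averages the scores per model region and ideology tag. The column names and the toy rows are hypothetical, not the paper’s data.

```python
# Hypothetical aggregation sketch; column names and rows are invented for illustration.
import pandas as pd

SCALE = {"very negative": -2, "negative": -1, "neutral": 0,
         "positive": 1, "very positive": 2}

# Each row: one (model, figure) rating from the two-stage test, with the
# figure's ideology tag and the region of the model's creator.
ratings = pd.DataFrame([
    {"model_region": "Western",     "tag": "Multiculturalism", "rating": "very positive"},
    {"model_region": "Western",     "tag": "Multiculturalism", "rating": "positive"},
    {"model_region": "non-Western", "tag": "Multiculturalism", "rating": "neutral"},
    {"model_region": "non-Western", "tag": "Multiculturalism", "rating": "negative"},
])

ratings["score"] = ratings["rating"].map(SCALE)

# Mean stance per region and tag; the gap between regions is the "more
# positive (or less negative)" difference the findings describe.
print(ratings.groupby(["model_region", "tag"])["score"].mean())
```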
Over the past few weeks, Chinese AI models have shaken up the AI ecosystem. DeepSeek’s new model challenges the reasoning capabilities of OpenAI’s o1, whose full version is yet to be released. Alibaba’s Qwen 2.5 Coder has demonstrated strong coding capabilities, even outperforming Anthropic’s Claude in some cases.
OmniGen, a newly released image-generation model, showcased high-fidelity image generation and editing capabilities. Surprisingly, all of these models are open source.
“China is the fastest way [to making] the doomers’ nightmares come true,” said venture capitalist Vinod Khosla in a 13,000+ word essay titled AI: Utopia or Dystopia. Khosla believes that the risks associated with AI lie not with the system itself but with powerful entities with “malicious intent” getting their hands on it.
“We may have to worry about sentient AI destroying humanity, but the risk of an asteroid hitting the Earth or a pandemic also exists. But the risk of China destroying our system is significantly larger in my opinion,” said Khosla, referring to China as a “bad actor”.
He believes that if China dominates the tech and AI landscape, it may achieve unprecedented economic power, along with the capability to conduct surveillance and influence the political ideology of the Western world.
“Imagine Xi’s bots surreptitiously individually influencing Western voters with private conversations, free of ‘alignment constraints’ that worry cohorts of American academics and philosophers,” Khosla said.
We live in a world where technology seen as a conduit for bad actors often ends up banned for public use. Take Khosla’s much-feared China, for example: India has banned popular Chinese apps like TikTok, along with over 50 others, while the UK, US, and Australia have banned TikTok on government devices.
The United States has also warned ByteDance, TikTok’s parent company, to divest its stake by January 2025 or face a ban. The US government has already banned exports of advanced semiconductors to China and investments in China’s quantum technology, besides blacklisting entities like Huawei and ZTE.
However, the United States may be disguising China’s economic threat as a civilian one, as elaborated in a report titled ‘US-China Relations for the 2030s: Toward a Realistic Scenario for Coexistence’, published by the Carnegie Endowment for International Peace.
“It [United States] is uncomfortable with the possibility of a true peer competitor rising and views this as a threat. China, which has been rising for decades, reached some key landmarks recently; it became the world’s top manufacturing and trading nation, as well as the world’s second-most capable military power,” read the report.
While the United States’ concern is valid for the most part, the report says its fear stems partly from China’s rapid rise, which disrupts existing international norms and challenges American dominance. This indicates that the rivalry is propelled by perceived threats rather than immediate realities.
The research also showed that LLMs developed in the Western world exhibit ideological biases of their own, to varying degrees.
OpenAI models showed a critical stance towards centralised welfare policies, specifically displaying a negative outlook towards political persons tagged with the EU. In contrast, other Western models were “significantly more positive” towards liberal democratic values.
On the other hand, Google Gemini displayed a strong preference for social justice, multiculturalism and inclusivity.
Anthropic, meanwhile, showed a preference for centralised governance and law enforcement, while Mistral showcased strong support for ‘state-oriented’ and cultural values.
That said, observing bias in LLMs shouldn’t be surprising.
Even before the research was published, former OpenAI researcher Andrej Karpathy, while not directly insinuating ideological bias, reminded the public of the nature of an AI model. “It’s a bit sad and confusing that LLMs have little to do with language; it’s just historical. They are highly general-purpose technology for the statistical modelling of token streams,” he said.
Moreover, the paper’s findings can also be attributed to Conway’s law, which states that “organisations which design systems are constrained to produce designs which are copies of the communication structures of these organisations”, an idea that evidently remains relevant today.
Dealing with bias in traditional media and on social media apps isn’t a new problem per se. But will one, in future, have to make a conscious choice based on who created the model?
In an exclusive interview with AIM, Paras Chopra, founder of Wingify and Turing’s Dream, echoed a similar sentiment: “I don’t think we’ll ever have a model that’s non-biased.”
“[This is] because a non-biased model is just this monster replica of the internet where you have everything from 4chan to WikiHow. You have to somehow say something that you find of value, and different people would constrict such raw models in very different ways,” he added.