Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ From Powerful Models

A directive from the National Institute of Standards and Technology eliminates mention of “AI safety” and “AI fairness.”
Ernie Stanton · 17 days ago · 3 minute read


Shifting Sands in AI Safety: NIST Refocuses AISI Priorities

From Safety and Fairness to "Ideological Bias"

The National Institute of Standards and Technology (NIST) has issued a revised directive to its AI Safety Institute (AISI) partners, significantly altering the focus of their collaborative research. Terms like "AI safety," "responsible AI," and "AI fairness," previously central to the AISI's mission, have been conspicuously removed.

Instead, the new agreement emphasizes "reducing ideological bias" to promote "human flourishing and economic competitiveness." This shift raises questions about the future direction of AI safety research in the United States and the potential implications for marginalized groups.

Concerns Over Discrimination and Misinformation

Previously, the AISI encouraged research into identifying and mitigating biases in AI models related to sensitive attributes like gender, race, age, and socioeconomic status. The updated agreement omits this focus, raising concerns about the potential for unchecked algorithmic discrimination.

Furthermore, the previous emphasis on tools for "authenticating content," "tracking provenance," and "labeling synthetic content" — vital in combating misinformation and deepfakes — has been removed. This change suggests a diminished prioritization of these crucial areas.

"The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself," notes a researcher affiliated with the AISI, speaking anonymously for fear of reprisal.

This researcher warns that overlooking these issues could lead to a future where AI systems are "unfair, discriminatory, unsafe, and deployed irresponsibly," particularly impacting those outside the tech elite.

"America First" and the Question of Human Flourishing

The new agreement introduces a distinct focus on strengthening America's global AI position. One working group is tasked with developing testing tools specifically for this purpose, highlighting a shift towards national competitiveness.

The vague notion of "human flourishing" as a driving principle also raises eyebrows. "What does it even mean for humans to flourish?" questions another researcher who has previously collaborated with the AISI. The directive offers no definition, leaving the principle open to broad interpretation and potential misdirection.

Musk, Bias, and the Department of Government Efficiency

The news comes amid Elon Musk's criticism of AI models developed by OpenAI and Google, which he has accused of being both "racist" and "woke." Musk's influence, coupled with his leadership of the Department of Government Efficiency (DOGE) under President Trump, adds another layer of complexity to the situation.

DOGE's cost-cutting and restructuring efforts across the US government, including NIST and AISI, have led to numerous dismissals and a perceived hostile environment for dissenting voices.

While research suggests that political bias in AI models can affect users across the political spectrum, the new NIST directive's focus on "ideological bias," set against the backdrop of DOGE's restructuring of the agency, raises questions about the politicization of AI research and development.