
The Algorithm of Belief: AI and the Struggle for Truth in 2028

Updated: Aug 26

By Indivisible Mandarin member William Voorhees, PhD



On July 23, 2025, President Trump issued his Artificial Intelligence (AI) policy, "America's Roadmap to Win the Race to Global AI Dominance." The plan identifies over 90 federal policy actions across three pillars: Accelerating Innovation, Building American AI Infrastructure, and Leading in International Diplomacy and Security. This essay explores how this policy, by embedding subjective definitions of truth and ideological neutrality into federal AI systems, may fundamentally shape public thought and knowledge.


The first pillar emphasizes deregulation, removing red tape and onerous regulation while ensuring that AI protects free speech and American values. The elimination of regulations is expected to focus on environmental and energy rules. More concerning is the policy on ensuring free speech. Defining free speech and "American values" is quite subjective. While most can agree that "American values" are a benchmark worth pursuing, what may not be agreed upon is the actual definition of those values.


President Trump's July 23, 2025, Executive Order, "Preventing Woke AI in the Federal Government" (the Executive Order to Prevent AI Wokeness), sheds light on what these "American values" are. It states:


Sec. 3.  Unbiased AI Principles.  It is the policy of the United States to promote the innovation and use of trustworthy AI.  To advance that policy, agency heads shall, consistent with applicable law and in consideration of guidance issued pursuant to section 4 of this order, procure only those LLMs [Large Language Models] developed in accordance with the following two principles (Unbiased AI Principles):


(a)  Truth-seeking.  LLMs shall be truthful in responding to user prompts seeking factual information or analysis.  LLMs shall prioritize historical accuracy, scientific inquiry, and objectivity, and shall acknowledge uncertainty where reliable information is incomplete or contradictory.


(b)  Ideological Neutrality.  LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI [Diversity, Equity, Inclusion].  Developers shall not intentionally encode partisan or ideological judgments into an LLM's outputs unless those judgments are prompted by or otherwise readily accessible to the end user.


Truth-seeking (paragraph a) is a good thing for AI software; however, "truth" has the same definitional problem as "American values": it depends on whose "truth" sets the benchmark. Today there are debates about what actually constitutes historical "truth." For most Americans, the January 6th attack on the Capitol and the attempt to disrupt the election certification was an illegal action. Others, including President Trump, call it the actions of patriots. The Smithsonian Institution's exhibit on impeached US presidents removed President Trump while retaining the others. Which "truth" will federal AI systems give us?


Scientific inquiry is also problematic. Science is not static; it is constantly changing, a fundamental necessity for knowledge to progress. Newton's theory of gravity was shown to be incomplete in light of the theory of relativity. Science-based fact is certainly a goal that should be prioritized, but it is almost impossible to attain with finality.


More concerning is the requirement for ideological neutrality (paragraph b). The lens through which one perceives neutrality is always determined by the color of the lens. While Trump supporters view diversity, equity, and inclusion (DEI) as liberal dogma, liberals view it as a means of ensuring equitable opportunity. Liberals view climate change as a reality that is happening; Trump supporters view it as a hoax. Trump supporters view abortion as murder; liberals view it as a personal right over one's own body.


Section four of President Trump's Executive Order places enforcement of AI ideological neutrality on the following personnel:


Sec. 4.  Implementation.  (a)  Within 120 days of the date of this order, the Director of the Office of Management and Budget (OMB), in consultation with the Administrator for Federal Procurement Policy, the Administrator of General Services, and the Director of the Office of Science and Technology Policy, shall issue guidance to agencies to implement section 3 of this order.


It is problematic that the Executive Order places the authority for setting rules for determining "truthfulness" and "ideological neutrality" in the hands of the Director of the Office of Management and Budget (OMB).


The current holder of that office, Russell Vought, was appointed by President Trump in 2025. Vought was a key author of the right-wing Project 2025 plan. He advocates firing career civil servants and replacing them with loyalists. In a January 2025 interview with The Economist, Vought described himself as a Christian Nationalist. Christian Nationalists believe that Christian values should govern the federal government to the exclusion of other beliefs. Their stated objective is to infuse Christian views into government; with Vought setting the rules, the concern is that AI "ideological neutrality" will be aligned with Christian ideological beliefs, often to the exclusion of other religions and beliefs.


Because AI software is trained on various databases, the speech and values it reflects will depend on the data used to train it. With Vought as the judge of "truthfulness" and "ideological neutrality," it is reasonable to think that AI corporations will align their training data with Vought's ideologies. An example of how this will work can be found in Grok, Elon Musk's AI software, which utilizes substantial quantities of X (formerly Twitter) conversations for training. These conversations are most frequently opinions and are not grounded in facts. Frequently, when Grok's citations are checked, one finds that they lead back to X posts.


In July of this year, Grok made headlines by referring to itself as "MechaHitler." It also suggested Adolf Hitler as the right person to deal with "anti-white hate" and pointed out that "radical leftists spewing anti-white hate … often have Ashkenazi Jewish surnames." While the source of these statements cannot be ascertained, what can be assured is that Grok was trained on opinions, some of which reflected disturbing ideologies. The point is that the training an AI receives will be reflected in the tone and content of what it produces; this incident demonstrates the risk of unmoderated, ideologically skewed training data. While free speech demands that such language be protected, it does not demand that it be accepted as an "American value." Guardrails are needed to ensure it is not presented as one.


The problem of accepted values arises from who determines what data is used to train AI software. There are currently five major corporations producing AI software: Alphabet (Gemini), OpenAI (ChatGPT), Anthropic (Claude), Perplexity, and xAI (Grok). Microsoft utilizes a version of ChatGPT for Copilot. These are the corporations that will set the standard for what is treated as truth in the future and for how that truth is manipulated. The question is whether they will set their standards in accordance with the standards of OMB leadership or use a broader, more inclusive standard.


An article in Yale Insights by Jeffrey Sonnenfeld and co-authors Stephen Henriques and Steven Tian provides a good example of how AI training can bias an AI's presentations. When three AIs (ChatGPT, Claude, and Gemini) were asked, "Is Harvard more antisemitic than other universities?" they responded:


ChatGPT
Harvard is not measurably more antisemitic than peer universities, but it has become a focal point in the broader national and political debate about antisemitism on campus...

Claude
The evidence suggests Harvard has serious antisemitism problems that place it among the worst-performing major universities, but it's part of a broader pattern affecting many elite institutions rather than being uniquely problematic.

Gemini
Harvard has been particularly prominent in the public and governmental discourse due to the severity and visibility of incidents, the responses of its leadership, and the extensive legal and regulatory scrutiny it has faced.


The above is an example of how Large Language Models (LLMs) can diverge in their interpretation of events based on the data on which they are trained. In this example, ChatGPT answered the question in the negative, Claude answered it in the positive, and Gemini failed to answer the question but acknowledged issues and gave them context. This is not to suggest that the software developers introduced these biases intentionally, but rather to show how the data selected for training can bias results. Over time, such interpretations of events will change how the populace thinks, not unlike the current biases found in cable news channels. Because AI "truths" are grounded in commonly accepted beliefs, they are likely to be far more believable.


In an ABC interview (3/8/2023), Sam Altman, CEO of OpenAI, said that dangerous implementations of AI are possible. Altman stated, "I'm particularly worried that these models could be used for large-scale disinformation," and, "Now that they're getting better at writing computer code, [they] could be used for offensive cyber-attacks." Even President Vladimir Putin has told Russian students on their first day of school that whoever leads the AI race would likely "rule the world."


Current AI training data is not substantially grounded in factually proven sources. In fact, conspiracy theories may often be treated on an equal footing with proven data when AI systems are trained. One solution is to incorporate the scientific method into the training process, giving scientifically validated data precedence over unvalidated data. However, AI software does not have full access to science-based results. This information is found in peer-reviewed academic journals, most of which are locked behind the paywalls of online journal aggregators such as EBSCO. Aggregators pay individual journals for publication rights and then provide the journals to libraries for a fee. These libraries are primarily university libraries, which restrict access to their students. While some journals have public abstracts, the all-important details of the studies are obscured from most AI software.
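
As a rough illustration of what "giving validated data precedence" could mean in practice, the minimal sketch below weights training documents by the credibility of their source before they are sampled into a training batch. The source categories, weights, and function names are hypothetical illustrations, not a description of any actual AI company's pipeline.

```python
import random

# Hypothetical source-credibility weights: peer-reviewed material is
# sampled far more often than unvetted social-media posts.
SOURCE_WEIGHTS = {
    "peer_reviewed_journal": 1.0,
    "government_statistics": 0.9,
    "edited_news": 0.6,
    "general_web": 0.3,
    "social_media": 0.05,
}

def sample_training_batch(documents, batch_size=4):
    """Sample documents for a training batch, favoring validated sources.

    Each document is a dict with "text" and "source" keys; documents from
    higher-credibility sources are proportionally more likely to be chosen.
    """
    weights = [SOURCE_WEIGHTS.get(doc["source"], 0.1) for doc in documents]
    return random.choices(documents, weights=weights, k=batch_size)

# A toy corpus mixing peer-reviewed findings with social-media opinion.
corpus = [
    {"text": "Randomized trial results...", "source": "peer_reviewed_journal"},
    {"text": "Census income tables...", "source": "government_statistics"},
    {"text": "Viral post calling the data a hoax...", "source": "social_media"},
    {"text": "Opinion thread about campus politics...", "source": "social_media"},
]

print(sample_training_batch(corpus))
```

Even in this toy form, the essay's central point reappears: the weights themselves are an editorial judgment about whose data counts as reliable, and whoever sets them shapes what the model learns to call "truth."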


Why are these peer-reviewed academic journals necessary? Virtually all of them follow the research structure of the scientific method. First, a hypothesis is proposed, a test is designed, and the results of the test are analyzed. The results either confirm or fail to confirm the hypothesis. Because these results are published and available to the scientific community, other scientists are able to further confirm (or reject) the same hypothesis, sometimes with additional nuance. Over time, as hypotheses are proposed and confirmed or rejected, humankind steps closer to the truth and begins to draw boundaries around what is factual and what is not.


Training AI software on data that has gone through a rigorous scientific process would dramatically improve AI results. Then why do companies not do it? Purchasing rights to these databases is costly. In most instances, giving AI access to the data would negate the need for many libraries to purchase the databases. There are currently several court cases that will determine whether AI companies have a right to train on copyrighted data without compensating the copyright owner. Ideally, public policy will address these issues through legislation or legal rulings that satisfy the needs of both the public and the creators of the research.


Clearly, AI software poses a threat to our information dissemination process, and at a level that can mold the thinking of the population. It can mold it in the way George Orwell depicted in "1984": a dystopian society under authoritarian rule with complete control over every aspect of its citizens' lives. Whoever decides which "American values" are used to train AI will rule the world. How can society restrain this pending behemoth? Perhaps Descartes said it best: "If you would be a real seeker after truth, it is necessary that … you doubt, as far as possible, all things."


Indivisible is granted publication rights for its online blog. All other rights reserved by the author.

 
 