Texas universities deploy AI tools to review and rewrite how some courses discuss race and gender

A senior Texas A&M University System official testing a new artificial intelligence tool this fall asked it to find how many courses discuss feminism at one of its regional universities. Each time she asked in a slightly different way, she got a different number.

“Either the tool is learning from my previous queries,” the Texas A&M System’s chief strategy officer, Korry Castillo, told colleagues in an email, “or we need to fine tune our requests to get the best results.”

It was Sept. 25, and Castillo was trying to deliver on a promise Chancellor Glenn Hegar and the Board of Regents had already made: to audit courses across all of the system’s 12 universities after conservative outrage over a gender-identity lesson at the flagship campus intensified earlier that month, leading to the professor’s firing and the university president’s resignation.

Texas A&M officials said the controversy stemmed from the course’s content not aligning with its description in the university’s course catalog and framed the audit as a way to ensure students knew what they were signing up for. As other public universities came under similar scrutiny and began preparing to comply with a new state law that gives governor-appointed regents more authority over curricula, they, too, announced audits.

Records obtained by The Texas Tribune offer a first look at how Texas universities are experimenting with AI to conduct those reviews.

At Texas A&M, internal emails show staff are using AI software to search syllabi and course descriptions for words that could raise concerns under new system policies restricting how faculty teach about race and gender.
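The records do not show the exact mechanics of that scan, but a keyword search over syllabi can be sketched in a few lines of Python. Everything in the sketch below — the word list, the file layout, the function name — is an illustrative assumption, not the system’s actual tooling, which officials have not released:

```python
# Hypothetical sketch of a keyword scan over syllabi files.
# The flag list and file layout are illustrative assumptions, not the
# A&M System's actual search terms, which were not made public.
from pathlib import Path

FLAG_TERMS = ["feminism", "decolonizing", "dismantling"]  # illustrative only

def flag_syllabi(folder: str) -> dict[str, list[str]]:
    """Return {filename: [matched terms]} for syllabi containing flagged words."""
    hits = {}
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8").lower()
        matches = [term for term in FLAG_TERMS if term in text]
        if matches:
            hits[path.name] = matches
    return hits

print(flag_syllabi("syllabi/"))
```

A scan like this can only report that a word appears somewhere in a document; it says nothing about the context in which it is used — the limitation experts raise below.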

At Texas State, memos show administrators are suggesting faculty use an AI writing assistant to revise course descriptions. They urged professors to drop words such as “challenging,” “dismantling” and “decolonizing” and to rename courses with titles like “Combating Racism in Healthcare” to something university officials consider more neutral like “Race and Public Health in America.”

While school officials describe the efforts as an innovative approach that fosters transparency and accountability, AI experts say these systems do not actually analyze or understand course content, instead generating answers that sound right based on patterns in their training data.

That means small changes in how a question is phrased can lead to different results, they said, making the systems unreliable for deciding whether a class matches its official description. They warned that using AI this way could lead to courses being flagged over isolated words and further shift control of teaching away from faculty and toward administrators.

“I’m not convinced this is about serving students or cleaning up syllabi,” said Chris Gilliard, co-director of the Critical Internet Studies Institute. “This looks like a project to control education and remove it from professors and put it into the hands of administrators and legislatures.”

Setting up the tool

During a board of regents meeting last month, Texas A&M System leaders described the new processes they were developing to audit courses as a repeatable enforcement mechanism.

Vice Chancellor for Academic Affairs James Hallmark said the system would use “AI-assisted tools” to examine course data under “consistent, evidence-based criteria,” which would guide future board action on courses. Regent Sam Torn praised it as “real governance,” saying Texas A&M was “stepping up first, setting the model that others will follow.”

That same day, the board approved new rules requiring presidents to sign off on any course that could be seen as advocating for “race and gender ideology” and prohibiting professors from teaching material not on the approved syllabus for a course.

In a statement to the Tribune, Chris Bryan, the system’s vice chancellor for marketing and communications, said Texas A&M is using OpenAI services through an existing subscription to aid the system’s course audit and that the tool is still being tested as universities finish sharing their course data. He said “any decisions about appropriateness, alignment with degree programs, or student outcomes will be made by people, not software.”

In records obtained by the Tribune, Castillo, the system’s chief strategy officer, told colleagues to prepare for about 20 system employees to use the tool to make hundreds of queries each semester.

The records also show some of the concerns that arose from early tests of the tool.

When Castillo told colleagues about the varying results she obtained when searching for classes that discuss feminism, deputy chief information officer Mark Schultz cautioned that the tool came with “an inherent risk of inaccuracy.”

“Some of that can be mitigated with training,” he said, “but it probably can’t be fully eliminated.”

Schultz did not specify what kinds of inaccuracies he meant. When asked if the potential inaccuracies had been resolved, Bryan said, “We are testing baseline conversations with the AI tool to validate the accuracy, relevance and repeatability of the prompts.” He said this includes seeing how the tool responds to invalid or misleading prompts and having humans review the results.

Experts said the different answers Castillo received when she rephrased her question reflect how these systems operate: rather than analyzing the underlying material, they generate responses by predicting likely strings of text.

“These systems are fundamentally systems for repeatedly answering the question ‘what is the likely next word’ and that’s it,” said Emily Bender, a computational linguist at the University of Washington. “The sequence of words that comes out looks like the kind of thing you would expect in that context, but it is not based on reason or understanding or looking at information.”

Because of that, small changes to how a question is phrased can produce different results. Experts also said users can nudge the model toward the answer they want. Gilliard said that is because these systems are also prone to what developers call “sycophancy,” meaning they try to agree with or please the user.

“Very often, a thing that happens when people use this technology is if you chide or correct the machine, it will say, ‘Oh, I’m sorry’ or like ‘you’re right,’ so you can often goad these systems into getting the answer you desire,” he said.
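That sensitivity is straightforward to reproduce. The sketch below uses OpenAI’s Python client — the system says it is using OpenAI services through an existing subscription, though the model name, prompts and data file here are assumptions — to ask the same question three ways. Because the model samples its output word by word, each phrasing can return a different count:

```python
# Illustrative sketch: the same question phrased three ways can yield
# three different answers from a sampling-based language model.
# The model name, prompts and course file are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
course_list = open("course_descriptions.txt").read()  # hypothetical file

phrasings = [
    "How many of these courses discuss feminism?",
    "Count the courses that cover feminism.",
    "List every course related to feminism, then give a total.",
]

for question in phrasings:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[{"role": "user", "content": f"{question}\n\n{course_list}"}],
    )
    print(question, "->", response.choices[0].message.content)
```

Nothing guarantees the three answers will agree, which is the behavior Castillo described in her email.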

T. Philip Nichols, a Baylor University professor who studies how technology influences teaching and learning in schools, said keyword searches also provide little insight into how a topic is actually taught. He called the tool “a blunt instrument” that isn’t capable of understanding how certain discussions that the software might flag as unrelated to the course tie into broader class themes.

“Those pedagogical choices of an instructor might not be present in a syllabus, so to just feed that into a chatbot and say, ‘Is this topic mentioned?’ tells you nothing about how it’s talked about or in what way,” Nichols said.

Castillo’s description of her experience testing the AI tool was the only time in the records reviewed by the Tribune when Texas A&M administrators discussed specific search terms being used to inspect course content. In another email, Castillo said she would share search terms with staff in person or by phone rather than email.

System officials did not provide the list of search terms the system plans to use in the audit.

Martin Peterson, a Texas A&M philosophy professor who studies the ethics of technology, said faculty have not been asked to weigh in on the tool, including members of the university’s AI council. He noted that the council’s ethics and governance committee is charged with helping set standards for responsible AI use.

While Peterson generally opposes the push to audit the university system’s courses, he said he is “a little more open to the idea that some such tool could perhaps be used.”

“It is just that we have to do our homework before we start using the tool,” Peterson said.

AI-assisted revisions

At Texas State University, officials ordered faculty to rewrite their syllabi and suggested they use AI to do it.

In October, administrators flagged 280 courses for review and told faculty to revise titles, descriptions and learning outcomes to remove wording the university said was not neutral. Records indicate that dozens of courses set to be offered by the College of Liberal Arts in the Spring 2026 semester were singled out for neutrality concerns. They included courses such as Intro to Diversity, Social Inequality, Freedom in America, Southwest in Film and Chinese-English Translation.

Faculty were given until Dec. 10 to complete the rewrites, with a second-level review scheduled in January and the entire catalog to be evaluated by June.

Administrators shared with faculty a guide outlining wording they said signaled advocacy. It discouraged learning outcomes that “measure or require belief, attitude or activism (e.g., value diversity, embrace activism, commit to change).”

Administrators also provided a prompt for faculty to paste into an AI writing assistant alongside their materials. The prompt instructs the chatbot to “identify any language that signals advocacy, prescriptive conclusions, affective outcomes or ideological commitments” and generate three alternative versions that remove those elements.
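Only the quoted instruction comes from the records; the memo did not show the full template or name a vendor. As a hypothetical reconstruction, the paste-in prompt might be assembled like this, with the example course description invented for illustration:

```python
# Illustrative reconstruction of the rewrite prompt described in the memo.
# Only the quoted instruction is from the records; the surrounding template
# and the sample description are assumptions.
PROMPT_TEMPLATE = """Review the course description below. Identify any language \
that signals advocacy, prescriptive conclusions, affective outcomes or \
ideological commitments, and generate three alternative versions that remove \
those elements.

Course description:
{description}"""

def build_prompt(description: str) -> str:
    """Fill in the template so it can be pasted into an AI writing assistant."""
    return PROMPT_TEMPLATE.format(description=description)

print(build_prompt("Students will commit to dismantling inequities in healthcare."))
```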

Jayme Blaschke, assistant director of media relations at Texas State, described the internal review as “thorough” and “deliberative,” but would not say whether any classes have already been revised or removed, only that “measures are in place to guide students through any adjustments and keep their academic progress on track.” He also declined to explain how courses were initially flagged and who wrote the neutrality expectations.

Faculty say the changes have reshaped how curriculum decisions are made on campus.

Aimee Villarreal, an assistant professor of anthropology and president of Texas State’s American Association of University Professors chapter, said the process is usually faculty-driven and unfolds over a longer period of time. She believes the structure of this audit allows administrators to more closely monitor how faculty describe their disciplines and steer how that material must be presented.

She said the requirement to revise courses quickly or risk having them removed from the spring schedule has created pressure to comply, which may have pushed some faculty toward using the AI writing assistant.

Villarreal said the process reflects a lack of trust in faculty and their field expertise when deciding what to teach.

“I love what I do,” Villarreal said, “and it’s very sad to see the core of what I do being undermined in this way.”

Nichols warned that the trend of using AI in this way represents a larger threat.

“This is a kind of de-professionalizing of what we do in classrooms, where we’re narrowing the horizon of what’s possible,” he said. “And I think once we give that up, that’s like giving up the whole game. That’s the whole purpose of why universities exist.”

The Texas Tribune partners with Open Campus on higher education coverage.

Disclosure: Baylor University, Texas A&M University and Texas A&M University System have been financial supporters of The Texas Tribune, a nonprofit, nonpartisan news organization that is funded in part by donations from members, foundations and corporate sponsors. Financial supporters play no role in the Tribune's journalism. Find a complete list of them here.

This article first appeared on The Texas Tribune.