The Judge Is in Session: AI Op Ed with Dr. Aaron Boyson

This article was originally published in the March 2026 print edition of The Bark, distributed at the University of Minnesota Duluth campus.

Aaron R. Boyson, PhD

Associate Professor & Head of the Department of Communication

“All technologies should be assumed guilty until proven innocent,” according to decorated World War II veteran and conservationist David Brower.  Now consider an altogether different quote about technology from Elon Musk, taken from a recent Axios interview.  The subject, by the way, was p(doom), which denotes the probability that AI will result in an existential calamity, up to and including human extinction.  You should know, Musk put his own p(doom) at .20, or 20%.  He then said,

“It's like, I think it'll be good. Most likely it'll be good... But, I somewhat reconciled myself to the fact that even if it wasn't going to be good, I'd at least like to be alive to see it happen.”

How starkly different are these two technological philosophies?  Brower assumed technology could count as progress only after proving itself first, while Musk seems to believe all technology is progress by definition, and therefore either inevitable or immoral to stop.

How should the rest of us reconcile these two views?

Before answering, you should know that Musk is not alone.  A founding father of AI, Geoffrey Hinton, estimates doom similarly and believes we are moving far too fast with AI for our species to be sustainable.  So does Demis Hassabis, CEO of Google DeepMind, and so does Anthropic co-founder Dario Amodei, who puts p(doom) at .25.

Americans feel likewise.  In December, a YouGov poll found that 77% of the rest of us were very or somewhat concerned that AI poses a threat to humanity.  Seldom do the experts and the rest of us align about the threat of a new technology, maybe not since the atomic bomb, an oddity that deserves more attention than it gets.  Despite the concern, statistically speaking, we are nevertheless enlisted in a game of Russian roulette with our entire species.

In 30 years, I have never been encouraged to teach a class with a so-called “tool” that has a 1 in 5 chance of destroying humanity.  I can’t help but feel we are playing a similar game with education.  Let me explain with the help of five things.

Thing 1 - Canary in a coal mine

Nearly two years ago, I learned from a colleague that a student had come to the writing center at UMD for help writing an essay with AI, “so the professor wouldn’t know.”  I checked twice that I had heard it right.  The writing center faculty asked whether a reading had been assigned for the essay.  The student replied, “Yes,” but had not done the reading.

Apparently, I now have to write that cheating used to be a private affair, and wrong.  The student seems caught in a tug of war between Brower’s and Musk’s approaches to technology.

To be fair, the student had probably noticed how wildly our institution is handling AI, with units across campus and individual professors variously deploying or corralling it.  In College A, a Musk philosophy prevails; in College B, a Brower one.  Normally, I value variety in education, but the chaos seems to have convinced the student that academic integrity had degraded enough to ask for a faculty co-conspirator.

Faculty I talk to are no less confused than the students.  Some venture; some retreat.  Some use AI but do not allow their students to do so, inviting an unsavory perception of hypocrisy.

Despite the confusion, there is general agreement nationwide among those of us on the front lines.  A survey of more than 1,000 faculty by the American Association of Colleges & Universities last year found that faculty think AI 1) will decrease attention spans (83%), 2) will diminish critical thinking (90%), 3) will cause students to become over-reliant on it (95%), and 4) has already increased cheating (78%), with 57% saying it has increased cheating a lot.

Are we sufficiently concerned about the learning losses from cheating already happening in the Musk-like rollout of AI in higher education?  Administrators at nearly all levels of education seem to imagine AI is inviting us into a hot new romance, ignoring the visible warts.

Thing 2 - It's Einstein, stupid

This month, I learned of an AI companion named Einstein (wait for it).  If purchased, your Einstein AI companion will take any course delivered through Canvas for you.  Einstein will do all the things.  It will attend lectures for you, do your readings, do your assignments, make discussion posts, etc.  Notably, the same company makes an AI companion for me as a professor.  It will design, deliver, and evaluate an entire college course for me on Canvas, especially one delivered online.

Let it breathe for a minute.  A professor can teach a class they do not design, deliver, or evaluate, to a student who takes a class they never attend and to which they contribute not an ounce of thought.

If there is a more crystalline rock bottom for education than this scenario, I cannot envision it.  Is Canvas even still viable?  This moment feels like it demands an urgent response.

Thing 3 - Wordplay


I suspect one reason there is not a three-alarm fire about cheating is that we no longer clearly know what it even means.  Another is that there is no credible tool for detecting cheating with AI, though building one also depends on being able to define it.  Harold Innis said that one of the great powers of any communication technology is that it can change the character of symbols, the things we think with.

Einstein steals meaning from us, too.  It degrades what it means to “attend,” “do,” “read,” or even “take” a class.  Innis was trying to stop us from thinking about technologies as “tools,” in part because they wreak havoc on our language, which they do by altering the landscape of cognition.

Companionship and therapy are now the top two reasons people use generative AI, according to a recent report in the Harvard Business Review.  An article in the Monitor on Psychology called this sort of companionship “digitally fueled disconnection.”  There is a promise that sociable robots can reduce loneliness among the elderly, for example, many of whom actually are alone.  But the Pew Research Center finds that young people are the heaviest users, and the Monitor article reports that heavy use leads, tragically, to increased loneliness.

AI’s timing is dreadful because it arrives amid mental health and loneliness epidemics among college-age students, epidemics likely caused by the spread of social media.  Character.ai alone has some 20 million monthly users.  The sudden use of AI for “companionship” strains the word as we knew it, and thus the world as we have known it.  Suddenly, disconnection is a pathway to companionship.

Is our use of AI also straining what it means to learn?  Can our notions of learning outlast the smog of cheating, cognitive offloading, summarizing, and shortcutting?  Look closer: the prevailing conditions of attention deficit and poor mental health suggest learning is just as vulnerable as companionship to technological theft of meaning.

Thing 4 - Promises, promises

The promises that AI use can enhance education and learning seem tragically reminiscent of the digital screen revolution in education.  Meta-analyses of decades of empirical research show that the revolution came, all right, but that it reduced learning.  Neuroscientist Jared Horvath’s new book, Digital Delusion, shows that EdTech’s screen push in schools across the globe can explain why Gen Z will be the first generation in our history not to score higher than their parents on measures of cognitive ability.

Screens didn’t just break EdTech’s promises; they made things worse.  What’s the right response to AI in light of these findings?  Shouldn’t we “first do no harm” to learning?  Or is education less precious than medicine, and so undeserving of the oath?  Of all people, the guardians of education should be among the best stewards of technological adaptation rather than feverish technological adoption.

Meanwhile, chatbots and the like have begun the campaign to convince us that the human condition is a problem in need of a technological solution.  Einstein is perhaps the real canary in education, as it promises to solve the problem of humans all at once.  Clearly, at least to me, learning is already suffering death by a thousand smaller cuts to attention and memory.

Thing 5 - Beta testing en masse

There simply needs to be better proof of concept in education.  Instead, we are all AI beta testers.  AI products have been released to the public like an army of solutions in search of problems.  The most powerful invention in human history, adopted as if shot out of a cannon, may also be the first to be beta-tested even as it is released.  Saying that is unwise is an epic understatement.

Why must education enlist itself in the beta testing?  Are we that gullible to the myth of inevitability?  We already fell for it with screens.  What theories of learning guide the way now?  Is anyone using them?  Are learning theories enough, or even the right theories, for this technology?  Can any of them even be applied in the age of Einstein?

I do not mean to suggest AI should never be used in learning; that would be nearly as stupid as Einstein... the AI, that is.  I mean only that the guardrails were never put up, and the questions have not been sufficiently asked or answered by the folks who are supposed to be uniquely skilled at doing both.

There is a dire need for a thoughtful referendum on whether AI can be proven innocent in its takeover of education and learning, as Brower might have said.  More than any other, this technology needs such a trial in education.  The judge is in session; I wonder who among us is willing to pound the gavel.
