Artificial Intelligence and the End of Human Responsibility

In what seemed like a moment, artificial intelligence took the world by storm. Almost before we had a chance to blink, AI manifested itself everywhere from college campuses to search engines. From all sides, we are now bombarded by claims of the power AI will have in shaping the future and our daily lives, from how we take and edit pictures to how we find information. In this way, artificial intelligence has catapulted itself from a fringe issue to the ethical question of our age.

What is stunning to me, amidst this AI blitz, is the contrast between people’s cautious and often pessimistic perceptions of AI and how quickly all barriers to its complete and widespread implementation seem to be melting away.

Apple HQ in Silicon Valley – Visit San Francisco

It is as if a small but powerful faction of the US populace—namely its tech sector—is working overtime from its isolated Silicon Valley island to sweep AI into all things before there is a chance to fully assess the ethical implications of doing so. This should immediately raise suspicions.

To start, there are undoubtedly benefits to the use of AI in some sectors. In one stunning example, researchers utilized the technology to read an ancient Pompeiian scroll, unlocking the secrets of history in a way that would not otherwise have been possible. In another fascinating example, marine biologists have incorporated AI into understanding the language of whales, orcas, and other marine animals. It has also been tested as a tool for more accurate medical diagnoses. Clearly, in these circumstances artificial intelligence is being used as a tool for the extension of human understanding, science, and discovery of the natural world. When used in this way, it is merely the next in a long line of technologies to do these things, just as calculators, computers, and the internet have done.

These uses, I imagine, are generally less controversial and more favored by the American public at large. Unfortunately for us, however, they form just a small subset of the new AI uses the tech industry is attempting to force upon us. There is a far larger, far more dystopian subset of this technology that warrants discussion—one in which the powers that be propose AI not merely to assist human utility, but to replace human thought, artistic ability, and even reality. This is a wholly different matter from the first, ethically and philosophically. Let us consider these three in reverse order.

How AI Threatens to Replace Reality

First, to how AI is replacing reality. This is perhaps best exemplified in recent commercials advertising a new phone that has come to market. As its greatest selling point, this phone purports to take “perfect pictures.” Was your picture blurry? Now, AI can clarify it instantly. Was the natural lighting on your mountaintop picture just not ideal? Now, you can interpose a beautiful blue sky worthy of a Mediterranean vacation. Did someone in your group photo blink or turn away? Now, you can overlay their perfect smiling face as if it never happened.

In every one of these cases, artificial intelligence is being used to create memorable scenes that never existed. By using AI in this way, generations of family members may see and build their memories from family photos of moments that never actually happened. Scrolling through their phones together, groups of friends may reminisce over pictures that do not show what they actually saw. In the arms race for likes on social media, people may share idealized photographs in the struggle to appear interesting and relevant.

Want the perfect picture? AI promises to give it all

Zia Villanueva, Pinterest

These touch upon a far deeper issue within society today: our discomfort with imperfection. We strive to have the “perfect” bodies, “perfect” jobs, and “perfect” lifestyles—or, what is more often the case, the appearance of having these things. Artificial intelligence has the capability to aggravate and amplify these sentiments and to perpetuate these falsehoods to levels never seen before, and this is dangerous to both our mental health and the state of our society in ways that should be obvious. The encroachment of AI editing tools is just one piece of a worrying trend of attempts to “escape reality” through other means, such as Meta’s or Apple’s virtual reality headsets.

Reflect upon advertisements for Apple’s Vision Pro, where users sit upon their couches staring at the wall with the headset on. Cameras on the outside of the equipment feed in a livestream of the wall and the objects already there, and the software augments buttons, apps, and other tools on top. While the technology is a stunning example of human ingenuity, let us ask ourselves: is it really worth it to spend thousands of dollars on a headset that merely shows us what is already in front of us in a new and flashy way? Why do we feel the need to enhance the firm substance-and-matter reality around us in the first place? What are we truly seeking? And…is there a deeper reality, one that has already manifested itself in places like philosophy and Scripture, that we are hiding from? These questions are large and worthy of future discussion on this platform, but I bring them up now to further my point.

Person wearing virtual reality headset – Getty Images

How AI Threatens to Replace Human Artistic Ability

The ability of tools like ChatGPT to generate realistic images and videos from a mere phrase has made artificial intelligence famous. For now, the technology is not perfect and AI-generated images can usually be picked out, but this is by no means guaranteed in the future. Yet the people I have talked to seem satisfied to dismiss the issue with a chuckle and something like “it looked real, but something seemed off,” and nothing more. But there is in fact much, much more at stake.

To begin with the obvious and most pressing issue, the potential for AI image generation to be used by nefarious and authoritarian parties to spread lies and misinformation is unbounded. There is at this time no widespread mechanism to identify and publicly flag false images, and that gap has yet to be tested by a truly widespread AI-generated falsehood. But the same people who constantly complain about “misinformation” on the internet seem strangely quiet when it comes to AI’s ability to propagate it, and this should again raise suspicions.

But beyond this lies another far deeper ethical issue, and that is about the place of the human spirit in artistic expression. Since the beginning of recorded history—and far before it—mankind has interpreted and expressed the world through art from the human mind and by human hands. This has been true from the earliest cave paintings to the protest of authoritarian Communist China by artist Ai Weiwei, and every generation of art has built upon those that came before it.

Cave Art from Lascaux Cave, France

N. Aujoulat/MCC-CNP

Pottery from Chinese dissident Ai Weiwei

Zachary Fagenson/Reuters/Landov

Allowing a robot, a machine, to do our artwork for us is an insult to all artists living and dead. We must never allow a machine to generate in a second what an artist painstakingly creates over months and years of precision, sweat, and revision. Further, notice the difference between the words we use for human art and AI art: creation versus generation. While a human creates new things, with a spark of the divine, artificial intelligence generates images by cutting and splicing existing artwork (most of it created by real artists in the past). Beyond raising questions of copyright for existing artists, this is plainly disrespectful to them.

Allowing a robot, a machine, to do our artwork for us is an insult to all artists living and dead.

To summarize, art is a realm that is too human to be usurped from human hands by robotic cogs, and we must not allow it to be so.

How AI Threatens to Replace Human Thought

And now for the last, and I believe, most dangerous threat of artificial intelligence: how the tech industry is pitching AI as a means for replacing human thought. Of all the threats discussed here, this is the one farthest from the “beneficial uses” discussed above.

When most people think about the risks of AI, they usually do so in terms of what jobs it will take. Most think along the lines of one of my coworkers during a discussion on the subject: “it is one thing when a robot does my dishes for me, and another when it tries to take my job.” This is undoubtedly a threat of AI in some cases, but for the purposes of this discussion I want to go deeper: to what happens when AI threatens our reasoning ability, the most human thing we have.

Robot doing a human task – Getty Images

For ages philosophers have upheld reason as the defining characteristic that separates men from the animals. As Aristotle famously said in his Politics, “man is by nature a political animal,” and our ability to reason and shape the world around us is what makes us unique. But now the technologies we have created are nearing the ability to make decisions for us, and many appear to want to roll over and accept this.

Let us consider a few examples. By far the most glaring is the way ChatGPT and other AI software have corrupted our education system. For the past two years, stories have abounded of students using the software to write their papers, code, presentations, and essays. Because we have pumped an inflated higher education system full of people who do not actually want to be there, it is not altogether surprising that our high schools and universities have a serious AI problem. Nor is it surprising when I speak to my friends still in higher education and hear their justifications for using AI to write their essays.

Half-baked justifications I often hear include: “I let ChatGPT write 90% of the essay and I put in the finishing touches and make it flow. That way it is still ‘my’ work.” Or: “I would have looked up all the same facts anyway. ChatGPT puts it all in one place for me.”

Students in school – Getty Images

But one must have pretty thick pride to believe work like this can still be rightly claimed as one’s own, and I do not think most of my friends really believe what they are saying. Most stunning to me, however, is when I hear professors saying the same thing.

Many teachers I talk to are “excited” about the “AI revolution” in education and dismissive of concerns that “somehow [their students’] education would be ruined by AI.” Others see controlling student AI usage as futile and believe it must be accepted wholesale. Often an adaptive approach is proposed, such as: “What we’re working on with them now is, once you’ve sort of primed your writing or your research, or your understanding with something that’s been generated, what are those next steps?” Once again, I do not accept ChatGPT “writing 90%” of an essay as merely priming one’s writing, but taking an adaptive approach is at least somewhat practical.

In short, the education system’s response to the AI revolution has been far too soft. Providence has given this generation of educators the solemn responsibility of laying down the initial rules of AI usage in the classroom; rules that will set the stage for student AI usage for generations to come, just as the first generation of American justices shaped the Supreme Court’s jurisprudence and precedents, to take an equally serious example. Once a precedent is set, it is nearly impossible to take away, and with this in mind, to be frank, our educators are woefully shirking their duties toward our children.

In short, the education system’s response to the AI revolution has been far too soft.

Very few states have provided official guidance to their school districts on how AI ought to be handled in school, and those that have seem to place a larger focus on equity and inclusion issues than on the actual quality of education. For instance, the State of California’s guidance discusses “how AI could help bridge equity and diversity workforce gaps in STEM fields.” Tell me, assuming AI could do such a thing, what good does it do the student to “bridge a gap” in a field while relying entirely on something other than themselves? Arguments such as this for the benefits of AI astoundingly appear to favor demographic numbers on a spreadsheet over the resilience and merit of the students themselves.

Why all the strong language here? Why not simply agree with the educators quoted above and say c’est la vie? Because the problem of AI is a problem of dependence, a problem that has grown more and more acute with time. The twentieth and twenty-first centuries have been a steady march toward more dependency on technology and less reliance on individual skills, brains, and merit. As with the other issues discussed in this article, there is something deeper at play here: we are losing sight of the sanctity and importance of hard work to human meaning and the human spirit. As Wendell Berry wrote in his landmark The Unsettling of America, “Among the people as a whole…the ideals of workmanship and thrift have been replaced by the goals of leisure, comfort, and entertainment.”

Farm work – TransitionsAbroad.com

The use of ChatGPT to do the heavy lifting in one’s work is not limited to students. In my field of work, grant writers are using it to write their grants. Computer coders are using it to write their code; engineers, their designs. Even some professors are now using AI to make their lesson plans and presentation slides, in a terrible lapse of their duty as role models to their students. While searching for articles on each of these subjects, I found that the results included nearly a dozen websites with AI purporting to do the very job task I was searching for. In one example, the company Personal AI claims to establish “digital twins” of yourself or of important people in industry so that they can make decisions in cases of the “loss” of the person.

The problem of AI is a problem of dependence.

Dependency has real consequences. Stories abound, for example, of today’s students unable to hand-write for extended periods or even properly hold a pencil due to computer overuse in the classroom. Has an academic degree been so reduced to mere ink and paper and the purpose of education so reduced to making money that this is the best our educators can do? Where is the substance? Where is the substance in anything anymore?

Let us seriously ask ourselves whether using AI in this way is truly “helpful” if we could not complete the task ourselves without its generative power. To prevent AI overuse, the facetious old adage of “don’t use Wikipedia” won’t work here; artificial intelligence requires stronger stuff. Remember, it is wholly different from previous technologies that have made certain human tasks easier, such as measuring or manufacturing equipment. It is also different from calculators or infamous websites like Chegg or Wolfram Alpha that can solve math problems for students and professionals. In those situations, there is one answer and often one set path to it. But when we use artificial intelligence to write our papers or make our lesson plans, we are using it to generate something with millions of possible combinations of words, phrases, and tone—things that require creativity and reason to organize and arrange. In other words, the two things that make us human.

Human creativity and reason – Getty Images

Using artificial intelligence in this way is therefore a certain road to a widespread inability to creatively think and critically reason, the death of merit as we know it, and the utter weakening of the human spirit—and it is about time someone said so. And when a society loses its ability to think and reason, it leaves itself completely vulnerable to authoritarian governance, which already seeks to quell human creativity and reason as a rule.

In this vein we have the stunning example of Google’s Gemini AI, fully released earlier this year. Gemini is a clear and frightening example of the (very real) way AI can be used as a tool by the authoritarian and ideologically bent to shape narratives, truth, and the availability of information. In early 2024, it was revealed that Google’s Gemini refused to generate pictures of white people or their achievements, claiming it “reinforced harmful stereotypes,” while readily depicting the same for people of other races.

To give Google the benefit of the doubt, large language models (LLMs) such as Gemini draw on vast swaths of the internet to generate their responses, so it is impossible to fully predict what they will say. But that is the problem. If the implementation of critical race theory into AI can create such a glaring example of restricted information in the name of an ideology, there is no knowing what other restrictions or misinformation could be generated; and that assumes the powers that be will not deliberately tilt the scales in their favor, which history has consistently shown they will. The Google Gemini controversy has proven that with AI, the potential for history to truly be shaped by the victors has been realized.

Using artificial intelligence in this way is a certain road to a widespread inability to creatively think and critically reason, the death of merit as we know it, and the utter weakening of the human spirit.

And so I ask: what happened to the stunning resolve of the generations of men and women who overcame their challenges through hard work? Do our hearts not still brighten at stories of those who merited their achievements through their work, and is our own willpower not still inspired by them? Have we lost the desire to be the best we can possibly be? I repeat: the best we can be. Not the best a robot can be. So then, take heart! Do not roll over and allow something that parades under a facade of progress to steal that resolve from us.

As Wendell Berry also said in The Unsettling, “Highly problematic changes are cited solely as evidence of the advance of technology, which we are evidently expected to regard as simply good.” I think a large part of the current weak response to AI can be attributed to this misguided belief in the “goodness” of “progress” and technological advancement, and I hope that what I have explained today shows otherwise.

Technology is not Inevitable

I will conclude with one more quote from Wendell Berry, which rings prophetic and merits quoting in its entirety:

“The cult of the future has turned us all into prophets. The future is the time when science will have solved all our problems, gratified all our desires; when we will all live in perfect ease in an air-conditioned, fully automated womb; when all the work will be done by machines so sophisticated that they will not only clothe, house, and feed us, but think for us, play our games, paint our pictures, write our poems. It is the Earthly Paradise, the Other Shore, where all will be well.”

Wendell Berry – The Unsettling of America

There are far too many aspects of ethics and human fallibility mixed into this issue for all of them to be discussed at great depth here. Artificial intelligence is the clear ethical issue of our age, and it is the duty of this generation to leave to posterity a world unmarred by it, by setting reasonable precedents that protect the work ethic and merit of ourselves and our children. Far more work needs to be done.

By Evan Patrohay

