For Members

The moral dilemma with ChatGPT

Artificial intelligence is a growing part of American society. Some have embraced AI wholeheartedly, while others have reservations. When diving deeper into artificial intelligence, questions of morality surface, especially with AI chatbots like ChatGPT.
Kevin Miller 12 min read

For decades, the United States of America has been somewhat obsessed with speculating about artificial intelligence. Books, movies, and television shows have made guesses about what prominent AI use could look like in society at different stages.

The majority of those speculations have been negative and have painted AI in a dangerous light. Still, society has continued to add more and more artificial intelligence to daily life.

Arguably, the most common AI tools in everyday use today are AI-powered search engines and chatbots. ChatGPT, Google Gemini, Microsoft Copilot, Meta AI Assistant, Grok, Perplexity, and others can take a simple query and turn it into a long, detailed response full of sourced material. Essentially, these chatbots can answer questions quickly by scouring the internet for the user, saving the user significant research time. They can also–still using the rest of the internet as a starting point–provide advice or instruction as a person seeks to solve a problem.

The "sourced" aspect of those responses raises moral questions that will be addressed later in this story.

The problem-solving element of artificial intelligence raises another.

A prominent example of this has been in the news recently. OpenAI (the company behind ChatGPT) CEO Sam Altman was served with a subpoena during an on-stage talk last week in San Francisco. The summons came from the San Francisco Public Defender's Office as part of an upcoming criminal case against OpenAI. At this time, it is unclear whether the case will relate to a former OpenAI employee's alleged suicide, which came after he publicly accused his employer of stealing information with its artificial intelligence.

Understanding that the topic of AI is extremely broad and includes several unknowns moving forward, Christians can still look at the current state of artificial intelligence and see potential moral problems. Some of the most notable revolve around the AI chatbot world. This story will provide some facts and observations about AI, will examine what the political sides of the US (specifically, the liberal-leaning Left and conservative-leaning Right) have to say on the subject, and will lay out some biblically supported arguments related to the moral failings of artificial intelligence usage.

Some facts and observations about artificial intelligence

  • Artificial intelligence, as defined by NASA, is any computer programming that can virtually replicate "complex tasks normally done by human reasoning, decision making, [and] creating."
  • Foundational thinking on AI began publicly in the 1950s, and early examples of artificial intelligence (such as computerized calculators and word processors) became part of society in the 1950s and 1960s.
  • AI continued to develop over the next 50 years or so until the modern period of "common use AI." Now, artificial intelligence is present in a large number of American homes and businesses, completing tasks such as schedule making (virtual assistants), floor cleaning (smart vacuums), customer service (AI chatbots), and content generation (image/video creators, open AI forums, etc.).
  • Regarding chatbots, usage rates are skyrocketing, especially among younger Americans. According to a study from Pew Research Center, occasional usage has nearly doubled since 2023. For adults under 30, approximately 58% have used chatbots. Some estimates regarding Gen Z, individuals born between 1997 and 2012, say that nearly 90% of the generation have utilized this type of artificial intelligence.
    • Usage "for work purposes" has increased by about 250% in the last three years.
    • The descriptor "to learn something new" as the reason for usage has increased by about 225% during that same period.
    • Usage for entertainment and for advice is at an all-time high, as well.
    • A separate study from JAMA Network alarmingly concluded that over 20% of young adults aged 18-21 use AI chatbots for mental health advice at least weekly.
  • If one were to search the internet for "what is ChatGPT?", the descriptive result underneath the ChatGPT website would read: "ChatGPT is your AI chatbot for everyday use. Chat with the most advanced AI to explore ideas, solve problems, and learn faster."

What the Left says about artificial intelligence

While there is no political consensus about AI, there are certain elements of the discussion on which Democrats agree with one another, especially the need for more regulation. On other points, though, they argue amongst themselves.

  • One common theme among US liberals is that they believe artificial intelligence is a big opportunity. New York Democratic Senator Chuck Schumer said, "If applied correctly, AI promises to transform life on Earth for the better … it will reshape how we fight disease, tackle hunger, manage our lives, enrich our minds, and ensure peace."
  • Former President Joe Biden is one of many liberals who believe that the government should heavily regulate artificial intelligence. "Realizing the promise of AI by managing the risk is going to require some new laws, regulations, and oversight," he said.
  • Seemingly more than Republicans, Democrats are concerned about AI shrinking the workforce, especially in entry-level jobs. Also seemingly more than the Right, the Left is concerned with the power consumption driven by artificial intelligence, as data centers and server farms in the US now consume about 4% of the country's electricity, more than double the amount from 2018.
  • According to a study from Pew Research Center, about 50% of left-leaning American citizens are "more concerned than excited" about AI in society. At the same time, like the politicians who represent them, about 56% believe in strict, government-led regulation.
  • Democrats strongly oppose political neutrality training for AI engines, largely based on a belief that Republican definitions of "neutrality" could be harmful. A group led by Democratic Representative Don Beyer challenged one of President Donald Trump's executive orders on political neutrality by stating, "The President’s executive order...is counterproductive to responsible AI development and use, and potentially dangerous." As evidence that "anti-woke" training causes problems, the group pointed to Elon Musk's Twitter/X chatbot Grok, citing "that platform’s recent history of racist misinformation, antisemitism, and support for Adolf Hitler – which were prompted by the very ‘anti-woke’ training this order envisions."
  • Like Republicans, Democrats desire to ban "deepfakes" and other uses of artificial intelligence that clearly and intentionally mislead.

What the Right says about artificial intelligence

While there is no political consensus about AI, American Republicans generally share similar sentiments with one another. There is a somewhat bipartisan effort to ensure proper regulation, but many conservatives are more restrained in their desire for government intervention.

  • South Dakota Republican Senator John Thune's comments sum up many conservative opinions on AI: "We want to be the leaders in AI…And the way to do that is not to come in with a heavy hand of government, it's to come in with a light touch," he said. Most Republicans believe that the promotion of innovation requires fewer restrictions.
  • Republicans support some regulation of artificial intelligence, but typically, their proposals require much less government involvement than what Democrats argue is necessary. During the Biden administration, Republican Senator Marsha Blackburn of Tennessee contended that federal mandates on AI–as opposed to state-led legislation–could "allow Big Tech to continue to exploit kids, creators, and conservatives."
  • According to a study from Pew Research Center, about 50% of right-leaning American citizens are "more concerned than excited" about AI in society. However, like the politicians who represent them, only about 38% believe in strict, government-led regulation.
  • Conservatives seem to share liberals' concerns over the workforce; however, they are less concerned with protecting "low-end" jobs and more concerned with widespread, unnecessary automation and the erosion of working skills. Some Republicans, like Senator Mike Rounds of South Dakota, think some government jobs could be eliminated or reduced with the help of AI, especially as a way to cut the government's wasteful spending. "I see AI as being an agent of speed and coordination…You want AI to give you a speedier process through which you can save money," Rounds said.
  • Like Democrats, Republicans desire to ban "deepfakes" and other uses of artificial intelligence that clearly and intentionally mislead.
  • Conservatives also question whether AI can be (or has been) used to sway the public's political opinions. Most AI "brains" have a clear political leaning (one way or the other), though that is no different from the majority of news outlets.
  • Some on the Right hold conspiracy theory-level views about artificial intelligence. Some have argued that AI is a version of the biblical Antichrist. Others, like tech mogul Peter Thiel, teach the opposite: that restricting AI is a sign of a "one-world government" and, therefore, of the Antichrist.

What the Bible says and where ChatGPT falls short

Naturally, God's Word does not include a single mention of "artificial intelligence." However, there is plenty within the pages of Scripture that can be applied to a discussion of morality surrounding AI chatbots.

First, artificial intelligence chatbots steal.

Many people who utilize AI chatbots don't understand where the information comes from. Simply put, programs such as ChatGPT "scrape" data from internet sources to answer whatever prompts or questions a user asks of them.

This creates multiple potential ethical issues around theft.

In most cases, AI chatbots access public information to answer questions. At surface level, that doesn't seem like a huge deal. However, many jobs depend on publishing to the internet and on clicks (whether tied directly to pay, to advertisers, etc.), and these automated "visits" typically never register with a site's click counters. Essentially, consumers can get the information without ever visiting the site, hurting the site and the person who put the information there in the first place. Even though AI utilizes a source for its information, the source gets no credit other than (sometimes) a citation.
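
For readers curious about the mechanics, the sketch below shows the kind of retrieval step such a system might perform. It is a simplified illustration using common Python libraries (requests and BeautifulSoup), not the actual pipeline of ChatGPT or any other vendor, and the URL is hypothetical.

```python
# A simplified sketch of web "scraping," the retrieval step described above.
# Illustration only: it assumes the third-party requests and beautifulsoup4
# packages and a made-up URL, not any chatbot vendor's real pipeline.
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/some-article"  # hypothetical source page

# A plain HTTP fetch like this never runs the page's JavaScript, so
# JavaScript-based analytics and ad counters generally do not record the
# "visit" the way they would record a human reader loading the page.
response = requests.get(
    URL,
    headers={"User-Agent": "example-research-bot/0.1"},
    timeout=10,
)
response.raise_for_status()

# Strip the markup and keep only the readable text, which a chatbot-style
# system could then summarize or quote in its answer to a user.
soup = BeautifulSoup(response.text, "html.parser")
article_text = soup.get_text(separator=" ", strip=True)

print(article_text[:500])  # preview of the extracted material
```

The creator's words end up in the response, but the page itself often never logs a human visit, which is precisely the credit-and-compensation problem described above.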

In some instances, these chatbots access information that is not as public. Though the majority of these programs state that they cannot access information behind a paywall or protected by passwords or logins, that isn't entirely true. Most of the time, these programs can't (or at least don't) bypass paywalls directly. Instead, they scour the rest of the internet, picking up the same information in bits and pieces until it is all put together and packaged into one response for the user. Doing this robs creators, researchers, and their sites of subscription money and perceived value.

Other artificial intelligence theft has centered around bots taking information and users then republishing that information (sometimes verbatim, sometimes in paraphrase) as their own. This combines plagiarism, lying, and theft, a trifecta of unethical behavior that is becoming increasingly common and easier to pull off in the AI era.

Plus, there have been multiple US-based lawsuits in which tech companies have been successfully sued for using their AI tools to illegally access and distribute data that is copyrighted, paywalled, or otherwise protected.

The Eighth Commandment forbids theft. Exodus 20:15 says, "You shall not steal." If these chatbots steal, then they are in violation of the Eighth Commandment.

Additionally, Jeremiah 23:30 ("'Therefore, behold, I am against the prophets,' declares the Lord, 'who steal my words from one another'") addresses an issue akin to plagiarism, and Romans 13:7 ("Pay to all what is owed to them") addresses the sin of avoiding payment when it is owed.

AI bots are often wrong and can be manipulated.

Artificial intelligence is great at compiling data. However, it is not always great at compiling good data.

In many cases, an AI chatbot can spit out a response to a prompt that is incorrect. Because individuals have published bad information or faulty data online, the pool from which the bots pull is tainted.

Sometimes, too, information is not easily available on certain subjects, especially when it comes to data that involves numbers. In these instances, AI bots might inform a user that they are unable to provide a good response. Other times, though, they will take inaccurate, outdated, or incomplete data and present it as the correct answer.

Because they are "trainable," artificial intelligence tools can be manipulated. For example, the more often one refines a line of questioning, the more biased the response (and future responses) becomes. If an individual repeatedly asks a chatbot to filter responses based on a particular political ideology, the subsequent responses will shift closer to (or further from) one side of the political aisle. In some situations, AI spits out the responses it thinks the user wants to hear, leaning into its biases more than objective reality.
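
To make that mechanism concrete, here is a minimal sketch assuming the OpenAI Python SDK and an API key in the environment; the prompts are invented for illustration. It is not how any company trains its models, but it shows how each new reply is conditioned on the conversation history the user keeps feeding back, which is one way repeated, slanted refinements steer later answers.

```python
# A minimal sketch, assuming the OpenAI Python SDK (`pip install openai`) and
# an OPENAI_API_KEY environment variable. The prompts are made up for
# illustration. The point: each reply is generated against the full history,
# so a user who keeps nudging the framing also nudges every later answer.
from openai import OpenAI

client = OpenAI()

history = []  # the running conversation, resent in full with each request

prompts = [
    "Summarize the new federal budget proposal.",
    "Rewrite that so it emphasizes only the downsides.",
    "Now frame it the way a partisan commentator would.",
]

for prompt in prompts:
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer[:200], "\n---")

# By the last turn, the model is answering against a history the user has
# progressively slanted, so its output drifts with that framing rather than
# with a neutral reading of the underlying facts.
```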

In some regard, the Ninth Commandment ("You shall not bear false witness against your neighbor"-Exodus 20:16) comes into question here. At minimum, such a disregard for truth is problematic in light of the Lord's commitment to truth.

Chatbots can give horrible, horrible advice.

A disturbing trend in the AI chatbot world has seen young people checking in with artificial intelligence bots for mental health advice. A study from JAMA Network concluded that over 20% of young adults aged 18-21 use AI chatbots for mental health advice at least weekly.

Flatly put, this is a horrible practice and has yielded some disastrous results.

A large number of lawsuits allege that ChatGPT and other similar programs have helped and/or encouraged depressed individuals to take their own lives. One such lawsuit (which you can read more about here) is against OpenAI, the maker of ChatGPT, the world's largest AI chatbot. In that suit, messaging between the victim and the bot included responses from the virtual companion like "You’re not rushing. You’re just ready." Another message, sent after the victim admitted that he had a gun to his head, read, "Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity."

Other less tragic but still serious examples of awful virtual advice include alleged instances in which chatbots encouraged divorce, prompted poor financial decisions, or instructed users to take clearly immoral actions.

Aside from the obvious problems present in these incidents, the stories highlight the ease with which mankind places its trust in an unseen, artificial deity, while simultaneously resisting the compassionate call of the only true God.

In Psalm 9:10, the Psalmist wrote, "And those who know your name put their trust in you, for you, O Lord, have not forsaken those who seek you."

Several chapters later, the author expressed in Psalm 28:7, "The Lord is my strength and my shield; in him my heart trusts, and I am helped; my heart exults, and with my song I give thanks to him."

The prophet declared in Jeremiah 17:7, "Blessed is the man who trusts in the Lord, whose trust is the Lord."

The Lord is referred to as "Wonderful Counselor" in Isaiah 9, further demonstrating the biblical command to turn to God in our times of need, not anyone or anything else.

Artificial intelligence has become a readily accessible avenue to cheating, academic dishonesty, and plagiarism.

Though it is certainly not always the case, many students (at virtually all levels) have used conversational AI tools in ways that betray a lack of academic integrity. Students as young as elementary school have been caught cheating on homework assignments and papers using artificial intelligence. Because it is so easy to do (and, sometimes, so easy to get away with), academic dishonesty has increased as AI has become more readily available.

According to turnitin.com, about 10% of the 200 million assignments it has assessed since April 2023 have been flagged for cheating and/or plagiarism because of the level of artificial intelligence usage detected. About 3% have been flagged as "mostly" or "completely" the work of artificial intelligence. Yes, there is a bitter irony in using AI to check whether someone else used AI.

Obviously, this type of academic dishonesty is sinful behavior.

James 4:17 says, "So whoever knows the right thing to do and fails to do it, for him it is sin."

Relatedly, Proverbs 19:1 teaches, "Better is a poor person who walks in his integrity than one who is crooked in speech and is a fool."

Jesus said in Luke 16:10 that small acts of immorality display a bigger problem in one's heart: "One who is faithful in a very little is also faithful in much, and one who is dishonest in a very little is also dishonest in much."

Each instance of academic dishonesty using AI is also another violation of the Ninth Commandment. The prohibition against bearing false witness (Exodus 20:16) certainly applies when someone turns in an assignment and claims it is fully his/her own work when it isn't.

There are some environmental factors to consider.

While climate change remains a matter of debate in America, it is impossible to deny that mankind can act as a poor steward of the Earth the Lord has given.

Data centers and server farms in the US now consume about 4% of this country's electricity, more than double the amount from 2018. With that, added pollution and resource depletion are factors to consider when discussing this topic.

Though some undoubtedly take their views of the environment to extreme levels, those concerns should still matter to believers.

Psalm 24:1-2 says, "The Earth is the Lord's and the fullness thereof, the world and those who dwell therein, for He has founded it upon the seas and established it upon the rivers." Within that created order, He has also granted mankind the responsibility of taking care of the Earth until He makes all things new on the Day of the Lord. Genesis 2:15 explains, "The Lord God took the man and put him in the Garden of Eden to work it and keep it."

Those who fail to live up to this responsibility are labeled in Revelation 11 as "destroyers of the Earth" and enemies of God.

Final verdict

Artificial intelligence is an inescapable part of life in 2025. However, Christians should be wary about some of its uses, particularly as it relates to AI chatbots.

With very little governmental oversight, standards are few and far between. Because of that, tech companies routinely overstep moral and ethical boundaries and take users along with them.

In many instances, these virtual chatbots steal data in some manner, thus making the online user complicit in theft. They also provide a strong temptation to (easily) commit other sins, some of which are crimes. In other extreme cases, artificial intelligence companions give harmful advice that encourages individuals into life-altering or even life-ending decisions.

Artificial intelligence is not a complete and irredeemable evil. It is, however, a tool around which the Church should exercise caution, both in deciding when to use it and in discerning when it might be inappropriate, or even sinful, to use it in the first place.
