The Problem with AI

Introduction

As a technologist working in the software industry, and as a Christian thinking deeply about technology, faith, and work, I’ve been closely watching the rise of AI, looking at both the opportunities and the challenges. My goal here is to summarize the biggest issues we as Christians (and everyone concerned with human flourishing) should be aware of.

AI is a very broad category that covers many technologies; however, the majority of the AI getting news coverage and usage today is generative AI, specifically the models and services created by the largest tech firms in the USA (and, to a lesser extent, China). This will be the focus of the essay.

Framework

To help bring clarity, I’ve created a simple framework corresponding to the process of producing and using generative AI. At each step I’ll highlight the biggest issues I see. Here is the four-step framework: (1) Inputs, (2) Training, (3) Feedback, (4) Usage.

Step 1: Inputs

In order for generative AI to work, it needs input data, and LOTS of it. The latest LLMs (Large Language Models) have been trained on essentially the entire English internet, plus any other data their creators can get their hands on, legally or otherwise. There are four issues here that must be mentioned.

1 – Unauthorized

The first issue is that, in the race to get more data, companies are gathering data without permission: aggressively scraping websites, YouTube videos, and social media posts, and even downloading copyrighted books.

2 – Uncredited

Not only is this input data gathered in shady or illegal ways, but when someone uses AI to generate an output, such as a song or an image, the original creator whose work was used to train the AI receives no credit.

3 – Unremunerated

Additionally, the revenue from the use of an AI model goes solely to the provider (primarily the dominant tech firms), with nothing going to the original creator. A number of active lawsuits are seeking to address this, with varying degrees of success.

4 – Uncensored

The quality of AI output is directly connected to the quality of the input data. Early on, companies were very selective with the data they used, but as the race to build bigger models has heated up, the only thing that seems to matter is more data. This has led to using scraped data from websites known for toxic content, which degrades the quality of the AI output even with filtering and feedback (step 3).

Step 2: Training

Once the input data is gathered, the AI model is trained. Training a single model requires a huge amount of “compute” – essentially extremely powerful computers with top-of-the-line GPUs (Graphics Processing Units). These computers are housed in giant warehouse-like facilities called data centres.
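To make the word “training” a little more concrete, here is a deliberately tiny sketch of the core loop in Python (the one-parameter “model” and the numbers are invented for illustration): make a prediction, measure the error, nudge the parameters, repeat. An actual LLM runs this same loop over billions of parameters and trillions of tokens, which is why it needs warehouses full of GPUs.

```python
# A toy illustration of what "training" means: nudging a parameter to
# reduce error, over and over. Everything here is invented for
# illustration; real LLMs run this loop across billions of parameters.
import numpy as np

# Tiny "dataset": inputs x and the outputs y the model should learn.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])  # the true relationship is y = 2x

w = 0.0              # a single model parameter, starting untrained
learning_rate = 0.01

for step in range(1000):               # each pass nudges w toward the data
    predictions = w * x
    error = predictions - y
    gradient = 2 * np.mean(error * x)  # direction that reduces the error
    w -= learning_rate * gradient      # the actual "training" update

print(f"learned w = {w:.3f}")  # converges toward 2.0
```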

Resource Consumption

A data centre requires few humans to operate (beyond the initial build), but it does require vast supplies of energy to power the computers and water to cool the processors.

The amount of energy needed to run a single hyperscale (i.e. gigantic) data centre is enough to power hundreds of thousands of homes. This demand is leading utility companies to upgrade their infrastructure, with the cost often passed on to the public rather than the tech companies.
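As a quick sanity check on that claim, here is a back-of-envelope calculation; both figures are round-number assumptions for illustration (facilities and households vary widely), not measurements of any particular site.

```python
# Rough check of "hundreds of thousands of homes". Both figures are
# illustrative assumptions, not measurements of a specific facility.
data_centre_draw_watts = 1_000_000_000  # assume a ~1 GW hyperscale campus
average_home_draw_watts = 1_200         # assume ~10,500 kWh/year per home

homes_equivalent = data_centre_draw_watts / average_home_draw_watts
print(f"~{homes_equivalent:,.0f} homes")  # roughly 830,000 homes
```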

Because the energy demand is so high, the tech companies are also building their own power sources, often gas-powered generators that produce significant emissions.

On top of the power usage, these high-end processors need clean water for cooling. Sadly, data centres are often being built in places with water-supply constraints; in Texas, the Stargate data centre will use 1 million gallons of water in an area that already has water restrictions.

Finally, the computer chips themselves require resources like rare earth minerals, whose extraction has further environmental consequences and is fueling geopolitical tensions, especially between China and the West.

These issues will only increase given the planned number of hyperscale data centres (assuming they are all completed).

Step 3: Feedback

Once an AI model has been trained, it still requires feedback. This involves huge teams of people who test the model and flag responses that aren’t appropriate. Related to this, each company decides, almost unilaterally, what is “allowed” and what the biases in its model will be.

Model Bias

If you provide an AI model to businesses or end users, you must decide what your model can be used for. From hacking to health tips to pornography, each company is the arbiter of what is allowed and what is returned, with very little oversight.
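To make that gatekeeping concrete, here is a minimal sketch of a provider-side filter, using OpenAI’s real moderation endpoint as one example. The categories and thresholds in the response are defined entirely by the company, not by the user asking the question.

```python
# A provider-side gate: the prompt is checked against the company's own
# moderation rules before any answer is generated. Requires the `openai`
# package and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

moderation = client.moderations.create(input="How do I pick a lock?")
result = moderation.results[0]

# The provider, not the user, defines these categories and thresholds.
print("flagged:", result.flagged)
print("categories:", result.categories)
```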

As a human, this is worrying because the truth can be altered; as a Christian, it is doubly worrying because there is a clear bias against biblical answers.

Human Impact

On top of the bias, the hidden work of tagging data and testing these models is outsourced to intermediaries who take advantage of the most vulnerable people in the world, exploiting them for minimal pay. The work ranges from simply tagging images (e.g. labelling a cat or a dog) to trying to coax the model into generating the most horrific content possible and then rating it.

Step 4: Usage

When a model is “ready” and made public, it is used in a number of ways: directly through a website or app (like ChatGPT), built into an existing service (like Google Gemini in search results), or through a 3rd party who uses the model behind the scenes (like a chatbot that provides “help” on a website).
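That third pattern is worth pausing on, because it shows how thin the 3rd-party layer often is. Below is a minimal sketch of a website “help” chatbot built on the OpenAI Python client; the company name, system prompt, and model choice are hypothetical stand-ins, but the shape is the point: a hidden instruction plus a call to a big provider’s model.

```python
# A website "help" chatbot is often just a thin wrapper around a big
# provider's model. Company name and system prompt are hypothetical.
# Requires the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hidden instructions the site visitor never sees.
SYSTEM_PROMPT = (
    "You are the friendly support assistant for Example Corp. "
    "Only answer questions about our products."
)

def help_widget_reply(visitor_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # the provider's model, behind the scenes
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": visitor_message},
        ],
    )
    return response.choices[0].message.content

print(help_widget_reply("How do I reset my password?"))
```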

Let’s start with the least contentious issues relating to using AI and move up from there. As an aside, not only is the process of generating an AI model extremely energy- and resource-intensive, but so is the usage: a single conversation with ChatGPT can use almost 500 mL of water!

Too Much Content

Ignoring all the other issues, one of the challenges with AI is simply the amount of content it can quickly create: entire websites, images, videos, and comments on popular platforms. As the amount of AI content increases, finding reliable, quality content created by real humans gets harder.

Replacing Thinking

At first, it feels comfortable not to think. Then, suddenly, you realize you can’t anymore.

Comment on a YouTube video about AI

Using AI to generate anything is a shortcut to doing the work. Sometimes it reduces annoying work, like formatting data into a spreadsheet, but the more we shortcut thinking, the harder it is to think when we need to. And when we shortcut the creative process, whether it be a song, a video or a poem, we lose out on the benefit of that challenging process.

I’ve written about this idea recently, so I won’t repeat myself here, but in essence: the more we use AI, the more we lose the cognitive gains that come from doing the work ourselves, and if we never learned the skill in the first place, we become dependent on AI. This is especially tempting, and harmful, for students.

Truthfulness

The core of generative AI is that it generates new outputs. This is also a core issue, because it can present as “fact” things that just aren’t true, which is called “hallucinating”. In earlier models it would be simple things, like miscounting the letters in a word, but even the latest models still face this issue.
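One concrete reason for the letter-counting failures: these models never see individual letters, only “tokens” (multi-character chunks). The sketch below uses OpenAI’s open-source tiktoken library (pip install tiktoken) to show how a word is chunked before the model ever processes it.

```python
# Models read "tokens" (multi-character chunks), not letters, which is
# one reason questions like "how many r's in strawberry?" can go wrong.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer for GPT-4-era models
tokens = enc.encode("strawberry")

for t in tokens:
    print(t, enc.decode_single_token_bytes(t))
# The model reasons over these chunks; the letters inside each chunk
# are invisible to it.
```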

If you know a topic well, this isn’t too bad, as you can detect when something is wrong and fix it. However, if you are using AI to learn something, there is a chance that some of the information is incorrect.

Additionally, because AI can make mistakes, a system that relies on an AI decision without human review (such as determining someone’s eligibility for social security) can have devastating real-world consequences.

From a biblical perspective, this is an issue for anyone using AI to explore questions of faith. If the creators of the model can modify the output, and the model itself is prone to hallucination, using AI for such a vital task is worrisome at best.

Relationships

Need support without judgment? Download free

Instagram Ad for Microsoft Copilot (AI chatbot)

As humans, one of the deepest needs we have is for real connection with others; yet human relationships are also where we are most hurt. Using technology as a replacement for human relationship promises the benefit of a friend (or partner) without the messy bits.

So it’s no surprise that we see people, especially younger people or those without robust social connections, using AI for advice, support, and even friendship. Sadly, this can never replace a healthy human relationship, and it can end up being extremely harmful; we have seen this play out with teens who have taken their own lives after prompting from their AI chatbots.

People talk about the most personal s**t in their lives to ChatGPT … People use it — young people especially use it — as a therapist, a life coach.

Sam Altman, CEO of OpenAI

You would think (or at least hope) that the companies running these services would not want to encourage this behavior; however, it seems to be the opposite. Microsoft has been advertising its Copilot chatbot as a therapist on Instagram, and Meta’s AI guidelines have explicitly stated that bots can have sensual chats with kids. Beyond the main AI providers, there are numerous new startups explicitly providing services such as AI therapists, AI girlfriends, and worse.

Rampant Crime

Not only does AI make it easy to generate vast amounts of content, but it makes it easier for criminals to scale their digital operations like never before. This includes using AI to generate phishing emails, impersonating the voice of another person, and even creating fake videos for extortion, often targeting seniors and other vulnerable people.

Scam compounds in Southeast Asia lure in people looking for work via ads on social media, then confiscate their documents and force them to scam people in the West using AI tools.

Although scams existed before AI, AI is making them faster, easier, and much more believable (for the victim). This is leading to greater returns for the criminals, and hence more focus from the crime world.

The System: Companies Powering AI

Now that we’ve reviewed the four main steps in creating and using generative AI, it’s important to look at the organizations that are behind these AI services.

It’s Not Neutral

A common refrain I hear is that technology is a neutral tool, and how it’s used determines if it is positive or negative. Yet, technologies are created by people with a specific bias and worldview, and this ends up being encoded in what they build. If you want to hear more about the history of Silicon Valley and its ethos (which is driving AI), Mark Sayers explains it very well.

We must remember that generative AI services are predominantly provided by the largest and most powerful organizations on the planet: OpenAI, Google, Apple, Microsoft, X, and Amazon. They are ruthlessly vying for market share, and they have personal and political pressures that impact how and what they build; this technology is not neutral.

Replacing People

One of the stories these companies tell is the inevitable replacement of human labor by AI. This isn’t inevitable; it’s a lucrative offer to business leaders – reduce costs by firing human workers and replacing them with AI. Although the return on AI investment hasn’t been great so far, this is the future the AI firms are pushing aggressively.

Yet, given the advances in AI, we are seeing people displaced from work as companies choose to use AI; this includes writers, illustrators, translators, programmers, customer service agents, and more – especially those in entry-level positions. This doesn’t free these people up to pursue passion projects and creativity as promised, but leads to stress and economic decline.

Increasing Power

That raises an uncomfortable prospect: that this supposedly revolutionary technology might never deliver on its promise of broad economic transformation, but instead just concentrate more wealth at the top.

ChatGPT’s $8 Trillion Birthday Gift to Big Tech

There is so much more that could be discussed around the dominant AI companies, but the core of it is that as you use AI services, you’re giving your money and your data to these powerful organizations, which only cements their strength and dominance. They promise a cancer cure, but then release tools to pump out AI slop (or worse). Is this the future we want?

Conclusion

At its core, generative AI offers to greatly increase the speed and breadth of what we can do. But before we jump in, we must ask these hard questions: Where did the input come from? What is the environmental impact? What is the bias? Who is getting the economic benefit? Who is bearing the cost? Who owns your data?

There is a future where AI doesn’t just make the rich richer, where models are built in a more economical manner, and where real-world problems are solved. As individuals we can feel hopeless, as it seems we are powerless to effect change; but by becoming aware of the challenges and using AI wisely, we can vote with our time, our data, and our money.

Some Good News

While this essay has focused on the biggest challenges of generative AI, there are amazing stories of success and hope. Often these involve specialized AI models, created for a specific context and delivered along with real human support so that people don’t feel abandoned to a machine.

As we become aware of the issues with AI, my prayer is that more technologists will look to build alternate solutions that bring hope and light to this world.

More Resources

If you are interested in reading more about AI, here are a few good resources.

The End