Adsum-GPT - AI Six Months Later
+ Two Extra Articles!
For Your Attention
Thank you to all of you who contributed quotes in response to my recent article, “An Invitation to Join a Literary Heist”. I’m still putting the finishing touches on the follow-up article, so keep an eye out for that in the next couple of weeks.
In case you missed it, last week I started a series with Tim Suffield from Nuakh.uk on Polycarp’s Epistle to the Philippians. The first article explored his life, martyrdom, and what the letter has to teach us in the 21st century about suffering for the name of Christ. This past week, Tim released part II, which talks about being called to serve. You can read Tim’s article here:
It has been a little over six months since the last time I wrote about AI, but it feels like an age ago. In the ensuing months we seem to have seen daily updates, new software, and companies old and new integrating AI into everything from toothbrushes to televisions. Bletchley Park was recently host to a global summit at which leaders from many nations and corporations came together to discuss how to put boundaries around what AI can do. Interestingly, only six months in, many doubt whether that’s even possible at this point. During that time I’ve had ample opportunity to use various AI models: some text-based, some that produced images, and one that could even mimic my voice. I’ve barely scratched the surface yet, and whilst I’ve found some helpful uses for AI, there are also some elements that unnerve me. Today I’d like to look at three takeaways from the last six months that I hope will be helpful for you too.
AI can perform a tremendous number of tasks at this point. I can hardly believe some of the feats I’ve seen pulled off, but even so, I’m continuously reminding myself that though many of these tasks seem like invention or creation, they are, in fact, pure mimicry.
There were once two students in an art class. The first would frequently challenge the other to what were, essentially, “art duels”, in which he would proffer, “You choose the medium, the style, and the subject, and we’ll see who paints/draws/sketches it best.” Student #1 would win almost every time, and the pieces he produced were breathtaking. Student #2 produced good work, but there were flaws, and he would always try something new, with varying levels of success. However, when it came to the final exams, #1’s performance was strikingly subpar. He was told that his work was derivative and that he needed to learn to come up with original pieces. #2 got much higher marks, not necessarily because his strokes were smoother or his line work more impressive, but because the examiners found something unique about his style.
#1 could never come up with any new ideas himself, so when he was left alone to choose the medium, the style, and the subject, he floundered. He wasn’t truly an artist, just a good impersonator.
To use an example you’ve all seen, let’s take another look at “Adsum-GPT” from the title image of this article, with some of the oddities highlighted.
Each of these is what I would call a “perfect mistake”: they look wrong not because they’re physically incorrect, but because they’re too good. The fake version of me looks better than I do, and yet feels too otherworldly to truly be real. This phenomenon, often called the “Uncanny Valley” effect, is present not only in images, but in most text-based answers too. ChatGPT’s answers are politically correct to a fault; they’re polite and kind, and follow every rule in the book. These “perfections” just don’t reflect how real people write, and so even if you were to ask a chatbot to write an email for you, you’ll inevitably end up rewriting portions so that it sounds more like you, likely by making it worse, so to speak. Real people have real flaws. I’d rather not point out my physical ones in magnified bubbles on the screen, though; feel free to comment down below with any you see1.
This is the most important thing to remember when working with AI. You should feel free to make use of ChatGPT or the like, but always use it as an early step in your process, not the final step. Images will have defects you need to correct; emails and reports will contain “perfect mistakes” you’ll need to bash out in your final draft. In addition, don’t trust anything AI says, or take for granted that it is true. AI can sound far more authoritative than it actually is.
Ministry - The Question is Why?
Another element that needs touching upon, albeit briefly, is the use of AI in ministry contexts. If your pastor is using AI, then before rushing to judgement it’s worth asking some questions, most importantly, “Why?”
I’ve seen too many instances in which pastors have been vehemently attacked online, despite the use of AI not being inherently sinful. For instance:
Consider a lay pastor. He has no salary and very little time. After preparing sermons, visiting bedsides, leading Bible studies, and running the Saturday morning toddler group, he has decided to ask ChatGPT to write email outlines, check over his work for grammatical errors, and handle a few other mundane tasks, because it frees him up to serve more and to spend time with his family.
That seems to me to be not only reasonable, but actually a good way of leading his family and stewarding his time fruitfully.
On the other hand, many leaders from a denomination here in the UK are openly reading out AI-written sermons every Sunday now, rather than doing the work themselves. Given that these sermons are usually only ten minutes in length anyhow, I think we can safely call that out as sinful.
That’s a rather broad spectrum, wouldn’t you say?
Back to the question of “why?”
In the case of the first, it’s because he doesn’t have enough time, or energy, or perhaps assistance. If after considering these factors you’re still against him using AI, then that’s fine, but as a church you’ll need to come in and fill that gap. If you don’t want him using AI, how can he lean on you for help instead?
In the case of the second, I think we all know the answer to that one…
Shortly after ChatGPT hit the million-user mark, a friend of mine formulated an important metric by which to measure our use of AI. During a meeting in which we were discussing potential uses and abuses of the platform, he said something like, “We need to be careful never to ask an AI to do something that’s outside of our current capabilities, or at least not wildly so.” After some discussion, he explained further: “At the end of the day, we need to check the work of any ‘employee’ under us, and we should treat AI in a similar way. We need to know what it has gotten right, what it has gotten wrong, and where it can improve.”

I’ve thought a lot about that conversation over the past few months. In short, a person’s proficiency dictates not simply their efficiency in using AI, but also the level of accuracy that can be achieved by the AI in question. If I want to use AI, therefore, I need to make sure that I am its master, not the other way around. I can’t trust it as an authority (as has been demonstrated many, many times) and so, if I’m using it in any way that will be seen or used by others, it is my responsibility to make sure that the content is sound.
Part of that is, as we mentioned before, placing barriers around what we will and won’t use it for. Do we use it for writing sermons? No, but it might be useful for correcting spelling mistakes in an online transcript of one. I could give many personal examples, but you’ll know best the ways you might be able to use it wisely and, conversely, the ways in which, in your context, it would be unwise. Have you placed limitations upon yourself to make sure that you don’t blur those lines? In the same way that it would be wrong to delegate all of one’s work to those lower down in the company management structure, it is equally wrong to do so with AI. The morality of the action isn’t contingent on the cognisance of the “employee” in question, but on that of the manager. In fact, an actual employee might even be glad to receive the tasks! That doesn’t, however, make it right to pass them on and then laze around all day.
With this in mind, we have three options.
Ignore and dismiss it
Master it
Be mastered by it
I think, at least at this point in time, that #1 and #2 are perfectly reasonable, and that #3 alone stands as the unethical and unwise choice. Time will tell how long it will be possible to pursue #2, but we’ll just have to wait and see.
So, right now, make sure you are not mastered by AI, use it wisely, if at all, and when you come across others who use it, ask yourself, “why?”, and be fair when you give an answer.
Grace and Peace,
Next Article - Advent Series
This is sarcasm.