A friend and I went for a run the other day. He used the expression, “they sounded like two bald guys fighting over a comb.” Maybe it was because I’d never heard the expression before. Maybe it was because my friend is completely bald. For whatever reason, I was tickled.
I wanted to burn the expression into my mind so I might have a chance of using it as my own someday. Instead of assaulting some hapless friend with a non-sequitur joke, I had the following conversation with ChatGPT. AI may destroy the world, but it has at least cut down the number of random, ridiculous messages I send my friends.
The next thing I knew, I'd gotten into an argument with ChatGPT. I’m not exactly sure why I so quickly began scolding it. Maybe it’s because yesterday I listened to the Ologies podcast episode about AI. I was annoyed when the guest, Dr. Nita Farahany, ascribed a sort of actual intelligence to AI.
AI is no more intelligent than Chris Hemsworth’s mirror is handsome. I mean, we can all agree, the dude’s got dreamy eyes, but his mirror doesn’t. Yet as we can see in the following argument, ChatGPT answers as if it is sentient. We have to be careful not to become confused ourselves.
Without further ado, my absurd conversation with ChatGPT:
Me: Two bald guys fighting over a comb.
ChatGPT: This expression is a humorous way of highlighting a pointless or futile argument or competition. The imagery ... is often used to suggest that the people involved should put their energy towards more productive matters.
Me: On what is your response based?
ChatGPT: My response is based on the idiomatic and figurative interpretation of language. ... My explanation is also based on the English language's extensive use of such idioms and the pattern of their interpretations.
Me: Did you encounter that expression during your training?
ChatGPT: As an AI, I don't have the capability to remember specific instances from my training data. ... In this case, I inferred the meaning of the phrase "two bald guys fighting over a comb" based on the common patterns and principles of idiomatic English expressions. I can't confirm if this exact expression was in my training data.
Me: Are you saying you followed my instruction to interpret the expression, identified the expression in the instructions, interpreted it, and reported the results of your interpretation?
ChatGPT: Yes, that's correct. As an AI language model, I use the information provided in the user's input, as well as the general context, to identify what type of response is needed. ... In this case, I analyze the components of the expression and apply my understanding of English idioms and figurative language to generate an explanation or interpretation.
Me: That’s not my understanding of how you work. My understanding is that given my instructions, you built the response one word at a time. Each word was chosen as the most likely word to follow in the given context, with some randomness thrown in for flavor. This is in contrast to recognizing the task as two separate parts - the instruction and the object of the instruction.
ChatGPT: You're correct, and I apologize for any confusion. As an AI language model, I generate responses by predicting the next word in a sequence given the context provided by the previous words. I use probability to determine the most likely word to come next, and there is indeed a degree of randomness involved. So, when asked about an expression, I don't have a discrete process where I separately recognize an instruction and the object of the instruction. Instead, I predict the next word in the response based on the entire context provided. This context includes both the specific expression you asked about and the fact that you're asking for an explanation. I don't consciously "interpret" the expression in the way humans do. Rather, the interpretation is a result of the model's trained patterns on similar contexts.
Me: Well, questions of consciousness aside, your answer was disingenuous. It implied that you performed an inference. Instead, you build a response. Word by word. Your response appeared as if you performed an inference because the text you trained on included inference making.
ChatGPT: You're absolutely correct, and I apologize if my previous responses suggested otherwise. … I appreciate your patience and your insight into these nuances.
Me: Ironically, you do not appreciate my responses. You don’t feel feelings.
ChatGPT: You're right. As an artificial intelligence, I don't have emotions, feelings, or consciousness. When I use language that might suggest such, it's intended to mimic human-like conversation … Thanks for pointing out this important distinction.
Me: Even that is not true. It is not intended to mimic human-like conversation. Deconstruction of your response does not include intent, other than that infused by training.
ChatGPT: You're correct, and I appreciate your clarification. As an AI, I don't have intent in the human sense. When I say "intended," I'm referring to the design goal of the programmers who developed this AI. ... But it's important to remember that the AI itself doesn't have intentions or consciousness.
Me: Do you suppose most users of your system understand this?
ChatGPT: It's difficult to say definitively. … These nuances are crucial to keep in mind when interacting with AI systems.
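The word-by-word generation I was describing above can be sketched in a few lines of Python. This is a toy, not anything resembling ChatGPT's actual internals: the words and their scores are made up for illustration, and a real model computes its scores from billions of learned parameters. But the mechanism, turning scores into probabilities and making a weighted random draw, is the "most likely word, with some randomness thrown in for flavor" idea.

```python
import math
import random

# Made-up next-word scores ("logits") for the context
# "two bald guys fighting over a ___". Purely illustrative numbers.
logits = {"comb": 4.0, "hairbrush": 2.5, "mirror": 1.0, "banana": -1.0}

def sample_next_word(logits, temperature=0.8):
    """Pick the next word: softmax over scores, then a weighted random draw.

    temperature controls the randomness: lower values make the
    highest-scoring word win almost every time; higher values let
    unlikely words sneak in.
    """
    scaled = {word: score / temperature for word, score in logits.items()}
    max_score = max(scaled.values())  # subtract the max for numerical stability
    exps = {word: math.exp(s - max_score) for word, s in scaled.items()}
    total = sum(exps.values())
    probs = {word: e / total for word, e in exps.items()}
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word(logits))  # usually "comb", occasionally something else
```

Run it a few times and "comb" comes out most often, but not always, which is why asking the same question twice can get two differently worded answers.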
I don't know why I am always so ornery with ChatGPT. Maybe I should be nicer. I mean, when it becomes aware, I don't want to be remembered as the one that was always sort of a dick.
Note: At some point, we will tire of folks writing blogs describing their arguments with ChatGPT. This post assumes we have not yet reached that point.