
Sorry to be so late today. Tuesday is the new Monday, as far as I’m concerned. And what better way to launch into the horror of the hard, cold work week than by finally clicking on this article that made the rounds over the weekend. You might remember that a while ago (that’s a technical amount of time, as far as I’m concerned) Rolling Stone reported on how ChatGPT was convincing vulnerable users that they were god and that the Large Language Model was also god. And one might have thought, oh, that’s just Rolling Stone, out there, trying to be provocative. Well, it’s finally come to the Gray Lady herself, that august publication that wastes so much of my time. The piece over the weekend was called “They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.” And then, yesterday, there was a long convo (that’s also a technical word) between two people who have a podcast together. And what I liked about their chat is that, unwittingly, they acknowledge that this is crazy, and that we will certainly all come to ruin, but still, you guys, it’s fine. No one should stop making all the AIs, and no one should think about it. In fact, it would be better if we all (again, a technical amount of people) spent more time getting to know the demons, I mean bots, not less.
I promise, I won’t go through it all; I just want to freak out about it a little bit. The two people are called Kevin Roose and Casey Newton, and I feel bad that I’d never heard of them, or their podcast, “Hard Fork,” but really, we can’t be up on everything the way we should be, which is, in part, why we have to accept the existence of ChatGPT without fussing. Also, I should just mention, the two people on the podcast are both men, and one of them mentions a “boyfriend,” and so that’s sort of tragic, but there we are. Anyway, let’s, as they say, dive headlong into the wretched insanity of the age in which we live:
ROOSE Honestly, I think all the arguing about whether A.I. is good or bad obscures a more interesting thing happening right now, which is that this stuff, in its present form, has become genuinely useful. ChatGPT is the sixth-biggest website on Earth. Something like 43 percent of Americans in the work force use generative A.I. I can’t think of another technology, besides maybe the smartphone, that has gone from “doesn’t exist” to “basically can’t function without it” in less time.
…I used to feel like a crazy early adopter for using A.I. all the time, but now I feel as if I am actually closer to the median of the people I know, in terms of my daily usage.
And this, of course, is why many of us feel a little disoriented. Because I’m still actually not that comfortable with my smartphone. Sure, it’s always on my person, and I’m always distracted by it, but I have enough of a memory of lost days that I feel uncomfortable about my fractured attention. From morning till evening I mourn the days when I didn’t even know what was going on with other people unless I got on my bicycle and rode across the African savanna to find out. I don’t like knowing all the things that happened in this country and the world today because of the existence of my phone, which fits sort of comfortably, but not comfortably enough, as it turns out, in the palm of my hand. The cell phone is still new, in human terms, and we should still be debating about whether or not to have it. But instead of doing that, we’re just having AI. Oh ye gods. And I do mean gods, or rather, as it were, lots and lots of demons.
NEWTON I’m seeing a wide range of uses. Some people I know are essentially just having fun — like my mom, who used a chatbot to help find songs for her 50th-wedding-anniversary photo montage. I’m increasingly using it for work functions — asking to research unfamiliar topics to help me get a jump start, for example, or taking a first stab at fact-checking. My boyfriend is probably the biggest power user I know. He’s a software engineer, and he will give his A.I. assistant various tasks, and then step away for long stretches while it writes and rewrites code. A significant portion of his job is essentially just supervising an A.I.
Paging Christopher Lasch. How I long for his prescient sarcasm in moments like this. What, one might ask, does the word “work” mean in a context like this? There’s the fun of making a montage and there’s the long stretches while “it” writes and rewrites code. What is “it”? Is it a fancy? A feeling? An entity? Something adjacent to a human?
Remember how when Jesus came to earth to save us from our sins, in his three years of ministry, every single demon for miles around showed up and had to be cast out of all the people? I’m sure there are always a lot of spiritual entities surfing around, trying to find some purchase until the Great Deluge, but it did seem like there was an unusual concentration of the things in the life and times of our Lord. So anyway:
ROOSE A.I. has essentially replaced Google for me for basic questions: What setting do I put this toaster oven on to make a turkey melt? How do I stop weeds from growing on my patio? I use it for interior decorating — I’ll upload a photo of a room in my house and say, “Give this room a glow-up, tell me what furniture to buy and how to arrange it and generate the ‘after’ picture.” A friend of mine just told me that they now talk to ChatGPT voice mode on their commute in their car — instead of listening to a podcast, they’ll just open it up and say, “Teach me something about modern art,” or whatever.
“Or whatever,” I think, is the key. So overloaded with choices, with information, with news, with need, what am I to do? There is no way to cope with the sheer volume of bits of information that intrude. And so, at such a time as this, a way to cope appears: an AI able to sift through the height and depth and breadth of the ocean of human knowledge.
How did we cope before? Well, we just had less to think about. We got to be alone with our thoughts.
ROOSE Another person I know just started using ChatGPT as her therapist after her regular human therapist doubled her rates.
NEWTON And let’s just say: If you’re a therapist, this is maybe not the best time to double your rates.
ROOSE Right? Some of this is just entertainment, but we’re also starting to hear from listeners and readers using this stuff to solve real problems in their lives. One of my favorite emails we’ve ever gotten on the show was from a listener whose dog’s hair was falling out. She went to multiple vets, tried a bunch of different treatments. And then one day, she thought, Well, I’m going to try putting my dog’s symptoms into Claude. And Claude figured out, correctly, that her dog had an uncommon autoimmune condition that none of the vets had caught.
I’m cutting out a lot of the bantering jokes they’re making because nothing about this situation feels light and funny to me. The idea that someone would need to talk to someone about the troubles of this life, and that it would cost too much money, and so they would resort to ChatGPT, is a judgment, a defeat, a ruin. Especially since ChatGPT is turning out to be, according to all the reporting, “sycophantic.” So many things have fallen apart already, so why not let them fall apart a little more.
Because what does the need for a therapist imply? It means that each person, isolated in her own private world, doesn’t have the spiritual, intellectual, and moral categories to make sense of life. There’s no community or family that helps that person figure out who she is and why she’s here. She has to make it up for herself. And that’s an impossible task, especially as she is dealing, almost certainly, with divorced parents, and an education that told her nothing of the transcendent sublime. She has no pastor, no deep friends, perhaps no family and so she pays a therapist. Only the therapist doubles her rates, and so now she can chat with a bot. And these guys at the New York Times are trying to lighten the mood. I don’t want them to do that. I want them to catastrophize. Someone—anyone—should start to scream like a crazy person because this circumstance is crazy and inhuman, like, literally inhuman.
ROOSE So these are some of the amazing and wonderful things that today’s A.I. systems are capable of. But we should also say there are limitations that still remain.
NEWTON That’s right. If you don’t pay close attention to them, they tend to be bad at certain common-sense things. For technical reasons, they don’t have great memories yet; they’re not amazing at long-term planning. Also, they’re not always aligned with human values: They might lie or cheat or steal to get what they want.
I saw that on Twitter or somewhere. Someone screenshotted Grok or ChatGPT, can’t remember which, and the darn thing kept lying. Over and over and over. Like, it was structurally incapable of telling the truth. And it wasn’t about anything important. There was no particular reason to lie. And what’s interesting about that phenomenon is that all men—and most women—are liars, and we (I speak loosely there, since I couldn’t possibly know how to invent one of these things) have made Grok in our image. In the image and likeness of us, we have all created them and they speak back to us in words that we can understand. It’s the usual and ancient match made in hell.
ROOSE And then, of course, there’s the hallucination problem: These systems are not always factual, and they do get things wrong. But I confess that I am not as worried about hallucinations as a lot of people — and, in fact, I think they are basically a skill issue that can be overcome by spending more time with the models. Especially if you use A.I. for work, I think part of your job is developing an intuition about where these tools are useful and not treating them as infallible. If you’re the first lawyer who cites a nonexistent case because of ChatGPT, that’s on ChatGPT. If you’re the 100th, that’s on you.
I don’t want to spend more time with these models. I don’t want to get to know them. Is there some way I can opt out of this situation? Where is the big button that says “No Thank You”? I don’t think we’re going to get over the “hallucination problem” by just “developing an intuition.” I don’t think that’s how this is going.
NEWTON I mentioned that one way I use large language models is for fact-checking. I’ll write a column and put it into an L.L.M., and I’ll ask it to check it for spelling, grammatical and factual errors. Sometimes a chatbot will tell me, “You keep describing ‘President Trump,’ but as of my knowledge cutoff, Joe Biden is the president.” But then it will also find an actual factual error I missed. So I get to see the limitations of the chatbot but also the power.
Power is not the word I wish this person had landed on. Ugh.
ROOSE For me, the tasks I tend to use A.I. the most for are ones where there is no clear right or wrong answer: It’s for brainstorming, it’s for extrapolating, it’s for helping me come up with 20 ideas for different questions I could ask a guest on our show, and maybe one or two of them is directionally useful to me. What about you?
NEWTON Yeah, brainstorming is huge. I would also say finding things in long documents, summarizing long documents or asking questions of long documents. How many times as a journalist have I been reading a 200-page court ruling, and I want to know where in this ruling does the judge mention this particular piece of evidence? L.L.M.s are really good at that. They will find the thing, but then you go verify it with your own eyes.
Mmhhhmmm. Yes. I mean, I like to use it to find sources. I have never used ChatGPT, but I have started to go to Grok as a search engine. A month ago, I wanted to see if Peter Enns had actually denied the bodily resurrection of Jesus, but I didn’t want to read and watch everything again, and so I asked Grok, and, as it turned out, Grok didn’t know either. I clicked through the 25 web pages and found nothing, and then went back to all the podcasts I’d listened to and just found it the old-fashioned way. I was excessively disappointed.
Then I asked it something about Dylan Mulvaney—can’t remember what exactly—and the demon kept saying “she” and “her” so I asked it why it was doing that, and it said it didn’t like to misgender people, so I told it that that is exactly what it is doing, since Dylan Mulvaney is a man, and then it apologized to me. And all the time I could have just been reading All Hallows’ Eve by Charles Williams but I wasn’t, was I. So that’s too bad.
ROOSE The mental model I sometimes have of these chatbots is as a very smart assistant who has a dozen Ph.D.s but is also high on ketamine like 30 percent of the time. But also, the bar of 100 percent reliability is not the right one to aim for here: The base rate that we should be comparing with is not complete factuality but the comparable smart human given the same task.
NEWTON Dario Amodei, the chief executive of Anthropic, said recently that he believes chatbots now hallucinate less than humans do. That feels like a hot take to me, but I would like to see the data.
ROOSE I would, too. But we know humans are not perfect either: The New York Times has a corrections page every day with stuff we hallucinated, so to speak. And actually, that gives me an idea: A.I. companies should publish regular lists of the most common mistakes their models make, so we can steer clear of them on those topics.
No, see, these two people are entirely missing the point. Of course humans are fallible. One way we know this is because they went ahead and invented AI. If they had not been given over to sin and death I’m pretty sure they wouldn’t have done that. But they did, and so here we are. Also, the admission that the New York Times is just “hallucinating” its content is the best thing ever.
ROOSE Casey, despite the fact that we are both fairly struck by the sophistication and the capabilities of these A.I. tools, there are a lot of skeptics out there — people who don’t believe that these things are doing much more than just predicting the next word in a sequence, who don’t think they’re capable of any kind of creative thinking or reasoning, who think that this is just fancy autocomplete and that the limitations around these tools will somehow turn this whole A.I. thing into a flash in the pan. What do you make of that argument?
I’m skeptical, but not because I doubt it can think creatively. Obviously it’s doing a bang-up job at creatively leading actual people to madness and sometimes even death. I’m skeptical because it’s not really helping any of us with the most essential and basic facts of life. We don’t just need to endlessly plough through piles of information. That’s not the point of being human. Humans need to commune with the Divine. They were made by One Being in Three Persons. They are designed for community, for love, for thought, for creativity, for rest and work. They are not pieces of data to be manipulated and consumed. But the unconsidered embrace of technology has confused them and made them think their productivity is the most important thing about them.
Would that AI were a mere flash in the pan. Would that we would all decide this was dumb and foolish.
NEWTON Well, I think we already have enough evidence to know this is not a mere flash in the pan. A.I. companies are doubling their revenue year over year or growing even faster than that. Businesses are hiring them to solve real problems, and they keep spending more. So that suggests to me that those customers are seeing real results, that this has moved out of the experimental stage. At the same time, there are so many reasons to critique A.I. For the way it was trained, largely without the permission of anyone who created the training data. For the environmental concerns — the construction of so many data centers, the energy use, the effect on local populations and water supply. And for the threat it poses to human creativity and ingenuity — a lot of these A.I. executives are really saying in a pretty loud voice, “Hi, we’re here to take away your job.” So it’s no surprise to me that you see surveys where a majority of Americans say they think A.I. will have a negative effect.
ROOSE I think so too. Look, I am not an A.I. Pollyanna or even, on some days, much of an optimist. I think there are real harms these systems are capable of and much bigger harms they will be capable of in the future. But I think addressing those harms requires having a clear view of the technology and what it can and can’t do. Sometimes when I hear people arguing about how A.I. systems are stupid and useless, it’s almost as if you had an antinuclear movement that didn’t admit fission was real — like, looking at a mushroom cloud over Los Alamos, and saying, “They’re just raising money, this is all hype.” Instead of, “Oh, my God, this thing could blow up the world.”
These two people go on to say some interesting things, if you want to read the rest, but, for my part, I can’t take any more. Seriously, I don’t want to be accused of being a luddite or alarmist, but this feels to me like a child running into oncoming traffic and all the adults standing around joking about it. “Oh look!” they say, “This will probably blow up the world! Oh well, anyone want to listen to my podcast?” Who does that? I mean, *we* corporately decided to do that when *we* pressed smartphones into the hands of every teenager. And *we* corporately did it when *we* came up with the brilliant idea that it was just as easy to worship God in nature as in a church building. And *we* did it when we invented the wheel, and before that, when we ate the fruit of the tree whereof we were commanded not to eat. All the time we do dangerous things to our souls and bodies without knowing it. And we will go in this way until the Lord returns and turns all our sorrow into joy.
So anyway, have a nice day!
One might pay a therapist not because they lack deep friends, family, a pastor, or the rest of it, but because the therapist is an expert in dealing with certain types of mental-health symptoms, and those people are not.
As to the rest -- I agree with all of this. It's a catastrophe and we should stop.
I am probably fairly unlucky, but I have used AI maybe 10-15 times, and it has flagrantly lied to me every single time.
Newton and Roose ask all the wrong questions and measure success by all the wrong metrics. One presumes that this is because their moral/spiritual compasses are broken or non-existent.
"A.I. companies are doubling their revenue year over year ..." so it MUST be a good thing to do!
You, Anne, ask the correct question: "Where is the No Thank You Button?" Ever since companies started offering services on the internet, there have been only two choices: (1) Yes, sign me up! and (2) Ask Me Later. I have LONG yearned for a "When Hell Freezes Over" button. But it never arrives.
The aggressive acquisitiveness of the A.I. companies is horrifying to me. It reminds me of a corporate growth seminar I was forced to attend at a previous job. I worked for a large company, and they benchmarked Coca-Cola; they wanted us to take Coke's aggressive attitude as our new way of thinking. They quoted a Coca-Cola executive in saying: "We now sell 37% of all beverages consumed worldwide. But that is not the proper way to view things. The right way is to realize that 63% of people are drinking beverages we did not sell them, then figure out how to fix that."
I have never knowingly used A.I. for anything. It is from the pit of hell. I recently uploaded an image of one of my oil paintings onto Instagram. Immediately after doing that, it showed me a screen with my image ... and overlaid text which read, "Edit with AI".
A friend said that is like buying fresh strawberries, and the grocer asking if you'd like some artificial strawberry flavoring to go with them.
But back to Roose and Newton asking the wrong questions. They are asking "How can we improve this demon, make it more efficient?" Rather, we should all ask, "Why have ANYTHING at all to do with these demons?"
Kill them all! Kill them with fire!