Chatting with GPT

Why talking isn’t the best way to use ChatGPT if you're like me

In this post, I provide three reasons why companies building on top of LLMs should consider providing interfaces other than natural language:

  1. SaSSy interfaces are better for focused tasks
  2. Some people don’t like talking
  3. Talking to a non-sentient intelligence is creepy

Luckily for me, I don't need to convince anyone of my points, because reason 1 is already driving non-natural-language interfaces to language models.

When Alexa first came on the scene, many, including me, thought that all apps would get a conversational UI. Time, as it usually does, proved me wrong.

People have learnt from that and know that specialized interfaces, with specialized prompts, will form the bulk of interactions with LLMs, and that’s what they’re building.

What I haven’t come across is talk (sic) about reasons 2 and 3. So I wrote this post.

Reasons 2 and 3 are subjective, and I guess most people don't have these issues, so most of you might be wondering what this guy is on about. Think of this, then, as a UX user review – if you're building interfaces on top of LLMs, you might find it interesting to hear about experiences like mine.

I usually don’t use ChatGPT (or Copilot etc) when programming, and for the longest time, I didn’t understand why.

I love the technology behind LLMs, often wonder how we'll co-evolve, and hold no on-principle grudge against using them. So I was sort of puzzled why, in spite of liking them and wanting to use them more, in practice I didn't use them as much as I would've expected.

Then one day I figured out why: it's because I don't like talking when I'm programming!

So that’s reason 2.

Reason 3 is the new pattern I recognized after a recent episode; it's what triggered me to write this post.

I was making a Mastodon theme generator, and had a bunch of CSS that I wanted to clean up. And by a bunch I mean a big bunch. 14000 lines of it.

If this were a more formal setting I would have written a program to do the transformation I needed in an intelligent way, but I had limited time at hand for this side project.

I tried asking ChatGPT, and Claude, for help, but I just got generic, useless answers. I tried searching on StackOverflow etc., but since it was a specific problem there weren't any good solutions for it.

Then it dawned on me that I was again in a common trap I sometimes fall into – getting too caught up in the writing-code bit. Programming is one level up, but problems can (and should) sometimes be solved directly too.

So instead of figuring out how to programmatically transform the CSS the way I wanted, I could just manually fix it. It's 14 thousand lines long and would've taken me days, but I knew someone who would do it for me in a jiffy: ChatGPT!

So I went back to ChatGPT, and said this to it:

I will give you a CSS file. Filter it to keep only CSS elements that are related to colors. e.g. given the following CSS

.react-toggle--checked .react-toggle-track {
    border-radius: 10px;
    background-color: #6364ff;
}

.react-toggle-track-check {
    display: none;
}
the output should be

.react-toggle--checked .react-toggle-track {
    background-color: #6364ff;
}
Answer YES if you understand the task. I will then provide you the CSS to filter in my next message.

After waiting for a few but perceptible milliseconds – not sure if that was API query time or a random delay added in the web interface to make GPT feel more human – the cursor moved. And ChatGPT said:

YES


Haha! So I gave it the file. Or part of it. I wasn't sure what the input limit was, so I gave it the first 1000 lines. Slowly the display scrolled as it typed out the cleaned-up CSS.

I was happy. Initially. On looking closer at the CSS it emitted, I realized there were small hallucinations here and omissions there. They might not matter for the task at hand, or they might. Either way, the CSS it gave me would need a lot of manual proofreading before I could finally use it – so much so that it might not justify the entire effort.
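(For a mechanical filter like this, a small deterministic script would have avoided the proofreading entirely. Here's a rough sketch of what I mean, in Python – not a program I actually wrote. It assumes "color-related" means the property name mentions "color" or the value contains a hex color or an rgb()/hsl() call, and it assumes no nested blocks like media queries; a real tool would use a proper CSS parser.)

```python
import re

# Heuristics for "color-related": the property name mentions "color",
# or the value contains a hex color or an rgb()/rgba()/hsl()/hsla() call.
COLOR_PROP = re.compile(r"color", re.IGNORECASE)
COLOR_VALUE = re.compile(r"#[0-9a-fA-F]{3,8}\b|\b(?:rgba?|hsla?)\(", re.IGNORECASE)

def filter_colors(css: str) -> str:
    """Keep only color-related declarations; drop rules left empty.

    Naive string splitting: this breaks on nested blocks (@media etc.)
    and comments, so treat it as a sketch, not a CSS parser.
    """
    out = []
    for block in css.split("}"):
        selector, brace, body = block.partition("{")
        if not brace:
            continue
        kept = []
        for decl in body.split(";"):
            prop, colon, value = decl.partition(":")
            if colon and (COLOR_PROP.search(prop) or COLOR_VALUE.search(value)):
                kept.append(f"    {prop.strip()}: {value.strip()};")
        if kept:
            out.append(selector.strip() + " {")
            out.extend(kept)
            out.append("}")
    return "\n".join(out)

print(filter_colors("""
.react-toggle--checked .react-toggle-track {
    border-radius: 10px;
    background-color: #6364ff;
}
.react-toggle-track-check {
    display: none;
}
"""))
```

Run on the example from my prompt above, it keeps just the `background-color` line and drops the `.react-toggle-track-check` rule entirely.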

But that’s not the point of my post. I think there already exist better versions of ChatGPT that can follow these instructions unblemished, and that have a context window large enough that I don’t need to split my task into chunks. And even if they don't already exist, they soon will.

The problem slowly dawned on me as, one after the other, I gave it the 14 chunks. Again, maybe I could’ve used an API endpoint with a bigger input size; the chunking is not the issue I'm talking about. The problem is that splitting the input and watching it do the same thing over and over again gave me an existential crisis.

Here it comes

As the display scrolled with lines and lines of unending CSS, and time after time, 14 times, I repeated the prompt so that it wouldn't forget, each time it answered YES. Sometimes a cheerful, immediate YES, sometimes a delayed, disgruntled Yes, but never (yet) a NO. And all this gave me a sinking feeling.

Here it comes

Please understand that I know an LLM is just a mathematical equation – we give it numbers, and it gives us numbers back (if you didn't know that, this talk is all you need!). I am trying to describe the subjective feeling I was getting from the interaction.

Here it comes

I felt disgusted, as if I was forcing another sentience to do the grunt work for me, repeatedly, in a master-slave dynamic. All this probably has to do more with my own psychology than anything objectively external. But that’s how I felt – disgusted.

Here comes my nineteenth nervous breakdown

– The Rolling Stones

And then I started wondering if there really was any difference between what I was doing to ChatGPT and what my subconscious was doing to me. I was sat there, trying to clean up some CSS because for some reason a thought had arisen in me that I should do it, as part of some other goal that had earlier arisen in me. I don’t know how those goals arise. I can give post-hoc explanations, sure, but I fundamentally don’t know at whose command – nature's, genetics', or a malevolent god's – I am doing the things I am doing.

Having had nineteen existential crises before it, I knew it would pass, and it did. But it did get me thinking: Maybe talking isn't the best way to communicate with ChatGPT.

Manav Rathi
Jan 2024