Prompting debate

Surely there must be a scale that ranks our resistance to technology, from fully resistant (1 – won’t use it) to no resistance at all (10 – fully embraced it). I think I’m now a “6”: techno-curious.

I’ve been using the recent beta release of ChatGPT for the past couple of weeks. On the surface it resembles a search engine like Google, with a search bar where you type a prompt. But I bet a lot of people at Google are nervous about ChatGPT, because it threatens to upend ‘search’: unlike search engines that return a list of related web pages, ChatGPT returns written, crafted text. And it has a kind of written voice, the way Siri or Alexa have spoken ones. You can ask it to modify that voice (make it funnier, or more jargony) and it will return a new version instantly.

So how does it do that? ChatGPT is a natural language processing (NLP) tool, which sounds impressive, but it’s also just a bot. Most of us are used to bots being pretty lame: the faux customer service agents we interact with on web pages. I’ve gotten used to batting them away like flies.

But this is a different kind of bot, one most of us haven’t seen before. And that’s why ChatGPT is getting a lot of attention. As an NLP tool, it interprets inputs (commands, prompts) against a deep bank of language it’s been trained on. That deep bank is, essentially, the internet: a vast snapshot of web text it uses as its library to mimic what we want based on our prompt. Like search engines, ChatGPT is also highly sensitive to variations in prompts. Tweak the prompt and you get a much different result. This emerging value of prompts is even prompting new job opportunities in prompt engineering.
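For the technically curious, that prompt sensitivity is easy to see programmatically. Here is a minimal sketch, assuming the OpenAI Python client and the `gpt-3.5-turbo` model (both assumptions on my part, not details from this post), that sends the same draft text with two different "voice" instructions and gets back two different rewrites:

```python
import os


def make_prompt(text: str, voice: str) -> str:
    """Build a rewrite prompt; tweaking `voice` changes the result."""
    return f"Rewrite the following in a {voice} voice:\n\n{text}"


def rewrite(text: str, voice: str) -> str:
    """Send the prompt to a chat model (hypothetical usage of the OpenAI client)."""
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": make_prompt(text, voice)}],
    )
    return resp.choices[0].message.content


if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    draft = "Our quarterly results exceeded expectations."
    for voice in ("funnier", "more jargony"):
        # Same draft, different voice instruction: expect noticeably different text.
        print(voice, "->", rewrite(draft, voice))
```

The only thing that changes between the two calls is one word in the prompt, which is the whole idea behind "prompt engineering."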

With just a few prompts, people are saying you can write a 60K-word novel with ChatGPT, complete with dialogue and character development. And like most things we hear, some of that is true and some of it isn’t.

But all this has sparked headlines and discussion around what it means for white-collar workers and creatives, even for developers, now that similar AI technology can generate software code. Microsoft CEO Satya Nadella has cited an example of “coders embracing automated code” from a renowned coder who says he’s automating 80% of his code (“I don’t even really code, I prompt. & edit.”).

Code writing was thought by many to be an untouchable realm for machines, a uniquely human endeavor. But we thought the same thing about poetry, painting, filmmaking, or graphic design. And image-generators like DALL-E (which work similarly to ChatGPT) prove that bots can learn to create too. Which makes you wonder if we humans are mere language processing tools ourselves.

You can feel a number of ways about this, and every way you feel about it is right. Getting back to my techno-resistance scale, I’ve had a mixture of fear and fascination towards technology for the past several years. Ever since we started talking to machines or counting on them for driving directions, I’ve resisted.

But in the realm of creativity, how well does that resistance benefit me and my work? And how much does hubris enter in? That is a super interesting, confronting question for me. How precious and unique is the manual effort I perform?

Lots of existential fodder here. I postulated to my wife last week, will there come a day we don’t need to teach handwriting in schools anymore? What would be lost in that? Is that human? Are we too attached to what we know to prevent ourselves from knowing more?

And she reminded me that there was a time when orators didn’t want people to learn handwriting because then people wouldn’t have to memorize the Iliad anymore. Do we need to memorize the Iliad? Or have we evolved by learning to write by hand? Will we evolve by learning to write prompts instead?

I think about the Iliad and that oral tradition and it all feels so 8th century to me. Isn’t evolution about getting something new by giving something up? Losing the digits you don’t need anymore when you stop climbing trees. Leaving the chrysalis behind so you can fly. Using the automated editor when you forget how to spell words like chrysalis. Ditching the dictionary.

And while evolution used to take eons, it seems we can make it go a lot faster now. Is that natural? Or artificial?

Is that what bothers me most?

Categories: technology, writing

41 replies

  1. The level of detail in the face terrifies me, but I need to learn to be less resistant to change and see it as a new development in science. Fear is never good unless it’s used in the overcoming sense; best to proceed with caution instead.

  2. Which makes you wonder if we humans are mere language processing tools ourselves. — I’d say yes, and with strict limits as well. We learn much slower than bots, and many of us will be quickly bypassed by auto-generated language too nuanced for us (me) to fully grasp. Presumably, these programs will, in time, be able to mimic whatever effort we humans put into creating art. Then they will surpass us, become self-aware, and you know what happens next.

  3. I used to be a 10, I’m now a 1. The faster things spin up, the more I want to slow down.

  4. I’m always hesitant to hop on board new tech. Didn’t get a smartphone till 2019! But I think it has more to do with liking simplicity rather than being afraid of the new stuff. When I remember what it was like not to be checking things online all the time, I get misty-eyed.

    So far, I’m not too impressed with what ChatGPT pumps out, but we all know it will improve. And when it does improve, we’ll all just have to live with whatever effects it has on society. We’ll adapt. We’ll lose our virtual tails. Same as it ever was, I guess.

    Thanks for the thought provocation!

  5. I sit in the middle of that receptiveness scale – to reduce seesawing up and down.
    The creative element is impressive. But what about the factual side? Isn’t the source material for ChatGPT all the good, bad and indifferent stuff that we have put on the Internet? The truth, the post-modern truth and the alt-truth?
    Machine learning truth?
    I should have a play with this thing, see if it can improve the commentary I write on OH&S stuff at work whilst it reduces the burden.
    Thanks Bill

    • Definitely play around with it! It’s free and takes just a minute to sign up. Though when I tried to get on today at 4 AM it was already at capacity, so we’ll see how long the ‘free lunch’ lasts. I think some people are clearly running a lot through it, putting a strain on how much computing power they can support. You’re exactly right about the factual side. Unlike Google, where you get the attribution, you can’t see where the content is coming from, because it’s a kind of amalgam, like you said, of the good/bad etc. It’s that way for purely creative stuff you’d want to generate as your own, but less of a concern for other formats. Worth checking out I think.

  6. Once again a pinklightsabre post has catalysed an instant response/reaction. This time at Vinyl Connection. Great debate to be having. Thanks my friend.

  7. I’m not sure how I feel about AI. Realistically, it’s in its infancy. It’s only going to get more sophisticated and will make some types of jobs, like copywriting, obsolete. But style and imagination still count. I don’t think all creativity will become stifled.

    The bigger question is, will it contribute to the dumbing down of the population? Just a simple thing like texting has turned a whole generation into people who can’t sit still long enough to read or write a book. The “tell me what to think (and what I want to hear)” instant-gratification, bite-sized approach will only become more pronounced.

    I’m reminded of an old sci-fi book called “The Marching Morons.” (Look it up.) Seems like we’re going further down that path all the time.

    • I’m with you on the dumbing down, Dave (sorry for the alliteration there). And you’re former tech, if I’m right, too. Also the shifting attention, I think there’s something to that. Getting action and response out of things so quickly and with such little effort, what will that do to our minds? Good to look for comparable evidence like you have with the texting, and consider that x1K. Thanks for sharing, love the sentiment and the tip. Makes me wonder if it influenced WALL-E. I think that’s where we’re headed. Bunch of fatsos.

  8. Fascination & revulsion. Because I’ve never had very good fine motor skills, DALL-E sounds wonderful to me: at a minimum, a fantastic playground, but maybe something more. Or is it yet another symptom of laziness, desire for shortcuts and lack of discipline? I remember a handout my first week in college, warning that the profs were very good at spotting “mosaic plagiarism,” which I’m wondering might be a decent definition of ChatGPT.

    • Ha! Mosaic plagiarism! That sounds like the digital marketing consulting work I do! No seriously. Never. Dawn just told me a Harvard prof has developed an app that can detect if something is ChatGPTeed. Interesting times! Fascination & revulsion, that’s fair. Thanks Robert for chiming in!


  9. What bothers me most is having no clue where the data comes from … if indeed facts vs. fiction. Just look at politics. OUCH! I don’t really think human creativity will go away, and certainly a bit of creativity went into generating this AI “ease”. But it isn’t my style, I’m too ingrained in “current ways”. I might be a 4 on your scale. I still follow hunches.

  10. I have been in many discussions over the last few weeks around how to best leverage this technology as a tool to augment, and not replace, the jobs of many. The tech-du-jour, indie-darling appeal it currently has is largely due to the way the words are presented back to the user (and media hype). The information used to return such interesting results is actually almost 2 1/2 years old. The current model has 175 billion parameters and was trained on a huge corpus (books, movies, speeches, etc.) that it consumes in about six months. It has been rumored that GPT-4 will have over 100 trillion parameters. That’ll soon be rivaling the number of synapse connections in the human brain.

    TBH, being exposed to it the first time, I was reminded of primitive cultures seeing something like a transistor radio or a photograph of themselves for the first time. Slightly scary, but exciting all the same.

    • Brilliantly said, and thanks for all this added color and detail here, Don. I’ve heard similar about the planned jump to GPT-4, and the human brain analogy. And yes, I agree the initial craze is likely in part due to that dialogue factor. That is, it’s more the interaction than the quality of content produced. Both are amazing, but the former feels almost more so, just in that dialogue capability. Thanks too for reading my piece over at LI, bonus points for that. Gold star for Robot Boy 🤓


  11. Decades ago I first read a short story by Roald Dahl, which he’d written decades before that, called The Great Automatic Grammatizator about a man who invents a machine that writes novels. The story stayed with me though I never thought we’d get to the point where it was plausible. Yet here we are, or seemingly close to it.
    The question I still struggle with is, if we can’t tell the difference between art made by a person and art made by a machine does art even have value anymore? And if art no longer has value then artists don’t. To be fair technology has prompted artistic developments. The camera led to Impressionism and the other -isms that followed. Looking at history, though, it doesn’t seem realistic that what people make will still be valued when a machine can do the same thing just as well, if not better. Technology has definitely benefited us but it’s also meant that too many people are only valued by how well they keep the machines running. At what point do we stop being users and become the used? I don’t think there’s a clear line, and we may not know until we’ve crossed it. So it may be that the real questions will be, can we go back, and will we even want to?

    • I heard the Dalai Lama said something like, in our culture (his) we judge the quality of art not on how it’s moved the audience but how it moved the artist. Which is interesting isn’t it? And relates to your comment about machines producing art, what that does to the value of the artist. Super interesting point there. I’m going to order that Dahl book too, thanks for the tip on that.

      I also like what you said about the crossing of lines, because I’ve been playing with this idea of thresholds too. The fact we all have them but they all differ, they change, they get removed altogether. And it’s just like you say: we don’t know where the line is until it’s been crossed. That’s the same way I used to define some things: you can’t see what it is until you draw the line that says what it’s not.

      What troubles me about how quickly we’ve adopted tech (and the motives behind designing tech like smartphones and social media platforms) is the fact they’re built to monetize and hold our attention (i.e. addictive, selling ads, etc.). That changes our brains. So reflecting back on the thresholds idea, we may change or lose something, like the “human” act of breaking up with someone in person vs. by text or video call…and we don’t realize that was really a threshold for us until it’s too late. And like you say, we can’t go back.

      Or worse, there’s a groundswell of people (could be cultural or generational) who change the culture enough through behaviors like this that it affects everyone else.

      So to wrap up this rant, it signifies change and change is often good, but always scary I think too. Sometimes the fear is warranted, no one really knows. What we all feel is right. Keep feeling! I guess that would be my advice.

  12. I had a play with ChatGPT, asking it for an Extreme weather policy (because I have one on hand). It took two minutes to come up with a reasonable top level draft. It can be directed to address shortcomings and an edited version might be sent to line managers to flesh out details for different occupations, tasks, equipment and situations. Not perfect but it would be an absolute flying start.
    Another consulting job gone?

    • It’s a real slippery slope. One of the main lines of argument “for” is that there will be this transitional period where we work alongside our digital assistants to QA and refine prompts, etc., plus “do other things” while the models help do things we used to do, like draft or produce rote communications. For example, in corporate settings we’ll have “digital assistants” who can pen simple emails or distribute notes and action items. And then the digital assistants may be able to effectively communicate between each other with minimal oversight. One analogy I’ve been playing with is this: have you ever been in a position where you’re asked to train someone to be your replacement at a job? It feels really shitty. Especially when you know they are younger than you, better, and more capable. That day is coming! I realize I’m fear mongering, but that’s in the back of my mind. Or the front, where the fear sits.

  13. I posed the following question to ChatGPT: “Will AI ever produce world class fiction?”

    Here is the response:

    Artificial intelligence has the potential to produce fiction, but whether it can produce “world-class” fiction is still an open question. While AI has made advancements in generating text, creating a piece of fiction that is considered “world-class” requires a level of creativity, emotional intelligence, and understanding of human experience that may be beyond the reach of current AI technology.

    Moreover, the definition of “world-class” fiction is subjective and can vary greatly depending on cultural, personal, and historical perspectives. So, whether AI-generated fiction can ever be considered “world-class” will depend on the evolution of AI technology as well as societal attitudes towards AI-generated content.

    • Yes, I’ve actually heard a similar tone when I’ve asked questions like that. Which made me think there is perhaps some built-in positioning to help stem that kind of fear that they will compete. I’d think that would be an easy thing to embed in the algorithm, a kind of positioning like that. But you know, I’d like to verify and learn more! It is a funny thing that has drawn so much interest and attention in such a short time. Here in the States, anyway.

  14. All of this lands squarely in art criticism. Does the work of art stand alone, or does the creator matter? Is the Blue Danube any less because we know Strauss was a Nazi sympathizer? A high-end art salesperson once told me that the only real thing that matters when considering buying a piece of art is how it impacts you, not who the artist was. Does art get judged by standing shoulder to shoulder with all other art of a similar nature? If a great piece of fiction is created, if you enjoy it, if it moves you and expands your mind and causes you to grow and change because of its impact, then does it matter if it came from a machine or a human? Machines can do a lot of things better than humans… why not writing? I am of course being a bit of a devil’s advocate here.

    • Well put mister! One of the things most interesting to me about this recent topic of authenticity, creativity, “mimicry,” and so on is the question about what’s really human. It gets murky the more people collaborate with machines to produce things previously only people could. Very fun times I think. And I intend to write some about Gary Numan as a result. You know, the whole synthesizer thing in the 70s. Thanks for the great insight here.


  15. I was waiting for the punchline: “And this post was written by ChatGPT!” But of course “in the style of Bill” is inimitable.

  16. Hi, I read an article this morning about MS incorporating ChatGPT-style AI into its Bing search engine. It made me wonder: does your wife get upset when you use Google? Does she use Google?

    • Yes, they had a special announcement about that at her work yesterday, I think, and I shared the article with Bruce when it came across. A great move for them! For one, it’s good that you can now attribute the sources. And no, Dawn certainly doesn’t get upset about what engine I use (though I could see why you’d think that). I think she probably uses both, which I sometimes do (in the rare times I forget to use Google). It’s a matter of training, isn’t it? The garbage-in, garbage-out, and the usage. Like the way we use our own language in a sense: we gravitate to what’s accepted and used more than what is “correct” by the rules. I’ve been working up an article about this for the past week or so and have gone really deep. (Help, I think I’m stuck!) Throw me a rope, Cann!

      Ha ha, hope you’re well, and thanks for this.


