Mark Loftus

5 Lessons From Writing 1000 Prompt Chains




Over the past 12 months I have written and tested well over 1,000 prompt chains in ChatGPT.

I thought I’d share what I’ve learned so far.


Lesson 1: Cheat sheets don’t work - for me

You’ve probably seen the same LinkedIn posts and Medium articles I have, offering prompt-writing cheat sheets and quick tips.

I’ve yet to find any of them useful other than as an aide-memoire for lessons I’ve already learned.


So what does work?


Lesson 2: ABE - always be experimenting

Create, test, iterate, retest. 

Yup. There’s no mystery here. But what I’ve learned is that prompt-chaining really lends itself to iterative learning. The feedback loop is fast and tight, so learning accumulates quickly for those who iterate.


Write a prompt, any prompt. Check the response. Change a word in the prompt and rerun. Put in a single phrase about style, role, context, or desired response format, and rerun. Change a single parameter (temperature, top_p, etc.) and rerun.
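To make that concrete, here’s a minimal sketch of the rerun-and-compare loop in Python. It assumes the OpenAI Python SDK (openai >= 1.0) and an OPENAI_API_KEY in your environment; the model name, prompt, and parameter values are just examples of the sort of thing to vary, not recommendations.

```python
# A minimal sketch of the rerun-and-compare loop: hold the prompt constant,
# change one parameter at a time, and eyeball how the responses differ.
# Assumes the OpenAI Python SDK (openai >= 1.0) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

PROMPT = "Summarise the benefits of iterative prompt testing in three bullet points."

for temperature in (0.0, 0.7, 1.2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",        # example model name; use whichever you work with
        messages=[{"role": "user", "content": PROMPT}],
        temperature=temperature,    # the single thing we vary on this run
        top_p=1.0,                  # hold everything else constant
    )
    print(f"--- temperature={temperature} ---")
    print(response.choices[0].message.content)
```

The same pattern works for any single change: swap the loop variable for a different parameter, a changed word in the prompt, or an added phrase about role or format.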


Build your own intuitive understanding of what makes a difference. Sure, use the cheat sheets if they help, but in support of your own internal model.


Lesson 3: Let’s think this through step by step

A lot of the prompts we write for Text Alchemy require ChatGPT to work systematically rather than creatively. LLMs can find it surprisingly hard to do apparently simple logical reasoning and even basic arithmetic. There’s been some really good research on this (Chain-of-Thought prompting).


The prompt phrase ‘Let’s think this through step by step in order to avoid errors and inaccuracies’ gets to the heart of how to stimulate the LLM into a different mode, one in which it will ‘reason’ in a more systematic way. It’s particularly valuable if you’re putting together longer, more complex prompt chains that involve ‘reasoning’ rather than ‘recall’. 


As an example, in my own work for Jyre, helping create an assistant that breaks Goals down into Key Results, I found that the response would frequently get date sequences muddled, such that the KRs didn’t ladder up to the Goal. Bringing in CoT prompting sorted the problem.
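For illustration, here’s a hedged sketch of what that can look like in practice. This isn’t the actual Jyre prompt: the goal, the system message, and the model name are invented for the example, and it again assumes the OpenAI Python SDK.

```python
# A sketch of Chain-of-Thought prompting applied to a goal-to-key-results task.
# The goal, system message, and model are illustrative only (not the Jyre prompt);
# assumes the OpenAI Python SDK (openai >= 1.0) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

GOAL = "Launch the new coaching product by 30 September."

COT_INSTRUCTION = (
    "Let's think this through step by step in order to avoid errors and inaccuracies. "
    "Check that every Key Result's deadline falls on or before the Goal's deadline, "
    "and that the dates run in a sensible sequence."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system", "content": "You break goals down into 3-5 measurable Key Results with dates."},
        {"role": "user", "content": f"{COT_INSTRUCTION}\n\nGoal: {GOAL}"},
    ],
    temperature=0.2,  # a lower temperature suits systematic 'reasoning' tasks
)
print(response.choices[0].message.content)
```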


Lesson 4: Work systematically

We call ChatGPT an Artificial Intelligence, and that may be so. Yet for me, LLMs combine deep stupidity with amazing abilities. Rather than seeing ChatGPT as magical (which, to be honest, it is), see it as a machine that responds best when you are really clear with it.


Ambiguities tend to be a feature of our natural communication (have you ever read an unedited transcript of a conversation you had with someone?). LLMs can struggle to interpret nuances and subtleties, and hence the output can be far from what you intended.


And the more complex your prompt chains, the greater the need to be systematic. 

So…

  • Use ### to mark out the sections of your prompt (e.g. ### Your role ###). Does it have to be ###? Not as far as I can tell, as long as you’re consistent (there’s a sketch after this list)

  • Try using [ ] or { } to delimit the context and content you’re inputting

  • Be really clear about the output you are looking for

  • Evolve a structure for your prompts and stick to it (but don’t forget Lesson 2)

  • Define the tone of voice you want and stick to it (but see above)

  • Break your prompts into component parts where possible and iteratively test how they work before combining them into longer prompt chains

  • Use the most direct plain language you can
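Pulling those points together, here’s a minimal sketch of what a structured prompt might look like. The section names, delimiters, and example content are mine, not a prescribed template; evolve your own structure and stick to it.

```python
# A sketch of a structured prompt using ### section markers and [ ] delimiters.
# The section names and content are examples only; adapt them to your own structure.
SOURCE_TEXT = "Project update: the pilot finished two weeks late but came in under budget."

prompt = f"""
### Your role ###
You are a careful editor who summarises business documents.

### Context ###
The text between the square brackets is an internal project update.

[ {SOURCE_TEXT} ]

### Task ###
Summarise the update for a senior leadership audience.

### Output format ###
Return exactly three bullet points, each under 20 words.

### Tone of voice ###
Direct, plain English. No jargon.
"""
```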


Lesson 5: There are great resources available

Here are a few of my go-to resources:


1. OpenAI’s own guidelines on writing prompts are characteristically clear and concise.

2. Maximillian Vogel has scanned a lot of prompts and has useful guidelines.

3. If you want to dig a little deeper, Ali Arsanjani seems a reliable guide - at times pretty technical, but worth persevering - and he cites his references!

4. The Prompt Engineering Guide has a huge and well-structured set of resources from the basics of zero-shot prompting all the way through to technical research papers. 


There’s so much more to learn… How do you deal with hallucinations? What role does RAG need to play in your work? How do you get ChatGPT to provide systematic analyses and ratings? How do you script data analyses?


But for me, that’s the excitement of working with LLMs. I’m certain that my pace of learning has never been higher, and with it, my personal productivity and contribution to the team have rocketed.

