Prompt Engineering Process

*No LLM was used to create this article. Enjoy.*

Before we start, you can run this notebook to see an example of good prompt engineering.

Basic prompting

Prompting is easy; prompt engineering is hard. There’s an illusion of ease because there’s no failed form submission or 404: if you give a model something, you get something back.

However, it’s your job to consistently get that specific something back. Think of it as a tree of options: you start at the root and work your way down, and each node you traverse adds another word to the output until you reach a leaf, which represents a full response. Your job is to guide the model so it’s far more likely to end up at certain leaves than at others. A good mental model is talking to a toddler: the toddler speaks fluent English but doesn’t always behave as expected, so talk to the model the same way you would talk to a toddler.
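
To make the toddler analogy concrete, here’s a small sketch contrasting a vague prompt with an explicit one; the wording is purely illustrative:

```python
# Vague vs. explicit: the explicit prompt prunes the tree of options,
# making the leaf you actually want far more likely.
vague_prompt = "Write an email about our product."

explicit_prompt = (
    "Write a 3-sentence follow-up email to a prospect who attended "
    "yesterday's demo of our invoicing tool. Friendly tone, no jargon, "
    "and end with a question proposing a 15-minute call."
)

print(vague_prompt)
print(explicit_prompt)
```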

Prompts have a well-defined structure: task, context, role, tone, format, examples, and constraints.

Follow this link if you want a good short overview of the basic structure.
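
As a rough illustration, here’s one hypothetical way to assemble those parts into a single prompt string; every value below is a placeholder, not recommended wording:

```python
# Hypothetical placeholders for each part of the structure above.
task = "Write a cold email introducing our invoicing tool to a CFO."
context = "The recipient runs finance at a 200-person logistics company."
role = "You are an experienced B2B sales copywriter."
tone = "Friendly, concise, no buzzwords."
fmt = "Three short paragraphs, under 120 words, plain text."
examples = "Example of a good cold email:\n<paste one strong example here>"
constraints = "Do not mention pricing. Do not use exclamation marks."

# Join the parts into one prompt, separated by blank lines.
prompt = "\n\n".join([role, task, context, tone, fmt, examples, constraints])
print(prompt)
```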

Examples

What are they?

To keep it simple, let’s say you’re writing a prompt to get the model to generate a cold email. An example would be a good cold email similar to the one you’re trying to produce.

Tip: One good example goes a very long way.
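
Here’s a minimal sketch of a one-shot prompt for the cold email case; the example email is a made-up placeholder, not real copy:

```python
# One-shot prompt: a single strong example anchors style, length and tone.
# The example email below is a made-up placeholder.
example_email = """Subject: Cutting invoice processing time

Hi Dana,

I noticed your team handles thousands of invoices a month.
Would a 15-minute walkthrough of how we automate that be useful next week?

Best,
Alex"""

prompt = (
    "Write a cold email to a head of finance about an invoicing tool.\n\n"
    "Here is an example of the style and length I want:\n\n"
    f"{example_email}\n\n"
    "Now write a new email for a different prospect in the same style."
)
print(prompt)
```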

Why do they matter?

They matter a lot. If you’ve ever heard of zero-shot or few-shot prompting, you’ve heard of examples being used. When benchmarking models, the difference between 10 and 30 examples can be the difference between first and second place (see the controversy over Gemini’s benchmark shot counts).

Tip: You can use a strong model such as GPT-4 to create examples.
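
For example, here’s a rough sketch of generating examples with GPT-4, assuming the OpenAI Python SDK (v1-style client) and an API key in your environment; the request wording is illustrative:

```python
# Sketch: use a stronger model to draft few-shot examples you can reuse.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "Write three short, high-quality cold emails pitching an "
            "invoicing tool to a CFO. I will use them as few-shot examples."
        ),
    }],
)
print(response.choices[0].message.content)
```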

Context

For the past year, companies and open-source projects alike have been working to extend the amount of information that can be given in a single prompt. We’re now at 200k tokens with Anthropic’s Claude 2.1. For a banana for scale: 200k tokens equate to roughly 500 book pages, and the Bible is about 1,500 pages long. It may not seem like it, but in production use cases companies very often fill the context window. At that point you’re left employing all kinds of techniques such as fine-tuning, LLMLingua, and RAG, which we’ll go into in later posts.
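
As a rough sketch, you can sanity-check how close you are to the limit by counting tokens locally, for example with tiktoken (an OpenAI tokenizer, so counts for Claude will differ somewhat); the file name and limit here are placeholders:

```python
# Sketch: check how much of the context window a prompt uses.
# tiktoken is an OpenAI tokenizer, so counts for Claude will differ somewhat.
import tiktoken

CONTEXT_LIMIT = 200_000  # e.g. Claude 2.1's advertised window
enc = tiktoken.get_encoding("cl100k_base")

prompt = open("big_prompt.txt").read()  # placeholder file name
n_tokens = len(enc.encode(prompt))

print(f"{n_tokens} tokens used out of {CONTEXT_LIMIT}")
if n_tokens > CONTEXT_LIMIT:
    print("Too long: consider RAG, compression (LLMLingua) or trimming context.")
```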

Advanced techniques

Once you’ve understood how to structure a prompt, it’s time to delve deeper into some more advanced techniques. Here’s a nifty checklist of methods we’ve found to be very effective.

  • Chain of thought
  • Skeleton of thought
  • Self consistency 
  • Generated knowledge 
  • Least to most 
  • Chain of verification 
  • Step back prompting 
  • Rephrase and respond 
  • Emotion prompt  
  • Directional stimuli 
  • Recursive Criticism and Improvement
  • Reverse prompting

You don’t have to use them all; in fact, getting the perfect prompt is more about balance than about throwing in everything you can.
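
For instance, zero-shot chain of thought is often the cheapest win: you simply ask the model to reason before it answers. Here’s a minimal sketch; the question and wording are purely illustrative:

```python
# Zero-shot chain of thought: ask for step-by-step reasoning before the answer.
question = "A store sells pens at 3 for $4. How much do 12 pens cost?"

prompt = (
    f"{question}\n\n"
    "Let's think step by step, then give the final answer on its own line, "
    "prefixed with 'Answer:'."
)
print(prompt)
```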

Tip: If you’re a beginner, you can make good use of reverse prompting. Returning to our cold email example, you would give the model an actual good cold email and ask, “What prompt generated this email?”
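
Here’s a minimal sketch of that reverse-prompting request as a plain string; the email placeholder is something you’d paste in yourself:

```python
# Reverse prompting: show the model a finished artifact and ask it to
# reconstruct the prompt that could have produced it.
good_email = "<paste a cold email you consider excellent here>"

prompt = (
    "Below is a cold email I consider excellent.\n\n"
    f"{good_email}\n\n"
    "What prompt, including role, tone, format and constraints, would "
    "generate an email like this? Reply with the prompt only."
)
print(prompt)
```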

Concepts

Not all of these will apply every time, but it’s important to keep them in mind when prompt engineering:

  • Clear instructions
  • Provide reference text
  • Split complex tasks into smaller ones
  • Use external tools
  • Be specific
  • Remove fluff 
  • Explain
  • Prioritize
  • Negative prompts are less effective than positive prompts
  • Recency bias (especially in long prompts)
  • Negating bias 
  • Be aware of hallucinations
  • How to get more examples: use GPT-4 (a stronger model)
  • Chattiness vs. succinctness
  • How to engage
  • Continue / elaborate more
  • Separators for clarity (see the sketch below)
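
To show one of the concepts above in practice, here’s a minimal sketch of using separators so the model can tell your instructions apart from the data it should work on; the marker choice and wording are illustrative:

```python
# Separators make it obvious which part is instruction and which part is data.
document = "<paste the text to summarise here>"

prompt = (
    "Summarise the text between the ### markers in three bullet points.\n\n"
    "###\n"
    f"{document}\n"
    "###"
)
print(prompt)
```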

This is not an exhaustive list. To keep this post readable in one sitting, check out this video for a deeper explanation.

Prompt engineering process

Good news: you can get better over time.
Bad news: there’s no single article you can read to get there.
If you think it’s easy, think again. If you’re still not convinced, here’s an example of a prompt that takes around an hour to get right.

Prompt engineering is an iterative process.

  1. State your problem
  2. Examine relevant information
  3. Propose a solution
  4. Adjust the solution:
    1. Test
      1. Basics
      2. Advanced
      3. Concepts
    2. Examine the output
    3. Research
      1. Ask GPT
      2. Search the web
  5. Test at scale (see the sketch after this list)
  6. Launch your solution
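
To make step 5 concrete, here’s a rough sketch of testing one prompt template over a small batch of cases. `call_model` is a hypothetical stand-in for whatever API you actually use, and the pass check is only an illustration:

```python
# Sketch of "test at scale": run one prompt template over many cases
# and count how often the output passes a simple check.
def call_model(prompt: str) -> str:
    # Hypothetical stand-in: replace with a real call to your LLM provider.
    return "Stub output that mentions invoice processing."

test_cases = [
    {"prospect": "CFO at a logistics firm", "must_contain": "invoice"},
    {"prospect": "head of ops at a clinic", "must_contain": "scheduling"},
]

template = "Write a 100-word cold email to a {prospect} about our product."

passed = 0
for case in test_cases:
    output = call_model(template.format(prospect=case["prospect"]))
    if case["must_contain"].lower() in output.lower():
        passed += 1

print(f"{passed}/{len(test_cases)} cases passed")
```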

For a rundown of how to prompt engineer, see this video.

Tip: Avoid beginner blind spots by using the revision technique: every now and then, ask the model to “revise your prompt to make it better”.
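
A minimal sketch of that revision request as a plain string; the placeholder is where your current prompt goes:

```python
# Revision technique: ask the model to critique and improve your own prompt.
current_prompt = "<paste your current prompt here>"

revision_request = (
    "Here is a prompt I am using:\n\n"
    f"{current_prompt}\n\n"
    "Revise this prompt to make it better: point out ambiguities, then "
    "output the improved prompt."
)
print(revision_request)
```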

Conclusion

Prompt engineering may give the illusion of ease, but it’s both an art and a science. Practice makes perfect.

Tip: Challenge yourself to solve difficult problems using prompts.