What is Prompt Engineering?
Prompt engineering is the process of designing and constructing prompts to guide large language models (LLMs) towards the desired outcomes. It is a relatively new field, but it has the potential to revolutionise the way we interact with computers.
LLMs are trained on massive datasets of text and code, and they can be used to perform a wide range of tasks, including generating text, translating languages, and writing different kinds of creative content. However, LLMs can be difficult to use effectively: they can be biased, produce inaccurate results, and behave unpredictably.
Prompt engineering is a way to address these challenges by providing LLMs with clear and concise instructions. By carefully designing prompts, we can guide LLMs towards generating the specific outputs we need.
Basic Steps
- Create an identity: "You are a <role>."
- Define a clear task: state the task explicitly, using action verbs.
- Provide context: give background information and describe how you want the data structured.
- Specify your output style: e.g., "Be professional."
- Iterate: follow up on the previous response to fine-tune the output.
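Put together, the steps above amount to a simple template. A minimal sketch in Python (the role, task, and context values are illustrative and not tied to any API):

```python
def build_prompt(role, task, context, style):
    """Assemble a prompt following the basic steps: identity, task, context, style."""
    return "\n".join([
        f"You are a {role}.",   # 1. identity
        f"Task: {task}",        # 2. clear task with an action verb
        f"Context: {context}",  # 3. background information
        f"Style: {style}",      # 4. output style
    ])

prompt = build_prompt(
    role="travel agent",
    task="Draft a 3-day itinerary for Lisbon.",
    context="The traveler prefers museums and seafood.",
    style="Be professional and concise.",
)
print(prompt)
```

The fifth step, iteration, happens across turns: send the assembled prompt, inspect the response, and follow up with refinements.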
LLM Patterns
| Best Practice Patterns | What It Does |
| --- | --- |
| Define an output template: "Format your response like this: product, strength, weakness, target user." | Specify the aspects the output must include. |
| Provide cues and hints: "Summarize the above text, starting like this: the key takeaways of the article are" | Define how the response should start. |
| Separate instruction from context using """ """ | Distinguish the task from the context. |
| Perspective prompting: "Develop a travel itinerary from the perspective of <role>" / "Develop a travel itinerary from the perspectives of <role A> and <role B>" | Produce output that is point-of-view specific. |
| Ask-before-answer pattern: "Before you write <A>, ask for more info you might need to improve <B>." | Self-evaluation; reduces hallucination. |
| Contextual prompting: "Summarize <A>. Evaluate <B> and consider <C>." | Provide processing logic / chain of thought. |
| Emotional prompting: "Take the sentiment and tone of <persona> into account." | Infuse emotional intelligence. |
| Laddering prompting: break a complex (long) problem into separate prompts. | Break the problem down for easier prompting / chain of thought. |
| Identify missing info: "What extra info do you need to do <A> better?" | Find out what the model needs to give better answers. |
| Self-evaluating prompting: "What can be improved about your above response?" | Guide the model to reflect and give better answers. |
| Proofreading: "Check this text for grammar and spelling errors, and provide the corrected version." | Seek editorial help from the model. |
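Several of these patterns can be generated programmatically. For example, a hypothetical helper for the separate-instruction-from-context pattern might look like:

```python
def with_context(instruction, context):
    # Separate the task from the text it operates on using triple-quote
    # delimiters, so the model does not confuse the material to process
    # with the instruction itself.
    return f'{instruction}\n"""\n{context}\n"""'

prompt = with_context(
    "Summarize the text below in three bullets.",
    "Prompt engineering guides LLMs toward desired outputs.",
)
print(prompt)
```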
Hacks
Give ChatGPT a role so that it generates answers from that role's perspective.
I want you to act as <role> / You are a <role>
Specify the audience so that it generates answers taking their attributes into account.
You are writing to <audience>
Get a tabulated view of the response.
Please return your response in a markdown table.
Less is more; to the point is good.
Please respond in bullets. Be concise.
Research something and elicit information the way you want.
Please criticize the article from the following perspectives:
Get the most random results (ChatGPT).
Please have your response temperature = 1.
Repeat things as little as possible (ChatGPT).
Please have your response presence penalty = 1.
Delay the response (ChatGPT).
Please read it and I will ask you questions later: """<text>"""
Tones:
Be formal / informal / professional / assertive / casual / down to earth / exciting / encouraging / positive / negative / concise.
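Note that temperature and presence penalty are, strictly speaking, sampling parameters of the model API rather than prompt text, so when calling the OpenAI Chat Completions API directly you can set them explicitly instead of asking for them in the prompt. A minimal sketch of such a request payload (the model name is illustrative):

```python
# Request payload for a chat completion; temperature and presence_penalty
# are set as API parameters rather than requested in the prompt text.
payload = {
    "model": "gpt-4o-mini",  # illustrative model name
    "messages": [
        {"role": "user", "content": "Brainstorm ten product names."},
    ],
    "temperature": 1,        # higher values -> more random output
    "presence_penalty": 1,   # discourages repeating tokens already present
}
print(payload["temperature"], payload["presence_penalty"])
```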
Handling Hallucination
| Cause of Hallucination | Mitigation Strategy | Prompt Patterns |
| --- | --- | --- |
| Lack of context awareness | Craft prompts that explicitly supply the relevant context. | 1. "Please consider the context of [Topic] when answering: [User Query]" 2. "In the context of [Specific Field or Topic], how would you answer: [User Query]?" |
| Inadequate evaluation metrics | Use custom evaluation scripts that focus on factual accuracy. | 1. "Provide an answer that is factually accurate for the question: [User Query]" 2. "Ensure your response is based on verified information: [User Query]" |
| User input ambiguity | Create prompts that ask for clarification when user input is ambiguous. | 1. "If [User Query] is ambiguous, please ask for clarification." 2. "If the following question is unclear, please request more information: [User Query]" |
| Lack of explainability | Design prompts that encourage the model to provide justifications for its answers. | 1. "Provide a justification along with your answer to the question: [User Query]" 2. "Include the reasoning behind your response for: [User Query]" |
| Lack of explainability | Use a chain-of-thought / reasoning chain to make the model's logic transparent. | 1. "Explain the steps you took to arrive at your answer for: [User Query]" 2. "Provide your answer along with a reasoning chain for the question: [User Query]" |
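The clarification pattern lends itself to a reusable wrapper. A minimal sketch in Python (the wording is adapted from the prompt patterns above, not a fixed API):

```python
def ambiguity_guard(user_query):
    # Wrap a query in the ambiguity-handling pattern: the model is asked
    # to request clarification instead of guessing at unclear questions.
    return (
        "If the following question is unclear, please request more "
        f"information before answering: {user_query}"
    )

prompt = ambiguity_guard("How big is it?")
print(prompt)
```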