June 14, 2023

Learn to use Prompt Engineering Techniques

GPT models, developed by OpenAI, have positioned themselves as one of the engines of artificial intelligence and Natural Language Processing (NLP). Their ability to generate human-like text and code has made them one of the great revolutions of the moment.

As Azure OpenAI specialists, we have compiled some of the most important advanced techniques in prompt design and prompt engineering for the APIs that GPT models rely on. Read on!

Introduction to Prompt Engineering

The GPT-3, GPT-3.5, and GPT-4 models are instruction-based. The user interacts with the model by entering a text directive, to which the model responds by completing the text.

Like all other generative language models, GPT models attempt to produce the set of words most likely to follow the previous text, as if a human were typing it. As more complex prompts are developed, this fundamental behavior needs to be taken into account.

Although these models are very powerful, their behavior is also very sensitive to instructions. Therefore, the creation of instructions is a very important skill to develop.

In practice, prompts are used to adjust the behavior of a model so that it performs the intended task, but this is not easy and requires experience and intuition to get right. In fact, each model behaves differently, so even more attention needs to be paid to how the following tips are applied.

Prompt Engineering Techniques in OpenAI

For Azure OpenAI GPT models, one API takes center stage in prompt engineering: Chat Completion (compatible with ChatGPT and GPT-4).

Each API requires the input data to be in a different format, which also affects the overall design of the request. The Chat Completion API takes its input as a list of formatted message dictionaries and converts them into a chat-like transcript.

A basic example would be:

import openai  # assumes the OPENAI_API_KEY environment variable is set

MODEL = "gpt-3.5-turbo"

response = openai.ChatCompletion.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Knock knock."},
        {"role": "assistant", "content": "Who's there?"},
        {"role": "user", "content": "Orange."},
    ],
    temperature=0,  # deterministic output
)

print(response["choices"][0]["message"]["content"])

Although, technically, ChatGPT models can be used with either the older Completion API or the Chat Completion API, the latter is the recommended choice. And even when prompt engineering is applied effectively, the responses generated by these LLMs still need to be validated.

Prompt Engineering Options

We summarise the most important options for getting your project off the ground.  

System Message

This is the message that appears at the beginning of the request and is used to prepare the model with instructions or information relevant to each use case. This message can be used to describe the assistant’s personality, define what the model should or should not respond to, and define the response format.

A well-designed system message increases the probability of a certain outcome, but responses may still contain errors. An important detail: even if the model is instructed to answer "I don't know" when unsure of the answer, there is no guarantee that it will comply.
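
A minimal sketch of how a system message can set the assistant's persona, scope, and response format (the bookshop scenario and message wording are illustrative, not a prescribed formula):

import openai  # openai<1.0, matching the example above

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message primes the model before any user input arrives.
        {"role": "system", "content": (
            "You are a support assistant for an online bookshop. "
            "Only answer questions about orders and shipping. "
            "If you are not sure of the answer, reply 'I don't know'. "
            "Answer in at most two sentences."
        )},
        {"role": "user", "content": "Can I change the delivery address of an order?"},
    ],
    temperature=0,
)
print(response["choices"][0]["message"]["content"])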

Few-Shot Learning

One of the best ways to adapt language models to new tasks is few-shot learning. This option provides training examples inside the prompt to give the model additional context.

When using the Chat Completion API, such messages appear between the user and the assistant. This can prepare the model to respond in a certain way, mimic specific behaviors, and generate answers to common questions.  
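
A minimal sketch of few-shot examples in the Chat Completion format; the user/assistant pairs placed before the real input demonstrate the desired behavior (the task and message wording are illustrative):

import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Rewrite the user's message in a formal tone."},
        # Few-shot examples: user/assistant pairs demonstrating the task.
        {"role": "user", "content": "hey, gonna be late to the mtg"},
        {"role": "assistant", "content": "Hello, I will be arriving late to the meeting."},
        {"role": "user", "content": "can u send me the file asap?"},
        {"role": "assistant", "content": "Could you please send me the file as soon as possible?"},
        # The real input the model should now handle in the same style.
        {"role": "user", "content": "thx for the help, talk later"},
    ],
    temperature=0,
)
print(response["choices"][0]["message"]["content"])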

Non-Chat Scenarios

Although the Chat Completion API is optimized to work with conversations, it can also be used for non-chat scenarios. For example, it can analyze feedback from reviews, queries, etc., and extract a user's degree of satisfaction or dissatisfaction, as in the sketch below.
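
A rough sketch of such a non-chat use: a single-turn sentiment extraction over a review (the review text and label set are illustrative):

import openai

review = "The product arrived late and the box was damaged, but support resolved it quickly."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": (
            "You extract the sentiment of customer reviews. "
            "Reply with exactly one word: positive, negative, or mixed."
        )},
        {"role": "user", "content": review},
    ],
    temperature=0,  # classification should be deterministic
)
print(response["choices"][0]["message"]["content"])  # expected: "mixed"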

These scenarios are also a perfect fit for Copilot (for Windows or Office 365), which puts a much more powerful chat at your disposal than usual. A common pattern is to use the chat as an assistant that executes actions on command (summarise a piece of text, build a PowerPoint presentation from an email thread, etc.).

Tips for using Prompt Engineering

Once you are clear on all of the above, it is time to get your project off the ground. If you want to do this successfully, we recommend the following: 

  • Start with clear instructions and syntax: telling the model what you want it to do at the start of the prompt, before sharing additional contextual information or examples, can be key to helping it produce higher-quality results. Clear syntax (punctuation, headings, section markers) helps convey the intent of each request.
  • Repeat instructions at the end: models can be susceptible to recency bias, i.e., the information at the end of the prompt having more influence than the information at the beginning. It is therefore advisable to give the instructions at the start of the prompt and repeat them at the end.
  • Prime the output: include descriptive words or phrases at the end of the prompt to help elicit a response that follows the desired form.
  • Break the task down: LLMs often work best if each task is broken down into smaller steps. For example, for a search task, the model can first be instructed to extract the relevant facts and then to generate the search queries used to check them (the sketch after this list combines this with chain of thought).
  • Use of functions: sometimes you may prefer the model to call external functions to obtain information instead of relying on its own parameters. You can stop generation once the model emits the function calls, execute them, and paste the results back into the prompt.
  • Chain of Thought (CoT): closely related to breaking the task down, this approach instructs the model to proceed step by step and present all the steps involved. This reduces the possibility of errors in the results and makes the model's response easier to evaluate (see the sketch after this list).
  • Specify the data source: this can be key to the quality of the results. Instructions such as "include actual quotes and statements" help reduce incorrect answers, since the response must be substantiated. You can also ask the model to first extract factual statements from a paragraph, which further improves accuracy.
  • Hallucination: closely related to the previous point, hallucination refers to output that sounds plausible but is incorrect or unrelated to the given context. Because the model generates text from learned patterns rather than verified facts, it is very important to request citations wherever sources are referenced, so the user can cross-check the information and verify its authenticity.
  • Temperature and top_p parameters: changing the temperature changes the model's output. The higher the value, the more random and varied the responses; a value closer to zero gives a more deterministic (less random or stochastic) result. Top probability (top_p) is a similar parameter that also controls randomness, but in a slightly different way: it limits sampling to the most probable tokens. It is therefore advisable to alter only one of these parameters, not both at the same time (see the variability sketch after this list).
  • Provide a reference context: this is one of the most important points. Give the model grounding data from which to extract its answers, especially when it is expected to provide reliable information rather than creative output. The more source material you provide, the closer the final answer will be to the one you want: the model has less work to do, so there is less chance of error.
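
A rough sketch combining the task-breakdown and chain-of-thought tips in a fact-checking setting (the prompt wording and the example claim are illustrative, not a prescribed formula):

import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": (
            "You are a fact-checking assistant. Work step by step: "
            "1) list the factual claims in the user's text, "
            "2) write one search query per claim, "
            "3) state which claims look doubtful and why. "
            "Show all three steps in your answer."
        )},
        {"role": "user", "content": "The Eiffel Tower was built in 1889 and is 450 meters tall."},
    ],
    temperature=0,
)
print(response["choices"][0]["message"]["content"])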

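And a small sketch of how temperature affects variability: sampling the same prompt several times at a low and a high temperature makes the contrast visible (this varies temperature only and leaves top_p at its default, following the advice above):

import openai

prompt = [{"role": "user", "content": "Suggest a name for a coffee shop."}]

for temperature in (0.0, 1.2):
    # Low temperature: near-identical answers. High temperature: varied ones.
    for _ in range(3):
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=prompt,
            temperature=temperature,
        )
        print(temperature, response["choices"][0]["message"]["content"])
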
You can achieve a much more efficient model by following the steps above. Here is a basic example of a simple prompt with context, instructions, and the task to be performed.
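
Something along these lines (the wording is illustrative):

Context: "Support response times improved by 20% this quarter, but churn among new users rose slightly."

Instructions: Answer using only the text above. If the answer is not in the text, reply "I don't know".

Task: Summarise the excerpt in one sentence for an executive audience.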

Implementing ChatGPT and OpenAI in your business

ChatGPT and OpenAI services are changing how business is and will be done. We have explained how to take advantage of prompt engineering techniques, but you need to pay close attention to security, governance, and monitoring of the data you enter.

Having a good strategy when adopting this service is essential to achieve good results and great benefits in terms of creativity, efficiency, innovation, and decision-making.

At Plain Concepts, we offer a unique OpenAI Framework with which we ensure a correct implementation, improving the efficiency of your processes and strengthening critical business security while meeting production needs.

We will accompany you and help you accelerate your journey through generative AI in all its phases, establishing together a solid foundation for you to leverage its full potential in your organization, find the best use cases, and create new business solutions tailored to you.

If you want to find out how we can help you improve your business, don’t hesitate to contact us. Get the most out of Generative AI! 

Elena Canorea
Communications Lead