Insights for Developers: Exploring Prompt Engineering in ChatGPT from deeplearning.ai course

Prompt engineering is important in AI because it plays a crucial role in determining the behavior and output of language models like ChatGPT. The prompts are the initial instructions or queries provided to the model to elicit a desired response. By carefully crafting and engineering the prompts, developers can guide the model’s output to be more accurate, relevant, and aligned with human values.

Here are a few reasons why prompt engineering is significant in AI:

  1. Controlling bias: Language models can inadvertently generate biased or inappropriate responses. Prompt engineering allows developers to mitigate bias by carefully specifying the instructions and constraints to encourage fair and unbiased outputs.
  2.  Steering the output: Language models are incredibly powerful but can be prone to generating incorrect or nonsensical responses. With prompt engineering, developers can shape the behavior of the model by providing specific prompts that help guide it toward producing accurate and coherent responses.
  3.  Customizing responses: Different applications require models to generate responses tailored to specific domains or contexts. Prompt engineering enables developers to fine-tune the model to generate responses that are relevant and specific to the given task or domain.
  4.  Improving safety and ethics: AI systems need to adhere to ethical guidelines and prioritize user safety. Prompt engineering allows developers to incorporate safety measures, such as instructing the model to avoid harmful content or promoting responsible behavior.
  5.  Enhancing user experience: Well-engineered prompts can result in more meaningful and helpful responses for users. By crafting prompts that provide clear instructions and context, developers can ensure that the model generates responses that align with user expectations and intentions.

It’s important to note that prompt engineering is an ongoing and iterative process. Developers continuously refine and improve the prompts based on feedback and evaluation, working towards optimizing the model’s behavior for specific applications while minimizing unintended biases or errors.

Below are the highlights from the course –

To start the course, we will need to get the API keys from the OpenAI developer console.

Guidelines –

Principle 1: Write clear and specific instructions

Tactic 1 – Delimitter helps to avoid possible ‘Prompt Injection’

Tactic 2 – Ask for structured output. Some of the formats that I tried include JSON, HTML, doc, XML, Key-Value pair, CSV, RTF, XLS

Tactic 3 – A) Check whether the conditions are satisfied; B) Check assumptions required to do the task

Tactic 4 – ‘Few-Shot’ Prompting. Give examples of tasks before asking the model to perform the task

Principle 2: Give the model time to “think”

Tactic 1 – A) Specify the steps required to complete a task; B) Ask for output in a specified format

Tactic 2 – Instruct the model to work out its solution before rushing to a conclusion. If we want to validate our solution, let the model draft its resolution and then compare it against our solution.

Understanding the model limitations –

Hallucination – Makes statements that sound plausible but they are not true

Tricks to avoid hallucination – Find the relevant information, then answer the question based on the relevant information. Find a way to trace to the source of truth.

Iterative –

Finding the perfect prompt for our applications is not a one-size-fits-all solution. Instead, it requires an iterative process to determine the prompt that best fits our ideas. It’s important to have a clear understanding of your idea. Begin with a prompt and if the outcome is not what you expected, evaluate why that is the case. Adjust the prompt to better align with your goals and try again.

Tactic 1 – Limit the number of words to avoid longer texts or paragraphs

Tactic 2 – Focus on the aspects that are relevant to the intended audiences

Tactic 3 – To look at the data clearly, extract the information and organize the data in a table

Summarizing –

When dealing with a large amount of text, summarizing can be useful in understanding the overall context of the data.

Tactic 1 – Summarize with a word/sentence/character limit

Tactic 2 – Summarize with a focus on the context

Tactic 3 – Use ‘Extract’ instead of ‘Summarize’

Inferring –

Trying to understand the positive/negative sentiment using an LLM rather than deploying the different models for a specific purpose

Use case 1 – Identify the different types of emotions

Use case 2 – Extract a specific piece of information from a lengthy text.

Use case 3 – Understand whether the large data falls in your area of interest

Use case 4 – ‘Zero-shot learning algorithm’ – no pre-training data for a specific data

Transforming

The LLMs can be used to pretty much used to transform the text from one format to another

Use case 1 – Translate one language to another

Use case 2 – Use it as a universal translator to infer the language and later translate it to a common language

Use case 3 – Tone transformation

Use case 4 – Data format conversions

Use case 5 – Spellcheck/Grammar check. To proofread the text, use ‘proofread’ or ‘proofread and correct’

Use case 6 – Redline Python package to show the difference between the original text and the model output

Expanding –

Expanding a brief text into a longer one.

Caution – Be responsible and do not produce spam text out of it

Use case – Customize the automated reply to a customer email

How to set the temperature of the response?

  1. For tasks that require reliability and predictability use temperature = 0
  2. For tasks that require creativity and variety use a higher temperature of upto 1

Chatbot –

At last build a custom AI chatbot with a simple user interface.

High level system components in utilizing LLMs from the cloud as APIs

Above all,

The most effective way to learn prompt engineering is by practicing. As you move forward in the course, modify the prompts and try experimenting on your own. This will help you improve your skills!

Happy learning!

Leave a comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.