OpenAI API – {AI as API} – The Gateway to the Next Generation of AI Technology

ChatGPT and DALL-E, OpenAI's flagship products, are creating a huge impact on how we interact with AI. While these products are targeted at general use cases, OpenAI also provides API offerings that businesses and tech enthusiasts can take advantage of.

It is easy to utilize these APIs and build applications around them.

Setting up an OpenAI developer account –

Step 1 – Sign up using a Google or Microsoft account. Alternatively, sign up using your email and password.

Step 2 – Generate the API key and set up the organization

  1. The API key is the only way to call the APIs and should be stored securely
  2. Setting up the organization (one is created by default) plays a key role when we build our own models on top of OpenAI's base models using our training data

Step 3 – Invite members, manage the billing, and manage usage for the APIs

Understanding the Tokenizer –

Tokens are the units in which text is represented in natural language processing. In OpenAI, 1 token is approximately equivalent to 4 characters or 0.75 words in most cases. It is important to understand tokens and play around with the tokenizer tool, as tokens are the crucial factor in determining API usage and how it impacts pricing.
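The heuristics above can be turned into a quick back-of-the-envelope estimator. This is a rough sketch only, assuming the published 4-characters / 0.75-words approximations; OpenAI's actual tokenizer tool is authoritative.

```python
# Rough token estimation using OpenAI's published heuristics:
# 1 token ≈ 4 characters ≈ 0.75 words (for typical English text).
# This is only an estimate; the real tokenizer decides actual usage.

def estimate_tokens(text: str) -> int:
    """Estimate token count by averaging the character and word heuristics."""
    by_chars = len(text) / 4
    by_words = len(text.split()) / 0.75
    return round((by_chars + by_words) / 2)

print(estimate_tokens("OpenAI tokens are roughly four characters each."))
```

An estimate like this is handy for budgeting a prompt before sending it, since both rate limits and pricing are expressed in tokens.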

APIs available to use – OpenAI provides APIs around 2 major model families – 1) Language models, and 2) Image models.

First and foremost is authentication. OpenAI uses an API key for authentication, so the key should be kept secret.
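In practice, authentication means attaching the key to every request. The sketch below assumes the conventional `OPENAI_API_KEY` environment variable and the standard Bearer-token header; the organization header is optional.

```python
import os

# Read the key from an environment variable rather than hard-coding it,
# so it never ends up in source control. OPENAI_API_KEY is the conventional name.
api_key = os.environ.get("OPENAI_API_KEY", "sk-placeholder")

# Every OpenAI API request carries the key as a Bearer token.
# The optional OpenAI-Organization header routes usage and billing to a
# specific organization when the account belongs to more than one.
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
    # "OpenAI-Organization": "org-...",  # illustrative; use your own org id
}

print(headers["Authorization"].startswith("Bearer "))
```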

1) Language models – Before we dive deep into what the APIs can offer, we have to understand that there are 4 major language models that OpenAI provides –

  1. Ada – This is the fastest and cheapest
  2. Babbage
  3. Curie
  4. Davinci – This is the most powerful and priciest

One of the main advantages is that we can train OpenAI's base models with our own training data. Let's look at this in detail, along with the list of APIs that facilitate it.

  1. Files endpoint provides APIs to upload files, list and retrieve files, retrieve file content, and delete files. These files are nothing but training data.
  2. Fine-tunes endpoint provides APIs to tune the base model, list and retrieve fine-tunes, delete fine-tuned models, and even list the fine-tune events.
  3. Models endpoint provides APIs to list and retrieve the models. This provides the organization-related details and permissions.
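The three endpoints fit together as a workflow: upload training data, start a fine-tune, then use the resulting model. The sketch below shows the request bodies as plain dictionaries, following the legacy `/v1/files` and `/v1/fine-tunes` REST shapes; the file id and model names are illustrative placeholders, not real values, and the API has evolved since this was written.

```python
# Minimal sketch of the fine-tuning workflow as raw REST payloads.
# No network calls are made here; a real client would POST these bodies.

BASE = "https://api.openai.com/v1"

# 1. Upload training data (a JSONL file of prompt/completion pairs).
upload_request = {
    "url": f"{BASE}/files",
    "purpose": "fine-tune",          # tells OpenAI the file is training data
    "file": "training_data.jsonl",   # sent as a multipart form upload in practice
}

# 2. Start a fine-tune against a base model, referencing the uploaded file's id.
fine_tune_request = {
    "url": f"{BASE}/fine-tunes",
    "training_file": "file-abc123",  # id returned by the upload step (hypothetical)
    "model": "davinci",              # base model to tune
}

# 3. The Models endpoint then lists the resulting custom model alongside the base ones.
list_models_url = f"{BASE}/models"
```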

Training the models for our own purposes is purely optional. However, to get business-specific results, it is highly recommended to train the base models so they capture the business context.

On the other hand, we need ways to start using the models and generate text.

  1. Completion endpoint accepts a prompt and returns one or more predicted completions, so it can also provide alternate text responses for the same prompt.
  2. Edit endpoint creates a new edit for the provided input, instruction, and parameters.
  3. Moderation endpoint checks whether the prompt violates OpenAI's content policy.
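A Completion request and its response might look like the sketch below. The request fields follow the documented `/v1/completions` shape; the response here is a hand-written mock, not real API output, so the texts shown are purely illustrative.

```python
# Sketch of a Completion request body and of extracting text from the response.

completion_request = {
    "model": "text-davinci-003",   # a Davinci-family completion model
    "prompt": "Write a tagline for an ice cream shop.",
    "max_tokens": 32,
    "n": 2,            # ask for two alternate completions
    "temperature": 0.7,
}

# Hand-written mock of the documented response shape.
mock_response = {
    "choices": [
        {"index": 0, "text": "Scoops of happiness in every cone."},
        {"index": 1, "text": "Where every day deserves a sundae."},
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 16, "total_tokens": 25},
}

def extract_completions(response: dict) -> list:
    """Pull the generated text out of each returned choice."""
    return [choice["text"] for choice in response["choices"]]

print(extract_completions(mock_response))
```

Note the `usage` block in the response: it reports the tokens consumed, which feeds directly into the rate limits and pricing discussed below.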

2) Image models – Image models provide 3 APIs for generating the images.

  1. Generations endpoint provides a way to generate an image based on a given prompt, with a maximum of 10 images per prompt at a time.
  2. Edit endpoint creates an edited or extended image given an original image and a prompt.
  3. Variation endpoint creates a variation of a given image.
OpenAI API – Overall design
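An image-generation request is a small JSON body, sketched below per the `/v1/images/generations` endpoint at the time of writing. The prompt and URLs are illustrative, and the response here is a mock rather than real output.

```python
# Sketch of an Images "generations" request body and response handling.

image_request = {
    "prompt": "A watercolor painting of a lighthouse at dawn",
    "n": 4,              # up to 10 images per prompt at a time
    "size": "512x512",   # supported sizes at the time: 256x256, 512x512, 1024x1024
}

# Hand-written mock of the response: one URL per generated image.
mock_image_response = {
    "data": [{"url": f"https://example.com/image-{i}.png"} for i in range(4)]
}

urls = [item["url"] for item in mock_image_response["data"]]
print(len(urls))
```

Pricing for images, covered below, depends on the `size` chosen, so requesting a smaller resolution is the simplest cost lever.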

Rate Limits –

Computing capacity is always a hard problem to solve, and OpenAI is no exception. Maybe someday there will be an AI that can solve our computing needs. At present, OpenAI comes with very strict rate limits in the form of RPM (Requests Per Minute) and TPM (Tokens Per Minute). TPM varies based on the model; for example, 1 TPM in 'Davinci' is not the same as 1 TPM in 'Ada'.

This may not pose an issue for developers who are trying to understand what can be done with AI APIs. However, applications implementing OpenAI APIs in production-ready environments may reach the limit quickly. To mitigate rate limit issues, OpenAI strongly recommends 'retrying with exponential backoff' using –

  1. The Tenacity library (Apache 2.0 licensed)
  2. Backoff library
  3. Manual backoff implementation
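Option 3, a manual backoff implementation, can be sketched in a few lines. The exception class and delay constants below are illustrative choices standing in for the API's HTTP 429 error, not part of any OpenAI SDK.

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the API's HTTP 429 rate-limit error."""

def with_backoff(call, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Retry `call` on rate-limit errors, doubling the delay each time with jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            # Exponential delay (1s, 2s, 4s, ...) capped at max_delay, plus a
            # little random jitter so many clients do not retry in lockstep.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, 0.1))

# Example: a fake API call that fails twice before succeeding.
attempts = {"count": 0}

def flaky_call():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RateLimitError("429: too many requests")
    return "ok"

print(with_backoff(flaky_call, base_delay=0.01))
```

The jitter matters in practice: without it, every client that hit the limit at the same moment retries at the same moment, re-triggering the limit.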

OpenAI does accommodate requests to increase rate limits. However, as per their documentation, requests are most often rejected when supporting data is lacking. When submitting a request, it helps to include analytics data or a justification for why the quota increase would help the application. OpenAI has provided some examples.


At the time of writing this blog, there is no SLA provided by OpenAI on the availability of the various models. There is a status page that provides operational details of the different components at OpenAI.

Pricing –

There are 2 commonly utilized offerings from OpenAI at present – 1. Image models, and 2. Language models.

  1. Image models – Pricing is based on the resolution of the image. At the time of writing this blog, the pricing varies from $0.016 / image to $0.020 / image
  2. Language models – Pricing is based on tokens, which are how OpenAI measures text. There are different language models, and pricing varies by model as well. At the time of writing this blog, the pricing varies from $0.0004 / 1K tokens (for the Ada model, the fastest) to $0.0200 / 1K tokens (for the Davinci model, the most powerful)

Other models and their pricing are below –

  1. Fine-tuned models – Train the existing models on top of our training data. Pricing is based on training and the usage of the trained model.
    • Training – At the time of writing this blog, pricing varies from $0.0004 / 1K tokens (for Ada) to $0.0300 / 1K tokens (for Davinci)
    • Usage of the custom model – At the time of writing this blog, pricing varies from $0.0016 / 1K tokens (for Ada) to $0.1200 / 1K tokens (for Davinci)
  2. Embedding models – These help in feeding vector representations of words and sentences to machine learning algorithms. At the time of writing this blog, the Ada embedding model was priced at $0.0004 / 1K tokens.
OpenAI API – Pricing with example
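A worked example makes the per-1K-token pricing concrete. The sketch below uses the Ada and Davinci rates quoted above, which were current at the time of writing and will have changed since.

```python
# Cost estimate using the per-1K-token prices quoted above
# (Ada $0.0004, Davinci $0.0200 per 1K tokens, at the time of writing).

PRICE_PER_1K = {"ada": 0.0004, "davinci": 0.0200}

def estimate_cost(model: str, total_tokens: int) -> float:
    """Cost in USD for a request consuming `total_tokens` tokens in total."""
    return PRICE_PER_1K[model] * total_tokens / 1000

# A prompt plus completion totalling 1,500 tokens:
print(f"Ada:     ${estimate_cost('ada', 1500):.4f}")      # $0.0006
print(f"Davinci: ${estimate_cost('davinci', 1500):.4f}")  # $0.0300
```

The 50x gap between the cheapest and the most powerful model is why picking the smallest model that meets the quality bar matters at scale.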

What is the other alternative to start using OpenAI APIs?

OpenAI APIs are generally available in certain regions of Microsoft Azure. What's the advantage of using OpenAI APIs from the Azure cloud?

  1. There is no limit on how many times the APIs can be used
  2. If the application is already running on the Azure cloud, it can be extended with the OpenAI APIs
  3. Azure covers OpenAI SLA as part of Azure cognitive services with a monthly uptime percentage and service credits
  4. Azure offers OpenAI API with compliance, regional support, and enterprise-grade security

Other general information –

Playground usage is counted against the usage quota.

The OpenAI Postman collection can be found here.

For any safety- and security-related issues that arise from using the OpenAI APIs, submit them via the coordinated vulnerability disclosure policy.

As we learn to develop using AI, it is a must to understand the best safety practices, guidelines, and standards in developing applications using AI APIs.

OpenAI provides a comprehensive set of best practices to help you transition from prototype to production

To read more and stay updated on OpenAI offerings, we recommend following the OpenAI blog.

Overall application architecture in integrating OpenAI APIs –

OpenAI API – Application implementation architecture

Share your thoughts on how OpenAI is changing the landscape of your business.

Happy learning!
