A Complete Guide to the ChatGPT API


Through the release of its API, OpenAI has opened up the capabilities of ChatGPT to everyone. You can now seamlessly integrate ChatGPT’s power into your application.

Follow these initial steps to get started, whether you’re looking to integrate ChatGPT into your existing application or to develop new applications with it.

Getting Access to the OpenAI API Keys

To start using the ChatGPT API, you first need to obtain the OpenAI API keys. Sign up or log in to the official OpenAI platform.


Once you’re logged in, click on the Personal tab in the top-right section. Select the View API Keys option from the dropdown, and you’ll land on the API keys page. Click on the Create new secret key button to generate the API key.

You won’t be able to view the key again, so store it somewhere safe.
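
A common way to keep the key out of your source code is to load it from an environment variable at runtime. Here’s a minimal sketch; the variable name OPENAI_API_KEY is just the name used in this example:

import os

import openai

# Read the secret key from the environment instead of hardcoding it.
# Set it beforehand, e.g. in your shell: export OPENAI_API_KEY="sk-..."
openai.api_key = os.environ["OPENAI_API_KEY"]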

The code used in this project is available in a GitHub repository and is free for you to use under the MIT license.

How to Use the ChatGPT API

The OpenAI API’s gpt-3.5-turbo and gpt-4 models are the same models that ChatGPT and ChatGPT+ use respectively. These powerful models are capable of understanding and generating natural language text.

Please note that the ChatGPT API is a general term that refers to OpenAI APIs that use GPT-based models for developing chatbots, including the gpt-3.5-turbo and gpt-4 models.

The ChatGPT API is primarily optimized for chat, but it also works well for text completion tasks. The gpt-3.5-turbo and gpt-4 models are more powerful and cheaper than the previous GPT-3 models. However, as of writing, you cannot fine-tune the GPT-3.5 models. You can only fine-tune the GPT-3 base models, i.e., davinci, curie, ada, and babbage.

As of writing, the GPT-4 API requires joining a waitlist. But the GPT-3.5 models are accessible to everyone, so we will be using them in this article. (You can use GPT-4 right now by upgrading to ChatGPT+.)

Using the ChatGPT API for Chat Completion

You need to configure the chat model to get it ready for the API call. This can be better understood with the help of an example:

import openai

# Authenticate requests with your secret API key
openai.api_key = "YOUR_API_KEY"

completion = openai.ChatCompletion.create(
  model = "gpt-3.5-turbo",
  temperature = 0.8,
  max_tokens = 2000,
  messages = [
    {"role": "system", "content": "You are a funny comedian who tells dad jokes."},
    {"role": "user", "content": "Write a dad joke related to numbers."},
    {"role": "assistant", "content": "Q: How do you make 7 even? A: Take away the s."},
    {"role": "user", "content": "Write one related to programmers."}
  ]
)

print(completion.choices[0].message)

Running this code produces a programmer-themed dad joke.
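
The exact joke differs from run to run, but the printed message object looks something like this (illustrative output only):

{
  "role": "assistant",
  "content": "Q: Why do programmers prefer dark mode? A: Because light attracts bugs."
}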

The above code demonstrates a ChatGPT API call using Python. Note that the model was able to understand the context ("dad joke") and the type of response (Q&A form) that we were expecting even though we didn’t explicitly mention it in the last user prompt.

Thus, when building applications, you can provide the context in advance and the model will adapt to your requirements accordingly.

Here, the most important part is the messages parameter, which accepts an array of message objects. Each message object contains a role and content. You can provide three types of roles to the message objects:

  • system: It sets up the context and behavior of the assistant.
  • user: It gives instructions to the assistant. These messages are typically written by the end user, but as a developer you can also supply some user prompts in advance.
  • assistant: It holds the model’s earlier replies. You can also write these messages yourself to show the model the kind of response you expect from the API.

You can further customize the temperature and max_tokens parameters of the model to get the output according to your requirements.

The higher the temperature, the more random the output, and vice versa. If you want responses to be more focused and deterministic, use a lower temperature value; if you want them to be more creative, use a higher one. The temperature value ranges from 0 to 2.
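
As a quick sketch (the prompt and values here are arbitrary), you can see the difference by sending the same prompt at both extremes:

import openai

openai.api_key = "YOUR_API_KEY"

prompt = [{"role": "user", "content": "Suggest a name for a coffee shop."}]

# temperature=0: near-deterministic, repeated calls give very similar answers
focused = openai.ChatCompletion.create(
  model = "gpt-3.5-turbo",
  temperature = 0,
  messages = prompt
)

# temperature=1.5: highly random, repeated calls give very different answers
creative = openai.ChatCompletion.create(
  model = "gpt-3.5-turbo",
  temperature = 1.5,
  messages = prompt
)

print(focused.choices[0].message.content)
print(creative.choices[0].message.content)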

Like ChatGPT, its API also limits how much text it can handle, measured in tokens rather than words. Use the max_tokens parameter to limit the length of responses. However, setting max_tokens too low can cut the output off mid-way. As of writing, the gpt-3.5-turbo model has a limit of 4,096 tokens, while the gpt-4 model has a limit of 8,192 tokens; this limit covers the prompt and the completion combined.
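
One way to detect a cutoff is to check the finish_reason field of the returned choice. This sketch deliberately sets a small max_tokens value to trigger it:

import openai

openai.api_key = "YOUR_API_KEY"

completion = openai.ChatCompletion.create(
  model = "gpt-3.5-turbo",
  max_tokens = 50,  # deliberately small to demonstrate truncation
  messages = [
    {"role": "user", "content": "Explain recursion in detail."}
  ]
)

# "stop" means the model finished naturally; "length" means it hit max_tokens
if completion.choices[0].finish_reason == "length":
  print("Warning: the reply was cut off by max_tokens.")

print(completion.choices[0].message.content)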

You can further configure the model using the other parameters provided by OpenAI.
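
For instance, the chat completions endpoint also accepts parameters such as n, top_p, presence_penalty, frequency_penalty, and stop. Here is a brief sketch with arbitrary values:

import openai

openai.api_key = "YOUR_API_KEY"

completion = openai.ChatCompletion.create(
  model = "gpt-3.5-turbo",
  messages = [
    {"role": "user", "content": "Name a classic programming book."}
  ],
  n = 3,                    # return three alternative completions
  top_p = 0.9,              # nucleus sampling, an alternative to temperature
  presence_penalty = 0.5,   # discourage revisiting topics already mentioned
  frequency_penalty = 0.5,  # discourage repeating the same tokens
  stop = ["\n\n"]           # stop generating at the first blank line
)

for choice in completion.choices:
  print(choice.message.content)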

Using the ChatGPT API for Text Completion

Apart from chat completion tasks, the gpt-3.5-turbo model also does a good job with text completion. It outperforms the previous text-davinci-003 model and costs only a tenth as much.

The following example demonstrates how you can configure the ChatGPT API for text completion:

import openai

openai.api_key = "YOUR_API_KEY"

completion = openai.ChatCompletion.create(
  model = "gpt-3.5-turbo",
  temperature = 0.8,
  max_tokens = 2000,
  messages = [
    {"role": "system", "content": "You are a poet who creates poems that evoke emotions."},
    {"role": "user", "content": "Write a short poem for programmers."}
  ]
)

print(completion.choices[0].message.content)

You don’t even need to provide the system role and its content. Providing just the user prompt will do the job:

messages = [
  {"role": "user", "content": "Write a short poem for programmers."}
]

Running the above code will generate a short poem for programmers.

Response Format of the ChatGPT API

The ChatGPT API sends the response in the following format:
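
The field values in this sketch are placeholders (the ID, timestamp, model snapshot, and token counts will differ per request), but the layout is as shown:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1683014400,
  "model": "gpt-3.5-turbo-0301",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The assistant's reply appears here."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 12,
    "total_tokens": 37
  }
}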

You then need to extract the assistant’s reply, which is stored in the content field of the message object.
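
For example, if response holds the parsed JSON returned by the endpoint (or the object returned by the openai library), the reply comes out like this:

reply = response['choices'][0]['message']['content']
print(reply)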

Building Applications Using the ChatGPT API

You can directly use the API endpoint or the openai Python/Node.js library to start building ChatGPT API-powered applications. Apart from the official openai library, you can also develop applications using the community-maintained libraries recommended by OpenAI.

However, OpenAI does not verify the security of these community-maintained libraries, so it’s better to either directly use the API endpoint or use the official openai Python/Node.js library.

Method 1: Using the API Endpoint

You need to use the /v1/chat/completions endpoint to utilize the gpt-3.5-turbo and gpt-4 models.

import requests

api_key = "YOUR_API_KEY"
URL = "https://api.openai.com/v1/chat/completions"

payload = {
  "model": "gpt-3.5-turbo",
  "temperature": 1.0,
  "messages": [
    {"role": "system", "content": "You are an assistant who tells any random and very short fun fact about this world."},
    {"role": "user", "content": "Write a fun fact about programmers."},
    {"role": "assistant", "content": "Programmers drink a lot of coffee!"},
    {"role": "user", "content": "Write one related to the Python programming language."}
  ]
}

headers = {
  "Content-Type": "application/json",
  "Authorization": f"Bearer {api_key}"
}

response = requests.post(URL, headers=headers, json=payload)
response = response.json()

print(response['choices'][0]['message']['content'])

The above sample code demonstrates how you can directly use the endpoint to make the API call using the requests library.

First, assign the API key to a variable. Next, provide the model name in the model parameter of the payload object. After that, pass the conversation history to the messages parameter.

Here, we’ve kept a higher temperature value so that our response is more random and thus more creative.

The response is a short, random fun fact about the Python programming language.

Note that there are some known problems with OpenAI’s ChatGPT, so you may get offensive or biased replies from its API too.

Method 2: Using the Official openai Library

Install the openai Python library using pip:

pip install openai

Now, you’re ready to generate text or chat completions.

import openai

openai.api_key = "YOUR_API_KEY"

response = openai.ChatCompletion.create(
  model = "gpt-3.5-turbo",
  temperature = 0.2,
  max_tokens = 1000,
  messages = [
    {"role": "user", "content": "Who won the 2018 FIFA world cup?"}
  ]
)

print(response['choices'][0]['message']['content'])

In this code, we only provided a single user prompt. We’ve kept the temperature value low to keep the response more deterministic rather than creative.

After running the code, you’ll get a response stating that France won the 2018 FIFA World Cup.

The ChatGPT responses may seem magical and can make anyone wonder how ChatGPT works. But behind the scenes, it’s backed by the Generative Pre-trained Transformer (GPT) language model that does all the heavy lifting.

Build Next Generation Apps Using the ChatGPT API

You’ve learned how to configure the ChatGPT API. It has opened the door for you and developers around the world to build innovative products that leverage the power of AI.

You can use this tool to develop applications like story writers, code translators, email writers, marketing copy generators, text summarizers, and so on. When it comes to building applications with this technology, your imagination is the limit.

Apart from the ChatGPT API, you can also use other OpenAI models to develop cool applications.
