Encoding JSON messages for the ChatGPT API is straightforward. Here is an example:
```
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"}
  ]
}
```
In this payload:
- The `model` field names the model you want to use; here it is `gpt-3.5-turbo`.
- The `messages` field is an array of message objects. Each object has a `role`, which can be `system`, `user`, or `assistant`, and a `content` field holding the text of that message.
The `system` message sets up the assistant's behavior; it is typically used to give high-level instructions, such as the persona or constraints the assistant should follow.
The `user` messages carry the end user's questions or commands to the assistant.
The `assistant` messages hold the model's earlier replies, which you include when continuing a conversation.
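As a minimal sketch of how such a payload might be sent, the snippet below POSTs it to the chat completions endpoint with Python's `requests` library. It assumes an API key is available in an `OPENAI_API_KEY` environment variable; the endpoint URL and field names match the payload shown above.

```
import os
import requests

# Assumes the API key is exported as OPENAI_API_KEY in the environment.
api_key = os.environ["OPENAI_API_KEY"]

payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
    ],
}

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json=payload,
)
response.raise_for_status()

# The assistant's reply is the message of the first choice in the response.
print(response.json()["choices"][0]["message"]["content"])
```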
Note: To continue a conversation, simply append the new messages to the `messages` array. The model keeps no memory between calls, so each request must include the full history you want it to take into account. For example:
```
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Where was it played?"}
  ]
}
```
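One way to keep such a running conversation in code is to append the assistant's reply from each response to the message list before adding the next user question. The sketch below reuses the same endpoint and `OPENAI_API_KEY` assumption as the earlier example; `send_chat` is just a local helper defined here, not part of any library.

```
import os
import requests

def send_chat(messages):
    # Minimal wrapper around the chat completions endpoint;
    # assumes OPENAI_API_KEY is set in the environment.
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-3.5-turbo", "messages": messages},
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
]

# First turn: get the assistant's answer and keep it in the history.
messages.append({"role": "assistant", "content": send_chat(messages)})

# Second turn: add the follow-up question and send the full history again.
messages.append({"role": "user", "content": "Where was it played?"})
print(send_chat(messages))
```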
Remember that every message in the array counts toward the total token count of the API call, which affects both the cost of the request and how much room is left within the model's context-length limit.
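One way to estimate that usage before making a call is the `tiktoken` library, which exposes the tokenizer used by these models. The rough count below covers only the message text, so it will slightly understate the total the API reports, since the API adds a few formatting tokens per message.

```
import tiktoken

# Tokenizer matching gpt-3.5-turbo; this counts only the message text,
# so the API's reported prompt token usage will be slightly higher.
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
]

total = sum(len(encoding.encode(m["content"])) for m in messages)
print(f"Approximate prompt tokens: {total}")
```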