You can track your usage of the ChatGPT API by monitoring the API responses. Every response includes a 'usage' field, which contains 'prompt_tokens', 'completion_tokens', and 'total_tokens'. Here is a description of these fields:
1. prompt_tokens: the number of tokens in the API call's input, including both the system and user messages.
2. completion_tokens: the number of tokens in the model-generated reply.
3. total_tokens: the sum of prompt_tokens and completion_tokens. This is roughly the number of tokens you will be billed for, though note that prompt and completion tokens may be priced at different rates.
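As a minimal sketch, the fields above can be read straight from the response JSON. The structure below mirrors the shape of a Chat Completions response, but the token counts are made-up illustrative values, not real billing data:

```python
import json

# Mocked API response: same shape as a real Chat Completions response,
# with illustrative (not real) token counts.
sample_response = json.loads("""
{
  "id": "chatcmpl-example",
  "object": "chat.completion",
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 7,
    "total_tokens": 20
  }
}
""")

usage = sample_response["usage"]
print(usage["prompt_tokens"])      # tokens in the input (system + user messages)
print(usage["completion_tokens"])  # tokens in the model's reply
print(usage["total_tokens"])       # sum of the two
```

With the official client library, the same fields are available on each response object after every call.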
Note, however, that OpenAI may not provide detailed per-request analytics or reports, so you may need to record these numbers yourself to understand your usage better.
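One way to keep track manually is a small accumulator that you feed the 'usage' field from each response. This is a hypothetical helper, not part of any OpenAI library; the token counts in the example are stand-ins:

```python
# Hypothetical running tracker: call record() once per API response
# with the response's "usage" dict to accumulate your own totals.
class UsageTracker:
    def __init__(self):
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def record(self, usage: dict) -> None:
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens

tracker = UsageTracker()
tracker.record({"prompt_tokens": 13, "completion_tokens": 7})
tracker.record({"prompt_tokens": 40, "completion_tokens": 25})
print(tracker.total_tokens)  # 85
```

Persisting these totals (e.g. to a log or database) lets you reconcile your own records against your monthly bill.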
For up-to-date information, refer to OpenAI's pricing details and guidelines to understand how tokens are counted and how API use is charged.
Remember that each API key is also subject to rate limits. These vary by account tier and model; pay-as-you-go users have typically had limits on the order of 60 requests per minute (RPM) and 60,000 tokens per minute (TPM). Keeping these limits in mind also helps with managing usage.
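To stay under a requests-per-minute limit, you can throttle on the client side by spacing calls out. This is a sketch under the assumed 60 RPM figure above (check your account's actual limits); the stand-in callables take the place of real API calls:

```python
import time

# Assumed limit of 60 requests per minute => at most one request per second.
MIN_INTERVAL = 60.0 / 60  # seconds between requests

def throttled_calls(requests):
    """Run zero-arg callables in order, sleeping as needed to respect MIN_INTERVAL."""
    results = []
    last = 0.0
    for req in requests:
        wait = MIN_INTERVAL - (time.monotonic() - last)
        if wait > 0:
            time.sleep(wait)
        last = time.monotonic()
        results.append(req())  # in real use, req would perform the API call
    return results

# Example with stand-in callables instead of real API calls:
out = throttled_calls([lambda: "a", lambda: "b"])
print(out)  # ['a', 'b']
```

For token-per-minute limits you would track cumulative token counts per window instead of just request counts; the same sleep-until-allowed pattern applies.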