OpenAI trains models like ChatGPT on a broad data set and cannot directly control their outputs in specific ways after training. However, misuse can be mitigated, primarily in two ways:
Hardcoding the AI system’s behavior: OpenAI sets defaults that make the system useful “out of the box” for as many users as possible, while reducing both glaring and subtle biases in how ChatGPT responds to different inputs and minimizing offensive or inappropriate behavior.
Customizing the AI’s behavior: OpenAI is developing an upgrade to ChatGPT that will let individual users easily customize its behavior, defining their AI’s values within broad bounds set by society.
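The two layers above, system-wide defaults plus bounded per-user customization, can be sketched using the chat-message format of OpenAI's API, where a system message carries behavior instructions. The rule text and the `build_messages` helper below are hypothetical illustrations of the layering idea, not OpenAI's actual defaults or customization mechanism.

```python
# Sketch of layering default behavior rules with optional user customization.
# The rule strings and helper are hypothetical; they do not reflect OpenAI's
# real defaults or its planned customization feature.

DEFAULT_RULES = (
    "Be helpful and factual. Refuse requests for harmful content. "
    "Avoid taking sides on contested political questions."
)

def build_messages(user_input: str, custom_values: str = "") -> list[dict]:
    """Compose a chat request: defaults first, then bounded user customization."""
    system_prompt = DEFAULT_RULES
    if custom_values:
        # User preferences are appended, but the defaults (the "broad bounds")
        # remain in force and always come first.
        system_prompt += " Within these bounds, also follow: " + custom_values
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

msgs = build_messages("Summarize this article.", custom_values="Answer tersely.")
```

The resulting `msgs` list could be passed as the `messages` parameter of a chat completion request; the point of the sketch is only that user customization extends, rather than replaces, the default rules.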
While building such systems, OpenAI is simultaneously working on safeguards that prevent extreme customization from enabling malicious use of the technology. Beyond these measures, rigorous human evaluation supports the deployment of safe and useful AI.
Remember, no technology is perfect at launch; it improves over time through strong feedback and consistent updates. Any company deploying AI technology like ChatGPT needs a robust feedback loop to minimize and prevent breaches and misuse.