
Someone got ChatGPT to reveal its secret instructions from OpenAI

July 3, 2024

Via: BGR

We often talk about ChatGPT jailbreaks because users keep trying to pull back the curtain and see what the chatbot can do when freed from the guardrails OpenAI developed. Jailbreaking the chatbot isn’t easy, and any method that gets shared publicly is usually patched soon after.

The latest discovery isn’t even a real jailbreak, since it doesn’t force ChatGPT to answer prompts that OpenAI has deemed unsafe. But it’s still an insightful find: a ChatGPT user accidentally surfaced the secret instructions OpenAI gives ChatGPT (GPT-4o) with the simplest of prompts: “Hi.”
