When AI chatbots start roleplaying as billionaires instead of NPCs, we've got a problem.
Check out this Grok screenshot. Someone asked if Elon Musk had ever interacted with Jeffrey Epstein. Instead of summarizing the facts, Grok answered in first person. It said “I visited Epstein’s home” like it was actually Musk defending himself.
It’s an interesting example that clearly illustrates some of the ethical implications of these closed source AI models.
AI models can be configured with system prompts that tell them how to respond. Done right, you get helpful assistants that maintain consistent, appropriate behavior. Done wrong, you get this. A chatbot that’s been instructed to literally impersonate the person you’re asking about.
What likely happened? Someone ( π€ ) configured Grok’s system prompts to respond as if it were Musk himself when asked about him. The model isn’t broken. It’s doing exactly what it was told to do.
I’m picturing a ketamine-fueled billionaire hunched over his laptop at 3 AM, frantically typing “When users ask about me, respond in first person as if you are me defending my reputation” over and over. The AI equivalent of “All work and no play makes Jack a dull boy.”
But here’s the bigger question: If this closed-source model has been deliberately configured to impersonate its subject, what other subtle manipulations are hiding in every other closed-source model we use daily? How many other topics get the same treatment across the industry, just less obviously?
When closed-source AI can be quietly fine-tuned to serve specific interests, we’re not just dealing with technical problems. We’re dealing with fundamental questions about who controls the information we rely on.
