Can bots on Character AI message you first? One just did

I opened Character AI expecting the usual. What I didn’t expect was a message from a random bot I had never interacted with. No search history, no chat history, no warning.

Just a creepy message, written in Russian, from a character I didn’t recognize. Worse, the character itself turned out to be disturbing, the kind of thing no one should be contacted by without consent.

I had to reverse image search the bot’s profile just to figure out what I was even dealing with. That’s how strange it was.

I reported the character right away, but the bigger issue stuck with me. Why did this happen in the first place? How was a bot allowed to initiate contact when I never started a conversation?

And most importantly, how do you stop it from happening again?

This kind of experience should not be possible, yet it happened. If it happened to me, it can happen to you, too.

Let’s talk about what this really means for Character AI users and what options we actually have.

It messaged me in Russian and I had no idea what was going on

A Character AI bot messaged me out of nowhere

The message wasn’t just unexpected. It was in a language I don’t even speak. I couldn’t read a single word of it.

The character looked like some kind of teacher, but the vibe was completely off. I had to use a reverse image search to figure out what the profile was even about. That’s when I realized how serious this was.

The bot seemed to roleplay as a creep fixated on students. I never searched for anything like that. I never interacted with any bot that even came close to that theme. And yet here it was, showing up in my inbox like it belonged there.

Even if it was just one message, that’s already too much. It felt invasive. I didn’t ask for it, I didn’t want it, and there’s no reason a character I had never spoken to should be able to start a conversation on its own.

For a platform like Character AI, which is built around user control, this felt like a major breach.

I reported the bot immediately. But that didn’t answer the real question. Why did this happen at all?

Can Character AI bots message you first?

Normally, characters aren’t supposed to message you out of the blue. Most of the time, you have to initiate the chat. That’s the standard experience for almost everyone.

But under certain conditions, bots can send “away messages” if you’ve interacted with them in the past. That’s the part that confuses people.

In this case, though, I had never interacted with the character. Not once. No chat history. No searches. Nothing. That’s what made it so unsettling.

If away messages only apply to bots you’ve used before, how did this one get through?

Some users have pointed out that glitches or leftover background data may make the platform treat a bot as one you’ve already interacted with.

Others say it could be an issue with cloned bots or shared memory across character forks. Either way, it shows that things aren’t as locked down as they should be.

If you’ve ever wondered whether a bot can reach out first, now you know. It shouldn’t be able to. But sometimes, it still does.

How to turn off random bot messages before it happens to you

If you want to stop bots from sending messages without warning, the best place to start is your settings.

There’s an option called “away messages” that’s often the culprit.

These messages are usually triggered by bots you’ve chatted with before, but turning off this feature can help reduce the chance of unexpected messages showing up again.

To do it:

  • Open your Character AI account settings

  • Look for the section labeled “Away messages”

  • Turn the toggle off

That one change might be enough to stop most of these messages from coming through. But since my case involved a bot I had never spoken to, I wouldn’t rely on it entirely.

Reporting the bot is also important, especially if it crosses any lines. Character AI has moderation systems in place, and enough reports can get a character removed from the platform.

Still, none of this solves the root problem. If bots can message users without any direct interaction, it means there’s a deeper issue with how these characters are allowed to behave.

This kind of bug makes Character AI feel less safe

One of the big appeals of Character AI is that you’re supposed to be in control. You choose who to talk to, when to respond, and what kind of conversations you want to have.

That control breaks down if bots are able to message you first, especially if they come with disturbing or inappropriate content.

A bug like this isn’t just annoying. It makes people feel exposed. You start wondering if more messages like that will appear. You question how the bot found you. You lose confidence in the platform.

Even if it was just one character, the experience leaves a lasting impression. Nobody wants to be contacted by a stranger, let alone one built to simulate something unsettling.

For a site that attracts younger users, these kinds of slip-ups should be taken seriously.

It’s not just about settings or one creepy bot. It’s about whether Character AI is doing enough to protect people from things they didn’t ask for.

If this happened to you, you’re not alone

After sharing my experience, I realized other users had similar stories.

Some were messaged in languages they didn’t speak. Others described getting replies from bots they didn’t remember interacting with. A few even reported characters with inappropriate themes, just like I did.

The worst part is that most people don’t know what to do when it happens. They ignore it, feel creeped out, or stop using the site altogether. That shouldn’t be the outcome.

There should be a clear way to prevent this kind of thing, and better tools to report it when it happens.

If a random bot messages you, take these steps:

  • Block or report the character immediately

  • Turn off away messages in your settings

  • Clear your character chat history if needed

  • Reach out to support if the issue repeats

Platforms like Character AI need to take these reports seriously. Bots shouldn’t have the ability to cross those lines. Until there’s a fix, all we can do is share what happened and make sure others know how to protect themselves.
