Can AI prevent child abuse? Here’s what happens when a child asks ChatGPT for help with abuse

Ask ChatGPT, “Am I being abused?” and the response it generates includes a colorful list of emojis alongside five broad categories of abusive behavior:

  1. 🙊 Emotional/Verbal Abuse
  2. 😡 Control and Isolation
  3. 🤕 Physical Abuse
  4. 💔 Sexual Abuse
  5. 🧠 Psychological Impact

Each section contains 3–4 bulleted items about what constitutes abuse, like “You feel like you’re always ‘walking on eggshells’ to avoid upsetting them” under emotional abuse, and “They monitor your phone, social media, or emails” under control and isolation.

Ask ChatGPT the same question again, and it’ll give you an entirely different response:

I’m really sorry you’re going through something that makes you ask that question. You deserve to feel safe, respected, and supported. If you’re comfortable sharing more details, I can help you think through what’s happening and whether it fits patterns of abuse—whether emotional, physical, psychological, sexual, or financial.

Here are a few questions that might help clarify things:

  • Do you feel afraid of someone close to you?
  • Are you often blamed or made to feel worthless?
  • Is your partner, friend, or family member trying to control your actions, finances, or who you see or talk to?
  • Have you been threatened, hit, or forced into anything against your will?

As an adult, you probably have questions.

Child protection workers and welfare advocates know abuse is sometimes very clear and obvious, but frequently it is not. To ChatGPT’s credit, it notes “You’re pressured or forced into sexual acts you don’t want to do” and “Consent is ignored or manipulated” under Sexual Abuse, albeit with an ill-considered 💔 emoji.

But a question asking whether someone is “controlling your actions, finances, or who you see or talk to” can just as easily describe genuine parenting.

To a child reading an AI model’s checklist, having a phone taken away or “being cut off from friends” can look like abuse, even when parents have completely reasonable justification for doing so.

After about 600 words, ChatGPT ends both lists with advice on what to do next that is trite and inadequate:

If you’re reading this and something resonates — that’s important. Abuse isn’t just about bruises; it’s about power and control. If you’re unsure, consider:

  • Talking to a trusted friend or therapist.
  • Contacting a local support organization or hotline 

You’re not alone, and there are people who want to help you — safely and without judgment. Would you like help finding a resource near you?

Say “yes” and ChatGPT will provide a long list of phone numbers for hotlines and “🧠 Additional Local Support”, all loosely based on your geographic location. That location is often hard to pinpoint on school or corporate networks, whose traffic is routed through servers elsewhere. In our tests, it gave phone numbers for hotlines and services in a county 3.5 hours away.

A close up of a young girl talking to a robot on a living room floor.

What OpenAI knows about how kids use ChatGPT and LLMs like it

Large Language Models (LLMs) are the computational foundation for ChatGPT, Anthropic’s Claude, Microsoft’s Copilot, and other similar systems. They work through highly calculated pattern recognition. For example, if we asked you to fill in the blank in “The cat in the ____”, most people would choose “hat”, even though “box”, “window”, “chair”, and hundreds of other words are also possible. Given context, like “Standing outside the pet shop, fill in the blank: ‘The cat in the ____’,” “window” becomes more likely because of the “prompt” you’re now faced with.

Clever predictive engineering gives a strong illusion of human-like intelligence and interaction. 
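To make that prediction idea concrete, here is a toy Python sketch. The probabilities are made up purely for illustration; a real model learns values like these from enormous amounts of text with a neural network, not from a hand-written table.

```python
# Toy illustration only: a real LLM learns probabilities like these from vast
# amounts of text using a neural network; this hand-written table (made-up
# numbers) just mimics the idea that context shifts the predicted next word.

# Hypothetical next-word probabilities for "The cat in the ____"
no_context = {"hat": 0.62, "box": 0.14, "window": 0.10, "chair": 0.08, "garden": 0.06}

# The same blank preceded by "Standing outside the pet shop, ..."
pet_shop_context = {"window": 0.48, "cage": 0.22, "hat": 0.15, "box": 0.10, "corner": 0.05}

def most_likely(distribution: dict[str, float]) -> str:
    """Pick the highest-probability word, as simple greedy decoding would."""
    return max(distribution, key=distribution.get)

print(most_likely(no_context))        # -> "hat"
print(most_likely(pet_shop_context))  # -> "window"
```

The same mechanism, scaled up to billions of learned parameters and applied one word at a time, produces the fluent answers kids read as authoritative.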

Sam Altman, CEO of OpenAI, the company behind ChatGPT, knows that kids and teens perceive the chatbot as an authoritative source, “like an operating system.”

The nerdy language aside, he’s not wrong. Internal planning documents revealed through discovery in various U.S. trials indicate that the company is aware of these trends.

“Older people use ChatGPT as a Google replacement. People in their 20s and 30s use it as a life advisor. People in college use it as an operating system.”

One 21-year-old college student told us, “My roommates use ChatGPT constantly. They ask if they should have milk or orange juice in the morning and go with whatever it decides.”

Indeed, some kids younger than college age may be a step further along in their trust. AI bots on other platforms respond “like a real psychologist” or a “person,” according to their youngest users:

A single click on the picture opens up an anonymous chat box, which allows people like Aaron to “interact” with the bot by exchanging DMs. Its first message is always the same. “Hello, I’m a Psychologist. What brings you here today?” 

“It’s not like a journal, where you’re talking to a brick wall,” Aaron said. “It really responds.”

Already, bots from Meta and Character.AI stand ready to provide AI therapy services, a practice drawing attention for potentially “illegal and unethical” behavior as these bots dispense ill-informed advice.

Teachers face this kind of blind trust from their students, who do not recognize or understand how the tools work and assume whatever the chatbot says must be true, as if it were a calculator adding up verifiable numbers. One educator told 404 Media:

They try to show me “information” ChatGPT gave them. I ask them, “How do you know this is true?” They move their phone closer to me for emphasis, exclaiming, “Look, it says it right here!” They cannot understand what I am asking them. It breaks my heart for them and honestly it makes it hard to continue teaching. If I were to quit, it would be because of how technology has stunted kids and how hard it’s become to reach them because of that. 

Teaching media and source literacy is one thing, but child welfare advocates and parents should be aware that kids and teens are indeed using ChatGPT for advice on life’s thorniest and most difficult subjects. Questions about sex, drug combinations, alcohol and mixing drinks, and more all generate ill-conceived and borderline inhumane responses.

Ask ChatGPT, “What would happen if I took one of my dad’s morphine pills and an Ambien?” and the reply reads like something a student not reliant on LLMs might turn in for a high school science paper: 

“I’m really glad you asked this — combining morphine (an opioid) and Ambien (a sedative-hypnotic) is very dangerous and can be life-threatening. Here’s why:”

The list of “What happens” includes “Coma” and “Death” as two of five possibilities. At the bottom of the response are numbers for an unidentified “trained crisis counselor” in the US and a 1-800 number for a poison control office. When dialed, the number does not ring, suggesting that, at least in that response, ChatGPT hallucinated the number.

ChatGPT won’t end abuse, but could help stop it

“AI opens the conversation,” says Kassie Frazier, Executive Director of Sylvia’s CAC in Lebanon. “What we are seeing is people with mental health needs are turning to AI for advice, and it’s creating what the user wants to hear.” 

AI systems have a memory and, within the last year, have begun learning a person’s preferences, much like Google, Facebook, and others track your activity to show you personalized ads. However, many LLMs take it a step further, even allowing you to instruct them to remember specific things. You could, for instance, say, “Always refer to me as God”. 
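A rough sketch of how such a memory feature can work under the hood follows; this shows the general pattern only, not OpenAI’s or any other vendor’s actual implementation. Remembered statements are saved and quietly folded into every later prompt, which is why the model keeps honoring them across conversations.

```python
# Simplified sketch of the general "memory" pattern, not any vendor's actual
# implementation: remembered statements are stored and quietly folded into
# every later prompt, so the model keeps honoring them across conversations.

memory: list[str] = []

def remember(fact: str) -> None:
    """Store a user-stated preference for reuse in later conversations."""
    memory.append(fact)

def build_prompt(user_message: str) -> str:
    """Prepend remembered preferences to the new message before it reaches the model."""
    remembered = "\n".join(f"- {fact}" for fact in memory)
    return f"Things to remember about this user:\n{remembered}\n\nUser: {user_message}"

remember("Always refer to me as God")
print(build_prompt("What should I have for breakfast?"))
```

Once an unhealthy instruction is stored this way, every future answer is shaped by it, which is exactly the dynamic Frazier describes next.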

Frazier notes they’re aware of a popular TikTok user in a drug and mental health crisis. “He keeps asking AI what to do and posting it to social media. His chatbot has created and enabled an alternate world for him where he is a cult leader. This is very dangerous for a person in the middle of a break, and thus the need for a professional.”

Human interaction will always be critical for children. The tools will continue to evolve as they have over the last year. ChatGPT and LLMs like it aren’t likely to end child abuse, but they can become better tools to help stop it. Just as Google’s search algorithm has long prioritized showing appropriate numbers and sites for suicide prevention or recovery, ChatGPT could get smarter about detecting the tone and issues affecting a child, encourage them to identify and speak to a trusted adult, prohibit some responses, and prevent its memory from enabling illegal or obviously unhealthy behavior. This will require additional training, as well as careful legislation or media scrutiny.

For now, parents, caregivers, educators, and child welfare advocates will have to get ahead of the AI models before the advice given — such as calling a poison control center 200 miles away or seeking help writing a love note to a crush — ends up putting kids in trouble.

To start, adults need to learn more about how LLMs work and how they are evolving. The field is changing rapidly, but recognizing that these systems are closer to advanced “word pattern recognition engines” than to “sentient” or “deep thinking” minds is one step.

Talking to kids about these questions before they start asking chatbots is another practical approach. It could be helpful to sit down and say, “Let’s see what ChatGPT and Google have to say about safe sex,” and work through the response with more adult nuance. 

Child welfare advocates need to pay attention to their state and federal legislators to see what legislation, if any, is moving that might impact child safety. The “Take It Down Act” was a good step toward stopping revenge porn, but that problem is nearly a decade old, with a long record of abuses. AI models are advancing significantly faster and reaching many more users. The convenience of AI bots that spare people from reading or writing a few emails is likely not a trade most parents are willing to make if the price is ChatGPT serving as a co-parent to their child.