How Google and others build an advertising profile of children born in 1900

Somewhere online your child has claimed to be born on January 1, 1900 – during William McKinley's presidency and the same year Queen Elizabeth The Queen Mother was born. The claim happens when a child signs up for an online account and is asked for their birthday. At the bottom of the list of available years sits "1900," and selecting it is a sure way to get past age verification requirements.

By answering those questions dishonestly, most children have likely violated usage agreements – and federal law – dozens of times to access or use sites like YouTube and Facebook. YouTube alone accounts for 2.5 times as much viewing as TV among young people, yet for years YouTube has officially "rejected" users under the age of 13.

Managing online accounts and the data they hold about children has been a difficult challenge for parents and providers for years. Tech's go-to solution has been to inform site visitors that their services are unavailable to those under 13 – even if parents were okay with their child using such a service. The US Children's Online Privacy Protection Act (COPPA) says companies can't collect or store information about children under 13 without parental consent. That's proven difficult-to-impossible for websites to enforce without a central identification system. Hence the age confirmation dialog boxes that appear in account signup forms, on cigarette and alcohol sites, and on other adult sites.

Because so many children lie in the signup process, parents frequently never know their child set up a YouTube, Facebook, or other online account.

This has three significant consequences for children.

  1. Accounts track their movement online and in real life.
  2. Websites learn about them and decide what is advertised and presented to them.
  3. Those sites shape how children develop the analytical skills to evaluate what's being presented to them online.

Online advertising profiles get smarter to increase “engagement”, which generates revenue

Imagine a 12-year-old boy who spends time at the local pool with his friends one weekend. He takes photos and shares them on Snapchat and Facebook. Come Sunday, he takes his phone to the local sporting goods store with his dad. They pick up some fishing supplies and later head to a nearby park. Throughout the day he searches Google about what different fish might eat. Afterward, they meet back up with the family and head to a local fast food restaurant for dinner. After dinner, he plays a video game and searches YouTube for tutorials.

In this scenario, Google has enough information to build a profile of this person. Their business models depend on it. Using the phone's GPS, Google Maps and Facebook know he likes fast food cheeseburgers and fishing, and that he stopped at the local sporting goods store. YouTube knows he wants to watch video game tutorials. Google Search knows he's curious about fishing and certain video games, too. They also know his name and where he lives. In this scenario, the age of the user is largely irrelevant.

Now Google Search starts displaying ads for more fishing supplies. Google Maps starts suggesting what amounts to more places to eat junk food. YouTube starts playing ads related to fishing and new video game releases – some more violent than his parents would like him to see. These aren't worst-case scenarios; this is how these systems are built. The convenience can be helpful to adults (like giving you traffic alerts on the way to work) but dangerous for children. The worst-case scenario is a family member who sees the photos he posted at the pool earlier in the weekend. We know 90% of children who are abused know their abuser. Those innocuous photos become something much more sinister.

Online services only get more intelligent with time, building a profile of where he attends school, what music he likes, what his level of education likely is, and so on.

An opportunity for parents to exercise analytical skills and safety features

Children often lack the ability to distinguish fact from fiction, or ads from everything else – despite the tech savviness their parents perceive in them – and without guidance they fail to develop the analytical skills necessary to differentiate.

But this is also an excellent opportunity for parents to have a discussion with their kids about what's fake, what's real, what's an ad and what's not. It's important for children to exercise their brains in critical thinking. After all, we can't protect children from every ad. Schools have ads hanging in hallways, ranging from anti-smoking campaigns to vending machine promotions, and the local mini-golf course sponsors a sign on the football field's fence. Plenty of commercial material exists inside the classroom.

Google, for its part, has begun offering special accounts for children. Apple has moved in a similar direction with its various "Family" account mechanisms when you set up a new phone or account. Parents can designate a child's email address as belonging to a minor, and they then get more alerts, information, and permission requests before children buy apps, play games, and use their devices. This is also handy for seeing what photos your child is taking with their phone's camera.

Parents should take advantage of these features at families.google.com. Apple device users can learn more about iCloud accounts for children and Family Sharing at apple.com/icloud/family-sharing/.

Both services also let you pinpoint the location of your family members' devices globally – which isn't something people born in 1900 could do.

See also: how to protect kids on YouTube with parental controls

Join over 2,500 others receiving periodic updates about the Indiana Chapter, CACs, and child abuse prevention.

