New research commissioned by the NSPCC highlights the different ways that generative artificial intelligence (AI) is being used to groom, harass and manipulate children and young people.
This comes as polling shows that the UK public are concerned about the rollout of AI. Savanta surveyed 3,356 people from across the UK, including 469 people in the South East of England, and found that most of the public in the South East (89%) have some level of concern that “this type of technology may be unsafe for children”.
Call for safety checks
The majority of the public in the South East (79%) said they would prefer safety checks on new generative AI products, even if this delayed their release, over a speedy roll-out without safety checks.
The new NSPCC paper shares key findings from research conducted by AWO, a legal and technology consultancy. The research, Viewing Generative AI and children’s safety in the round, identifies seven key safety risks associated with generative AI: sexual grooming, sexual harassment, bullying, financially motivated extortion, child sexual abuse and exploitation material, harmful content, and harmful ads and recommendations.
Used to generate sexual abuse images of children
Generative AI is currently being used to generate sexual abuse images of children, enable perpetrators to more effectively commit sexual extortion, groom children and provide misinformation or harmful advice to young people.
From as early as 2019, the NSPCC has been receiving contacts from children via Childline about AI.
One boy, aged 14, told the service*:
“I’m so ashamed of what I’ve done, I didn’t mean for it to go this far. A girl I was talking to was asking for pictures and I didn’t want to share my true identity, so I sent a picture of my friend’s face on an AI body.
“Now she’s put that face on a naked body and is saying she’ll post it online if I don’t pay her £50. I don’t even have a way to send money online, I can’t tell my parents, I don’t know what to do.”
One girl, aged 12, asked Childline*:
“Can I ask questions about ChatGPT? Like how accurate is it?
“I was having a conversation with it and asking questions, and it told me I might have anxiety or depression.
“It’s made me start thinking that I might?”
Addressing these concerns
The NSPCC paper outlines a range of solutions to address these concerns, including stripping child sexual abuse material out of AI training data and conducting robust risk assessments on models to ensure they are safe before they are rolled out.
A member of the NSPCC Voice of Online Youth, a group of young people aged 13-17 from across the UK, said,
“A lot of the problems with Generative AI could potentially be solved if the information [that] tech companies and inventors give [to] the Gen AI was filtered and known to be correct.”
Government legislation
The Government is currently considering new legislation to help regulate AI, and there will be a global summit in Paris this February where policymakers, tech companies and third-sector organisations, including the NSPCC and its Voice of Online Youth, will come together to discuss the benefits and risks of using AI.
The NSPCC is calling on the Government to adopt specific safeguards for children in its legislation. The charity says four urgent actions are needed by Government to ensure generative AI is safe for children:
- Adopt a Duty of Care for Children’s Safety
Gen AI companies must prioritise the safety, protection, and rights of children in the design and development of their products and services.
- Embed a Duty of Care in Legislation
It is imperative that the Government enacts legislation that places a statutory duty of care on Gen AI companies, ensuring that they are held accountable for the safety of children.
- Place Children at the Heart of Gen AI Decisions
The needs and experiences of children and young people must be central to the design, development, and deployment of Gen AI technologies.
- Develop the Research and Evidence Base on Gen AI and Child Safety
The Government, academia, and relevant regulatory bodies should invest in building capacity to study these risks and support the development of evidence-based policies.
Sherwood: A double-edged sword
Chris Sherwood, CEO at the NSPCC, said,
“Generative AI is a double-edged sword. On the one hand it provides opportunities for innovation, creativity and productivity that young people can benefit from; on the other it is having a devastating and corrosive impact on their lives.
“We can’t continue with the status quo where tech platforms ‘move fast and break things’ instead of prioritising children’s safety. For too long, unregulated social media platforms have exposed children to appalling harms that could have been prevented. Now, the Government must learn from these mistakes, move quickly to put safeguards in place and regulate generative AI, before it spirals out of control and damages more young lives.
“The NSPCC and the majority of the public want tech companies to do the right thing for children and make sure the development of AI doesn’t race ahead of child safety. We have the blueprints needed to ensure this technology has children’s wellbeing at its heart, now both Government and tech companies must take the urgent action needed to make Generative AI safe for children and young people.”
You can read Viewing Generative AI and children’s safety in the round on the NSPCC website.
News shared by Kelly on behalf of NSPCC. Ed