
Oxford researchers outline recommendations for AI and young people

by Freya Lucas

January 24, 2025

Oxford Internet Institute (OII) researchers have outlined recommendations for studying the impact of artificial intelligence on young people’s mental health, presented in four sections: 

  • A brief review of recent research on the effects of technology on children’s and adolescents’ mental health, highlighting key limitations to the evidence.
  • An analysis of the challenges in the design and interpretation of research that they believe underlie these limitations.
  • Proposals for improving research methods to address these challenges, with a focus on how they can apply to the study of AI and children’s wellbeing.
  • Concrete steps for collaboration between researchers, policymakers, big tech, caregivers and young people.

OII experts are calling for a clear framework for AI research that considers the impact on young people and their mental health, given the rapid adoption of artificial intelligence by children and adolescents using digital devices to access the internet and social media.

The recommendations are based on a critical appraisal of current shortcomings in research on how digital technologies impact young people’s mental health, and an in-depth analysis of the challenges underlying those shortcomings.

The paper, ‘From Social Media to Artificial Intelligence: Improving Research on Digital Harms in Youth’, published 21 January in The Lancet Child and Adolescent Health, calls for a “critical re-evaluation” of how we study the impact of internet-based technologies on young people’s mental health, and outlines where future AI research can learn from the pitfalls of social media research. Existing limitations include inconsistent findings and a lack of longitudinal, causal studies.

“Research on the effects of AI, as well as evidence for policymakers and advice for caregivers, must learn from the issues that have faced social media research,” lead author Dr Karen Mansfield said. 

“Young people are already adopting new ways of interacting with AI, and without a solid framework for collaboration between stakeholders, evidence-based policy on AI will lag behind, as it did for social media.”

The authors propose that effective research on AI will ask questions that don’t implicitly problematise AI, ensure causal designs, and prioritise the most relevant exposures and outcomes.

The paper concludes that as young people adopt new ways of interacting with AI, research and evidence-based policy will struggle to keep up. However, by ensuring that our approach to investigating the impact of AI on young people reflects the lessons of past research’s shortcomings, we can more effectively regulate how AI is integrated into online platforms and how it is used.

“We are calling for a collaborative evidence-based framework that will hold big tech firms accountable in a proactive, incremental, and informative way,” contributing author Professor Andrew Przybylski said.

“Without building on past lessons, in ten years we could be back to square one, viewing the place of AI in much the same way we feel helpless about social media and smartphones. We have to take active steps now so that AI can be safe and beneficial for children and adolescents.”

Read ‘From Social Media to Artificial Intelligence: Improving Research on Digital Harms in Youth’ using the link provided.
