The APA Wades Into AI

  • Writer: Kevin D
  • 4 days ago
  • 3 min read

The American Psychological Association has released a health advisory regarding adolescents and AI. With 13 experts (none from outside the field of psychology), 3 staff leads, and 8 listed staff consultants, it's worth considering what this organization says, even with a dearth of direct AI expertise on the panel. It's also worth noting that this organization didn't publish a health advisory on social media use in adolescence until May of 2023, but whatever.


AI Generated via Ideogram

The Report

After standard background information on the growth of AI and its impact on youth, the APA document clearly links concerns with AI to concerns with social media and internet technology in general:


We urge all stakeholders to ensure youth safety is considered relatively early in the evolution of AI. It is critical that we do not repeat the same harmful mistakes that were made with social media. This is particularly important for at least two reasons. First, unlike social media use, adolescents may be unaware when they are using AI or AI-assisted technology (see below) and may not realize how AI is impacting their lives. Second, AI has made the process of discerning truth even more difficult. Although misinformation has always been disseminated on the internet, AI can produce inaccurate information in new ways that have the effect of making many users believe the information is true, requiring adolescents to be especially vigilant.

The report then makes 10 recommendations, each accompanied by a more detailed explanation and concrete suggestions:

  1. Ensure healthy boundaries with simulated human relationships.

  2. AI for adults should differ from AI for adolescents.

  3. Encourage use of AI that can promote healthy development.

  4. Limit access to and engagement with harmful and inaccurate content.

  5. Accuracy of health information is especially important.

  6. Protect adolescents' data privacy.

  7. Protect likenesses of youth.

  8. Empower parents and caregivers.

  9. Implement comprehensive AI literacy education.

  10. Prioritize and fund rigorous scientific investigation of AI's impact on adolescent development.


Thoughts


It’s encouraging to see a national scientific organization acknowledging past missteps with social media and proactively addressing a similar emerging crisis. Several of the recommendations resonate with concerns I’ve raised before—particularly around chatbots, developmentally appropriate AI use, and more.


Three recommendations especially stand out:

1. Differentiating AI for Children and Adults (Recommendation 2): The idea of building AI systems fundamentally different for children and adults is compelling—perhaps a Sal Khan-style vision. However, it’s unclear whether a market driven by capitalism and consumer demand would support such a model. Could a regulated, tightly controlled industry make it happen? Would an “Instagram for kids” have prevented the societal issues we now face, 15 years into the social media era? It’s hard to say.


Given Meta’s track record—prioritizing market dominance and profit over safety, as reinforced in Sarah Wynn-Williams’ Careless People—I’m skeptical. The APA’s call to reduce “persuasive design” (e.g., gamification, personalized responses, manipulative notifications) in youth-accessible AI is a welcome stance.


If AI is here to stay, then creating verified, monitored, child-only networks with robust safeguards is far better than trusting platforms like Character.AI to self-regulate.


2. Deepfakes and Adult Supervision (Recommendations 7 & 8): The report highlights the dangers of deepfakes and the importance of adult oversight. There’s an interesting tension here: adults may want to use AI-generated content involving their children in appropriate ways. For instance, I once tried to create a comic for my daughter featuring a character modeled after her. The system blocked the request—likely due to safeguards against CSAM. Would a privacy-focused, device-based AI (like a hypothetical Apple AI) allow such use?


3. Health Information Accuracy (Recommendation 5): The APA rightly flags the decline of traditional search in the face of AI-generated content. It recommends that AI systems providing health information to youth ensure accuracy—or at least include clear disclaimers and encourage consultation with trusted adults.


This is especially important for controversial health topics like vaccines, gender identity, or abortion. The APA emphasizes literacy and disclaimers over content regulation—an approach that may be safer than attempting to control which “answers” AI provides, even if most systems currently align with liberal Western values.


Given the APA's influence and national scope, it will be interesting to see what traction this advisory gains.


©2018 by Kevin Donohue.
