Book Review: A New Direction for Students in an AI World from Brookings' Center for Universal Education
- Kevin D

- 1 day ago
- 4 min read
Released on January 14, 2026, "A New Direction for Students in an AI World: Prosper, Prepare, Protect" seeks to analyze the current global moment in education and AI and to chart a path forward. As with many such treatments - and given how education handles innovation in general - the rapidly changing nature of artificial intelligence, fueled by capitalistic drive, can make a report feel dated within a month of publication. Even so, the report's basic approach and tenets can prove valuable for educators and policymakers.
The conclusion of the report sets the tone:
After interviews, focus groups, and consultations with over 500 students, teachers, parents, education leaders, and technologists across 50 countries, a close review of over 400 studies, and a Delphi panel, we find that at this point in its trajectory, the risks of utilizing generative AI in children’s education overshadow its benefits. This is largely because the risks of AI differ in nature from its benefits—that is, these risks undermine children’s foundational development—and may prevent the benefits from being realized.
Organized into four main sections, the report introduces itself as seeking to identify AI's risks for education and to prevent them while maximizing its potential benefits (10). This "premortem" matches my own approach - the necessity of recognizing this moment and doing our best as parents and educators to avoid the mistakes made during the rapid adoption of the internet, social media, and mobile devices among the affected age cohorts. The report first offers context before presenting two potential outcomes - one rooted in a positive approach, the other discussing the risks AI poses. These outcomes are analyzed, and then a series of recommendations is given under a "Prosper, Prepare, and Protect" framework.
The key piece of context - and one rarely discussed by the salesmen of the AI field - is that "when children use AI, it affects their cognitive and emotional development" (16). This prism informs the analysis of risks and benefits and leads to Brookings' cautious approach. The report does a better job than most policy documents in calling for parental notification and instruction - especially about the risks of using AI in and out of the classroom.

The benefits Brookings identifies are detailed but fall into two large categories: "a powerful productivity tool that replaces routine tasks...personalized learning" (34). This personalized learning can help educators close gaps and support students with disabilities, whether physical or neurological. The analysis is succinct without being sparse and provides a good overview of why AI must be part of the education system moving forward.
The risks are firmly rooted in the effects on learning, children, and their development. Although the report focuses on children in their teens, it acknowledges these risks extend down to infants: "AI could influence the attachment process between infants or young children and their parents or caregivers" (55). For older children the harm is clearly laid out: "routine use and overuse of AI do not simply harm students' cognitive development - both actively place children at risk of cognitive decline" (56). (See WALL-E for an example.) Cognitive offloading in the age of AI is a real issue, and one reinforced by the traditional education model, which Brookings describes as shifting "from commitment to compliance, inverting the intrinsic and extrinsic motivations critical to student learning" (59).
Beyond the classroom, the report highlights risks in the design and function of the bots themselves. It calls attention to "seductive design": "they reinforce users' existing beliefs, ideas, or self-perceptions...[children] can still develop emotional attachments to AI systems" (71). Later, the report states (echoing data ably presented in The Anxious Generation) that (105):
interview data suggest that dependence carries strong psychological and emotional dimensions for students, manifesting in excessive use of AI tools, heightened anxiety when AI becomes unavailable, and a diminished sense of control over usage patterns
This is also reinforced in the third risk highlighted - "the trust that ensures young people have what they need in school to meet their needs and prepare them for the future, which sustains faith in educational institutions themselves" (83). Data protection and equitable access are also highlighted as concerns.
The penultimate section weighs these benefits and risks against each other and then finally presents a series of twelve recommendations under the headings of Prosper, Prepare, and Protect. The Prosper recommendations focus on incorporating AI tools and shifting learning to promote positive experiences and growth for students. Prepare focuses on development and envisioning, including the development of an ethical vision that centers human agency - an opportunity for our religious schools. Lastly, the Protect recommendations highlight the necessity of regulation around chatbots and data, along with offering more parental support.
These sections provide some broader strategies for school and system leaders in promoting these twelve recommendations. One strikes close to my own approach - highlighting the need to "use philosophy to understand the difference between can and should" (132). The authors call for "integrating philosophical approaches [to] foster ethical, reflective, and deliberative thinking" (132). This is a must, but it has been lost as many schools lack a true sense of purpose. For Catholic schools aimed at formation and fulfillment, philosophical and theological grounding must start at the youngest of ages.
The last recommendation remains a growing area of focus in my own presentations with schools. Not only must teachers be equipped with an effective, efficient, and ethical understanding of Generative AI; but our parents must recognize this as well. If we are truly in an Artificial Intelligence-Anxious Generation moment, we must have a broad societal effort to limit childhood interaction with these tools, before we lose another cohort to electronic servitude.
The report includes several appendices, the best of which compares five AI frameworks from around the world - offering templates for further development with an international focus.
Although the report does not present anything sweeping, its synthesis of so much discussion and preliminary research in one place - leading to a conclusion that advocates a limited and specific approach rather than widespread adoption - is what makes the report unique. For further analysis, I'd recommend Stephen Fitzpatrick's post in Teaching in the Age of AI.
