The Promise and Peril of Chatbots: A Framework
- Kevin D
- Dec 4, 2024
- 4 min read
In preparing to expand into the resolution phase of my brief series on chatbots and education (Their Promise and Peril Part 1 and Part 2), I fed those blog posts into ChatGPT and had the content analyzed for themes, strengths, potential next steps, and improvements. I did this with an "office assistant" prompt that I use daily with GPT-4. More on AI-assisted writing, research, and dialogue to come!
If we believe that chatbots have both their promise (increased and personalized student support, supplements for teachers and tutors, expanded opportunities for correcting student misconceptions or access) and peril (supplanting human contact and intimacy, ethical issues of bias and access, and underlying problems inherent with LLMs), we must also recognize a path forward through this forest to an initial successful implementation.
I hope to propose some next steps that reflect a Catholic sense of the person and the opportunity presented at the beginning of the AI era. I don't possess deep philosophical, theological, technological, or educational expertise, but I think this is an important next step. My thinking was heavily influenced by this piece in Slate: "Schools Really Messed Up with Social Media. Now, We Have a Second Chance" by Nate Green. Green writes:
When social media was nascent, educators failed to see it as more than a purely social and extracurricular distraction, assuming that it didn’t deserve pedagogical attention and that, if we just banned it, we could simply ignore it. Many schools brought in an outside consultant (sometimes, that was me!) to discuss the dangers of social media, and then proceeded to ignore it for the remainder of the year. Often, the extent to which social media instruction was included in curricula was a single unit in a special course like Health or Library. This neglect left students to navigate social media on their own, and ultimately forced parents to deal with the fallout, which ranges from bullying, bigotry, and body-image issues to misinformation, disinformation, and radicalization. We still don’t fully understand how bad things will get.
He argues that at the beginning of a new era, we must get ahead of AI by writing policies around usage, knowledge, and ethics in the domain. Notably, his piece dates from June 2023.
What might an initial framework look like?
Primacy of the Person - An emphasis should be placed on AI as a supplement, not a replacement, for humans, relationships, and teachers. A commitment to this must be made in our policies, budgets, and contracts. This means investing in our staff, communicating with parents and students, and drawing a firm line around what AI can do to assist teachers (analyzing data, simplifying grading, relieving "rote" tasks, etc.) and what it cannot do (sensitive communication, genuine feedback, etc.). This is a great opportunity to contrast the Church’s teaching on personhood with gnostic and technocratic views, tying our belief in human dignity (as explained in the Theology of the Body) to the ethical use of technology.
Sensitivity to Development - Just as "screen time" should be limited for developmental reasons depending on age level, an approach to AI should acknowledge that its role changes as students change and grow. A chatbot providing feedback to an early reader or writer (e.g., Google's Read Along) might be appropriate at a younger age for a short(er) period of time, but AI interaction should count as screen time, not as a substitute for in-person learning. As students age, their access would increase based on their understanding of my next point.
Intellectually and Ethically Grounded - Staff and students should develop a deep understanding of what AI is and what it is not before using it outside of specific tools and functions. Access to chatbots or extensive use of a "standard" LLM should take place in a context where the basic development and structure of generative AI is understood, its limitations and advantages are presented, and a deeper understanding of human personhood underlies its use. This especially applies to creating AI-generated content and recognizing it, particularly in broader day-to-day life. Given that AI can generate content that appears credible, it is critical for both staff and students to understand its capabilities and limitations. Without this knowledge, students risk creating misleading or false information without realizing it, or doing so on purpose.
Monitored and Reflective - Tools and time for monitoring and promoting reflection should be in place to avoid falling into inappropriate usage, content, and relationships. Like all internet tools, AI can easily lead to pornography, erotica, and violence, so only monitored, selected tools should be available to developing minds. As adults, we can provide guardrails and help students develop a conscience around these boundaries as they grow. Providing opportunities for reflection on students' usage of and relationship with AI can also enable us to correct misconceptions and deepen those same understandings and limits.
Transparent and Collaborative - By presenting a plan to the broader community, church and school leaders can communicate ahead of time and discuss these steps to solicit feedback and refine an approach that reflects the needs and wants of staff and families. Sensitivity to concerns should not distract from the broader mission of forming young women and men into saints in the world they currently and will inhabit as they grow. An understanding of what AI is (see point 3) is not incompatible with banning its usage over deeper concerns. This communication must also happen among school leaders and other Catholic institutions, with resources on the ethical, theological, and technical aspects provided at little to no cost to support proper implementation.
I believe these five points are a starting approach to this moment and key considerations before deciding which AI tools should be used in the classroom, for what purposes, and when.