Subsidiarity and AI
- Kevin D
- Sep 19
- 3 min read
From the Catechism of the Catholic Church 1879-1884:
The human person needs to live in society. Society is not for him an extraneous addition but a requirement of his nature. Through the exchange with others, mutual service and dialogue with his brethren, man develops his potential; he thus responds to his vocation.
A society is a group of persons bound together organically by a principle of unity that goes beyond each one of them. As an assembly that is at once visible and spiritual, a society endures through time: it gathers up the past and prepares for the future. By means of society, each man is established as an "heir" and receives certain "talents" that enrich his identity and whose fruits he must develop...
...Socialization also presents dangers. Excessive intervention by the state can threaten personal freedom and initiative. The teaching of the Church has elaborated the principle of subsidiarity, according to which "a community of a higher order should not interfere in the internal life of a community of a lower order, depriving the latter of its functions, but rather should support it in case of need and help to co-ordinate its activity with the activities of the rest of society, always with a view to the common good."
God has not willed to reserve to himself all exercise of power. He entrusts to every creature the functions it is capable of performing, according to the capacities of its own nature. This mode of governance ought to be followed in social life. The way God acts in governing the world, which bears witness to such great regard for human freedom, should inspire the wisdom of those who govern human communities. They should behave as ministers of divine providence.
This excerpt defines the Catholic conception of subsidiarity - how then can we apply this in the age of AI?
When discussing the companies that power large language models, the most relevant aspect of their intersection with subsidiarity is that these are large multinational corporations, driven by profit (despite claims to the contrary), that generally lack an ethical conception of their own practices (Anthropic might be the best on this front, but that is certainly not saying much). A great piece on this concern is Gaia Bernstein's "Why AI Companies are Morally More Guilty than Social Media":
AI developers have known the dangers from the beginning. They've long expressed concerns about anthropomorphism—AI bots with human characteristics that could create emotional dependence.
Companies went ahead anyway.
With that in mind - how should subsidiarity be used to ensure that AI is a human-centered technology and not a predatory one?

First, and contra Marc Watkins, I believe it is best to begin at the national or international level and add age restrictions for many AI features. Children and teens can maintain access to tutor and research modes, but with parental controls, notifications, and limited response windows. Throttling and restricting access is a strong first step, and one rightly taken at the governmental level. This should extend to background AI in toys and devices as well: Mattel and others should not allow children to form deep emotional attachments with advanced machine algorithms.
Second, as we move down the chain of subsidiarity, we could see state- and district-level educational decisions about specific tools and literacy frameworks, with parents having a voice in their adoption, so that students have a safe place, in a supervised classroom, to learn to use AI effectively, efficiently, and ethically.
Finally, at the local and hyperlocal levels, we should promote pro-societal family reforms that allow tech-free childhoods, real-life play, and familial education. Just as a nascent anti-phone movement is growing and helping our culture move beyond safetyism and screenism, we need to get ahead and recognize that AI isn't embodied in a screen. Through voice mode, assistants like Siri and home devices like Alexa, wearables, and capitalism's relentless efforts to grow and expand, AI will be present in ways that iPads, phones, et al. aren't. By getting ahead of this, we can prevent the deep harms the iPhone era (2010-2024) has had on our children and adults.