
Can Australia Save the Planet's Kids?

  • Writer: Kevin D

Ravi Iyer and Jon Haidt's After Babel Substack highlighted the Australian government's new social media age limit.



The law places the onus on social media companies to verify users' ages; it does not punish children or bar them from viewing content - only from using the service as users. A reminder, as Ted Gioia points out:


There are only two businesses that call their clients users—drug dealers and Internet businesses. The East India Company is a role model for both.

This is good and leads to the next phase - age verification for AI models, as Marc Watkins writes:


I’m now more convinced than at any previous point that governments will start regulating generative AI via limiting access to tools like ChatGPT based on a user’s age, and I’m not sure that’s a good thing. Outside of AI, we’re increasingly seeing state and national governments move to implement strict age verification to limit the access of minors to various online sites. This includes not only adult content, but also social media. With several recent stories about teens harming themselves based on chatbot interactions, it feels like it is only a matter of time before some level of age-restricted access arrives for generative AI.

In fact, OpenAI recognizes some of these dangers (especially after being sued for assisting in the suicide of a 16-year-old) and is adding parental controls and support for teenagers. This does not go far enough, but it is a start.


Watkins' concerns about age verification stem from:


  1. His experience with Bluesky, a company valued at $700 million that doesn't want to comply with Mississippi's age-verification law because of the cost, or data privacy, or something.

  2. The cost for smaller companies. Watkins: "According to one source, implementing age verification might cost companies $20,000 to $80,000 or more." The link will take you to an advocacy and lobbying firm that is against age verification.

  3. How age verification affects traffic. Watkins: "Traffic to certain sites drops nearly 80% once age verification is in place because no one feels comfortable handing over their private information. Worse, it appears to reward noncompliant sites by shifting user traffic to them, undermining the core purpose of the law." The linked article, however, contains neither the cited statistic nor any references; the libertarian claim is simply deployed to promote the good old days of the internet.

  4. It might wreck the economy (really): "The prospect of facing fines, paying for increasingly expensive age verification services that lead to fewer users, or pull out of states that enact such laws will mean that many AI companies will face extinction. As absurd as it sounds, age verification may be the tipping point that causes the AI bubble to burst."


Watkins instead proposes AI regulation and literacy: a focus on "advocating for oversight, transparency, and working to educate parents and teens about potential harms." Of course, age verification is itself regulation - society agreeing that adults are more capable than teenagers and children of using certain things. We've seen this with alcohol, drugs, cigarettes, driving vehicles, pornography (the old-fashioned kind), some medicines, legal contracts, and guns. Libertarian arguments could be deployed against all of these restrictions, and they have been; but in a truly just society, we should work to uphold the interests of children over those of pornography users.


Created by Sora.

Fences are good, actually, and our society would be better with more of them in place, especially with regard to children, humanity, and technology.


We are starting to see a recognition of this:


Attorneys general from 44 states sent a letter to top AI companies warning them to implement safeguards to protect children from sexually inappropriate chatbots. The AGs said companies would be held accountable for harms to minors, citing a recent Reuters report showing Meta allowed its AI chatbots to engage a child in “romantic or sensual” conversations. (Pluribus News)

I hope that our government begins exploring basic regulation to ensure that:


  1. Children have limited access to companies whose goal is to addict them to their services.

  2. Children's data is protected and not used to target them.

  3. Children can use AI tools only in specific settings and at specific times.

  4. We define children by age and look toward a 16-to-18 model with verification.
