big big chungus

  • 1 Post
  • 72 Comments
Joined 2 years ago
Cake day: June 29th, 2023

  • I remember once reading an article or a comment somewhere here about a different solution that could be easier for nearly everyone.

    Although every country has its own laws and standards about how old people must be to view certain things on the internet, we can all at least agree on which categories we may want to restrict (e.g., adult content, social media, user interaction, etc.). After defining these categories, websites could add an HTML tag to their headers declaring which of them apply. The only thing that would need to happen on the user side is for them to instruct their browser which of these tags should not be allowed to load. Instead of each website needing to collect IDs and face scans to verify an age, it could simply declare which categories of content it serves, and the client device would compare them against its list of restricted content and act accordingly.

    A quick example: Client connects to Instagram -> Instagram’s HTML header contains the tag “social_media” -> the client’s browser sees that “social_media” is on the blocklist -> the user only gets a restricted-content screen.
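    To make this concrete, here is a minimal sketch of the client-side check in Python. It assumes a hypothetical `<meta name="content-categories">` tag (this is not an existing standard, just one possible shape for it); a real browser would do this natively while fetching the page.

```python
# Sketch of client-side category blocking, assuming a hypothetical
# <meta name="content-categories" content="..."> tag (not a real standard).
from html.parser import HTMLParser


class CategoryTagParser(HTMLParser):
    """Collects categories declared in <meta name="content-categories" ...>."""

    def __init__(self):
        super().__init__()
        self.categories = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "content-categories":
            # Categories are assumed to be a comma-separated list.
            self.categories.update(
                c.strip() for c in attrs.get("content", "").split(",") if c.strip()
            )


def is_blocked(page_html, blocklist):
    """Return True if the page declares any category on the user's blocklist."""
    parser = CategoryTagParser()
    parser.feed(page_html)
    return bool(parser.categories & blocklist)


# Example: a page declaring itself as social media with user interaction.
page = ('<html><head>'
        '<meta name="content-categories" content="social_media,user_interaction">'
        '</head></html>')
print(is_blocked(page, {"social_media"}))   # blocked -> restricted-content screen
print(is_blocked(page, {"adult_content"}))  # not blocked -> load normally
```

    The whole check is a set intersection, which is why the technical side is so cheap compared to ID or face verification.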

    While the technical side would be easy, this solution still relies on websites being honest about their categories and on users enforcing the blocklist. The Australian government would not have a hard time making sure that Instagram and other widespread social media sites are honest about their content, but the sheer volume of other websites on the internet would make universal enforcement impossible. This would require either trusting the goodwill of others (not easy to find in an enormous anonymous space), having automated crawlers try to guess the tags, or relying on the many public blocklists to fill in the gaps.

    The second half of this solution is for the tags to actually be blocked by browsers. Since these restrictions only ever apply to children, we should task their parents with ensuring that their children can only use web browsers with these blocklists enabled. I assume that any operating system worth its salt has options to restrict the installation of other software, so the only change needed is for browsers to also ship with parental controls that let parents set these blocklists and prevent them from being changed without permission. “User interaction” and the names of the other tags are likely alien phrases to many parents, so browsers should probably offer simple preset blocklists that state their purpose, e.g., “Australian social media restriction for children under 16”.

    If parents really don’t want their children on social media, they shouldn’t expect technology to do all of the parenting for them. We can give them simple, safe and secure tools to control their children’s access to their devices, but they are still responsible for actually using those tools and ensuring they aren’t being circumvented.

    What do you think about this? Can we rely on websites honestly tagging their content, on devices shipping with working parental controls, and on parents properly using them? Or must we really scan everybody’s face and ID before letting them use social media? My solution places a lot of (maybe misguided) trust in websites and parents, but I think it is the easiest way to enforce every restriction on children’s access to certain parts of the internet while still respecting everyone’s privacy.