Policing the Internet

There is someone in this world I find myself disagreeing with a lot. Families are like that, after all. But recently I had a heated debate with them about the state of the Internet, and I conceded that some kind of innovation was required, though I did explain that most of the solutions they proposed weren’t workable and didn’t really solve the right problems…

The debate ranged far and wide over the state of the Internet, but the first thing we agreed on was that there is a lot of out-of-date information on the Internet.

And it’s true, there is. Some of it has dates on it, a lot of it doesn’t, but a quick check against the likes of the Internet Archive demonstrates how old some of it is. Now, some of it won’t go out of date very quickly, and some of it will be barely minutes old and already out of date – and herein lies the problem.
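
As an aside, for anyone curious what that sort of quick check looks like in practice, here is a minimal sketch (in Python, against the Wayback Machine’s public availability API) that asks whether the Archive has a capture of a page and when it was taken. The URL in the example is purely a placeholder, and a page the Archive has never crawled simply comes back with no snapshot at all.

    import json
    import urllib.parse
    import urllib.request

    def last_archived(url):
        """Ask the Wayback Machine whether it has a capture of a page.

        Returns the timestamp (YYYYMMDDhhmmss) of the closest available
        snapshot, or None if the Internet Archive has never captured it.
        """
        api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url)
        with urllib.request.urlopen(api, timeout=10) as response:
            data = json.load(response)
        snapshot = data.get("archived_snapshots", {}).get("closest")
        return snapshot["timestamp"] if snapshot else None

    # Placeholder page, purely for illustration.
    print(last_archived("example.com"))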

How do you know when something’s out of date? Better still, how do you police it? The suggestion raised was to send a query to the author on the anniversary of publication, asking whether it’s still up to date. Putting aside the multitude of practical problems, the theory is quite sound: presumably the author of something published – especially if they’re a hobbyist – would be aware if what they had written had become out of date.

Unfortunately, reality trumps vision here: the overhead of tracking pages, and their content, being up to date would not only create a truly massive database (not quite the multi-petabyte archive the Internet Archive has, or however many exabytes Google has, but still huge), it would also mean sending out a great many messages to people asking whether their pages were still up to date.

Then, realistically, how many of them are going to actually respond, and if they do, how many of them are just going to reply with a quick ‘yes’, even if it isn’t? The answers are ‘not that many’ and ‘most of them’, respectively. The optimists among you will no doubt think I got the order the wrong way around, but sadly not.

So, this led to the second major suggestion: vet new sites coming onto the web. Get them into the system up front. Essentially, give people a licence to publish information on the web. This has several interesting implications.

It would pretty heavily cut down on fraudulent sites. Right now, setting up a site is a free-for-all exercise: anyone can do it, anyone can own a domain name, and even if the name is trademarked or belongs to someone else, anyone can still register it and wait for the inevitable legal wrangling. After that, the actual exercise of publishing content, and managing it, is surprisingly commonplace.

Now, if you put in place some kind of vetting procedure – especially one that requires regular renewal – you would strongly encourage site owners to keep their server-side software up to date. That would certainly help reduce some of the bad behaviour going on out there.

I see two problems with this otherwise glorious view of a policed Internet, even putting aside the whole ‘global’ thing. Firstly, the Internet as it stands is, by its very structure, exceptionally good at one thing: rerouting around failures.

The so-called Great Firewall of China is not exactly the greatest barrier to content out there, and you can bet that whatever policing measures were introduced here to prevent unauthorised publication, they would be rerouted around too.

Secondly, and probably more importantly for some people, there is the whole censorship angle. So you have an authority of some description, presumably at least partly government-funded, authorising publication of material onto the public Internet. Now, sure, there are measures in place to withdraw illegal material, but they’re not that reliable, certainly not across borders, and again the Internet reroutes around them.

But if you have a government-funded agency proactively (as opposed to reactively) vetting content, you can block content before it is ever published. If the penny hasn’t dropped yet, think state-published and state-vetted news…

Then we come back to the other matter: globalisation. The ’net is global. It’s everywhere, and that means that if one country implements stricter measures, they mean precisely squat outside that country. I’ve mentioned the Great Firewall of China – the strict border control there doesn’t affect Internet content in the US, except to limit material going into China, at least theoretically.

It’s long since been shown, though, that getting through the Great Firewall isn’t exactly the hardest thing to do, which makes you question its practicality in the first place – but practical or not, the fact remains that you have two different countries with different rules. The rules would have to be enforced equally everywhere to have any meaningful impact, and as it stands, that just isn’t going to happen.

A bit like policing the Internet generally, I guess, because it will just reroute around any other blockages you care to place.


4 Responses to Policing the Internet

  1. Adonis says:

    For my blog (linked!) I try and write things in a ‘timeless’ fashion. Certain things like composition and f/stop really don’t change.

    I find that a lot of sites, in an attempt to be ‘new, hip and relevant’, tend to ignore information that doesn’t change, assuming that someone else somewhere probably already has it written up. This is likely true, but I find that re-linking current info is more of a plague.

    In the rare instance where I’m talking about a specific lens, camera or even generation of cameras that is likely to become out of date, I’ll usually throw in the qualifier of ‘Currently, as per the date of this article…’

  2. Arantor says:

    Interestingly, that ‘someone else somewhere’ tends to be Wikipedia, at least for most people, I find.

    Linking current information is usually problematic because of what has become known as ‘link rot’, where links die over time as sites disappear off the net due to the author getting fed up and moving on, or where links get updated due to a software change and there’s a logistical break in getting from the old to new content.
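
    To give a concrete flavour of the naive check for this – and this is nothing more than a sketch, with a made-up list of links – you can simply request each link and flag anything that fails to respond or comes back with an error. It won’t catch the subtler case where a page still loads but the content behind it has moved or been rewritten.

        import urllib.error
        import urllib.request

        # A purely illustrative list of links to check for rot.
        LINKS = [
            "https://example.com/old-article",
            "https://example.org/",
        ]

        def looks_dead(url):
            """Return True if the link fails to resolve or returns an HTTP error."""
            request = urllib.request.Request(url, method="HEAD")
            try:
                with urllib.request.urlopen(request, timeout=10) as response:
                    return response.status >= 400
            except (urllib.error.URLError, ValueError):
                return True

        for link in LINKS:
            print(("DEAD " if looks_dead(link) else "OK   ") + link)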

    The problem is that most people are simply not that thorough, and are always chasing what’s new rather than making sure there are good resources for what’s already out there. And Wikipedia is not necessarily the best resource, simply because anyone can edit it – more than once I’ve thought about encyclopaedic resources I’d like to see that weren’t managed by committee but by a small, select group of trusted editors. At least that way, there’s a chance the sites would be kept up to date and well moderated.

  3. Adonis says:

    Wikipedia is still, at its best, an encyclopedic source of information. You can look up ‘handstand’ and it will tell you the various types and the history of each – but not how to do one.

    Similarly (and more relevant to me), the F-number article (for cameras) goes into all sorts of formulae, history and why it does what it does. The practical application is a small picture and a bit of very buried text.

  4. Arantor says:

    True enough, Wikipedia is vaguely encyclopaedic. But I’ve found there is a loose link between the accuracy of articles and their subject matter. Specifically, I’ve found that articles about culture (especially geek culture) tend to be more militantly moderated and kept up to date.

    I say militantly because, for an encyclopaedia that ‘anyone can edit’, there is a surprising number of ‘edit wars’ and moderation exercises, and rather less self-moderation than I would have expected to see.

    Your example of F-numbers on Wikipedia seems to follow what I’ve observed generally: the geeky part (the formulae, history and so on) is thorough, encyclopaedic and likely kept up to date, while the bit that most people would actually want is pretty small. It’s also worth noting that it’s that last part someone new to the concept of F-numbers would be looking for, as a general introduction to what they are and what they do, and they would only hang around for the meat if they were interested.

    Therein, really, lies the problem. Sure, you have companies putting up information about their products, but by and large the information epicentres of the Internet – the hobbyist blogs, and all of the wikis (and related sites) out there – are populated by people who just add that information in their spare time.

    They may not be industry experts. They’re probably not good copywriters. But they are passionate about it, which is how the information makes it up there in the first place. So far, so good. But passion only takes you so far – it doesn’t take much before life gets in the way and the hobby has to be put on hold, which means the maintenance cycle of updating and fact-checking may not happen promptly, if at all.

    Now, on something like Wikipedia, it’s not that easy to find a page that hasn’t been updated in a while, and when you do, it’s usually because of the above: the one or two people who looked after it have, for whatever reason, let it go.

    That’s the main problem that we discussed originally: that there’s so much information out there put up by hobbyists that goes out of date so quickly, and there’s no process by which to review that information.
