Zombie Links

Today on Channel 9, Tommy Carlier wrote about how terrible URI aliasing is, referencing a URI aliasing service that is about to go dark. This is the worst possible case: millions of links go dark at the same time because of a single point of failure. The nice image you see (from Wikipedia) shows a single point of failure in a practical routing system, but the same principle applies to URI routing, or aliasing.


The initial response to this is that the problem is not just URI aliasing; it is URI death anywhere. URI death is not only about things disappearing from the Web. It is also about URIs that get renamed, or, more abstractly, representations that get renamed. The old URIs are still there; they just don’t point to anything usable.

It then occurred to me that URI aliasing death is an entirely fixable problem. The problem is that URIs are aliased and then the aliasing nexus dies. Well then, let the search engines capture all these aliasing mappings while they’re at it (crawling the Web). Search providers such as Google and Microsoft (Bing) could then expose these aliases via a Web service. This should work transitively as well.
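The transitive part can be sketched in a few lines. The mapping table and the resolution service are assumptions for illustration, not a real API; the point is that chains of aliases collapse to a final target, with a guard against cycles.

```python
def resolve_alias(mappings: dict[str, str], uri: str) -> str:
    """Follow alias mappings until reaching a URI that is not itself
    an alias, guarding against cycles in the mapping table."""
    seen = set()
    while uri in mappings and uri not in seen:
        seen.add(uri)
        uri = mappings[uri]
    return uri

# Example: two chained shortener hops collapse to the final target.
mappings = {
    "http://short.example/a1": "http://tiny.example/b2",
    "http://tiny.example/b2": "http://example.com/article",
}
print(resolve_alias(mappings, "http://short.example/a1"))
# http://example.com/article
```

If one shortener in the chain dies, a browser holding this table can still reach the end of the chain, which is exactly the “undead link” behaviour described below.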

Then the beautiful part. Hook your favourite browser up to the URI realiasing service from the search provider and hoopla! You have undead links: links that simply “refuse” to die (…reminds me of Superman III and the computer that became self-aware, refused to be shut down and started hooking into ambient power sources).

Now the more serious problem of “representation extinction”, whereby a document simply goes offline from a site and is not aliased by some other URI, is in fact already handled by the Internet Archive’s Wayback Machine.

The simpler problem of “representation aliasing”, whereby a document is still online on a site but has simply switched position, can be handled by search providers as well.

In fact, arbitrary alias failures can be handled by search providers if they hash representations at different locations. This would enable them, at least for static content, to provide locations for files that have moved, or have simply ceased to exist on a particular site but do indeed exist elsewhere.
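A crawler-side content-hash index along these lines is straightforward to sketch. The names here are illustrative assumptions; the idea is just that byte-identical representations at different URLs share a hash key, so when one URL dies the others are known alternatives:

```python
import hashlib
from collections import defaultdict

# Maps content hash -> set of URLs known to serve that exact content.
index: dict[str, set[str]] = defaultdict(set)

def record(url: str, body: bytes) -> None:
    """Index a crawled representation under its content hash."""
    index[hashlib.sha256(body).hexdigest()].add(url)

def alternatives(body_hash: str, dead_url: str) -> set[str]:
    """Other known locations serving byte-identical content."""
    return index.get(body_hash, set()) - {dead_url}

doc = b"<html>same paper, mirrored</html>"
record("http://siteA.example/paper.html", doc)
record("http://mirror.example/paper.html", doc)
h = hashlib.sha256(doc).hexdigest()
print(alternatives(h, "http://siteA.example/paper.html"))
# {'http://mirror.example/paper.html'}
```

This only works for truly static content; any per-site templating changes the bytes and thus the hash, which is why the text above limits the claim to static files.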

There is a potential copyright problem here, but that is not something a search engine should concern itself with. It is a “peer problem”.

Now all we need is for someone to implement this and integrate the Wayback Machine into the top browsers. I can live with Chrome.

Bring it!


About xosfaere

Software Developer
This entry was posted in Software, Technical. Bookmark the permalink.
