Happy Halloween!
Anyway, while reading the redesigned Blogdex, I came across an article about backlinking and web logs. This interested me, so I figured I’d add a little heat and noise to the conversation.
So far as I know, there are basically two ways to trace a page back to the pages that link to it:
- Use a search engine. Many of the major engines have advanced search options that let you do something like this.
- Use your web server log files. Unless they’re behind a proxy or otherwise set up to hide this information, browsers usually tell the web server which page they followed a link from when they request a new page.
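To make that second item a little more concrete, here’s a minimal Python sketch of what the browser side looks like; the URLs are just placeholders, not real pages:

```python
# When you follow a link, the browser usually attaches a "Referer" header
# naming the page the link was on. That header is what ends up in the
# server's access log. Both URLs below are placeholders.
import urllib.request

req = urllib.request.Request(
    "http://example.com/new-page",
    headers={"Referer": "http://example.org/page-with-the-link"},
)
with urllib.request.urlopen(req) as response:
    print(response.status)  # the server now has example.org in its log
```

Flip that around and the server’s log ends up holding a record of every page that sent you a visitor, which is all a backlink really is.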
The problem with using a search engine is that the engines don’t spider more obscure sites as frequently as others, which means their databases are often a few weeks out of date. For example, I didn’t find out until today that Graham Leuschke, a guy I don’t even know, mistook me for someone else back in August. (Just to set the record straight, I am not the Web Nouveau guy. I had a few brief conversations with him back when I added a few of my sites to his list, but that was about it.)
The problem with using server logs is that they can be spammed by robots, and they’re meaningless if a visitor’s browser is set up to hide referrer information. Mark Pilgrim has thought about ways to deal with the spambot problem, but he hasn’t said anything about browsers that withhold referrers.
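If you do go the log-file route, the digging itself isn’t hard. Here’s a rough sketch that assumes an Apache-style “combined” format log; the file path, hostname, and bot filter are just stand-ins you’d swap for your own:

```python
# Pull referrers out of an Apache "combined" format access log and count
# them, skipping empty referrers, links from my own pages, and anything
# whose user agent looks like a robot.
import re
from collections import Counter

LOG_FILE = "access.log"   # hypothetical path to your access log
MY_HOST = "example.com"   # your own hostname, to skip internal links
BOT_HINTS = ("googlebot", "slurp", "spider", "crawler")

# Combined format ends with the request, status, bytes, then the quoted
# referer and user-agent fields.
line_re = re.compile(
    r'"(?P<request>[^"]*)" \d{3} \S+ "(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

referrers = Counter()
with open(LOG_FILE) as log:
    for line in log:
        m = line_re.search(line)
        if not m:
            continue
        referer = m.group("referer")
        agent = m.group("agent").lower()
        if referer in ("", "-") or MY_HOST in referer:
            continue
        if any(hint in agent for hint in BOT_HINTS):
            continue
        referrers[referer] += 1

for url, hits in referrers.most_common(20):
    print(f"{hits:5d}  {url}")
```

Run against a log it prints the twenty most common outside referrers; it obviously won’t see visitors whose browsers withhold the header, and the bot filter is only as good as the list of strings you feed it.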
It’s kind of a moot point in my case anyway. The Fish doesn’t run MySQL on this server, so I can’t use Movable Type, which has an automated way of assembling backlinks from other sites that run Movable Type. I guess that’s a call for me to start setting these things up myself.