I may come back to tweak this essay repeatedly in the coming months. If, in this work in progress, I get a good groove going, I may move parts of the essay into my developer section to exist as permanent documents.
Page Design Rants
Link Text Complaints
I hate it when some page designers use raw URLs as link text. The rationale for doing this is that if a visitor decides to make a hardcopy of your page, they’ll still be able to extract the links. To me this is lame, and it’s really the fault of the browser being used. Ideally the browser, before sending the page to a printer, would extract all the URLs hiding behind the link text (including anchors!) and format the page so they appear in the printed document. Lynx does this. And with a print/paged-media style sheet you can do it yourself (although it still doesn’t quite work in IE6).
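For what it’s worth, a rule of roughly this shape is all it takes. A minimal sketch (not pulled from any particular site) using CSS generated content to append each link’s URL to its text, which is exactly the part IE6 ignores:

    /* Print style sheet: when the page goes to the printer, append the URL
       hiding behind each link (anchors included) after its link text.
       Generated content is standard CSS 2, but IE6 doesn't support :after,
       which is why this still doesn't quite work there. */
    @media print {
      a[href]:after {
        content: " (" attr(href) ")";
        font-size: smaller;
      }
    }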
More Sad Neglect of Hypertext
How come most of the major news sites don’t put links into the body of their articles? Why? They have a myriad of navigation bars, tables of contents and frames, most of which are useless compared to a good site map or search function, but their actual content, their articles, is just mildly formatted text. Again, what’s the point of putting a document on the Web if you don’t use hypertext the way it was intended?
Another thing I’d like to see in browsers is a way to warn you that the link you’re about to invoke will open another browser window (or, in Opera’s case, another window inside the browser). The link should be formatted differently somehow, or perhaps there should be a note in the status bar warning the user. Some site designers set their links this way because they think it’s helpful, or because they don’t want you to forget about their pages, and sometimes they’re right. But oftentimes it’s annoying, and I think it breaks the Web metaphor: a link flips the user to a new page or a section in the current page. It doesn’t cause a whole other book to appear. Or it shouldn’t, without warning the user first that it’s going to do that.
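Until browsers do this themselves, about the best an author can do is flag such links in the style sheet. A rough CSS sketch, assuming the links are marked up with target="_blank" rather than opened by script (and noting that IE6 doesn’t apply attribute selectors either):

    /* Author-side stopgap: make links that spawn a new window look different
       and say so after the link text. Only catches links that carry
       target="_blank"; links opened by script slip through. */
    a[target="_blank"] {
      border-bottom: 1px dashed;
    }
    a[target="_blank"]:after {
      content: " [opens a new window]";
      font-size: smaller;
    }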
Page Scrolling is Good!
I hate it when major sites break up an article into separate web pages. Invariably, if the option is provided, I invoke the link that fuses the article into one page so I can scroll from top to bottom. If this causes problems in hardcopying the page, that’s a browser design problem, not mine. I have a theory about the motivations behind breaking up long articles into separate pages: This forces you to view more advertisements.
Jake still complains that some people dislike scrolling through long outlines of links (see point six in his Top 10 Mistakes), but I think this is because links by themselves in a nested, bulleted list aren’t informative. Each bulleted point, in addition to pointing to a page and having descriptive link text, should also have a short paragraph describing the page in more detail. This makes well-designed site maps worth reading, and people are willing to scroll through them. The site map itself becomes content, not merely navigation. Readers can see how you imagine your site to be structured. They may not agree with you, but at least they can still find things. Site maps must become content and always be kept up to date.
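A sketch in HTML of what I mean, with the page names invented for illustration: each entry is a link plus a short paragraph, so the list reads as content in its own right.

    <!-- Site map entry: descriptive link text plus a short paragraph of
         detail. The pages named here are made up for the example. -->
    <ul>
      <li>
        <a href="/essays/">Essays</a>
        <p>Longer opinion pieces, including this one. Drafts are marked as
           works in progress and dated whenever they change.</p>
        <ul>
          <li>
            <a href="/essays/page-design.html">Page Design Rants</a>
            <p>Complaints about link text, pagination and the neglect of
               hypertext on big commercial sites.</p>
          </li>
        </ul>
      </li>
    </ul>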
Client-side Script Rants
As I’ve said, DHTML is dubious browser elitism that should not be relied on. Bookmarklets and other examples of client-script functionality and chrome must be clearly designated on a page, with ample warning that invoking them won’t work in all browsers. No content or essential functionality should rely on client-side scripts without clearly warning users what they are getting into and giving them a thorough description of what they are missing. Client-side scripts are nice for reducing the load on server scripts (the smarts of advanced browsers should be taken advantage of), but server-side scripts should be written on the assumption that JavaScript doesn’t exist; a rough sketch of that attitude appears below.
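Here is a minimal JavaScript sketch of what I mean; the form, the element ids and the behaviour are hypothetical. The form is assumed to submit to a server-side script as an ordinary request, and this code, if it runs at all, only layers a convenience on top of that.

    // Write for the no-JavaScript case first: the search form is assumed to
    // work as a plain submission to a server-side script. Everything below
    // is convenience only; the ids are made up for illustration.
    window.onload = function () {
      var form = document.getElementById("search-form");
      var field = document.getElementById("search-terms");
      if (!form || !field) {
        return; // nothing to enhance; the plain form still works
      }
      // Put the cursor in the search box and refuse to submit an empty query.
      // The server must still validate for itself, because none of this
      // exists for a scriptless browser.
      field.focus();
      form.onsubmit = function () {
        if (field.value.replace(/\s+/g, "") === "") {
          alert("Please type something to search for.");
          return false;
        }
        return true;
      };
    };

Which brings us to–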
Server-side Scripts and Page Generation
Server scripts should only be used for Web I/O from a database or flat file, and this means only for changing content. They shouldn’t be wasted on generating page layout unless it’s all pre-fabricated before being uploaded to the Web directory. Why waste CPU cycles on changing the page layout when you have CSS and client-side scripts? Why load the server down in real time when you can pre-fabricate all the alternative page layouts as static HTML files beforehand?
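To make that concrete, here is a made-up pre-fabrication script, sketched in Node.js; the file names and the template are invented and nothing on this site literally depends on them. It bakes flat article files into finished static pages at build time, so the live server only ever hands out files.

    // Pre-fabrication sketch: read each article from a flat text file and
    // write a static HTML page before the site is uploaded, so the web
    // server never generates layout at request time.
    const fs = require("fs");
    const path = require("path");

    const template = (title, body) =>
      "<html><head><title>" + title + "</title>" +
      "<link rel=\"stylesheet\" href=\"site.css\"></head>" +
      "<body><h1>" + title + "</h1>" + body + "</body></html>";

    const sourceDir = "articles"; // flat files: one article per .txt file
    const outputDir = "public";   // static HTML, ready to upload

    fs.mkdirSync(outputDir, { recursive: true });
    for (const name of fs.readdirSync(sourceDir)) {
      if (!name.endsWith(".txt")) continue;
      const text = fs.readFileSync(path.join(sourceDir, name), "utf8");
      const lines = text.split("\n");
      const title = lines.shift().trim();   // first line is the title
      const body = lines.join("\n")
        .split(/\n\s*\n/)                   // blank lines separate paragraphs
        .map(function (p) { return "<p>" + p.trim() + "</p>"; })
        .join("\n");
      fs.writeFileSync(path.join(outputDir, name.replace(/\.txt$/, ".html")),
                       template(title, body));
    }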
Still More Sad Neglect of Hypertext
One reason why people may not use links as much as they should is the legal issues that arise. If I point to a site with questionable material, then in some cases, in some countries, I may be guilty of something illegal myself. If I make illegal material easy to find, I could be in trouble too. This seems stupid to me. Search engines often link to the same sites: it may require a little digging, but you can find links to DeCSS through Google, DMOZ and so on. Is someone going to sue them? Or is this really just about intimidating the little people?
WWW Version 2.0
Tim Berners-Lee is still not happy with the Web. He thinks it can be better. Ted Nelson, or at least the staff on Project Xanadu, have often complained that the World Wide Web is an incomplete implementation of the hypertext idea.