Dr. Mark Humphrys

School of Computing. Dublin City University.


The Web - Overview

Life before the Web

I was on the Internet before the Web took off in 1993.
Many archives existed on the Internet before the Web. You accessed them as follows:


	Run the ftp program.

ftp ftp.cs.ucla.edu

	Connect to some ftp site that you know of.
	There is no easy way of bookmarking 
	or linking to these sites.
	People have to build and maintain their own lists 
	of sites and passwords.

enter userid "anonymous"
enter password (your full email address)

	Typing or pasting all this in every time was VERY tedious.


ls

	List files. Plain format, showing list of filenames.
	Little or no idea what is in these files.

get index.txt

	Get a master file that will explain 
	what is in the archive.
	You have to read it offline and then find what you are
	interested in - say a collection of Shakespeare plays.

cd Shakespeare

	Go into a sub-category.

get index.txt

	Find out what is in there.

get macbeth.txt

	Finally get what you are looking for (possibly).

	All these files you get end up in random places
	on your disk. They are not all stored in a place
	like the browser cache, periodically wiped.
	Instead, you have to manage them all.
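The ritual above can be sketched in code. Here is a minimal version using Python's standard ftplib, assuming the host and archive layout from the example (they are illustrative, not a live server):

```python
from ftplib import FTP

def fetch_macbeth(host="ftp.cs.ucla.edu", email="you@example.com"):
    """Replay the manual session: connect, log in anonymously, browse, fetch.
    Host and file names are the illustrative ones from the text."""
    ftp = FTP(host)                        # connect to a site you know of
    ftp.login("anonymous", email)          # userid "anonymous", password = your email
    print(ftp.nlst())                      # plain list of filenames, no descriptions
    with open("index.txt", "wb") as f:     # master file explaining the archive
        ftp.retrbinary("RETR index.txt", f.write)
    ftp.cwd("Shakespeare")                 # go into a sub-category
    with open("macbeth.txt", "wb") as f:   # finally get what you came for
        ftp.retrbinary("RETR macbeth.txt", f.write)
    ftp.quit()
```

Even automated like this, every file still lands on your own disk for you to manage - the script just replays the tedium.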

With this user interface, is it any wonder that the Internet never took off!

In fact, there were even worse interfaces. Some archives were accessible only by commands embedded in email messages!

There was lots of information and resources online before the Web, but it simply wasn't "browsable". You couldn't casually follow links, and move on. You had to invest lots of effort in everything you looked at. So it was only used by those who were basically interested in the technology. Mass adoption had to wait for a "browsable", "memory-free" user interface.


Mosaic - Perhaps the most revolutionary program of all time. It combines all of that complexity into a single address, the URL (Berners-Lee's idea), but crucially, Mosaic makes it mouse-driven.

An address like this names a file. You can bookmark it privately, or provide a link on your page for others to follow. No passwords, no typing, just an address. To view the file, you click on the address. It downloads into temporary file space (the browser cache). The browser maintains this space - you don't have to manage it. And the final act of beauty: an index file contains within it a description of what is in the archive, including a link to each sub-category's index, which in turn contains links to the files themselves.
You can browse, "graze", and move on, with no clean-up to be done afterwards. You can casually follow links with little effort: no typing, just mouse clicks.
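A single address bundles the protocol, the site, and the file path that the manual session needed separately. A sketch using Python's urllib.parse, with a hypothetical URL built from the FTP example above:

```python
from urllib.parse import urlsplit

# Hypothetical address combining everything the manual session needed.
url = "ftp://ftp.cs.ucla.edu/Shakespeare/macbeth.txt"

parts = urlsplit(url)
print(parts.scheme)   # ftp - which protocol to speak
print(parts.netloc)   # ftp.cs.ucla.edu - which site to connect to
print(parts.path)     # /Shakespeare/macbeth.txt - where the file lives
```

One string carries the whole recipe, which is what makes it bookmarkable and linkable.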

More history

Strictly speaking, Mosaic wasn't the first mouse-driven web browser. It was the first that was widely used. This seems to be because it was the first that:
  1. ran on Windows, Mac and UNIX (Berners-Lee's browser was for NeXT)
  2. was easy to install (a single file)
  3. was easy to use (looked like a normal modern app)
  4. had inline images (Mosaic invented the IMG tag)
For instance, I had heard about mouse-driven (UNIX) web browsers before Mosaic, but never got around to downloading them because I didn't see the point of the Web until I saw Mosaic.

Why did the Web (since Mosaic) work?

  1. Hide addresses (hypertext).
  2. Share the work (people construct links for you to follow).
  3. Browsable (cache, no passwords). Index files browsable and discardable.

  4. Clickable (there were text-based Web browsers before Mosaic, but they involved typing numbers to say which link to follow). A mouse is perfect for once-off, "discardable" selections like this.
  5. Readable texts - The text-based browsers filled the text with intrusive numbers, so many didn't see the point of the system. Mouse-clicking on underlined words restores the readability of the text.
  6. Distributed hypertext - Hypertext had been around for years. But when hypertext meant you could click on words in a help system, many said "cute" but didn't really see the point. When the click could take you to points in new systems, suddenly everyone saw the point and hypertext finally became popular.

  7. Browsers allow handy organisation and editing of private bookmarks.
  8. Not restricted to one interaction - Browsers allow you to spawn off multiple simultaneous window sessions while waiting for slow downloads.
    • New tab
    • New window
    • A nice tip: Preferences - Save windows and tabs of last time

Things that break the Web model

  1. In general, any page you can't bookmark or link to breaks this model.

  2. Using HTTP POST for a Web form breaks this model (you can't link to a filled-in version of the form).
    You might do this deliberately, of course - many forms should use POST. e.g. the form contains a password, which should not end up in a URL that might get shared; or the form uploads a binary file or a large amount of text.
    To allow someone to link to the form filled in with arguments, use HTTP GET.
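To make the GET/POST difference concrete, here is a sketch in Python (the form URL and field names are made up for illustration):

```python
from urllib.parse import urlencode

# With GET, the filled-in form IS a URL: bookmarkable, linkable.
base = "https://example.com/search"
fields = {"author": "Shakespeare", "title": "Macbeth"}
link = base + "?" + urlencode(fields)
print(link)   # https://example.com/search?author=Shakespeare&title=Macbeth

# With POST, the same fields travel in the request body instead;
# the URL stays bare, so there is no filled-in version to link to.
```

Anyone you send that GET link to lands on the same filled-in form - exactly the property POST gives up.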

  3. Temporary URLs and changing URLs break this model (have to follow a process to find the data again).

  4. Listing an email address without making it a mailto: link breaks this model. Unfortunately this is increasingly essential because of spam. See example using image and JavaScript.
  5. Sending email attachments round a company (instead of having the file on a public Internet page or private Intranet page) breaks the model (harder to see an archive of all such files, not so easily browsable, can't link to it).

  6. Referencing things online without providing a hyperlink to them breaks this model.
  7. Not linking to other sites, not linking creatively within your site, and in short, just using hypertext to present a series of menu options, breaks this model (back to hypertext as it was used before the Web, to just present a few menu options within a site).

  8. Many P2P file-sharing systems break this model, by having temporary "web sites" that you can't link to.
    • Under BitTorrent you can link to a .torrent file to launch your download. This address can be permanent if the file is legal.

  9. Any more examples?

Diversion - p2p

p2p is an important model for distributing CPU load, disk space, bandwidth, addressing and routing data, and so on, as the Internet shows.

You could argue that the applications email, Usenet and DNS are all p2p.

  • Applications of P2P networks apart from Content delivery

  • Skype VoIP uses p2p to distribute traffic load of the calls (some of your bandwidth may be used to route other people's calls, just as your Internet host may be asked to route other people's email).

  • p2p used to negotiate a temporary comms session: e.g. a p2p network to connect two players directly for an online game.

p2p for publishing

p2p is best-known for file sharing (data or programs). Sharing files with p2p often seems worse than the Web, since there is (usually) no permanent address you can link to. Does p2p for publishing have a function other than sharing data that it is illegal to set up a website for (such as copyrighted files)?

Possible legal uses:

  1. A small site (e.g. a blog) sharing large files (e.g. video) that are the subject of massive topical interest (flash crowds). P2P could be used to distribute the load.
  2. A big organisation sharing thousands of gigabyte-files. e.g. the BBC archive.
  3. Private p2p network. e.g. Extended family sharing contents of each other's photos and videos directories.
  4. Using p2p to distribute large releases of very popular downloads, e.g. a new Windows update or a new Linux release. To stop the main server getting overloaded.
    • One of the problems is trust: would you trust the person you are downloading from? What if they altered the data? With the fixed HTTP model you can trust a download from microsoft.com. You need watertight error-checking and detection to prevent any client being able to interfere with the data.
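The error-checking this calls for is typically done with cryptographic hashes - BitTorrent, for instance, stores a hash per piece in the .torrent file. A minimal sketch in Python, with made-up payloads:

```python
import hashlib

def verify(data: bytes, announced_hex: str) -> bool:
    """Accept the download only if it hashes to the published value."""
    return hashlib.sha256(data).hexdigest() == announced_hex

original = b"windows-update-payload"                 # illustrative payload
announced = hashlib.sha256(original).hexdigest()     # published by the trusted server

print(verify(original, announced))            # honest peer: True
print(verify(b"tampered-payload", announced)) # altered data: False
```

The trusted publisher serves only the tiny hash over HTTP; the bulk bytes can then come from untrusted peers, since any tampering is detected on arrival.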

Summary of legal uses?

  1. Small-to-medium bandwidth legal data: p2p no use, use website.
  2. Small-to-medium bandwidth legal software: p2p no use, use website.
  3. Massive bandwidth legal data: p2p could be useful - doesn't matter so much if data is corrupted - few people will do that anyway.
  4. Massive bandwidth legal software: p2p could be useful - but dangerous if software is corrupted - and lot of incentive to do so.

Of course, as it stands today, most actual use of p2p file sharing is probably illegal.


On the Internet since 1987.
