Many things can be done on the server side
to speed up Web performance:
- Concurrency: start servicing a new client while still responding to the last client.
- Cache of (maybe huge numbers of) files in memory.
Disk reads are slow,
so don't make a separate disk access for every file request.
Instead, maintain a cache in RAM of frequently accessed files
and/or small files, which are easy to hold in RAM.
e.g. The search on my genealogy site
searches the text of the web pages.
There are over 1,700 web pages,
but the search is instant.
It seems to be caching every single web page in RAM.
Web pages are text files, and so are small compared to images and video.
1,700 pages is only about 20 MB in total.
It could easily hold all of that in RAM.
The entire site is about 30 GB,
so the HTML text is less than 1/1000 of the site.
This is fairly typical.
- Multiple disks.
The site could be spread over multiple disks
to allow many reads to go on at once
and to reduce seek times.
- Multiple servers.
- Content delivery network (CDN)
- a geographically distributed network that serves copies of resources from near the client.
Related to how the site is designed:
- Minify JS and other text files
to reduce size (reduce download time) and make parsing faster.
(Text files tend to be tiny anyway.)
- Bundling of files.
One network request for a single bundled JS file for the page,
instead of 20 network requests for 20 JS files.
Same for CSS - bundle into one CSS file.
Reducing the number of network calls can make a big difference.
e.g. At the time of writing I have 4 JS files for each page
on Ancient Brain
that I bundle into one JS file,
and I have 12 CSS files for each page
that I bundle into one CSS file.
- Small / low-resolution images (for any images used inline).
The user can click to expand to the full-size version.
The definition of "small" changes over time.
For high-demand sites:
- Multiple copies of the entire site -
a front end routes requests to different CPUs.
Problem: It is OK to have all (small-size) requests come in through one front end
and get routed to the searching nodes.
It is not OK to have all (large-size) replies go back through one front end - that is a bottleneck.
Solution: TCP handoff
- a trick whereby the searching node replies directly to the client,
in a manner that is invisible to the client.
The reply load is therefore distributed over all the nodes.
- Caching can be done on server and client.
- Server can cache files in memory.
The server could, say, check the file's date each time the file is requested, and only do a disk read if it has changed.
- Client can cache files in memory or disk.
Does not ask server. Just uses local copy.
- Server can tell client how long to cache a resource for.
Uses HTTP headers.
- Cache-related HTTP headers:
- Caching using HTTP headers on Apache:
- How long to cache for?
- Some files change regularly:
as the site develops, HTML, CSS and JS might change many times.
- Some files hardly ever change: a JPEG might be unchanged for years.
HTTP servers can log all accesses,
and can have a separate log for errors.
Typical web server logs
(colour-coded here for readability;
normal logs are not colour-coded).
The variety of URL formats shows how the Web has tried to provide a unifying interface to all Internet protocols, data and activities.
Some URI schemes in use:
- http: - plain, unencrypted HTTP
- if traffic is intercepted, sensitive data can be seen
- being replaced by https
- file: - can browse files off the local disk
- very useful
(may not need the prefix)
- mailto: - very useful
- but spammers search pages for these addresses
- ftp: - guest ("anonymous") login with no password - pre-web publication system
- news: - pre-web discussion system
- now survives on Google Groups
- gopher: - pre-web publication system
- telnet: - log in to a server as guest
- not used any more
- old WAP system - can launch phone call
- tel: - launch phone call
- launch phone call / chat
- sms: - send a text to a phone
- https: - secure HTTP.
Mixed content rule - an https page should include https content, not http content. Browsers may enforce this.
- data: - can include an image (or other resource) in the page directly, without it being a separate file.
- view-source: - can make a link like this
to view the source of a page.
Blocked by Chrome - and yet when you view source manually, you get a "view-source:" URL. So why block it?
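The data: scheme can be sketched like this: the resource's bytes are base64-encoded straight into the URL (shown with a text resource; the same works for small images). The function name is hypothetical:

```javascript
// Sketch: a data: URL embeds the resource's bytes (base64-encoded)
// directly in the page, so no separate HTTP request is needed.
function toDataUrl(mimeType, bytes) {
  return `data:${mimeType};base64,` + Buffer.from(bytes).toString('base64');
}

// Usage, e.g. inline in HTML: <img src="data:image/png;base64,...">
const url = toDataUrl('text/plain', 'hello');
```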
The browser uses MIME types to decide how to handle each kind of content:
(a) Plug-in - runs inside the browser process.
(b) Helper application - runs as a separate process.
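As an illustration, a server or browser might map a file extension to a MIME type with a small lookup table (this subset and the function name are hypothetical):

```javascript
// Illustration: map a file extension to a MIME type, the way a server
// chooses the Content-Type header and the browser chooses a handler.
const MIME_TYPES = {
  '.html': 'text/html',
  '.css':  'text/css',
  '.js':   'text/javascript',
  '.jpg':  'image/jpeg',
  '.png':  'image/png',
};

function mimeType(filename) {
  const dot = filename.lastIndexOf('.');
  const ext = dot === -1 ? '' : filename.slice(dot).toLowerCase();
  return MIME_TYPES[ext] || 'application/octet-stream'; // unknown: generic binary
}
```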
Cookies relate one stateless client-server request
to other client-server requests.
They can identify the user (pay-to-view, registration, personalisation).
- The server sends data that is stored on the client side (in a file).
A server can only read cookies that it previously sent
(not other sites' cookies).
- How to see your cookies
in different browsers.
- PHP cookies
- Security issue:
Can you spoof someone else's cookie?
e.g. For user login: if the userid is stored as a simple cookie, you could set the cookie:
userid = someoneelse
and then log in as them.
Random session IDs or cryptographically signed cookies are more secure.
Many things can be done on the client side
to speed up Web performance.
Actually, all of these, though they take place on the client, involve server support too:
- Client-side caching
- Site-wide (or ISP-wide) cache
via proxy server.
- Lazy load
- of images etc.
- Infinite scroll
- Load more of page on scroll to bottom.
Use with moderation:
infinite scroll is only suitable for some types of site.
- Delayed loading of resources.
Delayed running of scripts.
Fetch some resources / run some JS only after initial page is rendered.
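The infinite-scroll decision above can be sketched as pure logic (the threshold value and the function names are hypothetical):

```javascript
// Sketch: load the next batch of content when the user has scrolled to
// within `threshold` pixels of the bottom of the page.
function shouldLoadMore(scrollTop, viewportHeight, pageHeight, threshold = 200) {
  return scrollTop + viewportHeight >= pageHeight - threshold;
}

// In the browser this would be wired up roughly as:
// window.addEventListener('scroll', () => {
//   if (shouldLoadMore(window.scrollY, window.innerHeight,
//                      document.body.scrollHeight)) fetchNextPage();
// });
```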
DCU is (apparently) not using proxy servers any more.
But they are still in use outside DCU.
Some machines may
communicate with the outside world through a proxy server.
Others communicate directly (not through a proxy).
The proxy (forwarding requests through 220.127.116.11)
alternates between different IP addresses
(for load balancing).
- Typical proxy ports: 8080 or 3128.
To set proxy, something like:
- Firefox - Tools - Options - Advanced - Network - Settings
- IE - Tools - Options - Connections - LAN settings
You may use a
proxy auto-config (PAC) file:
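A PAC file is itself a small piece of JavaScript: the browser calls FindProxyForURL for every request. The hostnames and proxy address below are hypothetical (real PAC files often also use helper functions such as dnsDomainIs()):

```javascript
// Sketch of a proxy auto-config (PAC) file.
function FindProxyForURL(url, host) {
  // Local machines: connect directly; everything else goes via the proxy.
  if (host === 'localhost' || host.endsWith('.internal.example.com')) {
    return 'DIRECT';
  }
  return 'PROXY proxy.example.com:8080; DIRECT'; // fall back to direct if proxy is down
}
```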
Test which IP address other sites see: with a proxy, it will be the proxy's address, not your machine's.