The Internet is great at making information accessible to others, and search engines hugely amplify that effect by allowing people to actually find what is offered online. But there is a catch!
Just go back a number of years on this blog and click a couple of external links, and you will notice that quite a lot of them no longer work. Companies restructure their sites or remove interesting information entirely, private web site owners lose interest, and sites shut down. There is a saying that the Internet never forgets, but in this context that doesn't seem quite true to me. Quite the contrary.
Books are at the other end of the scale. Information contained in them can sometimes be hard to find, and obtaining books no longer in print can also be difficult. But once you have a book, its content is available to you for as long as you want. You, and not some external party, are in control of when that information goes away.
Don't get me wrong: I don't advocate books over the information stored and to be found on the Internet. However, when I want to preserve information found on the Internet, I no longer store only the link to it; I also either print the page to a PDF document (iWeb2Print is a nice service) or save a copy of the page and its sub-pages with tools like ScrapBook.
Several years ago, I started storing copies of those web pages I found particularly useful on my local disk. Apart from protecting against dangling URLs, this makes it easier to copy and paste extracts for citations or references, because the original markup and resource structure are retained.
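To illustrate what such a local copy amounts to, here is a minimal Python sketch (standard library only; the URL and output directory are made-up placeholders) that saves a page's HTML together with the images it references. A real tool like ScrapBook of course handles far more resource types and edge cases; this is just the bare idea.

```python
# Minimal sketch of archiving a static page locally. Assumes the page and
# its images are plain static resources; URL and paths are placeholders.
import os
import urllib.parse
import urllib.request
from html.parser import HTMLParser

class ImageCollector(HTMLParser):
    """Collects the src attributes of all <img> tags on the page."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.sources.append(value)

def archive_page(url, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    # Keep the original markup so extracts can later be copied verbatim.
    with open(os.path.join(out_dir, "index.html"), "w", encoding="utf-8") as f:
        f.write(html)
    parser = ImageCollector()
    parser.feed(html)
    for src in parser.sources:
        # Resolve relative image paths against the page URL.
        absolute = urllib.parse.urljoin(url, src)
        name = os.path.basename(urllib.parse.urlparse(absolute).path) or "resource"
        # Note: a real archiver would avoid filename collisions here.
        with open(os.path.join(out_dir, name), "wb") as f:
            f.write(urllib.request.urlopen(absolute).read())

archive_page("https://example.com/article.html", "archive/article")
```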
Unfortunately, recent trends in web site design render this approach less effective. When web pages are no longer documents but full-fledged JavaScript/AJAX/JSON programs that fabricate DOM trees and images out of interactions with a server, one must increasingly resort to screen dumps (which a PDF printout basically is) to save a reproducible representation of the page. In that sense, things are not exactly improving.
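If saving the raw HTML is no longer enough, one workaround is to let a headless browser execute the scripts first and only then save the result. The following sketch uses Playwright, which is purely my choice here for illustration (the post above only mentions PDF printouts), to dump both the rendered DOM and a PDF snapshot:

```python
# Hedged sketch: capture a script-generated page by letting a headless
# browser run the JavaScript first, then saving the rendered result.
# Requires: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

def snapshot(url, html_path, pdf_path):
    with sync_playwright() as p:
        browser = p.chromium.launch()  # headless by default
        page = browser.new_page()
        # Wait until network activity settles so AJAX content is loaded.
        page.goto(url, wait_until="networkidle")
        # Save the DOM as the browser sees it after scripts have run...
        with open(html_path, "w", encoding="utf-8") as f:
            f.write(page.content())
        # ...and a PDF "screen dump" as a fallback rendering.
        page.pdf(path=pdf_path)
        browser.close()

# URL and output paths are placeholders.
snapshot("https://example.com/app", "archive/app.html", "archive/app.pdf")
```

Even this only freezes one moment of one interaction with the server, which is exactly why such pages are so much harder to preserve than old-fashioned documents.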