Joined: Oct 2007
Posts: 4,582
Thanks: 0
Also a possible use: snagging the entries of blogs that contain certain types of links, the kind likely to get those blogs taken down without notice.
Edit: BTW, if anyone knows how to use the regex filter to include/exclude in such a way that only links from specific domains get captured, that would be useful information.
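I can't speak to ScrapBook's exact filter syntax, but as a sketch, the kind of pattern being asked about would look something like this (the domains and URLs here are placeholders, not anything from the add-on):

```python
import re

# Hypothetical illustration: a regex that matches only URLs on
# example.com or example.org, including their subdomains.
DOMAIN_RE = re.compile(
    r'^https?://([a-z0-9-]+\.)*(example\.com|example\.org)(/|$)',
    re.IGNORECASE,
)

links = [
    "http://example.com/post/1",
    "https://blog.example.org/archive",
    "http://other-site.net/page",
]

# Keep only the links whose domain matches.
wanted = [u for u in links if DOMAIN_RE.match(u)]
# wanted -> ["http://example.com/post/1", "https://blog.example.org/archive"]
```

The `(/|$)` at the end stops `example.com.evil.net` style lookalikes from slipping through, since the domain must be followed by a path or the end of the URL.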
Edited: 2010-11-29, 2:56 pm
Joined: Sep 2010
Posts: 17
Thanks: 0
I find myself using it instead of bookmarks and printouts, and for technical stuff I might need at a customer site with no internet access (for me and my laptop, anyway).
Very handy for grabbing useful blog posts, news stories, and other stuff that will disappear one of these weeks, or that I simply don't remember how I found. I tend to just grab the page I need with its dependencies (CSS, images) and not do any recursion; there are better tools for that, and it's a bit rude.
Joined: Oct 2007
Posts: 4,582
Thanks: 0
Yeah, to be clear about the entries above: I'd only recommend downloading data one link deep, for specific subsets of small files hosted on a site operating in a legal gray area, for posterity's sake. Preferably do this only once, then share the result elsewhere to minimize server load. Otherwise, under the same legal/posterity criteria, I'd advocate an essentially text-only archiving of specific entries (the ones that contain direct links elsewhere), so that you're quickly building an index of links to follow in the future, since Google only caches pages partially.
One of my favourite sites went down recently with no signs of reemergence, and I wish I'd discovered this add-on before, as now hundreds of links are lost.
I just discovered that the include/exclude filter is actually really simple; no need for complicated regex, methinks.
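For what it's worth, the "no regex needed" version of domain filtering really is simple; a sketch of the idea (with placeholder domains, not ScrapBook's actual mechanism) is just a hostname suffix check:

```python
from urllib.parse import urlparse

# Hypothetical include list; example.com/example.org are placeholders.
ALLOWED = ("example.com", "example.org")

def keep(url):
    """Keep a link only if its host is an allowed domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in ALLOWED)
```

Checking `host == d or host.endswith("." + d)` instead of a bare substring match matters: it accepts `blog.example.com` but rejects lookalikes such as `evilexample.com`.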
Edited: 2010-11-29, 3:44 pm
Joined: Feb 2007
Posts: 1,558
Thanks: 0
Oh, ScrapBook Plus. Done. Thx
So the academic revolutionaries regrouped. Glad I mentioned it! You're on my Christmas list.