71 sats \ 1 reply \ @aljaz 9 Dec 2023 \ parent \ on: I grinded ~5000 hrs of Bitcoin content to build WhyBitcoinOnly.com. AMA
The Wayback Machine has a "Save Page Now" function that takes a snapshot of a website, as long as the site doesn't forbid crawlers. That would be the easiest solution, though I'm sure some websites will have crawlers disabled in robots.txt.
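For reference, a minimal sketch of triggering a snapshot programmatically, assuming the public Save Page Now endpoint at `https://web.archive.org/save/<url>` (the authenticated SPN2 API is a separate thing and not covered here):

```python
import requests

def save_page_now(url: str) -> str:
    """Ask the Wayback Machine to snapshot `url` via Save Page Now.

    Assumes the public GET endpoint https://web.archive.org/save/<url>.
    Raises for HTTP errors (e.g. if the site blocks the crawler).
    """
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=60)
    resp.raise_for_status()
    # requests follows redirects, so resp.url is typically the archived copy.
    return resp.url

print(save_page_now("https://example.com"))
```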
Does it have a bulk capability where I could tell it to save every link on the page (the count is in the four digits, I believe 😅), or would I need to click through every single one manually?
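Lacking an official bulk feature, one way to approximate it is to scrape the links from the index page and feed each one to the endpoint above, rate-limited. A sketch, assuming `requests` and BeautifulSoup are installed and that the links sit in ordinary `<a href>` tags (the index URL below is a placeholder):

```python
import time
import requests
from bs4 import BeautifulSoup

INDEX_URL = "https://whybitcoinonly.com"  # hypothetical page holding the links

def collect_links(index_url: str) -> list[str]:
    """Collect absolute http(s) links from a page's <a href> tags."""
    html = requests.get(index_url, timeout=60).text
    soup = BeautifulSoup(html, "html.parser")
    return [a["href"] for a in soup.find_all("a", href=True)
            if a["href"].startswith("http")]

for link in collect_links(INDEX_URL):
    try:
        requests.get(f"https://web.archive.org/save/{link}", timeout=60)
        print(f"saved: {link}")
    except requests.RequestException as exc:
        print(f"failed: {link} ({exc})")
    time.sleep(10)  # be gentle: Save Page Now rate-limits unauthenticated use
```

With thousands of links this would take hours at that pace, but it beats clicking through each one by hand.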