
The Wayback Machine has a "Save Page Now" function that takes a snapshot of a site on demand, as long as the site doesn't forbid crawlers. That would be the easiest solution, but I'm sure some sites will have crawlers disabled in robots.txt.
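For a single page you can even trigger it without the UI, by hitting the public `https://web.archive.org/save/<url>` endpoint. A minimal sketch (the target URL is just a placeholder):

```python
# Minimal sketch: trigger a Save Page Now capture for one URL.
# Assumes the public endpoint https://web.archive.org/save/<url>;
# the target URL below is a hypothetical placeholder.
import requests

url = "https://example.com/some-page"
resp = requests.get(f"https://web.archive.org/save/{url}", timeout=60)
print(resp.status_code)  # 200 means the snapshot request was accepted
```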
Does it have a bulk capability where I could tell it to save every link on the page (somewhere in the four digits, I believe 😅), or would I need to click through every single one manually?
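If there's no built-in bulk option, one workaround is to script it: scrape every link off the index page and submit each to the same Save Page Now endpoint. A hedged sketch, assuming the page's links are plain `<a href>` tags and that `PAGE_URL` stands in for your actual page; the archive throttles rapid requests, so expect a run in the four digits to take a while:

```python
# Sketch of bulk saving: collect every link from one index page,
# then submit each to the Save Page Now endpoint with a polite delay.
# PAGE_URL is a hypothetical placeholder; tune the delay if you get
# throttled or add retries for failed requests.
import time
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

PAGE_URL = "https://example.com/links"  # hypothetical index page

html = requests.get(PAGE_URL, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

# Collect absolute URLs from every <a href="..."> on the page.
links = {urljoin(PAGE_URL, a["href"]) for a in soup.find_all("a", href=True)}

for link in sorted(links):
    try:
        r = requests.get(f"https://web.archive.org/save/{link}", timeout=120)
        print(f"{r.status_code}  {link}")
    except requests.RequestException as e:
        print(f"FAILED  {link}  ({e})")
    time.sleep(5)  # be polite; the archive rate-limits rapid saves
```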