WindowlessBasement

joined 1 year ago
[–] WindowlessBasement@alien.top 1 points 11 months ago

I have tried using the in-built Pagination API to retrieve all relevant domain entries by splitting them into blocks but, due to the way the filters are applied, this only tells me if the entry is in the current block and I have to search each one manually. I have basically no coding knowledge

Short answer: you're asking for something that will take a program requesting data non-stop (the whole Internet Archive?) for a month or more. You're gonna need to learn to code if you want to interact with that much data.

I definitely don't have the ability to automate the search process for the paginated data.

You're going to need to automate it. A rate-limiter is going to kick in very quickly if you are just spamming the API.
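To make "automate it" concrete: a minimal Python sketch of walking a paginated API with a fixed delay between requests so the rate limiter doesn't kick in. The `fetch_page` callable and cursor scheme here are assumptions for illustration, not the archive's actual API; in practice `fetch_page` would wrap an HTTP call to the real pagination endpoint.

```python
import time

def fetch_all_pages(fetch_page, delay=1.0):
    """Walk a paginated API, sleeping between requests to stay under
    the service's rate limit.

    `fetch_page(cursor)` must return (items, next_cursor), where
    next_cursor is None once the last page has been reached.
    """
    results = []
    cursor = None
    while True:
        items, cursor = fetch_page(cursor)
        results.extend(items)
        if cursor is None:
            return results
        # Crude rate limiting: at most one request per `delay` seconds.
        time.sleep(delay)
```

Real scrapers should also back off on HTTP 429 responses (and honor any `Retry-After` header) rather than hammering the endpoint at a fixed pace.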

explain to me like I'm 5

You'll need to learn it yourself if this is a project you're tackling. You'll also need to familiarize yourself with the archive's terms of service, because most services would consider scraping every piece of data they have to be abusive and/or malicious behavior.

[–] WindowlessBasement@alien.top 1 points 11 months ago (1 children)

The error message tells you what to do.

If this is just a random error you are unconcerned by (recent power outage, flaky cables, etc.), you can clear the errors. If you believe the drive is failing, you can replace the drive. The array will remain in a degraded state until you make a decision.
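As a concrete sketch, assuming the array happens to be a ZFS pool (an assumption; Unraid or mdadm arrays use different tooling, and the pool name `tank` and device paths here are hypothetical):

```shell
# See which device is reporting errors and why the pool is degraded
zpool status -v tank

# Option 1: the errors were transient (power loss, loose cable) - clear them
zpool clear tank

# Option 2: the drive really is failing - replace it and let it resilver
zpool replace tank /dev/sdb /dev/sdc
```

Either way, the pool stays in a degraded state until the errors are cleared or the replacement finishes resilvering.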

[–] WindowlessBasement@alien.top 1 points 11 months ago

Lossless compression doesn't exist for video. It's mathematically impossible.

[–] WindowlessBasement@alien.top 1 points 11 months ago

How long is a piece of string?

[–] WindowlessBasement@alien.top 2 points 11 months ago (1 children)

I have 7-8tb of vital info on that drive that I need to get off

If it's vital, it should already have a backup.

You don't always get warning signs; especially with a laptop or portable drive. They can fall off a table at any point and never get back up.