this post was submitted on 18 Aug 2024
221 points (97.4% liked)

Linux

I'm writing a program that wraps around dd to try and warn you if you are doing anything stupid. I have thus been giving the man page a good read. While doing this, I noticed that dd supports size suffixes all the way up to quettabytes, a unit orders of magnitude larger than all the data on the entire internet.

This has made me wonder: what's the largest storage operation you guys have done? I've taken a couple of images of hard drives that were a single terabyte in size, but I was wondering if the sysadmins among you have had to do something with, e.g., a giant RAID 10 array.
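To give an idea of the kind of thing I mean, here's a minimal sketch of such a wrapper (hypothetical names, not my actual code): it just refuses to run dd against a device that's currently mounted.

```python
#!/usr/bin/env python3
"""Sketch of a dd wrapper that warns before writing to a mounted device.
Hypothetical example, not the actual program."""
import subprocess
import sys

def mounted_devices():
    """Return the set of source paths currently mounted, per /proc/mounts (Linux)."""
    with open("/proc/mounts") as mounts:
        return {line.split()[0] for line in mounts}

def main(argv):
    # Pick out dd's of= argument, if one was given.
    target = next((arg[3:] for arg in argv if arg.startswith("of=")), None)
    if target and target in mounted_devices():
        answer = input(f"{target} is currently mounted. Write to it anyway? [y/N] ")
        if answer.lower() != "y":
            sys.exit("Aborted.")
    # Otherwise hand everything through to the real dd unchanged.
    sys.exit(subprocess.call(["dd", *argv]))

if __name__ == "__main__":
    main(sys.argv[1:])
```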

42 comments
[–] Nibodhika@lemmy.world 3 points 3 months ago (4 children)

Why would dd have a limit on the amount of data it can copy? AFAIK dd doesn't check, nor does it do anything fancy: if it can copy one bit, it can copy infinitely many.

Even if it did some sort of validation, anything larger than RAM has to be done in chunks anyway.
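The core loop is tiny; something like this sketch (illustrative Python, not dd's actual C source) is all it fundamentally has to do:

```python
# Illustration of chunked copying: memory use is bounded by the block
# size, so the total amount copied can be arbitrarily large.
def copy_in_chunks(src_path, dst_path, block_size=1024 * 1024):
    copied = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while chunk := src.read(block_size):
            dst.write(chunk)
            copied += len(chunk)
    return copied  # like dd's end-of-run report of bytes copied
```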

[–] nik9000@programming.dev 2 points 3 months ago

Not looking at the man page, but I expect you can limit it if you want, and the parser for that parameter knows about these names. If it were me, I'd write one parser for byte-size values and use it for chunk size, limit, sync interval, and whatever else dd does.

It's also probably limited by the size of the counter tracking progress. I think dd reports the number of bytes copied at the end even in unlimited mode.
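For example, a single suffix table covers every size-taking option (a sketch of the idea, not dd's real parser, which also accepts decimal suffixes like kB=1000 and oddities like b=512):

```python
# One reusable parser for dd-style binary size suffixes,
# going all the way up to Q (quetta, 1024**10).
SUFFIXES = {s: 1024 ** i for i, s in enumerate("KMGTPEZYRQ", start=1)}

def parse_size(text: str) -> int:
    """Parse e.g. '4K' -> 4096, '1Q' -> 1024**10, '512' -> 512."""
    if text and text[-1] in SUFFIXES:
        return int(text[:-1]) * SUFFIXES[text[-1]]
    return int(text)
```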

[–] data1701d@startrek.website 1 points 3 months ago

It’s less about dd’s limits and more about the fact that it supports units so large it might take decades or more before anyone actually needs to read a size like that.

[–] CrabAndBroom@lemmy.ml 0 points 3 months ago

Well, they do nickname it "disk destroyer", so if it were unlimited and someone messed it up, it could delete the entire simulation that we live in. So it's for our own good, really.

[–] apotheotic@beehaw.org 3 points 3 months ago

Do cloud platform storage operations count? If so, in the hundreds of terabytes (work)

[–] krazylink@lemmy.world 2 points 3 months ago

I recently copied ~1.6T from my old file server to my new one. I think that may be my largest non-work related transfer.

[–] ipkpjersi@lemmy.ml 2 points 3 months ago* (last edited 3 months ago)

20 TB (out of 21 TB usable), with a second 6x6TB ZFS raidz2 server as my send target.

[–] potentiallynotfelix@lemdro.id 1 points 3 months ago

I think it would be my whole broken Manjaro install; I just used dd to make a copy so I could work on it later lol. About 500 gigs.

[–] Matriks404@lemmy.world 1 points 3 months ago

Probably some video game that is ~150-200 GiB. Does that count?

[–] possiblylinux127@lemmy.zip 1 points 3 months ago (1 children)

While I haven't personally had to move a data center, I imagine that would be a pretty big transfer. Probably not done with dd, though.

[–] CrabAndBroom@lemmy.ml 1 points 3 months ago

I can't imagine how nerve-wracking it would be to run dd on something like that lol. I still don't trust myself to copy a USB stick with my unimportant bullshit on it with dd, let alone a server with anything important on it!
