Usually my process is very… hammer-and-drill related, but I have a family member who is interested in taking my latest batch of hard drives now that I’ve upgraded.

What are the best Linux tools for the process? I’d like to run some tests to make sure they’re good first and also do a full zero-out of any data. (They used to be in a RAID, if that matters.)

Edit: Thanks all, process is officially started, will probably run for quite a while. Appreciate the advice!

  • @thenumbersmason
    link
    fedilink
    English
    9
    edit-2
    11 months ago

    dd works fine; you’d use it something like this:

    dd if=/dev/zero of=/dev/[the drive] status=progress conv=fsync bs=4M

    if: input file

    of: output file

    status=progress: shows progress

    conv=fsync: basically the equivalent of running “sync” after the command; it makes sure all the kernel buffers have actually been written out and are on the device. This causes the command to appear to “hang” near the end. It’s not actually hanging; it’s just finishing writing out the data that’s still cached in RAM, which can take a while depending on drive speed and how much RAM is installed on the computer.

    bs=4M: sets the block size high enough that you’re not CPU-bottlenecked. The exact value isn’t particularly important; 4M is a sane default for most things, including this full-disk operation.

    edit: one pass of zeros is enough to protect against all trivial data recovery techniques. If your threat model includes three-letter agencies, the hammer and drill bit technique is 👍
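
    If you want to see what that does without risking a real disk, you can point the same command at a scratch file first (the file name and 16M size here are just an example; count= is added so dd stops at the end of the fake “drive” instead of filling your filesystem, whereas on a real drive dd simply stops when the device is full), then verify with cmp that every byte really is zero:

    ```shell
    # Scratch file standing in for a drive -- safe to experiment on.
    truncate -s 16M fake-drive.img

    # Same invocation as above, plus count=4 (4 blocks x 4M = 16M) so dd
    # stops at the end of the fake drive instead of filling the filesystem.
    dd if=/dev/zero of=fake-drive.img status=progress conv=fsync bs=4M count=4

    # Verify: compare against /dev/zero for the file's exact length.
    # cmp is silent and exits 0 when every byte matches.
    cmp -n "$(stat -c %s fake-drive.img)" fake-drive.img /dev/zero && echo wiped
    ```

    The same cmp trick works on the real device afterwards if you want proof the wipe completed.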

    • ScrubblesOP
      link
      fedilink
      English
      3
      11 months ago

      Thanks! I’ve used dd for things like recovering/cloning drives, but it makes complete sense that I can wipe with it too. Thanks for the progress trick as well; it was always just a blank cursor when I ran it before!

      • @WaterWaiver@aussie.zone
        link
        fedilink
        English
        4
        edit-2
        11 months ago

        I recommend using a different set of flags so you can avoid the buffering problem @thenumbersmason@yiffit.net mentions.

        This next example prevents all of your RAM from getting uselessly filled up during the wipe (which makes other programs run slower whenever they need more memory; I notice my web browser lags as a result), lets the progress readout actually be accurate (disk write speed instead of RAM write speed), and prevents the horrible hang at the end.

        dd if=/dev/urandom of=/dev/somedisk status=progress oflag=sync bs=128M

        “oflag” means output flag (it applies to of=/dev/somedisk), and “sync” means sync after every block. I’ve chosen 128M blocks as a somewhat arbitrary number: below a certain size it gets slower (and potentially causes more write cycles on the individual flash cells), but 128M should be massively more than that and perfectly safe. Bigger numbers will hog more RAM to no advantage (and may reintroduce the problems we’re trying to avoid).

        If it’s an SSD, then I issue TRIM commands after this (the “blkdiscard” command); this makes the drive look like zeroes without actually having to write the whole drive again with another dd command.
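
        A sketch of that step, reusing the same placeholder /dev/somedisk (double-check the device name with lsblk first, since blkdiscard is instantly destructive):

        ```shell
        # Show whether the device supports TRIM: nonzero DISC-GRAN and
        # DISC-MAX columns mean discard is available.
        lsblk --discard /dev/somedisk

        # Discard (TRIM) the entire device; -v prints how much was discarded.
        # Most SSDs will then read back as zeroes.
        blkdiscard -v /dev/somedisk
        ```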

      • @thenumbersmason
        link
        fedilink
        English
        1
        11 months ago

        On modern kernels /dev/urandom doesn’t actually block, but it is CPU-bound and much slower than /dev/zero, so the wipe can take far longer. There are hardware solutions for generating entropy faster, like the RTL-SDR-tuned-to-lightning-frequency trick. But that’s more complicated, and unnecessary.
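
        If you’re curious how fast /dev/urandom is on your machine, a quick check is to read from it and throw the data away (256 MiB here is an arbitrary sample size); the MB/s figure dd prints tells you whether the random source or the disk would be the bottleneck:

        ```shell
        # Read 256 MiB from /dev/urandom and discard it; dd's summary line
        # reports the effective throughput of the random source.
        dd if=/dev/urandom of=/dev/null bs=1M count=256
        ```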