I always hear people saying you need to leave ~20% of your SSD free or you’ll suffer major slowdowns. There’s no way I’m buying a 4TB drive and then leaving 800GB of it empty; that’s ridiculous.

Now, obviously I know the slowdown is real. I have a Samsung 850 Evo right now that’s 87% full, and a quick CrystalDiskMark test shows some of the write speeds have dropped to about a third of what reviews report.

I’m sure the amount of performance loss varies between drives, which to me would be a big factor in deciding what to buy. AnandTech used to test drives both empty and full as part of their review suite (here, for example), but they don’t have reviews for the more interesting drives that came out in the last couple of years, like the 990 Pro, SN850X, or KC3000.

Is anyone else doing this kind of benchmark on both an empty and a filled drive? It would be much better to know exactly how bad filling a drive is, rather than throwing 20% of it away (some even suggest keeping it at most 50% full) as a rule of thumb.

  • GhostReddit@alien.topB

    When you overwrite data on a flash drive, it doesn’t go back to the same location and replace the data in place like a hard disk does; it writes to a new location and records that location as “valid” for the given block address.

    NAND cells are erased when you need to free up space for new writes, but the erase block is large, so large that whatever you’re erasing likely contains data you need to keep. That valid data has to be moved to a new place before the block can be erased, and this slows the whole operation down.

    If you have very little unallocated space on your disk, more and more operations will require this shuffle, and the shuffle itself becomes less efficient (you can’t wait for an ‘optimal’ block to erase, because every block is 90% or more valid data that must be moved). A packed drive is therefore less efficient. But two different-sized drives at the same % fill won’t see the same performance loss, because the larger one still has more scratch space to work with. (Most consumer drives set this space aside as an SLC ‘cache’ to enable snappy performance and more efficient stripe packing, but that cache shrinks as the drive fills.)

    Tl;Dr: you don’t need to keep 800GB of your 4TB drive free, but keeping 200GB or so free will prevent any noticeable performance loss. And if you use any consumer SSD in a 24/7 server workload, they’ll all hit a wall eventually, because they’re not designed for sustained performance.
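
    To put rough numbers on the “same % fill, different drive size” point, here’s a quick sketch (my own illustrative figures, not from any benchmark):

    ```python
    # Same percentage fill leaves very different absolute free space for
    # the controller to shuffle data through (GC scratch / SLC cache).
    for capacity_gb in (1000, 4000):
        for fill in (0.80, 0.90, 0.95):
            free_gb = capacity_gb * (1 - fill)
            print(f"{capacity_gb} GB drive at {fill:.0%} full -> "
                  f"{free_gb:,.0f} GB of scratch space")
    ```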

  • Stevesanasshole@alien.topB

    Tom’s Hardware usually does longer write tests with Iometer to analyze cache size and performance, though I think it may only be a 15-minute run. Here’s their review of the 2TB Samsung 990.

    However, it’s becoming less of an issue with modern SSDs that incorporate hybrid caches and faster NAND. As long as the drive has a large enough static cache to absorb most of what you throw at it, or NAND that stays at least bearably fast once the cache runs out, it’s not that big of a deal. You just have to know the NAND’s limitations. Even drives like the Crucial P3 are fine as long as you understand that writes will eventually slow down to 80MB/s or so.

    For secondary storage it’s less important, but I would still leave some free space on the OS drive for fast swap and temp-file performance.

    • Filipi_7@alien.topOPB

      I think one of us (this goes for some other comments linking similar benchmarks, too) is misunderstanding what a sequential write test actually shows.

      My question is: if a drive is, say, 90% full, how much slower is it compared to 0% full?

      The linked test starts with an empty drive and writes data for 60 seconds, which is not enough to fill it. Using the WD numbers as an example, it gets ~6000MB/s for ~35 seconds before the speed plummets. That’s 210GB filled for a 1000GB drive (which matches their methodology: they fill 20% of the drive). Here, the speed drop is a result of the cache filling up and forcing the drive to write directly to the flash memory.

      In my question, I am assuming that when the drive is 90% full and idle, the cache is not in use, though I could be wrong. If that’s right, then when I start writing, the cache should work as normal, holding the data temporarily before it is written to flash later. The question is how much slower that entire process is when the drive is full but the cache is not yet saturated. I don’t think this test answers that.

      • f3n2x@alien.topB

        That’s 210GB filled for a 1000GB drive

        No, that’s 630GB of TLC space filled: writing in SLC mode takes 3x the space. The dynamic SLC caching stops at some point because the controller still needs enough space left to rewrite that 630GB down to 210GB during idle time, plus a safety margin, so the user never hits a situation where the drive has to “freeze” to catch up with the work. There is always some minimal amount of SLC cache available through overprovisioning, but that’s typically only a few GB.
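
        As a quick sanity check on that arithmetic (figures taken from the thread; the 3x factor assumes TLC NAND):

        ```python
        # ~6 GB/s sustained for ~35 s in the linked review's test:
        host_data_gb = 6 * 35                  # 210 GB of host writes
        tlc_cells_used_gb = host_data_gb * 3   # SLC stores 1 bit/cell vs 3 for TLC
        print(f"{host_data_gb} GB written occupies ~{tlc_cells_used_gb} GB of "
              f"TLC cells until idle-time folding rewrites it as TLC")
        ```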

  • owari69@alien.topB

    I don’t keep up with SSD benchmarks, but the mechanism behind this phenomenon is no mystery. Most consumer SSDs ship with TLC or QLC NAND, which stores 3 or 4 bits per cell. However, writing the full 3 or 4 bits is slower than writing one bit per cell, so while the drive is not yet full, it uses available empty NAND as an SLC write cache. When you write to a relatively empty drive, your data goes into the DRAM cache on the controller (if any is used as a write buffer) and is then written to available NAND in SLC mode. Later, the drive consolidates the data down into TLC/QLC properly, but you get the benefit of a fast write as long as there is enough empty NAND for SLC caching.

    Obviously this falls apart once your drive gets close to full and there is no empty NAND left to use as SLC cache. This is also why write performance on budget drives drops off harder than on higher-end drives: the nicer drives have faster NAND and usually DRAM on the controller to help in the worst case. Enterprise drives often sidestep the issue entirely by using SLC or MLC NAND directly, or through additional overprovisioning (extra NAND chips).
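
    A minimal model of why this bites harder as the drive fills (my own sketch with made-up round numbers; real controllers use vendor-specific cache policies):

    ```python
    def dynamic_slc_cache_gb(capacity_gb: float, used_gb: float) -> float:
        """Upper bound on the dynamic SLC cache, assuming it is carved out
        of free TLC space (1 bit/cell in SLC mode vs 3 bits in TLC mode)."""
        free_tlc_gb = capacity_gb - used_gb
        return free_tlc_gb / 3

    for used_gb in (0, 500, 800, 950):
        cache_gb = dynamic_slc_cache_gb(1000, used_gb)
        print(f"1000 GB drive, {used_gb} GB used -> ~{cache_gb:.0f} GB SLC cache")
    ```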

    • Filipi_7@alien.topOPB

      The 990 Pro 4TB has two 16Tb TLC chips (2TB each) and a 442GB SLC cache.

      Does that mean the SLC cache is included in that 4TB, or is it separate? If it were separate, that would imply the cache is available even when the TLC chips are completely full, so the cached write speed wouldn’t decrease on a full drive, only the later writing to the TLC flash.

      Unless that’s how overprovisioning works? The 990 Pro has 370GB of overprovisioning within the TLC flash and a 442GB SLC cache; together they roughly cancel out to give the 4TB total capacity, which I guess would explain why the cache runs out when the drive is full.
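
      For what it’s worth, applying the 3x rule from the other reply to those numbers (a back-of-envelope guess on my part; Samsung’s actual TurboWrite accounting may differ):

      ```python
      slc_cache_gb = 442                       # advertised dynamic SLC cache
      tlc_cells_needed_gb = slc_cache_gb * 3   # ~1326 GB of TLC while in SLC mode
      print(f"A full {slc_cache_gb} GB SLC cache ties up ~{tlc_cells_needed_gb} GB "
            f"of TLC cells, which is only free while the drive is mostly empty")
      ```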

      • owari69@alien.topB

        I can’t speak to the details of the 990 Pro specifically; I don’t follow individual drives that closely. I would guess your understanding is correct, but someone else on the sub can probably chime in with details on the current crop of high-end drives.

  • EasyRhino75@alien.topB

    I think StorageReview (on enterprise products) preconditions the drives for a while before testing, and they might be kinda full during testing?

    ServeTheHome also tests against a moderately full SSD, but that’s their only set of data.

  • autogyrophilia@alien.topB

    You can always buy enterprise grade disks if you are concerned about this.

    The effect is still present, but with more spare flash it becomes less pronounced.

    Filling the drive is also bad for durability, as it impairs wear leveling.

  • regenobids@alien.topB

    Almost everything you hear is exaggerated. Fill the drive; just leave some room to work with.

    • regenobids@alien.topB

      If it doesn’t need to be that fast, why would you care about a 10-50% slowdown from peak SSD speeds?

      HDDs are for cold storage and very large storage needs.

      • Glittering_Chard@alien.topB

        So that you can have an SSD that’s at 100% speed…

        For the same price you could have much more storage and faster speeds.

        Also, there are several technologies, like DirectStorage, that let the GPU stream assets directly from the SSD; why cripple that?

        By the way, HDDs are rarely used for cold storage; cold storage means inactive/offline, and it’s normally tape.

        • regenobids@alien.topB

          Ah, yes, the tape which nobody uses. Right.

          Are you going to buy less SSD plus an HDD, so you can use the always-much-slower HDD more, just so you don’t have to, gasp, lose 1% read speed off the SSD every day? Or so you don’t lose some write speed, particularly on random writes (because that’s where the penalty would be), while not actually writing much of anything? No comment.

          If by DirectStorage you mean the technology that isn’t even in meaningful use yet, and that so far only makes a clear difference on SATA SSDs? I have no comment on that one either.

          Obviously, if you’re at 75% and up, it’d be a good time to look around for another drive. But sitting there, not daring to fill it any further because you’re scared? That is pointless, pedantic, and stupid.

        • Sopel97@alien.topB

          Also, there are several technologies, like DirectStorage, that let the GPU stream assets directly from the SSD; why cripple that?

          why would the reads be crippled?

  • xxchacha@alien.topB

    If you don’t do much writing or mixed IO (e.g., things like a database), it doesn’t really matter how full you want to keep your SSD. Writing slows (and write amplification goes up) as the disk fills because garbage collection has to work harder.

    As you may well know, NAND media consists of a set of blocks, and each block contains a set of pages. Writes are at the page level, but erases are at the block level. As data is overwritten or trimmed, some data in a given block will no longer be valid. When a block is garbage collected, the penalty lies in the amount of still-valid data that needs to be copied to another block. The fuller you keep the SSD, the more laden each block is with data you still care about, which has to be moved each time garbage collection is invoked.

    But if you’re doing read-mostly work, it probably doesn’t matter.
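
    To put a number on “garbage collection has to work harder”, here’s a rough model (a standard simplification on my part, assuming each reclaimed block is a fraction u valid when it is erased):

    ```python
    def write_amplification(valid_fraction: float) -> float:
        """Pages physically written per page of host data: freeing
        (1 - u) pages of space first requires copying u pages of
        still-valid data out of the victim block."""
        return 1 / (1 - valid_fraction)

    for u in (0.5, 0.8, 0.9, 0.95):
        print(f"victim blocks {u:.0%} valid -> write amplification ~"
              f"{write_amplification(u):.0f}x")
    ```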

  • Thercon_Jair@alien.topB

    The slowdown is in write speed; if you mostly read from the drive, don’t worry about it much.