D. Moonfire<p>So, a hacky trick with SeaweedFS when you're in danger of pinning volumes (getting them stuck read-only) and have a relatively small cluster (mine is only 40 TB with one extra copy, replication 010, so effectively 20 TB). I keep 10% free as a default:</p><pre><code>services.seaweedfs.clusters.default.volumes.c0.minFreeSpacePercent = 10;
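# Below this much free disk, the volumes on that node flip to read-only
# (presumably this ends up as the volume server's -minFreeSpacePercent flag).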
</code></pre><p>When you run out of space, the cluster volumes flip to read-only, which means that if you go and delete a terabyte or so off the system, the space doesn't come back (because the individual volumes are read-only).</p><p>So what I do is change the minimum free space to 5%, do the deletes, and then change it back to 10% once usage drops back under the threshold. There were a few times when I had to do the same thing with Ceph. I suspect many distributed file systems really don't like being more than 90% full in general.</p><p>I actually go one step further by having a <code>cluster.nix</code> file that includes things like the free percentage, so I can change one file, do a deploy to the entire homelab, do some maintenance, and then change it back.</p><p>I really should set up Helm or something to tell me about these things before they happen, but I haven't really figured out how on Nix. It was just too overwhelming, and I couldn't figure out any useful monitoring that would ping me when there was an issue.</p><p>I've also been ignoring a problem with my one little Raspberry Pi in the cluster. It only has a 1 TB drive on it, and that was preventing me from rebalancing everything. I've used some of my tax refund to get a new node to replace it, but it will take a while as it races the tsunami of tariffs following the order.</p><p><a href="https://polymaths.social/tags/seaweedfs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SeaweedFS</span></a> <a href="https://polymaths.social/tags/nixos" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>nixos</span></a></p>
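<p>For anyone curious, here's a rough sketch of that <code>cluster.nix</code> idea. It's not my actual file (the attribute names and paths are simplified for illustration), but the shape is the same: one file holds the number, every host module reads from it.</p><pre><code># cluster.nix -- one spot for cluster-wide knobs (names simplified)
{
  seaweedfs = {
    # Drop this to 5 before a big cleanup, deploy, do the deletes,
    # then put it back to 10 once usage is under the threshold again.
    minFreeSpacePercent = 10;
  };
}

# A host module pulling the value in (hypothetical paths)
{ ... }:
let
  cluster = import ./cluster.nix;
in
{
  services.seaweedfs.clusters.default.volumes.c0.minFreeSpacePercent =
    cluster.seaweedfs.minFreeSpacePercent;
}
</code></pre><p>That way the 10%-to-5% dance is a one-line change plus a redeploy across the homelab.</p>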