About six months ago I made the decision to upgrade my work setup to a Network Attached Storage (NAS) unit and use it for data storage and backups. I talk about that decision here.
How did it go? I’m quite happy with the decision and the results. I didn’t get everything perfect, but I came close. My Synology DS413j is a nice unit and it has been rock solid since day one. I did run into a few challenges and complications, though, which is why about two months ago I decided the solution was to buy a second NAS.
So I’m now the proud owner of both the DS413j and its big brother, a Synology DS414. Why? That’s a (hopefully) interesting discussion.
In working with the NAS over the first few months, I found a few places where I’d made guesses on capacity and loads and gotten them a bit wrong. The DS413j is a nice unit, but its single-core CPU limits its processing power. I think it’d be perfect for a single user with large data sets, but we’re two users with three computers, and we were asking the NAS to handle day-to-day usage plus backups of all three machines. To be honest, under load the unit struggled. I tried (with some success) to schedule the backups and other recurring loads away from each other, but one core serving multiple computers was a challenge, and I was constantly futzing with things to minimize the contention.
The other problem: the I/O out the back side was USB 2.0. My backup strategy for the NAS (which needs to be backed up, because RAID redundancy doesn’t protect you from unit failure or data corruption) was a pair of 4TB USB disks hung off the back. That worked fine, except that a backup to empty drives took three days, the nightly refresh took a couple of hours, and while a backup was running, that poor single-core CPU struggled even more to keep up.
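For a rough sense of scale, here’s back-of-the-envelope math on that backup window. Both inputs are my assumptions: ~30 MB/s is a typical sustained USB 2.0 rate, and ~7TB approximates the size of the data set:

```python
# Rough math on the USB 2.0 backup window. Both inputs are
# assumptions: ~30 MB/s is a typical sustained USB 2.0 rate,
# and ~7 TB approximates the size of the data being backed up.
usb2_mb_per_s = 30
data_tb = 7

seconds = (data_tb * 1_000_000) / usb2_mb_per_s  # decimal TB -> MB
print(f"Full backup: ~{seconds / 86_400:.1f} days")  # ~2.7 days
```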
A third, solvable problem was that I’d underestimated the size of the data set I was migrating; by the time I got everything onto the NAS and the Time Machine backups fully built out and running, I realized the unit was almost 80% full, about a terabyte and a half beyond where I expected to be. That’s fixable by replacing some of the drives in the NAS with bigger ones, but doing so creates a different and nasty problem.
That problem? Two 4TB drives hanging off the back for backups fail when the size of the data being backed up approaches 8 terabytes. Looking at my trip to Yellowstone and the outing Laurie and I have planned to Lee Vining for the fall foliage workshop, I could easily see adding a terabyte just in photo images in the next six months. That put both my NAS capacity and my NAS backups right up against their limits, and when that backup failed, it would fail hard. Adding another drive out the back would mean adding a hub, complexity, and cables that might fail, and since that’s an offsite backup, it actually means adding TWO more external drives to the setup: one live, one offsite. And as the data set continued to grow, the time to refresh a full backup would stretch from 3 days to close to a week. I was frankly really uncomfortable with the thought of having incomplete backups close to 25% of the time while things refreshed after every offsite swap.
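To put a rough date on that failure point, a quick projection; the starting size is my estimate, and the growth rate comes from the photo math above:

```python
# Hypothetical projection of when the 2 x 4 TB backup pair overflows.
data_tb = 6.5             # assumed current size of the data set
growth_tb_per_year = 2.0  # ~1 TB every six months, per the estimate above
backup_capacity_tb = 8.0  # two 4 TB external drives

years = (backup_capacity_tb - data_tb) / growth_tb_per_year
print(f"Backup pair full in roughly {years * 12:.0f} months")  # ~9 months
```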
Plus, handling all of those drives is a royal pain in the butt. And I felt the whole approach just didn’t scale.
I’m a big fan of, when a problem is identified, fixing it so I can stop worrying about it. Before it breaks. Because if you wait until it breaks, chances are it’ll break when you’re on deadline and the last thing you need is MORE STRESS. I also think fixing things early minimizes the hours you spend screwing with them and the cost of the fix.
I went through 3-4 potential fixes, starting with the easy one: plop in bigger drives and add another disk to the backup array. That cost out to about $450, for what it’s worth, and my best guess is that I’d still have to consider what to do next in a year or 18 months. To me, that’s not a fix, that’s a patch, and not a cheap one.
That sent me back to thinking about a second NAS, which would also let me solve the other challenges the DS413j had keeping up with the load we were throwing at it. It would let me re-use a bunch of drives I’d collected along the way (existing backup drives, retired backup drives, etc.) rather than buying lots of new disks, and I could stop hauling around drives in enclosures and make my offsite drives bare drives, which are about $50 cheaper per unit than drives in enclosures, not to mention smaller, more portable, and free of power bricks and cables and crap.
That’s why I ended up going to the DS414. I decided to buy a pair of 4TB drives to populate it, although I could have gotten away with using what I had; the new drives saved me a couple of drive swaps and about four days of shifting data around. I also bought a set of drive boxes (anti-static, sealed and padded, at $7.99 each; I’m using these and they’re very well built), because it was a lot cheaper to buy a set of them and a small laptop bag to haul them offsite than to buy a Pelican case to do the same thing.
I fitted out the DS414 with three 4TB drives and a 2TB (because I had a bunch of those handy), which, after formatting, gives me almost 9.5 terabytes of usable space. By replacing that 2TB with a 4TB, I can push that to over 11 terabytes when I need to. Synology has just certified its first 5TB drive, so I can take the DS414 to over 14 terabytes usable before I need to worry about options (and then the option is either another DS414 and another 14 terabytes, or a NAS extender and another 14 terabytes that way).
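If you’re wondering how a mismatched set of drives adds up to 9.5 terabytes, here’s a minimal sketch of how Synology Hybrid RAID carves mixed drives into slices with single-drive redundancy. It’s an approximation of the general layout, not Synology’s actual implementation, and real formatted capacity runs a bit lower:

```python
# Approximate Synology Hybrid RAID (SHR) usable capacity with
# single-drive redundancy. A sketch of the general layout, not
# Synology's actual implementation; formatted capacity runs lower.
def shr_usable_tb(drives_tb):
    """Return approximate usable TB for a list of drive sizes."""
    usable = 0.0
    remaining = sorted(drives_tb)
    while len(remaining) >= 2:
        smallest = remaining[0]
        # Slice the smallest size across every remaining drive;
        # one drive's worth of each slice goes to redundancy.
        usable += smallest * (len(remaining) - 1)
        remaining = [d - smallest for d in remaining[1:] if d > smallest]
    return usable

print(shr_usable_tb([4, 4, 4, 2]))  # 10.0 raw TB -> ~9.5 TB formatted
print(shr_usable_tb([4, 4, 4, 4]))  # 12.0 raw TB -> ~11 TB formatted
```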
My allocated space dropped from about 80% of available space to 43%. Along the way I finally went into my “I need to clean this out some day” area I call the morgue and did some spring cleaning, deleting about a terabyte of crap, which helped too. Now I have plenty of working space, won’t need to worry about it for a while, and have an easy upgrade path when I do.
That freed up the older NAS, the DS413j, to become the backup unit. It’s configured similarly, although backups take up a little more space than the data being backed up, so it’s about 47% full. Those drives are fully redundant as well, but looking out really long term, I could switch it to RAID 0 (striping) without drive redundancy and back up two completely full DS414s to the DS413j if I needed to.
The first thing I did when I got all of the data over to the DS414, of course, was back it up, in this case to external USB drives, so that I had redundant copies of everything before I rebuilt and reformatted the drives in the DS413j. Don’t forget that little step: there’s nothing quite like reformatting your backup drive and then losing your master before you have a chance to refresh the backup. At no time did my data have fewer than two fully independent copies, at least one of them usually unplugged and hidden from mayhem. In most cases, I had three. Data paranoia isn’t just a mental illness, it’s a good idea.
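If you want more than a gut check before wiping anything, something like this quick sketch (the paths are placeholders, not my actual mount points) can confirm a fresh copy matches the source in file count and total bytes:

```python
# Quick sanity check before reformatting a backup volume: make sure
# the fresh copy matches the source in file count and total bytes.
# The two paths below are placeholders, not real mount points.
import os

def tally(root):
    """Walk root and return (file_count, total_bytes)."""
    count, size = 0, 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            count += 1
            size += os.path.getsize(os.path.join(dirpath, name))
    return count, size

src = tally("/volume1/data")       # live NAS share (placeholder)
dst = tally("/volumeUSB1/backup")  # fresh USB copy (placeholder)
print("Copies match" if src == dst else f"MISMATCH: {src} vs {dst}")
```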
So, the process: migrate all of the data (except the Time Machine backups) to the new unit. Start Time Machine over for the three computers onto the new unit. Back up the new unit. THEN reformat the old unit and do a network backup of the new one onto it. Then remove those drives, stick in the 2nd backup set, format them, do another round of backups, and take that first set offsite.
A lot of data slogging. But what I found was that a full backup over the network to freshly formatted drives in the baby NAS took about 12 hours, as opposed to 3 days, and the big bottleneck there is that the baby NAS is single core, so the CPU redlines early on. Still, since it mostly runs while I’m asleep, I can live with that; it’s also solvable, if I ever get impatient, by upgrading the NAS unit without having to reinvent the processes.
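The 12-hour figure makes sense once you work out the effective rate. The data-set size here is inferred (43% of roughly 9.5TB), and the gigabit ceiling is a general rule of thumb:

```python
# Effective throughput of the 12-hour network backup. The ~4 TB
# figure is inferred (43% of ~9.5 TB usable), and the ~110 MB/s
# gigabit ceiling is a general rule of thumb, not a measurement.
data_tb = 4.0
hours = 12

rate_mb_s = (data_tb * 1_000_000) / (hours * 3600)
print(f"Effective rate: ~{rate_mb_s:.0f} MB/s vs ~110 MB/s wire speed")
# ~93 MB/s: close enough to gigabit wire speed that the single-core
# CPU, not the network, is what redlines first.
```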
And I think that’s the point. Four months with the NAS and I realized I was about six months away from things breaking badly, and probably at a time not of my choosing. I could fix it now and take the time to fix it right, or I could wait until the crisis hit and fix it in a hurry, and possibly badly. I hate running into a crisis I know I could have avoided…
My cheap/quick fix was about $500 and probably bought me a year. What I chose instead cost me closer to $800, but I can look at both the data storage part and the backup part and know that, with current technology, I’ve got 3-4 years before I have to rethink how I do this, and with 5TB drives and whatever comes beyond them, probably double that. So it’s about upgrades, not rearchitecting. And that makes me happy.
And over the long haul, it’ll save me money. I’ll need fewer drives because I can make more efficient use of the ones I have, and on top of that, I won’t be buying housings (and power supplies, and cables, and so on). Having just taken a bunch of those housings apart to scavenge the drives for this project, I’m seriously unimpressed with the quality of most of them. They got landfilled instead of sent off to Goodwill, because very few survived being taken apart well enough to be usable if put back together. Oh well.
The new unit is perfect for what I need, with plenty of power to handle our data and backups for three computers. The first, smaller unit would be perfect for a single user, and it works fine as a backup device. I can triple my data needs and handle it “only” by replacing drives with bigger ones, and the backup process scales with it. That’s a nice comfort zone to be in when it’s your data and you want to make sure it continues to survive.
Even better, the process is almost entirely automated; now that it’s working smoothly, the only manual operation is to physically swap the disks and reconfigure the backups to the new set, and that’s about one hour a month. As long as nothing breaks, everything else manages itself. And that’s the big issue with backups: the bigger the pain in the butt your backups are, the easier it is for all of us to rationalize delays or cutting corners. And ultimately, that’ll whack you in the kneecap.
And I dunno about you, but I don’t want that. It’s worth some effort to make sure it doesn’t happen.
(I need to update the backup and NAS articles to include this info, but at least until then I have this to point people to…)