I had a server called Rosie; a huge box, 5U in size. Originally Rosie had twelve 80GB parallel ATA drives; that was back when 80GB was as big as it got. I got to twelve drives by having four off the motherboard, plus two additional ATA controller cards that handled four each. So Rosie was my first one-terabyte box. Rosie was one of those big heavy servers, like the one that put my back out.
Rosie has long since been recycled; the 5U server case now holds 14 two-terabyte drives (two off the motherboard, and four on each of three additional SATA controller cards). But something went wrong with my record-keeping, and my records showed Rosie as still doing its old job. When I noticed this, I thought: I need to replace Rosie, and what better to replace a huge computer like that than a Raspberry Pi with one drive? And that's what I've done, and I'm currently loading it up with data.
I tried connecting six 2TB drives to a Pi; that worked fine. So, naturally, I tried twelve. I didn't get that to work, but I haven't given up - it nearly works, it's just that the power to the Pi gets a bit iffy. So I'm running nine, and that's working. I have them in banks of three, each bank in a bracket with a fan to keep the drives cool. The whole thing pulls 400mA at 240 volts mains - about 100 watts. Most of that, of course, is for the drives.
The main limitation in doing this is going to be the rate at which I/O can happen through the USB port. But if I don't try to pull much data at once from those 18TB of drives, it should be OK; the use I have in mind is storing a huge amount of data which is accessed lightly.
I also tried a new web server. For 16 years now I've been using Apache, and I'm used to its funny little ways. But on a Pi it uses 12MB of memory even when it isn't doing anything, and one thing I'm always thinking about on a Pi is economy of memory, because it only has 512MB and you can't give it more. So I had a look at lighttpd, which bills itself as a lightweight web server, and I can tell you, yes it is. The memory footprint is only 1MB when it isn't doing much. And setting it up is very easy: you do "apt-get install lighttpd" and then you edit one file to tell it where the public access files are. I don't think I'll use it for any heavy-duty stuff, but all of my servers run a web server, for internal diagnostics.
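For reference, the whole setup amounts to something like this. This is a sketch for the Debian/Raspbian package, where the one file to edit is /etc/lighttpd/lighttpd.conf; the document root shown is just an example directory, not necessarily mine:

```shell
# Install the server (Debian/Raspbian package)
sudo apt-get install lighttpd

# Then edit /etc/lighttpd/lighttpd.conf and point it at the
# public files. The directory here is just an example:
#
#   server.document-root = "/var/www/status"
#
# ...and restart to pick up the change:
sudo service lighttpd restart
```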
What happens is this: each server runs an internal diagnostic once per minute. It checks that it's running properly, looks at loading, and so on. It puts all this information into a little file, and that file can be accessed using a web browser. One of my servers, once per minute, goes round all the other servers accessing this file. If it can't access the file, or if the file is warning about an error, then it sends me an email. So pretty much all my servers are running a web server, but all it's used for is this diagnostic file. So a lightweight web server is just what they need.
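A sketch of what that once-a-minute watchdog could look like as a shell script. The hostnames, the status file's path and the email address are all invented for illustration; the real setup differs in the details:

```shell
#!/bin/sh
# Decide whether a fetched status page warrants an alert.
# Empty input means we couldn't fetch the file at all; a line
# starting "ERROR" means the server itself reported a problem.
needs_alert() {
    status="$1"
    if [ -z "$status" ]; then
        return 0
    fi
    echo "$status" | grep -q '^ERROR' && return 0
    return 1
}

# Sweep every server once; run from cron once per minute, e.g.
#   * * * * * /usr/local/bin/sweep.sh
sweep() {
    for host in "$@"; do
        # 5-second timeout so one dead server doesn't stall the sweep
        status=$(wget -q -T 5 -O - "http://$host/status.txt")
        if needs_alert "$status"; then
            echo "problem on $host" | mail -s "server alert: $host" me@example.com
        fi
    done
}
```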
On the trolley: I got an email from the vendor to tell me that the 150kg model I'd ordered was out of stock (this despite their eBay page saying they had more than ten available). Would I like to wait a month (no) or pay an extra £22 for the 300kg model? So I looked it up, and that looks like a great bargain. So I've ponied up the extra cabbage, and it should arrive later this week.
My back has pretty much recovered from whatever I did to it. But I've learned a lesson - heavy servers need to be handled in such a way that I don't put my back out.
It looks like I can only get ten drives on a Pi, using the ten-way hub. I've tried various ways to get a couple more on, but every which way, after a few minutes the voltage to the Pi drops below 4.8 volts and it crashes. Giving the 10-way hub its own power supply doesn't help. Still, ten is pretty good, and nicely matches some small server cases I can use. But there do seem to be a couple of big problems left.
1) Mounting a volume can take between 1 and 15 minutes (it should take a couple of seconds). I don't know why. But that means that mounting 20 volumes (10 drives) might take a couple of hours every time I reboot.
2) Drive names (sda, sdb, sdc etc) are assigned to drives in the order in which the Pi sees them. For ordinary computers, that's (almost) consistent. But in this case, because everything's going via the USB, the drives come up in a random order. So what was sdd last time, might be sdg next time I boot. I can see a couple of ways to deal with this.
I've handled problem 2. I've given each volume a label, and I do the mount using that label. So instead of:

mount /dev/sda1 /home/mountpoint01

I do:

mount -L label01 /home/mountpoint01
Which drive is sda1 will vary each time I reboot, but the labels will stay the same. Now if only I can work out why mounting takes such a long time!
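Setting the labels is a one-off job per volume. Here's a sketch, assuming ext-family filesystems (where e2label does the labelling); the label names and mount points are just examples:

```shell
# One-off: give the volume a label (ext2/3/4 filesystems)
e2label /dev/sda1 label01
# For FAT volumes, fatlabel (from dosfstools) does the same job:
#   fatlabel /dev/sda1 LABEL01

# From then on, mount by label, whatever /dev/sdX the drive came up as:
mount -L label01 /home/mountpoint01

# Or in /etc/fstab, so the mapping survives reboots:
#   LABEL=label01  /home/mountpoint01  ext4  defaults  0  2
```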
I still don't understand why mounting takes a long time, but I've noticed that it's only the case for drives with clusters smaller than 4k. My guess is, something has to time out (and that takes about five minutes) before mount does its job. By changing all the drives to 4k clusters, mounting takes only 30 seconds per volume, which is still a lot. But it means that I can get all 24 volumes (that's 24 terabytes) fscked and mounted in 12 minutes or so. Which is still a long time, but bearable.
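With the volumes labelled, the fsck-and-mount step can be a simple loop. A sketch, assuming labels label01 to label24 with matching mount points (both invented names for illustration):

```shell
#!/bin/sh
# Fsck and mount all 24 labelled volumes in turn.
for n in $(seq -w 1 24); do
    dev=$(blkid -L "label$n")      # find whichever /dev/sdXN has this label
    if [ -n "$dev" ]; then
        fsck -p "$dev"             # -p: preen (repair safely, no questions)
        mount -L "label$n" "/home/mountpoint$n"
    else
        echo "volume label$n not found" >&2
    fi
done
```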
And yes, I've managed to get 12 drives onto the Pi. Here's how.
The PC power supply powers the drives and the fans (one fan per three drives). It also powers a Ubec set to 5.3 volts, which powers a USB hub; that hub powers the Pi, and three of the USB-to-SATA gizmos are plugged into that hub, so they're also powered by it. I found that one Ubec couldn't supply enough power for both the hubs and all twelve gizmos; after a few minutes, the Ubec heated up and the supplied voltage dropped below 4.5 volts, at which point the Pi would crash. But two Ubecs give me enough power for all this, and it can all run from a single PC power supply. The nine remaining gizmos are plugged into a 10-way USB hub, which is powered by the other Ubec, set to six volts. Both hubs are plugged into the Pi. When I measure the voltage at the Pi, I get 5.15 volts, which is just what is needed. The whole thing pulls 0.5 amps at 240 volts, which is 120 watts - obviously the drives are taking nearly all of that. Tomorrow, I'll try to cram these 12 drives into a normal-sized tower case, together with fans for cooling the drives, plus the Pi, the two hubs and the Ubecs.
The costs of all this are pretty low, too. The hard drives, of course, cost whatever hard drives cost. The Pi is £20, the Ubecs are £3, the hubs are £2 or so, and the hard drive gizmos are £3 each (12 needed for this box).
The PC power supply is about £8 and the case another £8, so I get a 24-terabyte server for about £85 plus the cost of the hard drives. A dozen 2TB drives is £773 (including VAT). If I used 3TB drives, I'd get 36TB for £1122. As you can see, 90% or more of the cost is the hard drives.
I'll leave it on overnight, to see if it's stable.