With the big box up and running, time to start loading it with data. I have about 8 terabytes to put on it.
First I had to move it from my workbench to one of my racks. The problem here is that A) this thing is very heavy, and B) I need to move it and the drive box together. So I used my pump trolley; every home should have one.
There are five SATA ports: four on the card I added, and one on the motherboard. I'm going to use two ports to attach already-loaded drives, which means I can stop using the server hosting them, and the other three ports for drives to hold the 4tb of data that's currently on another computer, which I plan to copy over.
For the drives, I'm using Seagate 3tb drives. I've had a lot of failures with these, but I have three on the shelf unused, and if I don't ever use them, I might as well throw them away. I'll use them for this, and if they fail during the data loading, it's no big deal. Three 3tb drives raided together gives me 9tb for a 4tb file load. That gives me plenty of space for future expansion.
4tb is a lot. If I could copy at the full 100 mbit rate (which can't actually be reached), I'd get 12.5 megabytes per second, which works out to about 90 hours. Experience tells me that 6 mb/sec is more realistic, so 180 hours, about a week. OK, that's doable (I've done it before), but maybe I can do better. The big box has gigabit ethernet - the source of the data doesn't. But several years ago, I looked into gigabit ethernet. I didn't look very hard, but I did wind up with a gigabit switch and a handful of gigabit cards. I tried one of these cards in the source machine, but it didn't like it. Rather than try to persuade it, I tried an Intel card. It liked that, and after a bit of fumbling around, I was able to make it work. So I can copy at gigabit speeds. If I could get the full speed of that, it would reduce my copying time to 9 hours, nice. But copying isn't just chucking bits down the ethernet, it's also reading from the source and writing to the destination.
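For the curious, the arithmetic above looks like this in Python. The rates are the ones quoted in the text, not new measurements:

```python
# Back-of-the-envelope transfer times for a 4tb copy at various rates.
# Drives (and network rates) are quoted in decimal units, so 1 tb = 10^12 bytes.

TB = 1000**4

def hours_to_copy(size_bytes, rate_mb_per_sec):
    """Hours needed to move size_bytes at a sustained rate in (decimal) mb/sec."""
    seconds = size_bytes / (rate_mb_per_sec * 1000**2)
    return seconds / 3600

data = 4 * TB
for label, rate in [("100 mbit wire speed", 12.5),
                    ("realistic 100 mbit", 6.0),
                    ("gigabit wire speed", 125.0)]:
    print(f"{label:22s} {rate:6.1f} mb/sec -> {hours_to_copy(data, rate):6.1f} hours")
```

That's where the 90 hours, the week, and the hoped-for 9 hours come from.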
I timed it on the 100 mbit network, and I was getting about 4 mb/sec.
I timed it on the 1000 mbit network, and I'm getting 4.2 mb/sec (34 mbit/sec), which means that the bottleneck isn't the network, it's the reading from the source disk and writing to the destination. Disk write speed is supposed to be around 120 mb/s, so the extra time must be in opening and closing files, in the rsync protocol, and other stuff I haven't thought of.
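One way to check the raw disk write speed, separate from the network, is to time a big local write on the destination drive. A rough sketch - the size and path here are placeholders, and a real test should write more than fits in RAM, or the page cache flatters the numbers:

```python
# Time a sequential write to estimate raw disk write speed in mb/sec.
# Sketch only: total_mb should exceed installed RAM for an honest figure.
import os
import time

def measured_write_mb_per_sec(path, total_mb=256, chunk_mb=8):
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hit the disk
    elapsed = time.monotonic() - start
    os.remove(path)
    return total_mb / elapsed

# e.g. print(measured_write_mb_per_sec("/path/on/target/drive/test.bin"))
```

If that comes back anywhere near the rated 120 mb/s while rsync only manages 4.2, the overhead is in the per-file work, not the raw write.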
Bottom line - loading a big drive over gigabit ethernet isn't noticeably faster than over 100 mbit ethernet. Bah.
The other thing I tested was power consumption. It's a big box, and when the top is open it roars like a dozen lions - that's the blowers cooling the CPUs and memory. So I expected a big power draw. Not so. I put my clamp meter on it, and it pulls 0.7 amps, which is about 170 watts. Add to that another 40 watts for the second box containing the drives, and the total is 210 watts, just under 1 amp.
My other big servers (which are really boxes with 12-14 drives) pull 200 watts. My small boxes (computer plus three drives) are 72 watts.
So the power consumption isn't too bad - that 210 watt draw will be replacing three other computers each pulling some 160 watts. Just for fun, the Raspberry Pi 3 takes 0.6 amp at 5 volts, which is 3 watts. That's why I like to use Pies instead of ordinary computers wherever I can.
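The watts figures are just amps times volts. I'm assuming 240 volt mains here (an assumption, but it matches the 0.7 amp / 170 watt reading above), and the Pi runs off its own 5 volt supply:

```python
# Watts from the clamp-meter readings. The 240 volt mains figure is an
# assumption; the amp readings are the ones quoted in the text.
MAINS_VOLTS = 240

def watts(amps, volts=MAINS_VOLTS):
    return amps * volts

big_box = watts(0.7)            # ~168 watts, the "170" above
drive_box_total = big_box + 40  # plus the drive box, ~210 watts
pi3 = watts(0.6, volts=5)       # the Pi 3 on its 5 volt supply: 3 watts
print(big_box, drive_box_total, pi3)
```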
I'll be saving on power, although that's not the main motivation for this box. The main motive is to improve customer service; the fast CPU (eight 2.7 ghz cores) and huge memory (64gb) should translate into better throughput.
... later ...
It occurred to me that maybe I can reduce the overheads on file transfer by doing several at once. I tried that, and I'm getting 6.8 mb/sec, so this file transfer will take about a week. Well, that's not a problem, it's just two computers communing.
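The idea of overlapping transfers can be sketched like this. A local illustration using Python threads and shutil rather than the several rsync processes I actually ran - the point is the same, keeping data moving while other files are being opened and closed:

```python
# Copy many files several at a time, so per-file overhead (open, close,
# protocol chatter) overlaps with actual data movement.
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def copy_tree_parallel(src_dir, dst_dir, workers=3):
    """Copy every file under src_dir to dst_dir, workers files at a time."""
    src_dir, dst_dir = Path(src_dir), Path(dst_dir)
    dst_dir.mkdir(parents=True, exist_ok=True)
    files = [p for p in src_dir.rglob("*") if p.is_file()]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        jobs = []
        for p in files:
            target = dst_dir / p.relative_to(src_dir)
            target.parent.mkdir(parents=True, exist_ok=True)
            jobs.append(pool.submit(shutil.copy2, p, target))
        for job in jobs:
            job.result()  # re-raise any copy errors
    return len(files)
```

With lots of smallish files this buys a real improvement (4.2 to 6.8 mb/sec in my case); with one enormous file it buys nothing, since there's no per-file overhead to hide.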