Build Your Own NAS
So you want storage for your home lab, but you don’t want to spend $1,000 on an empty NAS box and still have to purchase disks for capacity. How about a build-your-own? There are numerous free and semi-free alternatives on the market; you just need to provide a compute platform to which you can attach a few disks. More than a year ago, I wrote a blog series on building an SMB or lab environment that is supported and does not cost you your life savings. You can find it here. Some time has passed, and as we look for new toys for our lab, why not repurpose the old and use one of those HP Microservers to build a performant NAS system? It really is easier than you might think.
The Parts
So we already have the HP Microserver Gen8 with a Celeron CPU, 16 GB of RAM, an SD card and a quad-port PCIe NIC. If you don’t own one, they can be purchased online for around $200 in a basic configuration, maybe less if you look for a used one. What else do we need to make this happen? To get optimum performance, we add an SSD to serve as a log device and cache. The Microserver has an extra onboard SATA port meant for the CD-ROM slot, and this port can be repurposed to attach our SSD. We only need an adapter cable from the available floppy power connector to a SATA power connector, and a normal SATA cable from the SATA port to the CD-ROM mount slot. The power adapter cable can be purchased from sites like AliExpress.com for a few bucks, and a generic SATA cable can be bought in every computer store.
This leaves all four drive bays for capacity disks. For this example, we’ve picked four traditional HGST 4 TB NAS drives. Bear in mind that the Microserver does come with drive cradles, but they are a bit flimsy and do not support hot-plugging. You can also decide to use SSDs as capacity disks; the drive cradles do not accommodate a 2.5-inch SSD, so you would need a 3.5-inch to 2.5-inch adapter. As we’re in Europe, where SSDs are not as cheap as in the US or Asia, we’ve chosen traditional magnetic drives, but picked 7,200 RPM models to get the most performance out of them.
And that’s all we need in hardware parts.
The Software
There are numerous open-source NAS solutions. For this build, we looked at OpenFiler, NAS4Free, Xpenology and FreeNAS, and decided to go with FreeNAS: it has support for VMware ESX, a solid FreeBSD base, and an expandable file system in ZFS. Most of the mentioned solutions allow installation on an SD card or USB drive, so the SD card already in the system, which we used to run ESX from, can be repurposed as the FreeNAS boot device.
The Build
Mounting the hard drives in the cradles is a simple screw-on job with four screws per drive. As mentioned, the cradles are a bit flimsy, so be careful not to screw them too tightly to your disks or you risk cracking them. If you decide to go for SSDs as capacity disks, you need an adapter cradle. To be honest, we have not done a proper comparison of the different cradles out there; the Icy Dock models seem to get high praise for build quality as well as price. You can find their website here. Our SSD, however, is placed in the CD-ROM slot and wired accordingly. It really is as simple as it sounds.
Installing FreeNAS is pretty straightforward. Download the ISO from their website. As we enabled iLO on our server, we can use the virtual drive option in the remote console to install. If you do not have this enabled, you may want to produce a bootable USB drive from the FreeNAS ISO using something like UNetbootin, free software that works on every platform and will produce a bootable USB drive from almost any bootable ISO.
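If you prefer the command line, writing the image to a USB stick with dd also works, as recent FreeNAS ISOs are hybrid images that boot from USB. A minimal sketch only, assuming the downloaded file is called FreeNAS-11.0-RELEASE.iso and the stick shows up as /dev/da0 on a FreeBSD machine (double-check the device name, because dd will overwrite it without asking; on Linux it would typically be /dev/sdX with bs=1M):

dd if=FreeNAS-11.0-RELEASE.iso of=/dev/da0 bs=1m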
In the installer we select the SD card as the installation target. FreeNAS 11, which is what we install here, needs a minimum of 8 GB of storage to run from; our SD card has 16 GB, so we’re safe. Alternatively, the Microserver has an onboard USB port where you can insert a USB drive to boot from. You can use either one as the installation target to boot from later on. We will skip the “next-next-finish” screens, as we assume you will be able to run through them on your own.
After the installation the server automatically reboots and starts the FreeBSD OS on which FreeNAS runs. It is not the fastest boot procedure we’ve seen, but then again, we do not intend to boot this box very often; being rock solid is more important than booting quickly. Once done, we’re presented with the “status screen” on the console.
Now, if you have a DHCP server in your LAN, you should probably already see an assigned address. Otherwise, or if you wish to define a static address, you can use this menu to assign an address to an interface or to bind a group of NICs together. A Link Aggregation Group, or LAGG, is a group of NICs that combine their bandwidth into one interface with one IP, using LACP, Round Robin or Load Balancing. Bear in mind that if you wish to use iSCSI with Multipath I/O (MPIO), you might want to assign an IP to each NIC rather than bind the NICs together. Also make sure your switch supports the aggregation mode you pick, otherwise you will end up with an unresponsive NAS.
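FreeNAS sets all of this up through the console menu or the web GUI, but under the hood it is plain FreeBSD link aggregation. Purely as an illustration of what an LACP LAGG boils down to, here is a sketch of the equivalent manual ifconfig commands, assuming two interfaces named em0 and em1 and an address of 192.168.1.50 (don’t run these by hand on FreeNAS; let its middleware manage the configuration):

ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport em0 laggport em1
ifconfig lagg0 inet 192.168.1.50 netmask 255.255.255.0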
Fine Tuning
Now, we have that SSD there and we have four disks. Out of the box, FreeNAS does not allow one SSD to do both logging and caching at the same time, but there’s a trick for that. It’s an old one, called “partitioning”: if you split the SSD in two, you can assign one half to logging and the other half to caching. As I have a 120 GB SSD, I decided to split it 80/40: 80 GB for logging, 40 GB for caching. To do this, you need to go into your console (either use the one on the box or enable SSH and log in) and enter a couple of commands. But first things first: we need to create a volume to which we can then attach the SSD. This part can be done via the GUI.
Log into the GUI and go to Volume Manager. Here you see the four disks and your SSD; you may see other storage volumes as well, but those are irrelevant for this setup. Make sure you drag and drop the four disks into one group. You can choose your file system here; FreeNAS uses ZFS by default. For those of you who have never heard of it: ZFS (the Zettabyte File System) is a file system developed by Sun Microsystems in 2005 to handle very large disks and pools while guarding data integrity. More info on ZFS and what it can do can be found here and here.
So you create a ZPool out of the four disks. We picked RAIDZ1 as the pool layout; this more or less compares to RAID 5 and tolerates the loss of one disk. You can also choose RAIDZ2, which roughly compares to RAID 6 and tolerates the loss of two disks. Logically, RAIDZ2 leaves less usable capacity than RAIDZ1, as more parity blocks are written to the disks in the pool. Once the RAIDZ mode is chosen and applied, you should have a ZPool consisting of your four disks.
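For the curious: under the hood, this GUI step is roughly the equivalent of a zpool create with a raidz1 vdev. A sketch only, assuming the pool name Vol0 and raw device names ada0 through ada3 (the FreeNAS GUI actually partitions the disks and references them by gptid, so create the pool through the GUI and treat this as background):

zpool create Vol0 raidz1 ada0 ada1 ada2 ada3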
Now we go to the command line to configure the SSD and add it to the pool. First we check what the device assignments are. This can be done with the command “camcontrol devlist”. In our case, the output looks like this:
root@NAS:~ # camcontrol devlist
<HGST HDN724040ALE640 MJAOA5E0>  at scbus0 target 0 lun 0 (pass0,ada0)
<HGST HDN724040ALE640 MJAOA5E0>  at scbus1 target 0 lun 0 (pass1,ada1)
<HGST HDN724040ALE640 MJAOA5E0>  at scbus2 target 0 lun 0 (pass2,ada2)
<HGST HDN724040ALE640 MJAOA5E0>  at scbus3 target 0 lun 0 (pass3,ada3)
<OCZ-VERTEX2 1.37>               at scbus4 target 0 lun 0 (pass4,ada4)
<SanDisk U3 Cruzer Micro 8.02>   at scbus7 target 0 lun 0 (pass5,da0)
<HP iLO Internal SD-CARD 2.10>   at scbus8 target 0 lun 0 (pass6,da1)
<HP iLO LUN 01 Media 0 2.10>     at scbus8 target 0 lun 1 (pass7,da2)
So now we know that our SSD has the assignment “ada4”. Next we need to make sure it is cleared and create a partition table on it. This is done with “gpart”, using these commands:
gpart destroy ada4
gpart create -s gpt ada4
Now we have a GPT partition scheme on the SSD, and we can divide it up into a log part and a cache part. This is done like this:
gpart add -a 1m -l zlog0 -s 80G -t freebsd-zfs ada4
gpart add -a 1m -l l2arc0 -t freebsd-zfs ada4
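As a sanity check, and assuming the SSD is still ada4, you can list the new partitions and their labels straight away:

gpart show -l ada4

Back to the two gpart add commands themselves.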
The first command adds the log partition with a size of 80 GB and labels it “zlog0”; if your SSD is larger, you can pick a different size for the log partition. The second command assigns the rest of the free space to a partition labelled “l2arc0”. Now that both partitions exist, we can add them to the ZPool. First we check which ZPools are available on our NAS. This is done with the command “zpool list”; in our case, the output looks like this:
root@NAS:~ # zpool list
NAME           SIZE   ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Vol0          14.5T   9.93T  4.57T         -    39%    68%  1.05x  ONLINE  /mnt
freenas-boot  7.38G    750M  6.64G         -      -     9%  1.00x  ONLINE  -
So you see, we have our previously created volume, consisting of the four disks, 14.5 TB in size and called “Vol0”, and we also have a boot volume. As you might have guessed, this box already has some data on it, hence the allocation and deduplication statistics you see. Now we add the SSD partitions to the ZPool. This is done with these two commands:
zpool add Vol0 log gpt/zlog0
zpool add Vol0 cache gpt/l2arc0
The first one adds the log device to the Vol0 ZPool, the second one adds the L2ARC cache to the Vol0 ZPool. If this was successful, you can check what the pool looks like now with the command “zpool status Vol0”. This is our output:
root@NAS:~ # zpool status Vol0
  pool: Vol0
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        Vol0                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/d9102e42-7906-11e7-a4b2-a0369f3e83bd  ONLINE       0     0     0
            gptid/d9ffdc49-7906-11e7-a4b2-a0369f3e83bd  ONLINE       0     0     0
            gptid/daf808a9-7906-11e7-a4b2-a0369f3e83bd  ONLINE       0     0     0
            gptid/dbe9229d-7906-11e7-a4b2-a0369f3e83bd  ONLINE       0     0     0
        logs
          gpt/zlog0                                     ONLINE       0     0     0
        cache
          gpt/l2arc0                                    ONLINE       0     0     0

errors: No known data errors
If all went right, yours should be similar. As you can see, both partitions are added to the ZPool with their respective functions. There are no errors to report.
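If you later want to see whether the log and cache devices actually get used once you start pushing data through the NAS, “zpool iostat” breaks the I/O statistics down per vdev. For example, refreshing every five seconds:

zpool iostat -v Vol0 5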
Datastores
Next up: where to store your VMs. There are two ways to hand out storage: either you create a Dataset or you create a ZVol. Roughly speaking, if you want to do block storage (iSCSI, for instance) you should use a ZVol, and if you want to do shares (like NFS or SMB) you should use Datasets. It’s a file-versus-block thing; that’s not the complete explanation, but it will suffice for us. As in this lab we run the VMs from NFS, a Dataset is the way to go. You need to be careful here, as you also have to deal with users and access rights.
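To make the file-versus-block distinction a bit more concrete, here is roughly what the two options look like from the ZFS command line. The names are made up for illustration, and in practice you create these through the FreeNAS GUI so its configuration database stays in sync:

zfs create Vol0/VMware           # dataset: a file system you can share over NFS or SMB
zfs create -V 500G Vol0/iscsi0   # zvol: a 500 GB block device you would export over iSCSI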
So first we create a user called “vmware”. A group called vmware will automatically be created as well, and the user will get this group assigned as its primary group. Next, we create the dataset where our VMs will be stored. Go to Volumes, select your volume (in our case, Vol0) and click “Create Dataset”. Make sure you use the share type UNIX. It’s up to you to enable or disable compression and deduplication; they can add to the CPU load quite quickly, but they might save you storage space as well. If your box is up to the task, enable them. As our CPU is a mere Celeron, we will disable both for now. Click OK, go back, select your newly created dataset and click “Change Permissions”. Here you enter the user we just created as owner of this Dataset, with user and group rights. This is important, as it determines whether you can write to the datastore once it’s shared. Click OK to close this window. Now for the most important part: creating the share.
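For reference, the permission step boils down to an ordinary ownership change on the dataset’s mountpoint. A sketch only, assuming the dataset is called VMware, lives on Vol0 and is therefore mounted under /mnt/Vol0/VMware, and that both the user and the group are named vmware:

chown -R vmware:vmware /mnt/Vol0/VMware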
To be able to share anything, you first need to enable the service you wish to use for sharing; in our case, this is NFS. In the settings for NFS, be sure to enable “Allow NonRoot Mount”, otherwise you might run into errors mounting the share in your VMware environment. Once you have configured the service, start it and make sure you enable “Start at Boot”, or you will have to start it manually every time you restart the NAS.
Now for the share part; here’s where it all comes together. Go to “Sharing” and click “Add UNIX (NFS) Share”. Browse to the path of the Dataset you just created and select it. Enter a comment for the share, scroll all the way down and click “Advanced”. Here you can select a username and group from the dropdowns for “Mapall User” and “Mapall Group”. This means that every read and write action on this share will be performed as the user you select here. As we previously made our “vmware” user the owner of this Dataset, we need to make sure this is the user used to read and write data. Once selected, click OK to make the share active.
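With the service running and the share active, a quick sanity check saves head-scratching later. From the NAS shell (or any machine with NFS client tools) you can list the exports, and on an ESXi host you can mount the share directly with esxcli. Both lines below are a sketch that assumes the NAS answers on 192.168.1.50, the dataset is exported as /mnt/Vol0/VMware and you want the datastore to appear as NAS_Vol0; adjust to your own names and addresses:

showmount -e 192.168.1.50
esxcli storage nfs add --host=192.168.1.50 --share=/mnt/Vol0/VMware --volume-name=NAS_Vol0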
Closing
If you followed all the steps correctly, you should be able to add the NFS datastore to your ESX host. It might be a little more work than you are used to with NAS systems from QNAP or Synology, but it’s far cheaper and the performance is very much there. We’ve spent under $900 in total for a NAS solution with 14.5 TB of raw pool capacity. Can you find anything cheaper that can do the same with the same performance?