Thoughts on using FreeNAS and ZFS – Expandability

I think FreeNAS rocks and ZFS is cool too. They now work together quite solidly in the FreeNAS nightly builds.

I think I’ve come up with a way to use FreeNAS and ZFS that gives you a safety net like RAID 5 or 6 under ZFS, plus expandability for the future.

Currently you can’t add a drive to an existing RAID-Z vdev in a ZFS pool. You can swap larger drives into a RAID 5 or 6 vdev and grow the vdev, and therefore the pool will grow as well. But to really gain space in a RAID 5 or 6 system you should swap out all the drives so they stay the same size; otherwise you will lose a lot of storage space. See this calculator — put in different size drives and look at the bottom to see how much space is wasted.

I think the way to go is to create two pools of similar size and just rsync them nightly.

Let’s say you have a bunch of old drives lying around and you want to use them for your FreeNAS, and you also want to make sure you don’t lose data (very important).

For example, you have 5 drives: 4 old IDE drives of different sizes (80 Gig, 120 Gig, 200 Gig and 250 Gig) and one 500 Gig SATA drive. Once you have FreeNAS booting from a USB thumb drive, the four IDE ports on the motherboard are freed up. So you connect all 5 drives to your computer.

Now calculate how best to split up the drives so you have two equal (or as close as you can get) halves of storage space. In the example above we have a total of 1150 Gigs, so half is 575 Gigs. So we would make two ZFS pools: pool0 from the 120 Gig, 200 Gig and 250 Gig drives, and pool1 from the 80 Gig and 500 Gig drives.

So we create vdev0 with the 120, 200 and 250 Gig drives striped together. Then create vdev1 with the 80 and 500 Gig drives striped together. Next make pool0 from vdev0 and pool1 from vdev1 (keep it simple).
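
On the FreeNAS command line the two striped pools can be created in one step each, since `zpool create` builds the vdev and the pool together. A minimal sketch — the device names below are placeholders, so substitute whatever your system actually reports (check dmesg or the FreeNAS disks page):

```shell
# Device names are assumptions; use the ones your system reports.
# pool0: the 120, 200 and 250 Gig drives striped together (~570 Gigs)
zpool create pool0 /dev/ad1 /dev/ad2 /dev/ad3

# pool1: the 80 Gig IDE and 500 Gig SATA drives striped together (~580 Gigs)
zpool create pool1 /dev/ad0 /dev/da0

# Sanity check: both pools online with roughly the expected sizes
zpool list
```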

Create an rsync job to back up pool0 to pool1 every night and you now have a backed-up system that is tolerant of one drive failure (even more if the failed drives are all in the same pool). Only make pool0 available to the network; pool1 stays invisible to everyone on the network.
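
A minimal sketch of that nightly job, assuming the pools are mounted at /mnt/pool0 and /mnt/pool1 (adjust the paths to match your mount points):

```shell
# Nightly mirror of pool0 onto pool1 (mount points are assumptions).
# -a preserves permissions and timestamps; --delete removes files from
# pool1 that no longer exist on pool0, keeping it an exact copy.
rsync -a --delete /mnt/pool0/ /mnt/pool1/
```

In FreeNAS you can schedule this from the WebGUI’s rsync/cron settings rather than editing a crontab by hand.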

The beauty of this setup is it will let you expand as you get larger drives and also is fault tolerant.

If one drive dies, you simply delete the pool and vdev, replace the drive, create a new vdev and pool, and do a manual rsync from the good pool to the newly created pool, and you’re done.

For example, the 80 Gig drive dies in pool1 above, so we lose pool1. We still have all of our data on pool0. So we delete pool1 and vdev1, remove the 80 Gig drive and replace it with a cheap new 300 Gig drive. Make a new vdev1 striping the 500 Gig and our new 300 Gig drives to give us an 800 Gig vdev1, and create pool1 from it. Next we do a manual rsync from pool0 to pool1; once it completes we are back up and good to go with fault tolerance again. Resume the nightly rsync from pool0 to pool1 and keep only pool0 available to the network — pool1 stays invisible to everyone.
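
In command terms the rebuild is short. Again, the device names and mount points here are placeholders:

```shell
# pool1 lost a drive, so destroy what's left of it (the vdev goes with it)
zpool destroy pool1

# Recreate pool1 striping the surviving 500 Gig drive with the new
# 300 Gig drive (device names are assumptions)
zpool create pool1 /dev/da0 /dev/ad0

# Repopulate the new pool from the good one
rsync -a --delete /mnt/pool0/ /mnt/pool1/
```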

Now we are running with pool0 at 570 Gigs and pool1 at 800 Gigs. We found a great deal on a 500 Gig SATA drive and picked one up, so now we can add it to our ZFS system.

We delete pool0 and vdev0, add the 500 Gig drive to the computer and install it as a new ZFS drive. Now recreate vdev0 with the 120, 200, 250 and our new 500 Gig drives, create pool0 from the new vdev0 and do a manual rsync from pool1 to pool0. Once rsync completes we’re good to go, with over 1 Terabyte of storage on pool0 and 800 Gigs on pool1. So once we get close to the 800 Gig point with pool0, we must add another drive to pool1 to keep up, and so on…

These are the benefits of this system:

-No need to manually back up all your data and re-copy it back again.

-Lets you efficiently use your hard drive space at any given time.

-Is tolerant of one bad drive, maybe more if the failed drives are all in the same ZFS pool (you could go overkill and make each pool its own RAID 5 or 6 vdev)

-With ZFS your data is verified to be stored without error (checksums and scrubbing)
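
The checksums-and-scrubbing point is worth a command of its own: a periodic scrub makes ZFS read back every block and verify it against its checksum, which is how silent corruption gets caught before the next rsync can copy it over. A sketch:

```shell
# Verify every block in pool0 against its checksum; run e.g. weekly
zpool scrub pool0

# Check scrub progress and see whether any errors were found
zpool status pool0
```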

I’m in the middle of rsyncing a setup right now just as I explained above. I’ll post if I come across anything more of interest on this subject.


This entry was posted in ZFS.

18 Responses to Thoughts on using FreeNAS and ZFS – Expandability

  1. Pingback: Learn FreeNAS » Ramblings on FreeNAS, ZFS, Expandability and RAID 5

  2. Jason says:

    Glen- I’ve been interested in the Drobo since its release. However, I never jumped due to the slow transfer speeds of USB2 and Firewire 800. Buying a DroboShare puts the price in the realm of a small server so I haven’t bought one. However, I’ve been extremely interested in the ability to use various drive sizes.

    I’m not ECSTATIC that you have created a poor man’s Drobo. I’m thinking of building my first NAS using your technique. I have two 1TB drives and two 300GB SATA drives ready to roll.

    My questions are:
    1. Using your method of creating two pools (with one acting as a backup of the first), I assume I could use the full 1.3TB of storage space in each pool and still have backup redundancy on the other pool?

    Anyhow, I read your wiki and as a FreeNAS noob I think I can build this booger. Thanks for the pioneering effort and the continued support.


    • nowhereman999 says:

      Hi Jason,

      Yeah, I agree the Drobo does suffer from a speed issue, but it delivers 50 mb/s over my network, which is perfectly fine for my uses (media storage). Mine is the Gen 1 version so I can’t say if the 2nd Gen is faster or not. Firewire 800 itself is very fast! But the performance of the machine itself may still be slowing it down.

      I think I’ve come up with a better partition scheme in my drobo like setup and one day I think I’ll post a PDF.

      It is a little involved but it’s not too complicated — it’s actually a little simpler than my first approach at the FreeNAS Drobo thing.

      But from my experiments I’ve found that I only get about 100mb/s transfer speed from the ZPOOL itself. The bottleneck is the hard drives themselves, as I have tons of RAM (2 Gigs) and I’m running a dual-CPU AMD 64 system. I’ve tuned the settings so ZFS can access lots of RAM, but my CPU never goes higher than 20% usage.

      As I said, 50 mb/s is enough for my use, so this latest FreeNAS config I’ve come up with is going as fast as my 100mb/s ethernet network.


      1) Yeah you can use the full 1.3 TB of storage.

      2) I have no insider knowledge, I’m just a user, but I think it will be a while until the ZFS WebGUI is all worked out. The OS is running quite solid. If you don’t mind using the command line for ZFS stuff, it is fully functional.

      Good luck with your FreeNAS setup!

  3. Jason says:

    Sorry. Didn’t get question two typed before I posted.

    Question 2. Do you know when FreeNAS is going to formally release 0.7?

  4. Jason says:

    Wow. Thanks, Glen.

    I just re-read my post and realize a “not” got slipped in. I’m actually really excited to try your method. I’m planning on using my old AMD FX-57 box with 3gb of RAM. My mobo is an ASUS SK8N with only two SATA ports, which obviously is not enough. So I’m currently trying to see if my on-hand RAID card is supported (a 6-port LSI SER523 from an old Dell server). I won’t be able to run tests until I receive a few things from NewEgg.

    Thanks again and keep up the good work.

  5. Jason says:

    Hey hey, Glen. I don’t have permission to access this spreadsheet. In google docs you have to add my email address to share it with me.

    Take care!

  6. Jason says:

    hey hey, glen.

    I went with this method instead of the more complicated partition method. Everything ran along swimmingly until today. I recently moved, and the last time the old FreeNAS was running all was well; I checked both pools before I shut it down to make sure they were OK. Today, almost a month later, I decided to boot it up and check on it. Pool0 won’t import and pool1 only has 4.5mb on it, so I’m not sure what happened. Using the shell, I looked in on the pool and found that pool0 is “FAULTED” (the message in fact says it has corrupted data). I tried “-f” to force the import and that won’t work. The problem is that pool1 used to contain a pure backup of all my pool0 data. However, with pool0 not importing and having corrupted data, pool1 seems to have decided to just erase itself too.

    The only thing I can think of that might have happened is that maybe a drive in pool0 failed and then the rsync attempted and deleted everything on pool1.

    But I’m at a loss. Any ideas to Frankenstein this thing?

    Hope all is well.

  7. nowhereman999 says:

    Hi Jason,

    I’m sorry to hear of your troubles. I agree with your assumption: a drive in pool0 got corrupted, and the nightly rsync from pool0 then wiped out pool1. So pool0 is your only hope.

    I suggest you look for a program called SpinRite by Steve Gibson.

    Don’t look at any other disk maintenance utility. It works at such a low level on the drives that it doesn’t even matter what operating system the hard drives were running.

    Hopefully SpinRite can fix the bad disks in pool0 and then pool0 will show up properly again.

    Good luck,

  8. Aaron Eiche says:


    This setup is brilliant. I’m still increasing my understanding of how this whole process works (I’ve never set up a RAID or used ZFS). I appreciate your work on this and sharing it with the community. I hope that in the future the process can be automated in FreeNAS.

    I’m currently running FreeNAS, but with no disk mirroring (I know, not a great idea, but I’m only using it for non-vital media and a backup), and I’ve been wanting a solution that operates in the same vein as the Drobo (but at a significantly lower cost).

    You did it! Well done.

    • nowhereman999 says:


      It’s always great to read others are benefitting from my configuration!

      Good luck with your expandable FreeNAS!


  9. mike says:

    Rather than have two ZFS servers running in a single box, I’ve had two separate servers, each with a single pool, and to synchronise them I periodically rsync.

    All of the same advantages apply (delete server A and expand, copy back info from server B), but in the meantime I can store server B at a different person’s house. Proper backup against fire, flood, electrical surge and theft. I don’t see the point of having 10 copies of everything all in one physical location only to lose the lot to nature or a burglary.

  10. Jeff says:

    I have both pools setup. Might be simple questions, but here goes…
    1. How do I make one pool invisible to the network?
    2. How do I set up rsync properly between the two pools?
    3. How do I open the pool up to sharing on all fronts?

    I know it’s a lot, but this blog has guided

  11. Doug says:

    Hi Glen,

    Thanks for this post, it’s interesting.
    I’m wondering, with your method is it required to do some type of automatic check on pool0 before the rsync? What happens if pool0 is corrupt and a lot of data is lost, will pool1 then rsync and delete data during the process in order to match pool0?


    • I too would like to know this. Surely this is true. The filesystem has no way of knowing whether pool0 has been corrupted or legitimately modified.

      What happens if you have a faulty RAM module (I assume a lot of home-brew systems won’t have ECC RAM) which causes your data to become corrupt?

      I find it frustrating that there is little information about data decay on the Internet.

        • This method replicates a mirror RAID, and suffers from the same dangers as it. Both mirror RAIDs and this rsync method only protect against disk failure, not against data corruption/accidental deletion/virus activity/etc. To protect against that you must use backups. Don’t confuse disk redundancy with backing up data.
