ZFS & FreeNAS – A poor (and very geeky) man’s Drobo setup!

19 Apr

I’ve taken ZFS and FreeNAS to the next level of hard drive expansion. I’ve come up with a configuration that lets you turn a FreeNAS box (running the nightly builds, which include ZFS) into a Drobo.

Want to use four different-sized drives together as one large storage tank with single-drive fault tolerance, and maximize drive usage?

Do you also want to be able to replace the smallest-capacity drive with a larger one without having to move all of your data to a temporary location, rebuild a new array, and then copy all of your data back?

I’ve figured out how you can do it. It’s pretty geeky, but it is cool! I posted all the details at the following wiki:
http://wiki.mattrude.com/Freenas/ZFS_and_FreeNAS_expansion
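
The short version of the trick (a rough sketch only, with made-up device names and slice sizes; the wiki has the real step-by-step) is to slice each drive into partitions that match the other drives, build several small raidz1 vdevs out of matching slices, and put them all into one pool:

# partition each drive into matching slices first (e.g. 40G and 60G), then:
zpool create tank0 raidz1 ad0p1 ad1p1 ad2p1 ad3p1
zpool add -f tank0 raidz1 ad1p2 ad2p2 ad3p2    # -f because the vdev widths differ
# to upgrade the smallest drive later, partition the new drive the same way and
# let ZFS resilver each slice onto it - no copying everything off and rebuilding:
zpool replace tank0 ad0p1 ad4p1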

I found that the wiki formats the text automatically and kills a lot of the readability. So I suggest you view it in edit mode here:
http://wiki.mattrude.com/Freenas/ZFS_and_FreeNAS_expansion&action=edit

Maybe someone with better wiki skills can fix it up… I just wanted to share this info with the world.

Cheers,
Glen.

22 Comments

Posted on April 19, 2009 in ZFS

 

22 responses to “ZFS & FreeNAS – A poor (and very geeky) man’s Drobo setup!”

  1. Jason

    May 4, 2009 at 3:55 pm

    Glen- So since I am still sort of a FreeNAS noob, I haven’t attempted to fully execute this method. I really want to, but I’m no partition wizard, so I’ve been hesitant. Currently, I’m running your previous suggestion of creating two like-sized pools of drives which most closely represent half of the total hard drive space available. Then I just set an automatic rsync to copy the pool each night. This is a nice solution, but I’m anxious to try this method.

    I’ve been studying your spreadsheet and I’m not sure I’ve figured out what the most effective partitions are for my hard drives. I wonder if you wouldn’t mind helping me out? I shared a copy of your Google Doc with you in hopes you could point me in the right direction.

    Thanks again for this brilliant solution.

     
    • nowhereman999

      May 5, 2009 at 9:40 pm

      Hi Jason,

      I took a look at the spreadsheet and I came up with two methods of partitioning your drives to make the best use of them.

      Phase 1 and Phase 2. They both let you use 59% of your drive space.

      I suggest you use Phase 1; it keeps the second and third RAID arrays’ partition sizes smaller and therefore more flexible for future expansion.

      Just so you know, I’ve done some cool testing with ZFS. I built a ZFS pool similar to the one you are looking at making. Then I took out the four hard drives, swapped their order around, and put them in another computer. I set up FreeNAS on that computer booting from a USB thumb drive (using a nightly build), did a zpool import, and it found the ZFS pool; I was able to import it (with a -f) and use it without any problems… I thought that was wicked! I did that test to see what would happen if my computer died. I wouldn’t want to lose my data…
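
      Roughly, the move only takes a couple of commands on the new box (tank0 here is just an example pool name):

      zpool import            # lists any pools found on the attached disks
      zpool import -f tank0   # force-import, since the pool was not exported cleanly
      zpool status tank0      # check that every partition/vdev shows ONLINE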

      ZFS has some cool backup features built in too. I’ve only read about these and haven’t tested them yet. I’m currently running my zpool without backups and I feel good about it. I’m not saying that you should do the same; I just thought I’d pass along the info.

      I’ve also started using gpt to do the hard drive wiping and partitioning. It is more FreeNAS-ready and it is built in.

      I’ve created my own wiki to refer to. It might help you out if you decide to use gpt. You can always google it too. :)

      In the middle of the page you’ll see some of my ZFS documents:

      http://glenhewlett.comze.com/wiki/index.php/Main_Page
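
      As a rough example of the idea (here using gpart, the related built-in partitioning tool; the device name ad1 and the slice sizes are placeholders, and exact syntax varies between FreeBSD/FreeNAS versions):

      gpart destroy -F ad1                  # wipe any old partition table (destroys data!)
      gpart create -s gpt ad1               # create a fresh GPT partition table
      gpart add -s 40G -t freebsd-zfs ad1   # add slices sized to match the other drives
      gpart add -s 60G -t freebsd-zfs ad1
      gpart show ad1                        # verify the layout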

      Good luck with your FreeNAS setup,
      Glen.

       
  2. Jason

    May 6, 2009 at 2:59 am

    Awesome. Thanks again, Glen. Your help has been much appreciated and definitely above and beyond the call of duty. I was so excited to try this when you pinged me back that I got after it tonight.

    Using the spreadsheet as a guide, I destroyed, created, and then partitioned the drives. I then checked them to make sure everything went as planned. It did. Then I went about setting up the first pool. When I attempted to create it, I got a “can’t use GEOM provider” error on my smallest 300GB drive. I thought, OK, let’s just reboot and at least apply all these changes; then I can attempt to create the pool again. Which, looking back, was a bonehead idea. Now I can’t get the system to POST or get into the BIOS.

    I hope to resurrect this sucker.

    The testing continues…

    btw, very cool that you could remove the drives and do a fresh install on a new system and import the ZFS pool.

    Later on,
    J

     
  3. Jason

    May 8, 2009 at 2:19 pm

    Glen- I have two quick questions about your config. First, how long did it take for the RAIDZ pool to be built after you finished your partitioning and pool creation? (Twice I’ve accidentally changed a setting, causing a reboot before the RAID was built, and then I can’t get the drives to boot again. So I zero-fill and rebuild. I think this is the issue I’m having.)

    Secondly, in the configuration you suggested in Google Docs you called for a “RaidZ2” mirror. When I try to execute this, it tells me I need to have 3 drives in order to build a RaidZ2. Is there something I’m missing, or is there another way around this?
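
    (For reference, raidz2 does need at least 3 devices, so for just two equal-sized partitions a plain two-way mirror vdev is the usual alternative; the pool and device names below are only placeholders:)

    zpool add -f tank0 mirror ad2p3 ad3p3   # -f because the pool's other vdevs are raidz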

    Thank you, thank you again for your thought leadership and helping me build my dream NAS.

    later- Jason

     
  4. mattrude

    May 15, 2009 at 12:19 am

    I’m glad you enjoy my wiki. If you need anything, let me know.

    -Matt

     
  5. zadzagy

    June 3, 2009 at 11:21 pm

    Hey Glen,

    Thanks for the writeup. I took a few minutes tonight and added some wiki markup to better call out the various sections (I hope you don’t mind! If you do, you can just roll back the page). I didn’t get the chance to finish it up, but you can take a look at the code and see what I did if you want to.

    Cheers!

    Bob

     
    • nowhereman999

      June 4, 2009 at 10:54 pm

      Hi Bob,

      Thanks for updating the wiki; your new version does look cleaner and is easier to read (and follow). I’m glad to see your interest in my FreeNAS ZFS method.

      Please feel free to clean up the entire wiki.

      Thanks,
      Glen.

       
  6. Julian

    June 15, 2009 at 7:30 pm

    It would be very nice if this could be done through the web interface, or nearly automatically like the Drobo does!

    Then I could build the holy-grail NAS system I have been searching for for a long time.

    Drobo is nice. But veeery slow and expensive…

    A BeyondRAID-style feature in the FreeNAS system would be the killer feature. If it worked with more than 4 drives… *dreamin*

    Greetings from Germany!

     
  7. tekkman75

    March 1, 2010 at 10:45 pm

    Hi Glen,

    I know that not many others have commented since some time in 2009.

    I was wondering, since the 0.7 release of FreeNAS with ZFS support is now available as a stable release:

    Would I be able to do the following:
    – dual-boot FreeNAS with another desktop OS (Linux Mint 5 Elyssa KDE or a newer Linux Mint)
    – set up FreeNAS following your instructions, but with an initial layout of three 500GB drives, adding a 4th drive later

    Current box I am considering for FreeNAS deployment contains the following :
    mobo : Gigabyte GA-M78SM-S2H
    video : on-board, Nvidia 8200 Chipset
    gigabit ethernet : onboard, Nvidia 8200 Chipset

    cpu : AMD Athlon 64 x2 5000 2.6GHz
    ram : 4GB OCZ DDR2 800Mhz (ocz2v8002g)
    psu : Seasonic S12 Energy+ 550W 80Plus Certified
    hdd : 500GB Seagate 7200.11 (currently single booting)
    case : Antec Sonata

    add-on : 3 port firewire 400 PCI card

    current OS : Linux Mint 5 Elyssa KDE LTS
    purpose : 2nd workstation/home network printer/network backup & storage

    trying to setup : dualboot with FreeNAS w/ZFS + current OS (Linux Mint 5 Elyssa KDE LTS)

    I currently have the 3x 500GB drives I need in a Windows XP machine running RAID-5, and I wish to break this down due to the slow performance I am noticing with the motherboard RAID-5 on an Intel DP35DP board with 4GB of Kingston RAM. I intend to rebuild it as a RAID-1 (under Windows XP), or use VirtualBox to run FreeNAS and set up a virtual drive with FreeNAS + ZFS support (if this is possible) written to the Windows-XP-made RAID-1 (planning 2x 1TB Seagate 7200.12 3.5″ drives). I don’t know if this is a good strategy for the Intel DP35DP machine or not (it is only intended as a data storage drive, not boot).

    With the Gigabyte-based machine, I intend to install FreeNAS on either a Compact Flash or USB flash drive… I think I need the “embedded” version (still checking).

    I am also curious whether the 3x 500GB drives will result in anywhere near a TB of storage, like they did under RAID-5 in Windows XP.

    Thoughts appreciated.

    Thanks,
    tekkaman75

     
  8. tekkman75

    March 1, 2010 at 10:52 pm

    One more thing, sorry: does all the magic still need to be done at the command line since the official release of FreeNAS-amd64-LiveCD-0.7.1.4997? Or can it be done via the FreeNAS GUI?

    Thanks again for the great effort.

    tekkaman75

     
  9. Eric

    November 15, 2010 at 9:40 pm

    Hello,

    I just found your FreeNAS setup and it’s pretty interesting info.

    I’m still new to ZFS, and I tried to find the info in the manual, but I couldn’t find anything about some of the steps in your wiki.

    When you create the first raidz1 and add it to tank0, that’s cool; I follow you.

    But then you add the second raidz1, and the two others as well, to the same pool. And it doesn’t seem to increase your size, since the other raidz1 vdevs are not the same size as the first one.

    What is the benefit of doing that exactly?

    I thought that your plan of creating multiple partitions, then creating the raidz1 vdevs and adding each of them to a separate tank pool, would have been sufficient and even better, since you don’t lose some of the space?

    Anyway, maybe there’s something that I missed. So if you could give me more info, that would be appreciated!
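
    (To be concrete, my rough reading of the wiki’s steps, with placeholder device names, is something like the following; each raidz1 vdev added to the pool should contribute its own usable space to the total:)

    zpool create tank0 raidz1 ad0p1 ad1p1 ad2p1 ad3p1   # first, larger vdev
    zpool add -f tank0 raidz1 ad1p2 ad2p2 ad3p2         # second, smaller vdev
    zpool list tank0                                    # SIZE should grow as vdevs are added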

    Thanks!

     
  10. Stijn Hendrickx

    November 22, 2010 at 9:01 am

    Hi,

    I’d love to configure my FreeNAS using this wiki, but I got stuck in one of the first steps: fdisk.
    I replaced /sbin/fdisk with the fdisk-linux file and got this:

    freenas:/sbin# /sbin/fdisk-linux
    ELF interpreter /libexec/ld-elf.so.1 not found
    Abort

    Stijn

     
    • nowhereman999

      November 22, 2010 at 5:47 pm

      Hi Everyone,

      I’m sorry, I can’t support this info any longer; I moved my server over to an unRAID setup about a year ago and I haven’t looked back.

      I still love ZFS, but I don’t have the time to fight with a server anymore. I just want to throw more drives at it and let it grow. That’s what unRAID lets me do.

      Good luck, with your ZFS storage.
      Glen

       
  11. nakanote

    December 24, 2010 at 12:05 pm

    Hi,

    I really appreciate you sharing this info.
    I’ve built a “poor man’s Drobo” and found that it’s more flexible when using “mdadm” and “LVM” on Linux.
    I wrote up the method here: http://nakanoteblog.blog136.fc2.com/blog-entry-3.html
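
    (The rough idea, with placeholder device names and not the exact commands from that post: build md RAID arrays out of equal-sized partitions across the mismatched drives, then pool the arrays together with LVM so they appear as one big volume.)

    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
    mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb2 /dev/sdc2 /dev/sdd1
    pvcreate /dev/md0 /dev/md1          # make the arrays LVM physical volumes
    vgcreate tank /dev/md0 /dev/md1     # one volume group spanning both arrays
    lvcreate -l 100%FREE -n data tank   # single logical volume using all the space
    mkfs.ext4 /dev/tank/data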

    Again, thank you very much for your idea.

     
  12. vyccid

    February 12, 2011 at 9:45 am

    Thanks for the writeup; very interesting solution. Thanks for the tip about unRAID, glad it works for you.

    For others like me who may be thinking about this ZFS setup, my understanding is that using drive partitions/slices like this means the disk cache isn’t used well, significantly impacting performance (search “ZFS_Best_Practices_Guide” for details). Of course it still might be worthwhile to squeeze usage from small disks for some, while others want more performance, but it’s something to consider.

     
  13. carl

    February 15, 2012 at 12:32 am

    The link is gone :( Do you have another link?

     
  14. nowhereman999

    February 15, 2012 at 7:19 pm

    Hi Matt,

    Thanks for hosting the wiki. I have updated the link.

    Glen

     
  15. bnjf

    April 11, 2012 at 9:15 pm

    For the initial set of disks [80,120,200,200] and [120,200,200,250], wouldn’t the following partitioning allow more usage?

    80 120 200 200:
    [40 0 40 40]
    [40 0 40 40]
    [ 0 60 60 60]
    [ 0 60 60 60]

    for 400 available, migrating to:

    120 200 200 250:
    [ 0 40 40 40]
    [60 60 0 60]
    [60 0 60 60]
    [ 0 90 90 90]

    for 500 available.
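
    (If I’m reading these right, each row is a 3-wide raidz1 slice, so its usable space is twice the slice size: 80 + 80 + 120 + 120 = 400 for the first layout, and 80 + 120 + 120 + 180 = 500 after the migration.)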

     
