Ehbeta's Techie Blog

A Network Admin's daily finds and other poopoo


A Poor Man’s DR with ESXi 4.1 – Part 2 Openfiler Config

    Preface:

    In my earlier blog, we installed our Openfiler server on a virtual machine in our ESXi network.  For the most part, the install is rather cut-and-dried, with not too many options to configure on an appliance like this.

    For this part, we will get into the web configuration console for our Openfiler server and do what we need to in order to get a functional NFS share running, then present it to our ESXi servers as a datastore for backups.

    Getting to the website

    The website you need to get to is at https://<youripaddress>:446

    Log in with the default Openfiler account:

    ID=openfiler
    PW=password

    Change the password now for security reasons by clicking the Accounts tab at the top, then the Admin Password link on the right side under the Accounts section.

    Configuration

    Network Access

    Under the System tab, verify the Network Configuration and the Network Interface Configuration.  Change them as appropriate, but in most cases, with a proper default installation where you configured your IP addresses, you shouldn't have to mess with this much.

    Toward the bottom, the Network Access Configuration section is very important.  You want to tell Openfiler which networks have access to your NAS.  Simply type in a logical name for your subnet, the network address, the subnet mask, and the type, which in this case would be Share.  Any address within a subnet you add here will have access to the Openfiler system for your NAS configuration.
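    As an example, an entry for the /24 subnet that both of my ESXi hosts sit on might look like the following (the name and addresses are placeholders, so substitute your own network):

        Name:      esxi-lan          (any label you will recognize)
        Network:   192.168.1.0       (the subnet itself, not a single host)
        Netmask:   255.255.255.0
        Type:      Share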

     

    Volume Management

    After getting your network configuration verified, you can now move on to making volumes out of the new hard disk you created for your Openfiler VM.

    Select the Volumes tab.  If you have no volume groups created yet, you will see a page similar to the following:

      

    Click on the underlined blue link to create new physical volume(s).  From there, you will be forwarded to a page like this:

     

    Select the appropriate disk to edit, which in this case is the /dev/sdb disk, at our 100GB size.  Here you will see the following page:

      

    At the bottom of the page, you will see a section to create a partition on the disk you selected.  In normal cases, you can just make one partition (for this test, we will).  By default, it will take the full amount of space available, so you can just hit the Create button to create your partition.

    After you hit the Create button, your page will change to something similar to this:

    Notice that we now have a list at the top, which lets us view the device and the primary partition we just created, as well as an option to delete it if we want to.  At the bottom, we see that the space on the partition is now all used, and we are given the option to create a logical partition for the device.  Again, keep the default selections as they are and hit the Create button.

    Now your screen will change to the following:

     

    Now with our physical disk configured the way it needs to be, we can move on to the next step, which is creating a volume group.  Click on the Volumes tab at the top again, and you should see the following screen.  If not, you can click on the Volume Groups link on the right side under the Volumes section.

     

    Select the physical volume we just created in the previous steps with the checkbox, and give the volume group a name.  In this case, we will name it vmx-backup.

     A few seconds later, we will see the Volume Group Management screen, which will show us our newly created volume group.
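    For the curious, Openfiler is driving standard Linux LVM behind this page, so the physical volume and volume group steps above roughly correspond to the console commands below.  This is only a sketch for understanding or troubleshooting; the device and group names are the ones from this walkthrough, and the GUI does all of this for you:

        # turn the partition on the 100GB disk into an LVM physical volume
        pvcreate /dev/sdb1
        # create the volume group that will hold our backup volume
        vgcreate vmx-backup /dev/sdb1
        # confirm the group exists and shows the expected free space
        vgdisplay vmx-backup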

     

    One more thing to do (I promise!) before we can open a share for our backups (or data, or whatever you’re doing).

    Just like any other disk that needs to be shared, it first needs some kind of file system to read and write data.  So now we are going to create a volume in the group (vmx-backup) we previously created, and format it with an appropriate file system.

    If we click on the Volumes tab at the top once more, we will see a page such as this:

     

    Here, we will have to Add a Volume to our group, and format it.  On the right side, select the link Add Volume under the Volumes section.  On the screen you see next, you will have an option at the top to select the volume group (we should only have one, if you followed the directions correctly), and also see the Block storage stats…which you shouldn’t have to worry too much about right now.

    The bottom part is the important one, because you will have to give the volume a name, the amount of space you want for it, and the file system type you want it to be.  For your ESXi backups, you should be using the Ext3 file system, which will suit what we are doing well.

    A quick tip: for the most part, you will probably use the full free available space to make your volume.  Where "Required Space (MB):" is shown at the bottom, you can copy the Free Space value from above (102368 MB in this example) and paste it into the field provided.  This is a sure way to consume the total amount of free space.  If you would rather use only a portion of your free space, just enter the amount you need.  You can create multiple volumes in a volume group if you wish to keep some things separated.  Here we are just concerned with using all the space and creating one big Ext3 volume.

     

    After hitting the Create button, it may take some time to format.  This of course depends on the size of the volume you are creating.  In this example, I created a volume of approximately 100 GB in size, and it took about 2 minutes before the screen refreshed with a list of my volumes.  All I can say here is be patient…it’s working… 🙂
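    Under the hood this step is ordinary LVM plus mkfs work.  A rough console equivalent (again just a sketch, using the vol-backup name and size from this example) would be:

        # carve a ~100GB logical volume named vol-backup out of the vmx-backup group
        lvcreate -L 102368M -n vol-backup vmx-backup
        # put an ext3 file system on it so we can store backup files
        mkfs.ext3 /dev/vmx-backup/vol-backup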

    When it does complete, you will see your page look like this:

     

    Creating the Network Share

    OK, so far we’ve been getting crafty with getting our raw disk storage managed to a point where we can now share it with the rest of our network.  Now we need to create the share so our VMware ESXi server can see it and add it as a datastore.

     On the top of the page, we want to click on the Shares tab, and it will open a screen such as this:

     

    Remember what these are?  Well, the top one is our volume group, and the pink section (or sections, if you created more than one) are our volumes.  We now need to create a share on the volume we created in the previous steps.  Select the volume you need, which in this case is the vol-backup volume.  You will immediately get a small window that looks like this.

     

    All this is asking you to do at this point is create a folder in your formatted volume.  Call it what you like; in my test case, I called it simply “vmx” (without the quotes).  This is just a name that reminds me what this folder is going to have inside it.  It is no different from creating a folder on your Linux or Windows based workstation with its file explorer.

    You will now see a sub-folder under the vol-backup that looks like this:

     

    Now, in order to share the “vmx” folder we just created, click on the folder and you will see the following:

    Here, you can just hit the Make Share button at the bottom, and you will be taken to this page:

    For this test, I will keep the share name and description as they are.  We will change the Share Access Control Mode from “Controlled access” to “Public guest access.”  If you would like to use Controlled access with LDAP groups and users within your Active Directory, you can do so.  For the scope of this document, we will just use the basic Public guest access.

     

    A word here: just because it says “Public” doesn’t mean it’s wide open to everyone.  Earlier in this document, we added the appropriate subnets for access under the System tab, which controls what subnets can reach our NAS solution.  As we go further, I’ll show you some other options we will have to select in order to get things working properly and to control access using our public selection.

    Now, once we select the Public option, and hit the Update button, you will see the bottom of our page populate with the following:

    You can see here the names of the networks we added previously under the System tab.  For each network, we need to select either No, RO (read-only), or RW (read-write) access.  Because our ESXi boxes need to actually write the backups to our NAS, we need to select the RW option in the NFS column as shown below:

    After selecting the RW radio buttons, we click Update and are placed back at this page.  For the scope of this document, we don’t yet need to worry about the underlined Edit link in the Options column for our NFS settings.
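    What Openfiler builds from these selections is a standard NFS export.  Assuming the volume group, volume, and folder names from this walkthrough and a 192.168.1.0/24 share network (both the path layout and the addresses are illustrative, you don't need to type any of this), the resulting /etc/exports entry looks roughly like:

        # one line per shared folder, per allowed network; extra options come from the Edit link
        /mnt/vmx-backup/vol-backup/vmx  192.168.1.0/255.255.255.0(rw,...)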

    Now we are all set up, but we need one final setting.  We have to turn on the NFS service on our server.

    Go to the Services tab and you will see the following; the NFS service is turned off by default:

    All you have to do here is select the underlined Enable link for the NFSv3 server row; the page will refresh and show the Status as Enabled.  It should take only a few seconds.
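    If you want to double-check from the Openfiler console (optional, the web GUI status is enough), something along these lines should confirm the service is running and the export is published.  Exact commands and output can vary by Openfiler build:

        # confirm the NFS server is running
        service nfs status
        # list what is currently exported and to which networks
        exportfs -v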

    Adding as a Datastore in ESXi

    OK, now we are ready to roll!  Let’s find out if what we did can get us working from our ESXi boxes.  You can do this from either the same server that your Openfiler is running on, or a different one.  Just keep in mind that we need to use an ESXi server that belongs to one of the subnets we placed in our list.

    Log into your ESXi server.  Select your server at the top of the list, and go to Configuration->Storage.

    Select the Add Storage… link at the top of your storage page.  You will get a wizard which will look like this.  Select Network File System as your choice, and click Next.

    Here you enter your Openfiler server name or IP address, then the folder path.  If you remember it from the previous steps you can type it in, or just go to the Shares page of your Openfiler administration site and copy / paste it into the “Folder:” field provided.  Also give your datastore a name, so you can tell what you are using this datastore for.  From the example I followed, mine will look like this:

    Click on the Next button, then Finish.  If all works well, you will be placed back at your Configuration screen, with your newly created datastore available in the list.
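    If you ever need to do this without the wizard, ESXi 4.1 can also mount the same NFS export from its console with esxcfg-nas.  The IP address, export path, and datastore label below are just the ones from this example, so adjust them to your environment:

        # mount the Openfiler NFS export as a datastore named openfiler-backup
        esxcfg-nas -a -o 192.168.1.50 -s /mnt/vmx-backup/vol-backup/vmx openfiler-backup
        # list NFS datastores to confirm it mounted
        esxcfg-nas -l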

    We are all finished!

    Summary:

    Here, we configured our Openfiler server and created an NFS share visible to our ESXi servers.  We then attached the NFS share as a datastore on our ESXi server or servers.

    Next, we’ll go through our ghettoVCB file and tailor it to back up our virtual machines to our NFS share.

    Visit the next Blog in this series to continue -> A Poor Man’s DR with ESXi 4.1 – Part 3 ghettoVCB

A Poor Man’s DR with ESXi 4.1 – Part 1 Openfiler Install


    Introduction

    Here we will be looking at how to install the Openfiler server from a downloaded ISO image from the Openfiler site.  Remember that Openfiler is open source, but they do have support options available for purchase, including an Administrator's guide as well.

     

    The document on their website does a very good job of explaining how to install Openfiler.  What I have done here is copied a lot of the text and pictures from their site and pasted them here.  Then I edited it for an install on a VMware server, including some options that you may need to take into account when installing.  I made some comments as well on changes I made during the install, due to either disk sizes or other small tweaks.

     

    This document describes the process of installing Openfiler using the default graphical installation interface. If you experience any problems with the graphical install track, such as a garbled screen due to the installer not being able to auto-detect your graphics hardware, please try a text-based install. The text-based install track is described here.

    Total time for installation is about 15 – 20 minutes including software installation to disk.

     

This install shows pictures from an installation that may differ from how we install our basic virtual machines in house.  For example, we give our sda drive at least a 10GB partition at a minimum, and either 512MB or 1024MB of RAM.

     

    Also, it may be a good idea to create the drive space you are going to use for your NAS storage now.  In the next blogs after this, I will be commenting on a 100GB VMDK that was created for use by Openfiler as NAS storage.  When installing Openfiler, be aware that if you have an sda and an sdb drive available, you should leave the sdb alone, without partitioning and without formatting the drive.  The Openfiler interface is where you should be carving your physical (or VMDK) disks into volumes and formatting them properly.

     

    Installation

     

    The installation process is described with screenshots for illustrative purposes. If you are unable to proceed at any point with the installation process or you make a mistake, use the Back button to return to previous points in the installation process. Any errors or intractable problems with the installation process should be reported either to the Openfiler Users mailing list or, alternatively, if you feel you have found a bug please use the bug tracking system. If you report a bug, be sure to enter a valid email address so that you can keep track of any updates to it right up to resolution. You *must* first register with the bug tracker in order to be able to post a new bug.

     

    Starting the Installation

     

    On a VMware ESXi server, create your virtual machine by walking through the wizard in Custom mode.  For mine, I selected the Linux -> Other 32-bit version.  I gave it a 10GB and a 100GB hard drive (for the examples in the next blogs) and 512MB of RAM.
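    If you prefer the console, the second (100GB) disk can also be created with vmkfstools instead of through the wizard, then attached to the VM with Edit Settings.  This is just a sketch; the datastore path and file name below are placeholders, so point them at wherever your Openfiler VM's folder actually lives:

        # create a 100GB virtual disk that Openfiler will later carve up as NAS storage
        vmkfstools -c 100G /vmfs/volumes/datastore1/openfiler/openfiler-nas.vmdk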

     

    Don’t forget to attach your downloaded ISO image to the CD-ROM drive, and to check Connect at power on.

     

    After the system POSTs, the installer boot prompt will come up. At this point, just hit the Enter key to proceed.

     

    After a few moments, the first screen of the installer will be presented. If at this point your screen happens to be garbled, it is likely that the installer has been unable to automatically detect your graphics subsystem hardware. You may restart the installation process in text-mode and proceed accordingly in that case. The first screen of the installer is depicted below. The next step is to click on the Next button to proceed with the installation.

    Graphical installation: Proceed

     

    Keyboard Selection

    This screen deals with keyboard layout selection. Use the scroll bar on the right to scroll up and down and select your desired keyboard layout from the list. Once you are satisfied with your selection, click the Next button to proceed.

     

    Disk Partitioning Setup

    Next comes the disk partitioning.  You must select manual disk partitioning as it ensures you will end up with a bootable system and with the correct partitioning scheme. Openfiler does not support automatic partitioning and you will be unable to configure data storage disks in the Openfiler graphical user interface if you select automatic partitioning. Click the Next button once you have selected the correct radiobutton option.

     

    Disk Setup

    On the disk setup screen, if you have any existing partitions on the system, please delete them. DO NOT DELETE ANY EXISTING OPENFILER DATA PARTITIONS UNLESS YOU NO LONGER REQUIRE THE DATA ON THEM. To delete a partition, highlight it in the list of partitions and click the Delete button. You should now have a clean disk on which to create your partitions.

     

    You need to create three partitions on the system in order to proceed with the installation:

     

“/boot” – this is where the kernel will reside and the system will boot from

“swap” – this is the swap partition for memory swapping to disk

“/”– this is the system root partition where all system applications and libraries will be installed

 

    Create /boot Partition

    

    Proceed by creating a boot partition. Click on the New button. You will be presented with a form with several fields and checkboxes. Enter the partition mount path “/boot” and select the disk on which to create the partition. In the illustrated example, this disk is hda (the first IDE hard disk). Your setup will very likely be different, as you may have several disks of different types. You should make sure that only the first disk is checked and no others. If you are installing on a SCSI-only system, this disk will be designated sda. If you are installing on a system that has both IDE and SCSI disks, please select hda if you intend to use the IDE disk as your boot drive.

    The following is a list of all entries required to create the boot partition:

     

  • Mount Point: /boot
  • Filesystem Type: ext3
  • Allowable Drives: select one disk only. This should be the first IDE (hda) or first SCSI disk (sda)
  • Size(MB): 100 (this is the size in Megabytes, allocate 100MB by entering “100”)
  • Additional Size Options: select Fixed Size radiobutton from the options.
  • Force to be a primary partition: checked (select this checkbox to force the partition to be created as a primary partition)
    After configuration, your settings should resemble the following illustration:

    Once you are satisfied with your entries, click the OK button to create the partition.

     

    NOTE:  Don’t forget!  If you created a separate VMDK that you will be using as your NAS storage (for example 100 GB, like my demonstrations), then you need to be sure that you have not added any of that physical drive to your /, /boot or swap partitions!  In future blogs, I will show you how to edit the VMDK that you will be using as your shared storage drive.  More than likely, it will come up as “sdb” in your “Allowable Drives:” window.

     

    Create / (root) Partition

    Proceed by creating a root partition. Click on the New button. You will be presented with the same form as previously when creating the boot partition. The details are identical to what was entered for the /boot partition, except this time the Mount Point: should be “/” and the Size(MB): should be 2048MB, or 1024MB at a minimum.  Please notice here that if the drive you will be using for your NAS storage is selected, be sure to unselect it (more than likely it will be sdb, or sdc, and so on).  We will be carving out our volume(s) and formatting that space using the web GUI instead.

     

    Once you are satisfied with your entries, click the OK button to proceed.

     

    Create Swap Partition

    Proceed by creating a swap partition. Click on the New button. You will be presented with the same form as previously when creating the boot and root partitions. The details are identical to what was entered for the boot partition, except this time, instead of a mount point, use the drop-down list to select swap as the file system type. The Size(MB): of the partition should be at least 1024MB and need not exceed 2048MB.  DO NOT place your swap space on the physical drive you will be using for your NAS storage.  When we format that drive and partition, your swap space will be gone!

     

     

    Once you are satisfied with your entries, proceed by clicking the OK button to create the partition. You should now have a set of partitions ready for the Openfiler Operating System image to install to. Your disk partition scheme should resemble the following:

     

    Graphical Installation: final partition scheme
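    Since the screenshot is not reproduced here, the finished scheme built from the sizes above would look roughly like the following on a SCSI-only VM (partition numbering may differ slightly on your system; the key point is that the 100GB data disk stays untouched):

        /dev/sda1   /boot   ext3    100 MB
        /dev/sda2   /       ext3    2048 MB
        /dev/sda3   swap            1024 MB
        /dev/sdb    left unpartitioned for Openfiler NAS storage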

    You have now completed the partitioning tasks of the installation process and should click Next to proceed to the next step.

     

    Network Configuration

    In this section you will configure network devices, system hostname and DNS parameters. You will need to configure at least one network interface card in order to access the Openfiler web interface and to serve data to clients on a network. In the unlikely event that you will be using DHCP to configure the network address, you can simply click Next and proceed to the next stage of the installation process.

     

    If on the other hand you wish to define a specific IP address and hostname, click the Edit button at the top right corner of the screen in the Network Devices section. Network interface devices are designated ethX where X is a number starting at 0. The first network interface device is therefore eth0. If you have more than one network interface device, they will all be listed in the Network Devices section.

     

    When you click the Edit button, a new form will popup for you to configure the network device in question. As you do not wish to use DHCP for this interface, uncheck the Configure Using DHCP checkbox. This will then allow you to enter a network IP address and Netmask in the appropriate form fields. Enter your desired settings and click OK to proceed.

     

    Once you have configured a network IP address, you may now enter a hostname for the system. The default hostname localhost.localdomain is not suitable and you will need to enter a proper hostname for the system. This will be used later when you configure the system to participate on your network either as an Active Directory / Windows NT PDC client or as an LDAP domain member server. You will also, at this point, need to configure gateway IP address and DNS server IP addresses. To complete this task you will need the following information:

     

    Hostname

    Gateway Address

    DNS Servers (Primary, secondary, Tertiary)

     

     

    The following illustration shows an example where a hostname has been assigned, and gateway IP, primary and secondary DNS information has also been entered.

    Once you are satisfied with your entries, please proceed by clicking the Next button.

     

    Time Zone Selection

    Set the default system time zone. You can achieve this by following the instructions on the left side of the screen. If your system BIOS has been configured to use UTC, check the UTC checkbox at the bottom of the screen and click Next to proceed.

     

    Set Root Password

    You need to configure a root password for the system. The root password is the superuser administrator password. With the root account, you can log into the system to perform any administrative tasks that are not offered via the web interface. Select a suitable password and enter it twice in the provided textboxes. When you are satisfied with your entries, click Next to proceed with the installation process.

     

     

    NB: the root password is meant for logging into the console of the Openfiler server. The default username and password for the Openfiler web management GUI are: “openfiler” and “password” respectively.

     

     

    About To Install

    This screen informs you that installation configuration has been completed and the installer is awaiting your input to start the installation process which will format disks, copy data to the system and configure system parameters such as setting up the boot loader and adding system users. Click Next if you are satisfied with the entries you have made in the previous screens.

     

    Note

    You cannot go back to previous screens once you have gone past this point. The installer will erase any data on the partitions you defined in the partitioning section.

     

     

    Installation

    Once you have clicked Next in the preceding section, the installer will begin the installation process. The following screenshots depict what happens at this point.

     

     

     

     

    Installation Complete

    Once the installation has completed, you will be presented with a congratulatory message. At this point you simply need to click the Reboot button to finish the installer and boot into the installed Openfiler system.

     

    Note

    After you click Reboot, be sure to remove the ISO image from your virtual machine so we don’t boot back into installation mode.

     

     

     

    Conclusion

     

    In my next blog, I will go through how to access and configure the NFS storage that you want to make available as a NAS datastore to your VMware ESXi servers.  Be sure that you remember the IP address or DNS name that you gave the server you just created.

     

    For configuration, go to my next blog -> A Poor Man’s DR with ESXi 4.1 – Part 2 Openfiler Config.

     

    You can learn how to manage the Openfiler system by browsing the administrator guide online which can be found here.

     

A Poor man’s DR with ESXi 4.1 – Introduction

Ok, so my department isn’t a cash cow…

Which might be why they had used VMware Server 1.0.x for years prior to my arrival.  I always thought it was used just for testing and for enthusiasts, but I guess it’s got its place.  I myself come from the background of ESX 3, 3.5 and vSphere, and when dragged-and-dropped into a new company, I was in for a shock on what’s out there in the free world for VMware.

So take on the challenges I did.  One, for starters, was to get ESXi introduced into the company for better management and control of our VMware infrastructure.  Of course, I would like to go vSphere, but right after our “Great Recession” (which is actually a depression, but that’s another topic), footing the bill for new hardware and OS licenses for VMware turned into a pipe dream.

The other challenge is to start thinking about our DR (disaster recovery) process as it pertains to the non-datacenter sites in our company.  Simple enough: each site has, for the most part, the same set of servers.  Some sites may deviate, but here is the typical lineup:

  • Windows Domain Controller
  • File and Printer Server
  • Mail Server (Groupwise, yuck!)
  • SQL 2005/2008 server (it won’t be a part of this DR process, I’m using Replication for it instead)
  • Application Server

In the event of a failure of the single server that has all of my virtual machines stored on it, what do I do?  As a conventional admin, we would repair the server (bad disks, failed power supply, failed motherboard), and then rebuild almost everything.  Most of our sites are using Tivoli or BackupExec for file-based backups only, and no VCB (hence, only using VMware Server 1.0.x).

As luck would have it, at just about every site I have the ability to use 2 servers, which are physically identical, and rebuild them to our liking.  One houses our VMware Server install on top of Windows and is hosting 3-5 VMs, and the other is a dedicated SQL server, either SQL Standard or Express 2005.

I know there are a thousand ways to sink a ship, or skin a cat….or whatever, but here is the way we decided to institute a DR process for our ESXi 4.1 servers in one of our sites, and hopefully, do at other sites too.  I figured I would share the wealth, for those between the same rocks and hard places as I.

Issues to tackle:

Here are some things that stood in our way, we needed to look into resolving:

  • Storage is local only on each server – no shared storage available (like iSCSI or NAS)
  • Money
  • Experience with Bash or other Linux based scripting
  • Multiple backup instances of VMDK storage on DR server

With the two physically identical servers, we have about 1.1 TB of storage available on each after a RAID 5 with one hot spare.  So storage was not an issue, except that it was not shared.  We do have, at one of our main offices, an iSCSI solution with snapshots, backups, and redundancy…but not the same at our child or remote sites.

Money just was not there, to purchase any of the VMware products to aid in our DR abilities, let alone to purchase the full version of VMware vSphere.

I’m a Windows admin, and not much of a Linux / Unix guy.  So when looking at using Bash scripts, or even configuring / installing services on a VMware box, I knew only enough to be dangerous.

Since the storage on my DR server was the same size as on my production machine, how am I able to keep multiple backups on my DR server?

Theory to ponder

If you are attached to the world of ESXi, then you should know about the wonderful script called ghettoVCB, which is the cheap admin’s way to get what the VCB (VMware Consolidated Backup) products offered by VMware provide.  In my first take on ghettoVCB and ghettoRestore, I was a little stumped on how I could use them.  I don’t have a central storage medium for both my Production and my DR server to talk to.  And also, if I needed to take the time to do a restore using the ghettoRestore script, then I don’t really have a DR process, do I?  I really wanted a manual failover, with very few steps to perform, to get things back up and online, without waiting for a long restore of a large VMDK file.

If only I had shared storage…NFS perhaps…available…someplace….

NFS would be good…I can attach a Datastore to both ESXi 4.1 servers using the same NFS share…

Humm…I don’t have a NFS share anywhere….nor space anywhere to make one

So let’s think outside the box for just a minute.  If I only had these two servers, and nothing else to play around with in my environment, the only logical solution would be to use the available DR server space to hold the VMDK backups.  But…but…how do I do that and run my DR servers at the same time?  I don’t know of a way to have both ESXi servers use the local storage of my DR server.  If there was only a way to make an NFS server on my DR server….

WAIT!  Can I use Openfiler on my DR server?  Sure, but only as a VM.

OK, that may work.  

I can then let both VMware servers see the NFS share and attach a datastore on each, so I can back up and restore (Add to Inventory) on both boxes.

And in VMware ESXi 4.1, in order to add a backed-up VM onto another server, I can just right-click the .vmx file of the backup in the Datastore Browser of the vSphere Client, select “Add to Inventory”, and my VM is added!
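If you would rather do that from the ESXi console (handy if you ever script the failover), the same "Add to Inventory" step can be done with vim-cmd.  The datastore and VM paths here are placeholders for whatever your backup folder ends up being called:

    # register the backed-up VM with this host (equivalent to Add to Inventory)
    vim-cmd solo/registervm /vmfs/volumes/openfiler-backup/VM1-backup/VM1.vmx
    # then power it on using the ID returned by the register step
    vim-cmd vmsvc/power.on <vmid>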

The Test

This sounds like it may be a winner, but I need to test it out with what I have currently.  I need to gather all my resources to make this work, so here we go.

  • ESXi 4.1 installed on both my Production and DR servers (I guess this could work for 3.5 too, the 32-bit version)
  • Openfiler 2.3 iso image for making a VM server.  It’s free which is good, and has a great reputation from what I hear for making simple NFS, iSCSI and other disk shares.
  • ghettoVCB script.  We don’t need the ghettoRestore for this actually, just the ghettoVCB.
  • A couple of testing VM’s running on our production server.

After I have both VMware vSphere 4.1 servers up and running, I can create my VMs on my production server at any time.  I simply created 2 servers just to test out – bare-bones Windows OS with the VMware Tools installed, and not much more.

For my DR server, I’m going to need to have Openfiler 2.3 installed.  I created a simple 10 GB partition, though from what I hear you can make it 1 or 2 GB and it would work just fine.  I also added the remainder of my Datastore1 space on my DR box to be another drive for Openfiler of about 600GB (I’m only using 600 because my SQL server, which is not part of this process, is already taking up about 300 for itself).  This 600 GB partition will store all the backups from ghettoVCB.
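To give a sense of where this is headed (Part 3 covers it properly), ghettoVCB is driven by a handful of variables at the top of ghettoVCB.sh.  A minimal sketch, assuming the NFS datastore name from the later parts and a rotation count of my own choosing, looks something like this:

    # the datastore (our Openfiler NFS mount) where backups get written
    VM_BACKUP_VOLUME=/vmfs/volumes/openfiler-backup
    # thin-provision the backup copies to stretch the 600GB volume
    DISK_BACKUP_FORMAT=thin
    # how many backup generations to keep per VM
    VM_BACKUP_ROTATION_COUNT=3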

Stop!

Here, I’m going to end this article, and start the next series of articles as continuations of this topic.

For our next topic, I will go over installing Openfiler on the VMware server acting as our disaster recovery box, as just another server in our VM environment.

Go to my next article -> A Poor Man’s DR with ESXi 4.1 – Part 1 Openfiler Install