Creating Hyper-V Failover Cluster (Part 1)

by Marin Franković on 20 April, 2010


This will be a two-part series on how to configure and test a Hyper-V Live Migration scenario without using hardware storage. To recreate this demo you will need three physical machines with two network cards in each (although three are recommended for the nodes and two for the storage server), Windows Server 2008 R2 Enterprise, Windows Storage Server 2008, and, for the last demo, an SCVMM 2008 R2 installation disc. In both videos, the language is Croatian.


SAN (Windows Storage Server 2008)
  IP: (Public and Storage LAN)

NODE1 (Windows Server 2008 R2 Enterprise)
  IP: (Public and Storage LAN)
  IP: (Cluster LAN)

NODE2 (Windows Server 2008 R2 Enterprise)
  IP: (Public and Storage LAN)
  IP: (Cluster LAN)

[Diagram: failover cluster]

In the first part I will explain and show how to configure the storage and the failover cluster. At the end, you will learn how to add a virtual machine as a clustered resource and how to invoke a controlled Live Migration.

Configuring Windows 2008 Storage Server for iSCSI

First, obtain Windows Storage Server 2008 from the MSDN site and also download the Microsoft iSCSI Software Target CD. Steps for configuring an iSCSI target on the storage server:

  1. Install Windows Storage Server 2008 x64 (name it SAN or whatever you like)
  2. Install the latest patches and service packs
  3. Create a domain on it (not recommended, only for demo purposes)
  4. Install the iSCSI x64 target
  5. From Administrative Tools, start "Microsoft iSCSI Software Target"
  6. Right-click on iSCSI Target and create a new one
    1. Type in a name (e.g. Storage)
    2. On the iSCSI Initiators Identifiers page, click Advanced and add the IP addresses of the two nodes that will be accessing this target
  7. Repeat step 6 to create another target and name it "Quorum"
  8. Now we have to create a disk for each iSCSI target
  9. Right-click on the Storage iSCSI target and select the option to create a virtual disk (third from the top)
    1. On the File option, enter the location of the VHD file (e.g. C:\storage.vhd)
    2. Enter the size of the disk (min. 30 GB)
  10. Repeat step 9 for the Quorum iSCSI target (e.g. C:\quorum.vhd, min. 512 MB)

Now we have created two disk resources on our storage server.
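The Microsoft iSCSI Software Target on Storage Server 2008 is configured through the GUI as shown above, but if you repeat this setup on the built-in iSCSI Target of Windows Server 2012 or later, the same two targets and their disks can be scripted. A sketch only; the target names, VHDX paths, and initiator IP addresses are placeholders from this demo, not values the original article specifies:

```powershell
# Create the Storage target and allow both cluster nodes to connect
# (initiator IPs are placeholders - use your NODE1/NODE2 addresses)
New-IscsiServerTarget -TargetName "Storage" `
    -InitiatorIds "IPAddress:192.168.1.21","IPAddress:192.168.1.22"

# Create the backing virtual disk (min. 30 GB) and map it to the target
New-IscsiVirtualDisk -Path "C:\storage.vhdx" -SizeBytes 30GB
Add-IscsiVirtualDiskTargetMapping -TargetName "Storage" -Path "C:\storage.vhdx"

# Repeat for the small Quorum target
New-IscsiServerTarget -TargetName "Quorum" `
    -InitiatorIds "IPAddress:192.168.1.21","IPAddress:192.168.1.22"
New-IscsiVirtualDisk -Path "C:\quorum.vhdx" -SizeBytes 512MB
Add-IscsiVirtualDiskTargetMapping -TargetName "Quorum" -Path "C:\quorum.vhdx"
```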

Adding disk resources to NODE1 and NODE2

  1. Install Windows Server 2008 R2 Enterprise on the two remaining computers (I named them NODE1 and NODE2 for easier management)
  2. Install the latest patches and service packs
  3. Connect the two nodes via a private network
  4. Add both nodes to the domain created on the SAN storage server
  5. Install the Hyper-V role and the Failover Clustering feature on both nodes
  6. Shut down NODE2 (IMPORTANT)
  7. On NODE1, start the iSCSI Initiator from Administrative Tools
    1. Select Yes to start it automatically if asked
    2. Select OK to open the required ports (for demo purposes you can disable the firewall on all three computers, but for production open the required ports manually)
    3. Select Discovery
    4. Click Discover Portal
    5. Enter the IP address of the SAN server and click OK
    6. Select Targets (you should see two targets)
    7. Select each target and click Connect
    8. Select Volumes and Devices
    9. Click Auto Configure
    10. Click OK
  8. Open the Disk Management tool from the Server Manager console
  9. Scroll down until you see the two new disks
  10. Bring them online, initialize them, and format them with NTFS
  11. For the smaller disk (Quorum), select Q as the drive letter
  12. For the larger disk (Storage), select J as the drive letter
  13. Shut down NODE1
  14. Start up NODE2
  15. Repeat steps 7–12 on NODE2 (you will not need to format the disks again, but the drive letters must be the same as on NODE1)
  16. Start up NODE1
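Steps 8–12 above can also be done from the command line with diskpart, which ships with Windows Server 2008 R2. This is a sketch under the assumption that the new iSCSI LUN shows up as disk 2; the disk number and the volume label are placeholders, so check `list disk` first:

```
diskpart
DISKPART> list disk
DISKPART> select disk 2
DISKPART> online disk
DISKPART> attributes disk clear readonly
DISKPART> convert mbr
DISKPART> create partition primary
DISKPART> format fs=ntfs label=Storage quick
DISKPART> assign letter=J
DISKPART> exit
```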

Now we have two nodes that are connected to the same iSCSI targets on SAN server.
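The initiator side (step 7 above) also has a command-line equivalent, iscsicli.exe, which is built into Windows Server 2008 R2. A sketch; the SAN address and the target IQNs below are placeholders for illustration only (run `iscsicli ListTargets` to see the real IQNs your SAN exposes):

```
iscsicli QAddTargetPortal 192.168.1.10
iscsicli ListTargets
iscsicli QLoginTarget iqn.1991-05.com.microsoft:san-storage-target
iscsicli QLoginTarget iqn.1991-05.com.microsoft:san-quorum-target
```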

Creating Failover Cluster

  1. On NODE1, start the Failover Cluster Manager console
  2. In the middle pane, click Validate a Configuration
    1. Add all nodes that will be part of a cluster
    2. Run all tests
    3. All results should be green (you can ignore warnings about updates)
  3. Select option Create Cluster
  4. Add all nodes and enter cluster name
  5. When the cluster is created, right-click Storage in the tree pane and add both disks to it (the cluster will automatically configure the smaller disk, Quorum, as the witness and the larger disk, Storage, as a storage disk)
  6. Select your cluster name in the left pane
  7. In the middle pane select Cluster Core Resources and verify that they are all online
    1. Usually you will have to change the Cluster IP Address resource from automatic to manual; after that, bring any failed resources online

Our Failover Cluster is now complete.
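The validation and cluster-creation steps above can also be scripted with the FailoverClusters PowerShell module that ships with Windows Server 2008 R2. A sketch, not the exact procedure from the video; the cluster name and static IP address are placeholders:

```powershell
Import-Module FailoverClusters

# Run the same validation the wizard performs
Test-Cluster -Node NODE1, NODE2

# Create the cluster (name and IP address are placeholders)
New-Cluster -Name HVCLUSTER -Node NODE1, NODE2 -StaticAddress 192.168.1.50

# Add the two iSCSI disks; the smaller one becomes the quorum witness
Get-ClusterAvailableDisk | Add-ClusterDisk
```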

Creating highly available virtual machine

Now you can minimize the Failover Cluster Manager console on NODE1 and open Server Manager or the Hyper-V console. Before you start, create a new External virtual network on both NODE1 and NODE2, connected to one of your physical network adapters (not the cluster private adapters).

  1. Copy a Windows Server 2008 ISO file to the J disk on NODE1 (the J disk is our iSCSI disk)
  2. Open the Hyper-V console, right-click on NODE1 and create a new virtual machine
  3. Name it (e.g. FailoverDemo)
  4. Store it on the J disk (IMPORTANT)
  5. Give it 1024 MB of RAM
  6. Connect it to the previously created network
  7. Create a new virtual disk, 20 GB in size, on the J disk (IMPORTANT)
  8. On the installation options page, select the second option (install from a bootable CD/DVD) and point it to the ISO image you copied to the J disk
  9. Click Finish
  10. Right-click on the newly created virtual machine and select Settings
  11. On the lower left side, select Automatic Start Action and choose Nothing
  12. Click OK

Now you can minimize the Hyper-V console and maximize the Failover Cluster Manager console.

  1. Right-click Services and applications and select Configure a Service or Application
  2. Find Virtual Machine near the bottom of the list, select it and click Next
  3. Select the newly created virtual machine and click Next

Our virtual machine is now configured as highly available. Restore the Hyper-V console and start your virtual machine. It should boot from the ISO image that is attached to it and install the Windows Server 2008 R2 operating system. After the installation is completed, install the latest Integration Services into the virtual machine.
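The three wizard steps above for making the VM highly available have a short PowerShell equivalent in the FailoverClusters module on Windows Server 2008 R2. A sketch; the VM name is the one chosen earlier in this demo:

```powershell
Import-Module FailoverClusters

# Register the existing Hyper-V VM as a clustered resource
Add-ClusterVirtualMachineRole -VMName "FailoverDemo"
```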

Migrating virtual machine from NODE1 to NODE2

  1. Restore the Failover Cluster Manager console
  2. Select Services and applications
  3. Right-click on your virtual machine (it should be running)
  4. Select "Live migrate virtual machine to another node" and select NODE2

After a couple of minutes (no more than 2–3) the virtual machine should be migrated to NODE2. You can test the migration process by pinging the virtual machine (ping <IP address> -t) while it is being migrated. Ping loss should be only one, or at most two, packets (since migration here is done over iSCSI disks rather than real storage hardware). Here is the video of the procedure and the controlled failover.
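A controlled live migration can be triggered from PowerShell as well. This is a sketch using the FailoverClusters module; it assumes the clustered group name matches the VM name from this demo (check `Get-ClusterGroup` if yours differs):

```powershell
Import-Module FailoverClusters

# Live-migrate the running clustered VM to NODE2
Move-ClusterVirtualMachineRole -Name "FailoverDemo" -Node NODE2
```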


In the second part I will explain how to install SCVMM 2008 R2, add our cluster to it, and migrate a virtual machine from one node to another using the SCVMM 2008 R2 Administrator Console.


Seni April 22, 2010 at 21:57

It is so nice that I can watch a presentation like this in my native language… After lots of presentations by people who only think they speak English, it is refreshing to watch something sensible and well edited. Congratulations Marin! I hope you will also record a presentation on installing and setting up SCVMM R2… But with your presentations you kill every chance for me to get a salary raise 🙂


Marin Franković April 23, 2010 at 8:25

Hi Seni,

The SCVMM 2008 R2 setup and video are coming soon. Sorry about your salary. 🙂


Andrew Alaniz April 30, 2010 at 17:40

I just ran across your blog; great how-to, by the way. I was curious if you had run into an issue I am seeing. I have my cluster set up and Hyper-V in high-availability mode. The failover works perfectly for a standard, run-of-the-mill VM. The scenario I am facing is that a number of my production VMs connect to the SAN via iSCSI to store SQL databases, Exchange databases, etc. I created a VM and gave it an iSCSI connection to a test LUN. This is the only thing that changed, and now it will not migrate. It went into a saved state, and the saved state was bad. I had to delete it and recreate the VM (the VHD was still viable). Have you tried this scenario? Any suggestions? I basically want to be able to virtualize my SQL Server and Exchange Server. Thanks


Marin Franković April 30, 2010 at 18:35

Hi Andrew,

do I understand correctly: you added another virtual machine to the failover cluster and assigned it an iSCSI target on the storage? So basically, your virtual machine is running on LUN1 and it has an attached iSCSI target on LUN2?


Andrew Alaniz April 30, 2010 at 18:40

You are correct. Let's take a physical SQL Server first. You have the physical machine connected to two LUNs, one for transaction logs and one for DBs. The goal would be to virtualize this. When I attempt to, I get issues during the live migration where the virtual network adapter errors out. Is the only solution to convert those physically connected LUNs to VHDs, add new LUNs to the CSV, copy the VHDs to the CSV, and add the VHDs to the SQL VM? There would be potential performance hits with this solution, which is why I was hesitant to go this way first. Makes sense?


Marin Franković April 30, 2010 at 21:06

Well, I do not have that much experience with SQL in virtualized environments, but here is a great article from Microsoft describing that scenario.


CypherBit May 28, 2010 at 11:51

I’m planning on building a 2-node Hyper-V cluster. Since I have critical applications running on one of the two servers (node2) that will be part of the cluster, is it somehow possible to prepare the 1st node, have everything installed, and once node 1 is ready, virtualize the second server?

How far would I get in this scenario? The SAN is naturally ready.


Marin Franković May 29, 2010 at 20:15

Hi CypherBit,

not a good idea. Node 2, which you want to virtualize, should not be part of the failover cluster. You cannot have node 2 virtualized and part of the cluster at the same time.


CypherBit June 21, 2010 at 9:16

Long overdue, but I completely forgot to comment.

Thank you for your reply. Fortunately it appears I’ll have enough budget to get two servers, not just one, so it should be much easier.


Andrius July 28, 2010 at 8:02

Windows Storage Server 2008 can’t be promoted to a domain controller.


Marin Franković July 28, 2010 at 10:59

Hi Andrius,

I am aware that the whitepaper for Storage Server 2003 states that it cannot be used as a DC, but if you look closely at the video in this post (use full screen) you will notice that I have the AD console in the taskbar on the same machine on which I am creating the iSCSI targets. I was using Windows Storage Server 2008 Enterprise edition as the iSCSI target. Sadly, that test environment is now gone and I cannot check it. As far as I remember, the DC was installed on the Storage Server. I did try to install Storage Server in a virtual machine now, but I keep getting errors during installation, so again I was unable to test the AD installation procedure. I do not have more time right now to test it further, but as soon as possible I will give it another look.


Mac November 14, 2011 at 9:08

I have one question: I need to add a data disk to a virtual machine which is in a Hyper-V cluster. What is the best practice for adding this disk, so that when the host machine shuts down, the VM will move to another host smoothly with the new disk attached?


Marin Franković November 14, 2011 at 13:22

Hi Mac,

here is a nice article for you: . Check out step 11. Basically, it is recommended to use the Failover Cluster Manager console for such tasks.


khairil anwar November 25, 2011 at 3:46


I planned to create a cluster with 4 Hyper-V hosts, but due to a limitation of the network switch I only managed to get one network connection for each machine.

So that connection will be for both SAN and Internet access. Your diagram shows a private connection between the servers. I could cross-connect 2 servers, but now I have 4. So can I just use the connection that already exists for the cluster LAN?

The second thing is, I have Active Directory replication on Hyper-V (on another, non-clustered machine). Since the cluster nodes are already joined to the existing domain, can the cluster host that replicated AD? If it can, can I set that AD replica up as a failover guest?


Marin Franković November 27, 2011 at 15:56

Hi khairil anwar,

it is highly recommended that all nodes in a cluster have a direct, dedicated heartbeat network between them. Connect the servers as you propose, then run the failover cluster validation wizard and see what results it gives you.

It is strongly discouraged for the Hyper-V host machine to be a domain controller, especially if guest virtual machines are in the same domain. AD failover is unnecessary if you have more than one domain controller. For now, it is recommended that at least one DC be deployed on physical hardware.


Fertje January 28, 2012 at 20:55

Hi there,
Nice discussion. Sure, one DC on physical hardware is recommended, but what about the storage server itself? I’d say it is best to have Storage Server 2008 R2 running on physical hardware too.

Does anyone have something to say about this, concerning either configuration (with Hyper-V failover) or performance (virtualized vs. physical hardware)?



Marin Franković January 28, 2012 at 20:59

Hi Fertje,

as you may have figured out, this post and configuration are purely for demonstration purposes. I would recommend running the storage server (iSCSI) on physical hardware in production scenarios.


Cool April 11, 2012 at 14:36

The video is good, but it would be better if you hadn’t prepared everything in advance and instead did everything live in the presentation, so we could see the installation process, the creation steps, and any errors. Otherwise, top marks.


Marin Franković April 11, 2012 at 15:24

Hi Cool,

the point is to get everything working on the first try, because that means less editing for me later. 🙂


Gaby Makhlouf July 12, 2012 at 21:16

Hi guys, I have followed this step by step, but I am facing a weird problem; I don’t know if someone has any idea about it.
When I finished configuring the cluster, NODE1 and NODE2 showed the hard disks (Quorum and Storage) as Reserved and could not bring them online.
Can anybody advise on this?


Marin Franković July 12, 2012 at 21:24

Hi Gaby,

only one node at a time can own the cluster disks. Did your cluster pass validation?


Shuja Najmee August 2, 2012 at 2:50

Is there any way to build a failover cluster using two Hyper-V hosts with their own local storage, without using a SAN? Does a failover cluster always require shared storage? Having all VM files in a single shared storage location creates a single point of failure for all data. How can the Hyper-V hosts provide failover when the SAN is lost?

Answer to these questions will help me understand Hyper-v failover cluster. Thank you for answering.



Marin Franković August 2, 2012 at 7:14

Hi Shuja,

you could use iSCSI target software to create virtual disks on the local storage, but again it would be a single point of failure. In Windows Server 2012 you can use local storage to create a failover cluster by utilizing a file-server failover cluster or “shared nothing” redundancy. However, I still think that failover clusters should be created on storage devices, since those are originally designed to be fault tolerant. You can always buy another storage device and replicate between the two.


Zoho August 2, 2012 at 7:33

I am now testing Windows Server 2012. I also think that even an iSCSI cluster requires storage, which means there is no benefit to having that cluster while the storage is still a single point of failure, unless, as Marin said, we have replicated storage.


Marin Franković August 2, 2012 at 8:21

Hi Zoho,

there will always be something that is a single point of failure. Think of the power supply (internal and external), memory, processor, disks, network, operating system, SAN… the list goes on and on. The point is: more money, more redundancy.


Elias November 6, 2012 at 12:26

Maybe this is too trivial a question, but I mainly work with Unix-flavoured systems. Is it a must to join a Windows 2008 R2 cluster of Hyper-V VMs to a domain?
Thanks in advance.
Thanks in advance.


Marin Franković November 6, 2012 at 12:31

Hi Elias,

nodes in a cluster must be part of a domain. VMs that are running on the cluster do not have to be part of a domain.

