Heading Home after Ignite 2016

While traveling back home, here are some musings on Microsoft Ignite 2016. I'm not going to regurgitate all the news and announcements here.

There were many and they were diverse. Azure identity, security, storage, management, Windows Server 2016, Hyper-V, Storage Spaces, Storage Replica … all offer a wide variety of new capabilities and options. It's impressive now and it will be even more impressive in the future. When I connect the dots and look at the opportunities, a picture of what the future roadmap can and might be takes shape in front of my eyes. That's the value I can add to an organization that's committed to its future and realizes it needs to leverage IT to its fullest potential.

That means you cannot treat IT as a facility just because we build it on commodity products. Every success is built on creative and well-directed use of the components and their capabilities. This requires a lot more than lip service or merely covering up bad choices and political ambitions with a thin layer of "big principles". The keys to success are speed, agility and insight in a world where mobile and cloud offer tremendous new opportunities. Large, long-term, centralized projects have their place, but sticking to them by default, in the wrong place and in the wrong way, will lead to failure in a 24/7/365 mobile world where federation and collaboration across boundaries are paramount. Small, cost-effective and efficient projects delivering real value with a purpose will make giants, both in government and the private sector, stumble and even fall.

We have so much opportunity here that many cannot see the forest for the trees anymore. This will lead to many failed projects, ambitions and organizations, along with a waste of time and money. That's where we can make the difference.

As an MVP I was very happy to be able to attend in order to calibrate my compass and correct course. In good tradition, I signed the billboard for attending MVPs at Ignite 2016. I'm already looking forward to heading back to Redmond for the MVP Global Summit to continue the discussion at the Microsoft headquarters.


To me, the Ignite 2016 edition was one of intensive networking with Microsoft experts and management, extending to 3rd party vendors and partners of Microsoft. This, in combination with the discussions with my peers to discover their views and insights, has given me a very up-to-date view of where we are and where things are going. That's the value I'm taking back home to work with and to help people reach their full potential. That's not an easy task, as many today are, or at least feel, anywhere from a bit out of balance to completely lost. Technologists are the ones who need to step up, all the way to the board level, and steer their organizations towards a successful future. Many companies are not ready for this and some management feels threatened by it. There's basically no need for that fear, as we are technologists, not politicians. We solve problems, we don't create them. We drive companies towards success, if you let us.

DELL Compellent Storage Center 7.1 Certified for Windows Server 2016

When it comes to selecting storage, especially a "traditional" SAN, you all know that price/performance-wise I've been using the DELL Compellent series with great success for many years now. It's a very capable solution that also has some other benefits when it comes to Windows Server and Hyper-V. It has one of the better hardware VSS providers and way better than average support for ODX and UNMAP, but it's also very good at delivering fast support for new versions of Windows Server. This has allowed us to move from Windows Server 2008 R2 to 2012 and from there to Windows Server 2012 R2 very fast.
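
If you want to double-check that a host is actually using ODX and UNMAP before crediting the array, these quick checks work (a minimal sketch; in both cases 0 means the feature is enabled):

# ODX: FilterSupportedFeaturesMode 0 = enabled, 1 = disabled (the documented switch).
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' -Name 'FilterSupportedFeaturesMode'

# UNMAP/TRIM: DisableDeleteNotify 0 = delete notifications (UNMAP) are enabled.
fsutil behavior query DisableDeleteNotify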

In that regard I'm very happy to see that Storage Center 7.1 is already in the Windows Server Catalog as certified for Windows Server 2016.


Customers that have up-to-date hardware and want to move fast to benefit from and leverage the new and improved capabilities in Windows Server 2016 Hyper-V, clustering, networking, … are ready to do so. Nice!

Disk2VHD on a Generation 2 VM results in an unbootable VHDX

Most people who have been in IT for a while will know the Windows Sysinternals tools, and most certainly the small but brilliant Disk2VHD tool we can use for Physical to Virtual (P2V) and Virtual to Virtual (V2V) conversions. It's free, it's good and it's trustworthy, as it's made available by Microsoft.

For legacy systems, whether they are physical machines with IDE/SATA/SAS controllers or virtual generation 1 VMs with IDE controllers, things normally go smoothly.
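
As an aside, Disk2VHD can also be driven from the command line; if memory serves the Sysinternals documentation, the basic usage looks like the sketch below, but do check disk2vhd /? on your version (the output path is a placeholder):

# Capture all volumes of the running machine into one VHDX file.
# Output path is a placeholder; verify switches with disk2vhd /? first.
disk2vhd * D:\Conversions\server01.vhdx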


But sometimes you have hiccups. One of those is when you do a V2V of a generation 2 virtual machine using Disk2VHD. It's a small but annoying issue: when you create a new generation 2 VM and point it to the converted OS VHDX, it just won't boot.


Why do a V2V in such a case, you might ask? Well, sometimes it's the only or fastest way to get out of a pickle with a ton of phantom, non-removable checkpoints you've gotten yourself into.
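
You can quickly see whether a VM is dragging along such checkpoints before deciding on a V2V (a minimal sketch; 'DemoVM' is a placeholder name):

# List all checkpoints for a VM; lingering ones usually show up here even
# when the GUI won't let you delete them. 'DemoVM' is a placeholder.
Get-VMSnapshot -VMName 'DemoVM'

# The avhdx chain the VM is actually running on:
Get-VM -Name 'DemoVM' | Get-VMHardDiskDrive | Select-Object VMName, Path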

But back to the real subject: how to fix this. What we need to do is repair the boot partition. Well, recreate it actually, as when you look at it after the conversion you'll notice it is RAW. That's no good. So let's walk through how to fix a VHDX that you created from a source generation 2 Hyper-V VM via Disk2VHD.

First of all, create a new generation 2 VM that we'll use with the new VHDX we created using Disk2VHD. Don't create a new VHDX, but select to use an existing one and point it to the one we just created with Disk2VHD. Rename it if needed to something more suitable.
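
In PowerShell, that step would look something like this (a sketch; the name, memory and path are placeholders):

# Create a generation 2 VM that attaches the existing VHDX produced by Disk2VHD.
New-VM -Name 'RestoredVM' -Generation 2 -MemoryStartupBytes 4GB `
       -VHDPath 'D:\Conversions\server01.vhdx'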

Don't boot the VM, but add a DVD drive and attach the Windows Server ISO of the version your VHDX contains.


Move the DVD to the top of the boot order in the firmware settings.
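
Scripted, those two steps look roughly like this (a sketch with placeholder names and paths):

# Attach the Windows Server ISO and make the DVD drive the first boot device.
Add-VMDvdDrive -VMName 'RestoredVM' -Path 'D:\ISO\WindowsServer.iso'
Set-VMFirmware -VMName 'RestoredVM' -FirstBootDevice (Get-VMDvdDrive -VMName 'RestoredVM')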


The VM will boot to the DVD when you hit a key.

Select your language and keyboard layout when asked, then don't install or upgrade the OS but choose to repair your computer, which brings you to the recovery environment where you can open a command prompt.

Type diskpart and list the disks. Select the disk we need (the OS disk, the only one here) and list the volumes. You can see that volume 3 of 99MB is RAW. That's not supposed to be that way. So let's fix this by creating the boot loader directory structure, repairing the boot record by creating the boot sector, and copying the needed boot files into it.


select volume 3

assign letter=L


That's it; we can now use that 99MB volume to make our disk bootable to Windows again. Type exit to leave diskpart.
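
Putting the diskpart steps together: the volume also needs a FAT32 format before it's usable, since the EFI system partition is FAT32 by design (a sketch; disk and volume numbers are from this example, so verify with list disk and list volume first). Saved as fixesp.txt it runs with diskpart /s fixesp.txt, or you can type the lines interactively:

rem Recreate the EFI system partition's file system; numbers are from this example.
select disk 0
select volume 3
assign letter=L
format fs=fat32 quick
exit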


Now that we have a formatted boot partition, we can create the needed folder structure, fix the boot record and configure our UEFI boot loader.

Switch to the L: volume

Create the efi\microsoft\boot folder structure for the boot loader, as shown below, with the md (make directory) command.

Type: bootrec /fixboot to create the boot record.

Type: bcdboot C:\Windows /l en-us /s L: /f ALL

This creates the BCD store and copies the boot files from the Windows system directory.
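
For reference, here's the whole repair sequence from the recovery command prompt in one go (C:\Windows is the OS volume as seen from WinRE, which may be a different letter on your system):

rem Run from the WinRE command prompt after assigning L: in diskpart.
L:
md efi\microsoft\boot
bootrec /fixboot
rem Rebuild the BCD store and copy the boot files to the EFI partition.
bcdboot C:\Windows /l en-us /s L: /f ALL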


Just click Continue to exit and continue to Windows Server 2012 R2.


… and voilà, your new VM has booted.


Now it's a matter of cleaning up the remnants of the original VM's hardware, such as the NIC and maybe some other devices. The NIC is very important, as it will have any static TCP/IP configuration you might want to assign tied to it, which means you can't reuse it for your new VM. So, the first thing to do is uninstall the old network adapters from Device Manager; you'll see them when you select "Show hidden devices" in the View menu.
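
To make those ghost NICs show up, the classic documented trick is to launch Device Manager with the non-present devices flag set (a quick sketch, run inside the restored VM):

# Show non-present devices in Device Manager; the environment variable is the
# classic documented switch for making hidden hardware visible.
$env:DEVMGR_SHOW_NONPRESENT_DEVICES = 1
devmgmt.msc
# Then select View > Show hidden devices and uninstall the stale network adapters.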

Good luck!

Veeam Leads the way by leveraging ReFS v3 capabilities


You might have noticed that I'm pretty impressed by what Microsoft is doing with ReFS v3 in Windows Server 2016. You can read some of my musings on it in ReFS vNext Block Cloning and ODX, and take a look at a comparison between ReFS and ODX speeds when creating VHDX files in Lightning Fast Fixed VHDX File Creation Speed With ReFS on Windows Server 2016.

Note that this is also leveraged for accelerated checkpoint merges, VHDX resizing etc.
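
You can see this for yourself by timing a fixed VHDX creation on an ReFS v3 volume (a minimal sketch; R: is assumed to be an ReFS-formatted volume):

# Time creating a 50GB fixed VHDX on an ReFS volume (R: is an assumption).
# On ReFS v3 this completes near-instantly; on NTFS without ODX the
# zeroing of the file takes far longer.
Measure-Command { New-VHD -Path 'R:\test-fixed.vhdx' -Fixed -SizeBytes 50GB }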

Now it goes without saying that Hyper-V (they’re the tip of the spear at MSFT) and other Microsoft products would take advantage of the capabilities of ReFS. But now we know that Veeam Backup & Replication 9.5 has made use of ReFS to help with the resilience of their backups, the speed of their Synthetic Full backups and the space required.


To a Hyper-V MVP and a Veeam Vanguard it was obvious that these two combined just had to lead the way for others to follow.


Veeam Backup & Replication 9.5 will leverage ReFS v3 …



and by doing so they deliver the following benefits:

  • Shorter backup windows and a reduced backup storage load on the repository.
  • Reduced backup target storage capacity, reducing or eliminating the need for deduplication in many scenarios.
  • Better backup data protection by leveraging the native ReFS capabilities to protect against bit rot, which was one of the prime design goals Microsoft had for ReFS.

How is this done?

ReFS v3 has "fast cloning" technology which Veeam is leveraging. This results in up to 10 times faster creation and transformation of synthetic full backup files! ReFS fast cloning allows for creating new files without physically moving data blocks between files. This is what delivers even shorter backup windows and a lower backup storage load on the repository or repositories.

They use what they call "spaceless full backup technology", which allows multiple full backup files that share the same physical data blocks to reside on the same ReFS volume. As a result they need less storage capacity, which can reduce or eliminate the need (and cost) for deduplication appliances while leveraging commodity storage.

Let's see how this is done. A "legacy" full backup is created and consumes 30% of the storage capacity. Then we make incremental backups.


Three incremental backups add 3 × 10% of delta to the needed backup storage capacity, which brings the total to 60%.


We create a synthetic full backup and the copies of the data require another 30% of space, bringing the total to 90%.


Now let's compare this to v9.5 leveraging a Windows Server 2016 ReFS-formatted backup target repository. Instead of copying data, ReFS references the already existing data blocks for a new file. This saves on I/O, space and time!


Is this safe? What if those data blocks that are referenced multiple times become corrupted? Well, Veeam already has protection against that in place! But it goes the extra mile, as ReFS has the capabilities to protect against that itself, or its power would also become its biggest weakness.

Veeam's data integrity streams integration leverages the ReFS data integrity scanner, and even proactive error correction when used in combination with Storage Spaces, to protect backup files from bit rot and allow for more reliable forever-incremental archiving. This helps make the spaceless full backup technology trustworthy and safe, alongside the health checking and error fixing capabilities already available in Veeam Backup & Replication.
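
For the curious, the ReFS integrity stream settings involved can be inspected and toggled per file from PowerShell (a sketch; the backup file path is a placeholder on an ReFS repository volume):

# Inspect and enable ReFS integrity streams on a file in the backup repository.
# The path is a placeholder; the cmdlets ship with the Storage module.
Get-FileIntegrity -FileName 'R:\Backups\job1.vbk'
Set-FileIntegrity -FileName 'R:\Backups\job1.vbk' -Enable $true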


I'm impressed by the forward-looking and fast adoption of the capabilities of ReFS v3 by Veeam, and I'm testing Backup & Replication v9.5 Beta in the lab today. They have more up their sleeve, by the way, as they are doing some interesting work with PowerShell Direct to make backups ever more resilient in ever more scenarios. More on that later.

Anyone who said Veeam would lose its edge in the world of Hyper-V backups when Microsoft introduced its own native change block tracking (Resilient Change Tracking) has clearly never dealt with Veeam seriously and professionally. I have, and I'm always happy to chat with them, as they have serious technical skills combined with the vision and business acumen that make sure they're leaders in the business of backup. It makes me proud to be a Veeam Vanguard and an MVP with a specialization in Hyper-V.