I’ve discussed this before in Windows Server 2012 Deduplication Results In A Small Environment but here’s a little updated screenshot of a backup volume:
Not too shabby I’d say, and it’s 100% free, in-box, portable deduplication … What are you waiting for?
Here’s an early X-Mas gift from Altaro. They are giving away 50 free licenses of their desktop backup solution to all Hyper-V admins until December 24th, 2012. Altaro is better known for their good and cost-effective Hyper-V Backup product.
There is no catch. Now, there is no such thing as a free lunch in life, but there are some very decent meals to be had at very democratic pricing. This is one such case. All you need to do is send them a screenshot of Hyper-V in your environment that proves you’re really using Hyper-V. I guess that means I qualify, given the number of Hyper-V related screenshots on my blog. I’m going to check it out for sure.
What do you get? 50 licenses of their desktop backup solution ($2,000 worth of software). You’re free to use them in your company, at home, or as a gift to friends and family. Fifty licenses is something a lot of companies using Hyper-V in the SMB market can leverage to protect their desktops, so that’s a pretty nice gift.
If you’re interested, you can go to http://www.altaro.com/hyper-v/50-free-pc-backup-licenses-for-all-hyper-v-admins
There’s more information about Altaro Hyper-V Backup at http://www.altaro.com/hyper-v/ and http://www.altaro.com/hyper-v-backup/?LP=Xmas. If you’re an SMB shop in need of easy-to-use, affordable backup software for Hyper-V, and you want one with full support for all the features in Windows Server 2012, you should try them out. In that respect they were very fast to market, beating most if not all competitors I know (a lot of them still don’t have that support). They are also a non-aggressive vendor, which is something I appreciate.
There is a small environment that provides web presence and services. In total there are about 20 production virtual machines. These are all backed up to a Transparent Failover File Share on a Windows Server 2012 cluster that is used to host all the infrastructure and management services.
The LUN/volume for the backups provides about 5.5TB of storage. The folder layout is shown in the screenshot below. The backups are run “in guest” using native Windows Backup, which has the WindowsImageBackup subfolder as its target. Those backups are archived to an “Archives” folder. That archive folder is the one that gets deduplicated; the WindowsImageBackup folder itself is excluded.
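For those who want to replicate this, the configuration boils down to two cmdlets. A minimal sketch, assuming the Data Deduplication feature is installed and E: is the backup volume (the drive letter and paths are placeholders for this example):

```powershell
# Enable deduplication on the backup volume
Enable-DedupVolume -Volume "E:"

# Exclude the folder holding the most recent backups and
# only optimize files that are more than a day old
Set-DedupVolume -Volume "E:" `
    -ExcludeFolder "E:\WindowsImageBackup" `
    -MinimumFileAgeDays 1
```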
This means that the most recent backup is not deduplicated, guaranteeing the fastest possible restore times at the cost of some disk space. All older (> 1 day) backup files are deduplicated. We achieve the following with this approach:
About 20 virtual machines are backed up every week (small delta and lots of stateless applications). As the optimization runs we see the savings grow. That’s perfectly logical: the more backups we make of virtual machines with a small delta, the more deduplication can shine. So let’s look at the results using Get-DedupStatus | fl
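For reference, this is the sort of query we run; the properties listed are the ones worth keeping an eye on (a sketch, not a dump of our environment):

```powershell
# Report the deduplication results per volume
Get-DedupStatus |
    Format-List Volume, Capacity, FreeSpace, SavedSpace, SavingsRate,
                OptimizedFilesCount, InPolicyFilesCount
```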
A couple of weeks later it looks like this.
Give it some more months, with more retained backups, and I think we’ll keep this around 88%-90%. From tests we have done (ddpeval.exe) we think we’ll max out at around 80% savings rate, but it’s a bit less here overall because we excluded the most recent backups. Guess what, that’s good enough for us. It beats buying extra storage or paying a wad of money for disk deduplication licenses from some backup vendor or appliance. Just using the built-in deduplication mechanisms in Windows Server 2012 saved us a bunch of money.
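If you want an estimate up front, DDPEval.exe (it ships with the Data Deduplication feature in Windows Server 2012) can be pointed at a folder, share, or volume before you enable anything. The path below is just an example:

```powershell
# Estimate potential deduplication savings for the archive folder
DDPEval.exe E:\Archives
```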
The next step is to also convert the production Hyper-V cluster to Windows Server 2012 so we can do host-based backups with native Windows Backup, which now supports Cluster Shared Volumes (another place where that 64TB VHDX size can come in handy, as Windows Backup now writes to VHDX).
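A host-level backup could then look something like the line below; the target share and VM names are placeholders, and this assumes the -hyperv option that wbadmin gained in Windows Server 2012:

```powershell
# Back up two virtual machines from the Hyper-V host to a file share
wbadmin start backup -backupTarget:\\backupserver\backups -hyperv:"WebVM01,WebVM02" -quiet
```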
The volume properties report we’re using 3TB of data, so 2.4TB is free.
Looking at the backup folder, you see 10.9TB of data stored on 1.99TB of disk.
So the properties of the volume report more disk space used than the actual folder containing the data. Let’s use WinDirStat to have a look.
So the above agrees with the volume properties. In the details of this volume we again see about 2TB of consumed space.
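If you’d rather reconcile those numbers from PowerShell than from Explorer and WinDirStat, something like this sketch will do it (E:\Archives stands in for the deduplicated folder):

```powershell
# Logical size of everything under the archive folder
# (this is what Explorer reports as "Size")
$logicalTB = (Get-ChildItem -Path "E:\Archives" -Recurse -File |
    Measure-Object -Property Length -Sum).Sum / 1TB
"{0:N2} TB logical" -f $logicalTB

# What deduplication says it saved and what's left on the volume
Get-DedupStatus -Volume "E:" | Format-List SavedSpace, FreeSpace, UsedSpace
```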
Could it be that the volume is reserving some space to ensure proper functioning?
When you dive deeper you get a cool view of the storage space used. Where Windows Explorer is aware of deduplication and shows the non-deduplicated size for the VHD file, WinDirStat is not: it always shows the size on disk, which is a whole lot less.
This is the same as when you ask for the properties of a file in Windows Explorer.
Is it the best solution for everyone? Not always, no. The deduplication is done on the target after the data is copied there, so in environments where bandwidth is seriously constrained, and there is absolutely no technical and/or economical way to provide the needed throughput, this might not be a viable solution. But don’t dismiss this option too fast: in a lot of scenarios it is a very good and cost-effective feature. Technically and functionally it might be wiser to do it on the target, as you don’t consume too much memory (deduplication is a memory hog) and CPU cycles on the source hosts. Also nice is that these deduplicated files are portable across systems. VEEAM has demonstrated some nice examples of combining their deduplication with Windows deduplication, by the way, so that might also be an interesting scenario.
Financially, the cost of deduplication functionality with hardware appliances or backup software hurts like the kick of a horse straight to the head. So even if you have to invest a little in bandwidth and cabling, you might be a lot better off. Perhaps, as you’re replacing older switches with new 1Gbps or 10Gbps gear, you might be able to recuperate the old ones as dedicated backup switches. We’re using mostly recuperated switch ports and native Windows NIC teaming, and it works brilliantly. I’ve said this before: saving money whilst improving operations rarely gets you fired. The sweet thing is that this is achieved by building good and reliable solutions, which means they are efficient even if it costs some money to achieve. Some managers focus way too much on efficiency from the start, as to them it means nothing more than a euphemism for saving every € possible. Penny wise and pound foolish. Bad move. Efficiency, unless it is the goal itself, is a side effect of a well-designed and optimized solution. A very nice and welcome one for that matter, but it’s not the end-all be-all of a solution, or you’ll have the wrong outcome.
As I blogged in a previous post, we’ve been building a Disk2Disk based backup solution with commodity hardware, as all the appliances on the market are either too small in capacity for our needs, ridiculously expensive, or sometimes just suck, or a combination of the above (Virtual Library Systems or Virtual Tape Libraries come to mind: one of my biggest technology mistakes ever, at least the ones I had, and in my humble opinion).
Here’s a logical drawing of what we’re talking about. We are using just two backup media agent building blocks (server + storage) in our setup for now so we can scale out.
Now in a future post I hope to discuss Storage Spaces & Windows deduplication thrown into the mix.
Not too shabby … > 1TB/hour
Too great …
In close-up you are seeing just two Windows Server 2012 Hyper-V cluster nodes, each being backed up over a native LBFO team of 2*1Gbps NIC ports, to one Windows Server 2012 backup media agent with a 10Gbps pipe. Look at the max throughput we got …
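For reference, building such a native LBFO team is a single cmdlet on Windows Server 2012; the team and NIC names below are placeholders:

```powershell
# Create a switch-independent team of two 1Gbps ports for backup traffic
New-NetLbfoTeam -Name "BackupTeam" `
    -TeamMembers "NIC1", "NIC2" `
    -TeamingMode SwitchIndependent `
    -LoadBalancingAlgorithm TransportPorts
```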
Sure, this is under optimal conditions, but guess what? When doing backups from multiple hosts to dual backup media servers or more, we’re getting very fast backups at very low cost compared to some other solutions out there. This is our backup beast. More bandwidth needed at the backup media server? It has dual-port 10Gbps that can be teamed and/or leverage SMB 3.0 Multichannel. High volume hosts can use 10Gbps at the source as well.
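SMB 3.0 Multichannel needs no configuration between Windows Server 2012 machines, but it’s easy to verify it’s actually carrying the load while a backup runs; a quick check from the sending side:

```powershell
# Confirm SMB Multichannel is active and see which connections carry the traffic
Get-SmbMultichannelConnection

# Per-NIC details: link speed and RSS/RDMA capability
Get-SmbClientNetworkInterface
```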
As I said before, you will not get fired for saving money whilst improving operations.
Cost reduction is one thing, but look at what we get whilst saving money… wow!
What am I missing? Specialized dedupe. Yes, but we’re going for the poor man’s workaround there. More on that later. As long as we get enough throughput, that doesn’t hurt us. And given the cost, we cannot justify it, plus it’s way too much vendor lock-in. If you can’t get backups done without it, sure, you might need to walk that route. But for now I’m using a different path. Our money is better spent in different places.
Now how do we get the same economic improvements from the backup software? Disk capacity licensing sucks. But we need a solution that can handle this throughput & load, is reliable, has great support & product information, gets support for new releases fast after RTM (come on CommVault, get a move on) and is simple to use ==> even more money and time saved.
Why is support for new releases in backup software important? Because the lack of it is causing me delays. Delays cost me time, money & opportunities. I’m really interested in converting our large LUN file servers to Windows Server 2012 Hyper-V virtual machines, which I now can do rather smoothly thanks to those big VHDX sizes that are possible now, and slashing the backup times of those millions of small files to pieces by backing them up per VHDX over this setup. But I’ll have to wait and see when CommVault will support VHDX files and GPT disks in guests, because they are not moving as fast as a leading (and high-cost) product should. Altaro & Veeam have them beaten solid there, and I don’t like to be held back.