Using VEEAM FastSCP for Microsoft Azure to help protect my blog

My buddies in IT know some of my mantras, like my fondness for “* in depth”. Backup in depth, for example, which is just my variant on the 3-2-1 rule for backups. Things go wrong and relying on a single way to recover is risky. “One is none, two is one” is just one of the mantras I live by in IT. Or at least try to, I’m not perfect.

So besides backups in Azure I also copy the backup files I make for my blog outside of the VM, out of Azure. That means the BackWPup files and the MySQL dumps I create regularly via a scheduled job.
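For those wondering what such a scheduled dump job can look like: below is a minimal PowerShell sketch of the idea. The paths, database name, user, password and retention value are just examples, not my actual configuration, so adapt them to your own environment.

# Minimal sketch of a scheduled MySQL dump job (example paths and names, adjust to your setup).
$dumpDir  = 'D:\BlogBackups\MySQLDumps'        # the data dump location the copy job later picks up
$mysqlBin = 'C:\Program Files\MySQL\MySQL Server 5.6\bin'
$stamp    = Get-Date -Format 'yyyyMMdd-HHmmss'
$dumpFile = Join-Path $dumpDir "wordpress-$stamp.sql"

# Dump schema and data (add or drop options to taste).
& "$mysqlBin\mysqldump.exe" --user=backupuser --password=ExamplePassword `
    --single-transaction --routines --events "--result-file=$dumpFile" wordpressdb

# Keep only the most recent 14 dumps so the folder doesn't grow forever.
Get-ChildItem $dumpDir -Filter 'wordpress-*.sql' |
    Sort-Object LastWriteTime -Descending |
    Select-Object -Skip 14 |
    Remove-Item

Hook a script like that up to the Windows Task Scheduler and you have fresh dumps waiting for the copy job without lifting a finger.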

That copy is not made manually but is automated with VEEAM FastSCP for Microsoft Azure. It’s easy, free and it works.  I’ve blogged about it before but that blog might have been lost in the huge onslaught of Microsoft Ignite 2015 announcements.

It’s all quite simple. First of all you need to create a data dump location on the blog server for the backups we make there. VEEAM FastSCP for Microsoft Azure then copies that data out of the VM, which gives me an extra copy that doesn’t rely on Azure.

image

 

Add your VM in Azure to VEEAM FastSCP for Microsoft Azure

image

It’s easy: specify the information you can find about your VM in the Azure management portal. Optionally you can skip the SSL requirement and certificate verification. Do note that you need to use the correct PowerShell port (endpoint) for that particular VM in your Azure subscription.

image
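If you’d rather look up that endpoint with PowerShell than in the portal, something along these lines works, assuming the classic (service management) Azure PowerShell module; the subscription, cloud service and VM names below are just examples.

# Look up the PowerShell (WinRM) endpoint of a classic Azure VM (example names).
Add-AzureAccount
Select-AzureSubscription -SubscriptionName 'My Subscription'
Get-AzureVM -ServiceName 'myblogcloudservice' -Name 'myblogvm' |
    Get-AzureEndpoint |
    Where-Object { $_.Name -eq 'PowerShell' } |
    Select-Object Name, Protocol, Port, LocalPort

The public Port value of that endpoint is the one you enter here.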

When successful you can browse the file system of your Azure VM.

image

Create one or more jobs (depending on what & how you’re organizing your backups)

image

Give the job a descriptive name

image

Select what folders on the Azure VM you want to back up by simply browsing to them.

image

Select the target folder on the system where VEEAM FastSCP for Microsoft Azure is running by, again, simply browsing to it.

image

Set a schedule according to your needs

image

If you need to run some PowerShell before or after a download, here’s the place to do so.

image
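To give an idea of what you could do there: a post-download script might be as simple as pruning older copies on the machine running VEEAM FastSCP for Microsoft Azure. A minimal sketch, with an example target path and retention period:

# Example post-job script: remove downloaded copies older than 30 days (example path).
$target = 'E:\OffsiteCopies\WorkingHardInIT'
Get-ChildItem -Path $target -Recurse -File |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } |
    Remove-Item -Force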

Click Finish and hit Start Job to kick it off and test it. Here’s the WordPress blog backup download job running.

image

By using VEEAM FastSCP for Microsoft Azure you can download folders and files to your system at home or to a virtual machine, whether that’s on premises or in another cloud. Perhaps even in AWS (IaaS) if you’re really paranoid. By doing a simple restore of your blog and changing your DNS entry you can even get it up and running again if Azure were ever the target of a major, outage-causing attack. You could even keep blogging about it.

So do yourself a favor. Check it out!

WorkingHardInIT Blog Maintenance Window & Tools Used

As you might have noticed, my blog was down last night for about 1 hour and 45 minutes, between 22:20 and 00:10. A bit longer than I wanted, but I needed more time to deal with the upgrade of MySQL as part of the routine maintenance I do on my WordPress blog server.

In the environments under my care I take the time to do routine maintenance to avoid falling behind too much in firmware, drivers, patches, etc. This takes some effort, but as it helps prevent bigger issues in the long run it’s worthwhile. I take the same approach with my blog as much as possible. Most of this maintenance goes by without you ever noticing, the Windows Update reboots being the exception. WordPress upgrades, plugin upgrades, PHP upgrades, etc. usually go swiftly, which means I’m pretty well covered there.

Upgrading MySQL, however, is always a bit of a time-consuming effort and, depending on which version you’re upgrading from and to, it can actually mean multiple sequential upgrades (5.1 to 5.5.44 to 5.6.25).

image
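Each hop in such a chain roughly follows the same pattern. A rough outline for one hop on Windows, assuming MySQL runs as a service named MySQL with default installation paths (yours may differ) and, of course, a full dump taken beforehand:

# Rough outline of one upgrade hop (e.g. 5.5.x to 5.6.x); service name and paths are examples.
net stop MySQL                                     # stop the old server
# ... install/unpack the new MySQL binaries and point the service at them ...
net start MySQL                                    # start the new binaries against the existing data
& 'C:\Program Files\MySQL\MySQL Server 5.6\bin\mysql_upgrade.exe' -u root -p   # rebuild the system tables
Restart-Service MySQL                              # restart so all changes take effect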

I practiced this upgrade on a copy of the VM in Azure to make sure I could handle whatever came up, and still I had to deal with some challenges I did not encounter in the test environment. That shows I’m not a full-time, hard-core MySQL guru, I guess.

Anyway, after getting to MySQL 5.6.25 from 5.5.44, fixing some issues with “TIMESTAMP with implicit DEFAULT value is deprecated” warnings (easy fix) and dealing with this error in MySQL Workbench:

An unhandled exception occurred (Error executing 'SELECT t.PROCESSLIST_ID,
IF (NAME = 'thread/sql/event_scheduler','event_scheduler',t.PROCESSLIST_USER) PROCESSLIST_USER,t.PROCESSLIST_HOST,t.PROCESSLIST_DB,t.PROCESSLIST_COMMAND,
t.PROCESSLIST_TIME,t.PROCESSLIST_STATE,t.THREAD_ID,t.TYPE,t.NAME,t.PARENT_THREAD_ID,
t.INSTRUMENTED,t.PROCESSLIST_INFO,a.ATTR_VALUE FROM performance_schema.threads t
LEFT OUTER JOIN performance_schema.session_connect_attrs a ON t.processlist_id = a.processlist_id AND (
a.attr_name IS NULL OR a.attr_name = 'program_name') WHERE t.TYPE <> 'BACKGROUND''
Native table 'performance_schema'.'threads' has the wrong structure.
SQL Error: 1682). Please refer to the log files for details.

which I fixed by running mysqld --performance_schema. With that sorted, I’m rocking everything up to date once more.
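For reference, this is roughly what that looks like from an elevated prompt on Windows (default installation path assumed); running mysql_upgrade is the other commonly documented way to clear that “wrong structure” error:

# Start mysqld with performance_schema explicitly enabled (example path).
& 'C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqld.exe' --performance_schema

# Alternative/complementary fix: rebuild the system and performance_schema tables.
& 'C:\Program Files\MySQL\MySQL Server 5.6\bin\mysql_upgrade.exe' -u root -p --force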

image

Always have good backups, make exports of your database schema, data and structures in MySQL, and have multiple ways out when things go south. In Azure I’m relying on the Backup Vault, where I protect my virtual machine with scheduled backup jobs. I also back up my WordPress site and its data via a plug-in and export the database via MySQL Workbench.

image

Those dumps are copied out of the VM to wherever I want (Azure, OneDrive, my home PC, a VM running in AWS, …) to make sure I have multiple options to recover.

image

VEEAM FastSCP for Microsoft Azure comes in very handy for this by the way. You might want to check it out if you’re in need of an automated and secure way to get data out of a VM running in Microsoft Azure!

Dell Storage Replay Manager 7.6.0.47 for Compellent 6.5

Recently version 7.6.0.47 became available to us as DELL Compellent customers. I downloaded it and found some welcome new capabilities in the release notes.

  • Support for vSphere 6
  • 2048-bit public key support for SSL/TLS
  • The ability to retry failed jobs (Microsoft Extensions Only)
  • The ability to modify a backup set (Microsoft Extensions Only)

The ability to retry failed jobs is handy. There might be a conflicting backup running via a 3rd party tool leveraging the hardware VSS provider, so the ability to retry can mitigate this. As we do multiple replays per day and have them scheduled recurrently we’ve already mitigated the negative effects of this, but it only gives us more options to deal with such situations. It’s good.

image

The ability to modify a backup set is one I love. It was just so annoying not to be able to do this before. A change in the environment meant having to create a new backup set. That also meant keeping the old job around for as long as you wanted to retain the replays associated with it. Not the most optimal way of handling change, I’d say, so this made me happy when I saw it.

image

Now I’d like DELL to invest a bit more in making the restore of volume-based replays of virtual machines easier. I actually like the volume-based ones with Hyper-V, as it’s one snapshot per CSV for all VMs and it doesn’t require all the VMs to reside on the host where we originally defined the backup set. Optimally you do run all the VMs on the node that owns the CSV, but otherwise it has fewer restrictions. In my humble opinion anything that restricts VM mobility is bad and goes against the grain of virtualization and dynamic optimization. I wonder whether this has more to do with older CSV/Hyper-V versions, current limitations in Windows Server Hyper-V or CSV, or a combination. This makes for a nice discussion, so if anyone from MSFT & the DELL Storage team responsible for Replay Manager wants to have one, just let me know.

Last but not least, I’d love DELL to communicate in Q4 of 2015 on how they will integrate their data protection offering in Compellent/Replay Manager with the Windows Server 2016 backup changes and enhancements. That’s quite a change that’s happening for Hyper-V and it would be good for all to know what’s being done to leverage it. Another thing that is high on my priority list is enabling the use of replays with Live Volumes. For me that’s the biggest drawback of Live Volumes: having to choose between high/continuous availability and application-consistent replays for data protection and other use cases.

I have some more things on my wish list but these are out of scope in regards to the subject of this blog post.

Free VEEAM Endpoint Backup Goes RTM – First Upgrade Experiences!

VEEAM Endpoint Backup has gone RTM and that’s great news. I’ve been using it since the beta version with great results. I moved to the release candidate when that became available and now I’m running RTM. The version number of the RTM bits is 1.0.0.1954.

image

You can download it here and put it into action straight away!

Quick Tips & Findings

There is no supported upgrade path from the beta release. As a matter of fact, the RTM version cannot read the backup files. When trying to upgrade from beta to RTM you’ll be greeted with this message:

image

Now that’s OK. You should have been on the RC already, where things are better. Mind you, there’s no way to do an in-place upgrade from the RC either, but the RTM version can read the backups made by the RC version!

image

With a clean install (green field or after uninstalling the beta or RC version) the installation will kick off.

image

Now, in the case of our RC backups we tested 2 things:

  • Can we restore the existing backups? Yes we can!

image

  • How are the backups made by the RTM version handled with regard to the already present ones? We just reconfigured the backups to the same repository and kicked off a backup. A new backup job folder was created and the backup was made there. So our DBAs’ great self-service SQL Server backup offloading repository made with the RC version is still available for restores, while the RTM version backs up to its own new folder.

image

Well there you go, VEEAM Endpoint Backup just got launched in production. We still have to wait for the production-ready update for integration with VEEAM Backup & Replication v8, but that will arrive soon enough. The future looks bright.