Symbolic Link to an Azure File Share

We recently used a symbolic link to an Azure file share to transparently replace a local folder in which data sets are cached for download. The existing service now copies the data sets to an Azure file share without any code changes. With a small adaptation of the code, we can also provide download links to data in the Azure file share, so the process is transparent for the clients downloading the data sets as well.

You can already guess the reason for this exercise. We did this to fix a bandwidth issue on-premises by creating an easy workaround with minimal code changes. As more and more clients download more and more data sets, this service consumes too much bandwidth. This means we have to throttle the service and/or apply QoS to it. While this helps the other services using that internet connection, it does nothing to improve download speeds for the clients. This is just an example and is not meant as architectural or design advice; it is an interim fix to an existing problem. This trick is also used with AKS, for example.

How to add a symbolic link to an Azure file share

Create an Azure file share

Create a storage account and create a file share.
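
If you want to script this step, the Az PowerShell module can create both in a few lines. A minimal sketch, where the resource group name, location, SKU, and quota are my assumptions; the storage account and share names match the example used below:

# Sketch only: resource group name, location, SKU, and quota are assumptions.
New-AzResourceGroup -Name "rg-datasets" -Location "westeurope"
New-AzStorageAccount -ResourceGroupName "rg-datasets" -Name "datasets" -Location "westeurope" -SkuName Standard_LRS
New-AzRmStorageShare -ResourceGroupName "rg-datasets" -StorageAccountName "datasets" -Name "fscache" -QuotaGiB 1024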

Our Azure file share.

Handling credentials

Dealing with the credentials needed for this is easy. All we need to do is add the information into Credential Manager as a Windows credential: the file share UNC path, the user name, and the password. Note that here the password is our storage account key.

Go to the connect settings of your file share.

Grab the info you need from the “connect” settings for your Azure file share. We will not map the file share to a drive, so there is no need to run this PowerShell script.

So in this example that is:
Internet or network address: \\datasets.file.core.windows.net\fscache
User name: localhost\datasets
Password: real2Nonsense4Showing8AfakeStorage28Accountkey/goobledeGookStuffa/AndSomeMoreNonsentMD==

We will add these credentials to the Credential Manager as Windows Credentials.

Click on “Add a Windows Credential”.
Add the file share UNC path, the user name, and the storage account key.

That is it; if you entered everything correctly, this will work.
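
If you prefer to script this step, cmdkey can add the same Windows credential; a minimal sketch using the values above (the key below is a placeholder, use your real storage account key):

# Add the credential for the file share endpoint to Credential Manager.
cmdkey /add:datasets.file.core.windows.net /user:localhost\datasets /pass:"YourStorageAccountKey"

Keep in mind that Credential Manager entries are per user, so add the credential under the account that will actually access the share.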

Creating the symbolic link

Once you have added the credentials, creating the symbolic link is very easy.

mklink /d "E:\Download\Cache" "\\datasets.file.core.windows.net\fscache"

You do need to take care that you create the symbolic link in the right place in your folder structure. But other than that, that is all you need to do.

The symbolic link to the Azure file share.

The symbolic link is available and can be used transparently by the service/application.
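
A quick sanity check, sketched in PowerShell: list the link itself, then list through it. If the credentials are correct, the second command returns the content of the Azure file share.

# The link shows up as a reparse point in its parent folder.
Get-ChildItem "E:\Download" -Attributes ReparsePoint
# Listing through the link should return the files on the share.
Get-ChildItem "E:\Download\Cache"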

To test the file share in Azure, you can upload or download data via azcopy or Azure Storage Explorer. The download functionality in our case is handled in the code, but here is a quick example of how to do a download via azcopy using a shared access signature (SAS).

azcopy copy "https://datasets.file.core.windows.net/fscache/DataSetSatNavSouthernUtah.zip?sv=2020-02-10&ss=bfqt&srt=sco&sp=rwdlacuptfx&se=2021-06-25T06:06:02Z&st=2021-06-24T22:06:02Z&spr=https&sig=%2FA%9SOrrY4KFAKEikPKeysOycLb4neBLogpPostpAQ624%3D" "E:\MyDataSetDownloads" --recursive
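
Uploading works the same way with source and destination swapped; a sketch, assuming a hypothetical local file and with the SAS token abbreviated to <SAS>:

# Upload a local file to the file share; replace <SAS> with a real SAS token.
azcopy copy "E:\MyDataSets\DataSetSatNavSouthernUtah.zip" "https://datasets.file.core.windows.net/fscache/DataSetSatNavSouthernUtah.zip?<SAS>"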

Pro tip: if you need to remove the symbolic link but keep the data, use rmdir “E:\Download\Cache” and not del “E:\Download\Cache”, or you will delete the data. That might not be what you want.

Next steps

Mind you, this was the easy and quick fix for a problem this service was facing; it is not a reference design or architecture. We are considering replacing the symbolic link solution with Azure File Sync. With a bandwidth cap and QoS on-premises, we would offer the primary download link in the cloud, where clients can get all the bandwidth Azure can offer. Next to that, we would have an alternative link, marked as slow, that still points to the on-premises version of the data. This means the current implementation remains fully functional even when the Azure file share has an issue. Sure, the local copy comes with significantly reduced performance, but it provides a failsafe.

The future

Well, the future lies in turning this into a solution running 100% in the cloud. Due to a large number of dependencies on various on-premises data sources, this is a long-term effort. We decided not to let perfection be the enemy of the good and fixed their biggest pain point today.

Conclusion

For sure, the use of a symbolic link to access an Azure file share is not something that will amaze people who have been working in the cloud for a while. It is, however, a nice example of how Azure combined with on-premises services can result in a hybrid solution that solves real-world problems.

This particular scenario enables them to distribute their data sets without having to worry about bandwidth limitations on-premises. That means they do not have to invest in a bigger internet pipe and a firewall with more throughput, or port their service and all its dependencies to a full-blown Azure solution.

Sometimes successful and cost-effective solutions come in the form of little tweaks that allow us to fix pain points easily.

Azure Virtual Datacenter

Sometimes Microsoft times a blog post exactly right. For a while now, I have been working on bridging worlds (on-premises / cloud) in a responsible and realistic manner, making sure the transition is smooth and avoids pitfalls. You use what you need, where you need it, when you need it, and in a way that fits your needs, right? Anyway, in real life that means I’m working on an Azure Virtual Datacenter deployment (brainstorming/architecture/design phase).

Last week, during whiteboarding, Twitter notified me of the release of a new portal for the Azure Virtual Datacenter. That’s great timing! And no, there’s no need for tin foil hat paranoia here. We MVPs are not linked directly into the mother ship.

Microsoft previously delivered an Azure Virtual Datacenter eBook covering the concepts, along with a Lift and Shift Guide. Right now, however, we are mainly looking at workload migrations, not lift and shift. You evaluate and make the best decision within the context at hand.

Help with project communications

The nice thing is that they also published a slide deck about the Azure Virtual Datacenter concept. This helps me build presentations on the subject, after removing the marketing slides and adding some extra content: both technical content and information specific to the environments I’m working in.

Right now I’m working on the network part (VNETs, subnets, peering, BGP), but I need to pause and go take care of some Dell PowerEdge R740, and maybe R940, server configurations to order together with some RDMA NICs. Yeah, my existing skills are still in high demand, and I know how to bridge worlds pragmatically, efficiently, and effectively. There is serverless in our future as well as hardware, at least for now. Now I need to get some IoT into this mix; that’s the fun full-stack game right now.

Real life cost savings with Azure IAAS B-Series virtual machines

I recently moved some low-end virtual machines from the rather low-spec but cheap Basic A series (A2: 2 CPUs, AMD Opteron, 3.5 GB RAM) to the newer B-Series. These have better processors and better specs all around. I did not want to move to the Standard A series A2 or A2v2, as those are more expensive. I had two needs: reduce costs and get better performance. I achieved real-life cost savings with Azure IaaS B-Series virtual machines.

The B-Series are burstable and offer better pricing if you can build up credits when the VM is not going over its baseline. The B-Series provide the ability to purchase a VM size with baseline performance, where the VM instance builds up credits whenever it is using less than its baseline. When the VM has accumulated credits, it can burst above the baseline, using up to 100% of the vCPU, when your application requires higher CPU performance. So picking the VM size is key here. The B2S seemed the best option, as its CPU baseline is 40% and we needed at least 2 CPUs and 3.5 GB of RAM. The CPU type is an Intel Broadwell or Haswell E5-2673, so these are also better than the AMD Opteron.
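
As a back-of-the-envelope sketch of why that baseline matters (illustrative numbers only, not Azure’s exact credit accounting):

# Illustrative only: assumes the 40% B2S baseline and a made-up 25% average load.
$baseline = 40   # baseline CPU % for the VM size
$avgLoad  = 25   # observed average CPU %
$hours    = 8    # period we look at
# Credits accrue in proportion to how far below the baseline the VM runs.
$banked = ($baseline - $avgLoad) / 100 * 60 * $hours
"Roughly $banked vCPU-minutes of burst headroom banked over $hours hours"

Running at 25% on a 40% baseline for 8 hours banks roughly 72 vCPU-minutes to spend when a spike pushes the CPU above the baseline.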

You can see a quick price comparison here. More on the B-Series can be found here: Introducing B-Series, our new burstable VM size.

One concern was that we might not stay under the baseline enough to build up credits for when we go over it. That might have killed our cost-reduction hopes. That concern was invalidated: the average vCPU usage is lower anyway because the processors are faster and better. This helps to stay below the vCPU baseline and, as a result, earns credits for when I need more CPU cycles.

Overall, I now have better performance at lower costs. As you can see in the screenshot of one VM below, the savings are real after swapping over from the Basic A2 to the B2S size.

Screenshot: the cost savings after moving from Basic A2 to B2S.

So, that’s an optimization that has worked out well for me. I suggest you check it out and see where you can reduce or optimize your spending in Azure.

Cloud & Datacenter Conference Germany 2018

The Cloud & Datacenter Conference Germany 2018 is a shining beacon of light in a sea of marketing-driven IT events. It is organized by Carsten Rachfahl via his company Rachfahl IT Solutions. Carsten is a Microsoft MVP and Regional Director whose commitment to excellence has shown for many years in his community engagements. I think his integrity and style are a driving force behind this conference’s ability to attract the quality of attendees, speakers, and sponsors it does.

Cloud & Datacenter welcomes top expert speakers from the community and the industry. They deliver high-quality sessions and share their combined experience and knowledge with attendees who are truly interested in working with those technologies. That combination delivers high-value interactions and knowledge sharing. The mix of sessions and interaction between everyone there works very well thanks to the size of the conference. It is big enough to offer the breadth of topics needed in today’s IT landscape, while small enough to allow people to dive in deeper and discuss architecture, design, implementation, and visions.

Some extra information

The Cloud & Datacenter Conference Germany 2018 is being held on May 15-16, 2018 in the Congress Park Hanau, Schlossplatz 1, 63450 Hanau, Germany. That’s close to Frankfurt and as such has good travel connections. Topics of interest will be Azure, Azure Stack, Hybrid Cloud, Private Cloud, Software Defined Datacenter, System Center & O365. The conference is a real-life technology event, so no one is pretending that the esoteric future is already here. We are working on that future by building it in our daily jobs and helping organizations move forward in an efficient and effective manner.

This is a great conference by technologists, for technologists. The opportunities to learn, network, and exchange information are great. The speakers are approachable, and all of them are there both to share and to learn themselves. From my past experience, both the organization and the feedback from attendees have been outstanding.

I’ll be speaking on RDMA, giving a roadmap for this ever more important technology. On top of that, I’ll be around to discuss high availability, clustering, and data protection, both on-premises and in (hybrid) cloud scenarios.

Do yourself a favor and register for the Cloud & Datacenter Conference Germany in 2018.

All I can say is that you should really consider attending. It’s most definitely worthwhile. The quality of the attendees, the speakers, and the absolute top-notch organization of the conference have been proven in previous years. The Cloud & Datacenter Conference is a testimony to the professionalism, integrity, and quality my fellow MVP and friend Carsten Rachfahl delivers with his company Rachfahl IT Solutions on a daily basis to his customers. So, help yourself out in your career and register right here. I hope to see you there.

Note: the CDC is a German-language conference, but as some speakers come from around the globe, you’ll have to listen to some of them speaking English. If you’ve ever heard my German, I’m sure you’ll prefer me speaking English anyway.