If you are interested in Microsoft and QUIC, I have some good news for you. Recently a new article, SMB over QUIC Technology | StarWind Blog (starwindsoftware.com), was published. It is the first in a series about what Microsoft is working on regarding QUIC. While not without some controversy, QUIC addresses a number of issues that connectivity over “the internet at large” has been dealing with:
It leverages UDP.
TLS 1.3 is built into the protocol.
It reduces RTT during connection and encryption setup.
It handles and optimizes flow control and loss recovery.
QUIC significantly reduces the round trips during the TLS handshake. Over the internet, with mobile clients, this is a big deal. Since it is secure by default, people really started thinking about where it can be used to improve things for all involved.
I think QUIC is going to become more and more important in the future, and this article positions QUIC in the Microsoft ecosystem. So head over there, read it, and let me know what you think.
TLS 1.3, QUIC, HTTP/3, and SMB 3.1.1 are shaking things up a bit by challenging TCP. Microsoft dropped QUIC into Windows Server 2022 Azure Edition, which went into public preview last week, and I dove into the lab to figure out what I can do with it.
As a technologist, I am having a lot of fun testing this out in the lab. Last weekend I was busy with SMB over QUIC and QUIC in IIS. I learned a lot and have made up my mind: I can use this in the real world to solve requirements. I will share my findings and musings with you in the near future. But today, start with an introduction in SMB over QUIC Technology | StarWind Blog (starwindsoftware.com).
We recently used a symbolic link to an Azure file share to transparently replace a local folder in which data sets are cached for download. That means that the existing service transparently copies the data sets to an Azure file share without having to change anything in the code to do so. With a small adaptation of the code, we can now provide download links to data in the Azure file share so this process is also transparent for the clients downloading the data sets.
You can already guess the reason for this exercise. We did this to fix a bandwidth issue on-premises by creating an easy workaround with minimal code changes. As more and more clients download more and more data sets, this service consumes too much bandwidth. This means we have to throttle the service and/or apply QoS to it. While this helps the other services using that internet connection, it does nothing to improve download speeds for the clients. This is just an example and is not meant as architectural or design advice. It is an interim fix for an existing problem. This trick is also used with AKS, for example.
How to add a symbolic link to an Azure file share
Create an Azure file share
Create a storage account and create a file share.
Our Azure file share.
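If you prefer to script this step, here is a minimal sketch using the Az PowerShell module. The resource group name, location, SKU, and quota are placeholder values I picked for illustration; the storage account name datasets and share name fscache match the UNC path used further on.
#Sign in and create the resource group, storage account, and file share (names and location are example values)
Connect-AzAccount
New-AzResourceGroup -Name 'rg-datasets' -Location 'westeurope'
New-AzStorageAccount -ResourceGroupName 'rg-datasets' -Name 'datasets' -Location 'westeurope' -SkuName 'Standard_LRS' -Kind 'StorageV2'
New-AzRmStorageShare -ResourceGroupName 'rg-datasets' -StorageAccountName 'datasets' -Name 'fscache' -QuotaGiB 1024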
Handling credentials
Dealing with the credentials needed for this is easy. All we need to do is add the information into the credential manager as a Windows credential. That would be the user, the password, and the file share UNC path. Note that here the password is our storage account key.
Go to the connect settings of your file share.
Grab the info you need from the “Connect” settings for your Azure file share. We will not map the file share to a drive, so there is no need to run the PowerShell script shown there.
So in this example that is:
Internet or network address: \\datasets.file.core.windows.net\fscache
User name: localhost\datasets
Password: real2Nonsense4Showing8AfakeStorage28Accountkey/goobledeGookStuffa/AndSomeMoreNonsentMD==
We will add these credentials to the Credential Manager as Windows Credentials.
Click on “Add a Windows Credential”. Add the file share UNC path, the user name, and the storage account key.
That is it. If you entered everything correctly, this will work.
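If you would rather script this than click through the GUI, the same Windows credential can be added with cmdkey. This is a minimal sketch using the example values above; replace the placeholder with your real storage account key.
cmdkey /add:datasets.file.core.windows.net /user:localhost\datasets /pass:<your-storage-account-key>
#Verify the credential was stored
cmdkey /list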
Creating the symbolic link
Once you have added the credentials, creating the symbolic link is very easy.
You do need to take care that you create the symbolic link in the right place in your folder structure. But other than that, that is all you need to do.
the symbolic link to the Azure file share
The symbolic link is available and can be used transparently by the service/application.
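As a sketch of that step, a directory symbolic link like the one in this example can be created from an elevated PowerShell session. The E:\Download\Cache path is the example path used in the pro tip below; the original folder has to be moved or removed first.
#Create a directory symbolic link that transparently points the local cache folder to the Azure file share
New-Item -ItemType SymbolicLink -Path 'E:\Download\Cache' -Target '\\datasets.file.core.windows.net\fscache'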
To test the file share in Azure, you can upload or download data via azcopy or Azure Storage Explorer. The download functionality in our case is handled in the code, but here is a quick example of how to download a file via azcopy using a shared access signature (SAS).
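A minimal sketch of such a download; the file name and local target path are made up for illustration, and <SAS-token> stands for the shared access signature you generate for the share.
azcopy copy "https://datasets.file.core.windows.net/fscache/dataset001.zip?<SAS-token>" "C:\Temp\dataset001.zip"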
Pro tip: if you need to remove the symbolic link but keep the data, use rmdir “E:\Download\Cache” and not del “E:\Download\Cache” or you will delete the data. That might not be what you want.
Next steps
Mind you, this was the easy and quick fix for a problem this service was facing. It is not a design or an architecture. We are considering replacing the symbolic link solution with Azure File Sync. With a bandwidth cap and QoS on-premises, we would make the cloud the primary download location, where clients get all the bandwidth Azure can offer. Next to that, we would keep an alternative link, marked as slow, that still points to the on-premises copy of the data. This means the current implementation remains fully functional even when the Azure file share has an issue. Sure, the local copy comes with significantly reduced performance, but it provides a failsafe.
The future
Well, the future lies in turning this into a solution running 100% in the cloud. Due to the large number of dependencies on various on-premises data sources, that is a long-term effort. We decided not to let perfection be the enemy of the good and fixed their biggest pain point today.
Conclusion
For sure, the use of a symbolic link to access an Azure file share is not something that will amaze people who have been working in the cloud for a while. It is, however, a nice example of how combining Azure with on-premises services can result in a hybrid solution that solves real-world problems.
This particular scenario enables them to distribute their data sets without having to worry about bandwidth limitations on-premises. That means they do not have to invest in a bigger internet pipe and a firewall with more throughput, or port their service and all its dependencies to a full-blown Azure solution.
Sometimes successful and cost-effective solutions come in the form of little tweaks that allow us to fix pain points easily.
How to survive in the ever-changing IT world is something that all IT professionals and developers want to figure out. No matter how junior or senior you are, no matter what level of expertise you have, it is a challenge for everyone. At least, it is for those who are honest with themselves and others.
The IT world changes at a very fast pace, and longevity of technology seems to be a distant pipe dream. The only thing that has tenure in IT seems to be considered legacy and tech debt. But is that always the case? There is a flip side to fast-paced change: the faster things change, the shorter they last.
The faster things change the shorter they last
Today’s IT world is changing extremely rapidly in terms of technologies used, hardware and software lifecycle management, trends, and hypes. I have always said IT looks a lot like fashion-driven show business.
Everyone strives to keep up with these changes, both individuals and organizations. Can they, in the end, or will we outrun ourselves? And if we can, at what cost? Quality, longevity, minimum viable products, bugs, journeys instead of products, etc.
The skill sets needed to make this happen are grown and groomed, not produced at will. All this while investing in education and training often gets no more than lip service: because the requirements change so fast, the willingness to invest diminishes. And so we make our problems even bigger.
Find out in a chat with Yusif Ozturk (Co-Founder and Chief Software Architect at @VirtualMetric) and me how we view and think about these challenges. Companies must ensure their software uses up-to-date technologies and must find the best talent experienced with those technologies. You need to adapt pretty fast in order to survive. When a company creates a new software product and develops a new solution, everything changes in a short time: new programming languages, new frameworks, and new technologies come to the market. Among the challenges organizations face are these fast technology shifts, the new skills needed, finding experienced staff, etc. Maybe you will find some insights on how to survive in the ever-changing IT world.
The SecretStore local vault extension is a PowerShell module extension vault for Microsoft.PowerShell.SecretManagement. It is a secure storage solution that stores secret data on the local machine. It is based on .NET cryptography APIs and works on Windows, Linux, and macOS thanks to PowerShell Core.
The secret data is stored at rest in encrypted form on the file system and decrypted when returned to a user request. The store file data integrity is verified using a cryptographic hash embedded in the file.
The store can be configured to require a password or to operate password-less. Requiring a password adds to defense in depth, since password-less operation relies solely on file system protections. Password-less operation still encrypts the data, but the encryption key is stored on file and is accessible. Another configuration option is the password timeout, which defaults to 15 minutes. For automation purposes, you can use Unlock-SecretStore to enter the password for the current PowerShell session for the duration of the timeout period.
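As a minimal sketch of those configuration options (the 30-minute timeout is just an example value):
#Show the current SecretStore configuration (authentication, interaction, password timeout)
Get-SecretStoreConfiguration
#Require a password, suppress prompting for automation, and set the timeout to 30 minutes (value is in seconds)
Set-SecretStoreConfiguration -Authentication Password -Interaction None -PasswordTimeout 1800
#Unlock the store for the current session for the duration of the timeout period
$SecretStorePassword = Read-Host -AsSecureString -Prompt 'SecretStore password'
Unlock-SecretStore -Password $SecretStorePassword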
Testing the SecretStore local vault extension
Below you will find a demonstration script in which I register a vault of the type SecretStore. This is a local vault extension that creates its data and configuration files in the scope of the currently logged-in user. You specify the vault type to register with the ModuleName parameter.
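The script assumes the SecretManagement module and the SecretStore extension vault are already installed from the PowerShell Gallery; if not, something like this takes care of it:
#Install the SecretManagement module and the SecretStore extension vault
Install-Module -Name Microsoft.PowerShell.SecretManagement, Microsoft.PowerShell.SecretStore -Scope CurrentUser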
$MySecureVault1 = 'LocalSecVault1'
#Register Vault1 in secret store
Register-SecretVault -ModuleName Microsoft.PowerShell.SecretStore -Name $MySecureVault1 -DefaultVault
#Verify the vault is there
Get-SecretVault
#Add secrets to Vault 1
Set-Secret -Name "DATAWISETECH\serverautomation1in$MySecureVault1" -Secret "pwdserverautom1" -Vault $MySecureVault1
Set-Secret -Name "DATAWISETECH\serverautomation2in$MySecureVault1" -Secret "pwdserverautom2" -Vault $MySecureVault1
Set-Secret -Name "DATAWISETECH\serverautomation3in$MySecureVault1" -Secret "pwdserverautom3" -Vault $MySecureVault1
#Verify secrets
Get-SecretInfo
Via Get-SecretInfo I can see the three secrets I added to the vault LocalSecVault1.
The three secrets I added to vault LocalSecVault1
The configuration and data are stored in separate files. The file location depends on the operating system. For Windows this is %LOCALAPPDATA%\Microsoft\PowerShell\secretmanagement\localstore. For Linux and macOS it is $HOME/.secretmanagement/localstore/
The localstore files
As you can see this happens under the user context. Support for all users or machine-wide context or scope is a planned future capability, but this is not available yet. Access to the SecretStore files is via NTFS file permissions (Windows) or access control lists (Linux) limiting access to the specific user/owner.
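On Windows, a quick way to look at those files for the current user is simply listing that folder (illustrative only):
#List the SecretStore configuration and data files for the current user
Get-ChildItem -Path "$env:LOCALAPPDATA\Microsoft\PowerShell\secretmanagement\localstore"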
Multiple Secret stores
It is possible in SecretManagement to register an extension vault multiple times. The reason for this is that an extension vault may support different contexts via the registration VaultParameters.
At first, it might seem that this means we can create multiple SecretStores, but that is not the case. The SecretStore vault currently operates under the scope of the currently logged-on user at a very specific path. This confused me when I initially tried to create multiple SecretStores: in every vault I could see all the secrets of the other stores. Initially, I thought multiple stores is what I had created, and consequently I had a little security scare. In reality, I had just registered different vault names to the same SecretStore, as there is only one.
$MySecureVault2 = 'LocalSecVault2'
$MySecureVault3 = 'LocalSecVault3'
#Register two more vaults to secret store
Register-SecretVault -ModuleName Microsoft.PowerShell.SecretStore -Name $MySecurevault2 -DefaultVault
Register-SecretVault -ModuleName Microsoft.PowerShell.SecretStore -Name $MySecureVault3 -DefaultVault
#Note that all vaults contain the secrets of Vault1
Get-SecretInfo
#Add secrets to Vault 2
Set-Secret -Name "DATAWISETECH\serverautomation1in$MySecureVault2" -Secret "pwdserverautom1" -Vault $MySecureVault2
Set-Secret -Name "DATAWISETECH\serverautomation2in$MySecureVault2" -Secret "pwdserverautom2" -Vault $MySecureVault2
Set-Secret -Name "DATAWISETECH\serverautomation3in$MySecureVault2" -Secret "pwdserverautom3" -Vault $MySecureVault2
#Note that all vaults contain the secrets of Vault1 AND Vault 2
Get-SecretInfo
Note that every registered local store vault basically sees the same SecretStore, as they all point to the same files.
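To clean up after this test, you can simply remove the extra vault registrations again. This is a sketch using the names from above; it only removes the registrations, the single underlying SecretStore and its secrets remain available via the remaining registration.
#Unregister the extra vault names; the underlying SecretStore is untouched
Unregister-SecretVault -Name $MySecureVault2
Unregister-SecretVault -Name $MySecureVault3
#Verify only the original vault registration is left
Get-SecretVault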