Windows Server 2019 SMB Direct Best Practices

The Hyper-V amigos, @Hypervserver and @workinghardinit, recently did a webinar together about Windows Server 2019 SMB Direct Best Practices. We also discussed some trends and put some things into perspective. Rachfahl IT Solutions does more of these cool webinars for you to check out (see the info at the end of the video). You can watch the webinar below on Vimeo. It’s quite an honor to be invited to talk on this subject, as Carsten is one of the world’s most experienced S2D practitioners.

Windows Server 2019 SMB Direct Best Practices Webinar

Need to know more?

I hope this gets you started and updated. Need to know more? Want more details, advice, and a deeper, more elaborate discussion? I will be talking about this on various occasions this year. One of them is the Cloud & Datacenter Conference Germany 2019 in Hanau. Register to secure your spot. It is a great conference with a lot of hands-on, real-life knowledge being shared. I will be around for the Hyper-V pre-day and during the entire conference. This means you can find me to talk shop. Be warned, I can go on about the subject for a while.

Replay Manager Configure Server: There was an error loading the configuration information

When replacing a bunch of servers with new DELL R740s (Hyper-V clusters, file clusters, backup targets, etc.) I ran into an issue with the DELL Replay Manager software. The servers leverage multiple DELL EMC Storage Center SANs. They have multiple ones for scale-out, redundancy, failover, multiple datacenters, …

With some of the servers I noticed that the loading of the information was slow, while most others were just fine. But with 4 of the servers the connection never actually happened. The connectivity itself was just fine, and testing connectivity confirmed this. As this had zero impact on the actual replays that were scheduled, it went unnoticed. But when you are adding and removing servers you might need to dive into Server Configuration, and that is where, after a minute, we got the below error thrown:

Configure Server
There was an error loading the configuration information.
Error Message:
The request channel timed out while waiting for a reply after 00:01:00. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout.

Notice that the GUI says it is connecting to our demo server82… but even when you can’t get info from the server, you might still see the info it gets from the Storage Center SAN itself.

This is quite annoying, as we need to be in there. So how do we fix this? I had some ideas, as I know this error from .NET WCF, but in this case I was looking for an easier way out, especially since I don’t have all the information about this 3rd party application. The good news is that it is easily fixed.

Fixing this

Replay Manager stores the replays, and the metadata about those replays, on the SAN itself. That’s why you can still see those even when you actually can’t connect to the server. The configuration of the servers you add and use in Replay Manager is stored locally on the machine where the client lives. This file is portable: just copy it from your profile and hand it to a colleague. No big deal.

Now, the server configuration you do from the Replay Manager GUI tool itself is stored on each and every server where you have the Replay Manager service installed. You will find that file, ReplayManager.config.xml, under C:\ProgramData\Compellent\ReplayManager.

Make a copy to be safe and edit the original using a text editor with elevated permissions so you can save your changes. In the file of one of our servers, the entry for server82 was accompanied by 2 entries for old Compellent Storage Centers that are no longer in service. One SAN it cannot find won’t exceed the time-out window, but it does slow the GUI down significantly. With 2 or more phantom old SANs, looking for them slows things down enough that you get the time-out error.
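
If you want to see which Storage Center entries are in the file before you start cutting, you can take the backup copy and do a quick search from that same elevated command prompt. This is just a small sketch; note that the “StorageCenter” search string is an assumption on my part, so inspect your own file for the exact element or key names it uses:

:: Back up the Replay Manager service configuration before editing it.
copy "C:\ProgramData\Compellent\ReplayManager\ReplayManager.config.xml" "C:\ProgramData\Compellent\ReplayManager\ReplayManager.config.xml.bak"
:: Show the lines that mention Storage Centers ("StorageCenter" is an
:: assumed search string; adjust it to match what is in your file).
findstr /i "StorageCenter" "C:\ProgramData\Compellent\ReplayManager\ReplayManager.config.xml"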

The fix is easy: cut the offending key values out of the file and save it. Then restart the Replay Manager service on that server via an elevated command prompt (or use the GUI):
net stop ReplayManager
net start ReplayManager

Restart the Replay Manager service on the server you need to manage before connecting to it again with the Replay Manager client tool GUI.

When you now close and relaunch the Replay Manager GUI and connect to the server, things will be a lot faster and certainly won’t time out anymore.

Conclusion

Maintain your environment. Try to remove a decommissioned Storage Center SAN from your server configurations in Replay Manager before you take it offline and dispose of it. If you forget and run into a slow-loading Replay Manager GUI, or hit a time-out, don’t panic. Replay Manager is actually quite solid and recoverable. We have shown you how to fix this by editing the ReplayManager.config.xml file on the server you need to connect to but can’t: you basically just remove the references to the no longer existing Storage Centers. I hope this helps some of you out there if you run into this. Feel free to reach out in the comments if you have any questions.

Upgrading to DELLEMC Unisphere Central for SC Series

To prepare for rolling out SCOS 7.3 (see my blog post SC Series SCOS 7.3 for more information on this version), we upgraded our Dell Storage Manager Data Collector and DELL Storage Manager Client to 18.1.10.171. I am happy to report that this went flawlessly. This means we are ready to work with CoPilot and will upgrade our SANs over the next week. That is always a phased rollout, to minimize risk.

Upgrading to 18.1.10.171 actually means we are upgrading to DELLEMC Unisphere Central for SC Series, which was announced as part of the SCOS 7.3 upgrade benefits.

The upgrade process itself is straightforward and isn’t different from what we are used to. First you upgrade the Storage Manager Data Collector and then the Storage Manager Client. If you have a remote Storage Manager Data Collector, you must then upgrade that one as well.

Make sure you have a successful backup and create a checkpoint before you start the upgrade. That way you also have an easy exit plan when things go south.

Upgrading the Storage Manager Data Collector & Client

This needs to be done first. It can take a while, so be patient. Run Storage Manager Data Collector 18.1.10.171.exe with elevated permissions. It unpacks and asks you to select a language.

Click OK to continue and just follow the wizard.

It will ask you to confirm you want to upgrade.

Click yes and follow the wizard.

Click “Next” to kick off the upgrade and relax.

The wizard will provide you with plenty of feedback on what it is doing along the way.

The final step after the upgrade is to start the Data Collector service.

Starting the Data Collector service can take quite a while. Be patient. When it’s done the wizard will inform you of this.
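
If you want to keep an eye on the service yourself while you wait, you can poll its state from an elevated command prompt. The service name below is a placeholder, not one I pulled from the product documentation; look up the actual name of the Data Collector service in services.msc on your own server:

:: Poll the state of the Data Collector service while it starts up.
:: "SMDataCollector" is a placeholder; replace it with the real
:: service name as shown in services.msc.
sc query "SMDataCollector"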

Click “Finish” to close the installer.

On your desktop you’ll notice that you now have an icon called DELL EMC Unisphere Central. This indicates that the storage management offerings of DELL EMC are converging.

Do note that if you have a remote Storage Manager Data Collector, you must now upgrade that one as well. Do NOT forget to keep both deployments at the same software level.
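
A quick way to sanity-check that both deployments ended up at the same level is to list the installed components and their versions from a command prompt on each server. Keep in mind that wmic only reports MSI-installed products, so treat this as a convenience check rather than gospel:

:: List installed Dell Storage Manager components and their versions.
wmic product get name,version | findstr /i /c:"Storage Manager"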

You are now ready to upgrade the Storage Manager Client. Run the installer with elevated permissions.

Just follow the wizard; normally this goes really fast and that’s it. You can log into the new client and see that the GUI is very familiar to anyone already using it.

What is new is the look and feel of DELLEMC Unisphere Central for SC Series. It’s not your father’s data collector anymore.

We’ll talk about DELLEMC Unisphere Central for SC Series later, when we have had a chance to work with it some more in real life.

Nuclear data waste

Introduction

Nowadays everyone seems to be heading to the hills to try and cash in on the new gold rush: data! You have all heard that data is the new gold. Some call it the new oil; if that is their favorite fantasy, that’s all good. But there are drawbacks to this.

The cost of data and data waste

Like any resource you mine, it does come with associated costs. A cost that has to be covered by the value you derive from it. That value has to exceed the money spent to gather, process and consume it. That can be an expensive business.

On top of that you have to deal with “the waste” it creates as a byproduct. Waste can be toxic. Mining data tends to produce nuclear data waste, the bad kind where “safe levels” are hard to determine. In the rush to grab the gold, many forget a couple of important lessons from history. We should know by now that we need to act proactively to avoid waste. That is the cheapest option in the long run and mitigates many of the risks. We should also know that not all data is gold; some of it just glitters, but it isn’t valuable. Fool’s data, like fool’s gold, is essentially worthless no matter how much money you have spent. Even worse, it still produces a nuclear data waste asset that can get you into (legal) trouble.

Data storage and backups

How much data do you need to get to the gold, and at what cost? Storage capabilities as well as storage capacity grow fast, and cost seems under control for now. But will this last forever? And even if so, what’s the ratio of data gold versus raw data stored? Can we improve this ratio? Because even when things are cheap, why even do it if it is not needed?

Protecting the data and the waste

And then there is the cost of protecting that data, as well as the governance around it. The sad reality with data is that once you have it, the probability that it will get you into trouble is real. Data will, sooner or later, get lost, misplaced, sold, hacked, leaked, … it’s almost guaranteed. Ask any real InfoSec professional (not the standard issue, policy-quoting security officers, those are just window dressing) and they will open your eyes to the reality of the risks. It’s very sobering.

The gold rush

As with any hype or gold rush, we can avoid costly mistakes by looking at history. Think about who benefited and who lost out. Think about why this happened and how. Can you see any parallels?

  • Many people are drawn to the data gold fields. Very few strike a gold vein.
  • There is a lot of money to be made selling the tools, supplies and gear to mine the data, process it, store and protect it.
  • Gathering raw data and processing it can be highly toxic.
  • Storing and protecting the gold is expensive and hard.

Let’s dive a bit deeper into these issues, what they mean and how they materialize. They all have one thing in common for sure: the fear of missing out is one of the driving factors.

Gold diggers

The reality is that many people who now become “data scientists” are not all highly skilled mathematicians and experts at statistical analysis. It’s a new hype, just like OLAP tools and data mining were before. We now have BI, big data and data science. That’s where the gold can be found, so that’s where gold diggers flock to. Some have the skills, abilities and the luck to derive wealth from that. Most will just have a job digging.

Pickaxe

There is a lot more data than there is science in the hype created around data scientists. Data scientists should be great at math and statistics. Those are fields of human endeavor that do not scale well. They are not even popular. Attaching “scientist” to something doesn’t make it a science. Be sure of the quality of your gold and make sure it is not fool’s gold. But the gold rush is on. There’s money to be made. In an era where science is viewed by many as “an opinion”, the urge to derive some credibility from adding “science” to any endeavor is a paradox. It is on the rise. Clearly, this shows the value of real science, even when some only seem to like it when it suits their agenda.

But the field is exploding as companies want people working on all the raw data they collect. As one statistician stated: “I used to be a boring, underpaid geek with glasses; now that I’m a data scientist I’m cool, in demand and paid very well, even if the work is less scientific.” That’s the nature of the beast. Her employer got a real statistician, but as the mines require a lot more bodies, many will make do with less.

Merchants

The “sure” money is in the supply chain. Storage, networking, compute … no matter where it is (cloud, fog or on-premises computing). There is money to be made with tools to process the data, protect it (backups) and secure it against unwanted prying eyes and theft. If you’re selling any of those, business is booming.

Everyone seems obsessed with collecting data. Luckily storage costs per GB are down and we can store ever more. We also need to protect more. But who ever deletes data? Who dares push the button? A lot of data is collected “just in case”: we might find gold in there later, and if we don’t have the data, we surely won’t. The fear of missing out in action. That is great if you’re selling stuff. Data lakes, data ponds, storage blobs or tables, MongoDB or SQL PaaS, storage arrays, data processing technology and data protection. These can be products or services, it doesn’t matter: there is money to be made. And while you’re selling, you’re not asking the buyers if they really need it. You don’t question them, you praise their insights and help them protect their investment. Everyone is doing it, so must you. The copy/paste strategy in action.

Nuclear data waste

While the vast growth in data is spectacular, a lot of it is crap. But there is very little effort put into being selective. It’s too cheap right now to collect and store it. No one wants to say “we don’t need it” and be the one to blame if you don’t have it.

But in the age of data leaks, hackers, privacy concerns and ever more legislation around data protection, it’s worth making sure you don’t store data just because you can. Storing data holds inherent risks: the risk of losing it, corrupting it, deriving faulty information from it, leaking it, having it stolen or abused.

In the age of GDPR and many other rightful privacy and data protection concerns, collecting data should be treated like nuclear power. The value it brings is undeniable. But you don’t need vast amounts of nuclear fuel to deliver that value. You do need very good processes, fail-safes, regulation, capable people and technology.

We should start looking at data as nuclear fuel and, as such, after use and processing, part of it is left as toxic nuclear data waste. It’s a long-term toxic byproduct of the process of deriving information from data. Minimize the collection and storage of data to what achieves your goals at minimum cost and risk. Luckily, we have a very good solution for toxic data waste: you can delete it and wipe it securely.

We have to stop thinking that more is better when much of it is junk. The overhead of caring for that junk is ridiculous. We might have to do so for nuclear waste out of need. But for data there are alternatives. Destroy it if you don’t need it. It’s the safest way to handle the legal and reputation risks related to it. That will take a conscious effort.

Efforts and costs

Critical thinking about collecting data is lacking. That is understandable: a lot of the money to be made in data mining is in providing the tools to collect, process, store and protect the data. Even with the many people that warn us of the security issues and legal responsibilities around it, it is often about selling services and products. For many, all this might turn out to be a lot like the other gold rushes: there were a lot more suppliers of tools that got rich than actual finders of profitable gold mines. This means there is also a lot of pressure and incentive to feed the “data is the new gold” beast.

Where a SQL database or a data warehouse at least meant you had to put effort into collecting the data, the rise of unstructured data technologies means that way too often we don’t care and we’ll figure it out later. Imagine doing that with nuclear fuel! For now, the technical advances in storage and data technologies have allowed us to act without too much deliberation on the sanity of our choices. That might change; it might be wise to avoid the cold shower when it does and benefit from minimizing toxic data risks today.

Conclusion

Now, true data gold is very valuable, but make sure you can recognize it. Just going through the motions, buying the tools and copying “in the know” statements from the internet isn’t going to cut it. That is called pretending. Sure, it’s fun. It is also a very dangerous and costly mistake when things get real. At best you look like an idiot with money. Many salespeople will separate you from your money very efficiently.

The smarter organizations already have a data strategy that includes waste avoidance, reduction and management. Many, unfortunately, don’t. For those, collecting data is the main goal, driven by the tyranny of action over strategy. You have to be seen acting and being in charge. The buzzwords have to be present and you have to come across as a “can do sir, yes sir” person. Well, that is what will kill you. The late Norman Schwarzkopf knew this all too well.

Take care of your weaknesses; figure them out before they hurt you and before they destroy your ability to exploit your strengths. That, people, is a strategy exercise. I can do that for you and it will cost you a lot of money. But remember, strategies are not products you can buy. They are not commodities, and as such, buying them is a paradox. A strategy is what will give you the edge over your competitors. If you have others determine your strategy, your competitors will pay them more to find out. So, roll up your sleeves and put in the effort yourself. In the end, it’s all about common sense, and this is true for data mining, AI and BI as well.