After a short break in broadcasts, due to extremely busy times, the Hyper-V Amigos ride again! Yes, fellow Microsoft MVP Carsten Rachfahl (@hypervserver) from Rachfahl IT Solutions and yours truly are back in the saddle!
In episode 15 of the Hyper-V Amigos Showcast we look at VMFleet. VMFleet is a free Microsoft test suite used to stress-test and help analyze the performance of a hyper-converged Storage Spaces Direct (S2D) setup.
In this episode, Hyper-V Amigos Showcast 15 – VMFleet Explained, we show you how to set it up, how it works, and how to measure and evaluate your S2D deployment with it. We also share some real-life tips with you. All this in just around 2 hours, so stick around for the long haul!
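For readers who want a head start before watching, the broad VMFleet workflow looks roughly like the sketch below. VMFleet ships as a set of PowerShell scripts alongside DISKSPD; the script and parameter names here are from memory of the version current for Windows Server 2016, so treat them as assumptions and check each script's built-in help on your own cluster.

```powershell
# Run on one of the S2D cluster nodes, from the folder holding the VMFleet scripts.
# Assumes the CSVs already exist and a sysprepped gold VHDX is available.

# Copy the fleet scripts to the cluster's 'collect' CSV and set up per-node paths.
.\install-vmfleet.ps1 -source C:\VMFleet

# Create the load-generating VMs (one set per node). Parameter names may differ
# slightly per VMFleet version - check Get-Help .\create-vmfleet.ps1 first.
.\create-vmfleet.ps1 -basevhd C:\ClusterStorage\collect\gold.vhdx `
    -vms 20 -adminpass 'P@ssw0rd' -connectuser 'domain\user' -connectpass 'P@ssw0rd'

# Start the fleet, then kick off a DISKSPD sweep:
# 4 KiB blocks, 8 threads, 8 outstanding I/Os, 0% writes (pure read), 60 s per step.
.\start-vmfleet.ps1
.\start-sweep.ps1 -b 4 -t 8 -o 8 -w 0 -d 60

# Watch aggregate IOPS and latency per CSV while the sweep runs.
.\watch-cluster.ps1
```

The episode walks through what those knobs mean and, more importantly, how to interpret the numbers you get back.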
Unless you have been living under a rock you must have heard about Storage Spaces Direct (S2D) in Windows Server 2016, which went RTM in Q4 2016.
There is a lot of enthusiasm for S2D and we have seen, heard of, and assisted in early-adopter situations. But that's a bit of pioneering with OEM/MSFT-approved components. So now bring in the DELL EMC Ready Nodes for Storage Spaces Direct.
DELL EMC Storage Spaces Direct Ready Nodes
So enter the DELL EMC Ready Nodes. These will be available this summer and should help less adventurous but interested customers get on the S2D bandwagon. They were announced at DELL EMC World 2017, and on May 30th some information was published on the TechCenter.
It offers a fully OEM-supported S2D solution to customers that cannot or will not carry the engineering effort associated with a self-built solution.
I was sort of hoping these would leverage the PowerEdge R740xd from the start, but they seem to have opted to begin with the R730xd. I'm pretty sure the R740xd will follow soon, as it's a perfect fit for the use case with its 25Gbps support. In that respect I expect a refresh of the switches offered as well: the S4048 is a great switch but keeps us at 10Gbps. If I were calling the shots I'd have that ready and done sooner rather than later, as the 25/50/100Gbps network era is upon us. There's a reason I've been asking for 25Gbps-capable switches with smaller port counts for the SME market.
Maybe this is an indication of where they think this offering will sell best. But I'd be considering future deployments when evaluating network gear purchases; this gear has a long service life. And when S2D proves itself, I'm sure the size of the deployments will grow and with it the need for more bandwidth. Mind you, 10Gbps isn't bad, certainly not if the Hyper-V nodes are equipped with two dual-port Mellanox ConnectX-3 Pro cards.
Having mentioned them, I am very happy to see the Mellanox RoCE cards in there. That's the best choice they could have made. The 1Gbps on-board NICs are Intel, which matches my preference. The game is afoot!
Hello there! Carsten Rachfahl and I have a new Hyper-V Amigos showcast out. In this Hyper-V Amigo Showcast Episode 11 we talk about a hyper-converged Storage Spaces Direct deployment in Windows Server 2016.
Carsten has a nice demo system to play with and we have some fun with it. We show some crazy performance numbers, see the outstanding performance S2D can deliver, and demonstrate how resilient it is by turning off 2 of the 4 nodes. While playing with the cluster we also look at some new features such as Distributed Storage QoS, Node Isolation, and Node Fairness.
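As a quick taste of the Distributed Storage QoS feature discussed in the episode: on a Windows Server 2016 S2D cluster you can reserve or cap IOPS per virtual hard disk with a policy. A minimal sketch, where the policy name, VM name, and IOPS values are just illustrative examples:

```powershell
# On the cluster: create a QoS policy with a minimum and maximum IOPS target.
$policy = New-StorageQosPolicy -Name Gold -MinimumIops 100 -MaximumIops 5000

# Attach the policy to every virtual hard disk of a VM (run on the Hyper-V node).
Get-VM -Name DemoVM | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

# Inspect the active flows to see how each disk is doing against its policy.
Get-StorageQosFlow
```

Watch the episode to see what this looks like under real load on Carsten's demo cluster.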