Unless you have been living under a rock, you must have heard about Storage Spaces Direct (S2D) in Windows Server 2016, which went RTM in Q4 2016.
There is a lot of enthusiasm for S2D, and we have seen, heard of, and assisted in early adopter situations. But that's still a bit of pioneering, even with OEM/MSFT-approved components. So now enter the DELL EMC Ready Nodes for Storage Spaces Direct.
DELL EMC Storage Spaces Direct Ready Nodes
The DELL EMC Ready Nodes will be available this summer and should help less adventurous but interested customers get on the S2D bandwagon. They were announced at DELL EMC World 2017, and on May 30th some information was published on the TechCenter:
- Dell EMC Microsoft Storage Spaces Direct Ready Nodes – Solution Overview
- Dell EMC Microsoft Storage Spaces Direct Ready Nodes – Deployment Guide
It offers a fully OEM-supported S2D solution to customers that cannot or will not carry the engineering effort associated with a self-built solution.
I was sort of hoping these would leverage the PowerEdge R740xd from the start, but they seem to have opted to begin with the DELL R730xd. I'm pretty sure the R740xd will follow soon, as it's a perfect fit for the use case with its 25Gbps support. In that respect, I expect a refresh of the switches offered as well: the S4048 is a great switch, but it keeps us at 10Gbps. If I were calling the shots, I'd have that ready and done sooner rather than later, as the 25/50/100Gbps network era is upon us. There's a reason I've been asking for 25Gbps-capable switches with smaller port counts for SMEs.
Maybe this is an indication of where they think this offering will sell best. But I'd be considering future deployments when evaluating network gear purchases; these have a long service life. And when S2D proves itself, I'm sure the size of the deployments will grow and, with it, the need for more bandwidth. Mind you, 10Gbps isn't bad, especially if the Hyper-V nodes would be running 2 * dual-port Mellanox ConnectX-3 Pro cards.
Having mentioned them, I am very happy to see the Mellanox RoCE cards in there. That's the best choice they could have made. The 1Gbps on-board NICs are Intel, which matches my preference. The game is afoot!
Would you say that you only place VHDs on an S2D cluster, or is S2D also for general file server workloads like user home directories and general file shares for information workers with Excel/Word documents?
You can do that inside of VMs if there are no showstoppers to that idea. For general file shares it's a bit of an expensive proposition, as two standard servers in a 2-node cluster will serve a truckload of files. Also, many workloads with user files might need NTFS instead of ReFS for now.
OK, this would actually be our scenario: 2 VMs in an S2D cluster to serve home/common file shares for Word/Excel, with NTFS for permissions. This would be to simplify the setup rather than have to deal with shared storage inside the VMs. Is this a good or bad idea? Is S2D supported for these kinds of general-purpose file server shares with Word/Excel etc.?
It's not their primary target use case; that's not to say it would not work, but it would not be my 1st choice. I cannot speak for MSFT in regards to support, but I'd say "no". You have the option of using a shared VHDX (VHD Set). You don't need CSV/SOFS for continuously available general-purpose file shares, so that's a non-issue in regards to backup limitations with a VHD Set. It also avoids the metadata overhead associated with user files and the S2D storage traffic between your VMs. Where I would use the S2D-in-guest option is in Azure IaaS, where I cannot do shared VHDX and have no other option.
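For those wondering what that shared VHDX (VHD Set) option looks like in practice, here's a minimal sketch in PowerShell, run on the Hyper-V hosts. The VM names (FS01/FS02), the CSV path, and the size are hypothetical examples, not from any specific deployment:

```powershell
# Create a VHD Set on cluster storage. The .vhds extension is what
# makes this a VHD Set rather than a plain VHDX file.
New-VHD -Path 'C:\ClusterStorage\Volume1\FileServerData.vhds' `
    -SizeBytes 2TB -Dynamic

# Attach the same VHD Set to both guest cluster nodes.
# -SupportPersistentReservations enables the SCSI persistent
# reservations a guest failover cluster needs for shared storage.
foreach ($vm in 'FS01', 'FS02') {
    Add-VMHardDiskDrive -VMName $vm `
        -Path 'C:\ClusterStorage\Volume1\FileServerData.vhds' `
        -SupportPersistentReservations
}
```

Inside the guests you would then bring the shared disk online, format it NTFS, and build a classic general-purpose file server cluster role on top of it.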
Great input – thanks! And I agree with S2D in Azure. Keep up the good work with the blog, I always enjoy your posts regarding how you setup and design infrastructure stuff 🙂
I totally agree that S2D is pioneering right now. This kind of documentation was very much needed; it took me sooo much time getting all the details, starting with the RTM of WS16. I would have expected more from Dell/MS. And even with all that checking, it seems we have a very different config. It's good we just rented the hardware, because I saw this coming.. 😉
And in that documentation I also found:
NOTE: At present, within the Hybrid and All-Flash (AF) configurations, the minimum and maximum number of nodes in a cluster
I expect it to change, but boy.. this 4-node config based on the R730xd cost us 122k euro, excluding the S4048 switches. Not exactly cheap, and I'm all for distributed storage, but with all the pioneering and the slow response from Dell/MS I'm starting to wonder if it's all worth it..