Comments for Working Hard In IT https://blog.workinghardinit.work My view on IT from the trenches

Comment on Hyper-V and Disk Fragmentation by workinghardinit https://blog.workinghardinit.work/2015/05/25/hyper-v-and-disk-fragmentation/#comment-1672 Fri, 29 May 2015 16:13:48 +0000 https://blog.workinghardinit.work/?p=7556#comment-1672 It’s fragmentation of the internal structure of the VHDX, which has no impact on the space consumed on the CSV/LUN. There is a bit more info in the Windows 2012 R2 optimization guide to check out. Just do a search for that and you’ll find the download.
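
A minimal illustrative sketch of checking that internal fragmentation from the host, assuming the Hyper-V PowerShell module and a hypothetical VHDX path:

# Sketch: inspect block fragmentation inside a VHDX (hypothetical path).
Import-Module Hyper-V
$vhd = Get-VHD -Path 'C:\ClusterStorage\Volume1\VMs\demo.vhdx'
# FragmentationPercentage reports fragmentation of the internal block structure;
# FileSize is what the file actually consumes on the CSV/LUN.
$vhd | Select-Object Path, VhdType, FragmentationPercentage, FileSize, Size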

Comment on Hyper-V and Disk Fragmentation by Eole Netto https://blog.workinghardinit.work/2015/05/25/hyper-v-and-disk-fragmentation/#comment-1671 Fri, 29 May 2015 10:17:26 +0000 https://blog.workinghardinit.work/?p=7556#comment-1671 Referring to “3. Block fragmentation of the VHDX”: are these VHDX files occupying more space on the CSV? In other words, in addition to the performance gain from defragmentation, would we also free up some space on the CSV?

Comment on SMB Direct over RoCE Demo – Hosts & Switches Configuration Example by workinghardinit https://blog.workinghardinit.work/2015/04/22/smb-direct-over-roce-demo-hosts-switches-configuration-example/#comment-1669 Wed, 27 May 2015 02:05:34 +0000 https://blog.workinghardinit.work/?p=7796#comment-1669 Sure, but I would not make it lossless. Some of the examples out there do that, but it’s not needed; some of them even tagged it with the same value as the RoCE traffic. The example you have seems fine. On the switch, just add it to a second or third priority group. Now, this example catches just about any traffic (UDP/TCP not already in SMBDIRECT or DEFAULT), which in reality will be just about everything, and gives it the same priority as default, so unless you want to do something different with it ETS-wise (by using a different priority group) it is not of much use. Your mileage may vary.
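
A minimal host-side sketch of the “different priority group ETS-wise” idea, assuming the NetQos cmdlets; the priority value and bandwidth share are example assumptions, not values from the reply:

# Sketch: give the catch-all traffic's priority (1 here, matching the DEFAULT
# policy quoted below) its own ETS traffic class so it can be treated
# differently from SMB Direct and from the default class.
New-NetQosTrafficClass -Name "Other" -Priority 1 -BandwidthPercentage 20 -Algorithm ETS
Get-NetQosTrafficClass   # review the resulting classes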

Comment on Hyper-V and Disk Fragmentation by workinghardinit https://blog.workinghardinit.work/2015/05/25/hyper-v-and-disk-fragmentation/#comment-1668 Wed, 27 May 2015 01:54:51 +0000 https://blog.workinghardinit.work/?p=7556#comment-1668 It all depends on what storage you have and what type of fragmentation. No one size fits all, but good storage design and not being too cheap go a very long way in preventing fragmentation and having to deal with it.

Comment on Hyper-V and Disk Fragmentation by Bart van de Beek https://blog.workinghardinit.work/2015/05/25/hyper-v-and-disk-fragmentation/#comment-1667 Tue, 26 May 2015 22:37:16 +0000 https://blog.workinghardinit.work/?p=7556#comment-1667 This becomes more and more of a “problem” as VM density per host (local storage) or per CSV (shared storage) goes up. I’ve totally given up on any defragging, as all the IOps together mostly turn out to be largely random from the storage perspective anyway (I know there are exceptions depending on application/workload type, of course). It’s just very costly in terms of IOps, and even outright “unnecessary” when using tiered storage, as you’re just wearing out your SSDs. The tiering optimization even deliberately causes fragmentation, let alone when using dedup. Dynamic disks actually deal with this very well. Of all my VMs, only two ever get above 11% fragmentation, and those two are VHDs instead of VHDXs :-). Designing storage that can handle any workload randomly in a more than sufficient manner is the solution to all of these “problems”, making defragging redundant and even very counter-productive…
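
A minimal sketch of how such per-VM fragmentation figures could be gathered on a host, assuming the Hyper-V PowerShell module:

# Sketch: report fragmentation per virtual disk for every VM on this host,
# worst first; VhdFormat shows whether the disk is VHD or VHDX.
Get-VM |
    Get-VMHardDiskDrive |
    Get-VHD |
    Sort-Object FragmentationPercentage -Descending |
    Select-Object Path, VhdFormat, FragmentationPercentage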

Comment on Updating Hyper-V Integration Services: An error has occurred: One of the update processes returned error code 1603 by mattglg https://blog.workinghardinit.work/2015/05/13/updating-hyper-v-integration-services-an-error-has-occurred-one-of-the-update-processes-returned-error-code-1603/#comment-1666 Tue, 26 May 2015 15:52:48 +0000 https://blog.workinghardinit.work/?p=7298#comment-1666 I’m getting this same issue with a 2012 VM. I can’t uninstall the IS components – any ideas?

Comment on SMB Direct over RoCE Demo – Hosts & Switches Configuration Example by Daniel Lind https://blog.workinghardinit.work/2015/04/22/smb-direct-over-roce-demo-hosts-switches-configuration-example/#comment-1665 Tue, 26 May 2015 06:28:56 +0000 https://blog.workinghardinit.work/?p=7796#comment-1665 Thanks, for now we have to play with stacking, I’m afraid. Can you run the cluster network on the RDMA card as well with these settings?

I’m reading another post that describes almost the same settings as you posted, but with some differences:
# SMB Direct traffic to port 445 is tagged with priority 3
New-NetQosPolicy "SMBDIRECT" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
New-NetQosPolicy "DEFAULT" -Default -PriorityValue8021Action 1
# New-NetQosPolicy "TCP" -IPProtocolMatchCondition TCP -PriorityValue8021Action 1
# New-NetQosPolicy "UDP" -IPProtocolMatchCondition UDP -PriorityValue8021Action 1

# Enable PFC (lossless) only on the priority of the SMB Direct traffic.
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

Is this a more suitable config if I’m going to run the cluster network as well? Can you please help me understand how these QoS settings would look in the switch config for the DCB map policy settings? Most grateful!
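
One possible host-side sketch for carrying cluster traffic alongside SMB Direct, assuming the NetQos cmdlets; the port match is the cluster service port (3343), while the priority value and bandwidth percentages are assumptions. The matching switch-side DCB map only has to map the same priorities to the same priority groups, but its syntax is vendor-specific.

# Sketch: tag cluster traffic (port 3343) with its own priority (5 is arbitrary)
New-NetQosPolicy "CLUSTER" -IPDstPortMatchCondition 3343 -PriorityValue8021Action 5
# Example ETS split: SMB Direct 50%, cluster 5%, everything else stays in the default class
New-NetQosTrafficClass -Name "SMBDirect" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
New-NetQosTrafficClass -Name "Cluster" -Priority 5 -BandwidthPercentage 5 -Algorithm ETS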

Comment on Unable to retrieve all data needed to run the wizard. Error details: “Cannot retrieve information from server “Node A”. Error occurred during enumeration of SMB shares: The WinRM protocol operation failed due to the following error: The WinRM client sent a request to an HTTP server and got a response saying the requested HTTP URL was not available. This is usually returned by a HTTP server that does not support the WS-Management protocol. by pariswells https://blog.workinghardinit.work/2014/01/28/unable-to-retrieve-all-data-needed-to-run-the-wizard-error-details-cannot-retrieve-information-from-server-node-a-error-occurred-during-enumeration-of-smb-shares-the-winrm-protoc/#comment-1664 Tue, 26 May 2015 05:18:06 +0000 https://blog.workinghardinit.work/?p=5762#comment-1664 Thanks very much for this, in the end I disabled IPv6 on the two clusters due to no internal network support, and it fixed it!
