There is some controversy in the Lync world at the moment regarding hyperthreading when virtualising. Do you follow Microsoft's advice and turn hyperthreading off at the host level? VMware would much prefer you didn't, actually. In Exchange-land the compromise was reached some time ago: you turn it ON at the host, but turn it OFF at the guest. We will now pause for a brief torrent of dispute.

That compromise depends on you having very little CPU contention. If you do have CPU contention, you will end up increasing CPU ready time while the ESX scheduler waits for physical CPUs to become available. In other words, this works best in an oversized environment; if you've undersized, then guess what, you're stuck with leaving hyperthreading enabled at the guest level.

Sizing correctly is key: size for the physical CPU cores in your host, and add up the vCPUs of all the Exchange servers on that host. So if you physically have two hex-core sockets, you have 12 vCPUs to allocate to Exchange servers on that host. No more. Enabling hyperthreading doesn't change that number, because logical processors don't add real scheduling capacity. (There's a back-of-envelope sketch of the sums down in the footnote.) But that's off the point*.

I was asked: "What difference does turning off hyperthreading make to the guest, Nick? Will SQL lose a scheduler? Will it all go horribly wrong in my guest?" So here is the answer.
No difference.
No.
And here's the proof.
This is an Edge server in one of my labs. It has two sockets with two cores each, and HT is enabled at the HOST level as well.
Yeah, I know. That's a really dull way to set stuff up.
Hyperthreading sharing is set to “any” in the guest's settings. What does this look like inside the guest?
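If you'd rather query it than squint at a screenshot, here's a quick check you can run inside the guest. A minimal sketch, assuming Python and the third-party psutil package happen to be installed in the VM (which they won't be on a locked-down Edge server, so treat it as illustrative):

```python
import psutil  # third-party: pip install psutil

# What the guest OS is actually presented with. ESX exposes vCPUs to the
# guest as plain cores; the host's hyperthreads never show up in here,
# whatever the HT sharing setting says.
print("logical processors:", psutil.cpu_count(logical=True))   # 4 on this 2x2 lab VM
print("physical cores:    ", psutil.cpu_count(logical=False))  # also 4
```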
Which is, I'd hazard, exactly what we'd expect: four plain cores, with no sign of the host's hyperthreads.
I was asked how this differed if we disable hyperthreading at the guest level. So…
And…
Exactly the same. Which is, of course, what you'd expect, but it's nice to have proof, right?
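If you want to flip that setting yourself rather than clicking through the UI, here's a rough sketch using pyVmomi (the vSphere Python SDK). The vCenter name, credentials and VM name are all placeholders for your own lab, the connection arguments vary a little between pyVmomi versions, and per-VM HT sharing only exists on vSphere releases of this vintage:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab only: skip certificate validation. Don't do this in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",              # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "edge01")    # placeholder VM name
    view.Destroy()

    # htSharing takes "any" (the default), "internal" or "none".
    # The VM may need to be powered off for the change to apply.
    spec = vim.vm.ConfigSpec(flags=vim.vm.FlagInfo(htSharing="none"))
    vm.ReconfigVM_Task(spec=spec)
finally:
    Disconnect(si)
```

Either way, the guest sees the same four cores before and after, which is the whole point of this post.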
*Do you know what else is off the point? If you have two hex-core sockets in your host, you're likely to hit some issues when you try to avoid crossing those NUMA boundaries, aren't you? Exchange is sized in multiples of four vCPUs, and four doesn't pack neatly into a six-core NUMA node.
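For the arithmetic-inclined, here's the back-of-envelope check promised earlier, as a few lines of Python. The VM names and vCPU counts are made up; the sums are the point:

```python
# Two hex-core sockets: 12 physical cores to size against. Hyperthreading
# doesn't add to this number - logical processors aren't scheduling capacity.
sockets, cores_per_socket = 2, 6
physical_cores = sockets * cores_per_socket  # 12

# Hypothetical Exchange VMs on this host and their vCPU allocations.
exchange_vms = {"exch01": 4, "exch02": 4, "exch03": 4}
total_vcpus = sum(exchange_vms.values())
print(f"{total_vcpus} vCPUs against {physical_cores} cores:",
      "fine" if total_vcpus <= physical_cores
      else "oversubscribed - hello, CPU ready time")

# The NUMA wrinkle: each socket is one six-core NUMA node, but Exchange VMs
# come in multiples of four vCPUs, so nothing packs cleanly.
numa_node_cores = cores_per_socket
for vcpus in (4, 8, 12):
    note = ("fits in one node, leaves 2 cores over" if vcpus <= numa_node_cores
            else "spans NUMA nodes")
    print(f"{vcpus}-vCPU VM: {note}")
```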