So… content indexing the passive node. Whassat all about, ‘en?

I surprised an architect at one of my customers today. I told him that in a DAG, the search service on a server indexes the active copy of the database. He didn't believe me. I asked him how he thought it worked, and he said "replication". Uh-uh.

It's pretty easy to get that impression. After all, it's sort of what the official documentation says is going on:

"During the seeding process, DAG members with a passive mailbox database copy replicate the content index catalog from the DAG member that has the active mailbox database copy."

But that's during the *seeding* process. What about during normal operations?

After initial seeding, the server with the passive database copy gets message data from the server with the active database and performs content indexing locally.

What does that even mean? I’ll tell you… it means the server with the passive database makes a connection over the network to the *active* database, because the database has to be *mounted* for any MAPI activities to take place. That’s right, it makes a MAPI connection. This also has ramifications for your network, because a MAPI connection is a… anybody? That’s right; it’s a *client* connection, so the traffic is carried over the CLIENT network, not the REPL network.


But it isn’t going to be much traffic is it? I mean, it’s just a bunch of indexing, right?


Hmmm. You'da thunk, but no. Microsoft claim in their documentation for Exchange 2016 that indexing the local copy of a database, as opposed to the active copy, will save approximately 40% of the traffic. The ever-awesome Rhoderick Milne says in this thread that it's roughly equivalent to the total of the REPL traffic.


The official documentation does carry a community contribution at the bottom stating in plain English how things work, and there's a bunch more detail here. That last article, while awesome, is most impressive for its tone of surprise.


Wahey! UCDay! Hooray!

So… I’m properly honoured to have been selected to present at UCDay on the 28th of September. I’m really excited because it’s been quite some little while since I’ve done this sort of thing – decades rather than years! Hopefully I’ll not let myself down too badly, eh?

What is UCDay? It's the UK's only independent Microsoft unified communications technical conference, focussing on Skype, Office 365 and Exchange; basically a day of sessions from some of the best technical presenters in the world (and me). People like Michael Van Horenbeeck, one of my favourite technical authors, and Brian Reid, who delivered the hardest three days of training I've ever had in my life during my MCM rotation. Pretty much every speaker is an MVP, an MCM or an MCT; in the case of Gary Steere, all three. There are 18 sessions to choose from, in three tracks: Skype for Business, Office 365 and Exchange. This conference is worth every penny, especially as it's FREE.

So, what am I going to do? Basically a session on designing Exchange for shared services – how simplification and repeatable design units reduced the number of support calls we generated. Why is that of interest to anybody? Because if it reduced support costs for us, it will reduce support costs for other people too (probably). There's no magic – it's just following good design and documentation practices, but with some real-world figures to reinforce the common sense. There'll be a little bit on how (I hope) we're going to apply it to Exchange 2016, and a little bit on how it can be extended to other applications. I hope people find it interesting.

The conference is at the National Motorcycle Museum near Birmingham Airport on 28th September. I'd thoroughly recommend anyone with an interest in Exchange, Office 365 or Skype for Business to attend. I'd be there even if I wasn't speaking.

There's also a quiz the night before. Shriek. I LOVE quizzes. My two favourite answers are "Tavares" and "goldcrest".

What difference does hyperthreading make to a VMware guest CPU config?

There is some controversy in the Lync world at the moment regarding hyperthreading when virtualising. Do you follow Microsoft's advice, and turn hyperthreading off at the host level? VMware would much prefer you didn't, actually. In Exchangeland, the compromise was reached some time ago: you turn it ON at the host, but turn it OFF at the guest. We will now pause for a brief torrent of dispute.

This depends on you having very little CPU contention. If you have CPU contention, you will end up increasing the amount of CPU ready time while the ESX scheduler waits for physical CPUs to become available – in other words, this approach works best in an environment with CPU headroom. If you have undersized, then guess what: you're stuck with leaving hyperthreading enabled at the guest level. Sizing correctly is key – size for the physical CPU cores in your host, counting all the Exchange servers on the host. So if you physically have two hex-core sockets, you have 12 vCPUs to allocate to Exchange servers on that host. No more. Enabling hyperthreading doesn't make any difference here.

But that's off the point*. I was asked "what difference does turning off hyperthreading make to the guest, Nick? Will SQL lose a scheduler? Will it all go horribly wrong in my guest?" So here is the answer.


No difference.


And no.


And here's the proof.

This is an edge server in one of my labs. It has two sockets, with two cores each. HT is enabled at the HOST level as well.


Yeah, I know. That's a really dull way to set stuff up.


Hyperthreading is set to "any" in the guest. What does this look like inside the guest?



Which is, I'd hazard, exactly what we'd expect.
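For anyone who'd rather not squint at screenshots, here's one way to pull the same information from inside the guest – a sketch using WMI, which was the tool of this era:

```powershell
# Show sockets, cores and logical processors as the guest OS sees them.
# With hyperthreading hidden from the VM, NumberOfCores and
# NumberOfLogicalProcessors will be equal for each virtual socket.
Get-WmiObject Win32_Processor |
    Select-Object SocketDesignation, NumberOfCores, NumberOfLogicalProcessors
```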


I was asked how this differed if we disable hyperthreading at the guest level. So…







Exactly the same. Which is, of course, what you'd expect. But it's nice to have proof, right?



*Do you know what else is off the point? If you have two hex-core sockets in your host, you're likely to hit some issues when you try to avoid those NUMA boundaries, aren't you? Exchange is sized for multiples of four.

Your new PAL

As you may recall, I'm very keen on performance analysis. It's kind of a hobby, like fishing, but less wet. And with fewer fish. Plus I can do it indoors, in the warm. One of my favourite fishing rods – sorry, tools – is the Performance Analysis of Logs tool, PAL. For the last two years, however, it's been a little bit hamstrung in that there has been no Exchange 2013 threshold template. This has made me a sad panda.

Well yesterday that changed. Clint Huffman has published a new version (2.7.3) and it includes a 2013 template. O frabjous day. The template was written by Adrian Moore, a senior PFE at Microsoft.

Download it now, I should. Be aware, though, that it is quite different to the enormous 2010 template. It doesn't inherit the system overview threshold template, for one thing, so it makes no comment regarding things like CPU and memory, other than for the counters listed in the article the template is based on, Exchange 2013 Performance Counters. I don't foresee this as a problem, but it may mean running PAL twice if you don't spot anything obvious the first time. On the other hand, it does mean it'll run a damn sight faster.

And that’s another blog I’ll have to follow.

Tech Camp UK

I did something a little different yesterday – I spent the day with young, enthusiastic, *smart* people, and it was great.

My employer agreed to loan out some senior engineers (and me) to Ed Baker of the Digital Skills Agency to run a project day for young people interested in careers in IT, as part of one of their Tech Camps. We had about 35 people attend our session on Internet of Things. Nig Greenaway kicked things off with a short talk on the topic, and then we gave them a project to complete in groups of four – come up with an IoT idea related to care of the elderly.

I was absolutely blown away by the quality of the work these folks produced; they not only came up with some properly innovative ideas (which I won't blab about here; they were their ideas, after all), but they came up with ways to deploy, support, fund and secure them, and finally gave engaging presentations on them to a roomful of people they barely knew.

The most amazing thing about them, in one way, though, is that they are all unemployed. I mean… how? They are presentable (see above), articulate, educated (many of them had degrees), enthusiastic, capable and a downright joy to spend the day with. If I was responsible for hiring (as opposed to responsible for fixing email servers) I'd take them as a job lot.

In the absence of any concrete assistance, I can only proffer my advice (hey kids, don't do marine biology!), which may be wildly idiosyncratic, possibly unhelpful, but hopefully isn't actively harmful. So…


To the folk I met yesterday:

IT moves pretty fast; if you hang around waiting for someone to tell you how to do it, you might miss it. There's lots of really good quality training available for free at Coursera, edX and FutureLearn, among others. Codecademy is pretty good also, if less formal; you'll get the chance to try lots of different languages, including HTML and Java. If I was starting out, I'd look to either learn a language like Java or Python if I was thinking about coding, or possibly take a more academic compsci course. There's a bunch of courses on there around app and game programming, too. There's lots of stuff on the Microsoft Virtual Academy, but it's mostly pretty proprietary. Good if you need to learn Microsoft technology, though.

Doing this stuff on your own can be a bit disheartening – luckily, you know 30-odd other keen people in the same boat as you. Organise a study group to aid motivation and understanding. Use tools like TeamViewer to share your screens.

If you ARE going to do a coding course, getting the certificate of participation isn't likely to be enough – you need to do stuff with it. Write short programs, build web sites. Amazon Web Services does a long free trial – enough to keep your own server running continuously for a year. You can use that to showcase what you're doing.

It's important that you actually enjoy this stuff – if you want to be on the technical side in IT, you're going to spend a lot of your own time learning new things. Also, as Steven Levitt said the other week on the Freakonomics podcast: "When I interview young professors and try and decide if we should hire them, I've evolved over time to one basic rule: if I think they love economics and it's fun for them, I am in favor of hiring them. No matter how talented they seem otherwise, if it seems like a job or effort or work then I don't want to hire them." Basically, if you enjoy it, you'll be thinking about it all the time. If you only think of it as work, you'll spend all your time stressing over it.

Make sure you have sources of inspiration – I like MakeUseOf and Instructables, but there are lots of others.

Tell people how amazing you are, and all about the amazing things you're doing. Get on LinkedIn, if you're not there already, and hassle people you know for recommendations (you know me, for instance…). When you do something interesting, write it up on your blog, then tweet that you've written it. Obviously, you need to keep your "professional" social profile separate from your personal one, but you know this already.

Finally, this might not get you a job, but it will get you useful new skills and experience. It may give you the capability to turn that incredible idea you had (or will have soon) into a viable business.

Good luck, all of you. You deserve it. Stay in touch – DM me on Twitter, or ping me on LinkedIn.


EDIT: you may also find this page really useful. It's from a long time ago – 2008 – but things haven't improved that much. Plus it has links to really interesting stuff, like the 2015 salaries and careers guide.

Herts BCS meeting, March 2015

My son Tom and I attended the monthly BCS meeting last week in the Lindop building at the University of Hertfordshire: a fantastic session entitled "Kit Computer: Talk, Build and Program", presented by Mr Stephan Barnard of Noble Touch Ltd. The talk covered the design and build of a simple computer based around the ATmega328 microcontroller – the heart of the Arduino hobby kit – and programming it as a light meter, among other things.

It was probably the best attended of the BCS meetings I have been to in Hatfield; at least 80 people, many of them students. No surprise, really, as it was a practical session with a free kit computer. The slides, such as they were, are here, but this was very much a hands-on evening: we were given a small bag of components, some instructions, and then left to get on with it.


Stephan talked around the subject while we worked, explaining what the components were for (for instance, the crystal oscillator is used to provide a faster clock signal than the ATmega will generate if left to its own devices) and other things you can do with the chip – for example, have a look at the self-balancing two-wheel robot, here.

After 90 minutes, we had a working light meter that also functions as a disco light system for small woodland creatures.


This picture is quite bright.


This picture is not. Putting the photoresistor next to an LED is possibly a design flaw.

We also knew a lot more than previously about how to pulse LEDs so that they can be driven at higher-than-recommended voltages, the dangers of ordering a few gross of short wires from Ali Baba, and how to use software to emulate a 50 kilo-ohm resistor. Let's face it, we knew nothing about any of these things to start with, so it wasn't hard to come away enlightened.

The lad found all this very impressive; our Arduino clones are on order, due to arrive next week, and luckily – so we don't have to scratch around wondering what to do with them – we've been asked to write a review of "Python Programming for Arduino" for Packt. Which is nice.

Anyone casting about looking for a potential speaker or activity for a meeting, say, could do very much worse than speak to Mr Barnard. I can thoroughly recommend him for providing an engaging and interesting evening, and the small flashing souvenir is very Mr Benn. The next BCS Herts meeting is on 16th April at the Steria Campus in Hemel Hempstead. It’s entitled “The Origin of British Computers” and is presented by Alan Wray, late of this parish; it may therefore be of particular interest to older Hertfordshire employees; I’ll probably not take the boy.

Why *wouldn't* you want a group called "Content Submitters"?

I can’t think of a good reason…

My colleague Mark Bodley has drawn my attention to this KB article: Content Index status of all or most of the mailbox databases in the environment shows "Failed". He has recently experienced this on an Exchange 2013 CU5 estate and, during the course of his research, has seen evidence that it occurs in CU6. My money would be on it persisting in CU7 as well. He points out that while the article states "all or most" of the databases will be affected, he only saw a minority of databases suffering.

If you read the article you can see that the problem is caused by Exchange failing a permissions check on an AD security group called "Content Submitters", because it doesn't exist. The fix is to, ummm… create an AD security group called "Content Submitters" and grant full access to "Administrators" and "NetworkService".

I can’t think of a single reason not to go ahead and create that group as part of an install. If you’ve already got Exchange 2013 up and running, why not create the group anyway? That’s one less cause of failed databases you need to worry about.
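If you'd rather script it than click through ADUC, something like this should do the job. This is a sketch, not the KB's own instructions (the article just says to create the group and grant the rights); it assumes the ActiveDirectory module is available, and New-ADGroup will drop the group in the default Users container unless you specify a path:

```powershell
# Create the group the content index permissions check looks for...
Import-Module ActiveDirectory
New-ADGroup -Name "Content Submitters" -GroupCategory Security -GroupScope Universal

# ...then grant Administrators and NetworkService full control over it.
$group = Get-ADGroup "Content Submitters"
$acl   = Get-Acl "AD:\$($group.DistinguishedName)"
foreach ($sidType in "BuiltinAdministratorsSid", "NetworkServiceSid") {
    $sid = New-Object System.Security.Principal.SecurityIdentifier(
        [System.Security.Principal.WellKnownSidType]::$sidType, $null)
    $ace = New-Object System.DirectoryServices.ActiveDirectoryAccessRule(
        $sid,
        [System.DirectoryServices.ActiveDirectoryRights]::GenericAll,
        [System.Security.AccessControl.AccessControlType]::Allow)
    $acl.AddAccessRule($ace)
}
Set-Acl "AD:\$($group.DistinguishedName)" $acl
```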

Exchange 2013: setting diagnostic logging levels the quick way

TL;DR how to set a bunch of logging levels with similar names to a specific level, plus a script that sets *everything* back to the defaults.


I've got a customer who is having trouble with Exchange 2013 and Active Directory, flip-flopping between DCs. I can see it occurring in the event log, but there's no suggestion of what the problem might be. No worries, let's just hoik* the logging level up on ADAccess and have a look at what's happening. Mmmm…

First problem with that: with the demise of anything approaching a usable GUI in Exchange 2013, we'll have to use PowerShell. It's the Set-EventLogLevel cmdlet that I need, but usage examples are pretty thin on the ground. In fact, there's just one:

Set-EventLogLevel -Identity "Exchange01\MSExchangeTransport\SmtpReceive" -Level High

Which is peachy, but I don't know which of the many ADAccess logging objects I need. There are quite a few:


I don't fancy running that cmdlet ten times, and my customer fancies it even less. What we need is some PowerShell magic. Why don't we get the objects, and then feed them via the pipeline into the Set-EventLogLevel cmdlet? We can use the Get-EventLogLevel cmdlet. Unfortunately it returns a great long list of objects, so we'll need to filter them.


Oh well, worth a try**. To do that we'll need the Where-Object cmdlet and the -like comparison operator:

get-EventLogLevel | Where-Object {($_.identity) -like "*adaccess*"}


Now we can feed that straight into the Set-EventLogLevel cmdlet:

get-EventLogLevel | Where-Object {($_.identity) -like "*adaccess*"} | set-EventLogLevel -level medium


You'll not want to leave it there, though: that'll fill your event log up quicksmart. Once you're done, set everything back. The handy "default" radio button that used to work in 2010 is gone:



So what you'll need is a little script that puts everything back where you found it. If you run Get-EventLogLevel you'll see that nearly everything is set at "Lowest", but there are one or two exceptions:


Is that MSExchange RBAC\RBAC that's set to "Low", there? God knows. My eyesight isn't all that. Let's run a bit more PowerShell and dump out all the objects that aren't set to "Lowest":


Bugger. That didn't work. Let's run get-EventLogLevel | gm and find out why .level didn't select the -Level parameter:


Aha – why would you call your property after the parameter it sets? What we want isn't called .level, it's called .eventlevel. Duh.
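So, armed with the right property name, the filter that actually works looks like this:

```powershell
# Dump out every logging object that isn't at the default "Lowest" level;
# the property is .eventlevel, not .level.
Get-EventLogLevel | Where-Object {($_.eventlevel) -notlike "lowest"}
```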


Great, so everything needs to be set to "Lowest" apart from those objects.

So, we could run a script that sets everything to "lowest", and then sets the exceptions to "low" afterwards, except… what about those "2"s there? You can't set a value of 2 with Set-EventLogLevel. I've tried. There are two things we could do there: either ignore them, or use the registry PowerShell provider to set them back to 2 afterward. Ignoring them is the easiest way, isn't it? Mm?


So my script looks like this:

<# This script returns Exchange 2013 server diagnostic levels to their defaults.
   The first line sets everything except the "MSExchange OAuth\Server" and
   "MSExchange BackEndRehydration\Server" objects to "Lowest".
   Those objects are set to 2 by default, a value that can't be set using Set-EventLogLevel.
   You can set them in the registry at
   HKLM\SYSTEM\CurrentControlSet\Services\MSExchange BackEndRehydration\Diagnostics
   and
   HKLM\SYSTEM\CurrentControlSet\Services\MSExchange OAuth\Diagnostics
   The rest of the script sets the exceptions back to their correct levels.
   This script will only work on the local server, obviously. #>
Get-EventLogLevel | Where-Object {($_.eventlevel) -notlike "2"} | Set-EventLogLevel -Level Lowest
Set-EventLogLevel -Identity "MSExchange RBAC\RBAC" -Level Low
Set-EventLogLevel -Identity "MSExchange ADAccess\Topology" -Level Low
Set-EventLogLevel -Identity "MSExchange ADAccess\Validation" -Level Low
Set-EventLogLevel -Identity "MSExchangeADTopology\Topology" -Level Low
Set-EventLogLevel -Identity "MSExchange OAuth\Configuration" -Level Low
Set-EventLogLevel -Identity "MSExchange BackEndRehydration\Configuration" -Level Low

How could it be improved? Well, adding two lines to set those registry values back to 2 would make it quicker, rather than filtering them out. Adding a server identity parameter that defaults to the local host would be good. Signing it might be a good idea. Maybe later.
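For what it's worth, those two registry lines might look something like this. A sketch: I'm assuming the level is stored as a DWORD named "Server" under each Diagnostics key, matching the "\Server" objects the script skips – check your own registry before trusting me:

```powershell
# Restore the two "Server" diagnostic levels that default to 2 and that
# Set-EventLogLevel can't set.
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchange BackEndRehydration\Diagnostics" -Name "Server" -Value 2
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchange OAuth\Diagnostics" -Name "Server" -Value 2
```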


Why am I using -notlike in the first line, instead of -ne? I *think* it's because the value is an integer, and -ne is interpreting the input as a string… whatever. -ne doesn't work. -notlike does.


* Yeah, that's a word. Hoik.

** Turns out that get-EventLogLevel "msexchange adaccess*" DOES work, though… never mind, this way is betterish.

Exchange, Windows and the terrifying leap second

This leap second thing…


We had one in 2012, and in 2008.

I may be wrong, but I don’t recall the world ending. I’d look out the window and check, but I’m in Stevenage, so that might not be as informative as I’d hope.

Clocks get moved about all the time in Exchange; just have a look on virtualised systems for this event:






The system time has changed to 2015-01-19T14:31:54.447000000Z from 2015-01-19T14:31:51.850000000Z.

Look! That Exchange server *travelled in time* by nearly three seconds. It is Doctor Who's mail server. So long as the shift isn't enough to break Kerberos, it'll be fine. (One second forward won't break Kerberos.)

We've seen shifts of six and seven minutes at some of our customers, and that causes issues, especially in DAGs; it's just one of the reasons I really, really hate virtualised Exchange servers.

Anyway, here are some links on it:

What’s all this about the Leap Second, and how does it affect the Microsoft Windows OS and other products?

How the Windows Time service treats a leap second



Support Learnings of Exchange

A happy New Year to you all – may it be peaceful and prosperous. To help you on your way, I urge you to read this article from Ross Smith IV on the EHLO blog:

Now, you may read this and, if you've read my outpourings over the last few years, remark on the similarity… all I can say is "this is because I'm not lying to you".

So what does Ross call out?

Software patching. He recommends you be on the latest patch, or the one before it. I also recommend you leave it a week or so after release before even contemplating it, so that you are aware of any issues introduced in the latest patch.

Change control. The article points out the necessity of implementing change control for ALL changes, including the simple ones; on the other hand, change control should not be an excuse for inaction. If your change control process is so sclerotic that nothing ever happens, that is just as bad. Possibly worse…

Complexity. Complexity is the enemy. It leads to unpredictable failure, and "grey areas" where everyone just shrugs their shoulders and says "not my problem, boss." There is a conflict between solution architects, who relish devising clever solutions to complicated problems, and operations, who want to run solutions as cheaply as possible and therefore prefer the simple. With a move to shared services, it is imperative* that we consider reducing complexity in everything we do.

Ignoring recommendations. Respect my authoritah! Not because I know more about it than you do, but because I’m speaking to people who do. People like Devin Ganger.

Deployment practices. You didn't fill in the role requirements calculator, did you? Or maybe you did, but made up all the input? Your users get 4 mails a day. Yes they do. Uh-huh. Perhaps you followed the advice from a vendor to turn off background database maintenance while running Jetstress? There's a reason they don't write that stuff down, you know. Time spent here saves a geometric amount of time (and money) later on. You can't repair bad design. By the way, there is no law against running through the role requirements calculator every now and then. I've checked. It's a very interesting exercise.

Historical data, AKA baselining, AKA capacity planning; call it what you want. If I had a pound for every customer that was surprised when they ran out of resource, I'd have 13 pounds. I've run WebEx sessions on how to do this in the past – if you'd like me to run one again, let me know.

*You should now have at least a line in this week's game of "Captain Kirk buzzword bingo".