Monthly Archives: January 2015

why *wouldn’t* you want a group called “Content Submitters”?

I can’t think of a good reason…

My colleague Mark Bodley has drawn my attention to this KB article: Content Index status of all or most of the mailbox databases in the environment shows “Failed”. He has recently experienced this on an Exchange 2013 CU5 estate and, in the course of his research, has seen evidence that it occurs in CU6 as well. My money would be on it persisting in CU7. He points out that while the article says “all or most” of the databases will be affected, he only saw a minority of databases suffering.

If you read the article you can see that the problem is caused by Exchange failing a permissions check on an AD security group called “Content Submitters”, because the group doesn’t exist. The fix is to, ummm… create an AD security group called “Content Submitters” and grant full access to “Administrators” and “NetworkService”.

I can’t think of a single reason not to go ahead and create that group as part of an install. If you’ve already got Exchange 2013 up and running, why not create the group anyway? That’s one less cause of failed databases you need to worry about.
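For what it’s worth, here is roughly what that looks like in PowerShell, using the ActiveDirectory module. Treat it as a sketch only: the Universal group scope and the description text are my assumptions, not the KB’s wording, so check the article before running anything.

Import-Module ActiveDirectory

# Create the group the KB article asks for.
New-ADGroup -Name "Content Submitters" -GroupCategory Security -GroupScope Universal -Description "Group required by the Exchange 2013 search permission check"

# Grant "Administrators" and "NETWORK SERVICE" full access on the group object itself.
$group = Get-ADGroup -Filter 'Name -eq "Content Submitters"'
$acl = Get-Acl "AD:\$($group.DistinguishedName)"
foreach ($name in "BUILTIN\Administrators", "NT AUTHORITY\NETWORK SERVICE") {
    $sid  = (New-Object System.Security.Principal.NTAccount($name)).Translate([System.Security.Principal.SecurityIdentifier])
    $rule = New-Object System.DirectoryServices.ActiveDirectoryAccessRule($sid, [System.DirectoryServices.ActiveDirectoryRights]::GenericAll, [System.Security.AccessControl.AccessControlType]::Allow)
    $acl.AddAccessRule($rule)
}
Set-Acl "AD:\$($group.DistinguishedName)" -AclObject $acl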


Exchange 2013: setting diagnostic logging levels the quick way

TL;DR how to set a bunch of logging levels with similar names to a specific level, plus a script that sets *everything* back to the defaults.

 

I’ve got a customer who is having trouble with Exchange 2013 and Active Directory, flip-flopping between DCs. I can see it happening in the event log, but there’s no suggestion of what the problem might be. No worries, let’s just hoik* the logging level up on ADAccess and have a look at what’s happening. Mmmm…

First problem with that: with the demise of anything approaching a usable GUI in Exchange 2013, we’ll have to use PowerShell. It’s the Set-EventLogLevel cmdlet that I need, but usage examples are pretty thin on the ground. In fact, there’s just one:

Set-EventLogLevel -Identity "Exchange01\MSExchangeTransport\SmtpReceive" -Level High

Which is peachy, but I don’t know which of the many ADAccess logging objects I need. There are quite a few:

[screenshot: the list of MSExchange ADAccess logging categories]

I don’t fancy running that cmdlet ten times, and my customer fancies it even less. What we need is some PowerShell magic. Why don’t we get the objects and then feed them via the pipeline into the Set-EventLogLevel cmdlet? We can use the Get-EventLogLevel cmdlet. Unfortunately it returns a great long list of objects, so we’ll need to filter them.

[screenshot: a first stab at filtering Get-EventLogLevel]

Oh well, worth a try**. To do that we’ll need the Where-Object cmdlet and the -like comparison operator.

Get-EventLogLevel | Where-Object {($_.Identity) -like "*adaccess*"}

[screenshot: the filtered list of ADAccess logging objects]

Now we can feed that straight into the Set-EventLogLevel cmdlet:

Get-EventLogLevel | Where-Object {($_.Identity) -like "*adaccess*"} | Set-EventLogLevel -Level Medium

[screenshot: the ADAccess logging objects now set to Medium]

You’ll not want to leave it there, though; that’ll fill your event log up quicksmart. Once you’re done, set everything back. The handy “default” radio button that used to work in 2010 is gone:

[screenshot: the old Exchange 2010 diagnostic logging GUI]

 

So what you’ll need is a little script that puts everything back where you found it. If you run Get-EventLogLevel you’ll see that nearly everything is set to Lowest, but there are one or two exceptions:

[screenshot: Get-EventLogLevel output, mostly Lowest with a few exceptions]

Is that MSExchange RBAC\RBAC that’s set to Low, there? God knows. My eyesight isn’t all that. Let’s run a bit more PowerShell and dump out all the objects that aren’t set to Lowest:

[screenshot: a failed attempt at filtering on .Level]

Bugger. That didn’t work. Let’s run Get-EventLogLevel | gm and find out why .Level didn’t line up with the -Level parameter:

[screenshot: Get-EventLogLevel | Get-Member output]

Aha – why call your property after the parameter it sets? What we want isn’t called .Level, it’s called .EventLevel. Duh.

[screenshot: the objects that aren’t set to Lowest]
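For the record, the filter that does work looks something like this (reconstructed from memory, so treat it as a sketch rather than the exact line in the screenshot):

Get-EventLogLevel | Where-Object {($_.EventLevel) -notlike "Lowest"}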

Great, so everything needs to be set to Lowest apart from those objects.

So, we could run a script that sets everything to Lowest, and then sets them to Low afterwards, except… what about those “2”s there? You can’t set a value of 2 with Set-EventLogLevel. I’ve tried. There are two things we could do there: either ignore them, or use the registry PowerShell provider to set them back to 2 afterwards. Ignoring them is the easiest way, isn’t it? Mm?

 

So my script looks like this:

<# This script returns Exchange 2013 server diagnostic logging levels to their defaults.
The first line sets everything except the "MSExchange OAuth\Server" and
"MSExchange BackEndRehydration\Server" objects to Lowest.
Those two objects are set to 2 by default, a value that can't be set using Set-EventLogLevel.
You can set them in the registry at
HKLM\SYSTEM\CurrentControlSet\Services\MSExchange BackEndRehydration\Diagnostics
and
HKLM\SYSTEM\CurrentControlSet\Services\MSExchange OAuth\Diagnostics
The rest of the script sets the exceptions to their correct level.
This script will only work on the local server, obviously. #>

Get-EventLogLevel | Where-Object {($_.EventLevel) -notlike "2"} | Set-EventLogLevel -Level Lowest
Set-EventLogLevel -Identity "MSExchange RBAC\RBAC" -Level Low
Set-EventLogLevel -Identity "MSExchange ADAccess\Topology" -Level Low
Set-EventLogLevel -Identity "MSExchange ADAccess\Validation" -Level Low
Set-EventLogLevel -Identity "MSExchangeADTopology\Topology" -Level Low
Set-EventLogLevel -Identity "MSExchange OAuth\Configuration" -Level Low
Set-EventLogLevel -Identity "MSExchange BackEndRehydration\Configuration" -Level Low

How could it be improved? Well, adding two lines to set those values back to 2 in the registry would be quicker than filtering them out. Adding a server identity parameter that defaults to the local host would be good. Signing it might be a good idea. Maybe later.
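If you did want the registry route, it would look something like this. Note that the “Server” value name under each Diagnostics key is my assumption, so check what is actually there on your own server before running it:

Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchange OAuth\Diagnostics" -Name "Server" -Value 2
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchange BackEndRehydration\Diagnostics" -Name "Server" -Value 2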

 

Why am I using “-notlike” in the first line, instead of “-ne”? I *think* it’s because the value is an integer and -ne is treating the input as a string… whatever. “-ne” doesn’t work. “-notlike” does.
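If you want to check that theory, a quick one-liner will show you what type the property actually is (a sketch; I haven’t dug any further into it):

(Get-EventLogLevel | Select-Object -First 1).EventLevel.GetType().FullName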

ttfn.

* Yeah, that’s a word. Hoik.

** Turns out that Get-EventLogLevel "MSExchange ADAccess*" DOES work, though… never mind, this way is betterish.

Exchange, Windows and the terrifying leap second.

This leap second thing…

Baffling.

We had one in 2012, and in 2008.

I may be wrong, but I don’t recall the world ending. I’d look out the window and check, but I’m in Stevenage, so that might not be as informative as I’d hope.

Clocks get moved about all the time in Exchange; just have a look on virtualised systems for this event:

Information    ########    Microsoft-Windows-Kernel-General    1    None

The system time has changed to 2015-01-19T14:31:54.447000000Z from 2015-01-19T14:31:51.850000000Z.

Look! That Exchange server *went back in time* by about three seconds. It is Dr Who’s mail server. So long as it isn’t enough to break Kerberos, it’ll be fine. (One second forward won’t break Kerberos.)

We’ve seen shifts of six and seven minutes at some of our customers, and that causes issues, especially in DAGs; it’s just one of the reasons I really, really hate virtualised Exchange servers.
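If you fancy seeing how often this happens on your own virtualised servers, something like this will dig the events out (a sketch, run in PowerShell on the server in question):

# List recent "system time has changed" events (Kernel-General, event ID 1) from the System log.
Get-WinEvent -FilterHashtable @{LogName='System'; ProviderName='Microsoft-Windows-Kernel-General'; Id=1} -MaxEvents 50 | Select-Object TimeCreated, Message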

Anyway, here are some links on it:

What’s all this about the Leap Second, and how does it affect the Microsoft Windows OS and other products?

How the Windows Time service treats a leap second

http://en.wikipedia.org/wiki/Leap_second

 


Support Learnings of Exchange

A happy New Year to you all – may it be peaceful and prosperous. To help you on your way, I urge you to read this article from Ross Smith IV on the EHLO blog:

http://blogs.technet.com/b/exchange/archive/2015/01/08/concerning-trends-discovered-during-several-critical-escalations.aspx

Now, you may read this and, if you’ve read my outpourings over the last few years, remark on the similarity… all I can say is “this is because I’m not lying to you”.

So what does Ross call out?

Software patching. He recommends you be on the latest patch, or the one before it. I also recommend you leave it a week or so after release before you even start investigating it, so that you are aware of any issues the latest patch introduces.

Change control. The article points out the necessity of implementing change control for ALL changes, including the simple ones; on the other hand, change control should not be an excuse for inaction. If your change control process is so sclerotic that nothing ever happens, that is just as bad. Possibly worse…

Complexity. Complexity is the enemy. It leads to unpredictable failure, and “grey areas” where everyone just shrugs their shoulders and says “not my problem, boss.” There is a conflict between solution architects, who relish devising clever solutions to complicated problems, and operations, who want to run solutions as cheaply as possible and therefore prefer the simple. With a move to shared services, it is imperative* that we consider reducing complexity in everything we do.

Ignoring recommendations. Respect my authoritah! Not because I know more about it than you do, but because I’m speaking to people who do. People like Devin Ganger.

Deployment practices. You didn’t fill in the role requirements calculator, did you? Or maybe you did, but made up all the input? Your users get 4 mails a day. Yes they do. Uh-huh. Perhaps you followed the advice from a vendor to turn off background database maintenance while running Jetstress? There’s a reason they don’t write that stuff down, you know. Time spent here saves a geometric amount of time (and money) later on. You can’t repair bad design. By the way, there is no law against running through the role requirements calculator every now and then. I’ve checked. It’s a very interesting exercise.

Historical data, AKA baselining, AKA capacity planning, call it what you want. If I had a pound for every customer that was surprised when they ran out of resource, I’d have 13 pounds. I’ve run WebEx sessions on how to do this in the past – if you’d like me to run one again, let me know.

* You should now have at least a line in this week’s game of “Captain Kirk buzzword bingo”.