Monthly Archives: July 2011

Preventing Automatic Forwarding by Client Side Rules

A customer has had a query regarding how to prevent automatic forwarding and automatic replies in Exchange.

In Exchange 2003 this is controlled by the properties of Internet Message Formats:

Exchange Organization|Global Settings|Internet Message Formats.

Chances are there is only one format, called “Default”. Right click, choose Properties, and away you go.

Things are slightly different in Exchange 2007 and 2010:

  1. Open Exchange Management Console
  2. Expand Organization Configuration-> Hub Transport
  3. In the right pane select the Remote Domains tab
  4. Right click Default and choose Properties
  5. On the General tab you can set which types of Out of Office messages you will allow to be sent out. By default only external OOF messages are allowed. You can change the option to also allow OOF messages created by Outlook 2003 and earlier.
    On the tab named “Format of original message sent as attachment to journal report:” (Exchange 2007) or “Message Format” (Exchange 2010) you can enable or disable the automatic replying/forwarding.
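The same Remote Domain settings can be changed from the shell with Set-RemoteDomain. A minimal sketch against the built-in Default remote domain:

```powershell
# Check the current settings on the Default remote domain
Get-RemoteDomain Default | Format-List AutoForwardEnabled, AutoReplyEnabled, AllowedOOFType

# Block client-side auto-forwards and auto-replies to external recipients
Set-RemoteDomain Default -AutoForwardEnabled $false -AutoReplyEnabled $false

# Control which Out of Office messages go out (External is the default)
Set-RemoteDomain Default -AllowedOOFType External
```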

The call became interesting, however, when the customer wanted to know if this would affect rules set by the Out of Office Assistant. I couldn’t find a definitive answer, so I set about playing.

I set up a test account with Out of Office turned on and a client-side rule to auto-forward to my external email account (nickxxxxx). In the Internet Message Formats advanced settings on the server I had the following:

I sent a test message via telnet to the test account, and in message tracking saw the following:

The auto forwarded message got stuck at the categorizer stage, which is expected behaviour.
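On a 2003 box the tracking above is done through ESM’s Message Tracking Center; on 2007/2010 the same check can be scripted. A hedged sketch (the recipient address is a placeholder, not my real test address):

```powershell
# Follow the fate of the auto-forwarded message in the tracking logs (Exchange 2007/2010)
Get-MessageTrackingLog -Recipients "nickxxxxx@example.com" -Start (Get-Date).AddHours(-1) |
    Sort-Object Timestamp |
    Format-Table Timestamp, EventId, Source, MessageSubject -AutoSize
```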

I then enabled automatic forwarding, bounced the store, sent another test message from an external account and got the following:

As you can see in this instance, the message is queued for remote delivery. If my test system were connected to the internet, the message would be delivered.

Want to stop it by default, but allow it for some users? This article is your friend:
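As a hedged sketch of one common approach (not necessarily the one in the linked article): leave forwarding off on the Default remote domain and carve out a separate remote domain, with forwarding allowed, for destinations you trust. The name and contoso.com domain here are placeholders:

```powershell
# Keep the default policy locked down
Set-RemoteDomain Default -AutoForwardEnabled $false

# Carve out an exception for one trusted destination domain
New-RemoteDomain -Name "Trusted Partner" -DomainName contoso.com
Set-RemoteDomain "Trusted Partner" -AutoForwardEnabled $true
```

Note this is per destination domain rather than per user; a true per-user allow needs the approach from the linked article.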

Recovery Storage Groups are Making Your Life Hell

An interesting call this week: a high severity issue with a CCR cluster with geographically separated nodes. The customer was following the TechNet article on how to patch a CCR cluster to SP1 or SP2 – it’s valid for SP3 as well, but MS haven’t updated the article to reflect this. The customer had got to step 9, but things then went wrong when trying to move the cluster from the active node to the passive node (top tip: “active” and “passive” refer to the state of the nodes; don’t use those words to name your nodes, or we will fall out). The move would fail, and then fail to move back to the original node as well, leaving the cluster in a down state.

The quick fix to restore service was to shut both nodes off, then power up the SP2 machine. Once the SP2 machine was running, the SP3 machine was turned on. At this point we were called.

First things first was to get a worst case action plan sorted. SP3 cannot be uninstalled, so basically they would need to uninstall Exchange, evict the node from the cluster, reinstall Exchange and recluster. Henrik Walther has documented this process perfectly in his blog: the second part is linked from that page. You can do this with pretty much 100% availability (except for the failover), and reseeding the database can be done online.


Once the customer was happy that we had a backout plan, we collected some basic troubleshooting evidence: a BPA run in health check mode, and MPS reports from both nodes with the cluster and Exchange options ticked.

With the collection under way, we started to look at the state of the cluster.

The “clustered mailbox server” tab in the properties of each node showed everything ok – both nodes were listed on each machine, the correct node was listed as operational.

Get-StorageGroupCopyStatus showed all storage groups as healthy, with copy and replay queue lengths of 0 and a timely last inspection timestamp. All storage groups except the recovery storage group, that is, which showed as not supported (RSGs don’t take part in continuous replication).
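Roughly the check as run, with the properties that matter for CCR copy health (queues should be at or near zero, and the last inspected log time should be recent):

```powershell
# CCR copy health at a glance on the local node
Get-StorageGroupCopyStatus |
    Format-Table Name, SummaryCopyStatus, CopyQueueLength, ReplayQueueLength, LastInspectedLogTime -AutoSize
```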

Get-ClusteredMailboxServerStatus and Test-ReplicationHealth likewise showed everything cool.
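For reference, both checks are quick to run from the shell on a CCR node (CMS1 is a hypothetical clustered mailbox server name):

```powershell
# State of the clustered mailbox server and which node currently hosts it
Get-ClusteredMailboxServerStatus -Identity CMS1

# End-to-end continuous replication health checks
Test-ReplicationHealth | Format-Table Server, Check, Result -AutoSize
```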

In the system event logs I could see event ID 1069, source ClusSvc:
Cluster resource ‘rsg1db1/sg4db1 (<servername>)’ in Resource Group ‘<servername>’ failed.
More details here:
This implicates in the problem the recovery storage group that they shouldn’t have been running. The recovery storage group that can’t take part in a clustered environment, but is a resource in the cluster. Hmmm.

A little digging got me to this:

Databases in an RSG cannot be set to mount automatically when the Exchange Information Store service is started. You must always start the databases manually. If mounted at the time of a cluster failover, databases will not mount automatically after failover is completed.
The implication being that if the RSG being online is a dependency, then failover will not complete successfully in either direction.


Now, the literature all states that while an RSG cannot mount, it doesn’t say that it will prevent failover. However, as the RSG was set as a cluster resource (as shown in the 1069 error above), in this case it caused failover to crash out when the resource didn’t come online.
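One way to check what the cluster thinks of the RSG resource is cluster.exe. A sketch, with the resource name patterned on the 1069 event rather than taken from a real system:

```powershell
# List all resources in the cluster and their state
cluster res

# Show what the RSG database resource depends on
cluster res "rsg1db1/sg4db1 (SERVERNAME)" /listdep
```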


The agreed plan was that the customer would remove the RSG, as per the best practice article here:
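For the record, an RSG can also be removed from the shell. A sketch with hypothetical server, storage group and database names (the database must be dismounted first, and removal does not delete the .edb files on disk):

```powershell
# Dismount and remove the recovery database, then the RSG itself
Dismount-Database "SERVERNAME\RSG1\rsg1db1" -Confirm:$false
Remove-MailboxDatabase "SERVERNAME\RSG1\rsg1db1" -Confirm:$false
Remove-StorageGroup "SERVERNAME\RSG1" -Confirm:$false
```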

Once this was complete, they would rerun the prerequisite tests, repatch the SP3 server to ensure that there was no issue there, and fail the cluster over as per step 9 of the upgrade document. With no RSG to bugger things up, this went great. They successfully patched their now-passive SP2 node the following day, and away they go…


So to summarise, recovery storage groups are making your life hell. If you’re not using them, get rid of them. Don’t have them hanging about on your box. Especially not if it’s a cluster.

Auditing Extreme Public Folder Deletion

A quick and easy call this morning. We have a customer who has had around 80GB of public folders deleted mysteriously. The deletion has been replicated around, but they have restored the data and would like to know how it happened, and whether there is any way of auditing events that happened in the past. The short answer is no.
Public folders exist within the Exchange database, not Active Directory, so there is no way of tracking the deletion via AD tools. A quick trawl through the MS partner forums confirms that there is no way to find this information other than by following the procedures below, which only capture events going forward.
It is possible to turn auditing on for Exchange 2003 SP2 and later by adjusting the diagnostic logging for the MSExchangeIS/Public Folder/General object to medium. This will produce a 9682 information event in the application event log that looks like this:
9682 info event
In Exchange 2007 sp1 you need to use the shell, and the following command:

Set-EventLogLevel "MSExchangeIS\9001 Public\General" -Level Medium

In SP2 it is possible to set diagnostic logging in the action pane if you select the server object. This also works for Exchange 2010.

diagnostic logging option ex2k10

Once you have the logging enabled you can trawl the event logs using a script from the blog post here:
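If the script isn’t to hand, a rough stand-in (not the linked script) for pulling the 9682 events is a one-liner:

```powershell
# Pull the public folder deletion audit events from the Application log
Get-EventLog -LogName Application |
    Where-Object { $_.EventID -eq 9682 } |
    Select-Object TimeGenerated, Message |
    Format-List
```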
So what caused it? Don’t know. My money would be on a user, but it might also be a policy (although the customer says not) or a third party tool that’s been mis-set.

Checking the permissions on the folders would be a good place to start – anyone with owner permission could delete the folder.
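Checking those permissions is quick from the shell on 2007 and later; the folder path here is a placeholder:

```powershell
# Who can delete this public folder? Anyone listed here with Owner rights.
Get-PublicFolderClientPermission -Identity "\Finance\Reports" |
    Where-Object { $_.AccessRights -contains "Owner" } |
    Format-Table User, AccessRights -AutoSize
```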