Tag Archives: RPC

Failing databases, sulking network manager

Interesting call here. After a hardware firewall change and a reboot, my customer’s DAG had a database copy in a failed state. The setup is a two-node DAG across two sites, with an FSW and an Alternate FSW preconfigured. It’s also IL3. If you don’t know what IL3 is, please stop reading this article. You don’t have clearance. Look out the window. See the guy with the dark glasses watching you? No? THAT’S how good we are.

So, just change the firewall back, dummy. Big deal. Sheesh. Except it didn’t fix the problem. Interestingly, this is the first time that cluster failover has been tested; the DAG itself has been tested a number of times.

So… he’s got one database copy mounted, one failed.
We ran Get-MailboxDatabaseCopyStatus and saw this error: “replication server encountered transient network error. Network manager not yet initialised”.
It’s been in this state for a while now, and through multiple reboots. Sitting watching it won’t help. It’s not really transient.
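For the record, the state check is just the standard cmdlet. Something along these lines — the server name is a stand-in, not the customer’s box (IL3, remember):

```powershell
# Check the health of every database copy on a node.
# "LABMBX-1" is a placeholder server name, not the real environment.
Get-MailboxDatabaseCopyStatus -Server LABMBX-1 |
    Format-Table Name, Status, CopyQueueLength, ErrorMessage -AutoSize
```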
Oh right, so the FSW needs rebooting, right? I’ve seen this before… (http://port25guy.com/2012/12/10/witness-server-boot-time-getdagnetworkconfig-and-the-pain-of-exchange-2010-dr-tests/). No. The boot-time cookies are the correct way around.
So we start checking things. The IP addresses show as down in the DAG. This picture is not their DAG. IL3, remember?

The cluster node shows as “down” in the failover cluster manager.
So, let’s see what happens when we try to start the node. Lots of errors in the event log (which I can’t see… IL3…), but one sticks out like a sore thumb – event ID 4123:

Log Name: Application
Source: MSExchangeRepl
Date: 2/26/2012 11:12:08 AM
Event ID: 4123
Task Category: Service
Level: Error
Keywords: Classic
User: N/A
Computer: LABMBX-1.exlab.mydomain.com
Failed to get the boot time of witness server ‘labcas-1.exlab.mydomain.com’. Error: The remote procedure call failed. (Exception from HRESULT: 0x800706BE)

There’s a great big clue right there: “the remote procedure call failed”. For some reason the RPC endpoint mapper on the FSW isn’t responding. This is a resource domain which contains just a DC, the two Exchange boxes and a vCenter manager. (I did mention the VMware, yes?) What is the FSW machine? Well, it’s the vCenter console machine in the domain.

And there is the problem.

When you install Exchange on a box, it adds a security group to the local Administrators group and makes changes to the Windows firewall (http://marksmith.netrends.com/Lists/Posts/Post.aspx?ID=83). When you put the FSW on a NON-Exchange box, you need to add the Exchange Trusted Subsystem group to the local Administrators group manually – you’ve not installed Exchange, so setup won’t do it for you. It’s documented here: http://technet.microsoft.com/en-us/library/dd351172.aspx

If the witness server you specify isn’t an Exchange 2013 or Exchange 2010 server, you must add the Exchange Trusted Subsystem universal security group to the local Administrators group on the witness server. These security permissions are necessary to ensure that Exchange can create a directory and share on the witness server as needed. If the proper permissions aren’t configured, the following error is returned:
Error: An error occurred during discovery of the database availability group topology. Error: An error occurred while attempting a cluster operation. Error: Cluster API “AddClusterNode() (MaxPercentage=12) failed with 0x80070005. Error: Access is denied.”
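Doing it by hand is a one-liner on the witness box (run elevated; the domain name here is a placeholder for your domain’s NetBIOS name):

```powershell
# Add the Exchange Trusted Subsystem USG to the local Administrators
# group on the (non-Exchange) witness server.
# "EXLAB" is a placeholder domain name.
net localgroup Administrators "EXLAB\Exchange Trusted Subsystem" /add
```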

What it doesn’t say, but assumes, is that RPC will work. Why does it need RPC? It’s just a fileshare, yes? It doesn’t say anything about RPC here: http://technet.microsoft.com/en-us/library/bb331973.aspx

• The Clustering data path listed in the preceding table uses dynamic RPC over TCP to communicate cluster status and activity between the different cluster nodes. The Cluster service (ClusSvc.exe) also uses UDP/3343 and randomly allocated, high TCP ports to communicate between cluster nodes.
• For intra-node communications, cluster nodes communicate over User Datagram Protocol (UDP) port 3343. Each node in the cluster periodically exchanges sequenced, unicast UDP datagrams with every other node in the cluster. The purpose of this exchange is to determine whether all nodes are running correctly, and also to monitor the health of network links.
• Port 64327/TCP is the default port used for log shipping. Administrators can specify a different port for log shipping.
• For HTTP authentication in which Negotiate is listed, Kerberos is tried first, and then NTLM.

Well, it does, but for the nodes, not the FSW. However, when the single remaining node checks that it has quorum, it needs to compare the current boot time of the FSW against the time stored in the boot-time cookie. How does it get the current boot time? Remote registry, I reckoned at first (see the edit below: it’s actually WMI), and either way that means RPC.
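You can exercise that same RPC-dependent path yourself from the surviving node. If this hangs or throws an RPC error, the cluster’s witness boot-time check will fail the same way. The FSW name here is the lab one from the event above, not the real environment:

```powershell
# Query the FSW's boot time over WMI/DCOM, the same transport the
# witness boot-time check depends on. An "RPC server is unavailable"
# or "call failed" error here reproduces the event 4123 symptom.
$os = Get-WmiObject -Class Win32_OperatingSystem -ComputerName labcas-1
$os.ConvertToDateTime($os.LastBootUpTime)
```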

So… open Windows firewall for RPC, reboot FSW and… bingo. Everything up, sweet as a nut.
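In practice “open Windows firewall for RPC” amounts to enabling the built-in rule groups that carry RPC/DCOM/WMI traffic on the FSW. A sketch, run elevated on the witness box (group names are the stock ones shipped with Windows; check yours match before relying on this):

```powershell
# Enable the built-in firewall rule groups covering WMI (DCOM plus the
# WMI service) and remote administration (RPC endpoint mapper) on the
# witness server.
netsh advfirewall firewall set rule group="windows management instrumentation (wmi)" new enable=yes
netsh advfirewall firewall set rule group="remote administration" new enable=yes
```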

We ran the cluster validator (http://technet.microsoft.com/en-us/library/bb676379(v=exchg.80).aspx) and Paul Cunningham’s DAG healthcheck script (http://exchangeserverpro.com/get-daghealth-ps1-database-availability-group-health-check-script/) and everything came back clean.

The moral of this story? Stop being clever.

A great takeaway for everyone is this:

unlike earlier versions of Microsoft Exchange where IT administrators had to perform multiple procedures to lock down their servers that were running Microsoft Exchange, Exchange 2010 requires no lock-down or hardening

From the Exchange 2010 Security Guide, here: http://technet.microsoft.com/en-us/library/bb691338(v=exchg.141).aspx



Edit: if you look at Scott Schnoll’s wonderful high availability deep dive, here, you will find that the node gets the FSW boot time using WMI, not remote registry.