News aggregator

New Networking-related KB articles for the week of May 10 - May 16

Microsoft Enterprise Networking Team - Thu, 05/29/2008 - 20:39

950876  Group Policy settings are not applied on member computers that are running Windows Server 2008 or Windows Vista SP1 when certain SMB signing policies are enabled

951058  The "Automatically restore this connection when computer starts" option may not work on a Windows Server 2008-based computer when CHAP authentication is used

950749  MS08-028: Vulnerability in the Microsoft Jet Database Engine could allow remote code execution

950574  A Windows Server 2003-based DHCP server does not respond correctly to DHCP INFORM requests if the requests are forwarded from the IP Helper API or from relay agents

946775  IP packets that are transferred over aggregated links may be dropped by the Multilink feature on a Windows XP-based computer

951624  A 30-second delay occurs during the initialization of some network-based applications when Windows XP Service Pack 2 starts

- Mike Platts

Announcing Small Business Server 2008 RC0 Public Preview

Windows Server Division WebLog - Thu, 05/29/2008 - 00:16

After extensive development and private evaluation, I’m pleased to announce that a public preview of Windows Small Business Server 2008 RC0 is now available.

Windows SBS 2008 is the next major release in the Windows Small Business Server product family, and it offers a wave of new features for technology consultants and small business owners.  There are too many updates and changes to list in one post, but here are some highlights:

  • A new setup and administrative experience that has been re-designed to make initial deployment and day-to-day management significantly easier
  • Many new management features such as extensible monitoring reports, and tools to manage internet domain names, data folders, certificates and more
  • A revamp of the Premium Edition to include both SQL Server 2008 Standard Edition technologies and a 2nd copy of Windows Server 2008  
  • A new server backup wizard built on the Windows Server 2008 block-based backup technologies, which allows you to back up your server in minutes rather than hours
  • A redesigned Remote Web Workplace with new features such as custom links, logos and user to computer mapping
  • Built-in anti-virus and anti-spam support with 120-day trial subscriptions to Microsoft Forefront Security for Exchange Server and Windows Live OneCare for Server
  • Integration with Microsoft Office Live Small Business to simplify setup and management of professional Web sites and extranets 
  • And of course, updates to Microsoft’s newest server technologies: Windows Server 2008, Exchange Server 2007, SQL Server 2008, Windows Server Update Services v3, and Windows SharePoint Services v3

To learn more about the product and to enroll in the public preview program, please visit http://technet.microsoft.com/evalcenter/cc184870.aspx.

We look forward to your feedback.

Regards,
Dean Paron
Group Program Manager
Windows Small Business Server

Troubleshooting top Exchange 2007 SP1 SCR issues

Microsoft Exchange Team Blog - Wed, 05/28/2008 - 17:12

This blog post discusses several of the top issues seen to date by the Microsoft Exchange Product Support team regarding the Standby Continuous Replication (SCR) feature introduced in Exchange 2007 Service Pack 1. We wanted to share this information because it can be used as a preventative measure as well as for resolving issues you may have already experienced. This will not cover everything that can possibly go wrong, but it should give you some good pointers for situations you might have seen.

For basic configuration information on SCR, please review the following article available on Microsoft TechNet: Standby Continuous Replication

Issues covered here include:

  • Enable-StorageGroupCopy -StandbyMachine reports error "Another standby continuous replication source is already configured..."
  • SCR Target Log Files Fail to Truncate After the TruncationLagTime is Surpassed
  • SCR does not replicate logs in a disjoint namespace scenario
  • Database seeding error: Error returned from an ESE function call (0xc7ff1004), error code (0x0)
  • SCR Hidden Network share not created in a Cluster with Event id 2074

Enable-StorageGroupCopy -StandbyMachine reports error "Another standby continuous replication source is already configured at <path to Storage Group logs> for 'CopyLogFolderPath'."

Possible Causes 

The SCR target server may be using the same log file path as the SCR source server.  This can happen when attempting to enable SCR on the First Storage Group.

Resolution 

Change the log file path and system file path on the Storage Group, and the database path on the Mailbox database, to another location on the SCR target server.  Note: In order for the file path change to take effect, the databases in the Storage Group will be temporarily dismounted and then remounted.

Step-by-step instruction

This can be done from the Exchange Management Console or through the Exchange Management Shell.  For specific instructions, please click the following links:

How to Set or Change the Location of Storage Group Log Files
How to Set a Database File Location
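As a rough Exchange Management Shell sketch of that change (the server, storage group, database, and path names below are placeholders, not values from the articles above):

Move-StorageGroupPath -Identity "EXCH01\First Storage Group" -LogFolderPath "D:\SG1\Logs" -SystemFolderPath "D:\SG1\Logs"
Move-DatabasePath -Identity "EXCH01\First Storage Group\Mailbox Database" -EdbFilePath "D:\SG1\Mailbox Database.edb"

Keep in mind the note above: the databases in the Storage Group are temporarily dismounted while the paths change.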

SCR Target Log Files Fail to Truncate After the TruncationLagTime is Surpassed

Possible Causes

The SCR log file truncation time is set to a value over 24 hours.

Resolution

Set TruncationLagTime to 0.0:00:00 (the value format is days.hours:minutes:seconds) and then restart the Microsoft Exchange Information Store and Microsoft Exchange Replication services.  Next, take a backup of the Storage Group on the SCR source server and then confirm that the SCR target log files get truncated after the successful backup.  After the SCR target files truncate properly, you may change TruncationLagTime back to your desired value.

Note: This issue will be addressed in a future rollup for Exchange 2007 Service Pack 1.

Step-by-step instruction

In order to change the TruncationLagTime, you must disable SCR and then enable SCR using the desired values.  For specific instructions, please click the following links:

How to Disable Standby Continuous Replication for a Storage Group
How to Enable Standby Continuous Replication for an Existing Storage Group
How to Enable Standby Continuous Replication for a New Storage Group
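For illustration only, the round trip might look like this in the Exchange Management Shell (server, storage group, and standby machine names are placeholders):

Disable-StorageGroupCopy -Identity "EXCH01\First Storage Group" -StandbyMachine SCRTARGET
Enable-StorageGroupCopy -Identity "EXCH01\First Storage Group" -StandbyMachine SCRTARGET -TruncationLagTime 0.0:00:00

After the backup confirms truncation, disable and re-enable the copy once more with your desired TruncationLagTime.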

SCR does not Replicate Logs in a Disjoint Namespace Scenario

Possible Causes

The SCR source and the SCR target servers have FQDNs with disjoint domain names.

Resolution

This issue will be fixed in a future rollup for Exchange 2007 Service Pack 1.  In the meantime, contact Microsoft Customer Support Services to obtain fix 951955.

More Information

Understanding Disjoint Namespace Scenarios with Exchange 2007

Database Seeding Error: Error returned from an ESE function call (0xc7ff1004), error code (0x0).

Possible Causes

Windows firewall settings are blocking the command

Resolution

Add the "Windows PowerShell" to the Exceptions list under Windows Firewall settings.

Step-by-step instruction

Add a Program to the Exceptions List
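If you prefer to script the firewall change, something like this should work with the legacy netsh firewall context (the path assumes the default PowerShell 1.0 install location; on Windows Server 2008 the netsh advfirewall context is the preferred equivalent):

netsh firewall add allowedprogram program="%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" name="Windows PowerShell" mode=ENABLE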

SCR Hidden Network Share is not created in a Cluster with Event id 2074

Possible Causes

Resources in the default Cluster group, such as Cluster IP Address, Cluster name and Quorum disk were moved to a different cluster group.

Resolution

Move the Cluster IP Address, Cluster name and Quorum disk to the default Cluster group.

Step-by-step instruction

Best practices for configuring and operating server clusters

If you experience failures other than those listed here, look at the event logs on both nodes to determine the cause, and use the information in the logs to decide what recovery steps need to be taken.  You can also review other events that occurred around the same time as the failure to help assess whether they are related to the issue.

Here are some How-to Webcasts on SCR configuration created by Scott Schnoll:

SCR in Exchange Server 2007 SP1 - Part 1
SCR in Exchange Server 2007 SP1 - Part 2
SCR in Exchange Server 2007 SP1 - Part 3
SCR in Exchange Server 2007 SP1 - Part 4
SCR in Exchange Server 2007 SP1 - Part 5

- Gurpreet Erickson

Configuring Custom NPS Policies Per DHCP scope

Microsoft Windows DHCP Team Blog - Wed, 05/28/2008 - 09:03
DHCP server administrators deploying DHCP NAP have often asked about provisioning clients on different subnets with separate network policies. Here is a step-by-step walkthrough for configuring such policies from the NPS management console as well as...(read more)

Windows Server 2003 SP2 blocker tool to be removed from Windows Update and Automatic Update

Windows Server Division WebLog - Wed, 05/28/2008 - 00:38

Just a reminder - It has been a little over a year since we released Windows Server 2003 SP2.  When we release a Service Pack at Microsoft, we want to make sure that IT professionals and system administrators have ample time to assess the service pack and choose when to deploy it. 


As with other service packs, we offered support for Windows Server 2003 SP2 within the  Windows Service Pack Blocker Toolkit.  This allowed administrators to block the automatic deployment of Windows Server 2003 SP2 for a period of one year.


Now that this period has expired, organizations should be aware that over the next month, support for Windows Server 2003 SP2 within the blocker tool will be phased out.  Windows Server 2003 SP2 will then be automatically offered, downloaded, and/or installed (depending on user or administrator settings) through standard mechanisms including Windows Update and Automatic Update.

Ward Ralston

Two Minute Drill: Find /3GB without using boot.ini

Ask the Performance Team - Tue, 05/27/2008 - 11:00

We've talked a lot about the /3GB switch and its effect on system resources in previous posts.  Today we are going to discuss how to determine whether or not /3GB is enabled on a 32-bit system without looking at the boot.ini file or using MSCONFIG.EXE.  This is not as difficult as you might think – there are actually several ways to find this information.  We are going to look at three of them – checking the registry, using PSTAT.EXE, and examining a memory dump file.  So, without further delay, let’s start with the simplest of the three – finding the information in the registry.

To find the information in the Registry, all you have to do is look in the HKLM\SYSTEM\CurrentControlSet\Control key, and examine the SystemStartOptions value.  Below is the value from a Windows XP system that I have configured with /3GB.

[Screenshot: the SystemStartOptions value in Registry Editor on a system booted with /3GB]

As you can see, the ‘/’ character is removed from the string in the Registry, but the options themselves are determined easily enough.  With this in mind, here’s a quick tip for Systems Administrators who might need to find this information for multiple systems – use a simple script or batch file to query this value in the registry on all your machines and write the output to a text file.  Remember that you will need to be able to access the registry remotely for this to work!
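A minimal PowerShell sketch of that idea, assuming a hypothetical servers.txt (one hostname per line), admin rights, and the Remote Registry service running on each target:

$servers = Get-Content "servers.txt"   # hypothetical input file
foreach ($server in $servers) {
    # Open HKLM on the remote machine and read SystemStartOptions
    $hklm = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $server)
    $key = $hklm.OpenSubKey('SYSTEM\CurrentControlSet\Control')
    "$server : $($key.GetValue('SystemStartOptions'))" | Out-File startoptions.txt -Append
}

Scan the resulting text file for "3GB" to spot the systems booted with the switch.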

Let’s now take a look at the second method of finding out if /3GB is enabled – by using PSTAT.EXE.  PSTAT.EXE is part of the Resource Kit Utilities for Windows 2000 and can be downloaded from the Microsoft web site.  Run PSTAT.EXE and redirect the output to a text file:
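For example (the output file name is illustrative):

pstat > c:\pstat.txt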

When you examine the output file, search for HAL.DLL (the Hardware Abstraction Layer DLL).  Below is the output from my Windows XP SP3 system:

ModuleName               Load Addr
------------------------
hal.dll                  E0B82000

The key piece of information here is the Address at which the module is loaded.  In our post on the x86 Virtual Address Space we noted that the System Space (Kernel Mode) memory range on a 32-bit system ranged from 0x80000000 to 0xFFFFFFFF on a system without /3GB and 0xC0000000 to 0xFFFFFFFF on a system with /3GB enabled.

[Diagram: x86 memory address ranges without /3GB and with /3GB]

As you can see from the diagram above, the Kernel and Executive, HAL and Boot Drivers load between Addresses 0x80000000 and 0xBFFFFFFF on a system that does not have /3GB configured.  So, looking at the address where HAL.DLL is loaded, we can see that the module is loaded at Address 0xE0B82000.  Since this address is outside of the range where the module would load if the system was not configured with /3GB we can deduce that /3GB is configured on this system.

Finally, let’s look at determining whether or not /3GB is in use by examining a memory dump file.  I generated a manual dump on my XP machine with and without /3GB enabled.  Let’s first take a look at the dump with /3GB enabled.  Believe it or not, you really don’t have to do any work to determine if /3GB is enabled beyond loading the memory dump file into the debugger!  Below is the output from the debugger when I opened the dump file:

Microsoft (R) Windows Debugger Version 6.9.0003.113 X86
Copyright (c) Microsoft Corporation. All rights reserved.

Loading Dump File [C:\WINDOWS\3GBMEMORY.DMP]
Kernel Complete Dump File: Full address space is available

Symbol search path is: SRV*C:\SYMBOLS*http://msdl.microsoft.com/downloads/symbols
Executable search path is:
Windows XP Kernel Version 2600 (Service Pack 3) MP (2 procs) Free x86 compatible
Product: WinNt, suite: TerminalServer SingleUserTS
Built by: 2600.xpsp.080413-2111
Kernel base = 0xe0ba3000 PsLoadedModuleList = 0xe0c29720
Debug session time: Thu May 15 09:33:21.044 2008 (GMT-5)
System Uptime: 1 days 2:14:13.500

The important piece of information here is the Kernel base.  As you can see on the "Kernel base" line above, the address is 0xE0BA3000.  Remember that if /3GB is not configured, the Kernel loads between 0x80000000 and 0xBFFFFFFF – since we are loading at 0xE0BA3000, we can deduce that /3GB is configured.  Before we wrap up, let’s take a look at a dump from the same machine when /3GB is not configured.

Microsoft (R) Windows Debugger Version 6.9.0003.113 X86
Copyright (c) Microsoft Corporation. All rights reserved.

Loading Dump File [C:\WINDOWS\NO3GBMEMORY.DMP]
Kernel Complete Dump File: Full address space is available

Symbol search path is: SRV*C:\SYMBOLS*http://msdl.microsoft.com/downloads/symbols
Executable search path is:
Windows XP Kernel Version 2600 (Service Pack 3) MP (2 procs) Free x86 compatible
Product: WinNt, suite: TerminalServer SingleUserTS
Built by: 2600.xpsp.080413-2111
Kernel base = 0x804d7000 PsLoadedModuleList = 0x8055d720
Debug session time: Thu May 15 12:58:35.741 2008 (GMT-5)
System Uptime: 0 days 1:54:45.750

As we can see in this output, the Kernel Base is at 0x804D7000 – inside the range for the Kernel on a system without /3GB.

So there you have it – three different ways to find out whether or not a system is configured with the /3GB switch using different tools.  That brings us to the end of this Two Minute Drill.  Until next time …

- CC Hameed

Version Store issues revisited - Event ID 623 at Information Store service startup

Microsoft Exchange Team Blog - Fri, 05/23/2008 - 18:05

Recently we've seen some cases in Exchange Support where error event 623 gets generated immediately at the start of the Information Store service.  So far we've only witnessed this on some Exchange 2003 servers.  Please note that this post is specifically about the event being generated right after the Information Store starts; there were other causes of event 623 that we already have fixes for.

When this behavior occurs, you may see the Information Store appear to take upwards of 45 minutes to fully respond at service startup.  Monitoring "Version Buckets Allocated" (viewable with Show Advanced Counters - see Nagesh's excellent post here:  http://msexchangeteam.com/archive/2006/04/19/425722.aspx) will show the counter is immediately running high (over 70%), and until the number falls the Information Store will be unresponsive to clients and ESM.  After Version Buckets Allocated falls, the server responds fine, no other issues are observed, and the 623 errors go away.  Restarting a 3rd party server that ties into users' mailboxes (if present) or restarting the Information Store service may cause the issue to occur again.

This problem occurs because of a large number of hidden search folders that have been created by applications (other than Microsoft Exchange) that have access to users' mailboxes.  When the Information Store starts, it becomes available to the host of 3rd party applications, which might reconnect and want to sync the contents of the search folders at the same time.  This can result in all of the search folders for a user's mailbox being updated at the same time.  When a mailbox has a large item count in the Inbox folder (more than 5,000 items), you can experience higher than normal store CPU utilization and Version Buckets Allocated spikes, which can lead to version store out-of-memory problems.  Depending on the type of search performed, the impact can be greater or smaller.  Once the version store cache has been depleted, the offending transaction gets canceled or it times out and is rolled back, and everything moves along as if nothing happened.  That's why the event 623 condition eventually corrects itself.

To avoid this scenario, there are a few things you can do to monitor this:

  1. Keep your Inbox item count down to 5,000 or less.  In some cases with this problem we've seen 60,000 to 80,000 items per user Inbox. To find out if you have a problem like this, we suggest you use the Exchange Server Profile Analyzer tool which we blogged about here.
  2. Keep an eye on the number of search folders querying against the Inbox folder. This will require you to run ISINTEG on your server (please see below for what exactly to look for). What most people don't realize is that some third party applications that plug into Exchange (for example fax servers, mobile device sync servers, unified messaging clients, desktop search clients) create hidden search folders and restricted views. Each time a change happens in the folder that is being monitored (a modification, deletion, or addition), backlinks to the search folders are looked at and each search folder is evaluated to see if the new item meets that folder's view.
  3. It is also possible that when 3rd party products are upgraded, older versions of their search folders are not cleaned up.  In some cases we've seen users with well over 150 hidden search folders.  Just a few users with high item counts in their Inbox and this many hidden search folders can cause some serious trouble for your environment.

So - how do you do #2 above?

You'll have to run: isinteg -s servername -dump -l logfilename

Then open up the "logfilename" file and look for the following:

[7412] Folder FID=0028-00000002451E
Parent FID=0028-00000002451B
Root FID=0028-00000002451A
Folder Type=1
 Msg Count=29232
Msgs Unread=112
Msgs Submitted=0
Rcv Count=4
Subfolders=0
Name=Inbox
Comment=
Restriction=
 Search FIDs=0028-000008859B57,0028-000008859B60,0028-000008859B67,0028-000008859B54,0028-000008859B5D,0028-000008859B59,0028-00000C8C48C3,0028-000008859B4E,0028-000008859B4C,0028-000008859B58,0028-00000C8C2DF3,0001-0000001995E3,0028-000008859B55,0028-000008859B5E,0028-000008859B53,0028-000008859B4D,0028-000008859B4B,0028-00000C649EB1,0028-00000C8C48E5,0028-000008859B66,0028-000008859B69,0028-000008859B56,0028-000008859B5F,0028-00000C64A1EA,0028-000008859B65,0028-000008859B50,0028-00000C8C48D6,0028-000008859B5A,0028-000008859B64,0028-00000C8C48CE,0028-000008859B52,0028-000008859B4A,0028-000008859B68,0028-000008726E8B,0001-000000197413,0001-000000197C59,0001-000000198A12,0001-0000001CF526,0001-0000001CF53B,0001-0000002284E0
Scope FIDs(search folder only)=
Recursive FIDs=
 Search Backlinks=0001-000000031BEA,0028-000006CD9DC2,0028-000007594A53,0028-0000075DEE07,0028-00000857AB81,0028-00000A027DBC,0028-000008726E8B,0001-000000198A12,0001-000000197C59,0001-000000197413,0028-00000C8C48D6,0028-000008859B59,0028-000008859B55,0001-0000001995E3,0028-000008859B69,0028-000008859B66,0028-00000C8C48C3,0028-000008859B67,0028-000008859B65,0028-000008859B64,0028-000008859B68,0028-000008859B57,0028-000008859B53,0028-000008859B50,0028-00000C8C48CE,0028-00000C8C48E5,0028-000008859B4B,0028-000008859B52,0028-000008859B4D,0028-000008859B58,0028-000008859B5E,0028-000008859B54,0028-000008859B5A,0028-000008859B56,0028-000008859B4E,0001-0000001CF526,0001-0000001CF53B,0028-000008859B4A,0001-0000002284E0,0028-00000C8C2DF3,0028-000008859B5D,0028-00000C64A1EA,0028-000008859B60,0028-000008859B5F,0028-000008859B4C

What we're looking at here is a high number of search folders (Search FIDs above) and Search Backlinks that - when they have to generate or update - each have to scan over 29,000 items (Msg Count above). This is the crux of the 623 version store problem at startup that you might be seeing.
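If you have PowerShell handy, a rough sketch like the following (the file name matches the isinteg command above; the thresholds are illustrative, not official guidance) can flag the folders worth a closer look:

$name = ""; $count = 0
Get-Content "logfilename" | ForEach-Object {
    $line = $_.Trim()
    if ($line -match "^Name=(.*)")           { $name  = $Matches[1] }
    elseif ($line -match "^Msg Count=(\d+)") { $count = [int]$Matches[1] }
    elseif ($line -match "^Search Backlinks=(.+)") {
        # Count how many search folders point back at this folder
        $links = ($Matches[1] -split ",").Count
        if ($count -gt 5000 -and $links -gt 10) {
            "{0}: {1} items, {2} search backlinks" -f $name, $count, $links
        }
    }
}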

At this time, we do not have a simple solution for this problem... If you have this problem and have identified it using step #2 above (the situation described in step #1 can be solved by reorganizing the folders and reducing item counts), please contact our Exchange support line. Once we have a better way of resolving this, we'll post about it here.

For more information on Search Folders, review:

KB260322 - How To Search Folders with the SetSearchCriteria Method http://support.microsoft.com/kb/260322/en-us

Best Practices for Exchange 2003 Search Folders (there are several subsections here to look at as well)
http://technet.microsoft.com/en-us/library/aa997533.aspx

Creating Search Folders:
http://msdn.microsoft.com/en-us/library/ms878645.aspx

Exchange Store Search Folders:
http://msdn.microsoft.com/en-us/library/aa123899.aspx

- Jeff Stokes, Dave Goldman, Michael Blanton

DST: Upcoming Changes for Morocco and Pakistan

Ask the Performance Team - Fri, 05/23/2008 - 11:00

On June 1, 2008 there will be two new Daylight Saving Time changes that go into effect.  Pakistan and Morocco plan to introduce Daylight Saving time.  The governments of the two nations recently announced the change as part of their energy savings plans.  Although the changes go into effect on the same day in both countries, please note that they will have different end dates, as outlined below:

Pakistan:  DST begins on June 1st.  The clocks will move forward from 12:00:59 AM to 1:01:00 AM.  The UTC Offset will change from +5 hours to +6 hours for Pakistan.  DST will end at 12:00:59 AM on Sunday, August 31.  At this time, the clocks will roll back to 11:01:00 PM on Saturday, August 30.  The UTC Offset will change from +6 hours to +5 hours for Pakistan.

Morocco:  DST will begin on Saturday, May 31 at 11:59:59 PM when the clocks will move forward to 1:00:00 AM on Sunday, June 1.  This will result in the UTC Offset for Morocco changing from 0 to +1 hour.  DST in Morocco ends on Saturday, September 27, at 11:59:59 PM.  At this time, the clocks will roll back to 11:00:00 PM on Saturday, September 27, and the UTC offset will change from +1 hour to 0.

From Microsoft’s perspective, due to the short notice provided for these changes, Windows will not be creating one-off hotfix packages to accommodate the changes.  The plan is to include these updates in the next Windows DST cumulative package, which is scheduled for release later in the summer.  To provide a more immediate solution, we will be updating Microsoft KB Article 914387 with the changes for Morocco and Pakistan.  The changes for this article should be in place by the end of this week.  Additional information will also be available on the Daylight Saving Time Help and Support Center.

OK, that’s it for this post – please remember to test any changes in an isolated environment before implementing them in your live production environment.  Until next time …

- CC Hameed

How to deploy XP SP3 in an existing wired 802.1x environment

Microsoft Enterprise Networking Team - Thu, 05/22/2008 - 19:38

Prior to SP3, the 802.1x service for XP was the Wireless Zero Configuration service.  This service handled the 802.1x needs for both wired and wireless connections.  This was problematic since not everyone uses wired 802.1x.  Also, because the wired 802.1x engine listened passively for EAP Identity traffic, we were not fully compliant with the IEEE spec, which states that the client should initiate authentication by sending an EAPOL-Start frame.

With SP3, we have separated the wireless service from the wired service and created a new service, Dot3Svc (Wired AutoConfig).  This service is set to Manual start as opposed to Automatic.  The default behavior of the Dot3Svc is now compliant with the IEEE specification.

In most environments, this is not a problem since most folks are not using 802.1x on their wired networks.  However, if the network has 802.1x deployed, having the service set to manual creates the unfortunate side effect of preventing the client from connecting back to the network after the required reboot has occurred. 

One of the suggested workarounds was to set the service type to Automatic in a GPO and push this out to all the clients prior to deploying SP3, but unfortunately you cannot do this.  Because Dot3Svc is a new service and does not exist on systems prior to SP3, XP cannot consume the necessary settings from a GPO and apply them after the service has been installed.

So to address this issue, you need to take the following steps:

Step 1: Pre-deployment

1.  Create a file called dot3svc_start.reg and put it in \\<domainname>\sysvol\<domainname>\scripts\

a. Add the following to the file

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Dot3svc]

"Start"=dword:00000002

2. Create a file called dot3svc.bat and put it in \\<domainname>\sysvol\<domainname>\scripts\

a. Add the following to the file

regedit /s \\<domainname>\sysvol\<domainname>\scripts\dot3svc_start.reg

3. Using a GPO, add dot3svc.bat to the Shutdown scripts object.

4. In the same GPO, set the Dot3svc service startup type to Automatic.

Step 2: Deployment

1. Confirm the clients process the shutdown script.  All that needs to be done is to confirm the Dot3svc registry key exists after a reboot.
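A quick way to spot-check a client (a Start value of 0x2 corresponds to Automatic):

reg query HKLM\SYSTEM\CurrentControlSet\Services\Dot3svc /v Start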

2. Deploy SP3 using normal procedures. 

Step 3: Post Deployment

1. After you have confirmed SP3 installs correctly and the dot3svc service starts, remove the scripts/GPO.

For more information on the Dot3Svc, see http://support.microsoft.com/kb/949984

Exchange TechCenter got a face lift

Microsoft Exchange Team Blog - Thu, 05/22/2008 - 16:46

We've just redesigned the homepage of the Exchange TechCenter to give Exchange administrators a single place to start when they're looking for information about Exchange. We're pretty confident that you're going to find it easier to get where you want to go by beginning at this redesigned homepage.  That should be true whether you need a quick answer to a specific question; are doing heavy-duty planning and research for, say, a large deployment; want to read the feature articles for the current month; or just want to catch up with the Product team to see what's going on in the Exchange ecosystem.

In our redesign we placed the things we think you're most interested in front and center.  So, for instance, there's now a search box in the left pane where it's hard to miss; search queries entered here will scour the core product documentation in the Exchange TechCenter library, the events and errors database, KB articles, and other Microsoft collections for relevant content.  There's a link to "You had me at EHLO" (this blog) on the homepage now, and you can get to Exchange forums, downloads and webcasts with one click.   The Exchange MVPs also get some face time here, via a rotating display and a link to their dedicated page. 

More changes are on the way.  But for now, take a look - and consider adding the homepage to your favorites list.  We think you'll probably use it a lot.  And let us know what you think about the changes!

- Tim Lulofs

Export DNS records to Excel to read time stamps and static records

Microsoft Enterprise Networking Team - Wed, 05/21/2008 - 19:07

Ask a DNS administrator and he’ll tell you there is no such thing as being “too careful” with DNS data!  One of the most dreaded things is checking the box for Auto Scavenging: a slight misconfiguration can lead to useful DNS entries getting deleted.

Some of the common questions that come to an administrator’s mind when thinking about scavenging are: How many static records do I have?  Do I really have aged records lingering?  The answers to these questions are easy to find - just open each record in the DNS console and look at the time stamp.  That is easy if you have 20 records; it’s far from practical in the real world, though.

What one really needs is data in an organized form, say in Excel. Unfortunately the format of “dnscmd enumrecords” is not exactly ready to be imported as data. Let’s look at a sample output of “dnscmd /enumrecords contoso.com @ /Type A /additional”:

Returned records:
@ [Aging:3570365] 600 A 192.168.0.3
  [Aging:3570365] 600 A 192.168.0.1
  [Aging:3570365] 600 A 192.168.0.4
  [Aging:3570365] 600 A 192.168.0.2
2K-A [Aging:3558828] 1200 A 192.168.0.14
clusdfs [Aging:3570365] 1200 A 192.168.0.31
cluster [Aging:3570365] 1200 A 192.168.0.30
contoso-dca [Aging:3570521] 3600 A 192.168.0.1
CONTOSO-DCB [Aging:3570521] 3600 A 192.168.0.2
CONTOSO-DCC [Aging:3570413] 1200 A 192.168.0.3
CONTOSO-DCD [Aging:3570394] 1200 A 192.168.0.4
R2-A [Aging:3570365] 1200 A 192.168.0.11
R2-B [Aging:3570365] 1200 A 192.168.0.12
R2-C [Aging:3570496] 1200 A 192.168.0.13
R2-E [Aging:3570365] 1200 A 192.168.0.199
R2-F [Aging:3570365] 1200 A 192.168.0.19
R2-G [Aging:3570365] 1200 A 192.168.0.20
rat-r2 [Aging:3562303] 1200 A 192.168.0.254
test 3600 A 10.1.1.10
VISTA-A [Aging:3558828] 1200 A 192.168.0.17
VISTA-B [Aging:3570365] 1200 A 192.168.0.51
XP-A [Aging:3562227] 1200 A 192.168.0.15
XP-B [Aging:3562227] 1200 A 192.168.0.16
Command completed successfully.

We do get the name of the record, time stamp, TTL, type & IP address. This data cannot be directly imported into Excel, however; it needs to be formatted with delimiters so that Excel can import it. We have chosen to use a “,” (comma) in this case.

Some points to keep in mind are:

  1. Observe the first few lines of the data in the example above. Each “Same as parent folder” is on a separate line with the Record name missing in subsequent lines.
  2. For static records, the text “[Aging:xxxxxxxx]” is missing.
  3. We have tried to accommodate more types of records like SRV, NS, SOA, MX, and CNAME, though typically one would be interested in the A records.

We will achieve the desired result in two steps using two VBScripts. The scripts perform the following functions:

  1. Put in the delimiter “,” to separate the data on each line. In our example, the script is named “changetocsv.vbs”.
  2. Perform a calculation to convert the “Aging” number to a readable date format and then open the file in Excel, provided Excel is installed on the machine being used. We will name this script “openexcel.vbs”.

Note that both scripts manipulate contents of the file. Each script should be run only once on a file. Here is a summary of how the overall process will work:

  • Create a directory/folder to hold the exported DNS data and script files.
  • Copy the contents of both scripts given below and place them in the folder created.
  • Export the data from DNS using the dnscmd.exe utility included with Windows Server.
  • At a Command Prompt in the folder created, run each script against the exported data to format it for and import it into Excel.

Detailed steps:

1.  Create a folder, such as C:\dnsdata, in which to store each of the scripts below.  Eg: changetocsv.vbs and openexcel.vbs.

2.  At a Command Prompt, run the following command:

dnscmd /enumrecords contoso.com @ /Type A /additional > c:\dnsdata\dns.csv

Note: For more information on dnscmd.exe, run ‘dnscmd /?’ at a Command Prompt.

3.  Save the below script as “changetocsv.vbs” in the directory created. This script will read the raw output taken from dnscmd command, format it by inserting comma delimiters, and then save it as the same filename specified at the command prompt when it is run.

Const ForReading = 1
Const ForWriting = 2

strFileName = Wscript.Arguments(0)
Set objFSO = CreateObject("Scripting.FileSystemObject")

Set objFile = objFSO.OpenTextFile(strFileName, ForReading)
strText = objFile.ReadAll
objFile.Close
strNewText = Replace(strText, " [Aging:", ",")
strNewText1 = Replace(strNewText, "] ", ",")
Set objFile = objFSO.OpenTextFile(strFileName, ForWriting)
objFile.WriteLine strNewText1
objFile.Close

'please modify Rtype array as per the record requirements
Rtype = Array("A", "SRV", "NS", "SOA", "MX", "CNAME")
For i = 0 To UBound(Rtype)
    rrtype = " " + Rtype(i) + " "
    Set objFile = objFSO.OpenTextFile(strFileName, ForReading)
    strText = objFile.ReadAll
    objFile.Close
    strNewText = Replace(strText, rrtype, "," + Rtype(i) + ",")
    Set objFile = objFSO.OpenTextFile(strFileName, ForWriting)
    objFile.WriteLine strNewText
    objFile.Close
Next

Set objFile = objFSO.OpenTextFile(strFileName, ForReading)
strText = objFile.ReadAll
objFile.Close
strNewText = Replace(strText, " ", ",,")
Set objFile = objFSO.OpenTextFile(strFileName, ForWriting)
objFile.WriteLine strNewText
objFile.Close

4.  The script takes one argument. At the command prompt while in the directory created earlier, run the following command:

C:\dnsdata> changetocsv.vbs dns.csv

This command modifies the content of dns.csv and overwrites the same file.

5.  (optional) View the modified dns.csv. If you open the new version of dns.csv, you will see that it has been changed, similar to our example below:

Returned,,records:
@,3570365,600,A,192.168.0.3
,3570365,600,A,192.168.0.1
,3570365,600,A,192.168.0.4
,3570365,600,A,192.168.0.2
2K-A,3558828,1200,A,192.168.0.14
clusdfs,3570365,1200,A,192.168.0.31
cluster,3570365,1200,A,192.168.0.30
contoso-dca,3570521,3600,A,192.168.0.1
CONTOSO-DCB,3570521,3600,A,192.168.0.2
CONTOSO-DCC,3570413,1200,A,192.168.0.3
CONTOSO-DCD,3570394,1200,A,192.168.0.4
R2-A,3570365,1200,A,192.168.0.11
R2-B,3570365,1200,A,192.168.0.12
R2-C,3570496,1200,A,192.168.0.13
R2-E,3570365,1200,A,192.168.0.199
R2-F,3570365,1200,A,192.168.0.19
R2-G,3570365,1200,A,192.168.0.20
rat-r2,3562303,1200,A,192.168.0.254
test,,3600,A,10.1.1.10
VISTA-A,3558828,1200,A,192.168.0.17
VISTA-B,3570365,1200,A,192.168.0.51
XP-A,3562227,1200,A,192.168.0.15
XP-B,3562227,1200,A,192.168.0.16
Command,,completed,,successfully.

Thanks to the new formatting, the file can now easily be opened in Excel as a CSV file.  However, the “aging” number (second column) still needs to be converted to a readable date.  The Aging number in the DNS data gives hours since 1/1/1601 00:00, while Excel uses 1/1/1900 00:00 as its starting point.  So we need to subtract a constant from the aging number to normalize it and then specify the format.  In the following script, we subtract the constant 2620914.50 and divide the result by 24, since Excel understands “days” rather than “hours”.
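As a quick sanity check using the sample data above: the record with Aging 3570365 converts to (3570365 - 2620914.5) / 24 ≈ 39560.44.  Excel reads the whole part as a day in late April 2008 and the fraction as the time of day - consistent with a record last refreshed shortly before this post was written.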

6.  Save the script file below to “openexcel.vbs”. This script will modify the comma delimited file, dns.csv in our example, to convert the number mentioned for Aging to a date format and opens the file in Excel automatically.

Const ForReading = 1
Const ForWriting = 2

strfile = wscript.Arguments(0)
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.OpenTextFile(strfile, ForReading)

Do Until objFile.AtEndOfStream
    strLine = objFile.ReadLine
    If Not strLine = "" Then
        arrItems = Split(strLine, ",")
        intDateValue = 0
        If Not (arrItems(1)) = "" Then
            intDateValue = (arrItems(1) - 2620914.50) / 24
        End If
        intItems = UBound(arrItems)
        ReDim Preserve arrItems(intItems + 1)
        If intDateValue > 0 Then
            arrItems(intItems + 1) = intDateValue
        Else
            arrItems(intItems + 1) = ""
        End If
        strNewLine = Join(arrItems, ",")
        strNewText = strNewText & strNewLine & vbCrLf
    End If
Loop
objFile.Close

Set objFile = objFSO.OpenTextFile(strfile, ForWriting)
objFile.Write strNewText
objFile.Close

Set objExcel = CreateObject("Excel.Application")
objExcel.Visible = True
Set objWorkbook = objExcel.Workbooks.Open(strfile)
Set objRange = objExcel.Cells(1, 6)
Set objRange = objRange.EntireColumn
objRange.NumberFormat = "m/d/yyyy hh:mm:ss AM/PM"

7.  The script takes one argument. At the command prompt, run the following command:

C:\dnsdata> openexcel.vbs c:\dnsdata\dns.csv

The script modifies the content of dns.csv, overwrites the same file, and then opens the result in Excel, provided Excel is available on the machine.

IMPORTANT: Please give the full path name of the file; otherwise Excel will give an error while attempting to open dns.csv.

The columns are Name, Aging, TTL, Type, IP address & Time Stamp.  Blanks in Time Stamp indicate a static record.

[Screenshot: the result after running both scripts on our example data]

8.  Once the file is open, save the result as dns.xls and use that for all future reference.

Thanks to the “Scripting Guy” archives (http://www.microsoft.com/technet/scriptcenter/resources/qanda/all.mspx), without which these VBScripts would not have been possible.

Contributed by Rajeev Narshana & Kapil Thacker

New branch office appliance from Citrix

Windows Server Division WebLog - Tue, 05/20/2008 - 13:59

We've blogged about the use of Windows Server within branch offices several times here and here. Many of them kicked off with the introduction of Windows Server 2003 R2.

Today Citrix announced Branch Repeater, which they say is a new line of branch office appliances. Here's a description I received from Citrix:

Citrix Branch Repeater debuts with the ability to stage, cache, or pre-position content at the branch using technologies such as Microsoft Windows file services and Microsoft Distributed File System (DFS).  These technologies allow Citrix Branch Repeater to pre-position XenApp “Portable Application” installation files for rapid update and delivery to branch employees. The Citrix Branch Repeater also uses Microsoft ISA Server 2006 Web caching to accelerate delivery of web content to the branch.  But the repeater goes much further.  It can accelerate all TCP-based network traffic to and from the branch using Citrix WANScaler technology. And through integration with Microsoft Windows Server 2003 R2, local branch services are consolidated and delivered locally for optimal performance and availability.

In essence, Citrix Branch Repeater does three things that can help you provide better services to your remote or branch offices: (1) it stages streamed apps closer to the branch-based employees by using Citrix XenApp; (2) it consolidates Windows-based branch services; and (3) it accelerates WAN app delivery.

There are three models to choose from:

  • Citrix Branch Repeater 100 - 1 Mbit appliance, $5,500 list
  • Citrix Branch Repeater 200 - 2 Mbit, $7,500 list
  • Citrix Branch Repeater 300 - 10 Mbit, $11,500 list

Patrick

Two Minute Drill: RELOG.EXE

Ask the Performance Team - Tue, 05/20/2008 - 11:00

Following on from our last Two Minute Drill, today's topic is the RELOG.EXE utility.  RELOG.EXE creates new performance logs from data in existing performance logs by changing the sampling rate and / or converting the file format.  RELOG.EXE is not a new tool - it is however one of those tools that most administrators are not aware of.  Although RELOG.EXE is a fairly simple tool, it is incredibly powerful.  Let's look at the built-in help file for RELOG.EXE:

RELOG <filename [filename ...]> [options]

Parameters:
  <filename [filename ...]>     Performance file to relog.

Option                           Description
-?                               Display context sensitive help
-a                               Append output to the existing binary file
-c <path>                        Counters to filter from the input log
-cf <filename>                   File listing performance counters from the input log. The default is all counters in the original log file
-f <CSV | TSV | BIN | SQL>       Output file format
-t <value>                       Only write every nth record into the output file
-o                               Output file path or SQL database
-b <M/d/yyyy h:mm:ss [AM | PM]>  Begin time for the first record to write into the output file
-e <M/d/yyyy h:mm:ss [AM | PM]>  End time for the last record to write into the output file
-config <filename>               Settings file containing command options
-q                               List performance counters in the input file
-y                               Answer yes to all questions without prompting

Now, let's look at some common scenarios:

Scenario 1: Converting an existing Performance Monitor Log

Although most administrators are comfortable using the .BLG file format and reviewing Performance data within the Performance Monitor tool, there are some advantages to reviewing the data in a different format such as a Comma-Separated Value file (.CSV).  The process to convert a .BLG to .CSV is straightforward using RELOG.EXE:

relog logfile.blg -f csv -o logfile.csv

Scenario 2: Filtering a Performance Monitor Log by Performance Counter

In our last Two Minute Drill we showed you how to capture a baseline performance monitor log.  We also provided a couple of sample commands that we use in our troubleshooting to capture performance data.  However, once we get those performance logs, filtering through them can sometimes be very time consuming - especially in instances where the system is extremely active.  Oftentimes, it is useful to have both the raw data as well as a filtered subset that only shows a couple of counters.  Using RELOG.EXE we can do just that - in this example, we are going to separate out just the Private Bytes counter for all processes:

relog originalfile.blg -c "\Process(*)\Private Bytes" -o filteredfile.blg

Scenario 3: Filtering a Performance Monitor Log by Time

The last scenario we are going to look at is extracting a subset of performance data from a Performance Monitor log based on time.  This is especially useful when you have a large data sample where there are multiple instances of an issue that occurred during the time that the performance data was captured.  Using RELOG.EXE with the -b and -e options we can pull out a subset of this data and write it to a separate file - I am going to use a sample of the baseline file I created earlier:

relog baseline.log.blg -b "5/6/2008 8:00:00 AM" -e "5/6/2008 8:34:00 AM" -o filteredcapture.blg

As you can see there are fewer samples in the filteredcapture.blg file.  This particular type of filtering is extremely useful when you want to send a subset of performance data to other systems administrators (or even Microsoft Support!)
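The sampling-rate conversion mentioned at the top of this post works the same way - for example, to keep only every 10th sample from a large log (file names are illustrative):

relog biglog.blg -t 10 -o thinned.blg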

And that's it for our post on RELOG.EXE.  Until next time ...

- CC Hameed

SBS and EBS videos on Technet Edge

Windows Server Division WebLog - Tue, 05/20/2008 - 03:57

There are new demonstration videos of the upcoming Small Business Server 2008 and Essential Business Server 2008 on TechNet Edge.

Program manager Bjorn Levidow provides an overview of EBS management, including remote administration, add-in management and license tracking.  Becky Ochs, program manager for SBS, shows setup of SBS 2008, including the new Answer File tool.

Joel

New Networking-related KB articles for the week of May 3 - May 9

Microsoft Enterprise Networking Team - Mon, 05/19/2008 - 20:10

951088  Error message when you use SMB-to-NFS gateway software that exposes mounted NFS shared folders as SMB shared folders on a Windows Server 2008-based computer: "Stop 0x0000007E"

951016  Description of User Account Control and remote restrictions in Windows Vista

951830  When you disable and then re-enable the LAN-side network adapter on a Windows XP SP3-based computer that is configured as a Connection Sharing host, a client computer on the network cannot access the Internet

946480  List of fixes that are included in Windows XP Service Pack 3

Terminal Services Application Analyzer Beta

Terminal Services Team Blog - Mon, 05/19/2008 - 17:35

What is Application Compatibility for Terminal Services?

Application Compatibility is the term given to the collection of issues which prevent an application from executing satisfactorily (or in an expected manner) in a given environment - in this case, the Windows Terminal Services (TS) platform.

TS is deployed for a variety of reasons, such as reducing total cost of ownership (TCO), improving security and compliance, and enabling mobility.

Following are some problems faced by client applications in a TS environment:

  1. Client applications are generally written for a single user. Because TS is a multiuser system, this might cause synchronization problems.
  2. Some applications are written with the assumption that their binaries are running with administrator privileges. In TS, a normal user is rarely given administrative privileges.
  3. The behavior of some APIs is different in a TS server environment than in a client operating system. This might cause problems such as unexpected results from some operating system calls.

TS Application Analyzer

TS Application Analyzer is a runtime program analysis tool to enable administrators/users to determine if they can deploy an application on TS with confidence. It provides a summary of an application’s TS-incompatible behavior. The classes of application compatibility issues targeted for detection are:

  1. Shared resources – files/registries
  2. Access/privilege issues
  3. Windows API calls with special cases for TS

The tool does the following:

  1. Enables administrators to analyze test runs on a given binary.
  2. Determines whether the binary will face any problems when deployed on TS. If so, the tool determines the type of problem and its severity.
  3. Summarizes the findings along with a recommendation.
  4. Exports the findings so they can be analyzed at another computer (e.g. by a test team).
  5. Can be deployed seamlessly on a set of user computers or test computers (running the client OS or the TS server OS). The findings can be collected at the administrator’s computer; the administrator can then analyze the findings from all computers and decide whether the application should be deployed on TS or not.

More information and downloads for the tool are available at the Connect website for TS Application Compatibility: http://connect.microsoft.com/tsappcompat

The guide for using the tool and the End User License Agreement (EULA) are also available at this site.

For any queries about the tool or about TS Application Compatibility, email tsappsup@microsoft.com.

Windows HPC Server 2008 Beta 2 is Here

Windows Server Division WebLog - Sat, 05/17/2008 - 16:19

Whew! Friday at 2:18PM we signed off on Beta 2 of Windows HPC Server 2008. It’s a good thing too since the Redmond team is looking at the first sunny and hot Northwest weekend this year. Mother nature usually gives us these days on weekdays. It’s been a hard push since November when we shipped our last beta. Since then we’ve done test runs on a cluster with over 1000 nodes, fixed over 1000 bugs, coded a bunch of new features, and made a bunch of design changes based on customer feedback. For example, one beta customer was using our new WCF Broker for financial risk modeling but wanted a totally reliable messaging solution. We built a solution leveraging MSMQ that still provides high throughput while allowing for reliable messaging.

Now that Beta 2 is finished our Technology Adoption Partners (TAP) will put this beta into production environments. We’ll carry pagers to help them out if they run into a crit-sit after hours. Actually, we have cell phones. Pagers have gone the way of punch cards, teletypes, and sock garters. I suspect there are teenagers wandering around that don’t know what a pager is.

Anyway, there’s a bunch of new stuff in Beta 2.

We checked in high availability for the head node and a new set of diagnostic tests to help people identify and troubleshoot their clusters. The new UI model is really coming together but for users more comfortable with command line interfaces we provide scripting support through COM and PowerShell. Finally, administrators can run administrative scripts in parallel across the cluster using our improved Clusrun feature.

A bunch of humbling (heh) usability testing pushed us to redesign the To Do List. It should be much easier for people to get through setting up a cluster, adding drivers to images, and configuring patching for the cluster (new feature!). The heat map is working so well we’ve thrown out our internal monitoring tools we use on Top500 runs.

After lots of, um, passionate debate we’ve finalized the APIs for job submission. It will continue to be easy for ISVs to integrate directly with our job scheduler while at the same time working with a cluster that may have thousands of jobs in the queue, each job with thousands of tasks.

A lot of people don’t know that we co-chair the HPC Basic Profile working group at the Open Grid Forum. With Beta 2, we ship our support for “HPC Basic Profile,” allowing us to interop with the LSF and PBSPro job schedulers.

We completed a few great Top500 runs in the last few weeks. We can’t talk about the numbers until the International Supercomputing Conference in June but it looks like Beta 2’s new MPI stack and new Network Direct RDMA interface are starting to hum.

Finally, our new programming model based on SOA is getting some nice usage from beta customers. Most of the feedback has come from folks in computational finance but there are also a couple folks in the life sciences industry that are kicking the tires. For example, what if you came up with a new theory about cancer and wanted to search through thousands of medical scans to see if it was correct? For Beta 2 we improved scalability, reduced latency and improved session initialization time. Beta 2 supports multiple WCF Brokers, allowing HPC Server 2008 to run really big SOA workloads.

So, we’re done with Beta 2. Lots of new features (whew) and lots of scalability improvements. We’ve posted build 1345, Beta 2, up at http://connect.microsoft.com

Thanks!

Ryan Waite

Group Program Manager - HPC

Troubleshooting Server Hangs – Part Four

Ask the Performance Team - Fri, 05/16/2008 - 11:00

Welcome to Part Four of our Server Hang troubleshooting series.  Today we are going to discuss PTE depletion and low physical memory conditions and how those two issues can lead to server hangs.  In our post on the /3GB switch we mentioned that in general, a system should always have around 10,000 free System PTEs.  Although we normally see PTE depletion issues on systems using the /3GB switch, that does not necessarily mean that using the /3GB switch is going to cause issues – what we said was that the /3GB switch is intended to be used in very specific instances.  Tuning the memory further by using the USERVA switch in conjunction with the /3GB switch can often stave off PTE depletion issues.  The problem with PTE depletion is that there are no entries logged in the Event Viewer that indicate that there is a resource issue.  This is where using Performance Monitor to determine whether a system is experiencing PTE depletion comes into play.  However, Performance Monitor may not identify why PTEs are being depleted.  In instances where a process has a continually rising handle count that mirrors the rate of PTE depletion, it is fairly straightforward to identify the culprit.  However, more often than not we have to turn to a complete dump file to analyze the problem.

Below is what we might see in a dump file in a scenario where we have PTE depletion when we use the !vm command to get an overview of Virtual Memory Usage:

*** Virtual Memory Usage ***
Physical Memory:   2072331 ( 8289324 Kb)
Page File: \??\C:\pagefile.sys
  Current: 2095104Kb  Free Space: 2073360Kb
  Minimum: 2095104Kb  Maximum: 4190208Kb
Available Pages:   1635635 ( 6542540 Kb)
ResAvail Pages:    1641633 ( 6566532 Kb)
Locked IO Pages:      2840 (   11360 Kb)
Free System PTEs:     1097 (    4388 Kb)
  ******* 1143093 system PTE allocations have failed ******
Free NP PTEs:        14833 (   59332 Kb)
Free Special NP:         0 (       0 Kb)
Modified Pages:        328 (    1312 Kb)
Modified PF Pages:     328 (    1312 Kb)
NonPagedPool Usage:  11407 (   45628 Kb)
NonPagedPool Max:    32767 (  131068 Kb)
PagedPool 0 Usage:   11733 (   46932 Kb)
PagedPool 1 Usage:     855 (    3420 Kb)
PagedPool 2 Usage:     862 (    3448 Kb)
PagedPool 3 Usage:     868 (    3472 Kb)
PagedPool 4 Usage:     849 (    3396 Kb)
PagedPool Usage:     15167 (   60668 Kb)
PagedPool Maximum:   40960 (  163840 Kb)
Shared Commit:        3128 (   12512 Kb)
Special Pool:            0 (       0 Kb)
Shared Process:      25976 (  103904 Kb)
PagedPool Commit:    15197 (   60788 Kb)
Driver Commit:        1427 (    5708 Kb)
Committed pages:    432175 ( 1728700 Kb)
Commit limit:      2562551 (10250204 Kb)

In this particular instance we can clearly see that we have a low PTE condition.  In looking at the Virtual Memory Usage summary, we can see that the server is most likely using the /3GB switch, since the NonPaged Pool Maximum is only 130MB.  In this scenario we would want to investigate using the USERVA switch to fine-tune the memory and recover some more PTEs.  If USERVA is already in place and set to 2800, then it is time to think about scaling the environment to spread the server load.  For more granular troubleshooting, where we suspect a PTE leak that we cannot explain using Performance Monitor data, we can modify the registry to enable us to track down the PTE leak.  The registry value that we need to add to the HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management key is as follows:

Value Name: TrackPtes
Value Type: REG_DWORD
Value Data: 1
Radix: Hex
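For example, from a command prompt (the /f switch suppresses the overwrite prompt; reboot afterwards as noted below):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v TrackPtes /t REG_DWORD /d 1 /f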

Once we implement this registry modification, we need to reboot the system to enable PTE tracking.  Once PTE tracking is in place, we will need to capture a new memory dump the next time the issue occurs and analyze that dump to identify the cause of the leak.
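(Aside: in our experience, once TrackPtes data is in the dump, the debugger's !sysptes extension - for example !sysptes 4 - can break down System PTE consumption by allocator; the exact flags vary by debugger version, so treat that invocation as an assumption and check the debugger help.)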

To wrap up our post, we are going to take a quick look at a dump file from a server that is experiencing a low physical memory condition.  Below is the output of the !vm command (with a couple of comments that we’ve added in):

3: kd> !vm

*** Virtual Memory Usage ***
Physical Memory:    851843 ( 3407372 Kb)  <----- Server has 3.4 GB physical RAM
Page File: \??\C:\pagefile.sys
  Current: 3072000Kb  Free Space: 2377472Kb
  Minimum: 3072000Kb  Maximum: 3072000Kb
Page File: \??\D:\pagefile.sys
  Current: 4193280Kb  Free Space: 3502716Kb
  Minimum: 4193280Kb  Maximum: 4193280Kb
Page File: \??\E:\pagefile.sys
  Current: 4193280Kb  Free Space: 3506192Kb
  Minimum: 4193280Kb  Maximum: 4193280Kb
Page File: \??\F:\pagefile.sys
  Current: 4193280Kb  Free Space: 3454596Kb
  Minimum: 4193280Kb  Maximum: 4193280Kb
Page File: \??\G:\pagefile.sys
  Current: 4193280Kb  Free Space: 3459764Kb
  Minimum: 4193280Kb  Maximum: 4193280Kb
Available Pages:      1198 (    4792 Kb)  <----- Almost no free physical memory
ResAvail Pages:     795226 ( 3180904 Kb)
Modified Pages:        787 (    3148 Kb)
NonPagedPool Usage:   6211 (   24844 Kb)
NonPagedPool Max:    37761 (  151044 Kb)
PagedPool 0 Usage:   11824 (   47296 Kb)
PagedPool 1 Usage:     895 (    3580 Kb)
PagedPool 2 Usage:     881 (    3524 Kb)
PagedPool 3 Usage:     916 (    3664 Kb)
PagedPool 4 Usage:     886 (    3544 Kb)
PagedPool Usage:     15402 (   61608 Kb)
PagedPool Maximum:   65536 (  262144 Kb)
Shared Commit:      771713 ( 3086852 Kb)
Special Pool:            0 (       0 Kb)
Free System PTEs:     7214 (   28856 Kb)
Shared Process:       7200 (   28800 Kb)
PagedPool Commit:    15402 (   61608 Kb)
Driver Commit:        1140 (    4560 Kb)
Committed pages:   2161007 ( 8644028 Kb)  <----- Total committed pages is 8.6GB, far larger than physical RAM; paging will be high
Commit limit:      5777995 (23111980 Kb)

Total Private:     1363369 ( 5453476 Kb)

In this particular instance, the server simply did not have enough memory to keep up with the demands of the processes and the OS.  Paged and NonPaged Pool resources are not experiencing any issues.  The number of available PTEs is somewhat lower than our target of 10,000; however, if you recall from our earlier posts, if a server is under load, the number of free PTEs may drop below 10,000 temporarily.  In this case, as a result of the low memory condition on this server, there were several threads in a WAIT state – which caused the server to hang.  The solution for this particular issue was to add more physical memory to the server to ease the low physical memory condition.

And with that, we come to the end of this post.  Hopefully you’ve found the information in our last few posts useful.

- Sakthi Ganesh

Preparing the Network for NLB 2008

Microsoft Enterprise Networking Team - Thu, 05/15/2008 - 20:06

Windows Server 2008 is here, along with a new version of Network Load Balancing (NLB).  Just as in previous versions, NLB continues to provide an excellent option for scaling many kinds of applications and promoting higher availability.  And while the deployment and configuration of NLB is fairly straightforward, it’s important to ensure the network environment is ready for NLB. 

Unicast

If you choose to deploy NLB using unicast, all of the NLB adapters will share a Cluster MAC address, in addition to the Virtual IP (VIP) address.  The idea behind the shared MAC is that when a host communicates with the MAC address for the NLB Cluster, all of the NLB nodes will respond, making it impossible for the switch to associate the MAC address to a particular port.  This in turn will cause the switch to simply flood the frames destined to the Cluster MAC out all of its ports, ensuring that all of the NLB nodes receive the frames.  Problems may arise when using multi-layer switches or virtual network environments if the switch does associate the Cluster MAC or the Virtual IP to a specific port.  In this case, only one NLB node will receive traffic destined to the Virtual IP address of the Cluster, preventing the remaining NLB nodes from sharing the load.  One way to get around this issue is to employ a hub.  By connecting all the NLB nodes into a hub, and then connecting the hub to a port on the switch, all of the NLB nodes will receive the traffic destined to the Cluster.  Another solution is to configure port mirroring on the switch to ensure traffic sent to one of the NLB ports is replicated to all of them.

As mentioned earlier, unicast NLB relies on switch “flooding” behavior to function properly.  If you want to limit the flooded traffic on your network, you  can create a separate VLAN encompassing only the ports the NLB nodes are connected to.

Multicast

You can also opt to deploy NLB using multicast.  With multicast, each NLB node effectively has two MAC addresses: a physical MAC and a multicast MAC.  Switches typically do not associate ports with a multicast MAC address, so the traffic will be flooded out all ports.  The flooding of the multicast traffic may cause unintended network performance issues.  To resolve these issues, you can configure the switch with static mappings of the multicast MAC and the ports that the NLB nodes are connected to.

NLB Manager

One other point to keep in mind when deploying Windows Server 2008 Network Load Balancing is that the NLB Manager from Windows Server 2003 cannot be used to manage Windows Server 2008 NLB nodes.  You can manage the Windows Server 2008 nodes with the NLB Manager on a Windows Server 2008 server or with Windows Vista if you have the Remote Server Administration Tools (RSAT) installed.

For more information on deploying NLB, including upgrading from Windows Server 2003 NLB, check out the following article:

http://technet2.microsoft.com/windowsserver2008/en/library/d7c4efd2-3cf0-4b3d-9207-4746cab1f9aa1033.mspx?mfr=true

- Baruch Frost

Windows Server 2008 in Action - HostMySite.com

Windows Server Division WebLog - Wed, 05/14/2008 - 17:23

Since the launch of Windows Server 2008 on February 27th, I have had a phenomenal opportunity to hear a lot of positive feedback from IT Pros, developers and our partners.  I truly have enjoyed talking with customers from around the globe to hear their experiences and implementations. 

Since I am back in the office for the foreseeable future I thought I would take some time over the next couple of weeks to showcase some of the implementations of Windows Server 2008 that I have come across that have caught my attention.

One customer who has seen great results in the Web hosting area with WS08 is HostMySite.com. If you are not familiar with this company - they are a Web hosting company that owns and operates its own datacenters and networks and provides support for dedicated server environments.  HostMySite hosts more than 85,000 web sites on 3,100 servers (and growing).

One of the initial goals of their WS08 deployment was to offer the highest levels of application stability to their customers.  In addition HostMySite wanted to increase the site capacity on their web servers and minimize the amount of time spent troubleshooting.

Prior to Windows Server 2008, HostMySite was getting roughly 500 application pools on each of their servers. IIS 7.0's new application pool management features have allowed HMS to scale up to 3,000 application pools per server.  In addition to increased application pool capacity, HMS was also able to reduce the number of servers.....what normally took 10 servers now takes 4.  (Although I wish WS08 was solely responsible for that metric - they also moved to dual-core Dell PowerEdge servers.)  Both of these are very impressive to step back and take a look at: 6x the application capacity on 60% fewer servers.

HostMySite is just one of the many customers seeing strong results with Windows Server 2008 and you can read more about their deployment story here.....it is actually a good read and you will see they are doing a lot more with the remote management capabilities of IIS 7.0

If you want more info on IIS, be sure to visit IIS.net. There is also a great blog post here that talks more about the hosting features in IIS 7.0.

Stay tuned for more.....
 
-Ward Ralston
