Windows Essential Server Solutions go public

Windows Server Division WebLog - Tue, 05/13/2008 - 16:30

Good news. Today we opened up the Public Preview for Windows Essential Server Solutions - Small Business Server 2008 and Essential Business Server 2008.  Visit here to find out how to download and evaluate the Release Candidate 0 versions of both.  EBS is available now, SBS will be up within a few weeks - but you can sign up today and be notified when it is ready.

We also unveiled pricing for SBS 2008 and EBS 2008.  Both products offer big savings versus buying the similar component products separately, not to mention the time and effort saved by the Solutions' integrated, SMB-oriented setup and management.  We've made a lot of changes to SBS as a product, so the price has changed.  In most 1-75 user cases, SBS 2008 Standard is actually less expensive than SBS 2003 Standard.  SBS 2008 Premium is now a two box solution with an additional copy of Windows Server and SQL Server running on a second box, in order to provide a great application platform - a big request from partners and customers.  There are a number of changes to make SBS CALs more flexible and cost-effective, too - see here for details.

Joel Sider


May 2008 Monthly Release

This is Tami Gallupe, MSRC Release Manager, and I want to let you know that we just posted our May 2008 Bulletins. We released four bulletins today, which include three bulletins with a severity rating of Critical and one with a severity rating of Moderate. We also re-released MS06-069 to add XP SP3 as an affected version.

 

Here is a summary of what we released:

 

MS08-026  Vulnerabilities in Microsoft Word Could Allow Remote Code Execution

MS08-027  Vulnerability in Microsoft Publisher Could Allow Remote Code Execution

MS08-028  Vulnerability in Microsoft Jet Database Engine Could Allow Remote Code Execution

MS08-029 Vulnerabilities in Microsoft Malware Protection Engine Could Allow Denial of Service

 

I think it is also worth noting that MS08-026 includes additional security mitigations against attacks as identified in Microsoft Security Advisory 950627. We recommend that customers install the updates provided in both MS08-026 and MS08-028 for the most up to date protection against these types of attacks.  

 

Our Security Vulnerability Research & Defense blog this month discusses MS08-026.  You can find a post discussing built-in functionality to turn off the vulnerable parsing code for one of the fixed vulnerabilities at http://blogs.technet.com/swi/archive/2008/05/13/file-block-and-ms08-026.aspx

 

I want to invite you to join us for the monthly webcast that starts tomorrow (Wednesday, May 14th) at 11:00 AM PST.  We’ll be discussing today’s release and answering your questions on the air. Click here to register for the May Security Bulletin Webcast.  We look forward to hearing from you tomorrow.

 

Thanks!

   Tami

 

*This posting is provided "AS IS" with no warranties, and confers no rights.*

 

Two Minute Drill: LOGMAN.EXE

Ask the Performance Team - Tue, 05/13/2008 - 11:00

Today we are continuing on with our Two Minute Drill series.  Our topic in this post is one that we discuss quite frequently with customers - namely the automation of creating Performance Monitor and Trace Logs.  Most administrators are comfortable creating Local and Remote Performance Monitor logs using the Performance Monitor MMC and the GUI tools.  However, there are some extremely powerful command line utilities that can be used to configure and capture Performance data.  Today we will be discussing the LOGMAN.EXE utility.  So without further ado ...

The LOGMAN.EXE utility can be used to create and manage Event Trace Session and Performance logs.  Many functions of Performance Monitor are supported and can be invoked using this command line utility.  Before we look at some examples of how to configure Performance logs using this utility, let's quickly cover some of the syntax.  Running LOGMAN /? from a command prompt brings up the first level of context sensitive help:

Basic Usage:  LOGMAN [create | query | start | stop | delete | update | import | export] [options].  The verbs specified determine what actions are being performed:

  • CREATE - Create a new data collector
  • QUERY - Query data collector properties.  All data collectors are listed if no specific name is provided
  • START - Start an existing data collector
  • STOP - Stop an existing data collector
  • DELETE - Delete an existing data collector
  • UPDATE - Update the properties of an existing data collector
  • IMPORT - Import a data collector set from an XML file
  • EXPORT - Export a data collector set to an XML file

Running LOGMAN <verb> /? brings up context sensitive help for the verb specified.  There are also some options to be aware of:

  • -? - Display context sensitive help
  • -s <computer> - Perform the command on the specified remote system
  • -ets - Send the command directly to an Event Tracing Session without saving or scheduling

So now that we have our basic commands, let's take a look at how we can use LOGMAN.EXE for one of our most common scenarios - capturing baseline Performance data for a system.  We've discussed the importance of capturing baseline server performance data in several previous posts.  In our example, we are going to capture a binary circular performance monitor log that has a maximum size of 500MB.  The reason we are going to use a binary circular log is that we can record the data continuously to the same log file, overwriting previous records with new data once the log file reaches its maximum size.  Since this will be a baseline performance log that will be constantly running, we want to ensure that we can capture a significant data sample, and not have the log file being overwritten in such a short timeframe that useful data is lost.  Put another way, we want to set our capture interval up so that we do not overwrite our data too quickly.  For the purposes of this example, we'll set up our log to capture data every two hours.  We want to save our data to a log file, so we will need to specify a log file location.  Given that we want to capture baseline data, there is a good possibility that we want to use the same settings on multiple servers so we'll need to ensure that we can repeat this process with a minimum of administrative fuss ...

So, to recap, we are going to capture our baseline performance log that is:

  • a binary circular log that will be a maximum of 500MB in size
  • configured with a capture interval of two hours
  • saved to a file location
  • configured with standard counters so that we can capture consistent baseline data across multiple servers if needed

The one piece of this equation that we have not specified is which counters we need to capture.  One of the key reasons to use LOGMAN.EXE is that we can specify the counters we want to capture in a standard configuration file and then use that same configuration to set up the capture on multiple servers.  Creating the configuration file is fairly simple - we are going to create a .CONFIG file that enumerates the counters that we want to capture, one per line.  An example is shown below:

"\Memory\Available MBytes" "\Memory\Pool Nonpaged Bytes" "\Memory\Pool Paged Bytes" "\PhysicalDisk(*)\Current Disk Queue Length" "\PhysicalDisk(*)\Disk Reads/sec" "\PhysicalDisk(*)\Disk Read Bytes/sec" "\PhysicalDisk(*)\Disk Writes/sec" "\PhysicalDisk(*)\Disk Write Bytes/sec" "\Process(*)\% Processor Time" "\Process(*)\Private Bytes" "\Process(*)\Virtual Bytes"

These are some fairly standard Performance Counters.  Let's save this file as Baseline.config in a folder on one of our file servers.  Now we have all of the pieces that we need to configure and capture our baseline.

logman create counter BASELINE -f bincirc -max 500 -si 02:00:00 --v -o "e:\perflogs\SERVERBASELINE" -cf "\\<FILESERVER>\Baseline\Baseline.config"

Let's quickly examine the different elements of this command:
  • logman create counter BASELINE: This creates the BASELINE Data Collector on the local machine
  • -f bincirc -max 500 -si 02:00:00: This piece of the command specifies that we are creating a Binary Circular file, sets the Maximum Log file size to 500MB, and sets the Capture Interval to two hours (the -si value uses the [[hh:]mm:]ss format)
  • --v -o "e:\perflogs\SERVERBASELINE": In this part of the command, we turn off the versioning information, and set the Output Location and Filename.  The Performance Monitor log will be created with a .BLG extension
  • -cf \\<FILESERVER>\Baseline\Baseline.config: Finally, we point the LOGMAN utility to the location of our standard counter configuration file

Once we run this command, we can run LOGMAN.EXE and use the QUERY verb to ensure that our Data Collector has been created.
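That verification is just another LOGMAN call - for example (output not shown here, and its exact layout varies by operating system version):

logman query
logman query BASELINE

The first command lists every Data Collector defined on the machine; the second shows the properties of the BASELINE collector, including its type, status, output location and sample interval.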

The last thing we need to do is start our Data Collector set.  There are a couple of options here - the first is to run LOGMAN.EXE START BASELINE from the command line.  This will launch the Data Collector.  However, when we reboot our system, the Data Collector will not run.  If you create a startup script to run the command above to start the Data Collector set, then you can capture your performance data from the time that the server starts.
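A minimal sketch of such a startup script - the file name and the Group Policy assignment are just examples; any mechanism that runs the command at boot will do:

@echo off
rem StartBaseline.cmd - run as a computer startup script so the
rem BASELINE Data Collector begins logging as soon as the server boots
logman start BASELINE

Because the collector definition is stored persistently once it has been created, the same one-line script can be reused on every server where the BASELINE collector exists.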

Before we wrap up our post, here is another common scenario.  You can create a Data Collector set on a full installation of Windows Server 2008 or Windows Vista.  Then export that Data Collector Set configuration to an XML Template.  You can then use the LOGMAN.EXE command with the IMPORT verb to import that Data Collector set configuration on a Windows Server 2008 Server Core system, then use the LOGMAN.EXE command with the START verb to start the Data Collector Set.  The commands are below:

  • LOGMAN IMPORT -n <Data Collector Set Name> -xml <XML template that you exported>:  This will create the Data Collector Set with whatever name you pass to the -n parameter
  • LOGMAN START <Data Collector Set Name>: Start the Data Collection process.
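Putting placeholder paths around those two verbs, the end-to-end sequence might look like this (the share and file names are examples only):

rem On the full installation, after building the Data Collector Set in the GUI:
LOGMAN EXPORT -n BASELINE -xml "\\<FILESERVER>\Templates\Baseline.xml"

rem On the Server Core system:
LOGMAN IMPORT -n BASELINE -xml "\\<FILESERVER>\Templates\Baseline.xml"
LOGMAN START BASELINE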

Finally, here are two more sample commands where we use LOGMAN.EXE for gathering Performance Monitor data for troubleshooting:

High CPU Issue

logman.exe create counter High-CPU-Perf-Log -f bincirc -v mmddhhmm -max 250 -c "\LogicalDisk(*)\*" "\Memory\*" "\Network Interface(*)\*" "\Paging File(*)\*" "\PhysicalDisk(*)\*" "\Process(*)\*" "\Redirector\*" "\Server\*" "\System\*" "\Thread(*)\*"   -si 00:00:05

In this example, we have a capture interval of five seconds, with a Maximum Log size of 250MB.  The Performance Counters that we are capturing are fairly generic.

Generic Performance Monitor Logging

logman.exe create counter Perf-Counter-Log -f bincirc -v mmddhhmm -max 250 -c "\LogicalDisk(*)\*" "\Memory\*" "\Network Interface(*)\*" "\Paging File(*)\*" "\PhysicalDisk(*)\*" "\Process(*)\*" "\Redirector\*" "\Server\*" "\System\*"  -si 00:05:00

In this example, we are using a five minute capture interval - the rest of the parameters are fairly straightforward.  Remember that in both of these cases, you will need to use LOGMAN.EXE with the START verb and specify the name of the Data Collector Set to begin the capture.  These samples work on all Windows operating systems from Windows XP onwards.
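For example, to begin and later end the High CPU capture defined above:

logman start High-CPU-Perf-Log
logman stop High-CPU-Perf-Log

Stopping the Data Collector Set closes the .BLG file so that it can be copied off the server and opened in Performance Monitor for analysis.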

And with that, we come to the end of this Two Minute drill.  Until next time ...

- CC Hameed


New Networking-related KB articles for the week of April 26 - May 2

Microsoft Enterprise Networking Team - Mon, 05/12/2008 - 21:48

Here are the latest Networking-related Knowledge Base articles:

951764  How to enable the port scalability feature for RPC proxies and for applications in Windows Server 2008

950499  You may be unable to use the "netsh interface" context in some Server Core installations of Windows Server 2008

951598  On a computer that is running an Itanium-based version of Windows Server 2008, the Ftp.exe utility crashes when you run the "mput" command

947557  The WINS automatic scavenging process may not start as expected at the expiration of the configured interval on a Windows Server 2008-based computer

951745  After you install a non-English-language Input Method Editor on a Windows Vista-based computer, you cannot enter any numeric character in the WEP box when you try to join a secure wireless network

951025  The Server service and the Workstation service do not start in Windows 2000, and you receive a "The specified file could not be found" error message

951656  UPnP devices may not be displayed in the "My Network Places" folder after you restart a Windows XP-based computer

- Mike Platts


Troubleshooting Server Hangs – Part Three

Ask the Performance Team - Fri, 05/09/2008 - 11:00

In our last post on Server Hangs, we discussed using the Debugging Tools to examine a dump file to analyze pool depletion.  Today we are going to look at using our troubleshooting tools to examine a server hang caused by a handle leak.  Issues where there are an abnormal number of handles for a process are very common and result in kernel memory depletion.  A quick way to find the number of handles for each process is to check Task Manager > Processes.  You may have to add the Handles column from View > Select Columns.  Generally, if a process has more than 10,000 handles then we probably want to take a look at what is going on.  That does not necessarily mean that it is the offending process, just a suspect.  However, there are instances where the process may be for a database or some other memory-intensive application.  The most common instance of this is the STORE.EXE process for Exchange Server, which routinely has well over 10,000 handles.  On the other hand, if our Print Spooler process has 10,000 (or more) handles then we most likely have an issue.

Once we know there is a handle leak in a particular process, we can dump out all the handles and figure out why it is leaking.  If we want to find out from a dump if there is a process that has an abnormally large number of handles, we first have to list out all the processes and then examine the number of handles being used by the processes.  To list out all the processes that are running on the box using the Debugging Tools, we use the !process 0 0 command.  This will give us an output similar to what we see below:

0: kd> !process 0 0
**** NT ACTIVE PROCESS DUMP ****
PROCESS 8a5295f0  SessionId: none  Cid: 0004  Peb: 00000000  ParentCid: 0000
    DirBase: 0acc0020  ObjectTable: e1002e68  HandleCount: 1056.
    Image: System
PROCESS 897e6c00  SessionId: none  Cid: 04fc  Peb: 7ffd4000  ParentCid: 0004
    DirBase: 0acc0040  ObjectTable: e1648628  HandleCount: 21.
    Image: smss.exe
PROCESS 89a26da0  SessionId: 0  Cid: 052c  Peb: 7ffdf000  ParentCid: 04fc
    DirBase: 0acc0060  ObjectTable: e37a7f68  HandleCount: 691.
    Image: csrss.exe
PROCESS 890f0da0  SessionId: 0  Cid: 0548  Peb: 7ffde000  ParentCid: 04fc
    DirBase: 0acc0080  ObjectTable: e1551138  HandleCount: 986.
    Image: winlogon.exe
PROCESS 89a345a0  SessionId: 0  Cid: 0574  Peb: 7ffd9000  ParentCid: 0548
    DirBase: 0acc00a0  ObjectTable: e11d8258  HandleCount: 396.
    Image: services.exe

The important piece of information here is the HandleCount.  For the purposes of this post, let’s assume that there is a problem with SMSS.EXE and that there is an unusually high HandleCount.  To view all of the handles for the process, the first thing we need to do is switch to the context of the process and then dump out all of the handles as shown below.  The relevant commands are:

  • .process -p -r <processaddress> – this switches us to the context of the process
  • !handle – this dumps out all of the handles

0: kd> .process -p -r 897e6c00
Implicit process is now 897e6c00

0: kd> !handle
processor number 0, process 897e6c00
PROCESS 897e6c00  SessionId: none  Cid: 04fc  Peb: 7ffd4000  ParentCid: 0004
    DirBase: 0acc0040  ObjectTable: e1648628  HandleCount: 21.
    Image: smss.exe

Handle table at e1674000 with 21 Entries in use

0004: Object: e1009568  GrantedAccess: 000f0003 Entry: e1674008
Object: e1009568  Type: (8a5258b8) KeyedEvent
    ObjectHeader: e1009550 (old version)
        HandleCount: 53  PointerCount: 54
        Directory Object: e10030a8  Name: CritSecOutOfMemoryEvent

0008: Object: 8910b370  GrantedAccess: 00100020 (Inherit) Entry: e1674010
Object: 8910b370  Type: (8a54c730) File
    ObjectHeader: 8910b358 (old version)
        HandleCount: 1  PointerCount: 1
        Directory Object: 00000000  Name: \WINDOWS {HarddiskVolume1}

000c: Object: e1af9828  GrantedAccess: 001f0001 Entry: e1674018
Object: e1af9828  Type: (8a512ae0) Port
    ObjectHeader: e1af9810 (old version)
        HandleCount: 1  PointerCount: 12
        Directory Object: e1002388  Name: SmApiPort

At this point we can continue to dig into the handles to determine if there is something amiss.  More often than not, this would be an issue for which systems administrators would be contacting Microsoft Support.  However, by using this method you can quickly determine whether the problem lies with a third-party component and engage that vendor directly.  Being able to provide them with a dump file that shows that their component is consuming an excessive number of handles can assist them in providing you with a quicker resolution.
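If the handle table is very large, it can also help to narrow the dump to a single object type.  A rough sketch, using the process address from the output above (the object type here is just an example):

0: kd> !handle 0 f 897e6c00 Event

This lists only the Event handles owned by smss.exe, with full details for each one, which makes a leak of thousands of near-identical handles much easier to spot.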

That’s it for today.  In our next post on Server Hangs, we’ll look at how a lack of Available System PTE’s can cause server hangs.

- Sakthi Ganesh


May 2008 Advance Notification

Hello, Bill here.

I wanted to let you know that we just posted our Advance Notification for next week’s bulletin release which will occur on Tuesday, May 13, 2008 around 10 a.m. Pacific Standard Time.

It is important to remember that while the information posted below is intended to help with your planning, it is preliminary and subject to change.

As part of our regularly scheduled bulletin release, we’re currently planning to release:

 

  • Three Microsoft Security Bulletins rated Critical and one that is rated as Moderate. These updates may require a restart and will be detectable using the newly released version of the Microsoft Baseline Security Analyzer.

 

As we do each month, the Microsoft Windows Malicious Software Removal Tool will be updated.

 

Finally, we are planning to release high-priority, non-security updates on Windows Update and Microsoft Update, as well as on Windows Server Update Services (WSUS). For additional information, please see the Other Information section of the Advance Notification.

 

As always, we’ll be holding the May edition of the monthly security bulletin webcast on Wednesday, May 14, 2008 at 11 a.m., Pacific Standard Time.  We will review this month’s release and take your questions live on-air with answers from our panel of experts. As a friendly reminder, if you can’t make the live webcast, you can listen to it on-demand as well.

 

You can register for the webcast here:

 

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032357221&Culture=en-US

 

Thanks,

 

Bill Sisk

 

An Evangelist in Las Vegas

Network Monitor Blog - Wed, 05/07/2008 - 20:41

A few weeks ago I visited the glitzy town of Las Vegas for the Interop Conference 2008. While I’m not much of a gambler, the lure of technology was more than enough to get me excited about the trip. Pack thousands of tech-heads in a luxurious hotel, present information about new technologies, and now you’ve got a hot time in the desert.

Buzz Words: Unified Communication and Virtualization

I don’t know if they do word counts on submitted Power Point presentations, but if I had to guess, both UC (Unified Communications) and Virtualization would be at the top of the list. And both technologies have a direct impact on the future of Network Monitor.

In terms of Virtualization, there are many challenges. The host machine can now house dozens to hundreds of virtual servers which means the backbone connection to the host machine requires a quick link. This means a 1 gig or even 10 gig network connection may be the norm for these configurations.  Making sure we can support these environments from the host machine and hosted VMs is going to be an important scenario to understand and support properly. 

For Unified Communications, the range of devices expands as your phone/PC/video conference merges.  And as wireless environments with remote users become the norm, troubleshooting connections for voice and video becomes incredibly important in this domain.  And finally, interoperability between multiple devices and infrastructures requires a tool that can easily determine where interoperability problems may exist.

Roaming the Exhibition Floor Jungle

The Interop Exhibition Floor is a microcosm of the hosting city, Las Vegas. Sounds from various booths ring out like slot machines and shows start on the hour as vendors attempt to garner your attention for longer periods of time. Booth babes and circus-like performances try to steal your eye away, in hopes of attracting you to their line of products. For me, however, I was curious about products that compete with or complement Network Monitor.

What I found is that, in general, the protocol analyzer type tool that seems to be popular is more of an aggregate tool that tells you where your network is sick. It may not tell you what the exact problem is, but it helps you monitor your network as a whole. I suppose that is where our tool would come in. Network Monitor is more targeted at solving specific problems and diving deep. Our conversation tree and soon-to-release process tracking create a unique way to take a lot of data and filter down to a specific problem.

Soaking up the Technology

As I introspect over my trip I try to find connections between our product’s future and the current state of technology. I realize that as things continue to accelerate, the need for a way to drill down from the very large to the very specific is of vast importance. And while we can continue to do general things, it will be more and more important to provide technology specific tools to help troubleshoot the dizzying array of protocols, devices and network mediums. As we continue to reach forward in this regard, we should be able to use our community to understand and provide tools to help us all solve difficult networking problems.

We’re very excited about showing off the beta of NM 3.2. Watch for it on our site (http://connect.microsoft.com) in about a month! If you register with our community we’ll send you a note when it’s ready.

Windows XP Service Pack 3 has released!

Microsoft Enterprise Networking Team - Wed, 05/07/2008 - 14:24

The latest Service Pack for Windows XP, SP3, is now available for download.  Of note in this release, Windows XP with Service Pack 3 will have the ability to be a NAP (Network Access Protection) client.  Also, Wi-Fi Protected Access 2 (WPA2) support is now included (previously available as a separate download for Windows XP SP2).

Windows XP SP3 Released to Web (RTW), now available on Windows Update and Microsoft Download Center

Service Pack 3 Resources for IT Professionals (Microsoft TechNet)

How to obtain the latest Windows XP service pack (Microsoft KnowledgeBase)

List of fixes that are included in Windows XP Service Pack 3 (Microsoft KnowledgeBase)

Thanks to Boyd Benson for his assistance with this post.

-Mike Platts


SMB2 Parser for NM3.1

Network Monitor Blog - Tue, 05/06/2008 - 19:12

We have decided to release an SMB2 parser for Network Monitor 3.1 (released July 07) to hold people over until the beta for Network Monitor 3.2 releases in early June.

Where can I get the SMB2 parser?

You can download the SMB2.NPL parser, along with SPARSER.NPL, CER.NPL, FCCS.NPL, SCNA.NPL and SMB.NPL (all supporting parsers), from http://connect.microsoft.com under the Network Monitor 3 project. If you’ve already signed up you’ll see it as one of your active projects. If you need to sign up, you will need to create a Passport account and join our project. Once you are in the Network Monitor 3 project, click on the Downloads link on the left. You will see SMB2 Parser as one of the selections.

How do I use the new SMB2 parser?

Look at the article on using the SSL parser (http://blogs.technet.com/netmon/archive/2007/10/23/new-ssl-public-parser-available-how-to-deal-with-new-parsers.aspx) in the sections “Where do I stick it?” and “Working with NPL Parser path”. The instructions for installing the SMB2 parsers are the same.

Happy SMB2 parsing!

Troubleshooting Server Hangs – Part Two

Ask the Performance Team - Tue, 05/06/2008 - 11:00

Several months ago, we wrote a post on Troubleshooting Server Hangs.  At the end of that post, we provided some basic steps to follow with respect to server hangs.  The last step in the list was following the steps in KB Article 244139 to prepare the system to capture a complete memory dump for analysis.  Now that you have the memory dump, what exactly are you supposed to do with it?  That will be the topic of today’s post – more specifically, dealing with server hangs due to resource depletion.  We discussed various aspects of resource depletion including Paged and NonPaged pool depletion and System PTE’s.  Today we’re going to look at Pool Resource depletion, and how to use the Debugging Tools to troubleshoot the issue.

If the server is experiencing a NonPaged Pool (NPP) memory leak or a Paged Pool (PP) memory leak, you are most likely to see the following event IDs, respectively, in the System Event log:

Type: Error
Date: <date>
Time: <time>
Event ID: 2019
Source: Srv
User: N/A
Computer: <ComputerName>
Details: The server was unable to allocate from the system nonpaged pool because the pool was empty.

Type: Error
Date: <date>
Time: <time>
Event ID: 2020
Source: Srv
User: N/A
Computer: <ComputerName>
Details: The server was unable to allocate from the system paged pool because the pool was empty.

Let’s load up our memory dump file in the Windows Debugging tool (WINDBG.EXE).  If you have never set up the Debugging Tools and configured the symbols, you can find instructions on the Debugging Tools for Windows Overview page.  Once we have our dump file loaded, type !vm at the prompt to display the Virtual Memory Usage for the system.  The output will be similar to what is below:

kd> !vm

*** Virtual Memory Usage ***
  Physical Memory:    917085 ( 3668340 Kb)
  Page File: \??\C:\pagefile.sys
    Current: 4193280 Kb  Free Space: 4174504 Kb
    Minimum: 4193280 Kb  Maximum:    4193280 Kb
  Page File: \??\D:\pagefile.sys
    Current: 4193280 Kb  Free Space: 4168192 Kb
    Minimum: 4193280 Kb  Maximum:    4193280 Kb
  Available Pages:    777529 ( 3110116 Kb)
  ResAvail Pages:     864727 ( 3458908 Kb)
  Locked IO Pages:       237 (     948 Kb)
  Free System PTEs:    17450 (   69800 Kb)
  Free NP PTEs:          952 (    3808 Kb)
  Free Special NP:         0 (       0 Kb)
  Modified Pages:         90 (     360 Kb)
  Modified PF Pages:      81 (     324 Kb)
  NonPagedPool Usage:  30294 (  121176 Kb)
  NonPagedPool Max:    32640 (  130560 Kb)

********** Excessive NonPaged Pool Usage *****

  PagedPool 0 Usage:    4960 (   19840 Kb)
  PagedPool 1 Usage:     642 (    2568 Kb)
  PagedPool 2 Usage:     646 (    2584 Kb)
  PagedPool 3 Usage:     648 (    2592 Kb)
  PagedPool 4 Usage:     653 (    2612 Kb)
  PagedPool Usage:      7549 (   30196 Kb)
  PagedPool Maximum:   62464 (  249856 Kb)
  Shared Commit:        3140 (   12560 Kb)
  Special Pool:            0 (       0 Kb)
  Shared Process:       5468 (   21872 Kb)
  PagedPool Commit:     7551 (   30204 Kb)
  Driver Commit:        1766 (    7064 Kb)
  Committed pages:    124039 (  496156 Kb)
  Commit limit:      2978421 (11913684 Kb)

As you can see, this command provides details about the usage of Paged and NonPaged Pool Memory, Free System PTE’s and Available Physical Memory.  As we can see from the output above, this system is suffering from excessive NonPaged Pool usage.  There is a maximum of 128MB of NonPaged Pool available and 121MB of this NonPaged Pool is in use:

NonPagedPool Usage:  30294 (  121176 Kb)
NonPagedPool Max:    32640 (  130560 Kb)
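As a quick sanity check, the !vm figures are reported in pages, and on this system a page is 4 KB, so the counts convert directly to the kilobyte values shown:

30294 pages x 4 KB = 121176 Kb of NonPaged Pool in use
32640 pages x 4 KB = 130560 Kb of NonPaged Pool maximum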

Our next step is to determine what is consuming the NonPaged Pool.  Within the debugger, there is a very useful command called !poolused.  We use this command to find the Pool Tag that is consuming our NonPaged Pool.  The !poolused 2 command will list out NonPaged Pool consumption, and !poolused 4 lists the Paged Pool consumption.  A quick note here; the output from the !poolused commands could be very lengthy as they will list all of the tags in use.  To limit the display to the Top 10 consumers, we can use the /t10 switch:  !poolused /t10 2.

0: kd> !poolused 2
   Sorting by NonPaged Pool Consumed

  Pool Used:
            NonPaged            Paged
 Tag    Allocs     Used    Allocs     Used
 R100        3  9437184        15   695744   UNKNOWN pooltag 'R100', please update pooltag.txt
 MmCm       34  3068448         0        0   Calls made to MmAllocateContiguousMemory , Binary: nt!mm
 LSwi        1  2584576         0        0   initial work context
 TCPt       28  1456464         0        0   TCP/IP network protocol , Binary: TCP
 File     7990  1222608         0        0   File objects
 Pool        3  1134592         0        0   Pool tables, etc.
 Thre     1460   911040         0        0   Thread objects , Binary: nt!ps
 Devi      337   656352         0        0   Device objects
 Even    12505   606096         0        0   Event objects
 naFF      300   511720         0        0   UNKNOWN pooltag 'naFF', please update pooltag.txt

Once the tag is identified we can use the steps that we outlined in our previous post, An Introduction to Pool Tags to identify which driver is using that tag.  If the driver is out of date, then we can update it.  However, there may be some instances where we have the latest version of the driver, and we will need to engage the software vendor directly for additional assistance.
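One common way to make that identification (a sketch - run it from the drivers directory and substitute the tag you found) is to search the driver binaries for the literal tag string, for example the unknown 'R100' tag at the top of the list above:

cd /d %SystemRoot%\System32\drivers
findstr /m /l R100 *.sys

The /m switch prints just the names of the files that contain the string, which is usually enough to tell you which driver owns the tag.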

That brings us to the end on this post – in Part Three, we will discuss using Task Manager and the Debugging Tools to troubleshoot Handle Leaks which may be causing Server Hangs.

- Sakthi Ganesh


New Networking-related KB articles for the week of April 19-25

Microsoft Enterprise Networking Team - Mon, 05/05/2008 - 19:29

Here are the latest Networking-related KB articles:

948927  Error message when you use SmartCard-only authentication to log on to a Windows Vista-based client computer in a wireless network environment: "Cannot connect to <SSID>: Please contact network administrator"

950923  The SNMP Event Log Extension Agent does not initialize correctly on a computer that is running Windows Vista with Service Pack 1 or Windows Server 2008

949127  You cannot establish a wireless connection by using EAP authentication on a Windows XP-based client computer if the Service Set Identifier (SSID) includes a comma

- Mike Platts


Team DHCP wants to hear from you!

Microsoft Windows DHCP Team Blog - Sun, 05/04/2008 - 16:13
Is there a particular feature in DHCP (e.g. reservations, callout DLL, failover, netsh, ...) that interests you? Have you customized the DHCP server, using scripts or external utilities, to suit your environment? Are there features that you would like...(read more)

Microsoft Operations Framework (MOF) 4.0 Available

Windows Server Division WebLog - Wed, 04/30/2008 - 22:53

We announced the new Microsoft Operations Framework 4.0, and the MOF 4.0 online community. Check out Jeff's blog post over on the System Center team blog. Here's an excerpt:

So what’s new?  First, where the old MOF talked mainly about operations, the new MOF 4.0 describes the entire IT life cycle, including business planning, so that the whole IT organization can use a common language and a consistent framework for planning and coordinating their activities.

The second improvement is to the design of the content.  If you’re looking for a way to overhaul your organization's service management practices, then MOF 4.0 provides that comprehensive view that will help you choose where to start.  However, if you’re just looking for a best practice around one particular area then MOF 4.0 can help as well, with short (25-page) “service management functions” that can give you ideas on improving a particular function in 20 minutes.

Patrick


Problems using default credentials with Vista RDP clients with Single Sign-on Enabled

Terminal Services Team Blog - Wed, 04/30/2008 - 20:49

With Single Sign-on enabled, the current user’s credentials, also known as “default credentials”, are used to log on to a remote computer. In several scenarios, users may get an error message when trying to connect to a TS server with Vista clients using default credentials.



Possible causes (and their recommended solutions) are identical to the issues presented for saved credentials. Just as it does with saved credentials, Windows Vista Credential Delegation policy does not allow a Vista RDP client to send default credentials to a TS server when the TS server is not authenticated. By default Vista RDP clients use the Kerberos protocol for server authentication. Alternatively, they can use SSL server certificates, but these are not deployed to servers by default.  There are three common scenarios where using the Kerberos protocol to authenticate the server is not possible, but using SSL server certificates is possible. Because SSL server certificates are not deployed by default, using default credentials does not work in these scenarios.

Below is a list of scenarios in which this problem appears, along with recommended solutions. This is identical to the scenarios for saved credentials.

Scenario 1: Connecting from home to a TS server through a TS Gateway server

When you connect from home through a TS Gateway server to a TS server hosted behind a corporate firewall, the TS client has no direct connectivity to a key distribution center hosted on a domain controller behind the corporate firewall. As a result, server authentication using the Kerberos protocol fails.  

Scenario 2: Connecting to a stand-alone computer

When connecting to a stand-alone server, the Kerberos protocol is not used.

Scenario 3: Connecting to a terminal server farm

Kerberos authentication does not work in terminal server farm scenarios because farm names do not have accounts associated with them in Active Directory. Without these accounts, Kerberos-based server authentication is not possible.  

Recommended Solution for Scenarios 1 & 2

For scenarios 1 and 2, to enable server authentication, use SSL certificates that are issued by a trusted Certificate Authority and have the server name in the subject field.  Deploy them to all servers that you want to have server authentication. To set the SSL certificate for a connection:

  1. At a command prompt, run tsconfig.msc. Note: tsconfig.msc is only available on servers.
  2. Double-click the RDP-Tcp connection object.
  3. On the General tab, click Select.
  4. Select the certificate you want to assign to the connection, and then click OK.
Recommended Solution for Scenario 3

To enable server authentication in a server farm, use SSL certificates that are issued by a trusted Certificate Authority and that have the farm name in the subject field. Deploy them to all servers in your farm. The SSL certificate will provide server authentication for a TS server and therefore Credential Delegation policy will allow saved credentials to be used for remote desktop connections. 

Alternative workaround for these scenarios (less secure than recommended options):

Another option is to allow delegation of users’ default credentials with the NTLM authentication mechanism. This option is not recommended because NTLM-only server authentication does not confirm the server's identity. Sending your credentials to such servers can be dangerous.

  1. At a command prompt, run gpedit.msc to open the Group Policy Object Editor.
  2. Navigate to Computer Configuration -> Administrative Templates -> System -> Credentials Delegation.
  3. Select the "Allow Default Credentials With NTLM-only Server Authentication" setting.

 

When enabling this policy, you also need to add "TERMSRV/<Your server name>" to the server list for all servers to which you want to allow NTLM-only server authentication.
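For example (the server names here are placeholders), the entries take the form:

TERMSRV/ts1.contoso.com
TERMSRV/tsfarm.contoso.com

A wildcard entry such as TERMSRV/*.contoso.com can also be used to cover every terminal server in a domain, at the cost of being less specific about which servers may receive your credentials.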

 

Virtualization News from the Microsoft Management Summit

Windows Server Division WebLog - Tue, 04/29/2008 - 15:36

Helping businesses address the growing complexity of managing their IT environments, today at Microsoft Management Summit 2008 we announced the public beta release of System Center Virtual Machine Manager 2008 (formerly referred to as code name “Virtual Machine Manager vNext”).

 

 System Center Virtual Machine Manager 2008 enables customers to configure and deploy new virtual machines and to centrally manage their virtualized infrastructure, whether running on Windows Server 2008 Hyper-V, Microsoft Virtual Server 2005 R2 or VMware ESX Server. When used in conjunction with the broad System Center management suite, customers can use SCVMM 2008 to effectively manage both their virtualized and physical servers and applications.

 

For more information about this news and other activities taking place at MMS, check out the Virtual Press Room.

 

 -Tina


Questions about Web Server Attacks

Hi there, this is Bill Sisk.

There have been conflicting public reports describing a recent rash of web server attacks. I want to bring some clarification about the reports and point you to the IIS blog for additional information.

To begin with, our investigation has shown that there are no new or unknown vulnerabilities being exploited. This wave is not a result of a vulnerability in Internet Information Services or Microsoft SQL Server. We have also determined that these attacks are in no way related to Microsoft Security Advisory (951306). 

The attacks are facilitated by SQL injection exploits and are not issues related to IIS 6.0, ASP, ASP.Net or Microsoft SQL technologies. SQL injection attacks enable malicious users to execute commands in an application's database.  To protect against SQL injection attacks, the developer of the Web site or application must use industry best practices outlined here.  Our counterparts over on the IIS blog have written a post with a wealth of information on steps web developers and IT Professionals can take to minimize their exposure to these types of attacks by reducing the attack surface in their code and server configurations. Additional information can be found here: http://blogs.iis.net/bills/archive/2008/04/25/sql-injection-attacks-on-iis-web-servers.aspx

I hope this helps to answer any questions.

Bill

*This posting is provided "AS IS" with no warranties, and confers no rights.*

Script to display ALL the reserved addresses configured on the DHCP server.

Microsoft Windows DHCP Team Blog - Fri, 04/18/2008 - 10:58
The Microsoft DHCP server provides a show command to display the reserved addresses configured at a particular scope level. But displaying ALL the reserved addresses configured in ALL the scopes on the server is not possible with a single command. You can make...(read more)
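As a rough sketch of the approach - this is not the script from the post, the server name DHCP1 is a placeholder, and the output parsing is an assumption about the show scope format - you can chain the per-scope command yourself from a batch file:

@echo off
rem List the reservations in every scope on \\DHCP1 (placeholder name)
for /f "tokens=1" %%S in ('netsh dhcp server \\DHCP1 show scope ^| findstr /r "[0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*"') do (
  echo ===== Scope %%S =====
  netsh dhcp server \\DHCP1 scope %%S show reservedip
)

The inner command is the standard per-scope query; the for /f loop simply feeds it every scope address that appears in the show scope output.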

MSRC Blog: Microsoft Security Advisory 951306

Hello, Bill here.

I wanted to let you know that we have just posted Microsoft Security Advisory (951306).

This advisory contains information regarding a new public report of a vulnerability within Microsoft Windows which allows for privilege escalation from authenticated user to LocalSystem. Our investigation has shown that this vulnerability affects Windows XP, Windows Server 2003, Windows Vista, and Windows Server 2008.

At this time, we are not aware of attacks attempting to use the reported vulnerability, but we will continue to track this issue.  The advisory contains several workarounds that customers can use to help protect themselves. Upon completion of this investigation, Microsoft will take the appropriate action to help protect our customers. This may include providing a security update through our monthly release.

We will continue to monitor the situation and post updates to the advisory and the MSRC Blog as we become aware of any important new information.

In the meantime, we encourage customers to review the advisory and implement the workarounds.

Bill Sisk

*This posting is provided "AS IS" with no warranties, and confers no rights.*

Reassembly with NM3

Network Monitor Blog - Wed, 04/09/2008 - 13:59

Ever wonder how a network works? Maybe it just seemed so easy, and in your mind sending a file was just putting each byte on the wire to the receiving machine. That’s not too far from the truth but you’d be very selfish to think that the network was there for your bidding only. Since a network has to be shared between many users, strategies have been created to chop your file up and send it in chunks.

Like Packing for a Move

We’ve all done it before. We take all of the bits and pieces of junk we’ve collected over the years and are tasked with moving them from one location to another. We invite friends and relatives, and like ants, they arduously move each piece from house A to house B. If your house=file, and boxes=packets, then we’ve created an analogy for reassembly. If you are organized, you’ll label each box, 1 of 20, 2 of 20, and so on. When you get to the new location, you can put all your things back together using the labels you created and verify you received everything.

The architects of our Internet are not any cleverer. They’ve borrowed these same techniques to determine how a chopped up file can reach its destination and be reconstituted. As long as the data is sequenced, the other side should be able to put the pieces back together.

The Transport Layer

In networking we like to talk about layers. You may have heard terms like “Layer 3 switch” or the “Network Layer”. Well, the architects of our Internet are just like everybody else. They get easily confused if too much is going on. Transferring network data from one place to another is a difficult problem. And a common strategy used to solve difficult problems is to divide and conquer. Each layer is responsible for different things, and the transport layer is the one we use to define how files get chopped up and rebuilt.

We’ll narrow our focus down to TCP for a bit. This is the workhorse transporter for most networking applications. There are others, such as SPX/IPX, but TCP is by far the most popular. And it turns out RPC, SMB and HTTP can also fragment data. SMB might chop your file up into 4K chunks, and then each of these chunks could be further fragmented by TCP.

Labeling Your Boxes

So in our analogy, labeling=sequencing. We could label each packet 1 of 20 and so on, and some protocols do use this strategy. But for TCP, we go a bit further and describe the number of bytes that are sent. We’ll say, for instance, that this packet contains Sequence 1000-2000. What we actually send is the first sequence number and the size, but this range can be derived from that data. When the other side gets the data, even if it’s out of order, it knows how to put the puzzle back together. Also, the receiving TCP will keep the sender up to date by acknowledging what it’s received so far. In the simplest scenario, the receiver ACKs the latest segment it received. If the sender gets an acknowledgment for sequence 2000, then that confirms the receiver has seen all data up to that point.

Using NM3 to Reassemble the Data

When we capture data from the network, we are capturing it before the data has been put back together. But this can make it difficult to read, as you might imagine. With NM3 we’ve created a way to reassemble the data, so that you can see the data as it is seen by the application layer (there’s one of those darn layers again, see http://en.wikipedia.org/wiki/OSI_model). So when you click on a web page and get a trace, you can see the entire packet as sent by the browser, rather than a bunch of fragments after TCP has gotten to them.

Data Before You Reassemble

So if you start a network trace and view a web page you’ll notice that the traffic that gets created shows HTTP and TCP. The HTTP packets are the headers sent/received by your web browser. But the TCP that is in between is also your browser traffic. The original HTTP packet was larger than would fit in a single packet, so TCP has chopped it up for you.

In this example, you can see that the server has responded with a page. Frame 6 is a continuation of frame 5 and is where TCP has chunked up the data. Frame 7 is an acknowledgment so the server knows we are receiving data. And frame 8 is the final frame in the fragmented data.

Frame  Source  Destination  Description
5      Srv     Client       HTTP:Response, HTTP/1.1, Status Code = 502, URL: http://239.255.255.250/
6      Srv     Client       TCP:[Continuation to #5]Flags=...A...., SrcPort=HTTP(80), DstPort=49382, PayloadLen=1460, Seq=3331697971 - 3331699431, Ack=1190309335, Win=65217 (scale factor 0) = 65217
7      Client  Srv          TCP:Flags=...A...., SrcPort=49382, DstPort=HTTP(80), PayloadLen=0, Seq=1190309335, Ack=3331699431, Win=255 (scale factor 8) = 65280
8      Srv     Client       TCP:[Continuation to #5]Flags=...AP..., SrcPort=HTTP(80), DstPort=49382, PayloadLen=1392, Seq=3331699431 - 3331700823, Ack=1190309335, Win=65217 (scale factor 0) = 65217
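Notice how the sequence numbers tie the fragments together:

3331697971 (Seq of frame 6) + 1460 (PayloadLen) = 3331699431, which is both the Ack the client sends in frame 7 and the starting Seq of frame 8
3331699431 (Seq of frame 8) + 1392 (PayloadLen) = 3331700823, the end of the fragmented response

As long as every byte range is accounted for like this, the receiver (and Network Monitor) can put the pieces back together in order.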

Data After You Reassemble

To reassemble with NM3.1, you go to the Frames menu and select “Reassemble All Frames”. In NM3.2 we’ve created a more prominent button on the tool bar so this should be easier to find. Once the reassembly is complete a new window opens and contains all the original frames PLUS new frames for each reassembled piece.

Frame  Source  Destination  Description
5      Srv     Client       HTTP:Response, HTTP/1.1, Status Code = 502, URL: http://239.255.255.250/
6      Srv     Client       TCP:[Continuation to #5]Flags=...A...., SrcPort=HTTP(80), DstPort=49382, PayloadLen=1460, Seq=3331697971 - 3331699431, Ack=1190309335, Win=65217 (scale factor 0) = 65217
7      Client  Srv          TCP:Flags=...A...., SrcPort=49382, DstPort=HTTP(80), PayloadLen=0, Seq=1190309335, Ack=3331699431, Win=255 (scale factor 8) = 65280
8      Srv     Client       TCP:[Continuation to #5]Flags=...AP..., SrcPort=HTTP(80), DstPort=49382, PayloadLen=1392, Seq=3331699431 - 3331700823, Ack=1190309335, Win=65217 (scale factor 0) = 65217
9      Srv     Client       HTTP:Response, HTTP/1.1, Status Code = 502, URL: http://239.255.255.250/

So frame 9 is the new frame we inserted. While the Description doesn’t look much different, the devil’s in the details. Quite literally, we have to look at the frame details to see the difference:

Frame: Number: 9, Captured Frame Length = 4449, MediaType = PayloadHeader
+ PayloadHeader: Re-assembled Payload
+ Http: Response, HTTP/1.1, Status Code = 502, URL: http://239.255.255.250/

The major difference is that we don’t see Ethernet, IP, or TCP anymore. We’ve replaced the original network header with our own called PayloadHeader. This header contains info about the protocols that have been reassembled, as well as information we might need from those layers. We don’t show the original frame numbers, but you can enable some debug NPL to get this information if you need. Just look at Payload.NPL for more info.

If you were to open up the HTTP Response in the details, you would also see that all the HTML and header information is in this one packet. This reassembling of the data makes it very easy to understand the data at the HTTP level.

Also realize that in cases where both HTTP and TCP are fragmenting data, there can be multiple levels of reassembly. In those cases you may see a PayloadHeader for each TCP fragmentation, and then another for the HTTP fragmentation.

Reassembly FAQ

Reassembly doesn’t seem to be working for me, why?

The reassembly feature does depend on two things. One, conversations have to be enabled. Make sure this option is enabled when you open a trace. Currently the option can be found on the start page. Two, you can only reassemble a saved trace. If you just started a trace, you’ll need to save and reopen it.

Why do all the original frames show up?

When NM3 does reassembly, new frames are added in. Rather than remove the original frames, we’ve decided to leave them in as they still provide information about the original TCP data. Also, if there’s a problem with reassembly, perhaps due to missing packets, you could use the data to figure out why it didn’t reassemble correctly.

Can I only see reassembled frames?

Well yes, you can filter on only those things that have been reassembled. As we mentioned above, we add a PayloadHeader, which is just a protocol we’ve devised. So you can display only the reassembled data by applying a filter of “PayloadHeader”. I actually create a color filter for this so they stick out.

What you can’t do, however, is see only the data that has been reassembled plus the data that was never fragmented in the first place. While we realize this would be useful, we haven’t exposed a way to make this possible. However, with the API and NM3.2, it would be possible to do this programmatically.

Why doesn’t the Reassembly window have a conversation tree?

When we originally designed reassembly, the conversation window wasn’t really integral to the product yet. The simple workaround is to save the capture file and reopen it.

Putting it All Together

Networks have devised ways to break apart data and rebuild it. But with protocol analyzers that capture this raw data, it’s sometimes difficult to follow. With the reassembly option in NM3 your network traffic is put back together, making it easier to read. Add this ability to the new Process Tracking feature in the upcoming NM3.2, and finding only the data you need will be easier than ever.

April 2008 Monthly Release

April 2008 Monthly Bulletin Release

I'm Simon, Release Manager in the MSRC.  The April 2008 release contains 8 new bulletins, 5 of which have maximum severities of "Critical".

MS08-018            Vulnerability in Microsoft Project Could Allow Remote Code Execution (950183)

MS08-019            Vulnerabilities in Microsoft Visio Could Allow Remote Code Execution (949032)

MS08-020            Vulnerability in DNS Client Could Allow Spoofing (945553)

MS08-021            Vulnerabilities in GDI Could Allow Remote Code Execution (948590)

MS08-022            Vulnerability in VBScript and JScript Scripting Engines Could Allow Remote Code Execution (944338)

MS08-023            Security Update of ActiveX Kill Bits (948881)

MS08-024            Cumulative Security Update for Internet Explorer (947864)

MS08-025            Vulnerability in Windows Kernel Could Allow Elevation of Privilege (941693)

 

I’d also like to tell you about an improvement we’re introducing to the bulletins this month.

Back in December, you might have noticed a change in the IE bulletins.  We had been looking at moving the File Specifications lists out of the bulletins and into their associated bulletin Knowledge Base (KB) article.  We decided to pilot this with the IE bulletin because it typically has the largest file manifest.  We’ve successfully piloted this with two IE releases, and now it’s time to roll this change out to the rest of our bulletins.

Moving the file manifest out of the bulletins and into the KBs significantly reduces the size of the bulletins, which will improve the rendering time when you open a bulletin.  Also, the KB tends to be more of a repository of specific package deployment details, and as such, the file manifests are better located there in order to serve those looking for reference-level material on the bulletins.  For bulletins which contain multiple distinct package KBs (such as Office), each KB will contain only the file manifest that directly relates to the associated package.

We hope that you find this improves both rendering performance and readability.

Please join us for the regular monthly security bulletin webcast, Wednesday April 9 11:00 AM PDT (GMT -7). We'll have an overview of the April bulletins, and you'll have the opportunity to ask us questions around the release.

Cheers,

Simon

*This posting is provided "AS IS" with no warranties, and confers no rights.*
