NTLM and MaxConcurrentApi Concerns

Active Directory Blog - Tue, 09/23/2008 - 19:17
Although not one of the highest-volume issues our customers call about, there is one complex scenario that seems to me would be a winner if we handed out prizes for the problems that take longest to resolve. That scenario is NTLM client to server...(read more)

DNS Scavenging and AD

Active Directory Blog - Mon, 08/25/2008 - 14:00
Recently I wrote a post about how, in an uncommon scenario, Active Directory integrated DNS could lose an entry regarding a domain controller in a global SRV record. Here’s another aspect of AD integrated DNS which you can run into, particularly if you...(read more)

Differences in network performance between Windows Vista/Windows Server 2008 and Windows XP/Windows Server 2003

Microsoft Enterprise Networking Team - Wed, 06/25/2008 - 22:29
Overview of Windows Vista and Windows Server 2008 performance improvements

This is just an overview; I am including several links with more detail on Windows Vista and Windows Server 2008 networking. You can take this as far as you like; it is a very deep rabbit hole.

For simplicity I am going to compare Windows XP and Windows Vista. Please remember that Windows Server 2008 contains the updated networking stack that was introduced in Windows Vista so the comparison is still valid.

We have had a few support calls in Networking Support lately where people are comparing network performance between operating systems and they want to know two things:

  1. Why is Windows Vista so much faster on the network when connecting to another Windows Vista or Windows Server 2008 system, especially for SMB?
  2. How do I make Windows XP perform like Windows Vista?

Actually, the second question is usually asked more along the lines of "why is my Windows XP computer broken and what can I do to 'fix' it?", but I think you get the point.

Let me start by saying that there is nothing "wrong" with Windows XP.  It is not "broken" and does not need to be "fixed".

To answer question 2 first: you will never get Windows XP to perform exactly like Windows Vista from a networking perspective; the network stack is very different between the two. There are some changes that can be made to Windows XP that may affect performance. Notice that I said "changes". Some of these changes trade local system resources for throughput, which could negatively impact overall system performance, and some could change the behavior of TCP in a way that actually decreases performance on the network. In some instances, making these changes on a large scale across several clients could even degrade the overall performance of your entire network.

So why the difference?

I recently had a call from a customer who was seeing up to a 7 times performance improvement when transferring files between two systems running Windows Vista, compared to transferring the same files between two Windows XP systems. I found this fairly impressive since in the testing and studies I have read about the expected improvement was generally about 3.5 times. So I agreed to investigate to ensure that there was in fact not a problem with Windows XP. After reviewing much data and testing some changes on the Windows XP system we concluded that he was in fact seeing that much better performance across the wire for his Windows Vista systems.

To answer question one: what changed that could explain such a difference in performance? Well, a lot. Starting with Windows Vista we have a new network stack. The Cable Guy, aka Joe Davies (you may have noticed his name on the cover of some of the MS Press books), has written some good overviews of the new network stack; you can find them at the following links.

Next Generation TCP/IP Stack in Windows Vista and Windows Server 2008
Performance Enhancements in the Next Generation TCP/IP Stack

Some of the really cool stuff that has been added to the new network stack is:

  • Receive Window auto-tuning - Allows the maximum receive window size to be tuned based on current network conditions.
  • Compound TCP - Allows a more aggressive increase of the send window, especially on high-bandwidth, high-delay networks.
  • ECN support - Allows routers that are experiencing congestion to mark packets so that peers receiving those packets can lower their transmission rates.

I hope everyone reading this can appreciate how huge this is. More aggressive send and receive, and more intelligent congestion avoidance! If you're a network admin and you didn't already know about this and you're still in your chair, check your pulse; you should be dancing about while people look at you like you have lost your mind. This is part of the "magic" that allows for more throughput while also avoiding congestion, and therefore fewer retransmitted packets. Yay!
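To see why receive window auto-tuning matters, consider the bandwidth-delay product: a fixed receive window caps throughput at window size divided by round-trip time. A quick back-of-the-envelope sketch (the link numbers here are hypothetical examples of mine, and the 64 KB figure reflects TCP's classic un-scaled window limit, not a measured Windows XP value):

```python
# Illustrative bandwidth-delay product (BDP) arithmetic showing why a fixed
# receive window becomes the bottleneck on fast, high-latency links.

def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Bytes that must be in flight to keep a link of this bandwidth full."""
    return bandwidth_bps / 8 * rtt_seconds

def max_throughput_bps(window_bytes: float, rtt_seconds: float) -> float:
    """Throughput ceiling imposed by a fixed receive window."""
    return window_bytes * 8 / rtt_seconds

# Hypothetical 100 Mbit/s link with a 50 ms round-trip time:
bdp = bdp_bytes(100e6, 0.05)               # about 625,000 bytes in flight needed
capped = max_throughput_bps(65535, 0.05)   # roughly 10.5 Mbit/s with a 64 KB window

print(round(bdp), round(capped))
```

With auto-tuning, the receiver can grow the window toward the bandwidth-delay product instead of stalling at the fixed limit.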

But then you have to sit down and realize that these are changes to the very core of the networking stack. Changes of this scale, involving large amounts of new code, can never be made to Windows XP; the new stack is just too different.

But that's not all, act now and receive...

So besides the network stack there is another improvement. This is more at an application layer but very important for things like file copies.

Let me point out again that the better performance we saw was for a file copy between a Windows Vista or Windows Server 2008 system connecting to another Windows Vista or Windows Server 2008 system. One reason this is significant is something called SMB2. SMB2 is only available starting with Windows Vista, so even if you are on a Windows Vista client, if you connect to a Windows XP or Windows Server 2003 system you will not be able to take advantage of the improvements made in SMB2.

A good quick overview of SMB2 is actually on the Performance Team blog.

Some of the changes made in SMB2 include:

  • Sending multiple SMB commands in the same packet which reduces the number of packets sent between a client and server
  • Larger buffer sizes
  • Increased scalability in terms of the number of shares, users, and simultaneously open files
  • Support for Durable Handles - These are handles that can survive a network disconnect.

So this also translates to a much improved user experience for anything using SMB2, such as file copies.
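To get a feel for why command compounding helps, here is a deliberately simplified toy model of mine (it assumes roughly three request/response exchanges per small file without compounding versus one with it; the round-trip time is hypothetical):

```python
# Toy model of how SMB2 command compounding cuts round trips when copying
# many small files. Assume separate open/read/close exchanges per file
# without compounding, versus one compound request with it.

RTT_MS = 10.0  # hypothetical round-trip time to the file server

def round_trips(files: int, compounded: bool) -> int:
    per_file = 1 if compounded else 3  # open + read + close, or one compound
    return files * per_file

def wire_latency_ms(files: int, compounded: bool, rtt_ms: float = RTT_MS) -> float:
    """Total time spent just waiting on the network for the given transfer."""
    return round_trips(files, compounded) * rtt_ms

print(wire_latency_ms(1000, False))  # 30000.0 ms on round trips alone
print(wire_latency_ms(1000, True))   # 10000.0 ms
```

The real protocols are far more complex, but the shape of the saving is the same: fewer packets on the wire per operation.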

In summary

As I mentioned, this was just an overview, but I wanted to make sure everyone understands why they may see a difference in performance between legacy systems and Windows Vista and Windows Server 2008, and also to help explain why these changes won't be back-ported to the legacy systems.


For a comparison of Windows XP and Windows Vista networking performance, see the results of the analysis done by The Tolly Group. This can be downloaded from the following link.
Mark Russinovich's blog "Inside Vista SP1 File Copy Improvements."
Next Generation TCP/IP Stack in Windows Vista and Windows Server 2008
Performance Enhancements in the Next Generation TCP/IP Stack
SMB2 Two Minute Drill on the Performance Team blog.

- Clark Satter


Remote Desktop Connection (Terminal Services Client 6.1) for Windows XP SP2 x86 platforms

Terminal Services Team Blog - Wed, 06/25/2008 - 20:19

Hello everyone,

We heard a lot of feedback from you about the need for the Remote Desktop Connection client 6.1 to be made available as a standalone install for Windows XP SP2 to ease deployments of Windows Server 2008 Terminal Services.

In response to this feedback, we have released the Remote Desktop Connection client (RDC 6.1) for Windows XP SP2 on x86 platforms.

You can download RDC6.1 for Windows XP SP2 from the Microsoft Download Center (KB 952155) for the following languages:

Arabic, Chinese - Simplified, Chinese – Traditional, Danish, Dutch, English, Finnish, French, German, Greek, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese – Portugal, Portuguese – Brazil, Russian, Spanish – Spain, Swedish, Turkish.

We have also released the MUI package for RDC6.1 on Windows XP SP2 from the Microsoft Download Center (KB 952230).

These are some of the supported features of Remote Desktop Client 6.1 for Windows XP SP2:

  • Windows Server 2008 & Windows Vista feature support
  • TS Web Access support
  • TS Easy Print support
  • TS Remote Programs support
  • TS Gateway support

Please review the complete list of features and details about RDC6.1 for Windows XP SP2 in this Knowledge Base article.


RDC6.1 is now available on the following platforms:

  • Windows Server 2008
  • Windows Vista SP1
  • Windows XP SP3
  • Windows XP SP2 (KB 952155)

Windows Server 2008 Goes Back to School

Windows Server Division WebLog - Wed, 06/25/2008 - 16:39

With summer in Redmond just around the corner, I know a number of teachers that like to take trips or do odd jobs around the house while school is out. However, the teachers in California's Manteca Unified School District still have access to classroom applications at home (or anywhere they have internet access) because of Windows Server 2008.


The school district is a prime example of success that can be had with the Terminal Services RemoteApp feature of WS08.


One of the initial goals of their WS08 deployment was to move away from establishing a dedicated virtual private network (VPN) for their 30 schools and 4,000 staff members to access information.  With Terminal Services, teachers are now able to securely access the same information available in their classrooms, using their home PC. Due to its success, the district also plans to install Terminal Services on nine more servers before the 2008-2009 school year begins.


We continue to hear great feedback on the actual deployment time of WS08 as well.  Manteca’s deployment of WS08 was pretty quick—IT staff was able to deploy all applications to one server, rather than 5,500 times to individual desktop computers.


If you are looking for more information on Terminal Services, check out the Terminal Services Team Blog.




Sample Exchange 2007 transport agent - add the name of the group to subject line

Microsoft Exchange Team Blog - Wed, 06/25/2008 - 16:34

Many people use rules to automatically sort messages from various distribution lists into folders in order to keep the volume of email traffic in their inbox to a manageable level. This works for the most part - until someone decides to BCC a list. Since the distribution list isn't visible in the list of recipients, the message bypasses all rules and gets dropped in their inbox. This can cause quite a distraction for everyone on that DL, because something appeared in their inbox and it's not quite apparent why they received that message.

Mailman and Majordomo have had this problem solved for a while now. It's actually a very simple solution: make sure the name of the mailing list is included in the subject of the email. Then users can set up filters based on words in the subject and they never encounter the problem when someone BCC's the list.

Exchange 2007 can do the same thing; it just needs a little help from a custom transport agent. I have written a very basic agent to add the name of the DL into the subject. You can use it as a starting point and add your own features.

To install this agent, follow these instructions:

  1. Copy ShowDLInSubjectAgent.dll to your transport server. In this example, I place it in C:\MyAgents
  2. Open the Exchange Management Shell
  3. Type Install-TransportAgent -Name ShowDLInSubjectAgent -TransportAgentFactory ShowDLInSubjectAgent.ShowDLInSubjectFactory -AssemblyPath C:\MyAgents\ShowDLInSubjectAgent.dll
  4. Close and restart the Exchange Management Shell.
  5. Type Enable-TransportAgent -Identity ShowDLInSubjectAgent
  6. Restart the transport service by typing Restart-Service MSExchangeTransport

If all goes well, all emails to a distribution list will now include the name of the DL in the subject of the email.

Note: This is a sample transport agent and it is not officially supported by Microsoft. Please see the readme.txt file included in the package for more information.

The ZIP file with the binary and the entire source for you to play with is here:

- Jesse Weigert


UrlScan 3.0 Beta and Tools to Help Mitigate SQL Injection Attacks

Windows Server Division WebLog - Tue, 06/24/2008 - 22:36

Microsoft published a Security Advisory today providing information for developers and Web administrators on ways in which they can mitigate and prevent SQL injection attacks. As you might have seen, there was a spate of such attacks in late April, and it caused quite a few headaches for administrators. Remember that SQL injection attacks target Web application code, not Web server code. They can only be avoided by making sure that any Web application that accepts user input, and then uses that input to query a database, follows best practices to ensure the input does not contain malicious code or syntax that might compromise the database, the Web site, or even the whole server.
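The core of those best practices is parameterization: never splice user input into SQL text. A minimal sketch, using Python's sqlite3 module purely for illustration (the table, column, and malicious string are invented for the example):

```python
# Contrast the vulnerable string-concatenation pattern with a parameterized
# query, where user input is bound as data and never parsed as SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "alice' OR '1'='1"

# Vulnerable pattern: input concatenated into the statement text.
# Executing this would match every row, because the OR clause is injected.
vulnerable = "SELECT name FROM users WHERE name = '%s'" % malicious

# Safe pattern: input passed as a bound parameter.
rows = conn.execute("SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()
print(rows)  # [] -- the literal string matches no user
```

The same principle applies to ADO parameters in classic ASP and SqlParameter objects in ASP.NET.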

So the advisory today is not a security bulletin - there are no patches for IIS or SQL Server or ASP.NET to download. However, we are making available some tools that can help mitigate these attacks while the underlying Web application code is being fixed to follow security best practices for protecting against SQL injection in ASP and ASP.NET. There is a tool from HP that tests sites to help identify pages that might be susceptible to SQL injection attacks, and also a Microsoft Source Code Analyzer from our SQL Server team that actually parses ASP code for data access commands that might be vulnerable to SQL injection.

But the one that I'm most excited about is UrlScan 3.0 Beta. As you may remember, UrlScan originally released with the IIS Lockdown Tool to help mitigate security vulnerabilities that affected IIS 5.0 in Windows 2000 Server. It's an ISAPI filter that examines HTTP requests to check that URLs and other headers are not being padded with overlong strings or unusual characters as a way to conduct a buffer overflow attack. We haven't updated this tool since we released UrlScan Version 2.5 alongside IIS 6.0, because most of the functionality is now available in IIS 7.0 as the Request Filtering module. But as of today, you can download 32-bit and 64-bit versions of UrlScan 3.0 Beta, which extends the functionality to also examine the querystring part of the URL (i.e. the part that comes after a "?" in a URL - typically name/value pairs or other parameters that are passed to a script or application). This can therefore help prevent SQL injection attacks while the underlying Web application code is fixed.
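As a rough illustration of the kind of query-string screening described above, here is a sketch; the deny list is my own invented example, not UrlScan's shipped rule set, and real request filtering is considerably more thorough:

```python
# Sketch of deny-sequence screening applied to the query string of a URL,
# in the spirit of UrlScan 3.0's query-string inspection.
from urllib.parse import urlsplit, unquote

DENY_SEQUENCES = ["--", ";", "xp_", "exec(", "cast("]  # hypothetical rules

def querystring_allowed(url: str) -> bool:
    """Reject requests whose decoded query string contains a denied sequence."""
    qs = unquote(urlsplit(url).query).lower()
    return not any(bad in qs for bad in DENY_SEQUENCES)

print(querystring_allowed("http://example.com/page.asp?id=42"))           # True
print(querystring_allowed("http://example.com/page.asp?id=1;exec(...)"))  # False
```

Note that screening like this is a mitigation while the application code is fixed, not a substitute for parameterized queries.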

Over on the site, you can find a full walkthrough of the tool, as well as some great articles by Wade Hilmo (the guy who wrote UrlScan) and Nazim Lala, another member of our IIS security team. They have full details on the tool and other security guidance you can follow to help protect your Web servers and applications.



SQL Injection Attacks Exploiting Unverified User Data Input

Hey, Andrew Cushman here.


Today I’m pleased to announce the coordinated release of three security tools in Security Advisory 954462 to help customers deal with SQL injection attacks:


  • UrlScan version 3.0 Beta, a security tool that restricts the types of HTTP requests that Internet Information Services (IIS) will process. By blocking specific HTTP requests, UrlScan helps prevent potentially harmful requests from being processed.

  • Microsoft Source Code Analyzer for SQL Injection Community Technology Preview (June 2008), a tool that can be used to detect ASP code susceptible to SQL injection attacks.

  • Scrawlr, a free scanner developed by the HP Web Security Research Group in conjunction with Microsoft, which will allow customers to identify whether their Web sites might be susceptible to SQL injection.


Back in the day, I participated in the first release of URLScan as a member of the IIS team. Things are a bit different now than they were back then. Nowadays people applaud IIS’ excellent security track record and point to it as a “poster child” of the SDL (Security Development Lifecycle).


Some things are unchanged though. Microsoft teams and partners remain committed to delivering tools and solutions that make it easier for administrators to protect themselves from misconfiguration and application coding errors. URLScan v3.0 Beta, Microsoft Source Code Analyzer for SQL Injection, and HP Scrawlr continue the tradition of development collaboration. These tools, and the quick turnaround by the teams, demonstrate to me the dedication of the SQL Server and IIS teams and our friends at Hewlett-Packard to a more secure computing experience.


Special thanks go to Wade Hilmo on the IIS team and Bala Neerumalla on the SQL team.

Wade is the original and sole developer of URLScan. Another great job! Bala is the driving force behind the SQL tool and is responsible for both the idea and its realization.

Thanks guys!


Microsoft has posted a number of new related blogs posts. In addition to the SQL and IIS blogs mentioned above, I encourage you to check out the SVRD blog and the SDL blog from my colleagues down the hall.





Director, MSRC


*This posting is provided "AS IS" with no warranties, and confers no rights.*

Task Scheduler Changes in Windows Vista and Windows Server 2008 – Part One

Ask the Performance Team - Tue, 06/24/2008 - 11:00

Today we are looking at a couple of new changes/additions to the Task Scheduler service in Windows Vista and Server 2008.  As an overview, the Task Scheduler service provides controlled, unattended management of task execution, launched either on a schedule or in response to events or system state changes.  If you have worked with Task Scheduler in the past, then the updates/changes are fairly significant.  So, with that said, let’s dive right in … starting with the User Interface:

As you see above, Task Scheduler has now been integrated into the MMC as a new snap-in. Say goodbye to the standalone Scheduled Tasks window in Control Panel, and hello to your one-stop location for everything related to the Task Scheduler. Within this window, you are presented with the Task Status and Active Tasks sections. These sections allow you to quickly view the status of your tasks and which ones are currently active. There are quite a few changes, so to keep our post brief, we're only going to cover Triggers, Conditions, and Settings in this post, beginning with Triggers:

The ability to trigger a task based on any event captured in the event log is one of the most powerful new features of the Windows Vista / Server 2008 Task Scheduler.  This new capability allows administrators to send an e-mail or launch a program automatically when a given event occurs.  And it can be used to automatically notify a support professional when a critical event—for example, a potential hard drive failure—occurs on a client machine.  It also enables more complex scenarios, such as chasing down an intermittent problem that tends to manifest overnight.  Task Scheduler can be configured to notify an administrator by e-mail that a problem has occurred.  An administrator can also use Task Scheduler to automatically launch a program to collect more data when the error occurs.

Setting up tasks to launch when events occur is easy with the new Task Scheduler Wizard in Windows Vista / Server 2008.  An administrator can simply select the task in the Event Viewer to be used as a trigger and, with one click, launch the Task Scheduler Wizard to set up the task.  The seamless integration between the Task Scheduler user interface and the Event Viewer allows an event-triggered task to be created with just five clicks.  In addition to events, the Task Scheduler in Windows Vista / Server 2008 supports a number of other new types of triggers, including triggers that launch tasks at machine idle, startup, or logon.  A number of additional triggers allow administrators to set up tasks to launch when the session state changes, including on Terminal Server connect and disconnect and workstation lock and unlock.  Task Scheduler still allows tasks to be triggered based on time and date, and provides easy management of regularly scheduled tasks.

In the new Task Scheduler, triggers can be further customized to fine tune when tasks will launch and how often they will run. You can add a delay to a trigger, or set up a task to repeat at regular intervals after the trigger has occurred.  Administrators can also set limits on tasks, indicating that the task must stop running after a given period of time.  Activation and expiration dates can also be specified.

In addition to specifying Triggers, a number of conditions can be defined for each task.  Conditions are used to restrict a task to run only if the machine is in a given state.  For example, you can launch a program when an event occurs only if the network is available, launch an action at a specific time only if the machine is idle, or launch an action at logon only if the computer is not operating in battery mode.  In Windows Vista / Server 2008, administrators can define conditions based on the idle state of the computer, the power source of the computer (AC versus batteries), network connectivity, and the power state of the computer ("ON" versus in a sleep state).  Perhaps most importantly, a task can be configured to awaken the computer from hibernation or standby to run a task.

Administrators can use settings to instruct Task Scheduler what actions to take if a task fails to run correctly. In case the task fails, administrators can indicate how many times to retry it. If the computer is not powered on when a task is scheduled, an administrator can use settings to ensure that the task will run as soon as the machine is available. An administrator can also define a maximum execution time for a task, ensuring that the task will time out if it runs too long.

With that, it’s time to wrap up this post.  In our next post we will cover Flexible Actions and Triggers, Security and Reliability.

- Blake Morrison


But what about those lesser known features of Windows Server 2008?

Windows Server Division WebLog - Mon, 06/23/2008 - 23:50

TechEd 2008 IT Professional was a blast! It was great to meet and talk with many of you IT Pros. An opportunity came my way to discuss some of the lesser known features of Windows Server 2008, especially with relation to Active Directory. Take a look, this is a 2 part video. RODC, Auditing, Password Policies, and Domain Controller location (this is the biggie) are all discussed. Hopefully this gives some insight into the little things. Enjoy!


Justin Graham


New Networking-related KB articles for the week of June 7 - June 13

Microsoft Enterprise Networking Team - Mon, 06/23/2008 - 19:47

948745  MS08-034: Vulnerability in WINS could allow elevation of privilege

951376  MS08-030: Vulnerability in Bluetooth stack could allow remote code execution

953979  Device Manager may not show any devices and Network Connections may not show any network connections after you install Windows XP Service Pack 3 (SP3)

- Mike Platts


Anti-Spam Connection Filtering when installed on Hub servers and other AS configuration misunderstandings

Microsoft Exchange Team Blog - Mon, 06/23/2008 - 16:01

Recently I came across a situation where it was reported that Connection Filtering stopped working (IPs on the Blocklist and RBLs were no longer being blocked). The solution led me to write this blog to clarify some confusion about "when" connection filtering is applied and how configuration settings are applied when the agents are installed on a Hub server.

Let's begin by looking at the online documentation regarding Connection Filtering:

"By default, connection filtering is enabled on the Edge Transport server for inbound messages that come from the Internet but are not authenticated. These messages are handled as external messages. You can disable the filter in individual computer configurations by using the Exchange Management Console or the Exchange Management Shell.

When connection filtering is enabled on a computer, the Connection Filter agent filters all messages that come through all Receive connectors on that computer. As noted earlier in this topic, only messages that come from external sources are filtered. External sources are defined as non-authenticated sources. These are considered anonymous Internet sources."

From this explanation we see 4 things:

  1. Connection Filtering is installed on Edge by default (as are all the other AS agents)
  2. It is enabled for inbound (ExternalMail) by default
  3. It applies only to connections that have not authenticated
  4. Connection Filtering (and all AS agents) can be disabled/enabled on individual computers

So in the scenario (where connection filtering was no longer blocking) we checked:

  1. Get-Transportagent which showed the Connection Filtering agent enabled
  2. Get-IPBlocklistconfig which showed True for both Enabled and ExternalMailEnabled (False for InternalMailEnabled - default setting)
  3. Get-IpBlocklistentry which contained IPs that should be blocked
  4. Confirmed that Active Directory correctly reflected that the agent and config were enabled
  5. Agent Logs did not show activity related to the IPs that should be blocked

The missing piece was in understanding that connection filtering is a combination of how the Agents are enabled (noted above) and what rights the connecting SMTP session is granted. Examining the SMTP receive log files indicated that the session was granted all the rights possible (including ByPassAntiSpam) which only occurs with "Externally Secured" Authentication.

So here's the way it works:

When the AS Filter component's "Enabled" and "ExternalMailEnabled" parameters are set to true, any mail that comes in from an SMTP session anonymously or via a Partner may be scanned. If the AS Filter component's "Enabled" and "InternalMailEnabled" parameters are set to true, any mail from an authenticated session may be scanned. Note: Authenticated partner sessions are not considered Internal.

So to recap: The following 5 points should be considered when determining whether an AS agent executes against a particular SMTP session.

1) The agent itself must be enabled. i.e. The Connection Filtering agent. Use Get-TransportAgent to determine which agents are installed and enabled/disabled.

2) The Anti-Spam config must be enabled. i.e. Get-IPBlockListconfig | fl enabled

3) Consider whether the Anti-spam component is set for ExternalMailEnabled and/or InternalMailEnabled


4) Anonymous and Partner SMTP Sessions are governed by the ExternalMailEnabled parameter. Authenticated sessions (including connectors that are configured for External Authoritative) are governed by the InternalMailEnabled parameter.

5) What permissions does the submitting client have? i.e. All Exchange Servers and Externally Secured sessions get the Bypass Anti-Spam privilege (this cannot be removed). Even when ExternalMailEnabled is true and the SMTP session is anonymous, if NT Authority\Anonymous Logon has the Bypass Anti-Spam permission associated with the receive connector, mail will not be checked.
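The five points above can be condensed into a small predicate. This is my own summary sketch; the flag names are descriptive, not actual cmdlet parameters:

```python
# Does an anti-spam agent run against a given SMTP session? A condensation
# of the five points above into one decision function.

def agent_runs(agent_enabled: bool, config_enabled: bool,
               session: str,  # "anonymous", "partner", or "authenticated"
               external_mail_enabled: bool, internal_mail_enabled: bool,
               session_has_bypass: bool) -> bool:
    if not (agent_enabled and config_enabled):
        return False                     # points 1 and 2: agent and config on
    if session_has_bypass:
        return False                     # point 5: Bypass Anti-Spam always wins
    if session in ("anonymous", "partner"):
        return external_mail_enabled     # point 4: external sessions
    return internal_mail_enabled         # point 4: authenticated sessions

# Anonymous session with default settings: scanned.
print(agent_runs(True, True, "anonymous", True, False, False))  # True
# Same session granted Bypass Anti-Spam on the connector: skipped.
print(agent_runs(True, True, "anonymous", True, False, True))   # False
```

This is exactly why the scenario above failed: every check on the agent side passed, but the session's Bypass Anti-Spam permission short-circuited filtering.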

Now to dispel some other misunderstandings with regard to Configuration controls

The IPAllowlistConfig and IPBlocklistConfig default settings are shown below. However, if InternalMailEnabled is set to True... action is taken on trusted servers in the Exchange Organization. For grins, I decided to test this in my lab by sending mail from one Hub to another. The sending server passed the X-EXPS Auth command, which is the auth used for "Exchange Servers". In the debug tracing you could see that the IP was checked against the IPBlocklist but not rejected, because the Exchange Servers group is granted Bypass Anti-Spam permissions on the connector.

Configuration Misunderstandings when Anti-Spam Agents are installed on Hub servers

Anti-spam Agents are installed per server by running install-antispamagents.ps1 script.

After running the script you will have Organization level and Server level controls. There are two Anti-Spam Tabs added to the Exchange Management Console, one at the Org level and another at the Server\Hub level.

Organization level settings in the Exchange Management Console:

Server level setting in the Exchange Management Console:

Get-TransportAgent cmdlet is a per Transport server configuration setting. This example has 3 of the agents disabled. So this will only affect the Hub this is configured on:

The Set-TransportServer cmdlet's -AntispamAgentsEnabled parameter is a bit confusing at first. The default value is True when you run the script to install the agents on a Hub. When set to False, it does not disable the AS agents; it simply hides the Anti-Spam tab at the Server level for that particular Hub server in the Exchange Management Console (this may require a restart of the MSExchangeTransport service and closing/reopening the console).

The overlooked 'internalSmtpServers' list

Imagine this scenario:

Mail with valid SPF records is rejected by your SenderID agent. The sender's SPF TXT record shows these IP addresses:
"v=spf1 ip4: ip4: ip4: -all"

The rejected Message headers are:

Received: From ( by (
Received: From ( by (

Since Exchange has to pick an IP to compare to the SPF records, which one does it pick?

To determine this, Exchange starts with the last "Received: From" header in the mail message and looks for a match in the internalSmtpServers list, moving up the "Received: From" headers until a match is NOT found. In the example above, "Received: From (" will be the first IP match attempted. The reason the mail was rejected was that this IP was not in the internalSmtpServers list. Adding it returns a match, so the next "Received: From" header is then examined; that IP is not only the last external IP (not in the list of internalSmtpServers) but is also in the sender's SPF records (, and the mail passes the SenderID agent.
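The header walk described above can be sketched as follows. This is an illustrative model, not Exchange source code, and the IP addresses are documentation-range stand-ins for the ones elided in the example:

```python
# Model of selecting the "last external IP" for SenderID/SPF evaluation:
# skip every hop listed in internalSmtpServers; the first hop NOT in the
# list is the IP compared against the sender's SPF record.

def last_external_ip(hop_ips, internal_smtp_servers):
    """hop_ips are the connecting IPs from the Received: headers, ordered
    from your own infrastructure back toward the sender."""
    for ip in hop_ips:
        if ip not in internal_smtp_servers:
            return ip
    return None  # every hop was internal: the list is misconfigured

hops = ["198.51.100.2", "203.0.113.7"]  # edge hop, then the sender's MTA

# internalSmtpServers missing the edge hop: the edge's own IP is checked
# against the sender's SPF record, and the mail is wrongly rejected.
print(last_external_ip(hops, set()))             # 198.51.100.2
# With the edge hop listed, the sender's IP is evaluated instead.
print(last_external_ip(hops, {"198.51.100.2"}))  # 203.0.113.7
```

The moral matches the text: every filtering appliance or hosted hygiene service in front of Exchange must appear in internalSmtpServers, or the wrong IP gets evaluated.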

In some scenarios mail is filtered through a hosted service provider that provides services such as anti-spam and anti-virus. If you fail to add the hosted service provider's IP addresses to the internalSmtpServers list, it's possible that all inbound mail will cease. Upon investigation you find the following in your Agent Log:

Agent : Connection Filtering Agent
Event : OnEndOfHeaders
Action : RejectMessage
SmtpResponse : 550 5.7.1 External client does not have permissions to submit to this server
Reason : LocalBlockList
ReasonData : machine-generated entry

Machine generated entries are those added by the Sender Reputation Agent. You can get a quick look with the following cmdlet:

PS> Get-IPBlockListEntry | Where-Object {$_.IsMachineGenerated}

Remember, internalSmtpServers determines the "last external IP" used by the AS agents. If incoming mail is filtered through an appliance or hosted service, it's imperative that the IP address(es) of those servers be listed here.

When the AS agents are installed but the InternalSmtpServers is not populated, Event 1022 is logged:

Anti-spam agents are enabled and the list of internal SMTP servers is empty. Please use the set-TransportConfig task to populate this list.

Troubleshooting connection filtering

  1. Determine if the connecting server authenticated by examining the SMTP protocol receive logs
  2. What permissions were ultimately granted to the session (get-adpermission for the receive connector Exchange Extended rights on the user)
  3. Check the IPAllowlistconfig or IPBlocklistconfig for how they are enabled
  4. Check the IPAllowlistentry and / or IPBlocklistentry
  5. Check the individual server settings with Get-Transportagent

- Dave Forrest


SQL Use of MiniShells

Windows Powershell Team Blog - Mon, 06/23/2008 - 02:58

The SQL team has been receiving a lot of bashing lately over some of the decisions they made integrating PowerShell into SQL 2008.  I thought I would take a couple minutes to clarify a few things, eat some sin and talk about a constructive engagement model between the community and the feature teams implementing PowerShell cmdlets.

First let me declare my long standing admiration for the SQL team.  Those superstars have consistently been one of the teams that really GOT what it meant to release software for production environments.  They have a great quality culture and process and they have top-shelf leadership that reinforces this across the board.  SQL has been the gold standard of great scripting because their GUIs produce scripts that you could harvest for reuse (yes it wasn't full coverage but they GOT IT years before anyone else).  They are a great team - full stop.

The majority of the heartburn has come from SQL's use of MiniShells.  A MiniShell is a non-extensible version of PowerShell with a set of baked in Cmdlets and providers.  Some of my best community friends have pointed to SQL's use of MiniShells as evidence that they "don't get it".  This is not correct.  I told the SQL team about MiniShells and recommended that they use them because I thought they were a good fit for the sort of production-oriented value proposition they provide their customers.  So direct your criticisms at me on this one.

First let's talk about MiniShells and why they exist.  During the Vista reset, there was a great deal of anxiety about .NET versioning and general angst about instability arising from plugin models where code from various sources runs in the same process.  Imagine the case where you load plugins from 10 different sources and then something goes wrong - who is responsible?  Who do you call for support?  MiniShells allow teams to address these issues by creating fixed execution environments that are built in our labs and fully tested/verified before release.  If you have a problem with SQL PowerShell and call PSS, the first thing they are going to do is have you try to reproduce the problem using the SQL MiniShell.  (NOTE:  In my experience, 9 out of 10 times, a problem with multiple plugins in a process comes from bad memory management - a problem largely [but not completely] managed out of existence by the CLR.)

The problem is not that SQL shipped a MiniShell but rather that there are SQL UX scenarios that use the MiniShell instead of a general purpose PowerShell.  The SQL Management Studio GUI has context menus which launch their MiniShell.  This is where we made a mistake.  By definition, this is an escape out to an environment to explore and/or perform ad hoc operations.  This environment does not benefit from the tight production promises that a MiniShell provides, in fact it is hampered by them.  Because the MiniShell is a closed environment, you can't even manually add snap-ins.  This is what sent people’s meters into the red - and understandably so. 

Sadly it is too late to make this change for SQL 2008 but the SQL team will change this at their next opportunity.  In the meantime, when you are at the MiniShell prompt, you can just launch regular PowerShell with a console file that contains whatever snapins you want to use (including the SQL snapins - they can be added to a PowerShell session).  Clearly this is less than optimal but it is not onerous either.  We are working with the SQL team on the PowerShell V2 designs to make sure that we can offer teams like SQL the safety/production quality they need while providing the customers the flexibility they want.


Let's take a minute and talk about the engagement model.  I've encouraged the community to complain loudly when we screw up and aren't giving you what you want/need.  No one benefits by you suffering in silence.  In that regard, I can say that the MiniShell firestorm has been a big success.  That said, for complaints to be actionable, they need to arrive in time to be acted upon.  I'm sure that someone can point to a blog or email somewhere that pointed this out a long time ago but the reality is that it didn't pop as a problem until recently and now it is going to have to wait until the next cycle to get fixed.  The good news is that the community feedback mechanism works, we just need to improve the timing.

One last note about tone.  I've often joked that complaints were critical and politeness was desirable but optional (in other words, I'd rather get a rude complaint than polite silence).  Let me take a moment to tweak that guidance a little.  PowerShell is our passion and our day jobs and we'll endure almost anything to get the information we need to make this the best product ever.  So that engagement model is totally applicable to the PowerShell team.  That said, PowerShell is not the feature teams' day job.  In the case of the SQL team, it was a stretch goal pursued because of the passion of a small set of individuals (Michiel Wories in particular).  The bottom line is that it is still critical to complain, but when you complain about a feature team's support of PowerShell, just double check the tone and try to make it constructive.  Above all, let them know that you appreciate their efforts (just don't get so wrapped up that you forget to include the complaint :-))


Jeffrey Snover [MSFT]
Windows Management Partner Architect
Visit the Windows PowerShell Team blog at:
Visit the Windows PowerShell ScriptCenter at:


Getting Credentials From The Command Line

Windows Powershell Team Blog - Fri, 06/20/2008 - 21:09

When you use the Get-Credential cmdlet, you get a GUI dialog box to enter the credentials.  This is the "Common Criteria Certified" way of handling credentials.  It is also a pain in the butt at times.  If you are an admin, you can alter this and request credentials via the command line as follows:


PS> $key = "HKLM:\SOFTWARE\Microsoft\PowerShell\1\ShellIds"
PS> Set-ItemProperty $key ConsolePrompting True
PS> Get-Credential

cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
User: ntdev\jsnover
Password for user ntdev\jsnover: **************

UserName                                                           Password
--------                                                           --------
ntdev\jsnover                                  System.Security.SecureString




Jeffrey Snover [MSFT]
Windows Management Partner Architect
Visit the Windows PowerShell Team blog at:
Visit the Windows PowerShell ScriptCenter at:

How does Outlook Anywhere work (and not work)?

Microsoft Exchange Team Blog - Fri, 06/20/2008 - 14:57

I've been meaning to write a blog post about various aspects of Outlook Anywhere that people have been asking questions about for a while now. Somehow, I keep getting caught up in one thing or another, and have consequently delayed writing this post by almost 4 months. Ugh. Better late than never, I figure.

Given how long this blog post is overdue, I plan to cover a lot of topics, from frequently asked questions to common misconceptions to problems with Outlook Anywhere to suggested solutions for different problems.

How does Outlook Anywhere work?

I won't cover details on the cmdlets that enable and change settings for Outlook Anywhere. There is already a bunch of documentation on it. Instead, let's do a slightly deeper dive than the cmdlet documentation provides.

The values that you provide to Outlook Anywhere settings can be classified into 2 types of properties - client facing and server facing. Examples of client facing properties are ClientAuthenticationMethod and ExternalHostname. Examples of server facing properties are IISAuthenticationMethods and SSLOffloading. Client facing properties are picked up by Autodiscover and supplied to Outlook to configure client access to the Outlook Anywhere service. Server facing properties are picked up by a servicelet called RpcHttpConfigurator which runs as part of the Microsoft Exchange Service Host service. This servicelet runs every 15 minutes by default, but the interval can be adjusted by changing the value of the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\MSExchangeServiceHost\RpcHttpConfigurator\PeriodicPollingMinutes regkey. Note that setting this value to 0 turns off the RpcHttpConfigurator.

When the RpcHttpConfigurator runs, it picks up the IISAuthenticationMethods and SSLOffloading values from the AD and stamps it on the \rpc vdir settings in the IIS metabase - overwriting any previously set value. This means that if you manually change the settings on this vdir, you should expect to be run over pretty shortly by the RpcHttpConfigurator (unless you have set the reg key to 0).

Ok, so that's just part of what the servicelet does.

Outlook Anywhere depends on the RPC/HTTP Windows component to do the marshalling and unmarshalling of the RPC packets from the client to the CAS server. A client side RPC component is responsible for marshalling every RPC packet into an HTTP tunnel and sending it over to the \rpc vdir on the CAS server. RPCProxy is an ISAPI extension that unmarshals the RPC packet, retrieves the RPC endpoint that the client wants to talk to and forwards the packet to that endpoint. But imagine if, simply by authenticating against an IIS box running RPCProxy, you were able to connect to any server in the organization. By the weakest link theory, all you'd need to do would be hack into a single IIS server and you'd have free access to all servers in the org. Ouch! To alleviate this problem, RPCProxy only allows connections to be made to servers and ports that are in a trusted list. This list is maintained through the HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc\RpcProxy\ValidPorts regkey and contains all the servers/ports that RPCProxy is allowed to talk to. So, the other part of what the RpcHttpConfigurator servicelet does is that it queries the AD for all mailbox servers and stamps them in the ValidPorts regkey, allowing access to ports 6001, 6002 and 6004 for both FQDN and NetBIOS access. So, you will typically see something like mbx1:6001-6002;mbx1:6004;; as the value for the key. As new mailbox servers are added to the org, they will be picked up when the servicelet runs and be added to the key. Again, if you manually change this regkey, you should expect to be bulldozed by the servicelet.
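As a rough sketch of the value the servicelet stamps (the builder function and the domain name are invented for illustration; the real value is assembled by RpcHttpConfigurator from AD):

```python
def build_valid_ports(mailbox_servers, domain):
    """Build a ValidPorts-style value: 6001-6002 and 6004 entries for both
    the NetBIOS name and the FQDN of each mailbox server."""
    parts = []
    for host in mailbox_servers:
        for name in (host, f"{host}.{domain}"):
            parts.append(f"{name}:6001-6002")
            parts.append(f"{name}:6004")
    return ";".join(parts)

print(build_valid_ports(["mbx1"], "contoso.com"))
# mbx1:6001-6002;mbx1:6004;mbx1.contoso.com:6001-6002;mbx1.contoso.com:6004
```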

Note that the ValidPorts key is only used by RPCProxy as a filter to disallow communication with unlisted server ports. It is not used to determine which server to send requests to. For the same reason, the order in which servers are listed in this key does not matter. I just thought I'd clarify this since I was recently told that there was confusion on what this key accomplished.

Ok, simple enough, now that all the configuration is done, how does Outlook Anywhere actually establish its connections. The following diagram may help:

As you see above, the client specifies the VIP of the Load balancer (or direct CAS FQDN if the CAS is exposed to the Internet) as the HTTP endpoint and the mailbox server as the RPC endpoint. The query string is somewhat like this:

This tells the RPCProxy on CAS1 that the client is trying to connect to the server on port 6001. RPCProxy looks up the ValidPorts key and, if it is listed there, allows the connection to go through.

The blue and red arrows above represent the 2 different connections spawned by the RPC/HTTP client component to represent a single RPC connection. This is done because HTTP connections are half duplex (i.e. they either allow you to send information or receive information, not both at the same time). In the case of RPC, connections need to be long lived and full duplex, so the RPC_IN_DATA connection acts as the sending half duplex connection, while the RPC_OUT_DATA connection acts as the receiving half duplex connection. Since HTTP requires that each connection be given a max length, each of these connections is 1 GB "long" and is terminated when this limit is reached. Each of these connections is tagged with a client session id. When the RPC server component receives the RPC_IN_DATA and RPC_OUT_DATA with the same client session id, it knows that for any request received on the RPC_IN_DATA connection, it must reply back on the RPC_OUT_DATA connection. That's magic.
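The session-pairing idea can be modeled in a few lines of Python (a toy model of the behavior described above, not the actual RPC/HTTP component):

```python
def usable_sessions(connections):
    """Group half-duplex HTTP connections by client session id. A session
    is usable only once both its RPC_IN_DATA (send) and RPC_OUT_DATA
    (receive) legs have arrived with the same id."""
    sessions = {}
    for kind, session_id in connections:
        sessions.setdefault(session_id, set()).add(kind)
    return {sid for sid, kinds in sessions.items()
            if kinds >= {"RPC_IN_DATA", "RPC_OUT_DATA"}}

conns = [("RPC_IN_DATA", "a1"), ("RPC_OUT_DATA", "a1"), ("RPC_IN_DATA", "b2")]
print(usable_sessions(conns))  # {'a1'} - session b2 has no receive leg yet
```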

Ok, so you already know this, but I'll reiterate - the mailbox server has 3 ports that are used for RPC/HTTP: port 6001 is used for Mail connections, port 6002 is used for directory referral, port 6004 is used for proxying directory connections to AD. The Referral Service running on port 6002 and DSProxy running on port 6004 are part of the same mad.exe process, and the Referral Service just refers clients back to DSProxy to establish their Directory connections. If you Ctrl+Right Click the Outlook icon and click on Connection Status, it will tell you what connections exist (Mail vs. Directory), what server they are going to and what protocol they are using (HTTPS vs. TCP(direct Exchange RPC connection)).

I have conveniently omitted any discussion around certificates, since that can take up another few blog posts. As some would say, that is beyond the scope of this article and is left as an exercise to the reader.

How do I know Outlook Anywhere is working?

Simple... when no one is complaining! Seriously though, it is preferable to run diagnostics on Outlook Anywhere before subjecting it to thousands of users. The one tool that works pretty well in most cases is rpcping. Yes, it has a lot of parameters and is confusing, but it does provide pretty good diagnostic information and as long as you have the KB open, you can figure out where problems lie. Start by pinging just the RPCProxy by using the -E option. Once that works, move on to testing the mailbox server endpoints by removing the -E and adding -e 6001 instead. Do the same for 6002 and 6004.

A typical command line would be something like this. Refer to for usage details

rpcping -t ncacn_http -o -P "user,domain,password" -H 1 -F 3 -a connect -u 9 -v 3 -s -I " user,domain,password " -e 6004

How does Outlook Anywhere not work?

Unfortunately, there are some cases where Outlook Anywhere does not work without requiring manual tweaks. This is the part I wish I had blogged about earlier. I'm sure there are poor folks out there that have hit these issues and wasted their time figuring out what I had already learned...

DSProxy and IPv6

As of E12 SP1, Outlook Anywhere on Windows 2008 requires that IPv6 be manually turned off on the CAS server. This is because the DSProxy component that listens on port 6004 (mad.exe) for directory connections does not listen on the IPv6 stack. If you do a netstat -ano | findstr 6004, you will see only 1 LISTENING entry - the one that corresponds to the IPv4 stack. Contrast this with ports 6001 and 6002 that have 2 entries.

(As most of you already know, if you are running your Mailbox role on the same machine as a DC, lsass.exe, not mad.exe, listens on port 6004, so this problem will not surface since lsass.exe listens on both protocol stacks.)

How do you turn off IPv6? It depends on whether you are running CAS and Mailbox on the same server or on different ones.

If you're in a multi-server scenario where the RPCProxy is not on the same server as the Mailbox, then you need to do the following:

  1. Unselect IPv6 from the properties of your NIC (on the RPC-over-HTTP Proxy machine); that will force the RPC-over-HTTP Proxy to use IPv4 to talk to Exchange and everything will be fine. In most cases, this step suffices. If it does not, continue with steps 2 and 3.
  2. Under the regkey HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters, add a 32-bit DWORD named DisabledComponents with the value 0xFF
  3. Reboot the machine

If you're in a single-server scenario where the RPCProxy and Mailbox are on the same machine, then the above does not work since the loopback interface still uses IPv6. In this case, you need to make the following changes in the system32\drivers\etc\hosts file:

  1. Comment out the line "::1    localhost"
  2. Add the following two lines:
       <IPv4 address>    <hostname of the computer>
       <IPv4 address>    <FQDN of the computer>

Thanks to Kevin Reeuwijk and others for finding and reporting the issue and solution. A fix (make DSProxy listen on the IPv6 stack) is on the way and is expected to be available in Exchange 2007 SP1 RU4 in Q3 2008.

DSProxy and Split RPC_IN_DATA, RPC_OUT_DATA connections

In the diagram above, you will notice that I have used a Source IP Loadbalancing layer. This ensures that the RPC_IN_DATA and RPC_OUT_DATA connections coming from a single Outlook instance are always affinitized to the same CAS server. However, there are some legitimate scenarios where Source IP affinity is not viable for customers. A typical example is when a large number of end users are behind NAT devices, causing all connections to end up with the same IP and hence the same CAS server... yay load balancing! Outlook Anywhere does not support cookies, so cookie based load balancing cannot be used either. The only way of spreading load across the server farm is to use no affinity or SSL-ID based affinity. However, this has the problem that the RPC_IN_DATA and RPC_OUT_DATA connections could (and most likely would) end up on different CAS servers as shown in the diagram below:

If you've been reading closely, you'll remember my earlier mention that the RPC server component is well aware of client session IDs and can reply on RPC_OUT_DATA for any requests on RPC_IN_DATA. And if that's the case, we should still be fine since Outlook always specifies the mailbox server as its RPC endpoint. Well, almost. We are fine for ports 6001 and 6002, which are real RPC endpoints. The issue is with port 6004, where DSProxy pretends to be an RPC endpoint but is just a proxy, as the name implies. DSProxy only proxies client connections through to the DC. In the example above, RPC_IN_DATA is proxied to DC1 while RPC_OUT_DATA is proxied to DC2. The DCs are the real RPC endpoints. However, now that the 2 connections have been split, neither of the DCs is aware of the other connection, and requests sent on RPC_IN_DATA are lost in oblivion. We call this split connectivity and it is a problem surfaced by SSL-ID or no affinity load balancing. While I would recommend not using these configurations if avoidable, it is clear as described earlier that these may be the only alternatives. Think hard if this is the case, since the workaround that I am describing below will be tedious to maintain.
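A toy load balancer in Python makes the failure mode concrete (the function, server names and IPs are all invented for illustration):

```python
import random

def place_connections(connections, cas_servers, source_ip_affinity, rng):
    """Assign each half-duplex leg to a CAS server. With source-IP affinity,
    every leg from one client IP lands on the same CAS; with no affinity,
    each leg is picked independently, so a session's RPC_IN_DATA and
    RPC_OUT_DATA legs can split across servers (split connectivity)."""
    sticky = {}
    placement = []
    for client_ip, leg in connections:
        if source_ip_affinity:
            target = sticky.setdefault(client_ip, rng.choice(cas_servers))
        else:
            target = rng.choice(cas_servers)  # may split the session's legs
        placement.append((client_ip, leg, target))
    return placement

legs = [("192.0.2.1", "RPC_IN_DATA"), ("192.0.2.1", "RPC_OUT_DATA")]
# With source-IP affinity, both legs are guaranteed to land on the same CAS:
print(place_connections(legs, ["cas1", "cas2"], True, random.Random(0)))
```

With `source_ip_affinity=False`, repeated runs will eventually place the two legs on different servers, which is exactly the split that breaks DSProxy on port 6004.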

The goal of these steps is to eliminate the possibility of split connectivity by (1) having clients bypass DSProxy wherever possible and (2) constraining DSProxy to a single DC for any requests that still reach it.

First off, you need to avoid using DSProxy wherever possible. Normally, the Referral Service running on port 6002 refers clients to DSProxy on port 6004. By setting the following regkey, you instruct Referral Service to not send clients to DSProxy, but instead give them a referral to a DC for directory connections. So, instead of client connections going from Client to RPCProxy to DSProxy to DC, the path would be from Client to RPCProxy to DC. Note that the client is not directly connecting to the DC, so it is not required to publish the DCs to the internet or open any new firewall ports. See KB for details:

On the Mailbox servers: a DWORD  entry needs to be created on each Mailbox server named "Do Not Refer HTTP to DSProxy" at HKLM\System\CCS\Services\MSExchangeSA\Parameters\ and the value set to 1

Next, as indicated earlier, the RPCProxy will block access to the DC servers unless those servers are included in the ValidPorts regkey. So, set the following on the Client Access Servers:

  1. The ValidPorts setting at HKLM\Software\Microsoft\RPC\RPCProxy needs setting so that the entries referring to 6004 point to DC servers in addition to the mailbox server.
  2. The PeriodicPollingMinutes key at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\MSExchangeServiceHost\RpcHttpConfigurator\ needs setting to zero to prevent RpcHttpConfigurator from updating the Valid Ports key automatically.

Finally, you need to make sure that the DCs are listening on port 6004:

On the Global Catalog servers: a REG_MULTI_SZ entry needs to be created on each GC named "NSPI interface protocol sequences" at HKLM\System\CCS\Services\NTDS\Parameters\ and the value set to ncacn_http:6004

These fixes will make sure that all directory connections bypass DSProxy and terminate at the DCs, thereby allowing the DC RPC server side component to receive both the RPC_IN_DATA and RPC_OUT_DATA connections.

There is 1 last thing to deal with in this SSL-ID load balanced configuration. Outlook profile creation hard codes a call to DSProxy on 6004. Which means that we can get split connectivity during profile creation. To deal with this minimal volume of traffic, there is 1 final regkey that should be set on the mailbox servers:

On the Mailbox Servers - set the HKLM\System\CCS\Services\MSExchangeSA\Parameters key "NSPI Target Server" to the FQDN of the DC that profile creation should use.

By using only 1 DC for profile creation, all DSProxy calls will be proxied into that single DC, once again avoiding split connectivity.

That's it folks!

Of course, subsequent releases will provide cleaner solutions for such topologies, but for now, rest assured that having gone through the above steps multiple times, I feel your pain.

That's pretty much it. I hope that adds some clarity to how Outlook Anywhere works and hasn't succeeded in confusing everyone even more.

Until the next post - Hasta Luego!

- Sid


Working with Very Large Print Jobs

Ask the Performance Team - Fri, 06/20/2008 - 11:00

There are sometimes situations where printing of very large documents containing high resolution graphics, text and images is needed.  With the growing technology of high end cameras flourishing in the market, image sizes are growing larger and larger.  Additionally, image editing applications present endless opportunities to enhance and modify images to your heart's content. Due to the amount of information stored in images like this, the final spool job can sometimes reach multiple gigabytes in size.  There are some issues seen when we print extremely large print jobs – our focus today will be on those issues, as well as some solutions.  Let’s get started with having a look at the issues first.

  1. When a print job reaches 3.99 GB in size, the counter for the job size resets to 0 and it starts growing again.
  2. While printing the job, the printer prints the initial data and then suddenly spits out the paper as if the print job is over.  On restarting the print job, it starts printing again from the beginning.

To keep the scope of our discussion within reason, the environment in our example is Windows XP/Windows Vista/Windows Server 2003 x64 clients and Windows Server 2003 or Windows Server 2008 x64 as the print server.  To begin with, it is often thought that a print job cannot grow over 4 GB in size, but this is not true.  The spool file (.spl file) which gets created can actually grow easily to over 4 GB in size.  Thus, the obvious question: why do very large print jobs fail to print as expected?  There are two reasons for this behavior:

The first aspect is when we see the print job grow to 3.99 GB, reset to zero and start over again.  This is a benign display issue and has no effect on the actual printing of the job.  The UI only displays 32-bit sizes and wraps larger values; internally, print job sizes are kept as 64-bit values.
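The wraparound is easy to model (a sketch of the described display behavior, not actual spooler code):

```python
def displayed_job_size(actual_bytes):
    """The queue UI shows only the low 32 bits of the 64-bit job size,
    so the displayed counter wraps back toward zero every 2**32 bytes."""
    return actual_bytes & 0xFFFFFFFF

five_gib = 5 * 1024**3
print(displayed_job_size(five_gib))  # 1073741824 - a 5 GiB job shows as 1 GiB
```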

The second issue is why the print job itself actually fails.  First, it is essential to know which application is being used to generate the print job - is it a 32-bit application or a 64-bit native application?  We normally see this issue when we have a 32-bit application printing to a 64-bit server.  Here is what happens: when the application is printing, there are two ways the job may be programmatically created, as we can see in the diagram below (we also discussed several aspects of printing in our post on Basic Printing Architecture last year):

  1. Printing via GDI
  2. Printing directly through the Print Spooler (winspool.drv) bypassing GDI

The Graphics Device Interface (GDI) enables applications to use graphics and formatted text on both the video display and the printer.  Microsoft Windows based applications do not access the graphics hardware directly.  Instead, the GDI interacts with device drivers on behalf of applications. The GDI can be used in all Windows-based applications.  When a print job is created via the GDI interface, there is a limitation of 4 GB per page.  If a single page is over 4 GB in size, it will not print properly.  If a job is made up of multiple pages, but no single page is over 4 GB in size, you should not have a problem.  So, what is the solution for printing large documents?

  • In the case of a single job for instance, you can select the option to 'Print directly to the printer' on the Advanced tab under the printer properties.  However, it would not be recommended to configure this as the default setting, since that basically defeats the purpose of having a print server.
  • The application you are using may allow you to resize or spread out the images so that a single page will not be over 4 GB in size.  The problem with this of course is knowing which pages will be of what size until you try to print and it fails.
  • There is another way to make this work - if you are the developer of the application in question.  You can use certain APIs to facilitate large print jobs.  The application can generate a printer-ready PDL of any size and complexity and use the AddJob, ScheduleJob, StartDocPrinter, WritePrinter and EndDocPrinter APIs to spool the raw PDL.  PDL stands for Page Description Language, and is basically the format by which a page is put together into a print job.  You can think of it as sort of the most basic format of a print job.  PCL and PostScript, for instance, are forms of PDL.

Winspool.drv is the client interface into the spooler.  It exports the functions that make up the spooler's Win32 API, and provides RPC stubs for accessing the server.  The OpenPrinter, StartDocPrinter, StartPagePrinter, WritePrinter, EndPagePrinter, and EndDocPrinter functions mentioned above are all provided by winspool.drv.  The functions in winspool.drv are mainly RPC stubs to the local spooler service (Spoolsv.exe).  By using these APIs to create the job, the spooler will be able to bypass GDI and send the PDL directly to the printer via Winspool.  Here is how a print job would be created with the help of these APIs:

  1. Application calls OpenPrinter to get a handle to a printer from the Spooler.
  2. Application calls StartDocPrinter to notify the Spooler that a print job is started.  If successful, a job id is returned to represent the job created by the Spooler.
  3. Application calls StartPagePrinter, WritePrinter, and EndPagePrinter repeatedly to define pages and write data to the printer.
  4. Application calls EndDocPrinter to end the print job. In your code it may look similar to the following:

OpenPrinter()
StartDocPrinter()
StartPagePrinter()   (starts a new page)
WritePrinter()       (writes data to the page)
EndPagePrinter()     (ends the page; repeat the previous three steps until all pages are done)
EndDocPrinter()
ClosePrinter()

And with that, we’ve reached the end of this post.  Hopefully this information helps you understand some of the challenges involved with very large print jobs.

- Ashish Sangave


Group Policy Health Cmdlet

Windows Powershell Team Blog - Fri, 06/20/2008 - 03:06

SDMSoftware has just released a Group Policy Health Cmdlet HERE.

They have a great, high-quality, 9 minute DEMO VIDEO showing what the cmdlet does and how to work it.

Nice stuff.

Jeffrey Snover [MSFT]
Windows Management Partner Architect
Visit the Windows PowerShell Team blog at:
Visit the Windows PowerShell ScriptCenter at:

Windows PowerShell Virtual User Group Meeting #6 - June 24th.

Windows Powershell Team Blog - Thu, 06/19/2008 - 17:57

Marco Shaw is hosting the 6th PowerShell Virtual User Group meeting on June 24th. (See for details and sign-up). He has graciously allowed Wassim and me to present some of the new post-CTP2 things that the PowerShell team is working on. This should be an excellent opportunity to see the latest and greatest and also to provide us with feedback on these new features.

I'm going to be talking about the new Modules feature in PowerShell V2. Modules are essentially a replacement for the old V1 snap-in concept. We haven't talked a lot about modules in public yet so I asked Marco for a chance to remedy that.  JayKul had a blog posting a while back that has pretty good coverage of what's in CTP2 so I'll be looking at what we expect to see in the final product.

Wassim Fayed will be presenting an update on remoting, including some new and advanced features that will be in the final product.


Hope to see you all there!



Bruce Payette, Principal Developer, Windows PowerShell Team

MS08-030 Re-released for Windows XP SP2 and SP3

Hello, this is Christopher Budd.


I wanted to let folks know that we've just re-released MS08-030. There is a new version of this security update available for Windows XP SP2 and SP3 customers, and we encourage them to deploy these new updates. There are no new updates for the other versions of Windows discussed in the bulletin.


After we released MS08-030 we learned that the security updates for Windows XP SP2 and SP3 might not have been fully protecting against the issues discussed in that bulletin. As soon as we learned of that possibility, we mobilized our Software Security Incident Response Process (SSIRP) to investigate the issue.


Our investigation found that while the other security updates were providing protections for the issues discussed in the bulletin, the Windows XP SP2 and SP3 updates were not.


Our engineering teams immediately set to work to address the issue and release new versions of the security updates for Windows XP SP2 and SP3. These are available now and are being delivered through the same detection and deployment tools as the original update.


If you’re running Windows XP SP2 or SP3, you should go ahead and test and deploy these new security updates. If you’ve deployed security updates for MS08-030 for other versions of Windows, you don’t need to take any action for those systems.


Our focus has been on delivering new versions of these updates to protect customers as quickly as possible. Now that that’s done, as part of our standard process, we’re beginning an investigation into how this happened. We’re just starting this investigation, but early on, it appears that there may have been two separate human issues involved. When we’re done with our investigation, we’ll take steps to better prevent it in the future.





*This posting is provided "AS IS" with no warranties, and confers no rights.*


Windows Powershell Team Blog - Thu, 06/19/2008 - 12:34

We looked at a lot of CLI models when designing PowerShell.  One of them was netsh.  One of the things I had a love/hate relationship with was netsh's use of context.  In netsh, you can set context via navigation and then you don't need to provide that context on the command line.  For instance you navigate into the FIREWALL and then you perform a set of operations and you don't need to say that you are working on the firewall because it picks it up from the context.

I thought I would experiment with this a little this morning.  I don't know if this is useful or not but it shows a couple of interesting scripting techniques so I thought I would share it anyway.  This is PUSH-NOUN.PS1.  You can push a NOUN context and then you don't need to specify that noun for the first command in a command sequence - you just specify the verb.  You type "?" for help,  "exit" to exit and "! cmd" to escape and execute a command directly.  The examples will make it clearer. 

First the code:

# Push-Noun.ps1
# Sets a context which allows you to work on a noun similar to the way NETSH does
param($Noun)

while ($true)
{
    Write-Host "[$Noun]> " -NoNewLine
    $Line = $host.UI.ReadLine().Trim()
    switch ($Line)
    {
        "exit"   {return}
        "quit"   {return}
        "?"      {Get-Command "*-$Noun" | ft Verb,Definition -Auto | Out-Host}
        {$_.StartsWith("!")} {
                    # "! cmd" escapes the context and runs the command as typed
                    $Cmd = $_.SubString(1)
                    Invoke-Expression $Cmd | Out-Host
                 }
        default  {
                    # Treat the first token as a verb and compose it with the noun
                    $Verb, $Rest = $Line.Split()
                    $Cmd = "$Verb-$Noun $Rest"
                    Invoke-Expression $Cmd | Out-Host
                 }
    }
}

Now let's run it:

PS> .\push-noun service
[service]> ?

Verb    Definition
----    ----------
Get     Get-Service [[-Name] <String[]>] [-ComputerName <String[]>] [-Include <String
New     New-Service [-Name] <String> [-BinaryPathName] <String> [-DisplayName <String
Restart Restart-Service [-Name] <String[]> [-Force] [-PassThru] [-Include <String[]>]
Resume  Resume-Service [-Name] <String[]> [-PassThru] [-Include <String[]>] [-Exclude
Set     Set-Service [-Name] <String> [-DisplayName <String>] [-Description <String>]
Start   Start-Service [-Name] <String[]> [-PassThru] [-Include <String[]>] [-Exclude
Stop    Stop-Service [-Name] <String[]> [-Force] [-PassThru] [-Include <String[]>]
Suspend Suspend-Service [-Name] <String[]> [-PassThru] [-Include <String[]>]

[service]> get a*

Status   Name               DisplayName
------   ----               -----------
Running  AeLookupSvc        Application Experience
Running  ALG                Application Layer Gateway Service
Stopped  Appinfo            Application Information
Running  AppMgmt            Application Management
Running  AudioEndpointBu... Windows Audio Endpoint Builder
Running  Audiosrv           Windows Audio

[service]> get |where {$_.Name -match "^A" -AND $_.Status -eq "stopped"}
Status   Name               DisplayName
------   ----               -----------
Stopped  Appinfo            Application Information

[service]> !gps *ss

Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName
-------  ------    -----      ----- -----   ------     -- -----------
    673       8     2308       6636    70     6.75    492 csrss
    539       7    16044      21460   190   149.98    540 csrss
   1633      16    11040      17680    68    67.47    656 lsass
    866       5     3976       5720    33     2.56    548 psxss
     33       1      312        760     5     0.08    428 smss

[service]> exit

Note that given the mechanisms we have, this only really works with the first command in the sequence.  Imagine that you wanted to do something like:

PS> Get | where {$_.Name -match $test} | STOP

Where STOP did not have to specify the NOUN.  You really can't do that today.  We have been thinking about exposing a TOKEN-NOT-RESOLVED event which would call a user-defined function which would allow you to do runtime fixups.  If we had that mechanism in place, you could do this.   Hmmmmmm.
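As an aside, later PowerShell versions (3.0 and up) shipped a hook very much along these lines: `$ExecutionContext.InvokeCommand.CommandNotFoundAction` lets you intercept a token that failed to resolve and substitute a command at runtime. A minimal sketch of wiring it up to a pushed noun (the `$Noun` variable here is illustrative, not part of any shipped feature):

```powershell
# Sketch only: assumes PowerShell 3.0+, where CommandNotFoundAction exists.
# When a bare verb such as "Stop" fails to resolve, try "<Verb>-<Noun>" instead.
$Noun = 'Service'   # the pushed noun context (illustrative)
$ExecutionContext.InvokeCommand.CommandNotFoundAction = {
    param($CommandName, $EventArgs)
    $candidate = Get-Command "$CommandName-$Noun" -ErrorAction SilentlyContinue
    if ($candidate) {
        # Hand the engine a script block to run in place of the missing command
        $EventArgs.CommandScriptBlock = { & $candidate @args }.GetNewClosure()
        $EventArgs.StopSearch = $true
    }
}
```

With something like this in place, `Get | where {...} | STOP` could resolve `STOP` to `Stop-Service` on the fly.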


Jeffrey Snover [MSFT]
Windows Management Partner Architect
Visit the Windows PowerShell Team blog at:
Visit the Windows PowerShell ScriptCenter at:
