Running PowerShell Scripts from a Management Pack – Part 1

Courtesy of Brian “The Brain” Wren:

Considering all my work with OpsMgr management packs and my infatuation with PowerShell, it’s surprising that I’ve never put the two together in a public forum. I keep meaning to, just don’t ever seem to get there. I finally committed myself to this blog post while sitting at my local It’s a Grind (that’s a coffee house to all you Seattle Starbucks fans. It’s a Grind was born in Long Beach, and I am fanatically loyal to my home town).

So here’s the basic question – can I use a PowerShell script in an OpsMgr management pack? In other words, can I execute a PowerShell script from a rule, monitor, diagnostic, recovery, or task? The answer is absolutely you can, but there are some considerations to keep in mind. First let’s cover those considerations, then we’ll get to concepts and code. If you just want the modules to copy and paste into your MP, go ahead and jump to the end.

The biggest issue is that many of the servers in your environment won't have PowerShell installed. Rules and monitors sent to an agent are executed locally on the agent computer, and any executables must be installed before the agent attempts to use them. In the case of VBScript and JScript, we rely on cscript.exe being present, but that's a safe bet since it is installed by default on all Windows machines. PowerShell is obviously not installed by default, and even in Windows Server 2008 it's an optional feature. If you want to use a PowerShell script for a console task, you'll need to make sure PowerShell is installed on the client workstation the task is executed from. This is less of a concern, though, since PowerShell and Command Shell, while not required for the OpsMgr user interfaces, are pretty strongly recommended alongside them.

There can also be significant overhead from launching PowerShell. Launch a PowerShell window and then have a look at Task Manager: powershell.exe will probably be consuming something like 30 MB of memory, which in my own testing is about five times a typical cscript.exe instance. This is not surprising considering all the rich functionality that PowerShell provides. My only point is to try to stay away from scenarios where you need to launch a PowerShell script every couple of minutes. I've heard the OpsMgr product team is working on some strategies to reduce this overhead, but until then you want to use PowerShell scripts where they don't have to be launched too frequently.


The basic idea of running a PowerShell script is to use one of the Command Executer modules to launch powershell.exe with your script. This is actually the method that quite a few scripts in existing management packs use – calling cscript.exe with these modules. The Command Executer modules will launch the executable of your choice, let you specify command line arguments, and let you specify the name and contents of one or more text files (which in our case will be the script). These text files are created on the agent prior to command execution, so you are guaranteed they will be in place when your specified command is launched.

Have a look at Microsoft.Windows.ScriptProbeAction in Microsoft.Windows.Library, for example. That module uses System.CommandExecuterProbe from System.Library. It specifies cscript.exe as the command to execute, provides the appropriate command line arguments such as /nologo and the name of the script, and then passes in the body of the script as a file. In order to execute PowerShell, we really just need to figure out the command line required to launch PowerShell, execute a script, and then exit.

PowerShell Command Line

If you run powershell.exe /?, you get the command line arguments for PowerShell. To launch a command and exit, you use the -command argument, the invoke operator (&), and a command to execute. The example syntax given by that help is as follows:

powershell -command "& {get-eventlog -logname security}"

We're going to need that basic syntax, but in a format that the management pack can understand. It won't handle that ampersand as-is, and we're going to have to specify a path for the script. PowerShell demands a complete path to a script even if it's in the current directory. The Command Executer will drop the script to a temporary directory and then use that directory as its default when it executes the command, so we can assume the script will be located in the current directory. Assuming we use a parameter called ScriptName for the name of the script, the command line in the management pack would look like the following:

-command "&amp; {.\$Config/ScriptName$}"

If you're completely baffled by that syntax, here's a quick explanation. We replace & with &amp; because OpsMgr interprets the bare & as a special character. We get away with that character in the script body itself if we enclose the script in CDATA tags, but we can't use CDATA tags on our command line. $Config/ScriptName$ is a context variable: OpsMgr will replace the variable inside the dollar signs with its actual value at run time. Config refers to the configuration parameters of the module, and ScriptName is the name of the parameter. Finally, the .\ just refers to the current directory. So, if we used MyScript.ps1 for the script name, we would end up running the following command:

powershell -command "& {.\MyScript.ps1}"

Implementing the Modules

Rather than write a specific rule that uses the Command Executer modules and embeds the PowerShell-specific complexity, it is far more valuable to create a couple of base modules that can be leveraged by rules, monitors, and tasks. A Data Source module that runs a PowerShell script on a timed basis and returns a property bag could be used by a rule, monitor, or diagnostic. Another Data Source module could run a script and return discovery data. A Write Action module that simply runs a PowerShell script on demand to perform some defined action would support a task or recovery.

I'll provide the data source below. Given this, it should be pretty straightforward to create a write action (hint – use System!System.CommandExecuter). You could also create a discovery module based on System!System.CommandExecuterDiscoveryDataSource.

<DataSourceModuleType ID="PowerShell.Library.PSScriptPropertyBagSource" Accessibility="Public">
  <Configuration>
    <xsd:element xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="IntervalSeconds" type="xsd:integer"/>
    <xsd:element xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="TimeoutSeconds" type="xsd:integer"/>
    <xsd:element xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="ScriptName" type="xsd:string"/>
    <xsd:element xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="Arguments" type="xsd:string"/>
    <xsd:element xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="ScriptBody" type="xsd:string"/>
  </Configuration>
  <ModuleImplementation>
    <Composite>
      <MemberModules>
        <DataSource ID="DS" TypeID="System!System.CommandExecuterPropertyBagSource">
          <!-- Scheduler and remaining CommandExecuter configuration omitted for brevity -->
          <CommandLine>-Command "&amp; {.\$Config/ScriptName$ $Config/Arguments$}"</CommandLine>
          <TimeoutSeconds>$Config/TimeoutSeconds$</TimeoutSeconds>
          <Files><File><Name>$Config/ScriptName$</Name><Contents>$Config/ScriptBody$</Contents></File></Files>
        </DataSource>
      </MemberModules>
      <Composition><Node ID="DS"/></Composition>
    </Composite>
  </ModuleImplementation>
</DataSourceModuleType>
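As a rough sketch of the write action the hint above points at – assuming System.CommandExecuter accepts the same CommandLine, TimeoutSeconds, and Files configuration as its property bag sibling, which you should verify against System.Library before shipping anything – it might look something like this:

```xml
<!-- Sketch only, not taken from a shipping MP. Element names inside the member
     module are assumed to mirror the property bag data source above. -->
<WriteActionModuleType ID="PowerShell.Library.PSScriptWriteAction" Accessibility="Public">
  <Configuration>
    <xsd:element xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="TimeoutSeconds" type="xsd:integer"/>
    <xsd:element xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="ScriptName" type="xsd:string"/>
    <xsd:element xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="Arguments" type="xsd:string"/>
    <xsd:element xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="ScriptBody" type="xsd:string"/>
  </Configuration>
  <ModuleImplementation>
    <Composite>
      <MemberModules>
        <WriteAction ID="WA" TypeID="System!System.CommandExecuter">
          <CommandLine>-Command "&amp; {.\$Config/ScriptName$ $Config/Arguments$}"</CommandLine>
          <TimeoutSeconds>$Config/TimeoutSeconds$</TimeoutSeconds>
          <Files><File><Name>$Config/ScriptName$</Name><Contents>$Config/ScriptBody$</Contents></File></Files>
        </WriteAction>
      </MemberModules>
      <Composition><Node ID="WA"/></Composition>
    </Composite>
  </ModuleImplementation>
</WriteActionModuleType>
```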

Writing the Script

For the write action, there’s really nothing special about the script. Any working PowerShell script will be fine. If you need to return a property bag or discovery data from a data source though, you’re going to need to use MOM.ScriptAPI just like in VBScript. PowerShell works fine with COM objects, so this is not a problem at all. The following line will create an object variable, and using it is pretty similar to how you did it in VBScript.

$api = New-Object -ComObject "MOM.ScriptAPI"
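For instance, the same LogScriptEvent method you would call from VBScript is available on that object. A small sketch – the script name and event number 101 below are arbitrary examples, and this only works on a machine with the OpsMgr agent installed, since the agent is what registers the MOM.ScriptAPI COM class:

```powershell
# Write an informational event (severity 0) to the Operations Manager event log,
# exactly as a VBScript would through the same COM object.
$api = New-Object -ComObject "MOM.ScriptAPI"
$api.LogScriptEvent("MyScript.ps1", 101, 0, "Script started")
```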

For example, below is a PowerShell script to output the name and size of a specified file as a property bag. This could be called from a rule or monitor that uses a condition detection to map the property bag information to performance data. While this is coded in PowerShell, the basic process is identical to a script written in VBScript.

$api = New-Object -ComObject "MOM.ScriptAPI"
$file = Get-Item $args[0]
$bag = $api.CreatePropertyBag()
$bag.AddValue("FileName", $file.Name)
$bag.AddValue("FileSize", $file.Length)
$api.Return($bag)
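To complete the picture, the rule consuming that data source could map the bag to performance data with the generic performance mapper. A sketch, assuming the System.Performance.Library reference is aliased as Perf in your manifest; the object and counter names are arbitrary, but the Property[@Name=...] values must match the names passed to AddValue in the script:

```xml
<!-- Sketch of a condition detection that turns the property bag into performance data. -->
<ConditionDetection ID="PerfMapper" TypeID="Perf!System.Performance.DataGenericMapper">
  <ObjectName>MonitoredFile</ObjectName>
  <CounterName>FileSize</CounterName>
  <InstanceName>$Data/Property[@Name='FileName']$</InstanceName>
  <Value>$Data/Property[@Name='FileSize']$</Value>
</ConditionDetection>
```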

More Later

This should be enough information to get people off and running. I'd like to provide some more samples, and I've thought about actually writing a library MP with a complete set of PowerShell modules. No hard commitment on that, but I'll do my best in the coming weeks.

Responses to Running PowerShell Scripts from a Management Pack – Part 1

  1. Mike October 8, 2008 at 1:27 pm #

    Quick question, or not 🙂 When a state changes via an event detection the monitor can kick off an alert and run a diagnostic task. I would like to call a web service through powershell to retrieve more info concerning the event from our event dictionary. I have all of that working but am having a problem passing the Event Id or Event context as a whole to powershell to make this dynamic. There doesn’t seem to be a way to do this in the alert mechanism and the diagnostic task variables don’t make the Data context available to retrieve this information. Is there a way to make this happen?

