
Scripting Street Knowledge

October 8, 2015

When it comes to building a scripted IT automation solution that goes beyond a few commands in a single script file, sufficient technical knowledge and scripting skills are not enough: you also need to approach the task in a way that helps you resolve complexity and plan ahead. With this post I want to raise your awareness of some General Principles for the Design of Scripted IT Automation Solutions that help you master the situation. Here we go…

Nip it in the bud!

Don’t underestimate the beginnings. A quick-and-dirty approach isn’t a bad way per se; up to a certain point it’s a good way to go (at least from a cost-benefit perspective). Beyond that “certain point”, though, it can lead to chaos: if you follow the quick-and-dirty approach you’ll succeed to some degree and end up fixing, updating, adapting, bolting on new features, and partly rewriting your solution over time. The day will dawn when you want to rebuild the entire solution from scratch because it has evolved into something hardly manageable. It’s difficult to walk in mud, so to speak. As things usually turn out, there’s no time or budget for a task like that, so you have to live with the mess you’ve made.

The moral of this story: never default to quick-and-dirty to begin with. Minding the early stages is very important, possibly the most important thing, because later there will hardly be a more advantageous moment to put things (back) on the right track. So be sure to make time for thinking and planning, and think it through to the end! Otherwise you might draw the short straw.

Whenever you’re confronted with a new scripting challenge, howsoever minor it seems, ask yourself:

  1. Is the requested solution a feature of the scripting language?
  2. Have co-workers or I ever created a solution like the requested one?
  3. Could co-workers or I possibly re-use the solution or parts thereof?

Don’t get me wrong regarding the first question, but quite often people rebuild existing commands or features due to lack of knowledge or preparation time. So be sure to make time to verify that you’re really dealing with something that doesn’t already exist!

If the answer to the second question is “Yes” you definitely should figure out whether it’s worth the effort to adopt and adapt that solution. Here, likewise, the point is that you don’t want to reinvent the wheel.

If the answer to the third question is “Yes” you should by no means opt for quick-and-dirty; design the solution for repeated use, that is, for reusability.

As time goes on, you’ll get used to re-using your own work as partial solutions over and over again. At the end of the day you’ll realize that quick-and-dirty would hardly ever have been a suitable approach.

Apart from the questions above, you should try to get to the bottom of the requested automation solution. Too often it turns out that the original request covered just the tip of the iceberg rather than the big picture. The big picture is exactly what you need, so be prepared to clarify it. Ideally, you carry out a scoping meeting to get the big picture and write down a scope statement that precisely defines the requested scripting solution.

Divide and conquer

Facing a more sophisticated scripting challenge raises the question of how to tackle the task from a scripting perspective. How do you handle complexity? In computer science there is an algorithm design paradigm called “divide and conquer” (D&C) that works by recursively breaking a given problem down into sub-problems, solving the (plain) sub-problems, and combining the results to solve the original (complex) problem. I highly recommend adopting the D&C approach as a general scripting principle: break a scripting challenge down into as many separate scripting tasks as possible, solve these tasks individually, and finally combine the partial, self-contained solutions into a complete solution. To put it another way: write a function for each single task, organize/bundle functions in libraries/modules, and leverage these functions to solve the problem.
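For example, here is a minimal sketch of that idea; the task and function names are made up for illustration:

    # Divide: one small function per sub-task (in a real solution these would
    # live in a module). The tasks and names are illustrative only.
    function Get-ServerList   { 'SRV01', 'SRV02', 'SRV03' }
    function Test-ServerAlive { param([string]$Name) Test-Connection -ComputerName $Name -Count 1 -Quiet }
    function Export-Report    { param($Data) $Data | Export-Csv -Path "$env:TEMP\report.csv" -NoTypeInformation }

    # Conquer: combine the partial solutions into the complete solution.
    $report = Get-ServerList | ForEach-Object {
        [pscustomobject]@{ Server = $_; Online = Test-ServerAlive -Name $_ }
    }
    Export-Report -Data $report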

The basis for making many sub-solutions work together as a good “working team” is consistency, and micromanagement if you will. Essentially, it’s about establishing a scripting framework that allows for and ensures information flow between the individual tasks. You need to define a set of rules, let’s call it a scripting policy, that covers important details such as:

  • How functions deal with input
  • How functions pass back results (output)
  • How functions handle errors
  • How functions support testing and debugging/troubleshooting scenarios
  • Logging
  • Naming conventions
  • You name it

Again, avoid reinventing the wheel! Be sure to check what your preferred scripting language has to offer with regard to your scripting policy and leverage those features. Blueprint a mandatory function template that incorporates all your rules and use it consistently to embed the actual “payload script code” safely within your scripting framework.
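A hypothetical function template along those lines might look like this; the conventions shown (typed and validated input, $true/$false as output convention, Write-Verbose for logging) are just one possible policy, not a standard:

    function Verb-Noun {
        # Template only - replace Verb-Noun and the payload with the real task.
        [CmdletBinding()]
        param(
            # Input convention: every parameter is typed and validated.
            [Parameter(Mandatory = $true)]
            [ValidateNotNullOrEmpty()]
            [string]$ComputerName
        )

        try {
            Write-Verbose "Verb-Noun: processing '$ComputerName'"

            # --- payload script code goes here ---

            return $true      # Output convention: $true signals success ...
        }
        catch {
            Write-Warning "Verb-Noun failed: $_"
            return $false     # ... and $false signals failure.
        }
    }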

I must admit, though, that establishing such a scripting framework for the divide and conquer approach will bloat your solution. You need to do it anyway. However, you can keep it within reasonable limits if you don’t overdo things!

Expect Failure

While preparing a dish, experienced chefs are distinguished from amateur cooks by frequent tasting. Put simply: the pros expect failure and therefore taste continuously to identify the need for improvement or adjustment as quickly as possible. They retain total control. I highly recommend adopting the chef’s approach as another general scripting principle, in other words: leave nothing to chance.

Given that, according to the D&C principle, you write a function for each individual task, you should do this in fear of losing control, if you will. It’s about micromanagement! Mind every damned detail and ask yourself what could go wrong with it. No twilight zones allowed. Validate incoming information, test for connectivity, whatever. Got it? Better to double-check each detail than to rely on a fortunate series of events.

Always script with testing and debugging scenarios in mind. Take care to be one step ahead and insert, by default, debug messages that output the parameter values a function was given. Someday, trivialities like these will make your (or a co-worker’s) day.
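Here is a minimal sketch of both points, validating input and echoing the received parameter values via Write-Debug; the function and parameter names are made up:

    function Copy-LogFile {
        [CmdletBinding()]
        param(
            # Validate incoming information right at the door.
            [Parameter(Mandatory = $true)]
            [ValidateScript({ Test-Path -Path $_ })]
            [string]$Source,

            [Parameter(Mandatory = $true)]
            [ValidateNotNullOrEmpty()]
            [string]$Destination
        )

        # One debug line per bound parameter; run the function with -Debug
        # to see exactly what it was called with.
        foreach ($p in $PSBoundParameters.GetEnumerator()) {
            Write-Debug "$($p.Key) = '$($p.Value)'"
        }

        Copy-Item -Path $Source -Destination $Destination
    }

    # Copy-LogFile -Source C:\Windows\WindowsUpdate.log -Destination D:\Archive -Debug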

Design for Change

This one is about separating data from logic. Never ever mingle logic and data; this is essentially the key to flexibility and instant reusability. Script logic should only “know” how to process data, while the actual values should come from separate resources like SQL tables, XML/JSON/CSV files, you name it. Don’t even dream of hard-coded values!

With separation of data and logic it’s almost a no-brainer to set up the solution for another environment.
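For example, a minimal sketch of the idea, assuming a hypothetical settings.json next to the script:

    # Hypothetical settings.json content:
    # { "SqlServer": "SQL01", "LogPath": "D:\\Logs", "Environment": "Test" }

    # The logic only knows HOW to use the values; WHICH values apply comes from the file.
    $config = Get-Content -Path "$PSScriptRoot\settings.json" -Raw | ConvertFrom-Json

    Write-Verbose "Running against environment '$($config.Environment)'"
    Test-Connection -ComputerName $config.SqlServer -Count 1 -Quiet

Swap settings.json for a different file and the very same logic serves another environment.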


To conclude, there’s definitely more to tell, but in the end it’s all about thinking end-to-end and micromanaging. Mind the details!

If you think something is missing or see it differently, feel free to contribute your ideas by submitting a comment.

ISESteroids 2.0 is getting off the starting blocks…

April 23, 2015

… and you should give it a try: ISESteroids 2.0

To tell you the truth, up to yesterday I had some kind of prejudice against Tobias Weltner’s ISESteroids: I considered it a PowerShell ISE add-on that mainly addresses a beginner’s needs.

During the 3rd German ‘PowerShell Community Konferenz 2015’ I changed my opinion. While giving his talks, Tobias showcased, in passing, several features of the upcoming release. ISESteroids not only helps you produce better PowerShell solutions, it also brings you up to speed – regardless of whether your level is beginner, advanced, expert, guru, whatever.

For example, imagine you’re challenged to write an advanced function with different parameter sets, mandatory parameters, and some optional parameters. With ISESteroids loaded, you can do things back to front, in other words you start by writing the syntax as you want it to be, such as…

After that you just need to highlight the code, right-click, select the ISESteroids action to create a function, and – voilà – you get a neatly written skeleton for a function that exactly matches the syntax specifications you made.

Another great feature I saw in action was a WPF GUI builder that requires/interacts with Visual Studio.

I could continue listing my notes from memory, but it would still be just the tip of the iceberg. ISESteroids is packed with features you need to discover while working with it. So will I.

Enums in Windows PowerShell Less Than Version 5.0

January 8, 2015

Maybe you’ve noticed that the upcoming version of Windows PowerShell, 5.0, will make Enumerators (Enums) very easy to create with the new enum keyword. With this post I share an approach to creating enums in PowerShell 4.0 and lower as well.

(If you already know what an Enumerator is, you can skip this section.) Enums help you deal with rather small ranges of integer values (each value gets a name) and, even more importantly, they simplify programming robust solutions. Suppose you have to deal with different environments, for example Dev, Test, Acceptance, and Prod, and let’s say that each environment is represented by an int value (thus 0 to 3 represents Dev to Prod). What happens if you assign the value 4 by mistake? For PowerShell it’s fine, because 4 is a valid int value. Therefore the error remains undetected at the scene and – according to Murphy – reveals its dark energy at the worst possible moment. You get the idea, I hope; it’s no fun to narrow down such problems. How do you prevent such a failure? You could mess around with if statements and -lt, -gt, -eq, for example. Or you make use of, guess what, an Enum. If you have an Enum type for the aforementioned environments, PowerShell will refuse to assign a variable of this type any value outside the range 0..3 and throws an error right at the root cause. That’s why I’ve liked using Enums ever since PowerShell 1.0.

In Windows PowerShell 4.0 and below, Enums are created as follows:
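The original code sample isn’t reproduced here, but a minimal sketch of one common approach, compiling the enum on the fly via Add-Type, looks like this (the type name DeploymentStage is just an example):

    # One common approach: compile the enum on the fly from a C# definition.
    # The type name DeploymentStage is just an example.
    Add-Type -TypeDefinition 'public enum DeploymentStage { Dev, Test, Acceptance, Prod }'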

Now, play with it (that’s how I like to learn stuff, btw):
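Again, a sketch based on the DeploymentStage stand-in rather than the original sample:

    [DeploymentStage]::Test                  # Test
    [DeploymentStage]3                       # Prod  (cast from the underlying int value)
    [DeploymentStage]'Acceptance'            # Acceptance  (cast from the name)
    [enum]::GetNames([DeploymentStage])      # Dev, Test, Acceptance, Prod

    [DeploymentStage]$stage = 'Dev'          # strongly typed variable
    $stage -eq [DeploymentStage]::Dev        # True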

Now, let’s get dirty…
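A sketch of what “getting dirty” might look like with the same stand-in type:

    [DeploymentStage]$stage = 'Dev'
    $stage = 4       # throws - 4 is outside the defined range Dev..Prod
    $stage = 'QA'    # throws as well; the error message lists the valid names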

Btw, did you notice the hint within the error message? PowerShell lists the possible values for you.

Hope this helps

Citrix PVS Image Preparation Script for XenApp 7.x Workloads

January 5, 2015

With this post I share a PowerShell script that prepares the master installation of a XenApp 7.x Worker for imaging with Citrix Provisioning Services: Prepare-XenApp7.ps1.

Due to the fact that Citrix has ported its flagship XenApp to the architecture introduced with XenDesktop 5, there’s strictly speaking no need to generalize the PVS vDisk that provides the workload of a XenApp Worker, because it doesn’t contain IMA-related stuff anymore. On the other hand, there’s still room for some optimization steps before putting a XenApp vDisk into production/standard mode. The script automates the following steps (a rough sketch of some of them follows the list):

  • Investigate PVS’ Personality.ini in the root of the system drive in order to determine the disk mode, that is: read-write, read-only, or started from local HD
  • Clear Citrix User Profile Manager’s cache
  • Resync time
  • Update GPO settings
  • Clear network related caches (DNS and ARP)
  • Clear WSUS Client related settings
  • Clear event logs
  • Based on the findings in Step 1, suggest a convenient main action, that is either “Exit” (if we’re in maintenance/private w/ read-write vdisk access), or “Invoke ImagingWizard” (if we started from local HD), or “Invoke XenConvert” (reverse imaging scenario w/ read-only vdisk access)
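The script itself is linked at the end of this post. As a rough sketch of what some of these housekeeping steps boil down to (not the actual Prepare-XenApp7.ps1 code; paths and the WSUS registry values are common defaults that may differ in your environment):

    w32tm /resync                                   # resync time
    gpupdate /force | Out-Null                      # update GPO settings
    ipconfig /flushdns | Out-Null                   # clear DNS cache
    netsh interface ip delete arpcache | Out-Null   # clear ARP cache

    # Reset the WSUS client identity so cloned targets register individually
    Stop-Service -Name wuauserv
    Remove-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate' `
                        -Name SusClientId, SusClientIdValidation -ErrorAction SilentlyContinue

    # Clear all event logs (some logs may refuse to be cleared, hence 2>$null)
    wevtutil el | ForEach-Object { wevtutil cl $_ 2>$null }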

BTW, the script should work for desktop workloads as well but I haven’t tested it so far.

Hope this helps

Latest version on GitHub: Prepare-XenApp7.ps1

How To Backup MS SQL Express Databases? #PowerShell

January 4, 2015

Happy Twenty Fifteen! The first post of the new year deals with a question I am confronted with from time to time: Do you have a script to back up MS SQL Express? Yes, I do. The script requires the SQLPS PowerShell module, which is installed automatically with newer versions of MS SQL Express. Basically, it simplifies the usage of the module’s Backup-SqlDatabase cmdlet:
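The original listing isn’t reproduced here; a minimal sketch of the core of such a script, with instance name and target folder as assumptions, could look like this:

    # Back up every user database of a local SQL Express instance.
    Import-Module SQLPS -DisableNameChecking

    $instance  = ".\SQLEXPRESS"      # assumption: default Express instance name
    $backupDir = "D:\Backup\SQL"     # assumption: existing target folder

    Get-ChildItem "SQLSERVER:\SQL\$env:COMPUTERNAME\SQLEXPRESS\Databases" |
        ForEach-Object {
            $file = Join-Path $backupDir ("{0}_{1:yyyyMMdd_HHmmss}.bak" -f $_.Name, (Get-Date))
            Backup-SqlDatabase -ServerInstance $instance -Database $_.Name -BackupFile $file
        }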

Hope this helps

[Updated] How To Get The Clientname Within A Logon Script? #PowerShell #RemoteDesktopServices #CitrixXenApp

December 2, 2014

This morning, a workmate sought my support regarding an issue I wasn’t aware of: on a Windows Server 2008 R2 Remote Desktop Session Host you can’t leverage the CLIENTNAME environment variable within a logon script. I stumbled upon a post covering the same issue and decided to port its VBScript-based solution to Windows PowerShell; here’s the result:
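What follows is a sketch of the approach, reading the client name from the per-session ‘Volatile Environment’ registry key; it is not necessarily identical to the published script:

    # During logon the client name is available in the volatile, per-session
    # registry key, even when the CLIENTNAME environment variable isn't populated.
    $sessionId  = (Get-Process -Id $PID).SessionId
    $clientName = (Get-ItemProperty -Path "HKCU:\Volatile Environment\$sessionId" -Name CLIENTNAME).CLIENTNAME
    $clientName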

Hope this helps

What was, is, and will be new in Windows PowerShell?

October 12, 2014

If you need an overview of what’s new in the upcoming version, and of what was new in the current and former versions of Windows PowerShell from version 3.0 onwards, I recommend the Microsoft TechNet article What’s New In PowerShell.

[Updated] Citrix PVS: Sync Local vDisk Store #PowerShell

September 29, 2014

The other evening, I updated the Sync-PvsLocalStore.ps1 script to support multiple Citrix Provisioning Services (PVS) vDisks, following a customer request.

The purpose of this script is to copy, or rather sync, changed and new Versions of one or more given vDisks between the local Stores within a Farm of two or more PVS servers. Basically, you can think of Sync-PvsLocalStore.ps1 as a wrapper for Robocopy.exe /MIR with some extra brains on top: it is able to detect a Maintenance Version of a vDisk and exclude it from the copy process, meaning that the script only spreads out the latest Production and Test versions of a vDisk and doesn’t bloat the stores with Maintenance versions, which are typically work in progress.

The usage is very simple. Look at this example:
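A hypothetical invocation could look like the one below; the parameter names are assumptions derived from the description that follows, so check the script’s comment-based help for the authoritative syntax:

    .\Sync-PvsLocalStore.ps1 -MasterServer 'PVS01' `
                             -MemberServer 'PVS02','PVS03' `
                             -StorePath 'D:\Store' `
                             -vDiskName 'XA7-Worker','XA7-Desktop' `
                             -SiteName 'Site1' `
                             -StoreName 'Store1'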

You need to specify one ‘MasterServer’, one or more ‘MemberServer’, the path of the Store (which needs to be the same on each server), one or more vDisk names, and the names of the corresponding Site and Store. The latter two help the script to identify any Maintenance Version.

Sync-PvsLocalStore.ps1 needs to be run on a system where the PVS Console or rather its command-line interface MCLI.EXE is installed. (The script utilizes MCLI.EXE because to date there’s no advantage in using the PVS PowerShell Snapin.)

Design PowerShell Scripts For Feedback

September 11, 2014

What is the “secret” ingredient of building rather sophisticated solutions with Windows PowerShell?

If you ask me, it’s about applying the principle of Feedback Loops. Maybe you’ve heard or read of Feedback Loops in the context of agile development practices or DevOps. In a nutshell, from a DevOps perspective Feedback Loops stand for communicating early, often, and openly, with a view to identifying the need, or the room, for improvement as quickly as possible. It’s like frequent tasting while preparing a dish: experienced chefs do it because they know that late feedback is worth nothing, and that without it even a real first-class dinner would be nothing more than the result of a fortunate series of events.

How can a PowerShell script be designed for Feedback Loops? I’ll show you my personal approach. Please note that it makes no claims of being the one and only.

First of all, starting from a scripting challenge that includes multiple actions, you need to apply the divide and conquer strategy; that is, for one thing you have to identify each individual action that is necessary to succeed, and for another you have to think of each individual action as an independent “battle”. Solve each task detached from the overall context, i.e. create fully functional partial solutions that receive input values through parameters and that at least return $true in case of success (and otherwise $false). Once you’re done isolating the partial solutions, you’re ready to “conquer”, that is, to put the “lego bricks” together. Now the band begins to play…

What you need now is a sort of meta-script that controls the workflow, if you like. My approach, in its very basic form, looks like this:
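A minimal, self-contained sketch of that meta-script; the step names and placeholder functions are made up for illustration:

    # Placeholder partial solutions; in a real solution these live in a module
    # and each returns $true on success and $false on failure.
    function Test-Prerequisites { $true }
    function New-TargetFolder   { $true }
    function Copy-Payload       { $false }   # pretend this step fails

    # Friendly names of the sub-tasks, in execution order.
    $subTask = 'CheckPrerequisites', 'CreateTargetFolder', 'CopyPayload'

    # switch walks the array in order; 'break' leaves the switch entirely,
    # so processing stops at the first failed step.
    switch ($subTask) {
        'CheckPrerequisites' { if (-not (Test-Prerequisites)) { Write-Error "Step '$_' failed."; break } }
        'CreateTargetFolder' { if (-not (New-TargetFolder))   { Write-Error "Step '$_' failed."; break } }
        'CopyPayload'        { if (-not (Copy-Payload))       { Write-Error "Step '$_' failed."; break } }
    }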

The code sample shows the basic concept of how to “misuse” switch as a simple workflow engine that terminates processing as soon as something goes wrong. Initially, I give each step a friendly name and put these names into an array ($subTask). Then I pass that collection of names as the “test value” to switch. Switch evaluates each item (name) in the order in which it was defined, i.e. for each name switch invokes the appropriate action.

The advantages of this approach really come to light when, for example, you need to:

  • repeat the entire workflow before raising an error,
  • repeat single steps individually before raising an error, or
  • go into reverse in case of a fatal condition (rollback).

For now, I leave you alone with your thoughts.

Why Merge Hash Tables #PowerShell

September 7, 2014

As part of building an automation framework, you typically face the challenge of separating data from logic, as this is the key to an agile and re-usable solution. The automation logic should “only” know how to process data within a workflow; it shouldn’t know any (hard-coded) values itself. Instead, the logic should get its data values from separate resources like configuration files, the registry, databases, whatever. With separation of data and logic it’s almost a no-brainer to set up the solution for another environment, not to mention maintenance, which becomes as easy as pushing DEV into the git repo and pulling the changes into TEST and PROD, for example.

I personally like to maintain data in hash tables. To be more precise, I usually have, for example, a config-global.ps1 and a config-<abbreviation>.ps1 file per environment that (1) adds environment-specific settings and (2) overrides baseline settings from the global config. These ps1 files only contain a (usually nested) hash table. In order to import the data, I dot-source both the global config and the environment config. The next step is the creation of a single data resource from both hash tables, meaning that afterwards I’m able to access the keys and their values through a single variable like $ConfigData.

Merging hash tables is straightforward as long as each hash table contains different keys: it’s simply $hashTable1 + $hashTable2. In case of duplicate keys this approach fails, because PowerShell isn’t able to resolve the conflict by deciding which key takes precedence. That’s why I wrote the function Merge-Hashtables (get your copy from the TechNet Gallery). Merge-Hashtables creates a single hash table from two given hash tables. The function considers the first hash table as the baseline, whose key values are allowed to be overridden by the second hash table. Internally, Merge-Hashtables uses two sub-functions: the first detects duplicates and, for each conflict, adds the key from the second hash table to the resulting hash table; the second adds the additional keys from the second hash table to the resulting hash table. Both sub-functions support nested hash tables through recursive calls.
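A hypothetical usage sketch; in practice the two hash tables come from the dot-sourced config files described above, and the exact invocation may differ from the published Merge-Hashtables function:

    # Hypothetical example data standing in for config-global.ps1 and an
    # environment-specific config file.
    $globalConfig = @{ Sql = @{ Server = 'SQL-GLOBAL'; Port = 1433 }; LogPath = 'D:\Logs' }
    $testConfig   = @{ Sql = @{ Server = 'SQL-TEST' } }

    $ConfigData = Merge-Hashtables $globalConfig $testConfig

    $ConfigData.Sql.Server   # 'SQL-TEST'  - duplicate key, the second hash table wins
    $ConfigData.Sql.Port     # 1433        - baseline value kept
    $ConfigData.LogPath      # 'D:\Logs'   - baseline value kept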

Hope this helps.