Category Archives: Scripting Technique


[Updated] Citrix XenServer 7: Unattended Installation Pitfalls

Citrix XenServer 7.x still supports the previous approach for Network Boot Installation of XenServer hosts that is based on PXE, TFTP, HTTP or NFS, plus some post-install shell scripting. However, if you adapt your existing XenServer 6.x deployment system you may face some challenges due to changes that I didn’t find explicitly mentioned in the installation guide.

Given that you have experience with unattended XenServer 6.x deployments, I just want to quickly share two major issues that prevented the installation process from succeeding:

1. Carefully double-check the files in the pxelinux.cfg directory, especially the append statement: it must contain the keyword “install”. Example:
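A minimal sketch of such a pxelinux.cfg entry (server address, file names and boot options are placeholder assumptions taken from a typical setup; the point is the install keyword in the append line):

default xenserver
label xenserver
  kernel mboot.c32
  # the keyword "install" must be present in the append statement
  append xenserver/xen.gz dom0_max_vcpus=1-2 dom0_mem=1024M,max:1024M --- xenserver/vmlinuz console=tty0 answerfile=http://<server>/answerfile.xml install --- xenserver/install.img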

2. Unlike previous versions, XenServer 7.x won’t execute a “XenServer First Boot” shell script placed in /etc/rc3.d (even with chmod +x set). Although it will execute a shell script in /etc/init.d that has a proper symbolic link in /etc/rc3.d, you should be aware that XenServer 7 uses systemd for bootup (instead of the Sys V style). Therefore I recommend embracing systemd (for an example see CTX217700).

Please note that systemd might terminate a long-running “XenServer First Boot” script due to exceeding a timeout limit. A simple approach to avoid that is to set TimeoutStartSec to “infinity” as follows:
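A minimal sketch of such a unit (the unit name and script path are assumptions; note TimeoutStartSec and the Type setting referenced below):

[Unit]
Description=XenServer First Boot script

[Service]
Type=simple
ExecStart=/usr/local/bin/firstboot.sh
# "infinity" disables systemd's start-up timeout for this unit
TimeoutStartSec=infinity

[Install]
WantedBy=multi-user.target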

A more sophisticated approach to avoid script termination is to set the systemd unit’s type for “XenServer First Boot” to “notify” (in the code above it is “simple”). The notify type requires you to add notification commands to the XenServer First Boot script; systemd-notify seems to be a proper command for that purpose, but I haven’t tested it as I opted for the infinite timeout approach.

Hope this helps



Scripting Street Knowledge


Whenever it comes to building a scripted IT Automation solution that goes beyond the scope of a few commands in a single script file, you need more than sufficient technical knowledge and scripting skills: apart from that, you need to approach the task in a way that helps you resolve complexity and plan ahead. With this post I want to raise your awareness of some General Principles for the Design of Scripted IT Automation Solutions that help you master the situation. Here we go…

Nip it in the bud!

Don’t underestimate the beginnings. A quick-and-dirty approach isn’t a bad way per se; up to a certain point it’s a good way to go (at least from a benefit-cost perspective). Beyond that “certain point”, though, it can at worst lead to chaos: if you follow the quick-and-dirty approach you’ll succeed to some degree and end up fixing, updating, adapting, bolting on new features and partly rewriting your solution over time. The day will dawn when you want to rebuild the entire solution from scratch because it has evolved into something hardly manageable. It’s difficult to walk in mud, so to speak. As things usually turn out there’s no time/budget for tasks like this, thus you must drink as you have brewed.

And the moral of this story: never ever default to quick-and-dirty to begin with. It’s very important, possibly even the most important thing, to mind the early stages, because later there will hardly be more advantageous moments to put things (back) on the right track. So, be sure to make time for thinking and planning, and think it through to the end! Else you might draw the short straw.

Whenever you’re confronted with a new scripting challenge, howsoever minor it seems, ask yourself:

  1. Is the requested solution a feature of the scripting language?
  2. Have co-workers or I ever created a solution like the requested one?
  3. Could co-workers or I possibly re-use the solution or parts thereof?

Don’t get me wrong regarding the first question, but quite often people tend to rebuild existing commands or features due to a lack of knowledge or preparation time. So, be sure to make time for verifying that you’re really dealing with something that doesn’t exist yet!

If the answer to the second question is “Yes” you should definitely figure out whether it’s worth the effort to adopt and adapt that solution. Here, likewise, the point is that you don’t want to reinvent the wheel.

If the answer to the third question is “Yes” you should by no means opt for quick-and-dirty; design the solution for repeated application, or rather for reusability.

As time goes on, you’ll get used to re-using your own work as partial solutions over and over again. At the end of the day you’ll realize that quick-and-dirty would hardly ever have been a suitable approach.

Apart from the above-mentioned questions, you should try to get to the bottom of the requested automation solution. Too often it turns out that the original request covered just the tip of the iceberg rather than the big picture. The big picture is exactly what you need, so be prepared to clarify it. Ideally, you carry out a scoping session to get the big picture and write down a scope statement that exactly defines the requested scripting solution.

Divide and conquer

Facing a more sophisticated scripting challenge raises the question of how to tackle the task from a scripting perspective. How do you handle complexity? In computer science, there is an algorithm design paradigm called “divide and conquer” (D&C) that works by recursively breaking a given problem down into sub-problems, solving the (plain) sub-problems, and combining those solutions to solve the original (complex) problem. I highly recommend adopting the D&C approach as a general scripting principle: break a scripting challenge down into as many separate scripting tasks as possible, solve these scripting tasks individually, and finally combine the partial, self-contained solutions into a complete solution. To put it another way: write a function for each single task, organize/bundle the functions in libraries/modules, and leverage these functions to solve the problem.

The basis for making many sub-solutions work together as a good “working team” is consistency and, if you will, micromanagement. Essentially, it’s about establishing a scripting framework that allows for and ensures information flow between the individual tasks. You need to define a set of rules, let’s call it a scripting policy, that covers important details such as:

  • How functions deal with input
  • How functions pass back results (output)
  • How functions handle errors
  • How functions support testing and debugging/troubleshooting scenarios
  • Logging
  • Naming conventions
  • You name it

Again, avoid reinventing the wheel! Be sure to check what your preferred scripting language has to offer with regard to your scripting policy and leverage these features. Blueprint a mandatory function template that incorporates all your rules and use it consistently to embed the effective “payload script code” safely within your scripting framework.
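Purely as an illustration (the rules themselves are yours to define), such a PowerShell function template might look like this:

function Verb-Noun {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory = $true)]
        [ValidateNotNullOrEmpty()]
        [string]$InputObject
    )
    # Policy rule: every function reports the parameter values it received
    Write-Debug "$($MyInvocation.MyCommand.Name): InputObject = '$InputObject'"
    try {
        # --- payload script code goes here ---
    }
    catch {
        # Policy rule: uniform error handling
        Write-Error "$($MyInvocation.MyCommand.Name): $_"
    }
}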

I must admit, though, that establishing such a scripting framework for the divide and conquer approach will add bulk to your solution. You need to do it anyway. However, you can keep it within reasonable limits if you don’t overdo things!

Expect Failure

While preparing a dish, experienced chefs are distinguished from amateur cooks by frequent tasting. Put simply: the pros expect failure and therefore continuously taste in order to identify a need for improvement or adjustment as quickly as possible. They retain total control. I highly recommend adopting the chef’s approach as another general scripting principle; in other words, leave nothing to chance.

Given that, according to the D&C principle, you write a function for each individual task, you should do this in fear of losing control, if you will. It’s about micromanagement! Mind every damned detail and ask yourself what could go wrong with it. No twilight zones allowed. Validate incoming information, test for connectivity, whatever. Got it? Better to double-check each detail than rely on a fortunate series of events.

Always script with testing and debugging scenarios in mind. Take care to be one step ahead and insert some debug messages by default, for example to output the parameter values that were passed to a function. Someday, trivialities like these will make your (or a co-worker’s) day.

Design for Change

This one is about separating data from logic. Never ever mingle logic and data. This is essentially the key to flexibility and instant reusability. Script logic should only “know” how to process data, while the actual values should come from separate resources like SQL tables, XML/JSON/CSV files, you name it. Don’t even dream of hard-coded values!

With separation of data and logic it’s almost a no-brainer to set up the solution for another environment.
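A tiny, purely illustrative sketch of the idea (file name and keys are made up): the logic only knows how to use the values, while the values themselves live in a config file.

# config-dev.json contains, for example: { "SqlServer": "DEVSQL01", "LogPath": "D:\\Logs" }
$Config = Get-Content -Path .\config-dev.json -Raw | ConvertFrom-Json

# The logic references $Config instead of hard-coded literals
Write-Output "Connecting to $($Config.SqlServer), logging to $($Config.LogPath) ..."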

Summary

To conclude: there’s definitely more to tell, but in the end it’s all about thinking end-to-end and micromanaging. Mind the details!

If you’re missing something or see things differently, feel free to contribute your ideas by submitting a comment.



Enums in Windows PowerShell Less Than Version 5.0

Maybe you’ve noticed that the upcoming version of Windows PowerShell, 5.0, will make Enumerators (Enums) very easy to create with the new enum keyword. With this post I share an approach to create enums in PowerShell 4.0 and lower as well.

(If you know what an Enum is you can skip this section.) Enums help you deal with rather small ranges of integer values (each value gets a name) and, even more importantly, they simplify programming robust solutions. Put the case that you have to deal with different environments, for example Dev, Test, Acceptance, and Prod, and let’s say that each environment is represented by an int value (thus 0 to 3 represents Dev to Prod). What happens if you assign the value 4 by mistake? For PowerShell it’s fine, because 4 is a valid int value. Therefore, this error will remain undetected at the scene and, according to Murphy, reveal its dark energy at the worst possible moment. You get the idea, I hope; it’s no fun to narrow down such problems. How do you prevent such failure? You could mess around with if statements and -lt, -gt, -eq, for example. Or you make use of, guess what, an Enum. If you have an Enum type for the afore-mentioned environments, PowerShell will refuse to assign a variable of this type any value outside the scope 0..3 and throws an error right at the root cause. That’s why I’ve liked to use Enums ever since PowerShell 1.0.

In Windows PowerShell 4.0 and below, Enums are created as follows:
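A minimal sketch (the type name EnvType is made up): compile a small C# enum definition on the fly with Add-Type.

Add-Type -TypeDefinition @"
public enum EnvType
{
    Dev,        // 0
    Test,       // 1
    Acceptance, // 2
    Prod        // 3
}
"@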

Now, play with it (that’s how I like to learn stuff, btw):
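Sticking with the hypothetical EnvType from above:

[EnvType]$Stage = 'Test'        # constrain the variable to the enum type
$Stage                          # Test
[int]$Stage                     # 1
[Enum]::GetValues([EnvType])    # Dev, Test, Acceptance, Prod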

Now, let’s get dirty…
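For example, assigning a value that isn’t a defined enumerator (still the hypothetical EnvType):

[EnvType]$Stage = 'Production'
# PowerShell refuses the assignment and throws an error whose message
# lists the valid enumerator names: Dev, Test, Acceptance, Prod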

Btw, did you notice the hint within the error message? PowerShell lists the possible values for you.

Hope this helps



Why Merge Hash Tables #PowerShell

As part of building an automation framework, you typically face the challenge of separating data from logic, as this is the key to an agile and re-usable solution. The automation logic by itself should “only” know how to process data within a workflow; it shouldn’t know any (hard-coded) values. Instead, the logic should get its data values from separate resources like configuration files, the registry, databases, whatever. With separation of data and logic it’s almost a no-brainer to set up the solution for another environment, not to mention maintenance that is as easy as pushing DEV into the git repo and pulling the changes into TEST and PROD, for example.

I personally like to maintain data in hash tables. To be more precise, I usually have, for example, a config-global.ps1 and one config-<abbreviation>.ps1 file per environment that 1. adds specific environment settings and 2. overrides baseline settings from the global config. These ps1 files only contain a (usually nested) hash table. In order to import the data I dot-source both the global config and the environment config. The next step is the creation of a single data resource from both hash tables, meaning that afterwards I’m able to access the keys and their values through a single variable like $ConfigData.

Merging hash tables is straightforward as long as each hash table contains different keys: it’s simply $hashTable1 + $hashTable2. In case of duplicate keys this approach fails because PowerShell isn’t able to resolve the conflict by deciding which key takes precedence. That’s why I wrote the function Merge-Hashtables (get your copy from the TechNet Gallery). Merge-Hashtables creates a single hash table from two given hash tables. The function considers the first hash table the baseline and allows its key values to be overridden by the second hash table. Internally, Merge-Hashtables uses two sub-functions. The first sub-function detects duplicates and, for each conflict, adds the key with the value from the second hash table to the resulting hash table. The second sub-function adds additional keys from the second hash table to the resulting hash table. Both sub-functions are designed to support nested hash tables through recursive calls.
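As a usage illustration only (file and variable names are made up, and the parameter usage of the published function may differ):

# Each config file defines a (usually nested) hash table, e.g. $GlobalConfig / $EnvConfig
. .\config-global.ps1    # baseline settings
. .\config-dev.ps1       # environment-specific settings

# Merge both; keys from the second hash table override the baseline on conflicts
$ConfigData = Merge-Hashtables $GlobalConfig $EnvConfig

$ConfigData.Sql.Server   # access all settings through a single variable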

Hope this helps.



Simple SQL Scripting Framework #PowerShell

With this post I share my approach to facilitating the re-use of SQL commands within PowerShell scripts. You should continue reading if you often deal with SQL. And even if you’re not about to use PowerShell scripts against SQL databases you can take some inspiration on how to build a smart automation solution.

My solution uses a PowerShell function Invoke-SQL and a hash table that contains generalized SQL commands. First things first…

In order to issue SQL command text against an ODBC database connection I prefer my function Invoke-SQL. The function accepts either an existing OdbcConnection object or a connection string from which it creates a connection on the fly. By default the function returns $true if the execution of the given SQL statement succeeded. With the -PassThru switch the function loads the results into a DataTable object and returns it for further processing. I uploaded Invoke-SQL to the Microsoft TechNet Gallery. Get your copy from there.

Before I proceed to the SqlCommand hash table let me explain why you need it. The Invoke-SQL example below shows how to pass a simple SELECT statement to an existing ODBC connection ($DBConnection) and save the query’s result into the $DBResult variable:
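A sketch of that scenario (the table name is made up; -Connection and -CommandText stand in for whatever parameter names the published function actually uses):

$DBResult = Invoke-SQL -Connection $DBConnection -CommandText 'SELECT Name FROM Host' -PassThru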

So much for the simple scenario. As you know, SQL command texts can be far more complex than SELECT-foo-FROM-bar and often span multiple lines. In PowerShell it is good practice to use here-strings to deal with multi-line SQL commands. Take a look at the next example, which shows the concept (meaning that you shouldn’t care about the content of the SQL query):
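A sketch with a made-up query; what matters is the multi-line here-string and the hard-coded Location.ID value:

$SelectVMM = @"
SELECT   VMM.Name,
         VMM.Version,
         Location.Name AS LocationName
FROM     VMM
         INNER JOIN Location ON VMM.LocationID = Location.ID
WHERE    Location.ID = 42
ORDER BY VMM.Name
"@

$DBResult = Invoke-SQL -Connection $DBConnection -CommandText $SelectVMM -PassThru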

OK. Now put the case that you have 15, 20 or more of such rather sophisticated SQL commands and you have to re-use them over and over, but with different values. Take a look at the value for Location.ID in the previous example: it is hard-coded. Therefore, in order to re-use the $SelectVMM here-string you’d need to fall back on the copy-paste-align method (which is error-prone and bloats scripts with redundant code). Or is there another, better, smarter way? Yes, there is. Take a look at the slightly altered example below:
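Continuing the made-up query from above:

$SelectVMM = @"
SELECT   VMM.Name,
         VMM.Version
FROM     VMM
         INNER JOIN Location ON VMM.LocationID = Location.ID
WHERE    Location.ID = {0}
"@

# Re-use the generalized here-string by filling in a specific value
$SelectVMMLocation42 = $SelectVMM -f 42
$DBResult = Invoke-SQL -Connection $DBConnection -CommandText $SelectVMMLocation42 -PassThru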

As you can see, I replaced the hard-coded value in the here-string with the placeholder {0}. Afterwards, in order to re-use the here-string, I used PowerShell’s format operator (-f) to replace that placeholder with a specific value and saved the resulting command text into a new variable. That’s nice. There’s still room for improvement though. Finally, I bring that SqlCommand hash table into play…

Basically, the hash table is a collection of named here-strings each containing a generalized SQL command text like above. It could look like this for example:
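A sketch with made-up commands:

$SqlCommand = @{

    SelectVMM = @"
SELECT   VMM.Name,
         VMM.Version
FROM     VMM
         INNER JOIN Location ON VMM.LocationID = Location.ID
WHERE    Location.ID = {0}
"@

    SelectHostsByLocation = @"
SELECT   Host.Name
FROM     Host
WHERE    Host.LocationID = {0}
ORDER BY Host.Name
"@

}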

With that SqlCommand hash table in memory SQL scripting is as easy as:
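For example (still with the made-up commands and assumed parameter names from above):

$DBResult = Invoke-SQL -Connection $DBConnection -CommandText ($SqlCommand.SelectVMM -f 42) -PassThru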

Key take-away:

In order to re-use SQL commands, create a generalized here-string for each of them and provide them through a hash table.

Hope this helps.



Expand %varname%

Hello again! Today I will share a small PowerShell function, ExpandEnvironmentVariables, that I use to expand environment variables written in legacy syntax (%varname%). For instance, the function is useful if you read values from configuration text files that may contain variable names.

ExpandEnvironmentVariables supports PowerShell 1 and 2. It is designed to accept pipeline input and uses the ExpandEnvironmentVariables() method of the System.Environment class to resolve variables. The function also supports nested variable references.
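The published function may differ; a minimal sketch of the idea (the sample path is arbitrary) looks like this:

function ExpandEnvironmentVariables {
    process {
        $text = $_
        do {
            # Expand %varname% references; repeat to resolve nested references
            $expanded = [System.Environment]::ExpandEnvironmentVariables($text)
            $done = ($expanded -eq $text)
            $text = $expanded
        } until ($done)
        $text
    }
}

'%APPDATA%\MyTool\settings.ini' | ExpandEnvironmentVariables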



What Is A PowerShell Loop Label?

One of these days, while analyzing a PowerShell script, I noticed something strange that I didn’t see before: a so-called Loop Label.

Loop labels allow you to name a looping statement (like for, foreach, or while) and to specify that name with a continue or break statement in order to instruct PowerShell to skip the rest of the current iteration (continue) or completely halt the execution (break) of that specific loop.

When does a loop label make sense? Loop labels are only useful with nested loops, namely when you want to halt (or continue) the outer loop from within the inner loop.

Take a look at the code below. The inner loop will be halted as soon as the inner loop counter $j has reached the value 2:
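A sketch of that situation:

for ($i = 1; $i -le 3; $i++) {
    for ($j = 1; $j -le 3; $j++) {
        if ($j -eq 2) { break }   # break only halts the inner loop
        "i=$i j=$j"
    }
}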

If you want to halt execution of the outer loop instead you need to define and specify a loop label as follows:
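Again as a sketch, with a label named outer:

:outer for ($i = 1; $i -le 3; $i++) {
    for ($j = 1; $j -le 3; $j++) {
        if ($j -eq 2) { break outer }   # halts the labeled (outer) loop entirely
        "i=$i j=$j"
    }
}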



grep for PowerShell

Hello again. This time I will share a small function that resides in my PowerShell profile script: grep. What is grep? grep is a text search utility originally written for Unix. The command name is an abbreviation for “global regular expression print”. The original grep – and my PowerShell grep function as well – searches files for lines matching a given regular expression and displays the matches in standard output.

Here comes the ready for use function…
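This is not the original profile function but a minimal sketch of the same idea, wrapping Select-String:

function grep {
    param(
        [Parameter(Mandatory = $true)]
        [string]$Pattern,          # regular expression to search for
        [string[]]$Path = '*.*'    # file(s) to search; wildcards are allowed
    )
    Select-String -Path $Path -Pattern $Pattern
}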

Actually, the grep function simplifies a command line like the one below:
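For instance (path and pattern are arbitrary examples):

Select-String -Path C:\Scripts\*.ps1 -Pattern 'Invoke-SQL'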

Using grep you just need to type this:
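With the sketch above, that becomes:

grep 'Invoke-SQL' C:\Scripts\*.ps1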

Ah, I almost forgot to mention that the Select-String cmdlet is PowerShell’s grep.



Copy Con For PowerShell?

Hello again! This time I share a quick tip.

MS-DOS and cmd.exe power users surely remember how easy it was to quickly write a script (or any other text file) from the command line without an editor. It was possible to copy the CON device (which stands for Console, meaning keyboard/input and monitor/output as one device) to a file, for example COPY CON TEST.CMD. Once a COPY CON command had been invoked, you could type whatever you wanted, even multiple lines. When finished, you saved the file and returned to the prompt by pressing CTRL-Z (or F6), which produced ^Z (end of file), and then pressing Return.

In PowerShell there’s a similar approach, but it doesn’t correspond one-to-one. The trick takes advantage of a single-quoted here-string to create a script:
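A sketch of the idea (script content and file name are arbitrary); the single quotes make sure nothing gets expanded while the text is written to disk:

@'
Get-Process |
    Sort-Object -Property CPU -Descending |
    Select-Object -First 5
'@ | Set-Content -Path .\Get-TopProcess.ps1

.\Get-TopProcess.ps1   # run the freshly created script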

Actually, here-strings are used to embed more or less large text blocks inline in scripts. Here-strings start with “@” plus a double or single quote followed by a newline, and end with a newline followed by the matching quote plus “@” at the beginning of a line.



Citrix Provisioning Services 5.1 PowerShell Interface, Part III

Update 2016-04-29: as of Provisioning Services 7.7, the PVS API provides a standard object-oriented PowerShell interface!

Hello again and welcome to the third part of my blog post series dealing with McliPSSnapIn – the PowerShell SnapIn to automate the Citrix Provisioning Services (PVS). Hope you enjoyed the first two parts and played with the McliPSSnapIn cmdlets meanwhile. This time you’ll learn how to handle the output of the Mcli-Get cmdlet meaning how to convert its structured text to an object (or an array of objects).

For example, suppose you want to display all vDisks sorted by size in descending order. Intuitively, Mcli-Get DiskInfo and Sort-Object seem well-suited to get this job done.
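Presumably something along these lines:

Mcli-Get DiskInfo | Sort-Object -Property diskSize -Descending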

Worse luck! This will end up in a mess (which I don’t want to show here). As formerly mentioned, McliPSSnapIn‘s cmdlets don’t deliver objects but rather text information just in the fashion of a legacy console utility. This is not much fun.

Since PowerShell supports the creation of custom objects you can convert Mcli-Get‘s output into your own “homebrew” objects, strictly speaking into Device, Disk, Site, Store, Farm, whatever PVS objects. Look at this working solution:
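What follows is a reconstruction sketch of such a solution (not the original listing); it already uses the switch -regex technique and the comma operator that are explained in the rest of this post:

function Get-DiskInfo {
    $diskInfoList = @()
    $currentObject = $null

    switch -regex (Mcli-Get DiskInfo) {
        '^Record\s#\d+$' {
            # A new information block begins: start a new custom object
            if ($currentObject) { $diskInfoList += $currentObject }
            $currentObject = New-Object -TypeName PSObject
        }
        '^\s{2}(?<Name>\w+):\s(?<Value>.*)' {
            # An indented "name: value" pair: add it as a property to the current object
            if ($currentObject) {
                $currentObject | Add-Member -MemberType NoteProperty -Name $Matches.Name -Value $Matches.Value
            }
        }
        default { }   # ignore the summary and empty lines
    }

    if ($currentObject) { $diskInfoList += $currentObject }

    # The comma operator keeps a single result from being unrolled (see below)
    return ,$diskInfoList
}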

Compared to the task of creating an object and assigning property names and values to it, it is actually more challenging to parse the text-based information in order to identify the information that belongs to each single object.

You’re not scared now, are you? 😉 Let’s take one thing at a time – according to Louis X’s divide et impera strategy.

First, let’s analyze the text returned by Mcli-Get DiskInfo on my test system.

The cmdlet returned the “properties” of three vDisks. The text is structured nicely in order to support human readability:

  • The output begins with a short summary, followed by an information block for each returned record (object).
  • Each information block is introduced with “Record #nnn”, where nnn stands for a counter.
  • Furthermore, empty lines separate the information blocks from each other.
  • The lines that contain the actual data for the records (objects) are indented “name: value” pairs like “diskSize: 41940702720”.

Based on these findings you’re able to design an algorithm, as stated above, to convert the cmdlet’s output to an array of objects.

The Power Of “switch -regex” Unveiled

Naturally you need to process the returned text data in a loop. At first glance it seems perfectly possible to use a foreach loop as follows:
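A skeleton sketch:

foreach ($line in (Mcli-Get DiskInfo)) {
    # decide per line: new record, "name: value" pair, or superfluous text
}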

Inside each run it is important to distinguish between three cases:

  • A new information block begins (initiated by “Record #nnn“)
  • Information for the current record (object) in the form of “name: value
  • Superfluous information (as for instance the summary)

I consider this a classic case for a switch statement with the -regex option in order to identify a match with “Record #nnn” and a match with “name: value“.

You know what? switch is able to process a pipeline (an array of elements) itself, meaning you can forget foreach and let switch process Mcli-Get‘s output directly:
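A skeleton sketch of that idea (the actual object creation is left out here):

switch -regex (Mcli-Get DiskInfo) {
    '^Record\s#\d+$' {
        # a new information block begins
    }
    '^\s{2}(?<Name>\w+):\s(?<Value>.*)' {
        # a "name: value" pair that belongs to the current record
    }
    default {
        # superfluous information, e.g. the summary
    }
}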

According to PowerShell’s help, the switch statement’s -regex option “indicates that the match clause, if a string, is treated as a regex string.” So, you need two regex strings: one to identify a “Record #nnn” line and one to identify a “name: value” pair. I will explain my regex statements below.

This regex string matches “Record #nnn“: ^Record\s#\d+$

  • ^ – matches the beginning of the line
  • Record – matches the literal text “Record”
  • \s – matches a single white-space character
  • # – matches the literal character “#”
  • \d – matches any decimal digit
  • + – matches one or more instances of the preceding element
  • $ – matches the end of the line

This regex string matches “name: value”: ^\s{2}(?<Name>\w+):\s(?<Value>.*)

  • ^ – matches the beginning of the line
  • \s{2} – matches exactly two white-space characters
  • (?<Name>\w+) – matches consecutive word characters up to the next non-word character and assigns the result to $Matches.Name
  • : – matches the literal character “:”
  • \s – matches a single white-space character
  • (?<Value>.*) – matches the rest of the line and assigns the result to $Matches.Value

And what is $Matches? PowerShell places the resulting matches automatically in the $Matches hashtable, where $Matches[0] always holds the entire match. Furthermore, PowerShell allows you to specify (optionally named) capture groups inside a regex string. These captures are assigned to separate elements of the $Matches hashtable, either numbered or named. The example below shows what I’m trying to explain:
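For instance, matching one of the indented lines from the DiskInfo output against the second regex string:

'  diskSize: 41940702720' -match '^\s{2}(?<Name>\w+):\s(?<Value>.*)'   # True

$Matches[0]       # the entire match (including the two leading spaces)
$Matches.Name     # named capture:  diskSize
$Matches.Value    # named capture:  41940702720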

This is so cool! Did I already mention that I love PowerShell?

How To Avoid Flattening Of Arrays?

I need to tell you that the return value of the above-stated Get-DiskInfo function messed me around for a while.

The function should always return an array regardless of the number of objects found by Mcli-Get. The problem I was running into was that the function didn’t return an array when the object count was 1. Instead it returned the object directly.

In order to prevent PowerShell from unrolling or flattening the array when it contains only one element, I used the comma operator to force the function to return the array as is:
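In the function sketch above, that boils down to this return statement:

# The leading comma wraps the array into a one-element array; PowerShell unrolls
# that outer array on return, so even a single record comes back as an array.
return ,$diskInfoList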

Ah, I forgot to show the Get-DiskInfo function in action:
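For example, sorting the vDisks by size in descending order (the parsed property values are strings, hence the cast to [long] for a numeric sort):

Get-DiskInfo | Sort-Object -Property { [long]$_.diskSize } -Descending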

The Get-DiskInfo function is a very basic example of creating custom objects in PowerShell and should therefore be considered a kick-start.

I hope it inspired you to dive deeper at Christmas time.