Automation-as-a-Commodity

ISESteroids 2.0 is getting off the starting blocks…

April 23, 2015

… and you should give it a try: ISESteroids 2.0

To tell you the truth, up until yesterday I had some kind of prejudice against Tobias Weltner's ISESteroids: I considered it a PowerShell ISE add-on that rather addresses a beginner's needs.

During the 3rd German 'PowerShell Community Konferenz 2015' I changed my opinion. While giving his talks, Tobias showcased, in passing, several features of the upcoming release. ISESteroids not only helps you produce better PowerShell solutions, it also brings you up to speed, regardless of whether your level is beginner, advanced, expert, guru, whatever.

For example, imagine you're challenged to write an advanced function with different parameter sets, mandatory parameters, and some optional parameters. With ISESteroids loaded, you can do things back to front; in other words, you start by writing the syntax as you want it to be, such as…
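The syntax sample from the original post isn't reproduced here. A hypothetical specification for the scenario above (the function and parameter names are made up) might look like this:

Get-Inventory -ComputerName <string[]> [-Credential <PSCredential>]

Get-Inventory -InputFile <string> [-Credential <PSCredential>]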

After that you just need to highlight the code, right-click, select the ISESteroids action to create a function, and – voilà – you get a neatly written skeleton for a function that exactly matches the syntax specifications you made.
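I don't recall ISESteroids' literal output, but for the hypothetical syntax above the generated skeleton would be an advanced function along these lines:

function Get-Inventory
{
    [CmdletBinding(DefaultParameterSetName='ByComputerName')]
    param
    (
        [Parameter(Mandatory=$true, ParameterSetName='ByComputerName')]
        [string[]]
        $ComputerName,

        [Parameter(Mandatory=$true, ParameterSetName='ByInputFile')]
        [string]
        $InputFile,

        [Parameter(Mandatory=$false)]
        [System.Management.Automation.PSCredential]
        $Credential
    )

    # function body goes here
}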

Another great feature I saw in action was a WPF GUI builder that works together with Visual Studio.

I could go on listing notes from memory, but it would still be just the tip of the iceberg. ISESteroids is packed with features you need to discover while working with it. So will I.

Enums in Windows PowerShell Less Than Version 5.0

January 8, 2015

Maybe you've noticed that the upcoming version of Windows PowerShell, 5.0, will make enumerations (Enums) very easy to create with the new enum keyword. With this post I share an approach for creating Enums in PowerShell 4.0 and earlier as well.

(If you already know what an Enum is, you can skip this section.) Enums help you deal with rather small ranges of integer values (each value gets a name) and, even more importantly, they simplify programming robust solutions. Put the case that you have to deal with different environments, for example Dev, Test, Acceptance, and Prod, and let's say that each environment is represented by an int value (thus, 0 to 3 represents Dev to Prod). What happens if you assign the value 4 by mistake? For PowerShell that's fine, because 4 is a valid int value. The error therefore remains undetected at the scene and, according to Murphy, reveals its dark energy at the worst possible moment. You get the idea, I hope; it's no fun to narrow down such problems. How do you prevent such a failure? You could mess around with if statements and -lt, -gt, -eq, for example. Or you make use of, guess what, an Enum. If you have an Enum type for the afore-mentioned environments, PowerShell will refuse to assign a variable of this type any value outside the range 0..3 and will throw an error at the root cause. That's why I have liked to use Enums ever since PowerShell 1.0.

In Windows PowerShell 4.0 and below, Enums are created as follows:
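The code sample isn't part of this copy; the classic pre-5.0 approach is to compile the Enum from inline C# source via Add-Type. A minimal sketch, with Stage as my own stand-in type name:

Add-Type -TypeDefinition @"
public enum Stage
{
    Dev,        // 0
    Test,       // 1
    Acceptance, // 2
    Prod        // 3
}
"@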

Now, play with it (that’s how I like to learn stuff, btw):
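A few things to try, assuming the Stage type from above:

[Stage]::Prod                  # the named value
[int][Stage]::Acceptance       # 2 - cast a named value back to int
[Stage]1                       # Test - cast an int to the Enum type
[Enum]::GetValues([Stage])     # Dev, Test, Acceptance, Prod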

Now, let’s get dirty…
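This is where the type constraint pays off. A variable cast to the Enum type rejects anything outside the defined range:

[Stage]$myStage = 'Prod'    # fine
$myStage = 2                # fine as well - the variable stays type-constrained
$myStage = 4                # throws - 4 is not a defined Stage value
$myStage = 'QA'             # throws too, and the error lists the valid names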

Btw, did you notice the hint within the error message? PowerShell lists the possible values for you.

Hope this helps

Citrix PVS Image Preparation Script for XenApp 7.x Workloads

January 5, 2015

With this post I share a PowerShell script that prepares the master installation of a XenApp 7.x Worker for imaging with Citrix Provisioning Services: Prepare-XenApp7.ps1.

Due to the fact that Citrix has ported its flagship XenApp to the architecture introduced with XenDesktop 5, there's strictly speaking no need to generalize the PVS vDisk that provides the workload of a XenApp Worker, because it doesn't contain IMA-related components anymore. On the other hand, there's still room for some optimization before putting a XenApp vDisk into production/standard mode. The script automates the following steps (a few of them are sketched in code right after the list):

  • Investigate PVS' Personality.ini in the root of the system drive in order to determine the disk mode, that is: read-write, read-only, or started from local HD
  • Clear Citrix User Profile Manager’s cache
  • Resync time
  • Update GPO settings
  • Clear network related caches (DNS and ARP)
  • Clear WSUS Client related settings
  • Clear event logs
  • Based on the findings in step 1, suggest a convenient main action: either "Exit" (if we're in maintenance/private mode with read-write vDisk access), "Invoke ImagingWizard" (if we started from local HD), or "Invoke XenConvert" (reverse imaging scenario with read-only vDisk access)
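The published script isn't reproduced here, but a minimal sketch of some of the middle steps (run elevated; the PVS-specific steps 1 and 8 are left out) could look like this:

# Resync time and refresh group policy settings
w32tm /resync | Out-Null
gpupdate /force | Out-Null

# Clear network-related caches (DNS and ARP)
ipconfig /flushdns | Out-Null
netsh interface ip delete arpcache | Out-Null

# Clear the WSUS client identity so each streamed clone registers individually
Stop-Service -Name wuauserv
Remove-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate' `
    -Name SusClientId, SusClientIdValidation -ErrorAction SilentlyContinue

# Clear all classic event logs
Get-EventLog -List | ForEach-Object { Clear-EventLog -LogName $_.Log }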

BTW, the script should work for desktop workloads as well, but I haven't tested that so far.

Hope this helps

Latest version on GitHub: Prepare-XenApp7.ps1

How To Backup MS SQL Express Databases? #PowerShell

January 4, 2015

Happy Twenty Fifteen! The first post of the new year deals with a common question I'm confronted with from time to time: do you have a script to back up MS SQL Express? Yes, I do. The script requires the SQLPS PowerShell module, which is installed automatically with newer versions of MS SQL Express. Basically, my script simplifies the usage of the module's Backup-SqlDatabase cmdlet:
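The script itself isn't embedded in this copy; a minimal sketch of the idea, assuming a default SQLEXPRESS instance and a backup folder of your choice, could look like this:

Import-Module SQLPS -DisableNameChecking

$instance  = '.\SQLEXPRESS'
$backupDir = 'D:\Backup'

# The SQLPS provider lists the user databases of the instance
$databases = Get-ChildItem "SQLSERVER:\SQL\$env:COMPUTERNAME\SQLEXPRESS\Databases"

foreach ($database in $databases)
{
    $backupFile = Join-Path $backupDir ('{0}_{1:yyyyMMdd}.bak' -f $database.Name, (Get-Date))
    Backup-SqlDatabase -ServerInstance $instance -Database $database.Name -BackupFile $backupFile
}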

Hope this helps

[Updated] How To Get The Clientname Within A Logon Script? #PowerShell #RemoteDesktopServices #CitrixXenApp

December 2, 2014

This morning, a workmate sought my support regarding an issue that I wasn't aware of: on a Windows Server 2008 R2 Remote Desktop Session Host you can't leverage the CLIENTNAME environment variable within a logon script. I stumbled upon a post regarding the same issue and decided to port its VBScript-based solution to Windows PowerShell. Here's the result:
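The ported script isn't embedded in this copy. The underlying technique reads the client name from the volatile environment of the current session in the registry; a sketch of that approach (not necessarily the literal port) looks like this:

# Within an RDS session, CLIENTNAME lives in a subkey of
# HKCU:\Volatile Environment named after the session ID
$sessionId  = (Get-Process -Id $PID).SessionId
$regPath    = "HKCU:\Volatile Environment\$sessionId"
$clientName = (Get-ItemProperty -Path $regPath -Name CLIENTNAME -ErrorAction SilentlyContinue).CLIENTNAME

if (-not $clientName)
{
    # Fall back to the environment variable (works in console sessions)
    $clientName = $env:CLIENTNAME
}

$clientName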

Hope this helps

What was, is, and will be new in Windows PowerShell?

October 12, 2014

If you need an overview of what's new in the upcoming version, and of what was new in the current and former versions of Windows PowerShell as of version 3.0, I recommend Microsoft TechNet's What's New In PowerShell.

[Updated] Citrix PVS: Sync Local vDisk Store #PowerShell

September 29, 2014

The other evening, I updated the Sync-PvsLocalStore.ps1 script to support multiple Citrix Provisioning Services (PVS) vDisks, due to a customer request.

The purpose of this script is to copy, or rather sync, changed and new versions of one or more given vDisks between the local Stores within a Farm of two or more PVS servers. Basically, you can think of Sync-PvsLocalStore.ps1 as a wrapper for Robocopy.exe /MIR with some extra brains on top: it is able to detect and exclude a Maintenance version of a vDisk from the copy process, meaning that the script only spreads out the latest Production and Test versions of a vDisk and doesn't bloat the stores with Maintenance versions, which are typically work in progress.

The usage is very simple. Look at this example:
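The original example isn't part of this copy; an invocation along these lines (the parameter names are assumed from the description below) should give you the idea:

.\Sync-PvsLocalStore.ps1 -MasterServer 'PVS01' `
    -MemberServer 'PVS02', 'PVS03' `
    -StorePath 'D:\Store' `
    -vDiskName 'XA7-Worker', 'XA7-Infra' `
    -SiteName 'Site1' `
    -StoreName 'Store1'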

You need to specify one 'MasterServer', one or more 'MemberServer', the path of the Store (it needs to be the same on each server), one or more vDisk names, and the name of the corresponding Site and Store. The latter two help the script to identify any Maintenance version.

Sync-PvsLocalStore.ps1 needs to be run on a system where the PVS Console, or rather its command-line interface MCLI.EXE, is installed. (The script utilizes MCLI.EXE because, to date, there's no advantage in using the PVS PowerShell snap-in.)

Design PowerShell Scripts For Feedback

September 11, 2014

What is the “secret” ingredient of building rather sophisticated solutions with Windows PowerShell?

If you ask me, it's about applying the principle of Feedback Loops. Maybe you've heard or read of Feedback Loops in the context of agile development practices or DevOps. To put it in a nutshell, from a DevOps perspective Feedback Loops stand for communicating early, often, and openly, with a view to identifying the need or the room for improvement as quickly as possible. It's like frequent tasting while preparing a dish: experienced chefs do it because they know that late feedback is worth nothing; in the case of a real first-class dinner, it would be nothing more than the result of a fortunate series of events.

How can a PowerShell script be designed for Feedback Loops? I’ll show you my personal approach. Please note that it makes no claims of being the one and only.

First of all, starting from a scripting challenge that includes multiple actions, you need to apply the divide-and-conquer strategy; that is, for one thing you have to identify each individual action that is necessary to succeed, and for another you have to think of each individual action as an independent "battle". Solve each task detached from the overall context, i.e. create fully functional partial solutions that receive input values through parameters and that at least return $true in case of success (and $false otherwise). Once you're done with isolating the partial solutions, you're ready to "conquer", that is, to put the "lego bricks" together. Now the band begins to play…

What you need now is a sort of meta-script that controls the work in progress, if you like. My approach, in its very basic form, looks like this:
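The original code sample isn't in this copy; reconstructed from the description below, the basic form is a switch statement over an array of step names (the step functions here are placeholders that return $true on success and $false on failure):

function Invoke-Step1 { $true }   # placeholder partial solutions
function Invoke-Step2 { $true }
function Invoke-Step3 { $true }

$subTask    = 'Step1', 'Step2', 'Step3'
$failedTask = $null

switch ($subTask)
{
    'Step1' { if (-not (Invoke-Step1)) { $failedTask = $_; break } }
    'Step2' { if (-not (Invoke-Step2)) { $failedTask = $_; break } }
    'Step3' { if (-not (Invoke-Step3)) { $failedTask = $_; break } }
}

if ($failedTask) { Write-Error "Workflow stopped at '$failedTask'." }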

The code sample shows the basic concept of how to "misuse" switch as a simple workflow engine that terminates processing as soon as something goes wrong. Initially, I give each step a friendly name and put these names into an array ($subTask). Then I commit that collection of names as the "test value" to switch. Switch evaluates each item (name) in the order in which it was defined, i.e. for each name, switch invokes the appropriate action.

The advantages of this approach really come to light, for example, when you need to…
  • …repeat the entire workflow before raising an error,
  • …repeat single steps individually before raising an error, or
  • …go into reverse in case of a fatal condition (rollback).

For now, I leave you alone with your thoughts.

Why Merge Hash Tables #PowerShell

September 7, 2014

As part of building an automation framework, you typically face the challenge of separating the data from the logic, as this is the key to an agile and re-usable solution. The automation logic by itself should "only" know how to process data within a workflow; the logic itself shouldn't know any (hard-coded) value. Instead, the logic should get data values from separate resources like configuration files, the registry, databases, whatever. With separation of data and logic it's almost a no-brainer to set up the solution for another environment, not to mention maintenance that is as easy as pushing DEV into the git repo and pulling the changes into TEST and PROD, for example.

I personally like to maintain data in hash tables. To be more precise, I usually have, for example, a config-global.ps1 and one config-abbreviation.ps1 file per environment that 1. adds specific environment settings and 2. overrides baseline settings from the global config. These ps1 files only contain a (usually nested) hash table. In order to import the data I dot-source both the global config and the environment config. The next step is the creation of a single data resource from both hash tables, meaning that afterwards I'm able to access the keys and their values through a single variable like $ConfigData.

Merging hash tables is straightforward as long as each hash table contains different keys: it's simply $hashTable1 + $hashTable2. In case of duplicate keys this approach fails, because PowerShell isn't able to resolve the conflict by deciding which key takes precedence. That's why I wrote the function Merge-Hashtables (get your copy from the TechNet Gallery). Merge-Hashtables creates a single hash table from two given hash tables. The function considers the first hash table the baseline, whose key values are allowed to be overridden by the second hash table. Internally, Merge-Hashtables uses two sub-functions: the first detects duplicates and, for each conflict, adds the key of the second hash table to the resulting hash table; the second adds additional keys from the second hash table to the resulting hash table. Both sub-functions support nested hash tables through recursive calls.
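The original function lives in the TechNet Gallery and, as described above, uses two sub-functions; a compact single-function sketch of the same merge semantics looks like this:

function Merge-Hashtables
{
    # Sketch only: values from $Override win over $Baseline;
    # nested hash tables are merged recursively
    param
    (
        [hashtable]$Baseline,
        [hashtable]$Override
    )

    $result = $Baseline.Clone()

    foreach ($key in $Override.Keys)
    {
        if ($result[$key] -is [hashtable] -and $Override[$key] -is [hashtable])
        {
            $result[$key] = Merge-Hashtables -Baseline $result[$key] -Override $Override[$key]
        }
        else
        {
            $result[$key] = $Override[$key]
        }
    }

    $result
}

# Usage: dot-source both configs (each returns a hash table), then merge them
$globalConfig = . '.\config-global.ps1'
$envConfig    = . '.\config-dev.ps1'     # hypothetical per-environment file
$ConfigData   = Merge-Hashtables -Baseline $globalConfig -Override $envConfig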

Hope this helps.

Simple SQL Scripting Framework

September 6, 2014

With this post I share my approach to facilitating the re-use of SQL commands within PowerShell scripts. You should continue reading if you often deal with SQL. And even if you're not about to use PowerShell scripts against SQL databases, you can take some inspiration on how to build a smart automation solution.

My solution uses a PowerShell function Invoke-SQL and a hash table that contains generalized SQL commands. First things first…

In order to issue SQL command text against an ODBC database connection, I prefer my function Invoke-SQL. The function accepts either an existing OdbcConnection object or a connection string in order to create a connection on the fly. By default the function returns $true if the execution of the given SQL statement succeeded. With the -PassThru switch, the function loads the results into a DataTable object and returns it for further processing. I uploaded Invoke-SQL to the Microsoft TechNet Gallery. Get your copy from there.
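Just to illustrate the shape of the function, here's a stripped-down sketch (not the published code):

function Invoke-SQL
{
    # Sketch only: run a SQL command text against an ODBC connection;
    # -PassThru returns the result as a DataTable instead of $true/$false
    param
    (
        [Parameter(Mandatory=$true)]
        [string]$CommandText,

        [System.Data.Odbc.OdbcConnection]$Connection,

        [string]$ConnectionString,

        [switch]$PassThru
    )

    try
    {
        if (-not $Connection)
        {
            # Create a connection on the fly from the connection string
            $Connection = New-Object System.Data.Odbc.OdbcConnection $ConnectionString
            $Connection.Open()
            $closeWhenDone = $true
        }

        $command             = $Connection.CreateCommand()
        $command.CommandText = $CommandText

        if ($PassThru)
        {
            $dataTable = New-Object System.Data.DataTable
            $adapter   = New-Object System.Data.Odbc.OdbcDataAdapter $command
            [void]$adapter.Fill($dataTable)
            ,$dataTable
        }
        else
        {
            [void]$command.ExecuteNonQuery()
            $true
        }
    }
    catch
    {
        $false
    }
    finally
    {
        if ($closeWhenDone) { $Connection.Close() }
    }
}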

Before I proceed to the SqlCommand hash table, let me explain why you need it. The Invoke-SQL example below shows how to pass a simple SELECT statement to an existing ODBC connection ($DBConnection) and save the query's result in the $DBResult variable:
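Reconstructed from the description (the table and column names are made up):

# $DBConnection is an open OdbcConnection
$DBResult = Invoke-SQL -Connection $DBConnection -PassThru -CommandText 'SELECT Name FROM Computer'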

So much for the simple scenario. As you know, SQL command texts can be far more complex than SELECT-foo-FROM-bar and often span multiple lines. With PowerShell it is good practice to use here-strings to deal with multi-line SQL commands. Take a look at the next example, which shows the concept (meaning that you shouldn't care about the content of the SQL query):
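The original query isn't in this copy; a stand-in with made-up table and column names shows the concept, including the hard-coded Location.ID value that the next paragraph refers to:

$SelectVMM = @"
SELECT VMM.Name,
       VMM.State
FROM   VMM
       INNER JOIN Location ON VMM.LocationID = Location.ID
WHERE  Location.ID = 42
"@

$DBResult = Invoke-SQL -Connection $DBConnection -PassThru -CommandText $SelectVMM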

Ok. And now put the case that you have 15, 20, or more of such rather sophisticated SQL commands, and you have to re-use them over and over but with different values. Take a look at the value for Location.ID in the previous example: it is hard-coded. Therefore, in order to re-use the $SelectVMM here-string, you'd need to leverage the copy-paste-align method (which is error-prone and bloats scripts with redundant code). Or is there another, better, smarter way? Yes, there is. Take a look at the slightly altered example below:
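Again a stand-in for the original example, this time with the hard-coded value replaced by a placeholder:

$SelectVMM = @"
SELECT VMM.Name,
       VMM.State
FROM   VMM
       INNER JOIN Location ON VMM.LocationID = Location.ID
WHERE  Location.ID = {0}
"@

# Re-use the generalized command text with a specific value
$SelectVMMLocation42 = $SelectVMM -f 42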

As you can see, I replaced the hard-coded value in the here-string with the placeholder {0}. Afterwards, in order to re-use the here-string, I used PowerShell's format operator to replace that placeholder with a specific value and saved the resulting command text in a new variable. That's nice. There's still room for improvement, though. Finally, I bring that SqlCommand hash table into play…

Basically, the hash table is a collection of named here-strings, each containing a generalized SQL command text like the one above. It could look like this, for example:
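Something along these lines (the SQL content is still illustrative):

$SqlCommand = @{

    SelectVMM = @"
SELECT VMM.Name,
       VMM.State
FROM   VMM
       INNER JOIN Location ON VMM.LocationID = Location.ID
WHERE  Location.ID = {0}
"@

    DeleteVMM = @"
DELETE FROM VMM
WHERE  VMM.ID = {0}
"@
}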

With that SqlCommand hash table in memory, SQL scripting is as easy as:
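For example, re-using the generalized SELECT with a concrete value:

$DBResult = Invoke-SQL -Connection $DBConnection -PassThru -CommandText ($SqlCommand.SelectVMM -f 42)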

Key take-away:

In order to re-use SQL commands, create a generalized here-string for each of them and provide them through a hash table.

Hope this helps.