Citrix PVS: Sync Local vDisk Store #PowerShell

Among the files in my ‘powershell-zone’ repository on GitHub you may stumble upon a script called Sync-PvsLocalStore.ps1. This script helps you keep a given vDisk within the local vDisk store of two or more Citrix Provisioning Services (PVS) servers in sync. Under the hood, the script uses robocopy.exe /MIR to mirror the VHD, AVHD, and associated PVS files; meaning that …

  • it copies modified and new files from a designated master PVS server to one or more member PVS servers, and
  • it removes EXTRA files from the member servers

The script is designed to exclude maintenance versions of a vDisk from this process. It just keeps production and test versions of a vDisk in sync.
Therefore, Sync-PvsLocalStore.ps1 only makes sense if you consistently use the same PVS server (what I call the “master” PVS server) for vDisk maintenance purposes.
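To illustrate the general idea, a mirror run could look roughly like the sketch below. This is not the actual Sync-PvsLocalStore.ps1 logic; the store paths, vDisk name, and maintenance-version file name are purely hypothetical examples.

# Conceptual sketch only (not the actual Sync-PvsLocalStore.ps1 code); all names are made up.
$masterStore  = '\\PVS01\D$\vDiskStore'
$memberStores = '\\PVS02\D$\vDiskStore', '\\PVS03\D$\vDiskStore'

# Assume version 5 of 'MyDisk' is currently in maintenance; keep its file out of the mirror.
$maintenanceFiles = 'MyDisk.5.avhd'

foreach ($store in $memberStores) {
    # /MIR copies modified and new files and removes EXTRA files from the member store;
    # /XF excludes the maintenance version files from the operation.
    robocopy.exe $masterStore $store /MIR /XF $maintenanceFiles /R:1 /W:1 /NP
}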

Design PowerShell Scripts For Feedback

What is the “secret” ingredient of building rather sophisticated solutions with Windows PowerShell?

If you ask me, it’s about applying the principle of Feedback Loops. Maybe you’ve heard or read about Feedback Loops in the context of Agile development practices or DevOps. To put it in a nutshell, from a DevOps perspective Feedback Loops stand for communicating early, often, and openly, with a view to identifying the need and the room for improvement as quickly as possible. It’s like frequent tasting while preparing a dish: experienced chefs do it because they know that late feedback is worth next to nothing, or, in the case of a truly first-class dinner, that it would be nothing more than the result of a fortunate series of events.

How can a PowerShell script be designed for Feedback Loops? I’ll show you my personal approach. Please note that it makes no claims of being the one and only.

First of all, when a scripting challenge includes multiple actions you need to apply the divide and conquer strategy: on the one hand, identify each individual action that is necessary to succeed; on the other hand, treat each individual action as an independent “battle”. Solve each task detached from the overall context, i.e. create fully functional partial solutions that receive input values through parameters and that at least return $true in case of success (and otherwise $false). Once you’re done isolating the partial solutions, you’re ready to “conquer”, that is, to put the “lego bricks” together. Now the band begins to play…

What you need now is a sort of meta-script that controls the work in progress, if you like. My approach, in its very basic form, looks like this:

$result = $null
$subTask = (
    '1-FirstStep',
    '2-SecondStep',
    '3-ThirdStep',
    '4-FinalStep'
)
# Each sub script returns a PSObject with properties [boolean]Success and [string]Reason.
# Note: 'break' inside a switch exits the entire switch, so a failed step skips all remaining steps.
switch ($subTask)
{
    '1-FirstStep'
    {
        $result = & .\FirstStep.ps1
        if (!$result.Success)
        {
            break
        }
    }
    '2-SecondStep'
    {
        $result = & .\SecondStep.ps1
        if (!$result.Success)
        {
            break
        }
    }
    '3-ThirdStep'
    {
        $result = & .\ThirdStep.ps1
        if (!$result.Success)
        {
            break
        }
    }
    '4-FinalStep'
    {
        $result = & .\FinalStep.ps1
        if (!$result.Success)
        {
            break
        }
    }
}
if ($result.Success)
{
    'All script actions succeeded.' | Write-Output
}
else
{
    'Failed, because: {0}' -f $result.Reason | Write-Output
}

The code sample shows the basic concept of how to “misuse” switch as a simple workflow engine that terminates processing as soon as something goes wrong. Initially, I give each step a friendly name and put these names in an array ($subTask). Then I pass that collection of names as the “test value” to switch. Switch evaluates each item (name) in the order in which the names were defined, i.e. for each name switch invokes the appropriate action.

The advantages of this approach really come to light when, for example, you want to …
  • … repeat the entire workflow before raising an error (see the retry sketch below),
  • … repeat single steps individually before raising an error, or
  • … go into reverse in case of a fatal condition (rollback).
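Here is a minimal sketch of the first variant, assuming the $subTask array and the sub scripts from above: the switch is wrapped in a retry loop and an error is raised only after a few failed attempts.

$maxAttempts = 3
foreach ($attempt in 1..$maxAttempts) {
    $result = $null
    switch ($subTask) {
        '1-FirstStep'  { $result = & .\FirstStep.ps1;  if (!$result.Success) { break } }
        '2-SecondStep' { $result = & .\SecondStep.ps1; if (!$result.Success) { break } }
        '3-ThirdStep'  { $result = & .\ThirdStep.ps1;  if (!$result.Success) { break } }
        '4-FinalStep'  { $result = & .\FinalStep.ps1;  if (!$result.Success) { break } }
    }
    if ($result.Success) { break }  # this break leaves the foreach loop, not the switch
    'Attempt {0} of {1} failed: {2}' -f $attempt, $maxAttempts, $result.Reason | Write-Output
}
if (!$result.Success) {
    throw ('All {0} attempts failed, last reason: {1}' -f $maxAttempts, $result.Reason)
}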

For now, I leave you alone with your thoughts.

Why Merge Hash Tables #PowerShell

As part of building an automation framework, you typically face the challenge of separating data from logic, as this is the key to an agile and re-usable solution. The automation logic by itself should “only” know how to process data within a workflow; the logic itself shouldn’t know any (hard-coded) values. Instead, it should get data values from separate resources like configuration files, the registry, databases, whatever. With data and logic separated it’s almost a no-brainer to set up the solution for another environment, not to mention maintenance that is as easy as pushing DEV into the git repo and pulling the changes into TEST and PROD, for example.

I personally like to maintain data in hash tables. To be more precise, I usually have a config-global.ps1 file plus a config-abbreviation.ps1 file per environment that 1. adds specific environment settings and 2. overrides baseline settings from the global config. These ps1 files contain nothing but a (usually nested) hash table. In order to import the data I dot-source both the global config and the environment config. The next step is the creation of a single data resource from both hash tables; meaning that afterwards I’m able to access the keys and their values through a single variable like $ConfigData.
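For example, importing and combining the two configs could look like this. The file names and keys are just illustrative, and the merge function is the one described in the next paragraph.

# Illustrative only: file names and keys are made up for this example.
# Each config file contains nothing but a (nested) hash table, e.g.:
#   @{ Sql = @{ Server = 'SQL01'; Database = 'Automation' }; LogPath = 'D:\Logs' }

$globalConfig = . .\config-global.ps1   # baseline settings
$envConfig    = . .\config-dev.ps1      # environment-specific settings and overrides

# Combine both into a single data resource (see Merge-Hashtables below),
# so that all values are accessible through one variable afterwards.
$ConfigData = Merge-Hashtables $globalConfig $envConfig
$ConfigData.Sql.Server   # returns the environment value if config-dev.ps1 overrides it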

Merging hash tables is straightforward as long as each hash table contains different keys: it’s simply $hashTable1 + $hashTable2. In case of duplicate keys this approach fails because PowerShell isn’t able to resolve the conflict by deciding which key takes precedence. That’s why I wrote the function Merge-Hashtables (get your copy from the TechNet Gallery). Merge-Hashtables creates a single hash table from two given hash tables. The function considers the first hash table as the baseline whose key values are allowed to be overridden by the second hash table. Internally, Merge-Hashtables uses two sub functions. The first sub function detects duplicates and, for each conflict, adds the key with the second hash table’s value to the resulting hash table. The second sub function adds additional keys from the second hash table to the resulting hash table. Both sub functions are designed to support nested hash tables through recursive calls.
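A minimal sketch of the described behavior might look like the function below. This is not the code published in the TechNet Gallery (that version splits the work into the two sub functions mentioned above), but it illustrates the baseline/override semantics and the recursive handling of nested hash tables.

function Merge-Hashtables {
    param (
        [hashtable]$Baseline,
        [hashtable]$Override
    )
    $merged = $Baseline.Clone()
    foreach ($key in $Override.Keys) {
        if ($merged.ContainsKey($key) -and
            $merged[$key] -is [hashtable] -and $Override[$key] -is [hashtable]) {
            # duplicate key with nested hash tables on both sides: merge recursively
            $merged[$key] = Merge-Hashtables -Baseline $merged[$key] -Override $Override[$key]
        }
        else {
            # duplicate key (override wins) or additional key (simply added)
            $merged[$key] = $Override[$key]
        }
    }
    $merged
}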

Hope this helps.

Simple SQL Scripting Framework

With this post I share my approach to facilitating the re-use of SQL commands within PowerShell scripts. You should continue reading if you deal with SQL often. And even if you’re not about to use PowerShell scripts against SQL databases, you can take some inspiration on how to build a smart automation solution.

My solution uses a PowerShell function Invoke-SQL and a hash table that contains generalized SQL commands. First things first…

In order to issue SQL command text against an ODBC database connection I prefer my function Invoke-SQL. The function accepts either an existing OdbcConnection object or a connection string in order to create a connection on the fly. By default the function returns $true if the execution of the given SQL statement succeeded. With the -PassThru switch the function loads the results into a DataTable object and returns it for further processing. I uploaded Invoke-SQL to the Microsoft TechNet Gallery. Get your copy from there.

Before I proceed to the SqlCommand hash table let me explain why you need it. The Invoke-SQL example below shows how to pass a simple SELECT statement to an existing ODBC connection ($DBConnection) and save the query’s result into the $DBResult variable:

PS:\> $DBResult = Invoke-SQL -CommandText 'SELECT * FROM MyTable' -PassThru -Connection $DBConnection
PS:\>

So much for the simple scenario. As you know SQL command texts can be far more complex than SELECT-foo-FROM-bar and often span multiple lines. With PowerShell it is good practice to use here-strings to deal with multi-line SQL commands. Take a look at the next example that shows the concept (meaning that you shouldn’t care about the content of the SQL query):

PS:\> $SelectVMM = @'
>> SELECT vmm.hostname, vmm.id 
>> FROM myVmmServers as vmm
>> JOIN myDataCenters as DC ON (DC.id = vmm.dcid)
>> JOIN myDCLocations as Location ON (Location.id = DC.lcid)
>> WHERE Location.id = '49'
>> '@
>>
PS:\> $DBResult = Invoke-SQL -CommandText $SelectVMM -PassThru -Connection $DBConnection
PS:\>

OK. Now put the case that you have 15, 20 or more of such rather sophisticated SQL commands and you have to re-use them over and over, but with different values. Take a look at the value for Location.id in the previous example. It is hard-coded. Therefore, in order to re-use the $SelectVMM here-string you would need to leverage the copy-paste-align method (which is error-prone and bloats scripts with redundant code). Or is there another, better, smarter way? Yes, there is. Take a look at the slightly altered example below:

PS:\> $SelectVMM = @'
>> SELECT vmm.hostname, vmm.id
>> FROM myVmmServers as vmm
>> JOIN myDataCenters as DC ON (DC.id = vmm.dcid)
>> JOIN myDCLocations as Location ON (Location.id = DC.lcid)
>> WHERE Location.id = '{0}'
>> '@
>>
PS:\> $SelectVMM49 = $SelectVMM -f '49'
PS:\> $DBResult = Invoke-SQL -CommandText $SelectVMM49 -PassThru -Connection $DBConnection
PS:\>

As you can see, I replaced the hard-coded value in the here-string with the placeholder {0}. Afterwards, in order to re-use the here-string, I used PowerShell’s format operator to replace that placeholder with a specific value and saved the resulting command text in a new variable. That’s nice. There’s still room for improvement though. Finally, I bring that SqlCommand hash table into play…

Basically, the hash table is a collection of named here-strings each containing a generalized SQL command text like above. It could look like this for example:

$SqlCommand = @{
GetValue = @'
    SELECT {0}
    FROM {1}
'@
GetLatestRecord = @'
    SELECT TOP 1 *
    FROM {0}
'@
SetIdValue = @'
    UPDATE {0}
    SET {1}='{2}',timestamp=GETUTCDATE()
    WHERE Id={3}
'@
SelectVMM = @'
    SELECT vmm.hostname, vmm.id 
    FROM myVmmServers as vmm
    JOIN myDataCenters as DC ON (DC.id = vmm.dcid)
    JOIN myDCLocations as Location ON (Location.id = DC.lcid)
    WHERE Location.id = '{0}'
'@
}

With that SqlCommand hash table in memory SQL scripting is as easy as:

PS:\> $SelectVmm = $SqlCommand.SelectVMM -f '49'
PS:\> $DBResult = Invoke-SQL -CommandText $SelectVmm -PassThru -Connection $DBConnection
PS:\>

Key take-away:

In order to re-use SQL commands, create a generalized here-string for each one and provide them through a hash table.
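If you like, you can take it one step further and wrap the lookup-and-format step into a small convenience function. The function below is a hypothetical addition (not part of the original framework), assuming the $SqlCommand hash table and Invoke-SQL from above:

# Hypothetical convenience wrapper: look up a named command, apply the format
# arguments, and execute it in one go.
function Invoke-NamedSql {
    param (
        [string]$Name,
        [object[]]$Arguments = @(),
        [System.Data.Odbc.OdbcConnection]$Connection,
        [switch]$PassThru
    )
    $commandText = $SqlCommand[$Name] -f $Arguments
    Invoke-SQL -CommandText $commandText -Connection $Connection -PassThru:$PassThru
}

# e.g.: $DBResult = Invoke-NamedSql -Name SelectVMM -Arguments '49' -Connection $DBConnection -PassThru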

Hope this helps.

Yet Another Invoke SQL PowerShell Script

A few weeks ago, I uploaded my PowerShell function Invoke-SQL to the Microsoft TechNet Gallery and I forgot to mention it here.

Invoke-SQL is designed to issue any valid SQL command text against an ODBC database connection. The function accepts either an existing OdbcConnection object or a connection string in order to create a connection on the fly. By default the function returns $true if the execution of the given SQL statement didn’t fail. With the -PassThru switch the function loads the results into a DataTable object and returns it for further processing. Invoke-SQL returns nothing if opening the ODBC connection or executing the SQL command text fails.
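For illustration, a stripped-down sketch of that behavior could look like the function below. This is not the code from the TechNet Gallery, and the parameter names are assumptions; it merely shows the general mechanics: ODBC connection handling, ExecuteNonQuery by default, and a DataTable with -PassThru.

# Stripped-down sketch; the published function is more complete (validation, disposal, etc.).
function Invoke-SQL {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory=$true)]
        [string]$CommandText,
        [System.Data.Odbc.OdbcConnection]$Connection,
        [string]$ConnectionString,
        [switch]$PassThru
    )
    try {
        if (-not $Connection) {
            $Connection = New-Object System.Data.Odbc.OdbcConnection $ConnectionString
        }
        if ($Connection.State -ne 'Open') { $Connection.Open() }
        $command = $Connection.CreateCommand()
        $command.CommandText = $CommandText
        if ($PassThru) {
            $table   = New-Object System.Data.DataTable
            $adapter = New-Object System.Data.Odbc.OdbcDataAdapter $command
            [void]$adapter.Fill($table)
            return ,$table
        }
        [void]$command.ExecuteNonQuery()
        return $true
    }
    catch {
        Write-Error $_
        return
    }
}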

The Traditional IT Department – Your Business’ Blind Spot

Hey, CEO!

Is your organization still pursuing a non-Cloud strategy? If I asked you why, I bet you wouldn’t be stuck for an answer. You or your CIO would tell me that Cloud Computing doesn’t meet your requirements in terms of security, for example. It’s your valid decision and that’s fine by me. But may I ask another question? Do you apply exactly the same standards you used to define your Cloud evaluation criteria to your current IT operational concept? Really?

So, let’s stick with security, as I guess it’s one of your main concerns regarding the Cloud. Usually, Cloud security concerns cover all aspects related to a Cloud reference model. Above all, the Cloud Provider has to ensure that the IT infrastructure is secure and that the tenants’ data is protected. In order to meet this demand for security the Cloud Provider has to implement several defensive controls that detect and prevent attacks, and also reduce the impact of attacks. It’s about reducing the overall attack surface, and Cloud Providers need to be pretty good at this discipline – not least because they are constantly in the public eye. Cloud Providers who want to continue to exist have to face up to every security concern.

Now I ask you again. Does your traditional IT meet the same level of security that you have set to evaluate Cloud Computing, or do you have double standards? I see, you have firewalls, backup, disaster recovery, antivirus, data encryption and so on – so why bother? I’ll tell you why. All these security measures are, first of all, just tools and guidelines. But did you ever consider who operates them? Of course, your IT department, or spin-off, maybe assisted by external workers. But do you really know what they do, or do you rather implicitly trust them? In the latter case the IT department is in a blind spot from the business perspective. Quite foggy, right? Fog… Cloud… Frankly speaking, you should consider your IT department as a separate attack surface; perhaps it’s the weakest link in your security strategy.

First of all, in order to reduce this risk you should get in touch with your “IT crowd”, not just the CIO. Your business relies on these gals and guys. They are in a key position to proverbially shut down your business. Listen to them carefully, be thankful, and be willing to reward them. Maybe you’ll realize that you need a change in your organization’s culture, if you will. Go ahead! Invoke a cultural movement driven by the management. At the end of the day it should be possible for any person to speak their mind to any other person regardless of hierarchy or command structure, because exactly the opposite leads to vulnerability. Think it over.

From a technical perspective, ironically, your IT department can benefit from the lessons learned in Cloud Computing. Here’s an example. Since this blog is mainly about Windows PowerShell, I take the liberty to draw your attention to Just Enough Administration (JEA) (Download Whitepaper). It’s based on technology you should already have in place and helps your organization “reduce risk by restricting operators to only the access required to perform specific tasks”.
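Just to give you an idea of what “just enough” means in practice, here is a purely illustrative sketch (the names and the visible cmdlet list are assumptions of mine; the whitepaper describes the actual JEA toolkit). It registers a constrained PowerShell remoting endpoint that exposes only two commands to its operators:

# Purely illustrative: a constrained remoting endpoint that exposes only the commands
# an operator needs for a specific task (names and cmdlet list are made up).
New-PSSessionConfigurationFile -Path .\ServiceOperator.pssc `
    -SessionType RestrictedRemoteServer `
    -VisibleCmdlets 'Get-Service', 'Restart-Service'

# Register the endpoint; operators then connect with:
#   Enter-PSSession -ComputerName SRV01 -ConfigurationName ServiceOperator
Register-PSSessionConfiguration -Name 'ServiceOperator' -Path .\ServiceOperator.pssc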

Regards
Frank Peter Schultze

Updated: PowerShell Subversion Module

Yes, I know, these days Git is king of the hill. Anyway, I shared my PowerShell module for Subversion (formerly hosted at PoshCode.org) at the Microsoft TechNet Gallery. The module exposes a bunch of functions and aliases:

  • The function Update-SvnWorkingCopy is a wrapper for “svn.exe update” and brings the latest changes (HEAD revision) from the repository into an existing working copy.
  • The function Publish-SvnWorkingCopy is a wrapper for “svn.exe commit” and sends the changes from your working copy to the repository.
  • The function Import-SvnUnversionedFilePath is a wrapper for “svn.exe import” and commits an unversioned file or directory tree into the repository.
  • The function New-SvnWorkingCopy is a wrapper for “svn.exe checkout” and checks out a working copy from a repository (HEAD revision).
  • The function Get-SvnWorkingCopy is a wrapper for “svn.exe status” and returns the status of working copy files and directories.
  • The function Add-SvnWorkingCopyItem is a wrapper for “svn.exe add” and puts files and directories under version control, that is, scheduling them for addition to the repository in the next commit.
  • The function Remove-SvnWorkingCopyItem is a wrapper for “svn.exe delete” and removes files and directories from version control, that is, scheduling them for deletion upon the next commit. (Items that have not been committed are immediately removed from the working copy.)
  • The function Repair-SvnWorkingCopy fixes a working copy that has been modified by non-svn commands in terms of file addition and removal. The function identifies items that are not under version control and items that are missing. It puts non-versioned items under version control, and it removes missing items from version control (i.e. schedule for deletion upon next commit).

Furthermore, the module alters PowerShell’s Prompt function in order to display some information about the state of an SVN working copy.
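As a rough illustration of that prompt idea (this is not the module’s actual code; it simply assumes svn.exe is on the PATH and that you are in the root directory of a working copy):

# Rough illustration only; the module's real Prompt function is more elaborate.
function prompt {
    $svnInfo = ''
    if (Test-Path -Path '.svn') {
        # 'svn.exe status' prints one line per modified, added, deleted, or unversioned item
        $changeCount = @(svn.exe status 2>$null).Count
        $svnInfo = ' [svn: {0} change(s)]' -f $changeCount
    }
    'PS {0}{1}> ' -f $PWD.Path, $svnInfo
}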

Notes on PowerShell Summit NA 2014 #PSHSummit

Fortunately, I was able to attend the community-driven event PowerShell Summit 2014 at the Meydenbauer Center in Bellevue. I’m back home and really glad to see the back of 20+ hours of travel time. Before the jet lag kicks in again, I want to share some impressions and notes with you…

If you like Windows PowerShell and normally no one understands what the heck you’re talking about, the PSHSummit is the right place for you. It’s like spending three days in the epicenter of PowerShell knowledge. It’s about both the great sessions and the chance to meet up with people you may know from Twitter and blogs, including some members of the PowerShell Team and, not least, the godfather of PowerShell, Jeffrey Snover. So, if you ever wanted to get in touch with PowerShell-minded folks, go ahead and check PowerShell.org for details about the next PSHSummit. By the way, PowerShell.org has a great monthly marketing-free newsletter.

The bottom line from my perspective is that PowerShell, although mature, is still about to revolutionize how Microsoft-centric IT infrastructures are built, configured and (securely) administered in the future. J. Snover kicked the event off with a session on “JitJea. Just In Time/Just Enough Admin. A PowerShell toolkit to secure a post-Snowden world.” From a high-level perspective, the JitJea approach enables admins to perform functions within a given timeframe without giving them admin privileges directly. The PowerShell Team showed in some lightning demos that they have put huge effort into evolving DSC into an agile operations or DevOps toolkit, including cross-platform support. It was a great moment to see DSC in action configuring a shell profile on a CentOS machine. For those who were asking why DSC leverages OMI, MOF and these strange things: now you have the answer. PowerShell goes open – slowly but surely!


Book Review: Windows PowerShell 4.0 for .NET Developers

The tech book publisher Packt asked me to review Sherif Talaat’s book “Windows PowerShell 4.0 for .NET Developers”, subtitled “A fast-paced PowerShell guide, enabling you to efficiently administer and maintain your development environment”. According to their own statement, Packt’s “books focus on practicality, recognising that readers are ultimately concerned with getting the job done”. The book is available for purchase on www.packtpub.com.

To put it in a nutshell, from my perspective Sherif delivers exactly what the book’s subtitle and Packt Publishing promise: it’s a well-made balancing act between being fast-paced and easy to follow and providing the reader (that is, a .NET developer) with essential information on how to leverage PowerShell.

This book is like distilled water: it delivers pure information in order to take the reader from “101” to a professional level in PowerShell. It’s only about 140 pages. That’s definitely no huge tome, and therefore it’s self-evident that it lacks deeper background information, trivia, and cleverly thought out examples that unveil the proverbial Power of PowerShell. But wherever useful, the author cross-references resources with further information. Ideally, the reader already has some basic scripting knowledge and hands-on experience with one of the .NET programming languages – not least because the book is aimed at .NET developers who want to learn how to use PowerShell.

There are only five chapters:

  • Chapter 1 — Getting Started with Windows PowerShell — covers the basic stuff like the PowerShell console, PowerShell ISE, key features & concepts, and fundamentals like the object pipeline, aliases, variables, data types, operators, arrays, hash tables, script flow, providers, drives, comments, and parameters.
  • Chapter 2 — Unleashing Your Development Skills with PowerShell — is about working with CIM/WMI, XML, COM, .NET objects, PowerShell Modules and about script debugging/error handling.
  • Chapter 3 — PowerShell for Your Daily Administration Tasks — covers PowerShell Remoting, PowerShell Workflows, and PowerShell “in action” (from a .NET developer’s perspective), that is, managing Windows roles & features, managing local users & groups, managing IIS, and managing MS SQL.
  • Chapter 4 — PowerShell and Web Technologies — is about working with web services & requests, RESTful APIs, and JSON.
  • Chapter 5 — PowerShell and Team Foundation Server — covers the TFS cmdlets and shows how to get started and work with them.

Did I miss something? Yes, there’s no information about Desired State Configuration. But that’s as far as it goes.

Considering the fact that we are about to enter a new era in IT where developers and operators need to work closely together (or in one person) to continuously deliver services in automated IT infrastructures, a .NET developer should at least get a copy of this book in order to be prepared for the best ;-)

How to quickly fire up an iTunes playlist with PowerShell

With the PowerShell script below you can quickly start playing an iTunes playlist.

Preparation steps:
1. buy a device from the dark side (ha ha), install iTunes on your Windows computer
2. buy/import music and organize the tracks in playlists

Usage:
Let’s say you’ve prepared a playlist called ‘Blues Rock till the cow comes home’ in an iTunes library called ‘Mediathek’ and you want it to play in shuffle mode. Open a PowerShell command window and type:

C:\PS> .\Start-PlayList.ps1 -Source 'Mediathek' -Playlist 'Blues Rock till the cow comes home' -Shuffle

The iTunes application will open automagically and start playing tracks – and you can party till the cow comes…

<#
.SYNOPSIS
    Plays an iTunes playlist.
.DESCRIPTION
    Opens the Apple iTunes application and starts playing the given iTunes playlist.
.PARAMETER  Source
    Identifies the name of the source.
.PARAMETER  Playlist
    Identifies the name of the playlist.
.PARAMETER  Shuffle
    Turns shuffle on (else don't care).
.EXAMPLE
   C:\PS> .\Start-PlayList.ps1 -Source 'Library' -Playlist 'Party'
.INPUTS
   None
.OUTPUTS
   None
#>
[CmdletBinding()]
param (
    [Parameter(Mandatory=$true)]
    $Source
    ,
    [Parameter(Mandatory=$true)]
    $Playlist
    ,
    [Switch]
    $Shuffle
)

try {
    $iTunes = New-Object -ComObject iTunes.Application
}
catch {
    Write-Error 'Download and install Apple iTunes'
    return
}

$src = $iTunes.Sources | Where-Object {$_.Name -eq $Source}
if (!$src) {
    Write-Error "Unknown source - $Source"
    return
}

$ply = $src.Playlists | Where-Object {$_.Name -eq $Playlist}
if (!$ply) {
    Write-Error "Unknown playlist - $Playlist"
    return
}

if ($Shuffle) {
    if (!$ply.Shuffle) {
        $ply.Shuffle = $true
    }
}

$ply.PlayFirstTrack()

[System.Runtime.InteropServices.Marshal]::ReleaseComObject([System.__ComObject]$iTunes) > $null
[GC]::Collect()