How to quickly fire up an iTunes playlist with PowerShell

With the PowerShell script below you can quickly start playing an iTunes playlist.

Preparation steps:
1. Buy a device from the dark side (ha ha) and install iTunes on your Windows computer.
2. Buy/import music and organize the tracks into playlists.

Usage:
Let’s say you’ve prepared a playlist called ‘Blues Rock till the cow comes home’ in an iTunes library called ‘Mediathek’ and you want it to play in shuffle mode. Open a PowerShell command window and type:

C:\PS> .\Start-PlayList.ps1 -Source 'Mediathek' -Playlist 'Blues Rock till the cow comes home' -Shuffle

The iTunes application will open automagically and start playing tracks – and you can party till the cow comes…

<#
.SYNOPSIS
    Plays an iTunes playlist.
.DESCRIPTION
    Opens the Apple iTunes application and starts playing the given iTunes playlist.
.PARAMETER  Source
    Identifies the name of the source.
.PARAMETER  Playlist
    Identifies the name of the playlist.
.PARAMETER  Shuffle
    Turns shuffle mode on (otherwise the playlist's current shuffle setting is left unchanged).
.EXAMPLE
   C:\PS> .\Start-PlayList.ps1 -Source 'Library' -Playlist 'Party'
.INPUTS
   None
.OUTPUTS
   None
#>
[CmdletBinding()]
param (
    [Parameter(Mandatory=$true)]
    $Source
    ,
    [Parameter(Mandatory=$true)]
    $Playlist
    ,
    [Switch]
    $Shuffle
)

try {
    $iTunes = New-Object -ComObject iTunes.Application -ErrorAction Stop
}
catch {
    Write-Error 'Failed to create the iTunes COM object. Download and install Apple iTunes.'
    return
}

$src = $iTunes.Sources | Where-Object {$_.Name -eq $Source}
if (!$src) {
    Write-Error "Unknown source - $Source"
    return
}

$ply = $src.Playlists | Where-Object {$_.Name -eq $Playlist}
if (!$ply) {
    Write-Error "Unknown playlist - $Playlist"
    return
}

if ($Shuffle) {
    if (!$ply.Shuffle) {
        $ply.Shuffle = $true
    }
}

$ply.PlayFirstTrack()

[System.Runtime.InteropServices.Marshal]::ReleaseComObject([System.__ComObject]$iTunes) > $null
[GC]::Collect()

Get-Up, Get-IntoIt, Get-Involved…

Hello, World!

OMG, I haven’t posted anything since June 2013 when I was so enthusiastic about the upcoming DSC feature in PowerShell 4. What happened meanwhile?

Get-Up

These days I feel that we’re facing a new era in the IT consulting business. Here in Germany, a growing number of companies request automation solutions; increasingly, automation is a core requirement in calls for bids. Folks, I think we’re just about to enter the Golden Age of IT Automation. Finally! Automation will become a commodity. I am so excited I just can’t hide it.

Get-IntoIt

Why am I so happy? I’ve been an automation guy since the mid-nineties, when I entered the IT business. I learned to get the most out of batch files and made extensive use of other scripting languages like VBScript, as well as various tools, in order to design automation frameworks for several purposes. After I moved into the IT consulting business, I didn’t lose my affinity for automation. From 2008 on, I was part of a team that built a PowerShell-based configuration management framework. With this framework we were able to help huge enterprises fully automate the installation and configuration of their Citrix farms, for example. It kept hundreds of servers in their defined state. Furthermore, it separated the business logic from the configuration logic, so it was easy to build and maintain identical environments for Test, UAT, and Prod, for example. Pretty similar to Chef or Puppet, but for “the other OS”, that is, Windows ;-)

Get-Involved, Get-Involved, Get-Involved…

And now? Nowadays, the core of such a framework is built directly into PowerShell and is called Desired State Configuration (DSC). This is so cool. Just build your solution around DSC.
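For a first impression, here is a minimal DSC sketch (the node name, paths, and chosen resources are illustrative assumptions, not part of any particular framework): a configuration that keeps a folder present and a service running, compiled to a MOF and applied locally.

# Minimal DSC sketch - node name, paths, and resources are illustrative only.
Configuration ServerBaseline
{
    Node 'Server01'
    {
        # Make sure a scripts folder exists.
        File ScriptsFolder
        {
            DestinationPath = 'C:\Scripts'
            Type            = 'Directory'
            Ensure          = 'Present'
        }
        # Keep the Print Spooler service running.
        Service Spooler
        {
            Name        = 'Spooler'
            State       = 'Running'
            StartupType = 'Automatic'
        }
    }
}

# Compile the configuration to a MOF file and apply it.
ServerBaseline -OutputPath 'C:\DSC\ServerBaseline'
Start-DscConfiguration -Path 'C:\DSC\ServerBaseline' -Wait -Verbose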

There’s more to come…

Pit

P.S.: The title of this post is freely adapted from James Brown (RIP)

Start the Windows Update Service Depending on Citrix PVS Disk Mode

A Microsoft Windows installation that is properly optimized to run from a Citrix PVS vDisk in standard mode won’t start the Windows Update service or a couple of other background services that alter the system. Starting these services makes no sense because PVS redirects any write access to the write cache, which is volatile. For maintenance, the system needs to be started from the vDisk in a writeable state (nowadays that is a maintenance version, i.e. a VHD snapshot of the vDisk). To let the system pull updates from Microsoft WSUS, the Windows Update service needs to be configured to start. After the maintenance tasks, as part of re-sealing the vDisk, the service is disabled again through the PVS TargetOSOptimizer.exe utility.

How about a startup script that configures and starts the Windows Update service automatically dependent on the vDisk mode? Here we go:

@ECHO OFF
SETLOCAL
REM Query the PVS personality data for the write cache type.
REM A value of 0 means the vDisk is writeable (private mode or maintenance version).
SET PrivateOrMaintenance=
FOR /F %%i IN (
  '"%ProgramFiles%\Citrix\Provisioning Services\GetPersonality.exe" $WriteCacheType /o'
) DO (
  IF %%i EQU 0 SET PrivateOrMaintenance=Y
)
IF NOT DEFINED PrivateOrMaintenance GOTO :END
REM The vDisk is writeable: if the Windows Update service is disabled,
REM configure it to start automatically and start it right away.
sc.exe qc wuauserv | find.exe "START_TYPE" | find.exe "DISABLED" && (
  sc.exe config wuauserv start= auto
  sc.exe start wuauserv
)
REM
REM Add more here...
REM
:END
ENDLOCAL
EXIT /B

So, what happens here? Basically, the batch file uses the so-called personality data. In the course of booting from the vDisk, PVS injects this data into a file called Personality.ini in the root directory of the vDisk file system. The script leverages a command line tool called GetPersonality.exe to retrieve the value of $WriteCacheType. A value of 0 indicates that the vDisk is writeable (private mode or maintenance version); thus, the script configures the Windows Update service to start automatically and starts it. Additional reading: Managing Target Device Personality.

How to use it? Save the script as a batch file on the vDisk, for example in a scripts folder, and configure it to run at system startup (Windows Task Scheduler, LGPO, whatever). An example for the Task Scheduler option is shown below.
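If you go the Task Scheduler route, a one-liner like the following does the job (the task name and script path are assumptions; run it once from an elevated prompt on the master target device):

schtasks.exe /Create /TN "PVS Windows Update Startup" /SC ONSTART /RU SYSTEM /TR C:\Scripts\StartWindowsUpdateByDiskMode.cmd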

Disclaimer: I hope that the information in this post is valuable to you. Your use of the information contained in this post, however, is at your sole risk. All information on this post is provided “as is”, without any warranty, whether express or implied, of its accuracy, completeness, fitness for a particular purpose, title or non-infringement, and none of the third-party products or information mentioned in the work are authored, recommended, supported or guaranteed by me. Further, I shall not be liable for any damages you may sustain by using this information, whether direct, indirect, special, incidental or consequential, even if I have been advised of the possibility of such damages.

Convert Citrix PVS Mcli-Get Output To Objects

Back in late 2009, I wrote a series of posts about the Citrix Provisioning Services PowerShell snap-in. Three and a half years later, even in the latest version of PVS the cmdlets still return structured text output instead of “real” objects. I’m still hoping that Citrix will provide a PVS module/snap-in that follows the common PowerShell standards.

Anyway, today I want to share a generalized version of my function that converts a text array (Mcli-Get output) into PowerShell/.NET objects. For more background information and an explanation of how the function below works, read my earlier blog post Citrix Provisioning Services 5.1’s PowerShell Interface, Part III.
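To give you an idea of what the function parses: Mcli-Get prints a “Record #n” header per object, followed by name/value lines indented by two spaces. The excerpt below is purely illustrative (field names and values are made up):

Record #1
  diskLocatorName: XenApp65
  storeName: Store1
  enabled: 1
Record #2
  diskLocatorName: Win7Desktop
  storeName: Store1
  enabled: 1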

function ConvertTo-PvsObject
{
    <#
    .SYNOPSIS
        Converts the output of Mcli-Get from text array to regular objects with properties.

    .DESCRIPTION
        The Citrix Provisioning Services cmdlets return text arrays instead of .NET objects.
        This function takes the output of a given Mcli-Get command and turns it into
        "PVS objects" with properties.

    .PARAMETER InputObject
        The output of a Mcli-Get command

    .EXAMPLE
        PS C:\> Mcli-Get DiskInfo | ConvertTo-PvsObject

    .EXAMPLE
        PS C:\> $diskinfo = Mcli-Get DiskInfo
        PS C:\> ConvertTo-PvsObject $diskinfo
    #>

    [cmdletBinding(SupportsShouldProcess=$False)]
    param (
        [Parameter(Mandatory=$true, Position=0, ValueFromPipeline=$true)]
        $InputObject
    )

    process {
        switch -regex ($InputObject) {
            "^Record\s#\d+$" {
                # A new record starts: emit the previous one (if any) and reset.
                if ($record.Count) {
                    New-Object PSObject -Property $record
                }
                $record = @{}
            }
            "^\s{2}(?<Name>\w+):\s(?<Value>.*)" {
                # Collect an indented "name: value" line into the current record.
                $record.Add($Matches.Name, $Matches.Value)
            }
        }
    }

    end {
        # Emit the last record.
        if ($record.Count) {
            New-Object PSObject -Property $record
        }
    }
}


Login VSI Benchmarking Suite Reaches New Maturity Level

Login Virtual Session Indexer, the core product of my Dutch friends at Login VSI B.V., reaches version 4.0. It will be available from tomorrow, May 2, 2013.

How time flies! I clearly remember how, sometime in 2008/2009, a small team of brainiacs in the lab of Login Consultants started to build the first version of Login VSI. Their intention was to finally address the increasing demand in VDI and SBC projects for a tool that helps to measure the performance, impact, and scalability of different infrastructure options. The need for such a benchmarking tool was really urgent because other existing tools were (and are) too expensive, too complicated, and not vendor-independent. After one or two years of development Login VSI made its breakthrough: shortly after a free Express edition of the tool was made available, it was downloaded thousands and thousands of times and thus became a well-known benchmarking tool for VDI and SBC environments. Quite rightly, Login VSI claims to be the “de facto industry standard”. As of this blog post, Login VSI has been adopted by vendors, system integrators, and service providers, and perhaps eventually by you? (See also the list of Whitepapers based on testing with Login VSI.)

Enough storytelling. What’s new in this major release of Login VSI? In a nutshell: it is easier to set up and integrate; it is easier to create tests; and it simulates real-world users in a more realistic way. Login VSI is better than ever! A lot of effort has been put into the new release, and it is improved in every respect, from A to Z.

Since a picture paints a thousand words, I’ve prepared a couple of screenshots below. Skimming over the gallery you will notice that Login VSI v4 has a new, more intuitive GUI, which includes a wizard that helps to create and configure tests, an integrated workload editor, and a new dashboard that displays real-time test results and progress. So much for the outer appearance. But what about the inner values? Version 4 introduces a new meta language for easier workload customization. To improve test realism, the duration of the standard workload loop has been increased from 14 to 48 minutes, and the datasets now offer 1,000 different documents per type, more and larger websites, and a video library in every format. This ensures a far more realistic simulation of real-world variety in data usage.

Get-Acl: Show Users Who Are Member Of (Nested) Groups, #PowerShell

From time to time, customers ask me to report on file access rights from the user account perspective, meaning a summary of the allowed and denied file system accesses per user. Typically, administrators implement role-based access control (RBAC) using nested groups. Nested groups simplify the management of file system access and security audits: individual user accounts only acquire access through group memberships that correspond with their business role (see also AGDLP). So much for the theory! Over time, more and more exceptions prove the rule, and user accounts acquire access to file system resources outside the RBAC concept. A few lines of PowerShell can help to distinguish between the good and the bad apples.

The function below, Get-ResolvedAcl, leverages Get-Acl (to list the explicit allow/deny access entries) together with the ActiveDirectory module’s cmdlets Get-ADObject (to identify the object class behind an access control entry) and Get-ADGroupMember (to list the members of a group). Furthermore, a nested helper function called Get-ADNestedGroupMember calls Get-ADGroupMember recursively in order to identify user accounts in nested groups.

function Get-ResolvedAcl
{
    [cmdletBinding(SupportsShouldProcess=$false)]
    param (
        [Parameter(Position=0, Mandatory=$true)]
        [string]
        $Path
    )
    
    function Get-ADNestedGroupMember ($Group)
    {
        # Recurse into nested groups and emit each user together with the group it came from.
        Get-ADGroupMember -Identity $Group | ForEach-Object {
            $ADObjectName = $_.name
            switch ($_.ObjectClass) {
                'group' {
                    Get-ADNestedGroupMember -Group $ADObjectName
                }
                'user' {
                    @{UserName=$ADObjectName;Group=$Group}
                }
            }
        }
    }
    
    Import-Module -Name ActiveDirectory
    
    # Evaluate only the explicit (non-inherited) access control entries of the given path.
    (Get-Acl -Path $Path).Access | Where-Object {$_.IsInherited -eq $false} | ForEach-Object {
        [string]$Trustee = $_.IdentityReference
        $UserDomain = $Trustee.Split('\')[0]
        $SamAccountName = $Trustee.Split('\')[1]
        $ADObject = Get-ADObject -Filter ('SamAccountName -eq "{0}"' -f $SamAccountName)
        switch ($ADObject.ObjectClass) {
            'group' {
                $NestedUser = Get-ADNestedGroupMember -Group $SamAccountName
                if ($NestedUser) {
                    foreach ($User in $NestedUser) {
                        $UserName = '{0}\{1}' -f $UserDomain, $User.UserName
                        $GroupName = $User.Group
                        @{
                            UserName=$UserName;
                            GroupName=$GroupName;
                            DirectAccess=$false;
                            FileSystemRights=$_.FileSystemRights;
                            AccessControlType=$_.AccessControlType
                        }
                    }
                }
            }
            'user' {
                @{
                    UserName=$Trustee;
                    GroupName='';
                    DirectAccess=$true;
                    FileSystemRights=$_.FileSystemRights;
                    AccessControlType=$_.AccessControlType
                }
            }
        }
    }
}
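A quick usage sketch (the share path is an assumption; the function emits hashtables, so they are cast to objects before formatting):

Get-ResolvedAcl -Path '\\FileServer01\Projects' |
    ForEach-Object { [pscustomobject]$_ } |
    Format-Table UserName, GroupName, DirectAccess, FileSystemRights, AccessControlType -AutoSize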


Set AD User Profile Paths (Roaming and RDS) #PowerShell

Hello,

with this post I show how to set the paths of the roaming profiles and the Remote Desktop Services (RDS) profiles, formerly known as Terminal Services (TS) profiles, for a set of Microsoft Active Directory user accounts.

The easy part is the Roaming Profile path. You just need to leverage the ActiveDirectory PowerShell module from RSAT:

Import-Module ActiveDirectory
$Filter = <define a filter for Get-ADUser here>
$Path = <define the roaming profile path here>
Get-ADUser -Filter $Filter | ForEach-Object {
    Set-ADUser $_ -ProfilePath $Path
}
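For illustration, a filled-in version might look like this (the OU, server, and share names are pure assumptions): every user in a “Sales” OU gets a per-user roaming profile folder.

Import-Module ActiveDirectory
Get-ADUser -Filter * -SearchBase 'OU=Sales,DC=contoso,DC=com' | ForEach-Object {
    Set-ADUser $_ -ProfilePath ('\\FileServer01\Profiles$\{0}' -f $_.SamAccountName)
}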

The RDS profile path is not quite as straightforward, but it’s still easy: you just need to leverage ADSI in order to set it.

Import-Module ActiveDirectory
$Filter = <define a filter for Get-ADUser here>
$Path = <define the RDS profile path here>
Get-ADUser -Filter $Filter | ForEach-Object {
    $ADSI = [ADSI]('LDAP://{0}' -f $_.DistinguishedName)
    try {
        $ADSI.InvokeSet('TerminalServicesProfilePath',$Path)
        $ADSI.SetInfo()
    }
    catch {
        Write-Error $Error[0]
    }
}


WSUS for XenApp?

“Is there something like WSUS for XenApp, or is it possible to extend Microsoft WSUS in order to download and apply updates and hotfixes to our XenApp servers?” No. But you can try my simple XenApp hotfix deployment framework that I want to share with you today.

Introduction

While (cloud) service providers have evolved (or are evolving) an ideal approach to maintaining their XenApp server farms, there are countless companies of all sizes running one or more XenApp farms without a proper maintenance concept. Plenty of these companies do have service windows with corresponding tasks, but quite often the implementation lacks class, meaning that in the worst case the admin updates each and every damned (sorry) XenApp server by hand.

A year ago or so, I developed a bunch of batch files that form a simple script framework for automated XenApp hotfix installation. My goals were, on the one hand, to build a solution that supports the broadest possible range of XenApp farms and, on the other hand, to build a “learning” framework that identifies new hotfixes in the source folder tree on its own (and relieves the admin from having to script). Therefore I chose batch scripting and INI files instead of PowerShell and XML. Furthermore, my solution neither relies on MFCOM (the API for legacy XenApp before version 6) nor uses the XenApp PowerShell cmdlets. It just leverages cmd.exe and some standard commands that you’ll find on every Windows Server with Terminal Services or Remote Desktop Services. In short, the XenApp hotfix deployment framework is designed to update XenApp 4.5/5.0/6.0/6.5 on Windows Server 2003/2003 R2/2008/2008 R2.

Simple means basic. Please don’t expect a super-duper all-rounder solution. The script framework does care about sessions (if you want), gives them some (configurable) grace time, and repeatedly sends a warning message. It disables logons as well. This may seem like quite a lot, but in a perfect world you would want a solution that monitors itself and, for example, stops processing on all remaining servers in case of a fault.

Requirements

The XenApp hotfix deployment framework just needs storage space on a central folder share. The scripts and related files on their own occupy less than 200 KB, but you need to take the hotfix files into account, as they’ll be stored inside the framework’s folder structure as well.

Strictly speaking, a dedicated Active Directory account that runs the maintenance process is not really required. But it is recommended to create such an account, especially if you opt for an automated invocation of the maintenance process, for example via a scheduled task.

Give the account that runs the maintenance appropriate access to the above-mentioned share. Of course, the account needs administrative access on the XenApp servers to be able to install the hotfixes successfully.

That’s all, nothing more required from a technical perspective.

Overview

The XenApp hotfix deployment framework consists of a main script, a bunch of library scripts, one or more config files, a folder structure, and, last but not least, a naming convention with a hidden sense (more on that below).

The main script, XenAppMaintenance.cmd, initializes and controls the progress of the maintenance work.

The config file, Settings.ini, includes changeable global settings that configure logging and the session warning timeframe, for example. Additional config files (one per XenApp server) can be placed alongside Settings.ini in order to override the common settings for whatever reason or purpose.

The folder structure separates the framework components. This helps the admin to easily locate a config file or to save a new hotfix, for example.

The naming convention for additional config files, library scripts, and hotfix subfolders makes up an important part of the framework. For example, part of a hotfix subfolder’s name is tied to the library script that the framework shall use in order to install the hotfix located in that folder.

Setup

This section outlines the steps to configure the XenApp hotfix deployment framework for your XenApp Farm:

  • Create an Active Directory user account (optional).
  • Create a directory on a file server and share it.
  • Apply appropriate NTFS and share security.
  • Download and unzip the XenApp hotfix deployment framework to the shared folder.
  • Open the file \Config\Settings.ini in order to adjust settings (optional).
  • Download hotfixes from www.citrix.com. For each hotfix, create a subdirectory in the Source folder of the framework. A hotfix subdirectory name consists of three parts separated by underscores: a three-digit installation order number, an action library script, and optionally a comment. For example, the subdirectory “010_InstallMsp_CTX126679” stands for order number 10, action script “InstallMsp” to install the hotfix, and article number CTX126679. (See the illustrative folder layout below.)
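A purely illustrative sketch of the resulting folder layout (the share name is an assumption and the second hotfix entry is just a placeholder; the actual structure comes with the downloaded framework):

\\FileServer01\XenAppHotfixFramework\
    XenAppMaintenance.cmd                  (main script)
    Config\Settings.ini                    (global settings; optional per-server INI files go alongside)
    Source\010_InstallMsp_CTX126679\       (one subfolder per hotfix, named as described above)
    Source\020_InstallMsp_<comment>\       (a hypothetical second hotfix, installed after the first)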

Invoke Maintenance

This is simple. Just double-click XenAppMaintenance.cmd or create a scheduled task.



Automation-as-a-Commodity