
PowerShell Profile In Da Cloud

October 19, 2015

If you want to keep the same PowerShell profile on more than one Windows computer, how about transferring the relevant common parts of the profile to a file share or even the cloud? Actually, that’s a no-brainer!

Below, I outline my approach in a few broad strokes.

1. Identify the parts of both the console and the ISE profile that you want to share with all computers.
2. Create a “Documents\WindowsPowerShell” folder in the root of your (cloud) storage mount point.
3. Within that folder, save the profile code related to console sessions as “Microsoft.PowerShell_profile.ps1” and the ISE profile code as “Microsoft.PowerShellISE_profile.ps1”.
4. In the local profile scripts, replace the “outsourced” profile code with a reference to its cloud-based equivalent:
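Here’s a minimal sketch of what that reference could look like in the local profile script (the OneDrive path below is an assumption; point it at your own cloud storage mount point):

```powershell
# Sketch: store the full path of the cloud-based profile in a global variable ...
$Global:CLOUDPROFILE = Join-Path -Path $HOME -ChildPath 'OneDrive\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1'

# ... and dot-source it if it exists, so its content runs in the current session
if (Test-Path -Path $Global:CLOUDPROFILE) {
    . $Global:CLOUDPROFILE
}
```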

Apart from dot-sourcing an existing external profile script, the above code initializes a global CLOUDPROFILE variable with the full file name of the cloud-based profile script. Thus it’s very easy to access that file later on, for editing purposes and the like.

The next code snippet works with ISE only and enables you to skip profile loading by holding down the left CTRL key:
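A minimal sketch of the idea (it relies on WPF types that are already loaded in ISE, which is why it doesn’t work in the console host):

```powershell
# Place this at the top of the ISE profile: if the left CTRL key is held down
# while ISE starts, skip the rest of the profile.
if ([System.Windows.Input.Keyboard]::IsKeyDown([System.Windows.Input.Key]::LeftCtrl)) {
    Write-Host 'Left CTRL key detected - skipping the remainder of the profile.'
    return
}

# ... regular profile code continues here ...
```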

Hope this helps

ISESteroids 2.0 Random Features

October 14, 2015

A few months ago, I wrote only a short note about the (at that time) upcoming release of the PowerShell module ISESteroids. More precisely, ISESteroids is an Add-On for PowerShell ISE by PowerShell MVP Tobias Weltner. This time I’ll highlight some randomly chosen features.

First start

Ok, that is not really a feature. You can start ISESteroids by entering Start-Steroids. Owing to the Module Auto-Loading feature, this will load the ISESteroids module. But mind the “sensitivity” of Module Auto-Loading! It already loads a module behind the scenes as soon as you “touch” it with Get-Command, Get-Help, or Tab Expansion. So be prepared for ISESteroids to load when you invoke commands like Get-Command -Module ISESteroids, Get-Help Start-Steroids, and such…
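For illustration (assuming ISESteroids is installed in one of your module paths), each of the following lines is enough to pull the module into the session:

```powershell
# These trigger Module Auto-Loading behind the scenes:
Get-Command -Module ISESteroids
Get-Help Start-Steroids

# Explicitly starting it, of course, loads the module as well:
Start-Steroids
```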

Expert Level

In a nutshell, ISESteroids’ objective is to assist you in writing better PowerShell code more quickly. As ISESteroids is so packed with features, however, it could have the opposite effect and confuse beginners in particular. When you start ISESteroids for the first time, it will ask for the Expert Level. Whatever you choose there, you can later set it to a more appropriate level by choosing the corresponding option in the “Expert Level” menu.

Screenshot: Selecting the Expert Level


Code Snippets

Yes, the concept of code snippets is already included in ISE. It’s a bit half-hearted, though. ISESteroids puts ISE’s snippet mechanism to the use it deserves. If you press the default snippet key sequence CTRL+J, ISESteroids opens the snippet selector. First surprise: the snippets are organized in folders. Selecting a folder leads to the corresponding code snippets. (Btw, with the backspace key you move up in the folder structure.) Second surprise: the snippet selector allows for adding new folders and new snippets.

Screenshot: Selecting a code snippet

Screenshot: Snippet Manager


Compatibility

If you deal with the latest PowerShell version only, be happy in your bubble. My field experience differs regarding the PowerShell versions my scripts have to support. PowerShell 1.0 has really faded away by now, but I still stumble over version 2.0, for example. ISESteroids helps you to handle, or rather to prevent, compatibility issues by marking code that isn’t compatible with the targeted PowerShell version. So, be sure to check the appropriate option in the “Compatibility” menu. Apart from version-related compatibility, ISESteroids can mark commands that don’t ship with PowerShell by default, so you have a sort of visual reminder of your script’s requirements.

Screenshot: Selecting compatibility

Risk Management

While you’re scripting, ISESteroids by default checks/analyzes your code against pre-defined risks. When a potential risk is detected, you’ll be notified unobtrusively: ISESteroids changes the risk status indicator from green to yellow or red. Keep an eye on that indicator. You’ll find it in ISE’s status bar, and you can click it to enable/disable Autochecking, to approve the script, etc. Furthermore, there’s an option to manage black/white lists. (Note that the trial version doesn’t allow you to edit the pre-defined rules.)

Screenshot: Risk Management settings

Screenshot: Risk assessment result


ScriptMap

Starting from, say, 100 lines of code you find yourself scrolling more and more. If you deal with rather huge scripts on a regular basis, you’ll definitely like ISESteroids’ ScriptMap feature. Turned on, it shows a preview of the entire script. If you move the mouse pointer over the ScriptMap area, it acts as a magnifying glass that helps you to identify the code region in question. A single click navigates to the chosen code region.

Screenshot: Navigating with ScriptMap

Navigating back and forth between function definitions and references

Apart from ScriptMap, ISESteroids has more to offer regarding navigation within (huge) script files. Above the definition of a function, ISESteroids displays the number of references to this function (within the same file).

Clicking on this information will navigate to the references:

If you want to navigate (back) to the function definition, right-click the reference to the function in question and choose “Go To Definition”:

CloneView and split screen

Again, if you regularly deal with larger scripts, ISESteroids helps you to minimize the ongoing effort of navigating back and forth in the code. CloneView displays the current editor in a detached external window. Just right-click anywhere within the editor you want to clone and choose “Open CloneView”. The split screen feature divides the current editor into two sections, so you’re able to simultaneously view/work on different sections of a single script.

Screenshot: Split editor window

Navigation Bar

OMG, yet another feature for navigating huge scripts? Yes, and far more than that. If you turn on the Navigation Bar, at first sight you can both search text and instantly navigate to any function within the loaded script by selecting it from the list.

Screenshot: Selecting a function to navigate to from the Navigation Bar

Beyond that, the Navigation Bar…

  • offers access to a couple of snippets and templates,
  • enables you to create a function from selected code,
  • and enables you to export a selected function to a new/existing PowerShell module.

Screenshot: Selecting snippets and templates from the Navigation Bar

Screenshot: Create a function from selected code from the Navigation Bar

Screenshot: Export a selected function to a new PowerShell module from the Navigation Bar

File Version History

To wrap up, ISESteroids has a rather casual file versioning feature. For those who care about version control but don’t want to get worked up over Git, SVN, CVS, TFS, etc., ISESteroids can keep a file version history for a given file. (Behind the scenes it maintains a zip archive with all the past major and minor versions.)

Screenshot: File version history feature

Bottom line: Give it a try!

Scripting Street Knowledge

October 8, 2015

Whenever it comes to building a scripted IT automation solution that goes beyond the scope of a few commands in a single script file, you need more than sufficient technical knowledge and scripting skills: beyond that, you need to approach the task in a way that helps you to manage complexity and to plan ahead. With this post I want to raise your awareness of some General Principles for the Design of Scripted IT Automation Solutions that help you to master the situation. Here we go…

Nip it in the bud!

Don’t underestimate the beginnings. A quick-and-dirty approach isn’t a bad way per se; up to a certain point it’s a good way to go (at least from a cost-benefit perspective). Exceeding that “certain point”, though, can at worst lead to chaos: if you follow the quick-and-dirty approach, you’ll succeed to some degree and end up fixing/updating, adapting, bolting on new features, and partly rewriting your solution over time. The day will dawn when you want to rebuild the entire solution from scratch because it has evolved into something hardly manageable. It’s difficult to walk in mud, so to speak. As things usually turn out, there’s no time/budget for tasks like this, so you must drink as you have brewed.

And the moral of this story: never ever default to quick-and-dirty to begin with. It’s very important, possibly even the most important thing, to mind the early stages, because later there will hardly be a more advantageous moment to put things (back) on the right track. So, be sure to make time for thinking and planning, and think it through to the end! Otherwise you might draw the short straw.

Whenever you’re confronted with a new scripting challenge, however minor it may seem, ask yourself:

  1. Is the requested solution already a feature of the scripting language?
  2. Have co-workers or I ever created a solution like the requested one?
  3. Could co-workers or I possibly re-use the solution or parts thereof?

Don’t get me wrong regarding the first question, but quite often people tend to rebuild existing commands or features due to a lack of knowledge or preparation time. So, be sure to make time to verify that you’re really dealing with something that doesn’t already exist!

If the answer to the second question is “Yes” you definitely should figure out whether it’s worth the effort to adopt and adapt that solution. Here, likewise, the point is that you don’t want to reinvent the wheel.

If the answer to the third question is “Yes” you should by no means opt for quick-and-dirty; instead, design the solution for repeated application or rather reusability.

As time goes on, you’ll get used to re-using your own work as partial solutions over and over again. At the end of the day you’ll realize that quick-and-dirty hardly ever would have been a suitable approach.

Apart from the above-mentioned questions, you should try to get to the bottom of the requested automation solution. Too often it turns out that the original request just covered the tip of the iceberg rather than the big picture. The big picture is exactly what you need, so be prepared to clarify it. Ideally, you carry out a scoping session to get the big picture and write down a scope statement that exactly defines the requested scripting solution.

Divide and conquer

Facing a more sophisticated scripting challenge raises the question of how to tackle that task from a scripting perspective. How do you handle complexity? In computer science, there is an algorithm design paradigm called “divide and conquer” (D&C) that works by recursively breaking down a given problem into sub-problems, solving the (plain) sub-problems, and combining these to solve the original (complex) problem. I highly recommend adopting the D&C approach as a general scripting principle: break down a scripting challenge into as many separate scripting tasks as possible; then solve these scripting tasks individually; and finally combine these partial, self-contained solutions into a complete solution. To put it another way, write a function for each single task; organize/bundle the functions in libraries/modules; leverage these functions to solve the problem, as sketched below.
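Here’s a deliberately tiny sketch of the idea (server names and the health check are placeholders): one small function per sub-task, plus a few lines that combine the partial solutions:

```powershell
function Get-ServerList {
    # Sub-task 1: obtain the list of targets (hard-coded here only for the sake of the example)
    'server01', 'server02'
}

function Test-ServerHealth {
    # Sub-task 2: check a single server and pass back a result object
    param([string]$ComputerName)
    [pscustomobject]@{
        ComputerName = $ComputerName
        Online       = Test-Connection -ComputerName $ComputerName -Count 1 -Quiet
    }
}

function Send-HealthReport {
    # Sub-task 3: present/forward the combined results
    param([object[]]$Results)
    $Results | Format-Table -AutoSize
}

# Combine the partial, self-contained solutions into the complete solution
$results = Get-ServerList | ForEach-Object { Test-ServerHealth -ComputerName $_ }
Send-HealthReport -Results $results
```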

The basis for making many sub-solutions work together as a good “working team” is consistency and, if you will, micromanagement. Essentially, it’s about establishing a scripting framework that allows for and ensures information flow between the individual tasks. You need to define a set of rules, let’s call it a scripting policy, that covers important details such as:

  • How functions deal with input
  • How functions pass back results (output)
  • How functions handle errors
  • How functions support testing and debugging/troubleshooting scenarios
  • Logging
  • Naming conventions
  • You name it

Again, avoid reinventing the wheel! Be sure to check what your preferred scripting language has to offer with regard to your scripting policy and leverage these features. Blueprint a mandatory function template that incorporates all your rules and use it consistently to embed the effective “payload script code” safely within your scripting framework.
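A minimal sketch of such a function template; the conventions baked in here (parameter validation, object output, consistent error handling) are examples, not prescriptions:

```powershell
function Invoke-SampleTask {
    [CmdletBinding()]
    param(
        # Rule: validate incoming information as early as possible
        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [string]$InputValue
    )

    # Rule: support troubleshooting via the verbose stream
    Write-Verbose ("{0}: processing '{1}'" -f $MyInvocation.MyCommand.Name, $InputValue)

    try {
        # Payload code goes here; pass back results as objects, not as formatted text
        [pscustomobject]@{
            Input  = $InputValue
            Result = 'OK'
        }
    }
    catch {
        # Rule: handle errors consistently (log, translate, or re-throw as your policy demands)
        Write-Error -ErrorRecord $_
    }
}
```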

I must admit, though, that establishing such a scripting framework for the divide and conquer approach will inflate your solution. You need to do it anyway. However, you can keep it within reasonable limits if you don’t overdo things!

Expect Failure

While preparing a dish, experienced chefs are distinguished from amateur cooks by frequent tasting. Put simply: the pros expect failure and therefore continuously taste to identify a need for improvement/adjustment as quickly as possible. They retain total control. I highly recommend adopting the chef’s approach as another general scripting principle; in other words, leave nothing to chance.

Given that, according to the D&C principle, you write a function for each individual task, you should do this in fear of losing control, if you will. It’s about micromanagement! Mind each damned detail and ask yourself what could go wrong with it. No twilight zones allowed. Validate incoming information, test for connectivity, whatever. Got it? Better to double-check each detail than to rely on a fortunate series of events.

Always script with testing and debugging scenarios in mind. Take care to be one step ahead and insert some debug messages by default that output the parameter values passed to a function, as sketched below. Someday, trivialities like these will make your (or a co-worker’s) day.
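A sketch of that habit (function and parameter names are just examples): dump the received parameters to the debug stream so a later troubleshooting session can replay exactly what came in:

```powershell
function Get-Something {
    [CmdletBinding()]
    param(
        [string]$Name,
        [int]$Count = 1
    )

    # Output the parameter values that were actually passed in
    Write-Debug ("{0} called with: {1}" -f $MyInvocation.MyCommand.Name,
        (($PSBoundParameters.GetEnumerator() | ForEach-Object { '{0}={1}' -f $_.Key, $_.Value }) -join '; '))

    # ... payload ...
    '{0} x{1}' -f $Name, $Count
}

$DebugPreference = 'Continue'   # show debug messages without prompting
Get-Something -Name 'Demo' -Count 2
```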

Design for Change

This one is about separating data from logic. Never ever mingle logic and data. This is essentially the key to flexibility and instant reusability. Script logic should only “know” how to process data, while the actual values should come from separate resources like SQL tables, XML/JSON/CSV files, you name it. Don’t even dream of hard-coded values!

With separation of data and logic it’s almost a no-brainer to set up the solution for another environment.
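A small sketch of the principle (file name and property names are assumptions): the logic only knows how to process the data, while the actual values live in an external JSON file:

```powershell
# Example content of Deployment.settings.json:
# { "PackageName": "ContosoApp", "Servers": [ "server01", "server02" ] }
$config = Get-Content -Path '.\Deployment.settings.json' -Raw | ConvertFrom-Json

foreach ($server in $config.Servers) {
    # The very same logic serves Dev, Test, or Prod - only the JSON file differs
    Write-Output ("Would deploy package '{0}' to {1}" -f $config.PackageName, $server)
}
```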


To conclude, there’s definitely more to tell, but in the end it’s all about thinking end-to-end and micromanaging. Mind the details!

If you find something missing or see things differently, feel free to contribute your ideas by submitting a comment.

ISESteroids 2.0 is getting off the starting blocks…

April 23, 2015

… and you should give it a try: ISESteroids 2.0

To tell you the truth, up to yesterday I had some kind of prejudice against Tobias Weltner’s ISESteroids, meaning that I considered it a PowerShell ISE Add-On that rather addresses a beginner’s needs.

During the 3rd German ‘PowerShell Community Konferenz 2015’ I changed my opinion. While giving his talks, Tobias casually showcased several features of the upcoming release, and I was won over. ISESteroids not only helps you to produce better PowerShell solutions, it also brings you up to speed, regardless of whether your level is beginner, advanced, expert, guru, whatever.

For example, imagine you’re challenged to write an advanced function with different parameter sets, mandatory parameters, and some optional parameters. With ISESteroids loaded, you can work back to front; in other words, you start by writing the syntax as you want it to be, such as…
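The original snippet isn’t reproduced here, but the idea looks roughly like this (function and parameter names are invented for illustration): you simply jot down the desired call syntax first.

```powershell
# Desired syntax, written down first (two parameter sets, mandatory and optional parameters):
# Get-Inventory -ComputerName <string[]> [-Credential <pscredential>] [-IncludeSoftware]
# Get-Inventory -InputFile <string> [-IncludeSoftware]
```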

After that you just need to highlight the code, right-click, select the ISESteroids action to create a function, and – voilà – you get a neatly written skeleton for a function that exactly matches the syntax specifications you made.

Another great feature I saw in action was a WPF GUI builder that requires/interacts with Visual Studio.

I could continue listing my notes from memory. But it still would be just the tip of the iceberg. ISESteroids is packed with features you need to discover while working with it. So do I.

Enums in Windows PowerShell Less Than Version 5.0

January 8, 2015

Maybe you’ve noticed that the upcoming version of Windows PowerShell, 5.0, will make Enumerators (Enums) very easy to create with the new enum keyword. With this post I share an approach to create enums in PowerShell 4.0 and lower as well.

(If you know what an Enumerator is, you can skip this section.) Enums help you to deal with rather small ranges of integer values (each value gets a name) and, even more importantly, they simplify programming robust solutions. Suppose you have to deal with different environments, for example Dev, Test, Acceptance, and Prod. And let’s say that each environment is represented by an int value (thus, 0 to 3 represents Dev to Prod). What happens if you assign the value 4 by mistake? For PowerShell it’s fine, because 4 is a valid int value. Therefore, this error will remain undetected at the scene and, according to Murphy, reveal its dark energy at the worst possible moment. You get the idea, I hope. It’s no fun to narrow down such problems. How do you prevent such failures? You could mess around with if statements and -lt, -gt, -eq, for example. Or you make use of, guess what, an Enum. If you have an Enum type for the afore-mentioned environments, PowerShell will refuse to assign any value outside the range 0..3 to a variable of this type and throws an error right at the root cause. That’s why I have liked to use Enums ever since PowerShell 1.0.

In Windows PowerShell 4.0 and below, Enums are created as follows:
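One common approach is to compile the enum from a small piece of C# source via Add-Type (type and value names below are just examples):

```powershell
Add-Type -TypeDefinition @'
public enum EnvironmentType
{
    Dev,
    Test,
    Acceptance,
    Prod
}
'@
```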

Now, play with it (that’s how I like to learn stuff, btw):
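For example, continuing with the EnvironmentType example from above:

```powershell
[EnvironmentType]$stage = 'Test'
$stage                               # Test
[int]$stage                          # 1

[EnvironmentType]$stage = 3
$stage                               # Prod

[Enum]::GetNames([EnvironmentType])  # Dev, Test, Acceptance, Prod
```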

Now, let’s get dirty…
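Both of the following assignments fail, because the values lie outside the enum’s range:

```powershell
[EnvironmentType]$stage = 4            # not a defined enumeration value -> error
[EnvironmentType]$stage = 'Production' # not a defined enumerator name  -> error
```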

Btw, did you notice the hint within the error message? PowerShell lists the possible values for you.

Hope this helps

Citrix PVS Image Preparation Script for XenApp 7.x Workloads

January 5, 2015

With this post I share a PowerShell script that prepares the master installation of a XenApp 7.x Worker for imaging with Citrix Provisioning Services: Prepare-XenApp7.ps1.

Due to the fact that Citrix has ported its flagship XenApp to the architecture that was introduced with XenDesktop 5, there’s strictly speaking no need to generalize the PVS vDisk that provides the workload of a XenApp Worker, because it doesn’t contain IMA-related stuff anymore. On the other hand, there’s still room for some optimization steps before putting a XenApp vDisk into production/standard mode. The script automates the following steps:

  • Investigate PVS’s Personality.ini in the root of the system drive in order to determine the disk mode, that is read-write, read-only, or started from local HD
  • Clear Citrix User Profile Manager’s cache
  • Resync time
  • Update GPO settings
  • Clear network related caches (DNS and ARP)
  • Clear WSUS Client related settings
  • Clear event logs
  • Based on the findings in Step 1, suggest a convenient main action, that is either “Exit” (if we’re in maintenance/private w/ read-write vdisk access), or “Invoke ImagingWizard” (if we started from local HD), or “Invoke XenConvert” (reverse imaging scenario w/ read-only vdisk access)

BTW, the script should work for desktop workloads as well but I haven’t tested it so far.

Hope this helps

Latest version on GitHub: Prepare-XenApp7.ps1

How To Backup MS SQL Express Databases? #PowerShell

January 4, 2015

Happy Twenty Fifteen! The first post of the new year deals with a common question I am confronted with from time to time: Do you have a script to back up MS SQL Express? Yes, I do. The script requires the SQLPS PowerShell module, which is installed automatically with newer versions of MS SQL Express. Basically, it simplifies the usage of the module’s Backup-SqlDatabase cmdlet:
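The original script isn’t shown here, but a minimal sketch of the core idea looks like this (instance name and backup folder are assumptions):

```powershell
Import-Module SQLPS -DisableNameChecking

$instance     = '.\SQLEXPRESS'
$backupFolder = 'D:\Backup\SQL'

# Back up every user database of the local SQL Express instance
Get-ChildItem "SQLSERVER:\SQL\$env:COMPUTERNAME\SQLEXPRESS\Databases" | ForEach-Object {
    $backupFile = Join-Path $backupFolder ('{0}_{1:yyyyMMdd_HHmm}.bak' -f $_.Name, (Get-Date))
    Backup-SqlDatabase -ServerInstance $instance -Database $_.Name -BackupFile $backupFile
}
```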

Hope this helps

[Updated] How To Get The Clientname Within A Logon Script? #PowerShell #RemoteDesktopServices #CitrixXenApp

December 2, 2014

This morning, a workmate sought my support regarding an issue that I wasn’t aware of: on a Windows Server 2008 R2 Remote Desktop Session Host you can’t leverage the CLIENTNAME environment variable within a logon script. I stumbled upon a post regarding the same issue and decided to port its VBScript-based solution to Windows PowerShell, and here’s the result:
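The original port isn’t reproduced here; a sketch of one common workaround reads the client name from the per-session “Volatile Environment” registry key instead of the environment variable (the original solution may differ in detail):

```powershell
# On an RDS host, each session stores CLIENTNAME in a sub-key of HKCU:\Volatile Environment
$clientName = Get-ChildItem -Path 'HKCU:\Volatile Environment' |
    ForEach-Object { (Get-ItemProperty -Path $_.PSPath).CLIENTNAME } |
    Where-Object { $_ } |
    Select-Object -First 1

$clientName
```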

Hope this helps

What was, is, and will be new in Windows PowerShell?

October 12, 2014

If you need an overview of what’s new in the upcoming version and what was new in the current and former versions of Windows PowerShell from version 3.0 onward, I recommend Microsoft TechNet’s What’s New In PowerShell.

[Updated] Citrix PVS: Sync Local vDisk Store #PowerShell

September 29, 2014

The other evening, I updated the Sync-PvsLocalStore.ps1 script in order to support multiple Citrix Provisioning Services (PVS) vDisks, due to a customer request.

The purpose of this script is to copy, or rather sync, changed and new Versions of one or more given vDisks between the local Stores within a Farm of two or more PVS servers. Basically, you can think of Sync-PvsLocalStore.ps1 as a wrapper for Robocopy.exe /MIR with some extra brains on top. That is because it is able to detect and exclude a Maintenance Version of a vDisk from the copy process, meaning that the script only spreads out the latest Production and Test versions of a vDisk and doesn’t bloat the stores with Maintenance versions, which typically are work in progress.

The usage is very simple. Look at this example:
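The original example isn’t shown here; a hypothetical call could look like this (only ‘MasterServer’ and ‘MemberServer’ are parameter names mentioned in this post, the remaining names are illustrative assumptions):

```powershell
.\Sync-PvsLocalStore.ps1 -MasterServer 'PVS01' `
                         -MemberServer 'PVS02', 'PVS03' `
                         -StorePath 'D:\vDiskStore' `
                         -vDiskName 'XA7-Worker', 'W2012R2-Desktop' `
                         -SiteName 'Site1' `
                         -StoreName 'Store1'
```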

You need to specify one ‘MasterServer’, one or more ‘MemberServer’, the path of the Store (which needs to be the same on each server), one or more vDisk names, and the names of the corresponding Site and Store. The latter two help the script to identify any Maintenance Version.

Sync-PvsLocalStore.ps1 needs to be run on a system where the PVS Console or rather its command-line interface MCLI.EXE is installed. (The script utilizes MCLI.EXE because to date there’s no advantage in using the PVS PowerShell Snapin.)