Mastering PowerCLI - Sample Chapter
Sajal Debnath
By the end of the book, you will have the required in-depth knowledge to master the art of PowerCLI scripting.
Preface
If you are a system administrator who manages a considerable-sized environment, then I do not need to elaborate on the importance of scripting to you. Scripting was and always will be one of the most important weapons in a system administrator's arsenal. Until very recently, the term scripting typically brought to mind Bash or similar shell scripts, and more advanced languages such as Perl, PHP, or Ruby. PowerShell changed that for me; I love this scripting language for the sheer beauty and power that it presents. If you are coming from a *NIX environment, PowerShell will completely change your perception of scripting. If you are managing a vSphere environment, then besides vRealize Orchestrator, PowerCLI is the most powerful tool available to help you automate the different aspects of a vSphere environment. Even today, if I need to get something done really quickly, I will probably rely on PowerCLI scripting.
In all my years of experience as a working professional, and before that as a student and teacher, I have seen primarily two methods of explanation. The first approach, and the most widely used one, is to take an example problem and solve it while explaining the solution to the student. This way, the student learns how to solve a particular type of problem. The second approach is the one in which a teacher explains the basic logic and principles behind the solution to a problem, and then asks the students to solve the problem all by themselves. As a student, I always found myself struggling with the first approach. Though it was easier to understand, it tended to limit my knowledge to solving only similar problems. Because I lacked an understanding of the underlying logic, I could not solve a new problem most of the time. This happened especially with mathematical problems. It was like being shown a program written in C that implements Dijkstra's algorithm and having it explained how the program was written. Knowing this, will I be able to implement any other algorithm in C, or use Dijkstra's algorithm to my advantage? Probably not. Instead, if someone teaches me the different aspects of the C language and how to write programs using C, then I can use that knowledge to write any program.
You may agree or disagree with me, but I always preferred the second approach, as it worked for me and gave me a better understanding of, and hold on, the topic. So, throughout this book, I have tried to explain all the building blocks of advanced PowerShell and PowerCLI scripting and then provided examples to showcase what I am trying to say. I took this approach in the hope that it will give you a better understanding and clarity of the underlying constructs so that you can build on top of them.
Chapter 10, Using REST APIs, discusses Representational state transfer (REST) APIs
and how PowerCLI can be used to manage the VMware vRealize Automation
environment using REST APIs.
Chapter 11, Creating Windows GUI, discusses how to create a Windows graphical user
interface (GUI) using PowerShell and other tools.
Chapter 12, Best Practices and Sample Scripts, describes PowerShell scripting best
practices. This chapter also covers two sample scripts, one to get a security report
and another to find the capacity of a vSphere environment.
This book tries to cover all or most of the advanced topics of PowerShell and PowerCLI to enable you to master the subject and become a master scripter/tool maker, but at the same time, this book is written from the perspective of a system admin. To achieve this, I have tried to avoid developer jargon and replace it with normal, simple examples. Note that this is a 'mastering PowerCLI' book, not a 'mastering PowerShell' book, so the examples given in this book are from the PowerCLI perspective. You can say that I am looking at PowerShell through the eyes of PowerCLI, and we will cover those topics of PowerShell that will enable us to write production-grade scripts for managing VMware environments.
In this chapter, we will cover the following topics:
PowerShell and PowerCLI basics
Variables, conditional logic, and loops
Running and scheduling scripts
Using GitHub
Unit testing with Pester
Chapter 1
Now, the question that comes to mind is: how can we automate? There are many ways in which we can automate a task (ask any developer). A general system administrator, who is not a developer, has some basic weapons to choose from. The most basic and widely used method for any system administrator is to use shell scripts. For any operating system, a shell is the interface through which you interact with the operating system. Traditionally, we have used shell scripts to automate the mundane work of daily life and tasks that do not require very extensive programming. Unix and Linux operating systems provide many shells, such as the Bash shell, C shell, Korn shell, and so on. For the Windows environment, we have command.com (in MS-DOS-based installations) and cmd.exe (in Windows NT-based installations). Before we start talking about more advanced ones, let's take a look at scripting and its history.
In general, a scripting language is a high-level programming language that is interpreted by another program at runtime, rather than being compiled by the computer's processor as other programming languages are. The first interactive shells were developed in the 1960s, and these used shell scripts to control the running of computer programs from within another program, the shell. It started with Job Control Language (JCL), moving on to Unix shells, REXX, Perl, Tcl, Visual Basic, Python, JavaScript, and so on. For more details, refer to https://en.wikipedia.org/wiki/Scripting_language.
PowerShell
Traditional shell commands and scripts are best suited for command-line tasks or console-based environments, but with the advent of more GUI-based servers and operating systems, there is a greater need for a tool that can work with the more sophisticated GUI environment. This requirement became more prominent for the Windows environment when Windows shifted its core from MS-DOS implementations to the NT-based core. Also, Windows traditionally provided batch scripts for basic scripting functionality, which was not enough for its GUI-based environment.
To resolve the situation and keep up with the updated environment, Microsoft came up with a novel solution in the form of PowerShell. It is more of a natural progression of the traditional shell in the advanced operating system environment, and it is one of the best and most powerful shell environments I have worked with. More and more serious development is going on in this tool. Today, it has become so important and mainstream that all the major virtualized environments support their general operations being automated through this environment.
The major difference between traditional shells and PowerShell is that traditional shells are inherently text-based; that is, they work on text (inputs/outputs), whereas PowerShell inherently works on objects. This makes PowerShell far more modern and powerful than other shells. Since it works on objects rather than text, it lets us perform tasks that were not possible with earlier shell scripts.
Windows PowerShell supports running four types of commands:
Cmdlets
PowerShell functions
PowerShell scripts
Native Windows commands (standalone executables)
At the time of writing this book, the latest stable version of PowerShell is 4.0, and Microsoft has released a preview version of PowerShell 5.0 (Windows Management Framework 5.0 Preview November 2014). A few of the new features in this preview version are as follows:
A new ConvertFrom-String cmdlet has been added that extracts and parses structured objects from the content of text strings.
A new OneGet module has been added that allows you to discover and install software packages from the Internet.
For a detailed list of the enhancements, you can check the documentation from Microsoft at https://technet.microsoft.com/en-us/library/hh857339.aspx.
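As a quick illustration of the kind of parsing ConvertFrom-String enables, consider the following sketch. It assumes PowerShell 5.0 or later; the input text and property names are arbitrary examples, not anything from a real environment:

```powershell
# Split a whitespace-separated line into an object with named properties
# (requires PowerShell 5.0+; 'Name', 'CPU', and 'Memory' are arbitrary
# property names chosen for this example)
$vmInfo = 'web01 4 8GB' | ConvertFrom-String -PropertyNames Name, CPU, Memory

# The result is a structured object rather than plain text
$vmInfo | Select-Object Name, CPU, Memory
```

Because the output is an object, the parsed fields can be fed straight into other cmdlets instead of being re-parsed as text.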
PowerCLI
VMware PowerCLI is a tool from VMware that is used to automate and manage
vSphere, vCloud Director, vCloud Air, and Site Recovery Manager environments.
It is based on PowerShell and provides modules and snap-ins of cmdlets
for PowerShell.
It provides both low-level (1:1 mappings of the API) and high-level (API-abstracted) cmdlets.
At the time of writing this book, VMware had released PowerCLI 6.0 R1. The major new features of this release, among others, are as follows:
Support for vCloud Air has been added, so we can now manage the vCloud Air environment from the same single console.
User guides and 'getting started' PDFs are included as part of the PowerCLI installation.
Earlier, PowerCLI had two main snap-ins to provide the major functionality, namely, VMware.VimAutomation.Core and VMware.VimAutomation.Cloud. These two provided the core cmdlets to manage the vSphere environment and the vCloud Director environment. In this release, to keep up with PowerShell best practices, most of the cmdlets are available as "modules" instead of "snap-ins". So, in order to use the cmdlets, you now need to import the modules into your script or into the shell. For example, run the following code:
Import-Module 'C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI\Modules\VMware.VimAutomation.Cloud'
This code will import the module into the current running scope, and its cmdlets will be available for you to use.
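Before importing, it can help to see which VMware modules are present on a system. The following is a sketch; the exact module names and whether they are discoverable via the module path depend on the PowerCLI version and installation options:

```powershell
# List the VMware modules available on this system
# (names and availability vary with the installed PowerCLI version)
Get-Module -ListAvailable -Name VMware* | Select-Object Name, Version

# Import one of them and list the cmdlets it provides
Import-Module VMware.VimAutomation.Cloud
Get-Command -Module VMware.VimAutomation.Cloud
```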
Another big change, especially if you are working with both the vCloud Director and vSphere environments, is the RelatedObject parameter of the vSphere PowerCLI cmdlets. With this, you can now directly retrieve vSphere inventory objects from cloud resources. This interoperability between the vSphere PowerCLI and vCloud Director PowerCLI components makes life easier for system admins. Because any VM created in vCloud Director has a UUID attached to the name of its respective VM in the vCenter Server, extra steps used to be necessary to correlate a VM in the vCenter environment to its equivalent vApp VM in vCloud Director. With this parameter, these extra steps are no longer required.
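The correlation can then be done in one step, along these lines. This is a sketch, assuming active Connect-CIServer and Connect-VIServer sessions; 'MyAppVM' is a hypothetical VM name:

```powershell
# Retrieve a VM from vCloud Director and fetch its vSphere counterpart
# directly via -RelatedObject (hypothetical VM name; assumes sessions to
# both vCloud Director and vCenter are already established)
$cloudVM   = Get-CIVM -Name 'MyAppVM'
$vsphereVM = Get-VM -RelatedObject $cloudVM
$vsphereVM | Select-Object Name, PowerState
```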
Module/Snap-in                    Type
VMware.VimAutomation.Vds          Module
VMware.VimAutomation.Cis.Core     Module
VMware.VimAutomation.Storage      Module
VMware.VimAutomation.HA           Module
VMware.VimAutomation.License      Snap-in
VMware.ImageBuilder               Snap-in
VMware.DeployAutomation           Snap-in
VMware.VimAutomation.PCloud       Module
https://technet.microsoft.com/en-us/magazine/2007.03.powershell.aspx.
In any programming language, the first thing that you need to learn about is variables. Declaring a variable in PowerShell is pretty easy and straightforward; simply start the variable name with a $ sign. For example, run the following code:
PS C:\> $newVariable = 10
PS C:\> $dirList = Dir | Select Name
Note that at the time of variable creation, there is no need to mention the
variable type.
You can also use the following cmdlets to create different types of variable:
New-Variable
Get-Variable
Set-Variable
Clear-Variable
Remove-Variable
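As a quick sketch of how these cmdlets fit together (the variable name here is arbitrary):

```powershell
# Create a variable with a description, read it, change it, then remove it
New-Variable -Name buildNumber -Value 42 -Description 'Example variable'
Get-Variable -Name buildNumber -ValueOnly   # 42
Set-Variable -Name buildNumber -Value 43
Clear-Variable -Name buildNumber            # value becomes $null; variable remains
Remove-Variable -Name buildNumber           # variable is gone
```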
A best practice for variables is to initialize them properly. If they are not initialized properly, you can get unexpected results at unexpected places, leading to many errors. You can use Set-StrictMode in your script so that it catches any uninitialized variables and thus prevents errors creeping in due to this. For details, check out https://technet.microsoft.com/en-us/library/hh849692.aspx.
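A minimal sketch of the difference this makes ($undefinedVar is a deliberate typo standing in for a variable that was never assigned):

```powershell
# Without strict mode, referencing an undefined variable silently yields $null
Set-StrictMode -Off
$total = $undefinedVar + 5    # runs; $total is 5 and the typo goes unnoticed

# With strict mode, the same reference raises an error immediately
Set-StrictMode -Version Latest
$total = $undefinedVar + 5    # error: the variable cannot be retrieved
```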
When we started programming, we started with flowcharts, then moved on to pseudo
code, and then, finally, implemented the pseudo code in any programming language
of our choice. But in all this, the basic building blocks were the same. Actually, when
we write any code in any programming language, the basic logic always remains
the same; only the implementation of those basic building blocks in that particular
language differs. For example, when we greet someone in English, we say "Hello"
but the same in Hindi is "Namaste". So, the purpose of greeting remains the same
and the effect is also the same. The only difference is that depending on the language
and understanding, the words change.
Similarly, the building blocks of any logic can be categorized as follows:
Conditional logic
Looping logic
Now, let's take a look at how these two logics are implemented in PowerShell.
Conditional logic
In PowerShell, we have if, elseif, else and switch to use as conditional logic.
Also, to use these logics properly, we need some comparison or logical operators.
The comparison and logical operators available in PowerShell are as follows:
Comparison operators:

Operator   Description
-eq        Equal to
-ne        Not equal to
-lt        Less than
-gt        Greater than
-le        Less than or equal to
-ge        Greater than or equal to

Logical operators:

Operator   Description
-not       Logical NOT
-and       Logical AND
-or        Logical OR
Both elseif and else are optional. The condition is the logic that decides whether the script block will be executed or not. If the condition is true, then the script block is executed; otherwise, it is not. A simple example is as follows:
if ($a -gt $b) { Write-Host "$a is bigger than $b" }
elseif ($a -lt $b) { Write-Host "$a is less than $b" }
else { Write-Host "Both $a and $b are equal" }
The preceding example compares the two variables $a and $b and depending on
their respective values, decides whether $a is greater than, less than, or equal to $b.
The syntax for the Switch statement in PowerShell is as follows:
Switch (value) {
    Pattern 1 {Script Block}
    Pattern 2 {Script Block}
    Pattern n {Script Block}
    Default   {Script Block}
}
If any one of the patterns matches the value, then the respective Script Block is
executed. If none of them matches, then the Script Block respective for Default
is executed.
The Switch statement is very useful for replacing long if {} elseif {} else {} chains. It is also very useful for providing a selection of menu items.
One important point to note is that even if a match is found, the remaining
patterns are still checked, and if any other pattern matches, then that script block
is also executed. For examples of the Switch case and more details, check out
https://technet.microsoft.com/en-us/library/ff730937.aspx.
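A short sketch of this fall-through behavior (the values are arbitrary):

```powershell
# Both matching branches run, because switch keeps checking the
# remaining patterns after a match (unless you use break)
switch (3) {
    { $_ -gt 1 } { 'greater than one' }
    3            { 'exactly three' }
    default      { 'no match' }       # runs only if nothing matched
}
# Output:
#   greater than one
#   exactly three
```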
PowerShell provides the following looping constructs:
do while
while
do until
for
foreach
Foreach-Object
As an example, say we want to add the numbers 1 through 10. The do while implementation is as follows:
$sum = 0
$i = 1
do {
    $sum = $sum + $i
    $i++
} while ($i -le 10)
$sum
In both the preceding cases, the script block is executed until the condition is true.
The main difference is that, in the case of do while, the script block is executed at
least once whether the condition is true or false, as the script block is executed
first and then the condition is checked. In the case of the while loop, the condition
is checked first and then the script block is executed only if the condition is true.
The syntax for the do until loop is as follows:
do {
Script Block
}until (condition)
The main difference between do until and the preceding two statements is that, logically, do until is the opposite of do while: the script block is run as long as the condition is false. The moment the condition becomes true, the loop is terminated.
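Mirroring the earlier sum, a do until version can be sketched as follows:

```powershell
# Sum the numbers 1 through 10; the loop repeats while the condition is FALSE
$sum = 0
$i = 1
do {
    $sum = $sum + $i
    $i++
} until ($i -gt 10)
$sum    # 55
```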
The syntax for the for loop is as follows:
for (initialization; condition; repeat)
{code block}
The typical use case of a for loop is when you want to run a loop a specified number of times. To write the preceding example as a for loop, we can write it in the following manner:
$sum = 0
for ($i = 1; $i -le 10; $i++)
{
    $sum = $sum + $i
}
$sum
The purpose of the foreach statement is to step through (iterate) a series of values
in a collection of items. Note the following example. Here, we are adding each
number from 1 to 10 using the foreach loop:
# Initialize the variable $sum
$sum = 0
# foreach statement starts
foreach ($i in 1..10)
{
# Adding value of variable $i to the total $sum
$sum = $sum + $i
} # foreach loop ends
# showing the value of variable $sum
$sum
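Foreach-Object, listed among the looping constructs earlier, is the pipeline counterpart of the foreach statement. The same sum can be sketched as:

```powershell
# Sum 1..10 using the ForEach-Object pipeline cmdlet:
# -Begin runs once, -Process runs per item, -End emits the result
1..10 | ForEach-Object -Begin { $sum = 0 } -Process { $sum += $_ } -End { $sum }
# Output: 55
```

The pipeline form shines when the input is streamed from another cmdlet rather than a fixed range.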
Due to the flexibility of the ISE and the ease with which we can work with it, it is my preferred way of working, and it will be used for the examples in the rest of the book. Although there are many editors specific to PowerShell, I am not going to cover them in this chapter. In the last chapter, I will cover this topic a bit and talk about my favorite editor.
So, we've started the ISE. Now let's write our first script, which consists of a single line:
Write-Host "Welcome $args !!! Congratulations, you have run your first script!!!"
Now, let's save the preceding line in a file named Welcome.ps1. From the ISE command line, go to the location where the file is saved and run it:
PS C:\Scripts> .\Welcome.ps1
What happened? Were you able to run the command? In all probability, you will get an error message, as shown in the following snippet (in case you are running a script for the first time):
PS C:\Scripts> .\Welcome.ps1 Sajal Debnath
.\Welcome.ps1 : File C:\Scripts\Welcome.ps1 cannot be loaded because running scripts is disabled on this system. For more
    + FullyQualifiedErrorId : UnauthorizedAccess
So, what does it say and what does it mean? It says that running scripts is disabled
on the system.
Whether you are allowed to run a script or not is determined by the ExecutionPolicy
set in the system. You can check the policy by running the following command:
PS C:\Scripts> Get-ExecutionPolicy
Restricted
So, you can see that the execution policy is set to Restricted (which is the default). Now, let's check what other options are available:
PS C:\Scripts> Get-Help ExecutionPolicy

Name                 Category  Module                      Synopsis
----                 --------  ------                      --------
Get-ExecutionPolicy  Cmdlet    Microsoft.PowerShell.S...   Gets the execution policies for the current session.
Set-ExecutionPolicy  Cmdlet    Microsoft.PowerShell.S...   Changes the user preference for the Windows PowerSh...
Note that we have Set-ExecutionPolicy as well, so we can set the policy using this
cmdlet. Now, let's check the different policies that can be set:
PS C:\Scripts> Get-Help Set-ExecutionPolicy -Detailed
For the purpose of running our script, and for the rest of the examples, we will set the policy to Unrestricted. We can do this by running the following command:
PS C:\> Set-ExecutionPolicy Unrestricted
Now, if we try to run the earlier script, it runs successfully and gives the desired result:
PS C:\Scripts> .\Welcome.ps1 Sajal Debnath
Welcome Sajal Debnath !!! Congratulations, you have run your first script!!!
Before we go ahead and schedule a task, we need to finalize the command which, when run from the task scheduler, will run the script and give the desired result. The best way to check this is to run the same command from Start | Run. For example, if I have a script named Report.ps1 in C:\, I can run it from the command line with the following command:
powershell -File "C:\Report.ps1"
Another point to note here is that once the preceding command is run, the PowerShell window will close. If you want the PowerShell window to stay open so that you can see any error messages, add the -NoExit switch. The command then becomes:
powershell -NoExit -File "C:\Report.ps1"
On the right-hand side pane, under Actions, click on Create Basic Task. A new
window opens. In this window, provide a task name and description:
The next window provides you with the trigger details, which will trigger the action.
Select the trigger according to your requirements (how frequently you want to run
the script).
Select the type of action that you want to perform. For our purpose, we will choose Start a program.
In the next window, provide the command that you want to execute (the command that we checked by running it in Start | Run). In our example, it is as follows:
powershell -File "C:\Report.ps1"
The last window will provide you an overview of all the options. Select Finish to
complete the creation of the scheduled task. Now, the script will run automatically
at your predefined time and interval.
Instead of using the GUI interface, you can create, modify, and control scheduled
tasks from the PowerShell command line as well. For a detailed list of the commands,
you can run the following command:
PS C:\> Get-Command *ScheduledTask
From the list of the available cmdlets, you can select the one you need and run it.
For more details on the command, you can use the Get-Help cmdlet.
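As a sketch of the command-line route (assuming Windows 8 / Server 2012 or later, where the ScheduledTasks module is available; the task name and schedule below are arbitrary examples, and registration requires elevated privileges):

```powershell
# Define what to run, when to run it, and register the task
# (hypothetical task name and time; run from an elevated session)
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
           -Argument '-NoExit -File "C:\Report.ps1"'
$trigger = New-ScheduledTaskTrigger -Daily -At 6:00AM
Register-ScheduledTask -TaskName 'DailyReport' -Action $action -Trigger $trigger
```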
Due to the preceding problem, distributed version control systems came into existence. In a distributed version control system, clients not only check out the latest snapshots of the files kept on the centralized server, but also maintain a full replica of those files on their local systems. So, if something happens to the main server, a local copy is always available, and the server can be restored by simply copying the data from any of the local systems. Thus, in a distributed version control system, every local client acts as a full backup of the central data on the server. A few such tools are Git, Mercurial, Darcs, and so on.
Among all the distributed version control systems, Git is the most widely used
because of the following advantages:
The following are a few of the differences between Git and other version
control systems:
Most of the operations in Git are local. You can make changes to files, check something in the history, or get an old version without being connected to a remote server. You also don't need to be connected to make changes to files; they will be saved locally, and you can push the changes back to the remote server at a later stage. This is what makes Git fast.
In Git, we generally always add data, so it is very difficult to lose data in Git. In most cases, as the data is pushed to other repositories as well, we can always recover from any unexpected corruption.
This was a very short discussion on Git because GitHub is based entirely on Git.
In this book, we will talk about GitHub and check how it is used because of the
following reasons:
So, without further ado, let's dive into GitHub. GitHub is the largest host for Git
repositories. Millions of developers work on thousands of projects in GitHub. To
use GitHub, the first thing you need to do is create an account in GitHub. We can
create an account in GitHub by simply visiting https://github.com/ and signing
up in the section provided for sign up. Note the e-mail ID that you used to create the
account, as you will use the same e-mail ID to connect to this account from the local
repository at a later stage.
One point to note is that once you log in to GitHub, you can create an SSH key pair
to work with your local account and the GitHub repository. For security reasons,
you should create a two-factor authentication for your account. To do so, perform
the following steps:
1. Log in to your account and go to Settings (top-right hand corner).
2. On the left-hand side, under the Personal settings category, choose Security.
3. Next, click on Set up two-factor authentication.
4. Then, you can use an app or send an SMS.
So, you have created an account and set up two-factor authentication. Now, since we want to work on our local systems as well, we need to install Git on the local system. So, go ahead and download the respective version for your system from http://git-scm.com/downloads.
Now, we can configure it in two ways: from the command line or through the GUI tool, since Git is included as part of GitHub for Windows/Mac. First, let's start with the command-line tool. For my examples, I have used GitHub Desktop, which can be downloaded from https://desktop.github.com/.
Open the command-line tool and run the following commands to configure
the environment:
git config --global user.name "Your Name"
git config --global user.email "email@email.com"
You need to replace Your Name with your name and email@email.com with the
e-mail with which you created your account in GitHub.
You can set the same using the GitHub tool as well. Once you install GitHub, go
to Preferences and then Accounts. Log in with your account that you created on
the GitHub site. This will connect you to your account in GitHub.
Next, go to the Advanced tab and fill in the details that you provided in the previous
configuration under the Git Config section. Also, under the Command Line section,
click on Install Command Line Tools. This will install the GitHub command-line
utility on the system.
Okay, so now we have installed everything that we require, so let's go ahead and
create our first repository.
To create a repository, log in to your account in GitHub, and then click on the
+New Repository tab:
Next, provide a name for the repository, provide a description, and select whether
you want to make it Private or Public. You can also select Initialize this repository
with a README.
Once the preceding information is provided, click on Create Repository. This
will create a new repository under your name and you would be the owner of
the repository.
Before we go ahead and talk more about using GitHub, let's talk about a few
concepts and how they work in GitHub.
There are two collaborative models in which GitHub works.
Branch
When you create a repository, it contains, by default, the master branch. So, how does another person work on the same project? They create a branch for themselves. A branch is a replica of the main line of development. You can make all your changes in the branch, and when you are ready, you can merge your changes back into the main branch.
git checkout: This command allows you to switch between branches or restore files in your working directory.
git merge: As the name suggests, this command allows you to merge the changes from one branch into another.
git push: This command allows you to push the changes you made on your local computer back to the GitHub online repository so that other collaborators are able to see them.
git pull: If you are working on your local computer and want to bring the latest changes from the GitHub repository to your local computer, you can use this command to pull the changes down to the local system.
To use the GUI tool, open the GitHub application, and then, from the File menu, select Add Local Repository.
This will bring up a pop-up window saying that This folder is not a repository and
asking if you want to create and add the repository. Click on Create and Add. This
will create a local repository for you.
Now, let's go to the directory and create a file and put some text into it. Once the file
is created, we will check the status of the repository that will tell us that there are
untracked files in the repository. Once done, we will notify Git that there is a file
that has changed. Then, we will commit the change to Git so that Git can take
its snapshot. Here is a list of commands:
$ cd Git
$ touch README.txt
$ echo "Hello there, first document in the repository" > README.txt
$ git status
$ git add README.txt
$ git commit -m "README.txt added"
The following is a screenshot of the above commands and the output that we get
for a successful run.
Replace yourname with your username and repository with your repository name.
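The commands that connect the local repository to GitHub typically look like the following sketch; the URL is a placeholder built from your own username and repository name:

```shell
$ git remote add origin https://github.com/yourname/repository.git
$ git push -u origin master
```

The first command registers the GitHub repository under the name origin; the second pushes the local master branch and sets it as the default upstream for future pushes and pulls.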
You can do the same work through the GitHub's GUI application as well. Once you
open the application, go to Preferences, and then under Accounts, log in to your
GitHub account with your account details. Once done, you can create a branch of
the repository.
Let's create a my-changes branch from master. Click on the Branch icon next to
master (as shown in the following screenshot):
Once you do this, your working branch changes to my-changes. Now, add a file to
your local repository, say, changes.txt, and add some text to it:
$ touch changes.txt
$ echo "Changes I made" > changes.txt
The changes that you made will immediately be visible in the GitHub application. Commit the changes to the my-changes branch.
In the Repository option, select the Push option to push the changes to GitHub. Next, I added another file and again committed it to my-changes.
This will leave the status of the local and remote repositories as Unsynced. Click on the Sync button on the right-hand side to sync the repositories.
Now, if I go back to the GitHub site, I can see the changes that I made to the my-changes branch.
Next, I want to merge the branch into the main branch, so I create a pull request. I can do this directly from the GitHub online page or from the GitHub application. In the GitHub application on the local system, go to Repository, and then click on Create Pull Request. Provide a name and description, and click on Create. This will create a pull request against the main branch.
Now, go back to the GitHub page, and you will be able to see the details of the
pull request.
Click on Merge pull request, provide your comment, and click on Confirm merge to
merge the change. Now, you can click on Delete the branch.
Also, if you go back to your main branch (which is powercli_scripts in my case),
you will be able to see the changes in the main branch.
This concludes this section. Now, you should be able to create your own project or
fork and work on existing projects.
Pester is a unit testing framework for PowerShell that works great for both white box and black box testing.
White box testing (also known as clear box testing, glass box testing,
transparent box testing, and structural testing) is a method of
testing software that tests the internal structures or workings of an
application, as opposed to its functionality (that is, black box testing).
In white box testing, an internal perspective of the system, as well
as programming skills, are used to design test cases. The tester
chooses inputs to exercise paths through the code and determine the
appropriate outputs. This is analogous to testing nodes in a circuit,
for example, in-circuit testing (ICT).
Black box testing is a method of software testing that examines the functionality of an application without peering into its internal structures or workings. This method of testing can be applied to virtually every level of software testing: unit, integration, system, and acceptance. It typically comprises most higher-level testing, but can be applied to unit testing as well.
You can refer to https://en.wikipedia.org/wiki/White-box_testing and https://en.wikipedia.org/wiki/Black-box_testing.
Well, now that you understand what testing is and the methodologies used,
let's dive into Pester.
Pester is a PowerShell module originally developed by Scott Muc and improved by the community. It is available for free on GitHub. All you need to do is download it, extract it, put it into the Modules folder, and then import it into the PowerShell session to use it (since it is a module, you need to import it just like any other module).
To download it, go to https://github.com/pester/Pester.
In the lower right-hand corner, click on Download Zip.
Once you download the ZIP file, you need to unblock it. To unblock it, right-click on
the file and select Properties. From the Properties menu, select Unblock.
You can unblock the file from the PowerShell command line as well. Since I have
downloaded the file into the C:\PowerShell Scripts folder, I will run the
command as follows. Change the location according to your download location:
PS C:\> Unblock-File -Path 'C:\PowerShell Scripts\Pester-master.zip' -Verbose
VERBOSE: Performing the operation "Unblock-File" on target "C:\PowerShell Scripts\Pester-master.zip".
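Once the file is unblocked, the archive can be extracted into a Modules folder and the module imported. The following is a minimal sketch assuming the same download location and the default per-user module path on PowerShell 5.0; adjust both paths to your environment:

```powershell
# Extract the archive into the per-user Modules folder (paths assumed;
# adjust to your download location and PowerShell version)
$modules = "$HOME\Documents\WindowsPowerShell\Modules"
Expand-Archive -Path 'C:\PowerShell Scripts\Pester-master.zip' -DestinationPath $modules

# The archive extracts to a Pester-master folder; the module folder
# name must match the module name, so rename it to Pester
Rename-Item -Path "$modules\Pester-master" -NewName 'Pester'

# Import the module; -Verbose lists each command as it is imported
Import-Module Pester -Verbose
```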
Importing the module with -Verbose gives you a list of all the commands imported
from the Pester module.
You can get a list of cmdlets available in the module by running the following
command as well:
PS C:\> Get-Command -Module Pester
Now, let's start writing our code and testing it. First, let's decide what we want
to achieve: a small script that accepts any name as a command-line parameter and
outputs a greeting for that name.
So, let's first scaffold the script and its test file with New-Fixture:
PS C:\PowerShell Scripts> New-Fixture -Path .\HelloExample -Name Say-Hello
    Directory: C:\PowerShell Scripts\HelloExample

Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a----         3/23/2015  12:39 AM        30 Say-Hello.ps1
-a----         3/23/2015  12:39 AM       252 Say-Hello.Tests.ps1
Notice that a folder named HelloExample and two files inside it are created. The
Say-Hello.ps1 file holds the actual code, and the second file,
Say-Hello.Tests.ps1, is the test file.
Now go to the new directory to make it the current location (cd is an alias for
Set-Location, so either form works):
PS C:\PowerShell Scripts> cd .\HelloExample
PS C:\PowerShell Scripts> Set-Location -Path 'C:\PowerShell Scripts\HelloExample'
The first three lines of the generated test file extract the filename of the main
script file and then dot-source it into the current running environment, so that
the functions defined in the script are available in the current scope.
Next, we need to define what the test should do, so we define our test cases. I
have made the necessary modifications:
$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".Tests.", ".")
. "$here\$sut"

Describe "Say-Hello" {
    It "Outputs Hello Sajal, Welcome to Pester" {
        Say-Hello -name Sajal | Should Be 'Hello Sajal, Welcome to Pester'
    }
}
What I am expecting here is that when the Say-Hello function is called with a
name as a parameter, it should return Hello <name>, Welcome to Pester.
Let's run the first test. As expected, we get a failed test.
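The tests themselves are run with the Invoke-Pester cmdlet from the directory that contains the test files; a minimal sketch of the invocation (the exact output format varies between Pester versions):

```powershell
# Discovers and runs every *.Tests.ps1 file under the current directory
PS C:\PowerShell Scripts\HelloExample> Invoke-Pester
```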
Now, let's correct the code with the following code snippet:
function Say-Hello {
    param (
        [Parameter(Mandatory)]
        $name
    )
    "Hello $name, Welcome to Pester"
}
Let's run the same test again. Now, it passes the test successfully.
Remember one point: though we refer to Describe, Context, and It as keywords,
they are basically functions, and the script blocks we pass to them are their
arguments. PowerShell only parses a script block as an argument when its opening
brace is on the same line as the function name, so the following is incorrect:
Context "defines script block incorrectly"
{
    #some tests
}
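Placing the opening brace on the same line as the function name passes the script block as an argument, which is the correct form:

```powershell
Context "defines script block correctly" {
    #some tests
}
```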
Pester provides a set of assertions that can follow the Should keyword:
Should Be
Should BeExactly
Should BeNullOrEmpty
Should Match
Should MatchExactly
Should Exist
Should Contain
Should ContainExactly
Should Throw
Also, every assertion has a negated form, which we get by inserting Not: for
example, Should Not Be, Should Not Match, and so on.
Now, you should be able to go ahead and start testing your scripts with Pester. For
more details, check out the Pester wiki at https://github.com/pester/Pester/wiki/Pester.
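Before you can connect to a vSphere or vCloud Director environment, the PowerCLI components have to be loaded into the session. The following is a minimal sketch assuming the component names shipped with PowerCLI 6.0; verify the names in your installation with Get-PSSnapin -Registered and Get-Module -ListAvailable:

```powershell
# Load the core vSphere snap-in (name as shipped with PowerCLI 6.0)
Add-PSSnapin VMware.VimAutomation.Core

# Load the vCloud Director module (name assumed from the same release)
Import-Module VMware.VimAutomation.Cloud
```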
Once the snap-in is added and the module is imported, you will have access to all
the PowerCLI commands from the PowerShell ISE console.
Now, let's connect to the vCenter environment by running the following command:
PS C:\> Connect-VIServer -Server <server name> -User <user name> -Password <Password>
Similarly, to connect to the vCloud Director server, run the following command:
PS C:\> Connect-CIServer -Server <server name> -User <user name> -Password <Password>
There are many other options; you can see the details using the Get-Help cmdlet or
the online help for these commands.
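For example, the full help for the connection cmdlet, including all of its parameter sets and examples, can be pulled up as follows:

```powershell
# Shows the parameters, parameter sets, and examples for the cmdlet
Get-Help Connect-VIServer -Full
```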
Summary
In this chapter, we touched on the basics of PowerShell, covering the main
added advantages of the PowerShell 5.0 preview and basic programming
constructs, and how they are implemented in PowerShell. We also discussed
PowerCLI, what's new in Version 6.0, and how to run PowerCLI scripts from
Command Prompt or as a scheduled task. Next, we discussed version control and
how to use GitHub to put those concepts into practice. At the end of this chapter, we saw
how to use Pester to test PowerShell scripts and how to connect to vCenter and
vCloud Director environments.
In the next chapter, we are going to cover advanced functions and parameters
in PowerShell. We will also cover how to write your own help files and error
handling in PowerShell.