I find PowerShell incredibly useful. It does, however, have a lot of quirks that make for very annoying why-would-you-ever-make-it-do-that moments. The following is my attempt to list and clarify these small oddities.
This is not a generic guide to using a console, nor is it a general scripting guide. There are much better resources for that found elsewhere.
You have a bunch of options regarding terminals to run PowerShell in. I know a bunch of people that are very happy with cmder. I personally use Windows Terminal because it has quake mode, which is more useful than you would think. I suggest you find one that fits you, as long as it isn't cmd.exe.
Remember to figure out how to copy and paste. Usually click+drag and enter to copy, right-click to paste.
PowerShell usually defaults to the encoding used by the terminal, which in turn tends to be Windows-1252. Make sure to convert to and from UTF-8 every time you read or write files and/or web requests.
echo "åäå" | sc -Path test.txt -Encoding Utf8
Also known as "details about script file character encoding can be important".
I had an occasion where the UTF-8 encoding of two strings differed in how they encoded non-7-bit ASCII characters. The specific example looked something like this
$cleaned = $name -replace "ö","o"
The problem was that $name contained the character ö UTF-8 encoded using two bytes, but the inline ö after -replace was encoded using four bytes. This made the replace operation not detect the character, which made for a very elusive bug.
The root cause turned out to be the encoding of the script file. The $name variable had been downloaded from the net, so it had a normal UTF-8 encoding, but the inline string got its weird encoding because the script file had been saved without a UTF-8 byte order mark.
The following script should illustrate the issue
PS> $script = "[System.Text.Encoding]::UTF8.GetBytes(""ö"")"
PS> $utf8 = New-Object System.Text.UTF8Encoding $false
PS> $utf8bom = New-Object System.Text.UTF8Encoding $true
PS> [System.IO.File]::WriteAllLines("utf.ps1",$script,$utf8)
PS> [System.IO.File]::WriteAllLines("utfbom.ps1",$script,$utf8bom)
PS> .\utf.ps1
195
131
194
182
PS> .\utfbom.ps1
195
182
tl;dr: Make sure your .ps1 file is saved as UTF-8 with BOM.
Navigation is terminal dependent, but they should all have tab completion of commands and command history navigation on the up and down arrows. Multi-line navigation on the arrow keys can vary.
file.ps1 cannot be loaded because running scripts is disabled on this system
Well, this is starting off with a bang. PowerShell is by default locked down by something called ExecutionPolicy Restricted. To fix this, open a PowerShell terminal as administrator and run Set-ExecutionPolicy Unrestricted. This obviously has security implications that I don't really care about. It doesn't bypass UAC, so how bad can it be?
You will have to set execution policy separately for the x86 PowerShell, if you ever use that. (Don't use x86 PowerShell.)
Don't do your regular dev work running as administrator. It is very bad practice. Combined with ExecutionPolicy Unrestricted this becomes an actual security hole.
PS > ./scriptname.ps1 -parameter1 text
PS > $test = Get-Content "file.txt"
It's pretty straightforward. You should have tab completion for names of files, commands, and parameters. If parameters have been marked as mandatory, the script will ask for those as input before starting. Don't rely on this: there is no guarantee that a script has all relevant parameters marked. It may just assume you meant what you did and proceed anyway, generating errors after a while.
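As a sketch of what "mandatory" looks like in practice (the greet.ps1 file name and its parameters are made up):

```powershell
# Hypothetical greet.ps1: -Name is marked mandatory, so running the
# script without it makes PowerShell prompt for a value before starting.
$script = @'
param(
    [Parameter(Mandatory=$true)]
    [string]$Name,
    [string]$Greeting = "Hello"
)
"$Greeting, $Name"
'@
Set-Content -Path "greet.ps1" -Value $script
& ./greet.ps1 -Name "world"   # Hello, world
```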
The script will output text with Write-Host. This is the standard way for any script to output text. There are a bunch of alternatives, some of which can be useful if there is no host to write to - Azure Functions being a notable example. Try Write-Progress, Write-Debug, and Write-Information sometime. Write-Warning exists, and so does Write-Error, but I'd rather throw errors.
In addition to writing to host, the script will output everything written to the standard output stream. This includes:
- Anything written using Write-Output and echo
- Any object in the return statement
- Any objects not stored in a variable
Write-Host "post message"
Invoke-Webrequest -Uri "https://localhost:8888" -Method POST -Body @{"message"="data"}
Write-Host "get response"
return Invoke-Webrequest -Uri "https://localhost:8888" -Method GET
The above script will not work like you expect at first glance. Invoke-Webrequest always returns an object, and if that object isn't stored in a variable it will instead be written to the output stream. Whoever runs the script will receive not just one object, but an array with two objects.
This also happens in functions, where it is even more annoying. If your script is experiencing bugs, make sure you are storing all function call returns in variables.
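A sketch of how to keep a function's output clean (the function name and the objects are made up): discard return values you don't care about instead of letting them leak into the pipeline.

```powershell
function Get-Answer
{
    # Without the $null assignment, this object would also end up
    # in the function's output stream.
    $null = New-Object PSObject -Property @{ side = "effect" }
    # Piping to Out-Null works too:
    New-Object PSObject -Property @{ other = "noise" } | Out-Null
    return 42
}
Get-Answer   # just 42, not an array of three things
```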
The Get-Help command will give you information on any other command. This includes your own scripts, as long as they use the appropriate comment markers.
Windows comes bundled with a powershell editor called PowerShell ISE that no one should be happy with. Also, don't just open scripts in notepad and hope for the best.
My recommendation is VSCode (Guide), but there are alternatives. Use what fits.
In any case, don't do println debugging. A debugger exists for a reason, use it. Just set breakpoints and press F5. Use Unit Tests if you need to access specific methods and don't want to bother navigating the whole script to get there.
PowerShell commands have aliases. See the whole list with the alias command (itself an alias for Get-Alias). You could also add your own, but please don't.
A few notable examples:
These are aliases for Where-Object and ForEach-Object respectively. They are quite common when piping lists. Better get used to it. We will take a closer look at these when we visit piping later.
For some reason, powershell has an alias for Invoke-Webrequest that looks like curl. Note that Invoke-Webrequest takes completely different parameters than curl, so unless you delete the curl alias (rm alias:curl) or type out curl.exe explicitly, your script will break.
rm is another alias, by the way.
echo is an alias for Write-Output, not Write-Host. This is a very important distinction, especially if you are in a function. If you come from a bash background, please note that echo should not be your primary method to output text.
Get-Content and Set-Content, notable mostly because they are useful.
Double-quoted and single-quoted strings exist. I usually use double-quoted, which has variable expansion. Backtick is the escape character.
To variable-expand more complex expressions, use $()
PS > $test = "aoeu"
PS > Write-Host "result: $(2 + 3) $test"
result: 5 aoeu
$array = @()
$hashTable = @{}
$object = New-Object PSObject -Property $hashTable
Add items to an existing array using the += operator. Add items to a hashtable using the Add method. Add properties to an object using the Add-Member cmdlet.
$array += "aoeu"
$hashTable.Add("key", "data") # actually $hashTable.key = "data" works as well
$object | Add-Member -MemberType NoteProperty -Name "AnotherProperty" -Value "trololo"
For both hashtables and objects you can get properties by dot notation. If your property name includes complex characters, like spaces, you can quote the name.
$obj."weird name"
You can use the same method if a variable contains the name of a property you wish to reference.
$var = "length"
$obj."$var" # becomes $obj.length
Hashtables and objects look very similar at first glance, but differ in key areas. The most obvious difference is when you output the contents.
PS > @{"X"=1; "Y"=2; "Z"=3;}
Name Value
---- -----
Y 2
Z 3
X 1
PS > New-Object psobject -Property @{"X"=1; "Y"=2; "Z"=3;}
Y Z X
- - -
2 3 1
You can just add new elements to a hashtable by doing $x["name"] = value; for objects you can't. Objects you pipe to Add-Member.
Oh, yeah. The = operator is only for assignment in powershell. For comparisons (against $null or anything else), use -eq.
$a = 1
if( $a -eq 0 )
{
return 3
}
Yup. No single-line statements after if. You must have a script block. The same rule applies to pretty much all keywords: if, elseif, else, while, all of them.
# this gives you an error
if( $true )
return $false
# this is how it has to be done
if( $true )
{
return $false
}
Piping is a staple of scripting. You should learn it and realize its potential pretty quickly. This document isn't a tutorial, Google is your friend. Maybe. Who knows?
Traditionally piping has been done on streams. PowerShell doesn't. PowerShell pipes objects.
I'll repeat that, in case its importance went by too quickly.
PowerShell pipes objects.
This is pretty much the one single reason to use PowerShell, and have to deal with all the other insane things described in this document. It is so incredibly useful that all other things pale in comparison.
Anyway, get used to seeing stuff like this
ls | ?{ $_.LastWriteTime -gt "2018-11-01" } | %{ Write-Host $_.Name }
$_ is the iteration variable. Each time the script block is called, that variable is filled with the current object. Also, powershell can convert strings to dates when doing comparisons.
Remember, ls, % and ? are aliases. Another way of writing the same thing would be
Get-ChildItem | Where-Object { $_.LastWriteTime -gt "2018-11-01" } | Foreach-Object { Write-Host $_.Name }
It is not completely unlike the javascript filter and map functions. Translated to a non-piped script, the same code looks like this
$items = ls
Foreach( $item in $items )
{
if( $item.LastWriteTime -gt "2018-11-01" )
{
Write-Host $item.Name
}
}
Select, or Select-Object if you want to use its full name, is very powerful. The following is a very common pattern:
$objectArray | ?{ $_.Name -match "txt" } | Select Name,Tags
Here, $objectArray is just an array of, let's say, rather complex objects with a lot of properties. This pattern lets us look at only the properties we are interested in.
But what if we only want the name property, and don't want to filter away any objects?
$objectArray | Select Name
Ok, but that gives us a list of objects, each containing one single property called Name. What if we just want the names as a list of strings?
$objectArray | Select -ExpandProperty Name
What if we want paging?
$objectArray | Select -ExpandProperty Name -Skip 100 -First 10
After you have done some amount of piping and selecting, you will inevitably run into the weird circumstance where you expect an array, but find yourself holding a single object. In short, powershell will make any single-item array into just an item. You're welcome.
$a = @(1) # a is [1]
$a += 1 # a is [1, 1]
$b = @(1) | select # b is 1
$b += 1 # b is 2
To force an array, wrap the whole expression inside @(). This works because @() means "flatten these values into an array". () is ostensibly the regular array declaration, but I usually default to @() by habit.
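A quick sketch of the wrapping trick, continuing the example above:

```powershell
$b = @( @(1) | select )   # wrapping the whole pipeline in @() keeps the array
$b += 1                   # b is [1, 1] instead of the scalar 2
$b.Count                  # 2
```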
Similarly, @() will natively flatten an array of arrays.
$a = @(@(1,2),@(3,4)) # a is [1,2,3,4]
If you wish to keep an array of arrays, have the outer paren be without @.
$a = (@(1,2),@(3,4)) # a is [ [1,2], [3,4] ]
PowerShell version 5 comes bundled with an old version of Pester.
function Add { param($a,$b) return $a+$b;}
Describe "the add function" {
It "can add numbers" {
Add 2 3 | should be 5
Add 0 $null | should not be 5
}
}
The newer versions of Pester have a different syntax for the should statements. Salt to taste.
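For reference, the same test in the newer dashed-parameter Should syntax (a sketch; this one needs the Pester module to actually run):

```powershell
function Add { param($a,$b) return $a+$b }
Describe "the add function" {
    It "can add numbers" {
        Add 2 3 | Should -Be 5          # Pester 5-style assertion
        Add 0 $null | Should -Not -Be 5
    }
}
```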
The cmdlets to use are ConvertTo-Json and ConvertFrom-Json. When converting to json you can use hashtables instead of objects without issue. Just note that they will become full objects when you deserialize the json later.
The main quirk is to remember the -Depth parameter when calling ConvertTo-Json. Since powershell objects can reference themselves, there needs to be a guard against infinite recursion. Depth is this guard, a very sensible precaution.
PS > $a = @{}
PS > $a.test = $a
PS > $a | convertto-json -Depth 1
{
"test": {
"test": "System.Collections.Hashtable"
}
}
PS > $a | convertto-json -Depth 2
{
"test": {
"test": {
"test": "System.Collections.Hashtable"
}
}
}
PS > $a | convertto-json -Depth 3
{
"test": {
"test": {
"test": {
"test": "System.Collections.Hashtable"
}
}
}
}
I do wish to find and punish whoever set the default value to 2. I usually go by
$data | ConvertTo-Json -Depth 99 | sc "data.json"
PowerShell is dynamically typed, but to force a type onto a variable, use brackets.
$data = [int]"3"
$date = [datetime]"2010-01-01"
If you, for some reason, need to do xml, powershell has native support. Just cast the xml string and you will get an object back.
$data = [xml]"<a>test</a>"
However, namespaces could give you trouble. Also, ConvertTo-Xml exists.
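A short sketch of navigating the resulting object (the xml content is made up): elements become properties, attributes too, and the text node hides behind '#text'.

```powershell
$data = [xml]"<root><item id='1'>first</item><item id='2'>second</item></root>"
$data.root.item.Count          # 2 - repeated elements become an array
$data.root.item[0].'#text'     # "first" - text content of the element
$data.root.item | %{ $_.id }   # 1, 2 - attributes are properties as well
```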
Functions are like script files in miniature.
function test
{
param( $a )
return $a+2
}
They can have parameters, and get-help comments, just like a full script. There is also this thing called a filter that is like a function but worse. Sometimes filters are useful.
Declare parameters using param.
You do not call functions with parentheses. You just give the parameters like you would any other script. Yes, this is weird.
test -a 4
What happens if you use parentheses? Stuff gets weird.
You should call the following function like this: f -a 1 -b 2
(I suppose as long as you stick to the parameter order, you could also do f 1 2)
function f
{
param( $a, $b )
Write-Host "a is $a"
Write-Host "b is $b"
}
But what happens if you do this?
PS > f(1,2)
a is 1 2
b is
See, the parentheses do nothing, but 1,2 becomes an array, which is promptly accepted as the first parameter. The second parameter gets $null.
Calling .net native methods is the exception to this.
If you wish to pipe your own functions, you should do two things:
- Declare the process block
- Decide which parameter takes the input
Perhaps a short example can clarify?
function Square
{
param([Parameter(ValueFromPipeline=$true)]$number)
process
{
return $number * $number
}
}
This function can now be called like this
@(1,2,3,4,5) | Square
There are also optional begin and end blocks available. They work like you expect.
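A sketch with all three blocks (the Sum function is made up): begin runs once before the first item, process once per piped item, end once after the last.

```powershell
function Sum
{
    param([Parameter(ValueFromPipeline=$true)]$number)
    begin   { $total = 0 }
    process { $total += $number }
    end     { return $total }
}
@(1,2,3,4,5) | Sum   # 15
```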
The Parameter attribute also has a ValueFromPipelineByPropertyName property, where if you pipe objects you can get the relevant object properties directly into function parameters.
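A sketch of that (Show-Name and the piped objects are made up): the Name property of each piped object is bound to the -Name parameter automatically.

```powershell
function Show-Name
{
    param([Parameter(ValueFromPipelineByPropertyName=$true)]$Name)
    process { return "got $Name" }
}
$people = @(
    (New-Object PSObject -Property @{ Name = "ada" }),
    (New-Object PSObject -Property @{ Name = "linus" })
)
$people | Show-Name   # "got ada", "got linus"
```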
If a script wants to run another script or executable, best practice is to use the call operator (&).
$result = & "$PSScriptRoot\Script.ps1" -parametername "value"
When you run a script, current working directory is the same for the script as it is for you. Thus, relative paths will be very confused if you aren't running the script from the folder the script is saved. $PSScriptRoot is a variable that always points to the script folder, and should be used to create absolute paths.
Use square brackets to reference a class, then double-colon to reference methods. If you already have a .net object, methods work natively.
$now = [System.DateTime]::UtcNow
$specificDate = New-Object -TypeName System.DateTime -ArgumentList 2019,01,20
$specificDate.AddMinutes(45)
Your current working directory is recognized by all powershell commands, but not by .net methods. Usually, .net methods will have your user folder as your working directory.
Try the following as a fun exercise:
cd \
[System.IO.File]::WriteAllText("file.txt","hello")
Now guess where that file appeared. If you guessed c:\ you are most likely wrong.
This is why all file operations using .net methods should use full paths. Resolve-Path can help here, but only if the file already exists.
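A sketch of the pattern (reltest.txt is a made-up file name): resolve the relative path on the PowerShell side before handing it to a .net method.

```powershell
Set-Content -Path "reltest.txt" -Value "hello"   # relative: uses PowerShell's cwd
$full = (Resolve-Path "reltest.txt").Path        # absolute path; errors if missing
[System.IO.File]::ReadAllText($full)             # now .net reads the right file
```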
[System.IO.Path]::GetTempFileName()
For .net assemblies:
Add-Type -AssemblyName System.Web
For specific dlls:
Add-Type -path "$PSScriptRoot\FoundThisOnTheInternet.dll"
If you fill a script file with just functions and wish to use those functions from the commandline or from another script you could dot-source the script file.
. .\keystore.ps1
This is similar to the call operator, but the call operator uses a separate scope, whereas dot-sourcing uses the current scope. This means that any functions declared in the file become available. Again, if you are in a script file, use $PSScriptRoot to make absolute paths.
. "$PSScriptRoot\otherscript.ps1"
The -match operator uses regex, but I've found the resulting $matches variable unreliable at times. It can sometimes be better to be explicit:
$r = [regex]"(tst)[^0-9]+$"
$match = $r.Match("teststring")
if($match.Success)
{
$hit = $match.Groups[1].value
}
Remember how you use microsoft management console, manually adding a snap-in every time you want to import or export certificates on your computer?
With powershell you can just
cd Cert:\CurrentUser\My
ls
Similar things can be done with environment variables
ls Env:\
For environment variables you can read them from the $env variable scope.
$env:USERPROFILE
If you set variables this way, they only apply to the current session. To permanently set environment variables, use
[Environment]::SetEnvironmentVariable("VariableName", "value", "User")
Yes. This is annoying. Newer versions of powershell have the flag -SkipHttpErrorCheck, which makes the cmdlet work as you expect it to.
Otherwise you'll have to surround with try-catch.
try
{
return Invoke-WebRequest -Method GET -Uri $uri
}
catch
{
return $_
}
Very useful if crunching a lot of spreadsheet data. If you use tab as delimiter you can pipe to clip and just paste the result into your spreadsheet.
$data = gc "filename.csv" | ConvertFrom-Csv -Delimiter ","
$data | ConvertTo-Csv -Delimiter "`t" -NoTypeInformation | clip
Right. There are built-in cmdlets, depending on which powershell version you use. Though you could always just pipe to clip.exe instead, it's on the path.
echo "test this" | clip
Yes. Use the -Raw switch to get the content as one long string.
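A quick sketch of the difference (raw.txt is a made-up file):

```powershell
Set-Content -Path "raw.txt" -Value "one`ntwo"
(Get-Content "raw.txt").Count               # 2 - an array, one string per line
(Get-Content "raw.txt" -Raw) -is [string]   # True - one long string
```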
For some reason powershell by default only supports SSL3 and TLS 1.0. It's a very quick, but annoying, fix.
[Net.ServicePointManager]::SecurityProtocol = "tls12, tls11, tls"
For best results, place the code in your PSProfile script.
There is a file in your user folder that is run with every new powershell session that is started. To find out exactly where it is located, use $Profile, but the default location is \Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 in your user folder.
Any code that is placed here will be available in any powershell console, and in all scripts you run.
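A sketch of what could go in there (the contents are just examples; C:\src and the gst helper are made up):

```powershell
# Hypothetical Microsoft.PowerShell_profile.ps1 - runs at session start.
[Net.ServicePointManager]::SecurityProtocol = "tls12, tls11, tls"  # the TLS fix above
Set-Location C:\src                  # start every session in your source folder
function gst { git status }          # personal shortcuts are fair game here
```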
You could probably write another one of these just on pwsh differences and gotchas compared to old powershell.
Some things to get you started:
- Missing: winrm and pssession stuff
- Missing: cert: and associated methods
- Most weird linux-like aliases like ls have been removed. This will fuck you up if you are used to doing stuff like ls | %{ $_.FullName }
- Not all of .net framework exists in .net core.
- Missing: AzureRM module. I had some blob storage stuff break in my scripts.
- Tab completion is different