Channel: GoateePFE

Use the new PowerShell cmdlet ConvertFrom-String to parse KLIST Kerberos ticket output


Tired of hacking away at RegEx and string functions to parse text? This post is for you!

ConvertFrom-String

In yesterday’s post we reviewed a simple example of the new PowerShell 5.x Convert-String cmdlet. Today we are going to study a more complex example with ConvertFrom-String. This cmdlet was first documented in a PowerShell team blog post. To make a long story short, the Microsoft Research team invented some fuzzy logic for parsing large quantities of text using sample data. The technology learns and adapts as it parses. If the output does not look correct, you simply add more examples demonstrating the variation that was not matched.

The trouble with KLIST

I was doing some Kerberos research lately, and I wanted to parse the output of KLIST into something more PowerShell-friendly. KLIST displays the current Kerberos tickets on a machine, but its output is flat text. We will use ConvertFrom-String to turn it into beautiful, sortable, filterable object data like this in Out-GridView:

[Screenshot: KLIST output rendered as sortable, filterable objects in Out-GridView]

First, let’s take a look at a sample of KLIST output:

Current LogonId is 0:0x3f337

Cached Tickets: (12)

#0>	Client: Administrator @ CONTOSO.COM
	Server: krbtgt/CONTOSO.COM @ CONTOSO.COM
	KerbTicket Encryption Type: AES-256-CTS-HMAC-SHA1-96
	Ticket Flags 0x60a10000 -> forwardable forwarded renewable pre_authent name_canonicalize
	Start Time: 8/29/2016 16:06:22 (local)
	End Time:   8/30/2016 2:06:21 (local)
	Renew Time: 9/5/2016 16:06:21 (local)
	Session Key Type: AES-256-CTS-HMAC-SHA1-96
	Cache Flags: 0x2 -> DELEGATION
	Kdc Called: 2012R2-DC.contoso.com

#1>	Client: Administrator @ CONTOSO.COM
	Server: krbtgt/CONTOSO.COM @ CONTOSO.COM
	KerbTicket Encryption Type: AES-256-CTS-HMAC-SHA1-96
	Ticket Flags 0x40e10000 -> forwardable renewable initial pre_authent name_canonicalize
	Start Time: 8/29/2016 16:06:21 (local)
	End Time:   8/30/2016 2:06:21 (local)
	Renew Time: 9/5/2016 16:06:21 (local)
	Session Key Type: AES-256-CTS-HMAC-SHA1-96
	Cache Flags: 0x1 -> PRIMARY
	Kdc Called: 2012R2-DC

#2>	Client: Administrator @ CONTOSO.COM
	Server: host/2012R2-MS.contoso.com @ CONTOSO.COM
	KerbTicket Encryption Type: AES-256-CTS-HMAC-SHA1-96
	Ticket Flags 0x40a10000 -> forwardable renewable pre_authent name_canonicalize
	Start Time: 8/29/2016 16:12:25 (local)
	End Time:   8/30/2016 2:06:21 (local)
	Renew Time: 9/5/2016 16:06:21 (local)
	Session Key Type: AES-256-CTS-HMAC-SHA1-96
	Cache Flags: 0
	Kdc Called: 2012R2-DC.contoso.com
...
...

One line of code!

The data contains multiple objects with multiple properties. Unfortunately that is all plain text. To parse this using regular expressions or string functions would be time-consuming and error-prone. Here is the ConvertFrom-String syntax used to produce the screenshot at the top of the article:

klist | ConvertFrom-String -TemplateFile .\template.txt | Out-GridView

Template magic

Wow! Are you kidding?! Nope. That’s it. But… wait… what’s in that template.txt file? Ah. That is the magic. Let’s take a look:

Current LogonId is 0:0xb4ffc69

Cached Tickets: (4)

#{[int]ID*:0}>	Client: {Client:Administrator @ CONTOSO.COM}
	Server: {Server:krbtgt/CONTOSO.COM @ CONTOSO.COM}
	KerbTicket Encryption Type: {KerbTicketEncryptionType:AES-256-CTS-HMAC-SHA1-96}
	Ticket Flags {TicketFlags:0x60a10000} -> {TicketFlagsEnum:forwardable forwarded renewable pre_authent name_canonicalize}
	Start Time: {[datetime]StartTime:8/29/2016 16:06:22} (local)
	End Time:   {[datetime]EndTime:8/30/2016 2:06:21} (local)
	Renew Time: {[datetime]RenewTime:9/5/2016 16:06:21} (local)
	Session Key Type: {SessionKeyType:AES-256-CTS-HMAC-SHA1-96}
	Cache Flags: {CacheFlags:0x1} -> {CacheFlagsEnum:PRIMARY}
	Kdc Called: {KdcCalled:2012R2-DC.contoso.com}

#{[int]ID*:1}>	Client: {Client:Administrator @ CORP.CONTOSO.COM}
	Server: {Server:krbtgt/CONTOSO.COM @ CONTOSO.COM}
	KerbTicket Encryption Type: {KerbTicketEncryptionType:AES-256-CTS-HMAC-SHA1-96}
	Ticket Flags {TicketFlags:0x40e10000} -> {TicketFlagsEnum:forwardable renewable initial pre_authent name_canonicalize}
	Start Time: {[datetime]StartTime:8/29/2016 16:06:21} (local)
	End Time:   {[datetime]EndTime:8/30/2016 2:06:21} (local)
	Renew Time: {[datetime]RenewTime:9/5/2016 16:06:21} (local)
	Session Key Type: {SessionKeyType:AES-256-CTS-HMAC-SHA1-96}
	Cache Flags: {CacheFlags:0x2} -> {CacheFlagsEnum:DELEGATION}
	Kdc Called: {KdcCalled:2012R2-DC.contoso.com}

#{[int]ID*:2}>	Client: {Client:Administrator @ CORP.NA.ALPINESKIHOUSE.COM}
	Server: {Server:host/2012R2-MS.contoso.com @ CONTOSO.COM}
	KerbTicket Encryption Type: {KerbTicketEncryptionType:AES-256-CTS-HMAC-SHA1-96}
	Ticket Flags {TicketFlags:0x40a10000} -> {TicketFlagsEnum:forwardable renewable pre_authent name_canonicalize}
	Start Time: {[datetime]StartTime:8/29/2016 1:12:25} (local)
	End Time:   {[datetime]EndTime:8/30/2016 2:06:21} (local)
	Renew Time: {[datetime]RenewTime:9/5/2016 1:06:21} (local)
	Session Key Type: {SessionKeyType:AES-256-CTS-HMAC-SHA1-96}
	Cache Flags: {CacheFlags:0}
	Kdc Called: {KdcCalled:2012R2-DC}

#{[int]ID*:3}>	Client: {Client:Administrator @ CONTOSO.COM}
	Server: {Server:RPCSS/2012R2-MS.contoso.com @ CONTOSO.COM}
	KerbTicket Encryption Type: {KerbTicketEncryptionType:RSADSI RC4-HMAC(NT)}
	Ticket Flags {TicketFlags:0x40a10000} -> {TicketFlagsEnum:forwardable renewable pre_authent name_canonicalize}
	Start Time: {[datetime]StartTime:12/29/2016 16:12:25} (local)
	End Time:   {[datetime]EndTime:12/30/2016 12:06:21} (local)
	Renew Time: {[datetime]RenewTime:12/5/2016 16:06:21} (local)
	Session Key Type: {SessionKeyType:RSADSI RC4-HMAC(NT)}
	Cache Flags: {CacheFlags:0}
	Kdc Called: {KdcCalled:2012R2-DC.contoso.com}

To create the parsing template we begin by capturing the KLIST output to a text file like this:

KLIST > template.txt

Let’s break down the formatting in the template file:

  • Basic property parsing looks like this: {[datatype]PropertyName:DATA}. To extract meaningful data from flat text, you surround sample output data values with this syntax.
  • The datatype is optional; omit it if plain strings are acceptable for every property. I chose [int] and [datetime] to enable intelligent sorting and filtering of the property data.
  • For multi-object parsing in a file like this, the first property on the object gets the asterisk * after the property name. In this case I chose ID to indicate a new record.
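To see that syntax in isolation, here is a minimal, hypothetical example that passes a tiny template inline via the -TemplateContent parameter (the names and data are invented for illustration):

```powershell
# Two sample records teach the parser the pattern.
# The asterisk after Name marks the start of each new record.
$template = @'
{Name*:John Smith}, age {[int]Age:42}
{Name*:Mary Jones}, age {[int]Age:7}
'@

# Flat text to parse, including a line not shown in the samples
$data = @'
John Smith, age 42
Mary Jones, age 7
Bill Brown, age 19
'@

# Emits objects with a [string] Name and an [int] Age property
$data | ConvertFrom-String -TemplateContent $template
```

The same learning applies here as with the KLIST template: if a line fails to parse, add another sample record to the template showing that variation.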

For each data instance in the template file we add formatting to identify the properties we want to extract. We do not need to keep every ticket in the list, only enough to demonstrate the different sample values. Study the example template above to find these differences:

  • Cache Flags
  • Cache Flags Enum (does not appear in every record)
  • KerbTicket Encryption Type
  • Session Key Type
  • Ticket Flags
  • Dates (single-digit vs. double-digit days/months)
  • Server short name vs. FQDN (fully qualified domain name)
  • FQDNs of varying dotted patterns
  • Etc.

Mash the easy button

Does that template.txt look difficult? It might. But it is actually not that bad, and it is MUCH easier than writing RegEx or string functions to parse this yourself. In fact, the fuzzy logic behind the cmdlet does exactly that: it studies each sample to identify differences (string length, spaces, capitalization, etc.) and then dynamically generates the parsing code. Some consider this to be like AI (artificial intelligence) or ML (machine learning).

The template above took a couple of hours of tweaking and experimenting, and it works about 95% of the time. Your mileage may vary; for example, you may need to tweak the template for international date formats or different type values.

Your turn…

I decided to keep this blog post short and leave other things for you to discover with the cmdlet. I have shown you one way to use it. There is more! Get-Help is your friend.

Now take this and go parse some of your own flat text use cases. Use the comments below to share your challenges and victories. Enjoy!


Gnarly Innards: How to live debug PowerShell DSC configurations without using Enable-DSCDebug


The Problem

Have you ever needed to debug a PowerShell Desired State Configuration that appeared to hang while it was applying? At that point it’s a little too late to run Enable-DscDebug. Here’s how to debug it anyway…

The Solution

On the box applying the active configuration you can see the LCM is busy. Notice the LCMState and LCMStateDetail properties.

PS C:\> Get-DscLocalConfigurationManager

ActionAfterReboot              : ContinueConfiguration
AgentId                        : 7DF72A3A-9AEF-11E6-80BB-00155D67460E
AllowModuleOverWrite           : False
CertificateID                  :
ConfigurationDownloadManagers  : {}
ConfigurationID                :
ConfigurationMode              : ApplyOnly
ConfigurationModeFrequencyMins : 15
Credential                     :
DebugMode                      : {NONE}
DownloadManagerCustomData      :
DownloadManagerName            :
LCMCompatibleVersions          : {1.0, 2.0}
LCMState                       : Busy
LCMStateDetail                 : LCM is applying an existing configuration.
LCMVersion                     : 2.0
StatusRetentionTimeInDays      : 10
PartialConfigurations          :
RebootNodeIfNeeded             : True
RefreshFrequencyMins           : 30
RefreshMode                    : PUSH
ReportManagers                 : {}
ResourceModuleManagers         : {}
PSComputerName                 :

To debug it we need to find out which process is applying the configuration. DSC configurations run in the WmiPrvSE process, but there are usually multiple instances of this process at any given time. We want the one with the DSC AppDomainName.

PS C:\> (Get-Process WmiPrvSE).Id | ForEach-Object {Get-PSHostProcessInfo -Id $_ | Where-Object AppDomainName -eq 'DscPsPluginWkr_AppDomain'}

ProcessName ProcessId AppDomainName
----------- --------- -------------
WmiPrvSE         3536 DscPsPluginWkr_AppDomain

Then we enter that process for debugging. After listing the runspaces you will usually debug #1.

PS C:\> Enter-PSHostProcess -Id 3536 -AppDomainName DscPsPluginWkr_AppDomain
[Process:3536]: PS C:\Windows\system32> Get-Runspace

 Id Name            ComputerName    Type          State         Availability
 -- ----            ------------    ----          -----         ------------
  1 Runspace1       localhost       Local         Opened        Busy
  2 Runspace2       localhost       Local         Opened        Available
 89 RemoteHost      localhost       Local         Opened        Busy

[Process:3536]: PS C:\Windows\system32> Debug-Runspace -Id 1

Now use the traditional PowerShell command line debugging techniques (? for menu). In my case I was debugging the xWaitForADDomain resource which has a sleep timer in it. At this point I had to wait for the timer to expire on the current iteration before the debug prompt would appear. As the hackers say in the movies: “I’m in!”

[DBG]: [Process:3536]: [Runspace1]: PS C:\Windows\system32>> ?

 s, stepInto         Single step (step into functions, scripts, etc.)
 v, stepOver         Step to next statement (step over functions, scripts, etc.)
 o, stepOut          Step out of the current function, script, etc.

 c, continue         Continue operation
 q, quit             Stop operation and exit the debugger
 d, detach           Continue operation and detach the debugger.

 k, Get-PSCallStack  Display call stack

 l, list             List source code for the current script.
                     Use "list" to start from the current line, "list <m>"
                     to start from line <m>, and "list <m> <n>" to list <n>
                     lines starting from line <m>

 <enter>             Repeat last command if it was stepInto, stepOver or list

 ?, h                displays this help message.


For instructions about how to customize your debugger prompt, type "help about_prompt".

You can use Get-Variable to see a list of all the variables in scope as the code is executing. Use 'l' to list the code and see where you are in the current execution. Use 's', 'v', and 'o' to step through and find your problem. Once you have analyzed your code, you can either use 'd' to detach and let it keep running or 'q' to stop the configuration completely and exit the debugger.

[DBG]: [Process:3536]: [Runspace1]: PS C:\Windows\system32>> d
[Process:3536]: PS C:\Windows\system32> Exit-PSHostProcess

There you go. You have now seen the gnarly innards of a live DSC configuration running on your box. I hope you do not need this trick very often, but now you know.

More Information

Read more about debugging PowerShell DSC configurations and resources:

Debugging PowerShell DSC Class Resources – Great article that teaches everything we did above.
https://blogs.msdn.microsoft.com/powershell/2016/03/14/debugging-powershell-dsc-class-resources/

PowerShell Runspace Debugging: Part 1 – More good background information.
https://blogs.msdn.microsoft.com/powershell/2015/09/08/powershell-runspace-debugging-part-1/

Get-Help about_Debuggers – Learn more about PowerShell command line debugging.
https://technet.microsoft.com/en-us/library/2b2ce8b3-f881-4528-bd30-f453dea06755

What’s New in PowerShell v5 – Watch the module 4 video of PowerShell MVP Kirk Munro explaining debugging
http://aka.ms/mvaps5

How to run a PowerShell script against multiple Active Directory domains with different credentials


I was working with a customer recently who needed to execute the same script against servers in different Active Directory domains. They had administrative privileges in each domain, but each domain used a different account. You could apply this same scenario to running one query against domain controllers in different domains. Today we’ll explore one way to do that.

Multiple Credentials

A while back I stumbled onto this handy blog post by Jaap Brasser, a PowerShell MVP from the Netherlands. He was using a hash table to store multiple credentials in a script, and then cache them to disk securely using Export-CliXML. The XML export is a quick way to dump the secure string passwords to disk for later re-use. This has been a popular technique for several years now, but he is the first one I saw doing this with a hash table of credentials. Nice touch! In this post I am building on his technique.

Passwords on disk?!

Many people have blogged about securely storing PowerShell credentials for reuse later. There are many techniques, even Azure Key Vault. However, the most common technique out-of-the-box is with secure strings. I do not have time to rehash these topics here. For more information check out Get-Help for these cmdlets: Get-Credential, ConvertFrom-SecureString, ConvertTo-SecureString.

The main point to understand here is that PowerShell encrypts secure string passwords by default using the Windows Data Protection API (DPAPI). This encryption is tied to one user on one computer: if you export credentials to a file, you can only read them back on the computer where the file was generated, and only with the same user account.
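A quick sketch of that behavior (the file path here is arbitrary):

```powershell
# Prompt for a credential and cache it to disk.
# The password is DPAPI-encrypted: only the same user on the
# same computer can decrypt it later.
$cred = Get-Credential
$cred | Export-Clixml -Path C:\deploy\single-cred.xml

# Later, as the same user on the same machine, rehydrate it:
$cred = Import-Clixml -Path C:\deploy\single-cred.xml
$cred.UserName

# Copy the XML file to another machine (or open it as another
# user) and Import-Clixml cannot decrypt the password portion.
```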

Managing Multiple Credentials

Many customers have multiple-domain Active Directory environments and need a way to manage all of those credentials in a single script. Think about the following use cases:

  • Run the same query against every domain controller in three domains or forests
  • Run the same command against 100 servers residing in 10 different domains

Some people simply run the same script once per domain, prompting for that domain’s credentials and targeting only that domain’s servers. This is inefficient and produces multiple data sets that must be joined at the end for one view of the output.

To do this efficiently and securely is an advanced scripting challenge.

One Solution

As I approached this problem I broke it down into the following steps:

  • Store the target server FQDNs in an array (from file, AD query, etc.)
  • Parse out the unique domain names from the FQDNs
  • Prompt once for credentials of each unique domain
  • Store the credentials in a hash table
  • Export the credential hash table to XML for reuse later
  • Import the credentials from XML (for subsequent runs of the script)
  • Using a loop, pass each server FQDN and respective domain credential to Invoke-Command

Caveat Emptor (“Buyer Beware”)

A comment on Jaap’s blog post cautions against storing highly privileged account credentials in an encrypted secure string using Export-CliXML. Anyone with access to the one account on the one computer used to encrypt the secure string would be able to retrieve all the different domain credentials stored there. They could decrypt all the passwords into plain text! This is a valid concern. However, regardless of how you store the credentials they will all be available to the one user account while the script is running on this one server. If you choose to run any script with multiple credentials, then this concern will always exist. At that point it is an HR issue making sure your staff is trustworthy.

If you choose to store credentials in a file, then you must secure it appropriately and know the risks even when they are encrypted. With that warning we will proceed.

Clever Credential Controls

Here is a sample script for implementing multiple credentials:

Function Get-DomainCreds {
[CmdletBinding()]
Param(
    [Parameter(
        Mandatory=$true,
        ParameterSetName='Fresh'
    )]
    [ValidateNotNullOrEmpty()]
    [string[]]
    $Domain,
    [Parameter(
        Mandatory=$true,
        ParameterSetName='File'
    )]
    [Parameter(
        Mandatory=$true,
        ParameterSetName='Fresh'
    )]
    [ValidateNotNullOrEmpty()]
    [string]
    $Path
)

    If ($PSBoundParameters.ContainsKey('Domain')) {

        # http://www.jaapbrasser.com/quickly-and-securely-storing-your-credentials-powershell/
        $Creds = @{}
        ForEach ($DomainEach in $Domain) {
            $Creds[$DomainEach] = Get-Credential `
                -Message "Enter credentials for domain $DomainEach" `
                -UserName "$DomainEach\username"
        }
        $Creds | Export-Clixml -Path $Path

    } Else {

        $Creds = Import-Clixml -Path $Path

    }

    Return $Creds
}

Function Split-FQDN {
Param(
    [string[]]$FQDN
)
    ForEach ($Server in $FQDN) {

        $Dot = $Server.IndexOf('.')
        [pscustomobject]@{
            FQDN     = $Server
            Hostname = $Server.Substring(0,$Dot)
            Domain   = $Server.Substring($Dot+1)
        }

    }
}


# Array of server FQDNs from your favorite data source
$Servers = 'server1.contoso.com','dc1.alpineskihouse.com',`
    'dc2.wideworldimporters.com','dc3.contoso.com'
# Take a server list of FQDNs and separate out the domain and hostname
$ServerList = Split-FQDN -FQDN $Servers

# Extract the unique domain names
# ONLY THE FIRST RUN: Get credentials for each domain
$Domains = $ServerList | Select-Object -ExpandProperty Domain -Unique
$DomCreds = Get-DomainCreds -Domain $Domains -Path C:\deploy\creds.xml

# LATER: Get credentials for each domain from the XML file
$DomCreds = Get-DomainCreds -Path C:\deploy\creds.xml

ForEach ($Server in $ServerList) {

    Invoke-Command -ComputerName $Server.FQDN `
        -Credential $DomCreds[$Server.Domain] -ScriptBlock {

        "Hello from $(hostname)"

    }

}

The clever part here is using the domain portion of the FQDN as the hash table key to retrieve the domain-specific credential for each server FQDN in the list.

Insert your own code inside the ScriptBlock parameter of Invoke-Command. Or use the FilePath parameter instead to run the same script on each remote server.

If you do not want to store the credentials in a file, then comment out the Export-Clixml line in the Get-DomainCreds function. If you make that choice, you will need to enter the credentials manually each time you run the script.

Now you can schedule this to run unattended using the credentials in the XML file. Note that the script run that builds the credential XML file must execute under the same user account that the scheduled task will use.
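As a rough sketch, a task could be registered with the built-in ScheduledTasks cmdlets; the script path, task name, and schedule below are placeholders for your own values:

```powershell
# Register a daily task running under the current account.
# That same account must have created creds.xml, because DPAPI
# ties the encrypted passwords to the user and computer.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -File C:\deploy\Invoke-MultiDomain.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
# For unattended runs while logged off, also supply -Password.
Register-ScheduledTask -TaskName 'MultiDomainQuery' `
    -Action $action -Trigger $trigger `
    -User "$env:USERDOMAIN\$env:USERNAME"
```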

Here is an example of executing one command across multiple domains:

# Query a list of domain controllers using stored credentials (include functions above)
# List the Domain Admin group membership for all domains
$Servers = 'dc1.alpineskihouse.com',`
    'dc2.wideworldimporters.com','dc3.contoso.com'
$ServerList = Split-FQDN -FQDN $Servers
$DomCreds = Get-DomainCreds -Path C:\deploy\creds.xml
ForEach ($Server in $ServerList) {
    '*' * 40
    $Server.Domain
    Invoke-Command -ComputerName $Server.FQDN `
        -Credential $DomCreds[$Server.Domain] -ScriptBlock {
        Get-ADGroupMember -Identity 'Domain Admins' |
            Select-Object -ExpandProperty distinguishedName
    }
}

You could just as easily loop through Get-ADObject queries passing a unique Server and Credential parameter instead of using Invoke-Command. But it would not be any fun if I wrote all the code for you.

Summary

This is a common scenario, so I wanted to offer a solution using built-in PowerShell features. Now you have one technique for running the same code under multiple credentials against servers in multiple Active Directory domains.

How have you solved this challenge in your environment? How can you use this technique? Comments welcome.

2017 New Years PowerShell DevOps Study List


Microsoft: from “know-it-all” to “learn-it-all”

In a recent interview Satya Nadella mentioned the learn-it-all mindset. This is certainly true in the world of PowerShell. We are so far beyond “just a scripting language” now.

Wow! Have you been paying attention to PowerShell this year? So many big announcements! Today’s post is a crazy link list of 2016 PowerShell news, whitepapers and projects for your study list in 2017. Warning: This may seem a bit random. It is a massive amount of info, and I’m sure I missed some things. Not all of these are new topics, but they really gained traction this last year.

The One Link

This is the one link that I start every PowerShell conversation with: http://microsoft.com/powershell. Get the docs, download WMF, submit feedback, read blogs, and more. It’s all there.

Christmas movies or PowerShell videos?

PowerShell is a key component to making DevOps a reality on the Windows platform (now any platform). This graphic comes from Michael Greene’s WinOps 2016 presentation. Some of these community project names were new to me, so I compiled a list of links where we can find more information.

[Graphic: DevOps toolchain from Michael Greene’s WinOps 2016 presentation]

Release Pipeline Model

In 2016 Michael Greene and Steven Murawski did a road show with the Release Pipeline Model whitepaper. Check out all of these resources and appearances:

Release Pipeline Model Whitepaper Download

http://aka.ms/trpm

http://aka.ms/thereleasepipelinemodel

http://download.microsoft.com/download/C/4/A/C4A14099-FEA4-4CB3-8A8F-A0C2BE5A1219/The%20Release%20Pipeline%20Model.pdf

Slides for RPM talks

https://github.com/mgreenegit/slides/tree/master/TRPM

Ignite – Gain insight into a Release Pipeline Model

https://myignite.microsoft.com/videos/22116

https://www.youtube.com/watch?v=7L1-Kawajss

PowerShell and DevOps Global Summit Presentation April 2016 – First public presentation of the whitepaper

https://www.youtube.com/watch?v=bRd0XiMIRMs

WinOps 2016 Presentation

https://channel9.msdn.com/Events/WinOps/WinOps-Conf-2016/The-Release-Pipeline-Model

WinOps 2016 Interview

https://channel9.msdn.com/Events/WinOps/WinOps-Conf-2016/Michael-Greene-on-DevOps?ocid=relatedsession

PowerScripting Podcast 315 Release Pipeline Model

https://powershell.org/2016/08/15/episode-315-powerscripting-podcast-michael-greene-microsoft-and-steven-murawski-chef-on-the-release-pipeline-model/

RunAs Radio 469 Release Pipeline Model

http://www.runasradio.com/default.aspx?ShowNum=469

TechNetRadio

https://channel9.msdn.com/shows/technet+radio/tnr1666

Code Channels Interview

https://www.codechannels.com/video/microsoft/devops/13-the-release-pipeline-model-transform-it-ops-with-devops-practices/

https://channel9.msdn.com/Shows/DevOps-Dimension/13–The-Release-Pipeline-Model-Transform-IT-Ops-with-DevOps-Practices?ocid=player

Release Pipeline Tools Walk-Through Blog

https://devblackops.io/building-a-simple-release-pipeline-in-powershell-using-psake-pester-and-psdeploy/

Pester

Wyatt – Testing in PowerShell

https://www.youtube.com/watch?v=SftZCXG0KPA

Wyatt – Beyond Syntax: Types of Testing with Pester

https://www.youtube.com/watch?v=0fFrWIxVDl0

Blender – Test-Driven Development

https://www.youtube.com/watch?v=jvvh9cpD_LM

Pester YouTube Playlist

https://www.youtube.com/watch?v=chN5BZUmyQ0&list=PLmUhughzLLXiv7A-8DESD5_wwM6hIvrjT

Mississippi PS UG Presentation

https://youtu.be/o4ihc7atwYQ?list=PLmUhughzLLXiv7A-8DESD5_wwM6hIvrjT

PSake

https://twitter.com/psake_build

https://github.com/psake/psake

http://psake.readthedocs.io/en/latest/

https://www.powershellgallery.com/packages/psake/4.6.0

https://marketplace.visualstudio.com/items?itemName=qetza.psake

Git, Plaster, PSDeploy, PoshSpec, etc.

Warren Frame on Continuous Deployment with PowerShell

https://www.youtube.com/watch?v=jKLf1KjYhTM

Warren Frame Git Crash Course

https://www.youtube.com/watch?v=wmPfDbsPeZY

PoshSpec – Chris Hunt author

https://www.youtube.com/watch?v=IIlbPbXga0M

David Wilson – PowerShell projects with Plaster

https://www.youtube.com/watch?v=0OTLYWSdbtA

PSDeploy

http://ramblingcookiemonster.github.io/PSDeploy-Take-Two/

Invoke-Build

https://github.com/nightroman/Invoke-Build

Lability
https://channel9.msdn.com/Blogs/PSDEVOPSSIG/PSDEVOPSSIGEventLability-Demo-w-Iain-Brigton
https://blog.kilasuit.org/2016/04/13/building-a-lab-using-hyper-v-and-lability-the-end-to-end-example/
https://www.powershellgallery.com/packages/Lability/0.10.1

Visual Studio Code

VSCode has taken the world by storm, and it is a terrific PowerShell editor. It works on Windows, Linux, and Mac. And it’s free!

http://code.visualstudio.com/

https://github.com/PowerShell/vscode-powershell

https://github.com/PowerShell/vscode-powershell/blob/master/CHANGELOG.md

Open Source PowerShell & 10th Anniversary

Jeffrey and team finally did it. Now you can run it on Linux and Mac. You can even view and contribute to the source code.

https://channel9.msdn.com/Blogs/hybrid-it-management/PowerShell-on-Linux-and-Open-Source

https://blogs.msdn.microsoft.com/powershell/2016/08/18/powershell-on-linux-and-open-source-2/

https://blogs.msdn.microsoft.com/powershell/2016/08/17/windows-powershell-is-now-powershell-an-open-source-project-with-linux-support-how-did-we-do-it/

https://github.com/PowerShell/PowerShell

https://blogs.msdn.microsoft.com/powershell/2016/11/08/join-the-powershell-10th-anniversary-celebration/

https://channel9.msdn.com/Events/PowerShell-Team/PowerShell-10-Year-Anniversary

The PowerShell Team even has their own YouTube channel now:

https://www.youtube.com/channel/UCMhQH-yJlr4_XHkwNunfMog

For Your DevOps Book Shelf

Finally, for many people DevOps on any platform begins with reading The Phoenix Project. (I chose the 11 hour audiobook.) This year Gene Kim and team released a sequel The DevOps Handbook. Wow. This book has gold on every page. I recommend it for any organization going through the DevOps transformation.

Learn All The Things

There is no shortage of new material in the world of PowerShell and DevOps. Pick a topic and dive in. I hope you enjoy these links.

Pro Tip: PowerShell DSC Events to Monitor


The Problem

I need to monitor PowerShell DSC health on all of my nodes. But I do not want to wait for every possible event to happen in production to catch it and add it to my monitoring event list.

The Options

There are many options for monitoring PowerShell Desired State Configuration (DSC) status on your Windows nodes:

  • DSC reporting server
  • Get-DSCConfigurationStatus / Test-DSCConfiguration
  • xDscDiagnostics
  • OMS / Azure Automation DSC
  • Harvest and parse the status files under C:\Windows\System32\Configuration\ConfigurationStatus\
  • Event logs (which logs and events to capture?)
  • Windows Event Forwarding
  • Enterprise tools: Splunk, Nagios, SCOM, etc.

I am not saying any one of these options is better than the other. Some would certainly require more work.

One of the easiest options is to simply monitor the Windows event log on your DSC nodes. (While DSC is now available on other platforms, this tip is for Windows nodes.)

The Solution

Every company should already have an enterprise monitoring solution to collect server events centrally and alert on them.

This sweet little snippet below will give you a list of every possible event so that you don’t have to wait for them to happen in production to catch the ID to add to your monitoring solution.

(Get-WinEvent -ListProvider Microsoft-Windows-DSC).Events |
    Select-Object `
        @{Name='Log'       ;Expression={$_.LogLink.LogName}},   `
        @{Name='LevelValue';Expression={$_.Level.Value}},       `
        @{Name='LevelName' ;Expression={$_.Level.DisplayName}}, `
        Id, Description, Template |
    Out-GridView -Title 'DSC Client Events'

Now you can see every possible Information, Warning, and Error event for the Operational, Analytic, and Debug logs in the Microsoft-Windows-DSC provider. It also shows you what all of the place-holder values are in the error message. Here is a clipping of the output:

[Screenshot: clipping of the DSC event list output in Out-GridView]

You could even take this list and pipe it into some code to automatically generate an event log import template file for your monitoring tool of choice.
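For instance, a minimal sketch that dumps the catalog to a CSV file many monitoring tools can import (the output path is arbitrary):

```powershell
# Export the full DSC event catalog for import elsewhere
(Get-WinEvent -ListProvider Microsoft-Windows-DSC).Events |
    Select-Object `
        @{Name='Log'  ;Expression={$_.LogLink.LogName}},   `
        @{Name='Level';Expression={$_.Level.DisplayName}}, `
        Id, Description |
    Export-Csv -Path .\DscEventCatalog.csv -NoTypeInformation
```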

I can’t find the Analytic and Debug logs.

By default the Analytic and Debug logs are turned off. You can enable them using either wevtutil.exe or xDscDiagnostics\Update-xDscEventLogStatus.
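For example, assuming the standard channel names on recent Windows builds:

```powershell
# Enable the DSC Analytic and Debug channels with wevtutil.exe
# (/q:true suppresses the confirmation prompt)
wevtutil.exe set-log 'Microsoft-Windows-DSC/Analytic' /e:true /q:true
wevtutil.exe set-log 'Microsoft-Windows-DSC/Debug' /e:true /q:true

# Or, if the xDscDiagnostics module is installed:
Update-xDscEventLogStatus -Channel Analytic -Status Enabled
Update-xDscEventLogStatus -Channel Debug -Status Enabled
```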

But wait, there’s more!

There is more than one provider for DSC event logs, especially if you also want to monitor pull server events. Use this snippet of code to list the same event information as above for every DSC-related event log on your server.

$Providers = Get-WinEvent -ListLog *DSC*, *DesiredState* -Force |
    Select-Object -Unique -ExpandProperty OwningProviderName
$DSCEvents = ForEach ($Provider in $Providers) {
    (Get-WinEvent -ListProvider $Provider).Events |
        Select-Object `
            @{Name='Log'       ;Expression={$_.LogLink.LogName}},   `
            @{Name='LevelValue';Expression={$_.Level.Value}},       `
            @{Name='LevelName' ;Expression={$_.Level.DisplayName}}, `
            Id, Description, Template
}
$DSCEvents | Out-GridView -Title 'DSC Events'

If you would like more information on how this code snippet works or how to filter out specifics of the event message fields, then go read this popular previous post (and the sequel post) that explains events, schemas, and XML in PowerShell.

Happy Hunting

Now those of us who insist on everything being complete and orderly can put our minds at ease. You now have a way to list every possible DSC event log entry and decide which ones you want to monitor. Happy hunting.

Compare Group Policy (GPO) and PowerShell Desired State Configuration (DSC)


What is the difference between Group Policy (GPO) and PowerShell Desired State Configuration (DSC)? Why would I use one over the other? I hear these questions frequently. Today we are going to fully explore the pros and cons of each. Is GPO going the way of the floppy disk? Let’s find out.

The Contenders

Group Policy Objects (GPO) have been around since Windows 2000 for configuring Windows clients and servers. They require Active Directory, and domain controllers host the policies. Clients download the policies at login based on their location in the organizational unit (OU) hierarchy and based on filters by security groups and WMI. This is the first choice for managing Windows machines in the enterprise, and it is included in-box with the operating system. GPOs enforce system settings and facilitate compliance with corporate and industry policies.

PowerShell Desired State Configuration (DSC) was introduced with Windows Management Framework (WMF) 4.0. Like group policy, it can configure settings on Windows machines. However, DSC can leverage the automation of PowerShell to configure nearly anything on the system, beyond the preconfigured choices of group policy. DSC is also built into the operating system on Windows 8.1/Windows Server 2012 R2 and above. It is available down level by downloading and installing the latest WMF update (currently 5.1). DSC does not have a graphical management interface and requires knowledge of PowerShell scripting to build out tools for administration.

Group Policy vs. PowerShell DSC

Comparing GPO and DSC is like apples vs. oranges, or pirates vs. ninjas, or Chuck Norris vs. …. Well. They each have strengths and weaknesses. And each technology was created to solve a different problem… in completely different eras of computing. 15 years after GPO we now have DSC, addressing a new set of problems that did not exist when GPO came on the scene.

I like to keep things simple, so let’s make a matrix to compare the two. This comparison will be very thorough. If you are impatient, then skip to the end of the article for guidance on when to use either technology.

If you have ever watched the show Whose Line Is It Anyway, you know that “everything is made up and the points don’t matter.” I am going to give a point to each product along the categories of comparison. These points are based on my own personal bias, evaluating the capabilities of each technology in the current era of computing. Do you disagree? Let me know in the comments area below.

Category | GPO | GPO Points | DSC Points | DSC
History I started using Group Policy in 1999 on the release candidate of Active Directory in Windows Server 2000. This familiarity gives GPO an advantage as the incumbent in the room. It also has the advantage of years of feedback and refinement (and acquisitions).
GPO was an evolution of registry.pol files in the pre-Windows 2000 days.
1 DSC began in 2014 with the release of Windows Server 2012 R2, earlier if you count preview releases. Currently it is approximately a “version two” product still gaining momentum with broad community support and feedback. You know what they say, when Microsoft hits version three it is ready for prime time.
Domain Environment and Authentication GPO requires an Active Directory domain for traditional delivery, authentication, permissions, etc. GPO can be delivered to Nano Server via new cmdlets that will import a policy backup, but the policy engine is not available.
Outside of this you can manually configure local GPOs.
1 DSC works in both domain and non-domain environments. In fact, DSC does not even need a network. For one customer I created a thumb drive deployment script for field techs to build servers. It can be that easy. DSC can authenticate via any of the methods supported by WinRM/WsMAN.
DSC now has resources to leverage the Nano Server GPO cmdlets.
Operating System Requirements (including non-Windows) Wow. GPO works out-of-the-box on any Windows machine since Windows 2000. That’s huge.

Scoring this section is difficult. GPO is ubiquitous and easy. DSC is more powerful, more complicated, and not as complete. DSC will win this later with more refinement; for now I’ll call it a draw.

draw draw DSC requires WMF 4.0 or above. (You REALLY want to use 5.x or above.) This is available as far back as Windows 7 and Windows Server 2008 R2. Do you really need to support Windows 2000 any more? If you are still running native PowerShell 2.0 on Windows 7/2008 R2, why? Upgrade today. It’s a no-brainer. The support desk will love you for it.
(The only exceptions here are some server-tier products that don’t take kindly to a WMF upgrade. Be sure to read the release notes prior to deploying across your entire data center.)
You also will need remoting enabled on your machines, which is automatic on Windows Server 2012 and above. If InfoSec doesn’t like this, then send them to this article addressing PowerShell security.
But DSC also supports Linux/Unix machines and network switches… barely. Back in 2014 we announced DSC support cross-platform, but it is a work in progress and not really viable compared to all of the more mature solutions in that space… for now. Keep watching here. Azure is driving this, and it will grow quickly, especially since PowerShell is now open source cross-platform.
Behavior GPO prevents changes to the operating system. When applied to GUI settings, the options are grey and simply unavailable. That is pretty strong.
Group Policy Preferences (GPP) allow greater flexibility of setting something only once or allowing it to be changed temporarily.
draw draw DSC reverts setting via ApplyAndAutoCorrect configuration mode. If something does get changed, then DSC can put it back. However, DSC can also imitate GPO by setting the exact same registry keys and values. This gives DSC the advantage of both prevention and correction.
Like GPP, DSC also has an ApplyOnly mode for initial configuration that can change later.
Authoring Beginning in Windows Server 2003 you could click through the Group Policy Management Console (GPMC) to find and configure settings. Over time the number of settings has grown to thousands, and you need to download a spreadsheet to tell you where to find them in the interface. If you really like GUIs, then GPO is for you. “Just keep clicking; just keep clicking.” 1 DSC is authored in PowerShell, a 10-year-old scripting language that is present in all Microsoft enterprise products. This does require learning new skills, but so does everything else in IT. 15 years ago you had to learn GPO; now you have to learn DSC.
DSC gets the point here, because the power of automation is a force-multiplier when creating configurations.
Move your existing server build scripts into DSC for the win.
Workstation Management GPO wins here, no contest. GPO is your tool of choice for all of the unique workstation configurations across the enterprise. The number of variables would be overwhelming to DSC. 1 Yes, DSC can do workstations. I have a customer using it to apply security baselines across the enterprise, where the settings are all identical. DSC would also be handy for kiosks, classroom computers, and other static configuration scenarios.
Server Management GPO ensures compliance with industry security requirements for companies all around the world. This works well, even for non-domain machines where you have to import security templates. 1 DSC can do all that, too, and even easier than GPO in those non-domain environments.
If you want to manage all of the unique server configurations across the data center, then DSC is going to require more effort and/or paid tools. Stick with GPO and System Center there. DSC would facilitate a good baseline build of each server, though.
Server Build GPO can apply settings and even install software for a new server build, but friends don’t let friends deploy software with GPO. You might have to wait a bit for GPO to fully kick in, even using GPUPDATE. And there is that whole non-domain environment problem as well. 1 This is where DSC shines. It was made for server deployment… over and over and over. On-prem, cloud, you name it. Once the box is provisioned, DSC takes over inside the box to fully build it out. GPO was not intended for this and lacks the broad coverage of PowerShell under the covers. Yes, you could probably force GPO into this, but it would require a lot more duct tape.
Recently I helped a customer deploy non-domain joined Azure scale set VMs with GPO security baselines migrated into DSC. It was a beautiful thing. All servers came up identically and completely secured… all with just a couple PowerShell commands.
DevOps and Cloud Environments The domain requirement encumbers GPO in the agile world of DevOps. In order to deploy the same settings across multiple domain environments you would have to duplicate GPOs in each. This is no trivial task. It also requires management of OU and group memberships outside the DevOps pipeline. 1 If you want to go fast with modern cloud deployments or DevOps Continuous Integration/Continuous Deployment (CI/CD), then DSC is your tool of choice. DSC can make dev / test / qa / prod all match, for real, and keep it that way.
DSC can both build the server and then ensure the configuration going forward. GPO doesn’t have the coverage to compete here.
Extensibility GPO has always had the ability for you to create custom registry setting template files (ADM/ADMX). Advanced GPO customization involves creation of Client Side Extensions (CSE), which requires developer skills. 1 DSC is all about roll-your-own settings, and the customization extends well beyond the registry. You can configure anything that you can script in PowerShell. DSC wins here for ease of customization.
Community Support There may be a few custom items on the TechNet Script Center, but GPO is generally lacking in community extensions when compared with DSC. 1 Um. DSC wins this one hands down. From the beginning Microsoft has encouraged community contributions, and we have a rich GitHub and Gallery environment full of community content. This is one of the first times in history YOU have been able to work WITH Microsoft on the product. Our DSC resources are all open source and welcoming of your improvements and fixes. Of course the community is full of other code repositories as well.
Third Party Support GPO has a few vendors who have improved the functionality of the product over the years, and Microsoft acquired some of those to put in the box (ex. Group Policy Preferences, AGPM – Advanced Group Policy Management). The group policy MVPs Jeremy Moskowitz and Darren Mar-Elia each have released a number of GPO enhancements as well. draw draw Where DSC lacks in robust management features, third party vendors have stepped up. Chef and Puppet lend their enterprise readiness from the Linux side of the house to make DSC more administrator-friendly at scale.
Also, Azure Automation DSC has a nice dashboard experience for managing configurations and nodes. Keep an eye on this space.
Delivery Method Group Policy uses a pull model. The client logs on to an Active Directory domain. Then it begins a back-and-forth conversation to discover all the policies and download their respective files. “Got any jacks?” “Go fish”
The advantage of Active Directory is that the Domain Controllers (ie pull servers) are highly available and replicate automatically.
1 DSC offers push and pull. Similar to GPO, DSC can download a setting template (MOF) and the files that make it happen (resources). This pull server can be set up for SMB, HTTP, or HTTPS transport. However, the in-box pull server feature is not highly available. The Azure DSC pull server is highly available for nodes both on-prem and in the cloud, but this requires additional cost.
DSC additionally has robust support for peer-to-peer delivery of settings via push.
Firewall GPO uses an assortment of ports: SMB 445, RPCSS 135, RPC random ports 1 One port, encrypted by default. WinRM/WsMan 5985, optionally SMB only if the configuration leverages it.
Scalability Both GPO and DSC scale to thousands of machines. GPO wins for ease-of-use here. 1 DSC requires your own tooling and automation to scale. It is honestly much more work than linking a GPO to an OU. However, the automation and complexity of DSC makes it much more powerful than GPO.
If you need to encrypt credentials in DSC configurations, then scalability gets much more complicated with certificate management. It can be automated, but it is more complex.
How many can I have? It is really easy to apply multiple GPOs to a single machine, and you can see the result using GPRESULT or RSOP. You might have to trace down conflicts, but there is a clear winner and clear rules. 1 DSC can use partial configurations to apply multiple, unique MOF files to a single system. However, this is not best practice. Worse, this is a tedious process of configuring the LCM with specific names and publishing those via push or pull. GPO wins here for ease-of-use.
Raw Power GPO owns the Windows registry and obscure security settings that are difficult to reach via scripting techniques. It is a solid work-horse in this area. And Group Policy Preferences extend configuration coverage considerably. However, it simply lacks the breadth of settings compared to DSC, especially when we factor in ease of user-extensibility. 1 DSC can do just about anything GPO can do, but it has the impressive force of PowerShell behind it. Anything you can do in PowerShell can be deployed to all your nodes. The logic and extensibility of scripting is a clear winner here.
Timers GPO applies every 90 minutes, plus or minus up to 30 minutes, on machines in the domain. The exception is domain controllers which apply GPO every 5 minutes.
This interval is configurable in the registry from 0 minutes to 45 days, but this is not a common practice.
We will give the point to GPO here for the smallest refresh interval, but setting a smaller timer with either technology can cause issues.
1 DSC has two timers: 1. ConfigurationModeFrequencyMins determines how often the box is scanned for configuration drift. The minimum value is 15 minutes. 2. RefreshFrequencyMins is how often it polls the pull server for a new configuration to apply. This minimum is also 15 minutes.
These values must be a multiple of one another.
Whatever you do, don’t put the same setting in both GPO and DSC. The timers will battle back and forth resetting the same value all day long. There is no automatic conflict detection between the two.
Command Line GPRESULT and GPUPDATE are the most common commands. GPO also has a PowerShell module full of helpful commands like Invoke-GPUpdate and Get-GPResultantSetOfPolicy. The module is available when you install GPMC. However, GPO loses a point here, because the PowerShell cmdlets only work with registry-based settings of policies and preferences. When you try to script individual GPO settings you will hit this wall pretty fast.
To be fair, there is a decent set of cmdlets for working with whole policies (links, inheritance, backups, permissions, etc.).
1 DSC has a vast array of command line support, since it was borne out of PowerShell:
Get-DscConfiguration
Get-DscConfigurationStatus
Get-DscLocalConfigurationManager
Remove-DscConfigurationDocument
Restore-DscConfiguration
Stop-DscConfiguration
Invoke-DscResource
Publish-DscConfiguration
Set-DscLocalConfigurationManager
Start-DscConfiguration
Test-DscConfiguration
Update-DscConfiguration
Logging Both GPO and DSC have their own Windows Event Logs. 1 I am going to give the point here to DSC, because of the xDscDiagnostics module that does some cool collation across all the log sources. Also, DSC has a second logging mechanism to view with Get-DscConfigurationStatus, and this data is available centrally in Azure Automation DSC or from the in-box reporting/pull server.
Troubleshooting GPO has a generous assortment of event logs, text log files, and command line utilities for troubleshooting.
GPO cannot do the live debug like DSC, at least in a user-friendly way. Well, I suppose you could attach a Windows debugger to troubleshoot GPO. Do you know anyone who can do that (besides people at Microsoft)?
1 DSC has the dominance in command line and logging as discussed above, but it also has the ability to live debug remotely with no special tools as a configuration is applying. Now that’s cool!
Version Control Version control and release scheduling for GPO requires an additional license for Advanced Group Policy Management (AGPM). 1 Since DSC is configuration-as-code, any source code management tool will work. Popular options are Visual Studio Team Foundation Server (TFS), git-based solutions, and other third party products. I’ll give the point to DSC here, because some of these are free.
Updates GPO combined with WSUS is an effective patching solution for many organizations. This is an established pattern that works well. “If it ain’t broke, don’t fix it.” 1 Yes, DSC can apply patches or updates using the xHotfix resource. However, you really don’t want to write all of that in configuration-as-code. Even though you could automate it with scripting, it is less than ideal. Stick with your other patching solution.
The only exception here would be doing a new server build. Sometimes there are certain patches you might want to lay down first before proceeding to configure the box (like WMF 5.1). One-off patch scenarios like this are well-suited to DSC.
What-If Scenarios GPO can use RSOP modeling to determine “what configuration would this user and computer combination get in these OUs?” 1 DSC has a similar feature using Test-DscConfiguration -ReferenceConfiguration. The advantage goes to DSC for the ability to scan multiple machines simultaneously and return object-oriented data for easy bulk reporting.
Reporting How would you produce a list of machines that were in or out of compliance with GPO? There is RSOP and GPRESULT, but those are quite tedious to view the entire environment. Perhaps you have a third party product to do this, but this visibility across the enterprise is not easy for GPO. I guess you could use the RSOP cmdlets if you wanted to build your own solution. 1 Enterprise-wide compliance reporting for DSC is marginally better. The current in-box reporting server leaves a lot to be desired (like high availability), and the Azure Automation DSC solution costs money. However, you can script your own reporting solution using a number of DSC cmdlets like Test-DscConfiguration, Get-DscConfigurationStatus, etc.
I have heard of one person who built their own SCOM management pack for DSC. Other customers may leverage a System Center script to collect and report this data.
DSC has a nice object model to clearly identify which parts of the configuration are in and out of desired state.
Long-term Microsoft Direction Many people wonder if “GPO is dead”, but we are still releasing new policies for Office and Windows with each subsequent version. I do not expect this to go away any time soon. 1 Microsoft is making significant investments in DSC and it is used widely across our modern offerings. This is not a passing fad. Learn it now. It will be here for decades to come. For GPO, only time will tell. The PowerShell DSC team at Microsoft is full of ex-Group Policy product people. Maybe that is a clue.
Point Tally: Group Policy 7 | PowerShell DSC 16
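The LCM settings that came up in the Behavior and Timers categories (ApplyAndAutoCorrect, ConfigurationModeFrequencyMins, RefreshFrequencyMins) are all set through a meta-configuration. Here is a minimal sketch, assuming WMF 5.x and local application; the interval values are illustrative:

```powershell
# Sketch: LCM meta-configuration for the mode and timers discussed above
[DSCLocalConfigurationManager()]
Configuration LcmSettings {
    Node 'localhost' {
        Settings {
            ConfigurationMode              = 'ApplyAndAutoCorrect'
            ConfigurationModeFrequencyMins = 30   # drift scan interval (minimum 15)
            RefreshFrequencyMins           = 30   # pull server poll interval (minimum 15)
        }
    }
}
# Compile the meta-MOF and apply it to the local node
LcmSettings -OutputPath C:\DscLcm
Set-DscLocalConfigurationManager -Path C:\DscLcm -Verbose
```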

 

Summary Recommendations

Look at the score! Do these points really matter? Maybe. Depends on your goals. Clearly I am biased towards DSC. Why? Growth mindset. Stay locked into the old ways of GPO, or learn the new ways of DSC. Your choice. Tell me what you think in the comments area below.

At the time of this writing each technology has its own pros and cons. Here are the scenarios where I see each as most useful:

Group Policy
  • Enterprise workstations
  • Complex scenarios requiring deep layering or filtering of settings
  • Patching with WSUS
  • Staff with only GUI skills

PowerShell DSC
  • Server builds and baseline configurations
  • Static workstations
  • DevOps
  • Cloud and Nano Server deployments
  • Non-domain environments
  • Configuration drift scenarios
  • Compliance reporting
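As one concrete example of the compliance reporting scenario, a bulk drift scan across nodes might look like this sketch (node names are hypothetical; requires WMF 5.x on the targets):

```powershell
# Scan several nodes for drift and summarize which resources are out of state
$nodes = 'web01', 'web02', 'sql01'   # hypothetical node names
Test-DscConfiguration -ComputerName $nodes -Detailed |
    Select-Object PSComputerName, InDesiredState,
        @{Name       = 'ResourcesNotInDesiredState'
          Expression = {$_.ResourcesNotInDesiredState.ResourceId -join '; '}}
```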

If you are in a typical enterprise environment, don’t rip and replace GPO with DSC. If you are building green field, doing DevOps, or moving to the cloud, then start using DSC right away. I recommend standard DevOps advice in every case: start small with DSC in a limited scope, learn it, get some early wins, and then build out the span of control as your experience and the product grow.

If you still have not learned PowerShell, then you are at a severe disadvantage in your career. PowerShell is not a “new thing”; it is 10 years old. Similarly, Desired State Configuration is three years old now. This is not one of those “wait and see” technologies. It is here to stay. Learn it today.

Resources

There are several places to learn about PowerShell Desired State Configuration online:

  • Start here: https://microsoft.com/powershell. This page includes links for documentation, WMF download, etc. The documentation includes many code samples for getting started.
  • Microsoft Virtual Academy has several good videos for training on DSC.
  • Channel 9 also has many good training videos.
  • Follow #psdsc on Twitter for all the latest news and blog posts. Connect real-time with people using DSC in the real world.

Practical PowerShell Security: Enable Auditing and Logging with DSC


PowerShell Security

Almost two years ago Lee Holmes released his famous PowerShell ♥ the Blue Team whitepaper. This is required reading for anyone who works with PowerShell at all in their job or who is concerned about the security of PowerShell in their environment. I outlined a number of PowerShell security-related resources in this previous post: http://aka.ms/pssec.

I am not going to rehash all of the topics in the white paper, but I do want to make it easy for people to implement PowerShell auditing and logging. What better way than putting Lee’s recommendation and code samples into a Desired State Configuration (DSC)? If you would like to see Lee demonstrate these improvements you can watch the third module of this video series: http://aka.ms/MVAps5.

Practical DSC

Sometimes customers ask me what should go into their baseline configurations for servers. Now that is a fun conversation. Let’s just say PowerShell auditing is an easy one to overlook. PowerShell security is not on the radar in many IT shops. With this DSC configuration sample below it can become routine.

Security Features

As Lee outlined in the whitepaper, WMF 5.x includes a number of enhancements in the area of security:

  • Script block logging
  • System-wide transcription
  • Protected event logging (used together with Windows Event Forwarding)
  • and more…

Lee even provided sample code to set the Group Policy registry values to enable these enhancements. In DSC it is really easy to set registry values, so let’s get to it.

Requirements

We are assuming that the target system already has WMF 5.x installed or upgraded. Each of these enhancements requires the following settings:

  • Script block logging
    • HKLM:\Software\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging
      • EnableScriptBlockLogging, 1
      • EnableScriptBlockInvocationLogging, 1
    • It may also be a good idea to increase the log size. The Microsoft-Windows-PowerShell/Operational log is 15MB by default.
  • System-wide transcription
    • Create a directory to hold transcripts
      • Set permissions on the directory to prevent tampering. (I chose SDDL for the shortest code here.)
      • Trim the transcript directory contents on an interval to avoid filling the drive (if local).
      • We are going to use a local directory for now. Lee recommends pointing it to a share off-box.
    • HKLM:\Software\Policies\Microsoft\Windows\PowerShell\Transcription
      • EnableTranscripting, 1
      • IncludeInvocationHeader, 1
      • OutputDirectory, [Path]
  • Protected event logging (can be used together with Windows Event Forwarding)
    • Requires Windows 10 or Windows Server 2016
    • Requires a document encryption certificate
    • HKLM:\Software\Policies\Microsoft\Windows\EventLog\ProtectedEventLogging
      • EnableProtectedEventLogging, 1
      • EncryptionCertificate, [Certificate]
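To show how these registry values translate into DSC, here is a minimal sketch of the script block logging portion using the in-box Registry resource. This is a simplified excerpt, not the full configuration:

```powershell
Configuration EnableScriptBlockLogging {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        $key = 'HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging'

        # Turn on script block logging
        Registry ScriptBlockLogging {
            Key       = $key
            ValueName = 'EnableScriptBlockLogging'
            ValueData = '1'
            ValueType = 'Dword'
            Ensure    = 'Present'
        }
        # Optional: also log start/stop invocation events (higher volume)
        Registry ScriptBlockInvocationLogging {
            Key       = $key
            ValueName = 'EnableScriptBlockInvocationLogging'
            ValueData = '1'
            ValueType = 'Dword'
            Ensure    = 'Present'
        }
    }
}
```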

Note that the logging and transcription enhancements include an option for invocation logging. This is optional and will increase the logging volume. It basically adds start/stop header information for each command that is executed. You can omit this setting if you prefer a lower logging level.

Desired State Configuration

Now that we have an outline of the settings required, we can move those into a DSC configuration. You can view the code on my GitHub account here.

  • Script Block Logging
    • Two registry resources set the values.
    • Then for good measure we use a script resource to increase the size of the PowerShell event log.
  • Transcription
    • Three registry resources set the values.
    • We need a file resource to create the directory to hold the transcript files.
    • Then two script resources set the permissions on the directory and remove any old files. We probably could have used an external resource to set the directory permissions, but I want to keep this using in-box resources for simplicity. Keeping the transcript directory clean requires that you set the DSC Local Configuration Manager (LCM) to ApplyAndAutocorrect.
    • NOTE: Remove the file and script resources if you send the transcripts to a remote UNC share path.
    • NOTE: If you use a local path, have fun trying to read the transcript files. Also, the trim files script resource will likely generate warnings trying to clear old files.
  • Protected Event Logging
    • I’m going to skip this one due to a couple reasons:
    • Right now we do not have a way to request a certificate in a configuration, and then pass that data to another resource in the same configuration. Writing a custom resource for that wouldn’t be practical, because I’m trying to keep this to in-box DSC resources for now.
    • Most customers already have an event collection tool (SIEM). If not, then explore the xWindowsEventForwarding resource module.
  • Fit and Finish
    • Notice that the configuration has the following parameters:
    • TranscriptPath – Directory where you want to put the transcript files. Can be local or UNC path.
    • TranscriptDays – How many days of transcripts do you wish to retain?
    • EventLogSizeInMB – Size to set for the PowerShell log to hold the additional content generated.
  • BONUS
    • For completeness I threw in a configuration to disable the transcription and logging.
    • I also threw in a couple lines to query the event logs for your new events.
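The transcript-cleanup script resource mentioned above boils down to logic like this sketch; the path and retention values correspond to the configuration’s TranscriptPath and TranscriptDays parameters:

```powershell
# Sketch of the transcript cleanup a script resource might run
$TranscriptPath = 'C:\PSTranscripts'   # corresponds to the TranscriptPath parameter
$TranscriptDays = 30                   # corresponds to the TranscriptDays parameter

# Remove transcript files older than the retention window
Get-ChildItem -Path $TranscriptPath -Recurse -File -ErrorAction SilentlyContinue |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-$TranscriptDays) } |
    Remove-Item -Force -ErrorAction SilentlyContinue
```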

Ideally you would take this sample code and create two configurations: one for the nodes being audited and one to set up the server with the UNC share receiving the transcripts and Windows Event Forwarding.

The code on my GitHub account is purely sample for you to copy/paste into your own configurations. This makes a great finish for a baseline server security config.

Gotchas

The PowerShell console and ISE will cache group policy settings when they are launched. In order to see the effects of the logging and transcription you will need to open a new session.

If you have DSC set to scan the node for compliance (as most of us do), note that this will generate a significant amount of logging via each of these methods.

So if you decide to put the transcript folder on the local box, then the DSC configuration is going to error out when it goes to check the folder for old files to purge. You will see orange warning text when this happens.

Audit Thy Servers

Now you have a cool whitepaper on PowerShell security to read and some free DSC code to play with. Use the comments area below to discuss any questions or ideas.

Cheers!

PowerShell Remoting and Kerberos Double Hop: Old Problem – New Secure Solution


PowerShell and DevOps Global Summit 2017

This week I enjoyed presenting at the PowerShell and DevOps Global Summit 2017. If you have not attended, I highly encourage it. You will get to meet PowerShell team members from Microsoft, MVPs, and the people you follow on Twitter! Follow @PshSummit on Twitter to get the alerts for registration. I even work for Microsoft, and I learn a ton every year from the amazing sessions. It is also great connecting with everyone in the PowerShell community.

See this previous blog post for the benefits of resource-based Kerberos constrained delegation and how it can apply to PowerShell remoting. It is not a complete solution, but it works for the key scenarios described below and a few others. The article also outlines a number of other possible Kerberos double hop solutions. The PowerShell documentation team took that article, tweaked it, and turned it into a documentation page here.
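At its core, resource-based constrained delegation is a single property change on the target computer account. A minimal sketch using the ActiveDirectory module, with hypothetical server names:

```powershell
# Grant ServerB's computer account delegation rights to ServerC (sketch)
Import-Module ActiveDirectory
$serverB = Get-ADComputer -Identity 'ServerB'
Set-ADComputer -Identity 'ServerC' -PrincipalsAllowedToDelegateToAccount $serverB

# Purge cached service tickets on ServerB so the change takes effect promptly
Invoke-Command -ComputerName 'ServerB' -ScriptBlock { klist purge -li 0x3e7 }
```

The helper functions below wrap this pattern with credential handling and cross-domain support.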

tl;dr

  • This is a follow up to my previous blog post on Kerberos double hop and PowerShell remoting.
  • I have published some helper functions for working with resource-based Kerberos constrained delegation (RB KCD) and PowerShell remoting: Enable-RBKCD, Disable-RBKCD, Get-RBKCD.
  • Get the files and slides on my GitHub here.
  • RB KCD works with a limited set of commands and functions running under SYSTEM account in the kernel context.
  • RB KCD does not support WinRM / PowerShell remoting, because that runs under the NETWORK account.
  • For cases where RB KCD does not work you can nest two Invoke-Command statements to make the double hop. See helper function Invoke-DoubleHop.
  • You can share these RB KCD articles and scripts with this short link: http://aka.ms/pskdh

The Problem

Classic Kerberos Double Hop

I am on ServerA, connected to ServerB where I need to reach ServerC. I have permissions to ServerC, but I still get Access Denied. Default Kerberos rules prevent ServerB from passing credentials to ServerC. The most common example is a user (ServerA) connected to a web server (ServerB/frontend) that needs to use the user’s credentials to access a database server (ServerC/backend). In a previous blog post I described multiple popular (and not-so-popular) work-arounds.

Scenario A: Jump Server

From my workstation I connect to my jump server (tool server, etc. whatever you like to call it) via PowerShell remoting (Enter-PSSession, Invoke-Command). From that server I want to reach out and collect data from multiple other servers for a report. I am in the Administrators group on all of these servers, but I get an Access Denied when attempting to access them from my jump server.

Why not connect directly to the servers? Perhaps I have limited network connectivity or restricted routing. Maybe it is a DMZ or a hosted environment. There are many legitimate scenarios why you may choose this approach.

Scenario B: Remote Software Install

Another popular scenario is installing software remotely. From my workstation (ServerA) I want to fan out to 50 servers (ServerB) and install an application whose source files are hosted on a file share (ServerC). Here again I will get Access Denied at the file share even though I know I have permissions. This is Kerberos double hop.

Scenario X

There are many more scenarios for Kerberos double hop. RB KCD will help with some of them. Invoke-DoubleHop should help with more of them. And some will likely have no other choice but to continue using CredSSP for the time being. You will need to experiment to see which commands are compatible with RB KCD (running as SYSTEM in kernel context).

For example, from your workstation you connect to your SharePoint server with PowerShell remoting. The SharePoint cmdlets need to access a backend SQL server, but they fail. Typically CredSSP is the solution. I have peers who have not yet been successful getting RB KCD to work in this case. I suspect it would need to be configured on service accounts and may work then. Let me know if you figure this one out.

Two Solutions, One Module

I created a helper module for quickly configuring RB KCD and for cheating with nested Invoke-Command cmdlets.

PS> Import-Module rbkcd.psm1

PS> Get-Command -Module RBKCD

CommandType Name             Version Source
----------- ----             ------- ------
Function    Disable-RBKCD    0.0     RBKCD
Function    Enable-RBKCD     0.0     RBKCD
Function    Get-RBKCD        0.0     RBKCD
Function    Invoke-DoubleHop 0.0     RBKCD

Here are some examples:

# Both ServerB and ServerC in the same domain.
Enable-RBKCD -ServerB sb.proseware.com -ServerC sc.proseware.com -Credential (Get-Credential)

# ServerB and ServerC in different domains.
Enable-RBKCD -ServerB sb.proseware.com -ServerC ms1.alpineskihouse.com -DomainBCred (Get-Credential) -DomainCCred (Get-Credential)

# See which identities are allowed to delegate to ServerC
Get-RBKCD -ServerC sc.proseware.com -Credential (Get-Credential proseware\adminacct)

# Remove all identities allowed to delegate to ServerC
Disable-RBKCD -ServerC sc.proseware.com -Credential (Get-Credential proseware\adminacct)

# For scenarios that do not work with RB KCD
Invoke-DoubleHop -ServerB sb -ServerC sc -DomainBCred $DomainBCred -Scriptblock {
    dir \\sc\c$
}

While these functions were written to help with RB KCD for PowerShell remoting, they could be used for any other RB KCD scenario. Note that these only work with computer accounts. You could expand the code to work with service accounts or user accounts also.

Did this work for you?

This is a bit of a niche topic, but lots of people struggle with it. Hopefully this was helpful. Please use the comments below to help the community understand where this was helpful for you and where it was not. This is an ongoing research project for me, and your feedback is valuable. Thank you. Happy scripting!


Top 10 PowerShell DSC Node Events to Monitor


In a previous blog post I demonstrated how to get a list of all possible PowerShell Desired State Configuration (DSC) events for monitoring. Admittedly, that was an overwhelming list. Today I want to narrow that down to the essentials of DSC monitoring events.

These are the events you’re looking for.

Recently I was working with a customer who wanted specific events for DSC monitoring. I did my testing with a Windows Server 2012 R2 node running WMF 5.1. The pull server was on the same versions. I fired up a node connected to the pull server and labbed a number of common scenarios you would want to monitor.

DSC node events are recorded in the Microsoft-Windows-DSC/Operational log. Here are the main events you want to capture. I have assigned a simple category to each of these.

Category             Event ID     Level        Status
Desired State        4115 / 4343  Information  Consistency scan completed (i.e. in desired state if 4249 is not also present)
Desired State        4249         Warning      Failed consistency scan (i.e. not in desired state); only appears in ApplyAndMonitor mode
Configuration Apply  4097         Error        Configuration failed to apply
Configuration Apply  4332         Information  Listing of resources applied in the configuration
Configuration Apply  4257         Information  LCM settings during the configuration
Node Pull            4252         Error        Node failed to download from pull server; only event 4252 with Error Category 8 in the message
Node Report          4264 / 4266  Information  Node successfully reported to report server
Node Report          4260         Error        Node failed reporting to report server

In some cases there may be other events indicating similar status; these IDs are the least chatty. Of these ten events, the three Error-level entries (4097, 4252, and 4260) are the essential error conditions to monitor.

Note the following points:

  • Event 4249 only shows up in ApplyAndMonitor configuration mode to indicate configuration drift. In my testing I could not find an event indicating configuration drift when ApplyAndAutocorrect actually makes a correction to the configuration.
  • In the message body of some events you will see PerformRequiredConfigurationChecks. These bit flag values are documented here.
  • Event 4252 appears for all kinds of conditions. Differentiate the events by the message body and the Error Category data inside the event.
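As a sketch of that last point, you can filter on the event ID first and then inspect the message body. The exact "Error Category" wording in the pattern below is an assumption based on my test events, so adjust it to match what your log actually shows:

```powershell
# Pull only event 4252 from the DSC log, then keep the pull-server
# download failures by matching on the Error Category text in the message.
Get-WinEvent -FilterHashtable @{
        LogName = 'Microsoft-Windows-DSC/Operational'
        Id      = 4252
    } -MaxEvents 100 |
    Where-Object { $_.Message -like '*Error Category*8*' } |
    Select-Object TimeCreated, Id, Message
```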

Scripting to Capture Logs

Here is some quick syntax to remotely query the events. Note that I limit the total number of events returned for performance reasons. Tweak MaxEvents as needed.

Invoke-Command -ComputerName server1,server2,server3 -ScriptBlock {
    Get-WinEvent -LogName 'Microsoft-Windows-DSC/Operational' -MaxEvents 50
} |
    Select-Object PSComputerName,TimeCreated,LevelDisplayName,Id,Message |
    Out-GridView

Here is some quick syntax to export all of the DSC event logs, optional pull server details, and zip them up for analysis off-box. I use this when troubleshooting DSC.

New-Item -ItemType Directory -Path C:\logs -ErrorAction SilentlyContinue
(Get-WinEvent -ListLog *desired*,*dsc*).LogName |
Where-Object {$_ -notlike "*admin*"} |
ForEach-Object {
    wevtutil export-log /overwrite:true $_ "C:\logs\$($env:COMPUTERNAME)_$($_.Replace('/','-')).evtx"
}
'System','Application' | ForEach-Object {
    wevtutil export-log /overwrite:true $_ "C:\logs\$($env:COMPUTERNAME)_$($_).evtx"
}
If ((Get-WindowsFeature DSC-Service).Installed) {
    Get-ChildItem 'C:\Program Files\WindowsPowerShell\DscService' > C:\logs\DscService.txt
    Copy-Item -Path 'C:\inetpub\wwwroot\PSDSCPullServer\web.config' -Destination C:\logs
}
$PSVersionTable > C:\logs\PSVersionTable.txt
Compress-Archive -Path C:\logs\*.evtx,C:\logs\*.config,C:\logs\*.txt `
    -DestinationPath "C:\logs\$($env:COMPUTERNAME)_DSC_Logs.zip" -Update

The xDscDiagnostics module has a function New-xDscDiagnosticsZip which will get most of these things and a few other items. This code above is tailored for my own DSC troubleshooting needs. Note that my version will attempt to collect additional details from a pull server, assuming the default install paths.

Additional Resources

For more info on troubleshooting DSC and logs see the documentation here: https://msdn.microsoft.com/en-us/powershell/dsc/troubleshooting

Don’t forget to check out my previous blog post for more on working with DSC event logs.

Comments

What do you monitor for DSC events? Did I miss any? If so, let me know in the comments area below.

Getting Started with PowerShell Core on Windows, Mac, and Linux

This is deeper than Coke vs. Pepsi or Ford vs. Chevy. We are breaking down the barriers. Cats and dogs living together. Are you ready for this?
This month I posted over on the PowerShell team blog about my recent experiences with PowerShell on Linux and Mac. It is a ton of fun. Check out the post here:
Enjoy!

Slow Code: Top 5 Ways to Make Your PowerShell Scripts Run Faster


Slow code?

Are you frustrated by slow PowerShell scripts? Is it impacting business processes? Need help tuning your scripts? Today's post is for you.

Can you identify with any of these customer slow PowerShell stories?

Case #1

Customer is scanning Active Directory Domain Controllers in multiple domains and forests scattered all over the state on slow links. This key audit data takes 62 hours to collect and impacts the business response to audit requests. After applying these techniques, the customer reports that the script now completes in 30 minutes.

Case #2

Customer needs to update user licensing on Office 365 for daily new user provisioning. This impacts the business due to 10 hour run time of the script. After applying these optimization tips, the script finishes in 14 minutes.

Case #3

Customer is parsing SCCM client log files that rotate every few minutes. But the script takes longer to run than the log refresh interval. After applying these techniques, the script is 10% of its original length and runs 10 times faster.

Scripting Secrets

After several years of teaching and advising PowerShell scripting I have observed some routine practices that lead to poor script performance. Often this happens with people who copy and paste their scripts from internet sources without truly understanding the language. Other times it simply comes from a lack of formal training. Regardless, today I am going to share with you the secrets I have shared with many customers to improve their script run time.

The classic programming trade-off is speed vs. memory. We want to be aware of both as we write the most efficient code.

Problem #0: Not using cmdlet parameter filters

There is an ancient PowerShell pipeline proverb: Filter left, format right. Disobey it, and your script will take a while. It means that you should filter the pipeline objects as far to the left as possible. And formatting cmdlets should always go at the end, never the middle.

Early on a customer reported to me, "Querying event logs over the WAN is taking days!" Study these two code samples below. Which do you think is faster and why?

# Approach #1
Get-WinEvent -LogName System -ComputerName Server1 |
  Where-Object {$_.InstanceID -eq 1500}

# Approach #2
Get-WinEvent -FilterHashtable @{LogName='System';ID=1500} `
  -MaxEvents 50 -ComputerName Server1

The first approach retrieves the ENTIRE event log over the wire and then filters the results in local memory. The second approach uses cmdlet parameters to effectively reduce the dataset coming from the remote system.

This same advice applies to any cmdlet that queries data, whether local or remote. Be sure to explore the help for all the parameters, even if they look complicated at first. It is worth your time to write the code correctly.

Yes, #2 is faster. Much faster.

Problem #1: Expensive operations repeated

Usually I see this manifest as a query to Active Directory, a database, Office 365 accounts, etc. The script needs to process multiple sets of data, so the script author performs a unique query each time. For example, I need to report on the license status of 10 user accounts in Office 365. Which pseudo code would be faster?

For Each User
    Query the account from Office 365
    Output the license data of the user

Or this:

Construct a single query to Office 365 that retrieves all users in scope
Execute the query and store it into a variable
Pipe the variable into the appropriate loop, selection or output cmdlet

Yes, the second is more efficient, because it only performs the expensive operation once. It may be a little more involved to construct the query appropriately. Or you may need to retrieve an even larger data set if you cannot precisely isolate the accounts in question. However, the end result is a single expensive operation instead of multiples.

Another expensive operation is crawling an array to search a value:

For ($i=0; $i -lt $array.count; $i++) {
    If ($array[$i] -eq $entry) {
        "We found $entry after $($i+1) iterations."
        $found = $true
        Break
    }
}

Instead, add the items to a hash table which has blazingly fast search performance:

$hashtable.ContainsKey($entry)

See Get-Help about_Hash_Tables for more information on my favorite PowerShell data structure.
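As a minimal sketch (carrying over the variable names from the loop above), load the array into a hash table once, and every lookup after that is a single hashed key check instead of a full scan:

```powershell
# One-time cost: index the array values as hash table keys.
$hashtable = @{}
ForEach ($item in $array) { $hashtable[$item] = $null }

# Near-constant-time lookup, no matter how large the original array was.
If ($hashtable.ContainsKey($entry)) {
    "We found $entry."
}
```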

Problem #2 & #3: Appending stuff

Append-icitus is painful, but appending to objects is more painful. This usually comes in one of two forms:

  1. Appending to files
  2. Appending to arrays

Appending to files

I usually see this with script logging output. Cmdlets like Add-Content, Out-File -Append and Export-CSV -Append are convenient to use for small files. However, if you are using these in a loop with hundreds or thousands of iterations, they will slow your script significantly. Each time you use one of these it will:

  • Open the file
  • Scroll to the end
  • Add the content
  • Close the file

That is heavy. Instead use a .NET object like this:

$sw = New-Object System.IO.StreamWriter "c:\temp\output.txt"
for ($a=1; $a -le 10000; $a++)
{
    $sw.WriteLine($BigString)
}
$sw.Close()

For CSV output, this may require you to construct your own CSV delimited line of text to add to the file. However, it is still significantly faster.
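Here is a sketch of that idea, assuming each item has the three properties shown; the quoting is kept deliberately simple, so handle embedded quotes yourself if your data contains them:

```powershell
# Write the header once, then one formatted CSV line per item.
$sw = New-Object System.IO.StreamWriter 'C:\temp\output.csv'
$sw.WriteLine('"Property1","Property2","Property3"')
ForEach ($Item in $Items) {
    $sw.WriteLine('"{0}","{1}","{2}"' -f `
        $Item.Property1, $Item.Property2, $Item.Property3)
}
$sw.Close()
```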

Appending to arrays

I used to do this one often until someone pointed it out to me.

# Empty array
$MyReport = @()
ForEach ($Item in $Items) {
    # Fancy script processing here
    # Append to the array
    $MyReport += $Item | Select-Object Property1, Property2, Property3
}
# Output the entire array at once
$MyReport | Export-CSV -Path C:\Temp\myreport.csv

Now this is one step better, because we are not appending to a file inside the loop. However, we are still appending to an array, which is an expensive memory operation. Behind the scenes .NET duplicates the entire array in memory, adds the new item, and deletes the old copy.

Here is the more efficient way to do the same thing:

$MyReport = ForEach ($Item in $Items) {
    # Fancy script processing here
    $Item | Select-Object Property1, Property2, Property3
}
# Output the entire array at once
$MyReport | Export-CSV -Path C:\Temp\myreport.csv

You can actually assign the variable one time in memory by capturing all the output of the loop. Just make sure the loop only outputs the raw data you want in the report.

Another option is to use a hash table or .NET array list object. These data structures can dynamically add and remove items without the memory swapping of an array. See Get-Help about_Hash_Tables or System.Collections.ArrayList.
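For instance, a generic list grows in place, so the report loop above could also be written like this (an ArrayList works similarly, but its Add method returns the new index, which you would suppress with [void]):

```powershell
# A List[object] adds items without recreating the whole collection.
$MyReport = New-Object System.Collections.Generic.List[object]
ForEach ($Item in $Items) {
    $MyReport.Add(($Item | Select-Object Property1, Property2, Property3))
}
$MyReport | Export-Csv -Path C:\Temp\myreport.csv -NoTypeInformation
```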

Problem #4: Searching text

The log parsing example I mentioned in Case #3 above gets a lot of people, especially if you started scripting in VBScript where string methods were quite common. Here is a quick chart comparing the three most popular text parsing methods, including links for more info.

Technique                            Friendly  Power
String methods                       Yes       No
Regular expressions                  No        Yes
Convert-String / ConvertFrom-String  Yes       Yes

Sometimes string methods (ToUpper, IndexOf, Substring, etc.) are all you need. But if the text parsing requires pattern matching of any kind, then you really need one of the other methods, which are much faster as well.

Here is a simple example of using string methods:

$a = 'I love PowerShell!'
# View the string methods
$a | Get-Member -MemberType Methods
# Try the string methods
$a.ToLower()
$a.ToLower().Substring(7,10)
$a.Substring(0,$a.IndexOf('P'))

While string methods are easy to discover and use, their capability gets cumbersome very quickly.

Observe this comparison of three techniques:

$domainuser = 'contoso\alice'

# String methods
$domain = $domainuser.Substring(0,$domainuser.IndexOf('\'))
$user   = $domainuser.Substring($domainuser.IndexOf('\')+1)

# RegEx
$domainuser -match '(?<domain>.*)\\(?<user>.*)'
$Matches

# Convert-String
'contoso\alice' |
    Convert-String -Example 'domain\user=domain,user' |
    ConvertFrom-Csv -Header 'Domain','User'

RegEx is used widely in PowerShell: -split, -replace, Select-String, etc. RegEx excels at parsing string patterns out of text with speed. Take some time to learn it today (Get-Help about_Regular_Expressions).

The new Convert-String and ConvertFrom-String cmdlets were introduced in PowerShell 5. See the links in the chart above for more detailed examples of these powerful text parsing cmdlets. ConvertFrom-String excels at parsing multiple values out of multi-line patterns. And that is exactly what challenged the customer in Case #3 above.

How can I tell how long my script runs?

Use one of these techniques to test different versions of your code for speed.

PowerShell has a cmdlet Measure-Command that takes an -Expression scriptblock parameter. This is the first way most people measure execution time.

Measure-Command -Expression {
    #
    # Insert body of script here
    #
}

Others will do something like this:

$Start = Get-Date
#
# Insert body of script here
#
$End = Get-Date
# Show the result
New-Timespan -Start $Start -End $End

Either method returns a TimeSpan object with properties for any desired unit of time. Just be sure to use the total properties for accuracy.
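To see why the total properties matter, compare a component property with its Total* counterpart; the component only holds the leftover fraction of the next larger unit:

```powershell
$elapsed = Measure-Command { Start-Sleep -Milliseconds 1500 }
$elapsed.Milliseconds       # roughly 500  - only the fractional-second part
$elapsed.TotalMilliseconds  # roughly 1500 - the full elapsed time
```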

If you do a slow operation one time, maybe the impact is small. But if you do it 1,000 times, then we are all in trouble. If the data processed in each loop is a rich object with many properties, then it is even worse (i.e. more memory). Review your loops carefully to identify expensive commands and optimize them.

Disclaimer

One of the challenges of sharing code publicly is that I am always learning. If you go back to my posts six years ago, you will find that I used some of these poor practices. I have re-written and re-blogged some of them. Others are still there.

Take-aways:

  • Keep learning
  • Review (and optimize) all code you find online before implementing it
  • Periodically review your most-used scripts in light of your new knowledge

Keep scripting!

More Tips

You can find more tips in The Big Book of PowerShell Gotchas over at PowerShell.org/ebooks.

Use Hash Tables To Go Faster Than PowerShell Compare-Object


Compare-Object gotcha down? Slower than my old 300 baud modem? Have no fear. Today we go faster using hash tables.

Let me state first that I love the cmdlet Compare-Object, and I have used it many times with great results. But at scale my customer had some serious performance issues.

The Problem - “I feel the need. The need for speed.”

So my customer has employed all the tricks from the last blog post on making your scripts go faster. But still the script takes hours to run. Between each command he dropped a timestamp into a log file. The culprit… Compare-Object. That single command was taking hours.

But let’s be fair. He’s comparing about 800,000 email addresses between two lists. It would take me weeks to do that by hand with a pencil and paper. Compare-Object is pretty quick at 13 hours. But let’s get this down to seconds.

The Research

First things first. What exactly is Compare-Object doing? To find out, view the source code over at the PowerShell open source GitHub location. So I did that. But I'm not a .NET developer. However, I did notice that the comments starting on line 120 helped me understand what it does, and it is very similar to my idea.

All I know is that when I want list processing to go faster in PowerShell I use hash tables. I’ll write my own version in native PowerShell and see if it is faster.

The Approach

We have two lists, and we need to know what is different. We want to make the most efficient use of both memory and computation.

If I compare every item in List1 against every item in List2, well, that’s going to take a while (n*m).

Each list comes in as an array. I need to look up all the items in List1 against List2. The fastest way to do lookups is with a hash table.

To find the differences, I will delete the matching entries from both List1 and List2. Arrays are slow at removing a single item, so again I will use hash tables.

After deleting all of the equal values, the only things left in each list are the unique values.

If you want to see what is equal, then I will stuff that into a third list (hash table) containing only the equal values.

The Code

I have placed the hash table comparison into a function called Compare-Object2.

<#
.SYNOPSIS
Faster version of Compare-Object for large data sets with a single value.
.DESCRIPTION
Uses hash tables to improve comparison performance for large data sets.
.PARAMETER ReferenceObject
Specifies an array of objects used as a reference for comparison.
.PARAMETER DifferenceObject
Specifies the objects that are compared to the reference objects.
.PARAMETER IncludeEqual
Indicates that this cmdlet displays characteristics of compared objects that
are equal. By default, only characteristics that differ between the reference
and difference objects are displayed.
.PARAMETER ExcludeDifferent
Indicates that this cmdlet displays only the characteristics of compared
objects that are equal.
.EXAMPLE
Compare-Object2 -ReferenceObject 'a','b','c' -DifferenceObject 'c','d','e' `
    -IncludeEqual -ExcludeDifferent
.EXAMPLE
Compare-Object2 -ReferenceObject (Get-Content .\file1.txt) `
    -DifferenceObject (Get-Content .\file2.txt)
.EXAMPLE
$p1 = Get-Process
notepad
$p2 = Get-Process
Compare-Object2 -ReferenceObject $p1.Id -DifferenceObject $p2.Id
.NOTES
Does not support objects with properties. Expand the single property you want
to compare before passing it in.
Includes optimization to run even faster when -IncludeEqual is omitted.
#>
function Compare-Object2 {
param(
    [psobject[]]
    $ReferenceObject,
    [psobject[]]
    $DifferenceObject,
    [switch]
    $IncludeEqual,
    [switch]
    $ExcludeDifferent
)

    # Put the difference array into a hash table,
    # then destroy the original array variable for memory efficiency.
    $DifHash = @{}
    $DifferenceObject | ForEach-Object {$DifHash.Add($_,$null)}
    Remove-Variable -Name DifferenceObject

    # Put the reference array into a hash table.
    # Keep the original array for enumeration use.
    $RefHash = @{}
    for ($i=0;$i -lt $ReferenceObject.Count;$i++) {
        $RefHash.Add($ReferenceObject[$i],$null)
    }

    # This code is ugly but faster.
    # Do the IF only once per run instead of every iteration of the ForEach.
    If ($IncludeEqual) {
        $EqualHash = @{}
        # You cannot enumerate with ForEach over a hash table while you remove
        # items from it.
        # Must use the static array of reference to enumerate the items.
        ForEach ($Item in $ReferenceObject) {
            If ($DifHash.ContainsKey($Item)) {
                $DifHash.Remove($Item)
                $RefHash.Remove($Item)
                $EqualHash.Add($Item,$null)
            }
        }
    } Else {
        ForEach ($Item in $ReferenceObject) {
            If ($DifHash.ContainsKey($Item)) {
                $DifHash.Remove($Item)
                $RefHash.Remove($Item)
            }
        }
    }

    If ($IncludeEqual) {
        $EqualHash.Keys | Select-Object @{Name='InputObject';Expression={$_}},`
            @{Name='SideIndicator';Expression={'=='}}
    }

    If (-not $ExcludeDifferent) {
        $RefHash.Keys | Select-Object @{Name='InputObject';Expression={$_}},`
            @{Name='SideIndicator';Expression={'<='}}
        $DifHash.Keys | Select-Object @{Name='InputObject';Expression={$_}},`
            @{Name='SideIndicator';Expression={'=>'}}
    }
}

Note that for my purposes I did not need to compare multiple properties, so this approach does not entirely duplicate functionality of the native Compare-Object. You could probably adapt this code for that purpose. I would drop each list object into a hash table value, while making the key a string representation of the one or more properties to be compared. I’ll leave that bit up to you.
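If you do attempt that adaptation, a hypothetical sketch might look like this: the composite key drives the comparison, and the value keeps the original object for output. The property names and the pipe delimiter are illustrative assumptions, not part of the function above:

```powershell
# Hypothetical multi-property variant: key = joined property values,
# value = the original object so it can be emitted in the results.
$RefHash = @{}
ForEach ($Item in $ReferenceObject) {
    $Key = '{0}|{1}' -f $Item.Property1, $Item.Property2
    $RefHash[$Key] = $Item
}
# Build the difference hash table with the same key format,
# then compare keys exactly as the single-value version does.
```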

Also note that, yes, I used ForEach. General consensus is that ForEach is slower than For. Feel free to adjust and see if that makes a difference in execution time for you.

The Results

# Native Compare-Object
Measure-Command -Expression {
    Compare-Object -ReferenceObject (Get-Content .\file1.txt) `
        -DifferenceObject (Get-Content .\file2.txt) -IncludeEqual
} | Select-Object TotalMilliseconds

# Hash table comparison
Measure-Command -Expression {
    Compare-Object2 -ReferenceObject (Get-Content .\file1.txt) `
        -DifferenceObject (Get-Content .\file2.txt) -IncludeEqual
} | Select-Object TotalMilliseconds

When racing the native Compare-Object against my hash table implementation here are the results:

  • For test lists of 1,000 items, Compare-Object finishes in five seconds while the hash table version finishes in <1 second.
  • For test lists of 100,000 items, the hash table finishes in five seconds while Compare-Object had not finished after multiple minutes (so I just killed the task).
  • For the customer’s 800,000 items, the hash table finished in 30 minutes, as opposed to 13 hours for Compare-Object. To be fair, the script does other tasks besides this Compare-Object. Regardless, that is a 25x performance improvement!

How is that for efficiency gain?!

The Moral of the Story

Learn hash tables today! They are the single most versatile, powerful, and fun data structure in all of PowerShell. Let me know your results in the comments area below.

“Goose, it’s time to buzz the tower.”

Function to Create Certificate Template in Active Directory Certificate Services for PowerShell DSC and CMS Encryption


Today I’m cleaning out my code closet. I found this script that I have wanted to share for a while now.

Problem

Active Directory Certificate Services does not include a template for Document Encryption. This is required for DSC credential encryption and the CMS encryption cmdlets. Current processes require manual effort to create the template. Or you must figure out how to use the less-than-friendly AD CS API from .NET. We all know I ain’t no .NET developer.

Solution

I reverse-engineered the resulting OID and certificate objects in Active Directory and wrote a function to create this template from code. This provides a fully-automated solution for creating the template in a lab or production environment.

Functionality

  • Take parameters
  • Generate a unique OID for the template
  • Create the template
  • Optionally permission the template with Enroll for specified group(s)
  • Optionally add AutoEnroll permission as well
  • Optionally publish the template to CA(s)
  • Optionally target all operations to a designated DC

Requirements:

  • Enterprise AD CS PKI
  • Tested on Windows Server 2012 R2 & 2016
  • Enterprise Administrator rights
  • ActiveDirectory PowerShell Module

Template generated will have these properties:

  • 2 year lifetime
  • 2003 lowest compatibility level
  • Private key not exportable
  • Not stored in AD
  • Document Encryption
  • No digital signature

Sample Usage

# Create only the template
# Least valuable approach
New-ADCSTemplateForPSEncryption -DisplayName PowerShellCMS
# Full template creation, permissioning, and deployment
New-ADCSTemplateForPSEncryption -DisplayName PSEncryption `
-Server dc1.contoso.com -GroupName G_DSCNodes -AutoEnroll -Publish

# From a client configured via GPO for AD CS autoenrollment:
$Req = @{
    Template          = 'PSEncryption'
    Url               = 'ldap:'
    CertStoreLocation = 'Cert:\LocalMachine\My'
}
Get-Certificate @Req
# Note: If you have the Carbon module installed, it conflicts with Get-Certificate native cmdlet.

$DocEncrCert = (dir Cert:\LocalMachine\My -DocumentEncryptionCert |
 Sort-Object NotBefore)[-1]

Protect-CmsMessage -To $DocEncrCert -Content "Encrypted with my new cert from the new template!"

Get the code

I have posted this code to the PowerShell Gallery here. Enjoy!

New, Improved Group Policy Link Report with PowerShell


A peer asked me to update one of my classic Group Policy reporting scripts this week, so I thought I would share the update with y'all.

Continuous Improvement

Over the years I have released a number of Group Policy scripts. This one shows you all kinds of goodness:

  • GPOs linked to OUs
  • OUs where block-inheritance may be turned on
  • Situations where no-override is used
  • Forensic data about the last time a GPO was linked or updated on an OU

By popular demand the improvements in this release are:

  • Unlinked GPOs are included
  • Columns are added for easy filtering where User or Computer version do not match
  • Output no longer defaults to CSV, so that you can pipe the output wherever you like

Show me some code

Call the script like this:

.\gPLinkReport.ps1 | Out-GridView
.\gPLinkReport.ps1 | Export-Csv -Path .\GPLinkReport.csv -NoTypeInformation

You can download today's script from the PowerShell Gallery here.

New-TimeSpan -Start ‘9/1/2017’


Dear readers of the GoateePFE blog,

It is with mixed emotion that I announce after seven years of real-world script blogging this is the final GoateePFE blog post. I have chosen to take the next step in my career with a company outside of Microsoft. My new role will involve automation, security, and customers.

When speaking at conferences it has been a pleasure and honor to meet people impacted by the GoateePFE blog posts and videos. I have enjoyed your Tweets and post comments as well. Please continue to share this content with others when it is helpful.

Thank You

You may not be aware that you have played a significant role in my career. Each review cycle I would give my manager a count of blog visits, demonstrating value delivered to the global community, to you. Sometimes I would include your tweets and comments, validating that I was connecting with real issues and providing real answers. Thanks for reading and sharing in the journey with me.

What will happen to "GoateePFE"?

My TechNet blog will remain for years to come, although I will no longer be able to update it. I will keep the @GoateePFE Twitter handle to maintain contact with the PowerShell community. My PowerShell video content will remain on Microsoft Virtual Academy and the Microsoft Premier video education subscription. My Facebook and GitHub locations will also remain. My next role will contain significant automation responsibilities, so I plan to continue involvement in the PowerShell community on some level.

Career Advice

In this post a few years back I shared some tips for career success. I hope that the content provided on the blog has helped you make your own mark on the world. I firmly believe that community participation is one of the best things you can do to advance your career and advance the industry. Join the conversation.

00100010 01000001 00100000 01100111 01101100 01101001 01110100 01100011 01101000 00111111 00100000 01001110 01101111 00101100 00100000 01110100 01101000 01100001 01110100 00100111 01110011 00100000 01101110 01101111 01110100 00100000 01110000 01101111 01110011 01110011 01101001 01100010 01101100 01100101 00101110 00100000 01001001 00100000 01110000 01110010 01101111 01100111 01110010 01100001 01101101 01101101 01100101 01100100 00100000 01101001 01110100 00100000 01101101 01111001 01110011 01100101 01101100 01100110 00101110 00100010 00100000 00101101 00100000 01000110 01110010 01100101 01100100 00100000 01010010 01100001 01101110 01100100 01100001 01101100 01101100 00100000 00101101 00100000 01010010 01101111 01100011 01101011 01100101 01110100 01101101 01100001 01101110

Cheers,

Mr. Ashley McGlone

@GoateePFE

