Error and Exception Handling Using Try/Catch in PowerShell



One of the most important components for creating PowerShell scripts is error and exception handling.

I've personally made the mistake of writing scripts without proper exception handling and then struggled to figure out why they terminated.😵

Error and exception handling is often a forgotten component of scripting because it's easy to assume the code will always execute in a straight line, exactly as written.
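As a quick preview of where this series is heading, here is a minimal try/catch sketch (the file path is just an illustrative example):

#Attempt a risky operation and handle the failure instead of letting the script terminate
try {
    #-ErrorAction Stop turns a non-terminating error into a terminating one so that catch can see it
    Get-Content -Path 'C:\temp\does-not-exist.txt' -ErrorAction Stop
}
catch {
    Write-Warning "Something went wrong: $($_.Exception.Message)"
}
finally {
    #Cleanup code that should run regardless of success or failure goes here
}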




Working with XML Files in PowerShell [Parsing]


In the last post, we worked with CSV files. The next type of file we're going to look at is Extensible Markup Language (XML). XML files are used for various purposes, for example, storing structured property data for configuration and data storage.
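As a small taste of what is coming, here is a minimal parsing sketch (the file path and element names are made-up examples):

#Load an XML file and navigate it as an object tree
[xml]$config = Get-Content -Path 'C:\temp\settings.xml'

#Dot into elements just like object properties (assuming a <settings><database> structure)
$config.settings.database

#Or query nodes with XPath
Select-Xml -Path 'C:\temp\settings.xml' -XPath '//database'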





Managing CSV Files Using Import-Csv/Export-Csv in PowerShell


In this PowerShell series, we are looking at working with files in PowerShell. The first file type we are covering is CSV (comma-separated values). We are going to look at two important cmdlets, Import-Csv and Export-Csv, which are widely used when working with CSV files.
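A minimal sketch of both cmdlets (the file path is illustrative):

#Export objects to a CSV file
Get-Process | Select-Object Name, Id, WorkingSet | Export-Csv -Path 'C:\temp\processes.csv' -NoTypeInformation

#Read the CSV back in; every property comes back as a string
$processes = Import-Csv -Path 'C:\temp\processes.csv'
$processes | Where-Object { [int]$_.Id -gt 1000 }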






How to Work with PowerShell Files (Read/Write) Using Set-Content & Get-Content

In the PowerShell script-writing series, we are working through some of the areas that are helpful when writing PowerShell scripts. In this post we continue with some important topics: reading and writing files and working with folders and subfolders.
Working with files (reading and writing) is another area where you should become very confident, as you will need to do it very frequently.

First, we will look at the basics of working with files: retrieving and writing file content. This can be achieved with the Get-Content, Set-Content, and Out-File cmdlets.

First of all, we will take a dedicated look at how you can export content to a file:

#Store the export file path
$exportedProcessesPath = 'C:\temp\test.txt'

#Write the processes table to a file
Get-Process | Set-Content -Path $exportedProcessesPath

#Open file to verify
psedit $exportedProcessesPath

#retrieving processes and exporting them to file
Get-Process | Out-File $exportedProcessesPath

#Open file to verify
psedit $exportedProcessesPath #or use notepad to open file

#retrieving processes and exporting them to file with Out-String
Get-Process | Out-String | Set-Content $exportedProcessesPath -PassThru

#Open file to verify
psedit $exportedProcessesPath #or use notepad to open file

There is a small difference between exporting content with the two aforementioned cmdlets. Set-Content will call the ToString() method of each object, whereas Out-File will format the objects first (as Out-String does) and then write the result to the file.

You will get a similar result when using Out-File as when using Set-Content in combination with Out-String, as shown above.

Sometimes, it may also be necessary to export the content with a specified encoding. There is an additional flag available to accomplish this task, as shown in the following example:


#retrieving processes and exporting them to file with a specific encoding
Get-Process | Out-String | Set-Content $exportedProcessesPath -Encoding UTF8
#Unicode (UTF-16) is another common choice
Get-Process | Out-String | Set-Content $exportedProcessesPath -Encoding Unicode

Reading file content in PowerShell

Retrieving content is done with the Get-Content cmdlet and works very similarly. One downside of the cmdlet is that it loads the complete file into memory. Depending on the file size, this may take very long and even become unstable. Here is an easy example that loads the content into a variable:
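#A minimal sketch, reusing the $exportedProcessesPath variable from above
$content = Get-Content -Path $exportedProcessesPath
$content.Count #number of lines read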

Because of this issue, it may become necessary to retrieve only a specific number of lines. There are two flags available for this, as follows:

#The last five lines
Get-Content -Path $exportedProcessesPath -Tail 5

#The first five lines
Get-Content -Path $exportedProcessesPath -TotalCount 5


Improving the performance of Get-Content

In addition, you can specify how many lines of content are sent through the pipeline at a time. The default value for the ReadCount flag is 1, and a value of 0 sends all content at once. This parameter directly affects the total time for the operation and can decrease the time significantly for larger files:


#Get-Content with ReadCount for a performance improvement
$data = (Get-Content -Path $exportedProcessesPath -ReadCount 50)

#Retrieving data as one large string
$data = Get-Content -Path $exportedProcessesPath -Raw


Working with files, folders, and subfolders

The next step when working with files and folders is searching for specific ones. This can be easily achieved with the Get-ChildItem command for the specific PSDrive:


#Simple Subfolders
Get-ChildItem -Path 'C:\temp' -Directory

#Recurse
Get-ChildItem -Path 'C:\Windows' -Directory -Recurse


#Simple Subfiles
Get-ChildItem -Path 'C:\temp' -File


#Recurse
Get-ChildItem -Path 'C:\Windows' -File -Recurse

As you can see, you can easily work with the -Directory and -File flags to define the outcome. But you will normally not use such simple queries, as you want to filter the result in a dedicated way.


The next, more complex example shows a recursive search for *.txt files. We take three different approaches to search for those files and compare their runtimes:



The Slow Approach!


#Define a location where txt files are included
$Dir = 'C:\temp\'

#Filtering with .Where()
$timeWhere = (Measure-Command {(Get-ChildItem $Dir -Recurse -Force -ErrorAction SilentlyContinue).Where({$_.Extension -like '*txt*'})}).TotalSeconds

$countWhere = $((Get-ChildItem $Dir -Recurse -Force -ErrorAction SilentlyContinue).Where({$_.Extension -like '*txt*'})).Count

#Filtering with Where-Object
$timeWhereObject = (Measure-Command {(Get-ChildItem $Dir -Recurse -Force -ErrorAction SilentlyContinue) | Where-Object {$_.Extension -like '*txt*'}}).TotalSeconds

$countWhereObject = $((Get-ChildItem $Dir -Recurse -Force -ErrorAction SilentlyContinue) | Where-Object {$_.Extension -like '*txt*'}).Count


The first two approaches use Get-ChildItem with filtering afterwards, which is always the slowest approach.



#Filtering with Include
$timeInclude = (Measure-Command {Get-ChildItem -Path "$($Dir)*" -Include *.txt* -Recurse}).TotalSeconds

$countInclude = $(Get-ChildItem -Path "$($Dir)*" -Include *.txt* -Recurse).Count

The third approach uses filtering within the Get-ChildItem cmdlet, using the -Include flag. This is obviously much faster than the first two approaches.


#Show all results
Write-Host @"
Filtering with .Where(): $timeWhere
Filtering with Where-Object: $timeWhereObject
Filtering with Include: $timeInclude
All methods retrieved the same number of items? $($countWhere -eq $countWhereObject -and $countWhereObject -eq $countInclude)
"@




You will also need to create new files and folders and combine paths very frequently, which is shown in the following snippet. The subdirectories of a folder are being gathered, and one archive folder will be created underneath each one:

#user folders
$UserFolders = Get-ChildItem 'c:\users\' -Directory

#Creating archives in each subfolder
foreach ($userFolder in $UserFolders)
{
    New-Item -Path (Join-Path $userFolder.FullName ('{0}_Archive' -f $userFolder.BaseName)) -ItemType Directory -WhatIf
}


Keep in mind that, due to the PSDrives, you can simply work with the basic cmdlets such as New-Item. We made use of the -WhatIf flag to just take a look at what would have been executed. If you're not sure that your construct is working as desired, just add the flag and execute it once to see its outcome.
A best practice for combining paths is to always use Join-Path, to avoid problems on different operating systems or with different PSDrives. Typical errors are forgetting to add the delimiter character or adding it twice. Join-Path avoids these problems and always adds exactly one delimiter.
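A quick sketch of this behavior (the paths are illustrative):

#Join-Path adds exactly one delimiter, whether or not the parent path ends with one
Join-Path -Path 'C:\temp' -ChildPath 'logs'    #C:\temp\logs
Join-Path -Path 'C:\temp\' -ChildPath 'logs'   #C:\temp\logs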

The next typical use case you will need to know is how to retrieve file and folder sizes.


The following example retrieves the size of a single folder, optionally displaying the size for each subfolder as well.


It is written as a function so that it can be extended dynamically. This might be good practice for you, in order to understand and apply the contents of the previous posts. You can try to extend this function with additional properties and functionality.


<#
.SYNOPSIS
Retrieves folder size.
.DESCRIPTION
Retrieves folder size of a dedicated path or all subfolders of the dedicated path.
.EXAMPLE
Get-FolderSize -Path c:\temp\ -ShowSubFolders | Format-List
.INPUTS
Path
.OUTPUTS
Path and Sizes
.NOTES
folder size example
#>
function Get-FolderSize {
    Param (
        [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
        $Path,
        [ValidateSet("KB","MB","GB")]
        $Units = "MB",
        [Switch] $ShowSubFolders = $false
    )
    if ((Test-Path $Path) -and (Get-Item $Path).PSIsContainer)
    {
        if ($ShowSubFolders)
        {
            $subFolders = Get-ChildItem $Path -Directory
            foreach ($subFolder in $subFolders)
            {
                $Measure = Get-ChildItem $subFolder.FullName -Recurse -Force -ErrorAction SilentlyContinue | Measure-Object -Property Length -Sum
                $Sum = $Measure.Sum / "1$Units"
                [PSCustomObject]@{
                    "Path" = $subFolder
                    "Size($Units)" = [Math]::Round($Sum,2)
                }
            }
        }
        else
        {
            $Measure = Get-ChildItem $Path -Recurse -Force -ErrorAction SilentlyContinue | Measure-Object -Property Length -Sum
            $Sum = $Measure.Sum / "1$Units"
            [PSCustomObject]@{
                "Path" = $Path
                "Size($Units)" = [Math]::Round($Sum,2)
            }
        }
    }
}


Next, we will dive into specific file types, as they hold some benefits in storing, retrieving, and writing information to file.

How to write basic scripts and functions in PowerShell 6


We are progressing well with the PowerShell tutorial; we have seen how to use credentials and how to work with variables, arrays, and hash tables.

In the last post, we gave an overview of working with script blocks.

In this post, we are going to dig deeper and understand how to design scripts and convert them into functions.

In this post, we are going to cover the following topics:

  • Scripts vs. functions
  • Passing parameters to a script
  • Converting to functions
  • Best practices for designing functions




You can accomplish many tasks in PowerShell by typing a command and pressing Enter in the shell console.

When you are starting out, that is usually enough: you type commands in the PowerShell console and get the required output.

But at some point you will hand a task off to someone else and need to make sure that it's done exactly as planned.

That's where scripts come in, and it's also where functions come in.


SCRIPT OR FUNCTION?

Suppose you have some task, perhaps one requiring a handful of commands in order to complete.

 A script is a convenient way of packaging those commands together.

Rather than typing the commands manually, you paste them into a script and PowerShell runs them in order whenever you run that script.

Following best practices, you'd give that script a cmdlet-like name, such as Get-ServerDetails.ps1.

Script blocks

A script block is characterized by curly braces and can accept parameters. It is the building block of functions, DSC configurations, Pester tests, and much more. Take the following code sample to see how script blocks can be used:


# A script block without parameters
{
    "Something is happening here"
}

# Executing a script block
({
    "Something is happening here"
}).Invoke()

# Or use the ampersand
& {
    "Something is happening here"
}



# With parameters
$scriptBlockArgs = {
    # Built-in $args - not recommended
    "$($args[0]) is happening here"
}

# Better: declare named parameters
$scriptBlockParam = {
    param
    (
        [string]
        $TheThing
    )
    "$TheThing is happening here"
}

$scriptBlockArgs.Invoke('Something')

# Positional and named invocation are both possible
$scriptBlockParam.Invoke('Something')
& $scriptBlockParam -TheThing SomeThing


Steps: Writing commands


Consider the below command, which is executed in the PowerShell console.

Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3" -ComputerName venkat |
    Select-Object -Property DeviceID,
        @{Name='ComputerName';Expression={$_.PSComputerName}},
        Size, FreeSpace


This command uses WMI to retrieve all instances of the Win32_LogicalDisk class from a given computer. It limits the results to drives having a DriveType of 3, which specifies local, fixed disks.
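Note that Get-WmiObject is only available in Windows PowerShell; PowerShell 6 (Core) replaces it with the CIM cmdlets. A roughly equivalent sketch using Get-CimInstance would look like this:

Get-CimInstance -ClassName Win32_LogicalDisk -Filter "DriveType=3" -ComputerName venkat |
    Select-Object -Property DeviceID,
        @{Name='ComputerName';Expression={$_.PSComputerName}},
        Size, FreeSpace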


Using a Param() block to convert it into a script:

#define params
Param(
    [String]$ComputerName,
    [int]$driveType = 3
)
Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=$driveType" -ComputerName $ComputerName |
    Select-Object -Property DeviceID,
        @{Name='ComputerName';Expression={$_.PSComputerName}},
        Size, FreeSpace

Note

  • You define a Param() section. This section must use parentheses—given PowerShell's other syntax, it can be tempting to use braces, but here they'll cause an error message.

Converting to script

Save the above code as Get-Details.ps1


Running the script

PS C:\> .\Get-Details.ps1 -computerName localhost

DeviceID ComputerName Size FreeSpace
-------- ------------ ---- ---------
C: Venkat 42842786562 3246185652



PS C:\> .\Get-Details.ps1 -computerName localhost |
>> Where-Object { $_.FreeSpace -gt 500 } |
>> Format-List -Property *


DeviceID : C:
ComputerName : venkat
Size : 42842786562
FreeSpace : 3246185652

The example piped the output of the script to Where-Object and then to Format-List, changing the output.

This result may seem obvious to you, but it impresses the heck out of us! Basically, this little script is behaving exactly like a real PowerShell command.


Working with functions

Function declaration

Decorating a script block with the function keyword and a name is, at first, the only thing that makes a function. Adding your functions to a module will enable you to package them, version them, and ship them as a single unit.


Function Where-LessSpace {
    Param(
        [int]$LessSpace
    )
    BEGIN {}
    PROCESS {
        If ((100 * ($_.FreeSpace / $_.Size)) -lt $LessSpace) {
            Write-Output $_
        }
    }
    END {}
}

Converting the above script into a function:

Function Get-DiskDetails {
    Param(
        [String]$ComputerName,
        [int]$driveType = 3
    )
    Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=$driveType" -ComputerName $ComputerName |
        Select-Object -Property DeviceID,
            @{Name='ComputerName';Expression={$_.PSComputerName}},
            Size, FreeSpace
}


How it works?

  • PowerShell sees the function declaration and loads the function into memory for later use. It creates an entry in the Function: drive that lets it remember the function's name, parameters, and contents (see the sketch after this list).
  • When you try to use the function, PowerShell will tab-complete its name and parameter names for you.
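You can verify this yourself by looking at the Function: drive; a small sketch, assuming the Get-DiskDetails function from the previous listing has been loaded:

#List the entry PowerShell created for the function
Get-ChildItem Function:\Get-DiskDetails

#Inspect the remembered parameters and body
(Get-Item Function:\Get-DiskDetails).Parameters.Keys
(Get-Item Function:\Get-DiskDetails).Definition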

Best practices to follow for function design (my way!)

In order to always support all common parameters, it is recommended to include a CmdletBinding attribute in your cmdlet, even if you do not want to use any parameters. 


This gives the executing user full control over error handling and output. 

Additionally, always use the proper verbs to express what your function does. Get-Verb shows plenty of approved verbs that will make your cmdlet easier to find:

#Bad - no approved verb, hard to discover
Function foo {
    "Try something"
}
#Good - approved verb-noun name
Function Get-Something
{
    "Try something"
}


Using the CmdletBinding attribute is very easy and provides many benefits.

function foo 
{
[CmdletBinding()]
param ( )

'Try something'
}


  • If your function uses parameters, always add the type to them
  • Add parameter comments close to the actual parameter
  • Consider adding help messages.

param
(
    # A comma-separated list of VM names to provision
    [Parameter(
        Mandatory,
        HelpMessage = 'Please enter a comma-separated list of VM names'
    )]
    # The parameter name below is illustrative
    [string[]]
    $VMName
)

Conclusion:

Scripts and functions are the basis for creating complex, repeatable automation in your environment. In this post, we've touched on the basics. We'll continue doing that over the next few posts so that you can build scripts and functions that are truly consistent with PowerShell's overall philosophy and operational techniques.
As a reference, we’ll repeat our Scripting Output Rules:
  • If you run a command and don't capture its output into a variable, then the output of that command goes into the pipeline and becomes the output of the script.
  •  Whatever you put into the pipeline by using Write-Output will also become the output of your script.
  •  Output one kind of object, and one kind only.

Remember these three rules when you’re creating your scripts and you’ll minimize problems with your output.


How to work with variables, arrays, hash tables, and script blocks in PowerShell


I have recently been working mostly on PowerShell and Azure, so I decided to cover lots of the easier topics from this space so that you can get familiar with them.
In the previous post, we looked into credentials and how to use them while automating things.
In this post we are covering:
  • Variables
  • Strict mode
  • Variable drives and cmdlets
  • Arrays
  • Hash tables
  • Script blocks



VARIABLES

Variables are a big part of any programming language or operating system shell, and PowerShell is no exception. In this post, we’ll explain what they are and how to use them, and we’ll cover some advanced variable-like data structures such as arrays, hash tables, and script blocks.


Variables are important for the creation of good scripts.

A variable is like a placeholder, and every kind of object and type can be stored in a variable.

  1. You can save data that is used often within a script, or perform calculations with it.
  2. You can pass variables containing data to functions, to modify them or produce output with them.

Variables are the heart of every good script, cmdlet, function, and module.


In PowerShell, a variable name generally contains a mixture of letters, numbers, and the underscore character. You typically see variable names preceded by a dollar sign:


$varName = "NintyZeros.com"



Variable Types

# variable stored as a string
$varWithString = "Test"
$varWithString = 'Test'

#variable stored as an int
$varWithInt = 5

#get type of a variable
$varWithString.GetType()
$varWithInt.GetType()

#working with strings
$varCombined = $varWithString + $varWithInt
$varCombined #Test5

#additions
$calculatedVar = $varWithInt + 5
$calculatedVar #10

Being strict with variables

PowerShell has another behavior that can make for difficult troubleshooting. For this example, you’re going to create a very small script and name it test.ps1. The following listing shows the script.




PS C:\Users\venka> $test = Read-Host "Enter a number"
Write-Host $tset
Enter a number: venkat


This kind of behavior, which doesn’t create an error when you try to access an uninitialized variable, can take hours to debug.




## Enabling the StrictMode

Set-StrictMode -Version 1
$test = Read-Host "Enter a number"
Write-Host $tset


####################
Enter a number: venkat
The variable '$tset' cannot be retrieved because it has not been set.
At line:3 char:12
+ Write-Host $tset
+ ~~~~~
+ CategoryInfo : InvalidOperation: (tset:String) [], RuntimeException
+ FullyQualifiedErrorId : VariableIsUndefined

ARRAYS

In many programming languages, there’s a definite difference between an array of values and a collection of objects.
In PowerShell, not so much. There’s technically a kind of difference, but PowerShell does a lot of voodoo that makes the differences hard to see.
Simply put, an array is a variable that contains more than one value. In PowerShell, all values—like integers or strings—are technically objects. So it’s more precise to say that an array can contain multiple objects. 
Arrays can be created from simple values by using the array operator (the @ symbol) and a comma-separated list:

The examples below will help you understand this much better.


#fetch all services (running or stopped) and store them in a variable
$services = Get-Service

#Filtering out only running services
$services | Where-Object Status -EQ "Running"

#Fetching based on Index 0
$services[0]


#Create an array
$var = @('Car','Bike','Cycle')
write-host $var
$var[2]

#creating an empty array and adding elements
$empvar = @()
$empvar+='car'
$empvar+='bike'
$empvar+='cycle'

write-host $empvar

HASH TABLES AND ORDERED HASH TABLES

Hash tables (which you'll also see called hashtables, associative arrays, or dictionaries) are a special kind of array.
These must be created using the @ operator, although they’re created within curly brackets rather than parentheses—and those brackets are also mandatory.
Within the brackets, you create one or more key-value pairs, separated by semicolons. The keys and values can be anything you like:


#Creating a simple hashtable
$hasexp = @{
Name="NintyZeros";
Author= "Venkat";
};

##What is the name of the website?
$hasexp.name

##What is the name of author?
$hasexp.Author

##adding an element to hashtable
$hasexp.Add("site","nintyzeros.com")

##Result
$hasexp.site

#Removing an element by key
$hasexp.Remove("site")

Ordered hash tables
One problem with hash tables is that the order of the elements isn't preserved.

With PowerShell v3 and later, you can create a hash table that preserves the order of its elements by using the [ordered] keyword:


$hash1 = [ordered]@{
first = 1;
second = 2;
third = 3
}
$hash1


Script blocks

We have squeezed this topic into this post 👏, but like variables, arrays, and hash tables, script blocks are a fundamental element in PowerShell scripting. They're key to several common commands, too, and you've been using them already.


A script block is essentially any PowerShell command, or series of commands, contained within curly brackets, or {}. Anything in curly brackets is usually a script block of some kind, with the sole exception of hash tables (which also use curly brackets in their structure).
A script block is a collection of statements and/or expressions.
In addition, it can be saved to variables, and even passed to functions or other language constructs:

{ 'This is a simple ScriptBlock'}
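A slightly fuller sketch of saving a script block to a variable and invoking it:

#Save a script block with a parameter to a variable
$greeter = { param([string]$Name) "Hello, $Name" }

#Invoke it in different ways
& $greeter -Name 'PowerShell'
$greeter.Invoke('PowerShell')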

 Conclusion:
Variables are one of the core elements of PowerShell that you’ll find yourself using all of the time. 
They’re easy to work with, although some of the specific details and behaviors that we covered in this post represent some of the biggest “gotchas” that newcomers stumble into when they first start using the shell.
Hopefully, by knowing a bit more about them, you’ll avoid those pitfalls and be able to make better use of variables.

How to work with the Credential parameter in PowerShell 6


In the previous post, we worked on getting started with Windows PowerShell. We looked at setting up the PowerShell environment and running a sample script.

In this post, we will learn about an important part of automating things: working with credentials. Many cmdlets have parameters that support credentials.

Most of those cmdlets, whether you work on PowerShell Core or Windows PowerShell, can be executed remotely and with different credentials.

In order to see which cmdlets support a Credential parameter, you can use the ParameterName parameter with Get-Command to discover them.




Examples:

Get-Command -ParameterName Credential


First of all, we need to see what a credential actually is by looking at the following code:


$userName = 'venkat'
$password = 'P@ssw0rd' | ConvertTo-SecureString -AsPlainText -Force
$newCredential = New-Object -TypeName pscredential $userName, $password
$newCredential.GetType().FullName
$newCredential | Get-Member

Looking at the code, you can see that the pscredential object type is inherently related to PowerShell, coming from the System.Management.Automation namespace. When viewing the members of that type with Get-Member, you can see that you are able to retrieve the password once you have entered it. However, the password is encrypted with the Data Protection API (DPAPI).


How is it useful?

You can now use your credentials for various purposes, for example, to create local users and groups, create services, authenticate with web services, and much more. We will revisit these examples later in this series when we look at REST APIs and external commands.
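For example, any cmdlet that exposes a -Credential parameter will accept the object we just built. A short sketch (the computer name is hypothetical):

#Run a command on a remote machine with explicit credentials
Invoke-Command -ComputerName 'SRV01' -Credential $newCredential -ScriptBlock {
    Get-Service | Where-Object Status -EQ 'Running'
}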


Using PowerShell credentials without being prompted for a password


$userName = 'test-domain\test-login'
$password = 'test-password'

$pwdSecureString = ConvertTo-SecureString -Force -AsPlainText $password
$credential = New-Object -TypeName System.Management.Automation.PSCredential `
-ArgumentList $userName, $pwdSecureString



Using the .NET method GetNetworkCredential gives quite a different result: the plaintext password is displayed right beside the encrypted password.
This is by no means a gaping security hole—with the DPAPI, the account on your system already has access to the password. With a few lines of .NET code, we can mimic the behavior of the GetNetworkCredential method:




$newCredential.Password

# Using GetNetworkCredential, it's plaintext again
$newCredential.GetNetworkCredential() | Get-Member
$newCredential.GetNetworkCredential().Password


To securely store credentials at rest, the built-in Protect-CmsMessage and Unprotect-CmsMessage cmdlets can be used with PowerShell 5 and later. Cryptographic Message Syntax (CMS) cmdlets leverage certificate-based encryption to store data securely.

To test this, we first create a self-signed certificate, which can be achieved using the commands below.



New-SelfSignedCertificate -Subject TestCert -KeyUsage KeyEncipherment -CertStoreLocation Cert:\CurrentUser\My -Type DocumentEncryptionCert
Protect-CmsMessage -To CN=TestCert -Content "Securable goes here" | Out-File .\EncryptedContent.txt
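To read the protected content back, the Unprotect-CmsMessage cmdlet can be used on the same machine, assuming the certificate with its private key is still in the store:

Unprotect-CmsMessage -Path .\EncryptedContent.txt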


Conclusion

In this post, we saw how to work with credentials, which will help when scheduling scripts, calling REST APIs, and so on. This is one of the core concepts of working with PowerShell. I hope this helps you understand credentials and their usage.

PowerShell 6 - Getting Started with Automating System Administration


Windows PowerShell is a task-based command-line shell and scripting language designed specifically for system administration.

Built on the .NET Framework, Windows PowerShell helps IT professionals and power users control and automate the administration of the Windows operating system and applications that run on Windows.

This post is to help you get up and running with PowerShell, taking you from the basics of installation to writing scripts and web server automation.

This will act as an introduction to the central topics of PowerShell, from finding and understanding PowerShell commands and packaging code for reusability right through to a practical example of automating IIS.

You will explore the PowerShell environment and discover how to use cmdlets, functions, and scripts to automate Windows systems.






Accessing the PowerShell Console and PowerShell ISE

We will see how to access the PowerShell console and the PowerShell ISE, and how to work with PowerShell 6 modules.

We will cover:

  1. The PowerShell console (including the x86 version)
  2. The elevated console
  3. The PowerShell ISE
  4. How to get help with the Get-Help cmdlet
  5. Writing a small script in PowerShell





The standard PowerShell console lets you install modules and run PowerShell cmdlets.

While we can use the standard console to perform most tasks, we also have the ISE environment to write and test scripts and modules.
While we can use the standard console to perform most of the task but we have an ISE environment to write scripts, modules and test those.


The figure below shows the standard console.





The figure also shows how to execute the Get-Date cmdlet, which displays the current system date.


Working with an elevated console (run as administrator) is needed to perform certain tasks that require admin rights.


Consider the below example, where we are trying to install the msolonline module. This module needs elevated access to install. To perform this task, we need to run the standard PowerShell console as administrator.


PS C:\Users\venka> install-module msolonline
install-module : Administrator rights are required to install modules in 'C:\Program Files\WindowsPowerShell\Modules'.
Log on to the computer with an account that has Administrator rights, and then try again, or install
'C:\Users\venka\OneDrive\Documents\WindowsPowerShell\Modules' by adding "-Scope CurrentUser" to your command.
You can also try running the Windows PowerShell session with elevated rights (Run as Administrator).
At line:1 char:1
+ install-module msolonline
+ ~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Install-Module], ArgumentException
+ FullyQualifiedErrorId : InstallModuleNeedsCurrentUserScopeParameterForNonAdminUser,Install-Module




Accessing the help system and updating it

Let's get started and get our hands dirty performing some basic commands that will help us learn PowerShell.


The first thing to learn is how to get help whenever you need it for any cmdlet.

First, we need to update the Help documentation.


PS C:\Users\venka> update-help


To get help with any command, first type the command below:


PS C:\Users\venka> get-help


Running cmdlets and finding them

The built-in commands in PowerShell are called cmdlets, and they help you perform various tasks.

The key features of cmdlets are:

PowerShell names follow a verb-noun pattern. For example, Get-Date is of the form verb (Get) and noun (Date).

You can complete any cmdlet name by pressing the Tab key.

If you need to know more about any cmdlet, you can use the Get-Help cmdlet (tip!).


If you want a list of the commands that are already present or installed, use the Get-Command cmdlet:


PS C:\Users\venka> get-Command get-date

If you want a GUI version of Get-Command, I would suggest the Show-Command cmdlet:


PS C:\Users\venka> show-Command get-date

This works and shows a pop-up for that command. If you are a beginner with cmdlets, I would suggest you explore these wonderful features.

Measuring the performance of cmdlets


PS C:\Users\venka> Measure-Command {get-date}

Introducing Scripts in PowerShell

Scripts in PowerShell are basically just text files with a special filename extension, .ps1.

To create a script, you would enter a bunch of PowerShell commands in a sequence in a new Notepad file (or you could use any text editor you like), and then save that file as NAME.ps1, where NAME is a friendly description of your script—with no spaces, of course.


To run a PowerShell script you already have, you enter one of the following at a PowerShell prompt:
  • the full path (folder and filename) of the script, like c:\powershell\myscripthere.ps1
  • or, if your script is in the current directory the console is looking at, a period and a backslash, like .\myscripthere.ps1

Other than that, there is nothing special to create a script in PowerShell. You simply add the commands you like.


Example of a sample PowerShell script.

#Build a timestamped log file name and create the empty log file
$date = (Get-Date -Format "yyyyMMddHHmmss")
$compname = $env:COMPUTERNAME
$logname = $compname + "_" + $date + "_ServerScanScript.log"
$scanlog = "c:\temp\logs\" + $logname
New-Item -Path $scanlog -ItemType File -Force

The Last Word
As I mentioned, my goal for this post was to show you how to get started using PowerShell and put together some simple scripts. We've covered a lot in this post: setting up the PowerShell environment, how to make scripts, and how to add some logic to your scripting. We also looked at a very real issue: how to get help from the documentation whenever you are trying out new cmdlets. Congratulations! Onward!


In the next part of this series, we will cover the above example in more depth and explain the steps to write your own scripts.