Scripting – Robin CM’s IT Blog

Make mailto links open Yahoo mail on Windows


This isn’t perfect, but it’s better than nothing. I’m using Windows 7, but this should work (with a few tweaks) on other versions too. You do NOT need to install any additional software: neither Yahoo Toolbar nor Yahoo Messenger is required to make this work. It will work with any web browser – Internet Explorer is fine, as is anything else – because whatever gets launched is your default browser.

In HTML there is a URL type that causes a mail client to be opened when a link is clicked. These URLs start with mailto: followed by an email address (similar to how http: causes a web browser to be opened).

If you have email software installed on your PC then this works fine, but if you use webmail – in my case Yahoo Mail – you get a message telling you that you have no email software installed (or maybe something that’s installed but that you don’t use gets launched, which is annoying).

With this solution, a new browser window opens and gives you a compose window, with the email addressed to where the mailto link was pointing. This is accomplished via a small VBScript.

There are two parts to this process, a registry edit to alter the mailto URL handler for the current user, and a small bit of code to process the data and launch the default web browser. The script is made somewhat larger by the “installation” routine(s) that allow you to “install” and “uninstall” the URL handler by double-clicking the VBScript file.

Copy the following and paste it into a file called YahooMailto.vbs

Option Explicit
'RCMTech Mail Handler for Yahoo Mail

Const cReg = "HKCU\Software\Classes\mailto\shell\open\command\"
Dim oArgs, oShell
Dim sText, iRet, bDoBackup

Set oArgs = WScript.Arguments
Set oShell = CreateObject("WScript.Shell")

If oArgs.Count>0 Then
      sText = oArgs(0)
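      'Convert the mailto: URL into a Yahoo Mail compose URL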
      sText = replace(sText,"mailto:","http://compose.mail.yahoo.com?To=")
      sText = replace(sText,"?subject=","&subject=")
      sText = replace(sText,"?body=","&body=")
      sText = replace(sText," ","%20")
      oShell.Run sText,0,False
Else
      sText = "Use RCMTech Yahoo Mail mailto handler?" & vbCrLf & "Click Yes to install, No to remove." 
      iRet = MsgBox(sText, vbYesNoCancel, "RCMTech Yahoo Mail Handler")
      Select Case iRet
             Case 2
                   'Cancel
                   WScript.Quit
             Case 6
                   'Yes
                   On Error Resume Next
                   sText = oShell.RegRead(cReg)
                   If Err.Number = 0 Then
                         bDoBackup = True
                   End If
                   On Error GoTo 0
                   If InStr(sText, WScript.ScriptFullName) Then
                         MsgBox "Mail handler seems to be installed for this user already", vbOKOnly, "RCMTech Yahoo Mail Handler"
                         WScript.Quit
                   Else
                         If bDoBackup Then
                                oShell.RegWrite cReg & "orig", sText, "REG_SZ"
                         End If
                         oShell.RegWrite cReg, "wscript.exe " & WScript.ScriptFullName & " ""%1"""
                         MsgBox "Added mail handler value to registry", vbOKOnly, "RCMTech Yahoo Mail Handler"
                   End If
             Case 7
             'No
                   On Error Resume Next
                   sText = oShell.RegRead(cReg & "orig")
                   If Err.Number = 0 Then
                         On Error GoTo 0
                         oShell.RegWrite cReg, sText, "REG_SZ"
                         oShell.RegDelete cReg & "orig"
                         MsgBox "Reverted to previous mail handler value", vbOKOnly, "RCMTech Yahoo Mail Handler"
                   Else
                         On Error GoTo 0
                         oShell.RegDelete cReg
                         MsgBox "Removed mail handler value", vbOKOnly, "RCMTech Yahoo Mail Handler"
                   End If
      End Select
End If

Put the .vbs file somewhere on your PC (I suggest somewhere like C:\Users\Public), then double-click it and follow the instructions to change your URL handler for mailto. Double-click it again to remove it and revert to your previous mailto handler.

You need to be signed in to Yahoo for this to work, or to have ticked the “Keep me signed in” box when you last signed in.

Hope you find this useful.



Find print servers in Active Directory from the command line


Run the following on a machine that has the AD DS Snap-ins and Command-Line Tools installed (or the equivalent on an older OS, e.g. the AdminPak on Windows XP):

for /f "delims=, tokens=2" %i in ('dsquery * dc^=your^,dc^=active^,dc^=directory -filter "(objectCategory=printQueue)" -limit 0') do @echo %i

Replace dc^=your^,dc^=active^,dc^=directory with your AD info, remembering to escape (prefix with a ^) all the = and , symbols.

This searches Active Directory for printQueue objects, then gives you the CN of the machine that the printer is on.

There will be duplicates, so if you have a print server with 100 printers on it you’ll see it listed 100 times. Dump the data into Excel and do a “Remove Duplicates” (on the Data ribbon).
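
If you’d rather stay in PowerShell, something like this should do the same job (a rough equivalent, assuming the ActiveDirectory module is available; the Sort-Object -Unique also takes care of the duplicates, so no Excel required):

Import-Module ActiveDirectory
# Same printQueue search; take the second DN component (the CN of the host machine)
Get-ADObject -LDAPFilter "(objectCategory=printQueue)" -SearchBase "DC=your,DC=active,DC=directory" |
    ForEach-Object { ($_.DistinguishedName -split ",")[1] } |
    Sort-Object -Unique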


Pharos Uniprint 8.2 ADLDAP authentication failure


On a fairly recently installed test system of Pharos Uniprint 8.2 running on Windows Server 2008 R2 (i.e. a 64-bit OS), I configured the Pharos Remote bank to use the ADLDAPLogon.exe plugin. This allows you to authenticate users against your Active Directory rather than using the built-in Pharos authentication.

On my existing v8.0 Uniprint system this works fine and has been for about three years or so.

On the new (test) 8.2 system though I was getting the following error when trying to log on to Pharos Remote:

Your logon attempt was not successful. Either the details you supplied are incorrect, you do not have permission to access Pharos Remote, or there are no Print Servers available.

None of which were the case.

Looking in Pharos Administrator under System – Alerts I could see the following Warning entry:

User: my-pharos-remote-user
Severity: Warning
Error Code: 29500
Message: Logon failed for user my-pharos-remote-user

Pharos Remote uses the Pharos Remote bank, which was configured with the Logon plugin of:

C:\Program Files (x86)\Pharos\Bin\ADLDAPLogon.exe

This needs some registry values creating in the following key:

HKEY_LOCAL_MACHINE\Software\Wow6432Node\Pharos\Plugins\AdLdapLogon\servers

The values are REG_SZ and are named as numbers, starting at 1, with the data containing a text string of the following form:

<Active Directory Server Name>:<AD bind account>:<password for the bind account>:<LDAP port>:0

So for example:

rcmdc1:pharos-ldap:S0mePa55word:389:0

You can create multiple values, increasing by 1, if you have multiple domain controllers. I have 1,2,3,4.
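
For reference, here’s a quick PowerShell sketch that creates those values (the server strings are examples – substitute your own domain controllers and bind account):

$Key = "HKLM:\SOFTWARE\Wow6432Node\Pharos\Plugins\AdLdapLogon\servers"
$Servers = "rcmdc1:pharos-ldap:S0mePa55word:389:0",
           "rcmdc2:pharos-ldap:S0mePa55word:389:0"
$i = 1
foreach($Server in $Servers){
    # Values are REG_SZ, named 1, 2, 3...
    New-ItemProperty -Path $Key -Name $i -Value $Server -PropertyType String -Force | Out-Null
    $i++
}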

The ADLDAPLogon.exe can be tested on the command line in “trace” mode to help verify its functionality and assist with authentication failures. The syntax is:

adldaplogon.exe <output file> trace <username> <password>

where <username> and <password> are those of the account you want to try and authenticate (or not), and <output file> is the file that adldaplogon.exe writes for the rest of Uniprint to interrogate for an “OK” or a “FAIL”.
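
For example, to test the account that appears later in this post (the output file path here is made up):

adldaplogon.exe C:\Temp\adldap.txt trace my-pharos-remote-user premotepass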

Testing on the command line showed that everything was working fine. It just wasn’t working from Pharos Remote itself.

Time to get out the tools. I used Wireshark and Process Monitor.

Using Process Monitor I set a filter:

Column: Operation
Relation: is
Value: Process Create
Action: Include

This allowed me to see ADLDAPLogon.exe being called by PServer.exe and, more importantly, gave me the command line that PServer was calling it with:

"C:\Program Files (x86)\Pharos\Bin\adldaplogon.exe" C:\Windows\Temp\Un8d91.tmp "my-pharos-remote-user" "premotepass" "Unknown"

So the parameters being passed by PServer.exe are as follows:

  1. Output file
  2. Username
  3. Password
  4. Unknown

Next I used Wireshark to look at the data being sent to the first domain controller in the list (value 1 from the AdLdapLogon\servers registry key). I set a capture filter (via Capture – Options, in the Capture Filter box) of:

host 192.168.10.10

where that is the IP address of the domain controller rcmdc1. If the filter is valid, the text box background goes green. Click Start and then try to log on via Pharos Remote.

Examining the captured data I could see some of the lines had LDAP in the Protocol column. One of these had the following in the Info column:

bindRequest(5) "<ROOT>" , NTLMSSP_AUTH, User: \pharos-ldapsasl

So that is where ADLDAPLogon is binding to AD to get more detail about the user to be authenticated. You can see that it’s binding as the account pharos-ldap. There were then various lines, and then one similar line to the above:

bindRequest(11) "<ROOT>" , NTLMSSP_Auth, User: \premotepasssasl

Now that’s interesting as the place where I’d expect to see the username (in this case my-pharos-remote-user) is actually showing the password to be tested for that account…!

So it seems as though either PServer.exe is passing the parameters to ADLDAPLogon.exe in the wrong order (or with something missing) or ADLDAPLogon.exe is not correctly handling the parameters that it’s being passed.

So from earlier testing on the command line in trace mode, the parameter order that worked was:

  1. Output file
  2. trace
  3. Username
  4. Password

But from PServer (not working) we’re actually getting:

  1. Output file
  2. Username
  3. Password
  4. Unknown

The parameters expected by ADLDAPLogon.exe are apparently:

  1. Filename
  2. Level
  3. Username
  4. Password

Note how when we get a success in trace mode on the command line the username is parameter number 3, but when it fails from PServer the password is parameter number 3. So to fix this we somehow need to pad out the parameters to shift the username to parameter 3 (and the password to parameter 4). The easiest & fastest way I could think of to do this was by writing a Command file, paramfix.cmd, containing the following:

@echo off
"C:\Program Files (x86)\Pharos\Bin\ADLDAPLogon.exe" %1 duff %2 %3 %4

Clearly I didn’t know how ADLDAPLogon was going to like being passed an invalid parameter, but I tried it and it worked. I saved this .cmd file into C:\Program Files (x86)\Pharos\Bin and changed the Pharos Remote Bank Logon plugin to be paramfix.cmd instead of ADLDAPLogon.exe.

Et voila, I can now authenticate against Active Directory when logging on to Pharos Remote. As Mr. Russinovich would say, case closed. (though actually I’ve passed this back to Pharos to get them to sort out why it’s not working in the first place, but at least I have a workaround in the meantime)

More info:

  • PServer.exe version is 8.2.6005.221
  • ADLDAPLogon.exe version is 8.2.6005.126

VBScript: Delete registry values that contain a backslash in the name


VBScript is great for most stuff, but occasionally you try to do something and it can’t handle it. I came across another one of these yesterday while trying to remove some old network printer connections. I ended up having to delete some registry values, one of which was located in the following key:

HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Devices

The value name was of the form of the UNC path to the shared printer:

\\printserver\printer

Now, using oShell.RegDelete you don’t specify the value as a separate parameter to the key, you just concatenate the two together, which means I ended up doing:

oShell.RegDelete "HKCU\Software\Microsoft\Windows NT\CurrentVersion\Devices\\\printserver\printer"

Which a) looks wrong and b) doesn’t work. You get an error:

-2147024894 Invalid root in registry key "HKCU\Software\Microsoft\Windows NT\CurrentVersion\Devices\\\printserver\printer"

I actually had the reg key and value name in variables, but the end result was the above. I tried delimiting the backslashes with more backslashes, a bit like in .reg files, but no luck. I tried putting quotes around the value name, but that still looks wrong and still didn’t work.

So in the end I wrote a function to do it using WMI, which I tend to avoid if possible, but in this case it didn’t appear to be possible.

' These constants are required by WMI to specify the registry root
Const HKEY_CLASSES_ROOT  = &H80000000
Const HKEY_CURRENT_USER  = &H80000001
Const HKEY_LOCAL_MACHINE = &H80000002
Const HKEY_USERS         = &H80000003

Function regDeleteWMI(iRegRoot, sRegPath, sRegValue)
  Dim oReg
  regDeleteWMI = False
  On Error Resume Next
  Set oReg = GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\default:StdRegProv")
  oReg.DeleteValue iRegRoot, sRegPath, sRegValue
  If Err.Number <> 0 Then
    WScript.Echo "Error: regDeleteWMI: ", iRegRoot, sRegPath, sRegValue, Err.Number, Err.Description
  Else
    regDeleteWMI = True
  End If
End Function

Usage is as follows:

If regDeleteWMI(HKEY_CURRENT_USER,"Software\Microsoft\Windows NT\CurrentVersion\Devices","\\printserver\printer") Then
  'deleted ok
End If

VBScript error -2147023080


I had this when trying to copy a file using a Scripting.FileSystemObject’s .CopyFile method.

I could find no information on error -2147023080, but trying the same copy using the command prompt’s copy command yielded the following:

Not enough quota is available to process this command

So there you go. (In case you’re wondering, I gave myself some more quota…!)


Upgrade PST files to Unicode and run ScanPST from the command line


I identified an issue with old PST files that started to misbehave when Microsoft Office was upgraded to 2010. PST files created by Outlook 97, 2000 or XP are stored using ANSI, whereas the ones created by Outlook 2003, 2007 and 2010 use Unicode. Microsoft recommend that you don’t use ANSI PST files anymore.

You can determine what format your PST file is by right-clicking it in Outlook (2003 and higher) and selecting Data File Properties… then clicking the Advanced… button and looking in the Format: box. If it says Outlook Data File then you’re already using a Unicode file, if it says Outlook Data File (97-2002) then it’s ANSI. You can also do it programmatically by reading in the 11th byte of the PST file, if (when converted to an integer) it’s 14 or 15 then it’s an ANSI PST file, if it’s 23 then it’s Unicode – see wVer in the PST file format header specification.
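
Here’s a minimal PowerShell sketch of that programmatic check (the PST path is a made-up example):

$PstPath = "C:\Temp\archive.pst"
$Stream = [System.IO.File]::OpenRead($PstPath)
$Header = New-Object byte[] 11
$Stream.Read($Header, 0, 11) | Out-Null
$Stream.Close()
# The 11th byte (index 10) holds wVer: 14 or 15 = ANSI, 23 = Unicode
switch($Header[10]){
    {$_ -eq 14 -or $_ -eq 15} {"ANSI"}
    23 {"Unicode"}
    default {"Unknown wVer: $_"}
}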

There are manual methods to solve the problem – you basically create a new PST file, open it in Outlook, and move everything from your old format PST file to the new one. Not something that your busy users (or less technically capable ones) will appreciate.

Luckily there’s a great utility called Upstart to the rescue. It gives your users a nice easy way to upgrade their PST files.

Plus, it has a command line version – so useful to system admins in larger organisations. I identified 1200 ANSI format PST files on user personal drives that needed converting. Using the Upstart command line utility I’ve been able to seamlessly upgrade these overnight prior to upgrading my PCs and XenApp servers with Office 2010.

Also, it’s very sensibly (reasonably) priced.

Even better, cupstart.exe (the command line PST upgrade utility) lets you run the Microsoft ScanPST utility, so you can ensure the PST files have no issues before you attempt to upgrade them. As ScanPST has no command line functionality, cupstart is a great way to get this.

Best yet, the guy who wrote it, Pete, is really helpful in the event that you encounter any issues or don’t read the manual properly (ehem…).

So, some hints and tips based on my experiences with the Command Line version, cupstart.exe:

  • You need to run cupstart on a machine that has Outlook 2003 or higher installed as it uses Outlook’s resources.
  • cupstart by default will run ScanPST to fix problems within the PST file before it converts it. If this isn’t run then any major problems in the file could cause the conversion process to fail. However, ScanPST doesn’t always exit nicely and can crash cupstart. The solution to this is to run cupstart with the scan option first, then repair the file using the repair option if necessary, before finally converting the file to Unicode using the cu option.
  • The Outlook profile keeps a record of what PST files a user has open in Outlook, and what type they are. If you convert an ANSI PST file to Unicode then you can sometimes get issues if the Outlook profile still thinks the PST file is ANSI. Luckily, cupstart provides an option, cp,  to process the PST entries in the Outlook profile to correct this. I found that it was a good idea to also use the /C (continue on errors) switch as if a user has a PST file specified in their Outlook profile that no longer exists cupstart will stop and not process any further PST entries.
  • cupstart does not seem to multithread; I’m running it on a PC with a fast quad core processor but it only uses 25% CPU maximum.
  • To eliminate disk-based or network latency bottlenecks I’ve written a VBScript that copies the user’s PST files to a fast SSD on my processing PC. I then process the file and copy the converted files back to the user’s network storage, also dropping a marker that my logon script picks up to tell it to run cupstart cp /c for the user next time they log on (see the sketch after this list).
  • PST files on the whole get smaller when converted to Unicode, but a few get bigger. I’ve yet to work out why – and probably won’t be able to, as trawling through people’s email is both time consuming and probably against some kind of data protection agreement. Here’s a graph of before and after sizes for some of the larger PST files I’ve processed (Y axis is size in MB):
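
To give an idea of how the copy–convert–copy back process mentioned in the list above hangs together, here’s a rough PowerShell sketch. The paths and user list are placeholders, and the cupstart invocation in particular is not verified syntax – check the Upstart manual for the real argument order:

$Users = Get-Content "C:\Batch\UserList.txt" # hypothetical list of usernames
foreach($User in $Users){
    $HomeDrive = "\\fileserver\users\$User"  # hypothetical personal drive
    foreach($Pst in (Get-ChildItem -Path $HomeDrive -Filter *.pst -Recurse)){
        $Local = "D:\SSDWork\" + $Pst.Name   # fast local SSD working folder
        Copy-Item -Path $Pst.FullName -Destination $Local
        & "C:\Tools\cupstart.exe" $Local     # placeholder - not real cupstart syntax
        Copy-Item -Path $Local -Destination $Pst.FullName -Force
        Remove-Item -Path $Local
        # Marker file tells my logon script to run "cupstart cp /c" for this user
        New-Item -Path "$HomeDrive\ConvertedPST.marker" -ItemType File -Force | Out-Null
    }
}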

PowerShell: Remove computer from Active Directory


This is, in theory, easy: just import the ActiveDirectory module:

Import-Module ActiveDirectory

and then remove the computer:

Remove-ADComputer -Identity "computername"

Except that sometimes it fails with:

Remove-ADComputer : The directory service can perform the requested operation only on a leaf object
At line:1 char:1
+ Remove-ADComputer -Identity computername
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (CN=computername,O...rcmtech,DC=co,DC=uk:ADComputer) [Remove-ADComputer], ADException
    + FullyQualifiedErrorId : The directory service can perform the requested operation only on a leaf object,Microsoft.ActiveDirectory.Management.Commands.RemoveADComputer

Yet other times it works fine. It turns out that a computer account is not always a “leaf” in Active Directory – it can be a container and have child objects within it. I’d been adding Windows Server 2012 virtual machines to my Active Directory; they were Hyper-V VMs created using System Center Virtual Machine Manager 2012 SP1 beta. The Hyper-V hosts I am using are also Server 2012.

Looking at the VMs in ADSI Edit showed that they had a “Windows Virtual Machine” object within them:

ADSI Edit
 Default naming context [dc01.rcmtech.co.uk]
  DC=internal,DC=rcmtech,DC=co,DC=uk
   OU=Servers
    CN=computername
     CN=Windows Virtual Machine

Thus the only way to delete these is to use Remove-ADObject with the -Recursive option, which deletes the object plus any child objects. Unlike Remove-ADComputer which accepts a computer name as the -Identity option, you have to pass in a computer object to Remove-ADObject:

$ComputerToDelete = Get-ADComputer -Identity "computername"
Remove-ADObject -Identity $ComputerToDelete -Recursive

Whilst we’re on the subject, I also found that Get-ADComputer seems to ignore the -ErrorAction SilentlyContinue option (which stops it putting a red error on screen during ps1 script execution). It also doesn’t behave the same way as some other cmdlets I was using when it doesn’t find a computer. Thus, the code block I’m using to look for a computer in AD and delete it if it exists, with no errors on screen if it does not exist, is as follows:

$advm = ""
try{
    $advm = Get-ADComputer -Identity $NewVMName
} catch {
    Write-Host "No existing AD account found"
}
if ($advm -ne ""){
    Write-Host "Remove existing AD account..."
    $x = Remove-ADObject -Identity $advm -Recursive
}

Not perfect, but does the job.


Autologon after creating virtual machine from template with VMM 2012 SP1 beta


I’ve been having a fun time trying to get System Center Virtual Machine Manager 2012 SP1 beta to do things. It’s been even more fun when I’ve been trying via the delightful PowerShell – I want to get to a point where I can build/re-build servers quickly and easily.

My most recent conquest has been to get it to make a newly created virtual machine log on at the console at the end of the creation process.

I’m deploying Windows Server 2012 VMs, and have been able to get them to join my Active Directory – that was relatively easy. Getting them to log on required some extra fiddling that I wasn’t expecting.

According to the documentation, which is sparse at best (even for the older, non-beta, non-SP1 VMM 2012), you just use -AutoLogonCredential and specify an SCVMM RunAs account. What you actually have to specify is an object that contains a RunAs account. If you’ve been getting the VM to join an Active Directory then this is fairly easy to work out, as it’s the same mechanism used to specify the -DomainJoinCredential. You set up a RunAs account via SCVMM which contains the username, password and domain, then just specify the name of this account (which is not necessarily the Active Directory SamAccountName – it depends what name you gave the RunAs account when setting it up). This is quite nice as it means you don’t need to specify passwords in the script. E.g. to set up the credential object you use:

$DomainJoinCredential = Get-SCRunAsAccount -Name <your runas account>

In my case I wanted the server to log on as the same user I was using to join the server to AD, so it should have been simply a case of adding:

-AutoLogonCredential $DomainJoinCredential -AutoLogonCount 100

to the end of my New-SCVMTemplate command line.

Fail. The VM tries to log on but gives the following on screen:

Other user
The user name or password is incorrect. Try again.
[OK]

The VM is still joined to AD though, using the same credential object, so I know that the username and password are definitely correct. It’s clearly not being passed to the VM’s OS properly. To fix this I changed my script a little and made a new code block to add all the autologon stuff in one place. It seems as though the critical bit that’s missing is the domain name. I now have the following code to modify the template I’ll use to create the virtual machine from:

Write-Host "Add autologon credential..."
Set-SCVMTemplate -VMTemplate $Template -AutoLogonCredential $DomainJoinCredential -AutoLogonCount 100 | Out-Null
$Unattend = $Template.UnattendSettings
$Unattend.Add("6/Microsoft-Windows-Shell-Setup/Autologon/Domain","rcmtech")
Set-SCVMTemplate -VMTemplate $Template -UnattendSettings $Unattend | Out-Null

The 6 on the $Unattend.Add line places the command into the oobesystem section of the unattend script.

I’ve previously created the temporary template using New-SCVMTemplate, then retrieved it into $Template by using:

$Template = Get-SCVMTemplate -Name <temporary template name>


Getting PowerShell scripts to run with no prompts


I am currently experimenting with Windows Server 2012, System Center Virtual Machine Manager 2012 SP1, and automation of various processes related to creating/building Server 2012 virtual machines.

I have a script that will create a new Server 2012 VM, customise it, join it to my Active Directory in a “Build” OU, log it on and then try and run a PowerShell script to further configure the server.

Except that the script fails to run for a variety of reasons. Which has been most annoying, and has taken the best part of a day to resolve.

To save you the hassle:

  1. Create a GPO (Group Policy Object) with the following settings, link it to the OU where your server will appear:
    • Computer Configuration – Policies – Administrative Templates – Windows Components – Internet Explorer – Internet Control Panel – Security Page – Site to Zone Assignment List: Enabled: Value name: *.rcmtech.co.uk Value: 2
    • Computer Configuration – Policies – Administrative Templates – Windows Components – Windows PowerShell – Turn on Script Execution: Enabled: Allow all scripts
    • User Configuration – Preferences – Control Panel Settings – Scheduled Tasks – Scheduled Task (Windows Vista and later):
      • General:
        • Action: Replace
        • Name: Server Build Launcher
        • When running the task, use the following user account: %LogonDomain%\%LogonUser%
        • Run only when the user is logged on
        • Run with highest privileges
        • Configure for: Windows 7
      • Trigger:
        • Begin the task: At log on
        • Any user
        • Enabled
      • Action:
        • Action: Start a program
        • Program/script: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
        • Add arguments: -NoExit -File \\ad.rcmtech.co.uk\netlogon\config\serverbuild\serverbuildlauncher.ps1
      • Settings:
        • Allow task to be run on demand
      • Common:
        • Remove this item when it is no longer applied.
  2. I disabled Internet Explorer Enhanced Security Configuration, by using Microsoft-Windows-IE-ESC in the specialize section and setting IEHardenAdmin to false. This was done with the following bit of PowerShell talking to SCVMM to modify the temporary template that the new VM is built from:
$Template = Get-SCVMTemplate -Name $TempTemplateName
$Unattend = $Template.UnattendSettings
$Unattend.Add("3/Microsoft-Windows-IE-ESC/IEHardenAdmin","false")
Set-SCVMTemplate -VMTemplate $Template -UnattendSettings $Unattend | Out-Null

The important bit is: all of it, really. The IE ESC and the Trusted Sites bits need doing or you’ll get:

Security warning
Run only scripts that you trust. While scripts from the internet can be useful,
this script can potentially harm your computer. Do you want to run
\\ad.rcmtech.co.uk\netlogon\Config\ServerBuild\ServerBuildLauncher.ps1?
[D] Do not run  [R] Run Once  [S] Suspend  [?] Help (default is "D"):

If you don’t set the PowerShell Script Execution policy to Allow all you’ll get an error:

File
\\ad.rcmtech.co.uk\netlogon\Config\ServerBuild\ServerBuildLauncher.ps1 cannot be loaded. The file
\\ad.rcmtech.co.uk\netlogon\Config\ServerBuild\ServerBuildLauncher.ps1 is not digitally signed. The script will
not execute on the system. For more information, see about_Execution_Policies at

http://go.microsoft.com/fwlink/?LinkID=135170.

At line:1 char:1
+ \\ad.rcmtech.co.uk\netlogon\Config\ServerBuild\ServerBuildLauncher.ps1
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : SecurityError: (:) [], PSSecurityException
    + FullyQualifiedErrorId : UnauthorizedAccess

…at least if your PowerShell execution policy is set to RemoteSigned. Check what it is with:

Get-ExecutionPolicy -List
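
Typical output looks something like this (your values will differ):

        Scope ExecutionPolicy
        ----- ---------------
MachinePolicy       Undefined
   UserPolicy       Undefined
      Process       Undefined
  CurrentUser       Undefined
 LocalMachine    RemoteSigned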

Enjoy.

 


PowerShell: Mounting and dismounting ISO images on Windows Server 2012 and Windows 8


I had need to do this recently, and as per usual with PowerShell it was kind of easy, but there were a few fiddly bits that don’t appear to have been thought through by the designers.

Note that, as the title of this post states, this is Server 2012/Windows 8 and upwards. It won’t work on Windows 7/Server 2008 R2 even with the latest PowerShell v3.0 (unless they retro-fit it via a hotfix/service pack in the future). It’s an OS feature that can be accessed via PowerShell, not a PowerShell feature.

So, mounting a .ISO image is easy:

$ImagePath = "\\server\share\folder\mydvdimage.iso"
Mount-DiskImage -ImagePath $ImagePath -StorageType ISO

The -StorageType ISO is optional by the way, but I found it, so I’m using it.

Look in Windows Explorer, et voila, your ISO is now mounted and has a drive letter – access it as if it were a real CD/DVD in a physical drive, but probably with better access times.

But you can do that via the GUI by right-clicking on the ISO and choosing “Mount”. We’re doing this in a PowerShell script, and will no doubt be wanting to access that mounted ISO in the same script. Ho hum, what’s the drive letter that it’s been mounted as then? Maybe we can pick the drive letter to mount it as via the Mount-DiskImage command? The answers are “no idea” and “no“. Not exactly helpful is it?

My method for getting the drive letter is as follows:

$ISODrive = (Get-DiskImage -ImagePath $ImagePath | Get-Volume).DriveLetter
Write-Host ("ISO Drive is " + $ISODrive)

In case you’ve not run it yet: this gives you just the letter – no colon or backslash – so if you want to use the drive letter you’ll need to add those as you build up the path to whatever you’re accessing on the mounted ISO drive.
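
For example, to run something from the mounted image (setup.exe here is just a hypothetical file):

$SetupPath = $ISODrive + ":\setup.exe"
Start-Process -FilePath $SetupPath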

Once you’ve finished accessing it, you’re probably going to want to dismount it again. Easy?

Dismount-DiskImage -DevicePath E:

Oops. Not so easy:

Dismount-DiskImage : Access is denied.
At line:1 char:1
+ Dismount-DiskImage -DevicePath E:
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : PermissionDenied: (MSFT_DiskImage:ROOT/Microsoft/.../MSFT_DiskImage) [Dismount-DiskImage
   ], CimException
    + FullyQualifiedErrorId : HRESULT 0x80070005,Dismount-DiskImage

So I RTFM. Dismount-DiskImage won’t let you dismount it by specifying the drive letter. In a way that makes sense, Mount-DiskImage seems to dislike drive letters, so why should Dismount-DiskImage be different? You have to dismount the drive by specifying the .ISO file path or by using the device path in the form \\.\CDROM (the latter of which we of course use all the time in Windows…not):

Dismount-DiskImage -ImagePath $ImagePath

The other way to dismount it, which does let you use the drive letter (albeit with a colon suffix) is to eject it like a physical CD/DVD drive:

$ShellApplication = New-Object -ComObject Shell.Application
$ShellApplication.Namespace(17).ParseName($ISODrive+":").InvokeVerb("Eject")

So there you go. Easy when you know how.


PowerShell: Parallel ping test with CSV result files


I’ve been experimenting with parallel workflows in PowerShell and this seemed like a good thing to try and do with it. No guarantees that this script is the best, but it does work.

Give it a list of computers to test and it’ll ping them all at the same time and write out to a CSV file with the response time in milliseconds, one file per computer tested. If a ping is dropped it’ll give the response time as -1.

workflow PingTest{
    $Computers = "127.0.0.1","rcmtech.co.uk","bbc.co.uk","virginmedia.com"
    foreach -parallel ($Computer in $Computers){
        $Time = Get-Date
        $TestResult = Test-Connection -ComputerName $Computer -Count 1 -ErrorAction SilentlyContinue
        inlinescript{
            if ($using:TestResult.ResponseTime -eq $null){
                $ResponseTime = -1
            } else {
                $ResponseTime = $using:TestResult.ResponseTime
            }
            $ResultObject = New-Object PSObject -Property @{Time = $using:Time; Computer = $using:Computer; ResponseTime = $ResponseTime}
            Export-Csv -InputObject $ResultObject "C:\Users\Me\Desktop\ping$using:Computer.csv" -Append
        }
    }
}
Clear-Host
while($true){
    $Now = Get-Date
    Write-Host $Now "Testing..." -NoNewline
    PingTest
    Write-Host "Sleeping..."
    Start-Sleep 5
}

Description/explanation:

I wanted to try and use the parallel job processing features of PowerShell. Clearly for pinging only a few machines this is overkill, but it can make a significant difference if there were many more machines, or if the task was something slightly more involved than a simple ping. In order to process in parallel you have to use a workflow, which means that strange things happen to the processing of your code – you’re transposed into Windows Workflow Foundation, which is a .Net thing. My experience (which is not much) is that some stuff works as you’d expect in PowerShell, and other stuff definitely does not.

Within the workflow, most of the code is held within a foreach loop, to which I’ve added the -parallel option. The -parallel option is only valid within a workflow; it modifies the foreach such that it runs every iteration at once, rather than one at a time as it would normally.

Back to the workflow. You can see that I’m able to shove a comma separated list of text into a variable, and I’m also able to use Get-Date and Test-Connection and put their outputs into variables. But then for the other stuff I had issues, so resorted to using inlinescript, which allows you to put a chunk of “real” PowerShell into your code – it’s fired up in a separate PowerShell session, and the output is returned back into the workflow.

But because your inlinescript code is running in a separate PowerShell session you have to do something special to get access to the variables outside of the inlinescript. That special thing is to prefix the variable name with using:, so $Time defined outside the inlinescript code block becomes $using:Time within the inlinescript. Don’t forget, as strange things will happen.

In order to handle the error that’s generated by Test-Connection when the ping fails I’ve added -ErrorAction SilentlyContinue, and then used an if statement to check if the ResponseTime property is null, and if so set a variable called $ResponseTime to -1, otherwise this variable gets set to whatever the response time actually was.

Because your code is running in parallel you’ll have to think about how to get at the output. Output to screen is “interesting”: Write-Host won’t work. You can output by just typing text or putting variables on their own on a new line, but that makes PowerShell’s already annoying text formatting even worse, especially as the order that results appear in is “random” due to the fact that the code is not being processed sequentially. Writing the output to a file can cause problems too: if one of the parallel streams has opened the file, it’ll cause another to error if it tries to write at the same time. So I’m using one output file per tested machine.

In order to select the data that goes into the file I’m creating a new PSObject and specifying the property names and data. I then pass this object to Export-Csv.

I’m using export-csv as it’s an easy way to get the data into a file, and I can double-click the file to open it in Excel and then quickly see the results and/or graph them. I combined the response times into one sheet and produced this graph of my home broadband performance last night (I’m having some issues, ugh).
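
If you want to do the same, a one-off merge of the per-computer CSV files is easy enough (using the same desktop path as the script above):

Get-ChildItem -Path "C:\Users\Me\Desktop" -Filter "ping*.csv" |
    ForEach-Object { Import-Csv -Path $_.FullName } |
    Export-Csv -Path "C:\Users\Me\Desktop\AllPings.csv" -NoTypeInformation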

So, we’ve written a workflow, now we need to call it! That’s what the small while loop does down the bottom, just loops over and over again, every five seconds.


Script comparison: CMD, VBScript, PowerShell


This was going to be a short scripting exercise for a new member of staff, but has been ditched due to time constraints. I’d written some sample scripts to see how long it’d take, so thought I’d post them as interesting comparison of the three languages.

They’re perhaps more a reflection of my abilities than being textbook examples of the three languages. I know there are better ways that bits of them could be done, but these were knocked up by me in as little time as possible so needed to be functional rather than perfect.

The brief was to write a short script, around 10-20 lines, that would move files from one folder to another if they had a certain text string in the filename. The folders were both located in the current user’s Documents folder. Source folder was called TestFiles, destination folder for matching files was called NewFiles. The text string to look for in the filenames was “robin” (minus the quotes). I wanted the script to be easily changed to use different folders or to look for a different text string. I also wanted the script to write some basic activity information to screen.

CMD (Windows Command processor/batch file)

@echo off
set Folder=%userprofile%\Documents\TestFiles
set NewFolder=%userprofile%\Documents\NewFiles
set FindText=robin
for /f "delims=" %%i in ('dir /b %Folder%') do (
   echo %%i
   call :FindAndMove "%Folder%\%%i" %FindText%
)
echo Done.
goto :eof

:FindAndMove
echo %1 | find /i "%2" >nul:
if %errorlevel%==0 (
   move %1 %NewFolder%\ >nul:
   echo Moved to new location
)
goto :eof

VBScript

Option Explicit
Dim oFSO, oFolder, oShell, oFile
Dim sFolderPath, sFindText, sNewFolder
Set oFSO = CreateObject("Scripting.FileSystemObject")
Set oShell = CreateObject("WScript.Shell")
sFolderPath = "%userprofile%\Documents\TestFiles"
sNewFolder = "%userprofile%\Documents\NewFiles"
sFindText = "robin"
sFolderPath = oShell.ExpandEnvironmentStrings(sFolderPath)
sNewFolder = oShell.ExpandEnvironmentStrings(sNewFolder)
Set oFolder = oFSO.GetFolder(sFolderPath).Files
For Each oFile In oFolder
   WScript.Echo oFile.Name
   If InStr(oFile.Name,sFindText) Then
      oFSO.MoveFile oFile.Path,sNewFolder&"\"
      WScript.Echo "Moved to new location"
   End If
Next
WScript.Echo "Done."

PowerShell

$Folder = $env:USERPROFILE+"\Documents\TestFiles"
$NewFolder = $env:USERPROFILE+"\Documents\NewFiles"
$FindText = "robin"
$FolderObj = Get-ChildItem -Path $Folder
foreach($File in $FolderObj){
   Write-Host $File.Name
   if($File.Name -match $FindText){
      Move-Item -Path $File.FullName -Destination $NewFolder
      Write-Host "Moved to new location"
   }
}
Write-Host "Done."

 


Logon scripts do not run at logon – Server 2012 R2


Another genius bit of fiddling by Microsoft means that on Server 2012 R2 and Windows 8.1, logon scripts no longer run as part of the logon process. This had me totally foxed for hours. I have my logon script configured via Group Policy.

I believe that it is only GPO-configured scripts that are affected; if you’re using the oldskool Active Directory user account configuration then it seems like the scripts keep running as expected whilst the logon process runs through.

If they’re configured via GPO, the new default setting is to run them five minutes after logon. This makes the logon process nice and speedy – shame it leaves your environment potentially unusable for the user…

Anyway, eventually I came across this policy setting that allows you to make it work how it used to (and how you probably expected it to):

Setting Path: System/Group Policy
Setting Name: Configure Logon Script Delay
Supported On: At least Windows Server 2012 R2, Windows 8.1 or Windows RT 8.1
Explanation
Enter “0” to disable Logon Script Delay. This policy setting allows you to configure how long the Group Policy client waits after logon before running scripts. By default, the Group Policy client waits five minutes before running logon scripts. This helps create a responsive desktop environment by preventing disk contention. If you enable this policy setting, Group Policy will wait for the specified amount of time before running logon scripts. If you disable this policy setting, Group Policy will run scripts immediately after logon. If you do not configure this policy setting, Group Policy will wait five minutes before running logon scripts.
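
If you’d rather set this directly in the registry (e.g. on a standalone machine), the policy appears to map to the following values – I’m going from my reading of the ADMX here, so verify the value names before relying on them:

$Key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\System"
# EnableLogonScriptDelay = 1 turns the policy on; AsyncScriptDelay = delay in minutes (0 = none)
New-ItemProperty -Path $Key -Name "EnableLogonScriptDelay" -PropertyType DWord -Value 1 -Force | Out-Null
New-ItemProperty -Path $Key -Name "AsyncScriptDelay" -PropertyType DWord -Value 0 -Force | Out-Null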


Search for VMware RDMs


A storage admin recently asked me if a couple of LUNs were still in use. They had very little info about them, but knew they were presented to the vSphere hosts. So I wrote a quick script to find all RDMs (Raw Device Mapped LUNs) attached to virtual machines, and list them.

I’m using my new favourite PowerShell display format, namely Out-GridView, as it has a very handy filter box at the top. So you can simply type in the NAA of the LUN or the server name and the results are filtered on the fly.

It has to query each VM to see if it has any RDMs and then get their info, so can take a while to run. I’ve added a progress bar to give you some idea as to how it is progressing. You obviously need the PowerCLI modules loaded, and need to have already run Connect-VIServer before running this script.

$Result = @()
$VMs = Get-VM
$VMsProcessed = 0
foreach($VM in $VMs){
    if($VM.HardDisks.DeviceName.Count -gt 0){
        $Devices = $VM.HardDisks.DeviceName -split "`r"
        foreach($Device in $Devices){
            $DeviceNAA = $Device.Substring(14,32)
            $obj = New-Object system.object
            $obj | Add-Member -MemberType NoteProperty -Name VMName -Value $VM.Name
            $obj | Add-Member -MemberType NoteProperty -Name PowerState -Value $VM.PowerState
            $obj | Add-Member -MemberType NoteProperty -Name DeviceNAA -Value $DeviceNAA
            $Result += $obj
        }
    }
    $VMsProcessed++
    Write-Progress -Activity "Getting VM details" -PercentComplete ($VMsProcessed/$VMs.Count*100)
}
$Result | Out-GridView

How does it work?

  1. Create an empty $Result array
  2. Get all the VMs into $VMs.
  3. Look and see how many RDMs the VM has:
    1. If zero, skip to the next one
    2. If it has RDMs, split the CRLF-separated text string into an array, $Devices
    3. Step through each item in the array; for each item:
      1. Extract the NAA from the text string (which is the full vml.<etc.> form that you see in the vSphere client) and put the extracted NAA text into $DeviceNAA.
      2. Create a new temporary object, $obj, and add to it the VM Name, Power State (you might want to know if the VM connected to the RDM is actually in use or not), and the NAA text string.
      3. Add this object to the master $Result array
  4. Do some basic maths and display/update the progress bar
  5. Display the $Result array in a nice GUI with search feature by piping it to Out-GridView

Find vSphere VM locations: Folder, vApp and Resource Pool


The vSphere client doesn’t seem to give you a nice overview where you can see all the location data about your virtual machines. You can see the Folder and vApp by expanding all the branches in the VMs & Templates view, you can see the vApp and Resource Pool in the Hosts and Clusters view.

This script displays all of it in one view, and thanks to the Out-GridView cmdlet is sortable and searchable. I’ve also added the ability to save the output as XML for later examination, as the script takes a while to run. You need to have the PowerCLI modules loaded and to have already done a Connect-VIServer before running the script.

$Result = @()
$DatacenterName = "RCMTechMain"
$VMsProcessed = 0

Function Get-SaveFileName($initialDirectory){   
    [System.Reflection.Assembly]::LoadWithPartialName("System.windows.forms") | Out-Null
    $SaveFileDialog = New-Object System.Windows.Forms.SaveFileDialog
    $SaveFileDialog.initialDirectory = $initialDirectory
    $SaveFileDialog.filter = "XML files (*.XML)| *.XML"
    $SaveFileDialog.ShowDialog() | Out-Null
    $SaveFileDialog.filename
}

Write-Progress -Activity "Get all VMs" -PercentComplete (0)
$VMs = Get-VM -Location (Get-Datacenter -Name $DatacenterName)

foreach($VM in $VMs){
    $obj = New-Object system.object
    $FolderTemp = ""
    $Folder = ""
    $FolderText = "Folder"
    while($FolderTemp -ne $DatacenterName){
        $FolderTemp = (Invoke-Expression -Command ("`$VM."+$FolderText)).Name
        if($FolderTemp -ne "vm"){
            $Folder = "\" + $FolderTemp + $Folder
        }
        if($FolderTemp -eq $null){
            $FolderTemp = $DatacenterName
            $Folder = "\" + $DatacenterName
        }
        $FolderText = $FolderText + ".Parent"
    }

    $obj | Add-Member -MemberType NoteProperty -Name VMName -Value $VM.Name
    $obj | Add-Member -MemberType NoteProperty -Name PowerState -Value $VM.PowerState
    $obj | Add-Member -MemberType NoteProperty -Name Folder -Value $Folder
    $obj | Add-Member -MemberType NoteProperty -Name vApp -Value $VM.VApp.Name
    if($VM.ResourcePool.Name -eq "Resources"){
        $obj | Add-Member -MemberType NoteProperty -Name ResourcePool -Value $null
    } else {
        $obj | Add-Member -MemberType NoteProperty -Name ResourcePool -Value $VM.ResourcePool.Name
    }
    $Result += $obj

    $VMsProcessed++
    Write-Progress -Activity "Getting VM details" -PercentComplete ($VMsProcessed/$VMs.Count*100)
}

$OutPath = Get-SaveFileName -initialDirectory ([environment]::GetFolderPath("mydocuments"))
if($OutPath -ne ""){
    $Result | Export-Clixml -Path $OutPath
}
$Result | Out-GridView -Wait

Set the $DatacenterName to your datacentre name as seen in the vSphere client (or you could do a Read-Host instead).

The folder path is a bit fiddly to work with, hence the while loop. The immediate folder that the VM is located in can be found via $VM.Folder. This will be blank if it’s not in a folder, so for consistency with the vSphere client I’m setting their folder path to the same root as those VMs in folders. If it is in a folder, we then loop looking at the $VM.Folder.Parent, $VM.Folder.Parent.Parent, and so on until we reach the folder that is the name of the datacentre. All the VMs not in a folder seem to be in a root folder called “vm” which doesn’t show in the vSphere client, so I’m ignoring this and not showing it in the resultant folder path.

I’m not displaying nested resource pools as I don’t have any. They work roughly along the lines of the folders though, so you get $VM.ResourcePool, then $VM.ResourcePool.Parent etc. Implementing this functionality has been left as an exercise for the reader ;-)

Then, as with previous scripts, I’m writing the output to an object, which gets added to an array. I’ve added a progress bar as the data retrieval can take a while, and progress bars are easy.

Then we repeat for the next VM.

Finally, you’re prompted if you want to save the file (cancel this if you don’t want to save it). If you do save, the array of objects is dumped to file via Export-Clixml. Then we display the array via Out-GridView.

To open a previously saved XML file you just use Import-Clixml and pipe it to Out-GridView. The machine you do this on doesn’t need to have the PowerCLI stuff, it only needs PowerShell as we’re just reading data in from a file.

To make this even easier, I’ve got a small script that’ll mean you don’t even have to go near a PowerShell prompt.



Browse for and display a PowerShell Clixml file in a DataGrid window


I’ve been using Out-GridView a lot recently; I seem to have a lot of uses for it. One of the things I’m displaying is an array of custom objects. Sometimes the script that generates this data takes a while to run, in which case it’s handy to dump the array into an XML file via the Export-Clixml cmdlet.

You can then read this back in and display it in a DataGrid via one line:

Import-Clixml -Path <filename> | Out-GridView

This also has the benefit that you can share the data with people who don’t have the necessary PowerShell modules or permissions to run the script that generated the data.

To make this even easier I’ve written a short script to let you browse for the file and then display it. You can just run the script from an Explorer window or shortcut, no typing necessary!

Function Get-OpenFileName($initialDirectory){   
    [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms") | Out-Null
    $OpenFileDialog = New-Object System.Windows.Forms.OpenFileDialog
    $OpenFileDialog.initialDirectory = $initialDirectory
    $OpenFileDialog.filter = "XML files (*.XML)| *.XML"
    $OpenFileDialog.ShowDialog() | Out-Null
    $OpenFileDialog.filename
}

$XMLPath = Get-OpenFileName -initialDirectory ([environment]::GetFolderPath("mydocuments"))

if($XMLPath -ne ""){
    $XML = Import-Clixml -Path $XMLPath
    $XML | Out-GridView -Wait
}

The if statement is there in case you click “cancel” or close on the browse dialogue. The -Wait stops the grid view closing immediately if you run the script from Explorer.


Basic RDS 2012 R2 Shadowing Console


Whereas older versions of Windows had the tsadmin.exe utility, Server 2012 R2 doesn’t have anything equivalent. Helpdesk staff need a simple, easy to manage and configure console to allow them to shadow Remote Desktop users.

Luckily this is pretty easy to achieve thanks to PowerShell, the re-introduced shadowing feature in Windows Server 2012 R2 and the Remote Desktop Client (v6.3.9600).

I use Get-RDUserSession to query the Connection Broker for all user sessions, then display the active ones in a PowerShell GridView (not much point trying to shadow a disconnected session). A session can be selected and the SessionID and HostServer parameters are then passed to mstsc.exe with the /shadow and /control options.

Aside from needing the right permissions to query the Connection Broker and shadow, you also need to have PowerShell v3 or higher and have installed the RSAT-RDS-Tools. Then just create a shortcut to the PowerShell script, e.g.

powershell.exe C:\Scripts\ShadowSession.ps1

Finally, set the $ConnectionBroker to be the fully qualified name of your connection broker server.

$ConnectionBroker = "cbr01.rcmtech.co.uk"

$AllSessions = Get-RDUserSession -ConnectionBroker $ConnectionBroker
if($AllSessions -eq $null){
    [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms") | Out-Null
    $Title = "Shadowing"
    $Message = "No user sessions found, sorry!"
    $Buttons = [System.Windows.Forms.MessageBoxButtons]::OK
    $Icon = [System.Windows.Forms.MessageBoxIcon]::Exclamation
    [System.Windows.Forms.MessageBox]::Show($Message,$Title,$Buttons,$Icon)
    return
}

$ActiveSessions = $AllSessions | Where-Object -Property SessionState -EQ "STATE_ACTIVE"
$GridView = $ActiveSessions | Select-Object -Property CollectionName,UserName,CreateTime,HostServer,SessionID
$Session = $GridView | Out-GridView -Title "Remote Desktop Shadowing - Active Sessions" -OutputMode Single

if($Session -eq $null){
    # No session selected, user probably clicked Cancel
    return
}

mstsc /v:($Session.HostServer) /shadow:($Session.SessionId) /control | Out-Null

Get and delete DNS A and PTR records via PowerShell


My Remote Desktop Session Host build process is now pretty slick: I can easily rebuild a lot of RDSH VMs very quickly. The process basically involves powering off the existing VM, deleting both it and its Active Directory account, and provisioning a new VM. However this causes a slight issue, because the old VM’s AD DNS record is left behind, with the security to update it still assigned to the old VM. This means that if I change the allocated IP address for an RDSH server as part of the rebuild, the new VM doesn’t have permission to update the DNS A and PTR records for itself (security is done by AD SID; DNS just works on names/IP addresses).

So, time to use Get-DnsServerResourceRecord and Remove-DnsServerResourceRecord.

Doing the A record is easy:

$NodeToDelete = "rdsh01"
$DNSServer = "dns01.rcmtech.co.uk"
$ZoneName = "rcmtech.co.uk"
$NodeDNS = $null
$NodeDNS = Get-DnsServerResourceRecord -ZoneName $ZoneName -ComputerName $DNSServer -Node $NodeToDelete -RRType A -ErrorAction SilentlyContinue
if($NodeDNS -eq $null){
    Write-Host "No DNS record found"
} else {
    Remove-DnsServerResourceRecord -ZoneName $ZoneName -ComputerName $DNSServer -InputObject $NodeDNS -Force
}

So we try and get the DNS record we’re interested in, and if the record is found, we delete it. I was originally using an -ErrorAction of “Stop” rather than “SilentlyContinue”, which worked to a point but was inconsistent: if the DNS name I was trying to get had never existed (i.e. I made something up) it worked fine, but if I created an A record, ran the script to delete it, then tried the script again with the same DNS name, Get-DnsServerResourceRecord didn’t error. It didn’t return anything either, which is why this is coded to use an if statement to check whether $NodeDNS is $null or not, rather than using a try/catch as was my original plan.

The PTR record is a little more fiddly. Now, getting PTR info via nslookup is easy:

C:\> nslookup 192.168.1.123
Server: dns01.rcmtech.co.uk
Address: 192.168.1.10

Name: rdsh01.rcmtech.co.uk
Address:  192.168.1.123

Not so with PowerShell… You have to specify the reverse lookup zone, and also give the right bit of the IP address in the right order.

For example, I want to find the name of the machine with IP address 192.168.1.123, and let’s assume my internal class B address range is 192.168.0.0.

Get-DnsServerResourceRecord -ZoneName "168.192.in-addr.arpa" -ComputerName "dns01.rcmtech.co.uk" -Node "123.1" -RRtype Ptr

…nice. You can’t just pass it the full IP address as you can with nslookup. Oh no, we have to be a little more “creative”.

What I’ve ended up doing is using Get-DnsServerResourceRecord to look up the A record of the server to get its IP address, split this into an array – each element containing an octet of the address, then delete the PTR record and finally the A record.

param($NodeToDelete = "")
if($NodeToDelete -eq ""){
    Write-Error "You need to specify a name to delete" -Category NotSpecified -CategoryReason "Need a name to delete" -CategoryTargetName "Missing parameter" -CategoryTargetType "DNS name"
    return
}

$DNSServer = "dns01.rcmtech.co.uk"
$ZoneName = "rcmtech.co.uk"
$ReverseZoneName = "168.192.in-addr.arpa"
$NodeARecord = $null
$NodePTRRecord = $null

Write-Host "Check for existing DNS record(s)"
$NodeARecord = Get-DnsServerResourceRecord -ZoneName $ZoneName -ComputerName $DNSServer -Node $NodeToDelete -RRType A -ErrorAction SilentlyContinue
if($NodeARecord -eq $null){
    Write-Host "No A record found"
} else {
    $IPAddress = $NodeARecord.RecordData.IPv4Address.IPAddressToString
    $IPAddressArray = $IPAddress.Split(".")
    $IPAddressFormatted = ($IPAddressArray[3]+"."+$IPAddressArray[2])
    $NodePTRRecord = Get-DnsServerResourceRecord -ZoneName $ReverseZoneName -ComputerName $DNSServer -Node $IPAddressFormatted -RRType Ptr -ErrorAction SilentlyContinue
    if($NodePTRRecord -eq $null){
        Write-Host "No PTR record found"
    } else {
        Remove-DnsServerResourceRecord -ZoneName $ReverseZoneName -ComputerName $DNSServer -InputObject $NodePTRRecord -Force
        Write-Host ("PTR gone: "+$IPAddressFormatted)
    }
    Remove-DnsServerResourceRecord -ZoneName $ZoneName -ComputerName $DNSServer -InputObject $NodeARecord -Force
    Write-Host ("A gone: "+$NodeARecord.HostName)
}

PowerShell function to pin and unpin from Windows Taskbar


Wrote this to make it easy to add and remove pinned items from the taskbar.

Code

function Pin-Taskbar([string]$Item = "",[string]$Action = ""){
    if($Item -eq ""){
        Write-Error -Message "You need to specify an item" -ErrorAction Stop
    }
    if($Action -eq ""){
        Write-Error -Message "You need to specify an action: Pin or Unpin" -ErrorAction Stop
    }
    if((Get-Item -Path $Item -ErrorAction SilentlyContinue) -eq $null){
        Write-Error -Message "$Item not found" -ErrorAction Stop
    }
    $Shell = New-Object -ComObject "Shell.Application"
    $ItemParent = Split-Path -Path $Item -Parent
    $ItemLeaf = Split-Path -Path $Item -Leaf
    $Folder = $Shell.NameSpace($ItemParent)
    $ItemObject = $Folder.ParseName($ItemLeaf)
    $Verbs = $ItemObject.Verbs()
    switch($Action){
        "Pin"   {$Verb = $Verbs | Where-Object -Property Name -EQ "Pin to Tas&kbar"}
        "Unpin" {$Verb = $Verbs | Where-Object -Property Name -EQ "Unpin from Tas&kbar"}
        default {Write-Error -Message "Invalid action, should be Pin or Unpin" -ErrorAction Stop}
    }
    if($Verb -eq $null){
        Write-Error -Message "That action is not currently available on this item" -ErrorAction Stop
    } else {
        $Result = $Verb.DoIt()
    }
}

Syntax

Pin-Taskbar -Item <path to item> -Action <Pin|Unpin>

Usage examples

Pin MS Paint to the taskbar

Pin-Taskbar -Item "C:\Windows\System32\mspaint.exe" -Action Pin

Unpin MS Paint from the taskbar

Pin-Taskbar -Item "C:\Windows\System32\mspaint.exe" -Action Unpin

PowerShell: There are null values and then there are SQL null values


I was reading some data out of a SQL Server table, and wanted to do an operation if one of the string values returned, MAC, was either blank or null. I was using the following code:

$NewServerName = "SomeServerName"
$DBServerName = "RCMSQL01"
$SQLConnection = New-Object -TypeName System.Data.SqlClient.SqlConnection -ArgumentList "Server=$DBServerName;Database=Build;Integrated Security=SSPI"
$SQLConnection.Open()
$SQLCommand = $SQLConnection.CreateCommand()
$SQLCommand.CommandText = "SELECT Host,OU,MAC FROM [Build].[dbo].[Main] WHERE ServerName like '$NewServerName'"
$SQLReader = $SQLCommand.ExecuteReader()
if($SQLReader.Read()){
    $HyperVHost = $SQLReader["Host"]
    $OU = $SQLReader["OU"]
    $MAC = $SQLReader["MAC"]
    $SQLReader.Close()
} else {
    Write-Host "VM record missing in Build DB"
}

Due to the nature of the SQL database, the values in the $MAC column might be any of null (the SQL Server default), blank (i.e. some text had been added but removed), or “some actual text”.

If the MAC column was showing as Null, the following code did not work as expected:

if($MAC -eq $null){
    Write-Host "MAC is null"
}

It seems as though this is because the value being returned is actually of the type System.DBNull rather than being an actual PowerShell null value ($null). You can see this by piping the variable into Get-Member.
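
For example:

$MAC | Get-Member
# TypeName: System.DBNull (rather than the System.String you might expect)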

There are several ways to work with this:

1) Test the $MAC variable against the “special” DBNull value

if($MAC -eq [System.DBNull]::Value){
    Write-Host "MAC is null"
}

2) Ensure that the variable is created as a string, then test for it being blank rather than null. At the top of the script:

[string]$MAC = ""

Then to test, once the value has been retrieved from SQL Server:

if($MAC -eq ""){
    Write-Host "MAC is blank (might actually be Null in SQL Server)"
}

3) Convert the variable into a string, then test for it being blank

$MACString = $MAC.ToString()
if($MACString -eq ""){
    Write-Host "MAC is blank (might actually be Null in SQL Server)"
}
