Importing CSV files in PowerShell

If you work a lot with PowerShell and the various Microsoft products, you will eventually run into importing CSV files. Importing CSV files can be a cumbersome task in PowerShell, so here are a few tips and tricks. This article is a kind of two-fer, in that we’ll look at working with CSV files and at resolving computer names to IP addresses.

To begin, I have a sample CSV file.
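Here is what such a file might look like (the computer names are hypothetical stand-ins for the machines in my lab):

```
Computername,Location,OS
chi-dc01,Chicago,Windows Server 2012 R2
chi-fp02,Chicago,Windows Server 2012 R2
chi-web01,Chicago,Windows Server 2008 R2
```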

The great thing about PowerShell is that you can bring any CSV file into PowerShell using Import-Csv. The cmdlet takes the column headings and uses them as property names.

Importing a CSV file into PowerShell (Image Credit: Jeff Hicks)
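A minimal, self-contained sketch (the file name, column names, and values are my own hypothetical sample):

```powershell
# Create a throwaway CSV so the example is self-contained
@'
Computername,Location
chi-dc01,Chicago
chi-fp02,Chicago
'@ | Set-Content -Path .\computers.csv

# Each row becomes an object; the column headings become property names
$computers = Import-Csv -Path .\computers.csv
$computers | Select-Object Computername, Location
```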

I created my CSV file so that some of the headings match cmdlet parameters I might want to use. Let me explain. One technique I could use to get an IP address is the Test-Connection cmdlet.

Using PowerShell's Test-Connection cmdlet to see an IP address (Image Credit: Jeff Hicks)
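For example (chi-dc01 is a hypothetical machine from my lab; substitute a name you can actually reach):

```powershell
# Ping once and pull out the resolved IPv4 address; on Windows PowerShell
# the result objects expose Address and IPV4Address properties
Test-Connection -ComputerName chi-dc01 -Count 1 |
    Select-Object -Property Address, IPV4Address
```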

How does this relate to my CSV file? This is when you need to look at cmdlet help, specifically in this case at the Computername parameter.

The Test-Connection Computername parameter (Image Credit: Jeff Hicks)

The description says it can be a name. Good, because that is what I have in the CSV file. But here’s the cool part. Take a look at the ‘Accept pipeline input’ section. It has a value of True and an indication of ByPropertyName. This means the cmdlet will accept piped-in objects. If the incoming object has a property called Computername, PowerShell will take that value and bind it to the corresponding parameter. Let me do a quick proof of concept.

Testing parameter binding (Image Credit: Jeff Hicks)
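A quick way to test the binding is to hand-craft an object with a Computername property and pipe it in (localhost keeps the test self-contained):

```powershell
# The incoming object's Computername property binds to
# Test-Connection's -ComputerName parameter (ByPropertyName)
[pscustomobject]@{ Computername = 'localhost' } |
    Test-Connection -Count 1
```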

If you recall, each imported object has a Computername property, courtesy of the Computername column heading. Test-Connection sees it and hooks everything up for me. Let’s try it now with the entire CSV file.

Testing connections from imported objects (Image Credit: Jeff Hicks)
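With the matching heading in place, the whole file can be piped straight through (assuming the hypothetical computers.csv from earlier; names that can’t be resolved will simply produce errors):

```powershell
# Every imported object carries a Computername property,
# so Test-Connection binds each one in turn
Import-Csv -Path .\computers.csv | Test-Connection -Count 1
```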

Errors aside for offline or unknown computers, this works!

Before we get to the next step, what do you do if your CSV file doesn’t have a matching heading that you can use with parameter binding? The answer is not to edit the file. This is a situation where you will need to use ForEach-Object. Import the CSV and for each imported object, do something.
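Suppose, hypothetically, the column were called Name instead of Computername; the automatic binding would fail, but ForEach-Object lets you wire it up manually:

```powershell
# Hypothetical file whose heading (Name) doesn't match -ComputerName
@'
Name,Location
localhost,Lab
'@ | Set-Content -Path .\servers.csv

# No automatic binding here: read the Name property ourselves
# and hand it to the -ComputerName parameter explicitly
Import-Csv -Path .\servers.csv |
    ForEach-Object { Test-Connection -ComputerName $_.Name -Count 1 }
```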

It works, but it is a bit more cumbersome to type. If you have a process planned for your CSV file that involves PowerShell cmdlets, the more you can set up your CSV ahead of time with the right column headings, the easier the import and processing will be.

If your CSV doesn’t have a header line, you can specify one at import:
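A sketch, using a hypothetical headerless file:

```powershell
# A file with data rows only and no header line
@'
chi-dc01,Chicago
chi-fp02,Chicago
'@ | Set-Content -Path .\noheader.csv

# Supply the property names yourself with -Header
$computers = Import-Csv -Path .\noheader.csv -Header Computername, Location
$computers
```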

Or, let’s say your CSV does have a header, but you want to use something different and can’t really modify the source file. You can’t simply use Import-CSV with the Header parameter, because the original header row would then be imported as a data row. Instead, you have to strip the first line off the file.

Stripping off the header line from a CSV
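One way to do that is to read the file as plain text and skip the first line (again with a self-contained hypothetical file):

```powershell
# A CSV whose existing header we want to ignore
@'
Computername,Location
chi-dc01,Chicago
chi-fp02,Chicago
'@ | Set-Content -Path .\computers.csv

# Read as text, dropping the original header row
$raw = Get-Content -Path .\computers.csv | Select-Object -Skip 1
$raw
```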

To bring this into PowerShell as a set of objects, use the ConvertFrom-CSV cmdlet, which also has a Header parameter.

Converting a CSV with an alternate header
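Putting the two steps together (a self-contained sketch; Name replaces the original Computername heading purely for demonstration):

```powershell
# The source file with the heading we can't modify
@'
Computername,Location
chi-dc01,Chicago
chi-fp02,Chicago
'@ | Set-Content -Path .\computers.csv

# Skip the real header, then convert using the alternate one
$renamed = Get-Content -Path .\computers.csv |
    Select-Object -Skip 1 |
    ConvertFrom-Csv -Header Name, Location
$renamed
```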

Since my file is already using Computername, I used Name for the sake of demonstration. From here I could pipe to another cmdlet and take advantage of pipeline binding.

Fortunately, my CSV file is structured, so I don’t have to use any of these tricks.

Simple script to backup all SQL Server databases

Sometimes things that seem complicated are much easier than you think, and that is the power of using T-SQL to take care of repetitive tasks. One such task may be the need to back up all databases on your server. This is not a big deal if you have a handful of databases, but I have seen several servers where there are 100+ databases on the same instance of SQL Server. You could use SQL Server Management Studio to back up the databases, or even use Maintenance Plans, but using T-SQL is a much simpler and faster approach.

With T-SQL you can generate your backup commands, and with a cursor you can loop through all of your databases to back them up one by one. This is a very straightforward process, and you only need a handful of commands to do it.

Here is the script that will allow you to back up each database within your instance of SQL Server. You will need to change the @path variable to the appropriate backup directory.

File Naming Format: DBname_YYYYMMDD.BAK
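A sketch of such a script (my reconstruction of the approach described, not a drop-in production script; adjust @path and the excluded database names to taste):

```sql
DECLARE @name     NVARCHAR(256)  -- database name
DECLARE @path     NVARCHAR(512)  -- path for backup files
DECLARE @fileName NVARCHAR(512)  -- filename for the backup
DECLARE @fileDate NVARCHAR(40)   -- date stamp used in the file name

-- specify the backup directory
SET @path = 'C:\Backup\'

-- date stamp for the file name (CONVERT style 112 gives YYYYMMDD)
SELECT @fileDate = CONVERT(VARCHAR(20), GETDATE(), 112)

-- cursor over every database except the system databases
DECLARE db_cursor CURSOR READ_ONLY FOR
SELECT name
FROM master.sys.databases
WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb')

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name

WHILE @@FETCH_STATUS = 0
BEGIN
    -- build the file name and take a full backup of this database
    SET @fileName = @path + @name + '_' + @fileDate + '.BAK'
    BACKUP DATABASE @name TO DISK = @fileName

    FETCH NEXT FROM db_cursor INTO @name
END

CLOSE db_cursor
DEALLOCATE db_cursor
```

Running this requires a live SQL Server instance with a writable backup directory, so treat it as a starting point and test it on a non-production server first.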

File Naming Format: DBname_YYYYMMDD_HHMMSS.BAK

If you also want to include the time in the filename, you can replace this line in the above script:
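Assuming the script builds its filename from a @fileDate variable as in my sketch, the date-only line is:

```sql
SELECT @fileDate = CONVERT(VARCHAR(20), GETDATE(), 112)
```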

with this line:
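The time-stamped variant appends HHMMSS, with the colons stripped out of the time (CONVERT style 108 gives HH:MM:SS):

```sql
SELECT @fileDate = CONVERT(VARCHAR(20), GETDATE(), 112)
    + '_' + REPLACE(CONVERT(VARCHAR(20), GETDATE(), 108), ':', '')
```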

In this script we are bypassing the system databases, but these could easily be included as well. You could also turn this into a stored procedure that takes a database name as a parameter and, if the parameter is left NULL, backs up all databases. Any way you choose to use it, this script gives you a starting point to simply back up all of your databases.

Also, if you want to bypass some of your user databases, you can include them in the NOT IN section as well.

Nagios 3 on Debian Wheezy or Ubuntu 14.x

After a brief dabble with Observium, I’ve decided to go back to Nagios. Nagios 3 is a full-featured monitoring package for Linux hosts.
This article will describe how to install Nagios 3 on a Debian or Ubuntu host. For people who have never heard of Nagios, please take a look at http://www.nagios.org/projects/nagioscore . In this how-to we will use the open-source version, named Nagios Core.

To start with Nagios Core, you will need a fully functioning LAMP server. If you don’t have a LAMP environment, run the following command:
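On Debian Wheezy / Ubuntu 14.x, a typical LAMP install of that era looks like this (run as root, or prefix with sudo):

```shell
# Refresh the package index, then install Apache, MySQL and PHP 5
apt-get update
apt-get install -y apache2 mysql-server php5 libapache2-mod-php5 php5-mysql
```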

The MySQL installer will ask you for an administrative password, which you will need to enter twice.

After the LAMP installation we can continue with the Nagios installation. Run the following command on your Nagios server:
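A sketch of the install command on Debian/Ubuntu (the nagios3 package will prompt, via debconf, for a password for the nagiosadmin web user):

```shell
# Nagios Core 3 plus the plugin for querying remote NRPE agents
apt-get install -y nagios3 nagios-nrpe-plugin
```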

I chose my own password here; use values that suit your server environment.

Next we need to configure Nagios on the server, as follows:
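Open the main configuration file in your editor of choice:

```shell
nano /etc/nagios3/nagios.cfg
```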

Next, change the value of check_external_commands to 1 to enable external commands:
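The relevant line in /etc/nagios3/nagios.cfg should end up reading:

```
check_external_commands=1
```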

Now make the changes effective by restarting the Nagios service with the command:
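On Debian/Ubuntu the restart looks like this:

```shell
service nagios3 restart
```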

Now access the web interface at http://server1.example.com/nagios3 or http://IP/nagios3, as follows:

The username will be nagiosadmin, with the password created above!

Next click on Host Groups:

It will show the present server as localhost. Just click on localhost.

Now that the basic setup of the Nagios server is finished, we can add our first client / server to monitor.

A monitored client or server in Nagios is called a “host”. To add a host, you will need to install the following packages on it:
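On the client (server2.example.com), install the NRPE daemon plus the standard check plugins:

```shell
apt-get install -y nagios-nrpe-server nagios-plugins
```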

Next, on server2.example.com, define the main Nagios monitoring server in the NRPE configuration file, as follows:
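In /etc/nagios/nrpe.cfg on server2.example.com, allow the monitoring server to connect (keeping 127.0.0.1 for local checks):

```
allowed_hosts=127.0.0.1,192.168.0.100
```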

Here 192.168.0.100 is the IP of the main Nagios monitoring server, server1.example.com. Next, start the NRPE service as shown below:
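On Debian/Ubuntu:

```shell
service nagios-nrpe-server restart
```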

Similarly, we can add more nodes that we want to monitor.

We also need to define the Nagios client entries on the server side. Create a file monitor1.cfg in the directory /etc/nagios3/conf.d/ on server1.example.com and define the values there. In the object configuration files you can define hosts, host groups, contacts, contact groups, services, and so on. You can split your object definitions across several config files if you wish, or keep them all in a single config file.

The default directory is /etc/nagios3/conf.d/, and you can use any file name; I used monitor1.cfg. The entries should look like this:
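A minimal sketch of /etc/nagios3/conf.d/monitor1.cfg (the address 192.168.0.101 is an example; the generic-host and generic-service templates and the check commands shown come from the default Debian Nagios and NRPE configs, so verify they exist on your system):

```
define host {
    use         generic-host
    host_name   server2.example.com
    alias       server2
    address     192.168.0.101
}

define service {
    use                 generic-service
    host_name           server2.example.com
    service_description PING
    check_command       check_ping!100.0,20%!500.0,60%
}

define service {
    use                 generic-service
    host_name           server2.example.com
    service_description SSH
    check_command       check_ssh
}

define service {
    use                 generic-service
    host_name           server2.example.com
    service_description DISK
    check_command       check_nrpe_1arg!check_hda1
}
```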

In the file above, the define host block contains the information about the added client server2.example.com; below it, we let Nagios check different services. I chose to check the PING, SSH, and DISK services. For more details on the services available in Nagios, check the directory /etc/nagios-plugins/config.

Next, to make the changes effective, restart the nagios3 service:
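```shell
service nagios3 restart
```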

Now we can check the client details in the Nagios web interface under Host Groups, as follows:

Next, click on server2.example.com and check the values there.

 

That is it! Now you have a working Nagios server with 2 monitored hosts.

In the following posts I will describe how you can implement monitoring alerts, detailed graphs, and much more. So stay tuned!

OpenSSL update on Linux – Heartbleed

The Heartbleed bug caused a fair bit of commotion in the web hosting sector. If you don’t know about the Heartbleed bug, I would suggest you go to http://heartbleed.com/ and read up first. The solution to the bug is quite simple: you just need to update your OpenSSL implementation to the latest (patched) version. Because every Linux OS is different, I will explain the steps you need to take for each.

Ubuntu 12.04/12.10/13.04/13.10 and Debian 7
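A sketch of the upgrade on Debian/Ubuntu (run as root, or prefix with sudo):

```shell
# Pull in the patched packages and confirm the build date
apt-get update
apt-get install -y --only-upgrade openssl libssl1.0.0
openssl version -a   # the "built on" date should be Apr 7 2014 or later
```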

CentOS 6.x
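On CentOS the fix was backported, so the version string still reads 1.0.1e; check the package changelog instead:

```shell
yum clean all
yum update -y openssl
# verify the Heartbleed fix is present in the installed package
rpm -q --changelog openssl | grep -i CVE-2014-0160
```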

CentOS 6.x with DirectAdmin
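For DirectAdmin boxes, a sketch: update the system OpenSSL via yum, then rebuild the services DirectAdmin compiled against it (the exact custombuild targets depend on your setup):

```shell
# Update the system OpenSSL first
yum update -y openssl

# Rebuild the DirectAdmin-compiled services against the patched library
cd /usr/local/directadmin/custombuild
./build update
./build apache
./build php n
```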

CentOS 6.x with Cpanel
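For cPanel servers, a sketch: update the system OpenSSL, then let cPanel’s own update script rebuild and restart its services:

```shell
# Update the system OpenSSL
yum update -y openssl

# Force a cPanel update run so its services pick up the patched library
/scripts/upcp --force
```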

Do not forget to reboot afterwards. This step is quite important, so that all services are restarted and use the patched OpenSSL stack.

Good luck

Hyper-V Cluster error 0x80070032

A Hyper-V cluster gives you the means to transfer VMs in a live state to a different host. This is quite handy for high availability and when you need to perform maintenance on one of your physical hosts. Microsoft and other blogs describe how to set up a Hyper-V cluster, so I’m not going into that now. This post describes how to resolve the Hyper-V cluster error 0x80070032.

You will get this error when you try to live-migrate your virtual machines from one physical host to another. You will have to look in the detailed view (right-click on the VM role) to see this error. The Windows event viewer and the Hyper-V cluster event viewer won’t display it; you will only get the message that the live migration has failed, without a clear reason.

The solution itself is quite simple. Open the settings of the role/VM and go to the name field. Below the name field there is also a notes field; clear the notes field and retry the live migration. It will then proceed as it should.
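If you prefer PowerShell over the Failover Cluster Manager GUI, clearing the notes can also be done like this (run on the host that owns the VM; the VM name is a placeholder):

```powershell
# Clear the Notes field of the VM; an empty string removes the
# metadata that trips up the live migration
Set-VM -Name 'MyVM' -Notes ''
```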

The new websites are live

With pride I can announce that all the Anvar IT websites have been updated and are back live now.
The websites are as follows:

I hope you will enjoy all the websites, and if there are any problems, please don’t hesitate to contact me.