Solution to Build Golden Images across Major Hypervisors Part 1

A small intro to the Packer/Ansible project, why it’s needed, and how to create a VM using Packer

Out of my depth

At every employer I’ve worked for, we’ve had both managed/internal and external customers, and we always tried to deliver the best image-based VDI possible.
The images could be created from master scripts calling different installer scripts with hard-coded usernames and passwords. Those were the days :-)
Later came SMS, which became SCCM (and MDT for customers only needing a bare-minimum deployment solution). It still works great, but it lacks simple integration with any form of password vault, or encryption of the username and password in customsettings.ini.
Since everyone is doing DevOps, I thought: why not give it a shot? Why not scrap our normal MDT way of doing things and start completely from scratch? That could be fun, right? To be honest, I’ve never been more out of my depth than since I started doing this. Some quick background info: most of the code I write is simple PowerShell one-liners and minor scripts.

The goal was to find a new way to build golden images using modern tools that could be used across major hypervisors. What I ended up with was a combination of Packer, Ansible playbooks and PowerShell. I’m still looking for a password-vault solution; it may end up being 1Password. Time will tell.

This series of blog posts is as much a tool for me to document my journey as it is for you to learn from my mistakes :-)
If you have any pointers or want to participate in my little endeavor, do not hesitate to reach out.

Build tools

  • Packer
    • simplicity
    • easy-to-read code
    • well documented
  • Ansible Playbooks
    • no agent
    • central controller node
    • well documented
  • PowerShell
    • well, you know

How to create a VM using Packer

The first blog in the series will be about Packer.

Who is HashiCorp?

For those who don’t know HashiCorp: let’s just say they were tired of manually creating complete infrastructures and golden images, so they wrote Terraform and Packer. They also created a configuration language called HashiCorp Configuration Language, or “HCL” for short, which was later extended to HCL2. HCL2 is what I will be focusing on.

Simple to run

Packer is extremely simple to run.

packer init .
packer build "filename.pkr.hcl"

In a matter of minutes, you will have a VM, with installed OS.
It’s also possible to use variable files:

packer init .
packer build -var-file="filename.pkrvars.hcl" .

That little “.” is very important: it tells Packer to use every template file in the current directory.
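A variable definitions file is just a list of value assignments for the input variables a template declares. A minimal sketch (the variable names and values here are only examples, not from the actual project):

// filename.pkrvars.hcl
vsphere_endpoint = "vcenter.fqdn.local"
vsphere_username = "packer-svc@vsphere.local"
vm_name          = "win2022-golden"

Packer automatically matches each assignment to the input variable with the same name.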

HCL Templates, Blocks, Variables and so on

HCL2 is a declarative language: you tell Packer what to do and how to do it. This is done using plugins, such as the vSphere plugin, which in this case uses the vSphere API to create and configure VMs.

HCL Templates

The HCL template is the actual file where all the code is written. As a reference for e.g. PowerShell users: the template is the equivalent of a .ps1 file.

Blocks

In an HCL template, there are these things called blocks, which are containers for objects such as sources and variables.
Blocks have a block type and can have zero or more labels.

A block could look like this:

packer {
  required_version = ">= 1.9.1"
  required_plugins {
    vsphere = {
      version = ">= 1.2.0"
      source  = "github.com/hashicorp/vsphere"
    }
  }
}

“packer” is the block type, and no label has been set.

Packer plugins

As I mentioned earlier in the blog, Packer uses plugins to create VMs in vSphere, run Windows Update, install applications using Ansible playbooks, and so on.
In the code above, you can see that “packer” is the block type, that it requires the Packer version to be at least “1.9.1”, and that the required plugin is “vsphere”. If plugins are used, the code above needs to be present in every HCL template, otherwise the plugins won’t work. See Packer init for downloading the plugin binaries.

Variables

There are two types of variables. Input Variables and Local Variables.
Input variables are usually called “variables” and Local variables are called “locals”.

Input variables

Input variables can have default values, which can be overridden using command-line options, environment variables, or variable definitions files. However, nothing can change the value of an input variable after that initial override.
An example of an input variable:
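For example, assuming a declared variable named “vsphere_endpoint”, the same value can be supplied on the command line or through an environment variable using Packer’s PKR_VAR_ prefix (the value is just a placeholder):

packer build -var "vsphere_endpoint=vcenter.fqdn.local" .

export PKR_VAR_vsphere_endpoint="vcenter.fqdn.local"
packer build .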

variable "vsphere_endpoint" {
  type        = string
  description = "The fully qualified domain name or IP address of the vCenter Server instance."
}

“variable” is the block type and “vsphere_endpoint” is the label.

Example of assigning a value to the variable, e.g. in a variable definitions file:

vsphere_endpoint = "vcenter_server.fqdn.local"

Example of using the declared variable:

vcenter_server = var.vsphere_endpoint

Local variables

Local variables, or “locals”, can be referenced multiple times and can contain other local variables, input variables, data sources, etc.
An example of a locals block:

locals {
  build_by          = "Built by: theDaniel Packer ${packer.version}"
  build_date        = formatdate("YYYY-MM-DD hh:mm ZZZ", timestamp())
  build_description = "Built on: ${local.build_date}\n${local.build_by}"
  iso_paths         = ["[${var.common_iso_datastore}] ${var.iso_path}/${var.iso_file}", "[] /usr/lib/vmware/isoimages/${var.vm_guest_os_family}.iso"]
}

The first two are plain local variables. build_description combines the two local variables above it, while iso_paths is built from input variables.
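Locals are referenced with the “local.” prefix, just like input variables use “var.”. As a sketch, the build description above could be attached to a VM’s notes field inside a source block:

notes = local.build_description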

Packer init

As mentioned earlier, the “packer init” command is the first command to be executed when working with templates. It installs the binaries for all required plugins. This command is completely safe to run, since its only job is to install those binaries.

packer {
  required_version = ">= 1.9.1"
  required_plugins {
    vsphere = {
      version = ">= 1.2.0"
      source  = "github.com/hashicorp/vsphere"
    }
  }
}

Reusing the code from earlier: when “packer init” is executed, the “vsphere” plugin will be downloaded and installed.

Packer build

“packer build” does what it says: it builds the VM (HashiCorp calls it an artifact).
There is a bunch of parameters that can be used, e.g. “-var-file” and “-force”. “-var-file” is self-explanatory, but “-force” is nice to know about: if a VM with the same name as specified in the template already exists, “-force” deletes that VM before creating a new one.

But what if I have a VM that shouldn’t be deleted, and it has the same name as in the template??
Then change the name in the template. Control your VM names.
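As a sketch, a rebuild that replaces an existing VM with the same name could look like this (the variable file name is just an example):

packer build -force -var-file="filename.pkrvars.hcl" .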

For now, this is the HCL Template, I’m using:

//  BLOCK: packer
//  The Packer configuration.

packer {
  required_version = ">= 1.9.1"
  required_plugins {
    vsphere = {
      version = ">= 1.2.0"
      source  = "github.com/hashicorp/vsphere"
    }
    ansible = {
      version = "~> 1"
      source  = "github.com/hashicorp/ansible"
    }
  }
}

//  BLOCK: locals
//  Defines the local variables.

locals {
  build_by                   = "Built by: DanofficeIT Packer methodology ${packer.version}"
  build_date                 = formatdate("YYYY-MM-DD hh:mm ZZZ", timestamp())
  build_description          = "Built on: ${local.build_date}\n${local.build_by}"
  iso_paths                  = ["[${var.common_iso_datastore}] ${var.iso_path}/${var.iso_file}", "[] /usr/lib/vmware/isoimages/${var.vm_guest_os_family}.iso"]
  bucket_name                = replace("${var.vm_guest_os_family}-${var.vm_guest_os_name}-${var.vm_guest_os_version}", ".", "")
  bucket_description         = "${var.vm_guest_os_family} ${var.vm_guest_os_name} ${var.vm_guest_os_version}"

}

//  BLOCK: source
//  Defines the builder configuration blocks.

source "vsphere-iso" "windows-server-standard-dexp" {

  // vCenter Server Endpoint Settings and Credentials
  vcenter_server       = var.vsphere_endpoint
  username             = var.vsphere_username
  password             = var.vsphere_password
  insecure_connection  = var.vsphere_insecure_connection

  // vSphere Settings
  datacenter           = var.vsphere_datacenter
  cluster              = var.vsphere_cluster
  datastore            = var.vsphere_datastore
  folder               = var.vsphere_folder

  // Virtual Machine Settings
  vm_name              = var.vm_name
  guest_os_type        = var.vm_guest_os_type
  firmware             = var.vm_firmware
  CPUs                 = var.vm_cpu_count
  cpu_cores            = var.vm_cpu_cores
  CPU_hot_plug         = var.vm_cpu_hot_add
  RAM                  = var.vm_mem_size
  RAM_hot_plug         = var.vm_mem_hot_add
  cdrom_type           = var.vm_cdrom_type
  disk_controller_type = var.vm_disk_controller_type
  storage {
    disk_size             = var.vm_disk_size
    disk_controller_index = 0
    disk_thin_provisioned = var.vm_disk_thin_provisioned
  }
  network_adapters {
    network      = var.vsphere_network
    network_card = var.vm_network_card
  }
  vm_version           = var.common_vm_version
  remove_cdrom         = var.common_remove_cdrom
  tools_upgrade_policy = var.common_tools_upgrade_policy
  notes                = local.build_description

  // Removable Media Settings
  iso_paths    = local.iso_paths

  // Floppy configuration
  floppy_files         = [
        "data/*",
        "scripts/*",
  ]

  floppy_content = {
    "autounattend.xml" = templatefile("${abspath(path.root)}/data/autounattend.pkrtpl.hcl", {
      build_username       = var.build_username
      build_password       = var.build_password
      vm_inst_os_language  = var.vm_inst_os_language
      vm_inst_os_keyboard  = var.vm_inst_os_keyboard
      vm_inst_os_image     = var.vm_inst_os_image_standard_desktop
      vm_inst_os_kms_key   = var.vm_inst_os_kms_key_standard
      vm_guest_os_language = var.vm_guest_os_language
      vm_guest_os_keyboard = var.vm_guest_os_keyboard
      vm_guest_os_timezone = var.vm_guest_os_timezone
    })
  }

  // Boot and Provisioning Settings
  http_port_min    = var.common_http_port_min
  http_port_max    = var.common_http_port_max
  boot_order       = var.vm_boot_order
  boot_wait        = var.vm_boot_wait
  boot_command     = var.vm_boot_command
  ip_wait_timeout  = var.common_ip_wait_timeout
  shutdown_command = var.vm_shutdown_command
  shutdown_timeout = var.common_shutdown_timeout

  // Communicator Settings and Credentials
  communicator   = "winrm"
  winrm_username = var.build_username
  winrm_password = var.build_password
  winrm_port     = var.communicator_port
  winrm_timeout  = var.communicator_timeout
}

//  BLOCK: build
//  Defines the builders to run, provisioners, and post-processors.

build {
  sources = ["source.vsphere-iso.windows-server-standard-dexp"]

  provisioner "powershell" {
    environment_vars = [
      "BUILD_USERNAME=${var.build_username}"
    ]
    elevated_user     = var.build_username
    elevated_password = var.build_password
    scripts           = formatlist("${path.cwd}/%s", var.scripts)
  }

  provisioner "powershell" {
    elevated_user     = var.build_username
    elevated_password = var.build_password
    inline            = var.inline
  }
}
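
Assuming the template above is saved in the current directory together with a matching variable definitions file (the file name below is just an example), the whole build boils down to:

packer init .
packer build -var-file="windows-server.pkrvars.hcl" .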