Terraform automation: Nested ESXi Hosts (Management, Edge and Compute)


To deploy multiple nested ESXi hosts we first need to download and pre-deploy the nested ESXi appliance, using the OVA that can be downloaded from William Lam's website.

I downloaded this image: "Nested_ESXi6.7u3_Appliance_Template_v1.ova"

Note

This .OVA needs to be deployed first; we then clone the pre-deployed image and configure the IP address and DNS settings with Terraform. So we are NOT deploying the template/.OVA with Terraform: we deploy it once by hand and then clone it into the other instances.

I have decided to deploy the following nested ESXi hosts per lab:

  • Management ESXi Hosts for the Management Cluster
    • M-ESXi-01
    • M-ESXi-02
    • M-ESXi-03
  • Edge ESXi Hosts for the Edge Cluster
    • E-ESXi-01
    • E-ESXi-02
  • Compute ESXi Hosts for the Compute Cluster
    • C-ESXi-01
    • C-ESXi-02
    • C-ESXi-03

The full resource calculation can be found in this article: The nested labs project overview/introduction.

When the image is deployed we need to run a PowerShell script to make the vApp properties user-configurable. More information can be found in the GitHub issue linked in the script header below.

The PowerCLI script to enable user-configurable vApp properties is:

Enable-VmVappProperties.ps1

<#
Sets all vApp properties for a VM to be user-configurable, as Terraform v0.11.5
and the Terraform vSphere provider v1.3 and earlier do not support deploying
VMs with one or more vApp properties where the UserConfigurable field is set to
'False'.

William Lam's nested ESXi VM templates have a 'debug' vApp property that must
be enabled before Terraform can successfully clone VMs from them. This script
will do so.

Usage:
PS C:\Scripts\PowerCLI\Terraform\Nested ESXi> Connect-VIServer
PS C:\Scripts\PowerCLI\Terraform\Nested ESXi> .\Enable-VmVappProperties.ps1 -Name "Nested_ESXi6.7u3_Appliance_Template_v1"

https://github.com/terraform-providers/terraform-provider-vsphere/issues/394
#>
param (
    [Parameter(
        Mandatory = $true,
        Position = 0,
        ValueFromPipeline = $true
    )]
    [ValidateCount( 1, 100)]
    [ValidateNotNullOrEmpty()]
    [String[]] $Name
)

$vms = Get-VM -Name $Name -ErrorAction 'Stop'

foreach ( $vm in $vms ) {
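    # Build a reconfiguration spec that marks every vApp property on this VM as user-configurable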
    $virtualMachineConfigSpec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $virtualMachineConfigSpec.changeVersion = $vm.ExtensionData.Config.ChangeVersion
    $virtualMachineConfigSpec.vAppConfig = New-Object VMware.Vim.VmConfigSpec

    foreach ( $property in $vm.ExtensionData.Config.VAppConfig.Property ) {
        $vAppPropertySpec = New-Object -TypeName 'VMware.Vim.VAppPropertySpec'
        $vAppPropertySpec.Operation = 'edit'
        $vAppPropertySpec.Info = $property
        $VAppPropertySpec.Info.UserConfigurable = $true

        $virtualMachineConfigSpec.vAppConfig.Property += $VAppPropertySpec
    }

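    # Apply the spec; ReconfigVM_Task returns a Task object (visible in the output below)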
    $vm.ExtensionData.ReconfigVM_Task( $virtualMachineConfigSpec )

    # Get all the IDs and values
    $vm.ExtensionData.Config.VAppConfig.Property |
        Select-Object -Property 'ID', 'Value'
}

We first need to connect to our vCenter Server before we can execute the script:

PS C:\Users\Administrator> Connect-VIServer
WARNING: Please consider joining the VMware Customer Experience Improvement Program, so you can help us make PowerCLI a better product. You can join using the following command:

Set-PowerCLIConfiguration -Scope User -ParticipateInCEIP $true

VMware's Customer Experience Improvement Program ("CEIP") provides VMware with information that enables VMware to improve its products and services, to fix problems, and to advise you on how best to deploy and use our products.  As
part of the CEIP, VMware collects technical information about your organization’s use of VMware products and services on a regular basis in association with your organization’s VMware license key(s).  This information does not
personally identify any individual.

For more details: type "help about_ceip" to see the related help article.

To disable this warning and set your preference use the following command and restart PowerShell:
Set-PowerCLIConfiguration -Scope User -ParticipateInCEIP $true or $false.

cmdlet Connect-VIServer at command pipeline position 1
Supply values for the following parameters:
Server[0]: vcsa-01.home.local
Server[1]:

Name                           Port  User
----                           ----  ----
vcsa-01.home.local             443   VSPHERE.LOCAL\Administrator

PS C:\Users\Administrator>

First we verify that the VM can be found in the vCenter Server inventory:

PS C:\Scripts\PowerCLI\Terraform> Get-VM -Name "Nested_ESXi6.7u3_Appliance_Template_v1"

Name                 PowerState Num CPUs MemoryGB
----                 ---------- -------- --------
Nested_ESXi6.7u3_… PoweredOff 2        6.000


PS C:\Scripts\PowerCLI\Terraform>

Then we can run the script:

PS C:\Scripts\PowerCLI\Terraform\Nested ESXi> .\Enable-VmVappProperties.ps1 -Name "Nested_ESXi6.7u3_Appliance_Template_v1"

Type Value
---- -----
Task task-94569
     192.168.1.200

     False
     pool.ntp.org

     True

PS C:\Scripts\PowerCLI\Terraform\Nested ESXi>

Now that this has been done, we can run the Terraform scripts to deploy the nested ESXi hosts and clone them from the initial "Nested_ESXi6.7u3_Appliance_Template_v1" VM. I renamed this VM to "esxi-template" to keep the naming simple.

The Terraform scripts can be found below; I have created separate directories and scripts for the Management, Edge and Compute nested hosts.
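
Each host is written out as its own resource for clarity. As a side note, the three Management hosts could also be generated from a single resource using count and list variables. The sketch below is minimal and untested; the variable names mgmt_hostnames and mgmt_ipaddresses are made up for this example, and the data sources are the same ones declared in nested-esxi-management.tf:

variable "mgmt_hostnames" {
  type="list"
  default=["l1-mgt-esx-01", "l1-mgt-esx-02", "l1-mgt-esx-03"]
}

variable "mgmt_ipaddresses" {
  type="list"
  default=["192.168.12.11", "192.168.12.12", "192.168.12.13"]
}

resource "vsphere_virtual_machine" "mgmt" {
  count="${length(var.mgmt_hostnames)}"
  name="${element(var.mgmt_hostnames, count.index)}.corp.local"
  guest_id="${data.vsphere_virtual_machine.template.guest_id}"
  resource_pool_id="${data.vsphere_resource_pool.pool.id}"
  datastore_id="${data.vsphere_datastore.datastore.id}"
  num_cpus=2
  memory=20000
  wait_for_guest_net_timeout=0

  network_interface {
    network_id="${data.vsphere_network.network_mgmt.id}"
  }

  disk {
    label="sda"
    unit_number=0
    size="${data.vsphere_virtual_machine.template.disks.0.size}"
  }
  # ... sdb and sdc as in the full listings below ...

  clone {
    template_uuid="${data.vsphere_virtual_machine.template.id}"
  }

  vapp {
    properties {
      "guestinfo.hostname" = "${element(var.mgmt_hostnames, count.index)}"
      "guestinfo.ipaddress" = "${element(var.mgmt_ipaddresses, count.index)}"
      # ... the remaining guestinfo properties as in the full listings below ...
    }
  }
}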

Nested Management Hosts

❯ tree
├── nested-esxi-management.tf

nested-esxi-management.tf


provider "vsphere" {
  version="~> 1.3"
  vsphere_server="vcsa-01.home.local" #Change with your data
  allow_unverified_ssl=true
  user="administrator@vsphere.local" #Change with your admin name and pwd
  password="<my vCenter Server password>"
}

data "vsphere_datacenter" "dc" {
  name="HOME" #The target dc
}

data "vsphere_resource_pool" "pool" {
  name="Lab1"
  datacenter_id="${data.vsphere_datacenter.dc.id}"
}

data "vsphere_datastore" "datastore" {
  name="vsanDatastore" #Change with your datastore name
  datacenter_id="${data.vsphere_datacenter.dc.id}"
}

# Management interface
data "vsphere_network" "network_mgmt" {
  name="L1-ESXI-MGMT"
  datacenter_id="${data.vsphere_datacenter.dc.id}"
}

####################################################################

data "vsphere_virtual_machine" "template" {
  name="esxi-template"
  datacenter_id="${data.vsphere_datacenter.dc.id}"
}

####################################################################
#L1-MGT-ESX-01
####################################################################

resource "vsphere_virtual_machine" "l1-mgt-esx-01" {
  name="l1-mgt-esx-01.corp.local"
  guest_id="${data.vsphere_virtual_machine.template.guest_id}"
  resource_pool_id="${data.vsphere_resource_pool.pool.id}"
  datastore_id="${data.vsphere_datastore.datastore.id}"
  num_cpus=2
  memory=20000
  wait_for_guest_net_timeout=0

  network_interface {
    network_id="${data.vsphere_network.network_mgmt.id}"
  }
  network_interface {
    network_id="${data.vsphere_network.network_mgmt.id}"
  }

  disk {
    label="sda"
    unit_number=0
    size="${data.vsphere_virtual_machine.template.disks.0.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
  }

  disk {
    label="sdb"
    unit_number=1
    size="${data.vsphere_virtual_machine.template.disks.1.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.1.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.1.thin_provisioned}"
  }

  disk {
    label="sdc"
    unit_number=2
    size="${data.vsphere_virtual_machine.template.disks.2.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.2.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.2.thin_provisioned}"
  }

  clone {
    template_uuid="${data.vsphere_virtual_machine.template.id}"
  }

  vapp {
    properties {
      "guestinfo.hostname" = "l1-mgt-esx-01"
      "guestinfo.ipaddress" = "192.168.12.11" # Default = DHCP
      "guestinfo.netmask" = "255.255.255.0"
      "guestinfo.gateway" = "192.168.12.1"
      "guestinfo.dns" = "192.168.11.10"
      "guestinfo.domain" = "lab1.local"
      "guestinfo.ntp" = "192.168.11.10"
      "guestinfo.syslog" = ""
      "guestinfo.password" = "VMware1!"
      "guestinfo.ssh" = "True" # Case-sensitive string
      "guestinfo.createvmfs" = "False" # Case-sensitive string
      "guestinfo.debug" = "False" # Case-sensitive string
    }
  }

  lifecycle {
    ignore_changes=[
      "annotation",
      "vapp.0.properties",
    ]
  }
}

####################################################################
#L1-MGT-ESX-02
####################################################################

resource "vsphere_virtual_machine" "l1-mgt-esx-02" {
  name="l1-mgt-esx-02.corp.local"
  guest_id="${data.vsphere_virtual_machine.template.guest_id}"
  resource_pool_id="${data.vsphere_resource_pool.pool.id}"
  datastore_id="${data.vsphere_datastore.datastore.id}"
  num_cpus=2
  memory=20000
  wait_for_guest_net_timeout=0

  network_interface {
    network_id="${data.vsphere_network.network_mgmt.id}"
  }
  network_interface {
    network_id="${data.vsphere_network.network_mgmt.id}"
  }

  disk {
    label="sda"
    unit_number=0
    size="${data.vsphere_virtual_machine.template.disks.0.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
  }

  disk {
    label="sdb"
    unit_number=1
    size="${data.vsphere_virtual_machine.template.disks.1.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.1.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.1.thin_provisioned}"
  }

  disk {
    label="sdc"
    unit_number=2
    size="${data.vsphere_virtual_machine.template.disks.2.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.2.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.2.thin_provisioned}"
  }

  clone {
    template_uuid="${data.vsphere_virtual_machine.template.id}"
  }

  vapp {
    properties {
      "guestinfo.hostname" = "l1-mgt-esx-02"
      "guestinfo.ipaddress" = "192.168.12.12" # Default = DHCP
      "guestinfo.netmask" = "255.255.255.0"
      "guestinfo.gateway" = "192.168.12.1"
      "guestinfo.dns" = "192.168.11.10"
      "guestinfo.domain" = "lab1.local"
      "guestinfo.ntp" = "192.168.11.10"
      "guestinfo.syslog" = ""
      "guestinfo.password" = "VMware1!"
      "guestinfo.ssh" = "True" # Case-sensitive string
      "guestinfo.createvmfs" = "False" # Case-sensitive string
      "guestinfo.debug" = "False" # Case-sensitive string
    }
  }

  lifecycle {
    ignore_changes=[
      "annotation",
      "vapp.0.properties",
    ]
  }
}

####################################################################
#L1-MGT-ESX-03
####################################################################

resource "vsphere_virtual_machine" "l1-mgt-esx-03" {
  name="l1-mgt-esx-03.corp.local"
  guest_id="${data.vsphere_virtual_machine.template.guest_id}"
  resource_pool_id="${data.vsphere_resource_pool.pool.id}"
  datastore_id="${data.vsphere_datastore.datastore.id}"
  num_cpus=2
  memory=20000
  wait_for_guest_net_timeout=0

  network_interface {
    network_id="${data.vsphere_network.network_mgmt.id}"
  }
  network_interface {
    network_id="${data.vsphere_network.network_mgmt.id}"
  }

  disk {
    label="sda"
    unit_number=0
    size="${data.vsphere_virtual_machine.template.disks.0.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
  }

  disk {
    label="sdb"
    unit_number=1
    size="${data.vsphere_virtual_machine.template.disks.1.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.1.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.1.thin_provisioned}"
  }

  disk {
    label="sdc"
    unit_number=2
    size="${data.vsphere_virtual_machine.template.disks.2.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.2.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.2.thin_provisioned}"
  }

  clone {
    template_uuid="${data.vsphere_virtual_machine.template.id}"
  }

  vapp {
    properties {
      "guestinfo.hostname" = "l1-mgt-esx-03"
      "guestinfo.ipaddress" = "192.168.12.13" # Default = DHCP
      "guestinfo.netmask" = "255.255.255.0"
      "guestinfo.gateway" = "192.168.12.1"
      "guestinfo.dns" = "192.168.11.10"
      "guestinfo.domain" = "lab1.local"
      "guestinfo.ntp" = "192.168.11.10"
      "guestinfo.syslog" = ""
      "guestinfo.password" = "VMware1!"
      "guestinfo.ssh" = "True" # Case-sensitive string
      "guestinfo.createvmfs" = "False" # Case-sensitive string
      "guestinfo.debug" = "False" # Case-sensitive string
    }
  }

  lifecycle {
    ignore_changes=[
      "annotation",
      "vapp.0.properties",
    ]
  }
}

The OVA template initially consists of two vNICs (representing two physical vmnics). We need to add more vNICs *MANUALLY* to represent the other networks, and assign these vNICs to the correct networks:

In the article Terraform: NSX-T infra segments I explain how to create the NSX-T (infra) segments using Terraform.

  • LX-APP-MGMT-11 | 192.168.11.0/24
    • Subnet for Management applications, like vCenter Server, NSX-T Manager, etc.
  • LX-ESXI-MGMT-UT | 192.168.12.0/24
    • Subnet for all vmk0 interfaces for ESXi management traffic.
  • LX-VMOTION-13 | 192.168.13.0/24
    • Subnet for all vmk1 interfaces for ESXi vMotion traffic.
  • LX-VSAN-14 | 192.168.14.0/24
    • Subnet for all vmk2 interfaces for ESXi VSAN traffic.
  • LX-REPLICATION-15 | 192.168.15.0/24
    • Subnet for all vmk3 interfaces for ESXi Replication traffic.

A VM can have up to 10 vNICs and we are using 5 now.

Note

You can also alter the Terraform code to create the additional vNICs and assign the correct networks right away, as sketched below.
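
A minimal sketch of the extra data sources and vNICs, assuming lab 1 segment names derived from the list above (L1-VMOTION-13, L1-VSAN-14, and so on); the data source names network_vmotion and network_vsan are made up for this example:

data "vsphere_network" "network_vmotion" {
  name="L1-VMOTION-13"
  datacenter_id="${data.vsphere_datacenter.dc.id}"
}

data "vsphere_network" "network_vsan" {
  name="L1-VSAN-14"
  datacenter_id="${data.vsphere_datacenter.dc.id}"
}

# Inside each vsphere_virtual_machine resource, one extra vNIC per network:
network_interface {
  network_id="${data.vsphere_network.network_vmotion.id}"
}
network_interface {
  network_id="${data.vsphere_network.network_vsan.id}"
}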

The best way would be to use only two interfaces per ESXi host and use VLAN trunking to take care of the network segmentation. Doing this for NSX-T (nested) on NSX-T (infra) requires different skills and methods that I will explain in a separate blog article.

Nested Edge Hosts

❯ tree
├── nested-esxi-edge.tf

nested-esxi-edge.tf


provider "vsphere" {
  version="~> 1.3"
  vsphere_server="vcsa-01.home.local" #Change with your data
  allow_unverified_ssl=true
  user="administrator@vsphere.local" #Change with your admin name and pwd
  password="<my vCenter Server password>"
}

data "vsphere_datacenter" "dc" {
  name="HOME" #The target dc
}

data "vsphere_resource_pool" "pool" {
  name="Lab1"
  datacenter_id="${data.vsphere_datacenter.dc.id}"
}

data "vsphere_datastore" "datastore" {
  name="vsanDatastore" #Change with your datastore name
  datacenter_id="${data.vsphere_datacenter.dc.id}"
}

# Management interface
data "vsphere_network" "network_mgmt" {
  name="L1-ESXI-MGMT"
  datacenter_id="${data.vsphere_datacenter.dc.id}"
}

####################################################################

data "vsphere_virtual_machine" "template" {
  name="esxi-template"
  datacenter_id="${data.vsphere_datacenter.dc.id}"
}

####################################################################
#L1-EDG-ESX-01
####################################################################

resource "vsphere_virtual_machine" "l1-edg-esx-01" {
  name="l1-edg-esx-01.corp.local"
  guest_id="${data.vsphere_virtual_machine.template.guest_id}"
  resource_pool_id="${data.vsphere_resource_pool.pool.id}"
  datastore_id="${data.vsphere_datastore.datastore.id}"
  num_cpus=2
  memory=8000
  wait_for_guest_net_timeout=0

  network_interface {
    network_id="${data.vsphere_network.network_mgmt.id}"
  }
  network_interface {
    network_id="${data.vsphere_network.network_mgmt.id}"
  }

  disk {
    label="sda"
    unit_number=0
    size="${data.vsphere_virtual_machine.template.disks.0.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
  }

  disk {
    label="sdb"
    unit_number=1
    size="${data.vsphere_virtual_machine.template.disks.1.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.1.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.1.thin_provisioned}"
  }

  disk {
    label="sdc"
    unit_number=2
    size="${data.vsphere_virtual_machine.template.disks.2.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.2.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.2.thin_provisioned}"
  }

  clone {
    template_uuid="${data.vsphere_virtual_machine.template.id}"
  }

  vapp {
    properties {
      "guestinfo.hostname" = "l1-edg-esx-01"
      "guestinfo.ipaddress" = "192.168.12.14" # Default = DHCP
      "guestinfo.netmask" = "255.255.255.0"
      "guestinfo.gateway" = "192.168.12.1"
      "guestinfo.dns" = "192.168.11.10"
      "guestinfo.domain" = "lab1.local"
      "guestinfo.ntp" = "192.168.11.10"
      "guestinfo.syslog" = ""
      "guestinfo.password" = "VMware1!"
      "guestinfo.ssh" = "True" # Case-sensitive string
      "guestinfo.createvmfs" = "False" # Case-sensitive string
      "guestinfo.debug" = "False" # Case-sensitive string
    }
  }

  lifecycle {
    ignore_changes=[
      "annotation",
      "vapp.0.properties",
    ]
  }
}

####################################################################
#L1-EDG-ESX-02
####################################################################

resource "vsphere_virtual_machine" "l1-edg-esx-02" {
  name="l1-edg-esx-02.corp.local”
  guest_id=“${data.vsphere_virtual_machine.template.guest_id}”
  resource_pool_id=“${data.vsphere_resource_pool.pool.id}”
  datastore_id=“${data.vsphere_datastore.datastore.id}”
  num_cpus=2
  memory=8000
  wait_for_guest_net_timeout=0

  network_interface {
    network_id=“${data.vsphere_network.network_mgmt.id}”
  }
  network_interface {
    network_id=“${data.vsphere_network.network_mgmt.id}”
  }

  disk {
    label=“sda”
    unit_number=0
    size=“${data.vsphere_virtual_machine.template.disks.0.size}”
    eagerly_scrub=“${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}”
    thin_provisioned=“${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}”
  }

  disk {
    label=“sdb”
    unit_number=1
    size=“${data.vsphere_virtual_machine.template.disks.1.size}”
    eagerly_scrub=“${data.vsphere_virtual_machine.template.disks.1.eagerly_scrub}”
    thin_provisioned=“${data.vsphere_virtual_machine.template.disks.1.thin_provisioned}”
  }

  disk {
    label=“sdc”
    unit_number=2
    size=“${data.vsphere_virtual_machine.template.disks.2.size}”
    eagerly_scrub=“${data.vsphere_virtual_machine.template.disks.2.eagerly_scrub}”
    thin_provisioned=“${data.vsphere_virtual_machine.template.disks.2.thin_provisioned}”
  }

  clone {
     template_uuid=“${data.vsphere_virtual_machine.template.id}”
  }

  vapp {
    properties {
      “guestinfo.hostname” = “l1-edg-esx-02”
      “guestinfo.ipaddress” = “192.168.12.15” # Default = DHCP
      "guestinfo.netmask" = "255.255.255.0"
      “guestinfo.gateway” = “192.168.12.1”
      "guestinfo.dns" = "192.168.11.10"
      "guestinfo.domain" = "lab1.local"
      "guestinfo.ntp" = "192.168.11.10"
      "guestinfo.syslog" = ""
      "guestinfo.password" = "VMware1!" 
      "guestinfo.ssh" = "True" # Case-sensitive string
      "guestinfo.createvmfs" = "False" # Case-sensitive string
      "guestinfo.debug" = "False" # Case-sensitive string
    }
  }

  lifecycle {
    ignore_changes= [
      "annotation",
      "vapp.0.properties",
    ]
  }
}

The OVA template initially consists of two vNICs (representing two physical vmnics). We need to add more vNICs *MANUALLY* to represent the other networks, and assign these vNICs to the correct networks:

In the article Terraform: NSX-T infra segments I explain how to create the NSX-T (infra) segments using Terraform.

  • LX-APP-MGMT-11 | 192.168.11.0/24
    • Subnet for Management applications, like vCenter Server, NSX-T Manager, etc.
  • LX-ESXI-MGMT-UT | 192.168.12.0/24
    • Subnet for all vmk0 interfaces for ESXi management traffic.
  • LX-VMOTION-13 | 192.168.13.0/24
    • Subnet for all vmk1 interfaces for ESXi vMotion traffic.
  • LX-VSAN-14 | 192.168.14.0/24
    • Subnet for all vmk2 interfaces for ESXi VSAN traffic.
  • LX-REPLICATION-15 | 192.168.15.0/24
    • Subnet for all vmk3 interfaces for ESXi Replication traffic.
  • LX-OVERLAY-EDGE-17 | 192.168.17.0/24
    • Subnet for all Edge (VM) Transport Node TEP interfaces for GENEVE traffic.
  • LX-BGP-UPLINK-01-18 | 192.168.18.0/24
    • Subnet for BGP uplink 01 for N/S Connectivity.
  • LX-BGP-UPLINK-02-19 | 192.168.19.0/24
    • Subnet for BGP uplink 02 for N/S Connectivity.

A VM can have up to 10 vNICs and we are using 8 now.

Note

You can also alter the Terraform code to deploy more vNICs and assign the correct networks right away.

The best way would be to use only two interfaces per ESXi host and use VLAN trunking to take care of the network segmentation. Doing this for NSX-T (nested) on NSX-T (infra) requires different skills and methods that I will explain in a separate blog article.

Nested Compute Hosts

❯ tree
├── nested-esxi-compute.tf

nested-esxi-compute.tf


provider "vsphere" {
  version="~> 1.3"
  vsphere_server="vcsa-01.home.local" #Change with your data
  allow_unverified_ssl=true
  user="administrator@vsphere.local" #Change with your admin name and pwd
  password="<my vCenter Server password>"
}

data "vsphere_datacenter" "dc" {
  name="HOME" #The target dc
}

data "vsphere_resource_pool" "pool" {
  name="Lab1"
  datacenter_id="${data.vsphere_datacenter.dc.id}"
}

data "vsphere_datastore" "datastore" {
  name="vsanDatastore" #Change with your datastore name
  datacenter_id="${data.vsphere_datacenter.dc.id}"
}

# Management interface
data "vsphere_network" "network_mgmt" {
  name="L1-ESXI-MGMT"
  datacenter_id="${data.vsphere_datacenter.dc.id}"
}

####################################################################

data "vsphere_virtual_machine" "template" {
  name="esxi-template"
  datacenter_id="${data.vsphere_datacenter.dc.id}"
}

####################################################################
#L1-CMP-ESX-01
####################################################################

resource "vsphere_virtual_machine" "l1-cmp-esx-01" {
  name="l1-cmp-esx-01.corp.local"
  guest_id="${data.vsphere_virtual_machine.template.guest_id}"
  resource_pool_id="${data.vsphere_resource_pool.pool.id}"
  datastore_id="${data.vsphere_datastore.datastore.id}"
  num_cpus=2
  memory=8000
  wait_for_guest_net_timeout=0

  network_interface {
    network_id="${data.vsphere_network.network_mgmt.id}"
  }
  network_interface {
    network_id="${data.vsphere_network.network_mgmt.id}"
  }

  disk {
    label="sda"
    unit_number=0
    size="${data.vsphere_virtual_machine.template.disks.0.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
  }

  disk {
    label="sdb"
    unit_number=1
    size="${data.vsphere_virtual_machine.template.disks.1.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.1.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.1.thin_provisioned}"
  }

  disk {
    label="sdc"
    unit_number=2
    size="${data.vsphere_virtual_machine.template.disks.2.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.2.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.2.thin_provisioned}"
  }

  clone {
    template_uuid="${data.vsphere_virtual_machine.template.id}"
  }

  vapp {
    properties {
      "guestinfo.hostname" = "l1-cmp-esx-01"
      "guestinfo.ipaddress" = "192.168.12.16" # Default = DHCP
      "guestinfo.netmask" = "255.255.255.0"
      "guestinfo.gateway" = "192.168.12.1"
      "guestinfo.dns" = "192.168.11.10"
      "guestinfo.domain" = "lab1.local"
      "guestinfo.ntp" = "192.168.11.10"
      "guestinfo.syslog" = ""
      "guestinfo.password" = "VMware1!"
      "guestinfo.ssh" = "True" # Case-sensitive string
      "guestinfo.createvmfs" = "False" # Case-sensitive string
      "guestinfo.debug" = "False" # Case-sensitive string
    }
  }

  lifecycle {
    ignore_changes=[
      "annotation",
      "vapp.0.properties",
    ]
  }
}

####################################################################
#L1-CMP-ESX-02
####################################################################

resource "vsphere_virtual_machine" "l1-cmp-esx-02" {
  name="l1-cmp-esx-02.corp.local"
  guest_id="${data.vsphere_virtual_machine.template.guest_id}"
  resource_pool_id="${data.vsphere_resource_pool.pool.id}"
  datastore_id="${data.vsphere_datastore.datastore.id}"
  num_cpus=2
  memory=8000
  wait_for_guest_net_timeout=0

  network_interface {
    network_id="${data.vsphere_network.network_mgmt.id}"
  }
  network_interface {
    network_id="${data.vsphere_network.network_mgmt.id}"
  }

  disk {
    label="sda"
    unit_number=0
    size="${data.vsphere_virtual_machine.template.disks.0.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
  }

  disk {
    label="sdb"
    unit_number=1
    size="${data.vsphere_virtual_machine.template.disks.1.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.1.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.1.thin_provisioned}"
  }

  disk {
    label="sdc"
    unit_number=2
    size="${data.vsphere_virtual_machine.template.disks.2.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.2.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.2.thin_provisioned}"
  }

  clone {
    template_uuid="${data.vsphere_virtual_machine.template.id}"
  }

  vapp {
    properties {
      "guestinfo.hostname" = "l1-cmp-esx-02"
      "guestinfo.ipaddress" = "192.168.12.17" # Default = DHCP
      "guestinfo.netmask" = "255.255.255.0"
      "guestinfo.gateway" = "192.168.12.1"
      "guestinfo.dns" = "192.168.11.10"
      "guestinfo.domain" = "lab1.local"
      "guestinfo.ntp" = "192.168.11.10"
      "guestinfo.syslog" = ""
      "guestinfo.password" = "VMware1!"
      "guestinfo.ssh" = "True" # Case-sensitive string
      "guestinfo.createvmfs" = "False" # Case-sensitive string
      "guestinfo.debug" = "False" # Case-sensitive string
    }
  }

  lifecycle {
    ignore_changes=[
      "annotation",
      "vapp.0.properties",
    ]
  }
}

####################################################################
#L1-CMP-ESX-03
####################################################################

resource "vsphere_virtual_machine" "l1-cmp-esx-03" {
  name="l1-cmp-esx-03.corp.local"
  guest_id="${data.vsphere_virtual_machine.template.guest_id}"
  resource_pool_id="${data.vsphere_resource_pool.pool.id}"
  datastore_id="${data.vsphere_datastore.datastore.id}"
  num_cpus=2
  memory=8000
  wait_for_guest_net_timeout=0

  network_interface {
    network_id="${data.vsphere_network.network_mgmt.id}"
  }
  network_interface {
    network_id="${data.vsphere_network.network_mgmt.id}"
  }

  disk {
    label="sda"
    unit_number=0
    size="${data.vsphere_virtual_machine.template.disks.0.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
  }

  disk {
    label="sdb"
    unit_number=1
    size="${data.vsphere_virtual_machine.template.disks.1.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.1.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.1.thin_provisioned}"
  }

  disk {
    label="sdc"
    unit_number=2
    size="${data.vsphere_virtual_machine.template.disks.2.size}"
    eagerly_scrub="${data.vsphere_virtual_machine.template.disks.2.eagerly_scrub}"
    thin_provisioned="${data.vsphere_virtual_machine.template.disks.2.thin_provisioned}"
  }

  clone {
    template_uuid="${data.vsphere_virtual_machine.template.id}"
  }

  vapp {
    properties {
      "guestinfo.hostname" = "l1-cmp-esx-03"
      "guestinfo.ipaddress" = "192.168.12.18" # Default = DHCP
      "guestinfo.netmask" = "255.255.255.0"
      "guestinfo.gateway" = "192.168.12.1"
      "guestinfo.dns" = "192.168.11.10"
      "guestinfo.domain" = "lab1.local"
      "guestinfo.ntp" = "192.168.11.10"
      "guestinfo.syslog" = ""
      "guestinfo.password" = "VMware1!"
      "guestinfo.ssh" = "True" # Case-sensitive string
      "guestinfo.createvmfs" = "False" # Case-sensitive string
      "guestinfo.debug" = "False" # Case-sensitive string
    }
  }

  lifecycle {
    ignore_changes=[
      "annotation",
      "vapp.0.properties",
    ]
  }
}

The OVA template initially consists of two vNICs (representing two physical vmnics). We need to add more vNICs *MANUALLY* to represent the other networks, and assign these vNICs to the correct networks:

In the article Terraform: NSX-T infra segments I explain how to create the NSX-T (infra) segments using Terraform.

  • LX-ESXI-MGMT-UT | 192.168.12.0/24
    • Subnet for all vmk0 interfaces for ESXi management traffic.
  • LX-VMOTION-13 | 192.168.13.0/24
    • Subnet for all vmk1 interfaces for ESXi vMotion traffic.
  • LX-VSAN-14 | 192.168.14.0/24
    • Subnet for all vmk2 interfaces for ESXi VSAN traffic.
  • LX-REPLICATION-15 | 192.168.15.0/24
    • Subnet for all vmk3 interfaces for ESXi Replication traffic.
  • LX-OVERLAY-HOST-16 | 192.168.16.0/24
    • Subnet for all Host (ESXi) Transport Node TEP interfaces for GENEVE traffic.

A VM can have up to 10 vNICs and we are using 5 now.

Note

You can also alter the Terraform code to deploy more vNICs and assign the correct networks right away.

The best way would be to only use two interfaces per ESXi host and use VLAN trunking to take care of the network segmentation. Doing this for NSX-T (nested) on NSX-T (infra) would require different skills and methods that I will explain in a different blog article.

Executing your Terraform code

Now we are ready to execute the Terraform code on a per-directory basis.

Validate your code:

ihoogendoor-a01:Test iwanhoogendoorn$ tfenv use 0.11.14
[INFO] Switching to v0.11.14
[INFO] Switching completed
ihoogendoor-a01:Test iwanhoogendoorn$ terraform validate

Plan your code:

ihoogendoor-a01:Test iwanhoogendoorn$ terraform plan

Execute your code to deploy the nested ESXi hosts:

ihoogendoor-a01:Test iwanhoogendoorn$ terraform apply

When the nested ESXi hosts need to be removed again you can revert the deployment:

ihoogendoor-a01:Test iwanhoogendoorn$ terraform destroy
