Author: admin (Page 1 of 4)

RHEL 7 and Chronyd on vSphere

We have had an issue recently relating to chronyd which primarily affects our Red Hat 7 servers and not the Red Hat 6 boxes; it seems Red Hat 6 is a little more flexible with regard to time sources, but that's another story.

Typically we get our time sources from our FortiGate (stratum 2), with stratum 1 being the FortiGuard NTP source it gets its time from, and stratum 0 being the atomic clock.

We then cascade that down to the PDC emulator role on the DC, which is the stratum 3 source, and this rolls down to the other DCs (stratum 4).

Anyway, when configuring time on Red Hat it is always best to go for the FortiGates rather than the Windows time source.

Add-PSSnapin VMware.VimAutomation.Core
Connect-VIServer -Server vcenter.local -User username -Password Password
$ServerList = Get-Content C:\serverlist.txt

Foreach ($vm in $ServerList)
{
New-AdvancedSetting -Entity $vm -Name tools.syncTime -Value '0' -Confirm:$false -Force:$true
New-AdvancedSetting -Entity $vm -Name time.synchronize.continue -Value '0' -Confirm:$false -Force:$true
New-AdvancedSetting -Entity $vm -Name time.synchronize.restore -Value '0' -Confirm:$false -Force:$true
New-AdvancedSetting -Entity $vm -Name time.synchronize.resume.disk -Value '0' -Confirm:$false -Force:$true
New-AdvancedSetting -Entity $vm -Name time.synchronize.shrink -Value '0' -Confirm:$false -Force:$true
New-AdvancedSetting -Entity $vm -Name time.synchronize.tools.startup -Value '0' -Confirm:$false -Force:$true
New-AdvancedSetting -Entity $vm -Name time.synchronize.tools.enable -Value '0' -Confirm:$false -Force:$true
New-AdvancedSetting -Entity $vm -Name time.synchronize.resume.host -Value '0' -Confirm:$false -Force:$true
}
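To sanity-check the result afterwards, a quick sketch using the standard Get-AdvancedSetting cmdlet (assuming the same vCenter session and $ServerList as in the script above) might look like:

```powershell
# Sketch: verify the time-sync advanced settings were applied
# (assumes the vCenter connection and $ServerList from the script above)
Foreach ($vm in $ServerList)
{
Get-AdvancedSetting -Entity $vm -Name 'time.synchronize.*' | Select-Object Entity, Name, Value
Get-AdvancedSetting -Entity $vm -Name 'tools.syncTime' | Select-Object Entity, Name, Value
}
```

Every setting listed should come back with a Value of 0.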

Always remember to make the VMX changes above on all of the VMs.

And of course set chronyd with the following directive in /etc/chrony.conf (note that chrony directives are space-separated, not key = value):

maxdistance 16
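For reference, the relevant part of /etc/chrony.conf would then look something like this (the server address below is a placeholder; use your own FortiGate's IP):

```
# /etc/chrony.conf (excerpt) - placeholder address, substitute your FortiGate
server 192.0.2.1 iburst
# Accept sources with a root distance of up to 16 seconds
maxdistance 16
```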

vSphere 7.0 Update 1 Released

Well, it's out now. After the comments in September saying it was due, we now have Update 1 released to all.

We have a number of updates, listed below from the release notes. The most interesting of all is vSphere with Tanzu, which is something I'm really interested to have a play with.

What’s New

  • ESXi 7.0 Update 1 supports vSphere Quick Boot on the following servers:
    • HPE ProLiant BL460c Gen9
    • HPE ProLiant DL325 Gen10 Plus
    • HPE ProLiant DL360 Gen9
    • HPE ProLiant DL385 Gen10 Plus
    • HPE ProLiant XL225n Gen10 Plus
    • HPE Synergy 480 Gen9
  • Enhanced vSphere Lifecycle Manager hardware compatibility pre-checks for vSAN environments: ESXi 7.0 Update 1 adds vSphere Lifecycle Manager hardware compatibility pre-checks. The pre-checks automatically trigger after certain change events such as modification of the cluster desired image or addition of a new ESXi host in vSAN environments. Also, the hardware compatibility framework automatically polls the Hardware Compatibility List database at predefined intervals for changes that trigger pre-checks as necessary.
  • Increased number of vSphere Lifecycle Manager concurrent operations on clusters: With ESXi 7.0 Update 1, if you initiate remediation at a data center level, the number of clusters on which you can run remediation in parallel, increases from 15 to 64 clusters.
  • vSphere Lifecycle Manager support for coordinated updates between availability zones: With ESXi 7.0 Update 1, to prevent overlapping operations, vSphere Lifecycle Manager updates fault domains in vSAN clusters in a sequence. ESXi hosts within each fault domain are still updated in a rolling fashion. For vSAN stretched clusters, the first fault domain is always the preferred site.
  • Extended list of supported Red Hat Enterprise Linux and Ubuntu versions for the VMware vSphere Update Manager Download Service (UMDS): ESXi 7.0 Update 1 adds new Red Hat Enterprise Linux and Ubuntu versions that UMDS supports. For the complete list of supported versions, see Supported Linux-Based Operating Systems for Installing UMDS.
  • Improved control of VMware Tools time synchronization: With ESXi 7.0 Update 1, you can select a VMware Tools time synchronization mode from the vSphere Client instead of using the command prompt. When you navigate to VM Options > VMware Tools > Synchronize Time with Host, you can select Synchronize at startup and resume (recommended), Synchronize time periodically, or, if no option is selected, you can prevent synchronization.
  • Increased Support for Multi-Processor Fault Tolerance (SMP-FT) maximums: With ESXi 7.0 Update 1, you can configure more SMP-FT VMs, and more total SMP-FT vCPUs in an ESXi host, or a cluster, depending on your workloads and capacity planning. 
  • Virtual hardware version 18: ESXi 7.0 Update 1 introduces virtual hardware version 18 to enable support for virtual machines with higher resource maximums, and:
    • Secure Encrypted Virtualization – Encrypted State (SEV-ES)
    • Virtual remote direct memory access (vRDMA) native endpoints
    • EVC Graphics Mode (vSGA).
  • Increased resource maximums for virtual machines and performance enhancements:
    • With ESXi 7.0 Update 1, you can create virtual machines with three times more virtual CPUs and four times more memory to enable applications with larger memory and CPU footprint to scale in an almost linear fashion, comparable with bare metal. Virtual machine resource maximums are up to 768 vCPUs from 256 vCPUs, and to 24 TB of virtual RAM from 6 TB. Still, not over-committing memory remains a best practice. Only virtual machines with hardware version 18 and operating systems supporting such large configurations can be set up with these resource maximums.
    • Performance enhancements in ESXi that support the larger scale of virtual machines include widening of the physical address, address space optimizations, better NUMA awareness for guest virtual machines, and more scalable synchronization techniques. vSphere vMotion is also optimized to work with the larger virtual machine configurations.
    • ESXi hosts with AMD processors can support virtual machines with twice as many vCPUs (256) and up to 8 TB of RAM.
    • Persistent memory (PMEM) support is up twofold to 12 TB from 6 TB for both Memory Mode and App Direct Mode.

https://blogs.vmware.com/vsphere/2020/10/announcing-general-availability-vsphere-7-update-1.html

Windows 2016 and Hotplug Devices on vSphere 6.7

One of the issues we have found is that, due to changes made in both vSphere and Windows, volumes appear as hot-pluggable devices in Computer Management, and this can also affect how disks are brought online by Windows.

To resolve this, add the following setting to the VMX file:

devices.hotplug = "false"

Also, within Windows, make sure your diskpart SAN policy is set to OnlineAll.

diskpart
san
(this will then show your current policy)
san policy=OnlineAll
(and you're done)
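On Windows Server 2012 and later the same policy can also be checked and set from PowerShell using the Storage module cmdlets, along these lines:

```powershell
# Show the current SAN policy
Get-StorageSetting | Select-Object NewDiskPolicy
# Set it so all disks are brought online automatically
Set-StorageSetting -NewDiskPolicy OnlineAll
```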

vSphere 7.0 Released

VMware vSphere 7.0 has been announced and released by VMware. This is a major release that VMware will roll out in Q1, and vSphere 7.0 should be adopted quickly once all the backup and DR vendors update their software.

One of the main things is that there is no longer a Windows vCenter option, so it is VCSA only from now on.

Some of the features usable straight out of the door are:

  • Improved Distributed Resource Scheduler (DRS)
  • Assignable Hardware Framework
  • Advanced Dynamic DirectPath I/O
  • vSphere Lifecycle Manager
  • Greatly Improved vMotion
  • Advanced Security – implement multifactor authentication (MFA)
  • Precision clock for PTP support
  • Even more advanced Content Library
  • Essential Services for Modern Hybrid cloud

Here are the release notes from VMware

https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html

Disable Solarwinds Alerting With PowerShell

A customer is currently using SolarWinds to monitor their virtual infrastructure. When they do their patching they need to log in to the SolarWinds console and manually step through each of the virtual machines/objects they are going to patch, putting them into maintenance mode so the on-call engineer doesn't get flooded with alerts.

To help matters and save a bit of time, I used the piece of PowerShell scripting below to take away that manual task. It only requires a text file with a list of the servers to be modified (this assumes you have SwisPowerShell installed):

Install-Module -Name SwisPowerShell

swmaintme.ps1

# Check that the PowerShell module we use is actually loaded, and if not load it up
Import-Module SwisPowerShell
# Hours passed when running the script, e.g. (./swmaintme.ps1 12) will set maintenance for 12 hours
$hours=$args[0]
# Where the server file is located (the file is a text file with just server names, not FQDNs)
$serverlist = Get-Content -Path "'path to input text file'\unmanageme.txt"
# What the SolarWinds server is (this can only be run from there, as port 17777 is not open remotely :( )
$strsolarWindServer="Solarwinds Server name Here"
# Connect to the server listed above using a trusted connection
$swis = Connect-Swis -Hostname $strsolarWindServer -Trusted
# For each server name in the text file above, do the following >>>>>
foreach($server in $serverlist){

    $strQuery = "SELECT uri FROM Orion.Nodes WHERE SysName LIKE '" + "$server" + "%'"
    $uris = Get-SwisData $swis $strQuery
#   Important line where we actually set the server to unmanaged (status 9) and set its maintenance window from when the script was run until the number of hours given at the start
    $uris | ForEach-Object { Set-SwisObject $swis $_ @{Status=9;Unmanaged=$true;UnmanageFrom=[DateTime]::UtcNow;UnmanageUntil=[DateTime]::UtcNow.AddHours($hours)}}
}

Once your maintenance is finished, the following script will take the same input file and put the servers back into monitoring mode, unless of course you wish to wait until the maintenance period you specified in the first script ends.

swunmaintme.ps1

Import-Module SwisPowerShell
$serverlist = Get-Content -Path "'path to input text file'\unmanageme.txt"
$strsolarWindServer="Solarwinds Server name Here"
$swis = Connect-Swis -Hostname $strsolarWindServer -Trusted

foreach($server in $serverlist){
    $strQuery = "SELECT uri FROM Orion.Nodes WHERE SysName LIKE '" + "$server" + "%'"
    $uris = Get-SwisData $swis $strQuery
    $uris | ForEach-Object { Set-SwisObject $swis $_ @{Status=1;Unmanaged=$false}}
}

New Release: PowerCLI 11.4.0

A new version of PowerCLI has been released: 11.4.0.

Here's a brief breakdown of the updates included. PowerCLI 11.4.0 comes with the following:

  • Added support for Horizon View 7.9
  • Added new cmdlets to the Storage module
  • Updated Storage module cmdlets
  • Updated HCX module cmdlets

Don't forget, it's easy to update your PowerCLI version (see below):

Update-Module VMware.PowerCLI

And finally, here's a link to the VMware article with all the info: https://blogs.vmware.com/PowerCLI/2019/08/new-release-powercli-11-4-0.html?src=so_5a314d05e49f5&cid=70134000001SkJn

Horizon Admin Login Issue UAG Deployment

I have been going through the process of replacing the security servers in my homelab over the weekend with Unified Access Gateways (UAG). The process has been fine and, to be honest, very straightforward.

There is no point in me going through the install process, as Carl Stalhood has an excellent walkthrough on his blog https://www.carlstalhood.com/vmware-unified-access-gateway/

One issue that I did have after the installation and configuration of the UAG was access to my Horizon admin pages (/admin and /newadmin). As I don't access them locally from the connection server, what I had forgotten was to do the following:

For each connection server create a text file named locked.properties in “Location Where you have installed Horizon View”\Server\sslgateway\conf

Open or create a locked.properties file using a plain text editor.
Add this line:

checkOrigin=false

Note: ensure the locked.properties file does not end up with a .txt extension after saving.

Save and close the file.
Restart the VMware Horizon View Connection Server service.

Once this was done I could log in to my Horizon admin page. There is a VMware KB article at https://kb.vmware.com/s/article/2144768

Installing Ansible/AWX Ready for VMware Automation on Ubuntu 18.04 LTS

I have just started to look into using Ansible for automating some of the tasks we currently run on customer infrastructure, from VM template deployment to the install/configuration of bespoke applications within the VM.

For my test lab I thought I would install Ansible and the relevant VMware modules, such as the Python SDK for VMware and pyVmomi. I also installed AWX to provide a GUI for task scheduling and to better understand how Ansible Tower works.

To get started with the installation, we deploy a simple Ubuntu 18.04 LTS server using the default options (the VM spec is 2 vCPU, 16 GB RAM and a 120 GB VMDK). I then ran through the process below to get Ansible/AWX and, of course, the VMware SDK for Python installed. We have a number of requirements as we are running AWX in Docker (it should be noted that I prefer to put this in /opt, but that's a personal thing):

apt-add-repository --yes --update ppa:ansible/ansible
apt install ansible -y
apt install docker.io
apt install python-pip -y
pip install docker
pip install docker-compose
apt install nodejs npm -y
npm install npm --global
cd /opt
git clone https://github.com/ansible/awx.git

At this stage we could install AWX; however, if we were to use the default options the Postgres data would be placed in /tmp, which means it would be reset on every reboot and we would lose our current configuration. As shown below, I prefer to edit the inventory and point it at /opt/pgdocker (again this is personal choice and you can put it anywhere you want).

cd /opt
mkdir pgdocker
cd awx/installer
# edit the inventory and change the postgres data files location to /opt/pgdocker
ansible-playbook -i inventory install.yml
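The inventory change itself is a one-line edit; in the AWX installer inventory the relevant variable is postgres_data_dir, so the edited line looks something like:

```
postgres_data_dir=/opt/pgdocker
```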

You should now be able to visit http://(your servername) and log in with a username of admin and a password of password.

Next we want to install the VMware SDK for python and pyvmomi for ansible.

cd /opt
git clone https://github.com/vmware/vsphere-automation-sdk-python.git
cd vsphere-automation-sdk-python
pip install --upgrade --force-reinstall -r requirements.txt --extra-index-url file:///opt/vsphere-automation-sdk-python/lib

pip install pyvmomi

Congrats, you now have Ansible, AWX and Python all ready for working with vSphere.

I will be doing an article covering some of the basics of using Ansible and VMware together in another post.
