Working from home is the future, yet VMware just extended vSphere 6.5 support for a year because remote upgrades are too hard

VMware has extended support for vSphere 6.5 and vCenter 6.5 by a year, saying it needs to do so because customers are struggling to upgrade while their teams work from home.

News of the extension emerged in a Friday post by Paul Turner, veep for product management at VMware’s Cloud Platform Business Unit.

“This month marks a full year that many businesses transitioned to a work from home model with the onset of the global pandemic,” Turner wrote. “That’s created challenges for some of our customers with regards to IT operations and strategic planning. It has also led to uncertainty as to when your business operations will return to normal.”

“We can help address some of your challenges by offering you both flexibility and continued support as we all work together to get to the other side of this pandemic.”

The change means that vSphere 6.5 will reach end of general support (EoGS) on November 15th, 2022. A year later VMware will also stop offering technical guidance.

The new end of support dates now mirror those for vSphere 6.7.


But even with an extra year, vCenter 6.5 users have work to do, because the client that drives it requires Adobe Flash, and Flash was put to rest in January 2021. If you can keep old Flash-enabled browsers running in your environment, cross your fingers and feel free to stick with vCenter 6.5. Otherwise, VMware recommends an upgrade to vCenter 6.7 and its shiny new HTML5 client.

Users of VMware’s virtual storage array, vSAN, have also been given some extra time. Versions 6.5 and 6.6 were slated to go EoGS in November 2021. Support will now end in October 2022. End of technical guidance remains at November 2023 for both versions.

This isn’t the first time VMware has pointed out the negative effects of working from home: in its Q3 2021 results call, then-VMware-CEO Pat Gelsinger attributed slow signoff of major deals to customers who couldn’t get their teams back into the office to work on major projects.

How to unmount a LUN or detach a datastore device from ESXi hosts

This article provides steps to unmount a LUN from an ESXi 5.x/6.x host, which includes unmounting the file system and detaching the datastore/storage device. These steps must be performed for each ESXi host.

Unmounting a LUN using the command line

To unmount a LUN from an ESXi 5.x/6.x host using the command line:

  1. If the LUN is an RDM, skip to step 4. Otherwise, to obtain a list of all datastores mounted to an ESXi host, run this command:

    # esxcli storage filesystem list

    You see output, which lists all VMFS datastores, similar to:

    Mount Point                                        Volume Name  UUID                                 Mounted  Type    Size          Free
    -------------------------------------------------  -----------  -----------------------------------  -------  ------  ------------  ------------
    /vmfs/volumes/4de4cb24-4cff750f-85f5-0019b9f1ecf6  datastore1   4de4cb24-4cff750f-85f5-0019b9f1ecf6  true     VMFS-5  140660178944  94577360896
    /vmfs/volumes/4c5fbff6-f4069088-af4f-0019b9f1ecf4  Storage2     4c5fbff6-f4069088-af4f-0019b9f1ecf4  true     VMFS-3  146028888064  7968129024
    /vmfs/volumes/4c5fc023-ea0d4203-8517-0019b9f1ecf4  Storage4     4c5fc023-ea0d4203-8517-0019b9f1ecf4  true     VMFS-3  146028888064  121057050624
    /vmfs/volumes/4e414917-a8d75514-6bae-0019b9f1ecf4  LUN01        4e414917-a8d75514-6bae-0019b9f1ecf4  true     VMFS-5  146028888064  4266131456
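
    This lookup can also be scripted. The sketch below pulls the UUID of a named datastore out of the esxcli storage filesystem list output with awk; the sample row above is embedded as a here-document, and the volume name LUN01 is only an example — on a live ESXi host you would pipe the real command output in instead.

    ```shell
    #!/bin/sh
    # Sketch: extract the UUID of a named VMFS datastore from
    # "esxcli storage filesystem list" output. The sample row below is
    # embedded as a here-document; on a live ESXi host, use instead:
    #   UUID="$(esxcli storage filesystem list | awk -v vol="LUN01" '$2 == vol {print $3}')"
    UUID="$(awk -v vol="LUN01" '$2 == vol {print $3}' <<'EOF'
    Mount Point                                        Volume Name  UUID                                 Mounted  Type    Size          Free
    -------------------------------------------------  -----------  -----------------------------------  -------  ------  ------------  ------------
    /vmfs/volumes/4e414917-a8d75514-6bae-0019b9f1ecf4  LUN01        4e414917-a8d75514-6bae-0019b9f1ecf4  true     VMFS-5  146028888064  4266131456
    EOF
    )"
    echo "$UUID"
    ```

    The UUID captured here is the same value the -u form of the unmount command in step 3 expects.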

  2. To find the unique identifier of the LUN housing the datastore to be removed, run this command:

    # esxcfg-scsidevs -m

    This command generates a list of VMFS datastore volumes and their related unique identifiers. Make a note of the unique identifier (NAA_ID) for the datastore you want to unmount as this will be used later on.

    For more information on the esxcfg-scsidevs command, see Identifying disks when working with VMware ESX/ESXi (1014953).

  3. Unmount the datastore by running this command:

    # esxcli storage filesystem unmount [-u UUID | -l label | -p path]

    For example, use one of these commands to unmount the LUN01 datastore:

    # esxcli storage filesystem unmount -l LUN01
    # esxcli storage filesystem unmount -u 4e414917-a8d75514-6bae-0019b9f1ecf4
    # esxcli storage filesystem unmount -p /vmfs/volumes/4e414917-a8d75514-6bae-0019b9f1ecf4

    Note: If the VMFS filesystem you are attempting to unmount has active I/O or has not fulfilled the prerequisites to unmount the VMFS datastore, you see an error in the VMkernel logs similar to:

    WARNING: VC: 637: unmounting opened volume ('4e414917-a8d75514-6bae-0019b9f1ecf4' 'LUN01') is not allowed.
    VC: 802: Unmount VMFS volume f530 28 2 4e414917a8d7551419006bae f4ecf19b 4 1 0 0 0 0 0 : Busy
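
    When an unmount fails, it can help to confirm from the log that open files were the cause. A minimal sketch, assuming the standard ESXi log location /var/log/vmkernel.log; the two sample log lines above are embedded here so the check can be seen working, and on a live host you would grep the real log file for your volume's UUID instead.

    ```shell
    #!/bin/sh
    # Sketch: count "not allowed" unmount warnings for a volume UUID.
    # On a live ESXi host you would run:
    #   grep "$UUID" /var/log/vmkernel.log | grep -c "not allowed"
    UUID="4e414917-a8d75514-6bae-0019b9f1ecf4"
    WARNINGS="$(grep "$UUID" <<'EOF' | grep -c "not allowed"
    WARNING: VC: 637: unmounting opened volume ('4e414917-a8d75514-6bae-0019b9f1ecf4' 'LUN01') is not allowed.
    VC: 802: Unmount VMFS volume f530 28 2 4e414917a8d7551419006bae f4ecf19b 4 1 0 0 0 0 0 : Busy
    EOF
    )"
    echo "$WARNINGS"   # non-zero means the volume still had open files
    ```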

  4. To verify that the datastore is unmounted, run this command:

    # esxcli storage filesystem list

    You see output similar to:

    Mount Point                                        Volume Name  UUID                                 Mounted  Type                  Size          Free
    -------------------------------------------------  -----------  -----------------------------------  -------  --------------------  ------------  ------------
    /vmfs/volumes/4de4cb24-4cff750f-85f5-0019b9f1ecf6  datastore1   4de4cb24-4cff750f-85f5-0019b9f1ecf6  true     VMFS-5                140660178944  94577360896
    /vmfs/volumes/4c5fbff6-f4069088-af4f-0019b9f1ecf4  Storage2     4c5fbff6-f4069088-af4f-0019b9f1ecf4  true     VMFS-3                146028888064  7968129024
    /vmfs/volumes/4c5fc023-ea0d4203-8517-0019b9f1ecf4  Storage4     4c5fc023-ea0d4203-8517-0019b9f1ecf4  true     VMFS-3                146028888064  121057050624
                                                       LUN01        4e414917-a8d75514-6bae-0019b9f1ecf4  false    VMFS-unknown version  0             0

    The Mounted field is set to false, the Type field is set to VMFS-unknown version, and no Mount Point exists.

    Note: The unmounted state of the VMFS datastore persists across reboots. This is the default behavior. If you need to unmount a datastore temporarily, you can do so by appending the --no-persist flag to the unmount command.

  5. To detach the device/LUN, run this command:

    # esxcli storage core device set --state=off -d NAA_ID

  6. To verify that the device is offline, run this command:

    # esxcli storage core device list -d NAA_ID

    You see output, which shows that the Status of the disk is off, similar to:

    naa.60a98000572d54724a34655733506751
    Display Name: NETAPP Fibre Channel Disk (naa.60a98000572d54724a34655733506751)
    Has Settable Display Name: true
    Size: 1048593
    Device Type: Direct-Access
    Multipath Plugin: NMP
    Devfs Path: /vmfs/devices/disks/naa.60a98000572d54724a34655733506751
    Vendor: NETAPP
    Model: LUN
    Revision: 7330
    SCSI Level: 4
    Is Pseudo: false
    Status: off
    Is RDM Capable: true
    Is Local: false
    Is Removable: false
    Is SSD: false
    Is Offline: false
    Is Perennially Reserved: false
    Thin Provisioning Status: yes
    Attached Filters:
    VAAI Status: unknown
    Other UIDs: vml.020000000060a98000572d54724a346557335067514c554e202020
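
    If you script this verification, the Status field can be parsed straight out of the device listing. A minimal sketch, with an abridged copy of the listing above embedded as input; on a live host you would pipe in the output of esxcli storage core device list -d NAA_ID instead.

    ```shell
    #!/bin/sh
    # Sketch: pull the Status field from "esxcli storage core device list"
    # output. An abridged sample listing is embedded; on a live ESXi host:
    #   STATUS="$(esxcli storage core device list -d "$NAA_ID" | awk -F': ' '$1 ~ /^ *Status$/ {print $2}')"
    STATUS="$(awk -F': ' '$1 ~ /^ *Status$/ {print $2}' <<'EOF'
    naa.60a98000572d54724a34655733506751
       Display Name: NETAPP Fibre Channel Disk (naa.60a98000572d54724a34655733506751)
       Status: off
       VAAI Status: unknown
    EOF
    )"
    echo "$STATUS"   # "off" confirms the device is detached
    ```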

    Running the partedUtil getptbl command on the device shows that the device is not found.

    For example:

    # partedUtil getptbl /vmfs/devices/disks/naa.60a98000572d54724a34655733506751

    Error: Could not stat device /vmfs/devices/disks/naa.60a98000572d54724a34655733506751 - No such file or directory.
    Unable to get device /vmfs/devices/disks/naa.60a98000572d54724a34655733506751

  7. If the device is to be permanently decommissioned, it is now possible to unpresent the LUN from the SAN. For more information, contact your storage team, storage administrator, or storage array vendor.
  8. To rescan all devices on the ESXi host, run this command:

    # esxcli storage core adapter rescan [-A vmhba# | --all]

    The devices are automatically removed from the Storage Adapters.

    Notes:

    • A rescan must be run on all hosts that had visibility to the removed LUN.
    • When the device is detached, it stays in an unmounted state even if the device is re-presented (that is, the detached state is persistent). To bring the device back online, the device must be attached. To do this via the command line, run this command:

      # esxcli storage core device set --state=on -d NAA_ID
  9. If the device is to be permanently decommissioned from an ESXi host (that is, the LUN has been or will be destroyed), remove the NAA entries from the host configuration by running these commands:
    1. To list the permanently detached devices:

      # esxcli storage core device detached list

      You see output similar to:

      Device UID                            State
      ------------------------------------  -----
      naa.50060160c46036df50060160c46036df  off
      naa.6006016094602800c8e3e1c5d3c8e011  off

    2. To permanently remove the device configuration information from the system:

      # esxcli storage core device detached remove -d NAA_ID

      For example:

      # esxcli storage core device detached remove -d naa.50060160c46036df50060160c46036df
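
With several permanently detached devices to clean up, the list and remove sub-steps above can be combined in a small script. This is a dry-run sketch: it parses the NAA IDs out of the detached list output (the sample output above is embedded) and prints the remove command for each device instead of executing it; drop the echo only once you are certain the LUNs have been destroyed.

```shell
#!/bin/sh
# Dry-run sketch: build a "detached remove" command for each device in
# "esxcli storage core device detached list" output. The sample output
# is embedded; on a live host, pipe the real command output in and
# remove the echo to actually run the cleanup.
NAA_LIST="$(awk '$1 ~ /^naa\./ {print $1}' <<'EOF'
Device UID                            State
------------------------------------  -----
naa.50060160c46036df50060160c46036df  off
naa.6006016094602800c8e3e1c5d3c8e011  off
EOF
)"
for naa in $NAA_LIST; do
  echo "esxcli storage core device detached remove -d $naa"
done
```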