Most useful esxcli commands

Published 2023-12-01 17:27:46  Author: maym

ESXCLI is part of the ESXi Shell: a CLI framework intended to manage ESXi components such as hardware, network and storage, and to control ESXi itself at a low level.

esxcli command list

The list of available ESXCLI commands depends on the ESXi version.

For reference, the top-level ESXCLI namespaces in ESXi 6.7 are listed below; you can also enumerate them directly on a host, as shown after the list:

  • device – device manager commands
  • esxcli – commands related to ESXCLI itself
  • fcoe – Fibre Channel over Ethernet commands
  • graphics – VMware graphics commands
  • hardware – commands for checking hardware properties and configuring hardware
  • iscsi – VMware iSCSI commands
  • network – this namespace includes a wide range of commands for managing general host network settings (such as the ESXi host's IP address, DNS settings, and firewall) and virtual networking components such as vSwitches, port groups, etc.
  • nvme – managing extensions for VMware NVMe driver
  • rdma – commands for managing the remote direct memory access protocol stack
  • sched – commands used for configuring scheduling and VMkernel system properties
  • software – managing ESXi software images and packages for ESXi
  • storage – commands used to manage storage
  • system – commands for configuring VMkernel system properties, the kernel core system and system services
  • vm – some commands that can be used to control virtual machine operations
  • vsan – VMware vSAN commands
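To see which namespaces and commands are actually available on a given host, ESXCLI can describe itself; a quick sketch (the output depends on the ESXi build):

# show top-level namespaces (running esxcli with no arguments prints usage and the namespace list)
esxcli

# full list of ESXCLI commands available on this host
esxcli esxcli command list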

STORAGE

#Rescan for new storage on all adapters
esxcfg-rescan --all
 
# List Storage adapters
esxcli storage core adapter list
 
# Determine the driver type that the Host Bus Adapter is currently using
 esxcfg-scsidevs -a
vmhba0  pvscsi            link-n/a  pscsi.vmhba0                            (0000:03:00.0) VMware Inc. PVSCSI SCSI Controller
vmhba1  vmkata            link-n/a  ide.vmhba1                              (0000:00:07.1) Intel Corporation PIIX4 for 430TX/440BX/MX IDE Controller
vmhba64 vmkata            link-n/a  ide.vmhba64                             (0000:00:07.1) Intel Corporation PIIX4 for 430TX/440BX/MX IDE Controller
 
# Determine driver version details for the HBA controller
vmkload_mod -s pvscsi
 
 
 
# search for new VMFS datastores
vmkfstools -V
 
# List of VMFS snapshots
esxcli storage vmfs snapshot list
 
# mount a snapshot based on its VMFS UUID
esxcli storage vmfs snapshot mount -u "aaaa-aaaa-aaaa-aaa"
# when the original datastore is still online, we need to resignature the snapshot to generate a different UUID; the snapshot needs to be writable (RW access)
esxcli storage vmfs snapshot resignature -u "aaaa-aaaa-aaaa-aaa"
 
# List datastores with the extents of each volume and the mapping from device name to UUID
esxcli storage vmfs extent list
 
# Generate a compact list of the datastores currently mounted on the ESXi host, including the VMFS version
esxcli storage filesystem list
 
# check locking mechanism
esxcli storage vmfs lockmode list
 
# Switch a volume between SCSI and ATS locking
esxcli storage vmfs lockmode set --scsi --volume-label=vmfs3
esxcli storage vmfs lockmode set --ats --volume-label=vmfs3
 
# check the current ATS-for-heartbeat setting
esxcli system settings advanced list -o /VMFS3/UseATSForHBOnVMFS5
# disable ATS
esxcli system settings advanced set -i 0 -o /VMFS3/UseATSForHBOnVMFS5
#enable ATS
esxcli system settings advanced set -i 1 -o /VMFS3/UseATSForHBOnVMFS5
 
 
# check storage array multipathing 
esxcli storage nmp device list
# mark an iSCSI LUN as SSD (add a fake-SSD claim rule)
esxcli storage nmp satp rule add --satp=VMW_SATP_ALUA --device=naa.6006016015301d00167ce6e2ddb3de11 --option=enable_ssd  # a host reboot is required
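To verify or roll back the claim rule above, the same SATP rule namespace also supports list and remove; a sketch reusing the example device ID:

# list SATP claim rules and check for the enable_ssd option
esxcli storage nmp satp rule list | grep -i enable_ssd

# remove the rule again, using the same parameters it was added with
esxcli storage nmp satp rule remove --satp=VMW_SATP_ALUA --device=naa.6006016015301d00167ce6e2ddb3de11 --option=enable_ssd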
 
 
# list the worlds that have a device open for a given LUN
# typically needed for devices in PDL (Permanent Device Loss) to find the processes still using the device
 
esxcli storage core device world list -d naa.6006048c870bbed5047ce8d51a260ad1
Device                                World ID  Open Count  World Name
------------------------------------  --------  ----------  ------------
naa.6006048c870bbed5047ce8d51a260ad1     32798           1  idle0
naa.6006048c870bbed5047ce8d51a260ad1     32858           1  helper14-0
naa.6006048c870bbed5047ce8d51a260ad1     32860           1  helper14-2
naa.6006048c870bbed5047ce8d51a260ad1     32937           1  helper26-0
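 
To map a World ID from the output above back to a running virtual machine, cross-reference it with the VM process list:

# list running VM worlds (shows World ID, Cartel ID, display name and .vmx path)
esxcli vm process list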
 
# WWID from RDM disk
RDM disk ID > vml.0200100000600601601fc04500c260d45af966c4f9565241494420
ls -alh /vmfs/devices/disks/ | grep  0200100000600601601fc04500c260d45af966c4f9565241494420
lrwxrwxrwx    1 root     root          36 Dec 19 10:27 vml.0200100000600601601fc04500c260d45af966c4f9565241494420 -> naa.600601601fc04500c260d45af966c4f9
 
# the naa identifier 600601601fc04500c260d45af966c4f9 corresponds to the LUN WWID
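 
The mapping can also be resolved the other way around, from an RDM pointer .vmdk to the vml/naa device it maps to; a sketch with a hypothetical datastore path:

# query an RDM mapping file (path is an example) to see the vml device it points to
vmkfstools -q /vmfs/volumes/datastore1/SomeVM/SomeVM_rdm.vmdk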

NETWORK

# List vmkernel ports
esxcli network ip interface list
 
# get IPv4 addresses
esxcli network ip interface ipv4 get
Name  IPv4 Address    IPv4 Netmask     IPv4 Broadcast   Address Type  Gateway        DHCP DNS
----  --------------  ---------------  ---------------  ------------  -------------  --------
vmk0  192.168.198.21  255.255.255.0    192.168.198.255  STATIC        192.168.198.1     false
vmk1  172.30.1.167    255.255.255.224  172.30.1.191     STATIC        0.0.0.0           false
 
# get IPv6 addresses
esxcli network ip interface ipv6 get
Name  IPv6 Enabled  DHCPv6 Enabled  Router Adv Enabled  DHCP DNS  Gateway
----  ------------  --------------  ------------------  --------  -------
vmk0         false           false               false     false  ::
vmk1         false           false               false     false  ::
 
# test jumbo frames by pinging another ESXi host: -d disables fragmentation, -s 8972 corresponds to an MTU of 9000 minus 28 bytes of overhead
vmkping  -I vmk1 172.30.1.168 -d -s 8972
PING 172.30.1.168 (172.30.1.168): 8972 data bytes
sendto() failed (Message too long)
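 
The sample output above ("Message too long") means a 9000-byte frame does not fit somewhere on the path. Check the MTU on the vmkernel port and the vSwitch and raise it if needed (the names vSwitch1/vmk1 are examples):

# check MTU of vmkernel ports and standard vSwitches
esxcli network ip interface list
esxcli network vswitch standard list

# raise the MTU to 9000 on the vSwitch and on the vmkernel interface
esxcli network vswitch standard set --mtu=9000 --vswitch-name=vSwitch1
esxcli network ip interface set --mtu=9000 --interface-name=vmk1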
 
 
# list network stacks
esxcli network ip netstack list
defaultTcpipStack
   Key: defaultTcpipStack
   Name: defaultTcpipStack
   State: 4660
 
# Create new standard vSwitch
esxcli network vswitch standard add --vswitch-name=vSwitchVmotion
 
# List physical adapters
esxcli network nic list
Name    PCI Device    Driver    Admin Status  Link Status  Speed  Duplex  MAC Address         MTU  Description
------  ------------  --------  ------------  -----------  -----  ------  -----------------  ----  -----------------------------------------------
vmnic0  0000:0b:00.0  nvmxnet3  Up            Up           10000  Full    00:50:56:98:cf:ab  1500  VMware Inc. vmxnet3 Virtual Ethernet Controller
vmnic1  0000:13:00.0  nvmxnet3  Up            Up           10000  Full    00:50:56:98:37:94  1500  VMware Inc. vmxnet3 Virtual Ethernet Controller
vmnic2  0000:1b:00.0  nvmxnet3  Up            Up           10000  Full    00:50:56:98:bc:bd  1500  VMware Inc. vmxnet3 Virtual Ethernet Controller
 
# assign uplink vmnic2 to vSwitch  vSwitchVmotion
 esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitchVmotion
 
# migrate the vMotion vmkernel service to a different vSwitch
esxcli network vswitch standard portgroup add --portgroup-name=Vmotion --vswitch-name=vSwitchVmotion
esxcli network ip interface list   # list vmkernel adapters
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=Vmotion    # create a vmk interface in the Vmotion portgroup
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.200.21 --netmask=255.255.255.0 --type=static  # assign a static IP to the vmkernel port
vim-cmd hostsvc/vmotion/vnic_set vmk2   # enable vMotion on vmk2
vim-cmd hostsvc/vmotion/vnic_unset vmk0 # disable vMotion on vmk0
 
# list dvSwitches
esxcli network vswitch dvs vmware list
 
#Add custom netstack
esxcli network ip netstack add -N "CustomNetstack"
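 
A custom netstack is only useful once a vmkernel interface is attached to it; a minimal sketch, assuming a portgroup named CustomPG already exists:

# create a vmkernel interface on the custom netstack (portgroup name and IP are examples)
esxcli network ip interface add --interface-name=vmk3 --portgroup-name=CustomPG --netstack=CustomNetstack
esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=192.168.201.21 --netmask=255.255.255.0 --type=static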
 
# check NIC link status
 esxcli network nic list
# enable/disable single uplink
esxcli network nic down -n vmnicX
esxcli network nic up -n vmnicX
 
# check single uplink details
 esxcli network nic get -n vmnic6
   Advertised Auto Negotiation: true
   Advertised Link Modes: Auto, 1000BaseT/Full, 100BaseT/Full, 100BaseT/Half, 10BaseT/Full, 10BaseT/Half
   Auto Negotiation: false
   Cable Type: Twisted Pair
   Current Message Level: 0
   Driver Info:
         Bus Info: 0000:04:00:2
         Driver: igbn
         Firmware Version: 1.70.0:0x80000f44:1.1904.0
         Version: 1.5.2.0
   Link Detected: false
   Link Status: Down by explicit linkSet
   Name: vmnic6
   PHYAddress: 0
   Pause Autonegotiate: true
   Pause RX: true
   Pause TX: true
   Supported Ports: TP
   Supports Auto Negotiation: true
   Supports Pause: true
   Supports Wakeon: true
   Transceiver: internal
   Virtual Address: 00:50:56:59:63:27
   Wakeon: MagicPacket(tm)

SYSTEM

#syslog check config
esxcli system syslog config get
   Check Certificate Revocation: false
   Default Network Retry Timeout: 180
   Dropped Log File Rotation Size: 100
   Dropped Log File Rotations: 10
   Enforce SSLCertificates: true
   Local Log Output: /scratch/log
   Local Log Output Is Configured: false
   Local Log Output Is Persistent: true
   Local Logging Default Rotation Size: 1024
   Local Logging Default Rotations: 8
   Log To Unique Subdirectory: true
   Message Queue Drop Mark: 90
   Remote Host: udp://192.168.98.10:514
   Strict X509Compliance: false
 
# configure syslog
esxcli system syslog config set --loghost=udp://192.168.198.10:514
 
# restart syslog service to apply configuration changes 
 esxcli system syslog reload
# if you are using vCenter as a syslog collector, logs are located in /var/log/vmware/esx
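 
When sending logs to a remote host, outbound syslog traffic must also be allowed through the ESXi firewall:

# open the syslog ruleset in the ESXi firewall and reload the firewall rules
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
esxcli network firewall refresh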
 
# enter maintenance mode
# for a non-vSAN host
esxcli system maintenanceMode set --enable true
# for a vSAN ESXi host
esxcli system maintenanceMode set --enable true --vsanmode ensureObjectAccessibility
 
# get maintenance mode status
esxcli system maintenanceMode get
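 
And to leave maintenance mode again, the same command is used with the flag disabled:

# exit maintenance mode
esxcli system maintenanceMode set --enable false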

VSAN

#List vsan network interfaces
esxcli vsan network list
localcli vsan network list
Interface:
   VmkNic Name: vmk2
   IP Protocol: IP
   Interface UUID: 02c4905c-0117-2e99-9ecb-48df373682cc
   Agent Group Multicast Address: 224.2.3.4
   Agent Group IPv6 Multicast Address: ff19::2:3:4
   Agent Group Multicast Port: 23451
   Master Group Multicast Address: 224.1.2.3
   Master Group IPv6 Multicast Address: ff19::1:2:3
   Master Group Multicast Port: 12345
   Host Unicast Channel Bound Port: 12321
   Multicast TTL: 5
   Traffic Type: vsan
 
Interface:
   VmkNic Name: vmk0
   IP Protocol: IP
   Interface UUID: bef8905c-1bf7-d03d-a07e-48df373682cc
   Agent Group Multicast Address: 224.2.3.4
   Agent Group IPv6 Multicast Address: ff19::2:3:4
   Agent Group Multicast Port: 23451
   Master Group Multicast Address: 224.1.2.3
   Master Group IPv6 Multicast Address: ff19::1:2:3
   Master Group Multicast Port: 12345
   Host Unicast Channel Bound Port: 12321
   Multicast TTL: 5
   Traffic Type: witness
 
# check what would happen if we put the host into maintenance mode with the noAction option
localcli vsan debug evacuation precheck -e 5c90b59b-6dd0-fdfa-5161-48df373682cc -a noAction
 
Action: No Action:
   Evacuation Outcome: Success
   Entity: Host TRGEBSVMH02.TRGEBCSN.ra-int.com
   Data to Move: 0.00 GB
   Number Of Objects That Would Become Inaccessible: 0
   Objects That Would Become Inaccessible: None
   Number Of Objects That Would Have Redundancy Reduced: 69
   Objects That Would Have Redundancy Reduced: (only shown with --verbose option)
   Additional Space Needed for Evacuation: N/A
 
 
# Remove one of the vSAN interfaces once we have migrated to a new one
esxcli vsan network remove -i vmk3
 
# vsan health  report from localcli 
localcli vsan health cluster list
 
 
# get cluster UUID and members
esxcli vsan cluster get
# get local node UUID
cmmds-tool whoami
5bb4cf73-6e2f-dfaa-e1b0-9cdc71bb4ed0
 
# get vsan unicast agent list
localcli vsan cluster unicastagent list
NodeUuid                              IsWitness  Supports Unicast  IP Address    Port   Iface Name
--------------------------------------------------------------------------------------------------
5bb4d4a0-8acc-c374-e5fe-d06726d34248          0  true              172.30.1.168  12321
00000000-0000-0000-0000-000000000000          1  true              172.30.1.32   12321
 
# add unicast agent by hand
esxcli vsan cluster unicastagent add -t node -u 5bb4cf73-6e2f-dfaa-e1b0-9cdc71bb4ed0 -U true -a 172.30.1.167 -p 12321
 
# create folder on vsan
/usr/lib/vmware/osfs/bin/osfs-mkdir /vmfs/volumes/vsan:5223b097ec01c8f5-ca8bf9d261f6796e/some_folder
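 
The matching helper for removing such a vSAN namespace directory is osfs-rmdir in the same location (be careful, this deletes the namespace object):

# remove a folder (namespace object) from the vSAN datastore
/usr/lib/vmware/osfs/bin/osfs-rmdir /vmfs/volumes/vsan:5223b097ec01c8f5-ca8bf9d261f6796e/some_folder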
 
#vsan object summary health
localcli vsan debug object health summary get
Health Status                                     Number Of Objects
-------------------------------------------------------------------
reduced-availability-with-active-rebuild                          0
data-move                                                         0
nonavailability-related-incompliance                              0
reduced-availability-with-no-rebuild                             45
inaccessible                                                     74
healthy                                                           1
reduced-availability-with-no-rebuild-delay-timer                 16
nonavailability-related-reconfig                                  0
 
 
 
 
# List of vsan disks
vdq -Hi
Mappings:
   DiskMapping[0]:
           SSD:  naa.51402ec011e50cc7
            MD:  naa.5000c500a722db23
            MD:  naa.5000039928481915
            MD:  naa.5000039928481839
            MD:  naa.50000399284814c5
            MD:  naa.50000399284818c5
            MD:  naa.5000c500a723a283
            MD:  naa.5000039928481749
 
# check whether the vSAN disks are operational
esxcli vsan storage list | grep -i Cmmds
# find the physical disk location based on the NAA ID
esxcli storage core device physical get -d naa.5000c500c1560c3f
 
# list vSAN disks and their disk group membership
esxcli vsan storage list | grep -i Uuid
   VSAN UUID: 5230295f-ead5-ee2b-c3ae-1edef5135985
   VSAN Disk Group UUID: 52487975-68ed-8ebb-268e-f54ba9358941
   VSAN UUID: 523af961-9357-eca0-4e1b-4ada8805984d
   VSAN Disk Group UUID: 52487975-68ed-8ebb-268e-f54ba9358941
 
# remove a vSAN disk based on its UUID (data needs to be evacuated first)
 esxcli vsan storage remove -u 520217b6-b32a-2aac-7d5b-0d28bc71bc60
# vSAN Congestion
# Congestion is a feedback mechanism to reduce the rate of incoming IO requests from the vSAN DOM client layer to a level that the vSAN disk groups can service
 
# To check whether this ESXi host is experiencing vSAN congestion, you can run the following script (per host)
for ssd in $(localcli vsan storage list |grep "Group UUID"|awk '{print $5}'|sort -u);do echo $ssd;vsish -e get /vmkModules/lsom/disks/$ssd/info|grep Congestion;done
 
vSAN congestion metrics:
 
Slab Congestion: originates in the vSAN internal operation slabs; it occurs when the number of in-flight operations exceeds the capacity of the operation slabs.
Comp Congestion: occurs when the size of an internal table used for vSAN object components exceeds its threshold.
SSD Congestion: occurs when the cache tier disk write buffer space runs out.
Log Congestion: occurs when the vSAN internal log space usage on the cache tier disk runs out.
Mem Congestion: occurs when the size of the memory heap used by vSAN internal components exceeds its threshold.
IOPS Congestion: IOPS reservations/limits can be applied to vSAN object components; congestion occurs when component IOPS exceed the reservation and disk IOPS utilization reaches 100%.
 
Use the following commands at your own risk. If vSAN reports LSOM errors in the vSAN logs, these thresholds can be changed to reduce vSAN congestion.
 
esxcfg-advcfg -s 16 /LSOM/lsomLogCongestionLowLimitGB   # default 8
esxcfg-advcfg -s 24 /LSOM/lsomLogCongestionHighLimitGB  # default 16
 
esxcfg-advcfg -s 10000 /LSOB/diskIoTimeout
esxcfg-advcfg -s 4 /LSOB/diskIoRetryFactor
 
esxcfg-advcfg -s 32768 /LSOM/initheapsize  # this setting seems not to be available in vSphere 6.5 + vSAN 6.6
esxcfg-advcfg -s 2048 /LSOM/heapsize       # this setting seems not to be available in vSphere 6.5 + vSAN 6.6
 
Official VMware KB:
 
# What’s the current size of the LLOG and PLOG:
for ssd in $(localcli vsan storage list |grep "Group UUID"|awk '{print $5}'|sort -u);do \
llogTotal=$(vsish -e get /vmkModules/lsom/disks/$ssd/info|grep "Log space consumed by LLOG"|awk -F \: '{print $2}'); \
plogTotal=$(vsish -e get /vmkModules/lsom/disks/$ssd/info|grep "Log space consumed by PLOG"|awk -F \: '{print $2}'); \
llogGib=$(echo $llogTotal |awk '{print $1 / 1073741824}'); \
plogGib=$(echo $plogTotal |awk '{print $1 / 1073741824}'); \
allGibTotal=$(expr $llogTotal \+ $plogTotal|awk '{print $1 / 1073741824}'); \
echo $ssd;echo " LLOG consumption: $llogGib"; \
echo " PLOG consumption: $plogGib"; \
echo " Total log consumption: $allGibTotal"; \
done
 
advanced configuration values in vSAN:
 
esxcfg-advcfg -g /LSOM/lsomSsdCongestionLowLimit
esxcfg-advcfg -g /LSOM/lsomSsdCongestionHighLimit
These two commands output the threshold values for SSD congestion. With the following commands you can see the threshold values for memory and log congestion:
 
esxcfg-advcfg -g /LSOM/lsomMemCongestionLowLimit
esxcfg-advcfg -g /LSOM/lsomMemCongestionHighLimit
 
esxcfg-advcfg -g /LSOM/lsomLogCongestionLowLimitGB
esxcfg-advcfg -g /LSOM/lsomLogCongestionHighLimitGB
 
# Check for Checksum errors
for disk in $(localcli vsan storage list |grep "VSAN UUID"|awk '{print $3}'|sort -u);do echo ==DISK==$disk====;vsish -e get /vmkModules/lsom/disks/$disk/checksumErrors;done
 
#Physical disk health status
# Find inaccessible objects
cmmds-tool find -f python | grep -B9 "state....13" | grep uuid | cut -c 13-48 > inaccessibleObjects.txt
echo $(wc -l < inaccessibleObjects.txt) "Inaccessible Objects Found"
cat inaccessibleObjects.txt
 
 
# Disk capacity
esxcli vsan health cluster get -t "Disk capacity"
#Congestion
esxcli vsan health cluster get -t "Congestion"
# Memory pools (heaps)
esxcli vsan health cluster get -t "Memory pools (heaps)"
#Memory pools (slabs)
esxcli vsan health cluster get -t "Memory pools (slabs)"
 
 
#Find accessible object paths
cmmds-tool find -t DOM_OBJECT -f json |grep uuid |awk -F \" '{print $4}'|while read i;do objPath=$(/usr/lib/vmware/osfs/bin/objtool getAttr -u $i|grep path);echo "$i: $objPath";done
 
# All hosts contributing stats
esxcli vsan health cluster get -t "All hosts contributing stats"
 
 
# Look for SSD congestion
# Any value greater than 150 should be investigated
for ssd in $(localcli vsan storage list |grep "Group UUID"|awk '{print $5}'|sort -u);do echo $ssd;vsish -e get /vmkModules/lsom/disks/$ssd/info|grep Congestion;done
 
# resync bandwidth
esxcli vsan resync bandwidth get
 
# get pending resync status
cd /tmp
while true;do echo "" > ./resyncStats.txt ;cmmds-tool find -t DOM_OBJECT -f json |grep uuid |awk -F \" '{print $4}' |while read i;do pendingResync=$(cmmds-tool find -t DOM_OBJECT -f json -u $i|grep -o "\"bytesToSync\": [0-9]*,"|awk -F " |," '{sum+=$2} END{print sum / 1024 / 1024 / 1024;}');if [ ${#pendingResync} -ne 1 ]; then echo "$i: $pendingResync GiB"; fi;done |tee -a ./resyncStats.txt;total=$(cat resyncStats.txt |awk '{sum+=$2} END{print sum}');echo "Total: $total GiB" |tee -a ./resyncStats.txt;total=$(cat ./resyncStats.txt |grep Total); totalObj=$(cat ./resyncStats.txt|grep -vE " 0 GiB|Total"|wc -l);echo "`date +%Y-%m-%dT%H:%M:%SZ` $total ($totalObj objects)" >> ./totalHistory.txt; sleep 60;done

VSAN RVC

# vSAN resync report
/localhost/Datacenter1/computers/Cluster1> vsan.resync_dashboard .
 
# list of VSAN objects with vSAN UUID
/localhost/Datacenter1/computers/Cluster1> vsan.obj_status_report -t .
 
Total v10 objects: 129
+-------------------------------------------------------------------------------------------------+---------+---------------------------+
| VM/Object                                                                                       | objects | num healthy / total comps |
+-------------------------------------------------------------------------------------------------+---------+---------------------------+
| Template-Server2016                                                                             | 2       |                           |
|    [vsanDatastore] 35bc3760-6fe2-f031-6e47-48df37176ad4/Template-Server2016.vmtx                |         | 3/3                       |
|    [vsanDatastore] 35bc3760-6fe2-f031-6e47-48df37176ad4/Template-Server2016.vmdk                |         | 3/3                       |
 
+-------------------------------------------------------------------------------------------------+---------+---------------------------+
| Unassociated objects                                                                            |         |                           |
|    3079a862-ddbe-8071-1c5e-48df37176888                                                         |         | 3/3                       |
 
+-------------------------------------------------------------------------------------------------+---------+---------------------------+
 
 
# repair / re-register .vmx files
 vsan.check_state -r .

VM

 

 

# list of VMs registered on hosts
cat /etc/vmware/hostd/vmInventory.xml
<ConfigRoot>
  <ConfigEntry id="0002">
    <objID>56</objID>
    <vmxCfgPath>/vmfs/volumes/vsan:528e24c4fab25460-bbc1e9f77409188b/1518385d-742b-749d-14dc-08f1ea8c406e/Server2016.vmx</vmxCfgPath>
  </ConfigEntry>
# register a VM that is in inaccessible status on the host
vim-cmd solo/registervm /vmfs/volumes/vsan\:528e24c4fab25460-bbc1e9f77409188b/1518385d-742b-749d-14dc-08f1ea8c406e/Server2016.vmx
 
# get all VMs on host
vim-cmd vmsvc/getallvms
# get VM power status
vim-cmd vmsvc/power.getstate VMID
# shutdown VM
vim-cmd vmsvc/power.shutdown VMID
# power off
vim-cmd vmsvc/power.off VMID
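 
Powering a VM back on works the same way:

# power on
vim-cmd vmsvc/power.on VMID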

 

VCENTER

#Check services status
service-control --status
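 
On the vCenter Server Appliance the same tool can stop and start the services (a minimal sketch):

# stop / start all vCenter services
service-control --stop --all
service-control --start --all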