Sunday, 22 December 2019

VMWARE REFERENCES

Free VMware Tutorials and Training


Free step by step VMware Product Walkthrough
https://featurewalkthrough.vmware.com/
Free labs and training on VMware Products
http://labs.hol.vmware.com/HOL/catalogs/
Free Video Training on VMware products
https://www.youtube.com/user/VMwareKB/playlists
Guides on vSphere Upgrades, Security, and more
https://vspherecentral.vmware.com/

Compatibility and Planning Tools

VMware Product Compatibility, Upgrade Path, and Database Interoperability checker
https://www.vmware.com/resources/compatibility/sim/interop_matrix.php

Support and Troubleshooting

Community forums to ask and find answers
https://communities.vmware.com/welcome
KB articles for all VMware Products and versions
https://kb.vmware.com/selfservice/microsites/microsite.do#
Another source for default usernames and passwords
http://www.vtagion.com/vmware-default-username-passwords/
VMware Official Hardening Guides
https://www.vmware.com/security/hardening-guides

VMware Applications, Tools, and Downloads

VMware Apps built by VMware Engineers
https://labs.vmware.com/flings
Adapters, Management Packs, and more
https://solutionexchange.vmware.com/store
Code and Dashboards Downloads
https://code.vmware.com/home
Official VMware Software Download Page
https://my.vmware.com/web/vmware/downloads


VMware Community Groups

VMUG – VMware User Groups: a great place to network with VMware employees and VMware users
https://www.vmug.com/

Thursday, 18 April 2019

Remove a disk from RHEL 7 safely without reboot in a VMware vSphere VM

Before proceeding, please make sure that the disk is no longer in use by any file system, logical volume, volume group or, most importantly, raw device.
Run "fdisk -l" and identify which disk has to be removed.
# fdisk -l | grep /dev/sd
Disk /dev/sda: 128.8 GB, 128849018880 bytes, 251658240 sectors
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   247472127   122686464   8e  Linux LVM
Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Disk /dev/sdc: 536.9 GB, 536870912000 bytes, 1048576000 sectors
Disk /dev/sdd: 322.1 GB, 322122547200 bytes, 629145600 sectors
Disk /dev/sde: 322.1 GB, 322122547200 bytes, 629145600 sectors

Make /dev/sde offline:

# echo "offline" > /sys/block/sde/device/state

Now delete the disk:
# echo "1" > /sys/block/sde/device/delete

Validate the removal:
# fdisk -l | grep /dev/sd
Disk /dev/sda: 128.8 GB, 128849018880 bytes, 251658240 sectors
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   247472127   122686464   8e  Linux LVM
Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Disk /dev/sdc: 536.9 GB, 536870912000 bytes, 1048576000 sectors
Disk /dev/sdd: 322.1 GB, 322122547200 bytes, 629145600 sectors

Now we see that the disk has been removed from the OS (RHEL 7).
Now remove the disk safely from the VM: Edit Settings --> remove the hard disk.
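The in-use check before the offline/delete steps above can be scripted. A minimal sketch, assuming Linux sysfs/procfs paths; it only covers mounts and LVM physical volumes, so raw devices and multipath still need a manual check:

```shell
# Hedged sketch: sanity-check that a disk is not referenced by mounted
# filesystems or LVM before offlining it. Not exhaustive.
check_disk_unused() {
  local disk="$1"
  # Any mounted filesystem on the disk or its partitions?
  if grep -q "/dev/${disk}" /proc/mounts 2>/dev/null; then
    echo "BUSY: /dev/${disk} has mounted filesystems"
    return 1
  fi
  # Is it an LVM physical volume? (pvs may be absent on minimal installs)
  if command -v pvs >/dev/null 2>&1 && \
     pvs --noheadings -o pv_name 2>/dev/null | grep -q "/dev/${disk}"; then
    echo "BUSY: /dev/${disk} is an LVM physical volume"
    return 1
  fi
  echo "OK: /dev/${disk} appears unused"
}

# Usage, matching the walkthrough's /dev/sde:
# check_disk_unused sde && echo "safe to offline"
```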

Tuesday, 16 April 2019

Expand XFS filesystem in RHEL 7 using same disk in vSphere/vCenter

# fdisk -l
Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors

# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root         20G  5.8G   15G  29% /
devtmpfs                     3.8G     0  3.8G   0% /dev
tmpfs                        3.9G     0  3.9G   0% /dev/shm
tmpfs                        3.9G   13M  3.8G   1% /run
tmpfs                        3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/rhel-opt          15G  781M   15G   6% /opt
/dev/mapper/rhel-tmp          15G   33M   15G   1% /tmp
/dev/mapper/rhel-var          20G  1.6G   19G   8% /var
/dev/mapper/vg_DATA-lv_DATA  100G   33M  100G   1% /DATA
/dev/sda1                   1014M  270M  745M  27% /boot
/dev/mapper/rhel-home         15G  673M   15G   5% /home

We need to increase /DATA by expanding the same disk (no new disk).
Increase the disk size at the VM level from 100G to 200G.

Run the command below:
# ls /sys/class/scsi_device/
1:0:0:0  1:0:1:0  2:0:0:0

Here 1:0:0:0 is the root disk and 1:0:1:0 is the DATA disk.
To expand the DATA disk in RHEL 7, run the command below:
# echo 1 > /sys/class/scsi_device/1\:0\:1\:0/device/rescan
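If it is not obvious which H:C:T:L address belongs to which disk, sysfs can tell you. A minimal sketch; the base path is parameterised only so the logic can be exercised off-box:

```shell
# Hedged sketch: print "H:C:T:L -> /dev/sdX" for every SCSI device, using
# the /sys/class/scsi_device/<id>/device/block/<name> layout.
map_scsi_to_block() {
  local base="${1:-/sys/class/scsi_device}" d id
  for d in "$base"/*/device/block/*; do
    [ -e "$d" ] || continue
    # Three levels up from .../device/block/<name> is the H:C:T:L entry
    id=$(basename "$(dirname "$(dirname "$(dirname "$d")")")")
    echo "$id -> /dev/$(basename "$d")"
  done
}

map_scsi_to_block
```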

Run fdisk -l again and we can see the same disk /dev/sdb has expanded to 200G.
# fdisk -l
Disk /dev/sdb: 214.7 GB, 214748364800 bytes, 419430400 sectors

Now run the command below to resize the physical volume.
# pvresize /dev/sdb
  Physical volume "/dev/sdb" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized

Here we can see that 100G of free space has been allocated.
# pvs
  PV         VG      Fmt  Attr PSize    PFree
  /dev/sda2  rhel    lvm2 a--   117.00g   4.00m
  /dev/sdb   vg_DATA lvm2 a--  <200.00g 100.00g

The same is visible in the volume group:

# vgs
  VG      #PV #LV #SN Attr   VSize    VFree
  rhel      1   6   0 wz--n-  117.00g   4.00m
  vg_DATA   1   1   0 wz--n- <200.00g 100.00g

# vgdisplay
VG Name               vg_DATA
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <200.00 GiB
  PE Size               4.00 MiB
  Total PE              51199
  Alloc PE / Size       25599 / <100.00 GiB
  Free  PE / Size       25600 / 100.00 GiB

Expand the logical volume:
# lvextend -l +100%FREE /dev/mapper/vg_DATA-lv_DATA
  Size of logical volume vg_DATA/lv_DATA changed from <100.00 GiB (25599 extents) to <200.00 GiB (51199 extents).

  Logical volume vg_DATA/lv_DATA successfully resized.


Now grow the XFS filesystem:

# xfs_growfs /dev/mapper/vg_DATA-lv_DATA
meta-data=/dev/mapper/vg_DATA-lv_DATA isize=512    agcount=4, agsize=6553344 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=26213376, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=12799, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 26213376 to 52427776

We can see that /DATA has now been extended to 200G.

# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root         20G  5.8G   15G  29% /
devtmpfs                     3.8G     0  3.8G   0% /dev
tmpfs                        3.9G     0  3.9G   0% /dev/shm
tmpfs                        3.9G   13M  3.8G   1% /run
tmpfs                        3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/rhel-opt          15G  781M   15G   6% /opt
/dev/mapper/rhel-tmp          15G   33M   15G   1% /tmp
/dev/mapper/rhel-var          20G  1.6G   19G   8% /var
/dev/mapper/vg_DATA-lv_DATA  200G   33M  200G   1% /DATA
/dev/sda1                   1014M  270M  745M  27% /boot
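The whole rescan-and-grow sequence above can be condensed into one script. A hedged sketch with a dry-run default; the SCSI address, PV and LV paths match this walkthrough and are assumptions for any other host:

```shell
# Hedged sketch of the online-grow sequence: rescan the disk, resize the PV,
# extend the LV, then grow the XFS filesystem. DRY_RUN=1 (default) only
# prints the commands; set DRY_RUN=0 and run as root to apply them.
SCSI_ID="1:0:1:0"                      # DATA disk from the walkthrough
PV="/dev/sdb"
LV="/dev/mapper/vg_DATA-lv_DATA"
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "WOULD RUN: $*"
  else
    "$@"
  fi
}

run sh -c "echo 1 > /sys/class/scsi_device/${SCSI_ID}/device/rescan"
run pvresize "$PV"
run lvextend -l +100%FREE "$LV"
run xfs_growfs "$LV"
```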

Monday, 8 April 2019

How to Enable EVC in a HPE SimpliVity cluster

Please go through below VMware KBs before proceeding:
EVC modes:
CPU compatibility:

Procedure:
1. Disable vSphere HA.
2. Change the DRS mode to Manual.
3. Power off all VMs except the OVCs.
4. Right-click each HPE SimpliVity ESXi node and choose "All SimpliVity Actions -> Shut Down Virtual Controller..." to shut down the OVCs one by one.
5. Enable the EVC mode.
6. Power on all OVCs one by one. Wait 10-15 minutes for each OVC to let all services start up.
7. Log in to any OVC and run svt-federation-show. Confirm that all OVCs are Alive and Connected.
8. Confirm that all ESXi nodes in the cluster can access the same SimpliVity datastores.
9. Power on all VMs.
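The check in step 7 can be scripted once the svt-federation-show output is captured. A minimal sketch; the row format (OVC lines containing "Alive" and "Connected") and the svtcli login are assumptions about your OmniStack version:

```shell
# Hedged sketch: scan captured `svt-federation-show` output and flag any OVC
# row that does not report both "Alive" and "Connected". Adjust the patterns
# to the column layout of your OmniStack version.
check_federation() {
  awk '/OVC/ && !(/Alive/ && /Connected/) { bad=1; print "NOT READY:", $0 }
       END { exit bad }'
}

# Example usage against live output (svtcli user is an assumption):
# ssh svtcli@<ovc-ip> svt-federation-show | check_federation && echo "all OVCs ready"
```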

Saturday, 30 March 2019

HPE SimpliVity 380 Gen10 data at rest encryption (DARE)

The procedure below configures an HPE SimpliVity 380 Gen10 server to support data at rest
encryption by enabling the encryption feature on the HPE Smart Array controller in "Local Mode".
Please note that this feature doesn't require a separate license. However, if you want professional support in case of any issue, a license entitlement is recommended.

⚡WARNING: Smart Array based encryption can only be enabled before the system is deployed. Do not attempt this procedure on a deployed system containing data.

Log in to the iLO of the SimpliVity hardware. I am using an "HPE SimpliVity 380 Gen10" running OmniStack 3.7.7.

Click on "Power Switch" and select "Cold boot". The server will be rebooted.

Press "F10" for "Intelligent Provisioning".

Select "Smart Storage Administrator".




A warning will be displayed that a reboot is required after configuration.




Select the Smart Array controller --> HPE Smart Array P816.



Select "Configure".




Now select "Physical Drives" and "Advanced controller settings".

Select "Encryption Manager", then click on "Perform initial setup".

Select "Setup Type" as "Full setup" and enter a password of your choice. Make a note of this password, as it may be needed during decryption.

Select "Key Management Mode" as "Local Key Management Mode", enter a key of your choice, and click "OK". Make a note of this key, as it may be needed during decryption.

Click "Yes" to proceed further.

Accept terms and conditions.

Now select "Logical devices".

Now select "Convert Plaintext Data to Encrypted Data".

I have selected not to discard the existing data, as this is a first-time deployment. Don't forget to select all logical drives. Click "OK".

Click "Yes".

Click "Finish". Drive encryption will start after a reboot, so reboot the server.

VCSA 6.7 U1 unable to send alert mails to different domain

I recently migrated from a Windows-based vCenter Server 6.0 U3 to VCSA 6.7 U1. With the Windows-based vCenter I was able to receive alerts in my mailbox; post migration, the alerts stopped coming.
After analyzing the logs I understood that my VCSA 6.7 U1 is in XYZ.domain.com, which is not exposed to the internet, while I was trying to send mail to ABC.domain.com, which is exposed. So VCSA 6.7 U1 was unable to resolve the DNS of ABC.domain.com. Below are the changes I made to send mail to the other domain from VCSA 6.7 U1.
Take a backup of "sendmail.cf". Create a new file "service.switch" with the entry given below.
[root@xxxxxxx01 ~]# cat /etc/mail/service.switch
hosts files
Next, search for "O ResolverOptions" in sendmail.cf. The default entry will be "#O ResolverOptions=+AAONLY".
Un-comment this option and update it as below:
O ResolverOptions=-DNSRCH
Restart the sendmail service ("service sendmail restart") and wait for 2 minutes.
With this change, sendmail on VCSA 6.7 stops using the DNS search list when resolving ABC.domain.com, and you will start receiving alert mails at mail@ABC.domain.com.
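The edits above can be scripted. A minimal sketch assuming the stock sendmail layout on the VCSA; the function takes the file paths as arguments so it can be tested off-box:

```shell
# Hedged sketch of the sendmail changes described above: back up sendmail.cf,
# create service.switch, and flip the ResolverOptions line to -DNSRCH.
apply_sendmail_fix() {
  local cf="$1" switch="$2"
  cp -p "$cf" "${cf}.bak"                # keep a backup of sendmail.cf
  printf 'hosts files\n' > "$switch"     # resolve hosts from /etc/hosts first
  sed -i 's/^#O ResolverOptions=+AAONLY/O ResolverOptions=-DNSRCH/' "$cf"
}

# On the VCSA (run as root), then restart the service:
# apply_sendmail_fix /etc/mail/sendmail.cf /etc/mail/service.switch
# service sendmail restart
```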



VM recovery by vSphere Replication on same Site

To set up replication for a VM on the same site:
Log in to the vCenter Web Client and click on “Site Recovery”:



Click on “Open Site Recovery” at the primary vCenter; it will open the Site Recovery console in a new tab:


Click on “View Details” and select the PR vCenter.
Click on the “Replication” tab:

Click on “+New” to set up replication for a VM and select the VM:

Select the datastore:

Select the RPO and “Point in Time” settings:

Enable network compression:

Click on “Finish”. It will start the sync operation.

After the sync, the status will display “OK”.



To recover a VM using vSphere Replication
Select the VM which needs recovery.
During a maintenance activity we have to “Pause” the replication, and post maintenance re-enable it by clicking “Resume”.
To perform a recovery, click on “Recover”:

Option: Synchronize recent changes
Description: Performs a full synchronization of the virtual machine from the source site to the target site before recovering the virtual machine. Selecting this option avoids data loss, but it is only available if the data of the source virtual machine is accessible. You can only select this option if the virtual machine is powered off.

Option: Use latest available data
Description: Recovers the virtual machine by using the data from the most recent replication on the target site, without performing synchronization. Selecting this option results in the loss of any data that has changed since the most recent replication. Select this option if the source virtual machine is inaccessible or if its disks are corrupted.



(Optional) Select the "Power on the virtual machine after recovery" check box.
I am currently disabling the power-on option and proceeding with “Use latest available data”. Click Next.
**Here, points-in-time recovery shows that we have 5 currently retained instances available.
Select the recovery folder and click Next.

Select the target compute resource and click Next.

Click Finish.


**Keep refreshing the browser to get the latest status.

vSphere Replication validates the provided input and recovers the virtual machine. If successful, the virtual machine status changes to Recovered. The virtual machine appears in the inventory of the target site.




vSphere Replication presents the retained instances as standard snapshots after a successful recovery. You can select one of these snapshots to revert the virtual machine. vSphere Replication does not preserve the memory state when you revert to a snapshot.

Post recovery, the VM will be visible in the chosen compute resource:

Go to the snapshot manager and choose the snapshot which you want to revert to as the last known good configuration:

If a replicated virtual machine is attached to a distributed virtual switch and you attempt to perform a recovery in an automated DRS cluster, the recovery operation succeeds but the resulting virtual machine cannot be powered on. To attach it to the correct network, edit the recovered virtual machine settings.
vSphere Replication disconnects virtual machine network adapters to prevent damage in the production network. After recovery, you must connect the virtual network adapters to the correct network. A target host or cluster might lose access to the DVS the virtual machine was configured with at the source site. In this case, manually connect the virtual machine to a network or other DVS to successfully power on the virtual machine.


After a successful recovery, vSphere Replication disables the virtual machine for replication if the source site is still available. When the virtual machine is powered on again, it does not send replication data to the recovery site. To unconfigure the replication, click the Remove icon.

Similarly, this can be done for recovery at the secondary site.