Thursday, November 16, 2023

Command to check NetApp NVMEM Battery

 

From the cluster console:

cnaabcd1::> system node environment sensors show -name "Bat Present"|"Bat Volt"

Node Sensor                 State Value/Units Crit-Low Warn-Low Warn-Hi Crit-Hi
---- --------------------- ------ ----------- -------- -------- ------- -------
cnaabcd-01
     Bat Present           normal     PRESENT
     Bat Volt              normal     8100 mV      5500     5600    8500    8600
cnaabcd-02
     Bat Present           normal     PRESENT
     Bat Volt              normal     8100 mV      5500     5600    8500    8600
4 entries were displayed.


From SP (Service Processor) mode:

system battery show


VMAX LUN migration from one pool to another pool

 


symmigrate -sid 123 -name migration4 -f tdev3 -tgt_pool -pool F15_R5_01_1B validate -nop
symmigrate -sid 123 -name migration4 -f tdev3 -tgt_pool -pool F15_R5_01_1B establish -v
symmigrate -sid 123 -name migration4 query
symmigrate -sid 123 -name migration4 terminate


tdev3 and tdev4 are the device files listing the devices that need to be migrated (one batch per file).
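A symmigrate device file is plain text with one Symmetrix device ID per line. The IDs below are hypothetical placeholders, not devices from the array above:

```
0ABC
0ABD
0ABE
```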




symmigrate -sid 123 -name migration5 -f tdev4 -tgt_pool -pool F15_R5_01_1B validate -nop
symmigrate -sid 123 -name migration5 -f tdev4 -tgt_pool -pool F15_R5_01_1B establish -v
symmigrate -sid 123 -name migration5 query
symmigrate -sid 123 -name migration5 terminate

Thursday, August 24, 2023

PowerShell PowerCLI script to delete VMs powered off for 30+ days from vCenter

#*************************************************************************************************************
#      Script Name :   VMPoweredOff30DaysAgo.ps1
#      Purpose     :   Report VMs powered off 30+ days ago, then optionally delete them
#*************************************************************************************************************

# Legacy snap-in load (not needed with the modern PowerCLI module):
#If (!(Get-PSSnapin | Where-Object {$_.Name -eq "VMware.VimAutomation.Core"})) { Add-PSSnapin VMware.VimAutomation.Core }

$VCServer = Read-Host 'Enter VC Server name'
$vcUsername = Read-Host 'Enter user name'
$vcPassword = Read-Host 'Enter password' -AsSecureString
$vcCredential = New-Object System.Management.Automation.PSCredential ($vcUsername, $vcPassword)

$LogFile = "VMPoweredOff_" + (Get-Date -UFormat "%d-%b-%Y-%H-%M") + ".csv"

Write-Host "Connecting to $VCServer..." -ForegroundColor Yellow -NoNewline
# Keep the connection object; piping to Out-Null here would discard it and break the success check.
$connection = Connect-VIServer -Server $VCServer -Credential $vcCredential -ErrorAction SilentlyContinue -WarningAction 0

$Global:Report = @()

If ($connection)
{
    Write-Host "Connected" -ForegroundColor Green

    $PoweredOffAge = (Get-Date).AddDays(-30)
    $Output = @{}
    $PoweredOffVMs = Get-VM | Where-Object {$_.PowerState -eq "PoweredOff"}

    # Only events older than 30 days; VMs powered off more recently are excluded.
    $EventsLog = Get-VIEvent -Entity $PoweredOffVMs -Finish $PoweredOffAge -MaxSamples ([int]::MaxValue) |
        Where-Object {$_.FullFormattedMessage -like "*is powered off"}

    If ($EventsLog)
    {
        # Keep the most recent power-off time per VM.
        $EventsLog | ForEach-Object {
            if ($Output[$_.Vm.Name] -lt $_.CreatedTime)
            {
                $Output[$_.Vm.Name] = $_.CreatedTime
            }
        }
    }

    $Result = $Output.GetEnumerator() |
        Select-Object @{N="VM";E={$_.Key}}, @{N="Powered Off Date";E={$_.Value}}

    If ($Result)
    {
        $Result | Export-Csv -NoTypeInformation $LogFile
    }
    Else
    {
        "No VMs powered off in the last 30 days"
    }

    $confirmation = Read-Host 'Check the VM list CSV file and confirm VM deletion - Yes/No'
    if ($confirmation -eq "Yes")
    {
        $vmlist = Get-Content -Path "C:\RY\ansible lab\Powershell_lab\vmdelete30days.txt"
        Remove-VM -VM $vmlist -DeletePermanently -Confirm:$true
    }

    Disconnect-VIServer -Server $VCServer -Confirm:$false
}
Else
{
    Write-Host "Error connecting to $VCServer; try again with the correct user name and password!" -ForegroundColor Red
}


Thursday, May 4, 2023

VMware Dynamic Inventory update creation on Ansible Tower

 Follow the steps below to configure a dynamic inventory update on Ansible Tower.


Selecting this credential type enables synchronization of inventory with VMware vCenter.

The automation controller maps VMware vCenter credentials to environment variables; they are prompted as fields in the user interface.

VMware credentials require the following inputs:

  • vCenter Host: The vCenter hostname or IP address to connect to.

  • Username: The username to use to connect to vCenter.

  • Password: The password to use to connect to vCenter.


  1. To configure a VMware-sourced inventory, select VMware vCenter from the Source field.

  2. The Create Source window expands with the required Credential field. Choose from an existing VMware Credential. For more information, refer to Credentials.

  3. You can optionally specify the verbosity, host filter, enabled variable/value, and update options as described in the main procedure for adding a source.

  4. Use the Source Variables field to override variables used by the vmware_inventory inventory plugin. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For a detailed description of these variables, view the vmware_inventory inventory plugin.

Starting with Ansible 2.9, VMware properties have changed from lowercase to camelCase. The controller provides aliases for the top-level keys, but lowercase keys in nested properties have been discontinued. For a list of valid and supported properties starting with Ansible 2.9, refer to the virtual machine attributes in the VMware dynamic inventory plugin.
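As a sketch of what the Source Variables field might contain, the options below are standard vmware_inventory plugin options, but the specific property paths and group prefix are illustrative assumptions:

```yaml
# Example Source Variables (YAML) for the vmware_inventory plugin.
hostnames:
  - config.name
properties:
  - name
  - config.guestId
  - summary.runtime.powerState
# Group hosts by power state, e.g. power_poweredOn / power_poweredOff
keyed_groups:
  - key: summary.runtime.powerState
    prefix: power
    separator: "_"
```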



 


Wednesday, May 3, 2023

Ansible playbook to collect VM names as a list

 Below is the GitHub link for the same.

https://github.com/area51coder/ansible_group_issues

---
- name: VMware VM Inventory Generator
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Collect VMs in a specific folder
      # vmware_vm_facts was renamed to vmware_vm_info in later community.vmware releases
      vmware_vm_facts:
        validate_certs: false
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_passwd }}"
        # folder: "{{ vcenter_folder }}"
      delegate_to: localhost
      register: vm_info

    - name: Print VM info
      debug:
        msg: "{{ item['guest_name'] }}"
      with_items: "{{ vm_info.virtual_machines }}"

    - name: Show virtual machines guest_name in job log
      ansible.builtin.debug:
        msg: "{{ item['guest_name'] }}"
      with_items: "{{ vm_info.virtual_machines }}"

    - name: Print virtual machines guest_name in output file
      ansible.builtin.copy: # should really use template here
        content: |
          Virtual Machine Names
          {% for vm in vm_info.virtual_machines %}
          - {{ vm['guest_name'] }}
          {% endfor %}
        dest: /etc/ansible/reports/virtual_machine_names.txt


NetApp SnapMirror Synchronous and Asynchronous

 


With synchronous replication, the source host sends a write request to the source storage system. The source storage system forwards the write to the destination storage system, the destination sends an acknowledgement back to the source, and only then does the source storage acknowledge the write to the client. Because the data must be written to both the source and target storage before the client is acknowledged, there cannot be much delay between the two locations.

With asynchronous replication, the source host sends a write request to its source storage system, and the source storage immediately returns an acknowledgement to the client. Then, on a predetermined schedule that you decide, for example once every 10 minutes, the source sends all of the data written in the previous interval to the target storage, and the target sends an acknowledgement back to the source storage system.

Asynchronous replication therefore breaks the write into two separate operations: the client write, which is acknowledged immediately, and the scheduled transfer to the target, which is acknowledged later. Because the client acknowledgement is immediate, there is no time or distance limitation, and the application on the source will not time out.
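The difference in acknowledgement timing can be sketched with a toy model. This is purely illustrative; the class and method names are made up and not NetApp APIs:

```python
import time

class Replicator:
    """Toy model contrasting synchronous and asynchronous replication acks."""

    def __init__(self, replica_delay=0.01):
        self.replica_delay = replica_delay  # simulated link latency to the target
        self.source = []                    # writes committed on the source system
        self.target = []                    # writes committed on the target system
        self.pending = []                   # async writes awaiting the next transfer

    def write_sync(self, data):
        # Synchronous: commit to source AND target before acking the client.
        self.source.append(data)
        time.sleep(self.replica_delay)      # wait out the round trip to the target
        self.target.append(data)
        return "ack"                        # client ack only after both commits

    def write_async(self, data):
        # Asynchronous: commit to source, ack immediately, replicate later.
        self.source.append(data)
        self.pending.append(data)
        return "ack"                        # client ack before the target has the data

    def scheduled_transfer(self):
        # Runs on a schedule (e.g. every 10 minutes) and drains pending writes.
        self.target.extend(self.pending)
        self.pending.clear()

r = Replicator()
r.write_sync("block-1")    # target already has block-1 when the ack returns
r.write_async("block-2")   # ack returns, but block-2 is only on the source
assert "block-2" not in r.target
r.scheduled_transfer()     # the scheduled update catches the target up
assert r.target == ["block-1", "block-2"]
```

The key difference is visible in where the `return "ack"` sits relative to the target commit: before it in the async path, after it in the sync path.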

NetApp Logical Interfaces (LIFs)

NetApp Cluster Mode.

What are LIFs, and what types of LIFs exist?

NetApp Logical Interfaces are where our IP addresses (or WWPNs for Fibre Channel and FCoE) live in NetApp ONTAP systems. Having the IP address applied at the Logical Interface level gives us more flexibility than would be possible if it was applied to a physical port. It allows for the use of Interface Groups and VLANs, and for IP addresses to migrate between different physical ports in case of failure or maintenance.

Multiple LIFs can be placed on the same port, interface group, or VLAN. LIFs can move to other nodes non-disruptively: an administrator can migrate a LIF to another port, or it can move to a different port because of a failure.

Each individual LIF is owned by and dedicated to a single SVM.

There are a few different types of NetApp Logical Interfaces:

 

Node Management LIF - Each node has one LIF which an administrator can connect to for node management. This LIF never leaves its node.

Cluster Management LIF - Each cluster also has a LIF which an administrator can connect to in order to manage the entire cluster. The cluster management LIF can move between different nodes.

Cluster LIF - Two or more cluster LIFs exist per node, homed on the cluster interconnect physical ports. These carry traffic between nodes in the cluster.

Data LIF - Data LIFs serve client access over the NAS and SAN protocols.

Intercluster LIF - For SnapMirror and/or SnapVault replication, intercluster LIFs must be created on each node.
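From the ONTAP clustershell, LIFs can be listed, migrated, and reverted to their home ports. A sketch, where the SVM, LIF, node, and port names are hypothetical:

```
cluster1::> network interface show
cluster1::> network interface migrate -vserver svm1 -lif data_lif1 -destination-node cluster1-02 -destination-port e0d
cluster1::> network interface revert -vserver svm1 -lif data_lif1
```

The migrate command moves a LIF to another port non-disruptively; revert sends it back to its home port.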