Thursday, May 4, 2023

VMware Dynamic Inventory update creation on Ansible Tower

Follow the steps below to configure a dynamic inventory update on Ansible Tower.


Selecting this credential type enables synchronization of inventory with VMware vCenter.

The automation controller passes VMware vCenter credentials to inventory syncs as environment variables, which correspond to the fields prompted for in the user interface.



VMware credentials require the following inputs:

  • vCenter Host: The vCenter hostname or IP address to connect to.

  • Username: The username to use to connect to vCenter.

  • Password: The password to use to connect to vCenter.
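These credential fields end up as the connection settings for the VMware inventory plugin. As a rough sketch only (the hostname and account are placeholders, and I am assuming the community.vmware.vmware_vm_inventory plugin), a standalone vmware.yml inventory source would look something like this:

# vmware.yml - minimal sketch of a vmware_vm_inventory plugin config
# (placeholder values; the controller normally injects these from the credential)
plugin: community.vmware.vmware_vm_inventory
hostname: vcenter.example.com
username: administrator@vsphere.local
password: "{{ vault_vcenter_password }}"
validate_certs: false
with_tags: false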


  1. To configure a VMware-sourced inventory, select VMware vCenter from the Source field.

  2. The Create Source window expands with the required Credential field. Choose from an existing VMware Credential. For more information, refer to Credentials.

  3. You can optionally specify the verbosity, host filter, enabled variable/value, and update options as described in the main procedure for adding a source.

  4. Use the Source Variables field to override variables used by the vmware_inventory inventory plugin. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For a detailed description of these variables, view the vmware_inventory inventory plugin.

Starting with Ansible 2.9, VMware properties have changed from lower case to camelCase. The controller provides aliases for the top-level keys, but lower case keys in nested properties have been discontinued. For a list of valid and supported properties starting with Ansible 2.9, refer to virtual machine attributes in the VMware dynamic inventory plugin.
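For example, a sketch of Source Variables that overrides the plugin's hostnames, properties, and grouping might look like the following (the property paths use the camelCase form noted above; adjust them to your environment):

---
# Example Source Variables (YAML) - a sketch, not an exhaustive list of plugin options
hostnames:
  - config.name
properties:
  - name
  - config.guestId
  - summary.runtime.powerState
keyed_groups:
  - key: config.guestId
    prefix: guest
  - key: summary.runtime.powerState
    prefix: power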



 


Wednesday, May 3, 2023

Ansible playbook to collect VM names as a list

Below is the GitHub link for the same:

https://github.com/area51coder/ansible_group_issues

---
- name: VMware VM Inventory Generator
  hosts: localhost
  gather_facts: false
  tasks:
    - name: collect VMs in specific folder
      vmware_vm_facts:
        validate_certs: false
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_passwd }}"
        # folder: "{{ vcenter_folder }}"
      delegate_to: localhost
      register: vm_info

    - name: print VM info
      debug:
        msg: "{{ item['guest_name'] }}"
      with_items: "{{ vm_info.virtual_machines }}"

    - name: Show virtual machines guest_name in job log
      ansible.builtin.debug:
        msg: "{{ item['guest_name'] }}"
      with_items: "{{ vm_info.virtual_machines }}"

    - name: Print virtual machines guest_name in output file
      ansible.builtin.copy:  # should really use template here
        content: |
          Virtual Machine Names
          {% for vm in vm_info.virtual_machines %}
          - {{ vm['guest_name'] }}
          {% endfor %}
        dest: /etc/ansible/reports/virtual_machine_names.txt
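The playbook above expects vcenter_hostname, vcenter_username, and vcenter_passwd to be defined somewhere. As a minimal sketch with placeholder values, you could keep them in a vars file (here hypothetically called vcenter_vars.yml) and pass it as extra variables or attach it to the job:

# vcenter_vars.yml - placeholder values for the variables the playbook expects
# (better to keep the password in Ansible Vault or a Tower credential/survey)
vcenter_hostname: vcenter.example.com
vcenter_username: administrator@vsphere.local
vcenter_passwd: ChangeMe123!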


NetApp SnapMirror Synchronous and Asynchronous

 


When the replication is synchronous, the source host sends a write request to the source storage system. The source storage system then sends a replication request, forwarding the write to the destination storage system.

 

The destination storage system sends an acknowledgement back to the source storage system, and the source storage system sends the acknowledgement back to the client. With synchronous replication, the data is written to both the source and destination storage before an acknowledgement is sent back to the client, so there cannot be much delay in writing the data to both locations.

 

When you use asynchronous replication, the source host sends a write request to its source storage system, and the source storage system immediately returns an acknowledgement to the client.

 

Then, based on a predetermined schedule that you decide, for example, once every 10 minutes, the source sends all of the data written to it in the previous 10 minutes to the target storage. The target storage then sends an acknowledgement back to the source storage system.

 

Asynchronous replication breaks this into two separate operations: with synchronous replication, the write goes to both the source and the target storage before the acknowledgement returns; with asynchronous replication, the write lands on the source storage, which immediately sends the acknowledgement back to the client.

 

Later, on that schedule and as a separate operation, all of the accumulated writes are sent to the target storage, and the target storage returns an acknowledgement.

 

With asynchronous replication, the source storage sends an acknowledgement immediately back to the client host system, so there is no time or distance limitation; the application on the source will not time out, because the acknowledgement is returned immediately.
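In ONTAP, the difference between the two modes comes down to the SnapMirror policy on the relationship. Below is a hedged Ansible sketch using the netapp.ontap.na_ontap_snapmirror module (SVM names, volume names, and connection variables are placeholders; synchronous policies such as Sync require an ONTAP release that supports SnapMirror Synchronous):

- name: Create an asynchronous SnapMirror relationship (updated every 10 minutes)
  netapp.ontap.na_ontap_snapmirror:
    state: present
    source_path: "svm_src:vol1"
    destination_path: "svm_dst:vol1_dr"
    policy: MirrorAllSnapshots   # asynchronous mirror policy
    schedule: 10min              # built-in ONTAP cron schedule
    hostname: "{{ dest_cluster_mgmt }}"
    username: "{{ ontap_user }}"
    password: "{{ ontap_pass }}"
    https: true
    validate_certs: false

For synchronous replication the task has the same shape, but with policy: Sync (or StrictSync) and no schedule, since every write is replicated as it happens.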

NetApp Logical Interfaces (LIFs)

NetApp Cluster Mode.

What are LIFs, and what types of LIFs are there?

NetApp Logical Interfaces are where our IP addresses (or WWPNs for Fibre Channel and FCoE) live in NetApp ONTAP systems. Having the IP address applied at the Logical Interface level gives us more flexibility than would be possible if it was applied to a physical port. It allows for the use of Interface Groups and VLANs, and for IP addresses to migrate between different physical ports in case of failure or maintenance.

Multiple LIFs can be placed on the same port, interface group, or VLAN. LIFs can move to other nodes non-disruptively: we can migrate a LIF to another port as administrators, or it can move to a different port because of a failure.

Each individual LIF is owned by and dedicated to a single SVM.

There are a few different types of NetApp Logical Interfaces:

 

Node Management LIF - Each node has one LIF which an administrator can connect to for node management. This LIF never leaves its node.

Cluster Management LIF - Each cluster also has a LIF which an administrator can connect to in order to manage the entire cluster. The cluster management LIF can move between nodes.

Cluster LIF - Two or more cluster LIFs exist per node; they are homed on the cluster interconnect physical ports and carry traffic between nodes in the cluster.

Data LIF - Data LIFs serve client access over the NAS and SAN protocols.

Intercluster LIF - For SnapMirror and/or SnapVault replication, intercluster LIFs must be created on each node.
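As an example of the data LIF case, a hedged Ansible sketch using the netapp.ontap.na_ontap_interface module to create a NAS data LIF might look roughly like this (node, port, SVM, and address values are placeholders):

- name: Create a data LIF for NAS client access (sketch)
  netapp.ontap.na_ontap_interface:
    state: present
    interface_name: svm1_data_lif1
    vserver: svm1                         # the SVM that owns this LIF
    home_node: cluster1-01
    home_port: e0d
    address: 192.168.29.50
    netmask: 255.255.255.0
    service_policy: default-data-files    # NAS data service policy
    hostname: "{{ cluster_mgmt_lif }}"
    username: "{{ ontap_user }}"
    password: "{{ ontap_pass }}"
    https: true
    validate_certs: false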

 

 

 

How to run an Ansible playbook on VMware vCenter

 


We have the following vCenter IP:

192.168.29.195

First, we will create a host inventory for the vCenter IP on Ansible Tower, as shown in the snapshot below.











Now we will create a project to collect an inventory report of vCenter.

I already have the playbook ready and saved at this server path:

Credentials are mentioned in the playbook itself; you can also add them through a survey, or use a secret.yml file.

/var/lib/awx/projects/vcenter





Now we will create a job template and select the YAML file mentioned below in Tower to execute it.




You can also use the GitHub link below to get the playbook:

https://github.com/area51coder/vmwareinventory

Reports will be generated and saved, in HTML format, at the location given in the playbook.
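The playbook in the repository is not reproduced here, but as a rough sketch of how such an HTML report can be written (the destination path is a placeholder, and the vm_info variable mirrors the inventory playbook from the May 3 post; adjust names to match the repo):

- name: Write the VM inventory report as HTML (sketch, not the repo playbook)
  ansible.builtin.copy:
    content: |
      <html><body>
      <h1>vCenter VM Inventory</h1>
      <ul>
      {% for vm in vm_info.virtual_machines %}
      <li>{{ vm['guest_name'] }} ({{ vm['power_state'] }})</li>
      {% endfor %}
      </ul>
      </body></html>
    dest: /var/lib/awx/projects/vcenter/vm_report.html
  delegate_to: localhost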




Tuesday, May 2, 2023

"Signing certificate is not valid" error in VCSA 6.5.x, 6.7.x, or vCenter Server 7.0.x

 In an environment with a vCenter Server Appliance (VCSA) 6.5.x, 6.7.x or vCenter Server 7.0.x, you experience these symptoms:

  • The vmware-vpxd service fails to start.

  • Logging in to the vSphere Client fails with the error:

HTTP Status 400 – Bad Request Message BadRequest, Signing certificate is not valid

 

To resolve the "Signing certificate is not valid" error:

1. Download the fixsts.sh script attached to the VMware KB article and upload it to the /tmp folder on the impacted PSC or vCenter Server with embedded PSC.

2. If the SCP client's connection to upload to the vCenter is rejected, run this from an SSH session on the vCenter:

# chsh -s /bin/bash

3. Connect to the PSC or vCenter Server with an SSH session, if you have not already done so in step 2.

4. Navigate to the /tmp directory:

# cd /tmp

5. Make the file executable:

# chmod +x fixsts.sh

6. Run the script:

# ./fixsts.sh

7. Restart services on all vCenter Servers and/or PSCs in your SSO domain by using the command below:

# service-control --stop --all && service-control --start --all


Note: Restart of services will fail if there are other expired certificates like Machine SSL or Solution User. Proceed with the next step to identify and replace expired certificates.

The following one-liner can determine other expired certificates for the vCenter Server Appliance:  

for i in $(/usr/lib/vmware-vmafd/bin/vecs-cli store list); do echo STORE $i; sudo /usr/lib/vmware-vmafd/bin/vecs-cli entry list --store $i --text | egrep "Alias|Not After"; done

If the above does not work, run the command below from the vCenter shell mode:

 

/usr/lib/vmware-vmca/bin/certificate-manager

 

Choose option 8 to reset all certificates.


Resetting root password in vCenter Server Appliance 6.5 / 6.7 / 7.x


  • The root account password has been lost or forgotten

Monday, May 1, 2023

How to identify disk number or ID in NetApp filer.

 


Disk Naming 

Let's cover the naming convention for our disks. The controllers will be reading and writing data to those individual disks, so they need a way to identify each one. The naming convention is:

stack_id.shelf_id.bay_number

For example, if a disk is in stack ID 1, shelf ID 0, and bay 23, then whether we view it in System Manager or at the command line, that disk is identified as 1.0.23. The bay next to it would be 1.0.22.

 

 

Unfortunately, it depends on your Data ONTAP version. With Data ONTAP 8.2.x (and earlier), drive names have different formats depending on the connection type (FC-AL / SAS). Each drive also has a universally unique identifier (UUID) that distinguishes it from every other drive in your cluster.

Each disk is named with its node name at the beginning. For example, node1:0b.1.20 (node1 – node name, 0 – slot, b – port, 1 – shelf ID, 20 – bay).

In other words, for SAS drives the naming convention is <node>:<slot><port>.<shelfID>.<bay>

For SAS drives in a multi-disk shelf, the naming convention is <node>:<slot><port>.<shelfID>.<bay>L<position>, where <position> is either 1 or 2, because in this shelf type two disks sit inside a single bay.

For FC-AL drives, the naming convention is <node>:<slot><port>.<loopID>
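To see these disk names from Ansible rather than the ONTAP CLI, here is a hedged sketch with the netapp.ontap.na_ontap_rest_info module (connection variables are placeholders, and I am assuming storage/disks is among the gather_subset values supported by your collection version):

- name: List disks and their stack.shelf.bay style names (sketch)
  netapp.ontap.na_ontap_rest_info:
    gather_subset:
      - storage/disks
    hostname: "{{ cluster_mgmt_lif }}"
    username: "{{ ontap_user }}"
    password: "{{ ontap_pass }}"
    validate_certs: false
  register: disk_info
  delegate_to: localhost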