Inventory plugin (DCNE-302) #721
base: master
Conversation
Report all devices which are part of the fabric.
Hi @p3ck, some initial comments from my end. I am not familiar with inventory plugin development yet, so I might ask a few more questions. In parallel I will also look into https://docs.ansible.com/ansible/latest/dev_guide/developing_inventory.html#developing-inventory and the code at https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/inventory/__init__.py. This will likely take a bit more time on my end.
plugins/inventory/aci.py (outdated)
---
plugin: cisco.aci.aci
host: 192.168.1.90
username: admin
Was doing some local testing with your code and the normal arguments from the aci collection seem to work as expected.
I do however have an additional question regarding plugin usage and the arguments exposed/used for authentication. For our normal modules we also allow users to specify the HTTPAPI connection plugin to limit the number of login requests sent. This lets a user set some additional authentication arguments in the inventory so that they do not need to be specified in each task. See the explanation in the repository: https://github.com/CiscoDevNet/ansible-aci/blob/master/docs/optimizing.md#using-the-aci-httpapi-plugin.
Is there a way to leverage the HTTPAPI plugin for this inventory plugin as well? If not, is there a way to expose these arguments as valid inputs for this plugin, for instance by updating the aliases from aci_argument_spec in this plugin?
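For reference, this is roughly the setup the optimizing doc describes for task-based modules; a sketch of the standard httpapi connection variables in a static inventory (values are placeholders, and these are connection variables rather than options of this new plugin):
# Hypothetical static inventory snippet, not part of this PR.
apic:
  hosts:
    apic1:
      ansible_host: 192.168.1.90
  vars:
    ansible_connection: ansible.netcommon.httpapi
    ansible_network_os: cisco.aci.aci
    ansible_user: admin
    ansible_password: SomeSecretPassword
    ansible_httpapi_use_ssl: true
    ansible_httpapi_validate_certs: false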
I am just looking at this documentation to understand how it works.
I do see that I probably need to rename this module though. I see this:
ansible_network_os=cisco.aci.aci
Do you have an opinion on what this inventory plugin should be called? cisco.aci.aci_inv?
I personally prefer cisco.aci.aci_inventory or cisco.aci.inventory if it needs to be changed.
OK, I will use cisco.aci.aci_inventory.
I did some quick investigation and it looks like ansible_connection is not used by the inventory plugin itself, but we can generate an inventory that sets it for the devices found:
(venv-3.9) [root@okd-master-0 aci-inv]# cat connection_cisco_aci.yml
---
plugin: cisco.aci.aci_inventory
host: 192.168.1.90
username: admin
#password: OR you can use env variable ACI_PASSWORD
validate_certs: false
compose:
ansible_connection: "'ansible.netcommon.httpapi'"
ansible_network_os: "'cisco.aci.aci'"
ansible_host: "'192.168.1.90'"
keyed_groups:
- prefix: role
key: role
(venv-3.9) [root@okd-master-0 aci-inv]# ansible-inventory -i connection_cisco_aci.yml --list -v
No config file found; using defaults
Using inventory plugin 'ansible_collections.cisco.aci.plugins.inventory.aci' to process inventory source '/root/aci-inv/connection_cisco_aci.yml'
{
"_meta": {
"hostvars": {
"TEP-1-101": {
"address": "10.0.216.64",
"ansible_connection": "ansible.netcommon.httpapi",
"ansible_host": "192.168.1.90",
"ansible_network_os": "cisco.aci.aci",
"bootstrapState": "done",
"childAction": "",
"clusterTimeDiff": "-5",
"configIssues": "",
"controlPlaneMTU": "9000",
"currentTime": "2025-02-04T15:44:36.025+00:00",
"dn": "topology/pod-1/node-101/sys",
"enforceSubnetCheck": "no",
"etepAddr": "0.0.0.0",
"fabricDomain": "ACI Fabric1",
"fabricId": "1",
"fabricMAC": "00:22:BD:F8:19:FF",
"id": "101",
"inbMgmtAddr": "0.0.0.0",
"inbMgmtAddr6": "::",
"inbMgmtAddr6Mask": "0",
"inbMgmtAddrMask": "0",
"inbMgmtGateway": "0.0.0.0",
"inbMgmtGateway6": "::",
"lastRebootTime": "2025-01-23T23:00:38.276+00:00",
"lastResetReason": "unknown",
"lcOwn": "local",
"modTs": "2025-01-30T14:50:54.106+00:00",
"mode": "unspecified",
"monPolDn": "uni/fabric/monfab-default",
"name": "TEP-1-101",
"nameAlias": "",
"nodeType": "unspecified",
"oobMgmtAddr": "0.0.0.0",
"oobMgmtAddr6": "::",
"oobMgmtAddr6Mask": "0",
"oobMgmtAddrMask": "0",
"oobMgmtGateway": "0.0.0.0",
"oobMgmtGateway6": "::",
"podId": "1",
"remoteNetworkId": "0",
"remoteNode": "no",
"rlOperPodId": "1",
"rlRoutableMode": "no",
"rldirectMode": "no",
"role": "leaf",
"serial": "TEP-1-101",
"serverType": "unspecified",
"siteId": "0",
"state": "in-service",
"status": "",
"systemUpTime": "11:16:43:58.000",
"tepPool": "10.0.0.0/16",
"unicastXrEpLearnDisable": "no",
"version": "simsw-5.2(5c)",
"virtualMode": "no"
},
"TEP-1-103": {
"address": "10.0.216.65",
"ansible_connection": "ansible.netcommon.httpapi",
"ansible_host": "192.168.1.90",
"ansible_network_os": "cisco.aci.aci",
"bootstrapState": "done",
"childAction": "",
"clusterTimeDiff": "-8",
"configIssues": "",
"controlPlaneMTU": "9000",
"currentTime": "2025-02-04T15:44:36.028+00:00",
"dn": "topology/pod-1/node-103/sys",
"enforceSubnetCheck": "no",
"etepAddr": "0.0.0.0",
"fabricDomain": "ACI Fabric1",
"fabricId": "1",
"fabricMAC": "00:22:BD:F8:19:FF",
"id": "103",
"inbMgmtAddr": "0.0.0.0",
"inbMgmtAddr6": "::",
"inbMgmtAddr6Mask": "0",
"inbMgmtAddrMask": "0",
"inbMgmtGateway": "0.0.0.0",
"inbMgmtGateway6": "::",
"lastRebootTime": "2025-01-23T23:00:38.435+00:00",
"lastResetReason": "unknown",
"lcOwn": "local",
"modTs": "2025-01-30T14:55:59.013+00:00",
"mode": "unspecified",
"monPolDn": "uni/fabric/monfab-default",
"name": "TEP-1-103",
"nameAlias": "",
"nodeType": "unspecified",
"oobMgmtAddr": "0.0.0.0",
"oobMgmtAddr6": "::",
"oobMgmtAddr6Mask": "0",
"oobMgmtAddrMask": "0",
"oobMgmtGateway": "0.0.0.0",
"oobMgmtGateway6": "::",
"podId": "1",
"remoteNetworkId": "0",
"remoteNode": "no",
"rlOperPodId": "1",
"rlRoutableMode": "yes",
"rldirectMode": "yes",
"role": "spine",
"serial": "TEP-1-103",
"serverType": "unspecified",
"siteId": "0",
"state": "in-service",
"status": "",
"systemUpTime": "11:16:43:58.000",
"tepPool": "10.0.0.0/16",
"unicastXrEpLearnDisable": "no",
"version": "simsw-5.2(5c)",
"virtualMode": "no"
},
"apic1": {
"address": "10.0.0.1",
"ansible_connection": "ansible.netcommon.httpapi",
"ansible_host": "192.168.1.90",
"ansible_network_os": "cisco.aci.aci",
"bootstrapState": "none",
"childAction": "",
"clusterTimeDiff": "0",
"configIssues": "",
"controlPlaneMTU": "9000",
"currentTime": "2025-02-04T15:44:36.021+00:00",
"dn": "topology/pod-1/node-1/sys",
"enforceSubnetCheck": "no",
"etepAddr": "0.0.0.0",
"fabricDomain": "ACI Fabric1",
"fabricId": "1",
"fabricMAC": "00:22:BD:F8:19:FF",
"id": "1",
"inbMgmtAddr": "192.168.11.1",
"inbMgmtAddr6": "fc00::1",
"inbMgmtAddr6Mask": "0",
"inbMgmtAddrMask": "24",
"inbMgmtGateway": "192.168.11.254",
"inbMgmtGateway6": "::",
"lastRebootTime": "2025-01-23T23:00:38.028+00:00",
"lastResetReason": "unknown",
"lcOwn": "local",
"modTs": "2025-01-23T23:04:21.394+00:00",
"mode": "unspecified",
"monPolDn": "uni/fabric/monfab-default",
"name": "apic1",
"nameAlias": "",
"nodeType": "unspecified",
"oobMgmtAddr": "192.168.1.90",
"oobMgmtAddr6": "fe80::200:ff:fe0:0",
"oobMgmtAddr6Mask": "0",
"oobMgmtAddrMask": "24",
"oobMgmtGateway": "192.168.1.3",
"oobMgmtGateway6": "2001:420:28e:2020:acc:68ff:fe28:b540",
"podId": "1",
"remoteNetworkId": "0",
"remoteNode": "no",
"rlOperPodId": "0",
"rlRoutableMode": "no",
"rldirectMode": "no",
"role": "controller",
"serial": "TEP-1-1",
"serverType": "unspecified",
"siteId": "0",
"state": "in-service",
"status": "",
"systemUpTime": "11:16:43:58.000",
"tepPool": "0.0.0.0",
"unicastXrEpLearnDisable": "no",
"version": "5.2(5c)",
"virtualMode": "no"
}
}
},
"all": {
"children": [
"ungrouped",
"role_controller",
"role_leaf",
"role_spine"
]
},
"role_controller": {
"hosts": [
"apic1"
]
},
"role_leaf": {
"hosts": [
"TEP-1-101"
]
},
"role_spine": {
"hosts": [
"TEP-1-103"
]
}
}
OK, thanks. Should we add compose to the example?
I am happy to. Is there an additional argument that should be set to make it a fully working example, or is the example I provided enough?
For me the provided example was clear, so I would suggest adding two examples with a comment above each: 1. a minimal version, 2. a compose example. I think that would make it easier for users to understand compose.
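Roughly something like this, based on the config shown earlier in this thread (option names are assumed to match the final plugin, values are placeholders):
# 1. Minimal example: report all devices which are part of the fabric.
---
plugin: cisco.aci.aci_inventory
host: 192.168.1.90
username: admin
validate_certs: false

# 2. Compose example: set connection variables and group devices by role.
---
plugin: cisco.aci.aci_inventory
host: 192.168.1.90
username: admin
validate_certs: false
compose:
  ansible_connection: "'ansible.netcommon.httpapi'"
  ansible_network_os: "'cisco.aci.aci'"
  ansible_host: "'192.168.1.90'"
keyed_groups:
  - prefix: role
    key: role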
Renamed the module to cisco.aci.aci_inventory and removed duplicate arg specs that we already get from including the constructed doc_fragment.
Also, if you're like me and wondering how to see the Ansible docs for an inventory plugin, you have to change the type...
I didn't realize this at first. :-)
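For anyone else wondering, that means passing the plugin type to ansible-doc, something like this (assuming the collection with the renamed plugin is installed):
ansible-doc -t inventory cisco.aci.aci_inventory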
Please fix the sanity issues.
LGTM, waiting on others to review for additional comments and changes.
LGTM as well
@p3ck How is this plugin different from the information we're able to query using the below modules? As @akinross mentioned, there's also a dedicated module called aci_system for the same purpose. Can this plugin be leveraged in any way outside of Ansible? Thanks!
---
- name: Query topSystem class in ACI
  hosts: aci
  gather_facts: no
  tasks:
    - name: Query topSystem class
      cisco.aci.aci_rest:
        host: "{{ ansible_host }}"
        username: "{{ aci_username }}"
        password: "{{ aci_password }}"
        path: "/api/class/topSystem.json"
        method: get
        validate_certs: false
      register: top_system_response

    - name: Display topSystem information
      debug:
        var: top_system_response.response
OR
    - name: Query all controllers system information
      cisco.aci.aci_system:
        host: apic
        username: userName
        password: somePassword
        validate_certs: false
        state: query
Just one very minor change.
Co-authored-by: Samita B <[email protected]>
The difference is that it's available as an inventory source. The request for this plugin came from a need for inventory tracking in Ansible Automation Platform. HTH
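As a rough illustration of what that enables (the group names and hostvars come from the sample output above; the play itself is just a sketch):
---
# Illustrative playbook: the hosts and role_* groups are produced by the
# inventory plugin, so no hand-maintained inventory file is needed.
- name: Report fabric devices discovered by the inventory plugin
  hosts: "role_controller:role_spine:role_leaf"
  gather_facts: false
  tasks:
    - name: Show basic device facts collected by the plugin
      ansible.builtin.debug:
        msg: "{{ inventory_hostname }} role={{ role }} serial={{ serial }} version={{ version }} tep={{ address }}"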
LGTM
LGTM
LGTM!
LGTM
Report all devices which are part of the fabric.
The example shows how to have all spines, leafs, and controllers in groups.
fixes #720
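A quick way to verify the grouping, using the config file from the discussion above, is to run ansible-inventory with --graph; it should list role_controller, role_leaf, and role_spine under all:
ansible-inventory -i connection_cisco_aci.yml --graph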