It has been almost a year since I was trained on ACI, so I would like to refresh my memory before it is completely wiped off 🙂
My first impression: I found the product quite interesting because of its simplification of networking configuration, application-driven policy, micro-segmentation, multi-tenancy, and automation…
In my view, it takes a little time for core networking folks to understand, but someone with a background in both core networking and virtualization/cloud can grasp the concepts well and integrate ACI with different hypervisors with ease (VMware vSwitch, Hyper-V, Xen, and OpenStack, including the Neutron component).
What is ACI???
Application Centric Infrastructure (ACI) in the data center is a holistic architecture with centralized automation and policy-driven application profiles. ACI delivers software flexibility with the scalability of hardware performance.
Key characteristics of ACI include:
- Simplified automation by an application-driven policy model
- Centralized visibility with real-time, application health monitoring
- Scalable performance and multi-tenancy in hardware
ACI works on eVXLAN, an extension of VXLAN.
In simple terms, what is VXLAN?
VXLAN enables you to create a logical network for your virtual machines across different networks. You can create a layer 2 network on top of your layer 3 networks, which is why VXLAN is called an overlay technology. Normally, if you want a virtual machine to “talk” to a virtual machine in a different subnet, you need a layer 3 router to bridge the gap between networks.
- Each VXLAN (Virtual eXtensible LAN) segment is a LAN extension over L3; each segment has a unique 24-bit VXLAN Network Identifier (VNI), which enables up to 16 million unique virtual LAN segments (a small packing sketch follows this list).
- VXLAN uses MAC-in-UDP encapsulation (MAC over IP/UDP).
- VXLAN is the first host-based overlay, meaning VXLAN encapsulation and decapsulation start at the physical server, in the virtual switch sitting on that server.
- Enables VM mobility at layer 2 across layer 3 boundaries
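To make the VNI math concrete, here is a minimal Python sketch (plain struct packing, not any ACI or vendor library) that builds and parses the 8-byte VXLAN header from RFC 7348; the 24-bit VNI field is what yields the roughly 16 million segments mentioned above.

```python
import struct

VXLAN_PORT = 4789            # IANA-assigned UDP destination port for VXLAN
VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the 24-bit VNI field is valid

def build_vxlan_header(vni: int) -> bytes:
    """Pack the 8-byte VXLAN header (RFC 7348) for a given VNI."""
    if not 0 <= vni < 2 ** 24:   # 24 bits -> 16,777,216 possible segments
        raise ValueError("VNI must fit in 24 bits")
    # Layout: flags(8) | reserved(24) | VNI(24) | reserved(8)
    return struct.pack("!II", VXLAN_FLAG_VNI_VALID << 24, vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the VNI from a VXLAN header, checking the I flag."""
    word1, word2 = struct.unpack("!II", header[:8])
    if not (word1 >> 24) & VXLAN_FLAG_VNI_VALID:
        raise ValueError("VNI-valid flag not set")
    return word2 >> 8

hdr = build_vxlan_header(vni=5000)
print(hdr.hex())       # -> 0800000000138800
print(parse_vni(hdr))  # -> 5000
```

On the wire, this header sits inside an outer IP/UDP packet (destination port 4789), with the original Ethernet frame following it; hence the “MAC over IP/UDP” description.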
For more information on layer 2 over layer 3 protocols, please go through the Massive Data Center Design book, which covers TRILL, FabricPath, VXLAN, and NVGRE.
Overview:
ACI infrastructure essentially has 3 components:
- Spine Switches (SPINE -> 9500 series, Baby Spine -> 9336PQ)
- Leaf Switches (LEAF -> 9396 (2U) & 93128 (3U))
- Application Policy Infrastructure Controller – APIC cluster (minimum 3 controllers in a cluster); the fabric is configured through the APIC's REST API (see the sketch after this list)
- As of now, 6 spines and 18 leaves are supported in an ACI fabric; the spine-to-leaf ratio is 1:3
- Supports up to 1,000 tenants and 128K endpoints
- Northbound ports on the spine are always 40 Gig, while southbound ports on the leaf toward the access layer are 1/10 Gig
- ACI also comes with different line cards from the conventional Nexus line cards
- Leaf and Spine switches communicate with each other through IS-IS protocol
- NFE -> Network Forwarding Engine, the switching ASIC (Broadcom Trident 2); ALE -> ACI Leaf Engine, Cisco's ASIC for ACI-specific functions (routing/policy)
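Since the APIC is the single point of configuration, everything above can also be driven programmatically through its REST API. Here is a minimal sketch using Python requests (the APIC address and credentials are placeholders, and certificate checking is disabled as for a lab) that logs in and lists the configured tenants via the fvTenant class:

```python
import requests

APIC = "https://apic.example.com"   # placeholder APIC address
session = requests.Session()
session.verify = False              # lab only: skip TLS verification

# Authenticate; the APIC returns a token cookie that the session
# automatically attaches to every later request.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# Class query: fetch every tenant (fvTenant) in the fabric.
resp = session.get(f"{APIC}/api/node/class/fvTenant.json")
for obj in resp.json()["imdata"]:
    attrs = obj["fvTenant"]["attributes"]
    print(attrs["name"], attrs["dn"])
```

The GUI and CLI sit on top of this same API, so anything you can click can also be scripted; the same class-query pattern works for any object in the management information tree.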
Partner ecosystem (stale data): ACI uses the OpFlex protocol (Cisco) for integration; to my knowledge, ACI does not support the OpenFlow protocol.
L4-L7 Compatibility List:
| Vendor | Products | Software | First Certified APIC Release | Link to the Device Package |
|---|---|---|---|---|
| Cisco | ASA 5585 and ASAv | ASA 5585: 8.4 and later; ASAv: 9.2.1 and later | 1.0(1x) | ASA Device Packages |
| A10 | Thunder Appliances (Hardware, Hybrid Virtual, Virtual) | 1.0 and later | 1.0(1x) | A10 Networks Device Package |
| AVI Networks | | 15.1 and later | 1.0(2x) | AVI Device Package |
| Citrix | NetScaler MPX, SDX, VPX | 10.1 and later | 1.0(1x) | Citrix Device Package |
| F5 | BIG-IP LTM Physical and Virtual | 11.4.1 and later | 1.0(1x) | F5 Device Packages |
| Radware | Alteon VX | 30.0.4.0 and later | 1.0(2x) | Radware Device Package |
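The device packages in the table above get imported into the APIC and, to my knowledge, show up in the object model as vnsMDev objects. Here is a hedged sketch (same placeholder APIC address and credentials as the earlier example, and vnsMDev as the assumed class name) that lists the installed L4-L7 packages:

```python
import requests

APIC = "https://apic.example.com"   # placeholder APIC address
session = requests.Session()
session.verify = False              # lab only: skip TLS verification
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# Assumption: imported L4-L7 device packages are exposed as vnsMDev
# objects; print each package's vendor, model, and version.
for obj in session.get(f"{APIC}/api/node/class/vnsMDev.json").json()["imdata"]:
    attrs = obj["vnsMDev"]["attributes"]
    print(attrs["vendor"], attrs["model"], attrs["version"])
```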