It's been almost a year since I was trained on ACI, so I would like to refresh my memory before it's completely wiped off 🙂
My first impression: I found the product quite interesting because of its simplification of networking configuration, application-driven policy, micro-segmentation, multi-tenancy, and automation.
My view: core networking folks need a little time to understand it, but someone with a core networking plus virtualization/cloud background can grasp the concepts well and integrate ACI with different hypervisors with ease (VMware vSwitch, Hyper-V, Xen, and OpenStack, including the Neutron component).
What is ACI???
Application Centric Infrastructure (ACI) in the data center is a holistic architecture with centralized automation and policy-driven application profiles. ACI delivers software flexibility with the scalability of hardware performance.
Key characteristics of ACI include:
- Simplified automation by an application-driven policy model
- Centralized visibility with real-time, application health monitoring
- Scalable performance and multi-tenancy in hardware
ACI works on eVXLAN, an extension of VXLAN.
In simple terms, what is VXLAN?
VXLAN enables you to create a logical network for your virtual machines across different networks. You can create a layer 2 network on top of your layer 3 networks. This is why VXLAN is called an overlay technology. Normally if you want a virtual machine to “talk” to a virtual machine in a different subnet you need to use a layer 3 router to bridge the gap between networks.
- Each VXLAN (Virtual Extensible LAN) segment is a LAN extension over Layer 3 and has a unique 24-bit Virtual Network Identifier (VNI), enabling up to 16 million unique virtual LAN segments.
- VXLAN uses MAC over IP/UDP.
- VXLAN started as a host-based overlay: encapsulation and decapsulation are done by the virtual switch running on the physical server.
- Enables VM mobility at layer 2 across layer 3 boundaries
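The 24-bit VNI mentioned above is easy to verify with a small sketch. This is a minimal illustration of the base VXLAN header layout from RFC 7348 (not ACI's eVXLAN encapsulation), showing where the VNI sits and why it yields 16 million segments:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348 layout).

    Byte 0 carries the flags, with the I bit (0x08) set to mark a valid VNI.
    Bytes 1-3 are reserved; bytes 4-6 hold the 24-bit VNI; byte 7 is reserved.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # "!B3xI": network order, 1 flags byte, 3 pad bytes, then VNI shifted
    # into the top 24 bits of the final 32-bit word.
    return struct.pack("!B3xI", 0x08, vni << 8)

def vni_from_header(header: bytes) -> int:
    """Extract the 24-bit VNI back out of a VXLAN header."""
    return struct.unpack("!I", header[4:8])[0] >> 8

hdr = vxlan_header(5000)
print(len(hdr))               # 8  (header is 8 bytes)
print(vni_from_header(hdr))   # 5000
print(2 ** 24)                # 16777216 -> the "16 million segments" figure
```

The 24-bit VNI is what lifts the 4,096-VLAN limit of the 12-bit 802.1Q tag.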
For more information on Layer 2 over Layer 3 protocols, please go through the Massive Data Center Design book, which covers TRILL, FabricPath, VXLAN, and NVGRE.
ACI infrastructure essentially has 3 components:
- Spine Switches (SPINE -> 9500 series, Baby Spine -> 9336PQ)
- Leaf Switches (LEAF -> 9396 (2U) & 93128 (3U))
- Application Policy Infrastructure Controller – APIC Cluster (min 3 devices in a cluster)
- As of now, 6 spines and 18 leaves are supported in an ACI fabric; the spine-to-leaf ratio is 1:3
- Supports up to 1,000 tenants and 128K endpoints
- North-bound ports on the spine are always 40 Gig, while south-bound ports on the leaf for the access layer are 1/10 Gig
- ACI also comes with a different line card than the conventional Nexus line cards
- Leaf and Spine switches communicate with each other through IS-IS protocol
- Please refer to the link below for more information
- NFE -> Switching (Broadcom T2), ALE -> Routing (Cisco)
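A useful consequence of this spine-leaf design is predictable latency: every leaf uplinks to every spine, so any two endpoints on different leaves are always exactly two fabric hops apart. A toy sketch (device names and counts here are just illustrative, taken from the limits above):

```python
# Model the full-mesh spine-leaf cabling: every leaf has an uplink to
# every spine, and leaves never connect directly to each other.
spines = [f"spine{i}" for i in range(1, 7)]    # up to 6 spines
leaves = [f"leaf{i}" for i in range(1, 19)]    # up to 18 leaves

links = {(leaf, spine) for leaf in leaves for spine in spines}

def hops(src_leaf: str, dst_leaf: str):
    """Shortest leaf-to-leaf path length in a full-mesh spine-leaf fabric."""
    if src_leaf == dst_leaf:
        return 0
    # Any spine shared by both leaves gives a leaf -> spine -> leaf path.
    shared = any((src_leaf, s) in links and (dst_leaf, s) in links
                 for s in spines)
    return 2 if shared else None

print(hops("leaf1", "leaf18"))  # 2 -- always two hops, regardless of leaf pair
```

Because the mesh is full, the answer is 2 for every distinct leaf pair, which is why spine-leaf fabrics advertise uniform any-to-any latency.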
Partner ecosystem (stale data): ACI uses Cisco's OpFlex protocol for integration; as far as I know, ACI does not support the OpenFlow protocol.
L4-L7 Compatibility List:
We are all aware that ACI works on a declarative model, but what is the difference between the imperative and declarative models? In the imperative model, the controller pushes exact device-level configuration steps; in the declarative model, the controller publishes the desired end state (intent), and each device works out how to render it locally.
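As a rough illustration of the contrast (all names and structures below are invented for this sketch, not real APIC objects or commands), the imperative model scripts the "how" while the declarative model states the "what" and lets each switch render it:

```python
# Imperative model: the controller dictates exactly HOW each device is
# configured, command by command.
imperative_steps = [
    "create vlan 100",
    "assign vlan 100 to port eth1/1",
    "apply acl WEB-TO-DB on port eth1/1",
]

# Declarative model (ACI's approach): the controller publishes WHAT the
# end state should be; the switch translates that intent into local
# config itself (in ACI, OpFlex carries the policy down).
declarative_policy = {
    "epg": "web",
    "bridge_domain": "bd-prod",
    "consumes": ["db-contract"],   # intent, not device commands
}

def render(policy: dict) -> list:
    """Toy renderer: a leaf switch turning declared intent into local config."""
    steps = [f"bind epg {policy['epg']} to {policy['bridge_domain']}"]
    steps += [f"program filter for contract {c}" for c in policy["consumes"]]
    return steps

print(render(declarative_policy))
```

The practical difference: in the declarative model, the controller does not need to know each device's command syntax or current state; it only has to keep the published intent correct.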
ACI terminology (depicted in a different format for better understanding):
Tenant – can be a customer/BU/environment (Prod/Dev/Test)
Context/Private Network – nothing but a VRF in networking terminology
Bridge domain – analogous to an SVI; a container for subnets
EPG (Endpoint Group) – EPGs are used to group endpoints (such as physical hosts or VMs) with similar policy requirements
Contract – policies between EPGs; we can also call it an ACL
- Consumer –> Outgoing
- Provider –> Incoming
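To tie the terminology together, here is a hypothetical toy model of the whitelist behavior described above (class and contract names are mine, not APIC object names): traffic between two EPGs is permitted only when one EPG provides and the other consumes a shared contract; with no contract relationship, it is denied by default.

```python
from dataclasses import dataclass, field

@dataclass
class EPG:
    """A toy endpoint group with the contracts it provides and consumes."""
    name: str
    provides: set = field(default_factory=set)
    consumes: set = field(default_factory=set)

def allowed(src: EPG, dst: EPG) -> bool:
    """Traffic flows from a consumer EPG to a provider EPG of the same contract."""
    return bool(src.consumes & dst.provides)

web = EPG("web", consumes={"web-to-db"})
db = EPG("db", provides={"web-to-db"})
app = EPG("app")  # no contract relationship at all

print(allowed(web, db))   # True  -- web consumes what db provides
print(allowed(web, app))  # False -- no shared contract, denied by default
```

This is the sense in which a contract acts like an ACL: nothing passes between EPGs unless a contract explicitly permits it.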