
Refreshing my memory on Cisco ACI (Part II)

I am trying to recollect some more points on ACI, continuing from my previous post.

ACI is essentially a modular switch; the diagram below was supposed to be in Part I 🙂

ACI - Modular switch

AVS -> Application Virtual Switch, supported only on VMware (VEM)

  • Essentially a modified N1K VEM with an OpFlex agent (port-groups backed by VXLANs)
  • The APIC will also talk to the AVS/VEM over OpFlex and assign it an IP address just like any other fabric component

AVS flow

AVS switching modes:

  • Local switching: intra-EPG traffic is switched on the same host
  • FEX mode: all traffic is sent to the leaf for switching
  • Full switching: full APIC policy enforcement on the server


  • X9700 is the only ACI-supported line card
  • NX-OS line cards are different from ACI line cards
  • Leaf and spine communicate over IS-IS (by default) and iBGP (configurable for route leaking)
  • Traffic is normalized into eVXLAN (ACI VXLAN) at the leaf, and communication happens based on source and destination EPG
  • If the leaf does not know the destination MAC, traffic is sent to the spine
  • If even the spine does not know it, the frame is dropped by default; however, this can be configured to flood such frames instead
  • A leaf identifies a new host as it comes up (via snooping) and reports it to the spine through a communication protocol called COOP
  • Old endpoint entries on a leaf switch are removed after 5 minutes
  • The APIC is configurable through CIMC and KVM
  • The APIC then configures the spine and leaf switches, starting with IP assignment
  • Management IPs offered by the APIC to the fabric are only for management communication, not for any outside access
  • The APIC communicates with the fabric over a dedicated VRF called overlay-1
  • The VMkernel IP address subnet should be different from the APIC IP assignment subnet
  • VLAN ID 4093 is required for the infrastructure network
  • The APIC kernel is based on CentOS
  • You cannot run conf t on leaf switches
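Since the leaf switches cannot be configured directly, all configuration goes through the APIC's northbound REST API. As a minimal sketch (the APIC hostname and credentials here are hypothetical, but aaaLogin is the real authentication endpoint), this is roughly what a login payload looks like:

```python
import json

APIC_URL = "https://apic.example.com"  # hypothetical APIC address


def build_login_payload(username: str, password: str) -> dict:
    """Build the JSON body for the APIC aaaLogin REST call."""
    return {"aaaUser": {"attributes": {"name": username, "pwd": password}}}


payload = build_login_payload("admin", "secret")
print(json.dumps(payload))
# A real session would POST this to f"{APIC_URL}/api/aaaLogin.json"
# (e.g. with the requests library) and reuse the returned token cookie
# for subsequent configuration calls.
```

All subsequent fabric configuration is then done as REST POSTs of managed-object JSON against the same session.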

Refreshing my memory on Cisco ACI

It has almost been a year since I was trained on ACI, hence I would like to refresh my memory before it is completely wiped off 🙂

My first impression: I felt the product was quite interesting because of its simplification of networking configuration, application-driven policy, micro-segmentation, multi-tenancy, and automation.

In my view, it takes a little time for core networking folks to understand, but someone with a background in core networking plus virtualization/cloud will be able to grasp the concepts well and integrate ACI with different hypervisors with ease (VMware vSwitch, Hyper-V, Xen, OpenStack including the Neutron component).

What is ACI?

Application Centric Infrastructure (ACI) in the data center is a holistic architecture with centralized automation and policy-driven application profiles. ACI delivers software flexibility with the scalability of hardware performance.

Key characteristics of ACI include:

  • Simplified automation by an application-driven policy model
  • Centralized visibility with real-time, application health monitoring
  • Scalable performance and multi-tenancy in hardware

ACI works on eVXLAN, an extension of VXLAN.

In simple terms, what is VXLAN?

VXLAN enables you to create a logical network for your virtual machines across different networks. You can create a Layer 2 network on top of your Layer 3 networks, which is why VXLAN is called an overlay technology. Normally, if you want a virtual machine to "talk" to a virtual machine in a different subnet, you need a Layer 3 router to bridge the gap between networks.

  1. Each VXLAN (Virtual Extensible LAN) segment is a LAN extension over Layer 3, and each segment has a unique 24-bit Virtual Network Identifier (VNI), enabling up to 16 million unique virtual LAN segments.
  2. VXLAN uses MAC-in-IP/UDP encapsulation.
  3. VXLAN began as a host-based overlay, meaning encapsulation and decapsulation start from the physical server and the virtual switch sitting on it.
  4. Enables VM mobility at Layer 2 across Layer 3 boundaries.
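The 24-bit VNI from point 1 lives in the 8-byte VXLAN header defined in RFC 7348. A minimal sketch of building that header (the function name is mine; the byte layout and UDP port are from the RFC):

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN


def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348.

    Byte 0 carries the I flag (0x08), marking the VNI as valid;
    the 24-bit VNI occupies bytes 4-6, so it is shifted left by 8 bits.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)


hdr = vxlan_header(5000)
print(hdr.hex())  # prints "0800000000138800"
```

The 24-bit field is what lifts the segment count from 4096 VLANs to the 16 million segments mentioned above.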

For more information on Layer 2 over Layer 3 protocols, please go through the Massive Data Center Design book, which covers TRILL, FabricPath, VXLAN, and NVGRE.



ACI infrastructure essentially has 3 components:

  • Spine Switches (SPINE -> 9500 series, Baby Spine -> 9336PQ)
  • Leaf Switches (LEAF -> 9396 (2U) & 93128 (3U))
  • Application Policy Infrastructure Controller – APIC Cluster (min 3 devices in a cluster)
  • As of now, 6 spines and 18 leaves are supported in an ACI fabric; the spine-to-leaf ratio is 1:3
  • Supports up to 1000 tenants and 128K endpoints
  • North-bound ports on the spine are always 40 Gig, while south-bound ports on the leaf for the access layer are 1/10 Gig
  • ACI also comes with line cards different from the conventional Nexus line cards
  • Leaf and spine switches communicate with each other through the IS-IS protocol
  • NFE (Network Forwarding Engine) -> switching (Broadcom Trident 2), ALE (Application Leaf Engine) -> routing (Cisco)
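The scale figures above can be sanity-checked with a tiny sketch: in a leaf-and-spine (Clos) fabric every leaf uplinks to every spine, so at the stated maximums (these numbers are from the post and are release-dependent) the wiring works out as follows:

```python
# Fabric maximums quoted in the post (release-dependent).
MAX_SPINES = 6
MAX_LEAVES = 18

# In a Clos fabric every leaf connects to every spine,
# so the fabric link count is simply leaves * spines.
fabric_links = MAX_SPINES * MAX_LEAVES
spine_to_leaf_ratio = MAX_SPINES / MAX_LEAVES  # the 1:3 ratio stated above

print(fabric_links)  # prints 108
```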

Hardware information:


Partner ecosystem (stale data): ACI uses the OpFlex protocol (Cisco) for integration; as far as I know, ACI does not support the OpenFlow protocol.

L4-L7 Compatibility List:

| Vendor | Products | Software | First Certified APIC Release | Link to the Device Package |
|---|---|---|---|---|
| Cisco | ASA 5585 and ASAv | ASA 5585: 8.4 and later; ASAv: 9.2.1 and later | 1.0(1x) | ASA Device Packages |
| A10 | Thunder appliances (hardware, hybrid virtual, virtual) | 1.0 and later | 1.0(1x) | A10 Networks Device Package |
| AVI Networks | | 15.1 and later | 1.0(2x) | AVI Device Package |
| Citrix | NetScaler MPX, SDX, VPX | 10.1 and later | 1.0(1x) | Citrix Device Package |
| F5 | BIG-IP LTM physical and virtual | 11.4.1 and later | 1.0(1x) | F5 Device Packages |
| Radware | Alteon | VX and later | 1.0(2x) | Radware Device Package |
As we are all aware, ACI works on a declarative model, but what is the difference between the imperative and declarative models?
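The difference can be shown with a toy sketch (names and state shapes here are my own illustration, not ACI code): imperative means issuing each device command yourself, while declarative means stating the desired end state and letting a controller, like the APIC, compute the steps.

```python
# Imperative: you issue each step yourself, in order.
def imperative_configure(device_cmds: list) -> list:
    device_cmds.append("create vlan 10")
    device_cmds.append("assign vlan 10 to port eth1")
    return device_cmds


# Declarative: you state the desired end state; a controller
# reconciles the current state toward it.
DESIRED_STATE = {"vlan": 10, "port": "eth1"}


def reconcile(current_state: dict, desired_state: dict) -> dict:
    """Return only the changes needed to reach the desired state."""
    return {k: v for k, v in desired_state.items()
            if current_state.get(k) != v}


changes = reconcile({"vlan": 10}, DESIRED_STATE)
print(changes)  # prints {'port': 'eth1'} - only the missing piece is applied
```

The key point: in the declarative model the operator never enumerates the commands; the controller owns the diff.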
ACI terminology (depicted in a different format for better understanding):
Tenant – can be a customer, BU, or environment (Prod/Dev/Test)
Context/Private Network – nothing but a VRF in networking terminology
Bridge domain – SVI, a container for subnets
EPG (End Point Group) – EPGs are used to group endpoints (such as physical hosts or VMs) with similar policy requirements
Contract – policies between EPGs; we can also call it an ACL
        – Consumer -> outgoing
        – Provider -> incoming
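The terminology above maps directly onto the APIC object model, where a tenant contains contexts (VRFs) and bridge domains. As a hedged sketch, this is roughly the JSON payload shape you would POST to the APIC to create such a hierarchy (the class names fvTenant, fvCtx, and fvBD are from the ACI object model; the names Prod, Prod-VRF, and Prod-BD are hypothetical):

```python
import json


def tenant_payload(name: str, vrf: str, bd: str) -> dict:
    """Sketch of an APIC-style payload: a tenant containing one
    context (VRF) and one bridge domain, as described above."""
    return {
        "fvTenant": {
            "attributes": {"name": name},
            "children": [
                {"fvCtx": {"attributes": {"name": vrf}}},   # context = VRF
                {"fvBD": {"attributes": {"name": bd}}},     # bridge domain
            ],
        }
    }


doc = tenant_payload("Prod", "Prod-VRF", "Prod-BD")
print(json.dumps(doc, indent=2))
```

EPGs and contracts hang off the same tree in the real model; the nesting mirrors the tenant -> context -> bridge domain hierarchy described above.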

AWS vs Azure Networking – mapped to networking terminology

When I was going through AWS and Azure networking, I collected the network terminology used in the public clouds and tried to map it to physical/logical networking terminology; this will be handy when you are configuring networking on the public clouds.

| S.No | AWS | Azure | Explanation in networking terminology | Remarks |
|---|---|---|---|---|
| 1 | VPC (Virtual Private Cloud) | VNET | Your own data center | |
| 2 | NACL (Network ACL) – stateless | NACL | Perimeter security | |
| 3 | S/W router | | Works as a router | |
| 4 | Route table (static routes to be added) | Static routes added through PowerShell | Static routes | |
| 5 | Private/public subnet | Private/public subnet | Private/public subnet | |
| | Elastic IP | Reserved IP | N/A | The public IP changes once you reboot the instance, but an elastic/reserved IP does not change after a stop/start. |
| 6 | NAT instance | NA | Static/dynamic NAT | |
| 7 | ELB (Elastic Load Balancing) – public | Availability Set | Load balancer for public-facing traffic | |
| 8 | ILB (Internal Load Balancing) – private | Availability Set | Load balancer for private-facing traffic | |
| 9 | Internet gateway | Gateway | For internet access (default route to be added towards the internet gateway) | |
| 10 | VPN gateway | VPN gateway | To build a VPN tunnel (AWS to on-prem) | |
| 11 | Security group (stateful) | End points | Security closer to the instance/server | |
| 12 | Route 53 | Traffic Manager | Nothing but a global site load balancer | |
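For quick lookups while working in either cloud, the mapping above can be kept as a small dictionary (a convenience sketch built only from the table; it is not an official mapping from either vendor):

```python
# AWS-to-Azure quick lookup built from the table above.
AWS_TO_AZURE = {
    "VPC": "VNET",
    "NACL": "NACL",
    "Elastic IP": "Reserved IP",
    "ELB": "Availability Set",
    "Internet gateway": "Gateway",
    "VPN gateway": "VPN gateway",
    "Security group": "End points",
    "Route 53": "Traffic Manager",
}


def azure_equivalent(aws_term: str) -> str:
    """Return the Azure term from the table, if one was listed."""
    return AWS_TO_AZURE.get(aws_term, "no direct equivalent")


print(azure_equivalent("Route 53"))  # prints "Traffic Manager"
```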

Below is a sample diagram of network connectivity flow in AWS.

AWS Networking