Cisco SD-Access Solution Design Guide – Comprehensive Study Notes

Document Organization

• Guide is organized into major chapters: Cisco SD-Access introduction, components, operational planes, network architecture, fabric roles, design considerations, site reference models, migration, and appendices.

Cisco SD-Access – Intent-Based Campus Networking

• Runs on Cisco Catalyst™ Center (formerly DNA Center); 2025 release.
• Uses fabric technology to virtualise the campus ("one or more logical networks on one physical network").
• Identity-driven segmentation with Cisco TrustSec® SGTs → micro-segmentation inside each virtual network (VN).
• Assurance & analytics built-in for proactive insights.
• Key business drivers:
– Simplified deployment & automation with open APIs.
– Consistent wired/wireless security.
– "Wireless bandwidth at scale" – local switching of WLAN traffic uses the uplink of every access switch instead of hair-pinning through a central WLC.
– Identity services for context-aware policy.
– Telemetry-based assurance (even with encrypted traffic).

Solution Components

Cisco Catalyst Center Appliance

• Hardware or virtual form factors sized for SD-Access + Assurance + SWIM.
• Four workflow pillars:

  1. Design – global settings, site profiles, IP pools, SWIM, telemetry.

  2. Policy – AI endpoint analytics, SGT contract matrix, QoS.

  3. Provision – PnP / LAN Automation, fabric creation, VNs, zero-trust.

  4. Assurance – health dashboards, AI analytics, sensor tests.
    • Platform tab exposes APIs/Dev toolkit.

Cisco Identity Services Engine (ISE)

• Performs AAA + policy service; maps users/devices → SGTs.
• Personas:
PAN (admin), MnT (monitor), PSN (policy service), pxGrid (context share).
• Min. recommended: 2-node ISE cluster (all personas) per campus; PSNs tied to fabric sites in Catalyst Center.

SD-Access Operational Planes

Control Plane – LISP
– Separates Endpoint ID (EID) from Routing Locator (RLOC).
– Anycast gateway: same default-GW IP/MAC on every edge switch.
– Map-Server/Resolver logic held in Control-Plane Node.
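The Map-Server/Map-Resolver behaviour boils down to a keyed lookup from EID prefix to RLOC. A toy sketch of that lookup, with all addresses hypothetical rather than taken from the guide:

```python
import ipaddress

# Toy host-tracking database (HTDB): EID prefixes registered by edge nodes,
# each mapping to the RLOC (Loopback0 /32) of the owning edge switch.
# All addresses below are illustrative.
htdb = {
    ipaddress.ip_network("10.10.1.0/24"): "192.168.255.1",  # edge-1 RLOC
    ipaddress.ip_network("10.10.2.0/24"): "192.168.255.2",  # edge-2 RLOC
}

def map_resolve(eid):
    """Longest-prefix lookup, as a map-resolver would perform for an EID."""
    addr = ipaddress.ip_address(eid)
    matches = [net for net in htdb if addr in net]
    if not matches:
        return None  # a real map-resolver would send a negative map-reply
    return htdb[max(matches, key=lambda n: n.prefixlen)]

print(map_resolve("10.10.2.25"))  # → 192.168.255.2
```

The DNS analogy in Appendix A holds here: edges register mappings (like dynamic DNS updates) and query the resolver instead of flooding for endpoint locations.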
Data Plane – VXLAN
– MAC-in-UDP/IP encapsulation; 50 bytes of overhead; uses an outer UDP header.
– Encodes Layer-2 VNI (VLAN↔VNI) & Layer-3 VNI (VRF↔VNI / Instance-ID).
Policy Plane – TrustSec
– 16-bit SGT inserted into the VXLAN-GBP header; GBAC enforces policy hop-by-hop.
Management Plane – Catalyst Center
– Intent→config, fabric automation, assurance, SWIM, open APIs.

Architecture Building Blocks

What is a Fabric?

• Full-mesh overlay built with tunnels; provides "any subnet anywhere" without STP.

Underlay Network

• Physical routed fabric. Design rules:
– Layer-3 routed access (no STP, no port-channel loops).
– Point-to-point links (optical preferred).
– Campus-wide MTU ≥ 1550 bytes; 9100 bytes recommended (LAN Automation sets 9100).
– BFD for fast failure detection.
– IS-IS used by LAN Automation; OSPF/EIGRP also possible.
– Loopback0 (/32) RLOCs must be reachable; AP→WLC must have non-default route.
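The MTU figures above follow directly from encapsulation arithmetic. A quick sanity check, assuming an IPv4 outer header with no 802.1Q tag on the outer frame:

```python
# Outer headers that VXLAN wraps around the original Ethernet frame
# (IPv4 outer header, untagged outer frame):
OUTER_ETH, OUTER_IPV4, OUTER_UDP, VXLAN_HDR = 14, 20, 8, 8

overhead = OUTER_ETH + OUTER_IPV4 + OUTER_UDP + VXLAN_HDR
print(overhead)        # 50 bytes of VXLAN overhead

# A standard 1500-byte Ethernet payload therefore needs at least:
min_fabric_mtu = 1500 + overhead
print(min_fabric_mtu)  # 1550 -- the campus-wide minimum; 9100 leaves headroom
```

The 9100-byte recommendation simply gives comfortable headroom for jumbo-frame applications plus the same 50-byte wrapper.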

Overlay Network

• VNs = VRFs; carry isolated control & data planes.
• Layer-2 overlay (bridge) & Layer-3 overlay (route) supported.
• Guideline: avoid overlapping IPs, use fewer / larger subnets (stretch allowed), enable L2 flooding only when required.

Shared Services

• DHCP/DNS/NTP, Internet, UC, servers – usually north of border.
• Option 1: services in GRT with selective route-leak.
• Option 2: services in dedicated VRF leaked via RT-import/export.

Fabric Roles & Terminology

Control-Plane Node – LISP MS/MR, host-tracking DB.
Edge Node – Access-layer switch + xTR; does VXLAN encap/decap, anycast SVI, AAA NAD.
Intermediate Node – Plain routed underlay device; MTU must match the fabric jumbo MTU.
Border Node – Gateway between fabric & external; adverts EID space via BGP, does PxTR, VRF-lite.
Fabric-in-a-Box (FIAB) – Single switch (or stack/SVL) runs border+control+edge (+ optional embedded WLC).
Extended Node – Downstream IE/Catalyst/DB switch; Layer-2 trunk to edge; three flavours: Classic, Policy (inline SGT), Supplicant-Based (EAP-TLS).
Fabric WLC – Catalyst 9800 series; registers client MACs into HTDB; data plane VXLAN to edge.
Fabric AP – Local-mode; CAPWAP control to WLC, VXLAN data to edge; auto-placed in INFRA_VN.
Embedded Wireless – 9800-EW on Catalyst 9k switch (except 9200/9500X/9600).
Transit Types: IP-Based (decap to native IP) vs SD-Access (maintains VXLAN).
Transit Control-Plane Node – Aggregate HTDB across sites; required for SD-Access transit.
Fabric Site – Unique set of roles (at least 1 CP); has site-local WLC & PSN.

Design Considerations

LAN Design Principles

• Underlay: routed access, jumbo MTU, BFD, point-to-point optics, loopback /32 reachability.
• Overlay: VN for macro-segmentation, SGT for micro-segmentation, stretch subnets, no overlap.
• Latency limits (RTT):
– Catalyst Center ↔ device ≤ 300 ms.
– AP ↔ WLC ≤ 20 ms.

Device-Role Principles

• Edge: trust-boundary, PoE, 802.1X, inline enforcement.
• CP: deploy in redundant pair; avoid >2 if fabric wireless (WLC updates two CPs only).
• Border: size for throughput; external (default), internal (imports DC prefixes), anywhere (both).
• FIAB: stacking or SVL for HA; daisy-chain edges possible.
• Extended Node: EtherChannel trunk; dual-homing via FlexLink+; policy nodes carry SGT 8000 on uplink.
• Catalyst Center HA: single-node or 3-node cluster; cluster needs ≥2 nodes alive.

Feature-Specific

Multicast: head-end replication vs native (PIM-SSM in underlay). Anycast-RP on border or external RP supported.
Layer-2 Flooding: Maps subnet→underlay multicast; use /24 scopes; requires underlay PIM & Anycast-RP.
Critical VLAN: default VLANs 2046 (voice) & 2047 (data); best practice – dedicated Critical VN.
LAN Automation:
– Uses PnP + IS-IS; primary/peer seed devices act as DHCP server & RP.
– Auto /31 on links, /32 loopbacks, MTU 9100, PIM, BFD.
– Enable-Multicast option auto-configures MSDP (Anycast-RP on Loopback 60000).
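The per-link /31 addressing pattern that LAN Automation applies can be sketched with the standard library; the pool below is a hypothetical example, not a Catalyst Center default:

```python
import ipaddress

# Hypothetical discovery pool carved into /31 point-to-point links,
# mirroring the one-/31-per-routed-link pattern LAN Automation uses.
pool = ipaddress.ip_network("172.16.0.0/27")
p2p_links = list(pool.subnets(new_prefix=31))  # 16 links from a /27

for link in p2p_links[:2]:
    near, far = link.hosts()  # RFC 3021: both /31 addresses are usable hosts
    print(link, "seed side:", near, "peer side:", far)
```

Each routed link burns exactly two addresses, which is why /31s (rather than /30s) keep discovery pools small even for large automated deployments.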

Wireless Design

• Two models:
– Over-the-Top (centralised CAPWAP) – useful during migration.
– Integrated Fabric Wireless – data VXLAN to edge (distributed), ctrl CAPWAP to WLC.
• Mixed-mode possible per-SSID.
• Guest options:

  1. Guest VN with SGT isolation.

  2. Multisite Remote Border (anchor VN across sites).

  3. Legacy Guest-Anchor WLC (centralised SSID).

External Connectivity Patterns

• Layer-3 Handoff automated (VRF-lite + eBGP).
VRF-Aware Peer – services block / firewall; route-leak (prefix-list + RM) or VRF-RT.
Non-VRF Peer – merge into GRT; sink-hole + ACL to stop east-west.
Firewall Peer – preferred for policy; supports contexts/zones.
• Design triangles not squares: each border → both upstream devices; crosslinks.

Security & Policy End-to-End

• Macro (VRF) over IP-based transit: VRF-lite, GRE, DMVPN, SD-WAN, or MP-BGP VPNv4.
• Micro (SGT) over IP-based transit: inline-tag hop-by-hop or through GRE/IPsec CMD; or SXPv5/IP-SXP domain.

Multidimensional Factors

• Greenfield vs Brownfield (Layer-2 handoff for migration).
• Users & devices count → site sizing.
• Geography & latency → SD-Access transit vs IP transit.
• Local vs central services for survivability.
• HA: redundant CP, border, WLC (SSO), Catalyst Center clusters.

Site Reference Models

| Model | Endpoints | APs | Key Traits |
| --- | --- | --- | --- |
| Fabric-in-a-Box | < 1,000 | < 50 | Single switch/stack does CP+Border+Edge (+EWLC); no site HA beyond stacking |
| Small | < 10,000 | < 500 | Collapsed core/distribution pair = CP+Border; separate WLC HA; services-block switch |
| Medium | < 50,000 | < 2,500 | Three-tier; CP may be separate; multiple distribution blocks; WLC HA pairs; services block |
| Large | < 100,000 | < 10,000 | Dedicated CP pair; internal & external border pairs; multiple WLC HA; data-center services |
| Distributed Campus | Multi-site | n/a | SD-Access transit (LISP+BGP or Pub/Sub); transit CP nodes; policy carried end-to-end |

Migration to SD-Access

Parallel – build new fabric in parallel; repatch cables (simple rollback).
Incremental – convert existing access switch to edge; use Layer-2 handoff to traditional network.
Hybrid – combo of above.

Wireless Migration Steps

  1. Build wired fabric (parallel/incremental).

  2. Keep existing SSID centrally switched over fabric.

  3. Floor-by-floor assign new fabric-enabled SSID (new IP pool).

  4. WLC can manage 1 fabric site + multiple traditional sites; ensure RTT ≤ 20 ms.

Layer-2 Border Handoff Details

• VLAN ↔ L2VNI translation; border becomes SVI default-GW.
• Multicast & WoL supported across handoff.
• Guideline: dedicate this border to the handoff; no other roles; VTP transparent; allowed VLANs 2–4094 (excl. reserved).
• Traditional SVI must be shut down before cut-over.
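Conceptually, the handoff is a bidirectional translation table between traditional VLAN IDs and fabric L2VNIs. A minimal sketch, with hypothetical VLAN/VNI values:

```python
# Hypothetical VLAN-to-L2VNI translation table on the handoff border;
# the VNI numbers are illustrative, not Catalyst Center defaults.
vlan_to_vni = {100: 8190, 200: 8191}
vni_to_vlan = {vni: vlan for vlan, vni in vlan_to_vni.items()}

def to_fabric(vlan):
    """Frame arriving on the traditional trunk: map its VLAN to the L2VNI."""
    return vlan_to_vni[vlan]

def to_traditional(vni):
    """VXLAN frame leaving the fabric: restore the traditional VLAN ID."""
    return vni_to_vlan[vni]

assert to_traditional(to_fabric(100)) == 100  # translation is lossless
```

Because the mapping is one-to-one in each direction, endpoints on either side stay in the same subnet during migration, which is what makes the incremental approach's rollback story simple.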

Fabric Protocol Deep-Dive (Appendix A)

• VXLAN-GBP header carries 24-bit VNI + 16-bit SGT + 3-bit policy field.
• LISP EID→RLOC mapping; HTDB analogous to DNS; supports MAC / IPv4 / IPv6 EIDs.
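The 8-byte header layout can be made concrete with `struct`; field positions follow the VXLAN-GBP draft (flags byte, policy byte, 16-bit Group Policy ID/SGT, 24-bit VNI, reserved byte), and the VNI/SGT values here are illustrative:

```python
import struct

def pack_vxlan_gbp(vni, sgt):
    """Build an 8-byte VXLAN-GBP header (illustrative values, not real traffic)."""
    assert vni < 2**24 and sgt < 2**16
    flags = 0x88  # G bit (group policy present) + I bit (VNI valid)
    # flags, policy byte (D/A bits zeroed), 16-bit SGT, then 24-bit VNI
    # left-shifted into the top of a 32-bit word (low byte is reserved).
    return struct.pack("!BBH", flags, 0, sgt) + struct.pack("!I", vni << 8)

hdr = pack_vxlan_gbp(vni=8190, sgt=16)
print(len(hdr), hdr.hex())  # 8-byte header
```

The 16-bit SGT field is what caps the tag space at 2^16 = 65,536 values, per the Key Numbers below.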

Key Numbers / Formulae

• VXLAN overhead = 50 bytes.
• Jumbo MTU recommendation = 9100 bytes.
• Minimum fabric MTU ≥ 1550 bytes.
• AP ↔ WLC RTT ≤ 20 ms; Catalyst Center ↔ device ≤ 300 ms.
• SGT field length = 16 bits → 2^16 = 65,536 tags (≈ 64,000 usable).
• Max external borders per site = 4; control-plane nodes for fabric wireless = 2.
• Default critical VLANs: voice 2046, data 2047.

Ethical / Practical Implications

• Identity-driven segmentation reduces lateral-movement attack surface (Zero-Trust campus).
• Automation cuts mis-config errors → lower OpEx, faster compliance.
• Consistent policy across wired/wireless vital for BYOD/IoT proliferation.

Real-World Connections

• "Any subnet anywhere" enables hot-desking & IoT mobility without re-IP.
• Inline SGT over GRE / SD-WAN supports hybrid cloud & work-from-home edge.
• Fabric-in-a-Box ideal for pop-up labs, small branches, industrial kiosks.

Further Resources

• SD-Access Segmentation Design Guide, Distributed Campus Deployment Guide, Medium/Large Fabric Prescriptive Guide.
• YouTube playlists: Catalyst Center, ISE, SD-Access.
• Cisco Community: SD-Access Fabric Resources hub.