In this series of posts, I’ve covered the QoS strategy and the creation of policies used at the system level as well as FEX HIF and trunk ports. There are still policies which need to be defined and applied to access and routed (L3) interfaces, and this post covers creating them. I’ve decided to keep all the information together instead of breaking it up, which makes for a lot of reading, so it’s probably a good time to grab some refreshments before continuing this journey…
Thinking more about the types of traffic that will be in the datacenter, the following guidelines apply:
- Traffic coming in via L3 links from the campus core to the datacenter will have already been marked at ingress. The only incoming traffic from the campus which may need different treatment will be voice, which will be marked DSCP 46 (EF). Everything else will be treated as best-effort. A policy will be created for these uplinks.
- Traffic between devices connected to the Nexus front-panel ports and between the Nexus switches themselves will use the same policies. Per our strategy, almost everything will be treated as best-effort except for Voice, iSCSI, and vMotion. However, there are some types of traffic which may be destined for networks outside the datacenter networks and which will be marked for upstream QoS. Most of these additional markings will be for OAM (Ops/Admin/Management) traffic, but there are a couple other traffic classes being marked as well.
In the past, I’ve created ACLs which match all traffic for a particular class. While this approach has worked as expected, I’ve recently begun giving more thought to improving configuration readability. If some small changes result in faster comprehension of the device configuration for other members of the network team, I believe it’s worth a few more lines. As a result, I’ll create multiple smaller ACLs for traffic classification. Of course, this may all change after the next config review process…
Without further ado, let’s create some ACLs for the class-maps:
!
! IPv4 and IPv6 ACLs for iSCSI. Matching source ports, as client traffic
! will originate from FEX HIFs and be marked by vSphere vDS policies.
!
ip access-list ACL-QOS-ISCSI
  10 remark Match iSCSI Traffic
  20 permit tcp any eq 860 any
  30 permit tcp any eq 3260 any
!
ipv6 access-list v6-ACL-QOS-ISCSI
  10 remark Match iSCSI Traffic
  20 permit tcp any eq 860 any
  30 permit tcp any eq 3260 any
!
!
! IPv4 and IPv6 ACLs to match all ICMP traffic
!
ip access-list ACL-QOS-ICMP
  10 remark Match ICMP Traffic
  20 permit icmp any any
!
ipv6 access-list v6-ACL-QOS-ICMP
  10 remark Match ICMP Traffic
  20 permit icmp any any
!
!
! IPv4 and IPv6 ACLs to match HTTPS server traffic. Client traffic will
! mostly be sourced from upstream access ports - HTTPS in the datacenter
! is mostly used for server management such as iLO and CIMC
!
ip access-list ACL-QOS-HTTPS
  10 remark Match HTTPS for Management
  20 permit tcp any eq 443 any
!
ipv6 access-list v6-ACL-QOS-HTTPS
  10 remark Match HTTPS for Management
  20 permit tcp any eq 443 any
!
!
! IPv4 and IPv6 ACLs matching NTP
!
ip access-list ACL-QOS-NTP
  10 remark Match NTP
  20 permit udp any any eq ntp
  30 permit udp any eq ntp any
!
ipv6 access-list v6-ACL-QOS-NTP
  10 remark Match NTP
  20 permit udp any any eq ntp
  30 permit udp any eq ntp any
!
!
! IPv4 and IPv6 ACLs matching SSH from server and client. While
! most clients will be in the campus, there is some server-to-server
! traffic generated as well.
!
ip access-list ACL-QOS-SSH
  10 remark Match SSH
  20 permit tcp any any eq 22
  30 permit tcp any eq 22 any
!
ipv6 access-list v6-ACL-QOS-SSH
  10 remark Match SSH
  20 permit tcp any any eq 22
  30 permit tcp any eq 22 any
!
!
! IPv4 and IPv6 bidirectional replication traffic - this may be within
! the datacenter or from/to an offsite DR location
!
ip access-list ACL-QOS-NDMP
  10 remark Match NDMP and SnapMirror
  20 permit tcp any any eq 10000
  30 permit tcp any eq 10000 any
  40 permit tcp any any range 10566 10569
  50 permit tcp any range 10566 10569 any
  60 permit tcp any any eq 10571
  70 permit tcp any eq 10571 any
  80 permit tcp any any eq 10670
  90 permit tcp any eq 10670 any
!
ipv6 access-list v6-ACL-QOS-NDMP
  10 remark Match NDMP and SnapMirror
  20 permit tcp any any eq 10000
  30 permit tcp any eq 10000 any
  40 permit tcp any any range 10566 10569
  50 permit tcp any range 10566 10569 any
  60 permit tcp any any eq 10571
  70 permit tcp any eq 10571 any
  80 permit tcp any any eq 10670
  90 permit tcp any eq 10670 any
!
!
! Finally, IPv4 and IPv6 ACLs to match NFS data.
!
ip access-list ACL-QOS-NFS
  10 remark Match NFS Data Ports
  20 permit tcp any any eq 2049
  30 permit tcp any eq 2049 any
!
ipv6 access-list v6-ACL-QOS-NFS
  10 remark Match NFS Data Ports
  20 permit tcp any any eq 2049
  30 permit tcp any eq 2049 any
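Before moving on, it can help to sanity-check the classification logic off-box. The sketch below restates the ACL port matches in Python. To be clear about assumptions: the class names and port sets are hand-copied from the ACLs above (nothing is generated from the device), and direction-specific matching (such as the source-port-only iSCSI and HTTPS entries) is collapsed into an either-end check for brevity.

```python
# Rough off-box restatement of the QoS classification ACLs above.
# Port sets are hand-copied from the ACL entries; this is a sketch,
# not anything generated from the device configuration.
CLASS_PORTS = {
    "ISCSI": {860, 3260},
    "HTTPS": {443},
    "NTP":   {123},
    "SSH":   {22},
    "NDMP":  {10000, 10566, 10567, 10568, 10569, 10571, 10670},
    "NFS":   {2049},
}

def classify(src_port: int, dst_port: int) -> str:
    """Return the first class whose port set matches either end of the
    flow (direction-specific ACL entries are simplified here)."""
    for name, ports in CLASS_PORTS.items():
        if src_port in ports or dst_port in ports:
            return name
    return "class-default"

print(classify(3260, 51234))   # iSCSI target port as source
print(classify(51234, 2049))   # NFS data port as destination
print(classify(51234, 52000))  # unmatched flow falls through
```

Walking a few candidate flows through a function like this is a quick way to spot an ACL entry that matches more (or less) than intended before it ever hits a switch.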
With ACLs created, we’ll now create a set of class maps to match the traffic we’re interested in. Descriptions of each class are in the definition:
class-map type qos match-any CLASS-QOS-EF
  description Match incoming packets marked with EF - CoS 5
  match dscp 46
!
class-map type qos match-all CLASS-QOS-COS4
  description Global class to match CoS 4
  match cos 4
!
class-map type qos match-all CLASS-QOS-COS3
  description Global class to match CoS 3 (vMotion marked at the vDS)
  match cos 3
!
class-map type qos match-any CLASS-QOS-ISCSI
  description Match v4/v6 iSCSI traffic (non-NetApp) - DSCP 32 (CS4) / CoS 4
  match access-group name ACL-QOS-ISCSI
  match access-group name v6-ACL-QOS-ISCSI
!
class-map type qos match-any CLASS-QOS-ICMP
  description Match ICMP - DSCP 16 (CS2) / CoS 2
  match access-group name ACL-QOS-ICMP
  match access-group name v6-ACL-QOS-ICMP
!
class-map type qos match-any CLASS-QOS-HTTPS
  description Match HTTPS server traffic - DSCP 16 (CS2) / CoS 2
  match access-group name ACL-QOS-HTTPS
  match access-group name v6-ACL-QOS-HTTPS
!
class-map type qos match-any CLASS-QOS-NTP
  description Match NTP client and server traffic - DSCP 16 (CS2) / CoS 2
  match access-group name ACL-QOS-NTP
  match access-group name v6-ACL-QOS-NTP
!
class-map type qos match-any CLASS-QOS-SSH
  description Match SSH client and server traffic - DSCP 16 (CS2) / CoS 2
  match access-group name ACL-QOS-SSH
  match access-group name v6-ACL-QOS-SSH
!
class-map type qos match-any CLASS-QOS-NDMP
  description Match replication client and server traffic - DSCP 10 (AF11) / CoS 1
  match access-group name ACL-QOS-NDMP
  match access-group name v6-ACL-QOS-NDMP
!
class-map type qos match-any CLASS-QOS-NFS
  description Match NFS client and server traffic - DSCP 10 (AF11) / CoS 1
  match access-group name ACL-QOS-NFS
  match access-group name v6-ACL-QOS-NFS
!
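As an aside, the DSCP / CoS pairings in the descriptions above aren’t arbitrary: for the markings used in this design, the CoS value is simply the top three bits of the 6-bit DSCP field. A quick Python check of the values in play (the pairs below are copied from the class-map descriptions):

```python
# DSCP / CoS pairs used in this design (from the class-map descriptions).
# CoS is the top 3 bits of the 6-bit DSCP value, i.e. dscp >> 3.
markings = {
    "EF (voice)":        (46, 5),
    "CS4 (iSCSI)":       (32, 4),
    "AF31 (vMotion)":    (26, 3),
    "CS2 (OAM traffic)": (16, 2),
    "AF11 (NDMP/NFS)":   (10, 1),
}

for name, (dscp, cos) in markings.items():
    # Confirm each pairing is internally consistent
    assert dscp >> 3 == cos, name
    print(f"{name}: DSCP {dscp} maps to CoS {cos}")
```

Keeping the DSCP class selector bits aligned with CoS this way means a device that can only read one of the two fields still lands traffic in a consistent class.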
The fun is just beginning! ACLs created and class-maps defined – now we actually need to do something with them. Two policies will be created – one for the L3 uplink to the campus core, and one for the front-panel ports. Comments are embedded in the policies for your reading pleasure:
!
! Policy for L3 Uplink ports - there should be no storage or vMotion
! coming in to the DC, but there may be EF-marked voice.
!
policy-map type qos POLICY-QOS-UPLINK
  description L3 Uplink QoS Policy - no Storage or vMotion from Campus
  !
  ! Voice traffic - maintain EF, put into qos-group 3, and set CoS 5
  ! for queuing to FEX ports
  class CLASS-QOS-EF
    set dscp 46
    set qos-group 3
    set cos 5
  !
  ! Explicitly dump everything else to class-default
  class class-default
    set qos-group 0
    set cos 0
!
!
! Policy for any other port in the DC, including the vPC peer links.
! Unique qos-groups will be used for Voice, iSCSI, vMotion, and
! default traffic within the DC block.
! Note that we're setting CoS for each class here. While not necessary now
! (the global QoS policy isn't doing anything for CoS other than 3, 4, and 5),
! we'll set it here for future use.
!
policy-map type qos POLICY-QOS-DEFAULT
  description Default classification and marking policy
  !
  ! Same as the uplink policy - EF to qos-group 3 and CoS 5
  class CLASS-QOS-EF
    set dscp 46
    set qos-group 3
    set cos 5
  !
  ! CoS 4 should be sourced from the NetApp. Put into qos-group 2
  ! and ensure CoS 4 is maintained to the FEX HIF.
  class CLASS-QOS-COS4
    set qos-group 2
    set cos 4
  !
  ! Any other iSCSI traffic is marked to CS4 / CoS 4 and put into group 2
  class CLASS-QOS-ISCSI
    set dscp 32
    set cos 4
    set qos-group 2
  !
  ! vMotion is marked at the vDS to CoS 3. Set DSCP AF31, qos-group 1,
  ! and CoS 3 for transmission to the FEX HIF.
  class CLASS-QOS-COS3
    set dscp 26
    set cos 3
    set qos-group 1
  !
  ! ICMP, HTTPS, NTP, and SSH will get marked to CS2 / CoS 2 and put
  ! into the default qos-group
  class CLASS-QOS-ICMP
    set dscp 16
    set cos 2
    set qos-group 0
  class CLASS-QOS-HTTPS
    set dscp 16
    set cos 2
    set qos-group 0
  class CLASS-QOS-NTP
    set dscp 16
    set cos 2
    set qos-group 0
  class CLASS-QOS-SSH
    set dscp 16
    set cos 2
    set qos-group 0
  !
  ! Replication and NFS traffic marked to AF11 / CoS 1 and put into the
  ! default qos-group
  class CLASS-QOS-NDMP
    set dscp 10
    set cos 1
    set qos-group 0
  class CLASS-QOS-NFS
    set dscp 10
    set cos 1
    set qos-group 0
  !
  ! And our explicit catch-all for everything else - dump into qos-group 0
  class class-default
    set qos-group 0
!
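One subtlety worth calling out: a type qos policy evaluates its classes top-down, and a packet is acted on by the first class it matches. That ordering is why the CoS 4 class (NetApp traffic arriving already marked) sits ahead of the ACL-based iSCSI class. The sketch below models that first-match behavior in Python; the class-to-qos-group pairs are hand-copied from the policy above, and the input is simply the set of classes a packet would match.

```python
# First-match model of POLICY-QOS-DEFAULT: an ordered list of
# (class name, resulting qos-group), hand-copied from the policy above.
POLICY_QOS_DEFAULT = [
    ("CLASS-QOS-EF",    3),
    ("CLASS-QOS-COS4",  2),
    ("CLASS-QOS-ISCSI", 2),
    ("CLASS-QOS-COS3",  1),
    ("CLASS-QOS-ICMP",  0),
    ("CLASS-QOS-HTTPS", 0),
    ("CLASS-QOS-NTP",   0),
    ("CLASS-QOS-SSH",   0),
    ("CLASS-QOS-NDMP",  0),
    ("CLASS-QOS-NFS",   0),
    ("class-default",   0),
]

def qos_group(matched_classes: set) -> int:
    """Return the qos-group of the first class the packet matches,
    mirroring the top-down, first-match policy evaluation."""
    for name, group in POLICY_QOS_DEFAULT:
        if name == "class-default" or name in matched_classes:
            return group
    return 0

# A packet matching both the CoS 4 class and the iSCSI ACL is handled
# by whichever class appears first (here, both land in qos-group 2)
print(qos_group({"CLASS-QOS-COS4", "CLASS-QOS-ISCSI"}))
```

Modeling the policy as an ordered table makes it easy to reason about what happens when a packet could match more than one class, which is easy to miss when reading the CLI top to bottom.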
The last step is to apply the policies to the ports. This is similar to Catalyst switches:
! Apply the uplink policy to the L3 Uplink to the campus core:
interface Port-Channel100
  service-policy type qos input POLICY-QOS-UPLINK
!
! Apply the default policy to front-panel ports (one port given as an example)
interface Ethernet1/1
  service-policy type qos input POLICY-QOS-DEFAULT
And that should be a good start. To recap this series of articles, we have:
- Determined the traffic types expected in the datacenter.
- Taken an initial best guess at the amount of bandwidth for each of the four available queues (eight if a FEX is not in use).
- Created policies of type qos, queuing, and network-qos, and applied these policies globally to the switch.
- Attached the global policies to FEX HIF ports to ensure traffic is assigned to the proper qos-groups on ingress.
- Identified the types of traffic expected between the datacenter and campus, as well as the traffic expected within the datacenter.
- Classified these traffic types and created policies to act on the traffic (marking and qos-group assignment).
- Assigned the front-panel and uplink policies to ports.
For the final installment of this series, I’ll verify that these policies are working. In the meantime, please feel free to submit comments here or directly to me if there are any questions or identified errors. I’m not pretending to be a Nexus or QoS expert, so I will gladly accept constructive feedback!
Until next time,