In the last post, I began the process of creating a QoS strategy based on identification of traffic we care about (classification), determination of the DSCP or CoS tags to apply (marking), and desired bandwidth (queuing). This post will provide an overview of the configuration required to achieve the QoS goals defined by that strategy.
Since publishing the first post, I found this gem in the Nexus 9k QoS configuration guide:
“For VLAN-tagged packets, priority is assigned based on the 802.1p field in the VLAN tag and takes precedence over the assigned internal priority (qos-group). DSCP or IP access-list classification cannot be performed on VLAN-tagged frames.”
I have updated the original post with this documentation excerpt, but it means some changes to the planned policy maps. A quick look at our 9k switches shows that most of the ports are trunks and will thus be carrying VLAN-tagged packets. No problem, but I’m glad to have found this before moving forward with the configuration (and this post)! The silver lining is that the configuration should be greatly simplified, as we’ll rely on each connected host to perform its own traffic classification and marking.
There are 8 possible CoS values, 3 of which will be mapped to specific qos-groups on the switch (remember that codepoint 6 is for network control and is mapped to the Control queue by the switch).
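For reference, the CoS-to-qos-group mapping that the policies below implement works out to the following (the iSCSI and no-drop roles come from later in this post; any CoS value not listed falls into the default class):

```
CoS value | qos-group | Role
----------+-----------+-------------------------------
5         | 3         | Priority queue traffic
4         | 2         | iSCSI (no-drop, PFC-enabled)
3         | 1         | Remaining matched traffic
others    | 0         | Default (best effort)
```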
First, we’ll create class maps to match the traffic we care about:
class-map type qos CLASS-QOS-COS5
  description Global class to match CoS 5
  match cos 5
!
class-map type qos CLASS-QOS-COS4
  description Global class to match CoS 4
  match cos 4
!
class-map type qos CLASS-QOS-COS3
  description Global class to match CoS 3
  match cos 3
!
With the class maps created, the global policy map can now be created:
policy-map type qos POLICY-QOS-GLOBAL
  description Global QoS policy
  class CLASS-QOS-COS5
    set qos-group 3
    set cos 5
  class CLASS-QOS-COS4
    set qos-group 2
    set dlb-disable
    set cos 4
  class CLASS-QOS-COS3
    set qos-group 1
    set cos 3
  class class-default
    set qos-group 0
!
! Class-default will catch all unmatched CoS values.
!
CLASS-QOS-COS4 (which will be used for iSCSI traffic) has an extra action defined in the policy: “set dlb-disable.” This is being done in accordance with the Nexus 9k QoS guide, which states:
“The dynamic load balancing (DLB) based hashing scheme is enabled by default on all internal links of a linecard. When DLB is enabled, no-drop traffic might experience out-of-order packet delivery when congestion on internal links occurs and PFC is applied. If applications on the system are sensitive to out-of-order delivery, you can adjust for this by disabling DLB at the qos-group level. Disable DLB by using the set dlb-disable action in the QoS policy-maps and the set qos-group action for no-drop classes.”
Continuing, we must create a network-qos policy which will define system-wide QoS properties. This policy will use the system-defined network-qos classes, which match 1:1 with the qos-groups (e.g. qos-group 3 is matched by system class “c-nq3”, qos-group 2 by c-nq2, etc.):
policy-map type network-qos POLICY-NETWORKQOS-GLOBAL
  description Set global QoS properties
  class type network-qos c-nq3
    mtu 9216
  class type network-qos c-nq2
    mtu 9216
    pause pfc-cos 4
  class type network-qos c-nq1
    mtu 9216
  class type network-qos c-nq-default
    mtu 9216
…As well as a set of policies to handle input and output queuing and scheduling:
policy-map type queuing POLICY-QUEUING-GLOBAL-IN
  description Global input bandwidth allocation
  class type queuing c-in-q3
    priority level 1
  class type queuing c-in-q2
    bandwidth remaining percent 40
  class type queuing c-in-q1
    bandwidth remaining percent 20
  class type queuing c-in-q-default
    bandwidth remaining percent 40
!
policy-map type queuing POLICY-QUEUING-GLOBAL-OUT
  description Global output bandwidth allocation
  class type queuing c-out-q3
    priority level 1
  class type queuing c-out-q2
    bandwidth remaining percent 40
  class type queuing c-out-q1
    bandwidth remaining percent 20
  class type queuing c-out-q-default
    bandwidth remaining percent 40
!
Easy enough, right? The only remaining task is to apply these policies to the device. This will be done globally via the system qos context:
NXOS1(config)# system qos
NXOS1(config-sys-qos)# service-policy type queuing output POLICY-QUEUING-GLOBAL-OUT
NXOS1(config-sys-qos)# service-policy type queuing input POLICY-QUEUING-GLOBAL-IN
NXOS1(config-sys-qos)# service-policy type network-qos POLICY-NETWORKQOS-GLOBAL
NXOS1(config-sys-qos)# service-policy type qos input POLICY-QOS-GLOBAL
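It’s worth verifying that the policies were accepted at the system level. On NX-OS this can be checked with the command below (exact output varies by platform and software release):

```
NXOS1# show policy-map system
```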
Assigning a policy of type qos at the system level is only supported when using Fabric Extenders; otherwise, an error is displayed:
policy-map POLICY-QOS-GLOBAL (type:qos) is supported only on fex
After applying at the system level, the policy must be applied to trunk ports and FEX Host Interface (HIF) ports to take effect. From the FEX QoS guide, “When configuring end to end queuing from the HIF to the front panel port, the QoS classification policy needs to be applied to both system and HIF. This allows the FEX to queue on ingress appropriately (system) and allows the egress front panel port to queue appropriately (HIF).”
Application of the policy to the trunk and FEX HIF ports is nearly identical to the same task for Catalyst switches:
!
interface Ethernet101/1/1
  service-policy type qos input POLICY-QOS-GLOBAL
!
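As a sanity check after applying the policy to a port, the per-interface policy and queuing counters can be inspected (again, output format varies by release):

```
NXOS1# show policy-map interface ethernet 101/1/1
NXOS1# show queuing interface ethernet 101/1/1
```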
With the basics now covered, there are a couple more policies we will need to create to handle classification and marking on access and routed interfaces. Those policies will be covered in the next post. Hopefully I won’t find more overlooked caveats during this process! Until then, keep learning – and remember to read very carefully before implementing!