Sunday 30 March 2008

Congestion Management and Queuing - LLQ

LLQ (Low Latency Queuing) - offers all of the benefits of CBWFQ, with the addition of one or more low-delay, low-jitter priority queues for supporting VoIP/ToIP. As with CBWFQ, all traffic not identified using class-maps is assigned to the default-class, which uses all remaining available bandwidth. The default-class can use FIFO or WFQ, optionally with WRED.

Strict-priority queues enable low-delay transmission of real-time data. The priority queue is policed to its configured bandwidth, which prevents starvation of the other queues.

Configuration of LLQ follows the same method as CBWFQ; an additional 'priority' command is used to specify strict-priority queues.

E.g.
policy-map enterprise_qos
class voice
priority 50
class business
bandwidth 200
class class-default
fair-queue

The policy above offers a strict-priority queue for the voice class with a bandwidth guarantee of 50Kb/sec, while the business class is guaranteed 200Kb/sec using CBWFQ.

Congestion Management and Queuing - CBWFQ

CBWFQ (Class Based Weighted Fair Queuing) - uses user-defined classes, and each class is assigned its own queue. Each queue can be configured with its own (minimum) bandwidth guarantee. Flexible class-maps are used to match traffic (flows) to queues.

CBWFQ can create up to 64 queues, one for each user-defined class. Each queue uses FIFO with a defined bandwidth guarantee and a maximum packet limit; tail drop occurs if the maximum packet limit is reached. To avoid tail drops, WRED can be applied to specific queues, as shown in the sketch below.
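
A minimal sketch of WRED applied to a CBWFQ class inside a policy-map (the class name and bandwidth value are illustrative only):

policy-map cbwfq_wred_example
class bulk_data
bandwidth 100
random-detect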

A default-class is always present in CBWFQ; it queues any traffic not matching the user-defined classes. It is possible to define a minimum bandwidth guarantee for the default-class, otherwise it will use any remaining available bandwidth.

CBWFQ is not able to offer a queue suitable for applications that require low latency queuing.

Steps for configuring CBWFQ:
1. Define class-maps (match features described previously)
2. Create a policy-map, defining parameters such as guaranteed bandwidth/queue limit/queuing method
3. Apply policy-map to an interface in a specified direction

E.g. Guarantee minimum 120Kb/sec bandwidth for web-based applications, with a maximum queue limit of 90 packets, guarantee 20Kb/sec bandwidth for traffic from host 192.168.2.3, and fair-queue all other traffic

access-list 100 permit ip host 192.168.2.3 any
!
class-map web_application
match protocol http
!
class-map host_192.168.2.3
match access-group 100
!
policy-map enterprise_policy
class web_application
bandwidth 120
queue-limit 90
class host_192.168.2.3
bandwidth 20
class class-default
fair-queue
!
interface FastEthernet 0/0
service-policy output enterprise_policy

Congestion Management and Queuing - WFQ

WFQ (Weighted Fair Queuing) - the default queuing mechanism on serial interfaces at or below E1 speed (2048Kb/sec); it is also used by the modern queuing methods CBWFQ and LLQ. WFQ is a flow-based queuing method; each flow is assigned to its own FIFO queue.

Flows are identified by the following features:
-Src/Dst IP address
-Protocol number
-ToS
-Src/Dst TCP/UDP port number

When a packet is received its hash value is calculated from the fields above; all packets in a flow will have the same hash value. If the hash identifies a new flow the packet is assigned a new queue, and if it is part of an existing flow it is assigned to the same queue as the rest of that flow's packets.

If a queue grows larger than the congestive discard threshold then new packets for that queue may be dropped. The priority of a packet or flow influences its sequence number; higher priority packets are assigned a smaller sequence number and so are given preferential treatment. If all flows have the same priority the interface bandwidth is divided equally between them.

The number of queues is equal to the number of active flows, up to a default maximum of 256 queues. When the number of flows exceeds the configured maximum, new flows are added to existing queues.



WFQ has a hold-queue limit that covers all queues combined; when the hold-queue limit is exceeded any new packet is dropped, which is called 'aggressive dropping'. Aggressive dropping has one exception: if the destination queue is empty then the packet will still be accepted.

Every queue has a congestive discard threshold (CDT); if the hold queue is not full but the CDT of the destination queue is reached then the new packet is dropped. This is called 'early dropping', and the only exception is if another queued packet has a higher sequence number, in which case that packet is dropped instead.

WFQ is enabled using the 'fair-queue' command on an interface; 'fair-queue cdt_value' can also be used to configure the CDT (and optionally the number of dynamic queues). 'hold-queue max_limit out' sets the hold-queue limit.
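
E.g. a sketch of WFQ tuning on a serial interface (the interface, CDT of 32, 512 dynamic queues and hold-queue limit of 1000 are example values only):

interface Serial0/0
fair-queue 32 512
hold-queue 1000 out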

Congestion Management and Queuing - RR & WRR

RR (Round Robin) - RR uses multiple queues; the packet scheduler takes one packet from each queue in turn and then starts from the top again. If there is a large packet in one queue then the other queues may suffer temporarily.

All queues are of equal priority and it is not possible to prioritise traffic.

WRR (Weighted Round Robin) - a modified version of RR, where specific weights can be assigned to queues. Custom Queuing (CQ) is a form of WRR where you can specify the number of bytes to be serviced from each queue before moving to the next.
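
E.g. a sketch of a Custom Queuing configuration, where queue 1 is serviced for roughly twice as many bytes as queue 2 per cycle (the access-list, queue numbers and byte counts are example values only):

access-list 101 permit tcp any any eq www
!
queue-list 1 protocol ip 1 list 101
queue-list 1 default 2
queue-list 1 queue 1 byte-count 3000
queue-list 1 queue 2 byte-count 1500
!
interface Serial0/0
custom-queue-list 1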

Congestion Management and Queuing - Priority Queuing

PQ - Priority Queuing uses four queues: high/medium/normal/low. Packets must be assigned to a queue; packets that are not explicitly assigned are put into the normal queue. Access-lists are commonly used to identify the traffic to be queued.

1. If packets exist in the high priority queue then packets in other queues will not be processed until it is empty.
2. If the high priority queue is empty then one packet will be processed from the medium priority queue.
3. If the medium priority queue is empty then a packet from the normal queue is processed.
4. If the normal priority queue is empty then a packet from the low priority queue is processed.

After processing one packet the packet scheduler always starts from the beginning again, checking the queues in the order described above. The problem with this is that lower priority queues may be starved if higher priority queues constantly have traffic waiting.

The Cisco IOS 'priority-list' command can be used to define the queue that packets should be sent to.
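
E.g. a sketch of a PQ configuration, applied to an interface with 'priority-group' (the access-list and queue assignments are examples only):

access-list 101 permit udp any any range 16384 32767
!
priority-list 1 protocol ip high list 101
priority-list 1 protocol ip medium tcp telnet
priority-list 1 default normal
!
interface Serial0/0
priority-group 1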

Congestion Management and Queuing - FIFO

First In First Out (FIFO) - FIFO uses a single queue and requires no configuration effort. Packet class/priority/type are not considered when queuing. Real-time traffic may be queued behind bulk data, and may also be dropped when the queue is full.

On high bandwidth interfaces FIFO is considered an appropriate queuing mechanism.

Congestion Management and Queuing - Intro

Queuing is a technique that deals with temporary congestion; for links that are congested long-term the recommended practice is to upgrade the link.

The default queuing mechanism on all interfaces is FIFO (first in first out), except serial interfaces at 2048Kb/sec (E1) or below, which default to WFQ.

Software queues are only used when the hardware queue (transmit queue, TxQ) is full; the hardware queue quickly transmits packets in the order they are received.

Friday 28 March 2008

Network Based Application Recognition - NBAR

NBAR is capable of detecting applications/flows passing through a router. It is limited in the applications it can recognise out of the box, but PDLMs can be added to NBAR allowing recognition of additional applications. NBAR is simpler than access-lists and also supports matching on HTTP MIME types/URLs and protocols that use dynamically assigned ports (stateful inspection).

NBAR can be integrated into QoS (MQC) to identify and classify traffic, and its protocol-discovery feature provides per-protocol traffic statistics on an interface.

ip nbar pdlm name_of_pdlm - Add a new PDLM located in flash
ip nbar port-map name_of_protocol tcp/udp port_number - map protocol to port
ip nbar protocol-discovery - enable NBAR protocol discovery on an interface

match protocol protocol_name - identify traffic in class-map using NBAR.

http://www.cisco.com/pcgi-bin/tablebuild.pl/pdlm - Download PDLMs
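
E.g. a sketch combining protocol discovery with NBAR-based classification and marking (the interface, class name, protocols and DSCP value are examples only):

interface FastEthernet0/0
ip nbar protocol-discovery
!
class-map match-any web_video
match protocol http
match protocol rtsp
!
policy-map mark_web_video
class web_video
set dscp af21
!
interface FastEthernet0/0
service-policy input mark_web_video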

QoS - Trust Boundaries

As we all know, we should perform traffic classification and marking as close to the network edge as possible (this means the access layer switches, or even devices connected to the access-layer).

Defining a trust boundary is important, it prevents unauthorised software/devices from marking packets with a priority that could be detrimental to other critical traffic flows.

There are three possible trust boundaries at the edge: the host, the IP phone or the access switch; these generally involve CoS (layer 2) markings. If you wish to mark at layer 3 (DSCP) then a distribution switch may be your trust boundary, as illustrated below.
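
E.g. a sketch of extending the trust boundary to an IP phone on a Catalyst access switch that supports 'mls qos' (the interface and voice VLAN are examples only; the PC attached to the phone has its CoS re-marked to 0):

mls qos
!
interface FastEthernet0/5
switchport voice vlan 10
mls qos trust device cisco-phone
mls qos trust cos
switchport priority extend cos 0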


Thursday 27 March 2008

Diffserv model - IPP and DSCP

The ToS field in the IP header (one byte, 8 bits) can be used for IP Precedence or DSCP markings.

IP Precedence is the older marking; it only uses 3 bits to define priority, giving 8 possible values of which only 6 are user-definable (values 6 and 7 are reserved).
DSCP is the newer marking; it uses 6 bits and is backwards compatible with IPP.

Default per-hop behaviour (PHB): the three most significant bits are '000' and traffic receives best-effort service.

Assured Forwarding (AF) PHB: the three most significant bits define the AF class; provides a guaranteed bandwidth service.

Expedited Forwarding (EF) PHB: the three most significant bits are '101'; provides a low delay service.


Expedited Forwarding - ensures low delay and low jitter, and is designed for real-time applications. It is important to limit the bandwidth of EF so that it doesn't starve other classes; ideally EF is used together with admission control. During congestion EF traffic is policed to its allocated bandwidth.

Non-DSCP-compliant voice applications mark IPP as 101 (5 - critical); the three most significant bits of EF are also 101, which enables backwards compatibility.

Assured Forwarding - AF provides four queues for four classes of traffic (AF1x/AF2x/AF3x/AF4x); the 'x' value defines the drop probability. There is no priority among the classes.

AF11 is less likely to be dropped than AF13. There are three drop precedences: low (1), medium (2) and high (3).

Each AF class is backwards compatible with IPP values, e.g. AF21 is equal to IPP2

EF class is backwards compatible with IPP value 5 (binary 101).
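
As a quick sanity check on the backwards compatibility, the DSCP value of an AF class can be calculated from its class (x) and drop precedence (y), and the three most significant bits then give the equivalent IPP value:

\[
\mathrm{DSCP}(\mathrm{AF}xy) = 8x + 2y
\]
\[
\mathrm{AF21} = 8(2) + 2(1) = 18 = 010010_2 \;\Rightarrow\; \text{top three bits } 010_2 = \mathrm{IPP}\ 2
\]
\[
\mathrm{EF} = 46 = 101110_2 \;\Rightarrow\; \text{top three bits } 101_2 = \mathrm{IPP}\ 5
\]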


When implementing QoS it is important to ensure that enough bandwidth is reserved for each queue to avoid delay or drops. Provided an AF queue is not policed, it can consume more than its reserved bandwidth when spare bandwidth is available.

Classification & Marking - @layer 2 MPLS

Packets entering an MPLS network have an additional 4-byte header (label) added; this header contains the MPLS forwarding parameters.

Within the MPLS header there is a field called EXP; this experimental field is used for marking priority. 3 bits are available for specifying priority, just like the CoS field in 802.1Q frames or IP Precedence.

At the edge of the MPLS network, when an IP packet arrives the three most significant bits of the ToS byte (the IP Precedence, the layer 3 QoS marking) are copied by default to the EXP field of the MPLS header.

It is possible for the MPLS service provider to choose not to copy the IP Precedence to the EXP field and instead set the EXP field themselves; this leaves the ToS field untouched for the customer to mark as appropriate.
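
E.g. a sketch of setting the EXP field with MQC at the MPLS edge (the class-map, marking value and interface are examples only; newer IOS releases use the 'set mpls experimental imposition' form):

class-map business
match ip precedence 3
!
policy-map set_exp
class business
set mpls experimental 3
!
interface Serial0/0
service-policy output set_exp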

Classification & Marking - @layer2 Frame-relay/ATM

QoS marking at Frame Relay and ATM isn't very sophisticated; in Frame Relay the DE (discard eligible) bit is set to 1 on frames that are allowed to be dropped.

ATM uses the same principle with a field called CLP (cell loss priority); when set to 1 the cell is a good candidate for dropping.

Classification & Marking - @Layer 2 Ethernet

CoS - Class of Service is a method of marking frames at layer 2, commonly done by an access switch or IP phone.

802.1Q VLAN trunking adds a 4-byte tag to an Ethernet frame. Within that tag is a 3-bit 'user priority' field, commonly referred to as 802.1p.

The 802.1p field allows up to 8 different values that can identify different priorities of traffic.

The image below shows the placement of the 802.1Q tag, and the 802.1p field within it.


Classification & Marking

Classification and marking is the process of identifying traffic and then grouping it into classes. Traffic descriptors are the characteristics used to identify traffic; the following are common traffic descriptors:

Ingress (incoming interface)
Source/destination IP address
CoS (class of service) layer 2 marking on ISL/802.1p frame
IP Precedence/DSCP layer 3 marking on IP packet
MPLS EXP value on MPLS header
Application type

Common practice nowadays is to classify the packet using deep inspection as close to the edge as possible and mark it there; then, as the packet traverses the network, only the marking needs to be checked.
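
E.g. a sketch of edge classification with deep inspection (NBAR) and DSCP marking (the interface and class name are examples only); further into the network a class-map then only needs to 'match dscp ef':

class-map match-all voip_rtp
match protocol rtp audio
!
policy-map edge_marking
class voip_rtp
set dscp ef
!
interface FastEthernet0/1
service-policy input edge_marking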

Wednesday 26 March 2008

QoS - Models

Best Effort - no configuration required, no differentiated service to applications

Integrated Services (IntServ) - signals to reserve resources and guarantee quality end to end, using the RSVP signalling protocol. The model is also known as hard QoS; signalling is on a per-flow basis, so it can become resource intensive.

Differentiated Services (DiffServ) - the most modern model; uses traffic classification and marking, with QoS treatment applied on a per-hop (per-router) basis.

QoS - Methods of Configuration

There are multiple methods of configuring QoS; the best tool to use depends on what you want to achieve and your familiarity with router configuration.

1. Legacy command-line interface (CLI) - old, complicated, prone to errors in configuration

2. Modular QoS Command Line (MQC) - new, efficient, uniform across all router platforms
-define traffic classes - class-map
-define treatment to traffic classes - policy-map
-apply QoS policy to an interface in a direction - service-policy in/out

3. AutoQoS - automatic QoS policy configuration, including traffic identification, marking and queuing

4. SDM QoS Wizard - Graphical web interface, QoS configuration by software wizard

Recommended method of QoS configuration is MQC.

QoS - Major Issues

QoS isn't just for fun; it aims to resolve the following key issues experienced in enterprise networks:

Available bandwidth: traffic flow is limited by the link with the least bandwidth (the bottleneck), and the bandwidth available to each flow is roughly total bandwidth divided by the number of flows. Methods to address a lack of bandwidth include upgrading links (costly), classifying and marking traffic and using queuing techniques, or using compression such as cRTP, which can be CPU intensive.
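
E.g. a sketch of enabling cRTP on a slow serial link (the interface is an example only; on Frame Relay the 'frame-relay ip rtp header-compression' form is used instead):

interface Serial0/0
ip rtp header-compression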

End to end delay: Four types of delay-

1. Processing delay - time taken for a device to move a packet from the ingress interface to the egress interface
2. Queuing delay - time spent in queue before serialisation
3. Serialisation delay - time taken to send all bits onto medium
4. Propagation delay - time taken to traverse medium

Variation of delay (jitter): packets within a flow may arrive out of order, and the end-to-end delay may vary from packet to packet. Real-time applications like video and voice require packets to arrive at a steady rate and in the order they were sent. To resolve the problem a de-jitter buffer is used at the receiving device; where the jitter is within limits, it smooths out the variation and re-orders packets before passing them to the application.

Packet loss: packet loss occurs when router buffers fill up and packets are dropped. Routers may also drop some packets to make room for higher priority traffic. TCP retransmits lost packets, but UDP is connectionless with no retransmission or flow control, so for VoIP and video applications lost packets simply degrade quality.

VoIP- Odds and ends

Important stuff....but already memorised

cRTP - RTP header compression - beneficial on links of less than 2Mb/sec.

VAD - Voice Activity Detection - offers substantial bandwidth savings. The theory is that conversations involve substantial amounts of time where no voice is spoken; with VAD enabled these silent periods are not packetised. Another premise is that when one party is talking the other usually is not, so there is no need to packetise voice in that direction.

Fragmentation - VoIP packets are typically small, so when a small VoIP packet sits in an outbound queue behind a large data packet the call quality suffers (serialisation delay). Fragmentation breaks large packets into smaller fragments (effectively reducing the maximum transmission unit, MTU) so that VoIP packets are not delayed waiting for large packets to leave the output queue.

VoIP - Pulse Code Modulation

PCM - Pulse Code Modulation:

PCM is a method of converting analogue voice signals to digital voice signals... this is how it works!...

8000 samples of voice are collected and recorded every second, and each sample is quantized by assigning an 8-bit binary number based on its voltage/height; this produces a bit rate of 64000 bit/sec (64Kb/sec).
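
The 64Kb/sec figure follows directly from the sampling rate and the sample size:

\[
8000\ \tfrac{\text{samples}}{\text{sec}} \times 8\ \tfrac{\text{bits}}{\text{sample}} = 64000\ \tfrac{\text{bits}}{\text{sec}} = 64\ \mathrm{Kb/sec}
\]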

PCM is uncompressed, and fits nicely within a single 64Kb/sec channel of an E1 trunk! Easy

VoIP - Analogue to Digital Conversion

In order to convert analogue electrical signals into digital binary numbers the following steps take place:

1. Sampling - Periodic capturing and recording of the voice signal (resulting in a PAM, Pulse Amplitude Modulation, signal)

2. Quantization - Assigning numeric values to the voltage (or height) of each sample

3. Encoding - Representing the numeric values in a binary format

4. Compression (optional) - Aims to reduce the number of bits transmitted to conserve bandwidth

High quality voice calls use a high sampling rate; the downside is that this generates a higher bit rate, which is more bandwidth intensive!

VoIP - Interfaces

Interfaces!...

Digital interfaces: (multiple channels)
ISDN - PRI [e1/t1]
ISDN - BRI

Analogue interfaces: (single channel)
FXS (phone/fax/modem ~ringing)
FXO (PSTN CO switch ~battery, dial-tone, digit collection)
E&M (trunk PBX-PBX connectivity)

Introduction

I will be writing this blog whilst studying for the Cisco Optimising Converged Cisco Networks (ONT) exam.

The aim is to capture the key information from my studying for the examination...