Thursday, April 8, 2010

MPLS Label Assignment and Distribution

Label Distribution Protocol (LDP) and Tag Distribution Protocol (TDP) exchange labels and store the information in the label information base (LIB).

A label is added to the IP forwarding table (forwarding information base, or FIB) to map an IP prefix to a next-hop label.

A locally generated label is added to the label forwarding information base (LFIB) and mapped to a next-hop label.

An LSP is a sequence of LSRs that forward labeled packets for a particular FEC. Each LSR swaps the top label in a packet traversing the LSP. An LSP is similar to Frame Relay or ATM virtual circuits. In cell-mode MPLS, an LSP is a virtual circuit.

Impacts of IP Aggregation

Aggregation (or summarization) should not be used on ATM LSRs because it breaks LSPs in two, which means that ATM switches would have to perform Layer 3 lookups.

Aggregation should also not be used where an end-to-end LSP is required. Typical examples of networks that require end-to-end LSPs are the following:

A transit BGP autonomous system (AS) where core routers are not running BGP
An MPLS VPN backbone
An MPLS-enabled ATM network
A network that uses MPLS TE

Frame-Mode Loop Detection

The TTL functionality in MPLS is equivalent to that of traditional IP forwarding. Furthermore, when an IP packet is labeled, the TTL value from the IP header is copied into the TTL field in the label. This is called “TTL propagation.”

TTL propagation can be disabled to hide the core routers from the end users. Disabling TTL propagation causes routers to set the TTL field of the label to 255 when an IP packet is labeled.

If TTL propagation is disabled, it must be disabled on all routers in an MPLS domain to prevent unexpected behavior.

TTL propagation can optionally be disabled for forwarded traffic only, which allows administrators to use traceroute from the routers themselves to troubleshoot problems in the network.
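As a configuration sketch of the two behaviors described above (the commands are standard IOS, but treat this as an illustrative fragment rather than a complete configuration):

```
! Disable TTL propagation for all labeled packets -- hides core hops
! from everyone, including traceroute run on the routers themselves
no mpls ip propagate-ttl

! Or disable it for forwarded traffic only, so traceroute started on
! the routers themselves still reveals the label-switched hops
no mpls ip propagate-ttl forwarded
```

Whichever form you choose should be applied consistently on all routers in the MPLS domain, for the reason given above.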

Penultimate Hop Popping

PHP optimizes MPLS performance by reducing the number of table lookups on the egress router.

PHP is not supported on ATM devices because a label is part of the ATM cell payload and cannot be removed by the ATM switching hardware.

Per-Platform Label Allocation

There are two possible approaches for assigning labels to networks:

* Per-platform label allocation: One label is assigned to a destination network and announced to all neighbors. The label must be locally unique and valid on all incoming interfaces. This is the default operation in frame-mode MPLS.

* Per-interface label allocation: Local labels are assigned to IP destination prefixes on a per-interface basis. These labels must be unique on a per-interface basis.

MPLS Convergence

The overall convergence in an MPLS network is not affected by LDP convergence when there is a link failure.

Frame-mode MPLS uses liberal label retention mode, which enables routers to store all received labels, even if they are not being used.

These labels can be used, after the network convergence, to enable immediate establishment of an alternative LSP tunnel.

Cell-Mode Issues

Cell-mode MPLS is significantly different from frame-mode MPLS because of some ATM-specific requirements:

* ATM uses cells and not frames. A single packet may be encapsulated into multiple cells. Cells are a fixed length, which means that normal labels cannot be used because they would increase the size of a cell. The virtual path identifier/virtual channel identifier (VPI/VCI) field in the ATM header is used as the MPLS label. An LSP tunnel is therefore called a virtual circuit in ATM terminology.
* ATM switches and routers usually have a limited number of virtual circuits that they can use. MPLS establishes a full mesh of LSP tunnels (virtual circuits), which can result in an extremely large number of tunnels.

Because ATM switches cannot forward IP packets, labels cannot be asynchronously assigned and distributed.

Instead, the router initiates an ordered sequence of requests on the upstream side of the ATM network.

The forwarding table is not populated until the request is answered with a label, which is then mapped to the destination in the IP routing table.

An ordered sequence of downstream requests is followed by an ordered sequence of upstream replies. This type of operation is called downstream-on-demand allocation of labels.

Two virtual circuits can merge into one. Standard ATM switching hardware does not support this situation, and as a result, segmented packets from the two sources may become interleaved.

There are two possible solutions to this problem:

* Allocate a new downstream label for each request. This solution would result in a greater number of labels.
* Buffer the cells of the second packet until all cells of the first packet are forwarded. This solution results in an increased delay of packets because of buffering.

The major benefit of VC merge is that it minimizes the number of labels (VPI/VCI values) needed in the ATM part of the network.

The major drawbacks to VC merge are as follows:

* Buffering requirements increase on the ATM LSR.
* There is an increase in delay and jitter in the ATM network.
* ATM networks under heavy load become more like frame-based networks.

Loop Detection in Cell-Mode MPLS Networks

Cell-mode MPLS uses the VPI/VCI fields in the ATM header to encode labels. These two fields do not include a TTL field. Therefore, cell-mode MPLS must use other ways of preventing routing loops.

LDP uses a hop-count TLV (type, length, value) attribute to count hops in the ATM part of the MPLS domain.

This hop count can be used to provide correct TTL handling on ATM edge LSRs on behalf of ATM LSRs that cannot process IP packets.

A maximum limit in the number of hops can also be set.

Per-Interface Label Allocation

Cell-mode MPLS defaults to using per-interface label space because ATM switches support per-interface VPI/VCI values to encode labels.

Therefore, if a single router has two parallel links to the same ATM switch, two LDP sessions are established and two separate labels are requested.

Label Distribution Parameters

The two label space options are:

* Per-interface label space, where labels must be unique for a specific input interface
* Per-platform label space, where labels must be unique for the entire platform (router)

The two options for label generation and distribution are as follows:

* Unsolicited downstream distribution of labels is used in frame-mode MPLS, where all routers can asynchronously generate local labels and propagate them to adjacent routers.
* Downstream-on-demand distribution of labels is used in cell-mode MPLS, where ATM LSRs have to request a label for destinations found in the IP routing table.

Another aspect of label distribution focuses on how labels are allocated:

* Frame-mode MPLS uses independent control mode, where all routers can start propagating labels independently of one another.
* Cell-mode MPLS requires LSRs to already have the next-hop label if they are to generate and propagate their own local labels. This option is called ordered control mode.

The last aspect of label distribution looks at labels that are received but not used:

* Frame-mode MPLS may result in multiple labels being received but only one being used. Unused labels are kept, and this mode is usually referred to as liberal label retention mode.
* Cell-mode MPLS keeps only labels that it previously requested. This mode is called conservative label retention mode.

LDP Session Establishment

LDP is a standard protocol used to exchange labels between adjacent routers. The Tag Distribution Protocol (TDP) is a Cisco-proprietary protocol that has the same functionality as LDP.

LDP periodically sends hello messages. The hello messages use UDP packets with a multicast destination address of 224.0.0.2 (“all routers on a subnet”) and a destination port number of 646 (711 for TDP).

If another router is enabled for LDP (or TDP), it will respond by opening a TCP session with the same destination port number (646 or 711).
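As a hedged illustration of how such a session is brought up in IOS (the interface name and addresses are hypothetical):

```
! Select LDP (rather than TDP) as the label distribution protocol
mpls label protocol ldp
!
interface Serial0/0
 ip address 10.1.1.1 255.255.255.252
 mpls ip    ! enable label switching; LDP hellos start on UDP 646
```

Once the peer answers, show mpls ldp neighbor can be used to verify that the TCP session on port 646 has been established.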

ATM LSRs establish the IP adjacency across the MPLS control virtual circuit, which by default has a VPI/VCI value of 0/32.

An IP routing protocol and LDP (or TDP) use this control virtual circuit to exchange IP routing information and labels.

Some Cisco devices use the Virtual Switch Interface (VSI) protocol to create entries in the LFIB table (ATM switching matrix of the data plane) based on the information in the LIB table (control plane). This protocol is used to dynamically create virtual circuits for each IP network.

MPLS Concepts

The two major elements of MPLS architecture are the control plane and the data plane.

* The control plane exchanges routing information (with routing protocols such as OSPF) and labels with protocols such as LDP or TDP
* The data plane is the forwarding engine

MPLS labels carry the information needed to forward packets. They function differently depending on whether MPLS is operating in frame-mode or cell-mode.

* In frame-mode MPLS, labels are 32-bit fields inserted between the Layer 2 and Layer 3 headers. These are broken into the following:
o 20-bit label
o 3-bit experimental field
o 1-bit bottom-of-stack indicator
o 8-bit TTL field
* In cell-mode, the ATM header is the label

A label switch router (LSR) is a device that forwards based on labels.

An edge LSR imposes labels on packets and removes labels from them.

LSRs that perform cell-mode MPLS are divided into the following categories:

* ATM LSRs if they are ATM switches. All interfaces are enabled for MPLS, and forwarding is done based only on labels.
* ATM edge LSRs if they are routers connected to an MPLS-enabled ATM network.

Forwarding equivalence class (FEC) describes the forwarding characteristic of a packet, such as the destination IP.

MPLS is used for the following applications:

* Unicast IP routing
* Multicast IP routing
* MPLS traffic engineering provides more efficient link use
* Differentiated Quality of Service
* MPLS VPNs - Separate customer routing information across the MPLS backbone
* Any Transport over MPLS - Transport Layer 2 packets over an MPLS backbone

Thursday, April 1, 2010

Optimizing BGP Scalability

Optimizing BGP is a sizable topic, due not only to the number of parameters and attributes you can modify for the protocol, but also to the sheer size of the routing table(s) you could deal with! This final chapter of BGP focuses on four ways of optimizing BGP operation when dealing with these enormous routing tables:

* Reducing BGP convergence time
* Limiting the number of BGP prefixes from a neighbor
* Using BGP peer groups
* Configuring route dampening

Reducing BGP Convergence Time

The creators of the BGP routing protocol designed it for slow convergence. Although this seems illogical, it becomes clear when you realize the sheer size of a BGP network. If BGP propagated routes quickly, a single, flapping network could cause an instant worldwide routing table recalculation. Considering the number of flapping routes that exist on a daily basis, this would be disastrous.

Using a variety of BGP configuration commands, you can lower the convergence time of BGP. If you are dealing with Internet-sized routing tables, Cisco recommends that you do NOT adjust the following timers. However, if you are using BGP to manage an enterprise-sized routing table, modifying the following timers can improve network performance and convergence time.

There are two timers you can adjust to lower the convergence time of BGP: the scanner interval and the advertisement interval.

The scanner interval is how often the BGP routing process “walks through” the BGP routing table and ensures all routes are still reachable. By default, this occurs once every 60 seconds. By lowering this interval, you allow BGP to modify the table more quickly in the event that a next-hop address becomes unreachable. Keep in mind that decreasing this interval does adversely affect your router CPU load. Use the following syntax to modify the scanner interval:

Router(config-router)# bgp scan-time seconds

The advertisement interval is how often BGP sends route updates to a neighboring router. By default, BGP waits 30 seconds between updates to EBGP neighbors and 5 seconds between updates to IBGP neighbors. By decreasing this interval, the BGP routing process can propagate changes to a neighbor sooner, resulting in faster convergence. Use the following syntax to modify the advertisement interval:

Router(config-router)# neighbor ip_address advertisement-interval seconds
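Putting the two commands together, here is a sketch with made-up values and a hypothetical neighbor address (remember the caution above about Internet-sized tables):

```
router bgp 65001
 ! Scan the BGP table every 30 seconds instead of the default 60
 bgp scan-time 30
 ! Send updates to this neighbor at most every 10 seconds
 neighbor 10.1.1.2 advertisement-interval 10
```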

Limiting the number of BGP prefixes from a neighbor

This feature allows you to limit the number of route advertisements you receive from a particular neighbor. This is necessary to protect yourself from a misconfigured neighbor who could send multiple copies of the Internet routing table to your router. This would quickly result in a memory overflow and potentially cause the router to crash. Use the following syntax to limit the number of prefixes you can receive from a neighbor:

Router(config-router)# neighbor ip_address maximum-prefix number_of_prefixes [threshold] [warning-only] [restart minutes]

Following is a description of the optional arguments for the maximum-prefix syntax:

threshold – This is a number from 0-100 representing a percentage. When a router reaches this percentage of prefixes (in relation to the maximum number of prefixes), it will begin generating warning messages.

warning-only – This causes the BGP router process to ONLY send warning messages when the neighbor exceeds the maximum number of prefixes. The default behavior is to drop the neighbor connection.

restart minutes – This instructs the router to try to re-establish the session after the specified interval, in minutes.
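A worked example of the syntax above, with hypothetical values:

```
router bgp 65001
 ! Warn at 80% of 100000 prefixes; above the limit, drop the
 ! session and retry automatically after 30 minutes
 neighbor 10.1.1.2 maximum-prefix 100000 80 restart 30
```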

Using BGP peer groups

BGP peer groups are primarily designed to ease BGP neighbor configuration. However, peer groups also provide a slight performance boost. Peer groups allow you to group common neighbor parameters under a peer group name. This is useful if you have many BGP neighbors with similar parameters. You can then assign all the neighbors to a common peer group rather than assigning all the neighbor parameters individually. The syntax to create a peer group is as follows:

Peer group creation

Router(config-router)# neighbor peer_group_name peer-group

Router(config-router)# neighbor peer_group_name (assign parameters to the peer group such as remote-as, route-map, filter-list, etc…)

Assigning peer groups

Router(config-router)# neighbor ip_address peer-group peer_group_name
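Here is the whole flow as one sketch, with an invented group name and neighbor addresses:

```
router bgp 65001
 ! Define the peer group and its common parameters
 neighbor INTERNAL peer-group
 neighbor INTERNAL remote-as 65001
 neighbor INTERNAL update-source Loopback0
 ! Assign individual neighbors to the group
 neighbor 10.1.1.2 peer-group INTERNAL
 neighbor 10.1.1.3 peer-group INTERNAL
```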

Configuring Route Dampening

Because the Internet is such a large entity, the probability for routing table changes is extremely high. At any given time of day or night, there are routes being added and removed from the BGP routing table. When a router connected to the Internet is failing, a common symptom is the connection going up and dropping continuously. Administrators commonly refer to this as route flapping. Uninhibited route flapping can cause constant, worldwide BGP routing table changes, thus decreasing Internet performance.

Route dampening is a method that allows a service provider to detect flapping routes and suppress them. This keeps a route that could potentially flap for hours or even days from propagating across the Internet. The architecture of route dampening is fairly easy to understand. When a route flaps (goes down and back up), the service provider assigns that route a penalty. After a route has been assigned too many penalties, the service provider suppresses the route and no longer advertises it for a certain amount of time.

Before you can understand the configuration of route dampening, you must understand the terminology:

Suppress Limit – The penalty limit at which a route is suppressed. Once a route reaches this limit, it is no longer advertised.

Reuse Limit – The point at which the route is re-advertised to the Internet. Once the penalty assigned to a route decays below this amount, the service provider will re-advertise the route. (In addition, the service provider erases all penalties assigned to a route once the penalty drops below half of the reuse limit.)

Maximum Suppress Limit – The maximum amount of time the service provider will suppress a route.

Now that you understand the foundation terms, here is the syntax to configure route dampening:

Router(config-router)# bgp dampening [half-life reuse suppress max-suppress-time]

half-life – How long before the service provider reduces the penalty of a route by half

reuse – The penalty value at which a route is reused

suppress – The penalty value at which a route is suppressed

max-suppress-time – The maximum amount of time a route can be suppressed
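For example (these values also happen to be the IOS defaults):

```
router bgp 65001
 ! half-life 15 min, reuse limit 750, suppress limit 2000,
 ! maximum suppress time 60 min
 bgp dampening 15 750 2000 60
```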

Scaling Service Provider Networks

Here are some guidelines for scaling service provider networks:

* BGP carries customer and provider routes
* IGPs carry only internal routes used to supply routers with an understanding of the next-hop IP. This may include loopback IPs for IBGP neighborships.
* Do not redistribute BGP into your IGP
* IBGP does not scale well as a full mesh, and a full mesh creates too much update traffic
* Use route-summarization whenever possible

Route Reflectors overcome the full mesh requirement of IBGP neighborship.

Here is how a route reflector will behave.

* When a router receives an update from an external peer, it will propagate that advertisement to all peers (eBGP and iBGP).
* When a router receives an update from a non-client internal peer, if it is a router reflector, it will propagate that advertisement to all clients and eBGP peers.
* When a route reflector receives an update from a client, it will be reflected to all iBGP peers.

Route-reflectors may be single points of failure unless clusters are used. Clusters allow for redundancy without problems such as routing loops.

A hierarchy of route reflectors may be used to scale very large autonomous systems.

Confederations allow a large autonomous system to be carved up into smaller AS numbers. To the outside world, the autonomous systems participating in the BGP confederation are seen as a single AS. This can help overcome scalability by reducing peering.

An iBGP full mesh is needed for member-autonomous systems. eBGP neighborships can be used in any manner to provide connectivity between all participating member-ASs.

Important Commands:

bgp cluster-id cluster-id – Configures the route reflector cluster ID

neighbor ip-address route-reflector-client – Informs a route reflector of its clients

router bgp member-as-number – Configures the member-AS of a router within a confederation

bgp confederation identifier external-as-number – Configures the external AS number that the confederation presents to the outside world

bgp confederation peers list-of-intra-confederation-as – Informs an intermember EBGP speaker in a confederation of the other member-autonomous systems participating in the confederation
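To tie the commands together, here is a sketch of two different routers (all AS numbers and addresses are invented):

```
! Router 1: a route reflector with one client in AS 65001
router bgp 65001
 bgp cluster-id 1.1.1.1
 neighbor 10.1.1.2 remote-as 65001
 neighbor 10.1.1.2 route-reflector-client
!
! Router 2: member-AS 65100 inside a confederation that the
! outside world sees as AS 100
router bgp 65100
 bgp confederation identifier 100
 bgp confederation peers 65101 65102
```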

Tuesday, February 9, 2010

BGP--Route Selection Using Attributes

Route Selection Using Attributes

Attributes fall into multiple categories.

Well-known means that all implementations must support the attribute.

Optional attributes do not need to be recognized by the BGP implementation.

There are two categories of well-known attributes. They are mandatory and discretionary.

Mandatory attributes must be included in all messages.

Discretionary attributes do not need to be included in a message.

Mandatory Well-Known attributes are as follows:

AS-path

Origin

Next-hop IP

Discretionary Well-Known attributes are those below:

Local Preference

Atomic Aggregate

Optional attributes can be either transitive or nontransitive.

Nontransitive means the attribute is not propagated beyond the local neighbors; a router that does not recognize it simply drops it.

Transitive means the attribute is kept and carried beyond the local neighbors, even by routers that do not recognize it.

The MED is an optional nontransitive attribute.

The aggregator and community are optional transitive attributes.

The AS-path stores the list of AS numbers traversed for a network advertisement.

The next-hop attribute is the next-hop IP that will be used. Use caution with this attribute on multipoint NBMA networks. The next-hop-self keyword may be needed.

Weight is the first attribute considered in route selection. A higher weight is preferred. Weight is not advertised. It is only used to influence the path selection to an outbound network from a single router.

Local Preference works like the weight attribute for path selection. However, it affects the entire AS.

AS-Path Prepending influences how other autonomous systems reach your network. Remember to prepend your own AS number, otherwise the advertisement will be dropped. Prepend additional AS numbers onto the path that you are attempting to devalue.
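As a sketch of prepending with a route-map (the AS numbers and neighbor address are hypothetical):

```
! Devalue the path advertised to this neighbor by prepending
! our own AS (65001) twice more
route-map PREPEND permit 10
 set as-path prepend 65001 65001
!
router bgp 65001
 neighbor 192.0.2.1 remote-as 65002
 neighbor 192.0.2.1 route-map PREPEND out
```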

The Multi-Exit Discriminator, AKA “metric”, is used to influence how a neighboring AS reaches your network. Higher metric values are perceived as worse.

Communities allow route tagging. Once routes have been tagged, they may be filtered. Communities are 32-bit values, conventionally written as two decimal numbers separated by a colon. 2000:100 is an example of a community value. The first 16 bits represent the AS number. The last 16 bits represent the tag value.

There are four special community values.

No-export: will not be advertised beyond the confederation

Internet: equivalent to any

No-advertise: never advertise this route

Local-AS: will not be advertised outside of the AS (even with regards to confederations)
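A sketch of tagging outgoing routes with a community (the community value and neighbor address are hypothetical):

```
! Enable the AS:tag display format for community values
ip bgp-community new-format
!
route-map TAG-OUT permit 10
 set community 65001:100
!
router bgp 65001
 neighbor 192.0.2.1 remote-as 65002
 neighbor 192.0.2.1 route-map TAG-OUT out
 ! Communities are stripped unless explicitly sent to the neighbor
 neighbor 192.0.2.1 send-community
```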

Tuesday, February 2, 2010


Route Selection Using Policy Controls

Over the last ten years, the Internet has grown to an extremely large size. None of the interior protocols used by most companies (such as OSPF, RIP, and EIGRP) could successfully handle a network of this size. BGP is an Exterior Gateway Protocol (EGP) created to handle routing tables of enormous size. This introductory chapter gives us an overview of BGP and the foundation configuration commands.



BGP…

* …is a path vector routing protocol, a distance vector variant that records the full AS path
* …uses TCP as its layer four transport (TCP port 179)
* …does not send periodic updates; updates are incremental and triggered
* …uses periodic keepalives to verify TCP connectivity
* …is extremely scalable, but is slow to converge

BGP is rarely necessary if your company has a single connection to the Internet. It is most useful when you have multiple or redundant Internet connections, since it can then select the best exit point toward your destination.


Unlike most of the routing protocols you may have configured in the past, BGP does not dynamically discover other neighboring BGP routers. They must be statically configured. This is beneficial since the service provider keeps its BGP connections under tight security. Use the following syntax to configure a BGP neighbor relationship:

Router(config)# router bgp as_number

Router(config-router)# neighbor ip_address remote-as as_number

You can only configure a Cisco router for a single BGP autonomous system (AS) (you cannot enter multiple router bgp numbers). However, you can connect to a practically limitless number of neighboring autonomous systems.

Once you have formed your neighbor relationships (neighbors no longer show the idle or active states in the show ip bgp summary output), you are able to specify which internal networks you would like to advertise into the BGP routing process. Remember, your service provider will propagate the networks you advertise to the entire Internet.

There are two ways of advertising internal networks into the BGP routing process: the network or redistribute router configuration commands. The BGP network command operates differently than in any other routing protocol. Typically, the network statement tells a routing process the networks on which it should operate. For example, if you typed network 10.0.0.0 when using the RIP routing protocol, RIP would send advertisements out any interface that was using an address from the 10.0.0.0 network. In BGP, the statement network 10.0.0.0 causes BGP to advertise the 10.0.0.0 network to all neighbor relationships it has formed (provided the network is installed in the interior routing table).

The redistribute command, however, works similarly to other routing protocols. The command redistribute eigrp 100 causes all EIGRP routes from autonomous system 100 to enter the BGP routing table.
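A sketch showing both approaches side by side (the AS numbers and network are invented):

```
router bgp 65001
 ! Advertise 10.0.0.0/8 -- the exact route must already exist
 ! in the interior routing table
 network 10.0.0.0 mask 255.0.0.0
 ! Or pull all routes from EIGRP AS 100 into the BGP table
 redistribute eigrp 100
```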


OSPF Overview

OSPF was written to address the needs of large, scalable internetworks that RIP could not. The issues it addresses are:

Speed of convergence:
In large networks, RIP convergence can take several minutes. With OSPF, convergence is much faster as routing changes are flooded immediately and computed in parallel.

Support for VLSM:
RIP v1 does not support VLSM. OSPF does support VLSM.

Network reachability:
RIP networks cannot span more than 15 routers, while OSPF has virtually no reachability limitations.

Use of bandwidth: RIP broadcasts its routing table out each interface every 30 seconds. OSPF multicasts link-state updates and only sends the updates when there is a change in the network. OSPF does perform a full update every 30 minutes to ensure that all routers are synchronized.

Method for path selection:
RIP has no concept of network delays or link costs. It routes packets purely on hop count. OSPF uses a cost value (speed of connection) for its path selection.

OSPF relies on IP packets for delivery of routing information, using IP protocol number 89.

We learned the three basic OSPF topologies:

Broadcast Multi-access:
Networks supporting multiple attached routers, together with the capability of addressing a single physical message to all of the attached routers (broadcast). Ethernet would be an example.

Point-to-point: A network that joins a single pair of routers. A T1 dedicated serial line would be an example.

NBMA (Non-broadcast Multi-access): Networks supporting multiple routers, but having no broadcast capability. Frame-relay and X.25 are examples of NBMA networks.

In a broadcast multi-access topology such as Ethernet, Hello packets are sent periodically out each OSPF-enabled interface using IP multicast address 224.0.0.5. The information contained in the hello packet is:

Router ID: A 32-bit number (usually an IP address) that uniquely identifies a router in an AS (autonomous system).

Hello and Dead intervals: The default Hello interval is 10 seconds. The Dead interval is 4 times the hello interval or 40 seconds by default.

Neighbors: The neighbors with which bi-directional communication has been established.

Area-ID: To communicate, two routers must share a common segment and have their interfaces belong to the same area on that segment.

Router priority: An 8-bit number that indicates the priority of this router when selecting a Designated Router (DR) and Backup Designated Router (BDR).

DR and BDR IP addresses: The IP address of the current DR and BDR are listed.

Authentication password:
If authentication is enabled, the password is listed here.

Stub area flag: A stub area is a special area that has only one exit to the backbone.

DR/BDR Election

To elect a DR and BDR on a broadcast multi-access network, the routers view each other’s priority value during the hello packet exchange process, and then use the following conditions to determine which is elected:

The router with the highest priority value is the DR.

The router with the second highest priority value is the BDR.

The default priority is 1 on an OSPF interface. In case of a tie, the router ID is used. The router with the highest router ID then becomes the DR, and the router with the second highest router ID becomes the BDR. The router ID is the highest IP address on the router, unless a loopback is configured, in which case the highest loopback IP address will be the router ID.

Loopback interfaces are logical interfaces that never go down. In other words, they will always be in an up/up state. Because they can never go down, they are excellent references to use for router processes. Cisco is well aware of this and uses them in many ways. For instance, remember how OSPF chooses its router ID: the highest active IP address is used, unless a loopback interface is configured, in which case the highest loopback IP address is chosen as the router ID. BGP uses loopbacks in the very same way. Also, loopbacks are great for simulating networks connected to a router.

A router with the priority set to 0 is ineligible to become DR or BDR. If a router with a higher priority value gets added to the network, the DR and BDR do NOT change. The only time the DR or BDR changes is if one goes down. If the DR goes down, the BDR takes its place. If the BDR goes down, a new BDR is elected. Basically, the first two routers powered up on a segment will become the DR and BDR.
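The points above about the router ID and priority can be sketched in configuration (interface names and addresses are hypothetical):

```
! A loopback guarantees a stable, predictable router ID
interface Loopback0
 ip address 192.168.255.1 255.255.255.255
!
interface FastEthernet0/0
 ! Priority 0 makes this router ineligible to become DR or BDR
 ! on this segment
 ip ospf priority 0
```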

An adjacency is the relationship that exists between a router and its DR and BDR. Adjacent routers will have synchronized link-state databases. Once a DR and BDR are elected, any router added to the network will establish adjacencies only with the DR and BDR.

OSPF neighbor process

When a router is first powered on, it goes through several states, each with its own function.

The router (let's call it RouterA) begins in the DOWN state. It begins to send hello packets out its OSPF-enabled interfaces.

When routers receive this hello packet, they add it to their list of neighbors. This is the INIT state.

The neighbors that received the hello packet will reply with their own hello packet. The neighbor field will include RouterA as a neighbor.

When RouterA receives these packets, it adds all the routers that had its router ID in their hello packets to its own neighbor database. This is referred to as the TWO-WAY state.

The routers determine who the DR and BDR will be. After the DR and BDR election, the routers are considered to be in the EXSTART state (ready to start exchanging link-state information).

In the EXSTART state, the DR and BDR establish adjacencies with each router in the network. When the routers have exchanged one or more DBD (Database Description) packets, they are in the EXCHANGE state.

The routers exchange link-state information using LSR (Link State Requests) and LSU (Link State Update) packets. A router will issue a LSAck in response when a LSU is received. The process of sending LSRs is referred to as the LOADING state.

All routers add the new link-state entries into their link-state databases.

Once all LSRs have been satisfied for a given router, the adjacent routers are considered synchronized and in a FULL state. The routers must be in a full state before they can route traffic. At this point, the routers should all have identical link-state databases.

Routers in a point-to-point topology dynamically detect their neighbors by using the hello protocol. There is no election: adjacency is automatic as soon as the two routers can communicate. All OSPF packets are sent to multicast address 224.0.0.5. The default OSPF hello and dead intervals on non-NBMA topologies are 10 seconds and 40 seconds, respectively.

We also learned how OSPF operates in an NBMA topology. With NBMA networks, a single interface interconnects multiple sites. NBMA topologies support multiple routers but have no broadcast capability. Frame Relay, ATM, and X.25 are examples of NBMA networks. The default OSPF hello and dead intervals on NBMA topologies are 30 seconds and 120 seconds, respectively.
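An NBMA sketch over Frame Relay (interface, addresses, and process ID are invented) showing why neighbors must be defined by hand:

```
interface Serial0/0
 ip address 10.1.1.1 255.255.255.0
 encapsulation frame-relay
 ! Non-broadcast network type: hello 30 s, dead 120 s by default
 ip ospf network non-broadcast
!
router ospf 1
 network 10.1.1.0 0.0.0.255 area 0
 ! Neighbors are listed manually because hellos cannot be multicast;
 ! OSPF unicasts hellos to each configured neighbor instead
 neighbor 10.1.1.2
```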


Routing protocols fall into four categories along two axes: classful vs. classless, and distance vector vs. link state. The main difference between classful and classless routing protocols is their support for VLSM. Classful routing protocols do not support VLSM, because they do not include the subnet mask with route updates. Classless routing protocols, on the other hand, do support VLSM, because they carry subnet mask information within the route updates.

Here is a quick review of the Classful routing protocols.

RIP (Routing Information Protocol)

Uses hop count as its metric.
IP load balancing is enabled by default.
Sends its entire routing table every 30 seconds by default out all RIP enabled interfaces.
It is a Distance Vector routing protocol.
It is a classful routing protocol (route masks are not carried within the updates, consistency of masks is assumed).
RIP uses UDP port number 520, which makes it an Application layer protocol.
Hop count limit of 15, and 16 is infinity.

IGRP (Interior Gateway Routing Protocol):

Uses a composite metric made up of bandwidth, delay, reliability, load, and MTU (with bandwidth and delay used by default).
Uses the “fastest” path to the destination.
IP load balancing is enabled by default.
It is a classful routing protocol. Route masks are not carried within the updates, and consistency of masks is assumed.
IGRP uses IP protocol number 9; it runs directly over IP rather than over TCP or UDP.
Default hop count limit of 100, but configurable to 255.

We explained how "less is more" in the classful/classless routing distinction. With classful routing protocols, summary routes are automatically created at Class A, B, and C network boundaries, so all router interfaces in the network must have the same subnet mask. If they do not, routing failures may occur. As a result, classful routing protocols may not make full use of the available host address space.

Since no subnet mask is sent in routing updates with classful routing protocols, the router does one of the following to determine the network portion of the destination address:

If the routing update information regards the same network number as configured on the receiving interface, the router applies the subnet mask that is configured on the receiving interface.

If the routing update information pertains to a network address that is not the same as the one configured on the receiving interface, the router will apply the default (by class) subnet mask.
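The two rules above can be sketched in Python with the stdlib `ipaddress` module. The function names here are made up for illustration; the logic is just the "same major network uses the interface mask, otherwise use the classful default" rule described above.

```python
# Sketch of how a classful router infers the mask for a received route.
import ipaddress

def classful_default_mask(addr):
    """Return the default mask length implied by the address class."""
    first_octet = int(str(addr).split(".")[0])
    if first_octet < 128:
        return 8    # Class A
    if first_octet < 192:
        return 16   # Class B
    return 24       # Class C

def major_network(addr):
    """The classful (major) network containing an address."""
    masklen = classful_default_mask(addr)
    return ipaddress.ip_network(f"{addr}/{masklen}", strict=False)

def inferred_prefix(route_addr, intf_addr, intf_masklen):
    """Same major network as the receiving interface -> interface mask;
    otherwise -> the default (by class) mask."""
    if major_network(ipaddress.ip_address(route_addr)) == \
       major_network(ipaddress.ip_address(intf_addr)):
        masklen = intf_masklen
    else:
        masklen = classful_default_mask(ipaddress.ip_address(route_addr))
    return ipaddress.ip_network(f"{route_addr}/{masklen}", strict=False)

print(inferred_prefix("10.1.2.0", "10.1.1.1", 24))    # same major net -> /24
print(inferred_prefix("172.16.0.0", "10.1.1.1", 24))  # different -> classful /16
```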

Unlike classful routing protocols, classless routing protocols include the routing mask with the route advertisement. With classless routing protocols, summary routes can be manually controlled within the network. Classless routing protocols include OSPF, EIGRP, RIP v2, IS-IS, and BGP.

In a classless routing environment, router interfaces within the same network can have different subnet masks (VLSM can be used). This approach maximizes allocation of available host addresses.

Distance vector routing protocols are referred to as "routing by rumor": they simply relay learned routes out their interfaces on a periodic basis to directly connected neighbors. There are two routing algorithms that distance vector protocols use. The more common of the two is the Bellman-Ford (B-F) algorithm. EIGRP uses DUAL, the Diffusing Update Algorithm.
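The Bellman-Ford relaxation behind "routing by rumor" can be shown in a few lines: each round, a router improves its distance to a destination whenever a neighbor advertises a cheaper path. This is a minimal sketch, not how any router actually stores its table.

```python
# Minimal Bellman-Ford: repeatedly relax every link until distances settle.
def bellman_ford(links, source, nodes):
    """links: list of (u, v, cost) directed edges."""
    INF = float("inf")
    dist = {n: INF for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):        # at most |V| - 1 rounds to converge
        for u, v, cost in links:
            if dist[u] + cost < dist[v]:   # neighbor advertises a better path
                dist[v] = dist[u] + cost
    return dist

links = [("A", "B", 1), ("B", "C", 1), ("A", "C", 5)]
print(bellman_ford(links, "A", ["A", "B", "C"]))  # A reaches C via B at cost 2
```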

Link state routing is the alternative to distance vector. In a link-state environment, link-state announcements are propagated to all devices in the routing domain. Also, hierarchical design can limit the requirement to notify all devices.

Convergence time is the time it takes for all routers to agree on the network topology after a change such as:

New routes being added
Existing routes changing state

EIGRP Features and Advantages

EIGRP is an advanced distance vector protocol (Cisco also calls EIGRP a balanced hybrid protocol). EIGRP is guaranteed to be 100% loop free while maintaining a very rapid convergence time. EIGRP offers superior performance over IGRP because of the rapid convergence and the guarantee of a loop-free topology at all times. These improvements are the key to the name “Enhanced” IGRP.

Features and advantages of EIGRP include the following:

Incremental updates
Supports VLSM and discontiguous networks
Classless routing
Compatible with existing IGRP networks
Protocol independent (supports IP, IPX, and AppleTalk)
Uses multicast instead of broadcast
Utilizes link bandwidth and delay
Unequal cost path load balancing
More flexible than OSPF

EIGRP is not an application-layer protocol like RIP; instead, it runs directly over IP as protocol number 88. EIGRP uses the services of IP to deliver routing information.

EIGRP supports many different topologies, such as multi-access (Ethernet), point-to-point (HDLC), and NBMA (Frame Relay and ATM).

We also learned that EIGRP automatically summarizes at the classful boundary, and this can be turned off with the command no auto-summary.

EIGRP Operation

The EIGRP terminology is as follows:

Neighbor table:
This is the table of adjacent routers

Topology table:
This is where all learned routes are maintained

Routing table:
This is where the best (successor) routes are stored

Successor: The primary route to a network

Feasible Successor: The backup route to a network

Here are the five generic packet types used in EIGRP:

Hello: Multicasts used for neighbor discovery

Update: Multicasts used for updating neighbors of new routes

Queries: A router sends queries when it does not have a Feasible Successor

Replies: A packet sent in reply to a query

ACK: The ACK is used to acknowledge the above packets

We learned that hellos are sent every 5 seconds on broadcast media, point-to-point links, and multipoint circuits with bandwidth greater than T1. They are sent every 60 seconds on multipoint circuits with bandwidth of T1 or less.

The hold time is, by default, three times the hello interval.

EIGRP Metrics

EIGRP uses the same composite metric as IGRP does to pick the best path, except that it is scaled by 256. The default criteria used are:

Bandwidth: The smallest bandwidth between the source and destination

Cumulative interface delay along the path

Additional criteria that can be used are as follows:

Reliability: Worst reliability between source and destination based on keepalives

Load: Worst load on a link between source and destination based on bps

MTU (Maximum Transmission Unit): Smallest MTU in the path

EIGRP uses the following formula to calculate the composite metric:

CM = 256 x ([k1 x BWmin + (k2 x BWmin)/(256 - LOAD) + k3 x DELAYsum] x X)

Where the following is true:

BWmin = 10^7 / bandwidth_of_slowest_link (bandwidth in kbps)
DELAYsum = SUM(delays_along_the_path) (in tens of microseconds)
X = k5/(reliability + k4) if k5 <> 0; if k5 = 0, then X = 1

With the k values set at their defaults (k1 = k3 = 1, k2 = k4 = k5 = 0), you have:

CM = 256 x (BWmin + DELAYsum)

NOTE: When you compute this by hand, you will get a slightly different result than the router does; this is because of how the router handles its arithmetic internally.
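The default-metric calculation can be sketched as a short Python function. The inputs are the link bandwidths in kbps and the interface delays in microseconds (IOS stores delay in tens of microseconds, hence the division by 10); the example values are the classic two-T1-hop case.

```python
# Default EIGRP composite metric: CM = 256 * (BWmin + DELAYsum),
# assuming k1 = k3 = 1 and k2 = k4 = k5 = 0.

def eigrp_default_metric(bandwidths_kbps, delays_usec):
    bw_min = min(bandwidths_kbps)           # slowest link on the path
    scaled_bw = 10**7 // bw_min             # 10^7 / slowest bandwidth (kbps)
    scaled_delay = sum(delays_usec) // 10   # cumulative delay in tens of usec
    return 256 * (scaled_bw + scaled_delay)

# Two serial T1 hops: 1544 kbps each, 20000 usec delay each
print(eigrp_default_metric([1544, 1544], [20000, 20000]))  # -> 2681856
```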
AD (Advertised Distance) is the cost between the next-hop router and the destination.

FD (Feasible Distance) is the cost to reach the destination from the local router.

The successor (lowest cost route) is the best route to a destination.

The FS (Feasible Successor) is a valid backup route in the event the successor route to the destination fails.
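A route qualifies as a feasible successor only when its advertised distance is less than the feasible distance of the current successor (the feasibility condition, which guarantees the backup path is loop free). A minimal sketch, with made-up neighbor names and distances:

```python
# EIGRP feasibility condition: a candidate is an FS only if its AD is
# strictly less than the FD of the current successor route.

def feasible_successors(successor_fd, candidates):
    """candidates: list of (neighbor, advertised_distance)."""
    return [n for (n, ad) in candidates if ad < successor_fd]

# Successor reaches the destination with FD 2816.
# Neighbor "B" advertises AD 2304 -> feasible; "C" advertises AD 3072 -> not.
print(feasible_successors(2816, [("B", 2304), ("C", 3072)]))  # -> ['B']
```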

EIGRP utilizes split horizon, and you can turn split horizon off for EIGRP on NBMA interfaces where it would block legitimate updates.

EIGRP Configuration

The commands to configure EIGRP are similar to those used for IGRP. We showed you the commands needed to configure a router for EIGRP:

Router(config)# router eigrp <autonomous-system-number>

Router(config-router)# network <network-number> (covering the interfaces that will participate in EIGRP)

If you are using serial links, remember that they default to a bandwidth of 1.544 Mbps (T1 speed). You should manually change the bandwidth value on lower-speed links (56K, 128K, 384K, etc.) to properly reflect the clock rate of the interface to the EIGRP routing process:

Router(config-if)# bandwidth <kilobits>

EIGRP will automatically summarize at the classful network boundary. To turn this feature off, issue the following command:

Router(config-router)# no auto-summary

To manually create a summary on an interface issue the following:

Router(config-if)# ip summary-address eigrp <as-number> <address> <mask>

Remember that EIGRP will perform equal-cost load balancing across 4 equal-cost paths by default, but you can configure it to load balance across a maximum of 6.

To perform unequal-cost load balancing, you must use the variance command:

Router(config-router)# variance <multiplier>
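The variance rule can be sketched as follows: a route is installed for unequal-cost load balancing if it satisfies the feasibility condition (its AD is less than the successor's FD) and its metric is less than variance times the successor's metric. The neighbor names and numbers below are made up for illustration.

```python
# Variance-based unequal-cost load balancing (sketch).
def load_balanced_paths(variance, successor_fd, candidates):
    """candidates: list of (neighbor, advertised_distance, metric).
    successor_fd is both the FD and the metric of the best route."""
    return [n for (n, ad, metric) in candidates
            if ad < successor_fd and metric < variance * successor_fd]

# Successor metric = 1000. With variance 2, a feasible route with metric
# 1800 is also used, but one with metric 2500 is not.
print(load_balanced_paths(2, 1000, [("A", 900, 1800), ("B", 950, 2500)]))
```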

Verifying EIGRP

Here's a review of EIGRP and the commands used to verify and test your EIGRP configuration. Those commands are listed below.

Show commands:

show ip eigrp neighbors: Displays EIGRP neighbor table

show ip eigrp topology: Displays the topology table

show ip route eigrp: Displays the EIGRP routes in the routing table

show ip protocols: Displays current routing protocols running

show ip eigrp traffic: Displays information about EIGRP packets

show ip eigrp events: Displays information about EIGRP events

Debug commands:

debug eigrp packet: Shows EIGRP packets as they are sent and received

debug eigrp neighbor: Shows the EIGRP neighbor process

debug eigrp route: Shows EIGRP changes made to the routing table

debug eigrp summary: Shows a summary of EIGRP activity

debug eigrp events: Shows EIGRP events as they happen


Network Address Translation (NAT) allows a router to translate source and destination IP addresses. NAT can also take into account the port numbers used in communication in a production network. When port numbers need to be considered, a route map can be used to identify the source addresses; when a route map is used in this manner, the router performing NAT stores complete translation information, including port numbers.


IPv6 addresses consist of 128 bits, allowing for a much greater address space. IPv6 addresses can be shortened in two manners.

Leading 0s can be dropped within each 16-bit block (4 hexadecimal digits)
A "::" can be used to represent consecutive blocks of 0s spanning multiple fields, but it can only be used once! It can appear at the beginning, end, or middle of the address.
An example of IPv6 shortening is the following:

2001:0db8:0000:0000:0000:0000:0000:0001

The above address can be abbreviated as follows:

2001:db8::1
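Both shortening rules can be verified with Python's stdlib `ipaddress` module, which always prints the canonical compressed form:

```python
# ipaddress compresses IPv6 text form: drops leading zeros in each block
# and replaces the longest run of zero blocks with "::".
import ipaddress

addr = ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr)           # compressed form: 2001:db8::1
print(addr.exploded)  # full uncompressed form
```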
RIPng, OSPF, BGP4+, and Integrated IS-IS are capable of serving as IPv6 routing protocols.

When connecting IPv6 and IPv4 networks, there are a few things to consider. Dual-stack hosts allow for connectivity to both types of network. If traffic of one version needs to cross a network of the other version, say IPv4 traffic over an IPv6 network, the traffic can either be tunneled across the foreign network or translated.

Route Summarization

Route summarization, or route aggregation, is a method of representing a series of network numbers in a single summary address.

To implement route summarization, certain requirements are needed:

Multiple IP addresses must have the same highest-order bits

Routing decisions are made based on the entire address

Routing protocols must carry the prefix (subnet mask) length
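The "same highest-order bits" requirement can be demonstrated with the stdlib `ipaddress` module: four contiguous /24s that share their first 22 bits collapse into a single /22 summary. The address block here is an arbitrary example.

```python
# Collapse a contiguous block of /24s into one summary route.
import ipaddress

nets = [ipaddress.ip_network(f"172.16.{i}.0/24") for i in range(12, 16)]
summary = list(ipaddress.collapse_addresses(nets))
print(summary)  # the four /24s share 22 high-order bits -> one /22
```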

Discontiguous subnets are subnets of the same major network that are separated by a different major network address.

CIDR is a mechanism developed to alleviate exhaustion of addresses and reduce routing table sizes. With CIDR, blocks of Class C addresses are assigned to ISPs, which in turn assign subsets of address space to organizations. These blocks are then summarized in routing tables.

Planning of an IP address space requires an examination of the corporate structure. Improper addressing can result in an unscalable network design.

A scalable IP addressing scheme allows for route aggregation. Route aggregation, also known as route summarization, allows many routes to be represented with a single advertisement. This reduces routing updates and allows for greater scalability with our routing protocols.

Consider avoiding the use of the "zero subnet" to prevent problems caused by devices that do not support it. If subnet zero is used, the "ip subnet-zero" command will correctly configure a router for this practice.

Fixed-Length Subnet Masking, or FLSM, uses a constant mask everywhere in the network.

Variable-Length Subnet Masking, or VLSM, uses different mask lengths, tailoring subnets to the different sizes of networks.
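The VLSM idea can be illustrated with the stdlib `ipaddress` module: one /24 carved into subnets of different sizes (a /25, a /26, and two /27s). The block used is an arbitrary private-range example.

```python
# VLSM sketch: split one /24 into unequal-size subnets.
import ipaddress

block = ipaddress.ip_network("192.168.1.0/24")
half, rest = list(block.subnets(new_prefix=25))    # two /25s
quarter, tail = list(rest.subnets(new_prefix=26))  # split the top /25 into /26s
print(half, quarter, *tail.subnets(new_prefix=27)) # /25, /26, and two /27s
```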