What technique can overwhelm the content Addressable Memory tables on layer 2 switches

Layer 2 Addressing

Layer 2 addresses are also called MAC addresses, physical addresses, or burned-in addresses (BIA). These are assigned to network cards or device interfaces when they are manufactured.

MAC addresses (Figure 1.15) are 48 bits long. The first 24 bits comprise the Organizationally Unique Identifier (OUI), a code that identifies the vendor of the device. Within the first octet, the least significant bit (the I/G bit) identifies a unicast MAC address (bit value of 0) or a multicast address (bit value of 1), and the second least significant bit (the U/L bit) identifies whether the address is universally (bit value of 0) or locally (bit value of 1) assigned. The last 24 bits form a unique value assigned to a specific interface, allowing each network interface to be identified uniquely via its MAC address.


Figure 1.15 – MAC Address Structure
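The OUI and the two flag bits described above can be read directly from the first octet of the address. A minimal sketch in Python (the sample addresses are arbitrary):

```python
# Sketch: decoding the OUI, I/G (unicast/multicast), and U/L
# (universal/local) bits of a MAC address. Sample addresses are arbitrary.
def describe_mac(mac: str) -> dict:
    octets = [int(part, 16) for part in mac.split(":")]
    first = octets[0]
    return {
        "oui": ":".join(f"{o:02X}" for o in octets[:3]),
        "multicast": bool(first & 0b01),             # I/G: least significant bit
        "locally_administered": bool(first & 0b10),  # U/L: second least significant bit
    }

print(describe_mac("01:00:5E:00:00:01"))  # a multicast address
print(describe_mac("00:1A:2B:3C:4D:5E"))  # a universally administered unicast address
```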

Switching

Switches are network devices that separate collision domains and process data at high rates because the switching function is implemented in hardware using Application-Specific Integrated Circuits (ASICs). Switches segment networks to provide more bandwidth per user by reducing the number of devices that share the same bandwidth. In addition, they forward traffic only on the interfaces that need to receive it; for known unicast traffic, a switch forwards the frame to a single port rather than to all ports.

When a frame enters an interface, the switch adds the source MAC address and source port to its bridging table and then examines the destination MAC address. If the frame is a broadcast, multicast, or unknown unicast frame, the switch floods it to all ports except the source port. If the source and destination addresses are on the same interface, the frame is discarded. If the destination address is known (i.e., the switch has a valid entry in the bridging table), the switch forwards the frame out the corresponding interface.
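These forwarding rules can be sketched as a small simulation; the port names and MAC addresses below are illustrative:

```python
# Sketch of the learn-then-forward decision a switch makes per frame.
BROADCAST = "FFFF.FFFF.FFFF"

class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}  # learned MAC address -> port

    def receive(self, in_port, src, dst):
        """Return the set of ports the frame is sent out of."""
        self.table[src] = in_port                 # learn the source MAC
        out_port = self.table.get(dst)
        if dst == BROADCAST or out_port is None:  # broadcast or unknown unicast: flood
            return self.ports - {in_port}
        if out_port == in_port:                   # destination on the same segment: discard
            return set()
        return {out_port}                         # known unicast: forward to one port

sw = LearningSwitch(["Fa0/1", "Fa0/2", "Fa0/3"])
print(sw.receive("Fa0/1", "AAAA.AAAA.AAAA", BROADCAST))         # floods to Fa0/2 and Fa0/3
print(sw.receive("Fa0/2", "BBBB.BBBB.BBBB", "AAAA.AAAA.AAAA"))  # known unicast: Fa0/1 only
```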

The switching operation can be summarized by Figure 1.16 below:


Figure 1.16 – Switching Operation

When the switch is first turned on, the bridging table contains no entries. The bridging table (also called the switching table, the MAC address table, or the CAM [Content Addressable Memory] table) is an internal data structure that records MAC-address-to-port pairs as the switch receives frames from devices. Switches learn source MAC addresses in order to send data to the appropriate destination segments.

In addition to flooding unknown unicast frames, switches also flood two other frame types: broadcast and multicast. Various multimedia applications generate multicast or broadcast traffic that propagates throughout a switched network (i.e., broadcast domain).

When a switch learns a source MAC address, it records the time of entry. Every time the switch receives a frame from that source, it updates the timestamp. If the switch does not hear from that source before a predefined aging time expires, the entry is removed from the bridging table. The default aging time on Cisco access layer switches is 5 minutes (300 seconds). This behavior is exemplified in the MAC address table shown below, where the sending workstation has the MAC address AAAA.AAAA.AAAA:

Ref. Time | Action                    | Port  | MAC Address    | Age (sec.)
00:00     | Host A sends frame #1     | Fa0/1 | AAAA.AAAA.AAAA | 0
00:30     | Age increases             | Fa0/1 | AAAA.AAAA.AAAA | 30
01:15     | Host A sends frame #2     | Fa0/1 | AAAA.AAAA.AAAA | 0
06:14     | Age increases             | Fa0/1 | AAAA.AAAA.AAAA | 299
06:16     | Entry aged out (deleted)  |       |                |
06:30     | Host A sends frame #3     | Fa0/1 | AAAA.AAAA.AAAA | 0
06:45     | Age increases             | Fa0/1 | AAAA.AAAA.AAAA | 15
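The timeline above can be reproduced with a toy aging loop; the 300-second aging time matches the 5-minute default, and times are simulated seconds rather than a real clock:

```python
# Sketch of MAC address aging: an entry is refreshed by new frames and
# removed once it has not been seen within the aging time.
AGING_TIME = 300  # seconds (the 5-minute default)

table = {}  # MAC address -> (port, time last seen)

def learn(mac, port, now):
    table[mac] = (port, now)          # a new frame refreshes the timestamp

def expire(now):
    for mac, (port, seen) in list(table.items()):
        if now - seen > AGING_TIME:
            del table[mac]            # entry aged out

learn("AAAA.AAAA.AAAA", "Fa0/1", 0)    # 00:00 - frame #1
learn("AAAA.AAAA.AAAA", "Fa0/1", 75)   # 01:15 - frame #2 resets the age to 0
expire(374)                            # 06:14 - age is 299, entry kept
expire(376)                            # 06:16 - age exceeds 300, entry deleted
print(table)  # {}
```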

MAC address table entries are removed when the aging time expires because switches have a finite amount of memory, which limits the number of addresses a switch can remember in its bridging table. If the MAC address table is full, new source addresses cannot be learned, and frames destined for those stations are flooded to all ports until an opening in the bridging table allows the switch to learn them. Entries become available whenever the aging timer expires for an address. The aging timer limits flooding by keeping the most active stations in the table. If the total number of network devices is lower than the bridging table capacity, the aging timer can be increased, which causes the switch to remember stations longer and reduces flooding.

Note:    The flooding that occurs when the MAC address table is full is a potential security risk: an attacker can deliberately overwhelm the bridging table with bogus source addresses (a technique known as MAC flooding). If this happens, all ports (including the attacker's port) receive frames that are not destined for them.
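A toy illustration of the note above, using a deliberately tiny table capacity (real CAM tables hold thousands of entries):

```python
# Once the table is full, new addresses cannot be learned, so traffic
# involving those stations is flooded -- the behavior MAC flooding exploits.
CAPACITY = 4
cam = {}

def learn_source(src, in_port):
    """Try to learn a source MAC; return True if it fits in the table."""
    if src in cam or len(cam) < CAPACITY:
        cam[src] = in_port
        return True
    return False  # table full: this station stays unknown (traffic to it floods)

# Attacker-style burst of bogus source addresses arriving on one port:
learned = [learn_source(f"0000.0000.{i:04X}", "Fa0/3") for i in range(10)]
print(sum(learned), len(cam))  # only 4 of 10 learned; table pinned at capacity
```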

Spanning Tree Protocol

The Spanning Tree Protocol (STP), defined by IEEE 802.1D, is a loop-prevention protocol that allows switches to communicate with each other in order to discover physical loops in a network. If a loop is found, the STP specifies an algorithm that switches can use to create a loop-free logical topology. This algorithm creates a tree structure of loop-free leaves and branches that spans across the Layer 2 topology.

Loops occur most often as a result of multiple connections between switches, which provides redundancy, as shown below in Figure 1.17.


Figure 1.17 – Layer 2 Loop Scenario

Referring to the figure above, if none of the switches run STP, the following process takes place: Host A sends a frame to the broadcast MAC address (FF-FF-FF-FF-FF-FF) and the frame arrives at both Switch 1 and Switch 2. When Switch 1 receives the frame on its Fa0/1 interface, it will flood the frame to the Fa0/2 port, where the frame will reach Host B and the Switch 2 Fa0/2 interface. Switch 2 will then flood the frame to its Fa0/1 port and Switch 1 will receive the same frame it transmitted. By following the same set of rules, Switch 1 will re-transmit the frame to its Fa0/2 interface, resulting in a broadcast loop. A broadcast loop can also occur in the opposite direction (the frame received by Switch 2 Fa0/1 will be flooded to the Fa0/2 interface, which will be received by Switch 1).

Bridging loops are more dangerous than routing loops because, as mentioned before, a Layer 3 packet contains a special field called TTL (Time to Live) that decrements as it passes through Layer 3 devices. In a routing loop, the TTL field will reach 0 and the packet will be discarded. A Layer 2 frame that is looping will stop only when a switch interface is shut down. The negative effects of Layer 2 loops grow as the network complexity (i.e., the number of switches) grows, because as the frame is flooded out to multiple switch ports, the total number of frames multiplies at an exponential rate.

Broadcast storms also have a major negative impact on the network hosts, because the broadcasts must be processed by the CPU in all devices on the segment. In Figure 1.17, both Host A and Host B will try to process all the frames they receive. This will eventually deplete their resources unless the frames are removed from the network.

STP calculations are based on the following two concepts:

  • Bridge ID
  • Path Cost

A Bridge ID (BID) is an 8-byte field composed of two subfields: the high-order Bridge Priority (2 bytes) and the low-order MAC address (6 bytes). The MAC address is expressed in hexadecimal format, while the Bridge Priority is a 2-byte decimal value ranging from 0 to 65535, with a default of 32768.

Switches use the concept of cost to evaluate how close they are to other switches. The original 802.1D standard defined cost as 1000 divided by the bandwidth of the link in Mbps. For example, a 10 Mbps link was assigned a cost of 100 and a FastEthernet link a cost of 10. Lower STP costs are better. However, as higher-bandwidth connections gained popularity, a problem emerged: cost is stored as an integer value only. Using a cost of 1 for all links of 1 Gbps and above would reduce the accuracy of STP cost calculations, so the IEEE instead redefined the cost values on a non-linear scale, as illustrated below:

Bandwidth | STP Cost
10 Mbps   | 100
45 Mbps   | 39
100 Mbps  | 19
622 Mbps  | 6
1 Gbps    | 4
10 Gbps   | 2
These values were carefully chosen to allow the old and new schemes to interoperate for the link speeds in common use today.

To create a loop-free logical topology, STP uses a four-step decision process, as follows:

  1. Lowest Root BID
  2. Lowest Path Cost to Root Bridge
  3. Lowest Sender BID
  4. Lowest Port ID

Switches exchange STP information using special frames called Bridge Protocol Data Units (BPDUs). Switches evaluate all the BPDUs received on a port and store the best BPDU seen on every port. Every BPDU received on a port is checked against the four-step sequence to see whether it is more attractive than the existing BPDU saved for that port.
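Because each step only matters when all previous steps tie, the four-step comparison maps naturally onto lexicographic tuple ordering. A sketch with illustrative values:

```python
# A BPDU is "more attractive" if its (Root BID, Root Path Cost, Sender BID,
# Port ID) tuple is lower; Python compares tuples field by field.
def bpdu_key(root_bid, root_path_cost, sender_bid, port_id):
    return (root_bid, root_path_cost, sender_bid, port_id)

stored   = bpdu_key(0x8000AAAAAAAAAAAA, 38, 0x8000CCCCCCCCCCCC, 2)
received = bpdu_key(0x8000AAAAAAAAAAAA, 19, 0x8000BBBBBBBBBBBB, 1)

if received < stored:   # same Root BID, but a lower Root Path Cost wins
    stored = received

print(stored[1])  # 19
```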

When a switch first becomes active, all of its ports send BPDUs every 2 seconds. If a port hears a BPDU from another switch that is more attractive than the BPDU it has been sending, the port stops sending BPDUs. If the more attractive BPDU stops arriving for a period of 20 seconds (by default), the local port will resume sending its own BPDUs.

The two types of BPDUs are as follows:

  • Configuration BPDUs, which are sent by the Root Bridge and flow across active paths
  • Topology Change Notification (TCN) BPDUs, which are sent to announce a topology change

The initial STP convergence process is accomplished in the following three steps:

  1. Root Bridge election
  2. Root Ports election
  3. Designated Ports election

When a network is powered on, all the switches announce their own BPDUs. After they analyze the received BPDUs, a single Root Bridge is elected. All switches except the Root Bridge calculate a set of Root Ports and Designated Ports to build a loop-free topology. After the network converges, BPDUs flow from the Root Bridge to every segment in the network. Additional changes in the network are handled using TCN BPDUs.

The first step in the convergence process is electing a Root Bridge. The switches do this by analyzing the received BPDUs and looking for the switch with the lowest BID, as shown below in Figure 1.18:

What technique can overwhelm the content Addressable Memory tables on layer 2 switches

Figure 1.18 – STP Convergence

Referring to the figure above, Switch 1 has the lowest BID of 32768.AA.AA.AA.AA.AA.AA and will be elected as the Root Bridge because it has the lowest MAC address, considering they all have the same Bridge Priority (i.e., the default of 32768).

The switches learn about Switch 1’s election as the Root Bridge by exchanging BPDUs at a default interval of 2 seconds. BPDUs contain a series of fields, including the following:

  • Root BID – identifies the Root Bridge
  • Root Path Cost – information about the distance to the Root Bridge
  • Sender BID – identifies the bridge that sent the specific BPDU
  • Port ID – identifies the port on the sending bridge that placed the BPDU on the link

Only the Root BID and Sender BID fields are considered in the Root Bridge election process. When a switch first boots, it places its BID in both the Root BID and the Sender BID fields. For example, suppose Switch 2 boots first and starts sending BPDUs announcing itself as the Root Bridge every 2 seconds. After some time, Switch 3 boots and announces itself as the Root Bridge. When Switch 2 receives these BPDUs, it discards them because its own BID has a lower value. As soon as Switch 3 receives a BPDU generated by Switch 2, it starts sending BPDUs that list Switch 2 as the Root BID (instead of itself) and Switch 3 as the Sender BID. The two switches now agree that Switch 2 is the Root Bridge. Switch 1 boots a few minutes later and initially assumes that it is the Root Bridge, advertising this fact in the BPDUs it generates. As soon as these BPDUs arrive at Switch 2 and Switch 3, those switches give up the Root Bridge position in favor of Switch 1. All three switches are now sending BPDUs that announce Switch 1 as the Root Bridge.

The next step is electing the Root Ports. A Root Port on a switch is the port that is closest to the Root Bridge. Every switch except the Root Bridge must elect one Root Port. As mentioned before, switches use the concept of cost to determine how close they are to other switches. The Root Path Cost is the cumulative cost of the path to the Root Bridge.

When Switch 1 sends BPDUs, they contain a Root Path Cost of 0. When Switch 2 receives them, it adds the Path Cost of its Fa0/1 interface (19 for a FastEthernet link) to that value, and advertises the new Root Path Cost of 19 in the BPDUs it generates on its Fa0/2 interface. When Switch 3 receives the BPDUs from Switch 2, it increases the Root Path Cost by adding 19, the cost of its Fa0/2 interface, for a total of 38. At the same time, Switch 3 also receives BPDUs directly from the Root Bridge on Fa0/1. These arrive with a Root Path Cost of 0, and Switch 3 increases the cost to 19 because Fa0/1 is a FastEthernet interface.

At this point, Switch 3 must select a single Root Port based on the two different BPDUs it has received: one with a Root Path Cost of 38 from Switch 2 and one with a Root Path Cost of 19 from Switch 1. The lowest cost wins, so Fa0/1 becomes the Root Port, and Switch 3 begins advertising a Root Path Cost of 19 to downstream switches. Switch 2 goes through the same set of calculations and elects its Fa0/1 interface as the Root Port. The Root Port selection on Switch 3, based on the lowest Root Path Cost received in the BPDUs, is illustrated below:

BPDUs Received on the Port | Root Path Cost
Fa0/1 (winner)             | 19
Fa0/2                      | 38

Note:    The Path Cost is a value assigned to each port and it is added to BPDUs received on that port in order to calculate the Root Path Cost. The Root Path Cost represents the cumulative cost to the Root Bridge and it is calculated by adding the receiving port’s Path Cost to the value contained in the BPDU.
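The note above can be traced for Figure 1.18’s topology, where every link is FastEthernet (Path Cost 19):

```python
# Sketch: Root Path Cost = cost carried in the BPDU + receiving port's Path Cost.
FAST_ETHERNET_COST = 19

def root_path_cost(cost_in_bpdu, port_cost=FAST_ETHERNET_COST):
    return cost_in_bpdu + port_cost

# Switch 3 hears two BPDUs:
via_switch1 = root_path_cost(0)                  # straight from the Root Bridge
via_switch2 = root_path_cost(root_path_cost(0))  # relayed once by Switch 2

print(via_switch1, via_switch2, min(via_switch1, via_switch2))  # 19 38 19
```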

The next step in the STP convergence process is electing Designated Ports. Each segment in a Layer 2 topology has one Designated Port. This port sends and receives traffic to and from that segment and the Root Bridge. Only one port handles traffic for each link, guaranteeing a loop-free topology. The bridge that contains the Designated Port for a certain segment is considered the Designated Switch on that segment.

Analyzing the link between Switch 1 and Switch 2, Switch 1 Fa0/1 has a Root Path Cost of 0 (being the Root Bridge) and Switch 2 Fa0/1 has a Root Path Cost of 19. Switch 1 Fa0/1 becomes the Designated Port for that link because of its lower Root Path Cost. A similar election takes place for the link between Switch 1 and Switch 3. Switch 1 Fa0/2 has a Root Path Cost of 0 and Switch 3 Fa0/1 has a Root Path Cost of 19, so Switch 1 Fa0/2 becomes the Designated Port.

Note:    Every active port on the Root Bridge becomes a Designated Port.

When considering the link between Switch 2 and Switch 3, both Switch 2 Fa0/2 and Switch 3 Fa0/2 ports have a Root Path Cost of 19, resulting in a tie. To break the tie and declare a winner, STP uses the four-step decision process described below:

  1. Lowest Root BID: All three bridges are in agreement that Switch 1 is the Root Bridge; advance to the next step.
  2. Lowest Root Path Cost: Both Switch 2 and Switch 3 have a cost of 19; advance to the next step.
  3. Lowest Sender BID: Switch 2’s BID (32768.BB.BB.BB.BB.BB.BB) is lower than Switch 3’s BID (32768.CC.CC.CC.CC.CC.CC), so Switch 2 Fa0/2 becomes the Designated Port and Switch 3 Fa0/2 is considered a non-Designated Port; end of the decision process.
  4. Lowest Port ID: N/A.

In a loop-free topology, Root and Designated Ports forward traffic and non-Designated Ports block traffic. The five STP states are listed below:

State      | Purpose
Blocking   | Receives BPDUs only
Listening  | Builds “active” topology
Learning   | Builds bridging table
Forwarding | Sends/receives user data
Disabled   | Administratively down

  1. After initialization, a port starts in the Blocking state, where it listens for BPDUs. A port transitions into the Listening state after the booting process, when the switch thinks it is the Root Bridge, or after not receiving BPDUs for a certain period of time.
  2. In the Listening state, no user data passes through the port; it is just sending and receiving BPDUs in order to determine the Layer 2 topology. This is the phase in which the election of the Root Bridge, Root Ports, and Designated Ports occur.
  3. Ports that remain Designated or Root Ports after 15 seconds progress to the Learning state, and during another 15-second period, the bridge builds its MAC address table but does not forward user data.
  4. After the 15-second period, the port enters the Forwarding state, in which it sends and receives data frames.
  5. The Disabled state means the port is administratively shut down.

The STP process is controlled by the three timers listed below:

Timer         | Purpose                                        | Default Value
Hello Time    | Time between BPDUs sent by the Root Bridge     | 2 seconds
Forward Delay | Duration of the Listening and Learning states  | 15 seconds
Max Age       | Duration for which a stored BPDU remains valid | 20 seconds

A modern variation of STP is Rapid STP (RSTP), defined by IEEE 802.1w. The main advantage of RSTP is its fast convergence: neighboring switches can communicate with each other and determine the state of their links in much less time. RSTP ports have the following roles:

  • Root
  • Designated
  • Alternate
  • Backup
  • Disabled

RSTP port states are also different: the Disabled, Blocking, and Listening states are merged into a single Discarding state. Although some important differences exist between RSTP and STP, they are compatible and can work together in the same network.

Virtual LANs

Virtual LANs (VLANs) define broadcast domains in a Layer 2 network. A VLAN is an administratively defined subset of switch ports that are in the same broadcast domain, the area through which a broadcast frame propagates in a network.

As mentioned before, routers separate broadcast domains, preventing broadcasts from propagating through router interfaces. Layer 2 switches, on the other hand, create broadcast domains through switch configuration. By defining broadcast domains on the switch, you can configure switch ports to forward a received broadcast frame only to other specified ports.

Broadcast domains cannot be observed by analyzing the physical topology of the network because a VLAN is a logical concept based on switch configuration. Another way of thinking about VLANs is as virtual switches defined within one physical switch. Each new virtual switch creates a new broadcast domain (VLAN). Since traffic from one VLAN cannot pass directly to another VLAN within a switch, a router must be used to route packets between VLANs. Ports can be grouped into different VLANs on a single switch or across multiple interconnected switches, but broadcast frames sent by a device in one VLAN will reach only the devices in that specific VLAN.

VLANs represent a group of devices that participate in the same Layer 2 domain and can communicate without needing to pass through a router, meaning they share the same broadcast domain. Best design practices suggest a one-to-one relationship between VLANs and IP subnets. Devices in a single VLAN are typically also in the same IP subnet.


Figure 1.19 – Virtual LANs

Figure 1.19 above presents two VLANs, each associated with an IP subnet. VLAN 10 contains Router 1, Host A, and Router 2 configured on Switch 1 and Switch 3 and is allocated the 10.10.10.0/24 IP subnet. VLAN 20 contains Host B, Host C, and Host D configured on Switch 2 and Switch 3 and is allocated the 10.10.20.0/24 IP subnet.

Vendors initially took individual approaches to creating VLANs, so multi-vendor VLAN deployments had to be handled carefully to avoid interoperability issues. For example, Cisco developed the proprietary Inter-Switch Link (ISL) protocol, which encapsulates the original frame by adding a new 26-byte header and a new trailer, as shown in Figure 1.20 below. To solve these incompatibility problems, the IEEE developed 802.1Q, a vendor-independent method of creating interoperable VLANs.


Figure 1.20 – ISL Marking Method

802.1Q is often referred to as frame tagging because it inserts a 32-bit header, called a tag, into the original frame after the Source Address field, without modifying other fields. The first 2 bytes of the tag hold a registered Ethernet type value of 0x8100, indicating that the frame contains an 802.1Q header. The next 3 bits are the 802.1p User Priority field, used as Class of Service (CoS) bits in Quality of Service (QoS) techniques, followed by a 1-bit Canonical Format Indicator and the 12-bit VLAN ID. Twelve bits allow a total of 4,096 possible VLAN IDs when using 802.1Q. The 802.1Q marking method is illustrated in Figure 1.21 below:


Figure 1.21 – 802.1Q Marking Method
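The tag layout described above can be packed and parsed with a few lines of Python; the field values are illustrative:

```python
import struct

# Sketch: the 4-byte 802.1Q tag is the TPID (0x8100) followed by the TCI:
# 3 bits of priority, a 1-bit CFI, and a 12-bit VLAN ID.
TPID = 0x8100

def build_tag(priority, cfi, vlan_id):
    tci = (priority << 13) | (cfi << 12) | (vlan_id & 0x0FFF)
    return struct.pack("!HH", TPID, tci)  # network byte order

def parse_tag(tag):
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == TPID, "not an 802.1Q tag"
    return {"priority": tci >> 13, "cfi": (tci >> 12) & 1, "vlan_id": tci & 0x0FFF}

tag = build_tag(priority=5, cfi=0, vlan_id=20)
print(parse_tag(tag))  # {'priority': 5, 'cfi': 0, 'vlan_id': 20}
```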

A port that carries data from multiple VLANs is called a trunk. It can use either the ISL or the 802.1Q protocols. A special concept in 802.1Q is the native VLAN. This is a particular type of VLAN in which frames are not tagged. The native VLAN’s purpose is to allow a switch to use 802.1Q trunking (i.e., multiple VLANs on a single link) on an interface; however, if the other device does not support trunking, the traffic for the native VLAN can still be sent over the link. Cisco uses VLAN 1 as its default native VLAN.

Among the reasons for using VLANs, the most important include the following:

  • Network security
  • Broadcast distribution
  • Bandwidth utilization

An important benefit of using VLANs is network security. By creating VLANs within switched network devices, a logical level of protection is created. This can be useful, for example, in situations in which a group of hosts must not receive data destined for another group of hosts (e.g., departments in a large company, as depicted in Figure 1.22 below).


Figure 1.22 – Departmental VLAN Segmentation

VLANs can mitigate situations in which broadcasts represent a problem in a network. Creating additional VLANs and attaching fewer devices to each isolates broadcasts within smaller areas. The effectiveness of this action depends on the source of the broadcast. If broadcast frames come from a localized server, that server might need to be isolated in another domain. If broadcasts come from workstations, creating multiple domains helps reduce the number of broadcasts in each domain.

In Figure 1.22 above, each department’s VLAN has 100 Mbps of bandwidth shared between the workstations in that department, creating a standalone broadcast domain. Users attached to the same network segment share the bandwidth of that segment. As the number of users attached to the segment grows, the average bandwidth available to each user decreases, which affects their applications. Implementing VLANs can therefore offer more bandwidth per user.

Which Layer 2 security techniques are implemented on switches?

Deploy the Port Security feature to prevent unauthorized access through switch ports, and use the Private VLAN feature where applicable to segregate network traffic at Layer 2.

What are Layer 2 attacks?

Layer 2 attacks exploit security weaknesses of the data-link layer. With a significant percentage of network attacks originating inside the corporate firewall, exploring this soft underbelly of data networking is critical for any secure network design.

What are some layer 2 vulnerabilities?

Layer 2 switched environments, typically found in enterprise wiring closets, can be easy targets for network security attacks, including:

  • MAC address flooding
  • DHCP server spoofing
  • Man-in-the-middle attacks using gratuitous ARP
  • IP host spoofing

What is a Content Addressable Memory table?

The CAM table, or Content Addressable Memory table, is present in all Cisco Catalyst switches and is used for Layer 2 switching. It records each station’s MAC address and its corresponding switch port location.