IP Multicast
Scott Hogg
CCIE #5133, CISSP, CIPTSS, CIPTDS
Scott Hogg has been a network computing consultant for over 14 years. Scott provides network engineering, security consulting, and training services to his clients, focusing on creating reliable, high-performance, secure, manageable, and cost-effective network solutions. He has been working with computers since 1985 and has worked with UNIX and networking systems since 1988. He has a B.S. in Computer Science from Colorado State University and an M.S. in Telecommunications from the University of Colorado, along with his CCIE (#5133), CISSP (#4610), FCNE, and CIPTSS/CIPTDS certifications. Scott has designed, implemented, and troubleshot networks for many large enterprises and service providers. Scott's current interests are in the areas of IP Multicast, QoS, MPLS, VoIP, and IPv6.
Agenda
Fundamentals of IP Multicast
Host to Router Multicast Protocols
IP Multicast Switching Protocols
Intra-Domain Multicast Routing Protocols
Inter-Domain Multicast Routing Protocols
Troubleshooting Techniques
Summary, Q&A
This class has the following goals:
High-level introduction to the technology
Introduces the student to IP Multicast
Gives examples of key IPMC technologies
IP Multicast
Fundamentals of IP Multicast
What is IP Multicast?
Bandwidth-conserving technology that reduces traffic by delivering a single stream of data to thousands of recipients. Invented by Steve Deering in 1989. Communication between a single sender and multiple receivers on a network, providing the following advantages:
Enhanced Efficiency: controls network traffic and reduces server and CPU loads
Optimized Performance: eliminates traffic redundancy
Distributed Applications: makes multipoint applications possible
Steve Deering invented IPMC for his Master's degree at Stanford University under the guidance of Jon Postel.
Cisco IOS Multicast enables customers to: Efficiently deploy and scale distributed group applications across the network Create a ubiquitous, enterprise-wide content distribution model Solve traffic congestion problems Allow service providers to deploy value-added services that leverage their existing infrastructure
Applications that Benefit from IP Multicast
Multimedia: streaming media (audio/video), training programs (distance learning), corporate communications/presentations, video/audio conferencing
Gaming
Data warehousing, financial applications
Any one-to-many data application
As multicast becomes more accepted, more applications will be developed to utilize it. There is a slew of audio/video streaming applications that take advantage of IPMC. In addition to the various streaming apps, computer gaming is beginning to develop applications that will take advantage of IPMC. For example: an online game of Doom with 4 players requires 3 unicast packet updates, sent to the other 3 players, every time your character makes a single move. If all 4 players are moving at the same time, each player will be sending and receiving 3 packets continuously. Now imagine that same game with 50 players: every time you make a move, 49 copies of that update have to be sent. With IPMC, all players send only 1 update to a multicast address and all other players receive it. Now you can have 1,000 people in the same game, each sending only 1 update.
Broadcast: the entire network receives the packet, even if there are few receivers
Multiple unicasts
Broadcasting requires every device in the entire network (including across WAN links) to receive the data (imagine a 2 Mbps video stream). Sending multiple unicasts can very easily overload the source or entire segments of the network (imagine a 2 Mbps video stream going to 1,000 receivers: that's 2 Gbps of data being sent from the source).
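The arithmetic behind that warning is worth making explicit. A back-of-envelope sketch in Python, using the stream size and receiver count from the example above:

```python
# Back-of-envelope comparison of unicast replication vs. multicast
# for a single video stream (figures taken from the slide's example).

STREAM_MBPS = 2        # one 2 Mbps video stream
RECEIVERS = 1000       # receiver count used in the example

unicast_load = STREAM_MBPS * RECEIVERS   # source sends one copy per receiver
multicast_load = STREAM_MBPS             # source sends a single copy

print(unicast_load)    # load leaving the source with unicast, in Mbps
print(multicast_load)  # load leaving the source with multicast, in Mbps
```

The unicast figure (2,000 Mbps) is what the source NIC would have to sustain; the multicast figure stays flat no matter how many receivers join.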
Multicast Operation
Multicast Group
Multicast transmission: sends single multicast packet addressed to all intended recipients
Creates efficient communication and transmission
Optimizes network performance
Introduces a new class of IP addresses: Class D, 224.0.0.0 to 239.255.255.255
With IP multicast the source sends only 1 packet to reach thousands of receivers. The only time a packet is duplicated is at a router that has multiple paths to send that data down.
IP Multicast Challenges
Multicast is UDP Based
Best Effort Delivery
Multicast applications do not expect reliable delivery of data and should be designed accordingly.
No Congestion Avoidance
Lack of TCP windowing and slow-start mechanisms can result in network congestion.
Duplicates
Some multicast protocol mechanisms result in the occasional generation of duplicate packets.
Group Member 1
Group Member 2
Think of an IPMC group like an email distribution list: you don't have to be a member of the list to send an email to it, but you do have to be a member to receive the email. If you are a member, you can also send to the list.
IP Multicast Concept
Fundamental Components
1. Routing infrastructure
2. Router-to-client infrastructure
Source
Routing Infrastructure
IP multicast routing: floods, grafts, prunes
Routing protocols: PIM-SM, Bidir-PIM, etc.
Interconnecting protocols: MBGP, MSDP, SSM, etc.
Receiver
The 2 key components of IPMC are the communication between routers to figure out how to deliver the content to the receivers, and the communication between a PC and a router (the PC tells the router which IPMC traffic it wishes to receive).
IP Multicast Addresses
IP group addresses
Class D address: high-order 4 bits are 1110 (224.0.0.0/4)
Range from 224.0.0.0 through 239.255.255.255
GLOP (RFC 2770) is 233.0.0.0/8, static group address assignment:
The AS number is inserted in the middle two octets
The remaining low-order octet is used for group assignment
Covers 233.0.0.0 through 233.255.255.255
Insufficient address space for large content providers
http://gigapop.uoregon.edu/glop/index.html
http://www.ogig.net/glop/
Site-local scope: 239.253.0.0/16 (253??? in the second octet?)
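The GLOP derivation above (16-bit AS number split across the middle two octets of 233/8) can be sketched in a few lines; the function name and the example AS number are illustrative choices, not part of any standard tooling:

```python
def glop_prefix(as_number: int) -> str:
    """Derive the GLOP /24 (RFC 2770) for a 16-bit AS number.

    The AS number is split into its high and low bytes, which become
    the middle two octets of the 233/8 block.
    """
    if not 0 <= as_number <= 0xFFFF:
        raise ValueError("GLOP only covers 16-bit AS numbers")
    high, low = as_number >> 8, as_number & 0xFF
    return f"233.{high}.{low}.0/24"

# Example: AS 5662 = 0x161E, so the octets are 22 and 30
print(glop_prefix(5662))   # 233.22.30.0/24
```

This also makes the "insufficient space" complaint concrete: each AS gets exactly one /24 (254 usable groups), regardless of how much content it serves.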
Multicast Routing Monitor: 224.0.1.111
http://www.iana.org/assignments/multicast-addresses
HSRP uses 224.0.0.2 (HSRPv2 uses 224.0.0.102)
Guidelines for Enterprise IP Multicast Address Allocation:
http://www.cisco.com/en/US/tech/tk828/technologies_white_paper09186a00802d4643.shtml
IP Multicast
Host to Router Multicast Protocols
Module Agenda
MAC Layer Addressing
MAC Layer Addressing Example
IGMP Version 1
IGMP Version 2
IGMPv1/v2 Operation Examples
IGMP Version 3
To obtain an OUI from the IEEE you must spend $1,000. Since this was a research project between Jon and Steve, Jon could only afford $1,000 for 1 OUI (01-00-5e). He agreed to split the MAC space in half with Steve. This is why the 1st bit in the 2nd octet is also ignored. If they could have spent $16,000, there wouldn't have been the problem of overlapping.
When designing an IPMC addressing scheme, it is important to remember that 32 IP addresses overlap the same 1 MAC address.
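That 32-to-1 overlap follows directly from copying only the low-order 23 bits of the group address into the fixed 01-00-5e prefix. A small sketch (the helper name is ours, not from any vendor tool):

```python
import ipaddress

def multicast_mac(group: str) -> str:
    """Map an IPv4 multicast group to its Ethernet MAC address.

    Only the low-order 23 bits of the group address are copied into
    the fixed 01-00-5e OUI, so 32 IP groups share each MAC address
    (5 high-order bits of the 28-bit group ID are discarded).
    """
    ip = int(ipaddress.IPv4Address(group))
    low23 = ip & 0x7FFFFF
    return "01:00:5e:{:02x}:{:02x}:{:02x}".format(
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

# Two different groups, one MAC: the overlap to plan around
print(multicast_mac("224.1.1.1"))    # 01:00:5e:01:01:01
print(multicast_mac("239.129.1.1"))  # 01:00:5e:01:01:01 (same frame address)
```

A practical consequence: a host joined to 224.1.1.1 will also receive frames for 239.129.1.1 at Layer 2 and must filter them at Layer 3.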
With all versions of IGMP, only one router per IP subnet sends queries; this router is called the querier. In Version 1, the querier is chosen with the help of the multicast routing protocol. In Versions 2 and 3, it is the router with the lowest IP address. There are several interoperability options. IGMPv3 is supported on Windows XP and FreeBSD.
IGMP Version 1
Membership Queries
Sent by router to the All-Hosts (224.0.0.1) multicast address to determine what multicast groups have active members Default query interval is 60 seconds No standard election method for designated querier
Membership Reports
Sent by host (TTL=1) wishing to receive a specific multicast group
Can be sent unsolicited or in response to a query
Report Suppression
Used by the clients so all members do not have to respond to a query
Each client chooses a random delay between 0 and the maximum response time; the first to time out sends the report
IGMP Membership Queries
IGMPv1 Membership Queries are sent by the router to the All-Hosts (224.0.0.1) multicast address to solicit which multicast groups have active receivers on the local network.
IGMP Membership Reports
IGMPv1 Membership Reports are sent by hosts wishing to receive traffic for a specific multicast group. Membership Reports are sent (with a TTL of 1) to the multicast address of the group for which the host wishes to receive traffic. Hosts either send reports asynchronously (when they first wish to join a group) or in response to Membership Queries. In the latter case, the response is used to maintain the group in an active state so that traffic for the group continues to be forwarded to the network segment.
Report Suppression
Report suppression is used among group members so that all members do not have to respond to a query. This saves CPU and bandwidth on all systems. The rule in multicast membership is that as long as one member is present, the group must be forwarded onto that segment. Therefore, only one member is required to keep interest in a given group, so report suppression is efficient.
TTL
Since Membership Query and Report packets only have local significance, the TTL of these packets is always set to 1. This also ensures they will not be accidentally forwarded off the local subnet and cause confusion on other subnets. After 3 queries with no response, the router will stop sending the multicast traffic to that segment.
IGMP Version 2
Querier Election
Router with the lowest IP address is elected the querier on a subnet
Checksum
Group Address
Checksum
Group Address
The way Version 2 is backward compatible with Version 1 is in its packet format. Notice that the Version and Type fields of v1 have been merged into the Type field for v2.
The v1 Version field had only 1 value: 1 for version 1 (a predecessor to IGMPv1 would have set this field to 0).
The v1 Type field has 2 values: Membership Query and Membership Report.
V2 has 4 values for the Type field:
0x11 - Membership Query (General Query and Group-Specific Query)
0x12 - Version 1 Membership Report
0x16 - Version 2 Membership Report
0x17 - Leave Group
The Maximum Response Time for Version 2 identifies the time, in units of 1/10 of a second, that a host may wait to respond to a Query message. The default value for this field is 100 (10 seconds). This field is used only in Membership Query messages.
The Checksum field is the same in both versions: a 16-bit one's complement of the one's complement sum of the IGMP message.
The Group Address field differs slightly between v1 and v2:
V1: contains the multicast group address in a Membership Report; set to 0 in a Membership Query.
V2: set to 0 in a General Query; in a Group-Specific Query it contains the group being queried. In a Membership Report or Leave Group message it is set to the target multicast group address.
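The 8-byte v2 layout and the one's-complement checksum rule described above can be sketched as follows. This is a simplified illustration (the function names are ours, not from any OS or Cisco API); the checksum is the standard RFC 1071 Internet checksum:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 checksum: one's complement of the one's-complement
    sum of the message, taken over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def igmpv2_message(msg_type: int, max_resp: int, group: str) -> bytes:
    """Build an 8-byte IGMPv2 message: Type, Max Response Time,
    Checksum, Group Address (checksum computed over the whole message)."""
    group_bytes = bytes(int(o) for o in group.split("."))
    header = struct.pack("!BBH", msg_type, max_resp, 0) + group_bytes
    csum = internet_checksum(header)
    return struct.pack("!BBH", msg_type, max_resp, csum) + group_bytes

# 0x16 = Version 2 Membership Report for group 224.1.1.1
report = igmpv2_message(0x16, 0, "224.1.1.1")
print(len(report))                       # 8 bytes
print(internet_checksum(report))         # 0: a valid checksum verifies to zero
```

Recomputing the checksum over a correctly checksummed message yields 0, which is how a receiver validates the packet.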
Report
10.1.1.1
Joining Member sends report to Multicast Group address (224.1.1.1) immediately upon joining
Both versions are the same in the way a host joins a group.
X
Suppressed Report 10.1.1.1
X
Suppressed Query 224.0.0.1
Router sends periodic queries to 224.0.0.1 (all hosts). One member per group per subnet reports to 224.1.1.1 (the multicast group). Other members see that H2 sent a report and suppress their own reports.
The default router query interval is 60 seconds. When a host receives an IGMP membership query, it starts a countdown report timer for each multicast group it has joined. Each report timer is initialized to a random value between 0 and the maximum response interval (10 seconds). Once a timer hits 0, the host multicasts an IGMP membership report for that group. If the host sees another report on the segment before its own timer reaches 0, it suppresses its report and resets its timer. The timer is set to a random interval to prevent multiple hosts on a segment from sending reports at the same time.
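The suppression rule above reduces to "lowest timer wins" among the members of one group on one segment. A deliberately tiny sketch (host names and delay values are made up for illustration):

```python
def surviving_reports(timers: dict) -> list:
    """Which host actually sends a membership report, per the
    suppression rule.

    `timers` maps host name -> its random countdown delay (drawn
    between 0 and the maximum response interval). The host whose
    timer expires first sends the report; every other member, having
    seen that report for the same group, suppresses its own.
    """
    first = min(timers, key=timers.get)
    return [first]

# H1..H3 all joined 224.1.1.1 and drew these (hypothetical) delays:
print(surviving_reports({"H1": 7.2, "H2": 1.4, "H3": 9.9}))  # ['H2']
```

Only one report per group per segment reaches the router, which is all it needs: a single member is enough to keep the group forwarded.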
X
Suppressed Report 10.1.1.1
X
Suppressed Query 224.1.1.1
Router sends periodic group-specific queries to 224.1.1.1. One member per group per subnet reports to 224.1.1.1 (the multicast group). Other members suppress their reports.
This slide shows how group state is maintained with IGMP version 2. Notice that the only difference between v1 and v2 in maintaining a group is how the router sends its query (group-specific versus all-hosts).
10.1.1.2
10.1.1.1
IGMPv2 B A
Initially all routers send out a Query
Router with the lowest IP address is elected querier
Other routers become Non-Queriers
Version 1 does not have an election mechanism. It relies on the Layer 3 IPMC routing protocol (PIM, MOSPF, DVMRP, etc.) to fix this problem by electing a designated router for the subnet. The designated router is different from the IGMP querier. The designated router is a function of the IPMC routing protocol and handles certain multicast forwarding duties on the subnet.
10.1.1.1 IGMPv1
Query to 224.0.0.1
Hosts silently leave the group
Router continues sending periodic queries
No reports for the group are received by the router
Group times out
The default timeout period is 3 times the query interval: 3 x 60 seconds = 3 minutes. Once a router receives a membership report, the query interval and timeout period are reset.
H2 leaves the group and sends a Leave message
Router sends a group-specific query
A remaining member host sends a report
Group remains active
With the new Leave mechanism in Version 2, the router does not have to keep sending multicast traffic out an interface while waiting the 3 minutes for the leave timeout. Once the router sees a Leave, it sends back a group-specific query and waits the query response interval before timing out the group (default 60 seconds).
IGMP Version 3
IGMPv3 interoperates with IGMPv1 and v2. It adds support for source filtering.
Include/Exclude Source Lists A client can request only a specific source A client can request all but a specific source
Version 3 adds a measure of security to IGMP: it allows hosts to request traffic from specific trusted sources (or exclude specific untrusted sources). It requires the new IPMulticastListen API in the OS, and applications must be rewritten to use the IGMPv3 Include/Exclude features. IGMPv3 is supported in Microsoft XP, the Linux 2.4 kernel (Cisco and Sprint), FreeBSD, and NetBSD.
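The Include/Exclude semantics reduce to a one-function filter. This is a sketch of the concept only; real IGMPv3 state is kept per interface, per group, and signaled through the IPMulticastListen API rather than a standalone function:

```python
def wants_packet(mode: str, sources: set, packet_source: str) -> bool:
    """IGMPv3-style source filtering for one (host, group) subscription.

    mode is 'INCLUDE' (accept only the listed sources) or 'EXCLUDE'
    (accept everything except the listed sources).
    """
    if mode == "INCLUDE":
        return packet_source in sources
    if mode == "EXCLUDE":
        return packet_source not in sources
    raise ValueError("mode must be INCLUDE or EXCLUDE")

# Trust only one sender for this group:
print(wants_packet("INCLUDE", {"10.1.1.5"}, "10.1.1.5"))  # True
# Block one known-bad sender, accept everyone else:
print(wants_packet("EXCLUDE", {"10.9.9.9"}, "10.9.9.9"))  # False
```

INCLUDE with an explicit source list is also the foundation of Source-Specific Multicast (SSM), where the receiver names the (S,G) pair it wants.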
Router
Version 1: Membership Queries, Membership Reports
Version 2: Group-Specific Query, Leave Group message, querier election mechanism, query-interval response time
Version 3: source filtering (S,G) and full reports
IP Multicast
IP Multicast Switching Protocols
Switching Protocols
Layer 2 Multicast Problem
IGMP Snooping
IGMP Snooping Example 1
IGMP Snooping Example 2
RGMP
Multicast Switching
GMRP
We aren't going to talk about CGMP:
It is a vendor-proprietary technique
Deprecated in favor of IGMP Snooping
May be the only technique available on older equipment
Multicast
On switches with multiple VLANs, only the traffic coming in on a specific VLAN will be flooded out that same VLAN.
MC Group 1
MC Group 2
Keep in mind that this flooding will only occur within the same VLAN; it will not span multiple VLANs unless the router sends to each VLAN.
MC Group 1
G1
MC Group 2
G1,G2
G2
IGMP Snooping
Switches become IGMP aware IGMP packets intercepted by the NMP or by special hardware ASICs Switch must examine contents of IGMP messages to determine which ports want what traffic
IGMP membership reports IGMP leave messages
PIM
IGMP
Impact on switch:
Must process ALL Layer 2 multicast packets Admin. load increases with multicast traffic load Requires special hardware to maintain throughput
IGMP
IGMP Snooping is the industry standard. It does require extra hardware (an ASIC) to separate the traffic flow from the CPU. Cisco needed a solution before IGMP Snooping became a standard, which is why CGMP was developed.
IGMP Snooping:
Used on the switch
Allows the switch to learn about multicast routers and sources
Allows the switch to forward multicast packets out to only subscribing clients
IGMPv3: no report suppression, which enables individual member tracking by the switch
NMP = Network Management Processor
ip igmp snooping [vlan #]
show ip igmp snooping mrouter [vlan vlan-id]
IGMP filtering:
ES-SW-01# config term
ES-SW-01(config)# ip igmp profile 12
ES-SW-01(config-igmp-profile)# permit
ES-SW-01(config-igmp-profile)# range 229.12.12.0
ES-SW-01(config-igmp-profile)# end
ES-SW-01# show ip igmp profile 12
IGMP Profile 12
permit
range 229.12.12.0 229.12.12.0
ES-SW-01(config)# interface fastethernet0/22
ES-SW-01(config-if)# ip igmp filter 12
ES-SW-01(config-if)# end
On a switch: ip igmp snooping vlan 1 immediate-leave
Switching CPU
0 Engine
CAM Table
2
MAC Address 0100.5e01.0203 Ports 0,1,2
Entry Added
Host 1
Host 2
Host 3
Host 4
When a multicast report is sent from Host 1 to Router A, a CAM table entry is built in the switch to send all future traffic destined to that MAC address to ports 1 (Router A), 2 (Host 1), and 0 (switch CPU). The switch heard the IGMP general queries to 224.0.0.1 coming from the router and added port 1 to the CAM table. The switch might also hear PIM hello packets from the router.
Switching CPU
0 Engine
CAM Table
2
MAC Address 0100.5e01.0203 Ports 0,1,2,5
Port Added
Host 1
Host 2
Host 3
Host 4
As Host 4 sends an IGMP membership report to the router, the switch adds that port to its CAM table entry for that MAC address.
CAM Table
2
MAC Address 0100.5e01.0203 Ports 0,1,2,5
Host 3
Host 4
Once the CAM table is built, if a large stream of data were sent to the multicast MAC address, it would go to all interested ports (0, 1, 2, 5). Notice that port 0 is listed, which is the CPU. In this example the switch CPU would be overloaded by receiving the 1.5 Mbps video stream. There needs to be a mechanism for the switch to handle this and not receive all multicast traffic. Failures during CPU congestion can include unicast and multicast frame drops. Sometimes the switching engine continues to forward traffic, but the internal CPU begins to drop packets because it can't keep up with the incoming traffic. The CPU might also miss IGMP leave messages, which could further hamper performance.
CAM Table
2
MAC Address 0100.5exx.xxxx L3 IGMP Ports 0
Host 1
Host 2
Host 3
Host 4
ASIC stands for Application-Specific Integrated Circuit. It allows the switch to peer further into the packets, examine Layer 3 information, and send only IGMP messages to the CPU for processing. There will now be 2 separate CAM entries for the multicast MAC address: 1 for the IGMP membership reports and 1 for all other traffic to that MAC address.
CAM Table
2
MAC Address 0100.5exx.xxxx 0100.5e01.0203 L3 IGMP !IGMP Ports 0 1,2
Host 1
Host 2
Host 3
Host 4
In this example, the ASIC intercepts the IGMP packet and sends it to the CPU (port 0). The CPU then forwards the IGMP packet out the port it was originally destined for. Notice that there are now 2 entries in the CAM table: one for all IGMP traffic destined to the IPMC MAC prefix 01-00-5e, and one for all non-IGMP traffic destined to the IPMC MAC address of that group. Note that if Host 2 were to send an IGMP membership report toward the router for a different group, the ASIC would intercept that report, send it to the CPU, and then forward it to the router. The CPU would then add a third entry in the CAM table for that specific MAC address and populate ports 1 and 3.
LAN Switch
CAM Table
2
MAC Address 0100.5exx.xxxx 0100.5e01.0203 L3 IGMP !IGMP Ports 0 1,2 ,5
Port Added
Host 1
Host 2
Host 3
Host 4
Since there is now an entry for the MAC address in the switch's CAM table (from Host 1 joining), when any additional hosts join that same group the traffic is not flooded; it is forwarded to the proper ports. Notice that the ASIC still intercepts all IGMP traffic and forwards it to port 0 (the CPU).
CAM Table
2
MAC Address 0100.5exx.xxxx 0100.5e01.0203 L3 IGMP !IGMP Ports 0 1,2 ,5
Host 3
Host 4
Now all traffic destined for the multicast MAC address avoids the CPU. Cisco switches require a NetFlow Feature Card to perform IGMP Snooping (newer switches have this as standard, and more powerful switches can already peer into Layer 3, so they don't need such a card). When hosts leave the group, the switch sees the IGMP leave messages and removes the CAM table entries. When the router sends an IGMP general query to 224.0.0.1, all hosts get this message regardless of the CAM table contents. This approach is still subject to the IGMPv1 leave-latency problem described earlier in this presentation.
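The CAM-table behavior walked through in the last few slides can be condensed into a toy model. This is a conceptual sketch only; the class and method names are ours, and the port numbers follow the figures above (port 0 = CPU, port 1 = router):

```python
class SnoopingSwitch:
    """Toy model of IGMP-snooping CAM maintenance.

    IGMP control traffic is always punted to the CPU port; data frames
    to a group MAC follow the per-group port list learned from reports.
    """
    CPU_PORT = 0

    def __init__(self, router_port: int):
        self.router_port = router_port
        self.groups = {}   # group MAC -> set of ports

    def report(self, group_mac: str, port: int) -> None:
        """A host on `port` sent a membership report for `group_mac`."""
        entry = self.groups.setdefault(
            group_mac, {self.CPU_PORT, self.router_port})
        entry.add(port)

    def leave(self, group_mac: str, port: int) -> None:
        """A host on `port` left the group; drop it from the entry."""
        self.groups.get(group_mac, set()).discard(port)

    def data_ports(self, group_mac: str):
        """Ports a data frame to `group_mac` is forwarded out of
        (the CPU is excluded once IGMP is split from data traffic)."""
        return self.groups.get(group_mac, set()) - {self.CPU_PORT}

sw = SnoopingSwitch(router_port=1)
sw.report("0100.5e01.0203", 2)   # Host 1 joins
sw.report("0100.5e01.0203", 5)   # Host 4 joins
print(sorted(sw.data_ports("0100.5e01.0203")))  # [1, 2, 5]
```

Unknown group MACs yield an empty port set here; a real switch would flood them, which is exactly the behavior snooping exists to avoid for known groups.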
Cat IOS
ip igmp snooping
interface vlan 10
 ip igmp snooping querier
 ip igmp snooping fast-leave
Cisco Multicast Support Matrix: http://www.cisco.com/en/US/partner/tech/tk828/tk363/technologies_white_paper09186a00800a85d1.shtml
Cat OS:
set igmp [enable | disable]
show igmp statistics [vlan]
set igmp flooding enable
show igmp flooding
set igmp mode {igmp-only | igmp-cgmp | auto}
show igmp mode
set igmp leave-query-type auto-mode | general-query | mac-gen-query
show igmp leave-query-type
set igmp fastleave enable
show multicast protocols status
set igmp v3-processing enable
show multicast v3-group
show multicast router [igmp] [mod/port] [vlanid]
set igmp fastblock enable
set multicast ratelimit {disable | enable}
set multicast ratelimit rate rate
show multicast ratelimit-info
set igmp querier {disable | enable} vlan
set igmp querier vlan [qi | oqi] val
set igmp querier address ip_address vlan
show igmp querier information
show multicast group [mac_addr] [vlan_id]
show multicast group igmp [mac_addr] [vlan_id]
show multicast group count [vlan_id]
show multicast group count igmp [vlan_id]
show multicast router [igmp | rgmp] [mod/port] [vlan_id]
set cam {static | permanent} multicast_mac mod/port [vlan]
clear multicast router [mod/port | all]
clear cam mac_addr [vlan]
Cat IOS:
show ip igmp interface vlan vlan_ID | include querier
show ip igmp interface vlan vlan_ID | include globally
show ip igmp interface vlan vlan_ID | include snooping
GMRP Overview
Generic Attribute Registration Protocol (GARP) Multicast Registration Protocol (GMRP)
New standard from the IEEE 802.1 committee (802.1r)
Required changes to the IEEE 802.3 Ethernet frame length
1518 Bytes extended to 1522 Bytes
GMRP runs on both the host and the Layer-2 switch; both will need upgrades to support it
IGMP is still required on the host
Allows client and switch to operate at Layer 2 for IP multicast packets
Because of the change in the Ethernet frame length, it is unlikely that GMRP will be implemented any time in the near future. You might expect GMRP to become a new standard once IPv6 is implemented.
Only supported on 5000s and 6500s
http://www.cisco.com/en/US/partner/tech/tk828/tk363/technologies_tech_note09186a0080122a70.shtml
Introduces Layer 2 speeds at the switch
Allows paths between client and router to be built before the router knows about them
Replaces the need for special ASIC-level hardware to accomplish IGMP Snooping
Default timers: Join time 200 ms, Leave time 600 ms, Leaveall time 10,000 ms
CatOS:
set gmrp [enable | disable]
show gmrp configuration
set port gmrp [enable | disable] mod/port
set gmrp fwdall [enable | disable] mod/port
set gmrp registration [normal | fixed | forbidden] mod/port
set garp timer {join | leave | leaveall} timer_value
show garp [timer | statistics [vlan_id]]
clear gmrp statistics {vlan_id | all}
http://www.javvin.com/protocolGMRP.html
Not to be confused with GVRP: GARP VLAN Registration Protocol (GVRP) is a Generic Attribute Registration Protocol (GARP) application that provides 802.1Q-compliant VLAN pruning and dynamic VLAN creation on 802.1Q trunk ports. With GVRP, the switch can exchange VLAN configuration information with other GVRP switches, prune unnecessary broadcast and unknown unicast traffic, and dynamically create and manage VLANs on switches connected through 802.1Q trunk ports.
RGMP messages are sent to the multicast address 224.0.0.25
Only supported on 5000s and 6500s
RFC 3488
Cat OS:
#rgmp
set rgmp [enable | disable]
show rgmp group [mac_addr] [vlan_id]
show rgmp group count [vlan_id]
show rgmp statistics [vlan]
show multicast router [igmp | rgmp] [mod/port] [vlan_id]
clear rgmp statistics
Router IOS:
Router(config)# ip rgmp
debug ip rgmp [group-name | group-address]
show multicast protocols status
http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/120newft/120limit/120s/120s10/dtrgmp.htm
http://www.cisco.com/univercd/cc/td/doc/product/lan/cat6000/sw_5_4/config/multi.htm#xtocid722936
ftp://ftpeng.cisco.com/ipmulticast/config-notes/rgmp.txt
IP Multicast Switching
IP Multicast MultiLayer Switching (MMLS)
Policy Feature Card 1 (PFC1) - Netflow
http://www.cisco.com/en/US/partner/products/hw/switches/ps708/products_configuration_guide_chapter09186a00800eaa2c.html
http://www.cisco.com/en/US/partner/tech/tk828/tk363/technologies_white_paper09186a00800d6b5f.shtml
Multicast Distributed Fast Switching
Note: Multicast Distributed Fast Switching (MDFS) is also known as Multicast Distributed Switching (MDS). In MDFS, packets are distributed-switched on platforms with line cards. This methodology reduces the load on the Route Processor (RP), which now need only perform route lookups. The Cisco 7500 series routers and Cisco 12000 series Internet routers have processors on their line cards that allow them to perform distributed processing. The line cards for the Cisco 7500 series are called Versatile Interface Processors (VIPs). Each line card can make forwarding decisions independently, allowing fast switching to operate in a distributed manner.
IP Multicast Multilayer Switching
Note: IP multicast Multilayer Switching is also known as IP multicast MLS and MMLS. IP multicast MLS is hardware-based Layer 3 switching of IP multicast data for routers connected to high-end Catalyst LAN switches. IP multicast MLS switches IP multicast data packet flows between IP subnets using advanced ASIC switching hardware, thereby off-loading processor-intensive multicast packet routing from network routers. The packet forwarding function is moved onto the connected Layer 3 switch whenever a supported path exists between a source and members of a multicast group. Packets that do not have a supported path to reach their destinations are still forwarded in software by routers. The distinction between a router and a LAN switch has become increasingly vague because of the evolution of highly intelligent Layer 3-aware ASICs. The capability of a router to interact with the forwarding mechanism of a LAN switch at Layer 3 has led to a dramatic increase in switching performance.
The Catalyst 5000 family switches and Catalyst 6500 family switches with Supervisor Engine I support IP multicast MLS only for (S,G) flows. The Catalyst 6500 family switches with Supervisor Engine II support IP multicast MLS for both (S,G) and (*,G) flows. For more information on (S,G) and (*,G) flows, see:
ftp://ftpeng.cisco.com/ipmulticast/mds.txt
http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/120newft/120t/120t5/ipmctmls.htm
Cat OS:
#mmls nonrpf
set mmls nonrpf enable
set mmls nonrpf timer 60
set mmls nonrpf window 10
set mmls nonrpf timer 10
Router IOS:
mls ip multicast connected
show mls ip multicast connected
RGMP: Router-Port Group Management Protocol
Multicast Switching
GMRP: GARP Multicast Registration Protocol
IP Multicast
Intra-Domain Multicast Routing Protocols
Module Agenda
Routing Protocol Basics
Reverse Path Forwarding (RPF)
Multicast Distribution Trees
Sparse and Dense Mode Protocols
PIM Sparse-Mode
Auto-RP, BSR, Anycast RP
Bidirectional PIM
Source-Based Routing: routers must know where a packet originated, rather than where it is destined (the opposite of unicast).
We say multicast routing is destination-to-source even though the streaming-media traffic flows from source to destination.
Source-Based Routing: since we have to route from the receiver back to the source, there must be a mechanism for informing all routers in the network about the active sources for each group. Since distribution trees are built from a receiver's perspective, prior source knowledge is implied unless another mechanism exists to build the tree.
Multicast routing utilizes Reverse Path Forwarding (RPF).
Broadcast: floods packets out all interfaces except the incoming one from the source, initially assuming every host on the network is part of the multicast group.
Prune: eliminates tree branches without multicast group members; cuts off transmission to LANs without interested receivers.
Selective Forwarding: requires its own integrated unicast routing protocol.
Multicast routing protocol responsibilities:
Build and maintain distribution trees
Provide a mechanism for informing all routers about active sources and groups
How do we inform routers about active sources and groups?
Flood to all routers: Dense
Rendezvous mechanism: Sparse
It is very important to understand the differences between source trees and shared trees and how they work. Not all IPMC routing protocols use a unicast routing protocol to determine the reverse path; MOSPF is one of the protocols that does not. Unicast routers care about where the packet is heading (destination); multicast routers care about where the packet came from (source).
Reverse Path Forwarding
Routers forward multicast datagrams received on the incoming interface of the distribution tree leading back to the source
Routers check the source IP address against their multicast routing tables (the RPF check) to ensure the multicast datagram was received on the expected incoming interface
If the incoming datagram passes the RPF check, the packet is forwarded out all outgoing interfaces for the multicast group
If the incoming datagram fails the RPF check, the packet is discarded
The RPF check passes if the packet arrives on the same interface the router would use to route traffic back toward the source
The RPF check is done every 5 seconds by default on Cisco routers, so flapping/convergence can cause problems
Asymmetrical routing can cause RPF checks to fail
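The RPF rule reduces to a single comparison. A minimal sketch, assuming a flat source-to-interface lookup in place of a real longest-prefix-match routing table:

```python
def rpf_check(source: str, in_iface: str, unicast_routes: dict) -> bool:
    """Accept a multicast packet only if it arrived on the interface
    the unicast table would use to reach the packet's *source*.

    `unicast_routes` maps a source address to its outgoing interface,
    a deliberately simplified stand-in for a real route lookup.
    """
    return unicast_routes.get(source) == in_iface

# Unicast table says the best path back to 10.1.1.1 is out Serial0:
routes = {"10.1.1.1": "Serial0"}

print(rpf_check("10.1.1.1", "Serial0", routes))    # True: forward the packet
print(rpf_check("10.1.1.1", "Ethernet0", routes))  # False: discard (RPF fail)
```

The second case is exactly what asymmetrical routing produces: traffic legitimately arriving on one interface while the unicast table points back out another.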
Shared Trees
One tree built for all sources in the network
All traffic must pass through the RP or core
Uses less memory
May get sub-optimal paths
May introduce delay
(S,G)
S S
PC
RP
RP
Imagine that the trees on the right side are overlaid on the network on the left side.
S2 S1 S1
RP
RP PC
S2 S1
Multiple sources create multiple distribution trees
A different tree for each source
Sparse-mode
Explicit join behavior: PULL
Supports both source trees and shared trees
Typically uses a master database containing information about active sources and groups (e.g., a Rendezvous Point or Core router)
PIM-SM, CBT
"Sparse mode Good! Dense mode Bad!" Source: The Caveman's Guide to IP Multicast, 2000, R. Davis
Dense-mode multicast protocols: initially flood/broadcast multicast data to the entire network, then prune back the paths that don't have interested receivers. This flood-and-prune behavior repeats every x minutes (the Cisco default is 3 minutes).
Sparse-mode multicast protocols: assume no receivers are interested unless they explicitly ask for the traffic.
IETF status of multicast protocols: DVMRPv1 is obsolete and no longer used. DVMRPv2 is the current implementation and is used throughout the MBone; however, DVMRPv2 is only an Internet-Draft. Only MOSPF currently has an IETF category of Standards Track, but most members of the IETF IDMR working group are unsure that MOSPF will scale to any degree and are therefore uncomfortable declaring MOSPF the standard for IP multicasting. (Even the author of MOSPF, J. Moy, has been quoted in an RFC saying that more work needs to be done to determine the scalability of MOSPF.) At the August 1997 meeting of the IETF IDMR working group, the vote to approve PIM as the IETF multicast protocol standard was 2 votes short of unanimous. Given the above, the IETF is expected to reassign all of the above protocols to the Experimental category in order to give all protocols a level playing field.
Appropriate for
Wide scale deployment for both densely and sparsely populated groups in the enterprise Optimal choice for sparse groups with few receivers perhaps separated by expensive WAN links
55
Protocol Independent Multicast (PIM) Sparse-mode (RFC 2117) Utilizes a rendezvous point (RP) to coordinate forwarding from source to receivers Regardless of location/number of receivers, senders register with RP and send a single copy of multicast data through it to registered receivers Regardless of location/number of sources, group members register to receive data and always receive it through the RP Appropriate for Multipoint datastreams going to a relatively small number of LANs Few interested receivers per multicast group Senders/receivers sparsely distributed or separated by WAN links Intermittent traffic (no necessity to flood each new session)
[Diagram: an (S,G) source, the RP, and a receiver in a PIM sparse-mode domain]
56
Explicit Join model
Shared trees are (*,G) and always rooted at the Rendezvous Point (RP)
Source trees are (S,G) and always rooted at the source
A receiving router creates (*,G) state by sending Joins toward the RP
By default on Cisco routers, once the first packet of multicast traffic arrives down a shared tree, the router immediately switches to the source tree. This bandwidth threshold (default 0 kbps) can be adjusted.
PIMv1 messages were IGMP type 14 packets
PIMv2 messages are sent to 224.0.0.13
S = Source, G = Group, * = All Sources
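The shared-tree-to-source-tree switchover threshold mentioned above is adjusted with ip pim spt-threshold; a minimal sketch (the ACL number and group range here are assumptions):

```
! Default behavior is to join the SPT on the first packet (threshold 0 kbps).
! Stay on the shared tree forever for the groups matched by ACL 10:
ip pim spt-threshold infinity group-list 10
access-list 10 permit 239.1.1.0 0.0.0.255
```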
The 3 different methods of configuring an RP are:
Statically
Auto-RP
PIMv2 BootStrap Router (BSR)
All three of these methods will be covered in detail in the Deploying IPMC class
PIM-SM state refresh
Once per minute the routers refresh their state
After 3 minutes the state is deleted
57
58
Evaluation: PIM Sparse-mode
Can be used for sparse or dense distribution of multicast receivers (no necessity to flood)
Advantages
Traffic sent only to registered receivers that have explicitly joined the multicast group
RP can be switched to the optimal shortest-path tree when high-traffic sources are forwarding to a sparsely distributed receiver group
Interoperates with DVMRP
Potential issues
Requires an RP during initial setup of the distribution tree (can switch to the shortest-path tree once the RP is established and determined suboptimal)
RPs can become bottlenecks if not selected with great care
Complex behavior is difficult to understand and therefore difficult to debug
ip multicast-routing
ip pim bidir-enable
ip pim rp-address 10.2.3.1 10
access-list 10 permit 239.8.1.0 0.0.0.255
access-list 10 deny any
interface Ethernet 0/0
 ip mroute-cache
 ip pim sparse-dense-mode
[Diagram: Receivers 1 and 2 join the (*,G) shared tree rooted at the RP via IGMP Membership Reports]
1) Receiver 1 sends an IGMP Membership Report to Router C
2) Router C adds its Ethernet interface to the requested multicast group (*,G)
3) Router C sends a group Join message (*,G) to the RP
4) The RP adds the interface to the group (*,G)
5) When Receiver 2 joins the group, Router E sends a Join message toward the RP
6) Router C sees the Join message and adds that interface to the group list
59
[Diagram: Receiver 2 leaves the group; a (*,G) Prune travels toward the RP]
1) Receiver 2 sends an IGMP Group Leave Report to Router E
2) Router E removes its Ethernet interface from the requested multicast group (*,G)
3) Router E sends a group Prune message (*,G) to Router C
4) Router C sees the Prune message and removes the group list from that interface
60
[Diagram: Source 1's first-hop router unicasts Register messages to the RP; the RP returns a Register-Stop once native (S,G) traffic arrives]
1) Source begins sending group G traffic
2) Router A encapsulates the packets in Register messages and unicasts them to the RP
3) RP creates (S,G) state toward the first-hop router
4) RP sends a Register-Stop to Router A once (S,G) packets start arriving
5) Router A stops encapsulating traffic in Register messages
61
[Diagram: the RP decapsulates the Register messages from Source 1, forwards the traffic down the (*,G) shared tree through routers C and E to Receivers 1 and 2, and sends an (S,G) Join/Prune toward the source]
1) Source begins sending group G traffic
2) Router A encapsulates the packets in Register messages and unicasts them to the RP
3) RP creates (S,G) state, decapsulates the Register messages, and forwards them out the (*,G) tree
4) RP sends an (S,G) Join/Prune message toward the source
62
[Diagram: after the Register-Stop, (S,G) traffic flows natively from Source 1 through the RP and down the (*,G) tree via routers C and E to Receivers 1 and 2]
1) Source begins sending group G traffic
2) Router A encapsulates the packets in Register messages and unicasts them to the RP
3) RP creates (S,G) state, decapsulates the Register messages, and forwards them out the (*,G) tree
4) RP sends an (S,G) Join/Prune message toward the source
5) RP sends a Register-Stop to Router A once (S,G) packets start arriving
6) Router A stops encapsulating traffic in Register messages
63
[Diagram: SPT switchover the last-hop router sends an (S,G) Join toward Source 1 through routers A and B while (*,G) shared-tree state is replaced with (S,G) state]
1) IPMC packet arrives down Shared tree (from RP) 2) (*,G) states are replaced with (S,G) states
64
[Diagram: (S,G) traffic flows down the source tree from Source 1; router C sends an (S,G) RP-bit Prune toward the RP to stop duplicate traffic on the shared tree]
1) IPMC packet arrives down Shared tree 2) (*,G) states are replaced with (S,G) states
3) Router C sends an (S,G) Join toward the source (Router A)
4) (S,G) traffic begins flowing down the source tree
5) Router C triggers an (S,G) RP-bit Prune toward the RP
6) IPMC traffic ceases flowing down the shared tree
65
[Diagram: PIM DR election between routers on a multiaccess segment toward the RP]
DR is determined with PIM Hello messages on multiaccess networks Prevents redundant/parallel joins to the RP ip pim dr-priority <value>
66
Beau Williamson, p174
Hellos are sent to 224.0.0.13 (All-PIM-Routers)
DR Priority takes preference; otherwise, the higher primary IP address is elected
[no] ip pim dr-priority <value>
Configures the neighbor priority used for PIM Designated Router (DR) election. The router with the largest <value> on an interface will become the PIM DR. If multiple routers have the same priority, then the router with the largest IP address on the interface becomes DR. If a router doesn't include the DR-Priority option in its Hello messages, the router is regarded as the highest-priority router and will become DR. If multiple such routers exist, the one with the largest IP address becomes DR. This allows interoperation with older systems.
ip pim sparse-dense-mode
Sparse-Dense mode interfaces Interface mode is determined by Group Mode If Group is dense, then interface operates in dense mode If Group is sparse, then interface operates in sparse mode Sparse mode interfaces Operate in sparse mode for all groups Dense mode interfaces Operate in dense mode for all groups
67
68
PIM-SM RFC 2362
Router# show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
  RP 10.10.10.1 (?), v2v1
    Info source: 10.10.10.2 (?), via Auto-RP
    Uptime: 00:22:08, expires: 00:02:40
This is from an RP, because the (*,G) entry shows an RPF neighbor of 0.0.0.0 and the incoming interface is Null. The T flag is set on the (S,G) entry from the source. The source is sending traffic inbound on Serial3; this looks like the "RP on a stick" scenario (Beau Williamson, p303).
69
S Sparse mode flag the group is in sparse mode
C Connected flag there is a directly connected member for this group
L Local flag this router itself is a member of this group
P Pruned flag the OIL (outgoing interface list) is NULL and a Prune message will be sent to the upstream (RPF) neighbor
T SPT flag traffic is being forwarded via the (S,G) entry (SPT)
X Proxy-Join timer flag the proxy join timer is running; used in the turnaround router scenario (Beau Williamson, p308)
J Join SPT flag
  In a (*,G) entry: indicates the SPT-Threshold is being exceeded; the next (S,G) packet received will trigger a join of the SPT
  In an (S,G) entry: indicates this source was previously cut over to the SPT; tells the router to check the traffic flow against the SPT-Threshold and, if rate < SPT-Threshold, switch back to the (*,G) shared tree
F Register flag this router should send Register messages for this traffic (there is a directly connected source)
  In an (S,G) entry: S is a directly connected source; triggers the Register process
  In a (*,G) entry: set when F is set in at least one child (S,G)
R RP-bit flag ((S,G) entries only) the shared tree is sometimes called the RP tree, hence RP-bit means the (S,G) entry is applicable to the shared tree
  Set by an (S,G)RP-bit Prune; indicates the info is applicable to the shared tree
  Used to prune (S,G) traffic from the shared tree; initiated by the last-hop router after the switch to the SPT
  Modifies (S,G) forwarding behavior: IIF = RPF toward the RP (i.e., up the shared tree), OIL = pruned accordingly
70
PIM Auto-RP
Cisco proprietary technique to automate RP election Auto-RP automates the distribution of group-to-RP mappings in a PIM network A router must be designated as an RP mapping agent
Receives the RP announcement messages from the RPs and arbitrates conflicts RP mapping agent sends the consistent group-to-RP mappings to all other routers by dense mode flooding
224.0.1.39 - cisco-rp-announce: Auto-RP candidate routers announce themselves to this group; mapping agents listen to this group
224.0.1.40 - cisco-rp-discovery: group-to-RP mapping info is sent to this address by the mapping agent; all Auto-RP routers join this group
Requires ip pim sparse-dense-mode on interfaces
The Auto-RP candidate with the highest IP address becomes the RP
Can be used with Admin-Scoping
71
Auto-RP is a mechanism whereby a PIM router learns the set of group-to-RP mappings required for PIM-SM.
To successfully implement Auto-RP and prevent any groups other than 224.0.1.39 and 224.0.1.40 from operating in dense mode, we recommend configuring a sink RP (also known as an RP of last resort). A sink RP is a statically configured RP that may or may not actually exist in the network. Configuring a sink RP does not interfere with Auto-RP operation because, by default, Auto-RP messages supersede static RP configurations.
We recommend configuring a sink RP for all possible multicast groups in your network, because it is possible for an unknown or unexpected source to become active. If no RP is configured to limit source registration, the group may revert to dense mode operation and be flooded with data.
An advantage of Auto-RP is that TTL scoping can be used.
RP-Announce interval is 60 seconds by default; hold time is set to 3x the RP-Announce interval = 180 seconds
RP-Discovery interval is 60 seconds by default; hold time is set to 3x the RP-Discovery interval = 180 seconds
If a router fails to receive an RP-Discovery message and the group-to-RP mapping expires, then the router switches to the statically configured RP of last resort. If no RP of last resort is configured, then the router switches the group to dense mode.
Use Auto-RP when minimum configuration and flexibility are desired.
Pros: More flexible; easy to maintain
Cons: Increased RP failover times; special care needed to avoid dense-mode fallback; some of these techniques increase configuration tasks
72
Access Router
ip pim rp-address 10.10.10.1 20
access-list 20 deny 224.0.1.39
access-list 20 deny 224.0.1.40
access-list 20 permit 224.0.0.0 15.255.255.255
On the mapping agent
ip pim rp-announce-filter This command filters Auto-RP announcement messages that arrive on group 224.0.1.39 from candidate RP routers. It prevents unwanted candidate RP announcement messages from being processed by the mapping agent; unwanted messages could interfere with the RP election mechanism of the mapping agent.
ip pim rp-announce-filter rp-list 1 group-list 2
access-list 1 permit 10.0.0.1
access-list 1 permit 10.0.0.2
access-list 2 permit 224.0.0.0 15.255.255.255
Example RP:
ip pim send-rp-announce Vlan10 scope 16 group-list 20 Sends RP-Announce messages with TTL=16 and the group range specified with ACL 20; these are the groups the router is willing to be the RP for
ip pim send-rp-discovery Vlan10 scope 16 Act as an RP mapping agent; tells the router to join 224.0.1.39 to listen for RP-Announce messages and cache them. The mapping agent selects the RP with the highest IP address and sends RP information every 60 seconds to all routers listening to RP-Discovery (224.0.1.40)
ip pim accept-register list 120 Used on the RP to filter incoming Register messages; filters on the source address alone or an (S,G) pair
!
access-list 20 permit 239.1.1.1 0.0.0.255
access-list 120 permit ip host 192.168.2.13 any
access-list 120 permit ip host 192.168.2.14 any
show ip pim rp mapping
BSR messages are flooded hop-by-hop every 60 seconds through the network
Distributes all Group-to-RP mapping (RP-set) info
Carried within PIM messages to 224.0.0.13 (all-PIM-routers) with TTL=1; can't use TTL scoping with BSR
Therefore, doesn't require ip pim sparse-dense-mode, because the BSR mechanism doesn't rely on multicast
Can't be used with Admin-Scoping
73
BSR is a mechanism whereby a PIM router learns the set of group-to-RP mappings required for PIM-SM.
PIM messages are link-local multicast messages that travel from PIM router to PIM router. Because of this single-hop method of disseminating RP information, TTL scoping cannot be used with BSR.
BSR fills a role similar to Auto-RP, except that it does not run the risk of reverting to dense mode operation, and it does not offer the ability to scope within a domain.
Use BSR when static/anycast RPs can't be used and when maximum interoperability is required.
Pros: Interoperates with all vendors
Cons: Increased RP failover times; special care needed to avoid dense mode fallback may increase config commands; does not support admin scoping of group addresses
BSR is a mechanism whereby a PIM router learns the set of group-to-RP mappings required for PIM-SM.
If the group-list is omitted on the rp-candidate command, then 224/4 is assumed.
In the bsr-candidate command, 1 is the hash-mask-length and 100 is the BSR priority (default = 0).
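A minimal candidate-BSR/candidate-RP sketch matching the parameters described above (the interface name, ACL number, and group range are assumptions):

```
! Candidate BSR: advertise Loopback0, hash-mask-length 1, priority 100
ip pim bsr-candidate Loopback0 1 100
! Candidate RP for the groups in ACL 20 (224.0.0.0/4 if group-list is omitted)
ip pim rp-candidate Loopback0 group-list 20
access-list 20 permit 239.0.0.0 0.255.255.255
```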
74
PIM Anycast RP
Use the same IP address for two routers Redistribute these loopback addresses into IGP Violates the rules for uniqueness of IP addresses Non-RP routers point to the anycast IP address as their RP MSDP is required between anycast-RPs to share information about their sources with each other Can never fall back to Dense mode because RP is statically defined
75
Use Anycast RP when network connects to Internet or when rapid RP failover is critical Pros: Fastest RP convergence method Required when connecting to Internet Cons: Requires more configuration (MSDP) Requires MSDP to function
76
There are two or more routers in the network with nearly identical configurations; each one advertises 10.10.10.1 as its RP address.
Alternative Anycast RP Example:
ip multicast-routing
interface Loopback0
 ip address 10.10.10.2 255.255.255.255
 ! (the other router uses 10.10.10.3 on Loopback0)
interface Loopback1
 ip address 10.10.10.1 255.255.255.255
 ip pim sparse-dense-mode
 ip ospf network point-to-point
ip pim rp-address 10.10.10.1 10
router ospf 10
 network 10.10.10.1 0.0.0.0 area 10
ip msdp peer 10.10.0.3 connect-source Loopback0
ip msdp originator-id Loopback0
access-list 10 permit 239.0.0.0 0.255.255.255
Access Router Configuration:
ip pim rp-address 10.10.10.1 20
Bidirectional PIM
Branches of the multicast tree are used for bidirectional traffic flow
Routers only maintain a single (*,G) entry per group; all (S,G) state is eliminated
Members explicitly join the shared tree; however, sources do not use Registers to get their data onto the shared tree
Simple to troubleshoot all traffic goes through the RP
A Designated Forwarder (DF), the router closest to the Bidir RP, is elected on every network
Reduces the resources multicast routers need for applications where everyone is both a sender and a receiver
State, memory, CPU, bandwidth
Disadvantage: no Bidir RP redundancy or load balancing as with Anycast/MSDP only primary/secondary Bidir RP redundancy
77
The Cisco IOS implementation supports 3 modes for a group: dense mode, sparse mode, and bidir mode.
Bidir PIM works well for financial applications (hoot-n-holler) and multicast videoconferencing that require a many-to-many model/traffic flow.
The source tree model provides optimum routing in the network, while shared trees provide a more scalable solution. Bidirectional PIM is a variant of PIM whereby data flows both up and down the same distribution tree. It uses only shared-tree forwarding, thereby reducing state creation.
Command found by default in many new IOS versions: ip pim bidir-enable
On every network segment and point-to-point link, all PIM routers participate in a procedure called DF election. The procedure selects one router as the DF for every RP of bidirectional groups. This router is responsible for forwarding multicast packets received on that network upstream to the RP.
The DF election is based on unicast routing metrics and uses the same tie-break rules employed by PIM Assert processes. The router with the most preferred unicast routing metric to the RP becomes the DF. This method ensures that only one copy of every packet is sent to the RP, even if there are parallel equal-cost paths to the RP.
A DF is selected for every RP of bidirectional groups. As a result, multiple routers may be elected as DF on any network segment, one for each RP. In addition, any particular router may be elected DF on more than one interface.
draft-farinacci-bidir-pim-01
access-list 45 permit 224.0.0.0 0.255.255.255 access-list 45 permit 227.0.0.0 0.255.255.255 access-list 45 deny 225.0.0.0 0.255.255.255 access-list 46 permit 226.0.0.0 0.255.255.255 access-list 47 permit 10.0.0.0 0.255.255.255
78
A Bidir-PIM capable router can run in bidir mode, sparse mode, dense mode, or any combination of them. If a router is configured for bidir mode but does not learn of a bidir-capable RP, it will operate in sparse mode. If a bidir-capable router learns of a bidir RP, then the group range advertised by the RP will operate in bidir mode. If the RP advertises any groups with a negative prefix, they will operate in dense mode.
By default, a bidir RP advertises all groups as bidir. An access group on the RP can be used to specify a list of groups to be advertised as bidir. Groups with the "deny" clause will operate in dense mode. A different (non-bidir) RP address needs to be specified for groups that need to operate in sparse mode, because a single access list allows only "permit" or "deny" clauses.
The following example shows how to configure a bidir RP to run all 3 modes: 224/8 and 227/8 are bidir groups, 225/8 is dense mode, and 226/8 is sparse mode. Both the bidir RP and the sparse-mode RP are configured on one router using two different loopback interfaces.
override Indicates that if there is a conflict, the RP configured with this command prevails over the RP learned by Auto-RP.
ip pim bidir-neighbor-filter <acl> This command lets the operator explicitly specify which routers should participate in the DF election, while still allowing all routers to participate in the sparse-mode domain.
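A hedged sketch of the dual-RP arrangement described above, announcing the bidir range (ACL 45) from one loopback and the sparse-mode range (ACL 46) from another via Auto-RP; the interface names and scope values are assumptions:

```
ip pim bidir-enable
! Loopback0 is RP for the bidir groups in ACL 45 (the denied 225/8 falls back to dense mode)
ip pim send-rp-announce Loopback0 scope 16 group-list 45 bidir
! Loopback1 is RP for the sparse-mode groups in ACL 46
ip pim send-rp-announce Loopback1 scope 16 group-list 46
ip pim send-rp-discovery Loopback0 scope 16
```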
show ip pim rp mapping
show ip mroute (B flag = Bidir PIM)
show ip pim int e1/0 df
Router# show ip pim interface df
Interface    RP         DF Winner  Metric  Uptime
Ethernet3/3  10.10.0.2  10.4.0.2   0       00:03:49
             10.10.0.3  10.4.0.3   0       00:01:49
             10.10.0.5  10.4.0.4   409600  00:01:49
Constraining Multicast
Multicast Boundary Rate Limit TTL Scoping
interface Serial0
 ip pim sparse-dense-mode
 ip multicast rate-limit out group-list 10 1000
 ip multicast boundary 10 filter-autorp
 ip multicast ttl-threshold 16
access-list 10 deny 224.0.1.39
access-list 10 deny 224.0.1.40
access-list 10 deny 239.0.0.0 0.255.255.255
access-list 10 permit 224.0.0.0 15.255.255.255
79
1000 = 1Mbps
ip sdr listen must be configured on the router if the video or whiteboard parameter is to be used in rate limiting
ip multicast boundary blocks all Site-Local (239.255.0.0/16) traffic from entering or leaving the site
ip multicast ttl-threshold prevents Site-Local Candidate-RP (C-RP) announcements from leaking out of the site
interface serial 0
 description External interface on a boundary router
 ip multicast boundary 1 filter-autorp
access-list 1 deny 224.0.1.39
access-list 1 deny 224.0.1.40
access-list 1 deny 239.0.0.0 0.255.255.255
access-list 1 permit 239.192.1.0 0.0.0.255
interface serial 0
 description External interface to non-adjacent region
 ip multicast ttl-threshold 128
Below is an example of how rate limiting multicast traffic on an interface can be configured:
interface FastEthernet0/0
 ip sdr listen
 ip multicast rate-limit in group-list 1 80
interface Serial 0/0
 ip sdr listen
 ip multicast rate-limit out group-list 1 80
access-list 1 permit 225.1.1.1 0.0.0.0
If the group-list and the source-list are not specified, then rate limiting is performed on the interface as a whole. Below is the syntax for the rate-limit command (required for rate limiting):
ip multicast rate-limit <in | out> [group-list ACL#] [source-list ACL#] <Kbps>
networkers04 RST2701 p109
80
IP Multicast
Inter-Domain Multicast Routing Protocols
81
82
These are protocols and technologies that help scale multicast routing protocols across large or multiple domains.
Interdomain Multicast routing protocols allow domains to interact with each other (similar to BGP with IP)
83
MBGP is a last-minute fix to the problem of DVMRP's lack of scalability. MBGP is not a permanent solution to connecting IPMC domains together, and MBGP alone is not a complete solution to interconnect IPMC domains. MBGP along with MSDP provides the necessary components to interconnect multiple IPMC domains. There are still very significant scaling issues with this solution.
Multiprotocol extensions to the BGP unicast inter-domain protocol carry multicast-specific routing information, adding capabilities to BGP to enable multicast routing policy throughout the Internet and to connect multicast topologies within and between BGP autonomous systems.
MBGP carries IP multicast routes. MP-BGP carries multiple instances of routes for unicast routing, multicast routing, and VPNv4. PIM uses the routes associated with multicast routing to make Reverse Path Forwarding (RPF) decisions at the inter-domain borders.
84
Unicast Table (U-Table) and Multicast Table (M-Table)
Global command ip multicast longest-match: RPF lookups use the best administrative distance (AD) unless longest-match is enabled
First preference is the static mroute table (AD=1)
Second preference is the MBGP table (AD=200 internal, AD=20 external)
Third preference is the unicast table (longest match)
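The first-preference static mroute can be set with the ip mroute command; a sketch with assumed addresses:

```
! RPF multicast sources in 192.168.1.0/24 via next hop 10.1.1.1,
! overriding what the unicast table would choose
ip mroute 192.168.1.0 255.255.255.0 10.1.1.1
! Use longest match across the static-mroute/MBGP/unicast tables
ip multicast longest-match
```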
85
MBGP Example
Congruent Topologies
[Diagram: AS 123 and AS 321 with congruent unicast/multicast topologies; a routing update from the sender's AS carries unicast information NLRI: 192.192.25/24, AS_PATH: 321, Next-Hop: 192.168.100.2]
86
Incongruent means that the multicast and unicast traffic take different paths
87
NLRI Syntax:
router bgp 50
 network 172.16.0.0 mask 255.255.0.0 nlri unicast multicast
 neighbor 172.16.12.2 remote-as 2 nlri unicast multicast
Address Family Syntax:
router bgp 50
 no bgp default ipv4-unicast
 neighbor 172.16.12.2 remote-as 4
 address-family ipv4 unicast
  neighbor 172.16.12.2 activate
  network 172.16.0.0 mask 255.255.0.0
 address-family ipv4 multicast
  neighbor 172.16.12.2 activate
  network 172.16.0.0 mask 255.255.0.0
MSDP RFC 3618
MBGP connects the trees together, allowing a client to request a source in a different network from the remote RP (full unicast routing is required to accomplish this). MSDP connects the RPs together, allowing a client to request a source in a different network through its local RP.
Allows multiple PIM sparse-mode domains to share information about active sources
Announces active sources to MSDP peers
Interacts with MP-BGP for inter-domain operation
Used for the Anycast Rendezvous Point redundancy solution
An SA message contains:
IP address of the originator (RP address)
Number of (S,G) pairs being advertised
List of active (S,G)s in the domain
An encapsulated multicast packet
88
MSDP Example
[Diagram: the RPs of Domains A-E are MSDP peers; Source Active (SA) messages for (192.1.1.1, 224.2.2.2) are flooded among the RPs while a receiver sends a Join (*, 224.2.2.2) to its local RP] 89
In this example, the domains have already been neighbored with MBGP
MSDP Example
[Diagram: the receiver's RP sends an (S, 224.2.2.2) Join toward the source's domain, building the inter-domain source tree]
90
The SA messages essentially allows the receiver to tunnel to the source to get the stream
MSDP Example
[Diagram: multicast traffic flows across the domains along the newly built source tree to the receiver]
91
Once the path (source tree) has been built, the receiver starts to get the traffic as if on the same domain.
ip msdp default-peer {peer-address | peer-name} [prefix-list list]
Sample:
Router A:
interface Serial0/0
 ip address 192.168.100.1 255.255.255.252
 ip pim sparse-mode
ip msdp peer 192.168.100.2
ip msdp sa-request 192.168.100.2
Router B:
interface Serial0/0
 ip address 192.168.100.2 255.255.255.252
 ip pim sparse-mode
ip msdp peer 192.168.100.1
ip msdp sa-request 192.168.100.1
MSDP Best Current Practice IETF Draft: draft-ietf-mboned-msdp-deploy-06.txt
92
Anycast/MSDP RP Example
interface Loopback1
 ip address 10.10.10.1 255.255.255.255
 ip pim sparse-dense-mode
!
ip pim rp-address 10.10.10.1 10
ip pim accept-register list 120
!
ip msdp peer 10.10.2.13 connect-source Loopback0
ip msdp cache-sa-state
ip msdp mesh-group anycast 10.10.10.2
ip msdp originator-id Loopback0
!
access-list 10 permit 239.0.0.0 0.255.255.255
access-list 20 permit 239.1.1.1 0.0.0.255
access-list 120 permit ip host 10.10.2.13 any
access-list 120 permit ip host 10.10.2.14 any
93
RFC 3446 Anycast RP mechanism using PIM and MSDP
Access Router:
ip pim rp-address 10.10.10.1 20
ip pim spt-threshold infinity
!
access-list 20 permit 239.1.1.1 0.0.0.255
Other commands:
ip msdp sa-filter [in | out] <IP of peer> <ACL>
Well suited for Internet model where there are few sources and many receivers Global command ip pim ssm {default | [acl]}
draft-holbrook-ssm-00 draft-holbrook-ssm-arch-00
94
draft-holbrook-idmr-igmpv3-ssm-06.txt
draft-ietf-ssm-arch-04.txt
draft-ietf-mboned-ssm232-08.txt
SSM forwarding uses only source-based forwarding trees. The SSM range is defined for inter-domain use, and Cisco IOS Software allows other groups to be configured using the SSM forwarding model.
(S,G) trees only no shared trees
Out-of-band source discovery Web page, content server
URD uses a web server that redirects to TCP port 465; the router intercepts the TCP port 465 connection and joins the source explicitly
SSM mapping allows SSM routing to occur without IGMPv3 being present. SSM mapping uses statically configured tables or dynamic Domain Name System (DNS) discovery of the source address for an SSM channel
SSM helps prevent multicast DoS attacks
Bogus source traffic can't consume network bandwidth because it is not received by the host application
Captain Midnight sources bogus noise/data to a group, causing a DoS attack by overloading low-speed links
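A minimal SSM enablement sketch for the points above (the interface name is an assumption); receivers signal the desired source via IGMPv3:

```
! Enable SSM for the default 232.0.0.0/8 range
ip pim ssm default
!
interface FastEthernet0/0
 ip pim sparse-mode
 ip igmp version 3
```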
95
The Session Directory Tool (SDR) historically was used to announce session/group information on a well-known multicast group, but it had problems scaling.
Multicast Address Dynamic Client Allocation Protocol (MADCAP) RFC 2730 Similar to DHCP; a server and client API shipped in Win2K. Applications need to support the API to support MADCAP.
MASC will allow developers of IPMC content to build their content without prior knowledge of the IPMC address. It will be very similar to DNS, where a name will be associated with the IPMC address. Once the MASC negotiation has been completed, the IPMC address will be associated with that name and the content will be pointed to the name.
Address registration has a time limit associated with it (MASC servers periodically review the claims)
MASC will replace the functionality of the Session Director currently allocating IPMC addresses on the MBone. MASC is expected to be completed when BGMP is completed.
MASC RFC 2909
Hierarchical, dynamic address allocation scheme
Top of hierarchy is at an Internet exchange
Children request addresses from parents
Complex garbage-collection problem
Not in use currently
MSDP
Interconnects RP domains Requires PIM Sparse-Mode
PIM-SSM
Simpler, but requires IGMPv3, IGMPv3 lite, or URD
MASC
Hierarchical multicast addressing method
Difficult to implement (fragmentation problems)
A long way off!
96
IP Multicast
Troubleshooting
97
Bill Nickless & Caren Litvanyi, A Methodology for Troubleshooting Interdomain IP Multicast, NANOG 27, Phoenix, AZ.
RFC 2432: Terminology for IP Multicast Benchmarking
Multicast MIBs:
IGMP-STD-MIB RFC 2933; IGMP-MIB Cisco
IPMROUTE-STD-MIB RFC 2932; IPMROUTE-MIB Cisco
PIM-MIB RFC 2934; CISCO-PIM-MIB Cisco
MSDP-MIB draft-ietf-msdp-mib-07.txt
RTP-MIB RFC 2959
Cisco also documents some good traps for monitoring multicast networks: ftp://ftpeng.cisco.com/ipmulticast/config-notes/mib-info.txt
Multicast Troubleshooting
Troubleshoot IP Multicast in Sections
Source Segment Rendezvous Point (PIM-SM) Receiver - IGMP
Speaking of troubleshooting streaming-media systems such as VoIP: IP Multicast is also an area where problems can occur and good troubleshooting methods are required.
98
Multicast Troubleshooting
Set the Shortest Path Tree (SPT) threshold to infinity to prevent SPT switchover
Log into many routers and view the multicast routing tables and PIM-SM states
View the IGMP statistics at the source/receiver Ethernet switches
Make sure all routers agree upon the Rendezvous Point
Create some automated tools
Use a protocol analyzer to view the IGMP messages on the LAN segment
Make sure every interface has PIM enabled
Make sure every router agrees on the RP
Beau Williamson, Developing IP Multicast Networks Volume 1, Cisco Press, 1999.
Sparse Mode Good! Dense Mode Bad! (Source: The Caveman's Guide to Multicast, 2000, R. Davis)
ftp://ftpeng.cisco.com/ipmulticast/html/ipmulticast.html
William R. Parkhurst, Cisco Multicast Routing & Switching, McGraw-Hill, 1999.
Dave Kosiur, IP Multicasting, Wiley, 1998.
99
Multicast Troubleshooting
Start from the receiver and work toward the source
Check the receiver's LAN switch
show cam static
show cam dynamic
debug ip mpacket [detail | fastswitch] [access-list] [group]
debug ip pim [group | df [rp-address]]
100
Multicast Troubleshooting
Start from the PIM DR and work toward PIM RP
show ip pim rp
show ip pim rp mapping <mcast group IP>
show ip pim neighbor <rpf interface>
show ip pim interface
debug ip pim <GroupIP>
debug ip mpacket <GroupIP>
debug ip mrouting <GroupIP>
show ip mroute <Group ID> [count]
101
Multicast Troubleshooting
Test with a receiver on the same segment as the source to test only the application
Check that the source streaming format is compatible with the receiver software
Create low-speed streams
Enable IGMP snooping and enable mroute-caching to reduce CPU load on network elements
Watch out for redundant links
Check for dense mode fallback
102
Here are some more techniques for troubleshooting IP Multicast.
Avoid dense mode fallback by making sure every router has an RP for every group
Configure an RP of last resort / sink RP
Configure this as a static RP on every router
Be sure to use an ACL to avoid problems with Auto-RP
ip pim rp-address <RP-of-last-resort> 10
access-list 10 deny 224.0.1.39
access-list 10 deny 224.0.1.40
access-list 10 permit 224.0.0.0 15.255.255.255
Alternatively, use the global command ip pim autorp-listener to avoid dense mode fallback
Puts all interfaces into DM for the Auto-RP announce/discovery groups
Leaves interfaces in SM for all other groups
This command works to prevent DM flooding when using ip pim sparse-mode on interfaces
The new IOS command no ip pim dm-fallback totally prevents DM fallback
Makes RP=0.0.0.0 (nonexistent)
Makes all shared trees disappear
Mtrace shows the IP Multicast path from the source to a receiver branch of a multicast distribution tree
Similar to traceroute command mtrace [source IP] [destination IP] [group IP]
Mrinfo queries the local router for neighbor multicast peer routers
mrinfo [ hostname | address] [source-address | interface]
103
Note: Mtrace packets use special IGMP packets with IGMP type codes of 0x1E and 0x1F.
Mtrace shows the IP Multicast path from source to receiver. Similar to the unicast traceroute command
Traces the path between any two points in the network
TTL thresholds and delay are shown at each node
Troubleshooting usage:
Find where the multicast traffic flow stops focus on the router where the flow stops
Verify the path multicast traffic is following identify sub-optimal paths
Mstat shows the multicast path in pseudo-graphic format.
Traces the path between any two points in the network
Drops/duplicates are shown at each node
TTLs and delay are shown at each node
Troubleshooting usage:
Locate the congestion point in the flow focus on the router with a high drop/duplicate count
Duplicates are indicated as negative drops
The mrinfo command is the MBONE's original tool to determine which neighboring multicast routers are peering with a multicast router.
Run the test with mrm start test1
Show the results with show ip mrm [manager | status-report]
Stop the test with mrm stop test1
104
MRM commands:
  beacon [interval seconds] [holdtime seconds] [ttl hops] - change the frequency, duration, or scope of beacon messages that the Manager sends to Test Senders and Test Receivers
  clear ip mrm status-report [ip-address] - clear the status report cache buffer
  ip mrm {test-sender | test-receiver | test-sender-receiver} - configure an interface to operate as a Test Sender, a Test Receiver, or both
  ip mrm accept-manager {access-list-name | access-list-number} [test-sender | test-receiver] - configure a Test Sender or Test Receiver to accept requests only from Managers that pass an access list
  ip mrm manager test-name - identify a Multicast Routing Monitor (MRM) test and enter the mode in which you specify the test parameters
  manager type number group ip-address - specify that an interface is the Manager
  mrm test-name {start | stop} - start or stop a test
  receivers {access-list-name | access-list-number} [sender-list {access-list-name | access-list-number} [packet-delay]] [window seconds] [report-delay seconds] [loss percentage] [no-join] [monitor | poll] - establish Test Receivers, specify which Test Senders the Test Receivers will listen to, specify which sources the Test Receivers monitor, specify the packet delay, and change Test Receiver parameters
  senders {access-list-name | access-list-number} [packet-delay milliseconds] [rtp | udp] [target-only | all-multicasts | all-test-senders] [proxy_src] - configure Test Sender parameters
  show ip mrm interface [interface-unit] - display Test Sender or Test Receiver information
  show ip mrm manager [test-name] - display test information
  show ip mrm status-report [ip-address] - display MRM status reports of errors in the circular cache buffer
  udp-port [test-packet port-number] [status-report port-number] - change the User Datagram Protocol (UDP) port numbers to which a Test Sender sends test packets or a Test Receiver sends status reports
Cisco IP multicast network management
Multicast Netflow
NetFlow for IP multicast traffic is supported starting in Cisco IOS Release 12.3
Multicast NetFlow is supported in NetFlow v9 simply by enabling NetFlow accounting
Requires multicast fast switching or multicast distributed fast switching (MDFS); multicast CEF switching is not supported
Supports multicast ingress, egress, both, and RPF failures
NetFlow can show you the flows destined for a multicast group address
Router(config-if)# ip multicast netflow egress
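The NetFlow features above can be sketched as a configuration fragment. This is a minimal sketch under the assumption of an IOS 12.3+ image; the interface name is a placeholder and command placement may vary by release:

```
! Account for RPF-failed multicast packets (global)
ip multicast netflow rpf-failure
!
interface FastEthernet0/0
 ! Count multicast flows before and after replication on this interface
 ip multicast netflow ingress
 ip multicast netflow egress
!
! Inspect the resulting flow cache, including multicast entries
Router# show ip cache verbose flow
```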
105
www.cisco.com/go/netflow leads to:
  http://www.cisco.com/warp/public/732/Tech/nmp/netflow/index.shtml
  http://www.cisco.com/en/US/partner/products/sw/iosswrel/ps5187/products_feature_guide09186a00801b1beb.html
Commands:
  ip multicast netflow [ingress | egress]
  ip multicast netflow rpf-failure
  show ip cache verbose flow
  show ip cache flow aggregation
Abilene NetFlow page: http://www.itec.oar.net/abilene-netflow
The Fanout server supports multicast; most implementations of NetFlow do not support multicast.
IP Multicast
Class Summary
106
Class Summary
A typical Multicast network today would consist of:
IGMP version 2 on the clients
IGMP snooping on the switches
PIM Sparse-Mode on the routers
MBGP on the links between domains
MSDP between the RPs in different domains
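The router side of this typical deployment can be sketched as a minimal IOS configuration. Addresses, interface names, and AS numbers below are placeholders; IGMPv2 is the host default and IGMP snooping is configured on the switches, so neither appears here:

```
ip multicast-routing
!
interface FastEthernet0/0
 ip pim sparse-mode
!
! Static RP (could instead be learned via Auto-RP or BSR)
ip pim rp-address 10.0.0.1
!
! MBGP toward the neighboring domain, carrying multicast NLRI
router bgp 65001
 neighbor 192.0.2.1 remote-as 65002
 address-family ipv4 multicast
  neighbor 192.0.2.1 activate
!
! MSDP peering between RPs in different domains
ip msdp peer 192.0.2.1
```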
107
Class Summary
Multicast applications are a reality
Multicast is an end-to-end technology
Multicast is still under heavy development
Multicast security is a long way off
  Problems with group shared keys
  SSM may be one way to stem DoS attacks
Not a lot of robust management tools exist
QoS is required to maintain good transport for your multicast traffic streams
IGMPv3 and SSM will help make multicast more widely deployed
Future: IPv6 Multicast, Multi-VRF, MVPN
108
Multicast VPN (MPLS VPN)
  draft-rosen-vpn-mcast-07.txt
  Data MDTs and default MDTs
  Networkers 2004 RST-2702
IPv6 uses MLDv1 and MLDv2 (Multicast Listener Discovery) rather than IGMPv3
IPv6 has support for PIM-SM, SSM, and bidir PIM
IPv6 multicast addresses (RFC 2373, RFC 3513)
  ff00::/8 is just like 224.0.0.0/4
  ff02::1 is the link-local all-nodes multicast address, just like 224.0.0.1
  SSM for IPv6 is ff3X::/32, where X represents the scope bits
Configuration:
  ipv6 multicast-routing
  ipv6 pim rp-address <V6-addr>
Questions: mcast-v6-support@cisco.com
Scott@Hoggnet.com
Mobile: 303-949-4865
109
Q&A
References
Beau Williamson, Developing IP Multicast Networks, Volume 1, Cisco Press, 1999.
William R. Parkhurst, Cisco Multicast Routing & Switching, McGraw-Hill, 1999.
C. Kenneth Miller, Multicast Networking & Applications, Addison-Wesley, 1998.
Dave Kosiur, IP Multicasting, Wiley, 1998.
Cisco Systems, Interdomain Multicast Solutions Guide, Cisco Press, 2002.
Brian M. Edwards, Leonard A. Giuliano, and Brian R. Wright, Interdomain Multicast Routing: Practical Juniper Networks and Cisco Systems Solutions, Addison-Wesley, 2002.
110
The Beau Williamson book is the best resource to date on IP Multicast; most of the Cisco material is contained in Beau's book.
http://www.cisco.com/go/ipmulticast
ftp://ftpeng.cisco.com/ipmulticast/index.html
ftp://ftpeng.cisco.com/ipmulticast/html/ipmulticast.html
http://www.cisco.com/go/iptv
Questions: cs-ipmulticast@cisco.com
http://www.3com.com/technology/tech_net/white_papers/500637.html, The Multimedia World According to GMRP, 3Com.
IEEE Draft P802.1r/D1, GARP Proprietary Attribute Registration Protocol (GPRP), LAN/MAN Standards Committee of the IEEE Computer Society, 1999.
S. Armstrong, A. Freier, K. Marzullo, Multicast Transport Protocol, RFC 1301, 1992.
A. Ballardie, Core Based Trees (CBT version 2) Multicast Routing, RFC 2189, 1997.
IETF, http://www.ietf.org.
2950 IGMP Snooping and MVR: http://www.cisco.com/univercd/cc/td/doc/product/lan/cat2950/1219ea1/scg/swigmp.htm
3550 Layer 3 IP Multicast Services: http://www.cisco.com/univercd/cc/td/doc/product/lan/c3550/1219ea1/3550scg/swmcast.htm
4000 Layer 3 IP Multicast Services: http://www.cisco.com/univercd/cc/td/doc/product/lan/cat4000/12_1_11/config/mcastmls.htm#xtocid10
6000 Series Layer 2 Multicast Services: http://www.cisco.com/univercd/cc/td/doc/product/lan/cat6000/sw_7_2/confg_gd/multi.htm
6000 Series Layer 3 IP Multicast Switching: http://www.cisco.com/univercd/cc/td/doc/product/lan/cat6000/12111bex/swcg/mcastmls.htm
CCO Command Reference for IP Multicast: http://www.cisco.com/univercd/cc/td/doc/product/software/ios122/122cgcr/fiprmc_r/index.htm
IP Multicast Solutions: http://www.cisco.com/univercd/cc/td/doc/cisintwk/intsolns/mcst_sol/
Multicast Traffic Engineering: http://wwwin.cisco.com/cpress/cc/td/cpress/internl/ip_multi/di10ch16.htm
Cisco IP/TV Home: http://www.cisco.com/warp/public/cc/pd/mxsv/iptv3400/
Cisco IP/VC Home: http://www.cisco.com/warp/public/cc/pd/mxsv/ipvc3500/