Junos CSPF for multiple dynamic paths

A quick one today, but a few people have recently asked me how the Junos CSPF process manages to always ensure the primary and secondary paths are kept diverse *automagically*. RSVP path calculation with CSPF is an interesting topic, and if you look through the archives, you’ll find I’ve touched on this as I’ve discussed SRLGs, RSVP auto-bandwidth, and other such topics.

Let’s assume for the purposes of this post that we have an LSP that looks like this:

[edit protocols mpls]
user@router# show
label-switched-path test-LSP-to-PE5 {
    to 10.0.0.5;
    primary primary-path-to-5;
    secondary secondary-path-to-5;
}
path primary-path-to-5;
path secondary-path-to-5;

This LSP is pretty simple. There are no constraints implemented for either the primary or secondary path, and no constraints on the LSP as a whole.

Upon configuration of an LSP such as this, Junos will attempt to stand up the primary path. This will follow the best IGP metric subject to constraints (available bandwidth, admin-groups, SRLGs, etc.). Only once this has stood up will the calculation for the secondary begin.

As Junos calculates the secondary path, it will first take a copy of the topology of the RSVP network and inflate the IGP metric of every link that has been used by the primary path by a factor of 8 million. This ensures that if at all possible, a diverse path is used – while still allowing the use of a shared path between the primary and secondary if this is our only option.
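
To make the mechanism concrete, here is a toy sketch in Python. This is purely my illustration, not Juniper’s code – the topology, the helper names, and applying the 8,000,000 inflation as a simple multiply are all assumptions for the demo:

```python
import heapq

def cspf(links, src, dst):
    """Dijkstra over links = {(a, b): metric}, treated as bidirectional."""
    adj = {}
    for (a, b), m in links.items():
        adj.setdefault(a, []).append((b, m))
        adj.setdefault(b, []).append((a, m))
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, m in adj[node]:
            if d + m < dist.get(nbr, float("inf")):
                dist[nbr] = d + m
                prev[nbr] = node
                heapq.heappush(pq, (d + m, nbr))
    # Walk back from dst to recover the list of links used.
    path, node = [], dst
    while node != src:
        path.append((prev[node], node))
        node = prev[node]
    return list(reversed(path))

# Two parallel paths from PE1 to PE5; the top one has the better metric.
links = {
    ("PE1", "P1"): 10, ("P1", "PE5"): 10,
    ("PE1", "P2"): 20, ("P2", "PE5"): 20,
}
primary = cspf(links, "PE1", "PE5")

# Secondary run: copy the topology and inflate every link the primary
# used, so a diverse path wins unless no diverse path exists at all.
inflated = {l: m * 8_000_000 if (l in primary or (l[1], l[0]) in primary) else m
            for l, m in links.items()}
secondary = cspf(inflated, "PE1", "PE5")
print(primary)    # [('PE1', 'P1'), ('P1', 'PE5')]
print(secondary)  # [('PE1', 'P2'), ('P2', 'PE5')]
```

Note that because the primary-path links are inflated rather than removed, a shared link can still be chosen when no diverse path exists.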

When the primary path re-optimizes to a new path, this process is repeated, thus ensuring that we at all times try to keep the primary and secondary as diverse as possible.

It’s worth noting that if you turn off CSPF for the LSP, the above functionality will not be implemented, and both paths will simply follow the best IGP path.

Hope this helps!

The 10 most useless tips to pass your JNCIE

Recently I’ve had a bunch of people asking me what to do on exam day to pass their JNCIE, from whether they should have coffee in the morning or not to what sort of breakfast they should have. Other questions have included concerns about screen size on the laptops provided. These sorts of things are mentioned on a bunch of websites and study guides. I’m going to address my view on this once and for all in this post…

I had to abandon my holiday yesterday (Easter Friday) and spend it debugging a fault. It was a particularly hard and complex one, which had left others stumped. While I’m not able to go into a lot of detail, I can disclose that to identify the cause of the fault I had to spend 15 minutes or so tcpdumping RSVP messages and examining them to find the issue. From this I was able to deduce that a particular box was doing something that was particularly odd and we were able to proceed with resolving it.

The above paragraph might seem unrelated to the point of the post… but it really isn’t. The issue I debugged yesterday was far harder than the troubleshooting elements of any JNCIE exams I have taken. This isn’t to say that the JNCIE exams aren’t incredibly challenging – they are – and I have great respect for anyone who has done one of these exams – as it displays a significant level of skill.

The point is however that, like many others, I spend most of my days doing a combination of debugging the most ugly of problems you will ever see in a service provider environment and architecting a huge range of solutions comprising technologies from many vendors to deliver to customers. A couple of times a year, people in these positions are going to face a mind-bendingly odd problem which will require significant debugging skill and an intimate understanding of the technologies and protocols involved to resolve.

If you are doing this sort of work regularly, you are not going to struggle to achieve a JNCIE. You are still going to have to do a hell of a lot of reading, labbing and learning. The breadth of topics covered in a JNCIE will stretch most candidates to cover many technologies they have never touched, to a depth they have never had to understand anything before. I personally spent a considerable amount of time reading, re-reading, and re-re-re-reading many RFCs, books, and other resources to make sure I had a solid understanding of the standards defining each protocol. Significant time was spent labbing technologies that were new to me (for example – NG-MVPN, draft-rosen, Carrier of Carriers VPN, and Interprovider MPLS VPN options B, C & E).

But what I didn’t have to learn was how this stuff fundamentally works. I get this stuff already, as do most people who successfully attempt this exam. I didn’t have to learn how to troubleshoot to do the JNCIE – I’ve been troubleshooting complex and horrible problems all of my career. Most people sitting these shouldn’t need to learn this.

About 8 weeks ago I was sitting in a hotel room drinking with a few friends, when someone piped up with what I considered to be a long and whiny rant in which they blamed everyone else for their failures. Essentially this person was saying that the questions were not worded well enough, and this was why he failed his CCIE. I tend to disagree. Both the JNCIE and CCIE exams are extensively alpha & beta tested (I recently sat the beta version of the JNCIE-ENT). During this time much feedback is given around the readability and understandability of the questions. The questions will never be written to tell you what to do – that’s not the point of an expert level exam. But they are written in a way that ensures that if you have the understanding required, you will know what to do. Another person in the room pointed out that he’d lost half the time in the troubleshooting section of his CCIE because he hadn’t seen that there was a topology diagram, but still passed. Because he understood the content to an expert level, and had a huge amount of experience, he was able to get everything in that section done in the 50% of the time remaining.

What’s the point of all this? Why is this all relevant? Well it’s actually all pretty relevant – the point is that for any of these IE exams, they are designed so that an expert in the subject areas will pass. If you have an intimate understanding of how multicast works, you aren’t going to struggle to deploy and troubleshoot it.

Many websites and study guides have tips and tricks for these exams. These range from not having too much coffee (duh!) to eating a certain type of bread in the morning. Most of them are utter rubbish. Or more to the point – while they’re going to help you focus during the day, and might even make the difference between having time for that one extra question that gets you over the pass mark and not having time – this isn’t going to pass the exam for you. For the record – both times I’ve done a JNCIE the morning has begun with a massive breakfast from the Juniper cafe that makes me sleepier than normal! And both times I’ve walked out hours before the finish time with everything done.

While I don’t want to throw stones at anyone in this article, I think it’s time for a bit of a reality check for those who think that the type of bread they eat, range of pens in the exam, different colour highlighters, or the size of the laptop screen are going to make any difference at all. The best difference you can make is regular hard work in the many months leading up to the exam, and having great amounts of hands-on experience to boot.

Happy Easter everyone!

Two new certifications… and….. aspirations for another JNCIE!

Over the last month or so, I have been quietly working away at a couple more JNCIS certifications – JNCIS-QF (QFabric) and JNCIS-SEC (Security).

For those of you who have not done any Juniper certifications before – a JNCIS is roughly equivalent to a CCNP level certification. I’ve come off the back of fairly intensive study for my JNCIE-ENT, for which I was invited to sit the beta of some new test forms in February – so rather than studying 6+ hours per night, I slowed down to an hour or two per week. It’s actually been quite odd re-figuring out what to do with all this “free time” – so doing these certifications was a good way to keep my brain occupied and learning something new!

I’m pleased to say that I passed both, getting my JNCIS-QF on 14 March, and my JNCIS-SEC on 8 April. I thought both did a really good job of establishing that the candidate had a reasonable knowledge of the subject areas covered, and would feel confident setting up either a QFabric system or a Juniper SRX in a live deployment (which I guess is the point, right?).

I’ve also decided that I am in fact going to work towards a third JNCIE – the JNCIE-SEC. In many ways, this is going to be far more interesting than the other two, given the fact that I have far less experience in the area of Network Security than I have in Service Provider or Enterprise (as I have spent most of my career working for large service providers!). I’m really looking forward to learning a bunch of new and different technologies – something which is always very enjoyable!

I do however plan to take this one significantly slower than the other two. Essentially I did all the study for my last two JNCIEs in one year – and while I am glad I did it, as I wanted to prove to myself that I could; I would not do it again as I had no life at all while I was doing it. My plan is to slowly work towards this with the aim of doing the JNCIE-SEC exam sometime in the next year. Over the next couple of months I plan to sit the JNCIP-SEC and the JNCSP-SEC – though I will be doing plenty of labbing for the JNCIE as I study for these two written exams. From there I’ll make a week-by-week study plan of what I want to learn and work out a pace to approach it, and only book the exam when I’m sure that I am entirely ready.

As I do this, I’ll be blogging regularly on some of the new technologies and concepts I will be learning – and would appreciate any feedback/corrections; much of this stuff will be very new and different for me!

I also am hoping to hear back on my JNCIE-ENT result in the next few weeks – and will post this as soon as I get it!

Thanks!

CoS for RE sourced traffic

Many of you will have deployed CoS extensively on your networks. One area of a Junos CoS deployment that I am often asked about by friends is how to manipulate traffic that is sourced from the routing-engine. There are multiple catches and caveats in dealing with this traffic, and different ways to manipulate this.

At a 10,000 foot level, whenever we deploy CoS we generally want to be able to manipulate routing-engine sourced traffic such as ISIS, BGP, OSPF, BFD, RSVP, etc. to have various DSCP, EXP, and 802.1p markings. Firstly, we can set a policy as to the marking used for traffic sourced from the routing-engine;

set class-of-service host-outbound-traffic dscp-code-point 111000
set class-of-service host-outbound-traffic ieee-802.1 default 111
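
Note that both values here are binary bit strings, not decimals. A quick (purely illustrative) Python check of what they decode to:

```python
# host-outbound-traffic takes these values as binary bit patterns:
dscp = int("111000", 2)   # DSCP 56, i.e. the CS7 class selector
pcp  = int("111", 2)      # 802.1p priority 7, the highest value
print(dscp, pcp)          # 56 7
```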

You can also specify the forwarding-class that is used for processing traffic sourced from the routing-engine;

set class-of-service host-outbound-traffic forwarding-class hoffs-odd-class

It’s important to note that your rewrite rules will not take effect on this traffic (by default). Even if you have specified a forwarding-class, the “host-outbound-traffic” markings will be applied outbound for this routing-engine sourced traffic.

However, as of Junos 12.3, Juniper have implemented a new option for “host-outbound-traffic” on the MX, which causes the router to use the rewrite-rule on each unit to put markings onto traffic from the RE (based on the forwarding-class it is assigned). This is particularly helpful where you might have multiple fibre providers providing access to your customers, each with a different marking scheme that you are required to use. Note that this is only available for the 802.1p markings (not DSCP). This is done as follows;

set class-of-service host-outbound-traffic ieee-802.1 rewrite-rules

Of course a rewrite-rule must be configured on the outbound unit for this to have effect. So if we have a rewrite-rule to map “hoffs-odd-class” traffic to a marking of 010, the traffic will now be marked as 010 on egress.

This of course does not help us with DSCP markings (it only applies to 802.1p markings), which we will often want to manipulate. And how would we approach this problem if we wanted to assign different forwarding-classes to different types of traffic being sourced from the RE? A great example of this is that while we might want to ensure that BGP is prioritised, we probably don’t need prioritisation of http traffic sourced from the RE!

The solution for this is quite clever. Most of you will know that you can firewall off all traffic to the RE (regardless of the IP it is destined to – even if that IP is on a physical interface) by applying an inbound firewall filter to the loopback. The clever thing is that you can also apply a firewall filter to all traffic leaving the RE by applying an outbound firewall filter to the loopback. If we want to ensure that all http/https traffic is put into the best-effort forwarding-class, we could do the following;

set interfaces lo0 unit 0 family inet filter output RE-QOS
set firewall family inet filter RE-QOS term web from protocol tcp
set firewall family inet filter RE-QOS term web from port http
set firewall family inet filter RE-QOS term web from port https
set firewall family inet filter RE-QOS term web then dscp be
set firewall family inet filter RE-QOS term web then forwarding-class best-effort
set firewall family inet filter RE-QOS term web then accept
set firewall family inet filter RE-QOS term catchall then accept

This is a pretty handy tool and allows us to do fairly fine-grained manipulation of how each traffic type being sourced by the RE is treated. Obviously you could customise this in any way to suit your needs. However, it’s worth noting that to my understanding you cannot manipulate the 802.1p markings with a firewall filter – hence why the “rewrite-rules” option becomes so important for host-outbound-traffic.

If you thought that this was all there is to marking/classifying traffic sourced from the RE, you would be wrong! On an MX router, the processing of certain control traffic (such as BFD) is delegated to the individual line cards. I have learned the hard way that the markings on this traffic are not modified by any configuration you apply to normal RE-sourced traffic.

The news is not all bad though, as there is an easy workaround, and not many protocols are distributed to the line cards. For this traffic, you can apply an outbound firewall filter to the interface carrying it. As an example, here is how to ensure that BFD traffic which has been distributed to the line card is placed into the correct forwarding class and marked appropriately;

set interfaces ge-1/2/3 unit 0 family inet filter output cos-bfd-link
set firewall family inet filter cos-bfd-link term 1 from protocol udp
set firewall family inet filter cos-bfd-link term 1 from port 3784
set firewall family inet filter cos-bfd-link term 1 from port 3785
set firewall family inet filter cos-bfd-link term 1 then loss-priority low
set firewall family inet filter cos-bfd-link term 1 then forwarding-class network-control
set firewall family inet filter cos-bfd-link term 1 then dscp 111000
set firewall family inet filter cos-bfd-link term 2 then accept

Hope this helps!

Don’t forget the rest of your loopbacks!

With all that is going on on the internet currently around NTP reflection attacks and the like, it seemed timely to do a post on the logic of how router-protect filters are applied to loopbacks in Junos.

For those of you new to using Juniper gear, if you apply a firewall filter inbound on the loopback of a Juniper networks device, this will be applied to all traffic processed by the routing-engine. This includes traffic with a destination address of a physical interface (i.e. not the loopback). This provides a simple and convenient place to deploy firewall filters to protect the routing-engine on the Juniper device.

This generally looks something like this (where re-protect has the rules for what should talk to the RE);

set interfaces lo0 unit 0 family inet filter input re-protect

This includes VRF/Virtual Router interface traffic for VRFs/ Virtual routers that do not have their own loopback interfaces.

The catch that many people I have been helping over the last week have forgotten, however, is that this does not apply to traffic in VRFs or virtual-routers that have their own loopback. If the VRF or virtual-router has a loopback interface in it, you must apply the filter to this loopback as well for it to take effect. For example;

set interfaces lo0 unit 504 family inet filter input re-protect

The classic example where you may strike this is that you will generally require loopback interfaces in any VRF in which you wish to land BNG PPPoE subscribers on the MX routers.

However, a better way to implement firewall filtering to protect the routing engine is to implement it in an apply-group, so that all future loopback interfaces are protected without any further configuration being required. This could be done like so;

set groups re-protect interfaces lo0 unit <*> family inet filter input re-protect
set apply-groups re-protect

The only catch with deploying it like this is that if you ever do explicitly configure an input filter on a loopback unit directly (i.e. not through the apply-group to all), the group will cease to have any effect on this loopback (as it will see the group as having been overridden with local config).

Hope this all helps!

LSP mappings based on route/traffic attributes

A friend today asked me an interesting question (that is in fact a part of the JNCIE-SP syllabus) – “How can I ensure that certain traffic types take different paths in my MPLS network?”

This is applicable for many of us who run large backhaul networks with many paths through them – some higher latency, some higher capacity. In these cases it is important to be able to load balance traffic based on many different requirements – primarily the available capacity, but at times also the traffic type.

A good example of where these requirements might become important is found in New Zealand – we have a single cable system out of NZ, which connects in a ring between two points in each of NZ, Australia & the USA (see below diagram). The result is that between NZ and the USA there are two paths: one direct, and one that goes via AU (and is 30ms longer). Both links are unprotected in themselves, but if you have a path on each, you can assume that one of the two paths will be up at any time. You therefore don’t want to carry much more traffic than a single link could cope with, but at times you’ll oversubscribe a tad. If you do oversubscribe, it makes sense to ensure that latency-sensitive traffic is on the short path and best-effort traffic is on the long path.

The friend with whom I was discussing this wanted to ensure that UDP traffic destined to routes with a certain as-path transited the long path between the USA and NZ. In this blog post we will discuss how to achieve this on a Juniper MX router, and walk through the example step by step.

We can map traffic to LSPs based on a range of criteria including the CoS queue, destination prefix, destination as-path, and a few other useful properties (basically any information about the destination route that you can match with policy, or the QoS class the traffic is in).

The one catch is that the route-preference for all LSPs you are wanting to map traffic to must be both equal and the best possible route-preference to the destination. If this is not the case, traffic will simply be sent to the best preference LSP. Likewise – if one LSP is unable to stand up and is withdrawn from the routing table, traffic mapped to this LSP will just move to another LSP.

Below is a diagram showing an example network which illustrates the problem my friend had. I have illustrated the primary TE path of two LSPs – the RED LSP which takes a direct path from the USA to NZ, and the BLUE LSP which takes a longer path via AU to NZ. The goal will be to ensure that all UDP traffic from our US transit provider that is destined to as900 and its customers will take the long path, while all other traffic takes the short path. Criteria for sending via each link is also illustrated on the diagram.

Diagram of example network

Okay – got the requirements – tell me how we do this!!!!!

We’ll configure this in a few steps, breaking down what we are doing and why as we go.

1/ Classify UDP traffic so that we can match based on this later
The first thing we identify is that one of the requirements dictates that we peek inside the header of every packet to classify it as UDP or non-UDP. In order to do this we will need to use a firewall filter on ingress to our router. We will use this to classify the traffic into a CoS class (which we can then use to map to a different LSP when we match the destination we want).

The first thing we must do in this step is create our new CoS class. Let’s call this class the “udp-class” class, while we leave the “best-effort” and “network-control” classes we had already configured in place;

user@router# show | compare

[edit class-of-service forwarding-classes]
     queue 3 { ... }
+    queue 1 udp-class;

Now that we have this, we must build a firewall filter to match UDP traffic vs other traffic;

user@router# show | compare
[edit]
+  firewall {
+      family inet {
+          filter transit-in {
+              term udp {
+                  from {
+                      protocol udp;
+                  }
+                  then {
+                      forwarding-class udp-class;
+                      accept;
+                  }
+              }
+              term other-traffic {
+                  then accept;
+              }
+          }
+      }
+  }

Finally, we need to apply this filter to inbound traffic on the interface connecting to our US transit provider;

user@router# show | compare
[edit interfaces ge-0/0/3 unit 0 family inet]
+       filter {
+           input transit-in;

At this point, we have all UDP traffic from our transit provider mapped to our “udp-class” CoS queue, and we are now ready to create a policy to make forwarding decisions based on this.

2/ Create a policy to make next-hop decisions based on CoS queue

In this step, we will create a CBF (CoS Based Forwarding) policy, which will (when called upon by route policy) install a next-hop LSP based on the specified forwarding class.

This is done as follows;

user@router# show | compare
[edit class-of-service]
+   forwarding-policy {
+       next-hop-map NZ-Traffic {
+           forwarding-class udp-class {
+               lsp-next-hop BLUE;
+           }
+           forwarding-class best-effort {
+               lsp-next-hop RED;
+           }
+           forwarding-class network-control {
+               lsp-next-hop RED;
+           }
+       }
+   }

It is worth re-noting that the LSPs must have equal route preference (see more detail above) – I’ve seen lots of people miss this and wonder why their CBF policy is not working.

Additionally, the astute reader will note that I have not actually created a policy for the assured-forwarding queue, which is created by default on the MX as queue 2. In this case we will assume that no traffic is passing in this queue; however, if any traffic is passed in a queue that is not defined in a CBF policy, it is mapped in the same manner as queue 0 (in this case best-effort). If queue 0 is not defined, one of the defined queues is selected at random for use by non-defined queues.
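
The fallback behaviour described above can be sketched in a few lines of Python. This is my reading of the behaviour, not Juniper code; the class and LSP names are taken from this lab:

```python
# Sketch of CBF next-hop selection for the next-hop-map above.
# Classes not listed in the map fall back to queue 0's mapping
# (best-effort here); if queue 0 were also missing, Junos would
# pick one of the defined LSPs arbitrarily.
next_hop_map = {
    "udp-class": "BLUE",        # queue 1
    "best-effort": "RED",       # queue 0
    "network-control": "RED",   # queue 3
}

def lsp_for(forwarding_class, queue0_class="best-effort"):
    return next_hop_map.get(forwarding_class, next_hop_map[queue0_class])

print(lsp_for("udp-class"))           # BLUE
print(lsp_for("assured-forwarding"))  # RED (undefined queue 2 falls back)
```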

At this point we have our CBF policy all sorted and are ready to proceed to the next step.

3/ Find the destinations we want this policy applied to

We must now find the destinations we want this policy applied to. In our case, this is to be all prefixes destined to as900 and its customers. This is best described in a regular expression as “900+ .*” (one or more iterations of as900 followed by any number of other AS numbers).
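
As an aside, the Junos AS-path regex dialect treats each AS number as a single token, so it is not a character-level regex. A rough Python translation of “900+ .*” over the space-separated AS-path string (an illustration only – the Junos matcher is not PCRE) looks like this:

```python
import re

# "900+ .*" in Junos AS-path terms: one or more occurrences of AS 900,
# followed by any (possibly empty) sequence of other AS numbers.
pattern = re.compile(r"^900( 900)*( \d+)*$")

assert pattern.match("900 800 700")       # as900 then its customers
assert pattern.match("900 900 650")       # prepended 900 still matches
assert not pattern.match("800 900 700")   # path must begin with 900
print("all AS-path checks pass")
```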

We can verify that this will work with the following command (note that I only have the two prefixes shown in the diagram set up behind the NZ router in this lab);

user@router# run show route aspath-regex "900+ .*"

inet.0: 16 destinations, 16 routes (16 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

200.0.0.0/24       *[BGP/170] 00:16:23, localpref 100, from 10.0.0.2
                      AS path: 900 800 700 I
                      to 10.1.0.3 via ge-0/0/1.0, label-switched-path BLUE
                    > to 10.1.1.2 via ge-0/0/2.0, label-switched-path RED

inet.3: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)

mpls.0: 4 destinations, 4 routes (4 active, 0 holddown, 0 hidden)

We now configure this on our USA router;

user@router# show | compare
[edit policy-options]
+   as-path as900_and_customers "900+ .*";

Cool! We now have a way to match both parts of the criteria on which we wish to match traffic. All that is left to do is to put it all together.

4/ Putting it all together and writing some route policy

In previous steps we have created mechanisms to match the traffic type and destinations for which we want to send traffic via our “long path”. Now we need to create some policy based on this to make it all work!

We want this policy to match traffic destined to prefixes for as900 and customers (defined above) that already has a next hop of the NZ router. For all traffic that is matched, we then define the next-hop based on our CBF policy defined above (so that only UDP traffic is sent via the long path – the BLUE LSP).

For all other traffic with a next hop of the NZ router, we want to map it to the short path (RED LSP).

The following policy will do the trick nicely;

user@router# show | compare
[edit policy-options]
+   policy-statement map-nz-lsps {
+       term as900_and_customers {
+           from {
+               neighbor 10.0.0.2;
+               as-path as900_and_customers;
+           }
+           then {
+               cos-next-hop-map NZ-Traffic;
+               accept;
+           }
+       }
+       term other-nz {
+           from neighbor 10.0.0.2;
+           then {
+               install-nexthop lsp RED;
+               accept;
+           }
+       }
+   }

It’s worth noting again that the LSPs must have an equal route preference (and there must be no better-preference route) for install-nexthop to work (as with CBF policy) – see more detail above.

Finally we need to apply this policy to all routes being exported from the route-engine to the forwarding-engine. This requires one further line of configuration, and is done as follows;

user@router# show | compare
[edit routing-options]
+   forwarding-table {
+       export map-nz-lsps;
+   }

We can now take a peek at a prefix which matches the criteria of each of the two terms, starting with 200.0.0.0/24;

user@router> show route forwarding-table matching 200/24
Routing table: default.inet
Internet:
Destination        Type RtRef Next hop           Type Index NhRef Netif
200.0.0.0/24       user     0                    indr 262143     2
                                                 idxd   551     2
                   idx:1      10.1.0.3          Push 299920   572     2 ge-0/0/1.0
                   idx:3      10.1.1.2           ucst   573     4 ge-0/0/2.0
                   idx:xx     10.1.1.2           ucst   573     4 ge-0/0/2.0

We can see from the above output that it has created a per-queue mapping for queues 1 and 3, and a default mapping (matching queue 0’s configuration). So all is working as expected.

And now for 100.0.0.0/24;

user@router# run show route forwarding-table matching 100/24
Routing table: default.inet
Internet:
Destination        Type RtRef Next hop           Type Index NhRef Netif
100.0.0.0/24       user     0                    indr 262142     2
                              10.1.1.2           ucst   573     3 ge-0/0/2.0

We can see from the output that other traffic is being mapped to the RED LSP (i.e. the short path) – exactly what we wanted.

5/ Testing

We now want to verify this by generating some traffic from the “Transit provider in the USA” – which in this lab is represented with a CentOS box. We need to test three scenarios;

A/ Traffic destined for 100/24
In this test, I will generate some ICMP echo requests from the CentOS box representing the transit provider to 100/24. If our lab is working correctly, I would expect to see this take the RED LSP (the short path).

Let’s clear the LSP stats, run the ICMP echo requests, then re-examine the LSP stats;

user@router# run clear mpls lsp statistics
[user@centos ~]# ping 100.0.0.1 -i 0.01
PING 100.0.0.1 (100.0.0.1) 56(84) bytes of data.

--- 100.0.0.1 ping statistics ---
5241 packets transmitted, 0 received, 100% packet loss, time 52754ms
user@router# run show mpls lsp statistics ingress
Ingress LSP: 2 sessions
To              From            State     Packets            Bytes LSPname
10.0.0.2        10.0.0.1        Up              0                0 BLUE
10.0.0.2        10.0.0.1        Up           4890           410760 RED
Total 2 displayed, Up 2, Down 0

Great! As expected, traffic is being sent via the RED (short path) LSP.

B/ non-UDP traffic destined for 200/24

In this test, I will generate some ICMP echo requests from the CentOS box representing the transit provider to 200/24. If our lab is working correctly, I would expect to see this take the RED LSP (the short path).

Let’s clear the LSP stats, run the ICMP echo requests, then re-examine the LSP stats;

user@router# run clear mpls lsp statistics
[user@centos ~]# ping 200.0.0.1 -i 0.01
PING 200.0.0.1 (200.0.0.1) 56(84) bytes of data.

--- 200.0.0.1 ping statistics ---
1447 packets transmitted, 0 received, 100% packet loss, time 14581ms
user@router# run show mpls lsp statistics ingress
Ingress LSP: 2 sessions
To              From            State     Packets            Bytes LSPname
10.0.0.2        10.0.0.1        Up              0                0 BLUE
10.0.0.2        10.0.0.1        Up           1447           121548 RED
Total 2 displayed, Up 2, Down 0

Again traffic is transiting the RED (short path) LSP as expected.

C/ UDP traffic destined for 200/24

In this final test, I will generate some UDP iperf traffic from the CentOS box representing the transit provider to 200/24. If our lab is working correctly, I would expect to see this take the BLUE LSP (the long path).

Let’s clear the LSP stats, run the iperf, then re-examine the LSP stats;

user@router# run clear mpls lsp statistics
[user@centos ~]# iperf -c 200.0.0.1 -u -b 10m -t 30
------------------------------------------------------------
Client connecting to 200.0.0.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  107 KByte (default)
------------------------------------------------------------
[  3] local 10.1.99.99 port 46896 connected with 200.0.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-30.0 sec  35.8 MBytes  10.0 Mbits/sec
[  3] Sent 25511 datagrams
[  3] WARNING: did not receive ack of last datagram after 10 tries.
user@router# run show mpls lsp statistics ingress
Ingress LSP: 2 sessions
To              From            State     Packets            Bytes LSPname
10.0.0.2        10.0.0.1        Up          25521         38332542 BLUE
10.0.0.2        10.0.0.1        Up              0                0 RED
Total 2 displayed, Up 2, Down 0

This is taking the long path. All working as expected!

In this article I have attempted to describe how to select LSPs based on traffic and destination route properties. While I’ve used the criterion of UDP traffic aimed at a certain destination, you could of course implement this based on any combination of CoS queue, destination route attributes & traffic properties. This should be a really useful tool to have up your sleeve to meet TE requirements on your network – and something worth knowing for the JNCIE-SP exam.

One thing to note: while in this article I quickly whipped up an extra CoS queue without any further configuration, beware of doing this in real life – you should define buffer and transmit rates for schedulers on all interfaces for each queue. I will aim to do another blog post soon digging into this deeper – but for now just a warning (and have a look at the O’Reilly MX book if you want more detail on MX CoS)!

Thanks to Barry Murphy for coming up with this interesting scenario for me to write a post about. Hope this helps!