Don’t forget the rest of your loopbacks!

With all that is going on on the internet currently around NTP reflection attacks and the like, it seemed timely to do a post on the logic of how router-protect filters are applied to loopbacks in JUNOS.

For those of you new to using Juniper gear: if you apply a firewall filter inbound on the loopback of a Juniper Networks device, it will be applied to all traffic processed by the routing-engine. This includes traffic with a destination address of a physical interface (i.e. not the loopback). This provides a simple and convenient place to deploy firewall filters to protect the routing-engine on the Juniper device.

This generally looks something like this (where re-protect has the rules for what should talk to the RE);

set interfaces lo0 unit 0 family inet filter input re-protect
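
Purely as an illustration (the terms, ports and addresses below are placeholders, and a real re-protect filter should list exactly what is allowed to talk to your REs and then discard the rest), such a filter might look something along these lines;

set firewall family inet filter re-protect term allow-bgp from protocol tcp
set firewall family inet filter re-protect term allow-bgp from port bgp
set firewall family inet filter re-protect term allow-bgp then accept
set firewall family inet filter re-protect term allow-ntp from source-address 192.0.2.1/32
set firewall family inet filter re-protect term allow-ntp from protocol udp
set firewall family inet filter re-protect term allow-ntp from port ntp
set firewall family inet filter re-protect term allow-ntp then accept
set firewall family inet filter re-protect term allow-ssh from protocol tcp
set firewall family inet filter re-protect term allow-ssh from destination-port ssh
set firewall family inet filter re-protect term allow-ssh then accept
set firewall family inet filter re-protect term allow-icmp from protocol icmp
set firewall family inet filter re-protect term allow-icmp then accept
set firewall family inet filter re-protect term deny-rest then discard

(192.0.2.1 is just a documentation-prefix stand-in for a known NTP server here – locking NTP down to known sources is exactly the sort of thing that matters given the reflection attacks mentioned above.)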

The traffic filtered in this way includes traffic arriving on VRF/virtual-router interfaces, for VRFs/virtual-routers that do not have their own loopback interfaces.

The catch that many people I have been helping over the last week had forgotten, however, is that this does not apply to traffic in VRFs or virtual-routers that have their own loopback. If the VRF or virtual-router has a loopback interface in it, you must apply the filter to that loopback as well for it to take effect. For example;

set interfaces lo0 unit 504 family inet filter input re-protect
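
For context, that unit would normally be the loopback sitting inside the routing-instance itself, something like the below (the instance name and type here are purely illustrative);

set routing-instances CUST-A instance-type virtual-router
set routing-instances CUST-A interface lo0.504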

The classic place you may strike this is on MX BNGs, where you will generally require a loopback interface in any VRF in which you wish to land PPPoE subscribers.

However, a better way to implement firewall filtering to protect the routing engine is to do it in an apply-group, so that all future loopback units are protected without any extra configuration being required. This could be done like so;

set groups re-protect interfaces lo0 unit <*> family inet filter input re-protect
set apply-groups re-protect

The only catch with deploying it like this is that if you ever explicitly configure an input filter on a loopback unit directly (i.e. not through the apply-group), the group will cease to have any effect on that loopback, as the local configuration overrides the group.
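
You can confirm that a given unit is actually inheriting the filter from the group using the display inheritance pipe, for example;

show configuration interfaces lo0 | display inheritance

Configuration that comes from the group is flagged as inherited in the output, so a unit where the group has been overridden stands out quickly.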

Hope this all helps!

Regular expressions in JUNOS show commands

Most of you will be aware that you can use regular expressions to transform/filter the output of any command in JUNOS with something like show blah | match “ge-*/0/[0-9]”. The one catch with this method is that it kills the headers for each column of output.

There’s actually another way to achieve this, which is to write the regexp directly into the show command. You can do this with most show commands in JUNOS (the example below is a show interfaces, but I’ve also tried this on ISIS, RSVP and OSPF output). The advantage of this approach is that it preserves any information that is not related to a specific interface, making the output clearer to read.

To do this, just write any interface-specific command and put the regexp at the end. For example, show rsvp interfaces ae* will show you all the RSVP AE interfaces. Or the output below will show you only the gigabit interfaces that match ge-*/0/* (i.e. across two line cards, but only one PIC’s worth of interfaces on each);

user@router> show interfaces terse ge-*/0/*

Interface               Admin Link Proto    Local                 Remote
ge-0/0/0                up    down
ge-0/0/1                up    down
ge-0/0/2                up    down
ge-0/0/3                up    down
ge-0/0/4                up    down
ge-0/0/5                up    down
ge-0/0/6                up    down
ge-0/0/7                up    down
ge-0/0/8                up    down
ge-0/0/9                up    down
ge-2/0/0                up    up
ge-2/0/1                up    up
ge-2/0/2                up    up
ge-2/0/3                up    up
ge-2/0/4                up    up
ge-2/0/5                up    down
ge-2/0/6                up    up
ge-2/0/7                up    up
ge-2/0/8                up    down
ge-2/0/9                up    up
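
For contrast, a rough pipe-match equivalent of the above would be something like the following, which returns the same interface lines but loses the header row;

user@router> show interfaces terse | match "ge-.*/0/"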

I’ve explained this to a few people this week, so figured it would be a good thing to post here in case anyone hasn’t already picked this up.

Hope this helps!

LSP mappings based on route/traffic attributes

A friend today asked me an interesting question (that is in fact a part of the JNCIE-SP syllabus) – “How can I ensure that certain traffic types take different paths in my MPLS network?”

This is applicable for many of us who run large backhaul networks with many paths in them – some higher latency, some higher capacity. In these cases it is important to be able to balance traffic based on several different requirements: primarily the available capacity, but at times also the traffic type.

A good example of where these requirements might become important is found in New Zealand – we have a single cable system out of NZ, which connects in a ring between two points in each of NZ, Australia & the USA (see the diagram below). The result is that between NZ and the USA there are two paths: one direct, and one that goes via AU (and is 30ms longer). Both links are unprotected in themselves, but if you have a path on each, you can reasonably assume that at least one of the two will be up at any given time. You therefore don’t want to carry much more traffic than a single link could cope with, but at times you’ll oversubscribe a tad. If you are going to oversubscribe, it makes sense to ensure that latency-sensitive traffic is on the short path and best-effort traffic is on the long path.

The friend I was discussing this with wanted to ensure that UDP traffic destined to routes with a certain as-path transited the long path between the USA and NZ. In this blog post we will discuss how to achieve this on a Juniper MX router, and walk through the example step by step.

We can map traffic to LSPs based on a range of criteria including the CoS queue, destination prefix, destination prefix as-path, and a few other useful properties (basically any attribute of the destination route that you can match with policy, or the QoS class the traffic is in).

The one catch is that the route preference of all the LSPs you want to map traffic to must be both equal and the best possible route preference to the destination. If this is not the case, traffic will simply be sent to the best-preference LSP. Likewise, if one LSP is unable to stand up and is withdrawn from the routing table, traffic mapped to that LSP will simply move to another LSP.

Below is a diagram showing an example network which illustrates the problem my friend had. I have illustrated the primary TE path of two LSPs – the RED LSP, which takes a direct path from the USA to NZ, and the BLUE LSP, which takes a longer path via AU to NZ. The goal will be to ensure that all UDP traffic from our US transit provider destined to as900 and its customers takes the long path, while all other traffic takes the short path. The criteria for sending via each link are also shown on the diagram.

Diagram of example network

Okay – got the requirements – tell me how we do this!!!!!

We’ll configure this in a few steps, breaking down what we are doing and why as we go.

1/ Classify UDP traffic so that we can match based on this later
The first thing we identify is that the requirements dictate that we look inside the header of every packet to classify it as UDP or non-UDP. To do this we will need a firewall filter on ingress to our router, which we will use to classify the traffic into a CoS class (which we can then use to map to a different LSP once we match the destination we want).

The first thing we must do in this step is create our new CoS class. Let’s call it “udp-class”, and leave the “best-effort” and “network-control” classes we had already configured in place;

user@router# show | compare

[edit class-of-service forwarding-classes]
     queue 3 { ... }
+    queue 1 udp-class;

Now that we have this, we must build a firewall filter to match UDP traffic vs other traffic;

user@router# show | compare
[edit]
+  firewall {
+      family inet {
+          filter transit-in {
+              term udp {
+                  from {
+                      protocol udp;
+                  }
+                  then {
+                      forwarding-class udp-class;
+                      accept;
+                  }
+              }
+              term other-traffic {
+                  then accept;
+              }
+          }
+      }
+  }

Finally, we need to apply this filter to inbound traffic on the interface connecting to our US transit provider;

user@router# show | compare
[edit interfaces ge-0/0/3 unit 0 family inet]
+       filter {
+           input transit-in;
+       }

At this point, we have all UDP traffic from our transit provider mapped to our “udp-class” CoS queue, and we are now ready to create a policy to make forwarding decisions based on this.

2/ Create a policy to make next-hop decisions based on CoS queue

In this step, we will create a CBF (CoS Based Forwarding) policy, which will (when called upon by route policy) install a next-hop LSP based on the specified forwarding class.

This is done as follows;

user@router# show | compare
[edit class-of-service]
+   forwarding-policy {
+       next-hop-map NZ-Traffic {
+           forwarding-class udp-class {
+               lsp-next-hop BLUE;
+           }
+           forwarding-class best-effort {
+               lsp-next-hop RED;
+           }
+           forwarding-class network-control {
+               lsp-next-hop RED;
+           }
+       }
+   }

It is worth re-noting that the LSPs must be of equal route preference (see the detail above) – I’ve seen lots of people miss this and wonder why their CBF policy is not working.
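
A quick way to sanity-check this is to look at the LSP routes in inet.3 on the ingress router (10.0.0.2 being the LSP egress address in this lab) and confirm that both LSPs show up as next hops against the same single active route;

user@router> show route table inet.3 10.0.0.2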

Additionally, the astute reader will note that I have not actually created a mapping for the assured-forwarding class, which exists by default on the MX as queue 2. In this case we will assume that no traffic is passing in this queue; however, if any traffic is passed in a queue that is not defined in a CBF policy, it is mapped in the same manner as queue 0 (in this case best-effort). If queue 0 is not defined, one of the defined queues is selected at random and used for the non-defined queues.
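
If you would rather not rely on that default behaviour, you can simply map the remaining class explicitly as well, for example;

set class-of-service forwarding-policy next-hop-map NZ-Traffic forwarding-class assured-forwarding lsp-next-hop RED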

At this point we have our CBF policy all sorted and are ready to proceed to the next step.

3/ Find the destinations we want this policy applied to

We must now find the destinations we want this policy applied to. In our case, this is all prefixes destined to as900 and its customers. This is best described with the as-path regular expression “900+ .*” (one or more occurrences of as900, followed by any number of other AS numbers).

We can verify that this will work with the following command (note that I only have the two prefixes shown in the diagram set up behind the NZ router in this lab);

user@router# run show route aspath-regex "900+ .*"

inet.0: 16 destinations, 16 routes (16 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

200.0.0.0/24       *[BGP/170] 00:16:23, localpref 100, from 10.0.0.2
                      AS path: 900 800 700 I
                      to 10.1.0.3 via ge-0/0/1.0, label-switched-path BLUE
                    > to 10.1.1.2 via ge-0/0/2.0, label-switched-path RED

inet.3: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)

mpls.0: 4 destinations, 4 routes (4 active, 0 holddown, 0 hidden)

We now configure this on our USA router;

user@router# show | compare
[edit policy-options]
+   as-path as900_and_customers "900+ .*";

Cool! We now have a way to match both parts of the criteria we are interested in. All that is left to do is put it all together.

4/ Putting it all together and writing some route policy

In previous steps we have created mechanisms to match the traffic type and destinations for which we want to send traffic via our “long path”. Now we need to create some policy based on this to make it all work!

We want this policy to match prefixes destined to as900 and its customers (defined above) that already have a next hop of the NZ router. For all matched routes we then set the next hop based on our CBF policy defined above, so that only UDP traffic is sent via the long path (the BLUE LSP).

For all other traffic with a next hop of the NZ router, we want to map it to the short path (RED LSP).

The following policy will do the trick nicely;

user@router# show | compare
[edit policy-options]
+   policy-statement map-nz-lsps {
+       term as900_and_customers {
+           from {
+               neighbor 10.0.0.2;
+               as-path as900_and_customers;
+           }
+           then {
+               cos-next-hop-map NZ-Traffic;
+               accept;
+           }
+       }
+       term other-nz {
+           from neighbor 10.0.0.2;
+           then {
+               install-nexthop lsp RED;
+               accept;
+           }
+       }
+   }

It’s worth noting again that the LSPs must have an equal route preference (and there must be no better-preference route) for install-nexthop to work, just as with CBF policy – see the detail above.

Finally, we need to apply this policy to all routes being exported from the routing engine to the forwarding engine. This requires one further line of configuration, and is done as follows;

user@router# show | compare
[edit routing-options]
+   forwarding-table {
+       export map-nz-lsps;
+   }
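
That completes the configuration. For reference, here is everything we have added across steps 1 to 4 again, this time in set form (interface names and neighbor addresses as used in this lab);

set class-of-service forwarding-classes queue 1 udp-class
set firewall family inet filter transit-in term udp from protocol udp
set firewall family inet filter transit-in term udp then forwarding-class udp-class
set firewall family inet filter transit-in term udp then accept
set firewall family inet filter transit-in term other-traffic then accept
set interfaces ge-0/0/3 unit 0 family inet filter input transit-in
set class-of-service forwarding-policy next-hop-map NZ-Traffic forwarding-class udp-class lsp-next-hop BLUE
set class-of-service forwarding-policy next-hop-map NZ-Traffic forwarding-class best-effort lsp-next-hop RED
set class-of-service forwarding-policy next-hop-map NZ-Traffic forwarding-class network-control lsp-next-hop RED
set policy-options as-path as900_and_customers "900+ .*"
set policy-options policy-statement map-nz-lsps term as900_and_customers from neighbor 10.0.0.2
set policy-options policy-statement map-nz-lsps term as900_and_customers from as-path as900_and_customers
set policy-options policy-statement map-nz-lsps term as900_and_customers then cos-next-hop-map NZ-Traffic
set policy-options policy-statement map-nz-lsps term as900_and_customers then accept
set policy-options policy-statement map-nz-lsps term other-nz from neighbor 10.0.0.2
set policy-options policy-statement map-nz-lsps term other-nz then install-nexthop lsp RED
set policy-options policy-statement map-nz-lsps term other-nz then accept
set routing-options forwarding-table export map-nz-lsps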

We can now take a peek at a prefix matching each of the two terms, starting with 200.0.0.0/24;

user@router> show route forwarding-table matching 200/24
Routing table: default.inet
Internet:
Destination        Type RtRef Next hop           Type Index NhRef Netif
200.0.0.0/24       user     0                    indr 262143     2
                                                 idxd   551     2
                   idx:1      10.1.0.3          Push 299920   572     2 ge-0/0/1.0
                   idx:3      10.1.1.2           ucst   573     4 ge-0/0/2.0
                   idx:xx     10.1.1.2           ucst   573     4 ge-0/0/2.0

We can see from the above output that it has created per-queue mappings for queues 1 and 3, plus a default mapping (matching queue 0’s configuration) – so all working as expected.

And now for 100.0.0.0/24;

user@router# run show route forwarding-table matching 100/24
Routing table: default.inet
Internet:
Destination        Type RtRef Next hop           Type Index NhRef Netif
100.0.0.0/24       user     0                    indr 262142     2
                              10.1.1.2           ucst   573     3 ge-0/0/2.0

We can see from the output that other traffic is being mapped to the RED LSP (i.e. the short path) – exactly what we wanted.

5/ Testing

We now want to verify this by generating some traffic from the “Transit provider in the USA” – which in this lab is represented with a CentOS box. We need to test three scenarios;

A/ Traffic destined for 100/24
In this test, I will generate some ICMP echo requests from the CentOS box representing the transit provider to 100/24. If our lab is working correctly, I would expect to see this take the RED LSP (the short path).

Let’s clear the LSP stats, run the ICMP echo requests, then re-examine the LSP stats;

user@router# run clear mpls lsp statistics
[root@centos ~]# ping 100.0.0.1 -i 0.01
PING 100.0.0.1 (100.0.0.1) 56(84) bytes of data.

--- 100.0.0.1 ping statistics ---
5241 packets transmitted, 0 received, 100% packet loss, time 52754ms
user@router# run show mpls lsp statistics ingress
Ingress LSP: 2 sessions
To              From            State     Packets            Bytes LSPname
10.0.0.2        10.0.0.1        Up              0                0 BLUE
10.0.0.2        10.0.0.1        Up           4890           410760 RED
Total 2 displayed, Up 2, Down 0

Great! As expected, traffic is being sent via the RED (short path) LSP.

B/ non-UDP traffic destined for 200/24

In this test, I will generate some ICMP echo requests from the CentOS box representing the transit provider to 200/24. If our lab is working correctly, I would expect to see this take the RED LSP (the short path).

Let’s clear the LSP stats, run the ICMP echo requests, then re-examine the LSP stats;

user@router# run clear mpls lsp statistics
[root@centos ~]# ping 200.0.0.1 -i 0.01
PING 200.0.0.1 (200.0.0.1) 56(84) bytes of data.

--- 200.0.0.1 ping statistics ---
1447 packets transmitted, 0 received, 100% packet loss, time 14581ms
user@router# run show mpls lsp statistics ingress
Ingress LSP: 2 sessions
To              From            State     Packets            Bytes LSPname
10.0.0.2        10.0.0.1        Up              0                0 BLUE
10.0.0.2        10.0.0.1        Up           1447           121548 RED
Total 2 displayed, Up 2, Down 0

Again traffic is transiting the RED (short path) LSP as expected.

C/ UDP traffic destined for 200/24

In this final test, I will generate some UDP iperf traffic from the CentOS box representing the transit provider to 200/24. If our lab is working correctly, I would expect to see this take the BLUE LSP (the long path).

Let’s clear the LSP stats, run the iperf, then re-examine the LSP stats;

user@router# run clear mpls lsp statistics
[root@centos ~]# iperf -c 200.0.0.1 -u -b 10m -t 30
------------------------------------------------------------
Client connecting to 200.0.0.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  107 KByte (default)
------------------------------------------------------------
[  3] local 10.1.99.99 port 46896 connected with 200.0.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-30.0 sec  35.8 MBytes  10.0 Mbits/sec
[  3] Sent 25511 datagrams
[  3] WARNING: did not receive ack of last datagram after 10 tries.
user@router# run show mpls lsp statistics ingress
Ingress LSP: 2 sessions
To              From            State     Packets            Bytes LSPname
10.0.0.2        10.0.0.1        Up          25521         38332542 BLUE
10.0.0.2        10.0.0.1        Up              0                0 RED
Total 2 displayed, Up 2, Down 0

This is taking the long path. All working as expected!

In this article I have attempted to describe how to select LSPs based on traffic and destination route properties. While I’ve used the criteria of UDP traffic aimed at a certain destination, you could of course implement this based on any combination of CoS queue, destination route attributes & traffic properties. This should be a really useful tool to have up your sleeve to meet TE requirements on your network – and something worth knowing for the JNCIE-SP exam.

One thing to note is that while in this article I quickly whipped up an extra CoS queue without any further configuration, beware of doing this in real life – you should define buffer and transmit rates for schedulers on all interfaces for each queue. I will aim to do another blog post soon digging into this more deeply – but for now, just a warning (and have a look at the O’Reilly MX book if you want more detail on MX CoS)!
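
As a taster only (the scheduler names and percentages below are invented purely for illustration, not recommendations), the missing piece looks something like this;

set class-of-service schedulers udp-sched transmit-rate percent 10
set class-of-service schedulers udp-sched buffer-size percent 10
set class-of-service schedulers be-sched transmit-rate percent 70
set class-of-service schedulers be-sched buffer-size percent 70
set class-of-service schedulers nc-sched transmit-rate percent 5
set class-of-service schedulers nc-sched buffer-size percent 5
set class-of-service scheduler-maps core-map forwarding-class udp-class scheduler udp-sched
set class-of-service scheduler-maps core-map forwarding-class best-effort scheduler be-sched
set class-of-service scheduler-maps core-map forwarding-class network-control scheduler nc-sched
set class-of-service interfaces ge-0/0/1 scheduler-map core-map

In reality you would want a scheduler defined for every forwarding class in use, applied to every core-facing interface, and on the MX the detail (shaping, priorities, per-unit schedulers and so on) goes a lot deeper – hence the promised follow-up post.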

Thanks to Barry Murphy for coming up with this interesting scenario for me to write a post about. Hope this helps!

[NON TECHNICAL] A few snaps from around SF city!

On the day after my JNCIE-ENT lab exam, I had the morning/afternoon to kill before my flight. Barry Murphy was also in town (and on the same flight back), so we met up and did the tourist thing. Below are a few snaps from our day in/around SF city. Apologies for the non technical post.

SF city from the far end of the Golden Gate Bridge

Street artist

Seals on the SF waterfront

Barry calling for help

Golden Gate Bridge

Barry and I at the Golden Gate Bridge

Union Square

Cable car

Google barge at Treasure Island

JNCIE-ENT in review

Over the last 3 months I have done 350 hours of study towards my JNCIE-ENT lab exam. I had the good fortune of being selected to participate in the beta of a new version of the exam. Again (as I did for the JNCIE-SP – see my article on this here) I travelled over to the SF Bay Area to do the exam, which I took in Sunnyvale on 21st February (yesterday, as I write this while waiting at SFO airport for my flight back to NZ).

I decided to take a few days of annual leave (PTO for American folk) and fly in on the Monday (with the exam on the Friday) so that I could meet various people in SF whom I knew online but had not yet met in person. I had an awesome time doing this, and it’s great to have finally put faces to the names of people I have known for quite some time but never met!

As with all expert-level exams, no matter how good you are, a significant amount of study is going to be required. You need to combine wide-ranging, detailed knowledge of a heap of different protocols with expert-level troubleshooting and debugging skills. The exam was 8.5 hours (8 hours normally, plus an extra half hour due to the exam still being in beta), which seems like a long time, but there are a lot of tasks, and they often require a fair amount of configuration to complete.

One of the things that is important to note with these exams is that while you are required to achieve the tasks involved, you don’t have to do anything more – so it’s important to remember exactly what the task specifies. You are not trying to build a perfect network – you just need to achieve the goals they lay out to pass!

The JNCIE-ENT focuses on Enterprise Routing & Switching (like a CCIE R&S) – and is geared towards a practical (and realistic) enterprise deployment with a bunch of features you are likely to see. Having said this though, I can’t think of any network where you would see all the odd things you were required to do in this exam – but most networks have one or two of the tasks. The full syllabus list can be found here.

I have always been a big believer that the most important thing in preparing for an expert level exam is to use a wide range of resources to prepare – generally each one will have good and bad elements, but together they present a well rounded view of what is required. Additionally – nothing beats practical operational/architectural experience using the technologies in the exam. Knowing how to drive the right show commands / enable the correct traceoptions / do the right tcpdump is really important in this exam.

My reading list was as follows;

  • The InetZero JNCIE-ENT preparation lab book
  • The Proteus JNCIE-ENT workbook
  • Junos Enterprise Switching (O’Reilly)
  • Junos Enterprise Routing 2nd ed (O’Reilly)
  • The Juniper Day One guides
  • Interdomain Multicast Routing (Practical Juniper & Cisco Solutions)
  • Various Juniper courseware;
    • Junos Multicast Routing (JMR)
    • Junos Class of Service (JCOS)
    • Advanced Junos Enterprise Routing (AJER)
    • Advanced Junos Enterprise Switching (AJEX)
    • Advanced Junos Service Provider Routing (AJSPR)

I also rented a bunch of lab time from InetZero, however unfortunately there was only a very limited amount of lab time available to book in the 3 months I was preparing, so I did not manage to use all my vouchers (hint – if you are doing this, be sure to check with them what times they have available before you buy!).

Finally – I had a fairly extensive lab available at work, and used Junosphere quite a bit to lab various routing features. All in all, by the time I took the exam I was feeling as prepared as I could be, and actually ended up deciding to do absolutely no study in the last week I was in SF – as I had done (hopefully) more than enough prior to that.

I cannot say much about the actual lab – only that there were a lot of interesting and unique tasks – and that it was a really enjoyable day of configuring and troubleshooting! I had everything done after about 5 hours, then spent another hour double and triple checking everything. I’m proud to say that I walked out 2.5 hours early after getting to the point where I was sure everything was right (though of course there’s always the chance that I will have to eat my words if I’ve overlooked some major points and I end up failing!).

I felt that the exam was pitched at a fair level, ensuring the candidate meets the standard required while sticking to scenarios you might actually see in real life. Now I have to wait 2 months while all the other participants sit the exam, after which they will all be marked together and the passing score set (and I will find out whether I have made it!). I have always found that waiting for the result of an exam like this is super-painful, so I will be trying to keep busy and not think about it over the next wee while until I get the pass/fail mail!

Thanks to all those who were preparing for the exam alongside me or had already done it, particularly my “study buddies” Tyler and Campo, both of whom I would regularly bounce things off (as they would with me). If you are studying for one of these exams, I would highly recommend buddying up with a couple of people to do this – sometimes nothing is more helpful than someone else’s perspective on a problem.

Stay tuned for the result as soon as I have it!