The 10 most useless tips to pass your JNCIE

Recently I’ve had a bunch of people asking me what to do on exam day to pass their JNCIE, from whether they should have coffee in the morning or not to what sort of breakfast they should have. Other questions have included concerns about screen size on the laptops provided. These sorts of things are mentioned on a bunch of websites and study guides. I’m going to address my view on this once and for all in this post…

I had to abandon my holiday yesterday (Easter Friday) and spend it debugging a fault. It was a particularly hard and complex one, which had left others stumped. While I’m not able to go into a lot of detail, I can disclose that to identify the cause of the fault I had to spend 15 minutes or so tcpdumping RSVP messages and examining them to find the issue. From this I was able to deduce that a particular box was doing something that was particularly odd and we were able to proceed with resolving it.

The above paragraph might seem unrelated to the point of the post… but it really isn’t. The issue I debugged yesterday was far harder than the troubleshooting elements of any JNCIE exams I have taken. This isn’t to say that the JNCIE exams aren’t incredibly challenging – they are – and I have great respect for anyone who has passed one of these exams, as it displays a significant level of skill.

The point is however that, like many others, I spend most of my days doing a combination of debugging the most ugly of problems you will ever see in a service provider environment and architecting a huge range of solutions comprising technologies from many vendors to deliver to customers. A couple of times a year, people in these positions are going to face a mind-bendingly odd problem which will require significant debugging skill and an intimate understanding of the technologies and protocols involved to resolve.

If you are doing this sort of work regularly, you are not going to struggle to achieve a JNCIE. You are still going to have to do a hell of a lot of reading, labbing and learning. The breadth of topics covered in a JNCIE will stretch most candidates, forcing them to cover technologies they have never touched, to a depth they have never had to understand before. I personally spent a considerable amount of time reading, re-reading, and re-re-re-reading many RFCs, books, and other resources to make sure I had a solid understanding of the standards defining each protocol. Significant time was spent labbing technologies that were new to me (for example – NG-MVPN, draft-rosen, Carrier of Carriers VPN, and Interprovider MPLS VPN options B, C & E).

But what I didn’t have to learn was how this stuff fundamentally works. I get this stuff already, as do most people who successfully attempt this exam. I didn’t have to learn how to troubleshoot to do the JNCIE – I’ve been troubleshooting complex and horrible problems all of my career. Most people sitting these shouldn’t need to learn this.

About 8 weeks ago I was sitting in a hotel room drinking with a few friends, when someone piped up with what I considered to be a long and whiney rant in which they blamed everyone else for their failures. Essentially this person was saying that the questions were not worded well enough, and that this was why he failed his CCIE. I tend to disagree. Both the JNCIE and CCIE exams are extensively alpha and beta tested (I recently sat the beta version of the JNCIE-ENT). During this testing, much feedback is given on the readability and understandability of the questions. The questions will never be written to tell you what to do – that’s not the point of an expert-level exam. But they are written in a way that ensures that if you have the understanding required, you will know what to do. Another person in the room pointed out that he’d lost half the time in the troubleshooting section of his CCIE because he hadn’t seen that there was a topology diagram, but still passed. Because he understood the content to an expert level, and had a huge amount of experience, he was able to get everything in that section done in the 50% of time remaining.

What’s the point of all this? Why is this all relevant? Well it’s actually all pretty relevant – the point is that for any of these IE exams, they are designed so that an expert in the subject areas will pass. If you have an intimate understanding of how multicast works, you aren’t going to struggle to deploy and troubleshoot it.

Many websites and study guides have tips and tricks for these exams. These range from not having too much coffee (duh!) to eating a certain type of bread in the morning. Most of them are utter rubbish. Or more to the point – while they’re going to help you focus during the day, and might even make the difference between having time for that one extra question that gets you over the pass mark and not having time – they aren’t going to pass the exam for you. For the record – both times I’ve done a JNCIE the morning has begun with a massive breakfast from the Juniper cafe that makes me sleepier than normal! And both times I’ve walked out hours before the finish time with everything done.

While I don’t want to throw stones at anyone in this article, I think it’s time for a bit of a reality check for those who think that the type of bread they eat, the range of pens in the exam, different coloured highlighters, or the size of the laptop screen are going to make any difference at all. The biggest difference you can make is regular hard work in the many months leading up to the exam, and having great amounts of hands-on experience to boot.

Happy Easter everyone!

CoS for RE sourced traffic

Many of you will have deployed CoS extensively on your networks. One area of a Junos CoS deployment that I am often asked about by friends is how to manipulate traffic that is sourced from the routing-engine. There are multiple catches and caveats in dealing with this traffic, and different ways to manipulate this.

At a 10,000-foot level, whenever we deploy CoS we generally want to be able to manipulate routing-engine-sourced traffic such as IS-IS, BGP, OSPF, BFD, RSVP, etc. so that it carries various DSCP, EXP, and 802.1p markings. Firstly, we can set the marking used for traffic sourced from the routing-engine;

set class-of-service host-outbound-traffic dscp-code-point 111000
set class-of-service host-outbound-traffic ieee-802.1 default 111

You can also specify the forwarding-class that is used for processing traffic sourced from the route-engine;

set class-of-service host-outbound-traffic forwarding-class hoffs-odd-class

It’s important to note that your rewrite rules will not take effect with this traffic (by default). Even if you have specified a forwarding-class, the “host-outbound-traffic” markings will be applied outbound for this route-engine sourced traffic.

However, as of Junos 12.3 Juniper have implemented a new option for “host-outbound-traffic” on the MX, which causes the router to use the rewrite-rule on each unit to put markings onto traffic from the RE (based on the forwarding-class it is assigned). This is particularly helpful where you might have multiple fibre providers providing access to your customers, each with a different marking scheme that you are required to use. Note that this is only available for the 802.1p markings (not DSCP). This is done as follows;

set class-of-service host-outbound-traffic ieee-802.1 rewrite-rules

Of course a rewrite-rule must be configured on the outbound unit for this to have effect. So if we have a rewrite-rule to map “hoffs-odd-class” traffic to a marking of 010, the traffic will now be marked as 010 on egress.
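
For reference, such a rewrite rule might look something like this – just a rough sketch, with the rule name (hoffs-rewrite) and interface (ge-0/0/0) made up for illustration;

set class-of-service rewrite-rules ieee-802.1 hoffs-rewrite forwarding-class hoffs-odd-class loss-priority low code-point 010
set class-of-service interfaces ge-0/0/0 unit 0 rewrite-rules ieee-802.1 hoffs-rewrite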

This of course does not help us with DSCP markings (it only applies to 802.1p markings), which we will often want to manipulate. Also, how would we approach this problem if we wanted to assign different forwarding-classes to different types of traffic being sourced from the RE? A great example of this is that while we might want to ensure that BGP is prioritised, we probably don’t need prioritisation of http traffic sourced from the RE!

The solution for this is quite clever. Most of you will know that you can firewall off all traffic to the RE (regardless of the IP it is destined to – even if that IP is on a physical interface) by applying an inbound firewall filter to the loopback. The clever thing is that you can also apply a firewall filter to all traffic leaving the RE by applying an outbound firewall filter to the loopback. If we want to ensure that all http/https traffic is put into the best-effort forwarding-class, we could do the following;

set interfaces lo0 unit 0 family inet filter output RE-QOS
set firewall family inet filter RE-QOS term web from protocol tcp
set firewall family inet filter RE-QOS term web from port http
set firewall family inet filter RE-QOS term web from port https
set firewall family inet filter RE-QOS term web then dscp be
set firewall family inet filter RE-QOS term web then forwarding-class best-effort
set firewall family inet filter RE-QOS term web then accept
set firewall family inet filter RE-QOS term catchall then accept

This is a pretty handy tool and allows us to do fairly fine-grained manipulation of how each traffic type sourced by the RE is treated. Obviously you could customise this in any way to suit your needs. However it’s worth noting that, to my understanding, you cannot manipulate the 802.1p markings with a firewall filter – hence why the “rewrite-rules” option becomes so important for host-outbound-traffic.

If you thought that this is all there is to marking/classifying traffic sourced from the RE, you would be wrong! On an MX router, the processing of certain control traffic (such as BFD) is delegated to the individual line cards. I have learned the hard way that the markings on this traffic are not modified by any configuration you apply to normal RE-sourced traffic.

The news is not all bad though, as there is an easy workaround, and not many protocols are distributed to the line cards. For this traffic, you can apply an outbound firewall filter to the interface the traffic is sent on. As an example, here is how to ensure that BFD traffic which has been distributed to the line card is placed into the correct forwarding class and marked appropriately;

set interfaces ge-1/2/3 unit 0 family inet filter output cos-bfd-link
set firewall family inet filter cos-bfd-link term 1 from protocol udp
set firewall family inet filter cos-bfd-link term 1 from port 3784
set firewall family inet filter cos-bfd-link term 1 from port 3785
set firewall family inet filter cos-bfd-link term 1 then loss-priority low
set firewall family inet filter cos-bfd-link term 1 then forwarding-class network-control
set firewall family inet filter cos-bfd-link term 1 then dscp 56
set firewall family inet filter cos-bfd-link term 2 then accept

Hope this helps!

Don’t forget the rest of your loopbacks!

With all that is going on on the internet currently around NTP reflection attacks and the like, it seemed timely to do a post on the logic of how router-protect filters are applied to loopbacks in JUNOS.

For those of you new to using Juniper gear, if you apply a firewall filter inbound on the loopback of a Juniper networks device, this will be applied to all traffic processed by the routing-engine. This includes traffic with a destination address of a physical interface (i.e. not the loopback). This provides a simple and convenient place to deploy firewall filters to protect the routing-engine on the Juniper device.

This generally looks something like this (where re-protect has the rules for what should talk to the RE);

set interfaces lo0 unit 0 family inet filter input re-protect

This includes traffic on interfaces in VRFs or virtual-routers that do not have their own loopback interface.

The catch that many people I have been helping over the last week have forgotten, however, is that this does not apply to traffic in VRFs or virtual-routers that have their own loopback. If the VRF or virtual-router has a loopback interface in it, you must apply the filter to this loopback as well for it to take effect. For example;

set interfaces lo0 unit 504 family inet filter input re-protect

The classic example of where you may strike this is on an MX BNG, where you will generally require a loopback interface in any VRF in which you wish to land PPPoE subscribers.
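
For context, a rough sketch of how that hangs together (the instance name BNG-SUBS is made up, and the route-distinguisher, vrf-target and subscriber configuration are omitted for brevity);

set routing-instances BNG-SUBS instance-type vrf
set routing-instances BNG-SUBS interface lo0.504
set interfaces lo0 unit 504 family inet filter input re-protect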

However, a better way to apply routing-engine protection filters is via an apply-group, so that all future loopback interfaces are protected without any further configuration being required. This could be done like so;

set groups re-protect interfaces lo0 unit <*> family inet filter input re-protect
set apply-groups re-protect

The only catch with deploying it like this is that if you ever explicitly configure an input filter on a loopback unit directly (i.e. not through the apply-group), the group will cease to have any effect on that unit (as the explicit local configuration overrides the inherited group configuration).
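
If you want to sanity-check which loopback units are actually inheriting the filter from the group (and which have overridden it), the display inheritance pipe is handy;

show configuration interfaces lo0 | display inheritance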

Hope this all helps!

LSP mappings based on route/traffic attributes

A friend today asked me an interesting question (that is in fact a part of the JNCIE-SP syllabus) – “How can I ensure that certain traffic types take different paths in my MPLS network?”

This is applicable to many of us who run large backhaul networks with many paths in them – some higher latency, some higher capacity. In these cases it is important to be able to load-balance traffic based on many different requirements – primarily the available capacity, but at times also the traffic type.

A good example of where these requirements might become important is found in New Zealand – we have a single cable system out of NZ, which connects in a ring between two points in each of NZ, Australia & the USA (see the diagram below). What this results in is that between NZ and the USA there are two paths: one direct, and one that goes via AU (and is 30ms longer). Both links are unprotected in themselves, but if you have a path on each, you can assume that one of the two paths will be up at any time. You therefore don’t want to have much more traffic than a single link could cope with, but at times you’ll oversubscribe a tad. If you do oversubscribe, it makes sense to ensure that latency-sensitive traffic is on the short path and best-effort traffic is on the long path.

The friend with whom I was discussing this wanted to ensure that UDP traffic destined to routes with a certain as-path transited the long path between the USA and NZ. In this blog post we will discuss how to achieve this on a Juniper MX router, and walk through the example step by step.

We can map traffic to LSPs based on a range of criteria including the CoS queue, the destination prefix, the destination prefix’s as-path, and a few other useful properties (basically any information about the destination route that you can match with policy, or the CoS queue the traffic is in).

The one catch is that the route-preference for all LSPs you are wanting to map traffic to must be both equal and the best possible route-preference to the destination. If this is not the case, traffic will simply be sent to the best preference LSP. Likewise – if one LSP is unable to stand up and is withdrawn from the routing table, traffic mapped to this LSP will just move to another LSP.

Below is a diagram showing an example network which illustrates the problem my friend had. I have illustrated the primary TE path of two LSPs – the RED LSP which takes a direct path from the USA to NZ, and the BLUE LSP which takes a longer path via AU to NZ. The goal will be to ensure that all UDP traffic from our US transit provider that is destined to as900 and its customers will take the long path, while all other traffic takes the short path. Criteria for sending via each link is also illustrated on the diagram.

Diagram of example network
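
The RED and BLUE LSP definitions themselves aren’t shown in this post, but for reference they would look something along these lines on the USA router – a rough sketch only, with the path names (VIA-DIRECT, VIA-AU) made up and the hops taken from the addressing that appears in the outputs further down;

set protocols mpls label-switched-path RED to 10.0.0.2 primary VIA-DIRECT
set protocols mpls label-switched-path BLUE to 10.0.0.2 primary VIA-AU
set protocols mpls path VIA-DIRECT 10.1.1.2 strict
set protocols mpls path VIA-AU 10.1.0.3 strict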

Okay – got the requirements – tell me how we do this!!!!!

We’ll configure this in a few steps, breaking down what we are doing and why as we go.

1/ Classify UDP traffic so that we can match based on this later
The first thing we identify is that one of the requirements dictates that we peer inside the header of every packet to classify it as UDP or non-UDP. In order to do this we will need to use a firewall filter on ingress to our router. We will use this to classify the traffic into a CoS class (which we can then use to map to a different LSP when we match the destination we want).

The first thing we must do in this step is create our new forwarding class. Let’s call it “udp-class”, and leave the “best-effort” and “network-control” classes we had already configured in place;

[email protected]# show | compare

[edit class-of-service forwarding-classes]
     queue 3 { ... }
+    queue 1 udp-class;

Now that we have this, we must build a firewall filter to match UDP traffic vs other traffic;

[email protected]# show | compare
[edit]
+  firewall {
+      family inet {
+          filter transit-in {
+              term udp {
+                  from {
+                      protocol udp;
+                  }
+                  then {
+                      forwarding-class udp-class;
+                      accept;
+                  }
+              }
+              term other-traffic {
+                  then accept;
+              }
+          }
+      }
+  }

Finally, we need to apply this filter to inbound traffic on the interface connecting to our US transit provider;

[email protected]# show | compare 
[edit interfaces ge-0/0/3 unit 0 family inet]
+       filter {
+           input transit-in;
+       }

At this point, we have all UDP traffic from our transit provider mapped to our “udp-class” CoS queue, and we are now ready to create a policy to make forwarding decisions based on this.

2/ Create a policy to make next-hop decisions based on CoS queue

In this step, we will create a CBF (CoS Based Forwarding) policy, which will (when called upon by route policy) install a next-hop LSP based on the specified forwarding class.

This is done as follows;

[email protected]# show | compare 
[edit class-of-service]
+   forwarding-policy {
+       next-hop-map NZ-Traffic {
+           forwarding-class udp-class {
+               lsp-next-hop BLUE;
+           }
+           forwarding-class best-effort {
+               lsp-next-hop RED;
+           }
+           forwarding-class network-control {
+               lsp-next-hop RED;
+           }
+       }
+   }

It is worth re-noting that the LSPs must be equal route preference (see more detail above) – I’ve seen lots of people miss this and wonder why their CBF policy is not working.

Additionally, the astute reader will note that I have not actually created a mapping for the assured-forwarding queue, which is created by default on the MX as queue 2. In this case we will assume that no traffic is passing in this queue; however, if any traffic is passed in a queue that is not defined in a CBF policy, it is mapped in the same manner as queue 0 (in this case best-effort). If queue 0 is not defined, one of the defined queues is selected at random and used for the non-defined queues.
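
If you did want to pin the assured-forwarding queue explicitly rather than rely on that default behaviour, it’s just one more entry in the next-hop-map – a quick sketch, assuming the default assured-forwarding class name;

set class-of-service forwarding-policy next-hop-map NZ-Traffic forwarding-class assured-forwarding lsp-next-hop RED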

At this point we have our CBF policy all sorted and are ready to proceed to the next step.

3/ Find the destinations we want this policy applied to

We must now find the destinations we want this policy applied to. In our case, this is to be all prefixes destined to as900 and its customers. This is best described in a regular expression as “900+ .*” (one or more iterations of as900 followed by any number of other AS numbers).

We can verify that this will work with the following command (note that I only have the two prefixes shown in the diagram set up behind the NZ router in this lab);

[email protected]# run show route aspath-regex "900+ .*" 

inet.0: 16 destinations, 16 routes (16 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

200.0.0.0/24       *[BGP/170] 00:16:23, localpref 100, from 10.0.0.2
                      AS path: 900 800 700 I
                      to 10.1.0.3 via ge-0/0/1.0, label-switched-path BLUE
                    > to 10.1.1.2 via ge-0/0/2.0, label-switched-path RED

inet.3: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)

mpls.0: 4 destinations, 4 routes (4 active, 0 holddown, 0 hidden)

We now configure this on our USA router;

[email protected]# show | compare
[edit policy-options]
+   as-path as900_and_customers "900+ .*";

Cool! We now have a way to match both parts of the criteria for the traffic we wish to steer. All that is left to do is to put it all together.

4/ Putting it all together and writing some route policy

In previous steps we have created mechanisms to match the traffic type and destinations for which we want to send traffic via our “long path”. Now we need to create some policy based on this to make it all work!

We want this policy to match prefixes destined to as900 and its customers (defined above) that already have a next hop of the NZ router. For all traffic that is matched, we then set the next-hop based on our CBF policy defined above (so that only UDP traffic is sent via the long path – the BLUE LSP).

For all other traffic with a next hop of the NZ router, we want to map it to the short path (RED LSP).

The following policy will do the trick nicely;

[email protected]# show | compare 
[edit policy-options]
+   policy-statement map-nz-lsps {
+       term as900_and_customers {
+           from {
+               neighbor 10.0.0.2;
+               as-path as900_and_customers;
+           }
+           then {
+               cos-next-hop-map NZ-Traffic;
+               accept;
+           }
+       }
+       term other-nz {
+           from neighbor 10.0.0.2;
+           then {
+               install-nexthop lsp RED;
+               accept;
+           }
+       }
+   }

It’s worth noting again that the LSPs must have an equal route preference (and there must be no better-preference route) for install-nexthop to work (as with CBF policy) – see more detail above.

Finally we need to apply this policy to all routes being exported from the route-engine to the forwarding-engine. This requires one further line of configuration, and is done as follows;

[email protected]# show | compare 
[edit routing-options]
+   forwarding-table {
+       export map-nz-lsps;
+   }

We now take a peek at a prefix matching each of the two terms, starting with 200.0.0.0/24;

[email protected]> show route forwarding-table matching 200/24 
Routing table: default.inet
Internet:
Destination        Type RtRef Next hop           Type Index NhRef Netif
200.0.0.0/24       user     0                    indr 262143     2
                                                 idxd   551     2
                   idx:1      10.1.0.3          Push 299920   572     2 ge-0/0/1.0
                   idx:3      10.1.1.2           ucst   573     4 ge-0/0/2.0
                   idx:xx     10.1.1.2           ucst   573     4 ge-0/0/2.0

We can see from the above output that it has created a per-queue mapping for queues 1 and 3, and a default mapping (matching queue 0’s configuration). So it’s all working as expected.

And now for 100.0.0.0/24;

[email protected]# run show route forwarding-table matching 100/24    
Routing table: default.inet
Internet:
Destination        Type RtRef Next hop           Type Index NhRef Netif
100.0.0.0/24       user     0                    indr 262142     2
                              10.1.1.2           ucst   573     3 ge-0/0/2.0

We can see from the output that other traffic is being mapped to the RED LSP (i.e. the short path) – exactly what we wanted.

5/ Testing

We now want to verify this by generating some traffic from the “Transit provider in the USA” – which in this lab is represented with a CentOS box. We need to test three scenarios;

A/ Traffic destined for 100/24
In this test, I will generate some ICMP echo requests from the CentOS box representing the transit provider to 100/24. If our lab is working correctly, I would expect to see this take the RED LSP (the short path).

Let’s clear the LSP stats, run the ICMP echo requests, then re-examine the LSP stats;

[email protected]# run clear mpls lsp statistics
[[email protected] ~]# ping 100.0.0.1 -i 0.01 
PING 100.0.0.1 (100.0.0.1) 56(84) bytes of data.

--- 100.0.0.1 ping statistics ---
5241 packets transmitted, 0 received, 100% packet loss, time 52754ms
[email protected]# run show mpls lsp statistics ingress 
Ingress LSP: 2 sessions
To              From            State     Packets            Bytes LSPname
10.0.0.2        10.0.0.1        Up              0                0 BLUE
10.0.0.2        10.0.0.1        Up           4890           410760 RED
Total 2 displayed, Up 2, Down 0

Great! As expected, traffic is being sent via the RED (short path) LSP.

B/ non-UDP traffic destined for 200/24

In this test, I will generate some ICMP echo requests from the CentOS box representing the transit provider to 200/24. If our lab is working correctly, I would expect to see this take the RED LSP (the short path).

Let’s clear the LSP stats, run the ICMP echo requests, then re-examine the LSP stats;

[email protected]# run clear mpls lsp statistics
[[email protected] ~]# ping 200.0.0.1 -i 0.01 
PING 200.0.0.1 (200.0.0.1) 56(84) bytes of data.

--- 200.0.0.1 ping statistics ---
1447 packets transmitted, 0 received, 100% packet loss, time 14581ms
[email protected]# run show mpls lsp statistics ingress    
Ingress LSP: 2 sessions
To              From            State     Packets            Bytes LSPname
10.0.0.2        10.0.0.1        Up              0                0 BLUE
10.0.0.2        10.0.0.1        Up           1447           121548 RED
Total 2 displayed, Up 2, Down 0

Again traffic is transiting the RED (short path) LSP as expected.

C/ UDP traffic destined for 200/24

In this final test, I will generate some UDP iperf traffic from the CentOS box representing the transit provider to 200/24. If our lab is working correctly, I would expect to see this take the BLUE LSP (the long path).

Let’s clear the LSP stats, run the iperf, then re-examine the LSP stats;

[email protected]# run clear mpls lsp statistics
[[email protected] ~]# iperf -c 200.0.0.1 -u -b 10m -t 30
------------------------------------------------------------
Client connecting to 200.0.0.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  107 KByte (default)
------------------------------------------------------------
[  3] local 10.1.99.99 port 46896 connected with 200.0.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-30.0 sec  35.8 MBytes  10.0 Mbits/sec
[  3] Sent 25511 datagrams
[  3] WARNING: did not receive ack of last datagram after 10 tries.
[email protected]# run show mpls lsp statistics ingress    
Ingress LSP: 2 sessions
To              From            State     Packets            Bytes LSPname
10.0.0.2        10.0.0.1        Up          25521         38332542 BLUE
10.0.0.2        10.0.0.1        Up              0                0 RED
Total 2 displayed, Up 2, Down 0

This is taking the long path. All working as expected!

In this article I have attempted to describe how to select LSPs based on traffic and destination route properties. While I’ve used the criterion of UDP traffic aimed at a certain destination, you could of course implement this based on any combination of CoS queue, destination route attributes & traffic properties. This should be a really useful tool to have up your sleeve to meet TE requirements on your network – and something worth knowing for the JNCIE-SP exam.

One thing to note is that while in this article I quickly whipped up an extra CoS queue without any further configuration, beware of doing this in real life – you should define buffer and transmit rates in schedulers on all interfaces for each queue. I will aim to do another blog post soon digging into this deeper – but for now just a warning (and have a look at the O’Reilly MX book if you want more detail on MX CoS)!
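
As a very rough sketch of the sort of thing I mean (the scheduler name, percentages, and interface here are made up, and you would also want schedulers for best-effort and network-control in the same scheduler-map);

set class-of-service schedulers udp-sched transmit-rate percent 20
set class-of-service schedulers udp-sched buffer-size percent 20
set class-of-service scheduler-maps core-map forwarding-class udp-class scheduler udp-sched
set class-of-service interfaces ge-0/0/1 scheduler-map core-map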

Thanks to Barry Murphy for coming up with this interesting scenario for me to write a post about. Hope this helps!

BGP local-as – as-path manipulation options

BGP local-as is a useful feature which allows you to pretend to be a member of a different ASN for the purposes of peering with another ASN. This is pretty handy for migration purposes – e.g. if ISP A buys ISP B and wants to migrate all of ISP B’s customers to peer with ISP A without having to get them to change their peering configuration. This feature is very simple to configure, and most who have worked for any length of time in a service provider will be familiar with it, however many will find themselves googling when they have to implement any of the as-path manipulation options offered within it! In this blog post, I’m going to go through each of these as-path manipulation options in some detail.

This blog post is based on a lab of 4 routers (each in its own ASN) laid out in the following manner;
BGP local-as as-path lab

The base configuration of the lab we shall be working on can be found here; local-as-blog-lab.pdf

Before we get into this, it’s worth noting that I’m configuring all my BGP parameters in the “neighbor” hierarchy out of sheer force of habit – but I could equally be configuring the peer-as & local-as in the “group” hierarchy.

As noted in the diagram, 1.1.1.1/32 is advertised into BGP from R1, and 6.6.6.6/32 from R6. These are the only routes that are being advertised via BGP, and are there to show us what happens in each direction when we configure local-as with the various options offered to us within this feature.

Right, so first up, let’s say that R2 thinks that R4’s ASN is actually AS9999. We’ll firstly configure this on R2;

[edit protocols bgp group R4 neighbor 192.168.3.2]
-     peer-as 4;
+     peer-as 9999;

Now we will find that this BGP session isn’t doing so well at coming up;
On R2;

[email protected]# run show bgp summary | match 9999           
192.168.3.2            9999          0          2       0       0          14 Active

Okay, so now we need to turn on local-as on R4 for the BGP session facing R2 in order that R4 pretends to be inside AS9999;

[edit protocols bgp group R2 neighbor 192.168.3.1]
+      local-as 9999;

And it’s now up and looking good – R4 is pretending, for the sake of the session to R2, that it is in AS9999;

[email protected]# run show bgp summary | match 9999    
192.168.3.2            9999          5          5       0       0          48 1/1/1/0              0/0/0/0

Okay, now we’ll have a look at how the routes look in either direction. Firstly, let’s examine 1.1.1.1/32 as received on R6;

[email protected]# run show route 1.1.1.1 

inet.0: 6 destinations, 6 routes (6 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

1.1.1.1/32         *[BGP/170] 00:02:29, localpref 100
                      AS path: 4 9999 2 1 I
                    > to 192.168.6.1 via ge-0/0/1.0

And now for 6.6.6.6/32 on R1;

[email protected]# run show route 6.6.6.6 

inet.0: 6 destinations, 6 routes (6 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

6.6.6.6/32         *[BGP/170] 00:03:06, localpref 100
                      AS path: 2 9999 4 6 I
                    > to 192.168.1.2 via ge-0/0/1.0

We can see that AS9999 now appears as an additional ASN in the as-path in both directions through the R2-R4 BGP peering. This is important to note – while we would expect it to be added to routes R2 learns from R4, R4 is also adding it to routes it learns from R2, keeping the as-path consistent in both directions.

Now let’s have a look at the options available to us for the local-as feature;

[email protected]# set protocols bgp group R2 neighbor 192.168.3.1 local-as ?
Possible completions:
  <as_num>              Autonomous system number in plain number or 'higher 16bits'.'Lower 16 bits' (asdot notation) format
  alias                Treat this AS as an alias to the system AS
  loops                Maximum number of times this AS can be in an AS path (1..10)
  no-prepend-global-as  Do not prepend global autonomous-system number in advertised paths
  private              Hide this local AS in paths learned from this peering

Before we go any further, it’s worth noting that you cannot configure both the private and alias options on the same neighbor/group.

The loops option is not going to be covered in this post, but at a high level it is a way to allow the configured AS to appear up to X times in the as-path (i.e. to permit a limited number of as-path loops).

The private option instructs the router to stop adding the configured “local-as” to routes it learns from the peer that local-as is configured against.

Let’s configure the private option on R4 now;

[edit protocols bgp group R2 neighbor 192.168.3.1 local-as]
+      private;

As expected, there is no change to the 6.6.6.6/32 route as seen on R1 (this is the direction in which R4 advertises routes to R2, which the private option does not change);

[email protected]# run show route 6.6.6.6    

inet.0: 6 destinations, 6 routes (6 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

6.6.6.6/32         *[BGP/170] 00:00:02, localpref 100
                      AS path: 2 9999 4 6 I
                    > to 192.168.1.2 via ge-0/0/1.0

However, R6 now sees 1.1.1.1/32 without as9999 in the as-path – R4 is no longer adding this to the as-path as it receives the route from R2;

[email protected]# run show route 1.1.1.1 

inet.0: 6 destinations, 6 routes (6 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

1.1.1.1/32         *[BGP/170] 00:02:29, localpref 100
                      AS path: 4 2 1 I
                    > to 192.168.6.1 via ge-0/0/1.0

We’ll now roll this back ready to move on to the next option;

[email protected]# show | compare 
[edit protocols bgp group R2 neighbor 192.168.3.1 local-as]
-      private;

The next option we will look at is the alias feature. This behaves the same way as private for routes learned from the peer that local-as is configured on – it does not add the configured local-as to the path, it just adds the local system ASN as normal. The difference is that when sending routes to that neighbor, the router with local-as configured will omit the local system ASN, placing just the configured local-as in the as-path (instead of the default behaviour, which is to insert both the local system ASN and the configured local-as in the path).

Given that there is nothing like seeing it for yourself, we will now configure this option on R4 and take a peek;

[edit protocols bgp group R2 neighbor 192.168.3.1 local-as]
+      alias;

Firstly we inspect 1.1.1.1/32 (learned by R4 from R2 and viewed on R6) which we can see looks as if there was no local-as configured on R4 at all – the as-path is as normal. This is the same as in the private option;

[email protected]# run show route 1.1.1.1 

inet.0: 6 destinations, 6 routes (6 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

1.1.1.1/32         *[BGP/170] 00:00:47, localpref 100
                      AS path: 4 2 1 I
                    > to 192.168.6.1 via ge-0/0/1.0

Next, we’ll have a look at 6.6.6.6/32 (sent by R4 to R2 and viewed on R1) which we can see has AS9999 but not AS4 in the path;

[email protected]# run show route 6.6.6.6    

inet.0: 6 destinations, 6 routes (6 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

6.6.6.6/32         *[BGP/170] 00:00:56, localpref 100
                      AS path: 2 9999 6 I
                    > to 192.168.1.2 via ge-0/0/1.0

Before we move on, let’s quickly remove the alias option so that we can see what happens when we configure the next feature;

[edit protocols bgp group R2 neighbor 192.168.3.1 local-as]
-      alias;

The final option available to us is no-prepend-global-as. While private and alias cannot be configured at the same time (alias is the functionality of private plus one further change to the as-path), no-prepend-global-as can be configured in conjunction with either of the other options. The no-prepend-global-as option essentially implements only the part of alias that private lacks: when sending routes to the neighbor, the router with local-as configured will omit the local system ASN, placing just the configured local-as in the as-path (instead of the default behaviour, which is to insert both the local system ASN and the configured local-as in the path). I’ll include a quick sketch of combining it with private after the final output below.

We’ll now configure this;

[edit protocols bgp group R2 neighbor 192.168.3.1 local-as]
+      no-prepend-global-as;

And then we’ll have a look. Like alias we can see that the configured local-as on R4 is in the as-path as seen by R1 (learned from R4 by R2), but not the R4 system AS;

[email protected]# run show route 6.6.6.6 

inet.0: 6 destinations, 6 routes (6 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

6.6.6.6/32         *[BGP/170] 00:05:53, localpref 100
                      AS path: 2 9999 6 I
                    > to 192.168.1.2 via ge-0/0/1.0

However, we can see that unlike alias, it does not modify the default behaviour for routes learned from R2 by R4 (we have both the R4 system ASN and the configured local-as in the path);

[email protected]# run show route 1.1.1.1 

inet.0: 6 destinations, 6 routes (6 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

1.1.1.1/32         *[BGP/170] 00:06:19, localpref 100
                      AS path: 4 9999 2 1 I
                    > to 192.168.6.1 via ge-0/0/1.0
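
For completeness, here is the combined private plus no-prepend-global-as configuration mentioned earlier – a quick sketch using the same lab addressing (I haven’t shown its outputs here);

set protocols bgp group R2 neighbor 192.168.3.1 local-as 9999 private
set protocols bgp group R2 neighbor 192.168.3.1 local-as 9999 no-prepend-global-as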

Local-as is a really useful feature – generally for the purposes of migrations. Some of the as-path manipulation options can be invaluable in complex/odd peering scenarios where there is a requirement to disguise what is actually happening on the network – be it a superficial requirement or the prevention of an apparent loop being signified in the as-path.

I hope this article helps clarify the various options for you!

A time-saving trick for the JNCIE lab exam

When I did my JNCIE-SP, there were countless times where I wanted to do something that referred to a group of interfaces, such as “all of the core-facing interfaces”. In my practice for the JNCIE-ENT I’m finding this is no different. Whether it is whacking on “family mpls”, or applying a CoS rewrite rule, gathering a list of all of your core interfaces (especially if there is some anomaly making it difficult to use wildcards in configuration-groups to do this) to text-process into a bunch of set commands can be a time-consuming task.

I came upon something quite obvious (when I thought about it for a moment) as a way to solve this: gathering the output of a show command in XML and then matching on the parameter I wanted to text-process.

Here’s an example – I wanted to apply the DSCP classifier JNCIE to all core interfaces;

Start by checking that the command has what you want. This looks fine (presents a list of all the core interfaces).

[email protected]# run show ospf neighbor
Address          Interface              State     ID               Pri  Dead
172.30.0.65      ge-0/0/10.0            Full      172.30.15.3      128    35
172.30.0.81      ge-0/0/12.0            Full      172.30.15.7      128    35
172.30.0.77      ge-0/0/6.0             Full      172.30.15.4      128    38

Now verify how it looks in XML form. Looks like it’ll do the trick nicely.

[email protected]# run show ospf neighbor | display xml
<rpc-reply xmlns:junos="http://xml.juniper.net/junos/11.2R1/junos">
<ospf-neighbor-information xmlns="http://xml.juniper.net/junos/11.2R1/junos-routing">
<ospf-neighbor>
<neighbor-address>172.30.0.65</neighbor-address>
<interface-name>ge-0/0/10.0</interface-name>
<ospf-neighbor-state>Full</ospf-neighbor-state>
<neighbor-id>172.30.15.3</neighbor-id>
<neighbor-priority>128</neighbor-priority>
<activity-timer>32</activity-timer>
</ospf-neighbor>
<ospf-neighbor>
<neighbor-address>172.30.0.81</neighbor-address>
<interface-name>ge-0/0/12.0</interface-name>
<ospf-neighbor-state>Full</ospf-neighbor-state>
<neighbor-id>172.30.15.7</neighbor-id>
<neighbor-priority>128</neighbor-priority>
<activity-timer>32</activity-timer>
</ospf-neighbor>
<ospf-neighbor>
<neighbor-address>172.30.0.77</neighbor-address>
<interface-name>ge-0/0/6.0</interface-name>
<ospf-neighbor-state>Full</ospf-neighbor-state>

Okay, let’s now filter out the kludge.

[email protected]# run show ospf neighbor | display xml | match interface-name
<interface-name>ge-0/0/10.0</interface-name>
<interface-name>ge-0/0/12.0</interface-name>
<interface-name>ge-0/0/6.0</interface-name>

Finally, in a text editor, let’s find and replace “<interface-name>” with “set class-of-service interfaces ” and “.0</interface-name>” with “ unit 0 classifiers dscp JNCIE”.

set class-of-service interfaces ge-0/0/10 unit 0 classifiers dscp JNCIE
set class-of-service interfaces ge-0/0/12 unit 0 classifiers dscp JNCIE
set class-of-service interfaces ge-0/0/6 unit 0 classifiers dscp JNCIE

And now we have the text in the form we want, ready to paste back onto the Juniper. This is probably obvious to most of you, but for those of you who hadn’t thought of this yet, hopefully it is a huge time saver compared to doing it manually (while letting you avoid the blunt sledgehammer approach that wildcards in groups can be)!
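
As a footnote – if you would rather do the find and replace from a shell than a text editor (outside the lab, say, where you have one handy), the same transformation is a one-liner with sed. This is just a sketch, assuming the matched <interface-name> lines have been saved to a file called neighbors.txt;

sed -e 's|<interface-name>|set class-of-service interfaces |' -e 's|\.0</interface-name>| unit 0 classifiers dscp JNCIE|' neighbors.txt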