Friday, August 31, 2012

PVLANS


PVLANS AND THEIR CONFIGURATIONS
Private VLAN concepts are quite simple, but Cisco's implementation and configuration steps are a bit confusing, with all the "mapping" and "association" commands. Here is a short overview of how Private VLANs work.
To begin with, let's look at the concept of a VLAN as a broadcast domain. What Private VLANs (PVLANs) do is split that domain into multiple isolated broadcast subdomains. It's a simple nesting concept: VLANs inside a VLAN. As we know, Ethernet VLANs cannot communicate directly with each other; they need an L3 device to forward packets between broadcast domains. The same concept applies to PVLANs: since the subdomains are isolated at Layer 2, they must communicate through an upper-level (L3, packet-forwarding) entity such as a router. However, there is a difference here. A regular VLAN usually corresponds to a single IP subnet. When we split a VLAN using PVLANs, hosts in different PVLANs still belong to the same IP subnet, but they need a router (or another L3 device) to talk to each other (for example, by means of local Proxy ARP). In turn, the router may either permit or forbid communication between sub-VLANs using access lists.
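For instance, filtering between sub-VLANs on the router could look like the sketch below. The ACL name, addressing and interface are hypothetical placeholders, not part of the later example:

!
! Hypothetical ACL: block host-to-host traffic within 172.16.0.0/24,
! while still permitting everything else (e.g. traffic leaving the subnet)
!
ip access-list extended BLOCK-INTRA-SUBNET
 deny ip 172.16.0.0 0.0.0.255 172.16.0.0 0.0.0.255
 permit ip any any
!
interface FastEthernet0/0
 ip access-group BLOCK-INTRA-SUBNET in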
Why would anyone need Private VLANs? Commonly, this kind of configuration arises in "shared" environments, say ISP co-location, where it's beneficial to put multiple customers into the same IP subnet, yet provide a good level of isolation between them.
For our sample configuration, we will take VLAN 100 and divide it into two PVLANs: sub-VLANs 101 and 102. Take the regular VLAN and call it the primary (VLAN 100 in our example), then divide the ports assigned to this VLAN by their types:
Promiscuous (P): Usually connects to a router; a type of port that is allowed to send and receive frames to and from any other port in the VLAN
Isolated (I): Only allowed to communicate with P-ports; these are "stub" ports and usually connect to hosts
Community (C): Community ports are allowed to talk to other ports sharing the same group (and, of course, to P-ports)
In order to implement sub-VLAN behavior, we need to define how packets are forwarded between the different port types. First comes the Primary VLAN, which is simply the original VLAN (VLAN 100 in our example). This VLAN is used to forward frames downstream from P-ports to all other port types (I and C ports). In essence, the Primary VLAN spans all ports in the domain, but is only used to transport frames from the router to the hosts (P to I and C). Next come the Secondary VLANs, which correspond to the Isolated and Community port groups. They are used to transport frames in the opposite direction: from I and C ports to P-ports.
Isolated VLAN: Forwards frames from I-ports to P-ports. Since Isolated ports do not exchange frames with each other, we can use just ONE isolated VLAN to connect all I-ports to the P-port.
Community VLANs: Transport frames between community ports (C-ports) within the same group (community) and forward frames upstream to the P-ports of the primary VLAN.
This is how it works:
The Primary VLAN is used to deliver frames downstream from the router to all hosts; the Isolated VLAN transports frames from stub hosts upstream to the router; Community VLANs allow frame exchange within a single group and also forward frames in the upstream direction towards the P-port. All the basic MAC address learning and unknown unicast flooding principles remain the same.
Let’s move to the configuration part (Primary VLAN 100, Isolated VLAN 101 and Community VLAN 102).
Step 1:
Create Primary and Secondary VLANs and group them into PVLAN domain:
!
! Creating VLANs: Primary, subject to subdivision
!
vlan 100
 private-vlan primary

!
! Isolated VLAN: Connects all stub hosts to router
!
vlan 101
 private-vlan isolated

!
! Community VLAN: allows a subVLAN within a Primary VLAN
!
vlan 102
 private-vlan community

!
!  Associating
!
vlan 100
 private-vlan association 101,102
This step groups the PVLANs into a domain and establishes a formal association (used for syntax checking and VLAN type verification).
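At this point you can verify the associations with a show command; the output below is abridged and its exact formatting varies by platform:

SW1# show vlan private-vlan

Primary Secondary Type              Ports
------- --------- ----------------- -----
100     101       isolated
100     102       community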
Step 2:
Configure the host ports and bind them to their respective secondary PVLANs. Note that a host port belongs to two VLANs at the same time: the downstream primary and the upstream secondary.
!
! Isolated port (uses the isolated VLAN to talk to the P-port)
!
interface FastEthernet x/y
 switchport mode private-vlan host
 switchport private-vlan host-association 100 101

!
! Community ports: use community VLAN
!
interface range FastEthernet x/y - z
 switchport mode private-vlan host
 switchport private-vlan host-association 100 102
Step 3:
Create a promiscuous port, and configure downstream mapping. Here we add secondary VLANs for which traffic is received by this P-port. Primary VLAN is used to send traffic downstream to all C/I ports as per their associations.
!
! Router port
!
interface FastEthernet x/y
 switchport mode private-vlan promiscuous
 switchport private-vlan mapping 100 add 101,102
If you need to configure an SVI on the switch, you should add an interface corresponding to the Primary VLAN only. That's because all secondary VLANs are simply "subordinates" of the primary. In our case the config would look like this:
interface Vlan 100
 ip address 172.16.0.1 255.255.255.0
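Note that on many IOS platforms the SVI must also be mapped to the secondary VLANs before their traffic can be routed; with those commands added (verify availability on your software version), the SVI block might look like this:

interface Vlan 100
 ip address 172.16.0.1 255.255.255.0
 ! Map the secondary VLANs so their traffic is routed via this SVI
 private-vlan mapping 101,102
 ! Optionally enable local proxy ARP, so isolated hosts in the same
 ! subnet can reach each other through the router (if policy allows)
 ip local-proxy-arp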
Lastly, there is another feature worth mentioning, called protected port or Private VLAN edge. The feature is pretty basic and available even on low-end Cisco switches, and it allows you to isolate ports in the same VLAN. Specifically, all ports in a VLAN marked as protected are prohibited from sending frames to each other (but are still allowed to send frames to other, non-protected, ports within the same VLAN). Usually, ports configured as protected are also configured not to flood unknown unicast (frames with a destination MAC address not in the switch's MAC table) and multicast frames, for added security.
Example:
interface range FastEthernet 0/1 - 2
 switchport mode access
 switchport protected
 switchport block unicast
 switchport block multicast
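You can confirm the settings with a show command; for example (output trimmed to the relevant lines):

SW1# show interfaces FastEthernet 0/1 switchport
...
Protected: true
Unknown unicast blocked: enabled
Unknown multicast blocked: enabled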

Saturday, November 5, 2011

Label Switching with MPLS.

4.2 MPLS System Functions

This section describes the MPLS functions of distributing labels, merging of LSPs, manipulating the MPLS label stack, and selecting a route on which to forward a labeled packet.

Label Distribution

The distribution of labels—which includes allocation, distribution, and withdrawal of label and FEC bindings—is the mechanism on which MPLS most depends. It is the simple fact of agreeing on the meaning of a label that makes simplified forwarding on the basis of a fixed-length label possible. Protocols defined to aid in achieving this agreement between cooperating network devices are thus of paramount importance to the proper functioning of MPLS.

Piggyback Label Distribution

Labels may be transported in routing (and related) protocol messages. The attraction of this approach is that by piggybacking label assignments in the same protocol that is used to transport or define the associations (e.g., FECs) bound to those labels, the degree of consistency in assignment, validity, and use of those labels is increased. Consistency is made better by eliminating the use of additional messages that may lag behind and introduce a latency period in which, for instance, a route advertisement and its corresponding label(s) are inconsistent. Note that the latency resulting from a lag between route and label updates can be significant at very high packet transport speeds even if the delay is very small.
Examples of piggyback label distribution are discussed in Rekhter and Rosen (w.i.p.) and Awduche et al. (w.i.p.). See also the section entitled Label Distribution in Chapter 6.

Generalized Label Distribution

Labels may also be distributed using protocols designed for that specific purpose. A label distribution protocol is useful under those circumstances in which no suitable piggyback protocol may be used. The attractions of this approach are as follows:
  • The scope of a label distribution protocol is orthogonal to specific routing (and related) protocols.
  • A label distribution protocol provides a direct means for determining the capabilities of LSR peers.
  • The protocol is more likely to be semantically complete relative to the label distribution process.
LDP (Andersson et al. 2001) is an example of a label distribution protocol.
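On Cisco routers, for example, enabling LDP as the label distribution protocol typically takes only a few commands; the sketch below assumes a placeholder interface name:

!
! CEF switching is a prerequisite for MPLS forwarding
!
ip cef
!
! Select LDP as the label distribution protocol (globally)
!
mpls label protocol ldp
!
interface GigabitEthernet0/0
 ! Enable MPLS forwarding (and hence LDP neighbor discovery) here
 mpls ip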

Merging Granularity

Merging in MPLS is the process of grouping FECs that will result in an identical forwarding path within an MPLS domain into a single LSP. Without this process, multiple LSPs will be set up to follow the same routed path toward an MPLS egress that is common for FECs associated with each LSP. This is not an efficient use of labels. However, the egress for the MPLS domain for a set of FECs may wish to use a finer granularity for the LSPs arriving at its input interfaces (for example, ensuring that no two streams of traffic, which the egress will forward to different next hops, share the same input labels).
In general, the best combination of efficiency and purpose is achieved by allowing downstream LSRs to control the merging granularity.
If an LSR, which is not an egress, waits until it has received a mapping from its downstream peer(s) and simply adopts the level of granularity provided by the mappings it receives, the downstream peer controls the granularity of resulting LSPs. This is the recommended approach when using ordered control. 
If an LSR, which is not an egress, distributes labels upstream prior to having received label mappings from downstream, it may discover that the label mappings it subsequently receives are based on a different level of granularity. In this case, the LSR may have to do one of the following:
  • Withdraw some or all of its label mappings and reissue mappings with a matching granularity.
  • Merge streams associated with finer-granularity label mappings sent to upstream peers into a smaller set of coarser-granularity label mappings from downstream.
  • Choose a subset of finer-granularity label mappings from downstream to splice with the smaller set of coarser-granularity label mappings sent upstream. 
An LSR operating in independent control mode that is merge capable may follow a policy that results in its typically sending slightly finer granularity mappings to upstream peers than it typically receives from its downstream peers. If it does this, it can then merge the streams received on the finer-granularity LSPs from upstream to send on to the coarser LSPs downstream.
An LSR operating in independent control mode that is not merge capable must either withdraw and reissue label mappings upstream to match the granularity used downstream or request matching-granularity label mappings from downstream.

Merging

Merging is an essential feature in getting MPLS to scale to at least as large as a typical routed network. With no merge capability whatever, LSPs must be established from each ingress point to each egress point (producing on the order of n² LSPs, where n is the number of LSRs serving as edge nodes). With even partial merge capability, however, the number of LSPs required is substantially reduced (toward order n). With merge capability available and in use at every node, it is possible to set up multipoint-to-point LSPs such that only a single label is consumed per FEC at each LSR—including all egress LSRs.
Different levels of merge capability are defined so that LSRs can support at least partial merge capability even when full merge capability is hard to do given the switching hardware (as is the case with many ATM switches).

Frame Merge

Frame merge is the capability typical of standard routing and is a natural consequence of transport media that encapsulate an entire L3 packet inside an L2 frame. In this case, full merging occurs naturally and no action is required of the LSR. This is typically the case with non-ATM L2 technologies.

VC Merge

VC merge is the name applied to any technique that, when used with an ATM switch, allows it to effectively perform frame merging. Typically, this requires queuing cells associated with a single ATM Adaptation Layer (AAL) frame (if they are not actually reassembled) until the last one has been received. Those cells are then transmitted in the same order in which they were received, while being careful not to interleave them with cells from any other AAL frame being transmitted on the same VC. Interleaving cells using different VCIs is permissible; however, cells associated with the same VCI on any input interface must be transmitted without interleaving with cells received on other input interfaces (or the same interface using a different VCI) that will be transmitted using the same VCI.
Interleaving cells from different input VPI/VCIs onto the same output VPI/VCI makes it impossible for the receiver of the interleaved cells (from at least two sources) to determine where the frame boundaries should be when reassembling the cells into a higher-layer frame. The end-of-frame markers from multiple frames are interleaved as well, which would cause the cells from part of one frame to be assembled with cells from part of another frame (from a different source VPI/VCI), producing a completely useless assembled frame. To successfully merge traffic at the VPI/VCI level, the first cell from one input VPI/VCI must not be sent on an output VPI/VCI until the last cell from another input VPI/VCI has been sent on that same output VPI/VCI.
VC merging therefore requires that cells from each input VPI/VCI to be merged be queued until the last cell from other merging input VPI/VCIs has been sent on the same output VPI/VCI. 

VP Merge

VP merge is the name applied to any technique that provides for mapping distinct VCI numbers on different virtual paths (VPs) at input interfaces to the same VP at an output interface. Because distinct VCIs are used in transmitting cells on an output interface, it is not possible to interleave cells from different input streams at the output interface.

Saturday, July 30, 2011

ROUTER ON A STICK

How To Configure Router On A Stick - 802.1q Trunk To Router

Router-on-a-stick is a term frequently used to describe a setup that consists of a router and switch connected using one Ethernet link configured as an 802.1q trunk link. In this setup, the switch is configured with multiple VLANs and the router performs all routing between the different networks/VLANs.
While some believe the term 'router-on-a-stick' sounds a bit silly, it's a very popular term and commonly used in networks where no layer-3 switch exists. A good example of a router-on-a-stick configuration (which also happens to be the one we are going to cover) would be a Call Manager Express installation where there is the need to split the VoIP network, consisting of your Cisco IP Phone devices, from your data network where all workstations and servers are located.

Example Scenario

Our example is based on a scenario you are most likely to come across when dealing with VoIP networks. Because VoIP implementations require you to separate the data and voice network in order to route packets between them, you need either a layer 3 switch or a router. This configuration ensures availability and stability of the VoIP service, especially during peak traffic hours in your network.
Packets running between VLANs are routed via the CCME router connected to the switch using one physical port configured as a trunk port on both ends (switch and router). If you would like to read more on VLAN routing and VLAN theory, you can visit the Cisco website's VLAN pages, which cover all related topics and terms.
This example will show you how to configure a Cisco router and switch in order to create a trunk link between them and have the router route packets between your VLANs.
The diagram below illustrates the above configuration.

STEP 1 - Switch Configuration

First step is to create the required two VLANs on our Cisco switch and configure them with an IP address:
SW1# configure terminal
SW1(config)# interface vlan1
SW1(config-if)# description Data Vlan
SW1(config-if)# ip address 192.168.0.2 255.255.255.0
SW1(config-if)# exit
SW1(config)# interface vlan2
SW1(config-if)# description Voice Vlan
SW1(config-if)# ip address 192.168.2.2 255.255.255.0
SW1(config-if)# exit
Next, we need to create the trunk port that will connect to the router. For this purpose, we've selected port GigabitEthernet 0/1 (port 1):
SW1# configure terminal
SW1(config)# interface gigabitethernet 0/1
SW1(config-if)# description Trunk-to-Router
SW1(config-if)# switchport trunk encapsulation dot1q
SW1(config-if)# switchport mode trunk
SW1(config-if)# spanning-tree portfast

To eliminate confusion, these commands are instructing the switch thus:
1) Define the trunk to use the 802.1q protocol
2) Set the specific port to 'trunk mode'
3) Enable the spanning-tree 'portfast' function to ensure the port will forward packets immediately when connected to a device, e.g. a router.
The above steps complete the switch-side configuration.
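If you want to double-check the switch side, a trunk verification command such as the one below should show the port trunking with 802.1q encapsulation (illustrative output):

SW1# show interfaces trunk

Port        Mode         Encapsulation  Status        Native vlan
Gi0/1       on           802.1q         trunking      1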

 

STEP 2 - Router Configuration

We need to follow a similar configuration for our router to enable communication with our switch and allow all VLAN traffic to pass through and route as necessary.
Creating a trunk link on a router port is not very different from the process used above - while we create the trunk port on one physical interface, we are required to create a sub-interface for each VLAN.
Again, this is a fairly simple process and easy to understand once you've done it at least one time.
R1# configure terminal
R1(config)# interface gigabitethernet0/1
R1(config-if)# no ip address
R1(config-if)# duplex auto
R1(config-if)# speed auto
R1(config-if)# interface gigabitethernet0/1.1
R1(config-subif)# description Data VLAN
R1(config-subif)# encapsulation dot1q 1 native
R1(config-subif)# ip address 192.168.0.1 255.255.255.0
R1(config-subif)# ip nat inside
R1(config-subif)# ip virtual-reassembly
R1(config-subif)# interface gigabitethernet0/1.2
R1(config-subif)# description Voice VLAN
R1(config-subif)# encapsulation dot1q 2
R1(config-subif)# ip address 192.168.2.1 255.255.255.0
R1(config-subif)# ip nat inside
R1(config-subif)# ip virtual-reassembly

In order to form a trunk link with our switch, it is necessary to create one sub-interface for every VLAN configured on our switch. After creating the sub-interface, we assign an IP address to it and set the encapsulation type to 802.1q, along with the VLAN to which the subinterface belongs.
For example, the encapsulation dot1q 2 command defines 802.1q encapsulation and assigns the subinterface to VLAN 2. The native parameter we used for subinterface gigabitethernet0/1.1 tells the router that the native VLAN is VLAN 1. This is the default on every Cisco switch and therefore must be matched by the router as well.
The ip virtual-reassembly command is usually added automatically by the Cisco IOS (we've included it to show you the command) and is a security measure to avoid buffer overflow and control memory usage during a fragmented-packet attack, which can exhaust your router's resources. This command is added automatically when you enable the NAT service using the ip nat inside command. More information on NAT configuration can be obtained from the Cisco website.
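On the router side, a quick sanity check is to list the subinterfaces and their addresses; the output below is illustrative, and column spacing varies by IOS version:

R1# show ip interface brief | include GigabitEthernet0/1
GigabitEthernet0/1       unassigned      YES manual up        up
GigabitEthernet0/1.1     192.168.0.1     YES manual up        up
GigabitEthernet0/1.2     192.168.2.1     YES manual up        up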

Article Summary

This blog explained the use of router-on-a-stick configurations and showed how you can configure an 802.1q trunk link between a Cisco switch and router. Router-on-a-stick configurations are extremely useful in environments where no layer-3 switch exists, providing Inter-VLAN routing services with a single router and one interface, seriously cutting down the costs of internal routing.
It is always preferable to use a router with a Gigabit Ethernet interface to ensure you've got plenty of bandwidth to handle large amounts of data transfers if needed.
If you have found the blog useful, I would really appreciate it if you shared it with others on Twitter. Sharing my blogs takes only a minute of your time and helps them reach more people.

COMPILED BY ERICK OSIKE
Lead Technical Engineer Bernice Communications Ltd.

Friday, July 29, 2011

Installation Of A Cisco Catalyst 4507R-E

Installation Of A Cisco Catalyst 4507R-E

Introduction

For many, dealing with Cisco equipment is a dream come true, while for luckier others, it's simply an everyday routine.
Driven by our thirst for technical material and experience, we thought it would be a great idea to start presenting various installations of Cisco equipment around the globe, especially equipment that we don't get to play with everyday.
We recently had the chance to unpack and install a Cisco Catalyst 4507R-E Layer 3 switch (Ardhi House, NKR), which we must admit was extremely impressive. The Cisco Catalyst series is world-renowned for its superior network performance and the modularity that allows it to 'adapt' to any demands your network might have.
For those who haven't seen or worked with a 4507R/4507R-E switch, it's a very big and heavy switch in a metal cabinet (chassis) supporting up to two large power supplies and a total of 7 cards (modules), two of which are the supervisor engines that do all the switching and management work.
The new 4507R-E series is a mammoth switch that allows a maximum of 320Gbps (full duplex) switching capacity by utilising all 7 slots - in other words, 5 modules alongside two Supervisor Engine 6-E cards (each with two full line rate 10Gb uplinks).
The 4507R-E switch is shipped in a fairly large box, 50(H)x44(W)x32(D) cm, and weighs around 21 kg with its shipping box. The practical height of the unit for a rack is 11U, which means you need quite a bit of room to make sure it's comfortably placed.

The Grand Opening
Like most Cisco engineers, we couldn't wait to open the heavy box and smell the freshly packaged item that came directly from Cisco's manufacturing line. We carefully moved the 4507R-E box to the datacenter and opened the top side of the box.....
The upper area of the picture is where you'll find the two large cube slots for the power supplies. Below them, you can identify 6 out of the 7 slots waiting to be populated and give this monster unbelievable functionality!
After opening the package and removing the plastic wrapping, we placed the switch on the floor so we could take a better look at it.
Because we couldn't wait any longer, we quickly opened one of two power supplies and inserted it into the designated slot. The power supplies installed were rated at 2800Watts each - providing more than enough juice to power a significant number of IP Phones via the PoE cards installed later on.
The picture below shows both power supplies, one inserted into its slot, while the other was placed on top of the chassis with its connectors facing frontwards so you can get a glimpse of them. When inserted into its slot, the power supply's bottom connectors plug firmly into the chassis connectors and power up the Catalyst switch.
Turning on the power supplies for the first time made the datacenter's lights dim instantly as they began to draw power for the first time! Interestingly enough, if you take a look at the power supply on top of the chassis, you'll notice three long white strips inside the power supply. These are actually three very large electrolytic capacitors - quite impressive!
For those interested, the power supplies were made by Sony (yes, they had a Sony sticker on them!).

Supervisor Engine Installation
As we mentioned in the beginning of this article, the powering engine of any 4500 series Catalyst switch is the Supervisor Engine. The Supervisor engines occupy up to two slots on the 4507R chassis, one of them used for redundancy in case the other fails. When working with two supervisor engines, the 4507R is usually configured to automatically switch from one engine to the other without network interruptions, even for a VoIP network with active calls between ends.
Cisco currently has around 7 different Supervisor Engines, each with unique characteristics, designed for various levels of density and bandwidth requirements.
Currently, the Supervisor Engine 6-E is the best performing engine available, providing 320Gbps bandwidth (full duplex) and 250 million packets per second forwarding rate!
For our installation, we worked with the Supervisor Engine II-Plus, also known as Cisco part WS-X4013+. Here's one of the supervisor engines in its original antistatic bag:

After placing the antistatic wrist-strap contained in the package on my wrist and carefully unwrapping the supervisor engine, the green circuit board with its black towers (heatsinks) is revealed. You can easily see the 5 heatsinks, two of which are quite large and do an excellent job of keeping the processors cool:
At the back left side of the board, you can see the supervisor engine's connector, which is equally impressive with its 450 pins - 50 on each row!
We took a picture from the back of the board to make sure the connector was clearly visible:
Just looking at the connector makes you imagine the number of signals that pass through it to give the 4507R-E the performance rating it has! On the left of the board's connector is the engine's RAM (256MB), while right behind it is the main CPU with the large heatsink, running at 266Mhz.
Here is a close up of the engine's RAM module. The existing 256MB memory module can be removed and upgraded according to your requirements:

Moving to the front side of the Supervisor Engine, you can see the part number and description:
The uplink ports visible on the front are GBICs (GigaBit Interface Converters) that can be used as normal Gigabit interfaces. By using different GBICs you can connect multimode or singlemode fiber optic cable, or standard CAT5e/CAT6 Ethernet cabling. These ports can come in handy when you're approaching your switch's full capacity.
 
The impressive Supervisor Engine fits right into one of the two dedicated slots available on the 4507R-E chassis. These are slots 3 & 4 as shown in the picture below. Also visible is the switch's backplane and black connectors awaiting the Supervisor Engine boards (marked with red):

We took another picture inside the chassis to make things as clear as possible:

Here you can see the backplane with the two Supervisor Engine connectors. The white coloured connectors just above and below the Supervisor Engines are used by the rest of the boards available to the 4507R.

After inserting one of the Supervisor Engines and two power supplies, here is the result:
One detail well worth noticing is the colour-coded bars on the left and right side of the Supervisor card. These colour codes exist to ensure engineers don't accidentally try to insert a Supervisor card into an inappropriate slot. The 4507R-E can accept up to two supervisor engines, therefore you have two slots dedicated to them, leaving 5 slots available.
Cisco engineers have thought of everything on the 4507R-E. The cooling mechanism is a good example of smart thinking and intelligent engineering. With 7 cards installed in the system, pumping out a generous amount of heat, the cooling had to be as effective as possible. Any heat trapped between the cards could inadvertently lower the components' reliability and cause damage in the long term.
This challenge was dealt with by placing a fan-tray vertically, right next to the cards. The fan-tray is not easily noticed at a quick glance, but the handle on the front gives away that something is hidden in there. Unscrew the top & bottom bolts, place your hand firmly around the handle, pull outwards, and you will be surprised:

The picture taken on the left shows the eight fans placed on the fan-tray. These fans run at full speed the moment you power the switch on, consuming 140 Watts on their own!
Once they start spinning, you really can't argue that the cooling is inadequate: the air flow produced is so great that when we powered the 4507R-E, the antistatic bags accidentally forgotten on the right-hand side of the chassis were sucked almost immediately against the chassis grille, just as happens when you leave a plastic bag behind a powerful fan!
Of course, anything on the left side of the chassis (viewable in our picture) would be immediately blown away.
After inserting the fan-tray back in place, it was time to take a look around and see what else was left to play with.
Our eyes caught another Cisco box and we approached it, picked it up and checked out the label:
The product number WS-X4548-GB-RJ45V and the size of the package made it clear we were looking at a card designated for the 4507R-E. Opening the package confirmed our thoughts - this was a 48-port Gigabit card with PoE:
We carefully unwrapped the contents always using our antistatic wrist-strap so that we don't damage the card, and then placed it on top of its box:
The card has an impressive quantity of heatsinks, two of which are quite large and therefore must dissipate a lot of heat. The backplane connector is visible with its white colour (back left corner), and right behind the 48 ports is an area covered with a metallic housing. This attracted our attention, as we thought something very sensitive must be in that area for Cisco to protect it in such a way.
Taking a look under the protective shield we found a PCB board that ran along the length of the board:
Our understanding is that this rail of PCB, with transistors and other electrical circuits mounted on it, holds the regulators for the PoE support. Taking into consideration that we didn't see the same protection on other, similar non-PoE boards, we couldn't imagine it being anything else.
When we completed our checkup, we decided it was time to install the card and finally power the 4507R-E switch.
The picture on the left shows our 4507R-E installed with two Supervisor Engine II-Plus engines in active-standby redundancy mode and one 48 port Gigabit Ethernet card with PoE support.
On top is the editor's (Chris Partsenidis) laptop with a familiar website loaded, Firewall.cx!
Configuring the Supervisor engines was a simple experience. When the 4507R-E is switched on, both engines will boot by first performing a POST test on their modules, memory buffers etc. When this internal POST phase is successfully complete without errors, the engines begin to boot the IOS.
The screenshot below shows us the described procedure from one Supervisor engine since you can't monitor both engines unless you have two serial ports.



As you can see, the Supervisor engine passed all tests and then proceeded to boot the IOS.
Once loaded, the IOS will check for the existence of a second Supervisor engine, establish connection with it and, depending on which slot it is located in, it will automatically initialise the second engine in standby mode as shown below:


Once the Supervisor engine bootup process is complete, you are able to configure any aspect of the switch according to your needs, just as you would with any other Cisco Catalyst switch. The interesting part is when you try to save your configuration:


In the above screenshot, we've configured the switch to boot using a specific IOS image located in the bootflash. As soon as we saved the configuration using the wr command, the Supervisor engine automatically synchronised the two engines' NVRAM without any additional commands. This excellent functionality makes sure that whatever configuration is applied to the active Supervisor engine will be available to the standby engine should the first one fail.
The great part of this switch is that you can obtain any type of information you require from it. For example, we switched off one of the two power supplies and executed the show modules command. This command gives a report of the installed modules (cards) in the catalyst switch along with a few more details:


The command reveals that the backplane power consumption is approximately 40 Watts, followed by a detailed report of the installed modules. In our example, you can see the two Supervisor engines in slots 3 & 4, followed by the 48-port Gigabit Ethernet module in slot 5. The command also shows the Supervisor engines' configured redundancy operating mode and status. Lastly, any system failures are reported at the end - this output shows that we've got a problem with one of the power supplies, but rest assured, we had simply switched it off to see if it was going to show up in the report!
 

Article Summary

This article covered the initial installation and setup of a new Cisco Catalyst 4507R-E switch, populated with two Supervisor Engines II-Plus and a 48 port Gigabit module with PoE support. We saw areas of the switch which you won't easily find covered elsewhere, and our generous amount of pictures made sure you understood what the 4507R-E looks like, inside and out! Lastly, we saw the switch bootup procedure and the Supervisor engine POST test and synchronization process.

HOPE YOU HAVE A WONDERFUL CONFIGURATION AND INSTALLATION HOUR!

CONFIGURING NTP ON A CISCO ROUTER


Network Time Protocol (NTP) is a vital service not only for Cisco devices but almost every network device. Any computer-based device needs to be accurately synchronised with a reliable time source such as an NTP server.
When it comes to Cisco routers, obtaining the correct time is extremely important because a variety of services depend on it. The logging service shows each log entry with the date and time - very critical if you're trying to track a specific incident or troubleshoot a problem.
Generally, most Cisco routers have two clocks (most people are unaware of this!): a battery-powered hardware clock, referred to as the 'calendar' in the IOS CLI, and a software clock, referred to as the 'clock' in the IOS CLI.
The software clock is the primary source for time data and runs from the moment the system is up and running. The software clock can be updated from a number of sources:
  • NTP Server
  • SNTP (Simple NTP)
  • VINES Time Source
  • Hardware clock (built into the router)
Because the software clock can be configured to be updated from an external source, it is considered more accurate than the hardware clock. The hardware clock, in turn, can be configured to be updated from the software clock.
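Both clocks can be inspected, and tied together, with standard IOS commands. The following is a short sketch; the hostname R1 is illustrative:

```
R1# show clock                    ! displays the software clock
R1# show calendar                 ! displays the hardware (battery-backed) clock
R1(config)# ntp update-calendar   ! periodically copy NTP-synchronised software time to the hardware clock
R1# clock update-calendar         ! one-off copy of the software clock to the hardware clock
```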

Example Scenario

This article will show you how to configure your Cisco router to synchronise its software clock from external sources such as NTP servers. We will also show you how to configure your router to act as an NTP server for your internal network devices, ensuring all devices are synchronised.
The first example involves setting up the router to request NTP updates and synchronise itself with a public NTP server. This ensures the router's time is constantly synchronised; however, the router will not act as an NTP server for internal hosts:
We'll need to configure the router to resolve FQDN using our ISP's name server:
R1(config)# ip name-server 195.170.0.1
Now we instruct our Cisco router to obtain its updates from the public NTP server.
R1(config)# ntp server 1.gr.pool.ntp.org

As soon as we issue the command, the router resolves the FQDN into an IP address and begins its synchronisation. We can immediately verify that the router is correctly configured and awaiting synchronisation:
R1# show ntp associations
address         ref clock       st   when   poll   reach   delay    offset    disp
~195.97.91.220  131.188.3.221    2     30     64       1    0.000   -1539.9   7937.5
* sys.peer, # selected, + candidate, - outlyer, x falseticker, ~ configured

R1# show ntp status
Clock is unsynchronised, stratum 16, no reference clock
nominal freq is 250.0000 Hz, actual freq is 250.0006 Hz, precision is 2**24
reference time is 00000000.00000000 (02:00:00.000 Greece Mon Jan 1 1900)
clock offset is 0.0000 msec, root delay is 0.00 msec
root dispersion is 0.00 msec, peer dispersion is 0.00 msec
loopfilter state is 'FSET' (Drift set from file), drift is -0.000002405 s/s
system poll interval is 64, never updated.
The 'show ntp associations' command shows that the system is configured (~) to synchronise with our selected NTP server; however, it is not yet synchronised. When it is, expect to see a star (*) in front of the tilde (~). The 'ref clock' column shows the IP address of the NTP server from which our chosen public server (1.gr.pool.ntp.org) is itself synchronising.
It is also worth noting the column named 'st', which is equal to two (2). This is the stratum level. The lower the stratum, the closer we are to the atomic reference clock. As a general rule, always try to synchronise with a server that has a low stratum.
The 'show ntp status' command confirms that we are yet to be synchronised with the NTP server.
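The article is cut short here, but the second scenario promised earlier - the router acting as an NTP server for the internal network - amounts to only a couple of commands. This is a sketch; the internal address 192.168.1.1 is hypothetical:

```
! Once R1 is synchronised with the public pool, it can serve time itself.
! The optional 'ntp master 5' lets it keep serving time at stratum 5
! even if the upstream servers become unreachable.
R1(config)# ntp master 5
!
! Internal devices simply point at the router's LAN address:
R2(config)# ntp server 192.168.1.1
```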

Monday, July 4, 2011

Configuring A Frame Relay Switch

hostname FRAME_SWITCH
!
!
ip subnet-zero
no ip domain-lookup
frame-relay switching
!
!
!
interface Ethernet0
no ip address
no ip directed-broadcast
shutdown
!
interface Serial0
ip address 10.1.1.2 255.255.255.0
clockrate 56000
!
interface Serial1
no ip address
no ip directed-broadcast
encapsulation frame-relay
logging event subif-link-status
logging event dlci-status-change
clockrate 56000
no frame-relay inverse-arp
frame-relay intf-type dce
frame-relay route 122 interface Serial2 221
frame-relay route 123 interface Serial3 321
!
interface Serial2
no ip address
no ip directed-broadcast
encapsulation frame-relay
logging event subif-link-status
logging event dlci-status-change
clockrate 56000
no frame-relay inverse-arp
frame-relay intf-type dce
frame-relay route 221 interface Serial1 122
!
interface Serial3
no ip address
no ip directed-broadcast
encapsulation frame-relay
logging event subif-link-status
logging event dlci-status-change
clockrate 56000
no frame-relay inverse-arp
frame-relay intf-type dce
frame-relay route 321 interface Serial1 123
!
interface BRI0
ip address 150.1.1.1 255.255.255.252
no ip directed-broadcast
encapsulation ppp
dialer map ip 150.1.1.2 name R2 broadcast 2335552221
dialer-group 1
!
ip classless
!
dialer-list 1 protocol ip permit
!
line con 0
exec-timeout 0 0
logging synchronous
transport input none
line aux 0
line vty 0 4
login
!
end 
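The configuration above turns the router into a Frame Relay switch: a frame arriving on Serial1 with DLCI 122 is switched out Serial2 as DLCI 221, and vice versa, with a matching pair of routes between Serial1 and Serial3. Once the attached routers are up, the switching table and PVC status can be checked with standard IOS commands (output will vary with your topology):

```
FRAME_SWITCH# show frame-relay route   ! displays the static DLCI switching table
FRAME_SWITCH# show frame-relay pvc     ! displays PVC status (active/inactive/deleted)
```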

Monday, June 27, 2011

PULL BACK THE CURTAIN OF FEAR


It's amazing what you'll discover if you start exploring what God has placed within you. I'm always astounded when I read an article about an eighty-year-old man who goes skydiving or something equally daring for his birthday. To us it seems amazing, but that man didn't just wake up one morning and decide to jump out of a plane. You can be sure that he has been stretching himself and trying new things his whole life.
What are you doing to stretch yourself? I encourage you: don't just sit back and wait for something to happen. Don't let the curtain of fear drape your heart's desires.
There's a story about a woman who was newly divorced, almost penniless, afraid of public places, and trying to raise two teenage sons. After several tragedies in her life, she developed severe agoraphobia and was afraid to even leave her house. She searched her heart for ways to support herself and her two sons.
She loved to cook, and all she knew to do for income was to make sandwiches and other simple foods. With the help of her two sons, she found a few customers; but because she was so uncomfortable leaving the house, she had her two sons deliver the sandwiches. Her business quickly grew beyond the size of her kitchen, and she now faced a decision. Would she stand still and stop growing, or would she confront her fears and step outside her comfort zone? Though fear constantly nagged at her, she recognized that cooking was a desire that God had placed inside of her. As she sat in her house, she could imagine her business growing and began to see success. She made a decision to stretch herself - one step at a time. First, she decided to confront the agoraphobia that imprisoned her. Reaching deep inside herself, she was able to take a job as a chef at a local hotel, and once again she experienced tremendous success.

A blessed day to you all!