What is vPC in Nexus?

Though Port-Channels are great, the problem is that all links within the "bundle" must be connected to the same switch, making that switch a single point of failure. It also results in a single control plane for both management and configuration purposes.

With vPC, on the other hand, each switch is managed and configured independently, and it is important to keep this in mind throughout (Figure 1: vPC Components). Here lies an issue: when the vPC peer-link goes down, only the vPC member ports are shut down, i.e. orphan ports (ports toward single-attached devices) remain up. To ensure an orphan port is brought down along with the member ports, the interface command vpc orphan-port suspend is used.
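As a minimal sketch of that command (the interface number, and the fact that a single-attached device hangs off it, are assumptions for illustration):

    ! Assumed single-attached (orphan) device on Ethernet1/10
    interface Ethernet1/10
      switchport mode access
      ! Suspend this port too if the peer-link fails
      vpc orphan-port suspend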

The vPC peer-link is the most important component within the vPC domain. As mentioned, should a member port fail, the peer-link is used to send unicast traffic to the peer (Figure 2: Peer-Link Scenario). The necessary configuration is shown below.
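The original listing did not survive the page conversion, so here is a minimal sketch of a typical peer-link configuration, assuming vPC domain 10 and member interfaces Ethernet1/1-2 (all values are assumptions):

    feature vpc
    feature lacp

    vpc domain 10

    ! Bundle two physical links into the peer-link port channel
    interface Ethernet1/1-2
      switchport mode trunk
      channel-group 10 mode active

    ! The peer-link itself must be a trunk
    interface port-channel10
      switchport mode trunk
      vpc peer-link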

This configuration is applied to both switches. The Cisco Fabric Services over Ethernet (CFSoE) protocol is used for stateful synchronization and configuration; it runs across the peer-link and requires no configuration by the administrator. CFSoE performs compatibility checks to validate that vPC member ports can form the channel, synchronizes the IGMP snooping status, monitors the status of the vPC member ports, and synchronizes the Address Resolution Protocol (ARP) table.
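The parameters these checks validate can be reviewed from the CLI with standard NX-OS show commands (port-channel 20 is an assumed vPC member channel; no sample output reproduced here):

    show vpc consistency-parameters global
    show vpc consistency-parameters interface port-channel 20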

Virtual Switching System (VSS) is a virtualization technology that pools multiple Cisco Catalyst switches into one virtual switch, increasing operational efficiency, boosting nonstop communications, and scaling system bandwidth capacity. VSS was first available on the Cisco Catalyst 6500 series and was later introduced to newer Catalyst platforms.

Both technologies look the same from the perspective of the downstream switch, but there are differences, mainly in how the control plane operates on the upstream devices.

Table 1: VSS vs. vPC

    Feature                               VSS                     vPC
    Multi-Chassis Port Channel            Yes                     Yes
    Loop-free topology                    Yes                     Yes
    Spanning Tree as failsafe protocol    Yes                     Yes
    Maximum physical nodes                2                       2
    Control plane                         Single logical node     Two independent active nodes
    Configuration                         Common configuration    Two different configurations
    Layer 3 port channel                  Supported               Not supported
    Port-channel modes                    Static, LACP            Static, LACP

Catalyst switches may need a supervisor engine upgrade to form a VSS. STP is still in operation but runs only as a failsafe mechanism.

Link Aggregation Control Protocol (LACP) is the protocol that allows dynamic port-channel negotiation and allows up to 16 physical interfaces to become members of a single port channel. Recommendations, in order of preference, for the vPC Keepalive link interconnection are:

1. Dedicated point-to-point link(s) between the two peers. Using point-to-point links makes it easier to control the path and minimizes the risk of failure.

2. The out-of-band (OOB) management network via the mgmt0 interfaces. Note that turning off the OOB management switch, or accidentally removing the keepalive links from that switch in parallel with a vPC Peer-Link failure, could lead to a split-brain scenario and a network outage.

3. As a last resort, the keepalive can be routed in-band over the L3 infrastructure.

Whichever option is chosen, a dedicated interface on each vPC peer switch should host the keepalive link.
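As a sketch of the management-network option, assuming mgmt0 addresses 192.168.0.11 and 192.168.0.12 for the two peers (both addresses are assumptions):

    ! On the first peer; mirror with swapped addresses on the second
    vpc domain 10
      peer-keepalive destination 192.168.0.12 source 192.168.0.11 vrf management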

Layer 3 connectivity for the Keepalive link can be accomplished either with an SVI or with an L3 (no switchport) configuration on the interfaces involved. If an SVI is configured to route the keepalive packets, that VLAN should not be carried over the vPC Peer-Link. The following considerations apply to the vPC Peer-Link: if both vPC peers are active and the Peer-Link fails, the secondary vPC peer suspends its vPC member ports. At this point traffic continues flowing through the primary vPC peer without any disruption. In the unfortunate event an orphan device is connected to the secondary peer, however, its traffic will be black-holed.

A failure of the Peer Keepalive link alone has no negative effect on the operation of the vPC, which continues forwarding traffic.

The Keepalive link is used as a secondary test mechanism to confirm that the vPC peer is alive in case the Peer-Link goes down. As soon as the Keepalive link is restored, the vPC continues to operate. In the case of a total vPC peer switch failure, the remote switch learns of the failure via the Peer Keepalive link, since no keepalive messages are received.

Data traffic is forwarded over the remaining links until the failed switch recovers. It should be noted that the Keepalive messages become decisive only when all the links in the Peer-Link fail.
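The health of the keepalive and of the vPC as a whole can be verified at any point with standard NX-OS show commands (no sample output reproduced here):

    show vpc
    show vpc peer-keepalive
    show vpc role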

Should all Peer-Link links fail while the Keepalive link is also down, both switches undertake the vPC primary role: the vPC primary switch remains primary and the vPC secondary switch becomes operational primary. This split-brain condition causes severe network instability and outage.

The vPC is configured and normal operation is verified by following the nine steps defined below. It should be noted that the order of the vPC configuration is important and that a basic vPC setup is established by the first four steps:

Step 1: Enable the vPC feature and create the vPC domain.

Step 2: Select a Peer Keepalive deployment option.

Step 3: Establish the vPC Peer Keepalive link.

Step 4: Complete the global vPC configuration on both vPC peer switches.

Step 5: Configure individual vPCs to downstream switches or devices.

Step 8: Optionally, enable additional features to optimize the vPC setup.

To help illustrate the setup of the vPC technology, we used two Nexus data center switches. Typically, a similar process would be followed for any other type of Nexus switch.
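Tying the first five steps together, the following is a minimal end-to-end sketch for one peer under assumed values (domain 10, a management-network keepalive, illustrative interface numbers); the second peer mirrors it with the keepalive addresses swapped:

    ! Step 1: enable the feature and create the domain
    feature vpc
    feature lacp
    vpc domain 10
      ! Steps 2-3: keepalive over the management network
      ! (one option; the lab below uses an SVI instead)
      peer-keepalive destination 192.168.0.12 source 192.168.0.11 vrf management

    ! Step 4: the peer-link
    interface Ethernet1/1-2
      switchport mode trunk
      channel-group 10 mode active
    interface port-channel10
      switchport mode trunk
      vpc peer-link

    ! Step 5: an individual vPC toward a downstream switch or device
    interface Ethernet1/5
      switchport mode trunk
      channel-group 20 mode active
    interface port-channel20
      switchport mode trunk
      vpc 20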

Our setup below utilizes the SVI approach and the second option (a dedicated 1G link) proposed for the N5k series keepalive link setup in Table 2. This deployment option involves a dedicated VLAN with a configured SVI used for the keepalive link, placed inside an isolated VRF named keepalive for complete isolation from the rest of the network.
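A sketch of this option under assumed values (VLAN 99, a /30 between the peers, Ethernet1/48 as the dedicated 1G link):

    feature interface-vlan

    vrf context keepalive

    vlan 99
      name vpc-keepalive

    ! Dedicated 1G link carrying only the keepalive VLAN
    interface Ethernet1/48
      switchport mode access
      switchport access vlan 99

    interface Vlan99
      vrf member keepalive
      ip address 10.254.254.1/30
      no shutdown

    vpc domain 10
      peer-keepalive destination 10.254.254.2 source 10.254.254.1 vrf keepalive

The no switchport alternative mentioned earlier would instead place the same addressing directly on a routed physical interface.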

Using a separate VRF instance for the vPC Peer Keepalive link is highly recommended, as it ensures that the peer keepalive traffic is always carried on that link and never on the Peer-Link.

In normal operation, a frame received from a downstream device is forwarded out a local vPC member link; this is shown by the diagonal green arrow above. If for some reason the frame is sent across the vPC Peer-Link (blue dotted line) to Nexus B, B is not allowed to forward the frame out a member link (say, to D), because that might cause loops or duplicate packets.

The one exception to that behavior is what happens if the member link from A to D (the one with the green diagonal arrow next to it) goes down, as shown by the red X. In that case, and only in that case, is Nexus B allowed to forward a frame that came across the peer link out a member link.

B can forward a frame that came across the peer link out the right-hand B-to-D link because the diagonal link going to that switch (the paired vPC A-to-D member link) is down. To put it another way, vPC peers are expected to forward a frame received on a member link out any other member link that needs to be used. Only if they cannot do so due to a link failure is forwarding across the vPC peer link and then out a member link allowed, and even then the cross-peer-link traffic can only go out the member link that is paired with the member link that is down.

What you might not expect at this point is that the same rules apply to routed traffic, and since vPC does not spoof the two peers into being one L3 device, packets can get black-holed. The system MAC address, or a configured priority (a good idea), is used to determine the primary and secondary vPC peers.
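Since the election otherwise falls back to comparing system MAC addresses, pinning the roles explicitly is a one-liner per switch; a sketch, with 4096 and 8192 as assumed values (the lower value wins the primary role):

    ! On the intended vPC primary (lower priority wins)
    vpc domain 10
      role priority 4096

    ! On the intended vPC secondary
    vpc domain 10
      role priority 8192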

I do have a question, though. While I realize there are benefits to having them as two separate physical links, can the vPC keepalives be sent across the vPC peer link? Never mind my earlier question if I actually submitted it; I was wondering if the keepalive link could be combined with the peer link, but I forgot the keepalive link can be done over the management port.

You always want keepalives on a different link than the vPC peer link, since their purpose is to detect a situation where the peer is still up but the vPC peer link is down.

Instead, you can use the management port. Or, if you put a separate point-to-point routed link between the peers, in parallel with the vPC peer-link, you can use that for the keepalives.


