Huawei Technologies SDN Showcase at SDN & OpenFlow World Congress 2013

Introduction

While OpenFlow has already penetrated data centers, service providers are now turning to OpenFlow to ease network operations and provisioning. In preparation for one of the biggest European conferences focusing on SDN and OpenFlow, Huawei Technologies commissioned EANTC to validate Huawei's OpenFlow solution targeted at service providers. This report explains the use cases, test cases and results we collected during the test execution in EANTC's lab in Berlin, Germany.

Huawei built a service provider access network in EANTC's lab based on a mix of Huawei OpenFlow switches and legacy switches that do not support OpenFlow.
The test network was managed by a pool of Huawei OpenFlow controllers. EANTC used Ixia's IxNetwork testers to emulate subscriber traffic.

Figure 1: Test Bed Network Topology

Test Setup

The test bed network was composed of four Huawei SN640 OpenFlow switches controlled by Huawei's OpenFlow controller cluster, called Smart OpenFlow Controller (SOX). Huawei also brought two AR1600 routers that did not support OpenFlow to represent legacy networks. One of these routers was configured as part of the OpenFlow domain with Autonomous System (AS) 100, while the other router was configured as part of a legacy domain in AS 200. All devices were interconnected using Gigabit Ethernet.

Test Results Highlights:
- High availability including OpenFlow controller resiliency
- Traffic management with network-wide Quality of Service and multi-path forwarding
- Self-service traffic management

Results

Interworking With Legacy Devices

Unless a greenfield deployment is an option, service providers are likely to expect OpenFlow devices to be installed in the network gradually. It is therefore obvious that OpenFlow-based networks have to interwork with legacy network components. This interworking could be part of the same administrative domain or a different domain.

Each of the OpenFlow switches established a single OpenFlow channel to the OpenFlow controller cluster using an out-of-band management network. The controller automatically discovered and displayed the complete network topology, including the non-OpenFlow routers. The controller negotiated and learned the proper OpenFlow version for each device.

In this first test we looked into how common and well-trusted protocols such as IP/MPLS and External BGP interwork between OpenFlow and non-OpenFlow devices. The first step involved the OpenFlow controller discovering the complete network topology. The automatic topology discovery algorithm utilized Link Layer Discovery Protocol (LLDP) messages. Since LLDP was enabled on the legacy devices, these devices and the links connecting them to the OpenFlow switches were also discovered by the controller.

We then investigated IP/MPLS interworking between the OpenFlow switches and the non-OpenFlow devices. Huawei configured Resource Reservation Protocol Traffic Engineering (RSVP-TE) as the Label Switched Path (LSP) signalling protocol and used Open Shortest Path First Traffic Engineering (OSPF-TE) as the link-state routing protocol to distribute link-state and traffic engineering information between nodes in the same autonomous system.

Figure 2: Interworking With Legacy Devices
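The LLDP-based discovery described above can be sketched as follows: the controller instructs each OpenFlow switch to emit LLDP frames, and every LLDP frame delivered back to the controller via packet-in reveals one link. A minimal Python sketch, with illustrative names that are assumptions rather than Huawei SOX's actual API:

```python
# Minimal sketch of LLDP-based topology discovery. Class and method
# names are illustrative assumptions, not Huawei SOX's actual API.

class TopologyDiscovery:
    def __init__(self):
        # Each link is ((sender_device, sender_port), (receiver_switch, receiver_port)).
        self.links = set()

    def on_lldp_packet_in(self, rx_switch, rx_port, chassis_id, port_id):
        # The LLDP TLVs (chassis_id, port_id) identify the sender of the
        # frame; the packet-in metadata identifies the receiving switch
        # and port, so together they describe one unidirectional link.
        self.links.add(((chassis_id, port_id), (rx_switch, rx_port)))

    def neighbors(self, device):
        # Devices seen on either end of a link involving `device`.
        out = set()
        for (src, _sp), (dst, _dp) in self.links:
            if src == device:
                out.add(dst)
            elif dst == device:
                out.add(src)
        return out
```

Because the legacy routers also ran LLDP, their frames arrived on OpenFlow switch ports and were reported via packet-in, which is how a controller can discover non-OpenFlow neighbors as well.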
All those tasks were performed by the controller; neither the OpenFlow nor the non-OpenFlow switches needed to be provisioned by hand at all.

We verified via the controller's GUI and the devices' CLI that OSPF and RSVP-TE sessions were established between the non-OpenFlow switch in AS 100 and the SOX. While MPLS service and user traffic was running, we did not observe any packet loss. The OpenFlow switch that was connected to the non-OpenFlow switch had installed flow entries to push and pop the MPLS header.

Next on the verification list was the use of eBGP to interconnect AS 100 (the OpenFlow domain) and AS 200 (the non-OpenFlow domain). One OpenFlow switch was configured as Autonomous System Border Router (ASBR) for AS 100 and another router was configured as ASBR for AS 200. Once the controller discovered the neighbor, it installed a flow entry for the control traffic and established the BGP sessions toward the non-OpenFlow switch in AS 200. After the routing information was exchanged between AS 100 and AS 200, we verified that IPv4 prefixes advertised from the non-OpenFlow switch in AS 200 were installed in SOX's Routing Information Base (RIB) and vice versa. We did not observe any packet loss for the user traffic.

Multi-Path Forwarding

Network resource optimization is one of the big challenges that traditional networks are facing these days. Various methods, such as Equal Cost Multipath (ECMP), are employed in networks to better utilize network resources. ECMP means that multiple paths to the same destination are used to provide load balancing.

According to OIF, centralized network control allows more granular network control and optimization than distributed network control. The flow-based OpenFlow control model allows network administrators to apply policies at a very granular level, including session, user, device, and application levels.

In order to test these OpenFlow multi-path forwarding capabilities, we sent IPv4 user traffic for three different network services, distinguished by DSCP, IP source and IP destination address. For each network service we applied a different bandwidth profile as shown in Table 1.

Traffic Class   Service A    Service B     Service C
High            20 Mbit/s    40 Mbit/s     100 Mbit/s
Medium          30 Mbit/s    60 Mbit/s     150 Mbit/s
Low             50 Mbit/s    100 Mbit/s    250 Mbit/s

Table 1: Bandwidth Profiles

The user traffic was equally load balanced between two equal cost paths at the flow level. The direct link between the left and the right OpenFlow switch was set with a higher link metric, therefore both indirect links were used. We did not observe any packet loss or re-ordered packets for any service or Class of Service (CoS).

Link and Node Resiliency

In current service provider networks, each network layer has its own resiliency mechanism, resulting in more expensive networking equipment and higher operational and management costs. We asked Huawei whether OpenFlow can help provide a uniform and reliable resiliency mechanism while at the same time providing carrier-grade resiliency.

To answer this question we looked into the resiliency options implemented by Huawei's OpenFlow-based solution. We emulated traditional service interruption scenarios such as link and node failure and measured the service interruption time. We tested the protection approach utilizing the OpenFlow fast-failover group type. This mechanism enables the OpenFlow switch to change the forwarding path without requiring a round trip to the controller, which significantly reduced the failure reaction and recovery time. The results from both resiliency categories are shown in Figure 3.

Controller Resiliency

Service provider networks need to handle a potentially large number of user flows and traffic. If all user flows are controlled through a single device, that device represents a single point of failure as well as, perhaps, a performance bottleneck. Therefore, the Open Networking Foundation (ONF) introduced a multi-controller feature in the OpenFlow standard with the main goal of avoiding a single point of failure. In this architecture each OpenFlow switch establishes an OpenFlow channel to all OpenFlow controllers in the domain.

Huawei chose to implement their OpenFlow controller as a cluster composed of multiple OpenFlow controllers. All of the controllers in the cluster can be viewed as one single logical controller. Huawei explained that having multiple controllers in one cluster provides scalability and reliability, a big benefit to the service provider. The number of controllers in the cluster can be dynamically adjusted to manage the load based on real-time observation of the network state. We tested the Huawei SOX controller cluster's ability to react to controller failure and to balance load dynamically.

In our test we used one physical machine running several independent controller processes. Initially, Huawei configured the controller cluster with two OpenFlow controller instances. Both instances were configured to run as equals in the cluster. We verified that the packet-in load (data packets that are sent to the controller) was equally balanced between both instances.
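The dynamic sizing of the controller cluster can be sketched as a simple feedback rule: grow the cluster until the evenly shared packet-in load fits under each instance's capacity. This is a hedged sketch; the capacity figure and the grow-only policy are illustrative assumptions, not Huawei's documented algorithm:

```python
# Hedged sketch of dynamic controller-cluster sizing. The per-instance
# capacity and the grow-only policy are illustrative assumptions.

class ControllerCluster:
    def __init__(self, instances=2, per_instance_cap=10_000):
        self.instances = instances                 # running controller processes
        self.per_instance_cap = per_instance_cap   # packet-in msgs/second each

    def rebalance(self, offered_load):
        # Add instances until the evenly shared load fits under the cap,
        # then return the per-instance share.
        while offered_load / self.instances > self.per_instance_cap:
            self.instances += 1
        return offered_load / self.instances

cluster = ControllerCluster()
share = cluster.rebalance(32_000)   # grows 2 -> 4 instances, 8,000 msgs/s each
```

With the assumed 10,000 msgs/s cap, an offered load of 32,000 packet-in msgs/s grows the cluster to four instances handling 8,000 msgs/s each.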
Each controller instance was configured to handle at most 10,000 packet-in packets/second. Once we increased the packet-in load to 32,000 packets/second, the number of controller instances increased automatically to four to handle the load. We verified that each controller instance was handling 8,000 packet-in packets per second without loss.

While the traffic load was running, we disabled one of the controller instances. The controller cluster detected the failure of one of the instances and instantiated a new controller instance. The packet-in load was again distributed equally between the controller instances.

Figure 3: Service Interruption Time per CoS

Rate Limiting

Current Quality of Service (QoS) deployments in service provider networks typically handle customer [...] traffic volume of 10 GByte for Low CoS was exceeded, and increased back at CIR when the customer provisioned additional data volume.
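The volume-capped behaviour described in the rate limiting test can be sketched as follows: traffic is admitted at the committed information rate (CIR) until the provisioned data volume is consumed, then throttled until the customer tops up. All names and the throttle rate below are illustrative assumptions:

```python
# Hedged sketch of volume-capped rate limiting. The class, the 1 Mbit/s
# throttle rate, and the accounting granularity are assumptions made
# for illustration only.

class VolumeCappedLimiter:
    def __init__(self, cir_mbps, volume_cap_gbyte, throttled_mbps=1):
        self.cir_mbps = cir_mbps                  # committed rate
        self.throttled_mbps = throttled_mbps      # rate once volume is spent
        self.remaining_bytes = volume_cap_gbyte * 10**9

    def current_rate_mbps(self):
        # Full CIR while volume remains; throttled once the cap is exceeded.
        return self.cir_mbps if self.remaining_bytes > 0 else self.throttled_mbps

    def account(self, nbytes):
        # Debit transmitted bytes from the provisioned volume.
        self.remaining_bytes -= nbytes

    def top_up(self, gbyte):
        # Customer provisions additional data volume; CIR is restored.
        self.remaining_bytes += gbyte * 10**9
```

For example, a Low-CoS profile with a 10 GByte volume cap runs at CIR until 10^10 bytes have been accounted, drops to the throttle rate, and returns to CIR after a top-up.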