About 20 years ago, improved network availability drove a sharp increase in the number of applications, and with it a need for more servers. As a result, data centers had to provision a large number of access ports. Cloud computing did not exist at the time, and private data centers were the norm. Each enterprise had to invest in its own infrastructure, and multi-tenancy was largely limited to service provider networks, since the 12-bit VLAN ID field allowed only 4094 usable VLANs within a data center.
As a result of the high number of servers in the data center, access switches needed more ports to connect them; these switches formed the access layer. They, in turn, were aggregated into a distribution/aggregation layer, typically at the end of each data center row. The distribution switches were usually modular switches with a large number of downstream ports so that they could aggregate many access switches.
The next design consideration was to connect all these distribution switches to a high-bandwidth, low-latency transport core with a high level of redundancy. The core layer's function was purely data transport with minimal routing overhead.
Switching functionality was limited to the access and distribution layers, with the distribution layer acting as the inter-VLAN gateway. The core layer did not participate in switching and focused only on routing. The rest of the network typically connected to the core or to a dedicated distribution switch pair. Multiple uplinks provided redundancy at every layer, and redundant device pairs with identical configurations were deployed in HA mode for fast failover, achieved using cross-links between the pair; Cisco's VSS was one popular example of such a configuration.
The 3-tier data center was designed mainly for North-South traffic flows, at a time when server virtualization was in a nascent phase. As virtualization became more commonplace, applications were distributed across multiple physical servers spread across the data center and, in some cases, across multiple data centers. This exposed the problem with the number of hops in the three-tier architecture. An East-West (server-to-server) traffic flow looked like this: Server1 -> Access1 -> Dist1 -> Core (which can be more than one hop) -> Dist2 -> Access2 -> Server2. That is a minimum of four hops between the source and destination access switches, as the short sketch below illustrates.
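To make the hop count concrete, here is a minimal sketch in plain Python that models the three-tier path above as a graph and counts the links between the two access switches with a breadth-first search. The device names are the illustrative ones from the flow above, and the core is modeled as a single hop, its minimum.

```python
from collections import deque

# Illustrative three-tier topology: servers hang off access switches,
# access uplinks to distribution, and distribution meets at the core.
links = [
    ("Server1", "Access1"),
    ("Access1", "Dist1"),
    ("Dist1", "Core"),      # the core itself can be more than one hop
    ("Core", "Dist2"),
    ("Dist2", "Access2"),
    ("Access2", "Server2"),
]

# Build an undirected adjacency map from the link list.
graph = {}
for a, b in links:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def hop_count(src, dst):
    """Breadth-first search returning the number of links on the shortest path."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, hops = queue.popleft()
        if node == dst:
            return hops
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return None

print(hop_count("Access1", "Access2"))  # -> 4, matching the text above
```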
The second, and one of the most important, factors was Spanning Tree. Managing so many STP domains was proving problematic, and because STP blocks redundant paths, not all of the available uplinks were actually carrying traffic. Provisioning newer applications required agility, and network teams could not cope with the dynamic nature of applications while working within rigid Layer 2 designs.
Lastly, VM mobility was becoming increasingly necessary for DR and HA scenarios involving critical workloads. This was difficult to implement in the 3-tier model, because the destination data center's L2 and L3 design meant the VMs had to be readdressed. This broke applications and required reconfiguration at both the user and administrator levels.
In addition, the development of SoC (system-on-chip) capabilities by the network OEMs meant that near-line-rate routing lookups became possible. There was no longer any real need to restrict routing to the core and distribution layers; it could be extended to the access layer, alleviating the issues with Spanning Tree.
These problems with 3-tier data centers led OEMs to devise a simpler architecture, built on the principles described below.
The leaf-spine network is an implementation of the CLOS architecture.
In the CLOS fabric, there are new switch roles that we need to understand. The leaf switch is another name for the access switch; both the ingress and egress switches in the CLOS fabric are leaf switches.
In addition, we have the 'spine' role, which is another name for the crossbar stage of a 3-stage CLOS fabric. Because all leaf switches connect to this layer, it forms the spine of the fabric, hence the name.
Each leaf must connect to every spine in the network and use ECMP to utilize all available paths for forwarding. The spine switches need not connect to each other; they serve only to provide connectivity between the leaf switches, as sketched below.
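The following plain-Python sketch, using hypothetical switch names, captures these wiring rules: every leaf connects to every spine, spines are not interconnected, so any leaf-to-leaf path is exactly two hops, and an ECMP-style 5-tuple hash spreads flows across all the spines.

```python
import hashlib

LEAVES = [f"leaf{i}" for i in range(1, 5)]   # 4 leaf switches (hypothetical)
SPINES = [f"spine{i}" for i in range(1, 3)]  # 2 spine switches (hypothetical)

# Wiring rule: every leaf connects to every spine; spines do not
# connect to each other, and leaves do not connect to each other.
links = {(leaf, spine) for leaf in LEAVES for spine in SPINES}

def paths(src_leaf, dst_leaf):
    """All paths between two leaves: always leaf -> spine -> leaf, 2 hops."""
    return [(src_leaf, spine, dst_leaf)
            for spine in SPINES
            if (src_leaf, spine) in links and (dst_leaf, spine) in links]

def pick_path(src_leaf, dst_leaf, five_tuple):
    """ECMP-style selection: hash the flow's 5-tuple so each flow sticks to
    one spine while different flows spread across all available spines."""
    candidates = paths(src_leaf, dst_leaf)
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return candidates[digest[0] % len(candidates)]

flow = ("10.0.1.10", "10.0.2.20", 6, 49152, 443)  # src, dst, proto, sport, dport
print(paths("leaf1", "leaf3"))        # every candidate path has exactly 2 hops
print(pick_path("leaf1", "leaf3", flow))
```

Adding a spine switch to this fabric adds one more equal-cost path between every pair of leaves, which is why leaf-spine capacity scales horizontally.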
While it is possible to connect external networks directly to spine switches, the recommended practice is to use dedicated 'border-leaf' switches, which form the border of the fabric and connect it to external networks.
Advantages of Using a Leaf-Spine Architecture:
- Predictable performance: any two leaf switches are always exactly two hops apart, so latency is uniform across the fabric.
- Full link utilization: ECMP spreads traffic across every available uplink instead of leaving links blocked by Spanning Tree.
- Horizontal scalability: capacity grows by adding leaf switches for ports or spine switches for bandwidth, without redesigning the fabric.
- Simpler operations: routing can extend all the way to the access layer, shrinking the L2 and STP domains.
Rahi can help enterprises identify and deploy the latest leaf-spine solutions available on the market from a multitude of vendors. Rahi has extensive experience deploying highly scalable data center networks across the globe, with experienced professional services and managed services teams for Day 1 configuration and Day 2 support.
If you want to learn more, please read part two.
Let our experts design, develop, deploy, and manage your requirements while you focus on what's important for your business.