
100G at the Enterprise Edge: When Scaling Becomes the Bottleneck

Nov 29, 2022

The enterprise edge is a critical telecom location for any enterprise, as it hosts services closer to enterprise users. It must be architected with scalability in mind.

However, many enterprises and network operators go wrong when architecting a future-proof enterprise edge DC.

Designing an edge DC as if it were a central DC is inefficient. The network edge is a particular environment that requires a fresh approach to its architecture.

In this blog, we discuss network capacity for the edge DC and how to make a future-proof edge DC design.

But first, a word about the drivers triggering the growth of edge DCs:

Firstly, 5G is expected to increase both the number of connected devices and bandwidth demand. eMBB services drive up data consumption, while uRLLC services enable low-latency applications that must be hosted closer to the edge.

Secondly, not only do the applications need to be hosted closer to the user, but the applications have become far more complex today. For example, as companies expedite their journey to digital transformation, they need to take advantage of AI/ML and big data to tap into new and innovative services. These new applications are complex, bandwidth-heavy, and far more dynamic, thus challenging the scalability of DCs.

Lastly, internet traffic is rising even in the post-COVID era. Data usage increased by 25-35% during the COVID period (https://stlpartners.com/articles/edge-computing/3-reasons-why-edge-will-change-video-streaming/), but in the post-COVID era the upward trend persists, as the continued remote work culture exerts pressure on bandwidth.

The growth in edge DCs

The increase in data consumption and the need for low-latency services require a re-look at the network architecture. Hosting services at a central location not only increases latency for the user but also necessitates the use of expensive backhaul.

Therefore, the trend is towards decentralization of DCs, i.e., bringing the DCs closer to the user at the edge location. 

This DC can be located anywhere closer to the enterprise users, as the following diagram from LF Edge (https://www.lfedge.org/) shows. It can be hosted within the enterprise or at the edge of the enterprise within a service provider’s premises.

Fig: Edge Locations. Ref: LF Edge

Additionally, even when hosted inside the enterprise, the DC can be managed either by the enterprise or by a service provider.

The edge brings value, but not without challenges.

There are many benefits to establishing edge DCs closer to the user, such as:

  • Low latency services

Thanks to new latency-critical services in 5G, the edge offers an attractive location for the placement of services. Services such as AR, VR, and autonomous driving require ultra-low latency between the user and the application servers. Edge DCs can facilitate such services as they are closer to the user.

  • Bandwidth Offload

While latency-tolerant services can run in the central cloud instead of the edge cloud, carrying that traffic from the edge to the central DC requires additional transport bandwidth. This incurs extra transport cost, which can be avoided if bandwidth-intensive services run at the edge instead of the central DC. The edge is therefore a preferable location for CDNs, video streaming servers, etc.

With all the benefits of the edge cloud, it is also essential to mention some of its challenges:

  • Limitation of Space

Edge DCs are often limited in space. They are typically established in shelters or small buildings in remote areas closer to the user, and they are deployed in large numbers, often hundreds or thousands. This makes it costly to build generously sized facilities in every case. Owing to limited real estate, edge DCs need smart-sized equipment.

  • Limitation of Power

The limitation of space is not the only concern. DCs are power-hungry too, and power is usually very limited in edge DCs. This necessitates using a platform with integrated “all-in-one” features that optimizes power consumption in such locations.


Edge DCs need to be scalable even with limited power and space. How can 100G help?

Even though edge DCs are limited in power and space, they need to be designed with scalability in mind.

With the growth in data mentioned earlier in the blog, scaling an edge DC can become a serious concern.

There are a couple of drivers for why capacity can be a bottleneck in edge DCs.

First, the edge DC is where multiple diverse services are hosted. For example, while the central DC usually hosts the core services only (mobile core/fixed core), the edge DC is expected to host the core and RAN/Access services owing to its proximity to the users, thus putting an extra burden on the bandwidth.

Secondly, East-West traffic is on the rise, i.e., more traffic is processed locally in the DC than leaves it.

East-West traffic is processed locally in the DC: it flows between different application servers and does not go out to the internet. North-South traffic, by contrast, is the data exchanged between a user and the outside world.
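The distinction can be sketched in a few lines of code. This toy classifier assumes the DC’s server subnet is known in advance (the `10.0.0.0/16` address space here is a hypothetical example); real DCs measure this with switch telemetry, not application code:

```python
# Toy classifier for East-West vs. North-South traffic.
# DC_SUBNET is a hypothetical address space for illustration only.
import ipaddress

DC_SUBNET = ipaddress.ip_network("10.0.0.0/16")

def traffic_direction(src: str, dst: str) -> str:
    """East-West if both endpoints sit inside the DC, otherwise North-South."""
    src_in = ipaddress.ip_address(src) in DC_SUBNET
    dst_in = ipaddress.ip_address(dst) in DC_SUBNET
    return "east-west" if src_in and dst_in else "north-south"

print(traffic_direction("10.0.1.5", "10.0.2.9"))  # server-to-server: east-west
print(traffic_direction("10.0.1.5", "8.8.8.8"))   # server-to-internet: north-south
```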

East-West vs. North-South Traffic

Fig: East-West vs. North-South Traffic- Ref  HYPERLINK “https://www.oreilly.com/library/view/qos-enabled-networks-2nd/9781119109105/c10.xhtml”O HYPERLINK “https://www.oreilly.com/library/view/qos-enabled-networks-2nd/9781119109105/c10.xhtml”REILLY

 This means a lot of traffic moves within the DC but not outside the DC.

As more and more services are hosted at the edge, East-West traffic becomes a concern not just in the central DC but also in the edge DC.

This traffic inside the DC means that the inter-switch links (switch to switch) need to be highly scalable, as these are the links that can bottleneck East-West traffic.

We recommend considering higher-capacity interfaces for inter-switch links for better power and space utilization, as we show in the discussion below.

Inter-switch links are not connected to the servers directly. They are therefore not dependent on the capacity of server uplinks and can be scaled at a faster pace than the server uplinks.

For example, server uplinks today are typically 10G to 25G. However, inter-switch links can be much higher, for example 100G, for a more scalable DC.

For this purpose, it is recommended to design the inter-switch links with 100G to sustain the bandwidth growth in the future easily.
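As a rough illustration of this sizing logic, the sketch below computes how many uplink ports an edge switch needs for a target oversubscription ratio. The figures (24 servers at 25G, a 3:1 ratio) are hypothetical examples, not recommendations from this blog:

```python
# Illustrative inter-switch (uplink) port sizing for an edge switch.
# All figures are hypothetical examples, not vendor specifications.
import math

def uplinks_needed(servers: int, server_gbps: int,
                   uplink_gbps: int, oversubscription: float) -> int:
    """Number of uplink ports needed to hit a target oversubscription ratio."""
    downlink_total = servers * server_gbps            # total server-facing Gbps
    required_uplink = downlink_total / oversubscription
    return math.ceil(required_uplink / uplink_gbps)   # round up to whole ports

# 24 servers at 25G each (600G downlink), targeting 3:1 oversubscription
print(uplinks_needed(24, 25, 100, 3))  # with 100G uplinks: 2 ports
print(uplinks_needed(24, 25, 40, 3))   # with 40G uplinks: 5 ports
```

The same downlink load consumes fewer, denser ports at 100G, which is exactly the space argument made above.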


Why is 100G better at the edge? 

Let’s compare two switch configurations at the edge.

For comparison purposes, we assume the switch ASIC is the same in both, providing a throughput of 1 Tbps.

However, the switch vendor offers two configurations: one with 100G per port and the other with 40G per port, keeping the switch ASIC the same.

Customers can use either of them for inter-switch links in the edge DC.

The number of ports is the same in both cases. The initial cost of the 100G configuration is higher, but price does not scale linearly with port capacity, so 100G can deliver a better cost per bit.

Similarly, power consumption does not scale linearly with port speed, resulting in much better power consumption per bit for 100G than for 40G.

It is clear that the 100G-per-port configuration wins on ROI in all cases and provides a more future-proof design.



                              100G port configuration   40G port configuration
Switch ASIC                   1 Tbps                    1 Tbps
Number of ports               10 x 100G ports           10 x 40G ports
Size                          1 RU                      1 RU
Port configuration            100G per port             40G per port
Power consumption (Watt/bit)  Lower                     Higher
Bandwidth throughput per RU   Higher                    Lower
Cost per bit                  Lower                     Higher

  Table: Comparison of 100G per port vs. 40G per port for Edge DC
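To make the per-bit rows of the table concrete, here is a small sketch. The prices and power draws are hypothetical placeholder figures (the blog gives no numbers; real values vary by vendor), chosen only to show how a higher absolute price can still yield a lower cost per bit:

```python
# Illustrative cost-per-bit and power-per-bit comparison for the two
# configurations above. Prices and wattages are hypothetical examples.

def per_gbit_metrics(ports: int, gbps_per_port: int,
                     price_usd: float, power_watts: float) -> dict:
    throughput = ports * gbps_per_port        # total Gbps per RU
    return {
        "throughput_gbps": throughput,
        "usd_per_gbps": price_usd / throughput,
        "watts_per_gbps": power_watts / throughput,
    }

# Hypothetical: the 100G box costs 1.5x and draws 1.3x the power of the 40G box,
# yet offers 2.5x the throughput from the same 1 Tbps-class ASIC.
cfg_100g = per_gbit_metrics(10, 100, price_usd=30_000, power_watts=650)
cfg_40g  = per_gbit_metrics(10, 40,  price_usd=20_000, power_watts=500)

print(cfg_100g)  # cheaper and cooler per Gbps despite the higher sticker price
print(cfg_40g)
```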



Lanner offers an all-in-one white box for the edge with multiple 100G options

Lanner’s white box HTCA platform (HTCA-6600, HTCA-6400, HTCA-6200; https://www.lannerinc.com/products/telecom-datacenter-appliances/hybridtca-platforms) is an all-in-one MEC/edge computing platform consisting of various compute blades and switch blades based on programmable data planes such as the Intel Tofino ASIC (e.g., HLM-1101, https://lannerinc.com/products/telecom-datacenter-appliances/modules-and-blades/hlm-1101). The platforms come in form factors ranging from 2U to 6U, suitable for compact deployment at the edge.

Lanner’s HTCA platform offers multiple 100 GbE switching blades, such as HLM-1020 (https://www.lannerinc.com/products/telecom-datacenter-appliances/modules-and-blades/hlm-1020) and HLM-1030 (https://www.lannerinc.com/products/telecom-datacenter-appliances/modules-and-blades/hlm-1030). The HLM-1020 uses Broadcom’s BCM56860, a 1.2 Tbps switch fabric, with 2 x 100G CXP and 20 x 10G SFP+ interfaces, while the HLM-1030 uses Broadcom’s 3.2 Tbps BCM56960 switch fabric, with 6 x 100G QSFP28, 4 x 40G QSFP+, and 16 x 10G SFP+ interfaces.


Lanner: A leading manufacturer of white box solutions!

Lanner Electronics is a leading manufacturer of white box and uCPE platforms. It provides compact white box solutions and uCPEs for a wide range of applications like Edge cloud/MEC, Open RAN, NFV, SDN, SD-WAN, network orchestration, and network slicing.

Lanner operates in the US through its subsidiaries Lanner Electronics USA, Inc. and Whitebox Solutions (whiteboxsolution.com).
