The extraordinary growth of Cloud Computing, the Internet of Things and Artificial Intelligence in the last decade has had a dramatic impact on data center scale, design and connectivity.

Cloud computing services are the fastest growing among all large systems activities worldwide, and have led to the development of new data center architectures, with Hyperscale data centers being built in clusters and multiple Co-Location data centers being built in campus environments. These multi-data center designs require very rich interconnections between and within the buildings. Accommodating these new designs into existing networks has led to severe cable management issues and network degradation over time. Existing ducts are full, and demand for increasing connectivity and bandwidth shows no signs of slowing.

At AFL Hyperscale, we develop and deliver advanced, scalable network infrastructure solutions to facilitate the ultra-high fiber counts, bandwidth and connectivity required today and in the future, for Hyperscale and Co-Location data centers alike.

Our ground-breaking Ultra-High Fiber Count (UHFC) Solution is the answer to your evolving data center network, providing interconnection between and within data center buildings on a scale never seen before.


In developing the UHFC Solution, we have adopted a series of connectivity reference models including large Co-Location (Green Field), large Co-Location (Brown Field), Hyperscale and Large Enterprise/Multi-Hall scenarios. These models include campus or cluster cabling, internal Data Center trunk cabling and connections from cable entry all the way to the customer space or edge network equipment.

In this document, we look at the large Co-Location (Green Field) reference model, adopting a simplified version of the ISO/IEC 24764 standard.

ENI - External Network Interface | MD - Main Distribution
ZD - Zone Distribution | LDP - Local Distribution Point


In our large Co-Location (Green Field) Reference Model, three Co-Location Data Centers are located within a campus and connected with two or more diversely routed cables.

Cables run in ducts from External Network Interface (ENI) rooms to other ENI rooms, with fiber counts often in excess of 1000 and link lengths running from 100 meters to several kilometers.


In our Co-Location reference model, the network design adopted provides maximum flexibility of connectivity between all spaces within the data center building and also to the other data centers on the campus. The design can be split into two discrete sections – the backbone and the horizontal cabling.

In the horizontal cabling, all of the equipment racks are cabled back to the Main Distribution frames using medium fiber count cables.

In the backbone cabling all of the MDs on the campus are interconnected, using very high fiber count cables. This allows a client in any space on the campus to be connected to a service provider located anywhere on the campus, scaling the campus into a single Hyperscale space.
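As a rough sketch of how this meshed backbone scales, the snippet below (an illustrative assumption, not part of the AFL Hyperscale design documentation) counts the backbone routes needed when every MD on the campus is linked directly to every other MD:

```python
from itertools import combinations

def backbone_routes(md_names):
    """Full-mesh interconnect: one backbone cable route per pair of MDs,
    i.e. n * (n - 1) / 2 routes for n MDs."""
    return list(combinations(md_names, 2))

# Hypothetical campus: one MD per data center building (names are illustrative).
mds = ["DC1-MD", "DC2-MD", "DC3-MD"]
print(len(backbone_routes(mds)))  # 3 routes for a three-building campus
```

For the three-building campus in this reference model only three backbone routes are needed, but the pair count grows quadratically as buildings are added, which is why each route is provisioned with very high fiber count cable rather than many parallel low-count cables.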



The MD connections to the ENI, to the other MD within the data hall and to the other MDs within the data center building are often referred to as the data center backbone. These are typically linked with high-density cables of 144 to 864 fibers. Because these high fiber count cables are routed through walls and between floors, they are usually run as bare cable and fusion spliced at the termination points. Alternatively, they can be installed as single-end pre-terminated cable, with the free end pulled (or blown) from the source to the destination.


The MD connections out to the client space and equipment racks are referred to as the data center horizontal cabling. In our Co-Location model, these links need to be provisioned to offer the maximum flexibility for deploying connectivity to the client space as and when it is needed. The architecture needs to accommodate very low density fiber deployments as well as very high fiber count deployments in the same space.


This diagram takes the large Co-Location (Green Field) reference model (a simplified version of the ISO/IEC 24764 standard) and shows how the AFL Hyperscale UHFC Solution can maximize the density and minimize the footprint of your data center space.


1 - 3456F Mass Fusion Splice Wall Cabinet
2 - 9U Ribbon Mass Splice and Patch
3 - UHD 2U Chassis
4 - Octagonal Junction Box
5 - UHD MPO to 3 LC/SC Module
6 - UHD 1U Chassis


SpiderWeb Ribbon® is a bonded fiber design that allows for both highly efficient ribbon termination and legacy discrete fiber termination. Compact, ultra-high fiber count cables fabricated using SWR fiber are central to the UHFC Solution.

Twelve fibers are intermittently bonded together using a resin bond. The intermittent nature of the bond allows the ribbon to be bunched and collapsed like a bundle of loose fibers, so it can act either as a traditional ribbon for mass fusion splicing or be broken out into individual fibers for single fiber handling.

SWR® technology significantly reduces cable diameter and weight, and is used in ultra-high fiber count indoor and outdoor cable types, resulting in lower installation costs and major improvements in the utilization of cable pathways and duct space.



Our comprehensive range of ultra-high fiber count outdoor and indoor/outdoor SWR® fiber cables enables inter-building fiber connectivity on an unprecedented scale. Coupled with this, our building entrance solutions provide the ultimate transition between Outside Plant and Inside Plant networks and cabling infrastructure.

In this scenario, ultra-high fiber count external grade Polyethylene (PE) SWR® cables are converted to high fiber count indoor rated SWR® cables in the ENI. This is a permanent connection, so it is best suited to fusion splicing.

In our example, we have two 1,728 fiber WTC SWR® cables coming into each ENI and four 864 fiber indoor WTC SWR® cables going out to the Main Distribution area. By taking full advantage of SWR® technology, the ultra-high fiber count cables are mass fusion spliced using Fujikura's 70R Ribbon Fusion Splicer.

The splices are then managed and stored in specially designed mass fusion splice trays, each of which holds up to 12 splices (144 fibers).

The splice trays are then either mounted in a bespoke 3,456f wall mount enclosure or in a traditional splicing frame, depending on the available space and customer preference.
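The tray and enclosure counts above follow directly from the fiber arithmetic. The sketch below (illustrative Python, using only figures quoted in this document) checks that the ENI fiber counts balance and works out how many 144 fiber trays a 3,456f enclosure needs:

```python
import math

FIBERS_PER_RIBBON = 12   # SWR ribbons are 12-fiber
SPLICES_PER_TRAY = 12    # each splice tray holds 12 ribbon splices (144 fibers)

def splice_trays_needed(total_fibers):
    """Number of mass fusion splice trays to manage a given fiber count."""
    ribbon_splices = math.ceil(total_fibers / FIBERS_PER_RIBBON)
    return math.ceil(ribbon_splices / SPLICES_PER_TRAY)

# ENI example from the text: two 1,728 fiber cables in, four 864 fiber cables out.
fibers_in = 2 * 1728   # 3,456 fibers entering the ENI
fibers_out = 4 * 864   # 3,456 fibers leaving -- the counts balance
print(fibers_in == fibers_out)         # True
print(splice_trays_needed(fibers_in))  # 24 trays fill a 3,456f enclosure
```

Splicing 3,456 fibers one at a time would mean 3,456 splices; handling them as 12-fiber ribbons cuts that to 288 mass fusion splices across 24 trays, which is what makes the wall mount enclosure practical.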

The 864 fiber indoor WTC cables are then routed to the Main Distribution area where they are mass fusion spliced into specially designed 9U 864 fiber splice and patch housings.

This high-density splice and patch housing is a bespoke AFL Hyperscale design, created to meet the needs of terminating an ultra-high fiber count SWR® cable.


Data Center Backbone Options

AFL Hyperscale has a selection of unique products that allow you to terminate the data center backbone in a number of different ways, in the manner most suited to your MD area.

Pre-Terminated Horizontal Cabling
