What do AR/VR, cloud gaming, smart cities, 5G, autonomous vehicles, healthcare sensors, surveillance and facial recognition all have in common? The need for low-latency connectivity enabled by networks architected with edge computing.

For some service providers, edge computing trials have already started. For others, edge computing plans won’t be formulated for a few years. But whether they’ve already devised their edge compute strategy or haven’t yet begun, the first question they need to ask themselves is: “How do we define edge computing?”

At Broadband Success Partners, we have done just that. Since the start of the year, we’ve asked this question and seven others of more than 20 network engineering and commercial services executives at Tier 1 and Tier 2-3 MSOs. Here are their answers to the initial questions and our insights.

  1. What is edge computing?
    There’s no single definition. The surveyed cable operators are deploying edge computing (or expect to) in one of three ways. According to 43% of those interviewed, “transforming headends and hub sites to mini data centers, or Headend Re-architected as a Data Center (HERD)” is what best describes their edge compute initiatives. A third of the executives cited “Distributing compute and virtualization via Flexible MAC Architecture (FMA).” The balance of those interviewed, or 24%, approach it as building new edge sites with compute and storage closer to end customers.

These varying views are due, in large part, to each individual's preferred edge computing use cases. For example, those whose primary applications are less latency-sensitive, such as video caching or SD-WAN, skew toward a less distributed compute architecture. In contrast, those thinking in terms of VR/AR, gaming and/or autonomous vehicles, with little to no tolerance for latency, gravitate toward a more distributed configuration.

  2. Where is edge computing?
    Naturally, we next asked where the edge equipment will be located. With many noting that their goal is to get as close to the edge of the network as possible, it's not surprising that "headends" (33%) and "hub sites" (29%) were the two top answers.

An interesting split emerges when you view the results by company size. Tier 1 cable operators are more likely to place edge computing hardware in hub sites rather than headends. The opposite holds true for the Tier 2-3s. This is due, in part, to the cost of scaling up and out to place edge devices in many hub sites versus fewer headends. This expense needs to be considered relative to the allowable latency. “Our decision depends on the tipping point of finances and latency tolerance,” as one executive explained.
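The tipping point that executive describes can be sketched as a toy model: pick the cheapest deployment tier that still meets the application's latency budget. All figures below are hypothetical illustrations, not survey data.

```python
# Toy model of the headend-vs-hub-site tradeoff.
# Site counts, costs, and latencies are invented for illustration only.

def pick_site(options, latency_budget_ms):
    """Choose the cheapest deployment option that meets the latency budget."""
    feasible = [o for o in options if o["latency_ms"] <= latency_budget_ms]
    if not feasible:
        return None  # nothing is fast enough; the architecture must change
    return min(feasible, key=lambda o: o["site_count"] * o["cost_per_site"])

options = [
    # Fewer, larger facilities: cheaper overall to equip, but farther from users.
    {"name": "headends", "site_count": 5, "cost_per_site": 400_000, "latency_ms": 20},
    # Many smaller facilities: closer to users, but more sites to build out.
    {"name": "hub_sites", "site_count": 25, "cost_per_site": 150_000, "latency_ms": 8},
]

# A latency-tolerant workload (e.g., video caching) can stay in headends...
print(pick_site(options, latency_budget_ms=25)["name"])  # headends
# ...while a latency-critical one (e.g., cloud gaming) forces hub sites.
print(pick_site(options, latency_budget_ms=10)["name"])  # hub_sites
```

With these made-up numbers, equipping 25 hub sites costs nearly twice as much as 5 headends, so the hub-site tier is chosen only when the latency budget demands it, which is the tipping point the interviews describe.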

Deployment also depends on where the equipment can more easily reside. As one Tier 2-3 executive noted, "the existing headend facilities have space and power to accommodate edge hardware." The density of the network could also be a decision variable. In a rural system where each headend serves relatively few customers, the edge hardware could sit in the headend itself, whereas a suburban system might favor a hub site serving more customers.

The hub site versus headend decision appears to be fluid. For example, one interviewee noted that the two types of locations are interchangeable. A few others said they're starting with less costly headend deployments and will then migrate outward to hub sites at a later date.

  3. What’s driving your company to edge computing?
    Over half of the executives noted either “improved customer experience” or “the enablement of new revenue streams” as the most important driver for edge computing in their company. Among the 29% who indicated “improved customer experience,” here’s the rationale some gave:
  • Edge compute implementation must serve the customer’s need
  • Customer satisfaction with lower latency for gaming and video optimization is important

As to why 24% of those interviewed noted the “enablement of new revenue streams” as the top driver, here are a few of the reasons:

  • Due to financial priorities; tasked with new revenue growth
  • For HFC to go to 10 Gbit/s, we need a distributed architecture; this is the only way to achieve it

The two top drivers align nicely: new revenue streams can only be created if customers are satisfied with the new services.

There you have it: the answers to the first three questions in this new research. The rest of the story will be shared in my next blog post.

David Strauss, Broadband Success Partners