Cover Story: Inside Comcast’s Downingtown

Downingtown, Pa.— About an hour from the gleaming 58-story Comcast Center in the heart of downtown Philadelphia, there’s a far-less-spectacular warehouse building owned by the same company in this suburban borough.

The nondescript building stretches out over an area larger than a football field and houses a labyrinth of laboratories, test rooms and troubleshooting areas designed to serve as Comcast’s new “integration epicenter.” Despite its plain outside appearance, it represents nothing less than the future of the largest cable operator in the United States and, by extension, the entire cable industry.

As the tech world becomes more splintered, it has become increasingly difficult for the vast array of equipment needed to run a cable operation to “talk” to one another. Downingtown represents something akin to a 21st-century Rosetta Stone, through which Comcast can untangle software knots and allow seamless communication among all of its disparate equipment. It is the last stop for new Comcast products and services before they reach subscribers’ homes: the final dragnet to catch and purge software bugs.

“We have [additional] product-engineering labs that develop and integrate and work out bugs,” said Comcast senior vice president of testing and operations Charlotte Field. “When [those products] get to Downingtown, we put them on this end-to-end network, to see how they work on our total network — our converged network.”

Its official name is “the Comcast end-to-end test and integration center,” but most people call it by its location. Downingtown. It’s an exact replica of the company’s national network, with links to companion labs in Denver, Bishop’s Gate, N.J., and Moorestown, N.J. All of the largest cable operators have similar operations in some form or another.

‘BEES IN THE DARK’



The lab, which ran its first tests last fall, helps the cable giant avoid massive technical glitches. Engineers here can verify, for example, that a software update for a set-top box actually fixes its target problem instead of corrupting an earlier software release.
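
In rough Python form, that regression check boils down to something like the sketch below; the firmware images and feature names are hypothetical stand-ins, not Comcast’s actual test tooling.

```python
# A hedged illustration of the regression check described above: an update
# must fix its target bug without breaking features that already worked.
# The firmware images and feature probes are invented for this example.

def features_working(firmware: dict) -> set[str]:
    """Return the features a (mock) firmware image reports as healthy."""
    return {name for name, ok in firmware["features"].items() if ok}

old = {"version": "2.3", "features": {"guide": True, "vod": True, "dvr": False}}
new = {"version": "2.4", "features": {"guide": True, "vod": True, "dvr": True}}

assert "dvr" in features_working(new)                  # the target bug is fixed
assert features_working(old) <= features_working(new)  # nothing old broke
print("update accepted")
```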

Currently, the lab is clearing the bugs out of three applications: Caller ID on TV, which interrupts a program to display the name and number of an incoming call; a feature that helps consumers switch from the SD version of a video stream to its HD counterpart; and Tru2way TVs and set-top boxes, which will allow interactive ads and are due to reach consumers this fall. “Several” other tests are also underway, Comcast executives said.

Ultimately, the lab will be the final stop for potentially dozens of services and applications riding on Comcast’s video, broadband data and voice plant.

“It’s all about testing to make sure anything new can be provisioned and billed for, and to make sure we have the right tools to understand what kind of problems can arise,” Field said.

Comcast wouldn’t say how much it spent to build the Downingtown dragnet, but according to one person familiar with the costs, the price tag is estimated at more, and perhaps substantially more, than $25 million.

More than anything, cable’s need for such facilities reflects how critical software has become for the information technology systems needed to support new services. Finding problems in a world of software, as one cable technologist likes to quip, is like getting stung by bees in the dark: You know they’re there, but you can’t see them.

The old rule of thumb that cable’s capital spending runs 80% hardware and 20% software is starting to invert. That’s largely because cable technology was once primarily tangible: an amplifier, a roll of coaxial cable, an F-fitting for the end of a piece of cable.

Those physical artifacts are still around, but the far more complicated part now is software. Take the set-top box, being redesigned as the new Tru2way devices coming to retail later this year: Implementing it is a complicated weave of firmware, software stacks, operating systems, middleware and applications. And it’s all invisible.

And that’s the reason for Downingtown — to be the secret decoder ring that brings software problems into the visible domain.

SOFTENING THE NETWORK?



As recently as two years ago, if you asked a cable CTO about the hardest challenge facing a system, the phrase “hardening the network” was high on the list. “Hardening the network” meant getting serious about best practices in craftsmanship, from splicing individual strands of fiber together to crimping on F-connectors. It meant developing consistency around tests and measurements, to make sure the right signal levels existed for the best possible pictures, the fastest data speeds and the best sound quality. Much of it was driven by the addition of voice services, which must support 911 emergency calls.

To “harden the network” was to develop policies for spare equipment and redundancy, so that if a link went down on the west end of town, a mechanism was there to quickly open up another lane to subscribing households. It meant paying closer attention to “telemetry,” which also goes by “network monitoring.” (For years, network monitoring was among the first things to get sliced during budgeting negotiations. No longer.)

These “plant-hardening” techniques became more critical as the cable industry, once made up of literally hundreds of separate operators over its 60-year history, consolidated into a handful of giants. The way Continental Cable did things was different from the way TeleCable did things, which was similar to the way Cox Communications did things, but different from how Adelphia Communications or Tele-Communications Inc. did them.

Reality, in cable technology, is this: Every system is at least a little different from the next. From amplifier spacing to bandwidth maximums to optical layouts to headend components to conditional access and encryption, it’s entirely plausible that no one cable system is exactly like another.

Even the “500-home node” doesn’t necessarily serve precisely 125 homes to the north, south, east and west of its location, because neighborhoods and towns just didn’t evolve that way. One side of town grows faster than the other, or uses more on-demand services than the other.

Because these software and application differences matter so much, today’s cable technologists are now talking about “softening the network.”

At the recent SCTE Cable-Tec Expo in Philadelphia, Comcast executive vice president of national engineering and technical operations John Schanz used the term on an early-morning breakfast panel. A few hours later, his colleague, chief technology officer Tony Werner, echoed the idea in a different panel discussion.

“Software has always been an important part of the business — but it becomes much more relevant now, in terms of testing, uniformity on requirements, openness,” Schanz said at the breakfast. “The network is softening as part of the evolution toward merging multiple products and experiences onto a single network.”

Comcast is not alone in the pursuit of an end-to-end integration lab. Time Warner Cable operates one in Charlotte, N.C. In Atlanta, Cox links its interoperability tests with a gating system: suppliers must get through each gate before advancing to the next. Not all of the gates are technical. The earliest Cox gates determine whether a product should even be on its plant, weighing business models and product viability.

CABLE ANATOMY 101



The physical anatomy of a cable system goes something like this: A national fiber-optic network links into regional fiber rings, which encircle cities and towns. The rings connect to headends, and headends to distribution hubs. Hubs connect over fiber to nodes; nodes connect over coaxial cable to homes.
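
Expressed as a data structure, that hierarchy looks something like the Python sketch below; the class and field names are hypothetical, chosen only to mirror the description above, not drawn from Comcast’s systems.

```python
# A minimal sketch of the plant hierarchy: headend -> hubs -> nodes -> homes.
from dataclasses import dataclass, field

@dataclass
class Node:
    """A fiber node feeding homes over coaxial cable."""
    node_id: str
    homes_passed: int  # roughly 500 in the classic design, rarely exact

@dataclass
class Hub:
    """A distribution hub feeding nodes over fiber."""
    hub_id: str
    nodes: list[Node] = field(default_factory=list)

@dataclass
class Headend:
    """A headend, hanging off a regional ring, feeding distribution hubs."""
    headend_id: str
    hubs: list[Hub] = field(default_factory=list)

def homes_served(headend: Headend) -> int:
    """Total homes passed downstream of a single headend."""
    return sum(node.homes_passed for hub in headend.hubs for node in hub.nodes)

# Example: one headend, one hub, two slightly uneven nodes.
he = Headend("phl-01", hubs=[Hub("hub-a", nodes=[Node("n1", 480), Node("n2", 515)])])
print(homes_served(he))  # -> 995
```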

The nervous system of a contemporary cable system, traditionally called “the back office” or “the billing system,” is what’s different now. Nowadays, it’s called “IT” (information technology), and it comprises all the software necessary to sell various services to each individual customer: Say a customer wants caller ID on the video service and needs an 8-Megabit package for her data service. All of that requires “provisioning” the customer’s devices and services, which means linking into the systems that can send the bill at the end of the month.
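
As a rough illustration of that provisioning-and-billing linkage, consider the sketch below; the service names, monthly rates and interfaces are invented for the example and imply nothing about Comcast’s real back office.

```python
# A hedged sketch of provisioning tied to billing, per the description above.
from dataclasses import dataclass, field

CATALOG = {
    "caller_id_on_tv": 0.00,  # assumed to be a bundled, no-charge feature
    "data_8mbps": 42.95,      # invented rate for an 8-Megabit data tier
}

@dataclass
class Account:
    account_id: str
    services: set[str] = field(default_factory=set)

def provision(account: Account, service: str) -> None:
    """Enable a service for the customer and register it for billing."""
    if service not in CATALOG:
        raise ValueError(f"unknown service: {service}")
    account.services.add(service)  # stand-in for configuring actual devices
    # A real back office would also notify the IT systems here, so the new
    # line item lands on next month's bill.

def monthly_bill(account: Account) -> float:
    """Sum the monthly charges for everything provisioned on the account."""
    return sum(CATALOG[s] for s in account.services)

acct = Account("12345")
provision(acct, "caller_id_on_tv")
provision(acct, "data_8mbps")
print(f"${monthly_bill(acct):.2f}")  # -> $42.95
```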

In some ways, Downingtown pales in comparison to its companion lab in Denver, nestled near the Rockies with an array of 10-meter dishes outside and an extensive video and production orientation inside. Comcast absorbed that facility as part of its purchase of TCI.

As the former “Headend in the Sky,” or HITS, facility, the Comcast Media Center remains the breeding ground and physical launching pad for new video products, like its recently announced “Axis” program to assist software developers wanting to write applications that will run on Comcast’s Tru2way platforms. The CMC will continue to provide mission-critical uplink services for broadcast and on-demand video, and will pave the company’s way toward advanced video compression, like MPEG-4.

But Downingtown is as different from the Comcast Media Center as suburban Philadelphia is from Denver. The Downingtown center is more about the industrywide shift to software and applications that run on a “converged” network, meaning not within the traditional, isolated “silos” of voice gear, video gear and data gear. (Staffers have already shortened how they talk about the multiplatform, silo-busting approach: “cross-plat.”)

Inside, the end-to-end Downingtown lab is a combination of office space, used as application labs, and a 15,000-square-foot data center. The application labs are currently used by 50 Comcast employees, as well as by any technology suppliers wanting to make sure their gear will work on Comcast’s converged plant.

“We built it so that vendors can come in to do early interoperability testing, to isolate problems they may not see in their test facilities — but would in ours,” said Field.

Rack upon rack of gear lines the vast building, like some futuristic department store. There’s a training room, and a legacy-testing lab, important so that new applications don’t crash devices already installed in people’s homes. There’s also a troubleshooting area, to fix problems as they occur, but preferably before they occur.

Disaster-recovery protocols are studied here so that redundant routes can be instantly activated to move information and communications traffic. Likewise, automated testing lets an operator exercise multiple devices with multiple applications without a person sitting there loading each one. Essentially: more testing, faster. Comcast engineers can also test unattended, which means they can run this testing from somewhere other than where the equipment sits.
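
A bare-bones sketch of that automated, unattended matrix testing might look like the following; the device and application names are hypothetical, and a real harness would poll the gear for crashes rather than return a canned result.

```python
# A minimal sketch of unattended matrix testing: load every application onto
# every device type and log the result, with no one in front of the racks.
import itertools
import logging

logging.basicConfig(level=logging.INFO)

DEVICES = ["legacy_settop", "tru2way_settop", "cable_modem"]
APPLICATIONS = ["caller_id_on_tv", "hd_switch_prompt", "interactive_ad"]

def load_and_run(device: str, app: str) -> bool:
    """Stand-in for remotely loading an app onto lab gear and checking health."""
    return True  # a real harness would watch the device for crashes or errors

def run_matrix() -> dict[tuple[str, str], bool]:
    """Exercise every device/application pair and collect pass/fail results."""
    results = {}
    for device, app in itertools.product(DEVICES, APPLICATIONS):
        ok = load_and_run(device, app)
        results[(device, app)] = ok
        logging.info("%s on %s: %s", app, device, "PASS" if ok else "FAIL")
    return results

if __name__ == "__main__":
    run_matrix()  # could be kicked off remotely, e.g. by a nightly scheduler
```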

For power, Downingtown features substantial generator backup. Battery backup, too — enough for 15 minutes of clean, uninterrupted power. That translates into a room full of stacked, car-battery-sized batteries.

Where the racks run out, expansion space exists: an adjacent, unfinished room on one end of the building with nearly 20,000 square feet of unused space. For now.

Said Schanz: “What we’re doing right now is preparing the infrastructure. The beginnings of the software ecosystem are coming together. It’s a journey, but we’re definitely on it.”