Resilience, space weather, and the end of the world

Animation courtesy of Spaceweather.com

Last week you may have caught the news of two large coronal mass ejections (CMEs) occurring within a few days of each other, hitting Earth with a good dose of radiation. The two CMEs were the result of the catchily titled AR2158 sunspot, and their power was placed within the ‘extreme’ bracket of the scale used by astronomers.

On the night, many people saw the beauty in the event – thanks to the fantastic Northern Lights display it produced – whilst others predicted that it would lead to a nightmare of doomsday proportions.

In the end, the only ones really affected were amateur (ham) radio enthusiasts. The HF signals used in ham radio transmissions propagate by ‘bouncing’ off the ionosphere, the atmospheric layer disturbed by geomagnetic activity. Normally this is a good thing for amateur radio enthusiasts, allowing communication over much longer distances than line of sight – but during the storm that layer was disturbed, disrupting the long-range propagation they rely on.

It was therefore far from the cataclysmic existential risk some had made it out to be – but there is just cause for concern, thanks to the potential future impact of such an event on wireless, and the consequences for wider society.

The most famous solar event is the Carrington Event of 1859, a powerful solar storm that produced the largest geomagnetic storm ever recorded. Aurora Borealis sightings were noted near the equator, whilst a famous anecdote has gold miners in Denver waking at 1AM and beginning their morning routines because the sky was so bright.

We’ve seen written testimony describing Northern Lights events like this throughout history; the Carrington Event is remembered partly because of our improved scientific understanding by 1859, but also because of its impact on the early telegraph systems in place by that point. These systems failed, pylons sparked and operators suffered electric shocks.

Fast forward to 2014, and we are witnessing another peak in solar activity. After a major solar superstorm narrowly missed Earth in 2012, a NASA study in the December 2013 edition of Space Weather estimated the chances of a Carrington-level event hitting Earth by 2022 at 12%. That said, I’m dubious about how you assign a probability to something that has only happened once.

Such an event could induce huge currents in long east-west wires – the longer the wire, the bigger the effect. That could cause significant disruption in the USA and continental Europe, though less in the UK, where our power lines mostly run north-south. Transformers failing, power networks collapsing, fires and other unpleasant effects could result. With no power, water supplies and sewage systems – and of course communication networks – could stop working if not specifically designed to take such effects into account.
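
As a rough back-of-the-envelope illustration of the ‘longer wire, bigger effect’ point, the quasi-DC voltage driven along a line is approximately the storm-time geoelectric field multiplied by the line length. The sketch below uses a purely illustrative field of 1 V/km; real values depend heavily on location, ground conductivity and storm intensity.

```python
# Rough back-of-the-envelope estimate of the geomagnetically induced
# voltage on a long east-west transmission line.
# Assumes a uniform eastward geoelectric field, which is a simplification;
# the 1 V/km figure is purely illustrative, not a measured storm value.

def induced_voltage(field_v_per_km: float, line_length_km: float) -> float:
    """Voltage driven along the line ~= geoelectric field x line length."""
    return field_v_per_km * line_length_km

for length_km in (100, 500, 1000):
    v = induced_voltage(1.0, length_km)  # 1 V/km illustrative storm-time field
    print(f"{length_km:5d} km east-west line -> ~{v:.0f} V quasi-DC driving voltage")
```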

However, just as we assess and mitigate the risk of terrorism to wireless infrastructure, the impact such an event could have on communications infrastructure is something we must take into account when planning wireless networks.

Why? A 2013 report from Lloyd’s investigating the risk of such a storm estimated between $0.6 and $2.6 trillion in costs from US power shortages alone, with the lower-end estimates relying on utility companies being well prepared for such an event.

We contributed to arguably the most authoritative study on this issue: a report by a Royal Academy of Engineering committee on the impacts of so-called Extreme Space Weather on engineered systems and infrastructure. The committee included eminent space scientists together with representatives of major services such as power networks and aviation, with Real Wireless representing the interests of wireless communication networks.

One interesting finding was that, although Carrington-level events are very extreme, even more routine solar activity should have a measurable impact on mobile network quality around once a week. We recommended that operators of systems needed for critical applications should carefully examine their reliance on GPS-based synchronization, which could be vulnerable.
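
To see why GPS-based synchronization matters, here is a toy calculation of how long a base-station clock could ‘hold over’ if GPS timing were lost: phase error grows roughly linearly with the oscillator’s fractional frequency offset. The phase budget and oscillator figures below are illustrative assumptions, not values from the report.

```python
# Illustrative look at GPS-holdover drift for a base-station clock.
# If GPS timing is lost (e.g. during a severe space-weather event), the
# local oscillator free-runs and phase error grows roughly linearly with
# its fractional frequency offset. All numbers below are illustrative.

phase_budget_s = 1.5e-6   # ~1.5 microsecond phase budget, often quoted for TDD LTE
freq_offset    = 1e-10    # assumed fractional frequency error of a holdover oscillator

holdover_seconds = phase_budget_s / freq_offset
print(f"Phase budget exhausted after ~{holdover_seconds / 3600:.1f} hours without GPS")
```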

The full report is available here.

We can’t prevent such an event; one will probably happen in the not-too-distant future, and all we can do is ensure we are prepared to mitigate its impact. We hope that the Royal Academy of Engineering report will provide a basis for proper planning to minimise the potential consequences.

Clouding the Edge for LTE-A and Beyond

This blog post was originally published over at Light Reading.

One of the areas of increasing discussion about LTE-Advanced (LTE-A) and especially around the yet-to-be defined 5G standard is the tension between the “edge” and the “cloud.”

Over the last few decades in telecom, the powerful trend has been to push intelligence out to the edge. David Isenberg wrote a very good — but oddly not as widely known or distributed as it deserves — essay on this way back in 1997: The Rise of the Stupid Network.

We now have edge routers, we have gateways in our phones, and new smartphones have “intelligence” onboard in a way landline phones never did.

In wireless networking, a few years after Isenberg’s essay, broadband was proving this logic, with TCP/IP pushing intelligence out to the edge. While in 2G the smarts were quite centralized — with a basestation controller (BSC) in the network — with 3G that focus shifted and the network started to flatten out a bit. (See Mobile Infrastructure 101.)

Bell Labs, meanwhile, had the idea of putting the router and stack all the way into the basestation with the snappily named BaseStationRouter. That of course then became the 3G small cell, with the medium access control (MAC) and stacks moving into the NodeB and Iuh replacing Iub, and then on to the “flat architecture” of LTE. (See Telco in Transition: The Move to 4G Mobility.)

So small cells represent the clear case of intelligence at the edge — some people call this the Distributed RAN (D-RAN). (See Know Your Small Cell: Home, Enterprise, or Public Access?)

The advantages are that networks become better: we put capacity exactly where we need it, the small cell is responsive and efficient, we can do things like offload and edge caching, latency is reduced (which improves speed and QoE), and so on. It is a cost-effective and intelligent way to make the network better and has been the “obvious” paradigm for the last few years. (See Meet the Next 4G: LTE-Advanced.)

But over the last few years we have seen the reverse trend too.

In computing we have the cloud: intelligence moving from the edge back into the center. Witness the widespread use of Amazon AWS or Google Cloud to host services, the rise of Chromebooks, and cloud-based services like Salesforce, Dropbox or Gmail.

This trend is also being felt in the wireless world, as we hear more and more about the cloud RAN (C-RAN). This is the opposite trend to small cells: having a “dumb” remote radio head (RRH) at the edge, with all the digitized samples sent back over fiber — aka “fronthaul” — to a huge farm of servers that does all of the signal processing for the whole network. No basestation and certainly no basestation router. (See What the [Bleep] Is Fronthaul? and C-RAN Blazes a Trail to True 4G.)

Some simple advantages here are from economies of scale: One big server farm is cheaper and more efficient than having the same processing power distributed — electricity and cooling needs at the basestation are reduced for example. A more subtle gain is from pooling, which is sometimes called “peak/average” or “trunking gain.”

In a normal network, every basestation must be designed to cope with the peak traffic it will support — even though other basestations will be lightly loaded at that moment, only to hit their own peaks at some other time. So the network needs worst-case dimensioning at every site, even though on average there is a lot of wasted capacity. In contrast, the cloud RAN can provision just the right amount of capacity for the network as a whole, and it “sloshes around” to exactly where it is needed.
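
A toy calculation makes the pooling gain concrete. With made-up load profiles for ten cells that peak at different times, distributed provisioning must cover the sum of the per-cell peaks, while a pooled baseband hotel only needs to cover the peak of the summed load:

```python
import random

# Toy illustration of pooling ("trunking") gain, with made-up traffic:
# each of 10 cells gets a random load profile over 24 hours, peaking at
# different times. Distributed RAN must provision each cell for its own
# peak; a pooled (C-RAN) baseband hotel only needs the peak of the sum.
random.seed(1)
HOURS, CELLS = 24, 10
loads = [[random.uniform(0, 100) for _ in range(HOURS)] for _ in range(CELLS)]

sum_of_peaks = sum(max(cell) for cell in loads)                           # per-cell dimensioning
peak_of_sum = max(sum(cell[h] for cell in loads) for h in range(HOURS))   # pooled dimensioning

print(f"Distributed provisioning (sum of peaks): {sum_of_peaks:.0f} units")
print(f"Pooled provisioning (peak of sum):       {peak_of_sum:.0f} units")
print(f"Pooling gain: {sum_of_peaks / peak_of_sum:.2f}x")
```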

That is a benefit, but it has not seemed significant enough to persuade most carriers.

The problem has been connectivity: those radio heads produce a huge amount of data, and the connectivity almost certainly requires dark fiber. Most carriers simply do not have enough fiber, and even for those who do it is unfeasibly expensive. So, for most operators, C-RAN has so far been economically interesting but not compelling and not worth the cost. (See DoCoMo’s 2020 Vision for 5G.)
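
To put numbers on that, here is a back-of-the-envelope, CPRI-style calculation for a single 20MHz, two-antenna LTE sector; the sample width and overhead factors are typical illustrative choices rather than figures from any particular vendor or deployment.

```python
# Back-of-the-envelope CPRI-style fronthaul rate for one LTE sector.
# Parameter choices are illustrative (20 MHz carrier, 2 antennas,
# 15-bit I/Q samples, CPRI control-word and 8b/10b line-coding overheads).

sample_rate_hz   = 30.72e6   # LTE 20 MHz baseband sampling rate
bits_per_sample  = 15        # per I or Q component
antennas         = 2
control_overhead = 16 / 15   # one control word per 15 data words
line_coding      = 10 / 8    # 8b/10b encoding

rate_bps = (sample_rate_hz * 2 * bits_per_sample * antennas
            * control_overhead * line_coding)
print(f"~{rate_bps / 1e9:.2f} Gbit/s of fronthaul for one 20 MHz, 2-antenna sector")
```

Roughly 2.5 Gbit/s of constant fronthaul traffic to carry a cell whose peak over-the-air rate is on the order of 150 Mbit/s is why dark fiber quickly becomes the binding constraint.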

But there is an increasingly strong reason that is changing that calculation.

Most of the advances in signal processing that make LTE-A and 5G interesting rely on much tighter coordination between basestations. Whether they are called CoMP, macro-diversity, 3D MIMO or beam-shaping, they all rely on fast, low-level communication between different sites. This is impossible with “intelligence at the edge” but relatively easy with a centralized approach. (See Sprint Promises 180Mbit/s ‘Peaks’ in 2015 for more on recent advances in multiple input, multiple output antennas.)
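
As a rough illustration of why coordination at the edge is so hard: LTE schedules every 1ms subframe (TTI), so joint scheduling or CoMP decisions have to be exchanged faster than that. The site-to-site and intra-pool latency figures below are assumptions for illustration only.

```python
# Why per-subframe coordination is hard at the edge: LTE schedules every
# 1 ms subframe (TTI), so joint scheduling/CoMP decisions must be shared
# faster than that. The latency figures below are assumptions for
# illustration; real deployments vary widely.

tti_ms                = 1.0    # LTE transmission time interval
x2_latency_ms         = 10.0   # assumed site-to-site latency over backhaul/X2
intra_pool_latency_ms = 0.05   # assumed latency between baseband units in one C-RAN pool

for name, latency in [("distributed (X2)", x2_latency_ms),
                      ("centralized pool", intra_pool_latency_ms)]:
    feasible = "feasible" if latency < tti_ms else "too slow"
    print(f"{name:18s}: {latency:6.2f} ms vs {tti_ms} ms TTI -> per-TTI coordination {feasible}")
```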

Hence the renewed focus on centralized solutions: whilst before the economics were merely intriguing, these performance and spectral-efficiency gains may make it compelling.

There is a twist in that a “halfway” solution might be optimal. This would put some signal processing in the radio, to reduce the data rate needed on the fronthaul — so that something easier and cheaper than dark fiber could be used — while still getting the pooling economies and signal processing benefits. (See 60GHz: A Frequency to Watch and Mimosa’s Backhaul Bubbles With Massive MIMO.)
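
A very rough sketch of that trade-off, using assumed and illustrative numbers (including the overhead factor for a higher-layer split): a low-level split that ships raw I/Q samples needs a constant multi-gigabit pipe, whereas a higher-layer split scales with the traffic actually carried.

```python
# Very rough comparison of fronthaul load under two functional splits,
# for one 20 MHz, 2-antenna LTE sector. Numbers are illustrative only.

iq_split_gbps         = 2.46    # raw I/Q samples (roughly the CPRI-style figure sketched earlier)
peak_user_tput_gbps   = 0.150   # ~150 Mbit/s peak over-the-air throughput
higher_split_overhead = 1.3     # assumed control/scheduling overhead for a higher-layer split

higher_split_gbps = peak_user_tput_gbps * higher_split_overhead
print(f"Low-level (I/Q) split: ~{iq_split_gbps:.2f} Gbit/s, constant even when the cell is idle")
print(f"Higher-layer split:    ~{higher_split_gbps:.2f} Gbit/s at peak, scaling with traffic")
```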

This tension between the edge and the cloud will be one of the more interesting architectural choices facing 5G and is something 3rd Generation Partnership Project (3GPP) and 5GPPP are looking at, as is the Small Cell Forum Ltd. (See 5G Will Give Operators Massive Headaches – Bell Labs.)

But it might be an ironic twist if the architecture that becomes 5G goes back to the “some at the edge, some in the core” split we had with GSM or 3G, and we re-invent Abis and Iub for a new generation. [Ed note: Abis is the interface that links the BTS and the BSC in a GSM network; Iub links the Radio Network Controller (RNC) and the Node B in a 3G UMTS network.]

2G and 3G are dead, long live LTE

Earlier this month, Verizon CFO Fran Shammo finally confirmed that the long-delayed launch of VoLTE on its network will happen in Q4 of this year.

This signals a turning point for the technology; it’s been a long and slow road to get here, but we’re finally at the point where it is starting to infiltrate the mainstream consciousness.

Both AT&T and Verizon have committed themselves to offering phones that can take advantage of the new VoLTE technology by Q4 – and I’d hazard a guess that it will be a standard feature in both Apple’s and Samsung’s latest-generation phones. This in turn will undoubtedly mean their competitors are not far behind with their own offerings.

So far no real surprises. The more interesting question, though, is when will we see the first LTE only devices? After all, many operators and handset manufacturers have made no secret of their desire to turn off 2G or 3G networks.

For the operators, supporting these now-legacy technologies not only occupies valuable spectrum but also adds infrastructure rollout and maintenance costs.

For handset manufacturers, the need to support 2G and 3G networks adds extra modem requirements and costs, which in turn negatively impact battery life and phone size. We’ve recently seen several new companies emerge offering “LTE-only” thin modems at very aggressive prices, which has no doubt piqued the interest of manufacturers.

Obviously, switching over entirely to LTE has only been made possible by the introduction of VoLTE. Without native voice support, circuit-switched fallback (CSFB) has been a requirement up until now, and therefore 2G and 3G networks have been a necessity.

Another key barrier up to now has been LTE coverage. Obviously, until this catches up, we’re unlikely to see any operator in a hurry to offer handsets that only support LTE, as this would severely impact their customers’ experiences.  But, as we saw in our recent work for the Scottish Government, the speed with which LTE has rolled out means it won’t be long until it catches up – our estimates put indoor 4G coverage in Scotland at 95% by the end of 2015.

Verizon originally forecast the introduction of LTE-only phones to their network by the end of 2014, a prediction that raised more than a few eyebrows. Their updated forecast now pushes this out to early 2016.

I think this is not only likely, but perhaps a necessity; should they wait any longer, the ecosystem will be in place for a competitor to take advantage of their delay.