Bad neighbours? A comparison of LPWA technology options

While the carrier community is celebrating the steady arrival of 3GPP-defined cellular IoT, which will enable the use of existing GSM networks with minimal impact through upgrades, there remains significant interest in alternative solutions in the unlicensed space.

Some of this interest comes from service providers who lack access to licensed spectrum, but the majority is being driven by use cases where the long range, extended battery life and very low cost of Low Power Wide Area (LPWA) wireless technologies are fundamental necessities. What is emerging, though, is a fragmented area of largely proprietary solutions, making it difficult for users to decide which option best suits their particular use case.

The key approaches to unlicensed M2M connectivity can be split into two groups: UltraNarrowBand (UNB) technologies, and those that employ some form of spread spectrum modulation (SSM).

Growth forecasts for the M2M market underline the need for these LPWA systems to co-exist in license-exempt spectrum, and for any LPWA solution to support many connected devices – a requirement that will only become more important as the number of devices increases.

Real Wireless recently carried out a study that compared the levels of interference between networks using these two different physical layer architectures. This required us to model a scenario in which a UNB and a SSM network had overlapping coverage areas and various other sources of interference, including non-LPWA users, in order to study the ability of both technologies to mitigate interference.

The insight gained was that UNB and spread spectrum modulation networks can only effectively co-exist in very low capacity deployments. Shared channel operation – either between a SSM and a UNB network, or between two SSM networks – would result in mutual interference and uplink blocking of both networks, except in cases of very low simultaneous user numbers.
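As a rough illustration of why shared-channel operation breaks down at scale, a pure-ALOHA collision model shows collision probability rising steeply with the number of simultaneous devices. This is only a toy stand-in for a full physical-layer coexistence study (it ignores bandwidth overlap and spreading gain), and the message rate and duration below are assumed figures:

```python
import math

def aloha_collision_prob(msgs_per_sec: float, msg_duration_s: float) -> float:
    """Pure-ALOHA collision probability on a shared channel.

    Illustrative only: real UNB/SSM coexistence also depends on channel
    bandwidth overlap and processing gain, which this toy model ignores.
    """
    g = msgs_per_sec * msg_duration_s   # offered load in Erlangs
    return 1.0 - math.exp(-2.0 * g)     # P(a message overlaps another)

# Collision risk grows quickly with device count (assumed: one 2 s
# message per device per hour):
for devices in (100, 1_000, 10_000):
    rate = devices / 3600.0
    print(devices, round(aloha_collision_prob(rate, 2.0), 3))
# 100 devices -> ~0.105, 1_000 -> ~0.671, 10_000 -> ~1.0
```

Even this crude model reproduces the qualitative result: sharing only works at very low simultaneous user numbers.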

In other words, the reality is that a SSM LPWA network architecture should be considered a ‘bad neighbour’, and multiple unlicensed IoT networks can only effectively share access to spectrum when they all share a UNB architecture. However, given the number of use cases for these technologies, multiple networks will undoubtedly end up coexisting in one location. As a result, this study has significant implications for technology choices in this important growth market.

To find out more about our study and approach to modelling of unlicensed IoT solutions, download our new white paper today.

A Manifesto for better DAS

Note: This article, by Real Wireless’s Managing Consultant Oliver Bosshard, was originally published in RCR Wireless

The traffic demands users are placing upon networks continue to accelerate. To cope with this trend, debate has focused on network offload: the benefits, the options available and their respective merits.

Despite plenty of discussion, we’re still no closer to solving this problem. 80% of all data traffic is generated indoors, from office employees, visitors, customers and the like. Yet remarkably few buildings feature indoor mobile infrastructure installations; typically only the newest and largest (e.g. airports, stadia, shopping malls), ignoring the vast majority of people. Clearly there is a huge opportunity here and a number of technologies are jostling to be the solution – including Small Cells, Radio Remote Heads, Wi-Fi and optical or passive Distributed Antenna Systems (DAS), each with their own merits, weaknesses and use cases. Ideally we would have one standardized solution with standardized interfaces for all (large to small) indoor coverage solutions.

The oldest solution for indoor coverage is DAS. Originally an analogue, single-operator, single-technology solution, it has evolved to encompass digital multi-operator, multi-technology solutions – as well as supporting MIMO (Multiple Input – Multiple Output).

Despite these upgrades there remain a number of significant limitations to the technology – and a lot of room for improvement. Worse still, it now faces stiff competition from rival technologies, in particular from small cells. A number of analyses show that these might be more cost-effective than DAS in some cases, and some companies and pundits have been announcing the end of DAS. With the massive reduction in base station (BS) pricing, operators might even opt for more dispersed BSs with a passive DAS, instead of an active DAS.

But DAS continues to be used widely, and in fact the number of deployments continues to grow. No surprise: it has a lot of compelling attractions, especially for multi-technology and multi-operator support (operators can share one infrastructure, as opposed to one small cell for each). Indeed, it was striking how much of the activity and news at Mobile World Congress 2014 came from DAS vendors, demonstrating a number of new products and innovative ideas.

Certainly, all is not lost for DAS. If it can continue to evolve into a smart, digital solution – offering flexible sectorisation, intelligent/dynamic capacity steering, digitalization and packet switching at a more competitive price – it could become the ideal solution.

Here is my manifesto for a better DAS.

(Note: In the interests of complete fairness, several companies are currently working on – or planning – some of these ideas. However, I’m not yet aware of anyone that has announced they intend to combine them all, and I see this as the real opportunity here.)

Better RRUs through equipment adjustments

Compared to conventional radio remote heads, the radios used in the Radio Remote Units (RRU) of DAS are technology-agnostic. Typically RRUs will feature modular support for all 5 bands and technologies, and are remarkably straightforward in their composition: one power supply, one Fibre or Cat6/7 connection, one RF output to an antenna or passive DAS.

Ideally, the RRUs should evolve into full 2×2 MIMO remote units. To achieve this, the equipment needs to become slightly more complex, with two RRUs in tandem fed by two fibres or Cat7 cables, with two RF outputs to a MIMO antenna or DAS. If the total spectrum of both streams together is less than 270 MHz, a single fibre / Cat7 connection may suffice.
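The single-link rule above can be sketched as a simple check. This is a minimal sketch assuming, as the text does, that one fibre/Cat7 link can carry up to 270 MHz of sampled spectrum; the function name and bandwidth figures are illustrative, and a real design would also account for the sampling scheme and link rate:

```python
def fibre_links_needed(stream_bandwidths_mhz, link_capacity_mhz=270):
    """How many fibre/Cat7 links a 2x2 MIMO RRU needs.

    Rule of thumb from the text: a single link suffices if the combined
    sampled spectrum of both MIMO streams is under link_capacity_mhz
    (an assumed figure), otherwise each stream gets its own link.
    """
    total = sum(stream_bandwidths_mhz)
    return 1 if total < link_capacity_mhz else 2

print(fibre_links_needed([100, 100]))  # 200 MHz total -> one shared link
print(fibre_links_needed([150, 150]))  # 300 MHz total -> two links
```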

Increase signal capacity and noise cancellation via digital transmission

At present, the fiber connections between Master Units (MUs) and Radio Remote Units (RRUs) typically support a sampled analogue RF signal input of up to 10 Gbps in capacity.

With 270 MHz of cellular spectrum available across all 2G to 4G bands and technologies, and 30 MHz of sampled spectrum typically requiring 1 Gbps of digital capacity, nine of the ten Gbps available are required for cellular, leaving only 1 Gbps spare for other technologies such as Wi-Fi.
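The budget arithmetic above is simple enough to write down directly. A minimal sketch, assuming the figures given in the text (a 10 Gbps MU–RRU link, and 1 Gbps per 30 MHz of sampled spectrum); the function name is illustrative:

```python
LINK_CAPACITY_GBPS = 10.0   # assumed MU-RRU fibre link capacity (per the text)
GBPS_PER_30_MHZ = 1.0       # ~1 Gbps per 30 MHz of sampled spectrum

def fronthaul_budget(cellular_mhz: float):
    """Return (Gbps consumed by sampled cellular spectrum, Gbps spare)."""
    used = cellular_mhz / 30.0 * GBPS_PER_30_MHZ
    return used, LINK_CAPACITY_GBPS - used

used, spare = fronthaul_budget(270)   # all 2G-4G bands and technologies
print(used, spare)                    # 9.0 Gbps for cellular, 1.0 Gbps spare
```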

But sampling an analogue RF signal is not the most efficient use of a transport medium. Imagine if the digital bit stream from the CPRI interface could be used instead. The CPRI data stream does not need sampling and can therefore be transmitted as it is, using the transport medium efficiently. As a result, conventional fibre connections could be replaced with Cat7 cabling, in conjunction with standard SFPs on the MUs and RRUs.

The other benefit of digital transmission is that the digital signal can be transported, amplified and distributed without the typical signal losses and noise creation. This would mean that RRUs could be situated far away from the MU and daisy chained as required.

Finally, by digitizing the transmission, the current issues DAS faces with signal loss over distance are rendered irrelevant, as the signal can be amplified without the risk of increased noise. This also starts to lead towards a hybrid between DAS and the newer C-RAN architectures being contemplated for the wide-area network.

Reduce costs, simplify and increase efficiency by connecting to CPRI

At present, DAS uses standard RF interfaces for BS – MU connections. This results in OEMs needing to purchase and produce additional BS hardware for compatibility. This increases the complexity of the solution, whilst adding additional costs for manufacturing, stocking and shipping.

Using CPRI – or another evolved, optimized, interoperable digital interface – would do away with the need to include the radio in the BS, reducing hardware requirements, power consumption and the use of external directional couplers and termination loads between the BS and the MU. Less up-conversion and final amplification in the base stations would reduce hardware costs, power, UPS and air-conditioning significantly, and avoid RF noise creation.


At the same time, with CPRI connectivity the signal conversion at the RRU end becomes simpler and cheaper, thanks to direct digital-to-RF conversion.

Take full advantage of routing and switching capabilities

Another benefit of the CPRI interface is that the data is presented as a digital data stream. As a result, the stream could be switched and routed by proprietary switches supplied by the DAS manufacturers, using either Cat7 (up to 100 m) or fibre networks for longer MU–RRU distances (up to 40 km is possible).

Doing so would not only allow full flexibility in allocating traffic to end points, shaping traffic to meet demand for capacity, but also smarter switching of unused or underused repeaters. By manually or dynamically switching off unused repeaters, uplink and downlink noise pollution and power consumption can be managed more effectively.
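The dynamic switch-off logic could be as simple as a threshold rule. A hypothetical sketch only: the unit names and the idle threshold below are invented for illustration, and a real controller would add hysteresis so units don't flap on and off around the threshold:

```python
def rrus_to_power_down(traffic_mbps: dict, idle_threshold_mbps: float = 1.0):
    """Select remote units carrying less traffic than an idle threshold.

    Illustrative sketch of the 'dynamic switching' idea: everything here
    (unit names, the 1 Mbps threshold) is assumed, not from any product.
    A real system would apply hysteresis and time-of-day awareness.
    """
    return sorted(unit for unit, t in traffic_mbps.items()
                  if t < idle_threshold_mbps)

# Hypothetical per-unit traffic snapshot:
print(rrus_to_power_down({"lobby": 40.2, "car_park": 0.3, "floor3": 12.8}))
# -> ['car_park']
```

Powering down the idle unit removes its contribution to uplink noise as well as its power draw, which is the double benefit the text describes.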

Unfortunately CPRI implementation differs from OEM to OEM: it is one of those “not quite standardized standards”. We need to achieve open interfaces and (perhaps) cross-vendor interoperability if we are to get the best possible use out of DAS and a more open market. In the meantime, DAS manufacturers can create their own CPRI at the master unit output, in order to take advantage of the benefits digital transmission offers.

Another issue is that CPRI, while digital, is not compatible with Ethernet or current installed networks.

That too is changing, however. While there are issues with carrying these signals over standard TCP/IP switching and routing networks, they can be addressed. By standardising the CPRI interface across OEMs and encapsulating the cellular data packets in standard IP packets, traffic could be switched and routed via conventional routers instead of proprietary units supplied by DAS manufacturers – provided, of course, that transmission requirements such as synchronization and a jitter-free, constant serial data stream are met. Indeed, several vendors are now demonstrating products that can carry digitized RF over standard Ethernet.

By adopting these proposed changes, we would see massive capital and operational savings in the use of DAS systems. Standard infrastructure would be able to be used for switching and routing, whilst larger areas would be able to be covered by a DAS system.

To end with two thoughts about the implications for business models and the industry.

It is notable that these changes to DAS, coming from an in-building context, are very similar to – and will probably converge with – the activities around virtualization and Cloud RAN that are happening elsewhere in the network. The move to transporting digital radio signals to support multiple services in a flexible way is similar too. We may see some intriguing overlaps between DAS companies and Cloud RAN suppliers.

Finally, it is worth noting that the term “neutral host” may well take on a completely new meaning and present a new opportunity for groups such as MSOs. Interestingly, there has always been a difference in neutral-host models between Europe and the USA, and that difference could change in various ways. Beyond cost and efficiency savings, the proposed changes could actually catalyse new business models that change the industry’s structure.

Clouding the Edge for LTE-A and Beyond

This blog post was originally published over at Light Reading.

One of the areas of increasing discussion about LTE-Advanced (LTE-A) and especially around the yet-to-be defined 5G standard is the tension between the “edge” and the “cloud.”

Over the last few decades in telecom the powerful trend has been to push intelligence out to the edge. David Isenberg wrote a very good – but oddly not as widely known or distributed as it deserves – essay on this way back in 1997: The Rise of the Stupid Network.

We now have edge routers, we have gateways in our phones, and new smartphones have “intelligence” onboard in a way landline phones never did.

In wireless networking, a few years after Isenberg’s essay, broadband was proving this logic with TCP/IP pushing intelligence out to the edge. While in 2G the smarts were quite centralized – with a basestation controller (BSC) in the network – with 3G that focus shifted and the network started to flatten out a bit. (See Mobile Infrastructure 101.)

Bell Labs, meanwhile, had the idea of putting the router and stack all the way into the basestation with the snappily named BaseStationRouter. That of course then became the 3G small cell, with the medium access control (MAC) and stacks moving into the NodeB with Iuh replacing Iub, and then onto “flat architecture” of LTE. (See Telco in Transition: The Move to 4G Mobility.)

So small cells represent the clear case of intelligence to the edge — some people call this the Distributed RAN (D-RAN). (See Know Your Small Cell: Home, Enterprise, or Public Access?)

The advantages are that networks become better: we put capacity exactly where we need it. The small cell is responsive and efficient, and we can do things like offload and edge caching; latency is reduced (which improves speed and QoE) and so on. It is a cost-effective and intelligent way to make the network better and has been the “obvious” paradigm for the last few years. (See Meet the Next 4G: LTE-Advanced.)

But over the last few years we have seen the reverse trend too.

In computing we have the cloud: intelligence moving from the edge into the center. Witness the widespread use of Amazon AWS or Google Cloud to host services, the rise of Chromebooks, and cloud-based services like Salesforce, Dropbox or Gmail.

This concept is also being felt in the wireless world, as we have heard more and more about cloud RAN (C-RAN). This is the opposite trend to small cells: having a “dumb” remote radio head (RRH) at the edge with all the digitized signals sent back over fiber – aka “fronthaul” – to a huge farm of servers that do all the signal processing for the whole network. No basestation and certainly no basestation router. (See What the [Bleep] Is Fronthaul? and C-RAN Blazes a Trail to True 4G.)

Some simple advantages here are from economies of scale: One big server farm is cheaper and more efficient than having the same processing power distributed — electricity and cooling needs at the basestation are reduced for example. A more subtle gain is from pooling, which is sometimes called “peak/average” or “trunking gain.”

In a normal network every basestation must be designed to cope with the peak traffic it will support – even though other basestations will be lightly loaded at that moment, only to hit their own peaks at some other time. So every site needs worst-case dimensioning, even though on average there is a lot of wasted capacity. In contrast, the Cloud RAN can have just the right amount of capacity for the network as a whole, and it “sloshes around” to exactly where it is needed.
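The pooling gain can be seen in a toy simulation. A minimal sketch under stated assumptions: each site's hourly load is drawn uniformly at random (standing in for offices, stadia and so on peaking at different hours), and we compare the total capacity needed if every site is sized for its own peak against sizing one pool for the network-wide peak:

```python
import random

def pooling_gain(n_sites=20, n_hours=24, trials=200, seed=1):
    """Ratio of per-site peak provisioning to pooled provisioning.

    Toy model: hourly load per site is uniform in [0, 1) 'capacity
    units'. Values above 1 mean pooling needs less total capacity.
    """
    rng = random.Random(seed)
    per_site_total, pooled_total = 0.0, 0.0
    for _ in range(trials):
        loads = [[rng.random() for _ in range(n_hours)]
                 for _ in range(n_sites)]
        # D-RAN: every site sized for its own busiest hour
        per_site_total += sum(max(site) for site in loads)
        # C-RAN: one pool sized for the network's busiest hour
        pooled_total += max(sum(site[h] for site in loads)
                            for h in range(n_hours))
    return per_site_total / pooled_total

print(round(pooling_gain(), 2))  # > 1: the pool needs less total capacity
```

Because the per-site peaks rarely coincide, the pooled peak is well below the sum of individual peaks – which is exactly the trunking gain the text describes.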

That is a benefit, but it has not seemed significant enough to persuade most carriers.

The problem has been connectivity: Those radio heads produce a huge amount of data and the connectivity almost certainly requires dark fiber. Most carriers simply do not have enough fiber, and even for those who do it is unfeasibly expensive. So, for most operators C-RAN has so far been economically interesting but not compelling and not worth the cost. (See DoCoMo’s 2020 Vision for 5G.)

But there is an increasingly strong reason that is changing that calculation.

Most of the advances in signal processing that make LTE-A and 5G interesting rely on much tighter coordination between basestations. Whether they are called CoMP, macro-diversity, 3D MIMO or beam-shaping, they all rely on fast, low-level communication between different sites. This is impossible with “intelligence at the edge” but relatively easy with a centralized approach. (See Sprint Promises 180Mbit/s ‘Peaks’ in 2015 for more on recent advances in multiple input, multiple output antennas.)

Hence the renewed focus on centralized solutions: whilst before the economics were merely intriguing, these performance and spectral-efficiency gains may make it compelling.

There is the twist that maybe a “halfway” solution would be optimal. This would perhaps put some signal processing in the radio, to reduce the data rate needed on fronthaul – allowing something easier and cheaper than dark fiber – while still getting the pooling economies and signal processing benefits. (See 60GHz: A Frequency to Watch and Mimosa’s Backhaul Bubbles With Massive MIMO.)

This tension between the edge and the cloud will be one of the more interesting architectural choices facing 5G and is something the 3rd Generation Partnership Project (3GPP) and 5GPPP are looking at, as is the Small Cell Forum Ltd. (See 5G Will Give Operators Massive Headaches – Bell Labs.)

But it might be an ironic twist if the architecture that becomes 5G is back to the “some at edge, some in core” we had with GSM or 3G, and we re-invent Abis and Iub for a new generation. [Ed note: Abis is the interface that links the BTS and the BSC in a GSM network; Iub links the Radio Network Controller (RNC) and Node B in a UMTS network.]

Boris and 5G – will London really have it by 2020?

Boris Johnson has recently announced that London would lead the world in having 5G by 2020.

It’s not where you’d expect such an announcement to come from, at least not compared to the 5G stories so far. Boris certainly doesn’t run a cellular network; he isn’t responsible for telecoms policy or spectrum, and (quite probably) won’t still be Mayor then.

But will London actually have 5G by 2020?

Cellular standards do follow an interestingly regular ten-year ‘tick.’ The first 1G analogue standards (e.g. AMPS, NMT) were launched around 1981; GSM first launched in 1991; UMTS 3G launched between 2001 and 2003; and LTE arrived in 2010.

So, simplistically, you could say 2021 for commercial launch was about right, making a pre-release demonstration in 2020 viable.

In LTE, all the key requirements were agreed and the basic technology outline accepted by 2004. However, there is still a lot of mileage in LTE-A and the interim releases, so 5G could be much later.

The other issue is the uncertainty over what exactly 5G is. With a wide set of use cases, a huge variety of architectures and technologies and, as yet, little consensus, it is hard to believe 5G will be a quick development.

This all makes Boris’ 2020 deadline look unlikely.

What I do think will happen is a demonstration and claims of “5G” in July 2020, driven by the Tokyo Olympics and the Japanese commitment to show something working. But even this may need to be taken with a pinch of salt, given the history of FOMA – a slightly incompatible pre-standard version of UMTS that launched in Japan in 2001, two years ahead of the ‘proper’ Release 99.

So while Boris could have his wish and demo 5G in London in 2020, it’s likely to be an engineering demonstrator with an instruction set in kanji shipped over from the Tokyo Olympics, and it won’t be the real 5G. The rest of us will have to wait at least another two years before we’re using 5G services – whatever they turn out to be.