Who really needs near-zero latency?

One of the ‘generational’ shifts associated with 5G is the promise of near-zero latency. Now for most engineers, the reduction of latency is generally seen as a necessary good, which is why putting it at the heart of the 5G value proposition is so rarely questioned. But when it comes to making the business case for 5G, it’s important to start making judgements about how much value can be attributed to ultra-low latency and the use cases in which it is mission critical.

It also means making a call about when such applications are likely to achieve the critical mass necessary to deliver significant and sustainable returns that justify investment.

Latency has fallen across the cellular generations. It was around 500ms with 2G, perhaps 100ms with 3G and around 30-50ms with 4G. As surveys have noted, falling from 100ms in 3G to 50ms in 4G has improved user satisfaction. Further improvements may be both harder to deliver and less impactful, with a correspondingly weaker ROI.

Current LTE networks deliver a theoretical latency across the radio interface of 10ms, which translates in practice to around 40ms once delays in the core and external networks are taken into account. Improving this latency would not materially change the experience for most use cases.

For example, video streaming can accommodate very high latency using buffering. Web browsing does not improve materially with latencies below about 50ms. For any application with video it is worth remembering that the frame refresh rate on most devices is effectively 25Hz – which means a video frame is replaced by another every 40ms, generating the perception of a moving picture. Having a latency below 40ms does not help for such applications since the video will not refresh faster – and even if it did it would not be perceptible.
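
For readers who like to see the arithmetic, a minimal Python sketch of the comparison above, using the rough per-generation latency figures quoted earlier (illustrative round numbers, not measurements):

    # Compare rough end-to-end latencies with the interval between frames
    # on a 25Hz display (the figures are the approximate ones cited above).
    frame_interval_ms = 1000 / 25                      # a new frame every 40ms

    typical_latency_ms = {"2G": 500, "3G": 100, "4G (radio plus core)": 40}
    for generation, latency in typical_latency_ms.items():
        frames = latency / frame_interval_ms
        print(f"{generation}: ~{latency}ms, i.e. about {frames:.1f} frame intervals")

Only at 4G-era latencies does the network delay shrink to roughly a single frame interval, which is the point beyond which further reductions stop being visible in video-based applications.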

So where might significantly lower latency help? Possibly with ‘tactile’ communications, where a user remotely controls a robot using, for example, a special glove that provides feedback on the touch sensation, although there is some debate over whether latencies below 20-40ms are needed for this. But this is likely to be an indoor application, where a wired or short-range wireless solution would probably be used instead.

Some claim low latency is needed for control of autonomous vehicles. However, it is likely there will be reluctance to depend upon low latency network connectivity for emergency situations such as harsh braking from a nearby car; direct car-to-car communications and advanced sensors are better suited to this situation.

Of course, the lower the latency the better, but it is hard to see any economically compelling cellular applications where reducing latency below that of LTE (whose latency performance is still being improved through the standards) would make a material difference to the user experience.

At Real Wireless, whilst we are fully behind research towards future low latency communications technologies, we believe that there are still significant gains to be had from leveraging the full value of LTE investments and that delivering such capabilities to the sectors and communities that can most benefit from them must remain an urgent priority.

Wi-Fi first? – William Webb, Regulatory and Spectrum Expert, Real Wireless

In developing wireless communications over the past three decades we have been chasing ever-faster speeds and ever higher capacity. This has delivered astonishing benefits for all of us that have truly transformed our lives. But the speed of data connection is now becoming less important than consistency – the ability to be connected at a reasonable speed everywhere. This suggests that, rather than aiming for ever-faster connections, delivering enhanced coverage in a number of known problematic locations such as trains and rural areas would generate greater value for the economy and be preferred by most consumers. These problems have persisted throughout the broadband era but the technology and inclination to tackle them are now emerging.

In most of the locations where connectivity is difficult Wi-Fi is a better solution than cellular, with the exception of coverage in rural areas. Wi-Fi provision on trains enables more productive journeys. Wi-Fi in buildings increasingly enables voice calling as well as data access. Wi-Fi can also provide very high capacity in stadiums, and the 60GHz ‘WiGig’ variant can enable Gbit/s links within rooms. This reflects a trend that has been underway for years towards increasing use and reliance on Wi-Fi to the extent that it is now the preferred method of communication for most. Our cellphones typically send around 85% of their data traffic over Wi-Fi and our tablets and laptops typically 100%. That we live in a ‘Wi-Fi first’ world is only slowly being realized across the industry – developing policies for such a world is becoming increasingly important for governments and regulators.

The end result – connectivity everywhere – would be one well worth striving for. A great road system is no longer one with unlimited maximum speed, but one with minimal congestion and excellent safety. A great communications system is one available everywhere, all the time with minimal congestion and at low cost. I am excited by the prospects that we might now be able to step off the ‘data rate’ escalator and focus on delivering a solution that meets everyone’s needs wherever they are.

One of my particular skills is in the regulation of radio spectrum and wireless communications – I spent seven years at Ofcom and have written two books on this topic. Our current regulatory framework devotes much attention to licensed spectrum for cellular and competition policies amongst mobile operators. It is time to reassess that framework, with increased focus on spectrum for Wi-Fi and with a changed competition policy that recognizes the world of mobile communications and the interests of the citizen-consumer will look quite different five years from now.

Wireless Communications: A wrong turn taken many years ago leads to a dead end?

Introduction

Some years ago I bought my daughter a wooden train track. The starter set included a small amount of track and a couple of engines. She played with it briefly and then lost interest. I concluded that more track and other items like stations were needed to make it sufficiently interesting to engage her attention and carried on doing this for quite some time before I finally realised that it was nothing to do with the amount of track – she just was not interested in train tracks.

My thesis is that wireless communications equally made some poor decisions many years ago. Just like my daughter, end users did not refuse the new services offered but quickly lost interest in them. Only now are operators and manufacturers starting to realise that the path they have been on was the wrong one. This paper discusses the decisions made, the resulting outcome and then suggests the path that should have been followed and how we might redirect our efforts to get back on track.

Historical decisions

The main developments in wireless communications have been within the cellular industry. Here the industry has progressed through a series of “generations” from 1G to 3G, with 4G now being widely discussed. The decisions made when designing the next generation are key – they affect the services that can be offered, the cost of the network and even aspects such as the battery life of handsets. The timing of the generations is also important in that it affects the need for operators to invest in new spectrum and technologies. It is in making these decisions that the wrong turn was taken. This section explains the decisions that were made and then subsequent sections discuss why these were inappropriate and have led us to a dead end.

The first generation was a mix of different analogue standards with a range of problems including security, lack of roaming and limited capacity. The standard that was developed in response to this was GSM (there were other standards in the US and Japan but these eventually became sidelined). GSM has remained secure to this date, provided enough capacity for all and facilitated roaming. In addition, almost as an after-thought, the short message service (SMS) was added which became the hugely popular texting service (something we will return to later). Over time, evolutions were added to enable packet data transmission which was generally more efficient for data services.

Once the team that had been working on the second generation completed their task, their attention naturally turned to the next challenge. Since 2G had followed about a decade after 1G there was a natural supposition that 3G would follow a decade later. However, there was nothing obviously wrong with 2G that needed fixing so the developers of 3G focussed on making it do the same things as 2G, only better. Better to them meant faster, so 3G was designed to deliver much higher data rates than 2G. It was also somewhat more spectrum efficient although the gains here were relatively small (perhaps a factor of three for voice, somewhat higher for packet data). This was one of the key decisions that we will return to – that “better” meant “faster”.

With 3G introduced, if somewhat shakily, the bandwagon rolled on, looking at what would be required for 4G. Just as with the 2G to 3G transition there was little that obviously needed fixing so the same teams concluded that they would make 4G even better than 3G – again predominantly by making it even faster. As before some small spectrum efficiency gains were anticipated although these were even smaller than in the previous transition as technologies came closer to fundamental limits. For 4G, all the assumptions about what “better” meant that had been made when 3G was designed were simply carried forward unquestioned.

So it was a fundamental decision taken in the mid 1990s that wireless systems needed to offer ever higher data rates that has broadly placed us on the path we have followed since then. But there is little evidence that users value higher data rates and plenty that they value other things. This divergence between what the system designers think “good” looks like and what the end users want has been growing over time and is now leading to serious problems. These are explored in the next section.

The current position

Despite ever “better” technology, the current position of the wireless communications industry is not generally healthy. Most wireless operators are no longer seeing a growth in revenues and indeed many are now seeing a small fall each year as competition drives down call costs for subscribers. This has driven many operators to cost-cutting measures. The manufacturing industry is also in poor shape. Nortel is bankrupt, many other manufacturers have merged and few are currently profitable. Many are reliant for future growth on 4G deployments of technologies such as LTE or WiMAX, but these deployments might be some years away. Although it is almost impossible to ascertain, it seems unlikely that many operators who invested in 3G spectrum and networks have yet recouped their investment, almost a decade after the first 3G systems were launched.

Other areas of wireless are somewhat healthier. In the short range area, WiFi and Bluetooth chipset sales continue to grow and become embedded in ever more products. The number of WiFi hotspots is still growing and they are being used to an increasing extent. Satellite communications remain stable although they address only a small niche market segment. Broadcast of TV and radio is also stable, although there are some concerns over the funding models for broadcasters – but these are not predominantly technically related.

Paradoxically, 3G does appear to have managed a recent success in the form of wireless data, or “3G dongles” as they are often known. While WiFi has demonstrated to users that there can be value in downloading emails and enabling web surfing when away from the home and office, WiFi coverage is erratic and often requires user intervention to log into each different zone. It appeared to some that 3G might be able to offer an alternative. With sufficiently high data rates that downloads happen fast enough for most and no need to log into different zones it does appear to solve the “data roaming” problem. But this is an illusion. For while 3G does indeed solve the problem, it is unable to provide enough capacity at a low enough cost. A hint as to why this might be is to note that between 2G and 4G data rates have risen in the region of 100-1,000 fold but capacity has only gone up around 5-10 fold.

It is worth dwelling on this problem a little more because it exposes some of the fallacies in the previous decisions made. Current 3G networks in the UK (and other countries are likely similar) can support data transfers of around 1GByte/user/month. This is adequate for occasional email download on the move but rapidly gets used up if there is any video involved or if used as the primary household broadband connection. Beyond this level cellular networks become congested and the data rates users can achieve suffer substantially. There are some ways to enhance this. One is to acquire additional spectrum; however, this is costly in terms of spectrum fees, infrastructure upgrades and the need to subsidise dongles that can work on the new frequencies. Another is to deploy more cells, but this is again expensive and becoming increasingly difficult to do in crowded areas where suitable locations are hard to find.

In order to increase capacity, then, operators will need to spend more money. This only makes sense if they can charge enough for the data usage to justify the cost. Herein lies the problem. Users will pay quite a lot for the initial connection but as the volume of data increases and data rate grows the amount that they are prepared to pay per bit of data transferred falls. At present users pay only around 1% as much per bit for data transfer as they do for voice, despite the fact that a bit of data requires exactly the same network resources as a bit of voice. Only by reducing the price per month have operators made wireless data successful but the price point they have reached is insufficient to justify new investment in the network. Users, then, do quite like the idea of high speed wireless data but not enough to pay at the levels necessary to make this an attractive and sustainable business for the operators.

We are left in something of a dead end. Revenue into the industry in the form of subscriber ARPU is falling, squeezing the operators and putting downward pressure on the rest of the industry. The solution that the industry proposes to this is faster 3G and ultimately even faster 4G but while users generally like the idea of faster it is only of marginal value to them and they will not pay much for high speed wireless data – not enough to justify further investment. Few are prepared to admit it but 3G looks like it was mostly a mistake and 4G looks even more problematic. As manufacturers go bankrupt and operators increasingly look to merge, where does wireless go from here?

Where we need to be

The greatest success in the wireless industry in recent years is Apple’s iPhone. In particular, the fact that the iPhone was initially launched as a 2G device is instructive. Indeed, another of the success stories of recent years – the Blackberry – is also a 2G device. The iPhone has since been upgraded to 3G and this does appear to have somewhat improved the user experience but nevertheless this clearly shows that end users value something other than data rate. Of course, all things being equal, higher data rates are better, but do not add all that much value to most.

The iPhone succeeded through a much improved user interface that enabled users to do much more with their phone. Its popularity was further enhanced by the “Apps store” which enabled users to select from an enormous range of different games and applications and download them for typically a small one-off payment. It is notable that Apple and others succeeded where the operators failed. Operators have been trying to introduce new services for decades. These include WAP, group calling, video calling, picture messaging, mobile TV, location-based services and more. They have almost all failed for a range of reasons. These include the fact that the operators wanted to charge a recurring per-usage fee when users wanted to pay one-off fees, and the desire of the operator to roll out a service consistently across the thousand or more handset variants operating on their network, which tended to bring the service experience down to the lowest common denominator as well as slow down its introduction.

Despite having high data rate channels at their disposal, users continue to predominantly use apparently highly inferior approaches such as texting and most recently Twitter – a text only solution with limited message size. All of this suggests that while the designers of 3G might have thought that “better” meant faster, this was far from what the end users understood by “better”. So what might better actually look like?

A first conclusion is that “better” is not necessarily more. If this were true then voice calls would have been replaced by video calls giving not only voice but images as well. Instead, if anything, voice calls have been replaced by texting, emails and tweets. Less, it appears, is often better than more.

A second conclusion is that variety is generally a good thing. New methods of communication such as Facebook have been embraced rapidly while older ones like texting have not declined. Humans like a wide range of communications mechanisms to select from depending on the context. Some things, like rejection of a suitor, are done less painfully from a distance over a very “thin” communications channel. Sometimes a text to say “I love you” is worth much more than a video call to say the same thing.

A third is that users are very different. There are many thousands of applications in the Apps Store. Some are very niche but very valuable to a subset of users. A wide range of simple applications and services is better than a narrow range of highly developed applications.

Perhaps above all else though, the conclusion that designers of communications systems do not really understand how their solutions will be used and what users will see as “better” stands out. Making any guess as to what “better” might be is likely to fail while “worse” (eg texting) may actually turn out to be just what the user wants. So faster is not better (or at least, not much better) and what is better is unpredictable, but providing users and developers with the tools to play around, be inventive and have a wide range of channels at their disposal is more likely to result in a good outcome. To paraphrase, if we’d concentrated more on developing iPhones and less on 3G we might be in a much better place now.

So, to answer the question set at the start of this section, where we need to be is an environment with a wide range of communications mechanisms which are flexible and relatively low cost, are amenable to new services being developed and to experimentation by end users. Technically, this might mean a range of large-cell networks (eg 2G) and small cell networks (eg WiFi) as well as fixed networks of course, which devices can readily connect to and which offer a simple standardised interface. It means devices that have a small number of standardised operating systems onto which applications can be readily downloaded so that developers can write just one version of their application. It means a wide range of extra features in the handset such as location and cameras to provide more “hooks” for developers to experiment. And it means a flexible value chain where service providers can readily integrate data transmission across multiple networks and develop services making use of multiple resources.

How we can get back on track

To summarise the discussion to date, wireless communications is currently not in a healthy position. Operators and manufacturers are facing increasing pressure to the extent that they are merging or going bankrupt and yet the next generation of technology only threatens to make things worse by increasing expenditure without providing substantial end-user benefit. This sorry state is all a result of assuming that “better” meant “faster”.

Where we need to be is a mix of different low-cost networks that offer flexibility and enable experimentation. The good news is that we already have most of the technology and investment that we need – the fact that the iPhone has been so successful is ample evidence of this. It is less a case of changing the technology and more one of changing the structure.

A first step for the operators is to bring to a halt most of their infrastructure expenditure. There is no evidence that 4G will be any more successful than 3G nor that further investment in 3G will be any more beneficial than the investment to date (back to the opening discussion about the train track). But there is plenty of evidence that the existing 2G and 3G networks can carry data in a flexible manner that allows most services to be implemented. Halting expenditure will also be the first step to restoring profitability.

The next step is for the operators to separate into network and service provision elements. This will open up the value chain, making it easier for service providers to put together converged offerings spanning multiple networks: fixed, mobile and WiFi. This will allow, for example, simple roaming across home, office, WiFi and cellular networks enabling lowest cost transfer of data in a manner that will meet the needs of most while minimising overall network cost. It also enables the network elements to merge, outsource, share masts and generally reduce the costs of their operations, further aiding the operators’ business cases.

Along with this is the need for application development environments where developers can write once for a range of different networks and devices. This requires operators to provide standard interfaces into network elements such as their location databases and for mobile devices to standardise on a small number of operating systems which are able to interwork in much the same way that the same document can be viewed on an Apple computer or a PC. Initiatives are underway in these areas but would benefit from greater commitment.

The final step is an acceptance that the value chain comprises network operators who run networks and provide wholesale bitpipes, service providers who aggregate data provision across multiple networks and provide customer care, and application developers who put together the applications that run on top of all these. This enables the greatest flexibility allowing many different communications channels to be provided and a wide range of applications to be made available.

Conclusions

Wireless communications can provide immense value to users. A range of new wireless services such as travel assistance and interworking with home networks and appliances would add substantially to this value. However, at present the industry is on the wrong path. It is fixated on ever higher data rates – a fixation born of a time in the 1990s when work started on 3G but when there were few obvious problems to fix with 2G. While higher data rates are no bad thing, they add little value to the end user and do not help provide any of the services which might revolutionise the role of the mobile phone. Further, if users try to make widespread use of these data rates the networks will rapidly run into capacity problems. This fixation has been steadily leading the whole wireless sector down the wrong path to the extent now that many operators and manufacturers see a problematic future.

Happily, getting back on track is not overly difficult. It requires operators to forego plans to deploy new networks and instead concentrate on opening up their networks, perhaps through structural separation of the network and service provisioning elements, and enabling a wide range of applications to be developed and deployed by others. It is the structure of the sector we need to address most, not the technology it might deploy.

The “killer app” may be dead but we still need a “founder app”

For many years, in many conferences, whenever a new technology or service was discussed the cry went up “yes, but what’s the killer application”? This was based on the observation that voice calls were the key reason people bought mobile phones (at the time) and that voice calls were well over 95% of the revenue. By analogy it was assumed any new service or technology would have a similar dominant application that needed to be identified and targeted to get the technology off the ground. Sadly, killer apps turned out to be few and far between and after failing to identify many, most proponents of new technologies took the easy route out by proclaiming that there was no killer app, just lots of small ones that added together justified the investment. The Apple Apps Store seems to be the apotheosis of this concept – the final nail in the coffin of the killer app.

But while it may be true that a new technology can be justified by multiple applications, there remains the tricky problem of getting something off the ground for which the use is unclear. Take near-field payment systems. The idea of using the mobile phone as a payment and identification mechanism has been around for years and tried a few times without success – there is little point having the payment capability in the phone if there are no shops that allow payment with it and no point equipping shops if there are only a few phones with the technology. This changed with the advent of the Oyster payment system in London (and similar systems in some other major cities). Now people had Oyster cards that could be used for payment elsewhere, making the deployment of readers sensible. And as readers were deployed, building the card into the phone had obvious benefits. Thanks to Oyster, near-field payment is off the ground and may enter a virtuous circle where it rapidly becomes mainstream and then ubiquitous. Oyster is unlikely to turn out to be the killer application – or even the largest application by value – but it was big enough to get things started.

The same can be seen in multiple other areas. With Bluetooth it was the wireless headset that persuaded people of its value. Bluetooth is now much more widely used for a huge number of applications. With WiFi it was home connectivity to a broadband connection – other devices are now camping onto this. But with technologies such as Zigbee and UWB that founder application is still to be identified. This is one of the reasons why forecasting the success of technologies can be so hard – they can languish for some time until a large organisation, Government or industry decides to adopt them for a particular application which then gets them onto the virtuous circle that leads to widespread adoption for multiple applications.

So ask not what the killer app will be but look instead for the founding application.

Scenario Planning – Generates a warm feeling but little value

Forecasting the future – in any discipline but especially in the relatively fast moving world of telecoms – is always important. With massive investments in new technologies and networks and with payback periods that can extend over decades or more, making the right bet is critical. As mentioned in a previous posting, we are now seeing the results of some companies having made poor bets, for example Nortel’s assumption that 4G and WiMAX would arrive more quickly than they have.

Many do try to forecast the future of wireless but the trend in the last decade has been to move from particular predictions to the use of scenarios. This can be seen in any of the forecasts from major entities such as the World Wireless Research Forum (WWRF), European Commission publications, outputs from regulators and a host of research papers.

Scenarios, on the face of it, represent a very sensible approach to forecasting. There are some variables that appear just too uncertain, such as whether mobile TV will take off. Better then to model a range of scenarios often representing extreme cases. This would work well if all the scenarios, or almost all, pointed to a similar outcome. For example, if 4G was needed under all reasonable scenarios then the analysis would have demonstrated strongly that its emergence was near-certain. But in the communications sector this never happens. Instead, the “status quo” scenario shows that no new networks or technologies are needed while the “wireless data explodes” scenario shows that networks will need a ten-fold increase in capacity or more. Effectively, scenarios demonstrate that in order to make a bet on the future you have to pick a particular scenario. Sadly, the forecasters who use scenarios never do. They simply present their 2×2 matrix and assume that their work is done.

There is another kind of forecaster which I term the “hockey stick forecaster”. These tend to be analysts and consultants who produce reports about particular services, such as location based services, providing predictions of revenues over the coming five years. They always look similar – slow growth for the next couple of years, rapidly accelerating in future years. Their predictions almost always prove optimistic so when they revisit them every year or two they slide the “hockey stick” a few years to the right and republish.

Time and experience have shown that neither of these approaches generates much value – indeed if anything they tend to lead companies like Nortel into bankruptcy. We need an alternative – experts with considerable experience of the industry and great insight who can provide an unbiased and carefully thought through analysis of the future and some way for the industry to coalesce around their views.

Meltdown in the wireless communications industry

The news from the infrastructure manufacturers looks bleak. Motorola have stopped pension contributions for US employees in order to save cash. Analysts are debating whether Nortel would be better off filing for bankruptcy sooner rather than later. Alcatel-Lucent is heading for large losses and even Ericsson has substantially downgraded its sales predictions.

Some of this is in response to the “credit crunch”. While mobile subscribers are not materially cutting back on the amount that they are spending on wireless communications, mobile operators are concerned that subscribers might become less inclined to try new services and that competition will continue to erode the cost of calls and texts. The result would be slightly falling revenue for an industry that has been accustomed to growth year after year. Operators also realise that borrowing money for infrastructure projects is difficult at the moment and are inclined to cancel or delay uncommitted spending.

Some of the pain relates to poor strategic decisions. Both Motorola and Nortel decided to exit from 3G and instead concentrate on WiMAX and 4G. WiMAX always looked like a relatively niche technology compared to 3G/4G and seems even less likely to make an impact as it becomes harder for new entrants to raise the capital to build their networks. 4G looks like it will be postponed for some time as operators aim to generate as much revenue as they can from their 3G networks, many of which are still running well below capacity. Without any revenue from 3G sales, these manufacturers could be heading for a bleak few years from which it will be difficult to recover.

Another problem for the established suppliers is competition from the Far East. Manufacturers from countries like China and Korea have now established a strong reputation and product line coupled with a relatively low cost base. The established suppliers, by contrast, are in high cost countries and are often saddled with a high cost base and large pension liabilities.

Underlying all these problems is a fundamental change in wireless communications, away from the “generation game” of new networks being deployed every decade and away from an era of ever more new operators entering the market. The construction of large cellular networks is virtually over, leaving growth in areas such as small cells and network enhancements.

What we are seeing is a long term shift of falling revenue for the established infrastructure suppliers, exacerbated and brought into sharp focus by the credit crunch. Many of the great names from the past will not survive the transition. Others will merge – although the track record for mergers is poor. Many jobs will be lost in the process.

While this is bad news, especially for those personally involved, it has a feeling of inevitability about it. What is less inevitable but potentially just as devastating is the possible collateral damage. Smaller companies with innovative new products and software may get caught up in this upheaval, perhaps because they supply the larger manufacturers, perhaps because the spending cuts from the operators affect them, or perhaps because confidence evaporates from the sector making it impossible to raise finance. Such companies ideally need to find some way to “hibernate” for a year or two as the problems of the sector sort themselves through. Otherwise, there is a risk that many new and important products such as femtocells will be set back many years.

By 2010 the landscape of those supplying the telecoms operators will look somewhat different.

Who is best placed to deploy new services?

When it comes to deploying new mobile services such as location based offerings it would seem obvious that the best companies to do this are the mobile operators. After all, they have access to the network, access to the customers, a strong brand and enormous financial muscle and leverage over others in the supply chain. And yet, their track record is very poor. For example:

  • Attempts to introduce email services failed until Blackberry arrived, effectively bypassing the operators by installing equipment within the corporation and software on the handset, and just using the data facilities of the operator’s network.
  • Attempts to introduce Internet access using “walled gardens” such as Vodafone Live gained little traction; it was the introduction of the iPhone and its associated browser, which translated any web page into a form ready for use, that revolutionised the mobile internet – again only using the data facilities of the operator.
  • Attempts to introduce location-based services by the operators mostly failed; it has again been Apple that has achieved much in this space, using simple information such as cell location and bypassing the operator.
  • Although it is early days, it appears that watching mobile TV is more likely to occur via a download from the iPlayer in a podcast format suitable for a mobile device than through anything offered by the operator.

Indeed, it is hard to think of a single example of a service that the mobile operators have introduced that has become successful despite many years of trying, not just with the examples above but including picture messaging, home zone tariffing, push-to-talk and other user group services, mobile payment, music services and so much more.

What the mobile operators have done well at is the provision of “bit pipes” – basic carriers that transfer voice and data from one place to another. Voice and SMS have been hugely successful and more recently 3G data is starting to take off now that prices have fallen and operators are concentrating on bit pipe provision.

So there is something of a theme emerging here. Operators are very good at providing voice and data transfer but very poor at delivering services to run on top of these. This is despite their fear that they will be marginalised if they just become a bit pipe provider and that the only way to continue to grow and be profitable is to “move up the value chain” into service provision. Indeed, in a number of recent presentations by major operators they have stated that they do not think they can survive unless they start to capture some of the revenue from services.

So why, despite all their efforts and strengths, have operators failed so conspicuously to deliver services? Perhaps, firstly, because it is not their core expertise. For example, they have less understanding of location-based services than mapping companies or organisations like Google that have integrated mapping data and location into much of what they do. Secondly, because they are trying to extract more revenue from the service than is viable, or than consumers are prepared to pay. For example, operators sought to impose a “per transaction” cost for location services whereas consumers preferred free services, often funded by advertising. Thirdly, and more controversially, perhaps they do not have the right image with consumers. While they have a very strong brand it is associated with being a bit-pipe – with the provision of voice and data to a mobile phone – and not with innovation, with being cool or even with being a trusted entity. That is why individuals are much more willing to try a new service from Apple or Google than they are from Vodafone or Orange. Equally, they probably would not want Apple to deliver the voice service that they rely on as a core part of their life. Brand can be critical in these areas.

So we have a conundrum for operators. They want to avoid at all costs becoming a bit pipe because they perceive that this would result in lower profitability and growth and yet they are extremely good at bit pipe provision with a brand and organisation well matched to delivering this. Operators would prefer to deliver services but their track record is awful, they are routinely out-manoeuvred by organisations like Apple and they do not have the right brand or skills to achieve this. But is bit pipe provision such a bad thing? Without it no services can operate and hence it will always be required and will always generate reasonable return. It may be that at present operators under-price the delivery of bits and cross-subsidise from other revenue sources, in which case this may need to be gradually reversed so that bits are delivered at a price that enables a reasonable profit. If operators really wished to deliver services then perhaps they should split themselves into bit pipe organisations and service organisations, enabling separate branding, skills and focus, but it is hard to see this happening.

The implication is that operators should leave the development of services to those better suited, such as Apple, Google, Microsoft or even entities such as Amazon. By working with them and making various network parameters available they could stimulate demand, leading to greater bit traffic and hence increased revenues.

Where do the really big gains in wireless capacity come from?

There are many on-going initiatives seeking to gain additional wireless capacity. In the US much effort is being expended on the “white space” activity to gain access to under-used TV broadcast spectrum while in the UK Governmental bodies are working hard to enable sharing or rental of some of their spectrum. TV digital switch-over is consuming much regulatory effort in order to free up UHF spectrum and so on.

It is interesting to do some quick “rule of thumb” analysis on these initiatives. As a starting point, recall that the cellular operators in the UK currently have some 450MHz of spectrum available to them, adding up all the spectrum at 900MHz, 1800MHz and 2GHz and including TDD allocations, with another 190MHz being auctioned soon at 2.6GHz, making around 650MHz in total. Licence-exempt or unlicensed usage has around 750MHz below 6GHz, although much of this is in the 5GHz band.

The white space work in the UK suggests that in most areas there is around 100MHz of spectrum available for cognitive access. The assumption is that this will likely be licence-exempt usage, hence adding around 14% to the licence-exempt total, albeit in a frequency range where the propagation is considered to be good. Government usage is typically between 40% and 50% of the total spectrum under 5GHz, although much of this usage is shared so the true figure may be lower. In the extreme case of all Government usage ceasing this would double the spectrum available for civil applications; in practice a dramatic 20% reduction in Governmental use would return around 200MHz in the key 1-3GHz bands, around the same amount as is being auctioned at 2.6GHz (but the Governmental spectrum would likely not have the advantage of being harmonised across Europe for cellular applications). Digital switch-over is set to liberate around 100MHz of spectrum, of which perhaps 50MHz might be used for cellular applications, a mere 7% of the cellular spectrum that will be available by then.
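
The rule-of-thumb percentages can be reproduced with a few lines of arithmetic; the figures below are simply the approximate allocations quoted in this piece (assumptions for illustration, not official numbers), and small differences from the rounded percentages in the text are just rounding:

    # Back-of-envelope spectrum arithmetic using the approximate UK figures above.
    cellular_now_mhz   = 450   # existing cellular allocations (900/1800/2100MHz incl. TDD)
    auction_26ghz_mhz  = 190   # to be auctioned at 2.6GHz
    licence_exempt_mhz = 750   # licence-exempt spectrum below 6GHz (mostly at 5GHz)
    white_space_mhz    = 100   # typical white-space availability per location
    gov_release_mhz    = 200   # a ~20% reduction in Governmental use of the 1-3GHz bands
    switchover_mhz     = 50    # digital switch-over spectrum usable for cellular

    future_cellular_mhz = cellular_now_mhz + auction_26ghz_mhz
    print(f"White space adds ~{white_space_mhz / licence_exempt_mhz:.0%} to licence-exempt spectrum")
    print(f"Government release of {gov_release_mhz}MHz is about the same as the {auction_26ghz_mhz}MHz 2.6GHz award")
    print(f"Switch-over adds ~{switchover_mhz / future_cellular_mhz:.1%} to post-auction cellular spectrum")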

Now come at this from a different angle. A useful “law” by Marty Cooper of ArrayComm is that wireless voice traffic doubles every 30 months. Growth may no longer be occurring in voice, but it seems now to be happening dramatically in 3G wireless data. Extrapolations and laws are highly approximate, and operators will tend to control traffic growth to exploit the available capacity, but nevertheless over the next decade, if we see anything like the growth in usage of the previous one, we will need a 10-fold increase in capacity (Cooper’s law would suggest a 16-fold increase).

All of this gain is clearly not going to come from additional spectrum. Even in the best case the cellular operators might gain 190MHz at 2.6GHz, 200MHz from Governmental use and 50MHz from digital switchover, making around 440MHz (while licence-exempt usage might gain another 100-200MHz). This would double their current allocation, which would be welcome and worthwhile, but still far short of an order of magnitude increase. History shows us that the big increases in capacity come from ever smaller cells, and with femtocells and Wi-Fi hotspots becoming increasingly available it is clear that the technology already exists to enable an order of magnitude increase in “base station” numbers. The biggest constraint on small cells, though, is typically finding a low-cost but high-capacity backhaul mechanism. That is why the most important advance for wireless is not more spectrum but “more wires”, or ideally “fibres”. A widespread roll-out of fibre deep into the network would dramatically improve backhaul availability, enabling the deployment of countless micro, pico and femtocells.
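
To make the comparison explicit, a short sketch (using the same round figures, and the simplifying assumption that capacity scales roughly linearly with spectrum) sets the Cooper's-law demand growth against the best-case spectrum gain:

    # Contrast demand growth (traffic doubling every 30 months) with the
    # best-case spectrum gain over a decade, using the figures in the text.
    years = 10
    demand_growth = 2 ** (years / 2.5)                 # Cooper's law: ~16x

    current_mhz = 450
    best_case_gain_mhz = 190 + 200 + 50                # 2.6GHz + Government + switch-over
    spectrum_growth = (current_mhz + best_case_gain_mhz) / current_mhz   # ~2x

    print(f"Capacity needed in {years} years: ~{demand_growth:.0f}x today's")
    print(f"Best-case spectrum growth: ~{spectrum_growth:.1f}x")
    print(f"Remaining factor to come from smaller cells: ~{demand_growth / spectrum_growth:.0f}x")

On these rough numbers the spectrum initiatives deliver about a doubling, leaving the bulk of the required increase to come from smaller cells, which is why backhaul matters so much.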

So perhaps all those manufacturers, operators and other stakeholders working hard on initiatives such as white space might reconsider whether they could better invest their time and resources in speeding a widespread fibre deployment? Not only would this improve home broadband speeds, it would enable much greater wireless speed and capacity.

Why is our ability to predict the uptake of services so poor?

A growth in the use of wireless services has long been seen by the wireless industry as the way to generate revenue growth as subscriber numbers plateau. However, there is a long history of very poor prediction of service success – WAP, location based services and data have for many years drastically underperformed almost all targets, while, as is often mentioned, SMS has been an unexpected success. Wireless analysts have often predicted a “hockey-stick” uptake for almost all the services they consider and, as the services have languished, they continually shift the inflection point in the hockey-stick curve to the right, despite the increasing evidence that the service is not finding success. Yet, occasionally, a much delayed hockey-stick growth does occur – the sudden rapid increase in 3G data “dongles” is such a case.

[Figure: Tipping point]

Why is it, after all the experience that we now have of launching new services, that we are still so poor at forecasting their success? Predominantly this is because services are based on a very complex inter-related “eco-system” that includes manufacturers, operators, other service providers and importantly the early adopters and advocates amongst the end users. For example, for location based services manufacturers need to build easy-to-use GPS location systems within the handset, operators need to provide a framework for location based services, entities like mapping companies need to produce appropriate offerings, Google needs to provide location-enabled search and early adopters and key influencers need to be enthusiastic about the service in order to convince others to adopt it. The relationships between all these players are complex with some positive and some negative feedback loops. Models of such situations show dramatically different outcomes can be achieved with relatively little change in inputs and “tipping points” are often observed. The complexity is not helped by the tendency of those in industry to look optimistically at the services they are working on, and for analysts to prefer reports with more positive than negative outcomes.
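
As a purely illustrative toy model of why such ecosystems are so hard to forecast, the sketch below uses a Bass-style diffusion curve in which an invented "ecosystem readiness" factor scales both the innovation and word-of-mouth terms; none of the parameters are drawn from real data, but a modest drop in readiness is enough to roughly halve adoption a decade out and push take-off back by years:

    # Toy Bass-style adoption model: new adoption is driven by an innovation
    # term (p) and a word-of-mouth term (q), both scaled by an illustrative
    # "ecosystem readiness" factor. All parameters are invented.
    def adoption_curve(readiness, years=10, p=0.01, q=0.5):
        adopted = 0.0                    # fraction of the addressable market
        curve = []
        for _ in range(years):
            adopted += (p + q * adopted) * readiness * (1 - adopted)
            curve.append(adopted)
        return curve

    for readiness in (1.0, 0.7):         # e.g. every element near-perfect vs. one or two not quite right
        print(f"readiness {readiness}: adoption by year 10 = {adoption_curve(readiness)[-1]:.0%}")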

The existence of tipping points – values of particular input variables at which the predictions of the model suddenly shift from no growth to the hockey-stick – makes it almost impossible to predict accurately the success of such services. The chances are that most will be predicted to be successful for many years during which they will languish and then suddenly, for reasons that may not even be apparent, or appear of little relevance, they will take off rapidly. All we can do is learn from the models as to what behaviours would most likely result in success. But actually we know this already – to be successful all elements of the service launch must be near-perfect. The technology must work, the service must be easy to use, the pricing must be attractive and the marketing must attract the right early-adopters who must be deeply impressed. If any one element is not quite right it could be enough to prevent the service succeeding. That much is common sense. The difficulty, as always, is getting all the companies to work together in a way that is competitive but collaborative, embraces standards but allows competitive differentiation. This is very hard – the incentives on individual organisations are rarely such that they work together well. What tends to happen is that individual elements slowly get solved and when the last one falls into place the service takes off. This may be what happened with data – the last element being the service pricing.

The bottom line is that accurately predicting service uptake will remain almost impossible unless all entities work together on delivering the service. And history suggests that this is unlikely.

3G as an alternative to home broadband?

In a desire to develop competition in broadband provision to the home, many have hoped that wireless might one day evolve sufficiently to become a viable alternative. But the technologists were generally sceptical. Wireless, they noted, did not have the capacity and tended to lag behind the speed of fixed connections. Previous attempts to deliver wireless broadband, from Ionica to the more recent UK Broadband, have all either failed, or remained small-scale activities. Yet suddenly, almost out of nowhere, 3G data card sales are rocketing and for many wireless does appear to be becoming a viable alternative to fixed line broadband. What is going on here?

The answer is that 3G is a viable alternative for some – in particular those with relatively low total data volumes and who do not want a fixed line to their home. On the data side most 3G cards have a limit to the data per month unlike home broadband which is typically unlimited (albeit with some “fair usage” clause). So, 3 are currently offering 5GBytes/month for £15 on contract with 10p/MByte additional charge if this limit is exceeded. That equates to around 160Mbytes/day. For downloading emails and web surfing that is probably plenty for most, but audio and video streaming could quickly eat through that (audio streaming at 100kbits/s equates to around 45Mbytes/hour, video streaming at 500kbits/s would use up the daily entitlement in 40 minutes). As the BBC iPlayer becomes ever more popular, many predict a rapid increase in the average data consumption to the home. (In passing, it is worth noting that 160Mbytes equates to around 1,800 voice call minutes per month. Current offers are £20 for 500 voice call minutes so data is being offered at around a quarter of the voice call price making VoIP attractive.)
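
The arithmetic behind those figures is easy to reproduce. In the sketch below the tariff numbers are simply the ones quoted above, the 12.2kbit/s voice codec rate is an assumption used for the voice-minute comparison, and the outputs land close to the rounded figures in the text:

    # Back-of-envelope reproduction of the figures quoted in the text.
    allowance_mb_month = 5 * 1000                # 5GBytes/month for £15
    allowance_mb_day = allowance_mb_month / 30   # ~160MBytes/day

    audio_mb_hour = 100 / 8 * 3600 / 1000        # 100kbit/s stream: ~45MBytes/hour
    video_mb_min = 500 / 8 * 60 / 1000           # 500kbit/s stream: 3.75MBytes/minute
    video_minutes = allowance_mb_day / video_mb_min

    voice_kbit_s = 12.2                          # assumed cellular voice codec rate
    voip_minutes = 160 * 8000 / (voice_kbit_s * 60)       # voice minutes carried in 160MBytes
    data_pence_per_min = 160 * 10 / voip_minutes          # at the 10p/MByte overage rate
    voice_pence_per_min = 2000 / 500                      # £20 for 500 minutes

    print(f"Daily allowance ~{allowance_mb_day:.0f}MBytes; video exhausts it in ~{video_minutes:.0f} minutes")
    print(f"Audio streaming ~{audio_mb_hour:.0f}MBytes/hour")
    print(f"160MBytes ≈ {voip_minutes:.0f} voice minutes, at ~{data_pence_per_min / voice_pence_per_min:.0%} of the per-minute voice price")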

The other question is whether the household wants to have a fixed phone line. If they decide they do, and pay the line rental, then the additional cost of broadband on this line is typically well below £15 and provides higher data rates, unlimited volumes and often mostly free calls (except to premium rate and overseas numbers). For such a household, mobile broadband is not particularly attractive (except as a way of accessing the network when on the move). However, many people do not want a phone line. Those who are renting, students and those who spend little time at home often do not want to feel tied to a fixed line. Such individuals already make use of mobile as their only means of voice communications and extending this to mobile data clearly looks sensible.

Finally, there is the question of network capacity. Cellular networks have enhanced their capacity with HSDPA which broadly enables higher data rates for those with good coverage while excluding those with very poor coverage. This makes it very difficult to analytically derive cell capacity. Instead, Qualcomm and others have modelled and measured typical scenarios and concluded that data rates in the region of 1.2 – 1.5Mbits/s per cell can be supported. So if all the data users tried to access their 160Mbytes between, say, 8pm and 10pm, the cell could, at best support around 8 subscribers per carrier. If voice traffic is also to be carried then this would be lower. Assuming around 10,000 cells covering the UK, each with 3 sectors, then the total subscriber numbers per operator per carrier would be in the region of 240,000. Of course, if users actually averaged less than their allotted allocation per day then more could be supported – as is likely the case. So perhaps up to a million users per operator might be feasible, especially if additional spectrum is acquired allowing more carriers to be deployed, giving perhaps as many as five million across all operators.
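
The capacity sums work out as follows, using the per-cell throughput, site count and busy-period assumptions stated above; the result lands near the 240,000 quoted:

    # Rough cell-capacity sums using the figures quoted in the text.
    cell_throughput_mbit_s = 1.4        # ~1.2-1.5Mbit/s per HSDPA cell
    busy_period_s = 2 * 3600            # assume all use falls between 8pm and 10pm
    daily_use_mbit = 160 * 8            # 160MBytes/day per heavy data user

    per_user_mbit_s = daily_use_mbit / busy_period_s
    users_per_cell = cell_throughput_mbit_s / per_user_mbit_s      # ~8

    sites, sectors_per_site = 10_000, 3
    users_per_operator = users_per_cell * sites * sectors_per_site

    print(f"~{users_per_cell:.0f} heavy users per cell per carrier")
    print(f"~{users_per_operator:,.0f} such users per operator per carrier")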

So we can conclude that cellular is not a viable replacement for broadband for all – it does not have the capacity for more than around 25% of UK households. It is also only attractive to a certain class of user who would typically not have a fixed line to the home and it may become less attractive if video streaming becomes the norm. But for a substantial subset of the market it looks ideal.