Some years ago I bought my daughter a wooden train track. The starter set included a small amount of track and a couple of engines. She played with it briefly and then lost interest. I concluded that more track and other items such as stations were needed to make it sufficiently interesting to hold her attention, and I carried on buying additions for quite some time before I finally realised that the problem had nothing to do with the amount of track – she simply was not interested in train tracks.
My thesis is that wireless communications similarly made some poor decisions many years ago. Just like my daughter, end users did not refuse the new services offered, but quickly lost interest in them. Only now are operators and manufacturers starting to realise that the path they have been on was the wrong one. This paper discusses the decisions made and the resulting outcome, then suggests the path that should have been followed and how we might redirect our efforts to get back on track.
The main developments in wireless communications have been within the cellular industry. Here the industry has progressed through a series of “generations” from 1G to 3G, with 4G now being widely discussed. The decisions made when designing the next generation are key – they affect the services that can be offered, the cost of the network and even aspects such as the battery life of handsets. The timing of the generations is also important in that it affects the need for operators to invest in new spectrum and technologies. It is in making these decisions that the wrong turn was taken. This section explains the decisions that were made and then subsequent sections discuss why these were inappropriate and have led us to a dead end.
The first generation was a mix of different analogue standards with a range of problems including poor security, lack of roaming and limited capacity. The standard developed in response was GSM (there were other standards in the US and Japan but these eventually became sidelined). GSM has remained secure to this day, provided enough capacity for all and facilitated roaming. In addition, almost as an afterthought, the short message service (SMS) was added, which became the hugely popular texting service (something we will return to later). Over time, evolutions were added to enable packet data transmission, which was generally more efficient for data services.
Once the team that had been working on the second generation completed their task, their attention naturally turned to the next challenge. Since 2G had followed about a decade after 1G, there was a natural supposition that 3G would follow a decade later. However, there was nothing obviously wrong with 2G that needed fixing, so the developers of 3G focussed on making it do the same things as 2G, only better. Better, to them, meant faster, so 3G was designed to deliver much higher data rates than 2G. It was also somewhat more spectrum-efficient, although the gains here were relatively small (perhaps a factor of three for voice, somewhat higher for packet data). This was one of the key decisions that we will return to – that “better” meant “faster”.
With 3G introduced, if somewhat shakily, the bandwagon rolled on, looking at what would be required for 4G. Just as with the 2G to 3G transition there was little that obviously needed fixing, so the same teams concluded that they would make 4G even better than 3G – again predominantly by making it even faster. As before, some small spectrum efficiency gains were anticipated, although these were even smaller than in the previous transition as technologies came closer to fundamental limits. For 4G, all the decisions made about what “better” meant when 3G was designed were simply assumed still to hold.
So it was a fundamental decision taken in the mid 1990s that wireless systems needed to offer ever higher data rates that has broadly placed us on the path we have followed since then. But there is little evidence that users value higher data rates and plenty that they value other things. This divergence between what the system designers think “good” looks like and what the end users want has been growing over time and is now leading to serious problems. These are explored in the next section.
The current position
Despite ever “better” technology, the current position of the wireless communications industry is not generally healthy. Most wireless operators are no longer seeing growth in revenues, and indeed many are now seeing a small fall each year as competition drives down call costs for subscribers. This has led many operators to adopt cost-cutting measures. The manufacturing industry is also in poor shape. Nortel is bankrupt, many other manufacturers have merged and few are currently profitable. Many are reliant for future growth on 4G deployments of technologies such as LTE or WiMax, but these deployments may be some years away. Although it is almost impossible to ascertain, it seems unlikely that many operators who invested in 3G spectrum and networks have yet recouped their investment, almost a decade after the first 3G systems were launched.
Other areas of wireless are somewhat healthier. In the short-range area, WiFi and Bluetooth chipset sales continue to grow, and the chips are embedded in ever more products. The number of WiFi hotspots is still growing and they are being used to an increasing extent. Satellite communications remain stable, although they address only a small niche of the market. Broadcast TV and radio are also stable, although there are some concerns over the funding models for broadcasters – but these are not predominantly technical.
Paradoxically, 3G does appear to have managed a recent success in the form of wireless data, or “3G dongles” as they are often known. While WiFi has demonstrated to users that there can be value in downloading emails and surfing the web away from home and office, WiFi coverage is erratic and often requires user intervention to log into each different zone. It appeared to some that 3G might offer an alternative. With data rates high enough that downloads happen fast enough for most, and no need to log into different zones, it does appear to solve the “data roaming” problem. But this is an illusion. For while 3G does indeed solve the problem, it cannot provide enough capacity at a low enough cost. A hint as to why lies in the fact that between 2G and 4G data rates have risen in the region of 100–1,000 fold, but capacity has only gone up around 5–10 fold.
It is worth dwelling on this problem a little more because it exposes some of the fallacies in the decisions made previously. Current 3G networks in the UK (other countries are likely similar) can support data transfers of around 1 GByte/user/month. This is adequate for occasional email download on the move, but is rapidly used up if any video is involved or if the connection serves as the primary household broadband link. Beyond this level cellular networks become congested and the data rates users can achieve suffer substantially. There are some ways to enhance this. One is to acquire additional spectrum; however, this is costly in terms of spectrum fees, infrastructure upgrades and the need to subsidise dongles that can work on the new frequencies. Another is to deploy more cells, but this is again expensive and is becoming increasingly difficult in crowded areas where suitable locations are hard to find.
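To make the point concrete, here is a back-of-envelope sketch of how quickly a 1 GByte monthly allowance disappears once video is involved. The assumed video bit rate is purely illustrative, not a measured figure; only the 1 GByte/month allowance comes from the discussion above.

```python
# How far does a 1 GByte/month allowance stretch?
# The video bit rate below is an illustrative assumption.

MONTHLY_CAP_BITS = 1 * 8e9   # 1 GByte expressed in bits
VIDEO_RATE_BPS = 1e6         # assume ~1 Mbit/s for modest-quality streaming video

seconds_of_video = MONTHLY_CAP_BITS / VIDEO_RATE_BPS
hours_of_video = seconds_of_video / 3600
print(f"{hours_of_video:.1f} hours of video per month")  # roughly 2.2 hours
```

At roughly two hours of modest-quality video a month under these assumptions, such an allowance is clearly no substitute for a fixed broadband connection.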
In order to increase capacity, then, operators will need to spend more money. This only makes sense if they can charge enough for the data usage to justify the cost, and herein lies the problem. Users will pay quite a lot for the initial connection, but as the volume of data increases and data rates grow, the amount they are prepared to pay per bit transferred falls. At present users pay only around 1% as much per bit for data transfer as they do for voice, despite the fact that a bit of data requires exactly the same network resources as a bit of voice. Only by reducing the price per month have operators made wireless data successful, but the price point they have reached is insufficient to justify new investment in the network. Users, then, do quite like the idea of high-speed wireless data, but not enough to pay at the levels necessary to make this an attractive and sustainable business for the operators.
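The shape of that per-bit gap can be shown with a simple model. All the prices and rates below are hypothetical assumptions chosen only to illustrate the arithmetic; only the 1 GByte/month figure comes from the discussion above.

```python
# Illustrative comparison of revenue per bit for voice versus bundled data.
# All prices and rates here are hypothetical assumptions, not measured figures.

VOICE_RATE_BPS = 12_200        # assumed voice codec rate (bits per second)
VOICE_PRICE_PER_MIN = 0.10     # assumed price of one voice minute

DATA_PRICE_PER_MONTH = 15.00   # assumed flat monthly fee for a data bundle
DATA_CAP_GBYTES = 1.0          # the ~1 GByte/user/month figure from the text

def revenue_per_mbit_voice():
    """Revenue earned per megabit carried as voice."""
    mbits_per_minute = VOICE_RATE_BPS * 60 / 1e6
    return VOICE_PRICE_PER_MIN / mbits_per_minute

def revenue_per_mbit_data():
    """Revenue earned per megabit carried as data, assuming the cap is fully used."""
    mbits_per_month = DATA_CAP_GBYTES * 8e9 / 1e6
    return DATA_PRICE_PER_MONTH / mbits_per_month

ratio = revenue_per_mbit_data() / revenue_per_mbit_voice()
print(f"voice: {revenue_per_mbit_voice():.4f} per Mbit")
print(f"data:  {revenue_per_mbit_data():.6f} per Mbit")
print(f"data earns {ratio:.1%} of the per-bit revenue of voice")
```

Under these assumed numbers, data earns on the order of 1% of the per-bit revenue of voice – the same order of magnitude as the figure cited above.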
We are left in something of a dead end. Revenue into the industry, in the form of subscriber ARPU, is falling, leaving operators with shrinking revenues and putting downward pressure on the rest of the industry. The solution the industry proposes is faster 3G and ultimately even faster 4G, but while users generally like the idea of faster, it is only of marginal value to them and they will not pay much for high-speed wireless data – not enough to justify further investment. Few are prepared to admit it, but 3G looks like it was mostly a mistake and 4G looks even more problematic. As manufacturers go bankrupt and operators increasingly look to merge, where does wireless go from here?
Where we need to be
The greatest success in the wireless industry in recent years is Apple’s iPhone. In particular, the fact that the iPhone was initially launched as a 2G device is instructive. Indeed, another of the success stories of recent years – the Blackberry – is also a 2G device. The iPhone has since been upgraded to 3G, and this does appear to have somewhat improved the user experience, but nevertheless this clearly shows that end users value something other than data rate. Of course, all things being equal, higher data rates are better, but they do not add all that much value for most users.
The iPhone succeeded through a much-improved user interface that enabled users to do much more with their phone. Its popularity was further enhanced by the App Store, which enabled users to select from an enormous range of games and applications and download them, typically for a small one-off payment. It is notable that Apple and others succeeded where the operators failed. Operators have been trying to introduce new services for decades – WAP, group calling, video calling, picture messaging, mobile TV, location-based services and more. Almost all have failed, for a range of reasons. These include the fact that the operators wanted to charge a per-usage recurring fee when users wanted to pay one-off fees, and the operators’ desire to roll out a service consistently across the thousand or more handset variants operating on their network, which tended to bring the service experience down to the lowest common denominator as well as slow its introduction.
Despite having high data rate channels at their disposal, users continue predominantly to use apparently inferior approaches such as texting and, most recently, Twitter – a text-only solution with limited message size. All of this suggests that while the designers of 3G might have thought that “better” meant faster, this was far from what end users understood by “better”. So what might better actually look like?
A first conclusion is that “better” is not necessarily more. If this were true then voice calls would have been replaced by video calls giving not only voice but images as well. Instead, if anything, voice calls have been replaced by texting, emails and tweets. Less, it appears, is often better than more.
A second conclusion is that variety is generally a good thing. New methods of communication such as Facebook have been embraced rapidly while older ones like texting have not declined. Humans like a wide range of communications mechanisms to select from depending on the context. Some things, like rejection of a suitor, are done less painfully from a distance over a very “thin” communications channel. Sometimes a text to say “I love you” is worth much more than a video call to say the same thing.
A third is that users are very different. There are many thousands of applications in the App Store. Some are very niche but very valuable to a subset of users. A wide range of simple applications and services is better than a narrow range of highly developed ones.
Perhaps above all else, though, the conclusion that stands out is that designers of communications systems do not really understand how their solutions will be used and what users will see as “better”. Any guess as to what “better” might be is likely to fail, while “worse” (eg texting) may actually turn out to be just what the user wants. So faster is not better (or at least, not much better); what is better is unpredictable; but providing users and developers with the tools to play around, be inventive and have a wide range of channels at their disposal is more likely to result in a good outcome. To put it another way, if we had concentrated more on developing iPhones and less on 3G, we might be in a much better place now.
So, to answer the question set at the start of this section, where we need to be is an environment with a wide range of communications mechanisms which are flexible and relatively low cost, and which are amenable to new services being developed and to experimentation by end users. Technically, this might mean a range of large-cell networks (eg 2G) and small-cell networks (eg WiFi), as well as fixed networks of course, to which devices can readily connect and which offer a simple standardised interface. It means devices with a small number of standardised operating systems onto which applications can be readily downloaded, so that developers can write just one version of their application. It means a wide range of extra features in the handset, such as location and cameras, to provide more “hooks” for developers to experiment with. And it means a flexible value chain where service providers can readily integrate data transmission across multiple networks and develop services making use of multiple resources.
How we can get back on track
To summarise the discussion so far, wireless communications is currently not in a healthy position. Operators and manufacturers are facing increasing pressure, to the extent that they are merging or going bankrupt, and yet the next generation of technology only threatens to make things worse by increasing expenditure without providing substantial end-user benefit. This sorry state is all the result of assuming that “better” meant “faster”.
Where we need to be is a mix of different low-cost networks that offer flexibility and enable experimentation. The good news is that we already have most of the technology and investment that we need – the fact that the iPhone has been so successful is ample evidence of this. It is less a case of changing the technology and more one of changing the structure.
A first step for the operators is to bring to a halt most of their infrastructure expenditure. There is no evidence that 4G will be any more successful than 3G nor that further investment in 3G will be any more beneficial than the investment to date (back to the opening discussion about the train track). But there is plenty of evidence that the existing 2G and 3G networks can carry data in a flexible manner that allows most services to be implemented. Halting expenditure will also be the first step to restoring profitability.
The next step is for the operators to separate into network and service provision elements. This will open up the value chain, making it easier for service providers to put together converged offerings spanning multiple networks – fixed, mobile and WiFi. This will allow, for example, simple roaming across home, office, WiFi and cellular networks, enabling lowest-cost transfer of data in a manner that will meet the needs of most while minimising overall network cost. It also enables the network elements to merge, outsource, share masts and generally reduce the costs of their operations, further aiding the operators’ business cases.
Along with this is the need for application development environments where developers can write once for a range of different networks and devices. This requires operators to provide standard interfaces into network elements such as their location databases, and mobile devices to standardise on a small number of operating systems that can interwork in the same way that a document can be viewed on either an Apple or a PC. Initiatives are underway in these areas but would benefit from greater commitment.
The final step is an acceptance that the value chain comprises network operators who run networks and provide wholesale bitpipes, service providers who aggregate data provision across multiple networks and provide customer care, and application developers who put together the applications that run on top of all these. This enables the greatest flexibility allowing many different communications channels to be provided and a wide range of applications to be made available.
Wireless communications can provide immense value to users. A range of new wireless services such as travel assistance and interworking with home networks and appliances would add substantially to this value. However, at present the industry is on the wrong path. It is fixated on ever higher data rates – a fixation born of a time in the 1990s when work started on 3G but there were few obvious problems to fix with 2G. While higher data rates are no bad thing, they add little value to the end user and do not help provide any of the services which might revolutionise the role of the mobile phone. Further, if users try to make widespread use of these data rates, the networks will rapidly run into capacity problems. This fixation has been steadily leading the whole wireless sector down the wrong path, to the extent that many operators and manufacturers now see a problematic future.
Happily, getting back on track is not overly difficult. It requires operators to forego plans to deploy new networks and instead concentrate on opening up their networks, perhaps through structural separation of the network and service-provision elements, and on enabling a wide range of applications to be developed and deployed by others. It is the structure of the sector we most need to address, not the technology it might deploy.