Cellular for train to ground backhaul

One of the things I’ve noticed while working with the rail industry is that the relationship between the rail and telecommunications sectors often isn’t very good. I believe this is detrimental to both sides. Part of the problem is a simple lack of understanding of how the two sectors operate and how they see the world. I thought it would be useful to write a few blog posts to explain the state of the art to help improve mutual cooperation.

Passenger Internet access on trains (aka WiFi on trains) is very popular with users and demand for capacity keeps growing. Getting enough data bandwidth to a moving vehicle containing hundreds of users is a massive technical challenge. This post is going to look at the technical approaches to perform the data backhaul from the train to the ground communications networks and offer some thoughts on the future direction.

Simplified architecture of an on-train Mobile Communications Gateway (MCG)

The common architecture for train Internet access is to bring the on-train data to a gateway that then connects via one or more radio technologies to the ground. In rail-speak the gateway is normally called the “mobile communications gateway” (MCG). There are several MCG vendors but the two biggest are Nomad Digital, who pretty much created the market, and Icomera. There are three wireless technology options for train to ground backhaul: satellite, cellular and trackside systems.

In this post I am going to start by looking at cellular backhaul and then talk about satellite and trackside systems another day.

Using cellular systems to perform the backhaul is the path of least resistance in most rail scenarios and is the dominant solution in the market. Most countries have reasonably good cellular coverage and the physical requirements on the train for antennas and other equipment are not too demanding (though one thing you quickly learn is that making physical changes to rolling-stock is never easy). The major challenges with cellular have been dealing with coverage holes, obtaining enough bandwidth and international roaming for cross-border trains.

The approach generally taken to deal with coverage and capacity was pioneered by Nomad Digital. It involves fitting the MCG with multiple cellular modems, each with its own subscription, typically from different mobile operators. The MCG multiplexes data over all the available cellular connections and makes the net total bandwidth available to the train, transparently to the on-train users. In areas covered by more than one operator this maximises bandwidth by aggregating several operators together. In areas of sparse coverage it means the train can maintain a connection as long as just one of the serving operators has coverage.
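
To make the idea concrete, here is a rough sketch (in JavaScript, with invented modem objects and field names; no real MCG exposes this interface) of how a gateway might pick a link per packet, weighted by each connection’s current estimated capacity:

```javascript
// Rough sketch of MCG-style link aggregation: choose a modem per packet,
// weighted by each link's current estimated capacity. The modem objects
// and fields are invented for illustration.
const modems = [
  { name: "operatorA", estimatedKbps: 4000, up: true },
  { name: "operatorB", estimatedKbps: 1500, up: true },
  { name: "operatorC", estimatedKbps: 0, up: false }, // coverage hole
];

function pickModem() {
  const live = modems.filter((m) => m.up && m.estimatedKbps > 0);
  if (live.length === 0) return null; // no coverage from any operator
  // Weighted random choice: faster links carry proportionally more traffic.
  const total = live.reduce((sum, m) => sum + m.estimatedKbps, 0);
  let r = Math.random() * total;
  for (const m of live) {
    r -= m.estimatedKbps;
    if (r <= 0) return m;
  }
  return live[live.length - 1];
}

// A tunnel endpoint on the ground merges and re-orders the per-link streams
// so the aggregation stays transparent to on-train users.
console.log(pickModem().name);
```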

This multi-operator aggregation model has done well but it also creates some problems. By allowing the rail industry to “make do” with inadequate cellular coverage and standard cellular subscriptions it has postponed the establishment of a more strategic relationship between rail and telecoms which could create better long-term solutions. The aggregation model also seems to have the potential to conflict with developments in the cellular industry that are starting to appear in standards and deployed networks.

As we have said, the MCG makes simultaneous use of multiple network operators as a way of maximising coverage and bandwidth. This works best if there are several independent cellular infrastructures available in the operating region. Now that operators are increasingly moving towards a shared infrastructure model, the number of truly independent cellular infrastructures is going to shrink. Multiple connections to operators that share the same infrastructure do not actually improve coverage or available capacity; they just compete with each other for the same resources. Use of multiple connections is also a way for the MCG to gain access to more than one cellular channel, but in developments of LTE we already see that LTE modems can aggregate channels into a single connection. Again we have the possibility of different subsystems independently chasing the same resources.

Another role of the MCG is to make the best available backhaul connection by combining or choosing between multiple radio technologies where they are available. As the cellular industry starts to adopt the “always best connected” model the cellular modems will increasingly act independently as a broker between multiple radio technologies. The mobile operator will configure selection algorithms for the cellular modems. The MCG will be independently configured with selection algorithms between its modems. The questions of who is in charge and whether the selection algorithms work harmoniously together are interesting.

Lastly, we come to roaming. In the domains of M2M and internet-enabled appliances we have seen a lot of regional or global deals with mobile operators where there is no real cost penalty for roaming. Unfortunately the rail sector seems to almost entirely rely on normal national subscriptions. MCGs may include “geo fencing” techniques to swap networks and subscriptions automatically during cross-border journeys though I gather these have had some technical problems. It is symptomatic of the disconnect between rail and telecoms that the rail sector chooses to solve roaming problems in a way that avoids engaging the mobile operators. It will be interesting to see how the planned removal of European roaming fees changes this model.
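
For illustration, here is a hedged sketch of what the geo-fencing logic might look like. The bounding boxes, SIM slot numbering and switchSim() helper are all invented; a real system would use proper border polygons and have to cope with network registration delays:

```javascript
// Invented sketch of geo-fenced subscription switching: when the train's
// GPS position enters a new region, re-register the modem on that region's
// subscription. Coordinates and helpers are illustrative only.
const regions = [
  { country: "FR", simSlot: 0, minLat: 42.3, maxLat: 51.1, minLon: -4.8, maxLon: 8.2 },
  { country: "DE", simSlot: 1, minLat: 47.3, maxLat: 55.1, minLon: 5.9, maxLon: 15.0 },
];

function regionFor(lat, lon) {
  return regions.find((r) =>
    lat >= r.minLat && lat <= r.maxLat && lon >= r.minLon && lon <= r.maxLon);
}

function switchSim(slot) {
  console.log(`re-registering modem on SIM slot ${slot}`); // stand-in for real modem control
}

let activeSlot = null;
function onGpsFix(lat, lon) {
  const region = regionFor(lat, lon);
  if (region && region.simSlot !== activeSlot) {
    activeSlot = region.simSlot;
    switchSim(activeSlot);
  }
}

onGpsFix(48.8, 2.3); // Paris: switch to the French subscription
```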

So where do we go from here? The MCG is clearly going to remain the hub of train communications and will retain the role of brokering and multiplexing between different radio connections. However as cellular systems become more capable of multichannel and multiradio operation and as the cellular industry moves towards more shared infrastructure then there needs to be more attention paid to ensuring that the behaviours of the different system components are complementary rather than conflicting. To some extent the MCG is a work-around for the difficult relationship between the rail and telecoms sectors. It would be nice to see the telecoms sector contributing to solutions, for example in the roaming area which has already been well addressed for M2M.

ISPs shouldn’t be fly posters

I am a great fan of the old British sitcom Yes Minister. For those who may not be familiar it is about the plight of a politician, Jim Hacker, who constantly fights to maintain his position in the face of external events outside his control and the obduracy of his civil servants.  Apart from being very funny it has proved to be excellent training for a career involving large corporations and global standards bodies.

In one episode of Yes Minister, Jim Hacker has been talking to a European Commissioner about food standards and naming. The Commissioner is concerned that British sausages contain very little meat and are generally made of dubious content:

‘Bernard (Jim’s secretary): “They can’t stop us eating the British sausage, can they?”

Jim Hacker: “No, but they can stop us calling it a sausage. Apparently it’s got to be called the Emulsified High-Fat Offal Tube.”

Bernard: “And you swallowed it?”’


Fly posters, not popular

This came to my mind when thinking about reports that an ISP in the US could be injecting advertising on top of popular web pages. To me, if you are going to describe your service as Internet access then this means you will attempt to provide access to data on the Internet without undue interference. If the reports are true then in this case the ISP is adding advertisements on top of the Internet content. In the physical world we have a name for the practice of sticking unauthorized advertising on top of other people’s property: we call it fly posting. Fly posting is a seedy business popular with get-rich-quick schemes, dodgy night clubs and “adult services”. Because fly posting is such an eyesore most cities now have very strict bylaws that manage the problem and fine the perpetrators.

It seems that there may be ISPs who are being tempted to become fly posters by the possibility of getting some new revenue. I invite them to think about the kind of companies that get involved in fly posting in the real world and perhaps to reconsider their position. In the meantime, consumers need some real guidance as to what service they are getting when they sign up for Internet service. As with the Commissioner’s proposal for the British sausage, the name should be more accurately descriptive of the product on offer. I suggest “Spam Producing Internet Manglers” (SPIMs).

Conference outings

I am not a big fan of the conference circuit. One day we’ll perfect a model of communication among the telecoms community that doesn’t require us all to regularly clog the world’s airports and hotels. But until then it is good to get out of the office once in a while and hear some different perspectives on what’s happening.

If you fancy hearing my thoughts then here are two upcoming opportunities:

BWCS Train Communication Systems 2013

June 12-13, London

“LTE for Critical Communications in Rail – How and When?”

  • LTE is being adopted globally for “blue light” public safety applications. Should rail follow suit and apply LTE to critical communications?
  • Are LTE standards suitable for rail requirements?
  • Is there a business case for migrating existing technologies?
  • Can we improve on GSM-R and avoid repeating past mistakes?
  • What are the sharing models for critical and non-critical users?

LTE World Summit

June 24-26, Amsterdam

“3GPP’s progress in delivering interoperable LTE Public Safety Standards”

3GPP Attaches WebRTC to the Crushing Weight of IMS

Well, here’s some entertainment for a cold Monday. At the 3GPP SA Plenary last week they almost, but not quite, approved a work item for “Web Real Time Communication (WebRTC) Access to IMS”. The lack of approval is really just a formality as several working groups are going ahead to work on assessing the topic anyway.

3GPP Anchors WebRTC to IMS

According to the draft work item the objectives are to specify service requirements (and subsequently implementation) for:

  • “the ability for WebRTC clients to access IMS, including for example, reusing IMS client security credentials and/or  public identities/credentials as appropriate;
  • how IMS clients communicate with WebRTC clients connected to IMS, both for originating and terminating calls;
  • the ability to realise any IMS services to the WebRTC client;
  • access to IMS client capabilities, including regulatory functions (e.g. lawful interception) and charging for WebRTC clients connected to IMS;
  • the ability to support applicable IMS access types (e.g., LTE) for webRTC clients connected to IMS;
  • ability for an IMS service provider to offer IMS services to users interacting with a 3rd party website which is using the webRTC client (users of the 3rd party website may or may-not have IMS credentials)”

Is it wise to rush to codify an interworking between a standard that’s not yet written with a standard that’s hardly deployed? Probably not, but the obsession with IMS as the “one true path” by some parts of the mobile industry makes this type of response highly predictable. Nobody has actually seen an IMS service that’s a meaningful improvement on existing technology but still we must ensure that IMS services are accessible via WebRTC.

At some level this work is harmless. Nothing will prevent the fans of IMS from specifying interworking to everything in sight. You might as well let them have their fun and see how the market responds. The sad thing, which is ultimately bad for the mobile industry, is that this seems to be the only point of contact between WebRTC and LTE/3GPP. If 3GPP frames WebRTC exclusively in the context of IMS then it risks not addressing broader questions about how to provide flexible and efficient support of non-IMS WebRTC apps in the mobile context. QoS, policy management and congestion control are all topics that are relevant to (non-IMS) WebRTC.

If mobile operators just treat WebRTC as some kind of weird IMS client then they are once again going to find themselves out-innovated and outclassed by more agile players who operate in an OTT mode. The rhetoric will again be “these OTT players are not fair” but really the failure will be that of the mobile industry to adapt to disruptive technology.

Who adds value in the WebRTC ecosystem?

The W3C/IETF plan to support real time communication in the browser, WebRTC, is moving from high-flown concept to somewhat messy, but useful, reality. Generally, I am a fan. WebRTC offers new opportunities to both indie developers and the established players. It will undoubtedly accelerate the adoption of new conversational services at the expense of traditional fixed and mobile telephony. There are a lot of interesting and important topics to explore related to WebRTC but I want to start by talking about what kind of ecosystem will be involved in successful WebRTC services. A lot of the technically focussed analysis of WebRTC has not highlighted the extent to which WebRTC services will require multiple components and company inputs to be integrated.

To start this discussion we need to understand a bit about how WebRTC is structured. This isn’t a WebRTC tutorial but a sketch showing roughly where the main functions reside. The heart of WebRTC is a new W3C standardized API between the browser and Javascript running in a web page. The API provides three elements:

  • MediaStream, which allows the Javascript to access media inputs on the local computer (eg cameras and microphones);
  • RTCPeerConnection which allows connections that send and receive media to be made with peer devices running WebRTC; and
  • RTCDataChannel which allows generic non-media data to be sent over an RTCPeerConnection channel.
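
A minimal browser sketch shows how the three elements fit together. The method names follow the current drafts and browsers, so treat them as indicative rather than definitive; error handling and the signalling exchange are omitted (signalling is discussed below):

```javascript
// Minimal sketch of the three WebRTC API elements in browser Javascript.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.example.org" }], // hypothetical STUN server
});

// MediaStream: capture the local camera and microphone...
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then((stream) => {
    // ...and hand the tracks to the peer connection for transport.
    stream.getTracks().forEach((track) => pc.addTrack(track, stream));
  });

// RTCDataChannel: generic non-media data over the same peer connection.
const channel = pc.createDataChannel("chat");
channel.onopen = () => channel.send("hello");
```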

In this discussion I am just going to focus on RTCPeerConnection. RTCDataChannel is a really interesting initiative with a lot of implications but it’s probably the least mature part of WebRTC and not really central to the point I want to make about ecosystems. MediaStream is the most mature element but it doesn’t feature heavily in my thoughts on the value chain.

WebRTC – Simplified service architecture

The most interesting point about these APIs for my discussion is where W3C decided to draw the boundary of the RTCPeerConnection API. It is quite different for the media part of the connection and the signalling part. In the case of the media the API has been put at a high level, leaving the Javascript with a rather abstract view of the media. The browser takes on the task of managing the media coding and doing the heavy lifting to make media work nicely across unreliable networks. The browser is responsible for packet loss concealment, echo cancellation, media adaptation to varying network bandwidth, dynamic jitter buffering and any proprietary technologies used to subjectively improve the perception of media. In practice this means that a lot of the features that dictate the quality of the service as perceived by users will be controlled by the browser. It also means that browser developers will need access to media technology that has traditionally been the domain of telephony app developers.

For signalling the WebRTC API is at a somewhat lower level than it is for media. The RTCPeerConnection component in the browser manages establishment of media connections between WebRTC peers. The browser also contains the tools to perform NAT and firewall traversal for the media based on the ICE framework (ie using STUN and TURN). The Javascript interacts with, and manages, these capabilities using the API. However, WebRTC does not define any service signalling between WebRTC peers or between WebRTC clients and network servers. It is up to the Javascript to implement its own signalling protocols based on the requirements of each unique WebRTC service. If you want a WebRTC client that talks SIP then you can do that, but you need to implement the SIP signalling and the interworking to the WebRTC APIs in Javascript. Whereas the browser largely controls the user experience for the media, the Javascript will control the functional user experience.
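
To make the division of labour concrete, here is a hedged sketch of roll-your-own signalling over a WebSocket. The server URL and JSON message format are invented, precisely because WebRTC says nothing about how offers, answers and ICE candidates reach the other side:

```javascript
// Sketch of application-defined signalling: a plain WebSocket carries SDP
// blobs and ICE candidates between peers. Server and message format invented.
const pc = new RTCPeerConnection();
const signalling = new WebSocket("wss://signalling.example.org/room42");

// Send each locally gathered ICE candidate to the far side.
pc.onicecandidate = ({ candidate }) => {
  if (candidate) signalling.send(JSON.stringify({ type: "candidate", candidate }));
};

async function startCall() {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signalling.send(JSON.stringify({ type: "offer", sdp: offer.sdp }));
}

signalling.onmessage = async (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === "answer") {
    await pc.setRemoteDescription({ type: "answer", sdp: msg.sdp });
  } else if (msg.type === "candidate") {
    await pc.addIceCandidate(msg.candidate);
  }
};
```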

Getting the Javascript right for any non-trivial service in the very asynchronous environment of real time communications management is going to be tricky. Despite the work of W3C there are no doubt going to be browser dependencies, and subtleties about how to use the ICE capabilities in the browser to best effect that will take time to learn. Many WebRTC developers will choose to use a public or proprietary library built on top of RTCPeerConnection which provides a higher-level interface as part of their app. Library providers will be another part of the ecosystem and will corral a lot of the value. What’s not clear is whether proprietary libraries will be able to retain value in the face of open alternatives.
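
As a purely hypothetical design sketch, the kind of higher-level interface such a library might offer could look like this: one object hides the RTCPeerConnection and ICE choreography, while the app supplies only a signalling transport and a callback:

```javascript
// Hypothetical library wrapper: not a real product, just an illustration of
// where a library could sit in the ecosystem.
class SimpleCall {
  constructor(sendSignal) {
    this.pc = new RTCPeerConnection();
    this.sendSignal = sendSignal; // app-provided signalling transport
    this.pc.onicecandidate = ({ candidate }) => {
      if (candidate) sendSignal({ type: "candidate", candidate });
    };
    this.pc.ontrack = (event) => {
      if (this.onRemoteStream) this.onRemoteStream(event.streams[0]);
    };
  }
  async start(constraints) {
    const stream = await navigator.mediaDevices.getUserMedia(constraints);
    stream.getTracks().forEach((track) => this.pc.addTrack(track, stream));
    const offer = await this.pc.createOffer();
    await this.pc.setLocalDescription(offer);
    this.sendSignal({ type: "offer", sdp: offer.sdp });
  }
}
```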

Though WebRTC is a browser technology, complete services will need network servers as part of the solution. At day one, at least, WebRTC services will rely on network servers for media transcoding, signalling interworking, media conferencing and media forking for multicast. In the open environment where Javascript controls the signalling it will be up to service designers to choose the functional split between clients and servers. Organisations with existing skills in VoIP systems are going to hold value in the server components and the systems design.

Lastly, we should not forget about the (not really) “dumb” pipes that all this flows over. WebRTC changes browser behaviour quite profoundly by creating the potential for a lot of traffic on new protocols and new IP ports. To some extent the support of WebSockets has already opened up the use of multiple protocols in the browser but WebRTC will make this much more common. All this will have impacts on local personal firewalls, home gateways and ISPs but that is a discussion for another day.

In summary the factors that lead to the final user experience for a WebRTC service are going to be distributed across several technology components that are likely to be obtained from different sources. Some developers (eg in browsers and for the Javascript) are going to face problems that are outside their traditional areas of expertise and building the right skill set by recruitment or acquisition is going to be important to successful development of services. Vendors will take different roles as suppliers of individual components and/or complete solutions. Successful WebRTC services will not just require a good service concept but also the technical ability to design the service in a way that makes the best use of the tools available and integrates the value added by multiple underlying components.

Critical Communications on LTE

We’re now entering the most important and interesting period in the path to create standards for critical communications, also called “public safety”, based on LTE. The first set of standards will be in LTE Release 12 which means that the fundamental technical work will take place in 3GPP during 2013. The work in the next 1-2 years will carry over in to critical communications networks for many years to come. Here are my quick thoughts on the progress so far and how to get the best out of the opportunity.

The Good

The US National Public-Safety Telecommunications Council (NPSTC) made a bold move in 2009 when they committed to a national critical communications network based on LTE in the 700MHz band. Since then they have continued to show great leadership in both political and technical areas. It’s good to see this leadership paying off in terms of growing consensus support for the project to make LTE the platform for critical communications evolution. The announcement in June from the TETRA & Critical Communications Association (TCCA) that they are supporting LTE is an important reorientation of a group that has historically treated critical communications as a separate technology silo from cellular.

The differences between critical communications and cellular are not just skin deep.

Not a lot of real technical work has happened in 3GPP yet but some of what has been done on requirements is refreshingly pragmatic. A decision that initially concerned me was the plan to define “proximity services” for direct mobile-to-mobile communication that were to be common to both public cellular and critical communications. The expressed desire of the NPSTC is to have maximum commonality with public cellular technology to get the best economies of scale. In principle this is a good plan but it can be undermined if requirements that are really exclusive to one community are force-fitted into a bodged common solution. Fortunately the requirements work on proximity services seems to have spotted this risk and separated requirements that are common, and could well have a commercial business case, from those that are exclusive to the critical communications roles.

The Bad

In these early days I worry that too many committees are becoming involved in the process and that organizations that are historically strong in critical communications may try and do too much by “remote control” instead of fully engaging directly with 3GPP. I’ve worked through standards projects like GSM-R and Common IMS that involved disjoint organizations firing liaison statements at each other and they were all inefficient, slow and ultimately rather unsatisfactory in the outcome. If you really believe in LTE as a critical communications platform then the most constructive thing to do is to use technical delegates empowered by their companies to directly contribute to progressing the work in 3GPP. By all means use industry groups like TCCA to address the wider development of the industry but don’t perpetuate parallel or shadow standards development activities.

Breaking down the tribal barriers between cellular and critical communications is going to be essential if the LTE critical communications project is going to meet its goals.

The Ugly

The 3GPP standards work on device to device communication (“Proximity Services”) seems to be progressing nicely and is well managed. Work on group calls, which are another important requirement for critical communications, is a bit late but recoverable. What looks least well controlled from a project point of view are the proposed changes to the radio aspects for example to support very high speed users and some cases of terrestrial communication to aircraft. The radio work is complicated and I wonder whether all the suggested requirements are essential to a first release. In any case it is urgent that this work is scoped and organized if anything is going to be delivered in Release 12.

Conclusion

Thanks largely to the initiative of the US NPSTC we have a rare opportunity to completely modernize and improve the technology platform used by a sector that is both commercially and socially important. We need to do a good job and that will require constructive cooperation beyond tribal boundaries as well as clear prioritisation and project management.

If you work for an organization that has an interest in critical communications on LTE then we would be pleased to talk to you about how Netovate can help you achieve your objectives. Contact us at info@netovate.com.

The “N” Stands for Network – SON is not just about radio

I’ve just had an interesting couple of days at Informa’s Self Organizing Network (SON) conference in Cannes. I was invited to take part in a panel discussion and one topic we explored was where SON could go next and how it can globally improve the user experience, leading to better user retention.

For those new to the field the idea behind SON is that networks can automatically and dynamically configure themselves to optimize use of resources and performance. A wide range of tasks previously done manually are candidates for automation with SON. In the context of 3GPP technologies like UMTS and LTE the term “SON” has been specifically applied to certain RAN technologies like automatic establishment of neighbour relations between cells and tuning of radio parameters. These SON capabilities have been enabled by new 3GPP standards in recent releases.
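
To give a flavour of these RAN SON tasks, here is a toy sketch of automatic neighbour relations, one of the classic examples. The data structures and report shape are invented; a real implementation would also trigger X2 setup and prune unused neighbours:

```javascript
// Toy sketch of automatic neighbour relations (ANR): when a phone reports a
// cell its serving cell has never heard of, add it to the neighbour list
// automatically instead of an engineer configuring it by hand.
const neighbourLists = new Map(); // servingCellId -> Set of neighbour ids

function onMeasurementReport(servingCellId, reportedCellId) {
  if (!neighbourLists.has(servingCellId)) {
    neighbourLists.set(servingCellId, new Set());
  }
  const neighbours = neighbourLists.get(servingCellId);
  if (!neighbours.has(reportedCellId)) {
    neighbours.add(reportedCellId);
    console.log(`ANR: ${reportedCellId} added as neighbour of ${servingCellId}`);
    // A real implementation would also set up the X2 link towards the new
    // neighbour and remove relations that never produce handovers.
  }
}

onMeasurementReport("cell-17", "cell-294"); // phone spotted an unknown cell
```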

We learnt from the conference that the current generation of RAN SON features can significantly improve the performance of RANs while also lowering Opex and Capex. AT&T seems to be leading the field with one of the most sophisticated and widely deployed RAN SON solutions. The way 3GPP associates the term “SON” with particular RAN-oriented standards causes confusion though. It is better to understand SON as a broad concept that has widespread applicability in many layers of the network.

Google is already showing the way with innovative application of SON ideas in their domain (though of course they don’t call it “SON”). Google has been a vigorous supporter of OpenFlow – a standard that allows real-time, server-controlled reconfiguration of IP networks. In big data centres OpenFlow is being used to achieve new levels of elasticity and reliability. If additional server capacity is needed for a particular application, OpenFlow allows a fat IP pipe to be opened in the data centre and the necessary disk images to be copied onto an idle server to configure it to run the stressed application. Once the new server is running, the fat pipe used for the transfer is closed and the data centre’s routing behaviour is reconfigured to direct user requests to the added capacity. These updates are done automatically to meet changes in demand and service mix.
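
Here is an invented sketch of that scale-out pattern. The controller endpoint, flow-rule format and provisioning step are hypothetical stand-ins for a real OpenFlow controller’s interface:

```javascript
// Hypothetical sketch: open a high-bandwidth flow for the disk-image copy,
// provision the idle server, then close the fat pipe and repoint traffic.
async function addFlow(rule) {
  await fetch("https://controller.example.org/flows", { // invented endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(rule),
  });
}

async function copyDiskImage(image, server) {
  // Hypothetical provisioning step: the storage system streams the image
  // over the fat pipe that addFlow() just opened.
}

async function scaleOut(app, idleServer) {
  // 1. Fat pipe for the bulk transfer.
  await addFlow({ match: { dstIp: idleServer.ip }, priority: 100, queue: "bulk-fast" });
  await copyDiskImage(app.image, idleServer);
  // 2. Close the fat pipe and send a share of user requests to the new server.
  await addFlow({ match: { dstIp: idleServer.ip }, priority: 0, queue: "default" });
  await addFlow({ match: { dstIp: app.vip }, action: { addBackend: idleServer.ip } });
}
```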

OpenFlow – a very network oriented SON concept

In mobile networks our concept of SON outside the RAN is much less developed but if we take the point that “N stands for Network” then we should see the user experience as a product of the whole network (well in fact the whole system – including the mobile) and not just a horizontal slice. The new horizon that’s opening is how SON can be applied to core network and system issues. Video transcoding and optimization is an exciting use case. Streaming video is already a major driver of traffic in mobile data networks. Operators are responding to this with a variety of network based transcoding and “optimization” solutions that aim to reduce the system impact. Blindly applying these techniques without reference to the radio conditions is wasteful and risks degrading user experience. Why reduce the bandwidth of a video for a user in an uncongested cell?

Pushing the boundary of SON is going to involve bringing RAN performance data up to network layers like content distribution and policy management. These higher layers can then decide how best to present content, considering the radio status, device characteristics and user profile. Getting policy systems and SON to work in concert is going to allow much richer and more accurate management of the user experience. We can even imagine a reverse flow where the RAN is dynamically reconfigured to optimize the delivery of particular services to particular users.
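
As a toy example of the kind of decision this enables, consider choosing a video profile from the load of the serving cell. The threshold, bitrate floor and inputs are illustrative; in practice the load feed would come from the RAN management plane:

```javascript
// Toy decision function for RAN-aware video optimization: leave video alone
// in quiet cells, scale the bitrate down with load in congested ones.
function chooseVideoProfile(cellLoad, deviceMaxKbps) {
  if (cellLoad < 0.5) {
    // Uncongested cell: transcoding here only degrades the experience.
    return { transcode: false, targetKbps: deviceMaxKbps };
  }
  const target = Math.round(deviceMaxKbps * (1 - cellLoad));
  return { transcode: true, targetKbps: Math.max(target, 250) };
}

console.log(chooseVideoProfile(0.3, 4000)); // { transcode: false, targetKbps: 4000 }
console.log(chooseVideoProfile(0.9, 4000)); // { transcode: true, targetKbps: 400 }
```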

My plea today is that we should not allow SON to be a term that is used exclusively in the RAN context. SON is an idea that is too important to limit to a particular domain, so let’s start to understand where broader application of this idea will yield the best returns.

3GPP RAN Future Technology – D2D, MDT and Mobile Relay

The presentations from the recent 3GPP RAN workshop on future technology were interesting. There was very strong interest in 3D beamforming, support for high-latency X2 interfaces between base stations and tighter WiFi integration with LTE.

I also took this as an opportunity to gauge interest in some of the topics that I have been particularly following.

Device to Device (D2D)

The popularity of D2D discussion was a surprise to me. This is a topic that has always been on the margins of interest in 3GPP but has now hit the mainstream. It is a broad subject and there are competing views on how much of the problem 3GPP should try and tackle. The likely starting points are going to be using the network to detect proximity between two devices that might become candidates for D2D communication and the specific needs of public safety applications. It is claimed that a network integrated solution to find device proximity can be more efficient than Over The Top approaches. This claim is probably true but unless the results are made available in a way that allows a platform for apps and innovation then OTT approaches will probably win in the market.

Presentations mentioning D2D (excluding those that specifically address public safety):

  • Operators (all regions)
    • Dish Networks
    • KDDI
    • Sprint (for public safety)
    • T-Mobile
    • Vodafone (implicitly as part of public safety)
  • European vendors
    • Alcatel-Lucent
    • Ericsson
    • Nokia
    • Renesas
  • Asian vendors
    • LG
  • US vendors
    • Intel
    • Motorola
    • Qualcomm
  • Research (all regions)
    • ETRI

Mobile Relay

The Mobile Relay feature is 3GPP’s approach to providing coverage on rail and transport systems. There was some support for this feature at the workshop but not obviously a critical mass. Mobile operators are noticeably absent from the list of companies that commented on Mobile Relay.

Presentations mentioning mobile relay:

  • CATT
  • Intel
  • LG
  • Samsung

Minimization of Drive Testing (MDT)

The minimization of drive testing (MDT) feature uses data collected from mobile phones to build a view of network coverage instead of performing coverage measurements using a “drive testing” process. MDT didn’t get a lot of attention in the workshop but as the first two versions are already in the standard I guess that it isn’t “new” enough to feature prominently. My view is that MDT is seen as a feature that will continue to be enhanced to provide new capabilities. Some interesting proposals were mentioned, such as using features to specifically enhance MDT in fringe and inter-radio-technology handover situations.

Presentations mentioning MDT:

  • NEC
  • Samsung
  • TeliaSonera

Train Communications Systems 2012 Wrap Up

I am just back from the excellent BWCS Train Communication Systems 2012 conference. The quality and knowledge of the speakers was exceptionally high and I gained new insights into the current state of the art. Train communications is a vast area but the primary focus of the conference was squarely on the provision of Internet service to passengers via in-train WiFi services.

In many ways the rail environment is a perfect storm for data services. To the passengers it feels like they are in a coffee shop and they expect to whip out their data devices and go online for entertainment, social networking and work. Many speakers said that their newly installed WiFi systems had almost immediately reached maximum capacity, and only with the application of service barring (particularly for video) and per-user throttling were they able to maintain a reasonable grade of service. Of course just because passengers want to make generous use of Internet services doesn’t mean they are willing to pay for them. Business cases and billing models are still very unclear, with about half the European industry offering WiFi for free as an incentive to increase “ridership”.

Technically, providing the “ship to shore” communications between the train and the fixed Internet is massively challenging. The combination of (sometimes) high-speed trains, a difficult physical environment, high capacity demands and large user groups exhibits almost all the features that make mobile data systems difficult. The premier solution is to install a dedicated track-side wireless infrastructure operating in licensed (if you can get it) or unlicensed spectrum. This is undoubtedly the highest-capacity option but the price tag will deter all but the most determined. For long-distance routes that involve a lot of open countryside, satellite systems seem the preferred option. The new train operator NTV in Italy gave an impressive presentation on their new satellite-based infotainment systems but, despite considerable attention given to the design and requirements, these systems are already being challenged by the number of users and the generated loads.

The dominant solution in the market is cellular backhaul and even systems with other communication modes enabled normally have a cellular fall-back option too. The big limitations here are capacity and coverage. The question is can we bring together a solution to these problems which has a reasonable business case associated with it? Like many users the rail industry is excited about the potential of LTE but even when LTE is deployed and available it is clear that in the existing architectures it only offers a breathing-space as demand and expectations continue to rise.

How might this all get resolved? The combination of difficult or non-existent business cases and extremely high customer demand for data will be familiar to almost any mobile operator. Broadly the rail industry has the same levers as the mobile operators:

  • New technology: LTE, small cells
  • Tiered pricing
  • Service-based pricing or filtering
  • Enforcement of “fair-use” or “equitable sharing” policies (sketched below)
  • Use of local (ie on-train) content storage and caching

Bringing these tools together in an effective combination is what will deliver the best commercially feasible solutions. I would really like to see better cooperation between the rail and cellular industries to make the best use of the available technologies.
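
Sketched below is the “fair-use” lever in its simplest form: a per-passenger token bucket of the kind an on-train gateway might apply. The rates and the device keying are illustrative only:

```javascript
// Sketch of per-user throttling: each passenger gets a sustained rate plus a
// burst allowance, and packets beyond that are dropped or queued.
class TokenBucket {
  constructor(bytesPerSec, burstBytes) {
    this.rate = bytesPerSec;
    this.capacity = burstBytes;
    this.tokens = burstBytes;
    this.last = Date.now();
  }
  allow(packetBytes) {
    const now = Date.now();
    // Refill tokens for the time elapsed since the last packet.
    this.tokens = Math.min(this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.rate);
    this.last = now;
    if (this.tokens >= packetBytes) {
      this.tokens -= packetBytes;
      return true; // forward the packet
    }
    return false; // over fair share: drop or queue
  }
}

const perUser = new Map(); // passenger device -> bucket
function onPacket(deviceMac, bytes) {
  if (!perUser.has(deviceMac)) {
    perUser.set(deviceMac, new TokenBucket(64000, 256000)); // ~512 kbit/s sustained
  }
  return perUser.get(deviceMac).allow(bytes);
}
```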

One odd outlier here is the role of the mobile operators in the rail sector. Only one operator was represented at the conference, which had a global audience. Despite this there are clear signs that some mobile operators are thinking about developing their own approaches to providing rail communications. 3GPP has started a new work item on “Moving LTE Relay” which aims to light up the insides of trains with an LTE signal relayed from the external network. Perhaps the mobile operators think that their ability to bill users in a “frictionless” way builds a better business case for them. However the Moving LTE Relay work seems to have very little input from the rail industry and I am concerned that it will ignore the hard-won experience already there and also the very complicated operational aspects of rail systems.

What is clear is that Internet access is going to become an essential part of the travelling experience for many train passengers and once you see past the technical and commercial difficulties this is going to make the travelling experience more productive and more enjoyable. It may even become an important factor in changing people’s choice of travel mode.

For more of my thoughts on how LTE could impact train communications please see my slides (with notes) on “LTE – service opportunities, threats and challenges for the rail industry” presented at the conference.

Will Minimization of Drive Testing Expose Some Surprising Coverage Data?

The 3GPP Release 10 standard contains an interesting feature under the rather unexciting title of “Minimization of Drive Testing” (MDT). Drive testing is the process often used by operators to measure their network coverage: special measuring equipment is installed in a car and driven round different locations to take measurements. The upside of traditional drive testing is that it provides a nice summer job for students. The downside is that even with students doing the work it is slow and expensive.

MDT exploits the fact that all mobile phones routinely measure signal quality information and use it to make decisions about how best to connect to the network. Once MDT is enabled (which in theory should require user permission for privacy reasons) the network can request a phone that supports MDT to log its coverage measurements and report them to the network. The logging process can take place even if the phone isn’t in an active session with the network. MDT thus gives the network operator a view of the coverage as measured by the phones actually in users’ bags, pockets and glove boxes.
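
To illustrate what an operator might do with such reports, here is a toy sketch that bins phone-reported signal strength into a coarse grid to build a coverage picture from real devices. The report format is invented; real MDT reports are defined by 3GPP:

```javascript
// Toy sketch: aggregate MDT-style measurement reports into ~1 km bins to
// build a coverage map from phones rather than drive tests.
const grid = new Map(); // "lat,lon" bin key -> { sum, count }

function binKey(lat, lon) {
  return `${lat.toFixed(2)},${lon.toFixed(2)}`; // roughly 1 km squares
}

function onMdtReport({ lat, lon, rsrpDbm }) {
  const key = binKey(lat, lon);
  const cell = grid.get(key) || { sum: 0, count: 0 };
  cell.sum += rsrpDbm;
  cell.count += 1;
  grid.set(key, cell);
}

function averageRsrp(lat, lon) {
  const cell = grid.get(binKey(lat, lon));
  return cell ? cell.sum / cell.count : undefined; // no reports in this bin
}

onMdtReport({ lat: 51.501, lon: -0.142, rsrpDbm: -97 }); // e.g. an indoor user
console.log(averageRsrp(51.501, -0.142)); // -97
```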

The idea of MDT is simply to reduce the amount of drive testing a network operator has to perform (dur!) but in fact MDT may reveal facts about coverage that could be surprising or even uncomfortable for the operators. When the BBC used a mobile app to measure UK 3G coverage the results were significantly worse than the operators’ official coverage predictions (http://www.bbc.co.uk/news/business-14574816). What’s the difference between coverage as measured by a drive test and coverage as measured by a mobile? A drive test measures the available signal in an outdoor location. Mobiles measure the signal that actually reaches them, which can be limited by many factors including screening inside cars or buildings, antenna orientation, proximity of other objects and interference from other devices. All in all, signal quality as measured by the mobile could be very different from (and more representative of user experience than) that seen in a drive test.