Spectrum Liberalisation and Technology Neutrality Licences

Roberto Ercole, CEng, [email protected]

Director, Spectrum and Telecoms Consulting Ltd – Cambridge, UK

Introduction

The purpose of this note is to examine the issues related to moving from technology specific spectrum licensing to technology neutral licensing. This is important for spectrum liberalisation because such licences can encourage a more liquid market for spectrum. It is this liquidity that will deliver the greatest benefits for consumers and the economy. 

However, such a move does not of itself remove the need to consider interference issues between services. How such concerns are dealt with is a key driver of the transaction costs of such a market, and hence of its overall efficiency. Transaction costs also matter for the diversity of services that can arise, for example by encouraging approaches such as dynamic spectrum sharing.

To what extent the minimising of transaction costs is possible depends not only on the regulatory will to do so, but also on:

  1. A detailed engineering analysis of the potential for harmful interference (and how we define that term) – which is complex to calculate and hard to interpret; and
  2. Closely associated with the above, the level of protection a victim receiver should be given – too much protection sterilises large swathes of spectrum to protect perhaps inefficient or poorly designed services.

In theory a technology specific licence can still allow for some liquidity (i.e. change of use) if regulators agree that such permissions will not be unreasonably withheld. However, this introduces delay and uncertainty into the process. According to the RSPG in 2019: “The aim of technology and service neutrality is to provide spectrum users the freedom to deploy new technologies and services without needing to seek regulatory approval or changes. The process of seeking of approval, even if eventually granted, can introduce uncertainty and delays for the innovator which can frustrate the process of innovation.”[1]  

Up to now this engineering analysis has been undertaken by a regulator in most cases, to ensure that spectrum is used efficiently and effectively, and to meet economic or public policy objectives. A technology neutral spectrum right does not obviate the need to do a sharing analysis. In theory market players will be able to do this bilaterally, but there may well be disagreements between adjacent band users about whether or not harmful interference has occurred (and how to financially cost such harm).

Even if we rely on a market-based system there still needs to be enforcement of a spectrum right. This will involve some form of arbitration or tribunal, which will need a third party to assess what is or is not harmful interference. Doing this in a way that minimises uncertainty and transaction costs is no trivial matter.

We might think of this as analogous to competition law, which is a mixture of legal and economic analysis, that has had many years to evolve. It does not seem realistic to rely on current contract law or non-specialised courts to deal with these issues in an effective and timely manner.

There is also a tension between spectrum liberalisation and the benefits of harmonisation that needs to be considered. If increasing liberalisation leads to a reduction in the benefits of harmonisation, that could be considered a negative. Not only does harmonisation minimise costs, it also helps in controlling harmful interference. In part this tension is masked by the success of 3GPP/ETSI standards, with almost the whole world using the GSM family (GSM/UMTS/LTE/5G) for mobile networks. To date what we have seen from spectrum liberalisation for mobile is the ability to update to the latest technology (refarming) – not the ability to move spectrum from one service/use to another.

At a high level, one might simply specify the policy principle that a mobile operator can use any mobile technology that causes no more interference than the existing technology. But this would not minimise transaction costs.

Harmonisation vs Technology Neutrality

It is easier to control radio interference and get the benefits of economies of scale (that are vital for mobile services) by mandating the technology and the frequency band plan. The RSPG noted in 2016: “The application of technology and service neutrality is in practice limited by technical and usage restrictions that are required in order to prevent harmful interference.”[2] The RSPG specifically noted that greater flexibility came at a price of more complexity and less predictability regarding adjacent band compatibility and harmful interference.

This is shown in the diagram below, where we can see how the more we mandate technical parameters to control interference the less flexibly we can use the spectrum.

So, there is a trade-off between complexity and flexibility in spectrum liberalisation, and it is important that this trade-off be considered in any spectrum liberalisation regime.

In practice what we tend to see is that spectrum liberalisation applies to commercial mobile bands. In theory any technology can be used in the band, but this is constrained by the mandated band plan and the chosen transmitter emission mask – two elements that are themselves chosen to suit a particular technology/standard. The band plan is especially important for FDD services. In practice, for commercial mobile, economies of scale have driven most networks to use a single family of standards (GSM, LTE etc).

Whilst this flexibility is beneficial, it does not realise the full potential of spectrum liberalisation, which requires spectrum to flow from low value to high value uses, with as little “friction/transaction costs” as possible.

There is also the associated issue of protection of incumbent services. The way a band is made available for mobile will need to take account of any services it needs to share with. This might set things like maximum powers, or mean that mobile networks must respect certain exclusion zones (perhaps near airports).

To what extent the problem is to do with poor victim receiver performance versus high out of band emissions from the interferer is partly subjective. We can probably all agree that receivers with very little front-end filtering, which need many tens of channels of separation, are not generally efficient, and may in fact end up benefiting “poorly designed” (from one viewpoint) networks, which are then paid to vacate bands.

The European Experience

In Europe, technology neutral licensing has been implemented via the WAPECS RSPG opinion of 2005[3]. This has generally been well received and does not appear to have given rise to any problems, but nor does it appear to have increased spectrum trading beyond what was seen before (i.e. mergers).

WAPECS has not meant that spectrum use is completely laissez-faire: it is still tightly controlled in terms of transmitter emission masks (which relate to a specific radio technology standard in many cases) and frequency band plans harmonised by the CEPT[4]. This liberalisation does not mean that broadcasting spectrum can be used for mobile, and WAPECS is in effect limited to refarming spectrum for mobile, that is, to enabling the latest technology to be used. This limits the complexity in analysing interference potential and still arguably delivers large benefits, at least compared with earlier regimes that sought to promote specific technologies or barred any change without issuing a new licence (and perhaps increasing spectrum fees).

It has been recognised in Europe that there is a tension between flexibility and the benefits of harmonisation, but that there are potentially very great benefits in allowing market players to choose the technology they wish to use to serve their customers. This was in part driven by previous cases such as the GSM900 Directive, which prevented refarming to 3G/UMTS for many years[5] until it was finally amended by Directive 2009/114/EC. Many observers felt that this delay put Europe at a disadvantage for 3G compared to the US, which had technology neutral licensing.

Ultimately both technology specific and neutral licensing require that some consideration be given to the potential impact of harmful interference. However, when upgrading from 2G to 3G/4G/5G (i.e. within the 3GPP/ETSI specifications), much of this work has already been done within CEPT or 3GPP. Also, the vendors that install the networks will likely have a lot of experience of deploying in bands with legacy systems, since the newer standards were effectively designed “from the ground up” to coexist with them.

The recent announcements in the US by the FCC and FAA (delaying 5G roll-out in 3.7 GHz in Auction 107) are not a good sign for a liquid market in spectrum. The FAA raised concerns about interference to radio altimeters used by aircraft over 200 MHz away[6]. Hopefully this will be resolved shortly, but the fact that an auction that raised over $81 billion in February this year (and was seen as key to delivering 5G in the US) can be affected this way is not the sign of a “frictionless” market.

This is perhaps the biggest and most recent such case but there are many others, such as the Netherlands delaying the 3.5 GHz 5G auction due to incumbent interference concerns (and legal challenge) earlier this year[7]. The author believes this is due to the complexity in establishing what is or is not harmful interference, as well as deciding what protection rights incumbents should have. 

The temptation is perhaps to rely on protecting incumbent services to the levels specified in ITU-R reports. Such levels can be useful starting points, but they will not necessarily reflect improvements in receiver performance, nor will they be set at a level that considers the economic impact of services in a given country.

Spectrum Licensing

The main aim of spectrum licensing has been to ensure that users of radio spectrum can do so without suffering harmful interference. This was the main thrust of spectrum licensing requirements when radio systems were first deployed at the beginning of the 20th century. Ensuring that harmful interference does not happen requires some form of engineering analysis, showing that one (or some combination) of the following applies:

  1. Enough frequency (or polarization, code etc) separation between different radio systems; and/or
  2. Enough geographic separation; and/or
  3. Enough separation in time (i.e. system A by day and system B by night).

To carry out the sharing analysis requires that the various radio system parameters are defined – e.g. the transmitter emission mask, the receiver protection ratio required, etc. This is more easily done when a radio system is defined as a particular standard (GSM, UMTS, LTE etc). It also means that performance characteristics/measures can be taken directly from the 3GPP standard, and these will be updated periodically to take account of technology improvements in things like filters and power amplifiers.
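To make this concrete, the short sketch below (in Python) shows the kind of deterministic calculation involved: the adjacent-channel interference reaching a victim receiver is compared with its thermal noise floor against an assumed I/N protection target of -6 dB. All of the parameter values are illustrative choices by the author, not figures taken from any particular standard or study.

    import math

    def noise_floor_dbm(bandwidth_hz: float, noise_figure_db: float) -> float:
        """Thermal noise floor of the victim receiver in dBm."""
        return -174 + 10 * math.log10(bandwidth_hz) + noise_figure_db

    def interference_at_victim_dbm(tx_power_dbm: float, tx_gain_dbi: float,
                                   rx_gain_dbi: float, path_loss_db: float,
                                   acir_db: float) -> float:
        """Adjacent-channel interference power reaching the victim receiver."""
        return tx_power_dbm + tx_gain_dbi + rx_gain_dbi - path_loss_db - acir_db

    # Illustrative values only: a 46 dBm macro base station interfering with a
    # 5 MHz victim receiver with a 9 dB noise figure, 130 dB of path loss and
    # 33 dB of adjacent channel isolation (ACIR).
    noise = noise_floor_dbm(bandwidth_hz=5e6, noise_figure_db=9)        # ~ -98 dBm
    interference = interference_at_victim_dbm(tx_power_dbm=46, tx_gain_dbi=15,
                                              rx_gain_dbi=0, path_loss_db=130,
                                              acir_db=33)               # -102 dBm
    i_over_n = interference - noise
    print(f"I/N = {i_over_n:.1f} dB against a -6 dB target")            # fails by ~2 dB

In a real study the path loss would itself come from a propagation model, and the isolation from the transmitter and receiver specifications of the two systems involved.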

It will also be the case that vendors will likely want to be able to deploy 4G/5G in bands where legacy systems are in use, as was the case with 3G in the 900 MHz band. Hence, they carry out studies to determine what frequency separation is required between the various technologies for a defined loss in performance – say a 5% capacity loss.

When the radio systems being studied are generic (i.e. mobile, or fixed, rather than LTE say), assumptions must be made about transmitter and receiver performance. Such generic assumptions tend to be quite conservative and are not updated when, say, filter technology improves. Such generic limits tend to err on the side of incumbent services and can be worst case in effect. This can sometimes be the case with ITU-R studies. It may also be that the protection criteria used in such studies are not optimal for many markets, because the parameters assumed reflect the older technologies in use when the studies were drafted, or a time when spectrum was not used as intensively.

Even though the ITU-R Radio Regulations note in the preamble that spectrum should be used “rationally, efficiently and economically,” this sort of worst-case analysis will likely lead to spectrum being “under used” – for example with larger guard bands than perhaps necessary, or reduced output power requiring more infrastructure for the new service. This will have an impact on economic efficiency, and can therefore be considered a disadvantage of spectrum liberalisation/technology neutrality.

From a purely engineering point of view, then, it is easier to carry out a sharing analysis on specific technologies/standards. However, this can lead to spectrum being used in an inefficient manner – in the sense that an operator cannot use perhaps the latest or best suited technology to meet their customers’ needs. It was felt in the EU that the benefits of allowing market forces to decide the generation of mobile deployed outweighed any disbenefits in terms of extra interference, or competition concerns.

A potential benefit of technology specific licensing that was suggested in the past was the ability to promote certain technologies (DVB-H, ERMES, GSM, etc).  This argument no longer seems to carry much weight, especially today with such large economies of scale benefits, meaning countries are less willing to try and “go it alone”.

Radio Interference Issues

In a traditional assignment for a technology specific licence, a sharing study would be carried out to check what the impact of the newly licensed network would be on incumbent services. As noted above, it is easier to carry out such a study when the victim and aggressor systems are specific systems, i.e. UMTS vs LTE etc. Such a study would need to look at the performance of the aggressor, that is how much energy from the transmitter “leaks” into the adjacent channel. It would also need to look at the performance of the victim receiver, that is how much it “hears” from the adjacent channel due to its limited “selectivity”. This allows the calculation of how much interference the victim receives, both from what falls within its own band and from what it hears in the adjacent band.

To be clear, this adjacent channel leakage (from the interferer) and selectivity (of the victim) arise because real radio filters cannot be perfect “brick walls” stopping at the edge of their allotted radio channel. Much time and effort has gone into optimising and improving them over time. There is a trade-off in filter performance: the more selective they are, and the less they leak, the more energy they absorb and the more they distort the signal. Increased power consumption due to filter losses also has a major impact on handset batteries.

Under a technology specific licence, the transmitter and receiver performance and the transmitter powers are known, and it is then possible to run a simulation. Such a simulation was run in CEPT Report 40, which itself was based on 3GPP TR 36.942. The simulations need to take account of likely cell deployments, so that the distances between victim and aggressor cell sites can be worked out. The simulation is then run with handsets randomly distributed across the cells to see what the interference impact is over time. An example output of several such simulations, as submitted to 3GPP, is shown below:

Figure 7.1 from 3GPP TR 36.942 showing UMTS capacity loss (downlink) from an adjacent LTE interferer

The simulation in the figure above was run at 2 GHz for uncoordinated macro cells in urban areas, assuming a 500 m cell range and FDD operation. The Adjacent Channel Interference Ratio (ACIR) is calculated from the victim’s selectivity and the aggressor’s leakage; according to Report 40 it is around 33 dB in this case. This 33 dB can be read off the x-axis (ACIR), and the capacity loss read from the y-axis.

The assumption used is that the interference must lead to a greater than 5% capacity loss for it to be considered harmful. The average (red) line shows that 33 dB ACIR gives around a 2% loss. However, there is a spread across the simulations supplied by vendors, and different assumptions on cell range and environment can be used.
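For reference, the ACIR combines the aggressor’s Adjacent Channel Leakage Ratio (ACLR) and the victim’s Adjacent Channel Selectivity (ACS) as parallel contributions in linear terms. The sketch below shows the calculation; the ACLR and ACS values used are illustrative rather than taken from Report 40.

    import math

    def acir_db(aclr_db: float, acs_db: float) -> float:
        """ACIR from transmitter leakage (ACLR) and receiver selectivity (ACS),
        combined in linear terms: 1/ACIR = 1/ACLR + 1/ACS."""
        aclr_lin = 10 ** (aclr_db / 10)
        acs_lin = 10 ** (acs_db / 10)
        return 10 * math.log10(1 / (1 / aclr_lin + 1 / acs_lin))

    # Illustrative figures: the combined ACIR is dominated by the weaker of the
    # two contributions, here the receiver selectivity.
    print(f"ACIR = {acir_db(45, 33):.1f} dB")   # ~32.7 dB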

It can be seen from the above discussion just how difficult it can be to run such sharing studies. Even CEPT defers back in large part to the work done in 3GPP. 

Propagation Models

An important issue is how to model the propagation of radio waves from the interfering system to the victim. Is there a clear line of sight path, or are there hills, buildings or trees in the way? Obstructions can make the path loss between the two much larger (i.e. a non line of sight path). Also, what is the distance to the radio horizon? The signal will drop off very rapidly after that point (especially at higher frequencies). The path loss is a key element in any interference analysis.

In the case of mobile these waves will likely propagate many kilometres and still provide coverage. The propagation model will determine how quickly the signal power drops with distance, and hence at what distance the victim will no longer be subject to harmful interference. The propagation model needs to be as accurate as possible to make the sharing analysis accurate. 

The temptation in doing sharing analysis is to use the simplest propagation model and hence effectively make worst case assumptions. This might be to assume free space loss and a flat earth. This can lead to separation distance requirements of hundreds of kilometres, when more accurate models (that also use real terrain profiles) can lead to distances of, say, 10 km. This might effectively mean that sharing between two services is ruled out because of the inaccuracy of the propagation model chosen.
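As a rough indication of how much the model choice matters, the sketch below compares free space path loss with the classical Okumura-Hata urban model at 900 MHz; the antenna heights and distance are the author’s illustrative choices. The two models differ by tens of dB at typical mobile distances, which translates directly into very different separation requirements.

    import math

    def fspl_db(freq_mhz: float, dist_km: float) -> float:
        """Free space path loss: the simplest, and most conservative, model."""
        return 32.44 + 20 * math.log10(freq_mhz) + 20 * math.log10(dist_km)

    def hata_urban_db(freq_mhz: float, dist_km: float,
                      h_base_m: float = 30.0, h_mobile_m: float = 1.5) -> float:
        """Okumura-Hata urban model (valid roughly 150-1500 MHz, 1-20 km)."""
        a_hm = ((1.1 * math.log10(freq_mhz) - 0.7) * h_mobile_m
                - (1.56 * math.log10(freq_mhz) - 0.8))
        return (69.55 + 26.16 * math.log10(freq_mhz)
                - 13.82 * math.log10(h_base_m) - a_hm
                + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(dist_km))

    # At 900 MHz and 10 km the two models differ by roughly 50 dB.
    print(f"Free space at 10 km: {fspl_db(900, 10):.1f} dB")        # ~111.5 dB
    print(f"Hata urban at 10 km: {hata_urban_db(900, 10):.1f} dB")  # ~161.6 dB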

In practice what we see with mobile systems is that operators deploy their networks based on a theoretical model, and then do measurements (drive tests) at and near the actual mast sites to see how the signal varies, and optimise based on this. They do this to ensure good coverage, but there can be significant differences between the theoretical calculations and the actual measured power levels.

It would therefore seem important, in order to encourage more intensive and ultimately more economically efficient sharing, that countries invest in developing enhancements to existing propagation models (such as those from the ITU-R) optimised for their own territory. Improvements of a few dBs in accuracy could have a significant impact on reducing transaction costs. This will be highly dependent on the accuracy of terrain path models (hills/undulations between a victim and interferer) as well as clutter databases (buildings etc. along that path).
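As a back-of-the-envelope illustration of why a few dB matter, the sketch below uses a simple distance-power law (with an assumed path loss exponent) to show how a modest improvement in modelled path loss shortens the required separation distance.

    import math

    def distance_scale_factor(extra_loss_db: float, path_loss_exponent: float) -> float:
        """With a simple d**n distance law, an extra extra_loss_db of modelled
        path loss scales the required separation distance by 10**(dB/(10*n))."""
        return 10 ** (extra_loss_db / (10 * path_loss_exponent))

    # Illustrative: with a path loss exponent of 3.5, modelling 5 dB of extra
    # loss shrinks the required separation distance by roughly 28%.
    factor = distance_scale_factor(5, 3.5)
    print(f"Separation distance shrinks by a factor of {factor:.2f} "
          f"(about {100 * (1 - 1 / factor):.0f}% shorter)")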

How to define a technology neutral spectrum licence

In principle, all that is required to convert a technology specific licence to a technology neutral one is to ensure that the transmitter masks of the mobile stations and base stations “fit” within the existing mask. For example, UK Ofcom specifies a power of 65 dBm per 5 MHz in the band 2110 – 2170 MHz for a WAPECS device.
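A minimal sketch of such a “fit” check is shown below. The in-block 65 dBm per 5 MHz figure is the one quoted above; the out-of-block regions and levels, and the candidate values, are entirely hypothetical and for illustration only.

    from typing import Dict

    # A simplified, hypothetical block edge mask: maximum EIRP per 5 MHz
    # measurement bandwidth, keyed by region relative to the licensed block.
    # Only the in-block 65 dBm/5 MHz figure comes from the text above.
    LICENCE_MASK_DBM_PER_5MHZ: Dict[str, float] = {
        "in block": 65.0,
        "0-5 MHz outside block": 22.0,    # illustrative
        "5-10 MHz outside block": 16.0,   # illustrative
    }

    def fits_within_mask(candidate: Dict[str, float]) -> bool:
        """True if every declared emission level of the candidate technology
        sits at or below the licence mask in the same region."""
        return all(level <= LICENCE_MASK_DBM_PER_5MHZ[region]
                   for region, level in candidate.items())

    # A candidate carrier declared against the same regions (illustrative).
    candidate = {"in block": 61.0,
                 "0-5 MHz outside block": 20.5,
                 "5-10 MHz outside block": 12.0}
    print("Fits within existing mask:", fits_within_mask(candidate))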

In theory, to fully define the interference potential one would need to know the locations and duty cycles of all devices in a network, to work out the probability of interference using a tool such as SEAMCAT[8]. However, this is complex and costly, and in any case would not guarantee protection in the future, as users may change their usage patterns over time (use more data per month etc). In practice this issue is normally resolved between operators, who try to ensure they do not build base stations in locations that cause problems for one another.

One could foresee a problem if an operator went from a system with very few users and base stations to one with many more – this would have a much larger harmful interference potential, even if the new system stayed within the old transmitter mask. That is, the probability of interference changes as the number of potential users of the aggressor system increases: the problem from one user may be minimal, while 10 million users will be much worse, as the sketch below illustrates. This is not really an issue for mobile, as these systems are pretty much everywhere and were licensed as national ubiquitous systems. It could, however, be an issue when using a band that was lightly used for paging or push to talk services used by utilities (such as Private Mobile Radio or TETRA).
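A crude, SEAMCAT-style Monte Carlo sketch of this effect is given below. Every parameter (transmit power, ACIR, the clutter/shadowing term, cell radius and protection threshold) is an illustrative assumption; the point is simply that the probability of exceeding a protection threshold grows with the number of active aggressor devices, even though each device stays within the same transmitter mask.

    import math
    import random

    def interference_probability(n_devices: int, trials: int = 500,
                                 radius_km: float = 5.0, tx_dbm: float = 23.0,
                                 acir_db: float = 33.0,
                                 threshold_dbm: float = -104.0) -> float:
        """Drop n_devices uniformly around a victim and estimate how often the
        aggregate adjacent-channel interference exceeds a protection threshold.
        All parameter values are illustrative."""
        exceed = 0
        for _ in range(trials):
            total_mw = 0.0
            for _ in range(n_devices):
                d_km = max(0.05, radius_km * math.sqrt(random.random()))  # uniform in a disc
                path_loss_db = (32.44 + 20 * math.log10(2000)             # free space at 2 GHz
                                + 20 * math.log10(d_km)
                                + random.gauss(20, 8))                    # crude clutter/shadowing
                rx_dbm = tx_dbm - path_loss_db - acir_db
                total_mw += 10 ** (rx_dbm / 10)
            if 10 * math.log10(total_mw) > threshold_dbm:
                exceed += 1
        return exceed / trials

    for n in (1, 100, 10_000):
        print(f"{n:>6} active devices: P(interference) ~ {interference_probability(n):.3f}")

SEAMCAT itself models antenna patterns, traffic and far more realistic propagation; the sketch only captures the scaling with device numbers.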

Overall Conclusion

The use of technology neutral licensing does not of itself allow an efficient market for spectrum to function. The uncertainty about the potential for harmful interference caused to incumbents can act as a barrier. To date what has been seen in Europe amounts to the ability of mobile operators to refarm mobile bands to the latest mobile technology. 

We have seen very little change of use from one service to another via technology neutral licensing. The main driver of change of use has been the traditional one of regulatory fiat following sharing analysis and public consultation (which may include some form of compensation). One might argue that this flow of spectrum to mobile has in part been driven by the potential for auction proceeds.

To enable the possibility of a more liquid market for spectrum requires more detailed consideration of what is or is not harmful interference, and of the engineering models/simulations that will be used to determine this. This may include more detailed national propagation models to better determine path loss, which is a vital element in such sharing studies.

This needs to be incorporated into an enforcement regime where interference cases can be heard and resolved speedily by a tribunal or court, with the advice of an expert and independent panel on engineering issues. This would speed up resolution and hopefully minimise uncertainty and costs, especially as a body of case law develops on best practice in terms of engineering and spectrum property rights. Whilst it is possible to hear such cases in a traditional judicial system, this seems unlikely to minimise transaction costs.

It is the minimising of transaction costs that is essential to the efficiency of a well-functioning market.

Mr Roberto Ercole 

BSc, MSc, CEng

Spectrum Telecoms and Consulting Ltd.

Cambridge, UK

[email protected]

www.linkedin.com/in/roberto-ercole-1158771/

Roberto is a Chartered Engineer in Europe, specialising in mobile radio systems and radio spectrum regulation. He graduated with a degree in Applied Physics in 1988, and a Masters in Electronic Engineering in 1990. He also has a post graduate certificate in EU and UK Competition Policy and Law.

Roberto spent 10 years at GSMA as a senior global policy director for spectrum from 2006 to 2016. He was responsible for the GSMA’s WRC campaigns in 2007 and 2012, as well as several regulatory market interventions around the world. 

He also worked as a radio spectrum regulator in the UK for 7 years. Following that, he worked with a UK GSM1800 operator as a spectrum engineer for 2 years, specialising in regulatory issues related to the UK spectrum auction for 3G.

Roberto has extensive experience in mobile competition and economic regulation issues. He worked for the UK telecoms competition regulator (Oftel) for 5 years looking at mobile and spectrum competition issues such as spectrum auctions and infrastructure sharing. He has also prepared competition cases for clients.

Prior to joining the GSMA in 2006, he worked as an independent consultant advising on radio spectrum engineering issues, as well as on spectrum valuations. Roberto has also assisted governments in developing spectrum liberalisation policies and in helping to promote competition in mobile markets by encouraging new entrants.

Roberto left GSMA in 2016 and now works as a consultant for several clients (including mobile operators and regulators) and has worked extensively in the MENA region. 


[1] https://rspg-spectrum.eu/wp-content/uploads/2019/10/RSPG19-031final_report_on_spectrum_strategy.pdf

[2] https://circabc.europa.eu/d/a/workspace/SpacesStore/ddb735a3-a7e8-4c55-a4a5-679577c8d2bd/RSPG16-004final-Efficient_Awards_report.pdf

[3] https://rspg-spectrum.eu/_documents/documents/opinions/rspg05_102_op_wapecs.pdf

[4] Such as ECC/DEC(09)03 for 800 MHz.

[5] https://www.fiercewireless.com/europe/eu-frees-up-900-mhz-band-for-other-uses and https://www.europarl.europa.eu/doceo/document/H-6-2009-0115_HU.html?redirect

[6] https://arstechnica.com/tech-policy/2021/11/faa-forced-delay-in-5g-rollout-despite-having-no-proof-of-harm-to-aviation/

[7] https://www.reuters.com/technology/dutch-court-sides-with-inmarsat-dispute-over-use-5g-frequency-2021-06-30/

[8] http://www.ero.dk/seamcat
