Report ITU-R M.2516-0
(11/2022)
M Series
Mobile, radiodetermination, amateur
and related satellite services
Foreword
The role of the Radiocommunication Sector is to ensure the rational, equitable, efficient and economical use of the radio-
frequency spectrum by all radiocommunication services, including satellite services, and carry out studies without limit
of frequency range on the basis of which Recommendations are adopted.
The regulatory and policy functions of the Radiocommunication Sector are performed by World and Regional
Radiocommunication Conferences and Radiocommunication Assemblies supported by Study Groups.
Series Title
BO Satellite delivery
BR Recording for production, archival and play-out; film for television
BS Broadcasting service (sound)
BT Broadcasting service (television)
F Fixed service
M Mobile, radiodetermination, amateur and related satellite services
P Radiowave propagation
RA Radio astronomy
RS Remote sensing systems
S Fixed-satellite service
SA Space applications and meteorology
SF Frequency sharing and coordination between fixed-satellite and fixed service systems
SM Spectrum management
Note: This ITU-R Report was approved in English by the Study Group under the procedure detailed in
Resolution ITU-R 1.
Electronic Publication
Geneva, 2022
© ITU 2022
All rights reserved. No part of this publication may be reproduced, by any means whatsoever, without written permission of ITU.
TABLE OF CONTENTS
1 Introduction
2 Scope
8 Conclusion
1 Introduction
International Mobile Telecommunications (IMT) systems are mobile broadband systems
encompassing IMT-2000, IMT-Advanced and IMT-2020.
IMT-2000 provides access through one or more radio links to a wide range of telecommunications
services supported by fixed telecommunications networks (e.g. public switched telecommunication
network (PSTN)/Internet) and other services specific to mobile users. Since the year 2000, IMT-2000
has continuously improved. Recommendation ITU-R M.1457, which provides the detailed radio
interface specifications for IMT-2000, has also been updated accordingly. New features and
technologies have been introduced to improve the capabilities of IMT-2000.
IMT-Advanced is a mobile system that can support high-quality multimedia applications across a
wide range of services and platforms. The system provides a significant improvement in performance
and quality relative to IMT-2000. IMT-Advanced systems can operate in low to high mobility
conditions over a wide range of data rates in multiple user environments according to user and service
demands. Such systems provide access to a wide range of telecommunication services, including
advanced mobile services, which are supported by packet-based mobile and fixed networks.
Recommendation ITU-R M.2012 provides the detailed radio interface specifications of
IMT-Advanced, and it also has been updated accordingly.
IMT-2020 includes new capabilities of IMT that go beyond those of IMT-2000 and IMT-Advanced.
These new capabilities make IMT systems more efficient, fast, flexible and reliable when providing
a variety of services. Diverse usage scenarios were introduced in IMT-2020 such as enhanced mobile
broadband (eMBB), ultra-reliable low-latency communications (URLLC) and massive machine-type
communications (mMTC). Besides significantly enhancing the data rate and mobility provided by IMT-Advanced, IMT-2020 introduced improvements in spectrum efficiency, latency, reliability, connection density, energy efficiency and area traffic capacity to efficiently support emerging usage scenarios and applications. Recommendation ITU-R M.2150 provides the detailed radio interface
specifications of IMT-2020.
ITU-R studied technology trends that led to the development of IMT-Advanced and IMT-2020, and
the results were documented in Reports ITU-R M.2038 and ITU-R M.2320, respectively. Since the
publication of Report ITU-R M.2320 in 2014, there have been significant advances in IMT
technologies and the deployment of IMT systems. The capabilities of IMT systems are being
continuously updated in line with user trends and technological developments. Accordingly, this Report provides information on the technology trends of terrestrial IMT systems in the time frame up to 2030 and beyond, covering emerging technology trends and enablers, technologies to enhance the radio interface, and technology enablers to enhance the radio network.
2 Scope
This Report provides a broad view of future technical aspects of terrestrial IMT systems in the time frame up to 2030 and beyond, characterized with respect to key emerging services, application trends and relevant driving factors. The technologies described in this Report are collections of potential technology enablers that may be applied in the future. The Report comprises a toolbox of technological enablers for terrestrial IMT systems, including the evolution of IMT through advances in technology and their deployment. It does not preclude the adoption of any other existing technologies or emerging technologies expected in the future.
New services and application trends for IMT towards 2030 and beyond can be summarized as follows:
– Networks will support enabling services that help to steer communities and countries towards
reaching the United Nations’ Sustainable Development Goals (UN-SDGs)
– Customization of user experience will increase with the help of user-centric resource
orchestration models
– Localized demand–supply–consumption models will become prominent at a global level
– Community-driven networks and public–private partnerships will bring about new models
for future service provisioning
– Networks will have a strong role in various vertical and industrial contexts
– Market entry barriers will be lowered by the decoupling of technology platforms, enabling multiple entities to contribute to innovations
– Empowering citizens as knowledge producers, users and developers will contribute to a
process of human-centred innovation, contributing to pluralism and increased diversity
– Privacy will be strongly influenced by the growing platform and sharing economies, the emergence of intelligent assistants, connected living in smart cities, transhumanism, and digital twins
– Monitoring and steering of the circular economy will be possible, helping to create a better understanding of a sustainable data economy
– Sharing and circular economy-based co-creation will enable the promotion of sustainable
interaction with existing resources and processes
– Development of products and technologies that innovate to zero will be promoted; for
example, zero-waste and zero-emission technologies
– Immersive digital realities will facilitate novel ways of learning, understanding and
memorizing in several fields of science.
The role of IMT towards 2030 and beyond will be to cognitively connect many devices, processes and humans to a global information grid, thereby offering new opportunities for various verticals.
Considering their different development cycles, a full complement of potential advances and vertical
transformations will continue in the post-2030 era. The trend towards higher data rates will continue leading up to 2030, where peak data rates may approach terabits per second (Tbit/s) indoors, requiring large available bandwidths and giving rise to (sub-)terahertz (THz) communications. At the
same time, a large portion of the vertical data traffic will be measurement-based or actuation-related
small data. In most cases, this will require extremely low latency in tight control loops, which may
necessitate short over-the-air latencies to allow time for computation and decision making.
Simultaneously, the reliability and QoS requirements in many vertical applications will increase so that the required services are available in the areas where they are needed. Industrial devices, processes and future haptic applications, including multi-stream holographic applications, will require strict timing synchronization with tight requirements on jitter.
4.1.1 Potential new services, trends and opportunities
The three usage scenarios described in IMT-2020 (eMBB, mMTC and URLLC) will remain relevant. New use cases and applications should be considered for the continuing evolution, especially those driving technology development and reflecting future requirements. Consequently, the following new services are envisioned as trends and opportunities:
– Holographic communication
Holographic displays are the next evolution in multimedia experience delivering 3D images
from one or multiple sources to one or multiple destinations, providing an immersive 3D
experience for the end user. Interactive holographic capability in the network will require a
combination of very high data rates and ultra-low latency.
– Tactile and haptic Internet applications
Human operators can monitor remote machines via virtual reality (VR) or holographic communications, aided by tactile sensors, which may also involve actuation and control via kinaesthetic feedback.
Tele-diagnosis, remote surgery and telerehabilitation are just some of the many potential
applications in healthcare. Tele-diagnostic tools and medical expertise/consultation could be
available anywhere and anytime regardless of the location of the patient and the medical
practitioner. Remote and robotic surgery is an application where a surgeon gets real-time
audio-visual feeds of the patient that is being operated upon in a remote location. The
technical requirements of haptic Internet capability cannot be fully met by current systems.
– Network and computing convergence
Mobile edge computing (MEC) will continue to be deployed in future IMT networks. When clients request a low-latency service, the network may direct the request to the nearest edge
computing site. Augmented reality/virtual reality (AR/VR) rendering, autonomous driving
and holographic type communications are all candidates for edge cloud coordination.
– Extremely high-rate access
Access points (APs) in transport nodes, shopping malls and other public places may form information access points providing fibre-like speeds. They could also serve the backhaul needs of millimetre-wave (mmWave) small cells. Co-existence with cellular services, as well as security, appear to be the major issues requiring further attention in this direction.
– Connectivity for Everything
Scenarios include real-time monitoring of buildings, cities, environment, cars and
transportation, roads, critical infrastructure, water and power, amongst others. The Internet of bio-things, through smart wearable devices and intra-body communications achieved via implanted sensors, will drive the need for connectivity well beyond mMTC.
It is anticipated that private networks, application- or vertical-specific networks and Internet of Things (IoT) sensor networks will increase in number in the coming years. Interoperability
is one of the most significant challenges in such a ubiquitous connectivity/compute
environment (smart environments), where different products, processes, applications, use
cases and organizations are connected. Interactions among telecommunications networks,
computers and other peripheral devices have been of interest since the earliest distributed
computing systems.
Vehicle-to-vehicle (V2V) or vehicle-to-infrastructure (V2I) communication and coordination, together with autonomous transport, can result in a reduction of road accidents and traffic jams. Latency on the order of a few milliseconds will likely be needed for collision avoidance and remote driving.
– Extended Reality (XR) – Interactive immersive experience
This interactive immersive experience use case will have the ability to seamlessly blend
virtual and real-world environments and offer new multi-sensory experiences to users.
Extended Reality (XR), encompassing virtual reality (VR), augmented reality (AR) and mixed reality (MR), is expected to provide higher resolution, a larger field of view, higher frame rates and lower motion-to-photon latency, all of which translate into higher demands on transmission data rate and end-to-end latency.
in every field of the future society. The diversification of terminals will lead new verticals to
emerge and thrive.
4.2 Drivers for future technology trends towards 2030 and beyond
The continuing evolution of the IMT systems and the underlying technologies must be guided by the
imperative to satisfy fundamental requirements and contextualized in terms of how they can help
society, the end users, and value creation/delivery. These necessities and key driving factors are as
follows:
– Societal goals – Future technologies should contribute further to the success of several UN SDGs, including environmental sustainability, efficient delivery of health care,
reduction in poverty and inequality, improvements in public safety and privacy, support for
aging populations and managing expanding urbanization.
– Market expectations – new technologies should enable significant and novel capabilities,
support radically new and differentiated services, and create greater market opportunities.
– Operational necessities – The need to manage complexity, drive efficiency and reduce costs
with end-to-end automation and visibility is also an imperative motivation and driving factor.
Key drivers for IMT Systems for 2030 and beyond include:
– Energy efficiency
Energy efficiency has long been an important design target for both networks and terminals. While energy efficiency is improved, total energy consumption should also be kept as low as possible for sustainable development. Power-efficient technology solutions are needed in both backhaul and local access to make use of small-scale renewable energy sources.
– Data Rate, Latency and Jitter
The data rate for future systems should be increased as much as practical in order to support
extremely high bandwidth services such as extremely immersive XR and holographic
communications.
Services with real-time and precise control usually place high demands on communication latency, including air-interface delay, end-to-end latency and round-trip latency. Jitter refers to the degree of latency variation. Some future services, such as time-sensitive industrial automation applications, may require jitter close to zero.
Future systems should guarantee the user experience regardless of user location and network traffic conditions.
– Sensing resolution and accuracy
Sensing based services, including traditional positioning and new functions such as imaging
and mapping, will be widely integrated with future smart services in both indoor and outdoor scenarios. Very high accuracy and resolution will be needed to support a better
service experience.
– Connection density
Connection density refers to the number of connected or accessible devices per unit area. It is an important indicator of the ability of mobile networks to support large numbers of terminal devices. With the popularity of the IoT and the diversification of terminal access in specific applications such as industrial automation and personal health care, mobile systems need the ability to support ultra-large numbers of connections.
significantly better than today’s network. Trust modelling, trust policies and trust
mechanisms need to be defined.
Security algorithms may use machine learning (ML) to identify attacks and respond to them.
Continuous deep learning on a packet/byte level and machine learning can enforce policies,
detect, contain, mitigate, and prevent threats or active attacks.
– Dynamically controllable radio environment
A dynamically controllable radio environment may be able to change the characteristics of the radio propagation environment, thereby creating favourable channel conditions to support higher data rate communication and improve coverage (a simple numerical illustration follows below).
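As a purely illustrative sketch (not part of this Report), the Python fragment below assumes idealized Rayleigh-fading links and a reconfigurable intelligent surface whose element phases can be set freely, and shows how co-phasing the reflected paths with the direct path increases the effective channel gain; all sizes and channel models are assumptions made only for this example.

```python
# Hypothetical sketch (not from the Report): configuring the phases of a
# reconfigurable intelligent surface (RIS) so that all reflected paths add
# coherently with the direct path at the receiver.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                                        # RIS elements (assumed)

h_d = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)         # direct BS-to-UE path
h_bs = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)   # BS-to-RIS paths
h_ue = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)   # RIS-to-UE paths

cascade = h_bs * h_ue                                         # per-element cascaded channel

# Choose each element's phase so its reflection aligns with the direct path.
theta = np.angle(h_d) - np.angle(cascade)
h_eff = h_d + np.sum(cascade * np.exp(1j * theta))

print(f"gain without RIS: {10 * np.log10(np.abs(h_d) ** 2):6.1f} dB")
print(f"gain with RIS   : {10 * np.log10(np.abs(h_eff) ** 2):6.1f} dB")
```

In this toy model the gain improvement grows with the number of elements, which is one reason such surfaces are discussed as a coverage-enhancement tool.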
implementation issues related to the periodic updating of deep learning models used in various blocks
of the PHY layer must be addressed.
Examples of the proposed areas to be investigated are presented next.
AI in symbol detection/decoding: ML techniques can be used for symbol detection and/or decoding. While demodulation/decoding in the presence of Gaussian noise or interference has been studied for many decades, and optimal solutions are available in many cases, ML could be useful in scenarios where either the interference/noise does not conform to the assumptions of the optimal theory, or optimal solutions are too complex. Meanwhile, IMT towards 2030 and beyond is likely to utilize even shorter codewords than IMT-2020 together with low-resolution hardware, which inherently introduces non-linearities that are difficult to handle using classical methods. Accordingly, ML could play an important role in symbol detection, precoding, beam selection and antenna selection.
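As a minimal, hypothetical sketch of this idea (not taken from this Report), the following Python fragment trains a small scikit-learn classifier as a data-driven QPSK demapper for a link with an assumed, unknown hardware impairment (a fixed phase rotation) and compares it with a detector matched to the ideal constellation; the scenario, parameters and library choice are all illustrative assumptions.

```python
# Hypothetical sketch (not from the Report): a learned demapper adapts to an
# assumed hardware impairment (a fixed 35-degree phase rotation) that a detector
# matched to the ideal QPSK constellation does not know about.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
const = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)   # ideal QPSK points
rotation = np.exp(1j * np.deg2rad(35))                              # assumed impairment
noise_std = 0.25                                                    # per real dimension

def make_batch(n):
    labels = rng.integers(0, 4, n)
    rx = const[labels] * rotation + noise_std * (rng.normal(size=n) + 1j * rng.normal(size=n))
    return np.column_stack([rx.real, rx.imag]), labels

X_train, y_train = make_batch(20000)
X_test, y_test = make_batch(5000)

# Data-driven demapper: a tiny MLP trained on received samples with known labels.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300).fit(X_train, y_train)
ser_learned = np.mean(mlp.predict(X_test) != y_test)

# Classical rule matched to the *ideal* (unrotated) constellation.
rx_test = X_test[:, 0] + 1j * X_test[:, 1]
ser_classic = np.mean(np.argmin(np.abs(rx_test[:, None] - const[None, :]), axis=1) != y_test)

print(f"mismatched classical SER: {ser_classic:.3f}   learned SER: {ser_learned:.3f}")
```

In this toy setting the learned demapper implicitly absorbs the impairment from data, which is the kind of behaviour the paragraph above refers to when classical assumptions do not hold.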
AI in channel estimation: Another promising area for ML is the estimation and prediction of
propagation channels. IMT systems of prior generations have mostly exploited channel state
information (CSI) at the receiver, while CSI at the transmitter was mostly based on roughly quantized
feedback of received signal quality and/or beam directions. In systems with even larger numbers of antenna elements, wider bandwidths and higher degrees of time variation, the performance loss of such CSI feedback schemes is non-negligible. ML may be a promising approach to overcome such
limitations.
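As a toy illustration of learned channel prediction (not from this Report), the sketch below fits a simple data-driven one-step predictor to a simulated Jakes-like fading process; the sum-of-sinusoids channel model, predictor order and Doppler value are assumptions chosen only for the example.

```python
# Hypothetical sketch (not from the Report): predicting the next sample of a
# time-varying fading channel from its recent history, a simple stand-in for
# ML-aided CSI prediction at the transmitter.
import numpy as np

rng = np.random.default_rng(2)
T, order, fd_ts = 2000, 8, 0.01            # samples, predictor memory, normalised Doppler

# Sum-of-sinusoids (Jakes-like) fading process.
n_paths = 32
phases = rng.uniform(0, 2 * np.pi, n_paths)
aoas = rng.uniform(0, 2 * np.pi, n_paths)
t = np.arange(T)
h = np.sum(np.exp(1j * (2 * np.pi * fd_ts * np.cos(aoas)[:, None] * t + phases[:, None])),
           axis=0) / np.sqrt(n_paths)

# Build (history -> next sample) pairs and fit a linear predictor by least squares.
X = np.array([h[i - order:i] for i in range(order, T)])
y = h[order:]
split = (T - order) // 2
w, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)

pred = X[split:] @ w
nmse = np.mean(np.abs(pred - y[split:]) ** 2) / np.mean(np.abs(y[split:]) ** 2)
print(f"one-step prediction NMSE: {10 * np.log10(nmse):.1f} dB")
```

A learned non-linear predictor could replace the least-squares step; the point of the sketch is only that CSI can be predicted from its own history instead of being fed back after the fact.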
AI in MAC layer design: The medium access control (MAC) layer is a major application area of AI, where many legacy solutions can be replaced with AI-based methods relying on supervised learning, data collection and ML model deployment. To jointly update the deployed ML models, collect data for supervised learning tasks and enable reinforcement learning on different blocks of the network, next generation MAC algorithms need to consider coordination with the AI functions used in various layers of the network, especially in the PHY layer.
AI in radio resource management: Radio resource management, or resource allocation, can also be implemented via AI/ML-based methods. In a multi-user environment, with reinforcement learning, BSs and UEs can automatically coordinate channel access and resource allocation based on the signals they respectively receive. Each node calculates a reward for its transmission and adjusts its power, beam direction and other signalling to accomplish distributed interference coordination and improve system capacity.
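A minimal sketch of this idea, under strongly simplified assumptions (two links, collision-based rewards, and epsilon-greedy bandits standing in for full reinforcement learning), is given below; none of it is specified in this Report.

```python
# Hypothetical sketch (not from the Report): two links independently learn, from
# their own observed rewards, to pick non-colliding channels -- a toy version of
# distributed RL-based resource allocation.
import numpy as np

rng = np.random.default_rng(3)
n_channels, n_links, n_rounds, eps = 4, 2, 3000, 0.1
q = np.zeros((n_links, n_channels))              # per-link action-value estimates
counts = np.zeros((n_links, n_channels))

for _ in range(n_rounds):
    # Each link picks a channel: explore with probability eps, otherwise exploit.
    choices = [int(rng.integers(n_channels)) if rng.random() < eps else int(np.argmax(q[k]))
               for k in range(n_links)]
    for k, c in enumerate(choices):
        # Reward 1 if no other link chose the same channel, 0 on collision.
        reward = 1.0 if choices.count(c) == 1 else 0.0
        counts[k, c] += 1
        q[k, c] += (reward - q[k, c]) / counts[k, c]     # incremental mean update

print("preferred channel per link:", np.argmax(q, axis=1))
```

Real systems would use richer state (measured interference, buffer status, beam directions) and more capable learners, but the structure of local observations driving local decisions is the same.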
Semantic communications: With progress in ML and information theory, the ultimate air interface could perform automatic semantic communication. Many fundamental problems remain open in this area. For example, learning algorithms usually rely heavily on wireless data, which may be hard to obtain or may need to be protected under privacy constraints. To address this, it is possible to learn from both practical wireless data and statistical models.
Questions related to the optimal ML algorithms given certain conditions, required amount of training
data, transferability of parameters to different environments, and improvement of explainability will
be the major topics of research in the foreseeable future. There will be various phases towards
development of AI for radio interface technologies, and it is imperative to ensure the increased
integration of the technology comes with minimum disruption to the rollout and operation of radio
interface technologies. In the short and medium term, AI models can target the optimization of
specific features within radio interface technologies for IMT-2020 and its evolution. In the longer
term, AI can be used to enable new features over legacy wireless systems.
5.1.2 AI-native radio network
Future IMT-systems must support extremely reliable and performance-guaranteed services. They will
introduce a multi-dimensional network topology, which will make network management and
operation more difficult and introduce more challenging problems. To address these problems, AI
technologies can be adopted for automated and intelligent networking services. Consequently, radio
access network for IMT towards 2030 and beyond will evolve into an AI-native network architecture
to assist computationally intensive tasks.
The highest level of an AI-native radio network is expected to be designed and implemented by AI to act as an intelligent radio network that can automatically optimize and adjust the network based on specific requirements/objectives/commands or environmental changes. The related research includes higher-layer protocols, network architecture and networking technologies enabling an intelligent radio network.
Numerous use cases of AI-empowered network automation have been proposed, including fault
recovery/root cause analysis, AI-based energy optimization, optimal scheduling, and network
planning. Key training challenges have been identified: lack of performance bounds, lack of explainability, uncertainty in generalization and lack of interoperability, all of which must be addressed to realize full network automation. Four types of analytics can be distinguished for future AI-native networks: descriptive, diagnostic, predictive and prescriptive analytics.
The future RAN will be able to perceive and adapt to complex and dynamic environments by
monitoring and tracking conditions in the radio network while diagnosing and restoring any RAN
issues in an automated fashion. To achieve autonomy for its full life cycle management, at least the
following novel networking technologies could be considered: 1) efficient and intelligent network
telemetry technologies that leverage AI to apply management operations based on a collection of
historical and live network data; 2) automated network management and orchestration technologies
that continuously seek the optimal state of the RAN and enforce management operations accordingly;
3) technologies that automatically perform life cycle management operations, adjust configurations on radio network elements, and optimize new services and features during and after deployment; and 4) AI-based assistance, in particular for aspects such as forecasting, root cause analysis, anomaly detection and intent translation.
Examples of the proposed areas to be investigated are presented next.
Intelligent data perception: Transporting large quantities of data will burden every network interface. Moreover, data sensed from the radio environment sometimes lack the corresponding labels. Intelligent data perception, e.g. utilizing generative adversarial networks (GANs) to generate data that simulate real data, will avoid transferring large amounts of data over interfaces and protect data privacy to a certain degree. To further this vision of zero-touch network management, an open network data set and open ecosystem could be established.
Introducing user feedback: User feedback can also be introduced into the decision-making process of the network to improve the decisions of AI algorithms and to help the machine better understand user preferences and make decisions more aligned with them.
Pervasive computation nodes: In future IMT-systems, more computation nodes will be required to
support highly computationally intensive services. Thus, computation nodes will be pervasive from
core to edge and from network to device. To cope with this trend, the control and user planes of the
network for future IMT-systems could be redesigned, and emerging technologies such as
programmable switches and distributed/federated learning could be adopted.
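As a hedged illustration of the federated learning direction mentioned above (not an architecture defined in this Report), the following sketch runs a few rounds of federated averaging over synthetic local datasets; the linear model, node count and all parameters are assumptions made for the example.

```python
# Hypothetical sketch (not from the Report): federated averaging, where edge
# nodes train locally on private data and only model weights are aggregated,
# never the raw data.
import numpy as np

rng = np.random.default_rng(4)
n_nodes, n_features, lr, local_steps = 5, 3, 0.1, 50
w_true = rng.normal(size=n_features)                    # assumed ground-truth model

def local_data(n=200):
    X = rng.normal(size=(n, n_features))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

datasets = [local_data() for _ in range(n_nodes)]
w_global = np.zeros(n_features)

for _ in range(10):                                     # federated rounds
    local_models = []
    for X, y in datasets:                               # each node trains locally
        w = w_global.copy()
        for _ in range(local_steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)       # least-squares gradient
            w -= lr * grad
        local_models.append(w)
    w_global = np.mean(local_models, axis=0)            # server averages the weights

print("distance to true model:", np.linalg.norm(w_global - w_true))
```

Only model parameters cross the network in each round, which is what makes the approach attractive when data locality or privacy constraints prevent central collection.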
Supply of on-demand capability: To support services in multiple application scenarios, an intelligent network is needed. In the AI-native radio network, AI no longer merely optimizes the wireless resources of the network; it becomes an intelligent system integrated with the radio network that can supply capabilities on demand.
Collaboration of sensing and AI: In order to realize the intelligence of the radio network, the new functions of sensing and AI need to be supported. The end-to-end collection, processing and storage of network data can be realized through the data sensing function, and the AI function can use these data on demand to support different application scenarios. In this way, AI capabilities can be utilized and supported more efficiently and globally.
Distributed and unified AI control: The AI system in an AI-native radio network is expected to be distributed over various network functions. For example, AI algorithms running on different functions or AI models trained on different functions are integral components of this distributed AI system. There should be a unified AI control centre for the distributed AI system; under the control or coordination of this AI control centre, each component of the distributed AI system independently completes its assigned tasks, interacts with other components and reports measurements to the control centre. By doing this, the distributed AI system is expected to form an end-to-end solution.
Adaptive solutions for different usage: AI techniques can be used to target one or more wireless domains, including non-real-time network orchestration and management, such as configuration of antenna parameters, and near-real-time network operation, e.g. load balancing and mobility robustness optimization. Each wireless domain involves different sets of physical and virtual components, families of parameters including KPIs, underlying complexities, and time constraints for updates. Hence, there
is a need to consider tailored AI solutions for different classes of the RAN, and their associated
problems. There already exists a rich body of research and practical demonstrations of the potential
benefits of AI for wireless, including significant network energy savings.
5.1.3 Radio network to support AI services
The radio network will migrate from treating AI as an over-the-top application towards the AI era. Wireless networks should consider AI applications and paradigms that require the exchange of large volumes of data, ML models and inference data between different entities in the networks. Long-term platform technologies are needed to better support AI services, which will have a significant impact on the design of future radio networks, i.e. the radio network for AI. Distributed and collaborative ML is required to make efficient use of distributed computing and communication resources and to comply with local data governance and privacy requirements. Hence, data-split and model-split approaches will be emphasized in future research. The impacts of this on future network design are threefold:
Shift from downlink (DL)-centric radio to uplink (UL)-centric radio: Unlike the current DL-centric radio, which usually supports heavier traffic and better QoS in the DL, AI requires more frequent model and data exchanges between a BS and the users it serves. The UL should therefore be reconsidered in network design to attain balanced, efficient and robust distributed ML.
Shift from the core network to the deep edge: The locality of data and the computing/communication needed for deep ML pose big challenges to the end-to-end delay. To mitigate this, the network as well as the corresponding protocols should be redesigned. One such research direction is to place the major learning processes and threads close to the edge, thus forming a deep edge which can greatly mitigate the system delay.
Shift from cloudification to ML: Due to the distributed nature of data and computing power, the communication and computing procedures of an ML algorithm often take place across the whole network, from the cloud to the edge and the devices. Therefore, traditional cloudification should also be reconsidered to become application-centric, i.e. to meet the specific needs of more general distributed ML applications with proper deployment of computing and communication resources.
In addition, future data-intensive, real-time applications require distributed AI/ML solutions. These
solutions support augmenting human decision processes, developing autonomous systems from small devices to complete factories, and optimising network performance while marshalling the billions of IoT devices expected to be interacting in the future. Since heterogeneous IoT devices are not as reliable as high-performance centralized servers, distributed and self-organising schemes are essential to provide strong robustness against device and link failures. Currently there are still many open questions in fulfilling the requirements of true distributed AI/ML solutions, such as data and resource distribution, distributed and online model training, as well as AI inference based on those models across multiple heterogeneous devices, locations and domains of varying context-awareness. The
future network architecture is expected to provide native support for radio-based sensing and, through
versatile connectivity, accommodate ultra-dense sensor and actuator networks, enabling hyper-local
and real-time sensing and communication.
hardware, signalling, protocol, networking and others, achieving mutual promotion and benefits.
Further combined with technologies such as AI, network cooperation and multi-node cooperative sensing, the ISAC system will have benefits in terms of enhanced mutual performance and the overall cost, size and power consumption of the whole system.
The capabilities of ISAC enable many new services which the mobile operators can offer, including
but not limited to extremely high accuracy positioning, tracking, imaging (e.g. for biomedical and
security applications), simultaneous localization and mapping, pollution or natural disaster
monitoring, gesture and activity recognition, flaw and materials detection. These capabilities enable
application scenarios in future consumer and vertical applications in all forms of business such as
context-aware immersive human-centric communications, industrial automation, connected
automated vehicles and transportation, energy and healthcare/e-health.
Communication and sensing services need to share available hardware and waveforms while fusing
information from distinct sources of measurements in the network deployment area. Research
challenges remain in areas such as system level design and evaluation methodologies to characterize
the fundamental trade-offs of the two functions in the integrated system, the solutions to deal with
the increased sensitivity to hardware imperfections, joint waveform design and optimization and
others.
These new technological trends present new technology challenges related to scalability, dynamic
workload distribution, and data collection/management/sharing. Scalability is one such challenge. In
modern cloud computing, computing resources are often centralized in a few national or regional data
centres. Centralized service discovery and orchestration mechanisms are given full visibility of
computing resources and services in data centres. The centralized approach is no longer scalable when
computing resources and services become more widely distributed. Therefore, a more scalable
approach is required for widely distributed computing resources.
Dynamic computing workload distribution is another challenge. Modern workload distribution
between devices and the cloud is based on a client-server model with a fixed workload partition
between a client and the cloud. The fixed workload partition is application-specific and is pre-
determined during the application development phase under the assumption that there are always
sufficient computing resources in the cloud to accomplish the server-side workload. As computing
resources become distributed, a scheme is needed that allows device computing to scale out dynamically based on various conditions, such as workload requirements and communication/computing resource availability. A dynamic computing scaling scheme can be enabled as an IMT system
capability with minimal dependency on applications to minimize the impact on them.
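A minimal sketch of such a dynamic split decision is given below; the per-layer compute times, intermediate data sizes and link rates are entirely hypothetical and serve only to show how the optimal partition moves with link conditions.

```python
# Hypothetical sketch (not from the Report): choosing the split point of a layered
# workload between device and edge so that total latency (device compute +
# uplink transfer + edge compute) is minimised under the current link rate.

# Assumed per-layer compute times (ms) and intermediate output sizes (Mbit).
device_ms = [4.0, 6.0, 8.0, 3.0, 2.0]       # slower on-device execution
edge_ms   = [0.5, 0.8, 1.0, 0.4, 0.3]       # faster edge execution
out_mbit  = [8.0, 4.0, 1.0, 0.5, 0.1]       # data produced after each layer
input_mbit = 16.0                            # raw input size if everything runs at the edge

def best_split(uplink_mbps: float) -> tuple[int, float]:
    """Return (split, latency_ms) where layers [0, split) run on the device."""
    best = (0, float("inf"))
    for split in range(len(device_ms) + 1):
        transfer = (input_mbit if split == 0 else out_mbit[split - 1]) / uplink_mbps * 1000
        latency = sum(device_ms[:split]) + transfer + sum(edge_ms[split:])
        if latency < best[1]:
            best = (split, latency)
    return best

for rate in (2.0, 20.0, 200.0):              # Mbit/s
    split, latency = best_split(rate)
    print(f"{rate:6.1f} Mbit/s -> run {split} layer(s) on device, total {latency:.1f} ms")
```

Exposing such a decision as a network capability, rather than hard-coding it per application, is one way to read the paragraph above.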
Additional challenges are data collection, synchronization, processing, management and sharing.
More specifically, with the widespread application of AI in society/industry, a systematic approach
to collecting, processing, managing and sharing data to facilitate AI/ML is vital. Split computing also
requires the synchronization of a large volume of data, context, and the software among network
entities. Conventional data management functions in cellular networks focus on managing
subscription information and policies. In IMT-2020, a network data analytics function was added to
the specifications through which the measurement data of network functions can be collected and
used for analytics. It is expected that future IMT towards 2030 and beyond will have further diversification of data sources, types and consumption. Therefore, it is expected that data plane functions will be part of the IMT system function from the beginning and will provide support for full-blown data services to devices, network functions and applications.
time and significant signalling overhead caused by the frequent movement of nodes. Therefore, the
integrated design of short-range and cellular technologies may help the sidelink to achieve optimized system-level performance. How to increase the integration efficiency, as well as how it can coexist with other systems in the same spectrum, needs further research.
5.4.2 Cooperation with peripheral devices
In this technology, a UE is connected to its peripheral devices using THz broadband radio: the peripheral devices exchange data signals with the UE in the THz band and also exchange data signals at different (lower) frequencies with BSs (operating in, e.g. the mmWave bands and the sub-6 GHz bands), connecting to the APs located at the BS. Here, the peripheral devices play a mediating role between a UE and the BS with its AP.
Generally, THz radio has been investigated for use in fixed and long-range radio applications such as wireless backhaul, where the range depends upon path loss, antenna gains and rain fading. However, it is expected that THz radio could also be widely applied in such short-range use cases.
To achieve the exchange of information with sufficiently high quality and quantity to satisfy the diverse demands of individual users, UEs face significant limitations in terms of their size, which limits the number of integrated antennae and the maximum transmission power. It is impractical to increase the size of UEs to alleviate constraints such as the number of antennae, and the performance of UL communication is vastly inferior to that of DL communication.
Therefore, a cooperation technique between various peripheral devices that communicate with UEs
is needed. Specifically, through the cooperation, it would be possible to solve issues arising from the
constraints caused by a single user device, such as power transmission and the number of integrated
antennae. For example, peripheral devices around UEs, such as PCs, watches, glasses (smart
glasses), or self-driving cars, can become wireless devices and cooperate with each other, making it
possible to overcome transmission power constraints in a single user terminal, and to virtually
overcome limitations in the number of antennae. When riding in a car with a UE, the antenna on the
car can also be used virtually as the UE’s antenna to enhance the communication performance.
Typically, communication between a UE and its peripheral devices requires a short-range but
extremely wideband signal transmission. Since the capabilities required for wireless signal processing
are limited in small devices such as watches and glasses, complex wireless signal processing should
be avoided. Therefore, it is expected that the above-mentioned technology will be introduced.
packaging. It is noteworthy that operating from around 100 GHz could require up to an order-of-
magnitude more elements relative to arrays deployed in millimetre waves, which could pose design
challenges.
between backscattering signals and source signals, and limited communications range and data rates.
Accordingly, the techniques for backscattering communication include modulation and channel
coding, signal detection algorithms, interference coordination techniques, combinations with MIMO
technology, multi-user access approaches and others.
5.6.2 On-demand access technologies
An on-demand passive device with a triggered wake-up receiving chain is an alternative approach to low-power communication. The on-demand passive device stays in sleep mode with zero power consumption and has its receiver woken up when the network sends wake-up signals upon data arrival, which turns on the transceiver and switches the device to the connected state. In particular, a zero-energy passive device for triggering wake-up would be useful for machine-type communication, wearable devices, health devices and general mobile phones. The next generation wireless system needs to design the network and control signalling for this on-demand access.
Mobile devices can support on-demand network access based on backscattering technology to minimize power consumption. Receiver sensitivity is the main challenge of on-demand network access with a passive wake-up device, as it limits the coverage of the passive wake-up device. To accommodate the low receiver sensitivity of the front-end passive device, the wake-up signals need to be transmitted at a much higher density than in traditional BS deployments. Besides making blanket coverage for on-demand network access difficult to achieve, this requirement will increase the network energy consumption for tracking mobile devices with front-end wake-up passive devices. Moreover, the low receiver sensitivity would hinder the coverage and development of the next generation wireless network.
eavesdropper capabilities and position. Research on the secrecy guarantees at finite block lengths has
allowed better understanding of the trade-off between secrecy rate, error rate and information leakage.
Finally, there is the ability to carry out anomaly detection at the physical layer. There are lightweight
proposals for distributed anomaly detection by observing metrics such as transmission and reception
times and energy and memory usage, and these could be used to protect radio interfaces of IMT
towards 2030 and beyond.
The throughput of a single decoder in a future device will reach hundreds of Gbit/s. Infrastructure
links are even more demanding since they aggregate user throughput in a given cell or virtual cell,
which is expected to increase due to spatial multiplexing. However, it will be difficult to achieve such a high throughput within ten years by relying only on progress in integrated-circuit manufacturing technology. Solutions must be found on the algorithm side as well.
To further develop IMT technologies, advanced coding schemes need to be investigated, including advanced versions of polar coding, low-density parity-check (LDPC) coding and other coding technologies. Owing to the diverse demands, the advanced codes should demonstrate superior performance over a wide range of code lengths and rates, support flexible choices of decoders and preferably be unified into a single framework. To achieve higher throughput than legacy IMT systems, both the code design and the corresponding encoding/decoding algorithms need to be taken into account to reduce decoding complexity and improve decoding parallelism. It is also vital for a channel decoder to maintain a reasonable power consumption level. Considering the dramatically increased throughput requirement, the energy consumption per bit needs to be reduced further by at least one to two orders of magnitude. It is also expected that some emerging scenarios such as new verticals and intelligent services may require novel coding schemes.
Application scenario-oriented designs need to be considered. For example, for the mixed scenario of
“eMBB+URLLC”, the design of forward error correction (FEC) code needs to consider “higher code
rate (for higher data rate) + stronger error correction ability (for higher reliability) + lower error floor
(for shorter latency due to reducing the number of hybrid automatic repeat request (HARQ)
retransmissions)”. In future high-reliability scenarios, channel coding schemes should provide a lower ‘error floor’ and better ‘waterfall’ performance than in IMT-2020. Short and moderate length codes with excellent performance need to be considered. Accordingly, at the BS side, depending on application scenarios and user device types, adaptive switching between different FEC coding schemes, or an FEC coding scheme with different parametrisations, is worth considering.
In addition, new coding strategies should encompass both FEC and novel iterative re-
transmission/feedback mechanisms. This is particularly the case for applications which require short
packets, such as in IoT systems. LDPC codes and Polar codes that have short block lengths have been
employed for IMT-2020 systems for use in traffic and control UL/DL channels. On one hand, codes
with short block lengths are less reliable, such that error-free transmission cannot easily be
guaranteed. An increase in the error probability may increase the need for automatic repeat request
(ARQ) re-transmissions, which may not be suitable for time sensitive applications requiring ultra-
low latencies. On the other hand, codes with longer block lengths imply increased latency. To
this end, the interplay between the minimum required block length and robustness against
transmission errors needs to be optimized. Furthermore, low energy applications are often not well
suited to ARQ, since this requires leaving the device in a non-sleep mode for an extended period of
time, leading to an increase in energy consumption.
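One standard way to reason about this block-length/robustness interplay, taken from the finite-blocklength literature rather than from this Report, is the normal approximation of the maximal achievable rate:

```latex
R(n,\epsilon) \;\approx\; C \;-\; \sqrt{\frac{V}{n}}\,Q^{-1}(\epsilon) \;+\; \frac{\log_2 n}{2n}
```

where n is the blocklength, ε the target block error probability, C the channel capacity, V the channel dispersion and Q⁻¹ the inverse Gaussian tail function; shorter blocks therefore pay an explicit rate penalty for a given reliability target, which is the trade-off the paragraph above refers to.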
6.1.3 Advanced waveforms
Over the past decade, OFDM has, by far, become the most dominant modulation format. It is being
applied in the DL for both IMT-Advanced and IMT-2020. For some future applications, OFDM may
still be retained due to backward compatibility. However, some effects of OFDM like sensitivity to
frequency dispersion and high PAPR may become more critical at mmWave and THz frequencies. In
addition, future IMT systems will face unprecedentedly complex communication scenarios, where enhanced waveform design may be beneficial in specific scenarios to guarantee the desired performance; for example, in scenarios with high mobility or in high frequency bands, the orthogonality between subcarriers may no longer be maintained. Also, in scenarios where low PAPR is needed, such as for low-cost devices or to reduce the impact on the power amplifier in very high frequency bands (e.g. sub-THz), new waveform designs should be investigated. DFT-s-OFDM is a variant of OFDM which provides low PAPR and is already used in current and previous IMT systems, and should therefore also be considered as a baseline for low-PAPR waveforms in future IMT systems.
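A small numerical sketch of the PAPR advantage that motivates this choice is given below; the subcarrier counts, modulation and localized mapping are illustrative assumptions, not parameters taken from any IMT specification.

```python
# Hypothetical sketch (not from the Report): comparing the peak-to-average power
# ratio (PAPR) of plain OFDM and DFT-spread OFDM carrying the same QPSK data.
import numpy as np

rng = np.random.default_rng(5)
n_sc, n_fft, n_symbols = 256, 1024, 2000     # occupied subcarriers, FFT size (assumed)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max(axis=-1) / p.mean(axis=-1))

def qpsk(shape):
    return ((2 * rng.integers(0, 2, shape) - 1) + 1j * (2 * rng.integers(0, 2, shape) - 1)) / np.sqrt(2)

data = qpsk((n_symbols, n_sc))

# Plain OFDM: QPSK symbols mapped straight onto contiguous subcarriers.
grid = np.zeros((n_symbols, n_fft), complex)
grid[:, :n_sc] = data
ofdm = np.fft.ifft(grid, axis=-1)

# DFT-s-OFDM: symbols are pre-spread with a DFT before subcarrier mapping.
grid[:, :n_sc] = np.fft.fft(data, axis=-1) / np.sqrt(n_sc)
dfts = np.fft.ifft(grid, axis=-1)

print(f"median PAPR  OFDM: {np.median(papr_db(ofdm)):.1f} dB   "
      f"DFT-s-OFDM: {np.median(papr_db(dfts)):.1f} dB")
```

The DFT pre-spreading makes the transmitted time-domain samples resemble the single-carrier constellation, which is what keeps the peaks down.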
Modulation methods can be classified under orthogonal, bi-orthogonal and non-orthogonal
categories. Besides classical OFDM, other orthogonal techniques include null suffix OFDM, filtered
multitone, universal filtered multicarrier (UFMC), lattice OFDM, Filter Bank OFDM and staggered
multitone (FBMC). Among bi-orthogonal methods there exist cyclic prefix OFDM, windowed OFDM and bi-orthogonal frequency-division multiplexing (FDM). Non-orthogonal schemes, which need to eliminate inter-symbol interference via more complex receivers, include generalized FDM (GFDM) and faster-than-Nyquist signalling.
Historically, Doppler frequency shifts (or their dual, time-varying effects) have long been considered a degree of freedom that can provide additional diversity gain. Transformed-domain waveform design, i.e. orthogonal time frequency space (OTFS), is an effective approach to harvest the gain of Doppler-domain diversity when the waveform can extend sufficiently in time to enable fine enough Doppler resolution. Moreover, for high-speed scenarios, OFDM with advanced reference signal design also has the capability to track the time-varying channel caused by the Doppler effect. Thus, further enhancements based on the OFDM waveform can also be investigated in the future.
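For readers unfamiliar with OTFS, the following toy sketch (not from this Report; grid sizes and the rectangular-pulse simplification are assumptions) shows the basic modulation chain and verifies perfect symbol recovery over an ideal channel.

```python
# Hypothetical sketch (not from the Report): the basic OTFS chain -- QPSK symbols
# on a delay-Doppler grid, ISFFT to the time-frequency grid, then a simplified
# Heisenberg transform (per-slot IFFT) to the time domain, and the reverse path.
import numpy as np

rng = np.random.default_rng(6)
M, N = 32, 16                                   # delay bins, Doppler bins (assumed)

x_dd = (rng.choice([-1, 1], (M, N)) + 1j * rng.choice([-1, 1], (M, N))) / np.sqrt(2)

# ISFFT: unitary FFT along the delay axis, unitary IFFT along the Doppler axis.
x_tf = np.fft.ifft(np.fft.fft(x_dd, axis=0), axis=1) * np.sqrt(N / M)

# Simplified Heisenberg transform (rectangular pulse): per-slot IFFT to the time domain.
s = np.fft.ifft(x_tf, axis=0) * np.sqrt(M)
tx = s.reshape(-1, order="F")                   # serialise slot by slot

# Ideal-channel receiver: Wigner transform, then SFFT back to the delay-Doppler grid.
y_tf = np.fft.fft(tx.reshape(M, N, order="F"), axis=0) / np.sqrt(M)
y_dd = np.fft.fft(np.fft.ifft(y_tf, axis=0), axis=1) * np.sqrt(M / N)

print("max reconstruction error:", np.max(np.abs(y_dd - x_dd)))
```

The benefit claimed for OTFS arises when a doubly dispersive channel sits between transmitter and receiver, where each delay-Doppler symbol sees a nearly flat effective channel; this ideal-channel sketch only shows the transform chain itself.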
It is likely that for future IMT applications, OFDM may still be retained due to backward
compatibility. Nonetheless, it has long been pointed out that OFDM has several drawbacks arising in
non-ideal situations, which motivates further research into either modified multicarrier systems or
other alternatives.
6.1.4 Multiple access
Multiple access technology is the key technology enabling a large number of users to share the overall radio resources, and it has been a cornerstone of the evolution of wireless standards. It can increase the capacity of the system and allow different users to access the system simultaneously. Generally, in terms of resource sharing, multiple access can be classified into orthogonal multiple access (OMA) and non-orthogonal multiple access (NOMA), while in terms of the access procedure it can be classified into grant-based multiple access and grant-free multiple access.
Orthogonal multiple access has been the longest-adopted multiple access approach, from the earliest cellular communication systems to IMT-2020. As an advanced form of FDMA, orthogonal frequency-
division multiple access (OFDMA) scheme has been utilized in both IMT-Advanced and IMT-2020
systems. On the other hand, NOMA also allows multiple devices to share the same physical time and
frequency resources, thereby efficiently connecting a large number of sporadically transmitting
devices. However, the success of this approach primarily depends on both user detection and data
decoding on the shared resources.
The requirements for future networks are very challenging, and the KPIs vary considerably from
application to application. Multiple access techniques require a re-think in IMT towards 2030 and
beyond, especially due to the integration of massive connectivity and extremely low energy
applications. Current systems use contention access methods and/or non-contention access methods
such as orthogonal time-frequency division multiple access for cellular systems. However, these
multiple access schemes do not scale well to scenarios where thousands of devices or more aim to
access a single BS, but with a low duty cycle.
Therefore, new multiple access schemes should be highly dynamic and application-oriented, for example with new structures that allow better scaling and possibly further reduce latency. Different types of multiple access techniques should be usable under one ubiquitous umbrella. There are several promising candidates for this.
Multiple access via massive MIMO could be one candidate with strong potential. It can provide a very directive beam for each device, enabling multiplexing with very low inter-beam interference. Despite associated drawbacks such as cost (e.g. RF chains) and complexity, recent progress in massive MIMO (e.g. holographic or lens antenna arrays) theoretically shows that these challenges could be overcome. However, massive MIMO is not a one-size-fits-all solution; in fact, the large number of antennas makes it unsuitable for specific scenarios (e.g. IoT applications). For these scenarios, there are other promising NOMA solutions such as multi-user shared access (MUSA), pattern division multiple access (PDMA), sparse code multiple access (SCMA) and cyclic prefix code division multiple access (CP-CDMA), each of which can provide very spectrally efficient solutions to overcome the resource block bottleneck in the sub-6 GHz bands.
For IMT towards 2030 and beyond, the usage of NOMA in diversified scenarios can be further investigated and identified to provide better performance. A fundamental rethink of the conventional
multiple access technologies is required in favour of grant-free schemes suited for massive random
access. The further development of NOMA is expected to meet future requirements including more
massive connectivity, higher spectral efficiency, low latency and lower implementation complexity,
and to provide differentiated service capabilities.
The evolution of NOMA should consider identifying the potential application scenarios that can
reflect the NOMA gain and the evolution of NOMA technology itself. Depending on future
requirements and the characteristics of NOMA technology, potential application scenarios that can reflect the NOMA gain, particularly under massive connectivity, should be identified. For example, in the massive connectivity application scenario, more sequences need to be generated to support the simultaneous transmission of large numbers of terminals.
Besides the above discussion of multiple access schemes from the perspective of resource sharing, multiple access can also be viewed from the access procedure, where it is classified as grant-based or grant-free multiple access.
In grant-based multiple access, users (i.e. the transmitters) are coordinated by a central unit (i.e. an AP
or a BS) prior to the transmissions and each user is assigned a unique signalling/signature which can
be used by the receiver to perform detection. Grant-based multiple access requires dedicated multiple
access protocols to coordinate the communication of the accessible users in the systems. Grant-based
multiple access technology has become mature and been adopted by various wireless communication
standards.
However, grant-based multiple access technology, designed for current human-centric wireless networks, is not appropriate for future autonomous thing-centric wireless networks that must support millions of devices. On the other hand, grant-free multiple access does not require extensive coordination among the users and can more efficiently handle low latency requirements, scheduling information deficiency, or the bursty and random access pattern of user activity. Grant-free multiple access technology has so far mainly been used in initial access, and several challenges must be overcome to realize it more broadly. Challenges include the
performance limits of massive bursty devices simultaneously transmitting short packets, the
requirements of low complexity and energy-efficient coding and modulation schemes for massive
access, and efficient detection methods for a small number of active users with sporadic transmission.
For example, it is reported that spectrally efficient URLLC multiple access, scheduling, and protocols
need to be developed for broadband URLLC, and a grant-free based multiple access is required for
massive URLLC. It is noted that 1) ultra-broadband transmission techniques utilizing new spectrum
or antenna technology need to be considered; 2) spectrally efficient protocols, channelization, and scheduling need to be further developed for guaranteeing URLLC QoS; and 3) multiple access schemes
supporting both massive connectivity and ultra-low latency need to be developed.
Moreover, collisions of multiple packets in the same slot are one of the main challenges in NOMA research. To resolve these collisions and support grant-free transmission with high user loading, non-orthogonal physical layer design should be considered. NOMA has been well researched in grant-based schemes. However, in grant-free transmissions, global power control, resource allocation and configuration cannot be used, which makes it challenging to deal with inter-user interference (IUI). The one-dimensional discrimination in the power domain, brought about by the near-far effect of grant-free transmission, is not enough to deal with severe IUI. Therefore, higher-dimensional domains such as the code domain and the spatial domain should be considered. In code-domain grant-free schemes, the transmitters preconfigure or randomly select their non-orthogonal spreading codes. At the receiver side, the codes are detected and used to alleviate IUI. Prior knowledge of the statistical properties of the data (e.g. constellation shape), the codebook and the CRC result should be fully utilized for advanced blind detection.
Compressed sensing (CS)-based random access has re-emerged as a signal processing technology for massive connectivity with grant-free or unsourced random access. A typical IoT network involves sporadic traffic patterns because only a small subset of devices is active in any given time slot, so as to minimize energy consumption. Since active devices initially send their unique preambles (metadata) to the base station before transmitting their data signals, CS can be applied effectively to detect the active devices and estimate their channels from the metadata transmitted by the IoT devices. Grant-free or unsourced random access can reduce the signalling overhead and improve energy efficiency, at the expense of high computational complexity at the base station.
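A hedged sketch of this detection step is given below; the dimensions, the Gaussian preamble design and the use of orthogonal matching pursuit are illustrative assumptions rather than a scheme specified in this Report.

```python
# Hypothetical sketch (not from the Report): compressed-sensing style activity
# detection for grant-free access -- a few devices transmit their preambles
# simultaneously, and the base station recovers the active set greedily (OMP).
import numpy as np

rng = np.random.default_rng(7)
n_devices, preamble_len, n_active = 200, 64, 5                  # assumed dimensions

# Each device has a unique random preamble (one column of A).
A = (rng.normal(size=(preamble_len, n_devices))
     + 1j * rng.normal(size=(preamble_len, n_devices))) / np.sqrt(2 * preamble_len)
active = rng.choice(n_devices, n_active, replace=False)
gains = rng.normal(size=n_active) + 1j * rng.normal(size=n_active)  # channel gains

y = A[:, active] @ gains + 0.05 * (rng.normal(size=preamble_len)
                                   + 1j * rng.normal(size=preamble_len))

# Orthogonal matching pursuit: repeatedly pick the preamble most correlated
# with the residual, then re-fit the selected columns by least squares.
support, residual = [], y.copy()
for _ in range(n_active):
    idx = int(np.argmax(np.abs(A.conj().T @ residual)))
    support.append(idx)
    x_hat, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ x_hat

print("true active set    :", sorted(active.tolist()))
print("detected active set:", sorted(support))
```

In practice the number of active devices is unknown and the stopping rule, preamble design and channel model all matter; the sketch only shows why sparsity makes the detection problem tractable.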
enhance multi-user transmission capabilities of network, which is crucial for improving spectral
efficiency.
Lens antennas can be used to increase array gain. However, in practice this may be difficult to achieve due to the presence of aberration and spill-over losses. Antennas in the sub-THz region will need to be tightly integrated with the packaging, RF circuits and systems in order to minimise interconnection losses between radio transceivers and antennas and thus improve the radiated radio performance.
Future communication systems with many antennas could also be built from multiple chips, each containing on-chip antennas in the same semiconducting substrate as the active circuitry. Nanotechnology opens new perspectives for THz communication by enabling the design and manufacture of nanoscale electronic devices and systems in the terahertz range. Graphennas, i.e. graphene-based plasmonic nano-antennas, provide a technology to radiate electromagnetic waves with competitive conductivity at frequencies above 100 GHz. Different kinds of metasurfaces can be added as part of the antenna structure to improve the antenna gain, isolation, reflectivity or other properties.
Besides scaling up the antenna array, new types of antenna arrays can be applied for better performance, various array sizes, lower cost or more convenient deployment. These new types include various reconfigurable antenna arrays, passive/active antenna arrays with new RF architectures, antenna arrays with new materials, spatially continuous transceiver apertures and so on.
As the performance of E-MIMO will ultimately be limited by the propagation channels, the
propagation channel is also a vital topic. As the number of antenna elements is increased, the total
physical aperture of the radiating elements is also increased, and effects such as wavefront curvature
due to scattering in the near field of the array, shadowing differences in different parts of the array,
and beam squinting due to the non-negligible run time of the signal across the array, start to become
much more pronounced. When this happens, conventional propagation theories and results exploiting
the plane wave assumption start to break down. All of these physical artifacts need to be taken into
account in the design and implementation of beamforming architectures and signal processing
algorithms at the transmitter and receiver.
Ideally, E-MIMO should be implemented using fully digital arrays with hundreds or thousands of
phase-synchronized antennas. While this could be practically possible in both sub-6 GHz and
mmWave bands, the implementation complexity grows with the carrier frequency. For some
IMT-2020 deployments, digital beamforming remains the choice of interest, due to its ability to
provide a higher beamforming gain, while utilizing the channel’s spatial degrees-of-freedom. In sharp
contrast, most current commercial deployments at mmWave frequencies use analogue beamforming
to explicitly steer the array gain in desired directions. This is because digital beamforming at mmWave
frequencies yields high circuit complexity, energy consumption and cost of operation. In the future,
closer investigations of fully digital implementations at mmWave frequencies are merited. For
instance, new device technologies, potentially leveraging new materials, can be utilized to implement
on-chip compact ultra-massive antenna arrays that can potentially enable fully digital architectures.
New implementation concepts are needed that do not involve the suboptimal beam-space paradigm,
as in the case of hybrid beamforming, but can make use of all the spatial dimensions. In addition, the
compromise solution of hybrid beamforming striking the right balance between processing in the
analogue and digital domains has also received considerable attention.
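As a hedged illustration of the hybrid compromise, the sketch below (antenna counts, user angles and the simple channel model are assumptions for illustration only) points one phase-only analogue beam per RF chain at each user's dominant path and applies a small digital zero-forcing precoder on the resulting effective channel:

```python
# Hedged sketch of hybrid beamforming (not the Report's design): a few RF
# chains drive phase-only analogue beams towards the users' dominant paths,
# and a small digital zero-forcing precoder removes residual inter-user
# interference. Geometry, sizes and channel model are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_users = 64, 4                      # BS antennas, single-antenna users
angles = np.deg2rad(rng.uniform(-60, 60, n_users))

def steer(theta, n):                       # ULA steering vector, half-wave spacing
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

# Mostly line-of-sight channels plus weak scattering.
H = np.stack([np.sqrt(n_tx) * steer(a, n_tx)
              + 0.1 * rng.standard_normal(n_tx) for a in angles])   # users x n_tx

# Analogue stage: one phase-only beam (one RF chain) per user.
F_rf = np.stack([np.exp(1j * np.angle(steer(a, n_tx))) for a in angles], axis=1)
F_rf /= np.sqrt(n_tx)                      # n_tx x n_users, unit-modulus columns

# Digital stage: zero-forcing on the small effective channel H @ F_rf.
H_eff = H @ F_rf                           # n_users x n_users
F_bb = np.linalg.pinv(H_eff)
F = F_rf @ F_bb
F /= np.linalg.norm(F)                     # total power constraint

print("effective channel after hybrid precoding (≈ scaled identity):")
print(np.round(np.abs(H @ F), 3))
```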
With an optimal beamforming architecture, E-MIMO provides extremely high spatial resolution and is capable of achieving very high positioning accuracy. By equipping antenna arrays at both sides of a communication link, the orientation of the user device can also be obtained in addition to 3D positioning. Furthermore, in the THz frequency band, highly directional pencil-beam antennas enable sensing applications capable of creating high-definition images of the environment and surrounding objects. Such high-resolution scanning in the beam-space domain has the potential to create detailed real-time 3D maps, thus enabling elaborate digital twin applications in industrial use cases, or
connected via fronthaul connections to CPUs that coordinate transmissions from BSs phase-coherently, and the CPUs are interconnected via backhaul connections. When time division duplex (TDD) is used to ensure scalability in massive MIMO systems, channel estimation and precoding can be performed by each BS by exploiting channel reciprocity; therefore, no instantaneous CSI needs to be sent over the fronthaul.
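A minimal sketch of this reciprocity-based operation is given below (the number of access points, antennas and the pilot noise level are illustrative assumptions): each access point estimates its own uplink channel from a pilot and locally applies conjugate (maximum-ratio) precoding for the downlink, so no instantaneous CSI crosses the fronthaul:

```python
# Minimal sketch (assumed setup) of reciprocity-based operation in a cell-free
# deployment: each access point estimates the uplink channel from a pilot and
# reuses the conjugate locally for downlink maximum-ratio transmission.
import numpy as np

rng = np.random.default_rng(2)
n_aps, n_ant, snr_pilot = 8, 4, 100.0        # APs, antennas per AP, pilot SNR

h = [rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant) for _ in range(n_aps)]

# Uplink pilot phase: each AP forms a local (noisy) channel estimate.
h_hat = [hk + (rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant))
         / np.sqrt(2 * snr_pilot) for hk in h]

# Downlink phase: local conjugate (MRT) precoding at every AP; the signals add
# coherently at the UE because TDD reciprocity makes h_hat ≈ h.
s = 1.0 + 0.0j                               # unit data symbol
rx = sum(hk @ (np.conj(hk_hat) / np.linalg.norm(hk_hat)) * s
         for hk, hk_hat in zip(h, h_hat))
print(f"coherent received amplitude from {n_aps} APs: {abs(rx):.2f}")
```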
A new type of distributed TRxP is a transparent forwarding TRxP with limited processing capability, whose beamforming parameters are controlled by a BS or a CPU. Such a TRxP may also use a RIS as its transmission/reception antenna. It differs from a traditional relay-based distributed MIMO system in that signals can be reflected/transmitted directly and full duplex can be realized at the TRxP. For a distributed E-MIMO system with this new type of TRxP, the design of the following factors can be taken into consideration: the interface between the distributed TRxP and the BS/CPU, the channel model, synchronization, interference coordination and others.
One of the main challenges in implementing cell-free massive E-MIMO networks is to achieve cost-efficient deployments while attaining sufficiently accurate network synchronization and satisfying the requirements for the fronthaul and backhaul connections. Another challenge is to realize the promised theoretical gains in practice for realistic scenarios with distances spanning up to hundreds of metres and variations in UE/scatterer mobility. Moreover, since UEs in cell-free E-MIMO networks can communicate with multiple access points at the same time, practical factors such as channel information acquisition, RF channel calibration, synchronization, precoding algorithms, etc. should be considered.
6.2.3 E-MIMO with AI assistance
Tools from AI/ML are particularly suited to dealing with the inherent complexities of E-MIMO operation. In addition to enabling solutions for dealing with various sources of RF impairment in MIMO transceivers, such as power amplifier non-linearities, the adoption of AI-based techniques to replace conventional signal processing blocks, such as channel estimation and detection, can lead to performance improvements and/or complexity reduction in the short and medium term.
AI-based techniques can be used effectively to continuously match reference signal density to channel variation, compress CSI feedback through accurate prediction, and reduce beam-pairing complexity. In a MIMO environment with many optimization factors, including the practical limitations of the ADCs and RF chains, several efforts aim to send and receive large amounts of information with low transmission power. The encoder and decoder responsible for compressing and reconstructing the information are treated as a single deep neural network and trained jointly. Combined with compressive sensing, CSI compression can be implemented through DL. However, a metric such as mean-squared error, typically used to evaluate image compression performance, is not necessarily indicative of link performance; it is therefore important to examine communication-relevant metrics.
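The sketch below illustrates the autoencoder idea in a hedged way (the layer sizes, codeword length and synthetic channel model are assumptions, loosely inspired by published CSI-feedback autoencoders rather than any scheme defined in this Report): a UE-side encoder compresses the CSI into a short codeword and a BS-side decoder reconstructs it, with both trained jointly:

```python
# Hedged sketch of DL-based CSI feedback compression; architecture, sizes and
# the synthetic channel model are illustrative assumptions.
import torch
import torch.nn as nn

n_ant, codeword = 64, 8                       # CSI vector length and feedback size

class CsiAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(2 * n_ant, 128), nn.ReLU(),
                                     nn.Linear(128, codeword))
        self.decoder = nn.Sequential(nn.Linear(codeword, 128), nn.ReLU(),
                                     nn.Linear(128, 2 * n_ant))
    def forward(self, x):
        return self.decoder(self.encoder(x))

def synthetic_csi(batch):
    """Sparse angular-domain channels -> spatially correlated CSI vectors."""
    g = torch.zeros(batch, n_ant, dtype=torch.cfloat)
    idx = torch.randint(0, n_ant, (batch, 3))
    g.scatter_(1, idx, torch.randn(batch, 3, dtype=torch.cfloat))
    h = torch.fft.ifft(g, dim=1) * n_ant**0.5
    return torch.cat([h.real, h.imag], dim=1)     # stack real/imag parts

model = CsiAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(500):                           # short illustrative training loop
    x = synthetic_csi(256)
    loss = nn.functional.mse_loss(model(x), x)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final normalized MSE ≈ {loss.item() / x.var().item():.3f}")
```

As noted above, the mean-squared reconstruction error used here is only a convenient training objective; communication-level metrics (e.g. the beamforming gain achievable with the reconstructed CSI) should ultimately be examined.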
Another representative example of solving a future IMT-system problem through AI is the beam selection problem. At millimetre wave frequencies, beamforming is essential owing to the relatively high propagation loss. The beam selection problem refers to selecting one of several beams stored at the BS. Instead of performing a full search over beams in all directions, the goal is to reduce the complexity of the search using AI. Studies have attempted to solve this problem through AI, particularly with DL-based methods. However, the selection and provision of training data is the primary issue when handling the beam selection problem with AI.
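A hedged sketch of the classification view of beam selection follows (array size, codebook, probing scheme and network architecture are illustrative assumptions): labels are produced by an exhaustive codebook sweep, and a small network learns to predict the best narrow beam from measurements on a few probing beams, replacing the full search at run time:

```python
# Illustrative sketch (assumed setup): beam selection cast as classification.
import math
import torch
import torch.nn as nn

n_ant, n_beams, n_probe = 32, 32, 8

def steer(theta):
    # ULA steering vector, half-wavelength spacing; theta in radians
    return torch.exp(1j * math.pi * torch.arange(n_ant) * torch.sin(theta)) / n_ant**0.5

codebook = torch.stack([steer(t) for t in torch.linspace(-1.2, 1.2, n_beams)])  # narrow beams
probes = codebook[:: n_beams // n_probe]                                        # coarse probing subset

def make_batch(batch):
    theta = (torch.rand(batch) - 0.5) * 2.4                   # random user directions
    h = torch.stack([steer(t) for t in theta])                # LOS-dominant channels
    label = (h.conj() @ codebook.T).abs().argmax(dim=1)       # exhaustive-search label
    feat = (h.conj() @ probes.T).abs()                        # cheap probing-beam powers
    return feat, label

net = nn.Sequential(nn.Linear(n_probe, 64), nn.ReLU(), nn.Linear(64, n_beams))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):                                          # short illustrative training
    feat, label = make_batch(256)
    loss = nn.functional.cross_entropy(net(feat), label)
    opt.zero_grad(); loss.backward(); opt.step()

feat, label = make_batch(2000)
acc = (net(feat).argmax(dim=1) == label).float().mean()
print(f"top-1 beam prediction accuracy ≈ {acc.item():.2f}")
```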
AI technology can also be used to assist RIS-based E-MIMO in realizing intelligent reconfiguration of the wireless communication environment, thereby enhancing network coverage, multi-user capacity and signal strength.
In today's practical cellular systems, a band has a fixed duplex scheme, i.e. either FDD or TDD. If deviating from this 'mutually exclusive' principle becomes a reality, it would be possible to adapt the duplex scheme in a dynamic manner. This would improve spectral efficiency as well as system operational flexibility.
There are significant challenges in varying the duplex scheme on existing links. Research into sub-band duplex will help identify the gains and the feasibility of full duplex under different interference scenarios.
– Phased antenna arrays using only a single RF input per array and a programmable distribution network to dynamically vary the directivity of the beamforming, enabling ultra-massive MIMO (thousands of elements) at low cost and low power;
– improved positioning and sensing performance;
– wireless power transfer and backscattering to reduce energy consumption for battery-powered devices.
For RISs to be ready for successful commercial deployment, several open research challenges need
to be addressed, including:
– development of accurate device electromagnetic models and channel models and their
experimental validation
– studying the fundamental limitations and potential gains of RIS-aided communication
systems and thereby identifying scenarios where deploying RIS offers advantages over
traditional relays and non-reconfigurable passive reflective structures
– passive beamforming design
– new channel estimation methods are required owing to the lack of RF chains at the RIS
– materials research and studies on hardware implementation issues
– real-time control protocols for RISs
– there are significant challenges in integrating active IRSs/RISs with the core network, since a dedicated link for control signalling between BSs and IRSs/RISs adds to the existing transport network a connection of unspecified bandwidth and causes a consequent increase in control-plane latency
– the bandwidth of the transport links to the RIS would depend on the number of elements at the RIS/IRS, the number of control bits per element, their refresh rates and the data frame transmission time intervals, and would also scale with the number of UEs; consequently, the feasible number of surfaces per sector/cell remains unclear
– the transport links would also have an impact on the network architecture that needs to be
analysed
– the vulnerability of the surface elements to inclement weather, which could lead to pixel failures.
RISs turn the wireless environment from a passive medium into an intelligent actor, making the channel programmable. Importantly, RISs are characterized by low cost, low power consumption and easy deployment. This new fundamental technology will therefore challenge basic wireless system design paradigms and create innovation opportunities that may progressively impact the evolution of wireless system architecture, access technologies and networking protocols over the next decade.
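The coverage and signal-strength benefit of a RIS can be illustrated with a minimal passive-beamforming sketch (single-antenna BS and UE, independent random channels and the element count are assumptions for illustration): each element applies the phase that co-aligns its cascaded BS-RIS-UE path, so the reflections add coherently at the UE:

```python
# Minimal sketch (assumed single-antenna BS and UE, random channels) of RIS
# passive beamforming: each element applies the phase that aligns its cascaded
# BS-RIS-UE path, so all reflections add coherently at the UE.
import numpy as np

rng = np.random.default_rng(3)
n_elements = 256

h_bs_ris = (rng.standard_normal(n_elements) + 1j * rng.standard_normal(n_elements)) / np.sqrt(2)
h_ris_ue = (rng.standard_normal(n_elements) + 1j * rng.standard_normal(n_elements)) / np.sqrt(2)
cascade = h_bs_ris * h_ris_ue                     # per-element cascaded channel

phi_random = np.exp(1j * rng.uniform(0, 2 * np.pi, n_elements))
phi_aligned = np.exp(-1j * np.angle(cascade))     # co-phase every reflection

for name, phi in [("random phases ", phi_random), ("aligned phases", phi_aligned)]:
    gain = np.abs(np.sum(cascade * phi)) ** 2
    print(f"{name}: received power gain = {10 * np.log10(gain):.1f} dB")
```

With aligned phases the received power grows roughly with the square of the number of elements, whereas random phases give only linear growth, which is the basic argument behind passive beamforming design.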
6.4.2 Holographic radio
Holographic radio (HR) can be applied in use cases such as high-precision positioning and perception, smart factories and immersive media. It utilizes a spatially continuous electromagnetic aperture and interference exploitation to enable spatial and spectral multiplexing with pixelated ultra-high resolution. Compared with the beam-space approach, HR has native intelligence because it models a refined holographic electromagnetic space through Fresnel-Fraunhofer interference, diffraction and spatial correlation modules, in a manner similar to a deep neural network. HR can upgrade RF holography to optical holography and, by doing this, optical signal processing schemes such as time inversion and aperture coding coherence can replace the traditional equalizer. Hence, HR can make use of all available spatial dimensions to achieve benefits in terms of flexibility, spectral efficiency, delay, power consumption and complexity. HR has been studied to some extent in the field of RF holography with ultra-high resolution, but its application in the field of wireless
communication still faces many challenges, including the integration of microwave photonics-based continuous-aperture active antenna arrays with high-performance optical computing, as well as hardware design and physical layer design issues.
6.4.3 Orbital angular momentum
OAM imposes 'twists' on the phase fronts of propagating beams, such that modes with different amounts of twist are orthogonal to each other. Studies have focused on OAM beams and advanced transceiver designs as potential solutions to increase the spectral efficiency of line-of-sight (LOS) propagation or reduce the hardware complexity of extremely high data rate transmission in future IMT systems. OAM-based air-interface access between a BS and UEs faces greater challenges in directly obtaining the multiplexing gains of different OAM modes. Investigations on how to make such systems robust to practical impairments such as multipath and misalignment of orientation are critical to improve their practical utility. While some preliminary work has been completed in this direction, e.g. on multipath propagation and turbulence, extensive studies are required to establish feasible systems.
Since OAM systems, or OAM-MIMO symbiotic systems, perform better with electrically small antennas and quasi-static terminals, they are much more suitable for indoor small cells using millimetre wave and THz systems, and in particular for free-space optics applications. Given the breadth of applications anticipated for future IMT systems, an OAM solution may comprise three phases: i) vortex waveforms carrying OAM used as a set of beamforming patterns based on the universal antenna array of the MIMO system; ii) vortex waveforms carrying OAM used as multiple orthogonal sub-channels based on a dedicated antenna array of the OAM system; iii) light photons or microwave photons carrying quantum-state OAM used as a novel signal carrier. Depending on the maturity of the technology, the different phases of an OAM-based solution can be progressively introduced into future IMT systems.
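The orthogonality of OAM modes that underpins phase ii) above can be illustrated with a short sketch (a uniform circular array with an assumed number of elements): feeding element k with phase l·φk generates mode l, and the discrete modes are mutually orthogonal over the array:

```python
# Illustrative sketch (assumed uniform circular array): an OAM mode of order l
# is generated by exciting element k with phase l * phi_k, where phi_k is the
# element's azimuth; the discrete modes are mutually orthogonal over the array.
import numpy as np

n_elem = 16
phi = 2 * np.pi * np.arange(n_elem) / n_elem          # element azimuths on the circle

def oam_weights(l):
    return np.exp(1j * l * phi) / np.sqrt(n_elem)     # vortex excitation, mode l

modes = [oam_weights(l) for l in range(-3, 4)]        # modes l = -3 ... +3
gram = np.abs(np.array([[np.vdot(a, b) for b in modes] for a in modes]))
print("mode correlation matrix (≈ identity => orthogonal sub-channels):")
print(np.round(gram, 2))
```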
physical asset, including a representation of the asset's structure, role and behaviour within the digital domain. In the future, this technology is expected to also include representation of the environment and interactions between DTs, which require the transfer of vast information volumes with low delay and high reliability, as can be enabled by THz communications. Moreover, mapping of the physical world to the digital world with extreme precision is made possible by the precise positioning capabilities of THz/sub-THz systems, owing to their use of extremely fine beams and wide channel bandwidths. As an additional use case, future MR systems will allow humans to seamlessly interact with each other and with physical and digital things, thereby enabling massive new capabilities in both work and social interaction. Again, this requires support for extremely high data rates, low latency and highly precise positioning, which can be supported by THz communications.
It is evident from these wide-ranging applications that THz communications is an umbrella technology that contains multiple sub-systems and enabling use cases targeting next-generation IMT systems. Naturally, each sub-system will have its own detailed requirements. However, as with all commercially successful technology solutions, each of these sub-systems will need to deliver feasible size, weight, power and cost (SWaP-C) KPIs that ideally satisfy the demands of the corresponding next generation of use cases. An important KPI for the radio receiver (transmitter) is the energy expended per received (transmitted) bit of information.
On the other hand, there are some well-known challenges for the deployment of terahertz frequency bands in IMT systems, e.g. high propagation loss that increases with frequency, and atmospheric/precipitation/foliage loss due to the physical interaction of electromagnetic waves with the medium through which they propagate. In addition, THz signals are more vulnerable to different types of obstacle (e.g. people, walls, vehicles). The potential of THz technologies to shape the future of wireless communications depends greatly on the ability to devise feasible enablers in terms of baseband processing, RF front-end and antenna design, propagation and channel modelling, beamforming and (ultra-massive) MIMO, as well as resource management and medium access control schemes.
6.5.1 Pencil-beam THz radio
Wireless connectivity in the THz regime creates the need for high-gain large antenna arrays with pencil-beam characteristics to cope with the high molecular absorption loss of the THz band. Narrow beams below ten degrees are already used in IMT-2020 millimetre wave communication systems to overcome the increased signal path loss compared with sub-6 GHz communication. Such communication is implemented with phased arrays at both ends of the communication link, and an identical approach can be adopted in IMT systems for 2030 and beyond. Such narrow signals are called pencil beams, which are steered electrically by controlling the phases of the communication signals at both ends of the radio link. Owing to the extremely short wavelength in THz bands, an ultra-massive MIMO structure can be integrated within a small size to provide high beam gain with flexibility in spatial signal pre-processing. Besides the coverage extension provided by pencil-beamforming, the extremely short wavelength of the THz spectrum means that antenna elements become much smaller than those designed for millimetre wave bands, so many more antenna elements can be integrated in the same footprint. Such an ultra-massive MIMO system also improves spectral efficiency by exploiting higher spatial resolution and frequency reuse.
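The footprint argument can be made concrete with a back-of-the-envelope sketch (the 2 cm panel size and half-wavelength spacing are illustrative assumptions): the same physical aperture accommodates far more elements at sub-THz frequencies, yielding higher array gain and a narrower, pencil-like beam:

```python
# Back-of-the-envelope sketch (assumed square aperture and half-wave spacing):
# how many antenna elements fit in the same physical footprint at mmWave vs
# sub-THz, and the resulting array gain and approximate half-power beamwidth.
import numpy as np

c = 3e8
aperture = 0.02                                   # 2 cm x 2 cm panel (illustrative)

for fc in (28e9, 140e9, 300e9):
    lam = c / fc
    n_side = int(aperture / (lam / 2))            # half-wavelength element spacing
    n_elem = n_side ** 2
    gain_db = 10 * np.log10(n_elem)               # ideal array gain over one element
    hpbw = np.degrees(0.886 * lam / aperture)     # classical beamwidth estimate
    print(f"{fc/1e9:5.0f} GHz: {n_elem:6d} elements, "
          f"array gain ≈ {gain_db:4.1f} dB, beamwidth ≈ {hpbw:4.1f}°")
```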
High-gain directional antennas communicating over distances far beyond a few centimetres in the
THz band require advanced beamforming techniques that are significantly affected by THz channel
dynamics.
First, the design of suitable pencil-beamforming algorithms to address the challenges of the THz band, with respect to the number of antenna elements, calibration requirements, suitable frequency windows and other aspects, is expected to play a key role in next-generation wireless technologies. These
algorithms will focus on efficient device tracking in the THz band by capitalizing on accurate channel models, efficiently designed signalling and optimized RIS design.
Secondly, THz technology requires highly directional antennas, ideally with steerable beamforming capabilities. The complexity and losses associated with the antenna feed network may be challenging. Alternatively, the use of high-gain antennas, e.g. hemispherical lens antennas fed by a planar array with a moderate number of antenna elements, may be attractive. Alternatives that are expected to advance further and reach a practical implementation level include graphene-based plasmonic antennas compatible with the nanoscale, plasmonic patch antennas, and graphene-based patch antenna arrays in a Yagi-Uda MIMO configuration with beam-steering capabilities.
Thirdly, micro- and macro-mobility are critically important for THz wireless links to become a practical part of an access system. This is especially true for mobile access, and less so for backhaul connections. Even when a user is not moving, a mobile device is likely to be rotated or moved at moderate speed over short distances due to the user's hand or other movements. Blockage of the line-of-sight link may also occur. In this case, device tracking may need to search for, e.g., a reflected path between receiver and transmitter provided by a RIS.
Furthermore, beyond the PHY layer, new link and network layer strategies for ultra-directional THz
links are needed. Indeed, the necessity for very highly directional antennas (or antenna arrays)
simultaneously at the transmitter and at the receiver to close a link introduces many challenges and
requires a revision of common channel access strategies, cell and user discovery, and even relaying
and collaborative networks. For example, receiver-initiated channel access policies based on polling
from the receiver, as opposed to transmitter-led channel contention, are emerging.
Similarly, innovative strategies that leverage the full antenna radiation pattern to expedite the
neighbour discovery process have been experimentally demonstrated. All these aspects become more
challenging for some of the specific use cases.
6.5.2 THz transceiver technologies
To achieve Tbit/s transmission, a true THz communication system comprising all the necessary components, from the antenna and RF components through AD/DA conversion to digital signal processing, needs to utilize state-of-the-art components, and this will be a significant challenge to tackle by 2030 and beyond. There will be paramount challenges related to the cost, power consumption and engineering resources required to resolve all relevant open technical details.
To enable the implementation of THz communications in IMT systems, developments are required in three areas of technology: transceiver architecture, RF devices and baseband signal processing.
Transceiver architecture
To bring THz communications to fruition, several key pieces of the entire communication chain will
have to be realized for successful commercial deployments. An important question relates to the
transceiver architecture. In general, three kinds of transceiver architecture are widely developed for THz communication systems: solid-state based, direct modulation-based and photoelectric combination-based THz systems. Furthermore, solid-state based THz transceivers fall into two broad categories: fundamentally operated and sub-harmonically operated transceiver architectures. Each of these approaches has its own merits and can better suit different requirements of THz technology, such as the high-frequency capability of photonics and the higher output power of solid-state based solutions. For next-generation wireless systems, it is expected that these options will find use and deployment in different scenarios and applications.
Receiver architectures will need to address the design of tuneable band filters, wideband low noise
amplifiers, linear mixers and high-speed ADCs. Research will need to address the receiver
architecture and support greater frequency agility. Also, integrated silicon and photonic devices are
expected to play an increasing role in the signal processing chain in order to realise large bandwidths
and high sampling rates. Technologies such as nano-opto-electro-systems, which support all silicon
fabrication, are expected to make a significant contribution to the realisation of cost effective, low
power Tbit/s modems.
Radio frequency (RF) devices
Another critical area for THz communications is RF and mixed-signal devices. The challenges here include the lack of underlying analytical hardware models and the design of high-efficiency components, including frequency conversion circuits, mixers, multipliers, power amplifiers (PAs) and oscillators. Both the peak output power and power-added efficiency of THz PAs and the phase noise of THz oscillators will be design challenges. Given that THz communication systems will likely utilize wide communication bandwidths, the design of energy-efficient ADCs and DACs will also be a challenge.
While the THz-band channel supports bandwidths in excess of 100 GHz, the sampling frequency of state-of-the-art DACs and ADCs is of the order of 100 gigasamples per second.
Operating at such high frequencies puts stringent requirements on the semiconductor technology. Since the THz band is known as the 'THz gap', two technology families, electronics and photonics, have been studied to develop THz transceivers. While electronic technologies using silicon metal oxide semiconductor field effect transistors (MOSFETs) are predicted to have reached their peak speed, and will actually degrade with further scaling, silicon germanium (SiGe) bipolar transistors are predicted to reach a maximum frequency of close to 2 THz with a 5 nm unit cell. In such a technology, amplifiers and oscillators up to about 1 THz could be realized with high performance and integration. A better option may then be to use gallium arsenide (GaAs) or indium phosphide (InP) technology for the highest-frequency parts, combined with a silicon complementary metal oxide semiconductor driven baseband circuit.
As an alternative to electronic technologies, two-terminal devices such as resonant tunnelling diodes (RTDs) and Schottky-barrier diodes (SBDs) have been investigated for THz oscillation and detection functions, respectively. THz transmission experiments using these devices integrated with planar antennas have been demonstrated at frequencies up to 1 THz. Compact THz transceivers could also be developed using two-terminal devices because of their simplified structure compared with three-terminal devices and their compatibility with planar antenna elements.
Photonics can extend the oscillation frequency through the combination of LiNbO3 (LN) single-sideband (SSB) optical modulators and uni-travelling-carrier photodiodes (UTC-PDs). Despite the complex configuration of the photonic architecture, such technologies could be expected to enable distributed antenna systems in limited areas such as indoor facilities. In addition to signal distribution through optical fibre in limited areas, the beam direction of the distributed antenna system can also be controlled remotely by assigning a wavelength to each beam at the so-called central station. This technology is expected to connect a large number of miniaturized and cost-effective remote BSs for future IMT systems.
Baseband signal processing
Another key building block of THz technology is the digital baseband component. Since a THz system needs to handle Tbit/s data rates, the baseband signal processing schemes for THz communications are expected to have low complexity and low power consumption. In particular, for ultra-high-throughput use cases, baseband solutions that are scalable in terms of data rate, power and form factor will be instrumental for future IMT systems operating at THz frequencies.
While a wide bandwidth of tens to hundreds of GHz is expected to be used to achieve Tbit/s transmission, the ADC resolution is inevitably limited. Thus, baseband signal processing needs to be robust and energy-efficient, and to adapt to the channel characteristics of the new bands and to new transmission schemes. New waveforms, channel coding, modulation schemes and antenna technologies are currently under study and development.
The realization of ultra-fast, reliable and low-complexity decoders with Tbit/s throughput is a key issue. There have been numerous efforts towards extending the range of state-of-the-art designs in terms of throughput. Although decoders exceeding 500 Gbit/s have been demonstrated for polar and LDPC codes, those designs offered limited code flexibility and compromised performance. Since channel coding is the most computationally demanding component of the baseband chain, efficient coding schemes need to be developed for Tbit/s operation. Nevertheless, the complete chain should be efficient and parallelizable. Therefore, algorithm and architecture co-optimization of channel estimation, channel coding and data detection is required.
In contrast with sub-6 GHz systems, waveforms and modulation schemes will have to be co-designed with the sub-THz radio front-end to meet the required KPIs (e.g. energy/bit). Energy-efficient radio access technologies exploiting multiple degrees of freedom (e.g. spatial, spectral and temporal) will have to be developed, together with new waveforms and a mix of analogue and digital (de)modulation schemes, to achieve state-of-the-art energy-per-bit transmission efficiency. For example, to keep the modem complexity and power consumption low, solutions based on amplitude and phase-shift keying (APSK), which has a low PAPR, are attractive. The choice of multiple-access (MA) scheme will be strongly influenced by the implementation technology adopted. Therefore, different MA schemes need to be researched, such as GFDM, filter bank multi-carrier (FBMC) and UFMC, as well as schemes based on CDMA such as CP-CDMA or sparse-sequence MA. The increasingly sparse nature of the wireless channel in sub-THz frequency bands and the huge bandwidths of Tbit/s modems (e.g. >100 GHz) will also play a major role in determining the optimal waveform, which may include joint functionality such as sensing as well as communication.
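As a hedged illustration of the PAPR argument for APSK, the sketch below compares the constellation peak-to-average power, a simple proxy for waveform PAPR, of a 4+12-point 16-APSK constellation (the ring ratio is an illustrative value in the style of DVB-S2 constellations) with that of square 16-QAM:

```python
# Hedged sketch: constellation peak-to-average power of 16-APSK (4+12 rings,
# illustrative ring ratio) versus square 16-QAM, as a simple proxy for the
# waveform PAPR advantage discussed above.
import numpy as np

def papr_db(points):
    p = np.abs(points) ** 2
    return 10 * np.log10(p.max() / p.mean())

# 16-APSK: 4 inner + 12 outer points, ring ratio ~2.57 (illustrative).
r1, r2 = 1.0, 2.57
apsk16 = np.concatenate([r1 * np.exp(1j * (2 * np.pi * np.arange(4) / 4 + np.pi / 4)),
                         r2 * np.exp(1j * 2 * np.pi * np.arange(12) / 12)])

# Square 16-QAM for comparison.
re, im = np.meshgrid([-3, -1, 1, 3], [-3, -1, 1, 3])
qam16 = (re + 1j * im).ravel()

print(f"16-APSK constellation peak/average power: {papr_db(apsk16):.2f} dB")
print(f"16-QAM  constellation peak/average power: {papr_db(qam16):.2f} dB")
```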
Joint channel estimation and detection, and joint demodulation and decoding, are considered for low-complexity and low-latency implementations. For Tbit/s channel coding, code designs and decoding algorithms are also expected to be developed with regard to parallelizability, implementation constraints and new channel characteristics. New coded modulation schemes can be combined for better spectral efficiency, and deep-learning-aided approaches can also be applied to both baseband signal processing and channel coding algorithms. In addition, the baseband complexity can be further reduced by using low-resolution digital-to-analogue conversion systems, and all-analogue solutions should also be considered. For a practical Tbit/s baseband modem, low-complexity, highly parallelized systems and efficient signal processing algorithms for THz MIMO systems are expected to be developed.
With regard to beamforming, classical MIMO has several limitations; in particular, the complex DSP implementation will be a major drawback compared with an RF/analogue beamforming approach. Even then, the required parallelism and complexity of combining signals from different antennas and steering beams go well beyond what has been seen in any communications or radar system below 100 GHz. Circular polarization diversity MIMO is yet another way to overcome these channel bandwidth limitations. Physical and financial constraints set strict boundaries, and continuing the trend of Moore's law requires favourable and rapid development of core technologies, from semiconductor processes to complete chipsets and other associated components such as antennas, to enable Tbit/s communication modem implementation at base and mobile stations. A significant improvement in signal processing power with reduced power consumption compared with current IMT-2020 mobile station implementations is needed to enable a one Tbit/s communication modem.
networks deployed to shorten the measurement distance and secure the line-of-sight path make it easier to attain the accuracy goal. Line-of-sight/non-line-of-sight path detection and identification is a key component in harnessing ultra-wide bandwidth and ultra-massive MIMO in the millimetre wave or terahertz bands. Moreover, THz technology with wide contiguous signal bandwidth also allows very fine spatial resolution, which in turn allows multipath components to be identified, thereby intrinsically improving the positioning accuracy.
Also, a sampling rate of more than 3 GHz on the receiver side and sub-nanosecond synchronization between reference nodes are required for centimetre-level accuracy. The precision of synchronization is critical to positioning technologies based on time-of-flight (ToF) measurement of travelling waves, such as ultrasonic sound, light and radio waves. Another positioning technology that requires synchronization is stereo-vision-based positioning. As synchronization technology matures towards 2030, it is conceivable that wireless space-time synchronization will become available in future IMT systems by around 2030, enabling location-based services to be fully equipped with higher-precision localization capability.
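The synchronization requirement can be illustrated with a simple sketch (the anchor geometry and residual synchronization error are assumptions for illustration): a timing error translates directly into a ranging error at the speed of light, and a small linearized multilateration shows its effect on the position estimate:

```python
# Illustrative sketch (assumed geometry): why sub-nanosecond synchronization is
# needed for centimetre-level time-of-flight positioning, plus a simple
# linearized least-squares multilateration from noisy ToF ranges.
import numpy as np

c = 3e8
for t_err in (10e-9, 1e-9, 0.1e-9):
    print(f"sync error {t_err*1e9:4.1f} ns  ->  range error {c * t_err * 100:6.1f} cm")

anchors = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0]])   # reference nodes (m)
ue = np.array([7.0, 11.0])
rng = np.random.default_rng(4)
sync_err = 0.3e-9                                            # residual sync error (s)
ranges = np.linalg.norm(anchors - ue, axis=1) + c * sync_err * rng.standard_normal(3)

# Linearized least squares: subtract the first range equation from the others.
A = 2 * (anchors[1:] - anchors[0])
b = (ranges[0]**2 - ranges[1:]**2
     + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"true position {ue}, estimate {np.round(est, 3)}, "
      f"error {np.linalg.norm(est - ue)*100:.1f} cm")
```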
Furthermore, THz sensing permits combining traditional metrics such as range and Doppler with detailed imaging of the environment for even more accurate positioning. More specifically, high-frequency cm-level localization exploits simultaneous localization and mapping (SLAM) to enhance accuracy by collecting high-resolution RF imaging: 3D images obtained using THz signals are combined with angle- and time-of-flight information.
In fact, the millimetre wave and THz frequency bands provide interesting new features, such as densification and highly directive pencil-like beams, that not only improve communications but also make centimetre-level positioning accuracy possible. Ultra-wide bandwidth and extreme MIMO in a millimetre wave or THz band provide additional degrees of freedom and impart new performance gains for UE localization through positioning technologies utilizing timing and angle measurements. Moreover, further enhancements of positioning accuracy, including carrier phase positioning (CPP) based on cellular signals and AI/ML positioning techniques, can be considered. CPP has been used very successfully in GNSS for centimetre-level or even millimetre-level positioning. However, the RAT-dependent CPP technique is so far not supported in IMT systems. The main motivation for using CPP in IMT systems towards 2030 and beyond is its capability to determine the UE position with centimetre-level accuracy without the need for ultra-wide bandwidth or GNSS, which leads to more efficient use of radio resources.
For NR positioning, it remains a challenge to support the required accuracy, reliability, scalability and adaptability owing to unpredictable radio propagation characteristics (e.g. LOS/NLOS links), especially in indoor scenarios. AI/ML methods have recently been widely used to overcome these challenges with reasonable success. For future terrestrial IMT systems towards 2030 and beyond, AI/ML positioning techniques can be considered for positioning enhancements. For example, in the absence of a line-of-sight path, fingerprinting or ray-tracing assisted by AI/ML can be considered the most promising technologies. The following issues therefore need to be investigated further for positioning with AI/ML: the corresponding AI/ML training and algorithms, positioning efficiency, and performance validation.
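As a hedged stand-in for the AI/ML fingerprinting approach mentioned above, the following sketch (the path-loss model, grid spacing and measurement noise are illustrative assumptions) builds an offline radio map of received-signal-strength fingerprints and estimates the position online with a k-nearest-neighbour regressor:

```python
# Hedged sketch of fingerprint positioning on a synthetic radio map; all
# parameters are illustrative assumptions, not field data.
import numpy as np

rng = np.random.default_rng(5)
bs = np.array([[0, 0], [50, 0], [0, 50], [50, 50]], dtype=float)   # 4 sites (m)

def rss(pos, sigma=0.0):
    d = np.linalg.norm(bs - pos, axis=1) + 1.0
    return -30 - 35 * np.log10(d) + sigma * rng.standard_normal(len(bs))

# Offline phase: fingerprint database on a 2 m grid (ideal radio map).
grid = np.array([[x, y] for x in range(0, 51, 2) for y in range(0, 51, 2)], dtype=float)
database = np.array([rss(p) for p in grid])

# Online phase: k-NN estimate for a user at an unknown position.
ue = np.array([23.0, 37.0])
meas = rss(ue, sigma=2.0)                                          # noisy measurement
k = 5
nearest = np.argsort(np.linalg.norm(database - meas, axis=1))[:k]
est = grid[nearest].mean(axis=0)
print(f"true {ue}, estimate {np.round(est, 1)}, error {np.linalg.norm(est - ue):.1f} m")
```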
Each network slice could be administered by a mobile virtual network operator (MVNO) or by the customers themselves. It is expected that the RAN of future IMT systems will be a user-centric cell-free network using various frequency bands. Every RAN resource, including radio unit association, frequency bands, subchannels and processing time, should be flexibly partitioned (sliced) to guarantee packet flows with similar QoS requirements. Such RAN slicing supports 1) an adaptive RAN slicing architecture for cell-free networks using massive MIMO and various frequency bands; 2) MIMO/beamforming/power control/transmission technologies to overcome fading channels and mobility; and 3) spectrally efficient channelization and scheduling to ensure URLLC QoS by considering mobility and traffic characteristics. As billions of intelligent devices are expected to connect to future IMT systems, RAN slicing techniques will be critically important for massive-broadband URLLC connectivity.
From a radio access perspective, RAN slices (RAN-S) could also be configured and created from the
available radio resources. Recently, ML and AI techniques have been proposed for traffic forecasting
and classification, dynamic resource availability prediction across the various radio interfaces,
call/session admission control, and scheduling in RAN-S. The introduction of ML and AI in RAN
slicing creates potential opportunities to evolve towards intelligent RAN-S.
7.2 Technologies to support resilient and soft networks for guaranteed QoS
As our economies and societies become increasingly reliant on IMT systems, the availability and resilience of these radio network resources, and service assurance, have become crucial to maintaining highly efficient societies. At the same time, new services, such as immersive, holographic and tactile communications and new media beyond 8K, will emerge. Since QoS requirements vary from one user to another, the future network is proposed to be resilient and soft, i.e. user-centric, service-oriented, flexible, powerful in capabilities, guaranteed in QoS and consistent in user experience. Several RAN technologies can be considered to satisfy these network requirements.
The QoS guarantee mechanism for user data transmission over the air interface is obtained by considering the overall service characteristics and the user's air-interface channel environment. Examples include a QoS identifier or similar attributes, and CSI or comparable attributes as defined in future networks. Combined with AI/big-data technologies in L2/L3, networks can auto-adjust their QoS. Based on service feedback from the UE, the RAN can intelligently predict the trends of subsequent services and provide QoS guarantees in advance. Furthermore, a deterministic RAN can be considered to meet the requirements of vertical industry scenarios and services. Zero jitter in the RAN can be achieved through efficient buffering mechanisms, enhanced scheduling strategies, and new ARQ and HARQ mechanisms.
Service-based and user-centric RANs could be necessary to provide the soft capabilities required to satisfy user-specific requirements. Traditional networks are typically designed and planned to maximize capacity while considering dynamic changes in user demand and network load. Based on the concept of micro-services, the monolithic radio network can be split into multiple basic radio network functions or services and elements. With a flexible combination of these basic elements, future IMT systems are expected to support the on-demand generation of radio network functions. Alternative approaches propose a RAN with deep edge nodes that support AI, ultra-high throughput and ultra-low latency.
architecture, and plug-and-play features. Moreover, the network architecture should be designed to reduce processing latency and enhance on-demand capabilities. In parallel, a DOICT (data, operation, information and communication technologies) convergence driven RAN architecture will be a trend that plays a more prominent role in future network design. Furthermore, native-AI-enabled RAN functions can be used to enhance network performance, reduce cost, enable smart decision-making in resource management and realize digital transformation. Native AI can also act as a service that supports new operations.
As the network scale continues to expand, network complexity has rapidly increased. The efficiency of data processing is reduced owing to redundancies among different layers, e.g. reordering and retransmission. Therefore, a thinner or 'lite' protocol stack design may be necessary in future networks. RANs can consider flexible control mechanism designs to achieve the simplest architecture while maintaining the strongest capabilities, e.g. in the mapping between upper-layer and lower-layer protocol entities and the selection of lower-layer data bearer channels.
Future networks should support the decoupling of signalling and data for robust control and on-demand data services. For instance, low-frequency bands can be used for signalling coverage to simplify mobility management and ensure that users have real-time access, while high-frequency bands can be enabled on demand to support high-speed services. Additionally, a unified RAN architecture and signalling design could be considered to support diverse radio interface technologies, new relationships among BSs and UEs, and wide-area and micro-area networks, including RAN node cooperation and aggregation.
Alternative approaches for a novel RAN architecture are considered.
7.3.1 Support RAN nodes cooperation and aggregation
With the introduction of new services, applications and scenarios, RAN nodes, including BSs and UEs, should function collaboratively to satisfy the QoS requirements of specific services, such as holographic connectivity, or to enhance system performance in specific scenarios, such as multiple devices in proximity belonging to one user.
For holographic services, different profiles could be presented by different nodes, particularly in large-scale activities. The end points could be BS(s) and/or UE(s), while different flows belonging to the same holographic service should be transmitted to their corresponding end points via separate interfaces, including wired and wireless interfaces. In this architecture, the functions and relationships of RAN nodes should be redefined and remodelled, for example by introducing a new L2 protocol architecture for multi-node cooperation. Furthermore, RAN procedures such as access control involving the participating node(s), system information, paging and mobility should be revisited while considering multiple RAN nodes. In the user plane, it is necessary to consider QoS satisfaction for coherent flows of specific services/applications that could be transmitted through, or received by, multiple terminals.
In scenarios where a user owns multiple devices, UEs can cooperate in the upper or PHY layers of the RAN. Here, service continuity should be guaranteed across different terminals, and a thinner protocol stack is applicable for a diversified control mechanism over the air interface. Other technologies required to accommodate UE cooperation and aggregation include security for a group of UEs, connection control for the UE group, capability coordination and so forth. With UE cooperation and aggregation, adaptive network deployments can be realized by dynamically adding or removing UEs in a UE group while maintaining service continuity. Consequently, UL transmission can be enhanced to improve system performance.
7.3.2 User-centric architecture
In modern mobile networks, users are under the network’s control. User-centric network (UCN)
architecture is a native architecture that empowers users with the capabilities to define, configure,
and control the network functions related to their subscribed services. In contrast to the existing
architecture, UCN will enable each user to have a dedicated virtual network that integrates all
functions needed for their services. Such a user-centric radio network can dynamically match and
update radio resources (i.e. frequency spectrum, TRxP, radio units) for the specific user in real-time,
depending on the user’s environment and services.
UCN users will be able to define the services they would like to receive and consider how to operate
and manage them. For active services, users can configure policies for resource usage. UCN users
will also be able to control the data that is generated or owned by them, as well as the corresponding
process rights (i.e. authentication, authorization, and access control).
using a single HIBS; 2) no modifications are required for normal terrestrial mobile phones; 3) robust and resilient networks that are unaffected by power outages or collapses caused by natural disasters (earthquakes, tsunamis); and 4) provision of mobile communications in the sky (for flying cars, drones, etc.) and at sea (for ships, etc.), which are difficult to cover with ground-based BSs.
UASs have a wide range of sizes and weights and could be used in various business sectors in future smart cities. In the future, UASs can be used as BS platforms or as relays to form temporary networks that extend mobile communication coverage. The benefit of such UAS-assisted wireless communications is the flexibility and agility to promptly provide on-demand deployment of network coverage, which is highly complementary to fixed infrastructure in scenarios such as natural disasters and short-term events, such as concerts or big games in crowded stadiums. In some sense, UAS-assisted wireless communication can also bring the BS closer to the user, which could enhance service quality and reduce power consumption for users. Moreover, UAS BSs can also serve as extended mobile sensors to enhance sensing capability.
The interconnection of terrestrial IMT and non-terrestrial communications enhances the coverage of future IMT systems from the ground to the air to space on a multi-layered basis. It would enable ubiquitous communications and is expected to enable new use cases, such as connections with unmanned systems, monitoring (video and data), mobile eMBB, IoT, logistics systems, and backhaul and smartphone interconnectivity. In addition to coverage extension, when urban and suburban areas covered by ground-based IMT BSs are overloaded and/or when there is high capacity demand, they can offload their traffic to HIBS, UAS BSs and satellite networks. Despite these benefits, as the technologies applicable to terrestrial communication networks might not be directly usable in NTN, challenges are expected due to the highly dynamic network topology, different operational environment and long propagation delay, among others.
Mitigation solutions that will allow connectivity and seamless mobility between terrestrial and non-
terrestrial networks should be studied.
8 Conclusion
This Report describes technology trends of terrestrial IMT systems that are applicable to radio
interfaces, mobile terminals, and radio access networks by considering the timeframe up to 2030 and
beyond. These trends include emerging technologies and the technologies to enhance the radio
interface as well as the radio network.
Further technical information and feasibility studies for higher frequency bands can be found in other
ITU-R documents, and ITU-R is working on a report on the technical feasibility of IMT in bands
above 100 GHz.
CT Communication technology
CU Central unit
DAC Digital-to-analogue converter
DOICT Data, operation, information and communication technologies convergence
D2D Device-to-device
DT Digital twin
DTN Digital twin network
DT-RAN Digital twin RAN
DTN Delay and disruption tolerant network
DL Downlink
eMBB Enhanced mobile broadband
E-MIMO Extreme-MIMO
FBMC Filter bank multi-carrier
FDD Frequency division duplex
FDM Frequency-division multiplexing
FEC Forward error correction
FL Federated learning
GNSS Global navigation satellite system
GPU Graphics processing unit
HARQ Hybrid automatic repeat request
HIBS High altitude platform stations as IMT base stations
HR Holographic radio
IAB Integrated access and backhaul
IMT International Mobile Telecommunications
ISAC Integrated sensing and communication
IoT Internet of Things
IIoT Industrial internet of things
IT Information technology
KPI Key performance indicator
LDPC Low density parity check
LOS Line of sight
MAC Medium access control
MBB Mobile broadband
MEC Mobile edge computing
MIMO Multiple input multiple output
ML Machine learning
mmWave millimetre wave
mMTC Massive machine-type communications
MOSFET Metal oxide semiconductor field effect transistor
MR Mixed reality