Telecommunications

Keys to building a telecommunications infrastructure that supports AI

Modern devices have far more computing power than their predecessors, and this increase in computing capacity comes with the added benefit of lower power consumption.

New devices can process large volumes of data at a fraction of the cost required by previous generations of hardware.

Advanced technologies such as the Internet of Things (IoT) and 5G networks are enabling the telecommunications industry to increase its data processing capabilities at an unprecedented rate.

AI in telecoms

Artificial intelligence (AI) is the field of computer science in which algorithms enable computers to solve complex problems without human intervention.

With the help of artificial intelligence, telecommunications companies can automate routine tasks to increase operational efficiency. Businesses can now reduce manual labor and minimize human error.

Customer data processing can also be offloaded to an advanced computing network that prepares inputs for decision-making algorithms in the core network. Combining artificial intelligence with IoT, 5G and edge computing enables enterprises to optimize network performance, energy efficiency, latency and security.

As AI is a data-driven technology, telcos and their customers should focus on good data management practices to help maintain smooth, industry-standard operations.

Data Centers and Distributed Networks: Key Components of AI Infrastructure in Telecommunications

For telecom operators, one of the biggest challenges in adopting AI is building out the network infrastructure. Telecommunications networks were originally designed only for telephony, but many have since transitioned to modern 4G/LTE technologies that harness digital signals for operational tasks.

However, AI technologies require the allocation of additional computing resources to handle data inputs for training and inference processing. These resources typically depend on computer servers placed in physical locations called data centers, and networking and application servers that transmit data and run front-end applications.

Hardware Servers vs Virtual Servers

Physical servers provide a large pool of computing resources that can be fully dedicated to a single client application or resource. They are ideal for handling large data workloads and for cases where data privacy is important.

Yet physical servers require expensive physical storage.

Virtualization is a way to run a complete computer in software on existing physical hardware; such a software-defined computer is called a virtual machine (VM). A physical server can run multiple virtual machines, each customized to customer specifications. Although virtual machines may not be as powerful as physical servers, they offer a more lightweight and scalable solution for running algorithm-dependent applications, such as AI and machine learning (ML) applications.
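
For illustration, the virtual machines running on a physical host can be inspected programmatically. The sketch below assumes a Linux server running the QEMU/KVM hypervisor with libvirt and its Python bindings installed; the connection URI is environment-specific.

```python
# Minimal sketch: list the virtual machines hosted on one physical
# server, assuming libvirt and the libvirt-python bindings are present.
import libvirt

# Connect to the local QEMU/KVM hypervisor (URI is environment-specific).
conn = libvirt.open("qemu:///system")

for domain in conn.listAllDomains():
    # info() returns [state, max_memory_kb, memory_kb, vcpus, cpu_time_ns]
    state, max_mem_kb, mem_kb, vcpus, _ = domain.info()
    print(f"{domain.name()}: {vcpus} vCPUs, {mem_kb // 1024} MiB RAM")

conn.close()
```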

Distributed Computing Resources

In real-world scenarios, computer systems in telecommunications networks have limited bandwidth and computing resources. This means that they can only process and transmit a limited amount of data, and perform a limited number of calculations in a given time.

Distributed networks attempt to compensate for these limitations by allocating primary tasks to a central server with sufficient bandwidth and computing resources. This central server meets the requirements of low latency and high processing speed. It can be a cloud server.

Computers that lack bandwidth or computing power are kept separate from the central server. These devices can be placed closer to the main server if they have sufficient bandwidth, or closer to the application servers if low latency is required. Together, these computers form the peripheral network. The requirements for a specific distributed network configuration depend on an application’s use case, as we will demonstrate below.
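
As a toy illustration of this placement logic, the sketch below routes a workload either to the central server or to the peripheral network based on its latency and compute needs; the function name and thresholds are hypothetical.

```python
# Toy sketch of the placement decision described above: latency-sensitive
# work stays on the peripheral (edge) network, while heavyweight,
# latency-tolerant work goes to the central server. Thresholds are
# hypothetical illustrative values.

def place_workload(compute_units: int, max_latency_ms: float) -> str:
    CENTRAL_ROUND_TRIP_MS = 50.0  # assumed latency to reach the core
    EDGE_CAPACITY = 100           # assumed compute budget of an edge node

    if max_latency_ms < CENTRAL_ROUND_TRIP_MS:
        return "edge"     # cannot afford the trip to the central server
    if compute_units > EDGE_CAPACITY:
        return "central"  # too heavy for peripheral hardware
    return "edge"

print(place_workload(compute_units=500, max_latency_ms=200.0))  # central
print(place_workload(compute_units=20, max_latency_ms=10.0))    # edge
```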

Centralized training and inference

Centralized servers offer high computing resources and bandwidth in one location and are usually placed away from the application layer. A centralized server receives its AI/ML inputs relayed from other locations.

Once the algorithm has processed the inputs and calculated an inference, the server relays the results to the application server (a minimal sketch of this relay follows the list below). Centralized resources are needed when:

  • Large amounts of data must be processed
  • The processing is complex in nature
  • Real-time latency or efficiency is not important
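
As a minimal sketch of that relay, the snippet below shows an edge node posting a batch of inputs to a hypothetical central inference service over HTTP. The endpoint URL, JSON schema and function name are illustrative assumptions, not a real API.

```python
# Sketch of an edge node relaying a batch of inputs to a central
# inference server and receiving predictions back. The endpoint URL
# and payload schema are hypothetical.
import json
import urllib.request

CENTRAL_ENDPOINT = "http://central-server.example/infer"  # hypothetical

def infer_centrally(batch: list[list[float]]) -> list[float]:
    payload = json.dumps({"inputs": batch}).encode("utf-8")
    request = urllib.request.Request(
        CENTRAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["predictions"]

# The application layer stays thin: collect inputs, relay, use results.
predictions = infer_centrally([[0.1, 0.2], [0.3, 0.4]])
```

Because real-time latency is not critical in this configuration, the round trip to the central server is acceptable.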

Distributed training and inference

The distributed computers that run the training and inference models sit at the far ends of a network. Algorithms running on these computers must be simplified to work within limited hardware resources. The data from these computers is processed closer to the application and may be sent to a central server for further calculation. Distributed systems like this require extra work to keep them in sync with the main network (a sketch of the edge side follows the list below). Distributed computing resources are required when:

  • Ultra-low latency is required for performance or user experience
  • The data inputs are simple
  • Computing resources or bandwidth are limited
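
On the edge side, the arrangement might look like the sketch below: a deliberately simplified model answers locally for low latency, and only compact summaries are queued for the central server. The weights, summary format and helper names are hypothetical.

```python
# Sketch of distributed inference: a simplified model answers locally
# (low latency) while only a compact summary is queued for later
# shipment to the central server. Weights and summary format are
# hypothetical.
from collections import deque

EDGE_WEIGHTS = [0.5, -0.2, 0.1]      # tiny model sized for edge hardware
upload_queue: deque[dict] = deque()  # batched later to the central server

def edge_infer(features: list[float]) -> float:
    # A linear model is about as simple as inference gets; a real edge
    # deployment might run a pruned or quantized network instead.
    score = sum(w * x for w, x in zip(EDGE_WEIGHTS, features))
    # Keep only a summary for the main network, not the raw inputs.
    upload_queue.append({"score": round(score, 3)})
    return score

print(edge_infer([1.0, 2.0, 3.0]))  # served locally, no round trip
```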

Hybrid training and inference (centralized/distributed)

In telecommunications, infrastructure can be designed with flexibility in mind for a wide variety of applications. Since telecommunications companies deal with Big Data (large volumes of structured and unstructured data), computing resources must be divided so that latency, performance and security are not compromised.

For example, a telecommunications company that wants the lowest possible latency in its customer-facing services would have to compromise on performance and security. In practice this is not viable, so latency requirements end up falling within an acceptable range between the low and high extremes.

This is where hybrid infrastructure comes in handy. Some AI/ML modeling techniques include:

Extract, Transform, Load

An extract, transform, and load (ETL) model is used when edge devices are not very powerful and a central server is needed. In these cases, peripheral computers send data to a central server in batches for processing.

The central server prunes the data, simplifies its structure, and sends it back to peripheral computers to train the AI. This technique eases the computational load on peripheral devices and allows them to train AI without requiring high computing resources at all times.
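
Collapsed into a single script for readability, a minimal sketch of this round trip might look as follows; in practice the two roles run on separate machines, and the pruning rule and field names here are assumptions.

```python
# Sketch of the ETL round trip described above. The pruning rule and
# field names are hypothetical; the two roles would normally run on
# different machines.

def extract(edge_readings: list[dict]) -> list[dict]:
    # Edge side: batch up raw readings for shipment to the core.
    return [r for r in edge_readings if r.get("value") is not None]

def transform(batch: list[dict]) -> list[float]:
    # Central side: prune outliers and flatten to a simple structure
    # that a resource-constrained edge trainer can consume.
    return [r["value"] for r in batch if abs(r["value"]) < 1000.0]

def load(training_inputs: list[float]) -> None:
    # Edge side: hand the simplified data to the local training loop.
    print(f"training on {len(training_inputs)} cleaned samples")

raw = [{"value": 3.2}, {"value": None}, {"value": 5000.0}, {"value": -1.1}]
load(transform(extract(raw)))  # -> training on 2 cleaned samples
```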

Centralized initial training

Initial training can be performed exclusively on central servers, relieving the edge-computing network. Retraining, which requires fewer resources, can then be shifted to the peripheral network.
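
Under those assumptions, the division of labor can be sketched as follows: heavy initial training runs on the central server, and only a few cheap retraining steps run at the edge. The one-parameter model and the data are toy placeholders.

```python
# Sketch: heavy initial training runs centrally; the edge performs only
# cheap retraining (a few gradient steps on fresh local data). The
# one-parameter model and the datasets are toy placeholders.

def train(w: float, data: list[tuple[float, float]],
          steps: int, lr: float) -> float:
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of the squared error
            w -= lr * grad
    return w

# Central server: many passes over a large historical dataset.
central_data = [(x, 2.0 * x) for x in range(1, 50)]
w = train(0.0, central_data, steps=100, lr=1e-4)

# Edge node: a handful of passes over a small batch of local data.
edge_data = [(1.0, 2.1), (2.0, 4.2)]
w = train(w, edge_data, steps=5, lr=1e-4)
print(f"final weight: {w:.3f}")  # close to the true slope of 2
```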

Reducing data size to meet bandwidth and compute requirements can be achieved in two ways (both sketched in code after this list):

  • Quantization. This is the process of reducing the precision of the input values sent for algorithmic calculation in order to shrink the data size.
  • Sparsification. This refers to minimizing the encoding length of stochastic gradients in a neural network, reducing its memory and computational costs.
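
Both ideas can be illustrated in a few lines of NumPy, as in the sketch below; the 8-bit precision and the 10% sparsity level are arbitrary illustrative choices, not recommendations.

```python
# Sketch of the two size-reduction techniques named above, using NumPy.
# The 8-bit precision and 10% sparsity level are arbitrary choices.
import numpy as np

values = np.random.randn(1_000).astype(np.float32)  # e.g. gradients

# Quantization: store values at reduced precision (8-bit integers),
# shrinking each element from 4 bytes to 1 at some loss of accuracy.
scale = np.abs(values).max() / 127.0
quantized = np.round(values / scale).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

# Sparsification: keep only the largest-magnitude 10% of entries and
# treat the rest as zero, shortening the encoded gradient.
k = len(values) // 10
top_k = np.argsort(np.abs(values))[-k:]
sparse = np.zeros_like(values)
sparse[top_k] = values[top_k]

print(f"max quantization error: {np.abs(values - dequantized).max():.4f}")
print(f"nonzeros kept: {np.count_nonzero(sparse)} of {len(values)}")
```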

Balancing pipeline complexity and maintenance costs

An AI/ML pipeline typically has multiple stages for experimenting with, testing and deploying models, each of which requires manual work. The complexity of the pipeline is generally proportional to the number of features in the AI model.

To minimize the cost of the work performed by supporting operations personnel (data scientists, engineers and developers), some of the pipeline steps should be automated. Implementing an ML process that is not automated significantly increases development and operational costs, and teams should consider reducing the feature set if an AI implementation proves too expensive.
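
One common way to reduce that manual work is to express the pipeline stages as code so a scheduler can re-run them unattended. The sketch below strings together hypothetical stage functions; the names and bodies are placeholders.

```python
# Sketch of automating pipeline stages so re-runs need no manual work.
# Stage names and bodies are hypothetical placeholders.

def ingest() -> list[float]:
    return [1.0, 2.0, 3.0]                  # stand-in for a data pull

def validate(data: list[float]) -> list[float]:
    assert all(isinstance(x, float) for x in data), "bad input schema"
    return data

def train(data: list[float]) -> dict:
    return {"mean": sum(data) / len(data)}  # stand-in for model training

def deploy(model: dict) -> None:
    print(f"deployed model: {model}")       # stand-in for a rollout

def run_pipeline() -> None:
    deploy(train(validate(ingest())))

run_pipeline()  # a scheduler (e.g. cron) can now run this unattended
```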

Building better telecommunications with AI

5G and IoT systems allow telecoms to create distributed systems that enable cost-effective and scalable AI solutions. Yet these benefits are only realized through careful, innovative implementation of a hardware infrastructure that adequately supports AI algorithms.

By understanding the full potential and limitations of edge computing and cloud networking, the telecom industry can use intelligent infrastructure to build AI systems that add value to businesses and consumers.

About the Author:

Subbu Seetharaman is Director of Engineering at Lantronix, a global provider of turnkey solutions and engineering services for the Internet of Things (IoT). Subbu is a senior engineering executive with over 25 years of experience leading software development teams, building geographically dispersed, high-performing teams involved in the development of complex software products around programmable hardware devices.