To Unleash Tomorrow’s Most Innovative Applications, Account for Latency in PNT Testing

By Ricardo Verdeguer Moreno, Product Line Manager at Spirent Communications

There’s a secret ingredient at the heart of emerging applications like autonomous vehicles, precision agriculture, and many others: increasingly accurate positioning, navigation, and timing (PNT) systems. The more precisely we can measure real-time position and velocity—and the faster systems can respond to changing PNT data—the more powerful automation and innovation we can unleash. For those tasked with testing and validating such systems, however, the growing sophistication of PNT technologies exacerbates a longstanding challenge in hardware-in-the-loop (HIL) testbed design: latency.

System designers and manufacturers continue to add more sensors to vehicles and systems, such as global navigation satellite systems (GNSS), light detection and ranging (LiDAR), and inertial measurement units (IMUs), to capture ever-more precise PNT data. In the lab, however, simulating the interaction of all those sensors, and the vast range of conditions under which they might operate, demands complex and interoperable test processes. That testing often takes the form of HIL, which brings its own challenges. Left unaccounted for, even small fluctuations in latency can lead to wildly different results in the device under test (DUT) and, over time, to significant testing reliability issues. And as we apply PNT systems to more mission- and life-critical applications, it becomes even more important to understand exactly how they perform.

Fortunately, it’s possible to minimize HIL testing uncertainty, even in environments using multiple PNT sensors and simulators. That starts with understanding the sources of latency in your testing environment and taking straightforward steps to mitigate it.

Inside PNT Latency

In a HIL rig, a GNSS simulator’s job is to receive command inputs, process them, and convert them into analog RF outputs. As in all transmission and computation processes, this takes some amount of time, introducing latency into the overall environment. The higher the latency (the more time that elapses between the control command being issued and the simulator outputting an updated signal), the more the signal information will vary from the underlying truth—and the less realistic the test. As more simulators in a testbed add inconsistent or unpredictable latency, test results quickly accumulate uncertainty.

In many cases, it’s possible to work around this problem. In open-loop testing configurations that don’t incorporate real-time feedback from remote systems, you can use predefined motion and trajectories, enabling the simulator to process updates ahead of time and generate them without any latency. In closed-loop testing, however, where the goal is to model real-time feedback from the DUT (such as when simulating a drone controlled by a human operator), it’s not possible to know motion and control commands in advance. In these HIL testbeds, latency can have much more significant effects. Did the drone start turning 10 meters before the obstacle that the remote pilot was trying to avoid, or five? Knowing that answer is essential for effective testing. But the more latency fluctuations the test setup injects, the more uncertain the results. In testbeds incorporating multiple sensors and dynamic systems, those errors compound quickly.
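
A quick back-of-the-envelope calculation shows why this matters: unmodeled latency turns directly into position error at the vehicle’s speed. The short Python sketch below illustrates the arithmetic; the 25 m/s drone speed and the jitter values are assumptions chosen purely for illustration, not measurements from any particular testbed.

```python
# Back-of-the-envelope sketch: how unmodeled latency translates into
# position error for a moving vehicle. All numbers are illustrative.

def position_error_m(speed_mps: float, latency_s: float) -> float:
    """Position error contributed by unmodeled latency alone."""
    return speed_mps * latency_s

# A drone flying at an assumed 25 m/s:
for jitter_ms in (5, 20, 100):
    err = position_error_m(25.0, jitter_ms / 1000.0)
    print(f"{jitter_ms:>4} ms of latency jitter -> up to {err:.2f} m of position error")
```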

There are four components of latency in a typical HIL testbed:

  • Network latency: This is the delay between when a message is transmitted (such as from a HIL controller) and when the simulator receives it. It depends entirely on the testbed network and is typically around 1-2 milliseconds, depending on the protocols in use.
  • Sampling uncertainty: GNSS simulators work by sampling inputs from the test environment at regular intervals and updating their models. If something changes just after a simulator has collected a sample, that change won’t be reflected until the next iteration. The less frequently a simulator samples and updates, and the more simulators used in the testbed, the more uncertainty is introduced.
  • Update latency: This is the time a simulator takes to update its model after receiving new input data.
  • Output latency: This is the time a simulator takes between updating its model and outputting a new RF signal to reflect the latest information.

The combined update latency, output latency, and sampling uncertainty create the total “system latency” of a simulator. System latency varies across simulators. In lower-quality equipment, where processing time can vary from cycle to cycle, latency can be inconsistent even within the same device.
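
One way to build intuition for how these components combine is a simple Monte Carlo model. The sketch below sums the four components for a single simulated command; every distribution and figure is an illustrative assumption, not a measured or vendor-specified value.

```python
# Minimal Monte Carlo sketch of total one-way latency in a HIL testbed,
# combining the four components described above. All distributions and
# numbers here are illustrative assumptions.
import random

UPDATE_INTERVAL_S = 0.010  # assumed 100 Hz simulator update rate

def one_way_latency_s() -> float:
    network  = random.uniform(0.001, 0.002)            # network latency: ~1-2 ms
    sampling = random.uniform(0.0, UPDATE_INTERVAL_S)  # command lands mid-cycle
    update   = random.gauss(0.002, 0.0002)             # assumed model-update time
    output   = random.gauss(0.001, 0.0001)             # assumed RF output time
    return network + sampling + update + output

samples = [one_way_latency_s() for _ in range(100_000)]
mean_ms = 1000 * sum(samples) / len(samples)
spread_ms = 1000 * (max(samples) - min(samples))
print(f"mean latency ~ {mean_ms:.2f} ms, run-to-run spread ~ {spread_ms:.2f} ms")
```

In this toy model, the sampling term dominates the run-to-run spread, which is one reason a simulator’s update rate matters as much as its raw processing speed.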

Reducing Uncertainty

When configuring a testbed, make sure you’re taking steps to minimize latency in both the simulator(s) and the network. Start by following these general guidelines:

  • Optimize the network: The hardware and software choices you make in the testbed network directly affect the network latency introduced and contribute to sampling uncertainty. Make sure you’re using high-quality cabling, high-performing external equipment (such as Ethernet switches), and the most efficient communication protocols.
  • Account for system latency in simulators: The latency of a given simulator depends entirely on the design and construction of that device. Multiple factors—hardware type, quality of internal connections, effectiveness of the software algorithms—can contribute to latency, none of which are within a tester’s control. Seek out simulators known to have highly consistent latency, so it’s easier to account for in testing. Don’t assume that latencies will be the same from one simulator to another. And avoid using low-quality equipment, where latency can fluctuate across test runs from the same simulator.
  • Minimize sampling uncertainty: HIL testbed setups often include multiple simulators, each of which introduces its own sampling uncertainty. You can minimize these effects, however, by synchronizing all simulators to send/receive messages and update their models at the same intervals. If you’re using a GNSS simulator that provides a one pulse per second (1 PPS) signal input/output synchronized with its update rate, for example, you can use that device to synchronize update rates in the other simulators and systems in the testbed. This ensures that new commands are always issued as close as possible to the start of the next update cycle (see the sketch after this list).
  • Test and test again: Especially for HIL setups with multiple devices and simulators, the best option is to conduct many test runs, under a wide range of operating conditions, so you can understand and account for all latency in the testbed.
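
To make the synchronization idea from the third bullet concrete, the sketch below delays each command until the next shared update boundary. The 100 Hz rate, the send_command callback, and the use of the host monotonic clock as a stand-in for a 1 PPS-disciplined timebase are all hypothetical assumptions for illustration.

```python
# Sketch of epoch-aligned command issuance, assuming every simulator in the
# testbed has been disciplined to a shared 1 PPS reference so update cycles
# start on common boundaries. send_command is a hypothetical placeholder for
# your own testbed interface; the host monotonic clock stands in for a
# PPS-disciplined timebase.
import time

UPDATE_INTERVAL_S = 0.010  # assumed shared update rate (100 Hz)

def sleep_until_next_epoch(interval_s: float = UPDATE_INTERVAL_S) -> None:
    """Block until the next shared update boundary."""
    now = time.monotonic()
    time.sleep(interval_s - (now % interval_s))

def issue_synchronized(command: bytes, send_command) -> None:
    """Issue a command at the top of an update cycle to cut sampling uncertainty."""
    sleep_until_next_epoch()
    send_command(command)
```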

By taking steps to mitigate latency in PNT systems, you create a more realistic testing environment. You minimize the time and effort needed to separate latency generated by the testbed from the true responsiveness of the DUT. Best of all, you can move forward with more innovative and transformative PNT applications, knowing that your testing delivers consistency you can trust.

For more information, visit https://www.spirent.com/