GPU-Based Parallel Algorithm for Wideband Signal Timing Recovery
Abstract
Symbol timing recovery is a computationally intensive process that detects and corrects timing error in a coherent receiver. This paper presents a new GPU-based implementation of symbol timing recovery built on a parallel version of Gardner's method for minimizing the timing error. Gardner's method is inherently sequential, since it relies on feedback of the detected error. The proposed method performs fast, parallel timing error detection (TED) on a GPU by applying the timing recovery structure to blocks of signal samples, which makes rapid error detection possible. The interpolation filter coefficients are computed before timing recovery to detect the timing error of the symbols. We then compare the timing recovery performance of different parallel techniques on different GPUs, minimizing error while achieving speedups of up to 100 times over the sequential Gardner method. Performance evaluations show that the optimized GPU implementation reaches a very high timing recovery rate (50 Msymb/sec on a GTX 1050 Ti).
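To make the block-based TED idea concrete, the following is a minimal sketch of the Gardner timing-error detector applied to a whole block of samples at once, which is the kind of per-symbol-parallel computation that maps naturally onto a GPU. This is an illustrative NumPy version, not the authors' implementation; the assumed sampling layout (2 samples per symbol, with on-time samples at even indices and midpoint samples at odd indices) and the sign convention of the error are assumptions for the sketch.

```python
import numpy as np

def gardner_ted_block(x):
    """Gardner timing-error detector over a block of samples.

    Assumes x holds 2 samples per symbol: x[0], x[2], ... are the
    (approximate) symbol-rate strobes and x[1], x[3], ... the
    half-symbol midpoints. Returns one error value per symbol
    interval; every error can be computed independently, so the
    whole block parallelizes trivially (e.g. one GPU thread each).
    """
    on_time = x[0::2]   # symbol-spaced samples
    midpoint = x[1::2]  # half-symbol-offset samples
    n = min(len(midpoint), len(on_time) - 1)
    # Complex Gardner TED: e[k] = Re{ conj(mid[k]) * (on[k+1] - on[k]) }
    return np.real(np.conj(midpoint[:n]) * (on_time[1:n + 1] - on_time[:n]))

# With perfect timing, the midpoints sit on the zero crossings and the
# detector output is (near) zero; a consistent sampling offset produces
# errors of a consistent sign, which the feedback loop then corrects.
```

In a sequential Gardner loop, each error value feeds back before the next sample is interpolated; the block formulation above trades that immediate feedback for parallelism, which is the structural change the paper's method exploits.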
Licensing
TURCOMAT publishes articles under the Creative Commons Attribution 4.0 International License (CC BY 4.0). This licensing allows for any use of the work, provided the original author(s) and source are credited, thereby facilitating the free exchange and use of research for the advancement of knowledge.
Detailed Licensing Terms
Attribution (BY): Users must give appropriate credit, provide a link to the license, and indicate if changes were made. Users may do so in any reasonable manner, but not in any way that suggests the licensor endorses them or their use.
No Additional Restrictions: Users may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.