Artifact mitigation for high-resolution near-field SAR images by means of conditional generative adversarial networks
Author(s) and others:
Keyword(s):
Generative adversarial networks
Freehand
Synthetic aperture radar
Publication date:
Publisher's version:
Citation:
Abstract:
This work presents an approach to enhance the quality of high-resolution images obtained with systems relying on synthetic aperture radar (SAR). For this purpose, a deep learning method, conditional generative adversarial networks (cGAN), is applied to the imager output when it is prone to artifacts. This is especially the case for novel systems pushing the limits of SAR (e.g., irregular sampling, multilayered media), which produce very chaotic clutter and image artifacts that cannot be easily removed with conventional approaches. The cGAN can be trained to detect high-level characteristic features in the image (e.g., parts of a scissors blade) so that an output based on these detected features can be tailored. In other words, it can translate features contaminated by artifacts into clean features, effectively improving the quality of SAR images. Unlike other deep learning approaches, the training of the involved neural networks tends to be stable thanks to the structure based on two competing subsystems. The proposed approach is illustrated using simulated and measured data in the context of two advanced near-field SAR systems considering: i) cylindrical multilayered media, and ii) freehand acquisitions. Results show that cGANs clearly outperform conventional approaches, removing most of the artifacts and producing a clean output image.
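The "two competing subsystems" mentioned in the abstract refer to the generator/discriminator pair of a conditional GAN. The sketch below illustrates, under stated assumptions, a pix2pix-style objective: the discriminator scores (artifact image, clean image) pairs against (artifact image, generated image) pairs with binary cross-entropy, while the generator is additionally penalized by an L1 term toward the clean reference. The toy array shapes, the weight `lam=100.0`, and the function names are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # Binary cross-entropy between discriminator scores in (0, 1) and labels.
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def cgan_losses(d_real, d_fake, y_fake, y_clean, lam=100.0):
    """Toy pix2pix-style cGAN losses (illustrative sketch, not the paper's code).

    d_real : discriminator scores on (artifact image, clean image) pairs
    d_fake : discriminator scores on (artifact image, generated image) pairs
    y_fake, y_clean : generated and reference clean images, for the L1 term
    """
    # Discriminator: push real-pair scores toward 1, generated-pair scores toward 0.
    d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))
    # Generator: fool the discriminator AND stay close to the clean reference.
    g_loss = bce(d_fake, np.ones_like(d_fake)) + lam * float(np.mean(np.abs(y_fake - y_clean)))
    return d_loss, g_loss

# Toy example with random data standing in for SAR images.
rng = np.random.default_rng(0)
d_real = rng.uniform(0.6, 0.9, size=8)        # discriminator leans "real" on real pairs
d_fake = rng.uniform(0.1, 0.4, size=8)        # ... and "fake" on generated pairs
y_fake = rng.uniform(size=(8, 16, 16))        # generated "clean" images
y_clean = rng.uniform(size=(8, 16, 16))       # reference clean images
d_loss, g_loss = cgan_losses(d_real, d_fake, y_fake, y_clean)
```

Alternating gradient steps on `d_loss` and `g_loss` is what gives the adversarial training its comparatively stable, self-balancing character described in the abstract.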
ISSN:
Sponsored by:
This work was supported in part by the Ministerio de Ciencia, Innovación y Universidades of Spain/Fondo Europeo de Desarrollo Regional (FEDER) under Project PID2021-122697OB-I00, by the Principado de Asturias under Project AYUD/2021/51706, and by the Spanish Ministry of Universities and the European Union (NextGenerationEU Fund) under Project MU-21-UP2021-030.