Autonomous underwater vehicles (AUVs) rely on a variety of sensors for decision-making, and vision-based sensors are among the most appealing. The visual data they capture, however, needs enhancement: red light is absorbed quickly with depth, giving images a blue cast, and even small variations in the vehicle's altitude above the seafloor change image brightness. Refraction and absorption, suspended particles in the water, and colour distortion together produce noisy, distorted visual data.
Figure: the first row shows the ground-truth images we want; the second row shows the same scenes as seen underwater.
To tackle this, Pix2Pix GANs have been used to restore the images. Pix2Pix is a conditional adversarial network that performs image-to-image translation, mapping images from an arbitrary domain X to another arbitrary domain Y. By letting X be a set of distorted underwater images and Y a set of undistorted underwater images, the network can generate an enhanced version of a given underwater image.
Figure: 1st column: underwater input image; 2nd column: ground truth; 3rd column: generated image.
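To make the setup concrete, here is a minimal sketch of one Pix2Pix training step in PyTorch. It is illustrative only: `UNetGenerator` and `PatchDiscriminator` are hypothetical toy stand-ins for the full U-Net generator and 70x70 PatchGAN discriminator of the Pix2Pix paper, and the L1 weight of 100 follows that paper; this is not the project's actual code.

```python
# Minimal Pix2Pix training-step sketch (illustrative, not the project's code).
import torch
import torch.nn as nn

class UNetGenerator(nn.Module):
    """Toy stand-in for the Pix2Pix U-Net; a real one adds skip connections."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class PatchDiscriminator(nn.Module):
    """Toy PatchGAN: scores patches of the concatenated (input, target) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # per-patch logits
        )
    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

G, D = UNetGenerator(), PatchDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_l1 = 100.0  # L1 weight from the Pix2Pix paper

def train_step(x, y):
    """x: distorted underwater image (domain X); y: clean image (domain Y)."""
    # Discriminator: real pairs labeled 1, generated pairs labeled 0.
    fake = G(x)
    d_real = D(x, y)
    d_fake = D(x, fake.detach())
    loss_d = 0.5 * (bce(d_real, torch.ones_like(d_real)) +
                    bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool D while staying close to the paired ground truth (L1).
    d_fake = D(x, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, y)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Usage on a dummy batch of 256x256 images:
x = torch.randn(1, 3, 256, 256)  # distorted underwater image
y = torch.randn(1, 3, 256, 256)  # paired ground truth
print(train_step(x, y))
```

The two loss terms split the work: the adversarial term pushes the output towards realistic-looking images, while the L1 term, made possible by the paired X/Y training data, keeps the output pixel-wise close to the ground truth.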