Video stabilization of atmospheric turbulence distortion
Yifei Lou
UCSD
Abstract:
We present a method to enhance the quality of a video
sequence captured through a turbulent atmospheric medium. Enhancement
is framed as the inference of the radiance of the distant scene,
represented as a "latent image," that is assumed to be constant
throughout the video. Temporal distortion is thus zero-mean and
temporal averaging produces a blurred version
of the scene's radiance, which is processed via a Sobolev gradient flow
to yield the latent image in a way that is reminiscent of the "lucky
region" method. Without enforcing prior knowledge, we can stabilize
the video sequence while preserving fine details. We also present the
well-posedness theory for the stabilizing PDE and a linear stability
analysis of the numerical scheme.
This is a joint work with Sung Ha Kang, Stefano Soatto and Andrea Bertozzi.
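As an illustrative sketch only (not the authors' implementation), the pipeline described in the abstract — temporal averaging of the distorted frames followed by a Sobolev-preconditioned sharpening flow — might look like the following. The parameters `lam` (Sobolev regularization weight), `dt`, and `steps` are hypothetical choices; the key point is that the Fourier multiplier of the preconditioned backward-heat operator is bounded, which is what makes the stabilizing PDE well-posed:

```python
import numpy as np

def sobolev_sharpen(img, lam=0.05, dt=0.2, steps=20):
    """Sharpen a blurred image with a Sobolev-preconditioned
    backward-heat flow, u_t = -(I - lam*Laplacian)^{-1} Laplacian(u).
    In Fourier space the multiplier |k|^2 / (1 + lam*|k|^2) is
    bounded, so the flow amplifies high frequencies only mildly
    (illustrative parameters, not the paper's)."""
    ny, nx = img.shape
    kx = np.fft.fftfreq(nx) * 2.0 * np.pi
    ky = np.fft.fftfreq(ny) * 2.0 * np.pi
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2   # symbol of -Laplacian
    mult = k2 / (1.0 + lam * k2)               # bounded amplifier
    u_hat = np.fft.fft2(img)
    for _ in range(steps):
        u_hat = u_hat * (1.0 + dt * mult)      # explicit Euler step
    return np.real(np.fft.ifft2(u_hat))

def stabilize(frames, **kwargs):
    """Temporal mean of the (zero-mean-distorted) frames gives a
    blurred estimate of the scene radiance; the sharpening flow
    then recovers an approximation of the latent image."""
    blurred = np.mean(frames, axis=0)
    return sobolev_sharpen(blurred, **kwargs)
```

Because the multiplier is bounded, each explicit Euler step multiplies every frequency by a factor between 1 and 1 + dt/lam, so the scheme is linearly stable for any finite number of steps, in contrast to the unpreconditioned backward heat equation.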