The blurring issue and how to avoid it

Sometimes blur is used as an artistic effect, but when it appears as a distortion it becomes a problem that algorithms can detect and prevent from reaching the audience.
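As a rough illustration of how such detection can work (this is a generic heuristic, not the method used by any particular service), one common approach is to measure the variance of the Laplacian of each frame: sharp frames have strong edges and a high variance, while blurry frames score low. A minimal sketch in Python with OpenCV, where the threshold is an assumed value that would need tuning per content:

```python
import cv2

BLUR_THRESHOLD = 100.0  # assumed value; needs tuning for real content


def frame_is_blurry(frame, threshold=BLUR_THRESHOLD):
    """Return True if the frame looks blurry.

    Uses the variance of the Laplacian: blur removes high-frequency
    detail, so the Laplacian response (and its variance) drops.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold


def scan_video(path):
    """Yield (frame_index, is_blurry) for every frame of a video file."""
    capture = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        yield index, frame_is_blurry(frame)
        index += 1
    capture.release()
```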

Blurry or jumpy video can have several causes. A large amount of movement or action in the streamed content is one; heavy Internet traffic or a slow connection speed is another. Viewers can take various steps to resolve the trouble, such as checking their connection speed, rebooting the computer, restarting the home network, or improving the WiFi signal.

But sometimes the problem is on the other side of the connection: the streaming provider. How can such a company avoid blurry distortion?

Defining the problem

Image restoration can be seen as the suppression of image blurring and noise. Both of these perturbing influences can also be interpreted as transmission of the image through a composite random channel consisting of two parts.

The first part is a deterministic convolutional channel representing the blurring, while the second is a random channel emulating contamination of the signal by noise. The effect of such a composite channel can be removed by iterative detection networks (IDNs) based on a spatial separation of the optimal (single-stage) MAP detection. Because these IDNs are suboptimal MAP detectors, they generally provide a poorer estimate than the optimal detector, but with incomparably lower computational demands.

Some of these influences are time-invariant and others are temporal. The time-invariant influence is image blurring (arising from various origins), which can be mathematically understood as a deterministic 2D ISI channel (a 2D convolution with the kernel A).

All sources create a single composite noise source that affects the captured blurred image as a random memory-less channel with independent elementary channel states (IECS-ML). The behaviour of this random channel (an advanced noise model) is governed by the following parameters: the mean value and standard deviation of the readout noise, which depend on the sensor readout rate, and the mean value and variance of the thermal noise, which grows exponentially with the sensor temperature.
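As a rough illustration of this composite channel, the sketch below blurs an image with a 2D convolution kernel (the deterministic ISI part) and then adds Gaussian readout and thermal noise with mean and standard-deviation parameters as described above. The kernel and noise values are placeholder assumptions for the example, not parameters of any real sensor.

```python
import numpy as np
from scipy.signal import convolve2d


def composite_channel(image, kernel,
                      readout_mean=0.0, readout_std=2.0,
                      thermal_mean=0.5, thermal_std=1.0,
                      rng=None):
    """Simulate the blur-plus-noise channel described above.

    image  : 2D array of pixel intensities
    kernel : 2D blur kernel (the deterministic convolutional part)
    The noise parameters are illustrative placeholders.
    """
    rng = rng or np.random.default_rng()
    # Deterministic part: 2D convolution with the blur kernel.
    blurred = convolve2d(image, kernel, mode="same", boundary="symm")
    # Random part: memory-less additive noise (readout + thermal).
    readout = rng.normal(readout_mean, readout_std, size=image.shape)
    thermal = rng.normal(thermal_mean, thermal_std, size=image.shape)
    return blurred + readout + thermal


# Example: a 5x5 uniform (box) blur kernel applied to a random test image.
kernel = np.full((5, 5), 1.0 / 25.0)
clean = np.random.default_rng(0).uniform(0, 255, size=(64, 64))
observed = composite_channel(clean, kernel)
```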

Same video for different devices

One of the challenges streaming service providers face is that they deliver content on demand to a wide variety of devices. Because these devices feature different hardware and run different operating systems, sending the best-quality image to every user becomes more complicated.

Because streamers send the signal over the Internet, they also have to deal with bandwidth congestion. Many companies compete for the available Internet capacity in order to deliver the best user experience.

To get around this problem, streaming services use adaptive bitrate streaming (ABR), a technique that dynamically adjusts the compression level and video quality of a stream to match the available bandwidth. The source file is passed through an encoder, which produces separate feeds of varying quality, typically in three ranges: a high-quality stream, a medium-quality stream, and a low-quality stream.
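As an illustration, the sketch below drives ffmpeg from Python to produce the three renditions mentioned above. The bitrates, resolutions and output names are assumptions chosen for the example, not a recommended encoding ladder.

```python
import subprocess

# Hypothetical three-rung ladder: (label, resolution, video bitrate).
RENDITIONS = [
    ("high",   "1920x1080", "5000k"),
    ("medium", "1280x720",  "2500k"),
    ("low",    "640x360",   "800k"),
]


def encode_ladder(source, renditions=RENDITIONS):
    """Encode one source file into several quality levels with ffmpeg."""
    for label, resolution, bitrate in renditions:
        output = f"stream_{label}.mp4"
        subprocess.run(
            ["ffmpeg", "-y", "-i", source,
             "-c:v", "libx264", "-b:v", bitrate, "-s", resolution,
             "-c:a", "aac", output],
            check=True,
        )


encode_ladder("input.mp4")
```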

For OTT services, ABR usually relies on a packaging protocol such as HLS or MPEG-DASH, in which multiple streams are defined by profiles such as low, medium, and high quality. The ABR streams are divided into video chunks of between 1 and 15 seconds, so that each viewing device can dynamically pick the chunk quality that best fits the available bandwidth at a given moment. ABR streaming for OTT requires an encoder or transcoder that can encode a single video source at multiple bitrates.
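A minimal sketch of the client-side decision: given the advertised profiles and a rolling estimate of measured throughput, pick the highest-bitrate rendition that fits with a safety margin. The bitrate values and the margin are assumptions for illustration, not the logic of any specific HLS or DASH player.

```python
# Hypothetical bitrate ladder in bits per second (would mirror the
# profiles advertised in the HLS/DASH manifest).
PROFILES = {"low": 800_000, "medium": 2_500_000, "high": 5_000_000}


def pick_profile(measured_bps, profiles=PROFILES, safety_margin=0.8):
    """Choose the highest-bitrate profile that fits the measured bandwidth.

    measured_bps  : recent throughput estimate in bits per second
    safety_margin : only spend this fraction of the measured bandwidth,
                    to leave headroom for throughput fluctuations
    """
    budget = measured_bps * safety_margin
    affordable = [name for name, bps in profiles.items() if bps <= budget]
    if not affordable:
        # Nothing fits: fall back to the lowest-bitrate profile.
        return min(profiles, key=profiles.get)
    return max(affordable, key=profiles.get)


# Example: with ~4 Mbit/s measured, the client would request "medium" chunks.
print(pick_profile(4_000_000))
```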

Practically all video compression codecs use motion (temporal, inter-frame) prediction and spatial (intra-frame) prediction to remove redundancy from the signal. These methods are lossless. The lossy part is the subsequent Discrete Cosine Transform (DCT) of the residual signal and its division by a quantization matrix. In some codecs the DCT is replaced by a wavelet transform (also a lossy technique).

Under extreme compression, the image becomes blurry because the higher frequencies are discarded. Blocking artefacts also appear in codecs without a deblocking filter.
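To illustrate how aggressive quantization removes the higher frequencies, the sketch below applies a 2D DCT to an 8x8 block, divides by a flat quantization step (a simplification of a full quantization matrix), rounds, and reconstructs the block. The step sizes are arbitrary example values; with a large step, most high-frequency coefficients round to zero and the block loses its sharpness.

```python
import numpy as np
from scipy.fftpack import dct, idct


def dct2(block):
    """2D type-II DCT of a block (orthonormal)."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")


def idct2(coeffs):
    """Inverse 2D DCT."""
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")


def compress_block(block, quant_step=50.0):
    """Quantize DCT coefficients; large steps zero out high frequencies."""
    coeffs = dct2(block.astype(float))
    quantized = np.round(coeffs / quant_step)   # lossy step
    return idct2(quantized * quant_step)        # reconstruction


# Example: the same detailed block under mild and harsh quantization.
rng = np.random.default_rng(0)
block = rng.uniform(0, 255, size=(8, 8))
mild = compress_block(block, quant_step=10.0)
harsh = compress_block(block, quant_step=120.0)
```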

At Video MOS, we have developed a tool that uses advanced Artificial Intelligence algorithms to monitor the quality of audiovisual content and services, fully tailored to each customer and aimed primarily at broadcasters, content producers, OTTs, content platforms and TV producers. With this solution, streamers can detect and avoid blurry video and offer the best user experience.

Video-MOS
webmaster@video-mos.com