Sunday, July 24, 2011

Masked sharpening works

Over the past two weeks, I've been working on a filter that sharpens only specific parts of video frames, selected by a mask. Now I have a version that I think is worth a look. The basic operation is as follows.

The filter has two sink pads: one for the video stream to be sharpened (AYUV), and one for the selection mask (8-bit grayscale). The sharpening is done by a bin built in the init function. The sharpening bin has two source pads (one for the original and one for the globally sharpened stream), and of course a sink pad. Sharpening is performed by a gaussianblur element, which sharpens the video when its sigma property is set to a negative value. The original, sharpened, and mask streams are collected by a CollectPads, from which the callback function extracts the frames and blends the original and sharpened images, using the mask frames as the blend ratio. This means that the brighter a pixel in the mask frame is, the more weight the sharpened image has in the result at the corresponding pixel.

If the mask is an edge magnitude image of the original frame, this method gives good results, similar to the smart sharpening method discussed before, though there is still plenty of room for improvement:

1. The original and sharpened frame buffers from the sharpening bin must go through internal sink pads that are not added to any element, but activated manually in push mode, so that the CollectPads can collect them.

2. The original smart sharpening method sharpens luminance only.

3. The filter has a CollectPads but no custom event handler yet, so it breaks events in the pipeline.

4. When gst-launch is stopped with Ctrl+C, the filter segfaults. I'm almost sure this is caused by bad handling of a state change, as the segfault occurs while the CollectPads callback function tries to get the data from a buffer.

5. I used a deprecated function for requesting source pads from the tee element in the sharpening bin, because the GStreamer version shipped with Ubuntu 10.10 doesn't yet support the new one. I'll install a newer version into my home directory to make sure everything works well with the latest and greatest GStreamer.

For some reason (I suspect problem 3 above), the pipeline doesn't start if the source is a video file. (I tried an Ogg/Theora file with filesrc ! oggdemux ! theoradec and with filesrc ! decodebin, and a 3GPP file with decodebin.) If I use v4l2src instead and record directly from my webcam, it works.

The new code is up on GitHub. After compiling and installing it the usual way, you can try the following - not particularly short - launch line, which takes a video stream from a v4l2src and saves three separate Ogg/Theora files: the original stream, the same stream after going through the gimpdespeckle filter, and the sharpened, despeckled stream. The sharpening mask is produced by running the despeckled stream through the sobel filter to get an edge magnitude image, then through the gimpcontraststretch filter to make the sharpening more visible. In real-world scenarios, milder or no tweaking of the edge magnitude image, and/or weaker sharpening, may suffice. (The sigma of the sharpening gaussianblur element will become a tunable property later.) The launch line:

gst-launch-0.10 -e maskedunsharp name="unsh" v4l2src device=/dev/video0 ! video/x-raw-yuv,width=320,height=240 ! tee name="orig" ! queue ! ffmpegcolorspace ! gimpdespeckle despeckle-radius=1 ! ffmpegcolorspace ! video/x-raw-yuv,format=\(fourcc\)AYUV,width=320,height=240 ! tee name="t" ! queue ! unsh.fsink t. ! queue ! ffmpegcolorspace ! sobel ! ffmpegcolorspace ! gimpcontraststretch ! ffmpegcolorspace ! unsh.msink unsh.src ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=sharpened.ogv t. ! queue ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=despeckled.ogv orig. ! queue ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=original.ogv

If you'd like to see the results of the stages side by side, you can use this launch line:

gst-launch-0.10 filesrc location=despeckled.ogv ! oggdemux ! theoradec ! xvimagesink filesrc location=sharpened.ogv ! oggdemux ! theoradec ! xvimagesink filesrc location=original.ogv ! oggdemux ! theoradec ! xvimagesink

Saturday, July 9, 2011

Despeckle filter

One approach to reducing CCD noise in an image is to run a despeckle filter, then sharpen the image to offset the despeckle filter's blurring effect. The implementation of the particular sharpening method is still in progress; so far I have the Sobel edge magnitude filter mentioned in the last post.

For the despeckle part, I pushed a GStreamer port of the GIMP Despeckle filter to GitHub. It is basically a median filter with an adjustable region size, which can optionally adapt the region size based on the histogram of the region. It can also operate in a recursive mode, in which filtered pixel data is written back to the source buffer as well, producing a different, and stronger, effect. Region size, bright and dark thresholds, recursiveness, and adaptivity can be adjusted via element properties. The plugin can be installed the usual way; the element is named "gimpdespeckle".

Tuesday, July 5, 2011

Sobel filter cleaned up

This is another case of something I did for fun coming in handy later. After learning the basics with the help of the GStreamer Plugin Writer's Guide (a very good piece of documentation indeed), I wanted to tackle a moderately difficult task for self-assessment before starting the real porting work. I chose to put this semester's DIP class knowledge to use and write a filter that calculates the gradient magnitude of the luminance channel of I420 video streams using the Sobel operator.

The original plans for the project included an idea about smart sharpening, a procedure useful in CCD noise removal. (Details here.) Those who read hypertext depth-first already see the point: it requires edge detection. (At the time that article was written, the GIMP edge detection plugin used in the article was implemented with the Sobel operator.) So I cleaned up the code and added a property that turns border mirroring on or off. When it's on, pixel indices in the horizontal and vertical gradient calculations are clamped inside the frame boundaries, effectively mirroring the border pixels outside the frame, so that the operator can be applied to them. (The gradient is otherwise undefined at the borders.) When it's off, the border pixels are set to zero instead. The latter is faster; the former gives meaningful data on the borders. I believe that for most cases, off is the better choice.

The new code is up on GitHub.