Using the Video-Capture API

This chapter covers the things you need to know to use the video-capture API (gf_vcap_*() functions).

Video capture

The GF Video-Capture functionality is designed to give as much flexibility to the application as possible. For this reason, the API is not generic; software using the GF video-capture API must have some knowledge of the target hardware. The GF library doesn't handle configuration of devices that are external to the display device (such as a video decoder).

You can use any kind of video decoder chip that can stream video in a decoded format (for example, ITU656 format) to the display device. However, it is up to the application to take care of the specifics of programming any devices that are external to the display device.

The GF API does, however, provide calls for communicating with devices that are connected to the display device via the I2C bus. The API lets you write a sequence of bytes to a device, which is identified by its I2C slave address (gf_i2c_write()). Similarly, bytes can be read from a given slave device (gf_i2c_read()). A common operation when communicating with a slave device is to write a register offset to the device, then read one or more bytes starting at that register offset. This is performed as a single operation on the I2C bus, known as a combined transfer. Call gf_i2c_writeread() to perform a combined transfer.
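As a sketch, a combined transfer to read one byte from a decoder register might look like the following. The slave address and register offset here are illustrative, and the argument order for gf_i2c_writeread() is an assumption modeled on gf_i2c_write(); check the GF library reference for the exact signature:

/* Read one byte from a hypothetical register of an I2C slave at
 * address 0x4a, using a combined transfer: write the register
 * subaddress, then read the register contents back without an
 * intervening STOP condition on the bus.
 */
uint8_t subaddr = 0x00;     /* register offset to read from (illustrative) */
uint8_t value;

if (gf_i2c_writeread(gdev, 0 /* I2C bus index */, 0x4a,
        &subaddr, sizeof(subaddr),     /* bytes to write */
        &value, sizeof(value))         /* buffer for bytes read back */
        != GF_ERR_OK) {
    fprintf(stderr, "gf_i2c_writeread() failed\n");
}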

You can use the GF video-capture API to display live video on a display layer. There is no provision for accessing the captured frame data with the CPU, or for postprocessing the captured data using the GF or OpenGL ES rendering APIs.

To use the GF video-capture API to display video on a layer, you'd typically perform these steps:

  1. Configure and enable the video source. This might, for example, involve programming the video decoder chip (such as the Philips SAA7113) using the GF I2C API calls (gf_i2c_*()).
  2. Attach to the video-capture unit you want to configure, using gf_vcap_attach().
  3. Set various capture parameters, such as cropping and scaling, using the gf_vcap_set_*() functions. You need to call gf_vcap_update() to make these settings take effect.
  4. Specify the layer the captured data is to be output on by using gf_vcap_bind_to_layer(). You don't need to worry about allocating a surface to buffer the captured video data; this is all handled internally by the QNX Graphics Framework.
  5. To avoid displaying garbage from video memory before the first frame is captured, enable the video-capture unit with gf_vcap_enable() before enabling the layer that the video content will be displayed on.

Once the output layer's size and position relative to the display are set, and the layer enabled, live video should be displayed.
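Put together, the steps above might look like the following sketch. Error handling is trimmed, and the specific gf_vcap_set_*() calls vary by hardware, so treat the comments as placeholders for the setters your device supports; gf_layer_enable() is assumed as the call that turns the output layer on:

/* Step 1 (configuring the video source over I2C) is
 * hardware-specific and omitted here. */

gf_vcap_t vcap;

/* 2. Attach to the first video-capture unit on the device. */
if (gf_vcap_attach(&vcap, gdev, 0, 0) != GF_ERR_OK) {
    fprintf(stderr, "gf_vcap_attach() failed\n");
    exit(EXIT_FAILURE);
}

/* 3. Set capture parameters (cropping, scaling, and so on) with
 *    the gf_vcap_set_*() calls appropriate for your hardware,
 *    then make the settings take effect. */
gf_vcap_update(vcap);

/* 4. Direct captured frames to a layer; GF allocates the
 *    buffering surface internally. */
gf_vcap_bind_to_layer(vcap, layer);

/* 5. Start capturing before enabling the layer, so the layer
 *    never shows uninitialized video memory. */
gf_vcap_enable(vcap);
gf_layer_enable(layer);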

Note that software using the GF video-capture API must be targeted for a given type of display device, and know some of the specifics of programming that device. There is no generic mechanism for querying various limitations and attributes of a video-capture unit. It is your responsibility as the developer to make sure that cropping and scaling parameters and so on are within the limits of what the hardware can support. Consult the documentation supplied by the hardware vendor to determine any such limitations. You also need to make sure that the layer you are using to display the video is compatible with the video-capture unit.

An example

Let's look at an example of performing video capture on a Fujitsu Coral-based board that has the Philips SAA7113 (for PAL and NTSC) and Analog Devices AD9883 (for VGA) video decoder chips. This example is from the coral-capture sample application shipped with QNX Advanced Graphics. Only code snippets are shown here; see the example project for full source code.

In the main function of our application, after we've attached the device, the display, and a layer to display the captured video on, we attach to the video-capture device (in this case, the first available device, at index 0):

if (gf_vcap_attach(&vcap, gdev, 0, 0) != GF_ERR_OK) {
    fprintf(stderr, "gf_vcap_attach() failed\n");
    exit(EXIT_FAILURE);
}

Next, configure the Philips SAA7113 to automatically handle PAL or NTSC formats. The board's documentation tells us its I2C address (0x4a), and which registers to set for various parameters:

#define PHILIPS_I2C_ADDR        0x4a

int
saa7113_init(gf_dev_t gdev)
{
    static uint8_t saa7113_defaults[] = {
        0x01,       /* Subaddr of first register to write */
        0x08,       /* Horizontal Increment delay, recommended */
        0xc1,       /* Input, Composite */
        0x33,       /* Analog Control, Enable */
        0x00,       /* Gain Control */
        0x00,       /* Gain Control */
        0xe9,       /* Hsync Start */
        0x0d,       /* Hsync Stop */
        0x98,       /* Sync Control Change 000118 */
        0x01,       /* Luminance Control */
        0x80,       /* Brightness */
        0x47,       /* Contrast */
        0x40,       /* Saturation */
        0x00,       /* Chroma Hue */
        0x01,       /* Chrominance Control */
        0x2a,       /* Chroma Gain */
        0x00,       /* Format */
        0x0c,       /* Output Control 1 */
        0x01        /* Output Control 2 */
    };

    if (gf_i2c_write(gdev, 0, PHILIPS_I2C_ADDR,
        saa7113_defaults, sizeof (saa7113_defaults)) != GF_ERR_OK) {
        fprintf(stderr, "gf_i2c_write() failed\n");
        return -1;
    }

    return 0;
}

You'd configure the AD9883 in a similar way (see the example project).

Finally, we bind the video-capture device to our video-capture layer, set any flags to configure the capture behavior, set the input source (in this case, an ITU656 source) and enable the capture device:

gf_vcap_bind_to_layer(vcap, layer);
gf_vcap_set_flags(vcap, GF_VCAP_FLAG_DEINTERLACE_BOB);
gf_vcap_set_input_source(vcap, CORAL_VIDEO_SOURCE_ITU656);
gf_vcap_enable(vcap);

At this point the video-capture device is started, and it's sending video to the layer we bound to it. You can now adjust the appearance of the video by: