Camera Library Overview

The Camera library provides applications with a consistent interface to interact with up to 64 cameras connected to the hardware platform; the actual number available depends on the limitations and resources of your hardware platform. Each application can support up to 16 cameras. For example, if you had 64 cameras on a system and wanted to use them all, you could do so by running four applications, each using 16 cameras.

Architecture of the Camera library

The Camera library simplifies interacting with the cameras connected to your system. You can use the Camera library to build applications that stream live video, play prerecorded video, record video, and encode video.


Figure 1. Camera library
The library consists of the following components:
Sensor service
The Sensor service is a process that interacts directly with the camera hardware. The service runs initially as root, gets the required privileges, and then runs as non-root (UID/GID 521). The camera hardware is configured in a separate configuration file. The configuration file allows you to change settings on the camera and determines the identifiers to use when you work with the cameras using the Camera API. The Sensor service does the following:
  • processes requests from applications that use the Camera library API
  • tracks the permissions and access of applications to cameras in the system
  • ensures that the client applications have the necessary permissions to access the camera
  • manages power to a camera based on the connections to it
  • manages the camera configuration settings
The service also handles the main interactions with the framework, including interactions with the Screen service, such as managing a viewfinder window and the Stream object for the image buffers from a camera.
Sensor configuration file
A text-based file that's used to configure the cameras on the target board. You can specify the name and additional metadata to describe the position, direction, and settings of each camera. Cameras must be configured based on the connection type (e.g., USB, GigE, or sensor). In addition to the connection types, you can configure a file camera, which is a virtual camera that plays a video file to simulate a video stream as if it came from a live camera. File cameras are very useful for testing. For more information about the sensor configuration file, see Sensor configuration file in the Sensor chapter of the System Services guide for the QNX Platform for ADAS.
Camera library
A C-based API to interface with the cameras. The library provides various high-level functions to work with a camera, including opening and closing the camera, changing focus, zooming, and so on.
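For instance, the following is a minimal sketch of the open/use/close lifecycle. It assumes the camera_open() and camera_close() calls and the CAMERA_UNIT_1, CAMERA_MODE_RW, and CAMERA_HANDLE_INVALID definitions from camera_api.h, and omits most error handling for readability:

    #include <stdlib.h>
    #include <camera/camera_api.h>

    int main(void)
    {
        camera_handle_t handle = CAMERA_HANDLE_INVALID;

        /* Open the first configured camera with read/write access so
         * that settings can be changed later. */
        if (camera_open(CAMERA_UNIT_1, CAMERA_MODE_RW, &handle) != CAMERA_EOK) {
            return EXIT_FAILURE;
        }

        /* ... interact with the camera here ... */

        camera_close(handle);
        return EXIT_SUCCESS;
    }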
Encoding/decoding
The library provides capabilities for you to encode and decode H.264-formatted files. The Camera library also supports uncompressed video, in both UCV and MOV files. The required libraries are included with your ADAS Base package installation and are installed in your $QNX_TARGET/target_processor/libs directory. For example, for x86_64-based targets, it's $QNX_TARGET/x86_64/libs. To see the list of libraries, in the Software Center, right-click any one of the Multimedia components included in your ADAS Base installation, select Properties, and click Package contents.

About the Camera library

The Camera library provides a client library that abstracts the interactions with cameras connected to your system. For example, you can do things such as the following (a short sketch follows this list):
  • connect and disconnect from a camera
  • start and stop the viewfinder, which is our abstraction of an image buffer or stream
  • use features on the camera, such as zoom, autofocus, auto white balance, and auto exposure
  • adjust focus, exposure, and white balance algorithms by defining regions of interest, or applying manual controls
  • access the stream of image buffers (framebuffers)
  • encode video and save recordings to file
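As an illustration, here's a minimal sketch that starts the viewfinder with a per-frame callback, previews for a few seconds, and stops it again. It assumes the camera_start_viewfinder() and camera_stop_viewfinder() calls and the callback signature from camera_api.h, an already-open handle, and omits error handling for readability:

    #include <stdio.h>
    #include <unistd.h>
    #include <camera/camera_api.h>

    /* Invoked once for each image buffer the viewfinder delivers. */
    static void vf_callback(camera_handle_t handle, camera_buffer_t *buf, void *arg)
    {
        unsigned *frames = (unsigned *)arg;
        (void)handle;
        (void)buf;       /* a real application would inspect the buffer here */
        (*frames)++;     /* for this sketch, just count the frames received */
    }

    static void show_preview(camera_handle_t handle)
    {
        unsigned frames = 0;

        camera_start_viewfinder(handle, vf_callback, NULL, &frames);
        sleep(5);        /* let the preview run for five seconds */
        camera_stop_viewfinder(handle);
        printf("received %u frames\n", frames);
    }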
Note: The Camera library does not automatically play an audible sound when it starts to record video. Applications must supply their own sound when a video recording starts or ends. While you can choose not to play an audible sound when you start or stop a recording, you are responsible for ensuring that the application adheres to the local laws of the regions in which you wish to distribute the application.

For example, it's illegal to mute or modify the sound of a camera application in Japan and Korea. Ensure that you comply with the laws and regulations of the countries in which you distribute your software.

The Camera Developer's Guide is intended for application developers who want to implement a vision platform. In particular, the Camera library provides camera input, video capture, video playback, and support for hardware-accelerated vision algorithms on supported target boards. There are several code snippets included in this guide. These examples show which functions can be called, and how they can be called, by your application. Make sure you perform the appropriate error handling in your application; for readability, we've often omitted error checking and handling in the code snippets.

Interaction with other QNX components

To use the Camera library, you should have a basic understanding of how to work with the Screen Graphics Subsystem. Depending on what your application does, you may also need knowledge of specific Screen concepts, such as windows and streams.

Camera library header files

To use the Camera library, you can include the following header files (the snippet after this list shows them together):
  • camera_api.h: This is the main interface to the Camera library. You must always include this header file to use the Camera library.
  • camera_encoder.h: This interface is used to get and set video-encoding properties.
  • camera_h264avc.h: This interface defines the encoding properties that are used with the camera_encoder.h interface.
  • camera_3a.h: This interface is used to work with various manual settings on the camera, including ISO, shutter speeds, white balance, and aperture.
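For example, an application that captures video, encodes it, and adjusts manual controls might include the headers as follows. The camera/ directory prefix shown here is an assumption about how the headers are installed; only camera_api.h is mandatory:

    #include <camera/camera_api.h>      /* always required */
    #include <camera/camera_encoder.h>  /* video-encoding properties */
    #include <camera/camera_h264avc.h>  /* H.264/AVC encoding parameters */
    #include <camera/camera_3a.h>       /* manual ISO/shutter/white balance/aperture */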

Working with the camera

The Camera library supports many different cameras, such as ISP, USB, and CMOS sensor cameras, and System on a Chip (SoC) systems. Because of this, the Camera library supports decoding and encoding on different hardware platforms and hardware platform variants. When the Sensor service runs, it knows which cameras are available based on what's specified in the sensor configuration file. For more information about the sensor configuration, see Sensor configuration file in the Sensor chapter of the System Services guide. After you start the Sensor service with a valid sensor configuration file, you can use the Camera library to do the following (see the sketch after this list):
  • query what cameras are available, as well as metadata (if available in the configuration file) regarding each camera
  • connect to the camera
  • configure the camera, including frame rates, resolution, and so on
  • perform operations with the camera, such as showing live video, recording, and encoding video
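To make this concrete, here's a minimal sketch that queries the available cameras, opens the first one, and configures its viewfinder resolution and frame rate. It assumes the camera_get_supported_cameras() and camera_set_vf_property() calls and the CAMERA_IMGPROP_* property keys from camera_api.h, and omits error handling for readability:

    #include <stdio.h>
    #include <camera/camera_api.h>

    int main(void)
    {
        camera_unit_t cameras[16];
        unsigned nsupported = 0;
        camera_handle_t handle;

        /* Ask the Sensor service which camera units are available. */
        camera_get_supported_cameras(16, &nsupported, cameras);
        printf("%u camera(s) available\n", nsupported);
        if (nsupported == 0) {
            return 1;
        }

        /* Open the first camera and set the viewfinder resolution and rate. */
        camera_open(cameras[0], CAMERA_MODE_RW, &handle);
        camera_set_vf_property(handle,
                               CAMERA_IMGPROP_WIDTH, 1280,
                               CAMERA_IMGPROP_HEIGHT, 720,
                               CAMERA_IMGPROP_FRAMERATE, (double)30.0);

        /* ... start the viewfinder, record, encode, etc. ... */

        camera_close(handle);
        return 0;
    }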

Working with image buffers

The library allows you to access the image stream (or video stream), which we refer to as image buffers in the context of the Camera library. With the image buffers, you can build applications that:
  • process the image buffers using third-party algorithms to get information about objects in the image buffers.
  • access and process camera frames. You can do this using two mechanisms: callbacks, or an event framework provided with the Camera library. For more information about using callbacks or events, see Using callback mode and Using event mode.

    In addition to accessing the camera frames, you can also access the image buffers using streams, which allows you to process the image buffers using the GPU (if available on your target). This is done using streams from the Screen Graphics Subsystem. For more information, see Using stream mode.

  • display a live preview (unprocessed image buffers). In the context of the Camera library, we refer to this as a viewfinder. For more information, see Viewfinder mode.
  • encode the image buffers to H.264-formatted frames that can be sent to another component on a network bus or saved to a file (see the sketch at the end of this list). In addition to encoding image buffers to a file, the Camera library allows you to decode an H.264-formatted MP4 file.
  • record and play uncompressed video as both UCV and MOV files. MOV files are a standard format that can be played on a PC; however, they support only RGB 32-bit and YCrYCb 4:2:2 packed formats. If the camera you use doesn't support one of those formats, you must use UCV. UCV is a proprietary format that can't be played on your PC, but it supports recording in any video format supported by the Camera library.
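As an illustration of the recording path, here's a minimal sketch that records a short encoded clip to a file. It assumes the camera_start_video() and camera_stop_video() calls from camera_api.h (with NULL for the optional callbacks), an already-open handle, and a hypothetical output path; error handling is omitted for readability:

    #include <unistd.h>
    #include <camera/camera_api.h>

    /* Record roughly ten seconds of encoded video to a file. */
    static void record_clip(camera_handle_t handle)
    {
        /* Typically, the viewfinder must be running before recording starts. */
        camera_start_viewfinder(handle, NULL, NULL, NULL);

        camera_start_video(handle, "/tmp/clip.mp4", NULL, NULL, NULL);
        sleep(10);
        camera_stop_video(handle);

        camera_stop_viewfinder(handle);
    }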