Playing video content

To prepare for playing a video track, you must follow the same steps as when setting up audio playback, but you must use the Screen API instead of mm-renderer to configure the output parameters; doing so gives you control over how and when content is displayed.

The code excerpts below show how to give mm-renderer an output URL of screen: to render video content on the full display, and an input URL that names a local video file or a network-accessible stream for playback. These scenarios demonstrate the simple case in which you want to use the full display because no other UI apps are running. At other times, you may want to restrict the content to a particular area, which requires using the Screen API and is demonstrated in Managing video windows.

You can add subtitles to the video; this is explained in Playing video with subtitles.

Configuring mm-renderer to play a video file

After connecting to mm-renderer and creating a context, we must tell the service to direct the output to Screen:
/* Code to connect to mm-renderer and create a context goes here 
   (see "Playing audio content" for an example of doing this) */
const mmr_error_info_t* errorInfo;

// Attach a video output with a URL of "screen:" to use the full display
int outputID = mmr_output_attach( ctxt_videorec, "screen:", "video" );

if (outputID == -1) {
    errorInfo = mmr_error_info(ctxt_videorec);
    /* Remaining error-handling code goes here */
    return EXIT_FAILURE;
}

Nearly all videos have an audio component. The next step is to attach the audio output to the context:
// Attach an audio output with a URL of "snd:" to use the preferred device
int audio_device_output_id = mmr_output_attach( ctxt_videorec,
                                                "snd:",
                                                "audio" );

if (audio_device_output_id == -1) {
    errorInfo = mmr_error_info(ctxt_videorec);
    /* Remaining error-handling code goes here */
    return EXIT_FAILURE;
}

Using mmr_output_parameters(), you can configure any of the audio output parameters described in Parameters that affect how the output is delivered.
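
For example, the following sketch sets the volume on the audio output we just attached, using a dictionary built with the strm_dict_*() functions (declared in <sys/strm.h>). The "volume" parameter and its 0 to 100 range are assumptions here; check the parameter descriptions for your release:
// Build a dictionary of output parameters; strm_dict_set() returns a new handle
strm_dict_t *audioParams = strm_dict_new();
audioParams = strm_dict_set( audioParams, "volume", "70" );

if (audioParams == NULL) {
    /* Error-handling code goes here */
    return EXIT_FAILURE;
}

// Apply the parameters to the audio output; mm-renderer consumes the dictionary
if ( mmr_output_parameters( ctxt_videorec, audio_device_output_id,
                            audioParams ) < 0 ) {
    errorInfo = mmr_error_info(ctxt_videorec);
    /* Remaining error-handling code goes here */
    return EXIT_FAILURE;
}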

Defining the input and starting playback

We now attach the input by providing a URL that names the video file we want to play:
// Attach a local file as the input; use "track" as the type because
// we're playing a single video track
int inputID = mmr_input_attach( ctxt_videorec,
                                "/fs/usb0/dontcry_official_video.mpg",
                                "track" );

if (inputID == -1) {
    /* Error-handling code goes here */
    return EXIT_FAILURE;
}

// Start playback
if ( mmr_play( ctxt_videorec ) < 0 ) {
    /* Error-handling code goes here */
    return EXIT_FAILURE;
}

For the input URL, we provide a hard-coded path to a local MPEG video file. In your program, you'll likely need to use a variable instead; for example, if the track to be played depends on the user's selection. Also, when naming a file as input, you can prepend the URL with file: to make it clear that you're accessing a file. For the list of supported file extensions and associated video formats, see the product release notes.
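
For instance, instead of the hard-coded path above, you could build a file: URL at run time. This sketch assumes a selectedPath variable holds the user's choice and uses snprintf() and PATH_MAX (from <stdio.h> and <limits.h>):
// Build a "file:" URL from a path chosen at run time
char inputURL[PATH_MAX + sizeof("file:")];
snprintf( inputURL, sizeof(inputURL), "file:%s", selectedPath );

inputID = mmr_input_attach( ctxt_videorec, inputURL, "track" );

if (inputID == -1) {
    /* Error-handling code goes here */
    return EXIT_FAILURE;
}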

To play an HTTP-accessible file or stream, we would use the same API call as above but provide a URL beginning with http: or https: followed by the full path of the source. HTTP Live Streaming (HLS) is supported just like any other HTTP stream. The mm-renderer service supports HLS version 3, with media segments encoded as follows:
  • MP2 Transport stream with H.265, H.264, or MPEG2 video, and MP3, AAC, or AC-3 audio (when the appropriate codecs are available on the platform)
  • Video only and audio only when embedded in the MPEG2-TS stream

HLS playlists, which can be M3U files, must be attached as track inputs, not playlist inputs. The reason is that such playlists are basically references to small chunks of a long audio or video file and therefore must be played as one long track.
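
Attaching an HLS playlist therefore looks the same as attaching any other track; only the URL changes (the address below is just a placeholder):
// Attach an HLS playlist as the input; note the "track" type, not "playlist"
inputID = mmr_input_attach( ctxt_videorec,
                            "https://example.com/live/master.m3u8",
                            "track" );

if (inputID == -1) {
    /* Error-handling code goes here */
    return EXIT_FAILURE;
}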

Note: For secure playback of HTTP streams, mm-renderer lets you set cookies, SSL, and authentication properties by defining parameters that map to libcurl options.
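
For example, a sketch like the following passes authentication and SSL settings to the attached input through mmr_input_parameters(). The OPT_* parameter names are assumptions based on the libcurl option mapping (CURLOPT_USERPWD and CURLOPT_SSL_VERIFYPEER); check your release's parameter reference for the exact names:
// Build a dictionary of input parameters for the HTTP or HTTPS source
// (the OPT_* names below are illustrative; they map to libcurl options)
strm_dict_t *httpParams = strm_dict_new();
httpParams = strm_dict_set( httpParams, "OPT_USERPWD", "username:password" );
httpParams = strm_dict_set( httpParams, "OPT_SSL_VERIFYPEER", "1" );

if (httpParams == NULL) {
    /* Error-handling code goes here */
    return EXIT_FAILURE;
}

// Apply the parameters to the attached input; mm-renderer consumes the dictionary
if ( mmr_input_parameters( ctxt_videorec, httpParams ) < 0 ) {
    errorInfo = mmr_error_info(ctxt_videorec);
    /* Remaining error-handling code goes here */
    return EXIT_FAILURE;
}
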
The mm-renderer service also supports RTP streams. We can play an RTP stream by specifying an rtp: URL that gives the port on which the stream is delivered. The port number is prefixed with an IP address or the AT sign (which means to use any IP address) and a colon:
rtp://@:49152
RTP streaming is useful if you want to configure a camera to stream to a QNX Neutrino host, but it requires knowing the stream's properties in advance. To automate the setup of streams and receive video over RTP “on demand”, we can instead provide an rtsp: URL that names an RTSP server as the media source. With the RTSP protocol, the stream properties are negotiated between the server and the remote client during setup. In the URL, the source is indicated by either a host name or an IP address, followed by the path. To authenticate with the server, we can provide a username and password, separated by a colon and followed by the AT sign, in front of the source:
rtsp://username:password@10.222.97.225/axis-media/media_demo_a1.amp
We can also use RTPTS, which has a URL syntax identical to RTP:
rtpts://@:10000
The difference is that RTPTS carries the stream's properties with it, which makes it easier to set up than RTP or RTSP. This URL syntax was developed for mm-renderer to indicate an RTP stream carrying Transport Stream (TS) packets. The TS container multiplexes audio and video streams and hence allows a single RTP stream to carry both, instead of only raw audio or raw video.
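
Whichever streaming protocol you choose, attaching the stream works just like attaching a file; only the URL changes. For example, to attach the RTP stream shown above (the port number is illustrative):
// Attach an RTP stream arriving on port 49152 (any local IP address)
inputID = mmr_input_attach( ctxt_videorec, "rtp://@:49152", "track" );

if (inputID == -1) {
    /* Error-handling code goes here */
    return EXIT_FAILURE;
}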

Controlling playback

When playing a video file, we can manage playback in the same way as when playing an audio file.
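
For instance, a sketch using the standard playback-control calls might pause, resume, seek, and stop as follows. The speed passed to mmr_speed_set() is in units of 1/1000 of normal speed, and for track inputs the position passed to mmr_seek() is a string giving the offset in milliseconds:
// Pause by setting the speed to 0, then resume at normal speed (1000)
if ( mmr_speed_set( ctxt_videorec, 0 ) < 0 ) {
    /* Error-handling code goes here */
}

if ( mmr_speed_set( ctxt_videorec, 1000 ) < 0 ) {
    /* Error-handling code goes here */
}

// Seek to the 30-second mark
if ( mmr_seek( ctxt_videorec, "30000" ) < 0 ) {
    /* Error-handling code goes here */
}

// Stop playback; the input stays attached, so we can call mmr_play() again later
if ( mmr_stop( ctxt_videorec ) < 0 ) {
    /* Error-handling code goes here */
}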