Media applications link in the mm-renderer client library, libmmrndclient.so, so they can talk to the mm-renderer service. The service uses a layered architecture to process client commands, manage playback contexts and event queues, and load plugins to handle media flows from input to outputs.

The mm-renderer architecture consists of three abstraction layers:
- Messaging layer

  This layer receives messages from the client library, unpacks the arguments, and calls the corresponding functions in the core library. Essentially, this layer ensures client messages are delivered to the right contexts (playback zones). It also reads the event queues and function call results written by the core library and returns the appropriate data to clients.
- Core library

  This layer is the central component of mm-renderer. In addition to implementing the API so it can process client requests sent through the messaging layer, the core library handles the initial setup and manages all components involved in media playback and event reporting.
  For the setup, the core library parses the configuration file to learn which engine and routing plugins to load and which values to assign to some of their parameters. These plugins are implemented in DLL files and register their roles with the core library when they're loaded. Then, whenever the client issues a request that must be carried out by a plugin, the library selects the appropriate plugin for the job.
  Through the API, the client application defines contexts to manage independent media flows. For each context, the core library:

  - keeps track of the outputs attached to it
  - maps each output to the plugins that can support that output type
  - selects the appropriate plugins and attaches them to the context
  - preserves the context state between detaching and reattaching inputs
  - delivers client requests (e.g., play) to plugins
  The core library also keeps its own copy of any parameters defined by the client application and exposes them to the plugins. Some plugins may adjust input and track parameters if improper values were given.
  To support event reporting, this layer translates state changes reported by plugins into events that can be read by the messaging layer. The core library queues the events and keeps track of how much of the queue has been viewed through a given context handle; this allows client applications to use different handles to read event data at different rates.
- Plugins

  The plugins handle the flow of media content between inputs and outputs.
  Engine plugins are used to play input files. There are two engine plugins: the track engine plugin, which supports the track input type, and the playlist engine plugin, which supports the autolist and playlist input types. Each engine plugin in turn uses routing plugins to manage the actual media flow; the core library helps an engine plugin select the appropriate routing plugin based on the attached input and outputs.
  There are two routing plugins: the audio recorder, which supports the file output type, and the audio/video player, which supports the audio and video output types.
  Output plugins are used to send media content to output files or devices. These plugins aren't listed in the configuration file and therefore aren't loaded and configured in the same way as other plugin types. Each output plugin is defined in its own library, which gets linked in by any routing plugin that knows about it.
  The selection of output plugins to use in playback or recording isn't done until the client application attaches the input to a context (which occurs after the outputs are attached). At this point, the core library chooses the engine plugin, which in turn chooses the routing plugin. The routing plugin then talks to the output plugins and, for each attached output, picks the best one for writing the media data to the corresponding file or device.