RealSystem's Audio Services provides device-independent, cross-platform audio services to a rendering plug-in. This plug-in can use Audio Services to render audio streams without concern for the specifics of the audio hardware. Audio Services provides several useful features for rendering audio in RealSystem:
Audio Services converts streams in supported input formats to the same sampling rate and then mixes the streams to produce a single output.
Additional Information: See "Supported Input Formats".
Through Audio Services, a plug-in has access to decoded streams of pre-mixed audio data and to the final, mixed audio data. Plug-ins can intercept this data to add special effects or perform other processing.
Additional Information: See "Using Post-Processed Audio Data".
With the volume interface, a plug-in can control the volume of individual streams, of the final mixed stream, and of the audio hardware.
Additional Information: See "Controlling Volume".
Audio Services also lets the rendering plug-in start an audio stream at any specified time in the current presentation or play an "instant sound" at the current time.
Additional Information: See "Implementing Midstream Playback and Instant Sounds".
Audio Services provides playback synchronization to all rendering plug-ins. The synchronization values are based on the actual playback time in the audio hardware.
Additional Information: See "Synchronizing to Audio".
As shown in the following figure, a rendering plug-in uses the Player object to register with Audio Services and request audio data. The plug-in then writes each audio stream to a separate stream object.
As illustrated in the next figure, Audio Services supports multiple audio streams from one or more rendering plug-ins, creating a separate stream object for each audio stream. Because Audio Services handles the output to the audio hardware, rendering plug-ins do not need to compete for access to the audio device.
A rendering plug-in implements the following interfaces, depending on which Audio Services features it needs to use.
IRMAAudioHook. Header file: rmaausvc.h. A rendering plug-in implements this interface to access pre- or post-mixed audio data, as well as to get post-processed audio buffers and their associated audio formats.
IRMAAudioStreamInfoResponse. Header file: rmaausvc.h. The rendering plug-in implements this interface to receive notification of the total number of streams associated with an audio player.
IRMADryNotification. Header file: rmaausvc.h. A rendering plug-in implements this interface to receive notice of a dry audio stream.
IRMAVolumeAdviseSink. Header file: rmaausvc.h. A rendering plug-in implements this interface to receive notifications of changes in volume level or mute state. The plug-in registers for these notifications through the IRMAVolume interface.
A rendering plug-in uses the following interface to access Audio Services functions:
IRMAAudioPlayer. Header file: rmaausvc.h. This interface provides access to Audio Services. A rendering plug-in uses it to create audio streams, "hook" post-mixed audio data, and control volume levels. Its response interface, IRMAAudioPlayerResponse, is used solely by the RealSystem client to receive playback notifications.
IRMAAudioStream. Header file: rmaausvc.h. The renderer uses this interface to access a stream object to play audio, "hook" audio stream data, and get audio stream information.
Audio Services accepts 8-bit and 16-bit Pulse Code Modulation (PCM) data. The rendering plug-in must convert audio data in other formats, such as MIDI or µ-law, to PCM before sending it to Audio Services. However, the plug-in can also write MIDI data directly to the MIDI hardware, bypassing Audio Services entirely.
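Although the SDK leaves this conversion to the renderer, the decoding itself is standard. The helper below is a minimal sketch (not part of the RealSystem SDK) of the classic G.711 µ-law expansion to 16-bit PCM; the function name is illustrative.

// Standard G.711 mu-law decode: expands one 8-bit mu-law byte into a
// signed 16-bit PCM sample. Illustrative helper, not an SDK function.
short MuLawToPcm16(unsigned char uval)
{
    uval = ~uval;                       // mu-law bytes are stored complemented
    int exponent = (uval & 0x70) >> 4;  // 3-bit segment number
    int mantissa = uval & 0x0F;         // 4-bit quantization step
    int sample   = (((mantissa << 3) + 0x84) << exponent) - 0x84;
    return (short) ((uval & 0x80) ? -sample : sample);
}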
The audio data can be mono or stereo at any of the supported sampling rates.
When multiple rendering plug-ins send audio data to Audio Services, Audio Services mixes the inputs:
Suppose that one plug-in renders 11025 Hz stereo and another plug-in renders 22050 Hz mono at the same time. Playback performance is set to Best Audio Quality. In this case, Audio Services upsamples the 11025 Hz input to 22050 Hz and converts the mono input to stereo before mixing the two signals. It then sends a 22050 Hz stereo signal to the audio device.
When multiple streams are mixed, the quality of the mixed audio output depends on the type and quality of the input streams. Some types of input mix better than others, and higher quality input always gives higher quality output.
To achieve the best possible quality, encode all mixed input streams at 44100 Hz. If the audio is digitized from an analog source, use the same audio hardware to digitize each input. Avoid mixing audio encoded at multiples of 8000 Hz with audio encoded at multiples of 11025 Hz.
Follow the steps in this section to use Audio Services to render audio data with your rendering plug-in. These steps are based on the sample rendering plug-in exaudio.cpp. "Modifying the Audio Rendering Sample Code" provides more information about this sample file.
Additional Information: Be sure to review "Chapter 6: Rendering Plug-In".
1. In the IRMARenderer::StartStream method of the rendering plug-in, get the interface to the AudioPlayer object:

QueryInterface(IID_IRMAAudioPlayer, (void**) &m_pAudioPlayer);
2. Create an AudioStream object with IRMARenderer::OnHeader:

m_pAudioPlayer->CreateAudioStream(&m_pAudioStream);

If the rendering plug-in needs to render more than one audio stream, create a stream object for each stream.
3. Initialize the AudioStream object, typically in IRMARenderer::OnHeader. The required data are the stream's audio format attributes, such as the number of channels, bits per sample, and sampling rate. The pValues parameter contains information to identify this stream uniquely:

m_pAudioStream->Init(&AudioFmt, pValues);
4. In the IRMARenderer::OnPacket method, perform renderer-specific decoding or processing of each audio packet sent by RealPlayer. Set the buffer size and the start time of the packet in milliseconds. Set the uAudioStreamType member of the RMAAudioData structure to one of these values:

- TIMED_AUDIO for the first packet of the stream or, if packets were lost, the first received packet that follows the lost packets.
- STREAMING_AUDIO for all packets that follow the TIMED_AUDIO packet.
- INSTANTANEOUS_AUDIO to play the buffer immediately.
Additional Information: See "Implementing Midstream Playback and Instant Sounds".
Note that within a single stream you cannot write packets that have start times earlier than packets already written. If you need to process these packets first, either buffer them for later writing within the current stream or create a separate audio stream for them.
5. Write the audio data to the AudioStream object:

m_pAudioStream->Write(&audioData);

After calling IRMAAudioStream::Write for timed audio, the plug-in increments the ulAudioTime member of the RMAAudioData structure by the length of the buffer just written to get the time of the next buffer.
6. In IRMARenderer::EndStream, which RealPlayer calls when the stream finishes, release the audio player and stream objects:

if (m_pAudioPlayer)
{
    m_pAudioPlayer->Release();
    m_pAudioPlayer = NULL;
}
if (m_pAudioStream)
{
    m_pAudioStream->Release();
    m_pAudioStream = NULL;
}
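The following condensed sketch ties the preceding steps together. It is modeled loosely on exaudio.cpp; the plug-in member names (m_pContext, m_ulNextWriteTime, m_bFirstPacket), the format values, and the omission of error checking are illustrative assumptions.

// Step 1, in IRMARenderer::StartStream: get the AudioPlayer interface.
m_pContext->QueryInterface(IID_IRMAAudioPlayer, (void**) &m_pAudioPlayer);

// Steps 2-3, in IRMARenderer::OnHeader: create and initialize the stream.
m_pAudioPlayer->CreateAudioStream(&m_pAudioStream);

RMAAudioFormat audioFmt;
audioFmt.uChannels       = 2;       // stereo
audioFmt.uBitsPerSample  = 16;      // 16-bit PCM
audioFmt.ulSamplesPerSec = 22050;   // sampling rate in Hz
audioFmt.uMaxBlockSize   = 4096;    // largest buffer the renderer writes
m_pAudioStream->Init(&audioFmt, pValues);

// Steps 4-5, in IRMARenderer::OnPacket: describe and write each buffer.
RMAAudioData audioData;
audioData.pData            = pDecodedPcm;        // IRMABuffer of PCM samples
audioData.ulAudioTime      = m_ulNextWriteTime;  // start time in milliseconds
audioData.uAudioStreamType = m_bFirstPacket ? TIMED_AUDIO : STREAMING_AUDIO;
m_pAudioStream->Write(&audioData);

// Advance the write time by the duration of the buffer just written.
m_ulNextWriteTime += ulBufferDurationMs;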
Audio Services provides the IRMAVolume interface to query, set, and mute volume, as well as to register a plug-in for notifications through the IRMAVolumeAdviseSink interface. IRMAVolumeAdviseSink then lets your plug-in receive notices of changes to volume and mute settings. Audio Services enables plug-ins to control the volume of individual streams, of the final mixed stream, and of the physical audio device:
Each input stream has an IRMAVolume interface that maintains the volume and mute settings. The stream volume is controlled by multiplying each audio sample by a volume value. The maximum volume setting of 100 means 100% of the input signal; values less than 100 reduce the volume proportionally. Call IRMAAudioStream::GetAudioVolume to return a pointer to the IRMAVolume interface.
The final mixed stream for the player has its own IRMAVolume interface. The maximum volume setting of 100 means 100% of the input signal; values less than 100 reduce the volume proportionally. Call IRMAAudioPlayer::GetAudioVolume to return a pointer to the IRMAVolume interface.
An IRMAVolume interface also controls the audio device volume, which can range from 0 to 100. A volume setting of 0 means no sound; a setting of 100 means the maximum volume for the audio hardware. Call IRMAAudioPlayer::GetDeviceVolume to return a pointer to the IRMAVolume interface.
The following figure illustrates the relationships between the plug-in, RealPlayer, the audio device, and the various volume objects:
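As a brief illustration of these interfaces, the sketch below scales one stream and mutes the final mix. It assumes IRMAVolume exposes SetVolume and SetMute methods (check rmaausvc.h for the exact signatures); the values chosen are illustrative.

// Scale an individual stream to half volume.
IRMAVolume* pStreamVolume = m_pAudioStream->GetAudioVolume();
if (pStreamVolume)
{
    pStreamVolume->SetVolume(50);   // 50% of the input signal
    pStreamVolume->Release();
}

// Mute the final mixed stream without touching the device volume.
IRMAVolume* pPlayerVolume = m_pAudioPlayer->GetAudioVolume();
if (pPlayerVolume)
{
    pPlayerVolume->SetMute(TRUE);
    pPlayerVolume->Release();
}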
Audio Services lets a rendering plug-in begin playback of an audio stream at any specified time in the current presentation's timeline. It also provides a special case of midstream playback that starts a new stream at the current time. These "instant sounds" are typically linked to events such as keyboard or mouse input. The following figure shows a stream (Stream 1) being played. At time T, another audio stream starts. At the current time, an instant sound plays.
The following steps and figure explain how to implement midstream playback and instant sounds:
1. Create the AudioPlayer and AudioStream objects and initialize the stream as described in "Rendering Audio".
2. Set the ulAudioTime member of the RMAAudioData structure to the packet time at which to begin playing the new stream. This attribute determines the number of milliseconds into the presentation timeline at which the new stream starts. To start the new stream 15 seconds into the current stream, for example, set ulAudioTime to 15000. To find out what ulAudioTime should be for playback to begin with the next audio block written, call IRMAAudioStream::Write with pData set to NULL; ulAudioTime is then set to the next audio timestamp.
To play an instant sound instead, set the uAudioStreamType member of the RMAAudioData structure to INSTANTANEOUS_AUDIO. Because audio time is not significant for instantaneous playback, set ulAudioTime to 0. Audio Services plays the instant sound immediately, mixing it with any stream currently playing, whether a timed track or another instantaneous sound.
3. Write the first audio buffer with IRMAAudioStream::Write.
4. Write each subsequent buffer with IRMAAudioStream::Write, incrementing ulAudioTime by the length in milliseconds of the last buffer sent.
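As a sketch of the instant-sound case, the fragment below writes a buffer for immediate playback; pClickPcm is an illustrative IRMABuffer of decoded PCM data.

RMAAudioData audioData;
audioData.pData            = pClickPcm;           // decoded PCM samples
audioData.ulAudioTime      = 0;                   // ignored for instant sounds
audioData.uAudioStreamType = INSTANTANEOUS_AUDIO; // mix in immediately
m_pAudioStream->Write(&audioData);

To start a timed midstream write at the next audio block instead, first query the upcoming timestamp:

audioData.pData = NULL;
m_pAudioStream->Write(&audioData);  // fills in audioData.ulAudioTime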
An application that needs access to data sent to the audio device, such as an application that adds sound effects to a stream, can receive pre-mix audio data (individual decoded streams) or post-mix audio data (final mixed stream). The plug-in receives data as headerless buffers that it can modify and return to Audio Services, or even pass to another plug-in. Various Audio Services methods let the plug-in obtain the stream's sampling rate, number of channels, and audio format attributes. As explained in "Supported Input Formats", the audio output format depends on the inputs to Audio Services.
Note: Your plug-in must be able to receive and modify post-processed audio data synchronously in real time. Ensure that your plug-in platform is capable of performing such real-time processing.
As illustrated in the following figure, a plug-in can examine and modify pre-mix audio data, which is the decoded data from a single stream, before Audio Services mixes it with other streams.
The following steps explain how to get pre-mix audio data. An example plug-in that uses the pre-mix interface is premixrd.cpp. See "Modifying the Audio Rendering Sample Code" for more information on using this code.
1. Derive a class from the IRMAAudioStreamInfoResponse class.
2. Derive a class from the IRMAAudioHook class.
3. In the IRMARenderer::StartStream method:

- Get a pointer to IRMAAudioPlayer using IUnknown::QueryInterface.
- Create an instance of your IRMAAudioHook class.
- Create an instance of your IRMAAudioStreamInfoResponse class.
4. Register the IRMAAudioStreamInfoResponse interface with the AudioPlayer:

m_pAudioPlayer->SetStreamInfoResponse(m_pResp);
5. When the AudioPlayer object passes the stream (or streams) to your renderer with IRMAAudioStreamInfoResponse::OnStream, test the appropriate IRMAValues name/value pair to determine whether this is the desired stream. For example, the following sample code locates the stream with "MimeType" equal to audio/x-pn-wav:
{
    IRMAValues* pValues = 0;
    IRMABuffer* pMimeType = 0;
    pValues = pAudioStream->GetStreamInfo();
    pValues->GetPropertyCString("MimeType", pMimeType);
    char* pMime = (char*) pMimeType->GetBuffer();
    char* pStreamName = (char*) m_pHookStreamName->GetBuffer();

    /* In this example, let's hook all wav streams. */
    if (pMime && pStreamName && (!strcmp(pMime, pStreamName)))
    {
        /* Add pre-mix hook on this stream. */
        pAudioStream->AddPreMixHook(m_pHook, FALSE);
    }
    return PNR_OK;
}
The call to pAudioStream->AddPreMixHook(m_pHook, FALSE) adds the pre-mix hook. The m_pHook parameter is the pointer to the IRMAAudioHook interface. The bDisableWrite parameter is set to FALSE to send the stream to the audio mixer; set bDisableWrite to TRUE to keep the stream out of audio mixing. Remove a hook with pAudioStream->RemovePreMixHook(m_pHook).
6. When IRMAAudioStream::AddPreMixHook is called, the AudioStream object calls IRMAAudioHook::OnInit and passes the audio format of the audio stream. Within this method, initialize the plug-in as needed.
7. The AudioStream object calls IRMAAudioHook::OnBuffer and passes the audio data from the stream. Copy the contents of the buffer and process as needed. Do the following if you need to modify the audio data:

- Create your own IRMABuffer interface to store the modified audio data.
- Return the modified data through IRMAAudioHook::OnBuffer.
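As a sketch of the hook's initialization step, assuming the IRMAAudioHook::OnInit behavior described above; the class and member names are illustrative (see premixrd.cpp for the full sample).

STDMETHODIMP CExampleHook::OnInit(RMAAudioFormat* pFormat)
{
    // Record the stream's format (channels, bits per sample, sampling
    // rate) so that OnBuffer can interpret the raw samples correctly.
    m_audioFmt = *pFormat;
    return PNR_OK;
}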
As shown in the figure below, a plug-in can modify the post-mix audio data, which is the final audio stream after all audio streams are mixed.
Complete the following steps to get post-mix audio data. An example plug-in that uses the post-mix interface is included in pstmixrd.cpp. See "Modifying the Audio Rendering Sample Code" for more information on using this code.
1. Derive a class from the IRMAAudioHook class.
2. In the IRMARenderer::StartStream method:

- Get a pointer to the IRMAAudioPlayer interface through IUnknown::QueryInterface.
- Create an instance of your IRMAAudioHook class.
3. In IRMARenderer::OnHeader, add the post-mix hook:

// Add post process hook
BOOL bDisableWrite = FALSE; // write data to the audio device
BOOL bFinal = FALSE;
m_pAudioPlayer->AddPostMixHook(m_pHook, bDisableWrite, bFinal);
Specifying bDisableWrite as TRUE prevents Audio Services from sending audio data to the audio device; the plug-in must then write the data to the audio device itself. Even when the plug-in writes the data itself, Audio Services provides all renderers with time synchronization based on a real-time clock. Remove a hook with m_pAudioPlayer->RemovePostMixHook(m_pHook).
4. The AudioPlayer object calls IRMAAudioHook::OnInit with the audio format of the hooked data.
5. The AudioSession object calls IRMAAudioHook::OnBuffer with the post-mixed audio data.
Your implementation of IRMAAudioHook::OnBuffer may change the data, but it must create its own IRMABuffer to do so (use IRMACommonClassFactory) and return the modified data in the pAudioOutData parameter.
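The sketch below shows an OnBuffer that follows this rule. It assumes m_pClassFactory is an IRMACommonClassFactory pointer obtained earlier through QueryInterface; the volume reduction is illustrative processing, and error checking is omitted (see pstmixrd.cpp for the full sample).

STDMETHODIMP CExamplePostHook::OnBuffer(RMAAudioData* pAudioInData,
                                        RMAAudioData* pAudioOutData)
{
    // Create a new buffer for the modified data rather than writing
    // into the buffer that Audio Services passed in.
    IRMABuffer* pOutBuffer = NULL;
    m_pClassFactory->CreateInstance(CLSID_IRMABuffer, (void**) &pOutBuffer);
    pOutBuffer->SetSize(pAudioInData->pData->GetSize());

    // Copy the mixed 16-bit PCM samples, reducing the volume by half.
    short* pIn  = (short*) pAudioInData->pData->GetBuffer();
    short* pOut = (short*) pOutBuffer->GetBuffer();
    UINT32 ulSamples = pAudioInData->pData->GetSize() / sizeof(short);
    for (UINT32 i = 0; i < ulSamples; i++)
    {
        pOut[i] = pIn[i] / 2;
    }

    // Return the new buffer and preserve the timestamp.
    pAudioOutData->pData       = pOutBuffer;
    pAudioOutData->ulAudioTime = pAudioInData->ulAudioTime;
    return PNR_OK;
}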
RealSystem provides IRMAStream::ReportRebufferStatus as a standard means for a plug-in to notify RealPlayer that the available data has dropped to a critically low level and rebuffering should occur. If your renderer does not send buffered data because, for example, the rendered data stems from interactive input, you can implement IRMADryNotification to receive notification of a stream running dry, which occurs when the player must write data to the audio device but does not have enough data to write.
Additional Information: See "Using the Stream Object".
Set up a notification response object with IRMAAudioStream::AddDryNotification. The player core then uses IRMADryNotification::OnDryNotification to notify your renderer of a stream running dry. This method passes two parameters:

- ulCurrentStreamTime, the time in the stream timeline when the next packet is expected.
- ulMinimumDurationRequired, the minimum time length of data that must be written to prevent silence from occurring.
The renderer must take action synchronously, within the function call. It is acceptable for the renderer not to respond; silence simply occurs until the renderer delivers the next packets.
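A minimal sketch of the response object's handler, assuming the two parameters described above; WriteSilence is a hypothetical helper that writes the requested amount of silent PCM.

STDMETHODIMP CExampleRenderer::OnDryNotification(UINT32 ulCurrentStreamTime,
                                                 UINT32 ulMinimumDurationRequired)
{
    // Act synchronously: write at least ulMinimumDurationRequired
    // milliseconds of audio starting at ulCurrentStreamTime, or do
    // nothing and accept silence until the next packets arrive.
    WriteSilence(ulCurrentStreamTime, ulMinimumDurationRequired); // hypothetical helper
    return PNR_OK;
}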
RealSystem synchronizes playback to a presentation's audio track. If there is no audio track, it synchronizes playback based on the system time.
During start-up, rendering plug-ins request periodic time synchronization callbacks. The audio hardware generates the synchronization signals based on the actual playback of the audio track. The AudioDevice object passes these signals back to the client, which then issues callbacks to the rendering plug-in through IRMARenderer::OnTimeSync.
The rendering plug-in's IRMARenderer::GetRendererInfo method specifies the granularity of the time synchronization that the plug-in needs. The player issues callbacks as closely as possible to the requested interval; the minimum granularity is 20 milliseconds.
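A minimal sketch of a time-sync handler; RenderFrameForTime is a hypothetical helper, and the exact OnTimeSync signature should be checked against the IRMARenderer declaration in the SDK headers.

STDMETHODIMP CExampleRenderer::OnTimeSync(UINT32 ulTime)
{
    // ulTime is the current presentation time in milliseconds, derived
    // from actual audio playback (or the system clock if no audio).
    RenderFrameForTime(ulTime);   // hypothetical helper
    return PNR_OK;
}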
The RealSystem SDK includes sample Audio Services plug-ins that you can use as a starting point for creating your own plug-in. The /samples directory contains code for several basic Audio Services plug-ins. The /advanced subdirectory under /samples contains plug-in samples that use more advanced Audio Services features:
/samples/intermed/pcmrendr/pcmrendr.cpp
This is an intermediate sample file that shows how to render PCM audio data in streaming mode.
This advanced sample shows how to get notifications from Audio Services when the audio stream is running dry. It is useful only if your datatype writes minimal audio data in advance; otherwise, refer to the intermediate pcmrendr.cpp.
This intermediate sample rendering plug-in (exaudio.cpp) uses midstream playback to render data sent by the sample file format plug-in, exffplin. Before you compile the sample code, change the value of the pURL global variable in the source file to the fully qualified URL of frog.pcm, located in the /samples/intermed/exaudio/testdata directory. The plug-in will not function correctly without this change.
This advanced sample file performs the same functions as the intermediate exaudio.cpp sample, but also shows how to create a new player object to start a new timeline and use it for instantaneous sound. Before you compile the sample code, change the value of the pURL global variable in the source file to the fully qualified URL of frog.pcm, located in the /samples/intermed/exaudio/testdata directory. The plug-in will not function correctly without this change.
This sample (premixrd.cpp) uses the pre-mix audio interface to intercept a stream before it is mixed. It demonstrates how to change the audio data by reducing the volume of the stream. This plug-in renders data sent by the post-process sample file format plug-in, /samples/intermed/pmffplin/pmixplin.cpp.
This sample (pstmixrd.cpp) uses the post-mix audio interface to intercept the final stream after all the inputs are mixed. It also demonstrates how to change the audio data by reducing the stream volume. This plug-in renders data sent by the post-process sample file format plug-in, /samples/intermed/pmffplin/pmixplin.cpp.
Do the following to modify the code in any of the sample files:
Additional Information: See "Compiling a Plug-In".