Audio Multiplexing

Once you have multiple sources being published as described above, the next step is to receive the sources within the viewer for playback.

By default, the Millicast servers always negotiate a stream with an audio and a video track corresponding to the main source (i.e. the one without a source id), in order to keep backward compatibility with streams that do not use multisource.

You can specify the main source stream by using the pinnedSourceId attribute in the view command.

In order to dynamically receive the other sources within a stream, Millicast has implemented a new feature called audio multiplexing. This feature allows the viewer to set up a number of audio tracks when starting the stream view and receive the active sources on those tracks.

The Millicast Viewer node tracks the voice activity of each incoming audio source on behalf of the viewer, decides which sources are the most active, and forwards their audio data on the tracks that the viewer has already opened. Each time the source being multiplexed on a track changes, the server sends a “vad” event to the viewer, so the application knows exactly which source is being multiplexed on each track at any given time.
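As a sketch of how an application might consume these events, the helper below keeps a map from a track's transceiver mid to the source currently multiplexed on it. The event shape used here ({ name, data: { mediaId, sourceId } }) is an assumption based on the Millicast client SDK's broadcastEvent callback; check the SDK reference for the exact fields.

```javascript
// Maintain a map from transceiver mid -> sourceId currently multiplexed
// on that track, updated from incoming "vad" broadcast events.
// NOTE: the event shape is an assumption (see lead-in above).
function updateMultiplexedSources(midToSource, event) {
  if (event.name !== 'vad') return midToSource;
  const { mediaId, sourceId } = event.data;
  if (sourceId) {
    // A source became active on this track.
    midToSource.set(mediaId, sourceId);
  } else {
    // The track went silent; no source is multiplexed on it right now.
    midToSource.delete(mediaId);
  }
  return midToSource;
}

// Wiring it up (viewer is the View instance created in the example below):
// const midToSource = new Map();
// viewer.on('broadcastEvent', (event) => updateMultiplexedSources(midToSource, event));
```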

As many applications both publish an audio source and view a stream, it is possible to send a list of excluded source ids within the view command; those sources will not be voice-activity tracked or multiplexed back to the viewer.

It is worth noting that by multiplexing rather than mixing audio, Millicast is able to fully support End-to-End Encryption (E2EE) with interactive streaming, since the audio never needs to be processed or re-encoded by the server.

In order to improve the performance of the feature and to avoid incurring higher bandwidth costs, it is recommended to enable dtx (discontinuous transmission) on the publishing side, so audio data is only sent when a user’s voice is detected.
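On the publish side, this could look like the sketch below. The option names (sourceId, dtx) mirror the viewer example on this page and the Millicast client SDK's publish options, but treat them as assumptions and verify against the SDK reference.

```javascript
// Sketch: build publish options that enable DTX so audio is only sent
// while speech is detected. Option names are assumptions (see lead-in).
function makePublishOptions(sourceId) {
  return {
    sourceId,   // identifies this source within the multisource stream
    dtx: true,  // discontinuous transmission: no audio packets during silence
  };
}

// Usage (publisher is a Publish instance from the Millicast SDK):
// await publisher.connect(makePublishOptions('alice'));
```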

//Create viewer (View is exported by the @millicast/sdk package)
import { View } from '@millicast/sdk';

const viewer = new View(streamId, () => { return viewerToken });

//Start streaming
await viewer.connect({
    pinnedSourceId,                // main source to always receive
    multiplexedAudioTracks: 5,     // number of audio tracks to open for multiplexing
    excludedSourceIds: [sourceId], // do not multiplex our own audio back to us
    disableVideo,
    dtx: true,                     // discontinuous transmission
});

In the example above, setting pinnedSourceId ensures that the Millicast Viewer server always sends you the audio (and video) of the specified source, while setting your own sourceId in the excludedSourceIds parameter ensures that the server does not multiplex your own audio back to you.

The example also configures five audio tracks for multiplexing (via the multiplexedAudioTracks parameter). By default, libwebrtc mixes only the three loudest of the audio tracks it receives, so five tracks is a good trade-off for performance.

Once the connection is established, you will receive one track event for each remote track in the main stream and one for each multiplexed audio track. You can differentiate between them either by the number of audio tracks in the stream (all multiplexed audio tracks are associated with the same stream) or by the mid order of the associated transceiver (the first mids of each kind belong to the main stream).
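The mid-based approach can be sketched as follows. The set of main-stream mids passed in here is an assumption (it depends on how your session negotiates transceivers), and attachToAudioElement is a hypothetical helper, not an SDK function.

```javascript
// Classify an incoming track event as belonging to the main (pinned)
// stream or to one of the multiplexed audio tracks, based on the
// transceiver mid. mainMids is assumed to hold the first mids of the
// session, which belong to the main stream as described above.
function classifyTrack(event, mainMids) {
  const mid = event.transceiver.mid;
  return mainMids.includes(mid) ? 'main' : 'multiplexed';
}

// Wiring (viewer is the View instance from the example above):
// viewer.on('track', (event) => {
//   const kind = classifyTrack(event, ['0', '1']); // '0', '1': assumed main mids
//   attachToAudioElement(event.streams[0], kind);  // hypothetical helper
// });
```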
