Multisource streams

When you publish a live feed to Real-time Streaming (via WebRTC or RTMP), any previously published live feed is seamlessly overwritten, so viewers always watch the most recently published stream.

This behavior makes it possible to reconnect, or to replace a malfunctioning encoder, before the previously published live feed is disconnected by a timeout.

In other words, Real-time Streaming previously allowed only a single audio and video feed to be active for any given stream.
With the new multisource feature, Real-time Streaming can bundle independent publication feeds (each identified by a distinct source id) under the same stream, making multiple audio and video tracks from different sources available to viewers.

The multisource feature is supported for WebRTC, RTMP, and WHIP sources.

import { Publish } from '@millicast/sdk';

// Create the publisher
const publisher = new Publish(streamId, () => publisherToken);

// Start publishing with a source id and DTX enabled
await publisher.connect({
    mediaStream: mediaStream,
    sourceId: sourceId,
    dtx: true,
});

Each feed is published normally: just add a sourceId attribute when sending the publish command to Real-time Streaming, or add a sourceId URL parameter to the RTMP or WHIP publishing URL. When no sourceId is present, the feed is treated as the default one, ensuring backward compatibility.
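As a sketch of the URL-parameter approach, sourceId can be appended as a query parameter to the publishing URL. The base WHIP endpoint below is a placeholder, not a real endpoint; substitute the publishing URL for your own account:

```javascript
// Hypothetical WHIP publishing endpoint -- substitute your account's URL.
const whipBase = 'https://director.example.com/api/whip/myStream';

// Append the sourceId query parameter to identify this feed.
const url = new URL(whipBase);
url.searchParams.set('sourceId', 'camera-2');

console.log(url.toString());
// → https://director.example.com/api/whip/myStream?sourceId=camera-2
```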

To improve the performance of the feature and avoid incurring higher bandwidth costs, it is recommended to enable DTX (discontinuous transmission) on the publishing side, so that audio data is sent only when a user's voice is detected.

The reconnection feature is still supported within the same source: if you publish a feed with the same source id as an existing one, the latest media source is the one sent to viewers.
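Because republishing with the same source id seamlessly replaces the existing feed, a reconnection loop can simply retry the publish call. The helper below is a generic sketch, not part of the SDK: `publishFn` stands in for the `publisher.connect(...)` call shown above, and the retry count and delay are illustrative defaults.

```javascript
// Retry a publish attempt a few times before giving up.
// publishFn: an async function performing the publish (e.g. wrapping
// publisher.connect with the same sourceId, so the new feed replaces the old).
async function publishWithRetry(publishFn, attempts = 3, delayMs = 1000) {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await publishFn();
    } catch (err) {
      if (attempt === attempts) throw err; // out of retries
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

A wrapper like this keeps the source id stable across attempts, which is what lets viewers transition to the new feed without interruption.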


To use multisource streams, your account must be enabled for the multisource feature, and the Publish Token must have the multisource flag enabled.