Multisource Streams

Multisource is a feature that allows a single stream (WebRTC or RTMP) to support broadcasting multiple video and audio feeds. There are three main use cases where broadcasting multiple video and audio feeds to a stream is useful:

  1. Overwriting streams: When you publish a live feed to Real-time Streaming, any previously published live feed is overwritten seamlessly, so viewers always watch the most recently published stream. This allows reconnecting, or replacing a malfunctioning encoder, before the previously published live feed is disconnected due to a timeout.
  2. Multi-view streams: Multi-view is a feature that lets viewers receive and render multiple streams simultaneously in a browser or in native mobile applications. Once rendered, these streams can be switched between, giving the viewer control over how they view and listen to the content.
  3. Audio Multiplexing: Audio Multiplexing is a feature that allows viewers to receive multiple overlapping audio streams in a conference-like experience, where each audio stream is emphasized or deemphasized based on activity.

In other words, Real-time Streaming is able to bundle different independent publication feeds (each identified by a different source ID) under the same stream, which makes multiple audio and video tracks from different sources available to viewers.
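On the viewing side, a subscriber can choose which sources to pin, multiplex, or exclude when connecting. As a hedged sketch, the helper below assembles viewer connect options; the helper itself is hypothetical, and the option names (pinnedSourceId, multiplexedAudioTracks, excludedSourceIds) follow the Millicast SDK documentation but should be verified against your SDK version:

```javascript
// Hypothetical helper for the viewer side: builds the options object for a
// subscriber that wants several audio sources multiplexed together.
function buildViewerOptions({ pinnedSourceId, audioTracks = 3, excluded = [] }) {
  return {
    pinnedSourceId,                      // source whose audio is always received
    multiplexedAudioTracks: audioTracks, // max simultaneous audio tracks
    excludedSourceIds: excluded,         // sources this viewer should not hear
  };
}

// Usage sketch: await viewer.connect(buildViewerOptions({ pinnedSourceId: 'host' }));
```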

How to Set a Streaming Source


Enable Multisource

To use Multisource, the Publish token must have Multisource enabled in the Dashboard token settings.

When multisource is enabled, you use the sourceId attribute when publishing a stream.

  • Without a sourceId the stream is treated as the default or main stream to ensure backward compatibility.
  • When multiple publishers use the same sourceId value, the broadcast includes only the most recently published stream, allowing a broadcaster to swap a stream seamlessly.
  • When multiple unique sourceId attributes are published, all of those feeds become available to viewers, and the stream plays the main source by default.
  • This functionality is the same regardless of whether the stream is video only, audio only, or video and audio.
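The replacement rules above can be sketched with a small model (illustrative only; the Real-time Streaming service applies these rules server-side):

```javascript
// Illustrative model of how the service resolves sourceId values.
// A missing sourceId maps to the default ("main") feed; publishing again
// with the same sourceId replaces the previous feed.
const MAIN = Symbol('main');

function createBroadcast() {
  return new Map(); // sourceId -> most recently published feed
}

function publishFeed(broadcast, feed, sourceId) {
  // No sourceId: treated as the default/main stream (backward compatible).
  broadcast.set(sourceId ?? MAIN, feed);
}

const b = createBroadcast();
publishFeed(b, 'encoder-A');         // main feed
publishFeed(b, 'encoder-B');         // same (default) id: seamlessly replaces main
publishFeed(b, 'cam-1', 'camera1');  // unique id: a second source, side by side
console.log(b.size);                 // 2
console.log(b.get(MAIN));            // 'encoder-B'
```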

The sourceId is set when connecting the publisher, as shown below:

// Create publisher
const publisher = new Publish(streamId, () => { return publisherToken });

// Start publishing
await publisher.connect({
	mediaStream: mediaStream,
	sourceId: sourceId, // sourceId identifies this feed within the stream
	dtx: true,
});

To improve the performance of the feature and avoid incurring higher bandwidth costs, it is recommended to enable dtx (discontinuous transmission) when publishing, so that audio data is sent only when a user's voice is detected.
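As a small sketch, the publish options can be assembled by a helper; the helper itself is hypothetical, while the sourceId and dtx option names are the ones used in the snippet above:

```javascript
// Hypothetical helper: builds the options object passed to publisher.connect().
// dtx defaults to true so audio is only sent when voice activity is detected.
function buildPublishOptions(mediaStream, sourceId, { dtx = true } = {}) {
  const options = { mediaStream, dtx };
  if (sourceId !== undefined) {
    options.sourceId = sourceId; // omit to publish as the default/main feed
  }
  return options;
}

// Usage sketch: await publisher.connect(buildPublishOptions(stream, 'camera1'));
```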


Enable Multisource on your Stream Token

To use multisource streams, your account must be allowed to use the multisource feature and the Publish Token must have the multisource flag enabled. The default cluster region must not be set to auto; set it to the region from which you want to stream. Multisource will not work if you publish the stream from two different locations that do not fall under the same cluster/region coverage while the default setting remains unmodified.

Learn More

Learn more by exploring the developer blog and code samples.