Getting started with native SDKs

Overview

The Millicast SDK is an API that lets you publish a video feed to Dolby.io Real-time Streaming or subscribe to one.
The SDK provides a C++ API for desktop (Ubuntu, Mac, Windows), a Java API for Android, and an Objective-C API that can be wrapped in Swift for iOS/iPadOS/tvOS.
In this guide, we will look at basic use of the API: how to start a capture and publish a stream to Dolby.io Real-time Streaming, and how to subscribe and render audio and video.

The current version is 1.3.1.

Using the API

Initialization

Android

For Android, you will need to call a method to initialize the SDK with your application context.

import com.millicast.Client;

public class MainActivity extends AppCompatActivity implements NavigationView.OnNavigationItemSelectedListener {
    public static final String TAG = "MainActivity";
    private static Context context;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Initialize on App start.
        if (savedInstanceState == null) {
          context = getApplicationContext();
          
          // Load native shared library and init the SDK with the application context
          Client.initMillicastSdk(context);
        }
    }
}

Capturing from a source

First, you can list all the input sources found on your device (camera, NDI, DeckLink, etc.).
There are separate methods for audio sources and video sources.

Audio

To get the audio sources, use:

auto sources = millicast::Media::get_audio_sources();
Media media = Media.getInstance(context); // get instance with your application context
ArrayList<AudioSource> audioSources = media.getAudioSources();
let audioSources = MCMedia.getAudioSources();

You will get an array of audio sources. You can then choose one and start a capture:

auto src = sources[0]; // Get the first available source
auto audioTrack = src->start_capture();
if(audioTrack == nullptr)
{
    // Error could not start capture 
}
AudioSource audioSource = audioSources.get(0); // get the first audio source
AudioTrack audioTrack;
try {
 audioTrack = (AudioTrack) audioSource.startCapture(); // start the capture and create the track
} catch(RuntimeException e) {
    // Issue when starting the capture 
}
// unwrap audiosources array
var audioTrack : MCAudioTrack?
if let asrcs = audioSources {
  let audioSource = asrcs[0]; // get the first audio source
  // unwrap audioSource
  if let asrc = audioSource {
    // Start the audio recording and create the audio track
    audioTrack = asrc.startCapture() as! MCAudioTrack 
  }
}

Starting the capture returns an audio track, which you need to keep in order to add to the publisher later.

Video

Similarly to audio, to get the video sources, use:

auto sources = millicast::Media::get_video_sources();
Media media = Media.getInstance(context); // get instance with your application context
ArrayList<VideoSource> videoSources = media.getVideoSources();
let videoSources = MCMedia.getVideoSources();

Once you have the array, you can query any source for its capabilities.

auto capabilities = sources[0]->capabilities(); // Getting the capabilities of first source
VideoSource videoSource = videoSources.get(0); // get the first source
ArrayList<VideoCapabilities> capabilities = videoSource.getCapabilities(); // get the capabilities

// you can also get the camera parameters used in the android api
// https://developer.android.com/reference/android/hardware/Camera.Parameters
Camera.Parameters parameters = videoSource.getParameters();
// unwrap videosources array
var videoSource : MCVideoSource?
var capabilities : MCVideoCapabilities?
if let vsrcs = videoSources {
  videoSource = vsrcs[0]; // get the first video source
  capabilities = videoSource?.getCapabilities()
  if(capabilities == nil) {
        print("[getCapabilities] No capability is available!")
  }
}

A capability describes a width, height, and frame rate (fps) that the device can capture.
You can set a capability like this:

sources[0]->set_capability(capabilities[0]); // Setting the first capability object.
videoSource.setCapability(capabilities.get(0)); // Setting the first capability of the list
let capability = capabilities[0]; // get first capability
videoSource.setCapability(capability);

If you don't set a capability object, the first one that is found is used by default.

Finally, just start the capture, get the returned track, and keep it for adding to the publisher later.

auto src = sources[0]; // Get the first available source
auto videoTrack = src->start_capture();
if(videoTrack == nullptr)
{
    // Error could not start capture 
}
VideoTrack videoTrack;
try {
  // start the capture and create the track
  videoTrack = (VideoTrack) videoSource.startCapture(); 
} catch(RuntimeException e) {
    // Issue when starting the capture 
}
// unwrap videoSource
var videoTrack : MCVideoTrack?
if let vsrc = videoSource {
  // Start the video recording and create the video track
  videoTrack = vsrc.startCapture() as! MCVideoTrack
}

Dynamic changes

With the Android SDK, it is possible to dynamically switch cameras (between the front and rear camera) or change the capture format.
To switch cameras, you need to provide a VideoSource.SwitchCameraHandler:

class SwitchHdl implements VideoSource.SwitchCameraHandler {
    @Override
    public void onCameraSwitchDone(boolean b) {}
    @Override
    public void onCameraSwitchError(String s) {}
}
videoSource.switchCamera(new SwitchHdl()); // Switch camera
// Change dynamically the video format
videoSource.changeFormat(width, height, fps);

Logger

You can set your own logger function so you can print Dolby.io Real-time Streaming logs according to their severity (info, error, warning, etc.).
By default, logs are printed to the standard output, first the severity and then the message.

millicast::Logger::set_logger([](const std::string& msg, millicast::LogLevel lvl) {
  // Print your message here
  std::cout << msg << std::endl;
});
Logger.setLoggerListener((String msg, LogLevel level) -> {
  String logTag = "[SDK][Log][L:" + level + "] ";
  logD(TAG, logTag + msg);
});

Publishing

We will now cover how to configure your Dolby.io Real-time Streaming credentials and start publishing to the Millicast Platform.
First, you need to create a Publisher object.

std::unique_ptr<millicast::Publisher> publisher = millicast::Publisher::create();
PubListener listener = new PubListener(); // see the class in the next section
Publisher publisher = Publisher.createPublisher(listener);
let publisher = MCPublisher.create();

Listener

You need to set a listener object on the publisher so you can receive different events: for instance, when your authentication is successful, when you start publishing, or when a broadcast event arrives from Dolby.io Real-time Streaming.
You have to create a class that inherits the publisher's listener interface:

class PubListener : public millicast::Publisher::Listener
{
 public:
  PubListener() = default;
  virtual ~PubListener() = default;

  void on_connection_error(int code, const std::string& message) override {}
  void on_connected() override { publisher->publish(); }
  void on_stats_report(const millicast::StatsReport &) override {}

  void on_signaling_error(const std::string& reason) override {}

  void on_publishing() override {}
  void on_publishing_error(const std::string& reason) override {}
  
  void on_active() override {}
  void on_inactive() override {}

  void on_viewer_count(int count) override {}
};
public class PubListener implements Publisher.Listener {
    public PubListener() {}

    @Override
    public void onPublishing() {}

    @Override
    public void onPublishingError(String s) {}

    @Override
    public void onConnected() {}

    @Override
    public void onConnectionError(String reason) {}

    @Override
    public void onSignalingError(String s) {}

    @Override
    public void onStatsReport(RTCStatsReport statsReport) {}

    @Override
    public void onViewerCount(int count) {}

    @Override
    public void onActive() {}

    @Override
    public void onInactive() {}
}
class PubListener : MCPublisherListener {
    func onPublishing() {}
    func onPublishingError(_ error: String!) {}
    func onConnected() {}
    func onConnectionError(_ status: Int32, withReason reason: String!) {}
    func onSignalingError(_ error: String!) {}
    func onStatsReport(_ report: MCStatsReport!) {}
    func onViewerCount(_ count: Int32) {}
    func onActive() {}
    func onInactive() {}
}

Then, create an instance of your listener and set it on the publisher:

auto listener = std::make_unique<PubListener>();
publisher->set_listener(listener.get());
let listener = PubListener() // see the class declaration above
publisher!.setListener(listener)

Set up the credentials

Once you have the publisher instance, you can configure the Dolby.io Real-time Streaming credentials corresponding to your stream.
You first need to create the stream in your Dolby.io developer dashboard, or using the Dolby.io Streaming REST API.

Then, to configure your credentials, get the credentials structure from your publisher instance, fill in the different fields, and set the modified credentials back.

auto credentials = publisher->get_credentials(); // Get the current credentials
credentials.stream_name = "streamName"; // The name of the stream we want to publish
credentials.token = "aefea56153765316754fe"; // The publishing token
credentials.api_url = "https://director.millicast.com/api/director/publish"; // The publish API URL
publisher->set_credentials(std::move(credentials)); // Set the new credentials
Publisher.Credential creds = publisher.getCredentials();
creds.streamName = "streamName"; // The name of the stream we want to publish
creds.token = "aefea56153765316754fe"; // The publishing token
creds.apiUrl = "https://director.millicast.com/api/director/publish"; // The publish API URL

publisher.setCredentials(creds);
let creds = MCPublisherCredentials()
creds.apiUrl = "streamName"; // The name of the stream we want to publish
creds.streamName = "aefea56153765316754fe"; // The publishing token
creds.token = "https://director.millicast.com/api/director/publish"; // The publish API URL

publisher!.setCredentials(creds);

Configuring the Publisher

You have several options to configure your publishing session. For instance, you can choose the codecs, enable the multisource feature, and more, as covered in this section.

millicast::Publisher::Option options;
Publisher.Option publisherOption = new Publisher.Option();
let publisherOptions = MCClientOptions()

Codecs

You can get a list of the supported codec names:

auto audio_codecs = millicast::Client::get_supported_audio_codecs();
auto video_codecs = millicast::Client::get_supported_video_codecs();
Media media = Media.getInstance(context); // Get media instance with application context
ArrayList<String> videoCodecs = media.getSupportedVideoCodecs();
ArrayList<String> audioCodecs = media.getSupportedAudioCodecs();
let videoCodecs = MCMedia.getSupportedVideoCodecs()
let audioCodecs = MCMedia.getSupportedAudioCodecs()

Usually, the available video codecs are VP8, VP9, H264, and AV1. Some have support for hardware acceleration depending on the platform; H264 is hardware accelerated on some Android devices and on Apple devices.
Then, choose a codec from the list and set it:

options.codecs.video = video_codecs.front(); // Setting the first video codec of the list
options.codecs.audio = audio_codecs.front(); // Setting the first audio codec of the list
publisherOption.videoCodec = Optional.of(videoCodecs.get(0)); // Set the first video codec
publisherOption.audioCodec = Optional.of(audioCodecs.get(0)); // Set the first audio codec
publisherOptions.videoCodec = videoCodecs![0] // get first video codec
publisherOptions.audioCodec = audioCodecs![0] // get first audio codec

By default, VP8 is used as the video codec and Opus as the audio codec.

Simulcast

You can enable simulcast to send your stream at three different resolutions at once.

options.simulcast = true; // Enable simulcast

Basically, simulcast sends three video streams: one at the resolution you are capturing (h/high), one at 1/2 of the resolution (m/medium), and one at 1/4 of the resolution (l/low). The media server then automatically chooses which layer to send to the viewer according to the bandwidth estimation.
Simulcast is off by default, and only available for VP8 and H264.

SVC

If you are using the VP9 or AV1 video codec, you can choose to use SVC. SVC mode encodes several spatial and temporal layers within the same stream. The media server then forwards one of the layers to each viewer according to the bandwidth estimation.
To enable SVC, just set the layer id you want to use in the options; this automatically enables SVC.

options.svc_mode = millicast::ScalabilityMode::L3T3; // Setting L3T3 SVC mode

For VP9, only the following modes are allowed: L2T1, L2T2, L2T3, L3T1, L3T2, L3T3.

Multisource

If you are using the multisource feature of Dolby.io Real-time Streaming, you can set the source id of the publisher like this:

options.multisource.source_id = "YourId";
publisherOption.sourceId = "sourceId";
publisherOptions.sourceId = "MySource"

You can publish several sources from the same app; you just need to create several publisher instances, as sketched below.
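
For example, a minimal C++ sketch of publishing two sources, assuming you have already captured one track per source (the source ids here are illustrative):

auto screen_publisher = millicast::Publisher::create();
auto camera_publisher = millicast::Publisher::create();

millicast::Publisher::Option screen_options, camera_options;
screen_options.multisource.source_id = "screen"; // illustrative source ids
camera_options.multisource.source_id = "camera";

screen_publisher->set_options(screen_options);
camera_publisher->set_options(camera_options);
// Each publisher then gets its own credentials, tracks, connect() and publish() calls.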

It is recommended to enable discontinuous transmission (DTX) when multiplexing audio, in order to detect voice activity and only send data when there is some:

options.dtx = true;
publisherOption.dtx = true;
publisherOptions.dtx = true

Stereo

You can enable sending stereo audio on the wire, so it will not be downmixed to mono:

options.stereo = true;
publisherOption.stereo = true;
publisherOptions.stereo = true

Set the options

Once you have configured the options as you wish, just set them on the publisher:

publisher->set_options(options);
publisher.setOptions(publisherOption);
publisher!.setOptions(publisherOptions)

Publish

Now, we can start publishing.
First, you need to add the audio and video tracks you created earlier when you started the capture.

publisher->add_track(video_track);
publisher->add_track(audio_track);
publisher.addTrack(videoTrack);
publisher.addTrack(audioTrack);
publisher!.addTrack(videoTrack)
publisher!.addTrack(audioTrack)

Then, you need to authenticate to Dolby.io Real-time Streaming through the Director API; if successful, this opens the WebSocket connection with a Dolby.io Real-time Streaming server.

publisher->connect();
publisher.connect();
publisher!.connect()

If the connection is successful, the listener's on_connected method will be called.
Finally, you can start publishing:

publisher->publish();
publisher.publish();
publisher!.publish()

Once the publisher starts sending media, the listener's on_publishing method will be called.

Subscribing

Now that we have seen how to use the publisher, we will cover how to use the viewer and subscribe to a Dolby.io Real-time Streaming feed. First, create a viewer instance:

std::unique_ptr<millicast::Viewer> viewer = millicast::Viewer::create();
SubListener listener = new SubListener(); // See next section for the declaration of the class
Subscriber subscriber = Subscriber.createSubscriber(listener);
let subscriber = MCSubscriber.create()

Listener

Create a viewer listener class by inheriting the viewer listener's interface:

class ViewerListener : public millicast::Viewer::Listener
{
 public:
  // Your own code
  
  ViewerListener() = default;
  virtual ~ViewerListener() = default;
  
  // Overrides from millicast::Viewer::Listener
  void on_connected()  override { viewer->subscribe(); }
  void on_connection_error(int code, const std::string& message) override {}
  void on_stats_report(const millicast::StatsReport &) override {};

  void on_signaling_error(const std::string& reason) override {};

  void on_subscribed() override {};
  void on_subscribed_error(const std::string& error) override {};

  void on_track(std::weak_ptr<millicast::AudioTrack> track,
                const std::optional<std::string>& mid) override {};
  void on_track(std::weak_ptr<millicast::VideoTrack> track,
                const std::optional<std::string>& mid) override {};
  
  void on_vad(const std::string& mid, const std::optional<std::string>& source_id) override {};
  void on_stopped() override {};
  void on_active(const std::string& stream_id,
                 const std::vector<millicast::TrackInfo>& tracks,
                 const std::optional<std::string>& source_id) override {};
  void on_inactive(const std::string& stream_id, const std::optional<std::string>& source_id) override {};
  void on_layers(const std::string& mid,
         const std::vector<millicast::Viewer::LayerData>& active_layers,
         const std::vector<millicast::Viewer::LayerData>& inactive_layers) override {};

  void on_viewer_count(int count) override {};
};
public class SubListener implements Subscriber.Listener {
    public SubListener() {}

    @Override
    public void onSubscribed() {}
    @Override
    public void onSubscribedError(String s) {}
    @Override
    public void onConnected() {}
    @Override
    public void onConnectionError(String reason) {}
    @Override
    public void onStopped() {}
    @Override
    public void onSignalingError(String s) {}
    @Override
    public void onStatsReport(RTCStatsReport statsReport) {}
    @Override
    public void onTrack(VideoTrack videoTrack, Optional<String> mid) {}
    @Override
    public void onTrack(AudioTrack audioTrack, Optional<String> mid) {}
    @Override
    public void onActive(String streamId, String[] tracks, Optional<String> sourceId) {}
    @Override
    public void onInactive(String streamId, Optional<String> sourceId) {}
    @Override
    public void onLayers(String mid, LayerData[] activeLayers, LayerData[] inactiveLayers) {}
    @Override
    public void onVad(String mid, Optional<String> sourceId) {}
    @Override
    public void onViewerCount(int count) {}
}
class SubListener: MCSubscriberListener {
    func onSubscribed() {}
    func onSubscribedError(_ error: String) {}    
    func onConnected() {}    
    func onConnectionError(_ status: Int32, withReason reason: String!) {}    
    func onStopped() {}    
    func onSignalingError(_ error: String) {}    
    func onStatsReport(_ report: MCStatsReport!) {}    
    func onVideoTrack(_ track: MCVideoTrack!, withMid: String) {}   
    func onAudioTrack(_ track: MCAudioTrack!, withMid: String) {}  
    func onActive(_ _: String!, tracks: [String]!, sourceId: String!) {}
    func onInactive(_ streamId: String!, sourceId: String!) {}
    func onLayers(_ mid: String!, activeLayers: [MCLayerData]!, inactiveLayers: [MCLayerData]!) {}
    func onVad(_ mid: String!, sourceId: String!) {}
    func onViewerCount(_ count: Int32) {}
}

Then, create an instance and set it on the viewer:

auto listener = std::make_unique<ViewerListener>();
viewer->set_listener(listener.get());
let listener = SubListener()
subscriber!.setListener(listener)

Set up the credentials

auto credentials = viewer->get_credentials(); // Get the current credentials
credentials.stream_name = "streamName"; // The name of the stream we want to subscribe to
credentials.account_id = "ACCOUNT"; // ID of your Dolby.io Real-time Streaming account
credentials.token = "aefea56153765316754fe"; // Optionally set the subscribing token
credentials.api_url = "https://director.millicast.com/api/director/subscribe"; // The subscribe API URL
viewer->set_credentials(std::move(credentials)); // Set the new credentials
Subscriber.Credential creds = this.subscriber.getCredentials();
creds.streamName = "streamName"; // The name of the stream we want to subscribe to
creds.accountId = "ACCOUNT"; // ID of your Dolby.io Real-time Streaming account
creds.apiUrl = "https://director.millicast.com/api/director/subscribe"; // The subscribe API URL

subscriber.setCredentials(creds);
let creds = MCSubscriberCredentials()
creds.streamName = "streamName"; // The name of the stream we want to subscribe to
creds.accountId = "ACCOUNT"; // ID of your Dolby.io Real-time Streaming account
creds.apiUrl = "https://director.millicast.com/api/director/subscribe"; // The subscribe API URL

subscriber!.setCredentials(creds);

The stream name and the account id are values you can get from your stream configuration in your Dolby.io developer dashboard.
The subscribing token is not the same as the publishing token and is not available through the Dolby.io developer dashboard. You should only set it if you have enabled the secure viewer, and you can get this token through the Dolby.io Streaming REST API.

Configure the Viewer

Like the publisher, the viewer has options so you can configure your connection with Dolby.io Real-time Streaming.

millicast::Viewer::Option options;
Subscriber.Option optionSubscriber = new Subscriber.Option();
let subscriberOptions = MCClientOptions()

Multisource

You can configure the viewer to receive a multisource stream: set how many audio tracks you will receive at most, pin a main source id, or exclude some source ids (to avoid hearing yourself, for example).

options.multisource.pinned_source_id = "main";
options.multisource.multiplexed_audio_track = 3; // Will create three audio tracks
options.multisource.excluded_source_id = { "toexclude" };
optionSubscriber.pinnedSourceId = Optional.of("mainSource");
optionSubscriber.multiplexedAudioTrack = 3;
optionSubscriber.excludedSourceId = new String[]{ "excluded" };
subscriberOptions.pinnedSourceId = "mainSource";
subscriberOptions.multiplexedAudioTrack = 3;
subscriberOptions.excludedSourceId = [ "excluded" ]

Set the options

viewer->set_options(options); // Set the option
subscriber.setOptions(optionSubscriber);
subscriber!.setOptions(subscriberOptions);

Subscribe

Now that the viewer is configured, we can start subscribing.
First, you must authenticate and create the WebSocket connection with a Dolby.io Real-time Streaming server:

viewer->connect();
subscriber.connect();
subscriber!.connect();

In case it fails, the listener's on_connection_error method will be called with the HTTP error code and failure message. If the code is 0, double-check your internet connection or the API URL you have set in the credentials.
Otherwise, if it is successful, the on_connected method will be called. When that happens, you can call subscribe:

viewer->subscribe();
subscriber.subscribe();
subscriber!.subscribe();

If it is successful, on_subscribed will be called; otherwise, on_subscribed_error will be called with an error message.
Once you are subscribed, you will receive events in the listener carrying the audio/video tracks that have been created.

Project

If publishers are using the multisource feature, you will need to project tracks into a specified transceiver using its mid. By default, if you don't project anything, you will receive no media.
Basically, when you start subscribing, you will receive an active event with the track ids and the source id.
In order to project a track into a transceiver, you must use the viewer's project method. You need to specify the source id you are targeting first, and then an array of the tracks you want to project. Several pieces of information are needed to identify a track:

  • track_id : Name of the track on the media server side (the track id you get in the active event)
  • media : The kind of the media track, whether video or audio
  • mid : The mid of the transceiver you want to project into
  • layer : Optionally, a layer for this track if the publisher is using simulcast/SVC

You can get the mid either in the on_track callback of the listener object, or by calling the get_mid method with the id of the track:

/* option 1 */

 struct ViewerListener : public millicast::Viewer::Listener
  {
    /* ... */
    void on_track(std::weak_ptr<millicast::VideoTrack> track,
              const std::optional<std::string>& mid) override {
        // Keep the mid value somewhere
    }
 };

/* option 2 */
// Let's say you have video track named track

auto mid = viewer->get_mid(track->id());
/* option 1 */

public class SubListener implements Subscriber.Listener {
        /* ... */
    @Override
    public void onTrack(VideoTrack videoTrack, Optional<String> mid) {
    // keep the mid value somewhere 
    }
  /* ... */
}

/* option 2 */
// let's say you have video track named track
String mid = subscriber.getMid(track.getName()).get();
/* option 1 */
class SubListener: MCSubscriberListener {  
  /* ... */
    func onVideoTrack(_ track: MCVideoTrack!, withMid: String) {
    // keep the mid value somewhere 
    }   
  /* ... */
}

/* option 2 */
// let's say you have video track named track
let mid = subscriber!.getMid(track.getId());

Typically, if you want to project a video track :

millicast::Viewer::ProjectionData data;
data.mid = mid; /* Mid of one of the video tracks negotiated in your SDP */
data.media = "video"; /* Kind of the media */
data.track_id = track_id; /* Track id you have received in the active event */

viewer->project(source_id, { data });
ArrayList<Subscriber.ProjectionData> projectionDataArray = new ArrayList<>();

Subscriber.ProjectionData projectionData = new Subscriber.ProjectionData();
projectionData.mid = mid; /* mid of one of the video tracks negotiated in your SDP */
projectionData.media = "video"; /* kind of the media */
projectionData.trackId = trackId; /* track id you have received in the active event */
projectionDataArray.add(projectionData);

subscriber.project(sourceId, projectionDataArray);
let projectionData = MCProjectionData()
projectionData.mid = mid /* mid of one of the video tracks negotiated in your SDP */
projectionData.media = "video" /* kind of the media */
projectionData.trackId = trackId /* track id you have received in the active event */

subscriber!.project(sourceId, [projectionData])

By default, only one video and one audio track are negotiated in the SDP. If several publishers are sending media in the stream, you can dynamically add more tracks using the add_remote_track method each time you receive an active event.

viewer->add_remote_track("video"); // "audio" or "video" depending on the kind of track you want to add
subscriber.addRemoteTrack("video"); // "audio" or "video" depending on the kind of track you want to add
subscriber!.addRemoteTrack("video"); // "audio" or "video" depending on the kind of track you want to add

When you call this method, it will add a new transceiver and renegotiate the SDP locally. Then, if it is successful, a new track will be created and the on_track callback will be called, so you can get the track and its corresponding mid.

Finally, it is possible to stop projecting a track by calling the unproject method.
It only requires an array of the mids you want to stop projecting.
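
As a C++ sketch, assuming unproject takes that array of mids directly:

std::vector<std::string> mids_to_stop = { mid }; // The mid you want to stop projecting
viewer->unproject(mids_to_stop); // The corresponding transceiver stops receiving media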

Select layer

When a publisher is using simulcast or SVC mode to send a video feed, the media server automatically chooses the right layer to send to the viewer according to the bandwidth estimation. However, you can force the server to send you a specific layer by calling the select method. You will need the following information:

  • The encoding id : The id of the simulcast / SVC layer
  • The temporal layer id : Which temporal layer to use (the temporal layer acts on the fps)
  • The spatial layer id : Which spatial layer to use (the spatial layer acts on the frame width/height)

For example, if the sender is using simulcast, it is possible to receive three different encoding ids: 'h' for the high resolution, 'm' for the medium (scaled down by 2), and 'l' for the low (scaled down by 4).

So, in order to choose the medium resolution with the fps divided by 2 (temporal layer id 1 in this example):

millicast::Viewer::LayerData data;
data.encoding_id = "m";
data.temporal_layer_id = 1;
data.spatial_layer_id = 0;

viewer->select(data);
LayerData layerData = new LayerData();
layerData.encoding_id = "m";
layerData.temporal_layer_id = 1;
layerData.spatial_layer_id = 0;

subscriber.select(Optional.of(layerData));
// empty optional means the server will make automatic selection
let layerData = MCLayerData()
layerData.encoding_id = "m"
layerData.temporal_layer_id = 1
layerData.spatial_layer_id = 0

subscriber.select(layerData);
// null value means the server will make automatic selection

You can retrieve all the available layers with their corresponding ids when the layers event is received (the on_layers callback).
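
For instance, a C++ sketch of keeping the active layers from the listener shown earlier, so you can pass one of them to select later (active_layers_ is an illustrative member variable, not part of the SDK):

// Inside your ViewerListener class:
void on_layers(const std::string& mid,
               const std::vector<millicast::Viewer::LayerData>& active_layers,
               const std::vector<millicast::Viewer::LayerData>& inactive_layers) override
{
  active_layers_ = active_layers; // Keep them so you can feed one to viewer->select() later
}
std::vector<millicast::Viewer::LayerData> active_layers_; // illustrative member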

Broadcast event

Broadcast events are messages sent by the media server to the client when a specific event occurs.
There are common events for both publishers and viewers, and also specific events for each one.
When an event is received by the client, it will call the corresponding callback in the listener object.
As of now, the SDK listens for all events; there is no API to disable any of them.

Common event

  • on_viewer_count : Each time a viewer enters or leaves the stream, all the clients connected to the stream are notified with the new number of viewers currently watching the stream.

Publisher events

  • on_active: called when the first viewer starts viewing the stream
  • on_inactive: called when the last viewer stops viewing the stream

Viewer events

  • on_active : Called when a new source starts publishing within the stream. It is called with the stream id, the tracks information, and the source id if one is used.
  • on_inactive: Called when a source has been unpublished within the stream. It is called with the stream id and the source id if it is used.
  • on_stopped: Called when a stream has stopped for a given reason.
  • on_vad: Called when a source id is being multiplexed into an audio track based on the voice activity level. It is called with the mid of the track and the source id.
  • on_layers: Called when simulcast/SVC layers are available. It will give you arrays of LayerData object that can be used in the select command.

Rendering

Video

The SDK provides an interface so you can implement a class that will receive the video frames; you are then free to render the frames using any graphics library.
Basically, you just need to write a class that inherits VideoRenderer, and you will get a VideoFrame whenever one is available.

class MyRenderer : public millicast::VideoRenderer
{
    public:
  
  void on_frame(const millicast::VideoFrame& frame) override;
};
<!-- For Android, add a layout to your fragment; it will be used to attach the renderer's view later in Java code -->

<LinearLayout
  android:id="@+id/linear_layout_video"
  android:layout_width="match_parent"
  android:layout_height="0dp"
  android:layout_weight="0.6"
    android:orientation="horizontal">
</LinearLayout>
// a view object you can add to a vstack in your swift UI
struct VideoView : UIViewRepresentable {
    let renderer: MCIosVideoRenderer
    
    func makeUIView(context: Context) -> UIView {
        let uiView = renderer.getView()!
        return uiView
    }
    
    func updateUIView(_ uiView: UIView, context: Context) {
    }
}

You can get the video data from the VideoFrame using the get_buffer method. You need to allocate a buffer with the correct size beforehand. You can get the size of the buffer according to its VideoType using the size method. Both get_buffer and size have a template parameter that lets you choose which VideoType you want your video data in, whether millicast::VideoType::ARGB or millicast::VideoType::I420.
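
For example, an on_frame implementation might look like the following sketch (the exact pointer type expected by get_buffer is an assumption):

void MyRenderer::on_frame(const millicast::VideoFrame& frame)
{
  // Query the required buffer size for ARGB data and allocate it
  auto size = frame.size<millicast::VideoType::ARGB>();
  std::vector<uint8_t> buffer(size);

  // Fill the buffer with the converted frame data,
  // then hand it to your graphics library for rendering
  frame.get_buffer<millicast::VideoType::ARGB>(buffer.data());
}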

There are already implementations for NDI, DeckLink, and an iOS renderer. You just need to configure them beforehand (the name of the session for NDI, the output device for DeckLink). The iOS renderer is available for iOS and tvOS and uses Metal. You can get the UIView if you want to do something specific with it.

Then, you just need to create an instance of your renderer and add it to a video track, local or remote.

MyRenderer * renderer = new MyRenderer();
track->add_renderer(renderer);
linearLayoutVideo = view.findViewById(R.id.linear_layout_video);
VideoRenderer renderer = new VideoRenderer(context); // create renderer with app context
linearLayoutVideo.addView(renderer);

// when you get your video track
videoTrack.setRenderer(renderer);
let renderer = MCIosVideoRenderer()
let videoView = VideoView(renderer) // add this view to your UI

// once you get a video track
videoTrack?.addRenderer(renderer)

It is possible to add several renderers to a track, or to remove one.
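
For instance, in C++ (a sketch assuming a remove_renderer counterpart to add_renderer):

track->remove_renderer(renderer); // The track stops forwarding frames to this renderer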

Audio

In order to play audio, you just need to select one available playback device.

auto playback_list = millicast::Media::get_playback_devices(); // Get the playback devices
auto playback = playback_list[0]; // Get the first one
playback->init_playback(); // Set this playback as the one to play audio
Media media = Media.getInstance(applicationContext);
ArrayList<AudioPlayback> audioPlaybackList = media.getAudioPlayback();
audioPlaybackList.get(0).initPlayback(); // Use the first playback device
// By default, audio will be played on the earpiece.
// If you want to play it on the speaker instead,
// add this code after you are subscribed:
let inst = AVAudioSession.sharedInstance()
do {
  try inst.setCategory(AVAudioSession.Category.playAndRecord, options: AVAudioSession.CategoryOptions.defaultToSpeaker)
} catch {
  print("Could not set speakers mode.")
}

Playback devices can be the speakers of your computer/phone, NDI outputs, or DeckLink output devices connected to your computer.

For remote tracks, you can adjust the volume:

audio_track->set_volume(1.0); // Volume between 0 and 1.
audioTrack.setVolume(1.0); // Volume should be between 0 and 1
audioTrack.setVolume(1.0); // Volume should be between 0 and 1

RTC Stats

You can collect statistics from the WebRTC peer connection periodically if you enable stats through the viewer's or publisher's enable_stats method. If you enable stats, you will get a stats report every second through the on_stats_report callback in the listener object. The identifiers and the way to browse the stats follow the WebRTC specification: https://www.w3.org/TR/webrtc-stats/.
Basically, you will get a StatsReport object, which is a collection of several Stats objects. They all have a specific type, whether inbound, outbound, codec, or media. For example, inbound covers statistics for incoming media, relevant to the viewer, while outbound covers outgoing statistics, relevant to the publisher.
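
For example, on the viewer side (a sketch; the boolean parameter to enable_stats is an assumption):

viewer->enable_stats(true); // Request a stats report every second

// Then, in your listener:
void on_stats_report(const millicast::StatsReport& report) override
{
  // Browse the report here, following the identifiers
  // defined in https://www.w3.org/TR/webrtc-stats/
}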

Clean up

In order to free memory, you should call the cleanup method at the end of your program.

millicast::Client::cleanup();
[MCCleanup cleanup];
MCCleanup.cleanup();

Documentation

You can find the complete documentation of all the methods and classes of the SDK: