The Native SDK provides C++ APIs for desktop platforms: Linux, macOS, and Windows. You can use the SDK in your project to connect, capture, publish, or subscribe to streams using the Dolby.io Streaming Platform.

Mac

The following packages are included in the Native SDK package for Mac applications:

  • millicast-native-sdk-version-Darwin.tar.gz

  • millicast-native-sdk-version-Darwin-no-av1.tar.gz

  • millicast-native-sdk-version-Darwin-m1.tar.gz

  • millicast-native-sdk-version-Darwin-m1-no-av1.tar.gz

Note: The packages that contain no-av1 in the file name do not support the AV1 codec.

Requirements

Before you start:

  • Sign up for a free Dolby.io account.
  • Make sure that you have a working video camera and microphone.
  • Make sure that you use macOS Catalina or later.

Test application

Use a simple test application to check whether the SDK installation is correct. You can build the application using the following commands in the SDK folder:

mkdir build && cd build

cmake .. -DMillicastSDK_DIR=/path_to_millicastSDK/lib/cmake

cmake --build .

Windows

The following packages are included in the Native SDK package for Windows applications:

  • millicast-native-sdk-version-Windows-no-av1.zip

  • millicast-native-sdk-version-Windows.zip

Note: The package that contains no-av1 in the file name does not support the AV1 codec.

Requirements

Before you start:

  • Sign up for a free Dolby.io account.
  • Make sure that you have a working video camera and microphone.
  • Make sure that you use Windows 10 or later.
  • Make sure that you use Visual Studio 2022.

Test application

Use a simple test application to check whether the SDK installation is correct. You can build the application using the following commands in the example folder:

mkdir build && cd build

cmake .. -DMillicastSDK_DIR=/path_to_millicastSDK/lib/cmake

cmake --build . --config Debug

Before running the application, add the SDK's bin directory to your PATH environment variable; it contains the OpenSSL and NDI DLLs that the application requires at startup. After building, open the Debug directory and run the application.
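For example, in a command prompt, using the same placeholder installation path as in the commands above:

set PATH=%PATH%;C:\path_to_millicastSDK\bin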

Linux

The following packages are included in the Native SDK package for Linux applications:

  • millicast-native-sdk-version-Ubuntu-22.04-no-av1.deb

  • millicast-native-sdk-version-Ubuntu-22.04.deb

  • millicast-native-sdk-version-Linux.deb

  • millicast-native-sdk-version-Linux-no-av1.deb

Note: The packages that contain no-av1 in the file name do not support the AV1 codec.

Requirements

Before you start:

  • Sign up for a free Dolby.io account.
  • Make sure that you have a working video camera and microphone.
  • Make sure that you use Ubuntu 20.04 or later.

Test application

Use a simple test application to check whether the SDK installation is correct.

The application requires the following dependencies: X11, Xtst, Xfixes, Xdamage, Xrandr, Xcomposite,
avahi-common, avahi-client, and libcurl. You may install them using the following command:

sudo apt install -y libx11-dev libxfixes-dev libxdamage-dev libxcomposite-dev libxtst-dev \
  libxrandr-dev libcurl4-openssl-dev libavahi-client3 libavahi-common3

Compile with clang and make sure that libc++-dev and libc++abi-dev are installed:

export CC=/usr/bin/clang

export CXX=/usr/bin/clang++

Build the application using the following commands in the SDK folder:

mkdir build && cd build

cmake .. -DMillicastSDK_DIR=/path_to/MillicastSDK/lib/cmake

make -j4

Getting started with publishing

Follow these steps to add the publishing capability to your application.

1. Capture audio and video

To capture media, get an array of available audio and video sources and choose the preferred sources from the list. After you start capturing audio and video, the SDK will return an audio and video track that you can add to the publisher later.

// Get an array of audio sources
auto audio_sources = millicast::Media::get_audio_sources();

// Choose the preferred audio source and start capturing
auto audio_src = audio_sources[0]; // Get the first available source
auto audio_track = audio_src->start_capture();
if (audio_track == nullptr)
{
  // Error: could not start the audio capture
}

// Get an array of available video sources
auto video_sources = millicast::Media::get_video_sources();

// Get the capabilities of the available video sources, such as the width, height, and frame rate
auto capabilities = video_sources[0]->capabilities(); // Get the capabilities of the first source

// Set the preferred capability; if you do not set one, the first capability in the list is used
video_sources[0]->set_capability(capabilities[0]); // Set the first capability object

// Start capturing video
auto video_src = video_sources[0]; // Get the first available source
auto video_track = video_src->start_capture();
if (video_track == nullptr)
{
  // Error: could not start the video capture
}

2. Set logger

Optionally, set your own logger function to print Dolby.io Real-time Streaming logs according to their severity. By default, the SDK prints logs to the standard output, with the severity first and the message after it.

millicast::Logger::set_logger([](const std::string& msg, millicast::LogLevel lvl) {
  // Print your message here
  std::cout << msg << std::endl;
});

3. Publish a stream

Create a publisher object and set a listener object on the publisher to receive its events. This requires creating a class that inherits the publisher's listener interface. Then, create a stream in your Dolby.io developer dashboard or through the Dolby.io Streaming REST API and set your credentials.

// Create a publisher object
std::unique_ptr<millicast::Publisher> publisher = millicast::Publisher::create();

// Set a listener object to the publisher
class PubListener : public millicast::Publisher::Listener
{
 public:
  PubListener() = default;
  virtual ~PubListener() = default;

  void on_connection_error(int code, const std::string& message) override {}
  void on_connected() override { publisher->publish(); }
  void on_stats_report(const millicast::StatsReport &) override {}

  void on_signaling_error(const std::string& reason) override {}

  void on_publishing() override {}
  void on_publishing_error(const std::string& reason) override {}
  
  void on_active() override {}
  void on_inactive() override {}

  void on_viewer_count(int count) override {}
};

// Create an instance of your listener and set it to the publisher
auto listener = std::make_unique<PubListener>();
publisher->set_listener(listener.get());

// Get the credentials structure from your publisher instance, fill it in, and set the modified credentials
auto credentials = publisher->get_credentials(); // Get the current credentials
credentials.stream_name = "streamName"; // The name of the stream you want to publish
credentials.token = "aefea56153765316754fe"; // The publishing token
credentials.api_url = "https://director.millicast.com/api/director/publish"; // The publish API URL
publisher->set_credentials(std::move(credentials)); // Set the new credentials

4. Configure your publishing session

Get a list of the available codecs and set the codecs that you want to use. By default, the SDK uses VP8 as the video codec and Opus as the audio codec.

Additionally, to publish several sources from the same application, create a publisher instance for each source; see the sketch after the options example below. We recommend enabling discontinuous transmission (DTX), which detects audio input and sends audio only when input is detected.

The SDK also offers Simulcast, which allows sending three different resolutions at once. It is disabled by default and is only available for VP8 and H264. Optionally, when using the VP9 or AV1 video codec, you can use SVC instead of Simulcast. The SVC mode sends several spatial and temporal layers encoded within the same stream. To enable SVC, set the layer ID that you want to use in the options; only the following modes are available: L2T1, L2T2, L2T3, L3T1, L3T2, and L3T3.

millicast::Publisher::Option options;

// Get a list of supported codecs
auto audio_codecs = millicast::Client::get_supported_audio_codecs();
auto video_codecs = millicast::Client::get_supported_video_codecs();

// Choose the preferred codecs
options.codecs.video = video_codecs.front(); // Setting the first video codec of the list
options.codecs.audio = audio_codecs.front(); // Setting the first audio codec of the list

// Optionally, enable Simulcast
options.simulcast = true;

// Optionally, enable SVC
options.svc_mode = millicast::ScalabilityMode::L3T3; // Set the L3T3 SVC mode

// Optionally, set a source ID for the publisher and enable discontinuous transmission to enable multi-source
options.multisource.source_id = "YourId";
options.dtx = true;

// Enable stereo
options.stereo = true;

// Set the selected options to the publisher
publisher->set_options(options);
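As noted above, publishing a second source from the same application requires its own publisher instance. The following is a minimal sketch; the screen_publisher variable and the "screen" source ID are illustrative:

// A second publisher instance for a second source in the same stream
auto screen_publisher = millicast::Publisher::create();

millicast::Publisher::Option screen_options;
screen_options.multisource.source_id = "screen"; // Each publisher needs a distinct source ID
screen_options.dtx = true;
screen_publisher->set_options(screen_options);

// Set its listener and credentials as in step 3, add its tracks, then connect and publish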

5. Add the audio and video track

Add the audio and video track that you created earlier when you started capturing media.

publisher->add_track(video_track);
publisher->add_track(audio_track);

6. Authenticate using the Director API

Authenticate to access Dolby.io Real-time Streaming through the Director API.

publisher->connect();

Successful authentication opens a WebSocket connection to the Dolby.io Real-time Streaming server and triggers the listener's on_connected method.

7. Start publishing

publisher->publish();

Once the publisher starts sending media, the SDK calls the listener's on_publishing method.

Getting started with subscribing

Follow these steps to add the subscribing capability to your application.

1. Create a subscriber object

std::unique_ptr<millicast::Viewer> viewer = millicast::Viewer::create();

2. Create a listener class

Create a listener class that inherits the viewer's listener interface.

class ViewerListener : public millicast::Viewer::Listener
{
 public:
  // Your own code

  ViewerListener() = default;
  virtual ~ViewerListener() = default;

  // Overrides from millicast::Viewer::Listener
  void on_connected() override { viewer->subscribe(); }
  void on_connection_error(int code, const std::string& message) override {}
  void on_stats_report(const millicast::StatsReport&) override {}

  void on_signaling_error(const std::string& reason) override {}

  void on_subscribed() override {}
  void on_subscribed_error(const std::string& error) override {}

  void on_track(std::weak_ptr<millicast::AudioTrack> track,
                const std::optional<std::string>& mid) override {}
  void on_track(std::weak_ptr<millicast::VideoTrack> track,
                const std::optional<std::string>& mid) override {}

  void on_vad(const std::string& mid, const std::optional<std::string>& source_id) override {}
  void on_stopped() override {}
  void on_active(const std::string& stream_id,
                 const std::vector<millicast::TrackInfo>& tracks,
                 const std::optional<std::string>& source_id) override {}
  void on_inactive(const std::string& stream_id, const std::optional<std::string>& source_id) override {}
  void on_layers(const std::string& mid,
                 const std::vector<millicast::Viewer::LayerData>& active_layers,
                 const std::vector<millicast::Viewer::LayerData>& inactive_layers) override {}

  void on_viewer_count(int count) override {}
};

3. Create an instance and set it to the viewer

auto listener = std::make_unique<ViewerListener>();
viewer->set_listener(listener.get());

4. Set up credentials

Get your stream name and stream ID from the dashboard and set them up in the SDK.

auto credentials = viewer->get_credentials(); // Get the current credentials
credentials.stream_name = "streamName"; // The name of the stream you want to subscribe to
credentials.account_id = "ACCOUNT"; // The ID of your Dolby.io Real-time Streaming account
credentials.token = "aefea56153765316754fe"; // Optionally set the subscribing token
credentials.api_url = "https://director.millicast.com/api/director/subscribe"; // The subscribe API URL
viewer->set_credentials(std::move(credentials)); // Set the new credentials

5. Configure the viewer by setting your preferred options

Configure your stream to receive multi-source content.

millicast::Viewer::Option options;

options.multisource.pinned_source_id = "main"; // The main source received by the default media stream
options.multisource.multiplexed_audio_track = 3; // Enables audio multiplexing and sets the number of audio tracks to receive as Voice Activity Detection (VAD) multiplexed audio
options.multisource.excluded_source_id = { "toexclude" }; // Audio streams that should not be included in the multiplex, for example your own audio stream

// Set the selected options
viewer->set_options(options);

6. Create a WebSocket connection

Authenticate and create a WebSocket connection to connect with the Dolby.io Real-time Streaming server.

viewer->connect();

If the connection fails, the listener's on_connection_error method is called with the HTTP error code and failure message. If the code is 0, double-check your internet connection or the API URL set in the credentials. If the connection is successful, the SDK calls the on_connected method.

7. Subscribe to the streamed content

viewer->subscribe();

When the operation is successful, the SDK calls on_subscribed and sends you an event in the listener with the created audio and video tracks. Otherwise, the SDK calls on_subscribed_error with an error message.

8. Project media

If publishers use the multi-source feature, you need to project tracks into a specified transceiver using its media ID (mid). By default, if you do not project anything, you do not receive any media. When you start subscribing, you receive the active event with the track IDs and the source ID. To project a track into a transceiver, use the viewer's project method and specify the source ID you are targeting and an array of the tracks you want to project.

By default, only one video and one audio track are negotiated in the SDP. If several publishers send media within one stream, you can dynamically add more tracks by calling the add_remote_track method each time you receive an active event. The method adds a new transceiver and renegotiates the SDP locally. When successful, the SDK creates a new track and calls the on_track callback, so you can get the track and its corresponding mid.

// Get the mid either from the on_track callback of the listener object or by calling the get_mid method with the track ID

/* Option 1: the on_track callback */
struct Listener : public millicast::Viewer::Listener
{
  /* ... */
  void on_track(std::weak_ptr<millicast::VideoTrack> track,
                const std::optional<std::string>& mid) override
  {
    // Keep the mid value somewhere
  }
};

/* Option 2: the get_mid method */
// Assuming you have a video track named track
auto mid = viewer->get_mid(track->id());

// Project a video track
millicast::Viewer::ProjectionData data;
data.mid = mid; // The media ID of the transceiver you want to project into
data.media = "video"; // The media track type, either video or audio
data.track_id = track_id; // The name of the track on the media server side, which is the track ID you get in the active event

viewer->project(source_id, { data });

viewer->add_remote_track("video"); // Pass "audio" or "video" depending on the type of track you want to add

To stop projecting the track, call the unproject method, which requires an array of the media IDs that you want to stop projecting.
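For example, to stop projecting the video track projected above, reusing the mid variable from the previous snippet:

viewer->unproject({ mid }); // Stop projecting this transceiver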

9. Select a layer that you want to receive

When a publisher uses Simulcast or SVC to send a video feed, the media server automatically chooses the right layer to forward to the viewer according to the bandwidth estimation. However, you can force the server to send you a specific layer by calling the select method.

For example, if the sender uses Simulcast, it is possible to receive three different encoding IDs: 'h' for the high resolution, 'm' for the medium one, and 'l' for the low one. To choose the medium resolution, do the following:

millicast::Viewer::LayerData data;
data.encoding_id = "m"; // The encoding ID, which is the ID of the Simulcast or SVC layer
data.temporal_layer_id = 1; // The ID of the temporal layer
data.spatial_layer_id = 0; // The ID of the spatial layer

viewer->select(data);

You can retrieve all the available layers and their corresponding IDs through the on_layers event.
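As a sketch, you could cache the layers reported through on_layers inside your viewer listener for a later select call; the available_layers member below is illustrative:

std::vector<millicast::Viewer::LayerData> available_layers; // Illustrative storage member

void on_layers(const std::string& mid,
               const std::vector<millicast::Viewer::LayerData>& active_layers,
               const std::vector<millicast::Viewer::LayerData>& inactive_layers) override
{
  available_layers = active_layers; // Keep the active layers for a later viewer->select(...)
}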

10. Manage broadcast events

When broadcast events occur, the SDK calls the corresponding callback in the listener object. The SDK listens to all events and does not allow disabling them; it offers the following event listeners:

  • Publisher event listeners:
    • on_active: called when the first viewer starts viewing a stream
    • on_inactive: called when the last viewer stops viewing a stream
    • on_viewer_count: called each time the number of viewers changes; all clients connected to the stream are notified about the current number of viewers
  • Viewer event listeners:
    • on_active: called when a new source starts publishing a stream; it contains the stream ID, the track information, and the source ID (see the sketch after this list)
    • on_inactive: called when a source is no longer published within a stream; it contains the stream ID and the source ID
    • on_stopped: called when a stream stops
    • on_vad: called when a source ID is multiplexed into an audio track based on the voice activity level; it contains mid of the track and the source ID
    • on_layers: called when Simulcast or SVC layers are available; contains arrays of the LayerData object that you can use in the select command
    • on_viewer_count: called each time a new viewer enters or leaves a stream; all clients connected to the stream are notified about the current number of viewers
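For instance, a viewer listener could record what each active event announces so that the application can project the corresponding tracks later. The following is a minimal sketch; the pending_tracks member is illustrative:

std::map<std::string, std::vector<millicast::TrackInfo>> pending_tracks; // Illustrative storage member

void on_active(const std::string& stream_id,
               const std::vector<millicast::TrackInfo>& tracks,
               const std::optional<std::string>& source_id) override
{
  // A new source started publishing: remember its tracks so that the
  // application can later build ProjectionData and call viewer->project()
  pending_tracks[source_id.value_or("")] = tracks;
}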

11. Render video

The SDK provides an interface that lets you implement a class responsible for receiving video frames.

// Write a class that inherits millicast::VideoRenderer
class MyRenderer : public millicast::VideoRenderer
{
 public:
  void on_frame(const millicast::VideoFrame& frame) override;
};

After this step, you will get the VideoFrame whenever it is available. You can render the frame using any graphic library.

To get video data from the VideoFrame, use the get_buffer method. Allocate a buffer with the correct size beforehand. You can get the size of the buffer according to its VideoType using the size method. Both get_buffer and size have a template parameter that lets you choose in which VideoType you want your video data, either millicast::VideoType::ARGB or millicast::VideoType::I420.
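A minimal sketch of an on_frame implementation, assuming the template signatures of size and get_buffer described above:

void MyRenderer::on_frame(const millicast::VideoFrame& frame)
{
  // Query the buffer size required to store the frame as I420 data
  auto size = frame.size<millicast::VideoType::I420>();

  // Allocate a buffer of that size and copy the frame data into it
  std::vector<uint8_t> buffer(size); // Requires <vector> and <cstdint>
  frame.get_buffer<millicast::VideoType::I420>(buffer.data());

  // Hand the buffer to your graphics library for rendering
}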

Then, create an instance of your renderer and add it to a local or remote video track. To play audio, select a playback device. You can also adjust the volume of remote tracks and, at the end of your program, clean up the memory.

// Create an instance of your renderer and add it to a local or remote video track
MyRenderer * renderer = new MyRenderer();
track->add_renderer(renderer);

// Select one playback device
auto playback_list = millicast::Media::get_playback_devices(); // Get the playback devices
auto playback = playback_list[0]; // Get the first device
playback->init_playback(); // Set this playback as the one to play audio

// Adjust the volume of remote tracks
audio_track->set_volume(1.0); // Volume can be between 0 and 1

// To clean the memory at the end of your program
millicast::Client::cleanup();

Collecting RTC statistics

You can periodically collect WebRTC peer connection statistics by enabling them through the enable_stats method of the viewer or publisher. After enabling the statistics, you receive a report every second through the on_stats_report callback in the listener object. The identifiers and the way to browse the stats follow the WebRTC specification.
The report is a StatsReport object, which is a collection of several Stats objects. Each one has a specific type, such as inbound, outbound, codec, or media. Inbound stats describe the incoming transport for the viewer, and outbound stats describe the outgoing transport for the publisher.
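A minimal sketch of enabling the collection on a viewer, assuming enable_stats takes a boolean; the same call is available on the publisher:

// Enable the periodic statistics collection
viewer->enable_stats(true);

// In your listener, a report then arrives every second
void on_stats_report(const millicast::StatsReport& report) override
{
  // Browse the report entries here; their types and identifiers
  // follow the WebRTC statistics specification
}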