Getting Started

🚀

SDK Beta

Injecting media, capturing and storing remote audio and video data, and joining conferences as a user are currently available only as a part of the Beta program.

This guide goes through the steps of creating a basic command-line application that joins and leaves a conference and records audio and video streams.

A ready-to-use sample application is available with the SDK package.

Prerequisites

Make sure that you have:

  • A Dolby.io account
  • CMake 3.0 or a later version
  • GCC 7.1 or a later version

The Server C++ SDK is compatible with the following operating systems:

  • Ubuntu 18.04 LTS (gcc-7)
  • Ubuntu 20.04 LTS (gcc-9)

Runtime dependencies

PulseAudio Sound Server

The Server C++ SDK requires access to the PulseAudio Sound Server to initialize successfully. Make sure that your machine either has system-wide PulseAudio running or has an instance of PulseAudio started by the same user who runs the sample application. This allows the library to access the sound server.

Verify that PulseAudio is installed:

pulseaudio --version

If PulseAudio is not installed, run the following command:

sudo apt-get install pulseaudio

Note: To use the package manager, you need superuser privileges.

When PulseAudio is installed, start it for your user:

nohup pulseaudio
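
To confirm that the sound server is reachable for your user, you can query it. This check is optional; pactl is provided by the pulseaudio-utils package on Ubuntu:

pactl info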

Asynchronous operations

The Server C++ SDK follows an asynchronous model, so the user thread is not blocked while SDK operations are still in progress. If certain operations need to happen sequentially, the application must ensure this order. Each asynchronous operation of the SDK returns an async_result object. These operations can be chained together with then calls when the application needs to provide a function object to be invoked once the asynchronous operation completes. To force synchronous behavior, the SDK offers the wait helper method, which blocks the calling thread until the asynchronous operation completes and returns its result. However, if the underlying asynchronous operation fails, the wait method throws an exception. The following code snippet shows two approaches that you can use for dealing with functions returning async_result:

// Chaining operations together
auto success_cb = []() { next_async_operation(); };
auto error_cb = []() { failure_operation(); };
some_asynchronous_operation().then(success_cb).on_error(error_cb);

// Using the wait helper
try {
    wait(some_asynchronous_operation());
    wait(next_async_operation());
}
catch(std::exception&) {
    // Handle the exception
    failure_operation();
}
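
The wait helper also forwards the value produced by the asynchronous operation, so calls that yield a result can be written as a single expression. For example, the conference creation shown later in this guide returns a conference_info object directly (create is the conference_options instance set up in that step):

// wait() blocks until the operation completes and returns its result.
dolbyio::comms::conference_info conf = wait(sdk->conference().create(create));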

Download the SDK package

  1. Go to the GitHub Releases and download the SDK package (under the Assets section of the release) to your computer.

  2. Unzip the package.

  3. Go to the sdk-release directory.

cd sdk-release/

In this directory, you can find the following directories:

Directory   Contents
include     Header files that are a part of public APIs.
lib         Shared libraries constituting the Server C++ SDK.
share       CMake files for building the target, sample application, CA certificates, and licenses.

CA certificates

The SDK uses Certificate Authority (CA) certificates to authenticate the identity of remote servers during an SSL handshake. While writing your application, be careful about the location of the cacert.pem file that stores the certificates. At runtime, the SDK library queries the location of these certificates and attempts to load the certificate file. By default, the SDK tries three locations:

  • The value of the DOLBYIO_COMMS_CA_CERT_FILE environment variable. If the variable points to the certificate file, that file is loaded.
  • The file right next to the shared library: /path/to/scpp_sdk/lib/cacert.pem
  • The file in the share directory: /path/to/scpp_sdk/share/dolbyio/comms/cacert.pem

If the certificate file is not found in any of these locations, the initialization fails. The SDK does not fall back to the system-installed CA certificate files.
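
For example, if you keep the certificate bundle in a custom location, you can point the SDK at it through the environment variable before starting your application (the path below is only a placeholder):

export DOLBYIO_COMMS_CA_CERT_FILE=/path/to/custom/cacert.pem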

📘

Note

The CA certificate in SDK 1.0.1 and SDK 1.1.0-beta.1 is outdated. If you use one of these versions, we recommend switching to SDK 1.0.2 or SDK 1.1.0-beta.2. If you do not want to change the SDK version, you can replace the certificate file with the cacert.pem file from the SDK 1.0.2 or 1.1.0-beta.2 package.

Write the application

The application can be created in any directory; it only needs correct relative paths to the CMake files from the sdk-release/share/dolbyio/comms/sample/ directory. In this example, we create the application in the app/ folder inside the sdk-release/share/dolbyio/comms/sample/ directory.

  1. Create the app/ directory.
mkdir app
touch app/main.cc app/CMakeLists.txt
  2. Open the app/CMakeLists.txt file and define the application target and the libraries it links against.
cmake_minimum_required(VERSION 3.0...3.21)
add_executable(scpp_app
main.cc
)
target_link_libraries(scpp_app
        DolbyioComms::sdk
        DolbyioComms::multimedia_streaming_addon
        media_source_file
)

This code links the application to the libdolbyio_comms_sdk.so library, the libdolbyio_comms_multimedia_streaming_addon.so library (the default Multimedia Streaming Module), and the media_source_file library, which is the sample library that provides media capture from a file.

  3. Open the sdk-release/share/dolbyio/comms/sample/CMakeLists.txt file, which is the top-level CMake file for all the samples, and add the newly created app/ directory as a subdirectory.
add_subdirectory(app)

This line should be at the bottom of the CMakeLists.txt file where the other subdirectories are added.

  4. Open the app/main.cc file and include the necessary headers.
#include <iostream>
#include <comms/multimedia_streaming/recorder.h>
#include <comms/multimedia_streaming/injector.h>
#include <comms/sdk.h>
#include <comms/sample/media_source/file/source_capture.h>

int main(int argc, char** argv) {
    try {
        
    }
    catch (std::exception& e) {
        std::cerr << "Error! " << e.what() << std::endl;
        return 1;
    }

    return 0;
}
  • The comms/sdk.h header provides access to the components of the SDK with which the application must interact.

  • The comms/multimedia_streaming/recorder.h header is an interface for configuring the default Recording Module.

  • The comms/multimedia_streaming/injector.h header is the interface to the default Injector Module.

  • The comms/sample/media_source/file/source_capture.h header provides the top-level file_source object, which can be used as a media source. This is a sample library written to provide an example of using media from a file as the source for the Paced Injector. The sources for the entire library can be found in the sdk-release/share/dolbyio/comms/sample/media_source/ directory.

  5. Pass the following arguments on the command line:
  • The output directory where you want to store the audio and video files
  • The conference alias of the conference that you want to create and join
  • The initial access token used to connect to the Dolby.io platform
  • The name of the media file that you want to inject into the conference

At the beginning of the main function, check the arguments passed from the command line with the following code:

std::string output_dir = argv[1];
std::cout << "output_dir: " << output_dir << std::endl;
std::string conf_alias = argv[2];
std::cout << "conf_alias: " << conf_alias << std::endl;
std::string access_token = argv[3];
std::cout << "Access Token: " << access_token << std::endl;
std::vector<std::string> media_files;
media_files.push_back(argv[4]);
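
The snippet above assumes that all four arguments are present. If you want to fail early on missing arguments, a minimal guard placed at the top of main (not part of the original sample) could look like this:

// Hypothetical argument check; the sample itself does not validate argc.
if (argc < 5) {
    std::cerr << "Usage: scpp_app <output_dir> <conf_alias> <access_token> <media_file>" << std::endl;
    return 1;
}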
  6. Fetch the access token to connect the application to the Dolby.io backend. You can either fetch the token inside the application via an HTTP request or provide the token externally. The fetch function is not included in the SDK.

In this sample application, the initial access token is passed on the command line and is not refreshed. Once the access token expires, the conference fails. If you want to refresh the token, you need to create a refresh_token_callback function object that fetches a new token whenever the current token needs to be refreshed and provides the new token to the SDK. The application passes refresh_token_callback to the dolbyio::comms::sdk::create call during the SDK initialization. The refresh_token_callback function object is invoked on the SDK's event loop, so it needs to fetch the token asynchronously to avoid blocking the event loop.

auto refresh_token_callback = [](std::unique_ptr<dolbyio::comms::refresh_token>&& refresh_token) {
    (void)refresh_token;
};
  7. Create the SDK instance.
// Create an instance of the SDK. This call is synchronous and returns a
// pointer to the SDK on success.
auto sdk = dolbyio::comms::sdk::create(access_token, refresh_token_callback);

The create method is a synchronous operation. When the function returns, the user gets an instance of the SDK ready to access the underlying services.

  8. Create an instance of the recording module. You can either use the provided Recording Module or create your own. In this example, we use the default Recording Module. If you want to write your own module, see the Write a media recorder section below.

Initiate the recorder and set the desired recording formats. Then, connect the recorder to a conference; the conference needs to use the recorder as a sink device for the incoming audio and video streams. Setting the media sink is an asynchronous operation, so this example uses the wait helper.

If you do not provide any recording module, the SDK will not record any audio or video streams.

// Create an instance of the default media recording module. The recorder
// writes media and metadata files to the output directory specified in the argument.
auto media_recorder = dolbyio::comms::plugin::recorder::create(output_dir, *sdk);

// Configure the media recorder.
// In this example, audio is stored as PCM and video is stored as encoded.
media_recorder->set_recording_config(
    dolbyio::comms::plugin::recorder::audio_recording_config::PCM,
    dolbyio::comms::plugin::recorder::video_recording_config::ENCODED
);

// Set the media recorder to be used by the conference service.
wait(sdk->conference().set_media_sink(media_recorder.get()));
  9. Create an instance of the Paced Injector module. In this example, we use the Paced Injector in combination with the Media Source File sample library. Before creating the injector, create a function object that serves as the injection status callback; the injector invokes this function object to report status changes, and it is passed to the Paced Injector's constructor. Both the Paced Injector and the Passthrough Injector require some type of media source that provides the decoded frames. This can be the sample Media Source File library used in this example or any other source, as long as it provides audio and video data to the injectors in the expected format.
// Create an instance of the Paced Injector module.
auto injection_status_cb = [](const dolbyio::comms::plugin::media_injection_status& status) {};
auto injector = std::make_unique<dolbyio::comms::plugin::injector_paced>(std::move(injection_status_cb));
  10. Create an instance of the Media Source File file_source object. This class captures media from the specified file and injects the decoded frames into the Paced Injector, so it must receive a reference to the Paced Injector during construction. The file_source class reports the status of the source capture to the application through a function object that the application provides when calling file_source::create. The entire public API for the file_source class can be found in sdk-release/share/dolbyio/comms/sample/media_source/file/source_capture.h. The application can use this API to pause, resume, seek, start, and stop the injection of media from a file.
auto status_cb = [](const dolbyio::comms::sample::file_source_status& status) { /* handle the status update */ };
auto injection_src = dolbyio::comms::sample::file_source::create(
      std::move(media_files), false/*loop file*/, *injector, std::move(status_cb));
  11. Connect the injector to a conference.
wait(sdk->conference().set_media_source(injector.get()));
  12. Use the open method to open a session.
// Open the session for the participant specified in the
// participant_info structure.
std::string user_name = "scpp_app";
dolbyio::comms::services::session::participant_info participant{};
participant.externalId = user_name;
participant.name = user_name;

wait(sdk->session().open(std::move(participant)));
  13. Use the create method to create a conference.
// Create a conference
dolbyio::comms::services::conference::conference_options create{};
create.alias = conf_alias;
dolbyio::comms::conference_info conf = wait(sdk->conference().create(create));

Even if the requested conference already exists, the method returns the conference_info object.

  14. Use the listen or join method and the received conference_info object to join the conference. The join method joins the conference as an active user who can send media streams into the conference. In this example, we join as a user and automatically start media injection on joining by setting the respective fields in the join_options structure. We also set the flag that turns off audio processing for the media injected from the file.
// Join the conference as a user or listener
dolbyio::comms::services::conference::join_options join_options{};
// Set the initial injection of audio/video to true
join_options.constraints.audio = true;
join_options.constraints.video = true;
join_options.constraints.audio_processing = false;
wait(sdk->conference().join(conf, join_options));

If the recorder was successfully provided to the conference, this step automatically starts recording the incoming media streams.

  15. Start the media injection from the file using the file_source object. Because the join_options were configured to start the initial injection of audio and video, the audio and video tracks are added when the conference is joined, so all that is left is to start the injection source.
if (!injection_src->set_video_capture(true))
  std::cerr << "starting video capture failed" << std::endl;
if (!injection_src->set_audio_capture(true))
  std::cerr << "starting audio capture failed" << std::endl;

In this example, we only show starting the media injection source after joining, because we joined the conference with join_options::constraints::audio and join_options::constraints::video set to true. To see how to start and stop audio and video injection during a conference, refer to sdk-release/share/dolbyio/comms/sample/sample_app/sample_app.cc. A sketch of stopping the injection is shown below.
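
If you later want to stop the injection, you can call the same setters with false. This is only a sketch based on the capture API shown above; see source_capture.h for the full pause, resume, and seek interface:

// Stop injecting video and audio from the file.
injection_src->set_video_capture(false);
injection_src->set_audio_capture(false);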

  16. To block the application while the conference is running, wait for the user to press q and Enter on the keyboard. Here you can also allow the user to input other characters as commands that interact with your application. Review the sdk-release/share/dolbyio/comms/sample/sample_app/sample_app.cc code for all of the interactive commands provided in the sample application.
std::cout << "Press q + Enter to leave" << std::endl;

for (;;) {
    std::string command;
    std::cin >> command;

    if (command == "q")
        break;
}
  17. Once the user requests to leave the conference, use the leave method to leave the conference and stop the recording.
// Leave the active conference
wait(sdk->conference().leave());
  18. Use the close method to close the session.
// Close the session
wait(sdk->session().close());

Summary

For reference, here is the content of the main.cc file:

#include <iostream>
#include <comms/multimedia_streaming/recorder.h>
#include <comms/multimedia_streaming/injector.h>
#include <comms/sdk.h>
#include <comms/sample/media_source/file/source_capture.h>

int main(int argc, char** argv) {
    try {
        std::string output_dir = argv[1];
        std::cout << "output_dir: " << output_dir << std::endl;
        std::string conf_alias = argv[2];
        std::cout << "conf_alias: " << conf_alias << std::endl;
        std::string access_token = argv[3];
        std::cout << "Access Token: " << access_token << std::endl;
        std::vector<std::string> media_files;
        media_files.push_back(argv[4]);

        // This sample app gets the token from the command line and cannot refresh
        // the token during run time. The token refresh callback is a no-op and when
        // the token expires, the conference fails:
        auto refresh_token_callback = [](std::unique_ptr<dolbyio::comms::refresh_token>&& refresh_token) {
            (void)refresh_token;
        };

        // Create an instance of the SDK. This call is synchronous and returns a
        // pointer to the SDK on success.
        auto sdk = dolbyio::comms::sdk::create(access_token, refresh_token_callback);

        // Create an instance of the default media recording module. The recorder
        // writes media and metadata files to the output directory specified in the argument.
        auto media_recorder = dolbyio::comms::plugin::recorder::create(output_dir, *sdk);

        // Configure the media recorder.
        // In this example, audio is stored as PCM and video is stored as encoded.
        media_recorder->set_recording_config(
            dolbyio::comms::plugin::recorder::audio_recording_config::PCM,
            dolbyio::comms::plugin::recorder::video_recording_config::ENCODED
        );

        // Set the media recorder to be used by the conference service.
        wait(sdk->conference().set_media_sink(media_recorder.get()));

        // Create an instance of the Paced Injector module.
        auto injection_status_cb = [](const dolbyio::comms::plugin::media_injection_status& status) {};
        auto injector = std::make_unique<dolbyio::comms::plugin::injector_paced>(std::move(injection_status_cb));

        // Create the media file source that provides decoded frames to the injector.
        auto source_status_cb = [](const dolbyio::comms::sample::file_source_status& status) { /* handle the status update */ };
        auto injection_src = dolbyio::comms::sample::file_source::create(
            std::move(media_files), false /*loop file*/, *injector, std::move(source_status_cb));

        // Set the injector to be used by the conference service.
        wait(sdk->conference().set_media_source(injector.get()));

        // Open the session for the participant specified in the
        // participant_info structure.
        std::string user_name = "scpp_app";
        dolbyio::comms::services::session::participant_info participant{};
        participant.externalId = user_name;
        participant.name = user_name;

        wait(sdk->session().open(std::move(participant)));

        // Create a conference.
        dolbyio::comms::services::conference::conference_options create{};
        create.alias = conf_alias;
        dolbyio::comms::conference_info conf = wait(sdk->conference().create(create));

        // Join the conference as a user and start injecting audio and video.
        dolbyio::comms::services::conference::join_options join_options{};
        join_options.constraints.audio = true;
        join_options.constraints.video = true;
        join_options.constraints.audio_processing = false;
        wait(sdk->conference().join(conf, join_options));

        if (!injection_src->set_video_capture(true))
            std::cerr << "starting video capture failed" << std::endl;
        if (!injection_src->set_audio_capture(true))
            std::cerr << "starting audio capture failed" << std::endl;

        std::cout << "Press q + Enter to leave" << std::endl;

        for (;;) {
            std::string command;
            std::cin >> command;

            if (command == "q")
                break;
        }

        // Leave the active conference.
        wait(sdk->conference().leave());

        // Close the session.
        wait(sdk->session().close());
    }
    catch (std::exception& e) {
        std::cerr << "Error! " << e.what() << std::endl;
        return 1;
    }

    return 0;
}

Build and run

  1. From the sdk-release directory, compile the application to build the scpp_app executable.
cd share/dolbyio/comms/sample/
mkdir build && cd build/
cmake ../
cmake --build . --target scpp_app
  2. List the files in the build/app/ folder to find the scpp_app executable. Run the executable and pass the required arguments:
./scpp_app . conf_alias access_token test.mp4

Get an access token using the Client access token REST API.

Write a media injection source

The default Injector Module requires a source to provide it with raw video and audio: YUV video frames and 10 ms frames of signed 16-bit PCM audio. For information on how to write a media injection source that captures media frames and injects them into the conference using one of the default Media Injectors, see sdk-release/share/dolbyio/comms/sample/media_source/file/. This library captures audio and video from MP4/MOV files, decodes H.264 video and AAC audio, and provides the raw frames to the Paced Injector. The media_source_file example library is used in the sdk-release/share/dolbyio/comms/sample/sample_app/sample_app.cpp application provided in the package.

Write a media recorder

In the previous section, we used the default Media Recorder as the recording module. This section shows how to write a recording module from scratch using the Media Recorder API.

The SDK provides a set of Media Recorder C++ API interfaces with virtual functions that are called with media data for the respective WebRTC media streams. All these interfaces need to be implemented in the recording module. This section shows an example of a recorder implementation that presents the bare minimum required to compile a recording module; it does not do anything with the audio and video frames. The following examples present the implementation in the custom_recorder.h, custom_recorder.cc, and CMakeLists.txt files:

The custom_recorder.h file:

#include <comms/media_engine/media_engine.h>
#include <comms/sdk.h>

#include <string>
#include <memory>

class custom_recorder_impl
    : public dolbyio::comms::audio_sink,
      public dolbyio::comms::video_sink_yuv,
      public dolbyio::comms::video_sink_encoded,
      public dolbyio::comms::media_sink_interface {
public:
    custom_recorder_impl(dolbyio::comms::sdk &sdk);
    ~custom_recorder_impl();

    enum class audio_format {
        NONE,
        PCM,
        AAC
    };

    enum class video_format {
        NONE,
        ENCODED,
        YUV
    };

    void configure_custom_recorder(video_format, audio_format);

    // The audio_sink interface
    void handle_audio(const std::string &stream_id,
                      const std::string &track_id,
                      const int16_t *data,
                      size_t n_data,
                      int sample_rate,
                      size_t channels) override;

    // The video_sink_yuv interface
    void handle_frame(const std::string &stream_id,
                      const std::string &track_id,
                      std::unique_ptr<dolbyio::comms::frame> frame) override;

    // The video_sink_encoded interface
    void set_codec_name(const std::string &codec,
                        const std::string &track_id) override;

    void handle_frame_encoded(const std::string &track_id,
                              const uint8_t *data,
                              ssize_t size,
                              int width,
                              int height,
                              bool is_keyframe) override;

    // The media_sink_interface
    dolbyio::comms::audio_sink *audio() override;
    dolbyio::comms::video_sink_yuv *video_yuv() override;
    dolbyio::comms::video_sink_encoded *video_enc() override;

private:
    dolbyio::comms::sdk &sdk_;

    audio_format af_;
    video_format vf_;
};

The custom_recorder.cc file:

#include "custom_recorder.h"

custom_recorder_impl::custom_recorder_impl(dolbyio::comms::sdk &sdk)
    : sdk_(sdk) {}
custom_recorder_impl::~custom_recorder_impl() = default;

void custom_recorder_impl::configure_custom_recorder(video_format vf, audio_format af) {
    vf_ = vf;
    af_ = af;
}

void custom_recorder_impl::handle_audio(const std::string &stream_id,
                                        const std::string &track_id,
                                        const int16_t *data,
                                        size_t n_data,
                                        int sample_rate,
                                        size_t channels) {
    // Handle audio frames
}

void custom_recorder_impl::handle_frame(const std::string &stream_id,
                                        const std::string &track_id,
                                        std::unique_ptr<dolbyio::comms::frame> frame) {
    // Handle raw video frames
}

void custom_recorder_impl::set_codec_name(const std::string &codec,
                                          const std::string &track_id) {
    // Set the codec for encoded video frames
}
void custom_recorder_impl::handle_frame_encoded(const std::string &track_id,
                                                const uint8_t *data,
                                                ssize_t size,
                                                int width,
                                                int height,
                                                bool is_keyframe) {
    // Handle encoded video frames
}

dolbyio::comms::audio_sink *custom_recorder_impl::audio() {
    if (af_ != audio_format::NONE) {
        return this;
    }

    return nullptr;
}

dolbyio::comms::video_sink_yuv *custom_recorder_impl::video_yuv() {
    if (vf_ == video_format::YUV) {
        return this;
    }

    return nullptr;
}

dolbyio::comms::video_sink_encoded *custom_recorder_impl::video_enc() {
    if (vf_ == video_format::ENCODED) {
        return this;
    }

    return nullptr;
}

The CMakeLists.txt file:

cmake_minimum_required(VERSION 3.0)

project(scpp_app)

set(CMAKE_CXX_STANDARD 17)
if(NOT DOLBYIO_COMMS_MODULES_LOCATION)
    set(DOLBYIO_COMMS_MODULES_LOCATION ${CMAKE_CURRENT_LIST_DIR}/../share/dolbyio/comms/cmake)
endif()

find_package(DolbyioComms REQUIRED
    PATHS ${DOLBYIO_COMMS_MODULES_LOCATION}
)

add_executable(scpp_app
    main.cc
    custom_recorder.h
    custom_recorder.cc
)

target_link_libraries(scpp_app
    DolbyioComms::sdk
)

The recorder instance needs to be passed to the conference service as a media sink for the conference via the set_media_sink method. To do this, add the include statement for the custom recorder in the app/main.cc file:

#include "custom_recorder.h"

After the initialization of the SDK instance, remove the creation of the Media Recorder instance and replace it with the custom recorder:

auto custom_recorder = std::make_unique<custom_recorder_impl>(*sdk);
custom_recorder->configure_custom_recorder(
    custom_recorder_impl::video_format::YUV,
    custom_recorder_impl::audio_format::PCM
);
wait(sdk->conference().set_media_sink(custom_recorder.get()));

Write a media injector

In the previous section, we used the default Media Injector as the injection module. This section shows how to write an injector module from scratch using the Media Injector API.

The SDK provides a set of Media Injector C++ API interfaces with virtual functions that provide the ability to create media pipelines for injecting the respective WebRTC media streams. All these interfaces need to be implemented in the injector module. This section shows an example of an injector implementation that presents the minimum required to compile an injector module; it does not do anything with the audio and video frames. The following examples present the implementation in the custom_injector.h, custom_injector.cc, and CMakeLists.txt files:

The custom_injector.h file:

#include <comms/media_engine/media_engine.h>
#include <memory>
#include <mutex>

class custom_injector_impl : public dolbyio::comms::media_source_interface,
                             public dolbyio::comms::video_source,
                             public dolbyio::comms::audio_source {
 public:
    custom_injector_impl();
    ~custom_injector_impl();

    bool inject_audio_frame(std::unique_ptr<dolbyio::comms::audio_frame>&& frame);
    bool inject_video_frame(std::unique_ptr<dolbyio::comms::video_frame>&& frame);

    // media_source_interface
    dolbyio::comms::video_source* video() override { return this; }
    dolbyio::comms::audio_source* audio() override { return this; }

    // audio_source interface
    void register_audio_frame_rtc_source(
        dolbyio::comms::rtc_audio_source* source) override;
    void deregister_audio_frame_rtc_source() override;

    // video_source interface
    void register_video_frame_rtc_source(
        dolbyio::comms::rtc_video_source* source) override;
    void deregister_video_frame_rtc_source() override;

 private:
    // These are essentially audio/video sinks from the point of view of the injector,
    // providing the connection to WebRTC in the respective media pipelines.
    dolbyio::comms::rtc_audio_source* rtc_audio_ = nullptr;
    dolbyio::comms::rtc_video_source* rtc_video_ = nullptr;

    std::mutex audio_lock_;
    std::mutex video_lock_;
};

The custom_injector.cc file:

#include "custom_injector.h"

custom_injector_impl::custom_injector_impl() = default;
custom_injector_impl::~custom_injector_impl() = default;

bool custom_injector_impl::inject_audio_frame(
    std::unique_ptr<dolbyio::comms::audio_frame>&& frame) {
    std::lock_guard<std::mutex> lock(audio_lock_);
    if (frame && rtc_audio_) {
        rtc_audio_->on_data(frame->data(), 16, frame->sample_rate(),
                            frame->channels(), frame->samples());
        return true;
    }
    return false;
}

bool custom_injector_impl::inject_video_frame(
    std::unique_ptr<dolbyio::comms::video_frame>&& frame) {
    std::lock_guard<std::mutex> lock(video_lock_);
    if (frame && rtc_video_) {
        rtc_video_->handle_frame(std::move(frame));
        return true;
    }
    return false;
}

// audio_source interface
void custom_injector_impl::register_audio_frame_rtc_source(
    dolbyio::comms::rtc_audio_source* source) {
    std::lock_guard<std::mutex> lock(audio_lock_);
    rtc_audio_ = source;
}

void custom_injector_impl::deregister_audio_frame_rtc_source() {
    std::lock_guard<std::mutex> lock(audio_lock_);
    rtc_audio_ = nullptr;
}

// video_source interface
void custom_injector_impl::register_video_frame_rtc_source(
    dolbyio::comms::rtc_video_source* source) {
    std::lock_guard<std::mutex> lock(video_lock_);
    rtc_video_ = source;
}

void custom_injector_impl::deregister_video_frame_rtc_source() {
    std::lock_guard<std::mutex> lock(video_lock_);
    rtc_video_ = nullptr;
}

The CMakeLists.txt file:

cmake_minimum_required(VERSION 3.0...3.21)
      
add_executable(custom_injector
    main.cc
    custom_injector.h
    custom_injector.cc
)

target_link_libraries(custom_injector
    DolbyioComms::sdk
)
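
As with the custom recorder, the custom injector instance needs to be passed to the conference service, this time as the media source via the set_media_source method. The following is a minimal sketch of the wiring in app/main.cc, replacing the creation of the Paced Injector; it assumes the rest of the application stays the same:

#include "custom_injector.h"

auto custom_injector = std::make_unique<custom_injector_impl>();
wait(sdk->conference().set_media_source(custom_injector.get()));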

Sample Application

To build and run the sample application that is available in the SDK package, follow these steps:

  1. Download the SDK package.

  2. Go to the sdk-release/share/dolbyio/comms/sample/ directory and find the sample_app/sample_app.cpp and sample_app/CMakeLists.txt files.

  3. Use the top-level sdk-release/share/dolbyio/comms/sample/CMakeLists.txt CMake file and specify the sample_app target to compile the application and build the sample_app executable.

mkdir build && cd build/
cmake ../
cmake --build . --target sample_app
  4. Run the created executable using the following command line parameters:
Argument   Description
-u         The user name. Setting this parameter assigns the provided value to the external ID.
-a         The access token.
-c         The conference alias.
-t         The conference access token.
-i         The conference ID.
-l         The logging level, where the available values are between 0 and 5. By default, the parameter is set to 3 (INFO).
-j         Join as a user or listener; the options are "user" or "listener".
-m         The initial media to inject on joining a conference. The options are "AV" for audio and video, "A" for audio only, and "V" for video only.
-f         The name of the media file to inject.

For example:

./sample_app -u USERNAME -a ACCESS_TOKEN -i CONF_ID -l LOG_LEVEL -m AV -j user -f some_file.mp4

If you have problems running the application because a required shared library cannot be found, set LD_LIBRARY_PATH to the location of the sdk-release/lib directory. If you plan to move the libraries to different locations, include these locations in the path. The path can be set as follows:

export LD_LIBRARY_PATH=/path/to/sdk-release/lib/:$LD_LIBRARY_PATH
