The Native SDK for creating Android applications. You may use it in your Android project to connect, capture, publish, or subscribe to streams using the Streaming Platform.

The package name of the Native SDK for Android is millicast-native-sdk-version-Android.tar.gz.



If you want to use Streaming APIs on Android and iOS, you can also use the Flutter SDK or the React Native plugin.

This guide explains the basic use of the SDK by showing how to start publishing a stream and how to subscribe to the streamed content to render audio and video.


Before you start:

  • Sign up for a free account.
  • Make sure that you have a working video camera and microphone.
  • Make sure that you use Android API level 24 or later.

Getting started with publishing

Follow these steps to add the publishing capability to your application.

1. Initialize the SDK

Call the initMillicastSdk method to initialize the SDK with your application context.

import com.millicast.Client;

public class MainActivity extends AppCompatActivity implements NavigationView.OnNavigationItemSelectedListener {
    public static final String TAG = "MainActivity";
    private static Context context;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Initialize when the application starts
        if (savedInstanceState == null) {
            context = getApplicationContext();
            // Load the native shared library and initialize the SDK with the application context
            Client.initMillicastSdk(context);
        }
    }
}

2. Capture audio and video

To capture media, get an array of available audio and video sources and choose the preferred sources from the list. After you start capturing audio and video, the SDK will return an audio and video track that you can add to the publisher later.

// Get an array of audio sources
Media media = Media.getInstance(context); // Get the instance with your application context
ArrayList<AudioSource> audioSources = media.getAudioSources();

// Choose the preferred audio source from the array
AudioSource audioSource = audioSources.get(0); // Get the first audio source
AudioTrack audioTrack;
try {
    audioTrack = (AudioTrack) audioSource.startCapture();
} catch (RuntimeException e) {
    // Problem when starting the audio capture
}

// Get an array of available video sources
Media media = Media.getInstance(context); // Get the instance with your application context
ArrayList<VideoSource> videoSources = media.getVideoSources();

// Choose the preferred video source from the array
VideoSource videoSource = videoSources.get(0); // Get the first source

// Get capabilities of the available video sources, such as width, height, and frame rate
ArrayList<VideoCapabilities> capabilities = videoSource.getCapabilities();
Camera.Parameters parameters = videoSource.getParameters(); // Optionally, get the camera parameters

// Set the preferred capability; not setting any capability object results in setting the first one from the list
videoSource.setCapability(capabilities.get(0)); // Set the first capability from the list

// Start capturing video
VideoTrack videoTrack;
try {
    videoTrack = (VideoTrack) videoSource.startCapture();
} catch (RuntimeException e) {
    // Problem when starting the video capture
}

// Handle switching between cameras
class SwitchHdl implements VideoSource.SwitchCameraHandler {
    public void onCameraSwitchDone(boolean b) {}
    public void onCameraSwitchError(String s) {}
}

videoSource.switchCamera(new SwitchHdl());
videoSource.changeFormat(width, height, fps); // Optionally, change the capture format

3. Set logger

Optionally, set your own logger function to print Real-time Streaming logs according to their severity. By default, the SDK prints to the standard output, displaying the severity first and then the message.

Logger.setLoggerListener((String msg, LogLevel level) -> {
    String logTag = "[SDK][Log][L:" + level + "] ";
    Log.d(TAG, logTag + msg);
});

4. Publish a stream

Create a publisher object and set a listener object to the publisher to receive proper events. This requires creating a class that inherits the publisher's listener interface. Then, create a stream in your developer dashboard or using the Streaming REST API and set your credentials.

PubListener listener = new PubListener();
Publisher publisher = Publisher.createPublisher(listener);

// Set a listener object to the publisher to receive proper events
public class PubListener implements Publisher.Listener {
    public PubListener() {}

    public void onPublishing() {}

    public void onPublishingError(String s) {}

    public void onConnected() {}

    public void onConnectionError(String reason) {}

    public void onSignalingError(String s) {}

    public void onStatsReport(RTCStatsReport statsReport) {}

    public void onViewerCount(int count) {}

    public void onActive() {}

    public void onInactive() {}
}

// Get the credentials structure from your publisher instance, fill it in, and set the modified credentials
Publisher.Credential creds = publisher.getCredentials();
creds.streamName = "streamName"; // The name of the stream you want to publish
creds.token = "aefea56153765316754fe"; // The publishing token
creds.apiUrl = ""; // The publish API URL


5. Configure your publishing session

Get a list of the available codecs and set the codecs that you want to use. By default, the SDK uses VP8 as the video codec and Opus as the audio codec.

Additionally, to publish several sources from the same application, create a publisher instance for each source. We also recommend enabling discontinuous transmission (DTX), which detects voice activity and sends audio only while it is detected.

Publisher.Option publisherOption = new Publisher.Option();

// Get a list of codecs
Media media = Media.getInstance(context); // Get the media instance with the application context
ArrayList<String> videoCodecs = media.getSupportedVideoCodecs();
ArrayList<String> audioCodecs = media.getSupportedAudioCodecs();

// Choose the preferred codecs
publisherOption.videoCodec = Optional.of(videoCodecs.get(0)); // Set the first video codec
publisherOption.audioCodec = Optional.of(audioCodecs.get(0)); // Set the first audio codec

// If you want to support multi-source, set a source ID of the publisher
publisherOption.sourceId = "sourceId";

// Enable discontinuous transmission (DTX) to send audio only when voice activity is detected
publisherOption.dtx = true;

// Enable stereo
publisherOption.stereo = true;

// Set the selected options to the publisher
publisher.setOptions(publisherOption);

6. Add the audio and video track

Add the audio and video track that you created earlier when you started capturing media.
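The original snippet for this step is missing; a minimal sketch, assuming the publisher exposes an `addTrack` method as in the other Millicast SDKs:

```java
// Add the tracks captured earlier to the publisher
publisher.addTrack(videoTrack);
publisher.addTrack(audioTrack);
```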


7. Authenticate using the Director API

Authenticate to access Real-time Streaming through the Director API.
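The connection code is missing from the original; a minimal sketch, assuming the `setCredentials` and `connect` methods of the publisher:

```java
// Set the credentials filled in earlier, then authenticate through the Director API
publisher.setCredentials(creds);
publisher.connect();
```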


Successful authentication opens a WebSocket connection to the Real-time Streaming server and the SDK calls the listener's onConnected method.

8. Start publishing
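The publishing call is not shown in the original; a minimal sketch, assuming a `publish` method on the publisher:

```java
// Start publishing once the connection is established, for example in the listener's onConnected callback
publisher.publish();
```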


Once the publisher starts sending media, the SDK calls the listener's onPublishing method.

Getting started with subscribing

Follow these steps to add the subscribing capability to your application.

1. Create a subscriber object

SubListener listener = new SubListener();
Subscriber subscriber = Subscriber.createSubscriber(listener);

2. Create a listener class

Create a viewer's listener class by inheriting the viewer listener's interface.

public class SubListener implements Subscriber.Listener {
    public SubListener() {}

    public void onSubscribed() {}
    public void onSubscribedError(String s) {}
    public void onConnected() {}
    public void onConnectionError(String reason) {}
    public void onStopped() {}
    public void onSignalingError(String s) {}
    public void onStatsReport(RTCStatsReport statsReport) {}
    public void onTrack(VideoTrack videoTrack, Optional<String> mid) {}
    public void onTrack(AudioTrack audioTrack, Optional<String> mid) {}
    public void onActive(String streamId, String[] tracks, Optional<String> sourceId) {}
    public void onInactive(String streamId, Optional<String> sourceId) {}
    public void onLayers(String mid, LayerData[] activeLayers, LayerData[] inactiveLayers) {}
    public void onVad(String mid, Optional<String> sourceId) {}
    public void onViewerCount(int count) {}
}

3. Set up credentials

Get your stream name and account ID from the dashboard and set them up in the SDK.

Subscriber.Credential creds = this.subscriber.getCredentials();
creds.streamName = "streamName"; // The name of the stream you want to subscribe to
creds.accountId = "ACCOUNT"; // The ID of your Real-time Streaming account
creds.apiUrl = ""; // The subscribe API URL


4. Configure the viewer by setting your preferred options

Configure your stream to receive multi-source content.

Subscriber.Option optionSubscriber = new Subscriber.Option();

optionSubscriber.pinnedSourceId = Optional.of("mainSource"); // The main source that will be received by the default media stream
optionSubscriber.multiplexedAudioTrack = 3; // Enables audio multiplexing and denotes the number of audio tracks to receive as Voice Activity Detection (VAD) multiplexed audio
optionSubscriber.excludedSourceId = new String[]{ "excluded" }; // Audio streams that should not be included in the multiplex, for example your own audio stream

// Set the selected options
subscriber.setOptions(optionSubscriber);

5. Create a WebSocket connection

Authenticate and create a WebSocket connection to connect with the Real-time Streaming server.
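The connection code is missing from the original; a minimal sketch, assuming the `setCredentials` and `connect` methods of the subscriber:

```java
// Set the credentials, then authenticate and open the WebSocket connection
subscriber.setCredentials(creds);
subscriber.connect();
```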


If the connection fails, the SDK calls the listener's onConnectionError method with the HTTP error code and failure message. If the code is 0, double-check your internet connection or the API URL set in the credentials. If the connection is successful, the SDK calls the onConnected method.

6. Subscribe to the streamed content
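The subscribing call is not shown in the original; a minimal sketch, assuming a `subscribe` method on the subscriber:

```java
// Start subscribing once connected, for example in the listener's onConnected callback
subscriber.subscribe();
```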


When the operation is successful, the SDK calls onSubscribed and sends you an event in the listener with the created audio and video tracks. Otherwise, the SDK calls onSubscribedError with an error message.

7. Project media

If publishers use the multi-source feature, you need to project tracks into a specified transceiver using its media ID (mid). By default, if you do not project anything, you receive no media. When you start subscribing, you receive the active event with the track IDs and the source ID. In order to project a track into a transceiver, you must use the project method of the viewer. You need to specify the source ID you are targeting and an array of the tracks you want to project.

By default, only one video and one audio track are negotiated in the SDP. If there are several publishers sending media in one stream, you can dynamically add more tracks using the addRemoteTrack method each time you receive an active event. The method adds a new transceiver and renegotiates the SDP locally. When successful, the SDK creates a new track and calls the onTrack callback, so you can get the track and its corresponding mid.

// Get mid either from the `onTrack` callback of the listener object or by calling the `getMid` method with the track ID
// Option 1

public class SubListener implements Subscriber.Listener {
    /* ... */
    public void onTrack(VideoTrack videoTrack, Optional<String> mid) {
        // Store the mid value somewhere
    }
    /* ... */
}

// Option 2
String mid = subscriber.getMid(track.getName()).get();

// Project a video track
ArrayList<Subscriber.ProjectionData> projectionDataArray = new ArrayList<>();

Subscriber.ProjectionData projectionData = new Subscriber.ProjectionData();
projectionData.mid = mid; // The media ID of the transceiver you want to project into
projectionData.media = "video"; // The media track type, either "video" or "audio"
projectionData.trackId = trackId; // The name of the track on the media server side, which is the track ID you get in the active event
projectionDataArray.add(projectionData); // Add the projection data to the array

subscriber.project(sourceId, projectionDataArray);

subscriber.addRemoteTrack("video"); // "audio" or "video" depending on the type of track you want to add

To stop projecting the track, call the unproject method, which requires an array of the media IDs that you want to stop projecting.
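A minimal sketch of stopping the projection, assuming `unproject` takes a list of media IDs:

```java
// Stop projecting the track associated with the stored media ID
ArrayList<String> mids = new ArrayList<>();
mids.add(mid);
subscriber.unproject(mids);
```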

8. Select a layer that you want to receive

When a publisher uses Simulcast or the SVC mode when sending a video feed, the media server automatically chooses the right layer to send to the viewer according to the bandwidth estimation. However, you can force the server to send you a specific layer by calling the select method.

For example, if the sender uses Simulcast, it is possible to receive three different encoding IDs: 'h' for the high resolution, 'm' for the medium one, and 'l' for the low. In order to choose the medium resolution, you have to do the following:

LayerData layerData = new LayerData();
layerData.encoding_id = "m"; // The encoding ID, which is the ID of the Simulcast layer
layerData.temporal_layer_id = 1; // The ID of the temporal layer
layerData.spatial_layer_id = 0; // The ID of the spatial layer

subscriber.select(Optional.of(layerData)); // An empty optional means the server makes an automatic selection

You can retrieve all the available layers with their corresponding IDs through the onLayers callback.

9. Manage broadcast events

When broadcast events occur, the SDK calls the corresponding callback in the listener object. The SDK listens to all events and does not allow disabling them; it offers the following event listeners:

  • Publisher event listeners:
    • onActive: called when the first viewer starts viewing a stream
    • onInactive: called when the last viewer stops viewing a stream
    • onViewerCount: called each time the number of viewers changes; all clients connected to the stream are notified about the current number of viewers
  • Viewer event listeners:
    • onActive: called when a new source starts publishing a stream; it contains the stream ID, the track information, and the source ID
    • onInactive: called when a source is no longer published within a stream; it contains the stream ID and the source ID
    • onStopped: called when a stream stops
    • onVad: called when a source ID is multiplexed into an audio track based on the voice activity level; it contains the mid of the track and the source ID
    • onLayers: called when Simulcast or SVC layers are available; it contains arrays of LayerData objects that you can use in the select command
    • onViewerCount: called each time a new viewer enters or leaves a stream; all clients connected to the stream are notified about the current number of viewers

10. Render video

The SDK provides an interface that lets you implement a class responsible for receiving video frames.

Write a class that inherits VideoRenderer. Then, add a layout in your fragment so that you can add the renderer view later in Java code.


After this step, you will get the VideoFrame whenever it is available. You can render the frame using any graphic library.

To get video data from the VideoFrame, use the getBuffer method. Allocate a buffer with the correct size beforehand. You can get the required size of the buffer for a given VideoType using the size method. Both getBuffer and size let you choose the VideoType of your video data, either VideoType.ARGB or VideoType.I420.

Then, create an instance of your renderer and add it to a local or remote video track, select one playback device to be able to play audio, and adjust the volume of remote tracks.

// Create an instance of your renderer and add it to a local or remote video track
linearLayoutVideo = view.findViewById(R.id.linear_layout_video); // Hypothetical layout ID; the original snippet was truncated here
VideoRenderer renderer = new VideoRenderer(context); // Create a renderer with the application context
linearLayoutVideo.addView(renderer); // Add the renderer view to the layout

// When you get your video track, attach the renderer to it
videoTrack.setRenderer(renderer);

// Select one playback device to be able to play audio
Media media = Media.getInstance(applicationContext);
ArrayList<AudioPlayback> audioPlayback = media.getAudioPlayback();
audioPlayback.get(0).initPlayback(); // Initialize the first playback device

// Adjust the volume of remote tracks
audioTrack.setVolume(1.0); // The volume should be between 0 and 1

Collecting RTC statistics

You can periodically collect the WebRTC peer connection statistics by enabling them through the getStats method of the viewer or publisher. After enabling the statistics, you will get a report every second through the onStatsReport callback in the listener object. The identifiers and the way to browse the stats follow the WebRTC statistics specification.
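A minimal sketch of enabling the statistics collection, assuming `getStats` takes a boolean flag that toggles reporting:

```java
// Enable statistics collection; reports then arrive every second via onStatsReport
publisher.getStats(true);
subscriber.getStats(true);
```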