Getting Started with Publishing

Follow these steps to add the publishing capability to your application.

1. Initialize the SDK

Call the initialize method to set up the SDK with your application context.

import com.millicast.Core

class MainActivity : FragmentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        Core.initialize()
    }
}

2. Capture audio and video

To capture media, get an array of the available audio and video sources and choose your preferred source from each list. After you start capturing, the SDK returns an audio track and a video track that you can later add to the publisher.

// Get the first available microphone
val audioSource = Media.audioSources<MicrophoneAudioSource>().first()

// Start capturing audio
val audioTrack = try {
    audioSource.startCapture()
} catch (e: Throwable) {
    // If starting the audio capture fails, check your microphone permissions
    null
}

// Get the first camera source
val videoSource = Media.videoSources<CameraVideoSource>().first()

// Get capabilities of the available video sources, such as width, height, and frame rate
val capabilities = videoSource.capabilities

// Set the preferred capability; not setting any capability object results in setting the first one from the list

// Start capturing video
val videoTrack = try {
    videoSource.startCapture()
} catch (e: RuntimeException) {
    // If starting the video capture fails, check the camera permissions
    // or whether another application has exclusive access to the camera
    null
}

// Handle switching between cameras
videoSource.switchCamera(object : SwitchCameraHandler {
    override fun onCameraSwitchDone(isFrontCamera: Boolean) {
        // Called when the camera switch completes
    }

    override fun onCameraSwitchError(reason: String?) {
        // Called when the camera switch fails
    }
})

// Replace width, height, and fps with your own values
videoSource.changeCaptureFormat(width, height, fps)

3. Set logger

Optionally, set your own logger function to print Real-time Streaming logs according to their severity. By default, the SDK prints to standard output, displaying the severity first and then the message.

import com.millicast.utils.Logger

Logger.setLoggerListener { msg, logLevel -> Log.d("SDK log", msg) }
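For example, a listener can filter by severity before forwarding messages. The following is a plain-Kotlin sketch of that pattern; the LogLevel enum and listener shape here are illustrative stand-ins, not the SDK's own types:

```kotlin
// Hypothetical severity levels, ordered from least to most severe
enum class LogLevel { DEBUG, INFO, WARNING, ERROR }

// Build a listener that forwards only messages at or above a minimum severity
fun makeFilteringListener(
    minLevel: LogLevel,
    sink: (String) -> Unit
): (String, LogLevel) -> Unit = { msg, level ->
    if (level.ordinal >= minLevel.ordinal) sink("[$level] $msg")
}

fun main() {
    val lines = mutableListOf<String>()
    val listener = makeFilteringListener(LogLevel.WARNING) { lines += it }
    listener("verbose detail", LogLevel.DEBUG)   // filtered out
    listener("capture failed", LogLevel.ERROR)   // forwarded
    println(lines)  // prints [[ERROR] capture failed]
}
```

You would pass a function like this to Logger.setLoggerListener instead of logging every message unconditionally.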

4. Create a publisher

Create a publisher object and make sure to use the publisher's methods in a coroutine context. Then, create a stream in your developer dashboard or using the Streaming REST API and set your credentials.

// Helper for later usage
fun <T> CoroutineScope.safeLaunch(
	onError: (suspend CoroutineScope.(err: Throwable) -> T)? = null,
    block: suspend CoroutineScope.() -> T
  ) = launch {
    try {
    } catch (err: Throwable) {
      onError?.invoke(this, err)

  val publisher = Core.createPublisher()

  // Most of the publisher's methods need to be called in a coroutine context,
  // such as viewModelScope or ServiceJob
  val coroutineScope = CoroutineScope(Dispatchers.IO)

  // In this sample, we use the scope to collect every new publisher state
  // For instance, in a jetpack compose implementation it could be:
  // @Composable
  // fun MyLoadingScreen(publisher: Publisher) {
  //   val state by publisher.state.collectAsState(null)
  //   state?.let {
  //     when(it.connectionState) {
  //       ConnectionState.Connected -> ...
  //       else -> ...
  //     }
  //   }
  // }
  coroutineScope.async {
    publisher.state.collect { newPublisherState ->
      Log.d("SAMPLE", "new publisher state: $newPublisherState")
    }
  }

  // Get the credentials structure from your publisher instance, fill it in, and set the modified credentials
  coroutineScope.safeLaunch {
    val credential = Credential(
      // Set the streamName, token, and API URL
      streamName = "myStreamName",
      token = "aefea56153765316754fe",
      apiUrl = ""
    )
    publisher.setCredentials(credential)
  }


5. Add the audio and video track

Add the audio and video tracks that you created earlier when you started capturing media.

// Use the previous publisher and coroutine scope
coroutineScope.safeLaunch {
  // Previously created tracks:
  videoTrack?.let { publisher.addTrack(it) }
  audioTrack?.let { publisher.addTrack(it) }
}

6. Publish the stream

Get a list of the available codecs and set the codecs that you want to use. By default, the SDK uses VP8 as the video codec and Opus as the audio codec.

Additionally, to publish several sources from the same application, create a publisher instance for each source. We recommend enabling discontinuous transmission (DTX), which detects audio input and sends audio only when it is present.

Use the connect method to authenticate and access Real-time Streaming through the Director API. Successful authentication opens a WebSocket connection to the Real-time Streaming server and triggers the listener's onConnected method.

Then, use the publish method to start publishing the stream. Once the publisher starts sending media, the SDK calls the listener's onPublishing method.

// Use the previous publisher and coroutine scope

coroutineScope.safeLaunch {
  val videoCodecs = Media.supportedVideoCodecs
  val audioCodecs = Media.supportedAudioCodecs

  // Authenticate; on success the SDK opens the WebSocket connection
  publisher.connect()

  // Start publishing with the selected options
  publisher.publish(
    Option(
      // Choose the preferred codecs
      videoCodec = videoCodecs.first(),
      audioCodec = audioCodecs.first(),
      // If you want to support multi-source, set a source ID for the publisher
      sourceId = "sourceId",
      // Enable discontinuous transmission
      dtx = true,
      // Enable stereo
      stereo = true
    )
  )
}
Collecting RTC statistics

You can periodically collect WebRTC peer connection statistics by enabling them through the publisher's enableStats method. After enabling statistics, you receive a report every second through the onStatsReport callback in the listener object. The identifiers and the way to browse the stats follow the W3C WebRTC statistics specification.
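As a sketch only, reusing the publisher and coroutine scope from the earlier steps: the enableStats method and onStatsReport callback are named above, but the exact signatures shown here are assumptions and may differ between SDK versions.

```kotlin
// Enable statistics collection on the publisher (sketch; signature assumed)
coroutineScope.safeLaunch {
    publisher.enableStats(true)
}

// In the listener object, a report then arrives roughly once per second:
// override fun onStatsReport(report: RtcStatsReport) {
//     // Browse the entries using the identifiers from the WebRTC stats specification
//     Log.d("STATS", report.toString())
// }
```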