Getting Started with Subscribing

Follow these steps to add the subscribing capability to your application.

1. Import the SDK

import MillicastSDK

2. Configure the audio session for playback.

import AVFoundation

let session = AVAudioSession.sharedInstance()
try session.setCategory(
    .playback,
    mode: .videoChat,
    options: [.mixWithOthers, .allowBluetooth, .allowBluetoothA2DP]
)
try session.setActive(true)

3. Create a subscriber

Create a subscriber of type MCSubscriber.

let subscriber = MCSubscriber()

You can optionally implement the MCSubscriberDelegate to receive callbacks:

class SubDelegate: MCSubscriberDelegate {
  func onConnected() { }
  func onStatsReport(_ report: MCStatsReport) { }
  func onViewerCount(_ count: Int32) { }
  func onSubscribed() { }
  func onSubscribedError(_ reason: String) { }
  
  func onVideoTrack(_ track: MCVideoTrack, withMid mid: String) {
    track.enable(true)
    // Add the track to the rendering view as needed
  }
  
  func onAudioTrack(_ track: MCAudioTrack, withMid mid: String) {
    track.enable(true)
    // Keep hold of the track
  }
  
  func onActive(_ streamId: String, tracks: [String], sourceId: String) { }
  func onInactive(_ streamId: String, sourceId: String) { }
  func onStopped() { }
  func onVad(_ mid: String, sourceId: String) { }
  func onLayers(_ mid: String, activeLayers: [MCLayerData], inactiveLayers: [MCLayerData]) { }
  func onDisconnected() { }
  func onConnectionError(_ status: Int32, withReason reason: String) { }
  
  func onSignalingError(_ message: String) { }
}

You can set this delegate during the initialization of MCSubscriber instead. Make sure to keep the delegate alive throughout the lifetime of the subscriber, since it does not retain its reference.

let subDelegate = SubDelegate()
let subscriber = MCSubscriber(delegate: subDelegate)

4. Setup your credentials

Get your stream name and account ID from the dashboard and set them in the SDK using the setCredentials method.

let credentials = MCSubscriberCredentials()
credentials.streamName = "streamName" // The name of the stream you want to subscribe to
credentials.accountId = "ACCOUNT" // The ID of your Dolby.io Real-time Streaming account
credentials.apiUrl = "https://director.millicast.com/api/director/subscribe" // The subscribe API URL

do {
  try await subscriber.setCredentials(credentials)
} catch MCGenericError.noCredentials {
  fatalError("Could not set credentials.") // In production replace with a throw
} catch {
  fatalError("Unexpected error: \(error)") // In production replace with a throw
}

5. Configure the subscriber by setting your preferred options

Configure your stream to receive multi-source content.

Define your subscription preferences and then call the connect method to connect to the Millicast platform. After this step, call the subscribe(with:) method and provide the defined options as its parameter.

let subscriberOptions = MCClientOptions()

subscriberOptions.pinnedSourceId = "MySource" // The main source that will be received by the
                                              // default media stream
subscriberOptions.multiplexedAudioTrack = 3 // Enables audio multiplexing and denotes the number
                                            // of audio tracks to receive as Voice Activity
                                            // Detection (VAD) multiplexed audio
subscriberOptions.excludedSourceId = ["excluded"] // Audio streams that should not be included in
                                                  // the multiplex, for example your own audio stream

do {
  // Set the selected options
  try await subscriber.connect()
  try await subscriber.subscribe(with: subscriberOptions)
} catch {
  fatalError("Could not connect or subscribe.") // In production replace with a throw
}

If the connection fails, the call will throw an error with the HTTP error code and failure message. If the code is 0, double-check your internet connection or the API URL set in the credentials.
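
As a sketch, instead of terminating on failure you can surface the thrown error; the code and message it carries are assumed to appear in the error's description:

```swift
do {
  try await subscriber.connect()
  try await subscriber.subscribe(with: subscriberOptions)
} catch {
  // The thrown error carries the HTTP status code and failure message.
  // A code of 0 points to a local network issue or a wrong API URL.
  print("Connection failed: \(error)")
}
```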

6. Manage broadcast events

When broadcast events occur, the SDK emits the appropriate event and calls the corresponding callback in the delegate object.

Those events are available through the MCSubscriberDelegate:

  • onActive: Called whenever a new source starts publishing a stream. It contains the stream ID, the IDs of tracks within a stream, and the source ID.
  • onInactive: Called whenever a source is no longer published within a stream. It contains the stream ID and the source ID.
  • onStopped: Called whenever a stream stops.
  • onVad: Called whenever a source ID is multiplexed into an audio track based on the voice activity level. It contains the media ID of the track and the source ID.
  • onLayers: Called whenever Simulcast or Scalable Video Coding (SVC) layers are available. It contains arrays of the MCLayerData object that you can use in the select method.
  • onViewerCount: Called each time a new viewer enters or leaves a stream. All clients connected to the stream are notified about the current number of viewers.

Those events are also available in their AsyncStream format. You can query events as in this example:

Task {
  for await activity in subscriber.activity() {
    switch activity {
    case .active(let streamId, let tracks, let sourceId):
      // A publisher with sourceId has started publishing tracks[] to streamId
      break
    case .inactive(let streamId, let sourceId):
      // A publisher with sourceId has stopped publishing to streamId
      break
    }
  }
}

Task {
  for await viewerCount in subscriber.viewerCount() {
    print("viewer count has become: \(viewerCount)")
  }
}

Task {
  for await layers in subscriber.layers() {
    print(layers.mid)
    print(layers.activeLayers)
    print(layers.inactiveLayers)
  }
}

// etc

7. Project media

Using the multi-source feature requires projecting tracks into a specified transceiver using its media ID (mid). When you start subscribing, you receive the onActive event with the track IDs and the source ID. In order to project a track into a transceiver, you must use the project method of the subscriber. You need to specify the source ID you are targeting and an array of the tracks you want to project.

By default, only one video and one audio track are negotiated in the SDP. If several publishers send media in one stream, you can dynamically add more tracks using the addRemoteTrack method each time you receive the onActive event. The method adds a new transceiver and renegotiates the SDP locally. When successful, the SDK creates new tracks and calls the onAudioTrack and onVideoTrack callbacks, so you can get the tracks and their corresponding media IDs.

Receive an active event of a source that you want to project:


// You can also use the MCSubscriberDelegate to receive the onActive event; this approach is
// using AsyncStreams
Task {
  for await activity in subscriber.activity() {
    switch(activity) {
    case .active(_, let tracks, let sourceId):
      // Store the sourceId and tracks, which you will use later to project.
      // tracks is an array of track IDs, each of which is either "audio" or "video".
    default: break
      // ... 
    }
  }
}

Then, add a track that acts as a vessel for receiving media.

// Get mid either from the TrackEvent event, the `onVideoTrack` method of the delegate, or by calling the getMid method with the track ID

/* Option 1 */
Task {
  for await track in subscriber.tracks() {
    switch track {
    case .video(let track, let mid):
      // ...
      // Store the mid value somewhere.
      // Combined with the information received in the active event above,
      // it will be used to project the media onto this track.
    case .audio(let track, let mid):
      // ... 
    }
  }
}

/* Option 2 */
class SubDelegate: MCSubscriberDelegate {  
  /* ... */
    func onVideoTrack(_ track: MCVideoTrack, withMid mid: String) {
      // Store the mid value somewhere
    }
  /* ... */
}

/* Option 3 */
let mid = subscriber.getMid(track.getId())

// Project a video track
let projectionData = MCProjectionData()
projectionData.mid = mid // The media ID of the transceiver you want to project into
projectionData.media = "video" // The media track type, either video or audio
projectionData.trackId = trackId // The name of the track on the media server side, which is the track ID you get in the onActive event

try await subscriber.addRemoteTrack("video") // "audio" or "video" depending on the type of track you want to add
//...
try await subscriber.project(sourceId, [projectionData])


Make sure that you follow the flow of projecting media:

  1. Receive an active event. You can use MCSubscriberDelegate or subscriber.activity() AsyncStream.
  2. Store the source information from the received event, such as which tracks to project and the source ID.
  3. Call subscriber.addRemoteTrack(...) with the track kind to create a track that you will project the media onto. Make sure to consume the subscriber.tracks() AsyncStream beforehand to receive the track you have just created.
  4. When the track is received, combine the information together, mainly the sourceId, trackId received from step 1, and mid received from step 3. Finally, call subscriber.project().

To stop projecting the track, call the unproject method, which requires an array of the media IDs that you want to stop projecting.
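
For example, assuming mid holds the media ID stored during projection:

```swift
// Stop projecting the transceiver identified by the stored media ID.
try await subscriber.unproject([mid])
```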

8. Select a layer that you want to receive

When a publisher uses Simulcast or the SVC mode when sending a video feed, the media server automatically chooses the right layer to send to the subscriber according to the bandwidth estimation. However, you can force the server to send you a specific layer by calling the select method.

For example, if the sender uses Simulcast, it is possible to receive three different encoding IDs: 'h' for the high resolution, 'm' for the medium one, and 'l' for the low. In order to choose the medium resolution, you have to do the following:

let layerData = MCLayerData()
layerData.encodingId = "m" // The encoding ID, which is the ID of the Simulcast layer
layerData.temporalLayerId = 1 // The ID of the temporal layer
layerData.spatialLayerId = 0 // The ID of the spatial layer

try await subscriber.select(layerData)

// Passing nil means the server performs automatic layer selection:
// try await subscriber.select(nil)

9. Render video

The SDK provides a UIKit view for rendering a video track. This UIKit view is obtained from the MCIosVideoRenderer as follows:

let renderer = MCIosVideoRenderer()

guard let view = renderer.getView() else {
    fatalError("Could not retrieve view from renderer.") // In production replace with a throw
}

You can insert the view element obtained from the renderer in any UIKit view hierarchy. In SwiftUI, wrap the UIKit view in UIViewRepresentable. To display a track in the view, add the renderer to the view element by modifying the onVideoTrack method of the MCSubscriberDelegate. Note that audio and video tracks need to be enabled. In this example, we are using the tracks AsyncStream instead:

Task {
  for await track in subscriber.tracks() {
    switch track {
    case .video(let track, _):
      track.enable(true)
      DispatchQueue.main.async {
        track.add(renderer)
      }
    case .audio(let track, _):
      track.enable(true)
    }
  }
}
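
For SwiftUI, a minimal UIViewRepresentable wrapper around the renderer's view might look like the following sketch; the wrapper type name is our own, only MCIosVideoRenderer and getView come from the SDK:

```swift
import SwiftUI
import MillicastSDK

// Illustrative SwiftUI wrapper around the renderer's UIKit view.
struct VideoRendererView: UIViewRepresentable {
  let renderer: MCIosVideoRenderer

  func makeUIView(context: Context) -> UIView {
    // Fall back to an empty view if the renderer has no view yet.
    renderer.getView() ?? UIView()
  }

  func updateUIView(_ uiView: UIView, context: Context) { }
}
```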

Collecting RTC statistics

You can periodically collect WebRTC peer connection statistics by enabling them through the enableStats method of the viewer or publisher. After enabling statistics, you receive a report every second through the statsReport async stream or the onStatsReport method of the delegate. The identifiers and the way to browse the stats follow the WebRTC statistics specification.
The report contains the MCStatsReport object, which is a collection of several MCStats objects. Each has a specific type, such as inbound, outbound, codec, or media. Inbound statistics describe incoming transport for the viewer; outbound statistics describe outgoing transport for the publisher.
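
As a sketch, assuming statsReport is exposed as an async stream like the other streams shown above, collecting reports might look like this:

```swift
// Enable statistics collection; a report arrives roughly every second.
subscriber.enableStats(true)

Task {
  for await report in subscriber.statsReport() {
    // Browse the MCStats objects contained in the MCStatsReport here.
    print(report)
  }
}
```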