
ConferenceService

The ConferenceService allows the application to manage the conference life-cycle and interact with the conference.

The ConferenceService introduces APIs that allow the application to:

  • Create a conference
  • Fetch the Conference object required to join a conference
  • Join a conference with permission to share media or as a listener
  • Set the maximum number of video streams a participant would like to receive
  • Configure the quality of the received Simulcast streams
  • Create a demo conference and join it
  • Start and stop audio transmission
  • Start and stop video transmission
  • Start and stop sharing the screen
  • Control the mute state of the conference participants
  • Get the current mute state of the local participant
  • Check the audio level of a specific participant
  • Get the participants' list
  • Get the speaking status of a selected participant
  • Check the WebRTC statistics
  • Leave the conference
  • Replay the previously recorded conference
  • Enable and disable audio processing for the local participant
  • Kick a participant from a conference
  • Update the participant's permissions
  • Ask about conference details
  • Set a participant's position to enable the spatial audio experience during a Dolby Voice conference
  • Configure a spatial environment of the application for the spatial audio feature
  • Set the direction a participant is facing during a conference with enabled spatial audio

The ConferenceService introduces events that inform the application that:

  • The participant has joined a conference or has left it
  • A connected participant joins a conference using another device and the same ExternalId
  • A conference participant has joined a conference or changed status
  • A stream is added, updated, or removed
  • The replayed conference has ended
  • An error has occurred
  • Conference permissions have been updated

Additionally, every 5 seconds the SDK emits the qualityIndicators event, which informs about the audio and video quality of the remote participants.

If a browser blocks the received audio streams due to auto-play policy, the application can call the autoplayBlocked and playBlockedAudio APIs to enable playing the received audio.

Events

autoplayBlocked

autoplayBlocked(): void

Emitted when a conference participant's audio streams are blocked by the browser's auto-play policy. When this event occurs, the application should request permission to play the incoming audio stream. After a user interaction (a click or touch), the application can call the playBlockedAudio method to play the audio stream.

example

VoxeetSDK.conference.on("autoplayBlocked", () => {
  const button = document.getElementById("unmute_audio");
  button.onclick = () => {
    VoxeetSDK.conference.playBlockedAudio();
  };
});

Returns: void


ended

ended(): void

Emitted when the replayed conference has ended.

example

VoxeetSDK.conference.on("ended", () => {});

Returns: void


error

error(error: Error): void

Emitted when a WebSocketError, PeerConnectionFailedError, or PeerDisconnectedError occurs.

PeerConnectionFailedError and PeerDisconnectedError are PeerErrors with the failed and disconnected PeerConnectionState values, respectively.

example

VoxeetSDK.conference.on("error", (error) => {});

Parameters:

  • error (Error) - The received error.

Returns: void


joined

joined(): void

Emitted when the application has successfully joined the conference.

example

VoxeetSDK.conference.on("joined", () => {});

Returns: void


left

left(): void

Emitted when the application has left the conference.

example

VoxeetSDK.conference.on("left", () => {});

Returns: void


participantAdded

participantAdded(participant: Participant): void

Emitted when a new participant is invited to a conference. The SDK does not emit the participantAdded event for the local participant. Listeners receive the participantAdded events only about users; they do not receive events about other listeners. In SDK 3.2 and prior releases, users receive events about users and the first 1000 listeners. In SDK 3.3 and later releases, users receive the participantAdded events only about users and do not receive any events about listeners. To notify all application users about the number of participants who are present at a conference, the Web SDK 3.3 introduces the activeParticipants event.

example

VoxeetSDK.conference.on("participantAdded", (participant) => {});

Parameters:

  • participant (Participant) - The invited participant who is added to a conference.

Returns: void


participantUpdated

participantUpdated(participant: Participant): void

Emitted when a participant changes ParticipantStatus. Listeners receive the participantUpdated events only about users; they do not receive events about other listeners. In SDK 3.2 and prior releases, users receive events about users and the first 1000 listeners. In SDK 3.3 and later releases, users receive the participantUpdated events only about users and do not receive any events about listeners. To notify all application users about the number of participants who are present at a conference, the Web SDK 3.3 introduces the activeParticipants event.

The following graphic shows possible status changes during a conference:

Diagram that presents the possible status changes

example

VoxeetSDK.conference.on("participantUpdated", (participant) => {});

Parameters:

  • participant (Participant) - The conference participant who changed status.

Returns: void


permissionsUpdated

permissionsUpdated(permissions: Set<ConferencePermission>): void

Emitted when the local participant's permissions are updated.
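
A minimal handler sketch; the callback name and the checked permission value ("SEND_AUDIO") are illustrative assumptions, not part of the documented API:

```javascript
// Sketch: adjust the UI when the local participant's permissions change.
// "SEND_AUDIO" is a hypothetical ConferencePermission value used for
// illustration; check the ConferencePermission model for the real names.
function registerPermissionsHandler(updateMicButton) {
  VoxeetSDK.conference.on("permissionsUpdated", (permissions) => {
    updateMicButton(permissions.has("SEND_AUDIO"));
  });
}
```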

Parameters:

  • permissions (Set<ConferencePermission>) - The updated conference permissions.

Returns: void


qualityIndicators

qualityIndicators(indicators: Map<string, QualityIndicator>): void

The Mean Opinion Score (MOS) represents the participants' audio and video quality. The SDK calculates the audio and video quality scores and returns values in a range from 1 to 5, where 1 represents the worst quality and 5 represents the highest quality. When a MOS score is not available, the SDK returns the value -1.

Note: With SDK 3.0, audio Mean Opinion Scores (MOS) are unavailable for web clients connected to Dolby Voice conferences.
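
A handler sketch; the mos_audio and mos_video field names are assumptions about the QualityIndicator model:

```javascript
// Sketch: log every participant's MOS scores each time the event fires.
// mos_audio and mos_video are assumed QualityIndicator fields.
function registerQualityHandler() {
  VoxeetSDK.conference.on("qualityIndicators", (indicators) => {
    indicators.forEach((indicator, participantId) => {
      // A value of -1 means the score is not available.
      console.log(participantId, indicator.mos_audio, indicator.mos_video);
    });
  });
}
```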

Parameters:

  • indicators (Map<string, QualityIndicator>) - A map that includes all conference participants' quality indicators.

Returns: void


streamAdded

streamAdded(participant: Participant, stream: MediaStreamWithType): void

Emitted when the SDK adds a new stream to a conference participant. Each conference participant can be connected to two streams: the audio and video stream and the screen-share stream. If a participant enables audio or video, the SDK adds the audio and video stream to the participant and emits the streamAdded event to all participants. When a participant who is connected to the audio and video stream changes the stream, for example, enables a camera while using a microphone, the SDK updates the audio and video stream and emits the streamUpdated event. When a participant starts sharing a screen, the SDK adds the screen-share stream to this participant and emits the streamAdded event to all participants. The following graphic shows this behavior:

The difference between the streamAdded and streamUpdated events

Based on the stream type, the application chooses to either render a camera view or a screen-share view.

When a new participant joins a conference with enabled audio and video, the SDK emits the streamAdded event that includes audio and video tracks.

The SDK can also emit the streamAdded event only for the local participant. When the local participant uses the stopAudio method to locally mute the selected remote participant who does not use a camera, the local participant receives the streamRemoved event. After using the startAudio method for this remote participant, the local participant receives the streamAdded event.

Note: In Dolby Voice conferences, each conference participant receives only one mixed audio stream from the server. To keep backward compatibility with the customers' implementation, SDK 3.0 introduces a faked audio track for audio transmission. The faked audio track is included in the streamAdded and streamRemoved events. The SDK 3.0 takes the audio stream information from the participantAdded and participantUpdated events.

example

VoxeetSDK.conference.on("streamAdded", (participant, stream) => {
  var node = document.getElementById("received_video");
  navigator.attachMediaStream(node, stream);
});

Parameters:

  • participant (Participant) - The participant whose stream was added to a conference.
  • stream (MediaStreamWithType) - The added media stream.

Returns: void


streamRemoved

streamRemoved(participant: Participant, stream: MediaStreamWithType): void

Emitted when the SDK removes a stream from a conference participant. Each conference participant can be connected to two streams: the audio and video stream and the screen-share stream. If a participant disables audio and video or stops a screen-share presentation, the SDK removes the proper stream and emits the streamRemoved event to all conference participants.

The SDK can also emit the streamRemoved event only for the local participant. When the local participant uses the stopAudio method to locally mute a selected remote participant who does not use a camera, the local participant receives the streamRemoved event.

Note: In Dolby Voice conferences, each conference participant receives only one mixed audio stream from the server. To keep backward compatibility with the customers' implementation, SDK 3.0 introduces a faked audio track for audio transmission. The faked audio track is included in the streamAdded and streamRemoved events. The SDK 3.0 takes the audio stream information from the participantAdded and participantUpdated events.

example

VoxeetSDK.conference.on("streamRemoved", (participant, stream) => {});

Parameters:

  • participant (Participant) - The participant whose stream was removed from a conference.
  • stream (MediaStreamWithType) - The removed media stream.

Returns: void


streamUpdated

streamUpdated(participant: Participant, stream: MediaStreamWithType): void

Emitted when a conference participant who is connected to the audio and video stream changes the stream by enabling a microphone while using a camera or by enabling a camera while using a microphone. The event is emitted to all conference participants. The following graphic shows this behavior:

The difference between the streamAdded and streamUpdated events

The SDK can also emit the streamUpdated event only for the local participant. When the local participant uses the stopAudio or startAudio method to locally mute or unmute a selected remote participant who uses a camera, the local participant receives the streamUpdated event.

example

VoxeetSDK.conference.on("streamUpdated", (participant, stream) => {
  var node = document.getElementById("received_video");
  navigator.attachMediaStream(node, stream);
});

Parameters:

  • participant (Participant) - The participant whose stream was updated during a conference.
  • stream (MediaStreamWithType) - The updated media stream.

Returns: void


switched

switched(): void

Emitted when a new participant joins a conference using the same external ID as another participant who has joined this conference earlier. This event may occur when a participant joins the same conference using another browser or device. In such a situation, the SDK removes the participant who has joined the conference earlier.
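
In the style of the other event examples, a handler might look like this sketch:

```javascript
// Sketch: inform the user that this session was replaced by
// another device joining with the same external ID.
function registerSwitchedHandler() {
  VoxeetSDK.conference.on("switched", () => {
    console.log("You joined this conference from another device.");
  });
}
```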

Returns: void

Accessors

current

• get current(): Conference | null

Returns information about the current conference. Use this accessor if you wish to receive information that is available in the Conference object, such as the conference alias, ID, information if the conference is new, conference parameters, local participant's conference permissions, conference PIN code, or conference status. For example, use the following code to ask about the local participant's conference permissions:

VoxeetSDK.conference.current.permissions

Returns: Conference | null


maxVideoForwarding

• get maxVideoForwarding(): number

Provides the number of video streams that are transmitted to the local user.

Returns: number


participants

• get participants(): Map<string, Participant>

Provides a list of conference participants.
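
A short sketch of iterating the map; the participant.info.name field is assumed from the ParticipantInfo model:

```javascript
// Sketch: collect the display names of all conference participants.
// participant.info.name is assumed to come from the ParticipantInfo model.
function listParticipantNames() {
  const names = [];
  VoxeetSDK.conference.participants.forEach((participant) => {
    names.push(participant.info.name);
  });
  return names;
}
```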

Returns: Map<string, Participant>

Methods

audioLevel

audioLevel(participant: Participant, callback: Function): any

Gets the participant's audio level. The possible values of the audio level are in the range from 0.0 to 1.0.

Note: This API is no longer supported for remote participants when a client that does not use the Desktop SDK connects to a Dolby Voice conference.
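
A usage sketch; the renderMeter callback is a hypothetical UI helper supplied by the application:

```javascript
// Sketch: read the local participant's audio level to drive a volume meter.
function pollAudioLevel(renderMeter) {
  VoxeetSDK.conference.audioLevel(
    VoxeetSDK.session.participant,
    (level) => renderMeter(level) // level is between 0.0 and 1.0
  );
}
```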

Parameters:

  • participant (Participant) - The conference participant.
  • callback (Function) - The callback that retrieves the audio level.

Returns: any


audioProcessing

audioProcessing(participant: Participant, options: AudioProcessingOptions): Promise<void>

Enables and disables audio processing for the conference participant.
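
A sketch that disables processing for the local participant; the { send: { audioProcessing: false } } options shape is an assumption about the AudioProcessingOptions model:

```javascript
// Sketch: turn off audio processing, e.g. before transmitting music.
// The options shape is an assumed AudioProcessingOptions structure.
function disableAudioProcessing() {
  return VoxeetSDK.conference.audioProcessing(VoxeetSDK.session.participant, {
    send: { audioProcessing: false },
  });
}
```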

Parameters:

  • participant (Participant) - The conference participant.
  • options (AudioProcessingOptions) - The audio processing information.

Returns: Promise<void>


create

create(options: ConferenceOptions): Promise<Conference>

Creates a conference with ConferenceOptions.
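
A sketch that creates a conference by alias and then joins it; the alias value is arbitrary:

```javascript
// Sketch: create (or reuse) a conference with an alias, then join it.
function createAndJoin() {
  return VoxeetSDK.conference
    .create({ alias: "my-conference" })
    .then((conference) =>
      VoxeetSDK.conference.join(conference, {
        constraints: { audio: true, video: false },
      })
    );
}
```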

Parameters:

  • options (ConferenceOptions) - The conference options.

Returns: Promise<Conference>


demo

demo(): Promise<Conference>

Creates and joins a demo conference.

Returns: Promise<Conference>


fetch

fetch(conferenceId: string): Promise<Conference>

Provides a Conference object that allows joining a conference. The returned object is based on the cached data received from the SDK and includes only the conference ID, list of the conference participants, and the conference permissions.

For more information about using the fetch method, see the Conferencing document.
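
A common pattern is fetching by ID and then joining; a sketch:

```javascript
// Sketch: fetch the Conference object for a known ID and join it.
function joinById(conferenceId) {
  return VoxeetSDK.conference
    .fetch(conferenceId)
    .then((conference) =>
      VoxeetSDK.conference.join(conference, {
        constraints: { audio: true, video: true },
      })
    );
}
```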

Parameters:

  • conferenceId (string) - The conference ID.

Returns: Promise<Conference>


isMuted

isMuted(): Boolean

Gets the current mute state of the local participant.

Note: This API is no longer supported for remote participants.

Returns: Boolean


isSpeaking

isSpeaking(participant: Participant, callback: Function): any

Gets the participant's current speaking status for an active talker indicator.
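
A sketch that checks every participant; highlightTile is a hypothetical UI helper supplied by the application:

```javascript
// Sketch: periodically mark active talkers in the UI.
function pollSpeakingStatus(highlightTile) {
  VoxeetSDK.conference.participants.forEach((participant) => {
    VoxeetSDK.conference.isSpeaking(participant, (isSpeaking) => {
      highlightTile(participant.id, isSpeaking);
    });
  });
}
```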

Parameters:

  • participant (Participant) - The conference participant.
  • callback (Function) - The callback that accepts a boolean value indicating the participant's current speaking status. If the boolean value is true, the callback can mark the participant as an active speaker in the application UI.

Returns: any


join

join(conference: Conference, options: JoinOptions): Promise<Conference>

Joins the conference.

Note: Participants who use Apple Mac OS and the Safari browser to join conferences may experience problems with distorted audio. To solve the problem, we recommend using the latest version of Safari.

Note: Due to a known Firefox issue, a user who has never permitted Firefox to use a microphone and camera cannot join a conference as a listener. If you want to join a conference as a listener using the Firefox browser, make sure that Firefox has permission to use your camera and microphone. To check the permissions, follow these steps:

1. Select the lock icon in the address bar.

2. Select the right arrow placed next to Connection Secure.

3. Select More information.

4. Go to the Permissions tab.

5. Look for the Use the camera and Use the microphone permission and select the Allow option.

See also: listen, replay

example

// An example of constraints with resolution bounds:
const constraints = {
  audio: true,
  video: {
    width: {
      min: 320,
      max: 1280,
    },
    height: {
      min: 240,
      max: 720,
    },
  },
};

// The simplest constraints would be:
// const constraints = { audio: true, video: true };

VoxeetSDK.conference
  .join(conference, { constraints: constraints })
  .then((info) => {})
  .catch((error) => {});

Parameters:

  • conference (Conference) - The conference object.
  • options (JoinOptions) - The additional options for the joining participant.

Returns: Promise<Conference>


kick

kick(participant: Participant): Promise<any>

Allows the conference owner, or a participant with adequate permissions, to kick another participant from the conference by revoking the conference access token. The kicked participant cannot join the conference again.

VoxeetSDK.conference.kick(participant);

Parameters:

  • participant (Participant) - The participant who needs to be kicked from the conference.

Returns: Promise<any>


leave

leave(options?: ConferenceLeaveOptions): Promise<void>

Leaves the conference.
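
A sketch that leaves the conference and then closes the session:

```javascript
// Sketch: leave the conference, then close the session.
function leaveConference() {
  return VoxeetSDK.conference
    .leave()
    .then(() => VoxeetSDK.session.close())
    .catch((error) => console.error(error));
}
```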

Parameters:

  • options? (ConferenceLeaveOptions) - The additional options for the leaving participant.

Returns: Promise<void>


listen

listen(conference: Conference, options?: ListenOptions): Promise<Conference>

Joins a conference as a listener. You can choose to either join, replay, or listen to a conference. The listen method connects to the conference in receive-only mode, which does not allow transmitting video or audio.

Note: Conference events from other listeners are not available for listeners. Only users will receive conference events from other listeners.
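
A sketch mirroring the join example, in receive-only mode:

```javascript
// Sketch: fetch a conference by ID and listen to it without
// transmitting audio or video.
function listenById(conferenceId) {
  return VoxeetSDK.conference
    .fetch(conferenceId)
    .then((conference) => VoxeetSDK.conference.listen(conference));
}
```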

See also: join, replay

Parameters:

  • conference (Conference) - The conference object.
  • options? (ListenOptions) - The additional options for the joining listener.

Returns: Promise<Conference>


localStats

localStats(): WebRTCStats

Provides standard WebRTC statistics for the application. Based on the WebRTC statistics, the SDK computes audio and video statistics.

Returns: WebRTCStats


mute

mute(participant: Participant, isMuted: boolean): void

Stops playing the specified remote participants' audio to the local participant or stops playing the local participant's audio to the conference. The mute method does not notify the server to stop audio stream transmission. To stop sending an audio stream to the server or to stop receiving an audio stream from the server, use the stopAudio method.

The mute method depends on the Dolby Voice usage:

  • In conferences where Dolby Voice is not enabled, conference participants can mute themselves or remote participants.
  • In conferences where Dolby Voice is enabled, conference participants can only mute themselves.

If you wish to mute remote participants in Dolby Voice conferences, we recommend using the stopAudio API. This API allows the conference participants to stop receiving the specific audio streams from the server.

Note: In SDK 2.4 and prior releases, if a conference participant calls the mute method, empty frames are sent to the other participants. Due to a Safari issue, participants who join a conference using Safari and start receiving the empty frames can experience a Safari crash. Due to a different API implementation in SDK 3.0, this problem does not occur during Dolby Voice conferences.
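
A sketch of a local mute toggle built from isMuted and mute:

```javascript
// Sketch: toggle the local participant's mute state and return
// the new state.
function toggleMute() {
  const nextState = !VoxeetSDK.conference.isMuted();
  VoxeetSDK.conference.mute(VoxeetSDK.session.participant, nextState);
  return nextState;
}
```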

Parameters:

  • participant (Participant) - The local or remote conference participant.
  • isMuted (boolean) - The mute state; true indicates that a participant is muted, false indicates that a participant is not muted.

Returns: void


playBlockedAudio

playBlockedAudio(): void

Allows a specific participant to play audio that is blocked by the browser's auto-play policy.

Returns: void


replay

replay(conference: Conference, replayOptions?: ReplayOptions, mixingOptions?: MixingOptions): Promise<Conference>

Replays a previously recorded conference. For more information, see the Recording mechanisms article.
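
A sketch that replays a recording from an offset; the millisecond unit for the offset value is an assumption based on the { offset: 0 } default:

```javascript
// Sketch: replay a recorded conference starting 10 seconds in.
// The millisecond unit for offset is an assumption.
function replayFromTenSeconds(conference) {
  return VoxeetSDK.conference.replay(conference, { offset: 10000 });
}
```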

See also: join, listen

Parameters:

  • conference (Conference) - The conference object.
  • replayOptions (ReplayOptions, default { offset: 0 }) - The replay options.
  • mixingOptions? (MixingOptions) - The model that notifies the server that a participant who replays the conference is a special participant called Mixer.

Returns: Promise<Conference>


setSpatialDirection

setSpatialDirection(participant: Participant, direction: SpatialDirection): void

🚀

Closed Beta

This API is a part of the Beta program.

Sets the direction a participant is facing in space. This method is available only for participants who joined the conference with the spatialAudio parameter enabled. Otherwise, the SDK triggers UnsupportedError.

Currently, this method is supported only for the local participant and changes the direction the local participant is facing. When the specified participant is a remote participant, the SDK triggers UnsupportedError.

If the local participant hears audio from the position (0,0,0) facing down the Z-axis and locates a remote participant in the position (1,0,1), the local participant hears the remote participant from their front-right. If the local participant chooses to change the direction they are facing and rotates +90 degrees about the Y-axis, then instead of hearing the speaker from the front-right position, they hear the speaker from the front-left position.

For more information, see the SpatialDirection model.

If sending the updated positions to the server fails, the SDK generates the ConferenceService event error that includes SpatialAudioError.

Parameters:

  • participant (Participant) - The local participant.
  • direction (SpatialDirection) - The direction the participant is facing in space.

Returns: void


setSpatialEnvironment

setSpatialEnvironment(scale: SpatialScale, forward: SpatialPosition, up: SpatialPosition, right: SpatialPosition): void

🚀

Closed Beta

This API is a part of the Beta program.

Configures a spatial environment of an application, so the audio renderer understands which directions the application considers forward, up, and right and which units it uses for distance.

This method is available only for participants who joined the conference with the spatialAudio parameter enabled. Otherwise, the SDK triggers UnsupportedError.

If not called, the SDK uses the default spatial environment, which consists of the following values:

  • forward = (0, 0, 1), where +Z axis is in front
  • up = (0, 1, 0), where +Y axis is above
  • right = (1, 0, 0), where +X axis is to the right
  • scale = (1, 1, 1), where one unit on any axis is 1 meter

If sending the updated positions to the server fails, the SDK generates the ConferenceService event error that includes SpatialAudioError.
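
As a sketch, an application that positions participants on a screen in pixels (where the Y axis grows downward) might use a 100-pixels-per-meter scale; the unitsToMeters helper is purely illustrative and shows the conversion the scale implies:

```javascript
// Sketch: map a pixel-based layout onto the spatial audio scene,
// assuming 100 px correspond to 1 m and screen Y grows downward.
function configurePixelEnvironment() {
  VoxeetSDK.conference.setSpatialEnvironment(
    { x: 100, y: 100, z: 100 }, // scale: 100 application units per meter
    { x: 0, y: 0, z: 1 },       // forward: +Z
    { x: 0, y: -1, z: 0 },      // up: -Y, because screen Y points down
    { x: 1, y: 0, z: 0 }        // right: +X
  );
}

// Illustrative helper: the position in meters implied by a scale.
function unitsToMeters(position, scale) {
  return {
    x: position.x / scale.x,
    y: position.y / scale.y,
    z: position.z / scale.z,
  };
}
```

With this scale, a participant located at (200,200,200) application units is heard from (2,2,2) meters, matching the scale description above.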

Parameters:

  • scale (SpatialScale) - A scale that defines how to convert units from the coordinate system of the application (pixels or centimeters) into meters used by the spatial audio coordinate system. For example, if SpatialScale is set to (100,100,100), 100 of the application's units (cm) map to 1 meter for the audio coordinates. In such a case, if the listener's location is (0,0,0)cm and a remote participant's location is (200,200,200)cm, the listener has the impression of hearing the remote participant from the (2,2,2)m location. The scale value must be greater than 0. Otherwise, the SDK emits ParameterError. For more information, see the Spatial Audio article.
  • forward (SpatialPosition) - A vector describing the direction the application considers as forward. The value must be orthogonal to up and right. Otherwise, the SDK emits ParameterError.
  • up (SpatialPosition) - A vector describing the direction the application considers as up. The value must be orthogonal to forward and right. Otherwise, the SDK emits ParameterError.
  • right (SpatialPosition) - A vector describing the direction the application considers as right. The value must be orthogonal to forward and up. Otherwise, the SDK emits ParameterError.

Returns: void


setSpatialPosition

setSpatialPosition(participant: Participant, position: SpatialPosition): void

🚀

Closed Beta

This API is a part of the Beta program.

Sets a participant's position in space to enable the spatial audio experience during a Dolby Voice conference. This method is available only for participants who joined the conference with the spatialAudio parameter enabled. Otherwise, the SDK triggers UnsupportedError. Depending on the participant specified in the participant parameter, the setSpatialPosition method impacts the location from which audio is heard or from which audio is rendered:

  • When the specified participant is the local participant, setSpatialPosition sets a location from which the local participant listens to a conference. If the local participant does not have an established location, the participant hears audio from the default location (0, 0, 0).

  • When the specified participant is a remote participant, setSpatialPosition ensures the remote participant's audio is rendered from the specified position in space. If the remote participant does not have an established location, the participant does not have a default position and will remain muted until a position is specified.

For example, if a local participant, Eric, who does not have a set position, calls setSpatialPosition(VoxeetSDK.session.participant, {x:3,y:0,z:0}), Eric hears audio from the position (3,0,0). If Eric also calls setSpatialPosition(Sophia, {x:7,y:1,z:2}), he hears Sophia from the position (7,1,2). In this case, Eric hears Sophia 4 meters to the right, 1 meter above, and 2 meters in front.

If sending the updated positions to the server fails, the SDK generates the ConferenceService event error that includes SpatialAudioError.
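
The Eric and Sophia example above can be sketched as:

```javascript
// Sketch of the example above: Eric is the local participant.
function positionParticipants(sophia) {
  // Eric listens from (3, 0, 0).
  VoxeetSDK.conference.setSpatialPosition(VoxeetSDK.session.participant, {
    x: 3,
    y: 0,
    z: 0,
  });
  // Sophia's audio is rendered from (7, 1, 2): 4 m to Eric's right,
  // 1 m above, and 2 m in front of him.
  VoxeetSDK.conference.setSpatialPosition(sophia, { x: 7, y: 1, z: 2 });
}
```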

Parameters:

  • participant (Participant) - The selected participant. Using the local participant sets the location from which the participant will hear a conference. Using a remote participant sets the position from which the participant's audio will be rendered.
  • position (SpatialPosition) - The participant's audio location.

Returns: void


simulcast

simulcast(requested: Array<ParticipantQuality>): any

Configures the quality of the received Simulcast streams.
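
A sketch; the { id, quality } shape and the "hd"/"sd" values are assumptions about the ParticipantQuality model:

```javascript
// Sketch: request HD streams for on-screen participants and SD
// streams for the rest. The { id, quality } shape is assumed.
function buildQualityRequest(visibleIds, hiddenIds) {
  return [
    ...visibleIds.map((id) => ({ id, quality: "hd" })),
    ...hiddenIds.map((id) => ({ id, quality: "sd" })),
  ];
}

function requestQualities(visibleIds, hiddenIds) {
  return VoxeetSDK.conference.simulcast(
    buildQualityRequest(visibleIds, hiddenIds)
  );
}
```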

Parameters:

  • requested (Array<ParticipantQuality>) - An array that includes the stream qualities for specific conference participants.

Returns: any


startAudio

startAudio(participant: Participant): Promise<any>

Starts audio transmission between the local client and a conference. The startAudio method impacts only the audio streams that the local participant sends and receives; the method does not impact the audio transmission between remote participants and a conference and does not allow the local participant to force sending remote participants’ streams to the conference or to the local participant. Depending on the specified participant in the participant parameter, the startAudio method starts the proper audio transmission:

  • When the specified participant is the local participant, startAudio ensures sending local participant’s audio from the local client to the conference.
  • When the specified participant is a remote participant, startAudio ensures sending remote participant’s audio from the conference to the local client. This allows the local participant to unmute remote participants who are locally muted through the stopAudio method.

The startAudio method in Dolby Voice conferences is not available for listeners and triggers UnsupportedError.

The SDK automatically manages audio rendering, which means that the application does not need to implement its own element. The application can use the selectAudioInput and selectAudioOutput methods to select the proper audio input and output devices.

The startAudio method requires up to a few seconds to become effective.
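
A sketch that locally unmutes a remote participant muted earlier with stopAudio:

```javascript
// Sketch: resume receiving a remote participant's audio stream.
function unmuteRemote(participant) {
  return VoxeetSDK.conference
    .startAudio(participant)
    .catch((error) => console.error(error));
}
```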

Parameters:

  • participant (Participant) - The selected participant. If you wish to transmit the local participant's audio stream to the conference, provide the local participant's object. If you wish to receive the specific remote participants' audio streams, provide these remote participants' objects.

Returns: Promise<any>


startScreenShare

startScreenShare(sourceId: any): any

Starts a screen-sharing session. This method is not available on mobile browsers; participants who join a conference using a mobile browser cannot share a screen.

example

VoxeetSDK.conference
  .startScreenShare()
  .then(() => {})
  .catch((e) => {});

Parameters:

  • sourceId (any) - The device ID. If you use multiple screens, use this parameter to specify which screen you want to share.

Returns: any


startVideo

startVideo(participant: Participant, constraints: any): Promise<any>

Notifies the server to either start sending the local participant's video stream to the conference or start sending a remote participant's video stream to the local participant. The startVideo method does not control the remote participant's video stream; if a remote participant does not transmit any video stream, the local participant cannot change it using the startVideo method.

example

const videoConstraints = {
  width: {
    min: 320,
    max: 1280,
  },
  height: {
    min: 240,
    max: 720,
  },
};

VoxeetSDK.conference
  .startVideo(VoxeetSDK.session.participant, videoConstraints)
  .then(() => {});

Parameters:

  • participant (Participant) - The participant who will receive the video stream, either remote or local.
  • constraints (any) - The WebRTC video constraints.

Returns: Promise<any>


stopAudio

stopAudio(participant: Participant): Promise<any>

Stops audio transmission between the local client and a conference. The stopAudio method impacts only the audio streams that the local participant sends and receives; the method does not impact the audio transmission between remote participants and a conference and does not allow the local participant to stop sending remote participants’ streams to the conference. Depending on the specified participant in the participant parameter, the stopAudio method stops the proper audio transmission:

  • When the specified participant is the local participant, stopAudio stops sending local participant’s audio from the local client to the conference.
  • When the specified participant is a remote participant, stopAudio stops sending remote participant’s audio from the conference to the local client. This allows the local participant to locally mute remote participants.

The stopAudio method in Dolby Voice conferences is not available for listeners and triggers UnsupportedError.

Leaving a conference resets the stopAudio settings. Participants who rejoin a conference need to provide the desired stopAudio parameters and call the stopAudio method once again.

The stopAudio method requires up to a few seconds to become effective.
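
A sketch that locally mutes all remote participants except one:

```javascript
// Sketch: stop receiving audio from every remote participant
// except the one identified by keptId.
function soloParticipant(keptId) {
  const localId = VoxeetSDK.session.participant.id;
  const promises = [];
  VoxeetSDK.conference.participants.forEach((participant) => {
    if (participant.id !== localId && participant.id !== keptId) {
      promises.push(VoxeetSDK.conference.stopAudio(participant));
    }
  });
  return Promise.all(promises);
}
```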

🚧

Warning

If you use the stopAudio method on remote participants in non-Dolby Voice conferences, do not rely on the streamAdded and streamRemoved events to determine the attendee list. When the local participant uses the stopAudio method on a specific remote participant while the local participant does not receive any video stream from this participant, the local participant receives the streamRemoved event. If the application uses the streamRemoved event to determine the list of conference participants, the application may incorrectly show that the muted participant is not present at a conference.

Parameters:

  • participant (Participant) - The selected participant. If you wish to not transmit the local participant's audio stream to the conference, provide the local participant's object. If you wish to not receive the specific remote participants' audio streams, provide these remote participants' objects.

Returns: Promise<any>


stopScreenShare

stopScreenShare(): Promise<any>

Stops the screen-sharing session.

Returns: Promise<any>


stopVideo

stopVideo(participant: Participant): Promise<void>

Notifies the server to either stop sending the local participant's video stream to the conference or stop sending a remote participant's video stream to the local participant.

Parameters:

  • participant (Participant) - The participant who will stop receiving the video stream.

Returns: Promise<void>


updatePermissions

updatePermissions(participantPermissions: Array<ParticipantPermissions>): Promise<any>

Updates the participant's conference permissions. If a participant does not have permission to perform a specific action, this action is not available for this participant during a conference, and the participant receives InsufficientPermissionsError. If a participant started a specific action and then lost permission to perform this action, the SDK stops the blocked action. For example, if a participant started sharing a screen and then received updated permissions that do not allow screen sharing, the SDK stops the screen-sharing session and the participant cannot start sharing the screen again.

VoxeetSDK.conference.updatePermissions(participantPermissions: Array<ParticipantPermissions>)

Parameters:

  • participantPermissions (Array<ParticipantPermissions>) - The updated participant's permissions.

Returns: Promise<any>


videoForwarding

videoForwarding(max: number, participants?: Array<Participant>): Promise<any>

Sets the maximum number of video streams that may be transmitted to the local participant. This method also allows using a pin option to prioritize the specific participant's video streams and display their videos even when these participants do not talk. For more information, see the Video Forwarding article.
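
A sketch that limits forwarding and pins selected participants:

```javascript
// Sketch: receive at most 4 video streams, prioritizing the
// pinned participants' videos.
function limitVideo(pinnedParticipants) {
  return VoxeetSDK.conference.videoForwarding(4, pinnedParticipants);
}
```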

Parameters:

  • max (number) - The maximum number of video streams that may be transmitted to the local participant. The valid values are between 0 and 25 for desktop browsers and between 0 and 4 for mobile browsers. If the provided value is smaller than 0 or greater than the valid values, the SDK triggers VideoForwardingError. If the parameter value is not specified, the SDK automatically sets the maximum possible value: 25 for desktop browsers and 4 for mobile browsers.
  • participants? (Array<Participant>) - The list of the prioritized participants. This parameter allows using a pin option to prioritize specific participants' video streams and display their videos even when these participants do not talk.

Returns: Promise<any>

