ConferenceService

The ConferenceService allows the application to manage the conference life-cycle and interact with the conference.

Typical API workflow:

1. The application calls the create or demo method to create a conference.

2. The application calls the fetch method to receive the conference object.

3. The application joins the conference through the join or listen method.

4. The application can interact with the conference to:

  • Request a specific quality of received Simulcast video streams through the simulcast method.

  • Send audio streams using the startAudio and stopAudio methods.

  • Customize the number of received video streams, select the video forwarding strategy, and prioritize the selected participants' video streams through the videoForwarding method.

  • Send video streams using the startVideo and stopVideo methods.

  • Share the screen using the startScreenShare and stopScreenShare methods.

  • Replay the recorded conference through the replay method.

  • Check the audio level using the audioLevel method.

  • Check the speaking status of a participant using the isSpeaking method.

  • Control the mute input state of conference participants through the mute method.

  • Check whether the local participant is muted using the isMuted method.

  • Control the mute output state of conference participants through the muteOutput method.

  • Check WebRTC statistics through the localStats method.

  • Control the audio processing state through the audioProcessing method.

  • Update a participant's permissions through the updatePermissions method.

  • Kick a participant from the conference when the conference access token is updated using the kick method.

  • Set a participant's position to enable the spatial audio experience during a Dolby Voice conference using the setSpatialPosition method.

  • Configure a spatial environment of the application for the spatial audio feature using the setSpatialEnvironment method.

  • Set the direction a participant is facing during a conference with enabled spatial audio using the setSpatialDirection method.

5. The application calls the leave method to leave the conference.
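The workflow above can be sketched in Swift as follows. This is a minimal sketch, assuming the shared VoxeetSDK entry point and an already open session; the alias value and error handling are illustrative, not prescriptive.

```swift
import VoxeetSDK

// A minimal sketch of the conference life-cycle, assuming an open session.
let options = VTConferenceOptions()
options.alias = "my-conference" // illustrative alias

// 1. Create the conference.
VoxeetSDK.shared.conference.create(options: options, success: { conference in
    // 3. Join the created conference.
    VoxeetSDK.shared.conference.join(conference: conference, options: nil, success: { joined in
        print("Joined conference: \(joined.id)")
    }, fail: { error in
        print("Join failed: \(error.localizedDescription)")
    })
}, fail: { error in
    print("Create failed: \(error.localizedDescription)")
})

// 5. Leave the conference when done.
VoxeetSDK.shared.conference.leave { error in
    if let error = error { print("Leave failed: \(error.localizedDescription)") }
}
```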

Events

participantAdded

participantAdded(participant: VTParticipant)

Emitted when a new participant is invited to a conference. The SDK does not emit the participantAdded event for the local participant. Listeners only receive the participantAdded events about users; they do not receive events about other listeners. In SDK 3.2 and prior releases, users receive events about users and the first 1000 listeners. In SDK 3.3 and later releases, users receive the participantAdded events only about users and do not receive any events about listeners. To notify all application users about the number of participants who are present at a conference, the iOS SDK 3.3 introduces the activeParticipants events.

Parameters:

  • participant (VTParticipant) - The invited participant who is added to a conference.

participantUpdated

participantUpdated(participant: VTParticipant)

Emitted when a participant changes status. Listeners only receive the participantUpdated events about users; they do not receive events about other listeners. In SDK 3.2 and prior releases, users receive events about users and the first 1000 listeners. In SDK 3.3 and later releases, users receive the participantUpdated events only about users and do not receive any events about listeners. To notify all application users about the number of participants who are present at a conference, the iOS SDK 3.3 introduces the activeParticipants events.

The following graphic shows possible status changes during a conference:


Diagram that presents the possible status changes

Parameters:

  • participant (VTParticipant) - The conference participant who changed status.

permissionsUpdated

permissionsUpdated(permissions: [Int])

Emitted when the local participant's permissions are updated.

Parameters:

  • permissions ([Int]) - The updated conference permissions.

statusUpdated

statusUpdated(status: VTConferenceStatus)

Emitted when the conference status is updated.

Parameters:

  • status (VTConferenceStatus) - The updated conference status.

streamAdded

streamAdded(participant: VTParticipant, stream: MediaStream)

Emitted when the SDK adds a new stream to a conference participant. Each conference participant can be connected to two streams: the audio and video stream and the screen-share stream. If a participant enables audio or video, the SDK adds the audio and video stream to the participant and emits the streamAdded event to all participants. When a participant is connected to the audio and video stream and changes the stream, for example, enables a camera while using a microphone, the SDK updates the audio and video stream and emits the streamUpdated event. When a participant starts sharing a screen, the SDK adds the screen-share stream to this participant and emits the streamAdded event to all participants. The following graphic shows this behavior:


The difference between the streamAdded and streamUpdated events

When a new participant joins a conference with enabled audio and video, the SDK emits the streamAdded event that includes audio and video tracks.

The SDK can also emit the streamAdded event only for the local participant. When the local participant uses the stopAudio method to locally mute the selected remote participant who does not use a camera, the local participant receives the streamRemoved event. After using the startAudio method for this remote participant, the local participant receives the streamAdded event.

Parameters:

  • participant (VTParticipant) - The participant whose stream was added to a conference.
  • stream (MediaStream) - The added media stream.

streamUpdated

streamUpdated(participant: VTParticipant, stream: MediaStream)

Emitted when a conference participant who is connected to the audio and video stream changes the stream by enabling a microphone while using a camera or by enabling a camera while using a microphone. The event is emitted to all conference participants. The following graphic shows this behavior:


The difference between the streamAdded and streamUpdated events

The SDK can also emit the streamUpdated event only for the local participant. When the local participant uses the stopAudio or startAudio method to locally mute or unmute a selected remote participant who uses a camera, the local participant receives the streamUpdated event.

Parameters:

  • participant (VTParticipant) - The participant whose stream was updated during a conference.
  • stream (MediaStream) - The updated media stream.

streamRemoved

streamRemoved(participant: VTParticipant, stream: MediaStream)

Emitted when the SDK removes a stream from a conference participant. Each conference participant can be connected to two streams: the audio and video stream and the screen-share stream. If a participant disables audio and video or stops a screen-share presentation, the SDK removes the proper stream and emits the streamRemoved event to all conference participants.

The SDK can also emit the streamRemoved event only for the local participant. When the local participant uses the stopAudio method to locally mute a selected remote participant who does not use a camera, the local participant receives the streamRemoved event.

Parameters:

  • participant (VTParticipant) - The participant whose stream was removed from a conference.
  • stream (MediaStream) - The removed media stream.
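The events above can be observed by implementing VTConferenceDelegate and assigning it through the delegate accessor described below. A minimal sketch; the renderer handling is application-specific, and property names such as participant.id are assumptions:

```swift
import VoxeetSDK

class ConferenceEventHandler: NSObject, VTConferenceDelegate {
    func participantAdded(participant: VTParticipant) {
        print("Participant invited: \(participant.id ?? "unknown")")
    }
    func participantUpdated(participant: VTParticipant) {
        print("Participant status changed: \(participant.status)")
    }
    func permissionsUpdated(permissions: [Int]) {
        print("Local permissions updated: \(permissions)")
    }
    func statusUpdated(status: VTConferenceStatus) {
        print("Conference status: \(status)")
    }
    func streamAdded(participant: VTParticipant, stream: MediaStream) {
        // Attach a video renderer here when the stream carries a video track.
    }
    func streamUpdated(participant: VTParticipant, stream: MediaStream) {
        // Re-attach or detach the renderer when the stream's tracks change.
    }
    func streamRemoved(participant: VTParticipant, stream: MediaStream) {
        // Detach the renderer for this participant.
    }
}

let handler = ConferenceEventHandler()
VoxeetSDK.shared.conference.delegate = handler
```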

Accessors

current

current: VTConference?

Returns information about the current conference. Use this accessor if you wish to receive information that is available in the VTConference object, such as the conference alias, ID, whether the conference is new, the list of the conference participants, conference parameters, the local participant's conference permissions, or the conference status.

Returns: VTConference?


cryptoDelegate

cryptoDelegate: VTConferenceCryptoDelegate

The cryptographic delegate, a means of communication between objects in the conference service.

Note: This API is no longer supported in iOS Client SDK 3.0.0.

Returns: VTConferenceCryptoDelegate


delegate

delegate: VTConferenceDelegate

Delegate, a means of communication between objects in the conference service.

Returns: VTConferenceDelegate


defaultBuiltInSpeaker

defaultBuiltInSpeaker: Bool

A boolean that sets a default built-in device that should be used in a conference, either a built-in speaker (true) or a built-in receiver (false). By default, defaultBuiltInSpeaker is set to true.

Returns: Bool


defaultVideo

defaultVideo: Bool

A boolean that is responsible for a default camera setting. When set to false, all participants join a conference without video. When set to true, the SDK enables participants' cameras when they join a conference. By default, defaultVideo is set to false.

Returns: Bool


maxVideoForwarding

maxVideoForwarding: Int

Gets the maximum number of video streams that may be transmitted to the local participant.

Returns: Int


videoForwardingStrategy

videoForwardingStrategy: VideoForwardingStrategy

Gets the video forwarding strategy that the local participant uses in the current conference. This accessor is available only in SDK 3.6 and later.

Returns: VideoForwardingStrategy


Methods

audioLevel

audioLevel(participant: VTParticipant) -> Double

Gets the participant's audio level. The audio level value ranges from 0.0 to 1.0.

Note: When the local participant is muted, the audioLevel value is set to a non-zero value, and isSpeaking is set to true if the audioLevel is greater than 0.05. This implementation allows adding a warning message to notify the local participant that their audio is not sent to a conference.

Parameters:

  • participant (VTParticipant) - The conference participant.

Returns: Double
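As a sketch, audioLevel can be polled together with isSpeaking (described below) to drive a talking indicator. This assumes an active conference and a `participants` collection on the VTConference object; UI code is omitted:

```swift
import VoxeetSDK

// Poll the audio level of each participant to drive a talking indicator.
func updateTalkingIndicators() {
    guard let conference = VoxeetSDK.shared.conference.current else { return }
    for participant in conference.participants {
        let level = VoxeetSDK.shared.conference.audioLevel(participant: participant)
        let speaking = VoxeetSDK.shared.conference.isSpeaking(participant: participant)
        // isSpeaking is true when the audio level exceeds 0.05.
        print("\(participant.id ?? "?"): level \(level), speaking \(speaking)")
    }
}
```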


audioProcessing

audioProcessing(enable: Bool)

deprecated
Note: This method is deprecated in SDK 3.7 and replaced with the setCaptureMode method.

Enables and disables audio processing.

Parameters:

  • enable (Bool) - Enables and disables audio processing for the conference participant.

create

create(options: VTConferenceOptions?, success: ((_ conference: VTConference) -> Void)?, fail: ((_ error: NSError) -> Void)?)

Creates a conference.

Parameters:

  • options (VTConferenceOptions?, default: nil) - The conference options.
  • success (((_ conference: VTConference) -> Void)?, default: nil) - The block to execute when the operation is successful.
  • fail (((_ error: NSError) -> Void)?, default: nil) - The block to execute when the operation fails.

demo

demo(spatialAudio: Bool, completion: ((_ error: NSError?) -> Void)? = nil)

Creates a demo conference.

Parameters:

  • spatialAudio (Bool, default: false) - Enables and disables spatial audio in a demo conference.
  • completion (((_ error: NSError?) -> Void)?, default: nil) - The block to execute when the query completes.

fetch

fetch(conferenceID: String, completion: (VTConference) -> Void)

Provides the conference object that allows joining a conference. For more information about using the fetch method, see the Conferencing document.

Parameters:

  • conferenceID (String) - The conference ID.
  • completion ((VTConference) -> Void) - The block to execute when the query completes.
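A common pattern is to fetch a known conference by its ID and then join it. A sketch, assuming the shared VoxeetSDK entry point; the conference ID would typically arrive through your own signaling:

```swift
import VoxeetSDK

// Fetch an existing conference by its ID, then join it.
func joinExistingConference(id: String) {
    VoxeetSDK.shared.conference.fetch(conferenceID: id) { conference in
        VoxeetSDK.shared.conference.join(conference: conference, options: nil, success: { joined in
            print("Joined \(joined.id)")
        }, fail: { error in
            print("Join failed: \(error.localizedDescription)")
        })
    }
}
```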

isSpeaking

isSpeaking(participant: VTParticipant) -> Bool

Gets the participant's current speaking status.

Parameters:

  • participant (VTParticipant) - The conference participant.

Returns: Bool


isMuted

isMuted() -> Bool

Informs whether the local participant is muted.

Note: This API is no longer supported for remote participants.

Returns: Bool


join

join(conference: VTConference, options: VTJoinOptions?, success: ((_ conference: VTConference) -> Void)?, fail: ((_ error: NSError) -> Void)?)

Joins a conference. For more information about joining conferences, see the Conferencing document.

Parameters:

  • conference (VTConference) - The conference object.
  • options (VTJoinOptions?, default: nil) - The additional options for the joining participant.
  • success (((_ conference: VTConference) -> Void)?, default: nil) - The block to execute when the operation is successful.
  • fail (((_ error: NSError) -> Void)?, default: nil) - The block to execute when the operation fails.

kick

kick(participant: VTParticipant, completion: ((_ error: NSError) -> Void)?)

Allows the conference owner, or a participant with adequate permissions, to kick another participant from the conference by revoking the conference access token. The kicked participant cannot join the conference again.

Parameters:

  • participant (VTParticipant) - The participant who needs to be kicked from the conference.
  • completion (((_ error: NSError) -> Void)?, default: nil) - The block to execute when the query completes.

leave

leave(completion: ((_ error: NSError?) -> Void)?)

Leaves the conference.

Parameters:

  • completion (((_ error: NSError?) -> Void)?, default: nil) - The block to execute when the query completes.

listen

listen(conference: VTConference, options: VTListenOptions?, success: ((_ conference: VTConference) -> Void)?, fail: ((_ error: NSError) -> Void)?)

Joins a conference as a listener.

Note: Conference events from other listeners are not available for listeners. Only users will receive conference events from other listeners.

Parameters:

  • conference (VTConference) - The conference object.
  • options (VTListenOptions?, default: nil) - The additional options for the joining listener.
  • success (((_ conference: VTConference) -> Void)?, default: nil) - The block to execute when the operation is successful.
  • fail (((_ error: NSError) -> Void)?, default: nil) - The block to execute when the operation fails.
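Joining as a listener follows the same fetch-then-join pattern, with listen in place of join. A sketch, assuming the shared VoxeetSDK entry point:

```swift
import VoxeetSDK

// Join a known conference as a listener (receive-only).
func listenToConference(id: String) {
    VoxeetSDK.shared.conference.fetch(conferenceID: id) { conference in
        VoxeetSDK.shared.conference.listen(conference: conference, options: nil, success: { joined in
            print("Listening to \(joined.id)")
        }, fail: { error in
            print("Listen failed: \(error.localizedDescription)")
        })
    }
}
```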

localStats

localStats() -> [[String: Any]]?

Provides the WebRTC statistics.

Returns: [[String: Any]]?


mute

mute(participant: VTParticipant, isMuted: Bool, completion: ((_ error: NSError?) -> Void)? = nil)

Stops playing the specified remote participants' audio to the local participant or stops playing the local participant's audio to the conference. The mute method does not notify the server to stop audio stream transmission. To stop sending an audio stream to the server or to stop receiving an audio stream from the server, use the stopAudio method.

Note: The mute method depends on the Dolby Voice usage:

  • In conferences where Dolby Voice is not enabled, conference participants can mute themselves or remote participants.
  • In conferences where Dolby Voice is enabled, conference participants can only mute themselves.

If you wish to mute remote participants in Dolby Voice conferences, you must use the stopAudio API. This API allows the conference participants to stop receiving the specific audio streams from the server.

Parameters:

  • participant (VTParticipant) - The selected participant.
  • isMuted (Bool) - The mute state; true indicates that a participant is muted, false indicates that a participant is not muted.
  • completion (((_ error: NSError?) -> Void)?, default: nil) - The block to execute when the query completes.
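For example, muting the local participant, with the Dolby Voice restriction above in mind. A sketch; the VoxeetSDK.shared.session.participant accessor for the local participant is an assumption:

```swift
import VoxeetSDK

// Mute the local participant. In Dolby Voice conferences this is the only
// participant that can be muted; use stopAudio for remote participants.
if let localParticipant = VoxeetSDK.shared.session.participant {
    VoxeetSDK.shared.conference.mute(participant: localParticipant, isMuted: true) { error in
        if let error = error { print("Mute failed: \(error.localizedDescription)") }
    }
}
```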

muteOutput

muteOutput(isMuted: Bool, completion: ((_ error: NSError?) -> Void)?)

Controls playing remote participants' audio to the local participant.

Note: This API is only supported when the client connects to a Dolby Voice conference.

Parameters:

  • isMuted (Bool) - The mute state. True indicates that the local participant's application does not play the remote participants' audio, false indicates that the local participant's application plays the remote participants' audio.
  • completion (((_ error: NSError?) -> Void)?, default: nil) - The block to execute when the query completes.

replay

replay(conference: VTConference, options: VTReplayOptions?, completion: ((_ error: NSError?) -> Void)?)

Replays the recorded conference.

Parameters:

  • conference (VTConference) - The conference object.
  • options (VTReplayOptions?, default: nil) - The parameters responsible for replaying conferences.
  • completion (((_ error: NSError?) -> Void)?, default: nil) - The block to execute when the query completes.

setSpatialDirection

setSpatialDirection(participant: VTParticipant, direction: VTSpatialDirection, completion: ((_ error: NSError?) -> Void)? = nil)

Sets the direction the local participant is facing in space. This method is available only for participants who joined the conference using the join method with the spatialAudio parameter enabled. Otherwise, the SDK triggers the spatialAudio error. To set a spatial direction for listeners, use the Set Spatial Listeners Audio REST API.

If the local participant hears audio from the position (0,0,0) facing down the Z-axis and locates a remote participant in the position (1,0,1), the local participant hears the remote participant from their front-right. If the local participant chooses to change the direction they are facing and rotate +90 degrees about the Y-axis, then instead of hearing the speaker from the front-right position, they hear the speaker from the front-left position. The following video presents this example:

For more information, see the VTSpatialDirection model.

Parameters:

  • participant (VTParticipant) - The local participant.
  • direction (VTSpatialDirection) - The direction the participant is facing in space.
  • completion (((_ error: NSError?) -> Void)?) - The block to execute when the query completes.

setSpatialEnvironment

setSpatialEnvironment(scale: VTSpatialScale, forward: VTSpatialPosition, up: VTSpatialPosition, right: VTSpatialPosition, completion: ((_ error: NSError?) -> Void)? = nil)

Configures a spatial environment of an application, so the audio renderer understands which directions the application considers forward, up, and right and which units it uses for distance.

This method is available only for participants who joined the conference using the join method with the spatialAudio parameter enabled. Otherwise, the SDK triggers the spatialAudio error. To set a spatial environment for listeners, use the Set Spatial Listeners Audio REST API.

If not called, the SDK uses the default spatial environment, which consists of the following values:

  • forward = (0, 0, 1), where +Z axis is in front
  • up = (0, 1, 0), where +Y axis is above
  • right = (1, 0, 0), where +X axis is to the right
  • scale = (1, 1, 1), where one unit on any axis is 1 meter

The default spatial environment is presented in the following diagram:


Parameters:

  • scale (VTSpatialScale) - A scale that defines how to convert units from the coordinate system of an application (pixels or centimeters) into the meters used by the spatial audio coordinate system. For example, if SpatialScale is set to (100,100,100), it indicates that 100 of the application's units (cm) map to 1 meter for the audio coordinates. In such a case, if the listener's location is (0,0,0)cm and a remote participant's location is (200,200,200)cm, the listener has an impression of hearing the remote participant from the (2,2,2)m location. The scale value must be greater than 0. For more information, see the Spatial Audio article.
  • forward (VTSpatialPosition) - A vector describing the direction the application considers as forward. The value must be orthogonal to up and right.
  • up (VTSpatialPosition) - A vector describing the direction the application considers as up. The value must be orthogonal to forward and right.
  • right (VTSpatialPosition) - A vector describing the direction the application considers as right. The value must be orthogonal to forward and up.
  • completion (((_ error: NSError?) -> Void)?) - The block to execute when the query completes.
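For example, an application that measures positions in centimeters could map its coordinate system onto the default axes like this. A sketch; the VTSpatialScale and VTSpatialPosition initializer signatures are assumptions:

```swift
import VoxeetSDK

// Map an application that works in centimeters onto the default axis
// layout: 100 application units (cm) become 1 meter of audio space.
let scale = VTSpatialScale(x: 100, y: 100, z: 100)
let forward = VTSpatialPosition(x: 0, y: 0, z: 1) // +Z is in front
let up = VTSpatialPosition(x: 0, y: 1, z: 0)      // +Y is above
let right = VTSpatialPosition(x: 1, y: 0, z: 0)   // +X is to the right

VoxeetSDK.shared.conference.setSpatialEnvironment(scale: scale,
                                                  forward: forward,
                                                  up: up,
                                                  right: right) { error in
    if let error = error { print("setSpatialEnvironment failed: \(error.localizedDescription)") }
}
```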

setSpatialPosition

setSpatialPosition(participant: VTParticipant, position: VTSpatialPosition, completion: ((_ error: NSError?) -> Void)? = nil)

Sets a participant's position in space to enable the spatial audio experience during a Dolby Voice conference. This method is available only for participants who joined the conference using the join method with the spatialAudio parameter enabled. Otherwise, the SDK triggers the spatialAudio error. To set a spatial position for listeners, use the Set Spatial Listeners Audio REST API.

Depending on the specified participant in the participant parameter, the setSpatialPosition method impacts the location from which audio is heard or from which audio is rendered:

  • When the specified participant is the local participant, setSpatialPosition sets a location from which the local participant listens to a conference. If the local participant does not have an established location, the participant hears audio from the default location (0, 0, 0).

  • When the specified participant is a remote participant, setSpatialPosition ensures the remote participant's audio is rendered from the specified location in space. Setting the remote participants’ positions is required in conferences that use the individual spatial audio style. In these conferences, if a remote participant does not have an established location, the participant does not have a default position and will remain muted until a position is specified. The shared spatial audio style does not support setting the remote participants' positions. In conferences that use the shared style, the spatial scene is shared by all participants, so that each client can set a position and participate in the shared scene. Calling setSpatialPosition for remote participants in the shared spatial audio style triggers the spatialAudio error.

For example, if a local participant Eric, who uses the individual spatial audio style and does not have a set direction, calls setSpatialPosition(VoxeetSDK.session.participant, {x:3,y:0,z:0}), Eric hears audio from the position (3,0,0). If Eric also calls setSpatialPosition(Sophia, {x:7,y:1,z:2}), he hears Sophia from the position (7,1,2). In this case, Eric hears Sophia 4 meters to the right, 1 meter above, and 2 meters in front. The following graphic presents the participants' locations:


Parameters:

  • participant (VTParticipant) - The selected participant. Using the local participant sets the location from which the participant will hear a conference. Using a remote participant sets the position from which the participant's audio will be rendered.
  • position (VTSpatialPosition) - The participant's audio location.
  • completion (((_ error: NSError?) -> Void)?) - The block to execute when the query completes.
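The Eric and Sophia example above can be sketched as follows. The local participant accessor and the VTSpatialPosition initializer are assumptions:

```swift
import VoxeetSDK

// Place the local participant (Eric) at (3,0,0); a remote participant
// (Sophia) at (7,1,2) is then heard 4 m right, 1 m above, 2 m in front.
if let eric = VoxeetSDK.shared.session.participant {
    VoxeetSDK.shared.conference.setSpatialPosition(
        participant: eric,
        position: VTSpatialPosition(x: 3, y: 0, z: 0)) { error in
        if let error = error { print("Position failed: \(error.localizedDescription)") }
    }
}
// `sophia` is a hypothetical remote VTParticipant from the current conference.
// VoxeetSDK.shared.conference.setSpatialPosition(
//     participant: sophia,
//     position: VTSpatialPosition(x: 7, y: 1, z: 2)) { _ in }
```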

startAudio

startAudio(participant: VTParticipant?, completion: ((_ error: NSError?) -> Void)?)

deprecated
Note: This method is deprecated in SDK 3.7 and replaced with the start methods that are available in the LocalAudio and RemoteAudio models.

Starts audio transmission between the local client and a conference. The startAudio method impacts only the audio streams that the local participant sends and receives; the method does not impact the audio transmission between remote participants and a conference and does not allow the local participant to force sending remote participants’ streams to the conference or to the local participant. Depending on the specified participant in the participant parameter, the startAudio method starts the proper audio transmission:

  • When the specified participant is the local participant, startAudio ensures sending local participant’s audio from the local client to the conference.

  • When the specified participant is a remote participant, startAudio ensures sending remote participant’s audio from the conference to the local client. This allows the local participant to unmute remote participants who are locally muted through the stopAudio method.

The startAudio method in Dolby Voice conferences is not available for listeners.

The startAudio method requires up to a few seconds to become effective.

Parameters:

  • participant (VTParticipant?) - The selected participant. If you wish to transmit the local participant's audio stream to the conference, provide the local participant's object. If you wish to receive the specific remote participants' audio streams, provide these remote participants' objects.
  • completion (((_ error: NSError?) -> Void)?, default: nil) - The block to execute when the query completes.

startScreenShare

startScreenShare(broadcast: Bool, completion: ((_ error: NSError?) -> Void)?)

Starts a screen-sharing session. The ScreenShare with iOS document describes how to set up screen-share outside the application.

Parameters:

  • broadcast (Bool, default: false) - A boolean that specifies whether the application should share the screen only inside the application (false) or share the whole screen, even when the application runs in the background (true).
  • completion (((_ error: NSError?) -> Void)?, default: nil) - The block to execute when the query completes.

startVideo

startVideo(participant: VTParticipant?, isDefaultFrontFacing: Bool, completion: ((_ error: NSError?) -> Void)?)

deprecated
Note: This method is deprecated in SDK 3.7 and replaced with the start methods that are available in the LocalVideo and RemoteVideo models.

Notifies the server to either start sending the local participant's video stream to the conference or start sending a remote participant's video stream to the local participant. The startVideo method does not control the remote participant's video stream; if a remote participant does not transmit any video stream, the local participant cannot change it using the startVideo method.

Parameters:

  • participant (VTParticipant?) - The participant who will receive the video stream, either remote or local.
  • isDefaultFrontFacing (Bool, default: true) - A boolean that indicates which camera should be enabled. True indicates that the application should use the front-facing camera, false indicates that the application should enable the back-facing camera.
  • completion (((_ error: NSError?) -> Void)?, default: nil) - The block to execute when the query completes.

simulcast

simulcast(requested: [VTParticipantQuality], completion: ((_ error: NSError?) -> Void)?)

Requests a specific quality of the received Simulcast video streams. You can use this method for selected conference participants or for all participants who joined the conference.

Parameters:

  • requested ([VTParticipantQuality]) - The requested quality of the Simulcast video streams.
  • completion (((_ error: NSError?) -> Void)?, default: nil) - The block to execute when the query completes.

stopAudio

stopAudio(participant: VTParticipant?, completion: ((_ error: NSError?) -> Void)?)

deprecated
Note: This method is deprecated in SDK 3.7 and replaced with the stop methods that are available in the LocalAudio and RemoteAudio models.

Stops audio transmission between the local client and a conference. The stopAudio method impacts only the audio streams that the local participant sends and receives; the method does not impact the audio transmission between remote participants and a conference and does not allow the local participant to stop sending remote participants’ streams to the conference. Depending on the specified participant in the participant parameter, the stopAudio method stops the proper audio transmission:

  • When the specified participant is the local participant, stopAudio stops sending local participant’s audio from the local client to the conference.

  • When the specified participant is a remote participant, stopAudio stops sending remote participant’s audio from the conference to the local client. This allows the local participant to locally mute remote participants.

The stopAudio method in Dolby Voice conferences is not available for listeners.

Leaving a conference resets the stopAudio settings. Participants who rejoin a conference need to provide the desired stopAudio parameters and call the stopAudio method once again.

The stopAudio method requires up to a few seconds to become effective.

Parameters:

  • participant (VTParticipant?) - The selected participant. To stop transmitting the local participant's audio stream to the conference, provide the local participant's object. To stop receiving the specific remote participants' audio streams, provide these remote participants' objects.
  • completion (((_ error: NSError?) -> Void)?, default: nil) - The block to execute when the query completes.
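For example, locally muting a remote participant in a Dolby Voice conference. A sketch; `remoteParticipant` is a hypothetical VTParticipant obtained from the current conference:

```swift
import VoxeetSDK

// Stop receiving a remote participant's audio stream (a local mute).
func locallyMute(_ remoteParticipant: VTParticipant) {
    VoxeetSDK.shared.conference.stopAudio(participant: remoteParticipant) { error in
        if let error = error { print("stopAudio failed: \(error.localizedDescription)") }
    }
}
```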

stopScreenShare

stopScreenShare(completion: ((_ error: NSError?) -> Void)?)

Stops a screen-sharing session.

Parameters:

  • completion (((_ error: NSError?) -> Void)?, default: nil) - The block to execute when the query completes.

stopVideo

stopVideo(participant: VTParticipant?, completion: ((_ error: NSError?) -> Void)?)

deprecated
Note: This method is deprecated in SDK 3.7 and replaced with the stop methods that are available in the LocalVideo and RemoteVideo models.

Notifies the server to either stop sending the local participant's video stream to the conference or stop sending a remote participant's video stream to the local participant.

Parameters:

  • participant (VTParticipant?) - The participant who will stop receiving the video stream.
  • completion (((_ error: NSError?) -> Void)?, default: nil) - The block to execute when the query completes.

updatePermissions

updatePermissions(participantPermissions: VTParticipantPermissions, completion: ((_ error: NSError) -> Void)?)

Updates the participant's conference permissions. If a participant does not have permission to perform a specific action, this action is not available for this participant during a conference. If a participant started a specific action and then lost permission to perform this action, the SDK stops the blocked action. For example, if a participant started sharing a screen and received updated permissions that do not allow them to share a screen, the SDK stops the screen-sharing session and the participant cannot start sharing the screen again.

Parameters:

  • participantPermissions (VTParticipantPermissions) - The updated participant's permissions.
  • completion (((_ error: NSError) -> Void)?, default: nil) - The block to execute when the query completes.

videoForwarding

videoForwarding(max: Int, participants: [VTParticipant]?, completion: ((_ error: NSError?) -> Void)?)

Sets the maximum number of video streams that may be transmitted to the local participant. This method also allows the local participant to use a pin option to prioritize the specific participant's video streams and display their videos even when these participants do not talk. For more information, see the Video Forwarding article.

This method was introduced in SDK 3.1 and deprecated in SDK 3.6.

Parameters:

  • max (Int) - The maximum number of video streams that may be transmitted to the local participant. The valid values are between 0 and 25. The default value is 4. In the case of providing a value smaller than 0, the SDK triggers the videoForwarding error. In the case of providing a value greater than 25, the Dolby.io server triggers an error.
  • participants ([VTParticipant]?, default: nil) - The list of participant objects.
  • completion (((_ error: NSError?) -> Void)?, default: nil) - The block to execute when the query completes.

videoForwarding

videoForwarding(options: VideoForwardingOptions, completion: ((_ error: NSError?) -> Void)?)

Sets the video forwarding functionality for the local participant. The method allows:

  • Setting the maximum number of video streams that may be transmitted to the local participant
  • Prioritizing specific participants' video streams that need to be transmitted to the local participant
  • Changing the video forwarding strategy that defines how the SDK should select conference participants whose videos will be received by the local participant

This method is available only in SDK 3.6 and later.

Parameters:

  • options (VideoForwardingOptions) - The video forwarding options.
  • completion (((_ error: NSError?) -> Void)?, default: nil) - The block to execute when the query completes.
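For example, limiting forwarding to four streams while prioritizing one pinned participant. A sketch; the VideoForwardingOptions initializer, the .lastSpeaker strategy case, and `pinnedParticipant` are assumptions based on the capabilities listed above:

```swift
import VoxeetSDK

// Forward at most 4 video streams and prioritize one pinned participant.
func configureForwarding(pinnedParticipant: VTParticipant) {
    let options = VideoForwardingOptions(strategy: .lastSpeaker,
                                         max: 4,
                                         participants: [pinnedParticipant])
    VoxeetSDK.shared.conference.videoForwarding(options: options) { error in
        if let error = error { print("videoForwarding failed: \(error.localizedDescription)") }
    }
}
```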
