Creating Custom Sources and Sinks

The IceLink 3 API uses the concepts of sources and sinks, which you should already be familiar with. To review briefly: a source captures or generates data to send to another user, and a sink renders data received from another user. IceLink includes several implementations of sources and sinks for common use cases. However, some application-specific use cases may require you to implement your own sources and sinks.

Some examples of use cases that IceLink does not support by default:

  • using a video file from a user's phone as a source
  • streaming received video to other devices

Before you start working on your own source or sink, note that you should avoid implementing sources or sinks whose only purpose is to apply transformations to an audio or video stream. These use cases are more easily handled by the media chaining API, which has a guide under the Advanced Topics section.

Prerequisites

Before working through this guide, ensure that you have a working knowledge of the following topics:

  • IceLink Local Media API
  • IceLink Remote Media API
  • IceLink Streams API

These are covered in the Getting Started section of the docs.

JavaScript

This API is not supported in JavaScript.

JavaScript browsers use their own technology stack, which limits the number of inputs and outputs. Generally, the only inputs available are the user's microphone, their camera, or their screen and the only outputs available are DOM elements. The API does, however, provide DomAudioSink and DomVideoSink classes for JavaScript. These are only named in this way to remain consistent with the APIs on other platforms. They do not implement any underlying interface, so it is not possible to further extend them.

Audio and Video Formats

To implement a custom source or sink, you need some knowledge of the way IceLink handles audio and video formats. Each FM.IceLink.AudioSource and FM.IceLink.AudioSink instance has an associated FM.IceLink.AudioFormat instance. A format consists of a clock rate, in Hz, and a number of audio channels. For a source, the format describes the audio that the source outputs. For a sink, the format describes the audio that the sink expects as input.

You will need to specify AudioFormat instances for your sources and sinks. There are a number of pre-defined formats you can use. The following code demonstrates the creation of several AudioFormat instances.


var opusFormat = new FM.IceLink.Opus.Format();
var pcmaFormat = new FM.IceLink.Pcma.Format();
var pcmuFormat = new FM.IceLink.Pcmu.Format();
fm.icelink.opus.Format opusFormat = new fm.icelink.opus.Format();
fm.icelink.pcma.Format pcmaFormat = new fm.icelink.pcma.Format();
fm.icelink.pcmu.Format pcmuFormat = new fm.icelink.pcmu.Format();
FMIceLinkOpusFormat* opusFormat = [FMIceLinkOpusFormat format];
FMIceLinkPcmaFormat* pcmaFormat = [FMIceLinkPcmaFormat format];
FMIceLinkPcmuFormat* pcmuFormat = [FMIceLinkPcmuFormat format];
var opusFormat = FMIceLinkOpusFormat()
var pcmaFormat = FMIceLinkPcmaFormat()
var pcmuFormat = FMIceLinkPcmuFormat()


One other format implementation that may be useful is the FM.IceLink.Pcm.Format class. This format allows you to specify a generic PCM format with a custom clock rate and audio channel count. The following code demonstrates creating a 48,000 Hz, 2-channel audio format instance.

var pcmFormat = new FM.IceLink.Pcm.Format(48000, 2);
fm.icelink.pcm.Format pcmFormat = new fm.icelink.pcm.Format(48000, 2);
FMIceLinkPcmFormat* pcmFormat = [FMIceLinkPcmFormat formatWithClockRate:48000 channelCount: 2];
var pcmFormat = FMIceLinkPcmFormat(clockRate: 48000, channelCount: 2)


For the complete list of pre-defined audio formats, refer to the API documentation.

Like audio sources and sinks, each FM.IceLink.VideoSource and FM.IceLink.VideoSink instance also has an associated FM.IceLink.VideoFormat instance. Video formats consist of a clock rate and information about the colorspace of the format. Again, there is a set of pre-defined video formats to select from. The following code demonstrates creating instances of two of the most common video formats, RGB and I420.

var rgbFormat = FM.IceLink.VideoFormat.Rgb;
var i420Format = FM.IceLink.VideoFormat.I420;
fm.icelink.VideoFormat rgbFormat = fm.icelink.VideoFormat.getRgb();
fm.icelink.VideoFormat i420Format = fm.icelink.VideoFormat.getI420();
FMIceLinkVideoFormat* rgbFormat = [FMIceLinkVideoFormat rgb];
FMIceLinkVideoFormat* i420Format = [FMIceLinkVideoFormat i420];
var rgbFormat = FMIceLinkVideoFormat.rgb()
var i420Format = FMIceLinkVideoFormat.i420()

Refer to the API docs for the complete list of supported formats.

Custom Sources

To create a custom audio or video source, first inherit from either the FM.IceLink.AudioSource or the FM.IceLink.VideoSource class. Neither of these classes have a default constructor; they require you to specify either an FM.IceLink.AudioFormat or an FM.IceLink.VideoFormat instance. Most custom sources are designed for a specific output format, so it is common to create a default constructor that invokes the base constructor with a pre-defined format. The following code demonstrates this.

public class CustomAudioSource : FM.IceLink.AudioSource
{
    public CustomAudioSource()
        : base(new FM.IceLink.Pcm.Format(48000, 2))
    {
    }
}

public class CustomVideoSource : FM.IceLink.VideoSource
{
    public CustomVideoSource()
        : base(FM.IceLink.VideoFormat.Rgb)
    {
    }
}
public class CustomAudioSource extends fm.icelink.AudioSource {
    public CustomAudioSource() {
        super(new fm.icelink.pcm.Format(48000, 2));
    }
}

public class CustomVideoSource extends fm.icelink.VideoSource {
    public CustomVideoSource() {
        super(fm.icelink.VideoFormat.getRgb());
    }
}
@interface CustomAudioSource : FMIceLinkAudioSource
@end

@implementation CustomAudioSource
- (instancetype) init {
    self = [super initWithOutputFormat: [FMIceLinkPcmFormat formatWithClockRate:48000 channelCount:2]];
    return self;
}
@end

@interface CustomVideoSource : FMIceLinkVideoSource
@end

@implementation CustomVideoSource
- (instancetype) init {
    self = [super initWithOutputFormat: [FMIceLinkVideoFormat rgb]];
    return self;
}
@end
public class CustomAudioSource : FMIceLinkAudioSource {
    init() {
        super.init(outputFormat: FMIceLinkPcmFormat(clockRate: 48000, channelCount: 2))
    }
}

public class CustomVideoSource : FMIceLinkVideoSource {
    init() {
        super.init(outputFormat: FMIceLinkVideoFormat.rgb())
    }
}

Next, override the Label property. This is an accessor that returns a string that identifies the type of source. The value you provide here is only for diagnostic purposes and will not affect the output of an audio or video source.

public class CustomAudioSource : FM.IceLink.AudioSource
{
    public override string Label => "CustomAudioSource";
}

public class CustomVideoSource : FM.IceLink.VideoSource
{
    public override string Label => "CustomVideoSource";
}
public class CustomAudioSource extends fm.icelink.AudioSource {
    @Override
    public String getLabel() {
        return "CustomAudioSource";
    }
}

public class CustomVideoSource extends fm.icelink.VideoSource {
    @Override
    public String getLabel() {
        return "CustomVideoSource";
    }
}
@implementation CustomAudioSource
- (NSString *) label {
    return @"CustomAudioSource";
}
@end

@implementation CustomVideoSource
- (NSString *) label {
    return @"CustomVideoSource";
}
@end
public class CustomAudioSource : FMIceLinkAudioSource {
    override func label() -> String {
        return "CustomAudioSource"
    }
}

public class CustomVideoSource : FMIceLinkVideoSource {
    override func label() -> String {
        return "CustomVideoSource"
    }
}

Finally, you must implement the DoStart and DoStop methods. Usually, these methods follow one of two patterns: they either manage an event handler on an interface that captures audio or video data, or they manage a background thread that generates the data itself. With either pattern, the source must invoke the RaiseFrame method when data is available. RaiseFrame is a protected method that signals to components in the media stack that new data is available.

Note that the DoStart and DoStop methods are asynchronous and return an FM.IceLink.Future. For the sake of simplicity, the examples here are synchronous and resolve the promise immediately. In practice, your implementation will likely be more complex. You can read more about the Promise API in its dedicated guide under the Advanced Topics section.
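
The sections below demonstrate the event-based pattern. For comparison, the following C# sketch shows the thread-based pattern. The ReadNextPcmChunk method is hypothetical; it stands in for whatever blocking capture call your platform provides, and the 20-millisecond chunk size is only an assumption.

public class ThreadedAudioSource : FM.IceLink.AudioSource
{
    private System.Threading.Thread _Thread;
    private volatile bool _Running;

    public ThreadedAudioSource()
        : base(new FM.IceLink.Pcm.Format(48000, 2))
    {
    }

    public override string Label => "ThreadedAudioSource";

    public override FM.IceLink.Future<object> DoStart()
    {
        var promise = new FM.IceLink.Promise<object>();

        _Running = true;
        _Thread = new System.Threading.Thread(() =>
        {
            while (_Running)
            {
                // Hypothetical blocking call that returns the next 20ms of raw PCM.
                byte[] data = ReadNextPcmChunk();
                double duration = 20;

                var dataBuffer = FM.IceLink.DataBuffer.Wrap(data);
                var audioBuffer = new FM.IceLink.AudioBuffer(dataBuffer, this.OutputFormat);
                this.RaiseFrame(new FM.IceLink.AudioFrame(duration, audioBuffer));
            }
        });
        _Thread.Start();

        promise.Resolve(null);
        return promise;
    }

    public override FM.IceLink.Future<object> DoStop()
    {
        var promise = new FM.IceLink.Promise<object>();

        _Running = false; // the capture loop exits on its next iteration

        promise.Resolve(null);
        return promise;
    }
}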

Capturing Audio

The first set of examples demonstrates how to begin capturing audio using the event-based pattern. Do not use any of these snippets as-is; they are not full implementations. They are only meant to give a brief overview of how you might implement these patterns in your own application. In particular, the examples gloss over important details such as endianness and the upsampling or downsampling of audio.

A fictional AudioCaptureObject class is used for these examples. When implementing your own source, adapt the code here to your own application. The example code first creates an instance of the AudioCaptureObject, then adds an event handler that is invoked whenever new audio data is available. Note that the event handler has two parameters, duration and data. The data parameter is a byte array containing raw audio data captured over a period of time. The duration parameter is the number of milliseconds that this data represents. You must calculate the duration yourself, as it will vary based on your specific implementation.

With these two parameters, you can finally raise an audio frame, though there are several intermediate steps. First, wrap the raw audio data in an instance of FM.IceLink.DataBuffer. Next, wrap the data buffer in an instance of FM.IceLink.AudioBuffer, which also requires you to specify the FM.IceLink.AudioFormat of the audio data. You can simply use the OutputFormat property of your audio source to retrieve this. Finally, wrap the audio buffer in an instance of FM.IceLink.AudioFrame and provide the audio duration. Invoke RaiseFrame on this new AudioFrame instance.

public class CustomAudioSource : FM.IceLink.AudioSource
{
    private AudioCaptureObject _Capture;

    public override FM.IceLink.Future<object> DoStart()
    {
        var promise = new FM.IceLink.Promise<object>();

        _Capture = new AudioCaptureObject();
        _Capture.AudioDataAvailable += (double duration, byte[] data) =>
        {
            var dataBuffer = FM.IceLink.DataBuffer.Wrap(data);
            var audioBuffer = new FM.IceLink.AudioBuffer(dataBuffer, this.OutputFormat);
            var audioFrame = new FM.IceLink.AudioFrame(duration, audioBuffer);

            this.RaiseFrame(audioFrame);
        };

        promise.Resolve(null);
        return promise;
    }
}
public class CustomAudioSource extends fm.icelink.AudioSource {
    private AudioCaptureObject _capture;

    @Override
    public fm.icelink.Future<Object> doStart() {
        fm.icelink.Promise<Object> promise = new fm.icelink.Promise<Object>();

        _capture = new AudioCaptureObject();
        _capture.addOnAudioDataAvailable((double duration, byte[] data) -> {
            fm.icelink.DataBuffer dataBuffer = fm.icelink.DataBuffer.wrap(data);
            fm.icelink.AudioBuffer audioBuffer = new fm.icelink.AudioBuffer(dataBuffer, this.getOutputFormat());
            fm.icelink.AudioFrame audioFrame = new fm.icelink.AudioFrame(duration, audioBuffer);

            this.raiseFrame(audioFrame);
        });

        promise.resolve(null);
        return promise;
    }
}
@interface CustomAudioSource : FMIceLinkAudioSource {
    AudioCaptureObject* _capture;
}
@end

@implementation CustomAudioSource
- (FMIceLinkFuture*) doStart {
    FMIceLinkPromise* promise = [FMIceLinkPromise promise];

    _capture = [AudioCaptureObject new];
    [_capture addOnAudioDataAvailable: ^(double duration, NSData* data) {
        FMIceLinkDataBuffer* dataBuffer = [FMIceLinkDataBuffer wrapWithData:data];
        FMIceLinkAudioBuffer* audioBuffer = [FMIceLinkAudioBuffer audioBufferWithDataBuffer:dataBuffer format:[self outputFormat]];
        FMIceLinkAudioFrame* audioFrame = [FMIceLinkAudioFrame audioFrameWithDuration:duration buffer: audioBuffer];

        [self raiseFrame:audioFrame];
    }];

    [promise resolveWithResult:nil];
    return promise;
}
@end
public class CustomAudioSource : FMIceLinkAudioSource {
    var _capture:AudioCaptureObject!

    override func doStart() -> FMIceLinkFuture {
        var promise = FMIceLinkPromise()

        _capture = AudioCaptureObject()
        _capture.addOnAudioDataAvailable { (duration: Double, data: NSData) in
            var dataBuffer:FMIceLinkDataBuffer = FMIceLinkDataBuffer.wrap(data: data)
            var audioBuffer:FMIceLinkAudioBuffer = FMIceLinkAudioBuffer(dataBuffer: dataBuffer, format: self.outputFormat())
            var audioFrame:FMIceLinkAudioFrame = FMIceLinkAudioFrame(duration: duration, buffer: audioBuffer)

            self.raiseFrame(audioFrame)
        }

        promise.resolve(result: nil)
        return promise
    }
}
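
As noted above, you must calculate the duration of each chunk of audio yourself. For uncompressed PCM, it can be derived from the length of the byte array, the clock rate, and the channel count. The following C# sketch would sit inside the AudioDataAvailable handler above; it assumes 16-bit samples (2 bytes per sample) and that the ClockRate and ChannelCount property names mirror the PCM format's constructor parameters.

// Assumes 16-bit PCM: 2 bytes per sample, per channel.
int bytesPerSample = 2;
int samplesPerChannel = data.Length / (bytesPerSample * this.OutputFormat.ChannelCount);
double duration = (samplesPerChannel * 1000.0) / this.OutputFormat.ClockRate; // milliseconds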

Stopping an FM.IceLink.AudioSource instance is much simpler. Destroy whatever capture interface you were using or remove any event handlers.

public class CustomAudioSource : FM.IceLink.AudioSource
{
    public override FM.IceLink.Future<object> DoStop()
    {
        var promise = new FM.IceLink.Promise<object>();

        _Capture.Destroy();
        _Capture = null;

        promise.Resolve(null);
        return promise;
    }
}
public class CustomAudioSource extends fm.icelink.AudioSource {
    @Override
    public fm.icelink.Future<Object> doStop() {
        fm.icelink.Promise<Object> promise = new fm.icelink.Promise<Object>();

        _capture.destroy();
        _capture = null;

        promise.resolve(null);
        return promise;
    }
}
@implementation CustomAudioSource
- (FMIceLinkFuture*) doStop {
    FMIceLinkPromise* promise = [FMIceLinkPromise promise];

    [_capture destroy];
    _capture = nil;

    [promise resolveWithResult:nil];
    return promise;
}
@end
public class CustomAudioSource : FMIceLinkAudioSource {
    override func doStop() -> FMIceLinkFuture {
        var promise = FMIceLinkPromise()

        _capture.destroy()
        _capture = nil

        promise.resolve(result: nil)
        return promise
    }
}

Capturing Video

The next set of examples demonstrates how to capture video, again using the event-based pattern. Like the previous examples, these use a fictional VideoCaptureObject class. The parameters associated with video data are slightly different from those associated with audio data. Instead of a duration, you specify the width and height of the video frames.

To raise a video frame, proceed the same way you did when raising an audio frame. First, wrap the raw video data in an instance of FM.IceLink.DataBuffer. Next, wrap the data buffer in an instance of FM.IceLink.VideoBuffer, which also requires you to specify the FM.IceLink.VideoFormat of the data as well as the width and height of the video. Like the audio source, you can use the OutputFormat property of your video source. Finally, wrap the video buffer in an instance of FM.IceLink.VideoFrame and invoke RaiseFrame.

public class CustomVideoSource : FM.IceLink.VideoSource
{
    private VideoCaptureObject _Capture;

    public override FM.IceLink.Future<object> DoStart()
    {
        var promise = new FM.IceLink.Promise<object>();

        _Capture = new VideoCaptureObject();
        _Capture.VideoDataAvailable += (int width, int height, byte[] data) =>
        {
            var dataBuffer = FM.IceLink.DataBuffer.Wrap(data);
            var videoBuffer = new FM.IceLink.VideoBuffer(width, height, dataBuffer, this.OutputFormat);
            var videoFrame = new FM.IceLink.VideoFrame(videoBuffer);

            this.RaiseFrame(videoFrame);
        };

        promise.Resolve(null);
        return promise;
    }
}
public class CustomVideoSource extends fm.icelink.VideoSource {
    private VideoCaptureObject _capture;

    @Override
    public fm.icelink.Future<Object> doStart() {
        fm.icelink.Promise<Object> promise = new fm.icelink.Promise<Object>();

        _capture = new VideoCaptureObject();
        _capture.addOnVideoDataAvailable((int width, int height, byte[] data) -> {
            fm.icelink.DataBuffer dataBuffer = fm.icelink.DataBuffer.wrap(data);
            fm.icelink.VideoBuffer videoBuffer = new fm.icelink.VideoBuffer(width, height, dataBuffer, this.getOutputFormat());
            fm.icelink.VideoFrame videoFrame = new fm.icelink.VideoFrame(videoBuffer);

            this.raiseFrame(videoFrame);
        });

        promise.resolve(null);
        return promise;
    }
}
@interface CustomVideoSource : FMIceLinkVideoSource {
    VideoCaptureObject* _capture;
}
@end

@implementation CustomVideoSource
- (FMIceLinkFuture *) doStart {
    FMIceLinkPromise* promise = [FMIceLinkPromise promise];

    _capture = [VideoCaptureObject new];
    [_capture addOnVideoDataAvailable: ^(int width, int height, NSData* data) {
        FMIceLinkDataBuffer* dataBuffer = [FMIceLinkDataBuffer wrapWithData:data];
        FMIceLinkVideoBuffer* videoBuffer = [FMIceLinkVideoBuffer videoBufferWithWidth:width height:height dataBuffer:dataBuffer format:[self outputFormat]];
        FMIceLinkVideoFrame* videoFrame = [FMIceLinkVideoFrame videoFrameWithBuffer:videoBuffer];

        [self raiseFrame:videoFrame];
    }];

    [promise resolveWithResult:nil];
    return promise;
}
@end
public class CustomVideoSource : FMIceLinkVideoSource {
    var _capture:VideoCaptureObject!

    override func doStart() -> FMIceLinkFuture {
        var promise = FMIceLinkPromise()

        _capture = VideoCaptureObject()
        _capture.addOnVideoDataAvailable { (width: Int, height: Int, data: NSData) in
            var dataBuffer:FMIceLinkDataBuffer = FMIceLinkDataBuffer.wrap(data: data)
            var videoBuffer:FMIceLinkVideoBuffer = FMIceLinkVideoBuffer(width: width, height: height, dataBuffer: dataBuffer, format: self.outputFormat())
            var videoFrame:FMIceLinkVideoFrame = FMIceLinkVideoFrame(buffer: videoBuffer)

            self.raiseFrame(videoFrame)
        }

        promise.resolve(result: nil)
        return promise
    }
}

Stopping an FM.IceLink.VideoSource is as simple as stopping an audio source: release or destroy any resources you were using. The code below will look very similar to what you've already written.

public class CustomVideoSource : FM.IceLink.VideoSource
{
    public override FM.IceLink.Future<object> DoStop()
    {
        var promise = new FM.IceLink.Promise<object>();

        _Capture.Destroy();
        _Capture = null;

        promise.Resolve(null);
        return promise;
    }
}
public class CustomVideoSource extends fm.icelink.VideoSource {
    @Override
    public fm.icelink.Future<Object> doStop() {
        fm.icelink.Promise<Object> promise = new fm.icelink.Promise<Object>();

        _capture.destroy();
        _capture = null;

        promise.resolve(null);
        return promise;
    }
}
@implementation CustomVideoSource
- (FMIceLinkFuture *) doStop {
    FMIceLinkPromise* promise = [FMIceLinkPromise promise];

    [_capture destroy];
    _capture = nil;

    [promise resolveWithResult:nil];
    return promise;
}
@end
public class CustomVideoSource : FMIceLinkVideoSource {
    override func doStop() -> FMIceLinkFuture {
        var promise = FMIceLinkPromise()

        _capture.destroy()
        _capture = nil

        promise.resolve(result: nil)
        return promise
    }
}

Raising Frames

The sections above showed how to raise frames, but said little about timing. Keep in mind that the rate at which you raise frames determines the frame rate of your source. For audio, this has fewer implications, because each audio frame carries a duration. For video capture, it gets more complex.

Video buffers do not have a timestamp or duration associated with them. Whether you raise 15 video frames in a second or 30, all of those frames will be displayed during that second. If performance becomes an issue, you may need to throttle your calls to RaiseFrame to limit the number of frames raised per second.
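
As a rough C# sketch of one way to throttle, the event handler below drops frames that arrive too soon after the previously raised frame. It is intended to live inside a VideoSource subclass like the one above, so OutputFormat and RaiseFrame are available; the 30 frames-per-second cap and the use of a Stopwatch are illustrative choices, not part of the IceLink API.

private readonly System.Diagnostics.Stopwatch _FrameTimer = System.Diagnostics.Stopwatch.StartNew();
private long _LastFrameMs = -1000; // initialized so the first frame is never dropped
private const int TargetFrameRate = 30;

private void OnVideoDataAvailable(int width, int height, byte[] data)
{
    long nowMs = _FrameTimer.ElapsedMilliseconds;
    if (nowMs - _LastFrameMs < 1000 / TargetFrameRate)
    {
        return; // drop this frame to stay at or below the target frame rate
    }
    _LastFrameMs = nowMs;

    var dataBuffer = FM.IceLink.DataBuffer.Wrap(data);
    var videoBuffer = new FM.IceLink.VideoBuffer(width, height, dataBuffer, this.OutputFormat);
    this.RaiseFrame(new FM.IceLink.VideoFrame(videoBuffer));
}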

Custom Sinks

As with custom sources, to create a custom audio or video sink, first inherit from either the FM.IceLink.AudioSink or the FM.IceLink.VideoSink class. Sinks also require you to specify an FM.IceLink.AudioFormat or an FM.IceLink.VideoFormat instance. In this case, the format represents the input into the sink, rather than the output from the source. The following code is a simple example of how to create a custom sink. It should look familiar.

public class CustomAudioSink : FM.IceLink.AudioSink
{
    public CustomAudioSink()
        : base(new FM.IceLink.Pcm.Format(48000, 2))
    {
    }
}

public class CustomVideoSink : FM.IceLink.VideoSink
{
    public CustomVideoSink()
        : base(FM.IceLink.VideoFormat.Rgb)
    {
    }
}
public class CustomAudioSink extends fm.icelink.AudioSink {
    public CustomAudioSink() {
        super(new fm.icelink.pcm.Format(48000, 2));
    }
}

public class CustomVideoSink extends fm.icelink.VideoSink {
    public CustomVideoSink() {
        super(fm.icelink.VideoFormat.getRgb());
    }
}
@interface CustomAudioSink : FMIceLinkAudioSink
@end

@implementation CustomAudioSink
- (instancetype) init {
    self = [super initWithOutputFormat: [FMIceLinkPcmFormat formatWithClockRate:48000 channelCount:2]];
    return self;
}
@end

@interface CustomVideoSink : FMIceLinkVideoSink
@end

@implementation CustomVideoSink
- (instancetype) init {
    self = [super initWithOutputFormat: [FMIceLinkVideoFormat rgb]];
    return self;
}
@end
public class CustomAudioSink : FMIceLinkAudioSink {
    init() {
        super.init(outputFormat: FMIceLinkPcmFormat(clockRate: 48000, channelCount: 2))
    }
}

public class CustomVideoSink : FMIceLinkVideoSink {
    init() {
        super.init(outputFormat: FMIceLinkVideoFormat.rgb())
    }
}

Sinks also have a Label property. Like the property of the same name on sources, it is used for diagnostic purposes and has no effect on what goes into your sinks.

public class CustomAudioSink : FM.IceLink.AudioSink
{
    public override string Label => "CustomAudioSink";
}

public class CustomVideoSink : FM.IceLink.VideoSink
{
    public override string Label => "CustomVideoSink";
}
public class CustomAudioSink extends fm.icelink.AudioSink {
    @Override
    public String getLabel() {
        return "CustomAudioSink";
    }
}

public class CustomVideoSink extends fm.icelink.VideoSink {
    @Override
    public String getLabel() {
        return "CustomVideoSink";
    }
}
@implementation CustomAudioSink
- (NSString *) label {
    return @"CustomAudioSink";
}
@end

@implementation CustomVideoSink
- (NSString *) label {
    return @"CustomVideoSink";
}
@end
public class CustomAudioSink : FMIceLinkAudioSink {
    override func label() -> String {
        return "CustomAudioSink"
    }
}

public class CustomVideoSink : FMIceLinkVideoSink {
    override func label() -> String {
        return "CustomVideoSink"
    }
}

At this point, the implementation for sinks diverges. There are no DoStart or DoStop methods, because sinks do not follow a start/stop pattern. Instead, whenever an audio or video frame is available, the sink's DoProcessFrame method is invoked. When a sink is instantiated, it is assumed to be ready to receive frames. The last method that sinks must implement is DoDestroy, which IceLink invokes when tearing down a session. Its purpose is to clean up any resources that are still in use.

Unlike the audio and video source methods, DoProcessFrame and DoDestroy are synchronous and do not return an FM.IceLink.Promise. They are synchronous because they are never invoked on the main thread. As a side effect, you must ensure that these methods are thread-safe.
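
For example, if your sink's rendering state can also be touched from elsewhere in your application, one simple way to keep DoProcessFrame and DoDestroy thread-safe in C# is to guard that state with a lock. This sketch uses the fictional AudioRenderObject introduced in the next section; the lock itself is an illustrative choice, not something IceLink requires.

public class CustomAudioSink : FM.IceLink.AudioSink
{
    private readonly object _RenderLock = new object();
    private AudioRenderObject _Render = new AudioRenderObject();

    public override void DoProcessFrame(FM.IceLink.AudioFrame frame, FM.IceLink.AudioBuffer buffer)
    {
        lock (_RenderLock)
        {
            if (_Render == null)
            {
                return; // the sink has already been destroyed
            }
            _Render.PlayAudio(frame.Duration, buffer.DataBuffer.Data);
        }
    }

    public override void DoDestroy()
    {
        lock (_RenderLock)
        {
            _Render.Destroy();
            _Render = null;
        }
    }
}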

Rendering Audio

This set of examples demonstrates how to play received audio data. It uses a fictional AudioRenderObject class, which abstracts away many of the details of audio playback. This does not mean you can ignore those details; in your own implementation, you must still deal with the upsampling and downsampling of audio.

There are many properties that are accessible from the FM.IceLink.AudioFrame and FM.IceLink.AudioBuffer classes. This example will focus on retrieving the two properties that were used in the sources example above: duration and data. Assume that the AudioRenderObject has a method, PlayAudio, that takes a duration parameter and a data parameter. You must retrieve these values from either the audio buffer or the audio frame.

First, retrieve the duration of the audio frame by accessing the Duration property of the AudioFrame parameter. Next, access the data buffer object through the DataBuffer property of the AudioBuffer. With this DataBuffer instance, you can retrieve the raw audio data through the Data property. Finally, pass these values into the AudioRenderObject or whatever interface you are using for this sink.

public class CustomAudioSink : FM.IceLink.AudioSink
{
    private AudioRenderObject _Render = new AudioRenderObject();

    public override void DoProcessFrame(FM.IceLink.AudioFrame frame, FM.IceLink.AudioBuffer buffer)
    {
        var duration = frame.Duration;

        var dataBuffer = buffer.DataBuffer;
        var data = dataBuffer.Data;

        _Render.PlayAudio(duration, data);
    }
}
public class CustomAudioSink extends fm.icelink.AudioSink {
    private AudioRenderObject _render = new AudioRenderObject();

    @Override
    public void doProcessFrame(fm.icelink.AudioFrame frame, fm.icelink.AudioBuffer buffer) {
        double duration = frame.getDuration();

        fm.icelink.DataBuffer dataBuffer = buffer.getDataBuffer();
        byte[] data = dataBuffer.getData();

        _render.playAudio(duration, data);
    }
}
@interface CustomAudioSink : FMIceLinkAudioSink {
    AudioRenderObject* _render;
}
@end

@implementation CustomAudioSink
- (void) doProcessFrameWithFrame: (FMIceLinkAudioFrame*)frame buffer:(FMIceLinkAudioBuffer*)buffer {
    double duration = [frame duration];

    FMIceLinkDataBuffer* dataBuffer = [buffer dataBuffer];
    NSData* data = [dataBuffer data];

    [_render playAudioWithDuration:duration data:data];
}
@end
public class CustomAudioSink : FMIceLinkAudioSink {
    var _render:AudioRenderObject! = AudioRenderObject()

    override func doProcessFrame(frame: FMIceLinkAudioFrame, buffer: FMIceLinkAudioBuffer) {
        var duration:Double = frame.duration()

        var dataBuffer:FMIceLinkDataBuffer = buffer.dataBuffer()
        var data:NSData = dataBuffer.data()

        _render.playAudio(duration: duration, data: data)
    }
}

For the implementation of the DoDestroy method, call any disposal methods and unset anything that is no longer in use.

public class CustomAudioSink : FM.IceLink.AudioSink
{
    public override void DoDestroy()
    {
        _Render.Destroy();
        _Render = null;
    }
}
public class CustomAudioSink extends fm.icelink.AudioSink {
    @Override
    public void doDestroy() {
        _render.destroy();
        _render = null;
    }
}
@implementation CustomAudioSink
- (void) doDestroy {
    [_render destroy];
    _render = nil;
}
@end
public class CustomAudioSink : FMIceLinkAudioSink {
    override func doDestroy() {
        _render.destroy()
        _render = nil
    }
}

Rendering Video

Rendering video is similar to rendering audio. As we've been doing for the previous examples, we'll demonstrate video playback using a fictional VideoRenderObject class. The example will demonstrate how to retrieve the width and height of video data in the DoProcessFrame method, as well as the raw video data itself.

Retrieve the dimensions of the video by accessing the Width and Height properties of the video buffer parameter. Next, access the data buffer object through the DataBuffer property of the same parameter. The raw video data is accessible through the Data property of the data buffer. You can pass these values into the VideoRenderObject or whatever interface you are using for this sink.

These examples also cover a basic implementation of DoDestroy, which should look almost identical to the one from the previous example set.

public class CustomVideoSink : FM.IceLink.VideoSink
{
    private VideoRenderObject _Render = new VideoRenderObject();

    public override void DoProcessFrame(FM.IceLink.VideoFrame frame, FM.IceLink.VideoBuffer buffer)
    {
        var width = buffer.Width;
        var height = buffer.Height;

        var dataBuffer = buffer.DataBuffer;
        var data = dataBuffer.Data;

        _Render.PlayVideo(width, height, data);
    }

    public override void DoDestroy()
    {
        _Render.Destroy();
        _Render = null;
    }
}
public class CustomVideoSink extends fm.icelink.VideoSink {
    private VideoRenderObject _render = new VideoRenderObject();

    @Override
    public void doProcessFrame(fm.icelink.VideoFrame frame, fm.icelink.VideoBuffer buffer) {
        int width = buffer.getWidth();
        int height = buffer.getHeight();

        fm.icelink.DataBuffer dataBuffer = buffer.getDataBuffer();
        byte[] data = dataBuffer.getData();

        _render.playVideo(width, height, data);
    }

    @Override
    public void doDestroy() {
        _render.destroy();
        _render = null;
    }
}
@implementation CustomVideoSink
- (void) doProcessFrameWithFrame: (FMIceLinkVideoFrame*)frame buffer:(FMIceLinkVideoBuffer*)buffer {
    int width = [buffer width];
    int height = [buffer height];

    FMIceLinkDataBuffer* dataBuffer = [buffer dataBuffer];
    NSData* data = [dataBuffer data];

    [_render playVideoWithWidth:width height:height data:data];
}

- (void) doDestroy {
    [_render destroy];
    _render = nil;
}
@end
public class CustomVideoSink : FMIceLinkVideoSink {
    var _render:VideoRenderObject! = VideoRenderObject()

    override func doProcessFrame(frame: FMIceLinkVideoFrame, buffer: FMIceLinkVideoBuffer) {
        var width:Int = buffer.width()
        var height:Int = buffer.height()

        var dataBuffer:FMIceLinkDataBuffer = buffer.dataBuffer()
        var data:NSData = dataBuffer.data()

        _render.playVideo(width: width, height: height, data: data)
    }

    override func doDestroy() {
        _render.destroy()
        _render = nil
    }
}

Wrapping Up

You can now create your own custom sources and sinks for your application. If you're trying to change the look and feel of your application, you may also be interested in Customizing the Layout Manager.