Creating Media Chains

A significant improvement in the IceLink 3 API is the ability to chain media objects together to create complex interactions that would not be possible in previous versions of IceLink. A media "chain" can be thought of as a series of steps that are applied to audio or video data.

This API is used extensively within IceLink itself, but you only need to know about it if you plan to apply application-specific transformations to your media streams, or if you plan to implement a codec that IceLink does not provide an implementation for.

The next sections describe the use of this API and how it relates to concepts that you are already familiar with from the Getting Started guide.

Prerequisites

Before working through this guide, ensure that you have a working knowledge of the following topics:

  • IceLink Local Media API
  • IceLink Remote Media API
  • IceLink Streams API

These are covered in the Getting Started section of the docs.

JavaScript

This API is not supported in JavaScript.

Browsers use their own technology stack, which is generally a mixture of hardware and software codecs. Due to security concerns, none of the major browsers allow the access necessary to implement additional codecs or any of the other features that would be required to port this API to JavaScript.

Tracks

As mentioned above, a media chain represents a series of steps for audio/video data. Each step can also be thought of as a node in the media chain. There are three types of nodes:

  • Source: A media chain node that sends media. They are normally located at the beginning of a chain. A user's camera or microphone is an example of a source. 
  • Sink: A media chain node that receives media. They are normally located at the end of a chain. An example of a sink is the speakers of a user. 
  • Pipe: A media chain node that can both send and receive media. Pipes implement both the Sink and Source interfaces and act as a shorthand for media chain segments that operate the same way in both directions.

Any combination of these nodes is referred to in the API as a track. A media track is the same thing as a media chain; the term track is used to maintain consistency with the WebRTC specification.

You are already familiar with tracks, though you know them by another name. The LocalMedia and RemoteMedia classes that you create for your application are simplified implementations that wrap instances of FM.IceLink.AudioTrack and FM.IceLink.VideoTrack. The next section explores a very simple example, and demonstrates how you can use your own tracks.

Branches differ slightly, in that a branch may have multiple outputs:

  • AudioBranch: The audio version of the MediaBranch.
  • VideoBranch: The video version of the MediaBranch.

Basic Example

Normally, to create a send-only audio stream, you would create a class, LocalMedia, that extends FM.IceLink.RtcLocalMedia. You would then pass an instance of it into a new FM.IceLink.AudioStream, omitting the second parameter. Your code to create this stream and the associated FM.IceLink.Connection instance would look like:

var localMedia = new LocalMedia(); // call localMedia.Start() when ready to stream
var audioStream = new FM.IceLink.AudioStream(localMedia);
var connection = new FM.IceLink.Connection(audioStream);
LocalMedia localMedia = new LocalMedia(); // call localMedia.start() when ready to stream
fm.icelink.AudioStream audioStream = new fm.icelink.AudioStream(localMedia);
fm.icelink.Connection connection = new fm.icelink.Connection(audioStream);
LocalMedia* localMedia = [LocalMedia new]; // call [localMedia start] when ready to stream
FMIceLinkAudioStream* audioStream = [FMIceLinkAudioStream audioStreamWithLocalMedia: localMedia];
FMIceLinkConnection* connection = [FMIceLinkConnection connectionWithStream: audioStream];
var localMedia = LocalMedia() // call localMedia.start() when ready to stream
var audioStream = FMIceLinkAudioStream(localMedia)
var connection = FMIceLinkConnection(audioStream)


LocalMedia can be started or stopped at any time. Don't forget to call "stop" at some point!


Call "destroy" when you are done with the LocalMedia and ready to dispose of all connected resources.

Substituting an Audio Track

Notice that the FM.IceLink.AudioStream constructor can also take an instance of FM.IceLink.AudioTrack. The default implementation of FM.IceLink.RtcLocalMedia actually wraps several different audio tracks in order to support multiple codecs. The media chaining API allows you to skip creating the LocalMedia instance entirely and instead provide your own FM.IceLink.AudioTrack instance.

First, create an FM.IceLink.AudioConfig instance and then instantiate an audio source. The source type varies based on the platform you are targeting. You can provide your own FM.IceLink.AudioSource implementation or use one of the implementations included in the SDK. This example will use Opus, so specify a clock rate of 48,000Hz with two audio channels.

var audioConfig = new FM.IceLink.AudioConfig(48000, 2);
var audioSource = new FM.IceLink.NAudio.Source(audioConfig); // call audioSource.Start() when ready to stream
// for android
fm.icelink.AudioConfig audioConfig = new fm.icelink.AudioConfig(48000, 2);
fm.icelink.AudioSource audioSource = new fm.icelink.android.AudioRecordSource(audioConfig); // call audioSource.start() when ready to stream

// for other java applications
fm.icelink.AudioConfig audioConfig = new fm.icelink.AudioConfig(48000, 2);
fm.icelink.AudioSource audioSource = new fm.icelink.java.SoundSource(audioConfig); // call audioSource.start() when ready to stream
FMIceLinkAudioConfig* audioConfig = [FMIceLinkAudioConfig audioConfigWithClockRate: 48000 channelCount: 2];
FMIceLinkAudioSource* audioSource = [FMIceLinkAudioUnitSource audioUnitSourceWithConfig: audioConfig]; // call [audioSource start] when ready to stream
var audioConfig = FMIceLinkAudioConfig(clockRate: 48000, channelCount: 2)
var audioSource = FMIceLinkAudioUnitSource(config: audioConfig) // call audioSource.start() when ready to stream


Audio sources can be started or stopped at any time. Don't forget to call "stop" at some point!

The next step is to define the audio chain. First, instantiate an FM.IceLink.AudioTrack using the FM.IceLink.AudioSource created in the previous section. Then invoke the track's Next method, calling Next again on each returned result, to add additional steps to the chain.

As mentioned above, this example demonstrates the use of Opus-encoded audio. To encode and send an Opus-encoded audio stream, there are three steps that have to be added to the audio chain. The first step is to convert the sound from the user's microphone into a format that is usable by the Opus encoder. This can be accomplished using an instance of FM.IceLink.SoundConverter. The second step is to encode the audio data using the FM.IceLink.Opus.Encoder class. Finally, the encoded audio frames must be packetized so they can be sent in an RTP stream; this is accomplished using an instance of FM.IceLink.Opus.Packetizer.

Note in the following code that the SoundConverter takes two parameters: an input audio configuration and an output audio configuration. The output audio configuration can be taken directly from the Encoder itself. Specifically, the converter converts between audio formats with different sample rates or numbers of audio channels. In this instance there is no difference, but in many scenarios high-quality audio must be down-sampled to be compatible with certain audio codecs.
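
For example, a short C# sketch of a conversion step that actually changes the format (the 16,000Hz mono configuration here is illustrative and not part of the Opus example):

// Illustrative only: down-sample 48kHz stereo capture audio to 16kHz mono.
var captureConfig = new FM.IceLink.AudioConfig(48000, 2);
var narrowbandConfig = new FM.IceLink.AudioConfig(16000, 1);
var downConverter = new FM.IceLink.SoundConverter(captureConfig, narrowbandConfig);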

After the entire chain has been defined, pass it to the FM.IceLink.AudioStream class's constructor where you would normally use a LocalMedia object.

var opusEncoder = new FM.IceLink.Opus.Encoder();
var opusPacketizer = new FM.IceLink.Opus.Packetizer();
var localSoundConverter = new FM.IceLink.SoundConverter(audioSource.Config, opusEncoder.InputConfig);

var localAudioTrack = new AudioTrack(audioSource)
    .Next(localSoundConverter)
    .Next(opusEncoder)
    .Next(opusPacketizer);

var audioStream = new AudioStream(localAudioTrack);
fm.icelink.opus.Encoder opusEncoder = new fm.icelink.opus.Encoder();
fm.icelink.opus.Packetizer opusPacketizer = new fm.icelink.opus.Packetizer();
fm.icelink.SoundConverter localSoundConverter = new fm.icelink.SoundConverter(audioSource.getConfig(), opusEncoder.getInputConfig());

fm.icelink.AudioTrack localAudioTrack = new fm.icelink.AudioTrack(audioSource)
    .next(localSoundConverter)
    .next(opusEncoder)
    .next(opusPacketizer);

fm.icelink.AudioStream audioStream = new fm.icelink.AudioStream(localAudioTrack);
FMIceLinkOpusEncoder* opusEncoder = [FMIceLinkOpusEncoder encoder];
FMIceLinkOpusPacketizer* opusPacketizer = [FMIceLinkOpusPacketizer packetizer];
FMIceLinkSoundConverter* localSoundConverter = [FMIceLinkSoundConverter soundConverterWithInputConfig:[audioSource config] outputConfig:[opusEncoder inputConfig]];

FMIceLinkAudioTrack* localAudioTrack = [[[[FMIceLinkAudioTrack audioTrackWithElement:audioSource]
    next:localSoundConverter]
    next:opusEncoder]
    next:opusPacketizer];

FMIceLinkAudioStream* audioStream = [FMIceLinkAudioStream audioStreamWithLocalTrack:localAudioTrack];
var opusEncoder = FMIceLinkOpusEncoder()
var opusPacketizer = FMIceLinkOpusPacketizer()
var localSoundConverter = FMIceLinkSoundConverter(inputConfig: audioSource.config(), outputConfig: opusEncoder.inputConfig())

var localAudioTrack = FMIceLinkAudioTrack(element: audioSource)
    .next(localSoundConverter)
    .next(opusEncoder)
    .next(opusPacketizer)

var audioStream = FMIceLinkAudioStream(localTrack: localAudioTrack)


Call "destroy" when you are done with the audio track and ready to dispose of all connected resources.

You've now replaced the LocalMedia instance and have developed a completely independent media chain for sending Opus-encoded audio. The next section will continue the example by demonstrating how to create a video track.

Substituting a Video Track

Creating an FM.IceLink.VideoTrack works the same way as it does for audio tracks. Similar to the audio stream, the FM.IceLink.VideoStream constructor can take a video track instead of an FM.IceLink.RtcLocalMedia implementation.

Create an FM.IceLink.VideoConfig instance and use it to instantiate a video source. Again, the type of the FM.IceLink.VideoSource and the particulars of its initialization vary slightly for each platform. This example will use the VP8 codec. You may select any reasonable values for the video capture parameters.

var videoConfig = new FM.IceLink.VideoConfig(640, 480, 15);
var videoSource = new FM.IceLink.AForge.CameraSource(videoConfig); // call videoSource.Start() when ready to stream
// for android
fm.icelink.android.CameraPreview videoPreview = getPreview();

fm.icelink.VideoConfig videoConfig = new fm.icelink.VideoConfig(640, 480, 15);
fm.icelink.VideoSource videoSource = new fm.icelink.android.CameraSource(videoPreview, videoConfig);

// for other java applications
fm.icelink.VideoConfig videoConfig = new fm.icelink.VideoConfig(640, 480, 15);
fm.icelink.VideoSource videoSource = new fm.icelink.java.sarxos.VideoSource(videoConfig); // call videoSource.start() when ready to stream
FMIceLinkCocoaAVCapturePreview* videoPreview = [FMIceLinkCocoaAVCapturePreview avCapturePreview];

FMIceLinkVideoConfig* videoConfig = [FMIceLinkVideoConfig videoConfigWithWidth: 640 height: 480 frameRate: 15];
FMIceLinkVideoSource* videoSource = [FMIceLinkCocoaAVCaptureSource avCaptureSourceWithPreview:videoPreview config:videoConfig]; // call [videoSource start] when ready to stream
var videoPreview = FMIceLinkCocoaAVCapturePreview()

var videoConfig = FMIceLinkVideoConfig(width: 640, height: 480, frameRate: 15)
var videoSource = FMIceLinkCocoaAVCaptureSource(preview: videoPreview, config: videoConfig) // call videoSource.start() when ready to stream


Video sources can be started or stopped at any time. Don't forget to call "stop" at some point!

The code for creating a video chain is similar to the code for creating an audio chain. The key differences are that you will specify an FM.IceLink.Vp8.Encoder and an FM.IceLink.Vp8.Packetizer instead of their Opus equivalents. You will also need to use an FM.IceLink.Yuv.ImageConverter instance to translate from the RGB colorspace to the YUV colorspace.

Like the sound converter, the image converter's constructor takes an input format and an output format. Don't be confused by the ordering of the parameters: the first parameter is the input to the image converter, which is the output of the camera source, and the second parameter is the output of the converter, which is the input to the VP8 encoder.

var vp8Encoder = new FM.IceLink.Vp8.Encoder();
var vp8Packetizer = new FM.IceLink.Vp8.Packetizer();
var localImageConverter = new FM.IceLink.Yuv.ImageConverter(videoSource.OutputFormat, vp8Encoder.InputFormat);

var localVideoTrack = new FM.IceLink.VideoTrack(videoSource)
    .Next(localImageConverter)
    .Next(vp8Encoder)
    .Next(vp8Packetizer);

var videoStream = new FM.IceLink.VideoStream(localVideoTrack);
fm.icelink.vp8.Encoder vp8Encoder = new fm.icelink.vp8.Encoder();
fm.icelink.vp8.Packetizer vp8Packetizer = new fm.icelink.vp8.Packetizer();
fm.icelink.yuv.ImageConverter localImageConverter = new fm.icelink.yuv.ImageConverter(videoSource.getOutputFormat(), vp8Encoder.getInputFormat());

fm.icelink.VideoTrack localVideoTrack = new fm.icelink.VideoTrack(videoSource)
    .next(localImageConverter)
    .next(vp8Encoder)
    .next(vp8Packetizer);

fm.icelink.VideoStream videoStream = new fm.icelink.VideoStream(localVideoTrack);
FMIceLinkVp8Encoder* vp8Encoder = [FMIceLinkVp8Encoder encoder];
FMIceLinkVp8Packetizer* vp8Packetizer = [FMIceLinkVp8Packetizer packetizer];
FMIceLinkYuvImageConverter* localImageConverter = [FMIceLinkYuvImageConverter imageConverterWithInputFormat:[videoSource outputFormat] outputFormat:[vp8Encoder inputFormat]];

FMIceLinkVideoTrack* localVideoTrack = [[[[FMIceLinkVideoTrack videoTrackWithElement:videoSource]
    next:localImageConverter]
    next:vp8Encoder]
    next:vp8Packetizer];

FMIceLinkVideoStream* videoStream = [FMIceLinkVideoStream videoStreamWithLocalTrack:localVideoTrack];
var vp8Encoder = FMIceLinkVp8Encoder()
var vp8Packetizer = FMIceLinkVp8Packetizer()
var localImageConverter = FMIceLinkYuvImageConverter(inputFormat: videoSource.outputFormat(), outputFormat: vp8Encoder.inputFormat())

var localVideoTrack = FMIceLinkVideoTrack(element: videoSource)
    .next(localImageConverter)
    .next(vp8Encoder)
    .next(vp8Packetizer)

var videoStream = FMIceLinkVideoStream(localTrack: localVideoTrack)


Call "destroy" when you are done with the video track and ready to dispose of all connected resources.

You now have an Opus-encoded audio stream and a VP8-encoded video stream. These can be combined as normal to create an FM.IceLink.Connection instance. However, these streams are currently send-only. The next step is to add decoders to receive media from remote peers.
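
For example, a C# sketch (assuming, as in the Getting Started guide, that the FM.IceLink.Connection constructor accepts an array of streams):

// Combine the send-only audio and video streams into a single connection.
var connection = new FM.IceLink.Connection(new FM.IceLink.Stream[] { audioStream, videoStream });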

Adding Remote Media Tracks

To send media, you first convert audio from your microphone to a format usable by Opus using the FM.IceLink.SoundConverter. You then encode the audio data using the FM.IceLink.Opus.Encoder and pack it into RTP packets using the FM.IceLink.Opus.Packetizer. To receive audio data, you will need to reverse these operations.

First, de-packetize the RTP packets using the FM.IceLink.Opus.Depacketizer, then decode them using the FM.IceLink.Opus.Decoder. Next, convert the audio. You will use the same sound converter class, but this time the input is the decoder's output configuration and the output is the FM.IceLink.AudioSink instance's configuration.

var audioSink = new NAudio.Sink(audioConfig);

var opusDecoder = new Opus.Decoder();
var opusDepacketizer = new Opus.Depacketizer();
var remoteSoundConverter = new FM.IceLink.SoundConverter(opusDecoder.OutputConfig, audioSink.Config);

var remoteAudioTrack = new AudioTrack(opusDepacketizer)
    .Next(opusDecoder)
    .Next(remoteSoundConverter)
    .Next(audioSink);

var audioStream = new AudioStream(localAudioTrack, remoteAudioTrack);
// for android
fm.icelink.AudioSink audioSink = new fm.icelink.android.AudioTrackSink(audioConfig);

// for other java applications
fm.icelink.AudioSink audioSink = new fm.icelink.java.SoundSink(audioConfig);

fm.icelink.opus.Decoder opusDecoder = new fm.icelink.opus.Decoder();
fm.icelink.opus.Depacketizer opusDepacketizer = new fm.icelink.opus.Depacketizer();
fm.icelink.SoundConverter remoteSoundConverter = new fm.icelink.SoundConverter(opusDecoder.getOutputConfig(), audioSink.getConfig());

fm.icelink.AudioTrack remoteAudioTrack = new fm.icelink.AudioTrack(opusDepacketizer)
    .next(opusDecoder)
    .next(remoteSoundConverter)
    .next(audioSink);

fm.icelink.AudioStream audioStream = new fm.icelink.AudioStream(localAudioTrack, remoteAudioTrack);
FMIceLinkAudioSink* audioSink = [FMIceLinkCocoaAudioUnitSink audioUnitSinkWithConfig:audioConfig];

FMIceLinkOpusDecoder* opusDecoder = [FMIceLinkOpusDecoder decoder];
FMIceLinkOpusDepacketizer* opusDepacketizer = [FMIceLinkOpusDepacketizer depacketizer];
FMIceLinkSoundConverter* remoteSoundConverter = [FMIceLinkSoundConverter soundConverterWithInputConfig:[opusDecoder outputConfig] outputConfig:[audioSink config]];

FMIceLinkAudioTrack* remoteAudioTrack = [[[[FMIceLinkAudioTrack audioTrackWithElement:opusDepacketizer]
    next:opusDecoder]
    next:remoteSoundConverter]
    next:audioSink];

FMIceLinkAudioStream* audioStream = [FMIceLinkAudioStream audioStreamWithLocalTrack:localAudioTrack remoteTrack:remoteAudioTrack];
var audioSink = FMIceLinkCocoaAudioUnitSink(config: audioConfig)

var opusDecoder = FMIceLinkOpusDecoder()
var opusDepacketizer = FMIceLinkOpusDepacketizer()
var remoteSoundConverter = FMIceLinkSoundConverter(inputConfig: opusDecoder.outputConfig(), outputConfig: audioSink.config())

var remoteAudioTrack = FMIceLinkAudioTrack(element: opusDepacketizer)
    .next(opusDecoder)
    .next(remoteSoundConverter)
    .next(audioSink)

var audioStream = FMIceLinkAudioStream(localTrack: localAudioTrack, remoteTrack: remoteAudioTrack)

Receiving video works the same way. You first de-packetize the RTP packets using the FM.IceLink.Vp8.Depacketizer, then decode the received data using the FM.IceLink.Vp8.Decoder. Finally, you use an FM.IceLink.Yuv.ImageConverter instance to convert from the YUV colorspace to the RGB colorspace, so that an instance of FM.IceLink.VideoSink can display the decoded frames.

var videoSink = new FM.IceLink.Wpf.ImageSink();

var vp8Decoder = new FM.IceLink.Vp8.Decoder();
var vp8Depacketizer = new FM.IceLink.Vp8.Depacketizer();
var remoteImageConverter = new FM.IceLink.Yuv.ImageConverter(vp8Decoder.OutputFormat, videoSink.InputFormat);

var remoteVideoTrack = new FM.IceLink.VideoTrack(vp8Depacketizer)
    .Next(vp8Decoder)
    .Next(remoteImageConverter)
    .Next(videoSink);

var videoStream = new FM.IceLink.VideoStream(localVideoTrack, remoteVideoTrack);
// for android
android.content.Context context = getContext();
fm.icelink.VideoSink videoSink = new fm.icelink.android.OpenGLSink(context);

// for other java applications
fm.icelink.VideoSink videoSink = new fm.icelink.java.VideoComponentSink();

fm.icelink.vp8.Decoder vp8Decoder = new fm.icelink.vp8.Decoder();
fm.icelink.vp8.Depacketizer vp8Depacketizer = new fm.icelink.vp8.Depacketizer();
fm.icelink.yuv.ImageConverter remoteImageConverter = new fm.icelink.yuv.ImageConverter(vp8Decoder.getOutputFormat(), videoSink.getInputFormat());

fm.icelink.VideoTrack remoteVideoTrack = new fm.icelink.VideoTrack(vp8Depacketizer)
    .next(vp8Decoder)
    .next(remoteImageConverter)
    .next(videoSink);

fm.icelink.VideoStream videoStream = new fm.icelink.VideoStream(localVideoTrack, remoteVideoTrack);
FMIceLinkVideoSink* videoSink = [FMIceLinkCocoaOpenGLSink openGLSinkWithViewScale:FMIceLinkLayoutScaleContain];

FMIceLinkVp8Decoder* vp8Decoder = [FMIceLinkVp8Decoder decoder];
FMIceLinkVp8Depacketizer* vp8Depacketizer = [FMIceLinkVp8Depacketizer depacketizer];
FMIceLinkYuvImageConverter* remoteImageConverter = [FMIceLinkYuvImageConverter imageConverterWithInputFormat:[vp8Decoder outputFormat] outputFormat:[videoSink inputFormat]];

FMIceLinkVideoTrack* remoteVideoTrack = [[[[FMIceLinkVideoTrack videoTrackWithElement:vp8Depacketizer]
    next:vp8Decoder]
    next:remoteImageConverter]
    next:videoSink];

FMIceLinkVideoStream* videoStream = [FMIceLinkVideoStream videoStreamWithLocalTrack:localVideoTrack remoteTrack:remoteVideoTrack];
var videoSink = FMIceLinkCocoaOpenGLSink(viewScale: FMIceLinkLayoutScaleContain)

var vp8Decoder = FMIceLinkVp8Decoder()
var vp8Depacketizer = FMIceLinkVp8Depacketizer()
var remoteImageConverter = FMIceLinkYuvImageConverter(inputFormat: vp8Decoder.outputFormat(), outputFormat: videoSink.inputFormat())

var remoteVideoTrack = FMIceLinkVideoTrack(element: vp8Depacketizer)
    .next(vp8Decoder)
    .next(remoteImageConverter)
    .next(videoSink)

var videoStream = FMIceLinkVideoStream(localTrack: localVideoTrack, remoteTrack: remoteVideoTrack)

You now have two-way audio and video streams. There are a number of optimizations that the default implementation provides, so you should not create media chains in this way unless you have a specific need to extend the functionality that is already there. The next sections will cover more advanced usages.

Branching

Previously, it was mentioned that the default FM.IceLink.RtcLocalMedia implementation provided multiple FM.IceLink.AudioTrack and FM.IceLink.VideoTrack instances, but it was never explained how this was done. The media chaining API accomplishes this with branches. A branch operates exactly how it sounds - at any point in a media chain, you may branch, creating two separate output streams.

Whenever you invoke the Next method of a track, you can branch by passing in an array of tracks instead of a single object. The following code demonstrates how you could branch to encode a video stream using either the VP8 or the H264 codec.

var vp8Encoder = new FM.IceLink.Vp8.Encoder();
var vp8Packetizer = new FM.IceLink.Vp8.Packetizer();
var vp8ImageConverter = new FM.IceLink.Yuv.ImageConverter(videoSource.OutputFormat, vp8Encoder.InputFormat);

var h264Encoder = new FM.IceLink.H264.Encoder();
var h264Packetizer = new FM.IceLink.H264.Packetizer();
var h264ImageConverter = new FM.IceLink.Yuv.ImageConverter(videoSource.OutputFormat, h264Encoder.InputFormat);

var videoTrack = new FM.IceLink.VideoTrack(videoSource)
    .Next(new [] {
        vp8ImageConverter
            .Next(vp8Encoder)
            .Next(vp8Packetizer),

        h264ImageConverter
            .Next(h264Encoder)
            .Next(h264Packetizer)
    });

var videoStream = new FM.IceLink.VideoStream(videoTrack);
fm.icelink.vp8.Encoder vp8Encoder = new fm.icelink.vp8.Encoder();
fm.icelink.vp8.Packetizer vp8Packetizer = new fm.icelink.vp8.Packetizer();
fm.icelink.yuv.ImageConverter vp8ImageConverter = new fm.icelink.yuv.ImageConverter(videoSource.getOutputFormat(), vp8Encoder.getInputFormat());

fm.icelink.h264.Encoder h264Encoder = new fm.icelink.h264.Encoder();
fm.icelink.h264.Packetizer h264Packetizer = new fm.icelink.h264.Packetizer();
fm.icelink.yuv.ImageConverter h264ImageConverter = new fm.icelink.yuv.ImageConverter(videoSource.getOutputFormat(), h264Encoder.getInputFormat());

fm.icelink.VideoTrack videoTrack = new fm.icelink.VideoTrack(videoSource)
    .next(new fm.icelink.MediaTrackBase[] {
        vp8ImageConverter
            .next(vp8Encoder)
            .next(vp8Packetizer),

        h264ImageConverter
            .next(h264Encoder)
            .next(h264Packetizer)
    });

fm.icelink.VideoStream videoStream = new fm.icelink.VideoStream(videoTrack);
// there are no h264 bindings for cocoa, so use two separate vp8 encoders
// the premise is the same

FMIceLinkVp8Encoder* vp8Encoder1 = [FMIceLinkVp8Encoder encoder];
FMIceLinkVp8Packetizer* vp8Packetizer1 = [FMIceLinkVp8Packetizer packetizer];
FMIceLinkYuvImageConverter* vp8ImageConverter1 = [FMIceLinkYuvImageConverter imageConverterWithInputFormat:[videoSource outputFormat] outputFormat:[vp8Encoder1 inputFormat]];

FMIceLinkVp8Encoder* vp8Encoder2 = [FMIceLinkVp8Encoder encoder];
FMIceLinkVp8Packetizer* vp8Packetizer2 = [FMIceLinkVp8Packetizer packetizer];
FMIceLinkYuvImageConverter* vp8ImageConverter2 = [FMIceLinkYuvImageConverter imageConverterWithInputFormat:[videoSource outputFormat] outputFormat:[vp8Encoder2 inputFormat]];

FMIceLinkVideoTrack* videoTrack = [[FMIceLinkVideoTrack videoTrackWithElement:videoSource]
    next:[NSArray arrayWithObjects:
        [[vp8ImageConverter1
            next:vp8Encoder1]
            next:vp8Packetizer1],

        [[vp8ImageConverter2
            next:vp8Encoder2]
            next:vp8Packetizer2],
        nil
    ]];

FMIceLinkVideoStream* videoStream = [FMIceLinkVideoStream videoStreamWithLocalTrack:videoTrack];
var vp8Encoder1 = FMIceLinkVp8Encoder()
var vp8Packetizer1 = FMIceLinkVp8Packetizer()
var vp8ImageConverter1 = FMIceLinkYuvImageConverter(inputFormat: videoSource.outputFormat(), outputFormat: vp8Encoder1.inputFormat())

var vp8Encoder2 = FMIceLinkVp8Encoder()
var vp8Packetizer2 = FMIceLinkVp8Packetizer()
var vp8ImageConverter2 = FMIceLinkYuvImageConverter(inputFormat: videoSource.outputFormat(), outputFormat: vp8Encoder2.inputFormat())

var videoTrack = FMIceLinkVideoTrack(element: videoSource)
    .next([
        vp8ImageConverter1
            .next(vp8Encoder1)
            .next(vp8Packetizer1),
        vp8ImageConverter2
            .next(vp8Encoder2)
            .next(vp8Packetizer2)
    ])

var videoStream = FMIceLinkVideoStream(localTrack: videoTrack)

Relation to SDP

Note that the above VideoTrack does not encode data twice. It encodes the video data using either the VP8 codec or the H264 codec; the selection depends on what is negotiated between the two peers while establishing a connection. What the above code does is ensure that the codecs in the audio or video streams appear as a= lines in a peer's initial SDP offer or answer. You can verify this at any point by inspecting the InputFormats and OutputFormats properties of an FM.IceLink.AudioStream or an FM.IceLink.VideoStream instance.
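
For example, a small C# sketch (assuming each format exposes a Name property):

// List the video formats that will be advertised in the SDP offer/answer.
foreach (var format in videoStream.InputFormats)
{
    Console.WriteLine(format.Name);
}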

Whenever new information is learned about a remote peer's capabilities, the branches update themselves by traversing the media chain. If, at any point, a codec is found to be unsupported, the branch sets the Disabled property on the node it has arrived at. This value propagates to the end of the chain and throughout any branches that originate from the disabled node.
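
For example, a hypothetical C# check after negotiation completes, using the h264Packetizer from the branching example above:

// Hypothetical: if H264 was not negotiated, the nodes in its branch are disabled
// and only the VP8 branch will carry media.
if (h264Packetizer.Disabled)
{
    // the H264 branch is inactive for this connection
}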

Wrapping Up

After reading this guide, you should know how to set up a custom media chain for your application. Most users will not need this functionality, but if you need to implement a custom codec or to perform some advanced transformations on a media stream, the media chaining API will let you do that.