Starting Local Media

When talking about audio and video data, it is common to refer to both types of data together as media data. A distinction is also made between local media data and remote media data. Local media data refers to the media data that the current user sends to everyone else; remote media data refers to the media data that the current user receives from everyone else. The first half of participating in a video conference is being able to send your own audio and video data to other participants. This section describes how to do this with the IceLink SDK.

Defining the Local Media Object

The local media object specifies platform-specific implementations for several core IceLink features. Specifically, it enables:

  • audio and video encoding
  • hardware device access for capturing audio and video
  • image processing utilities

The IceLink SDK provides these platform-specific implementations, but you must create your own local media object, which specifies which implementations to use. Before you can create a local media object, you must define your own local media class.

A quick note before you go any further - this step is not necessary for JavaScript. In JavaScript, everything is tightly controlled by the browser; there is no way to specify alternative implementations. IceLink provides a default implementation instead. If you are using JavaScript, skip ahead to the section on Controlling Media Capture.

To create a local media object, first create a new class. This class must inherit from FM.IceLink.RtcLocalMedia<T>. The generic type T is the type of the view that will be used to display video. For example, if you were using WPF, T would be System.Windows.Controls.Image. For Swift and Objective-C, you do not have to specify a generic type. This guide focuses on capturing the user's camera and microphone, and so describes the implementation of a LocalCameraMedia class. You do not have to follow this convention - you can call your local media class whatever you would like.

To complete your LocalCameraMedia class, you must provide implementations for a number of factory methods. Each method is associated with a specific feature. For example, the CreateOpusEncoder method allows you to encode audio using the Opus codec. If you want to disable a feature, you can provide an implementation that returns null. Note that you will never invoke these methods yourself - the IceLink SDK invokes them when it needs a particular component. The next sections of this guide provide implementations for each method and discuss which are required and which are optional.
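
For example, the following is a minimal sketch (using the Java naming that appears later in this guide) of a factory method that disables a feature by returning null - in this case the Opus encoder, which would force the PCMA/PCMU fallback described further below:

public class LocalCameraMedia extends fm.icelink.RtcLocalMedia<...> {
    // Returning null from a factory method disables the associated
    // feature; here, the Opus codec is disabled.
    @Override
    public fm.icelink.AudioEncoder createOpusEncoder(fm.icelink.AudioConfig config) {
        return null;
    }
}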

Initializing the Local Media Object

To begin implementing your LocalCameraMedia class, you will need to define one or more constructors. Each constructor must call a parent constructor from the RtcLocalMedia class and must invoke the inherited Initialize method. There are three parent constructors that you can choose from. The first is a default constructor and has no special properties. The second has two parameters, disableAudio and disableVideo, which allow you to avoid sending either audio or video data. The third constructor has an additional parameter that allows you to enable acoustic echo cancellation, which is beyond the scope of this guide. Refer to the Enabling Acoustic Echo Cancellation guide for more information on this feature.

The following code demonstrates the implementation of a LocalCameraMedia class with two constructors.


// if using winforms
public class LocalCameraMedia : FM.IceLink.RtcLocalMedia<FM.IceLink.WinForms.PictureBoxControl>

// if using wpf
public class LocalCameraMedia : FM.IceLink.RtcLocalMedia<System.Windows.Controls.Image>
{
    public LocalCameraMedia()
        : base()
    {
        Initialize();
    }
    
    public LocalCameraMedia(bool disableAudio, bool disableVideo)
        : base(disableAudio, disableVideo)
    {
        Initialize();
    }
}

// for android
public class LocalCameraMedia extends fm.icelink.RtcLocalMedia<android.view.View> {
    public LocalCameraMedia() {
        super();
        
        initialize();
    }
    
    public LocalCameraMedia(boolean disableAudio, boolean disableVideo) {
        super(disableAudio, disableVideo);
        
        initialize();
    }
}

// for java
public class LocalCameraMedia extends fm.icelink.RtcLocalMedia<fm.icelink.java.VideoComponent> {
    public LocalCameraMedia() {
        super();
        
        initialize();
    }
    
    public LocalCameraMedia(boolean disableAudio, boolean disableVideo) {
        super(disableAudio, disableVideo);
        
        initialize();
    }
}
@interface LocalCameraMedia : FMIceLinkRtcLocalMedia
@end

@implementation LocalCameraMedia
- (instancetype)init {
    self = [super init];
    if (self) {
        [self initialize];
    }
    return self;
}

- (instancetype)initWithDisableAudio:(bool)disableAudio disableVideo:(bool)disableVideo {
    self = [super initWithDisableAudio:disableAudio disableVideo:disableVideo];
    if (self) {
        [self initialize];
    }
    return self;
}
@end
public class LocalCameraMedia : FMIceLinkRtcLocalMedia {
    override init() {
        super.init()
        
        initialize()
    }
    
    override init(disableAudio: Bool, disableVideo: Bool) {
        super.init(disableAudio: disableAudio, disableVideo: disableVideo)
        
        initialize()
    }
}

Now that you can instantiate your local media class, the next step is to start capturing a user's audio and video data.

Capturing Audio

To capture microphone data with your LocalCameraMedia class, you must implement two methods. First, implement CreateAudioSource, which returns an audio source that provides hardware access to the user's microphone. IceLink provides platform-specific implementations of audio sources that capture microphone data. These are available in the IceLink distribution as supplementary libraries:

  • For C#: include FM.IceLink.NAudio.dll and FM.IceLink.Dmo.dll.
  • For Java for Android: include fm.icelink.android.jar.
  • For Other Java Programs: include fm.icelink.java.jar.
  • For Objective-C: include FMIceLinkCocoa.a.

Each of these libraries has a platform-specific FM.IceLink.AudioSource implementation that captures the user's microphone. Refer to the code snippets below to see the specific implementation for your platform. Note that for .NET platforms, you should first attempt to use DMO (DirectX Media Objects) and fall back to NAudio only if DMO is not supported. DMO offers better sound quality and provides acoustic echo cancellation, so use it whenever it is available.

DMO and VMs

Note that in virtual environments, DMO can still produce echo. Disabling all enhancements on your virtual audio devices improves results, but for best results, do not run in a VM.


public class LocalCameraMedia : FM.IceLink.RtcLocalMedia<...>
{
    public override FM.IceLink.AudioSource CreateAudioSource(FM.IceLink.AudioConfig config)
    {
        if (Dmo.VoiceCaptureSource.IsSupported())
        {
            return new Dmo.VoiceCaptureSource(true);
        }
        else
        {
            return new NAudio.Source(config);
        }
    }
}

public class LocalCameraMedia extends fm.icelink.RtcLocalMedia<...> {
    @Override
    public fm.icelink.AudioSource createAudioSource(fm.icelink.AudioConfig config) {
        // for android
        return new fm.icelink.android.AudioRecordSource(config);
        
        // for java, return this instead:
        // return new fm.icelink.java.SoundSource(config);
    }
}

@implementation LocalCameraMedia
- (FMIceLinkAudioSource *)createAudioSourceWithConfig: (FMIceLinkAudioConfig *)config {
    return [FMIceLinkCocoaAudioUnitSource audioUnitSourceWithConfig:config];
}
@end

public class LocalCameraMedia : FMIceLinkRtcLocalMedia {
    override func createAudioSource(config:FMIceLinkAudioConfig) -> FMIceLinkAudioSource {
        return FMIceLinkCocoaAudioUnitSource(config: config)
    }
}

The second method to implement is the CreateOpusEncoder method. Technically, this isn't required, but if you do not implement it, your application will be forced to fall back to the lower quality PCMA/PCMU audio codecs. Similar to above, IceLink provides platform-specific libraries under the FM.IceLink.Opus namespace. Add the corresponding libraries to your project:

  • For C#: include FM.IceLink.Opus.dll.
  • For Java: include fm.icelink.opus.jar.
  • For Cocoa: include FMIceLinkOpus.a.

The code samples below demonstrate how to enable the Opus encoder on all platforms.


public class LocalCameraMedia : FM.IceLink.RtcLocalMedia<...>
{
    public override FM.IceLink.AudioEncoder CreateOpusEncoder(FM.IceLink.AudioConfig config)
    {
        return new FM.IceLink.Opus.Encoder(config);
    }
}

public class LocalCameraMedia extends fm.icelink.RtcLocalMedia<...> {
    @Override
    public fm.icelink.AudioEncoder createOpusEncoder(fm.icelink.AudioConfig config) {
        return new fm.icelink.opus.Encoder(config);        
    }
}
@implementation LocalCameraMedia
- (FMIceLinkAudioEncoder *) createOpusEncoderWithConfig: (FMIceLinkAudioConfig *)config {
    return [FMIceLinkOpusEncoder encoderWithConfig:config];
}
@end
public class LocalCameraMedia : FMIceLinkRtcLocalMedia {
    override func createOpusEncoder(config:FMIceLinkAudioConfig) -> FMIceLinkAudioEncoder {
        return FMIceLinkOpusEncoder(config: config)
    }
}

Your application can now capture local audio data from the user's microphone. The next section describes how to capture video data from the user's camera.

Capturing Video

Capturing video with the LocalCameraMedia class works much the same as capturing audio, but there are a few more steps. In addition to creating a video source and a video encoder, you also need to provide factory methods to create a video preview and to create some image manipulation tools. To start, however, provide an implementation for the CreateVideoSource method. IceLink provides platform-specific implementations for video sources that capture data from the user's camera. To use these implementations, include a library appropriate for your platform:

  • For C#: include FM.IceLink.AForge.dll.
  • For Java for Android: include fm.icelink.android.jar.
  • For Other Java Programs: include fm.icelink.java.jar.
  • For Objective-C: include FMIceLinkCocoa.a.

The examples below show how to create these sources.

Note that the Android and iOS implementations require an additional parameter. These platforms have native objects for displaying a preview of the user's camera. You will need to access these, and pass them into the video source that you create. For simplicity, these native objects have been instantiated inline. Generally, however, these are initialized earlier in the local media lifecycle, commonly in the constructor.


public class LocalCameraMedia : FM.IceLink.RtcLocalMedia<...>
{
    public override FM.IceLink.VideoSource CreateVideoSource()
    {
        var config = new FM.IceLink.VideoConfig(640, 480, 30);
        
        return new FM.IceLink.AForge.CameraSource(config);
    }
}

public class LocalCameraMedia extends fm.icelink.RtcLocalMedia<...> {
    // for android
    @Override
    public fm.icelink.VideoSource createVideoSource() {
        // context is the android.content.Context stored by the constructor (see below)
        fm.icelink.VideoConfig config = new fm.icelink.VideoConfig(640, 480, 30);
        fm.icelink.android.CameraPreview preview = new fm.icelink.android.CameraPreview(context, LayoutScale.Contain);
        
        return new fm.icelink.android.CameraSource(preview, config);
    }
    
    // for java
    @Override
    public fm.icelink.VideoSource createVideoSource() {
        fm.icelink.VideoConfig config = new fm.icelink.VideoConfig(640, 480, 30);
        
        return new fm.icelink.java.VideoSource(config);        
    }
}
@implementation LocalCameraMedia
// for iOS
- (FMIceLinkVideoSource *)createVideoSource {
    FMIceLinkVideoConfig* config = [FMIceLinkVideoConfig videoConfigWithWidth:640 height:480 frameRate:30];
    FMIceLinkCocoaAVCapturePreview* preview = [FMIceLinkCocoaAVCapturePreview avCapturePreview];

    return [FMIceLinkCocoaAVCaptureSource avCaptureSourceWithPreview:preview config:config];
}

// for macOS
- (FMIceLinkVideoSource *)createVideoSource {
    FMIceLinkVideoConfig* config = [FMIceLinkVideoConfig videoConfigWithWidth:640 height:480 frameRate:30];

    return [FMIceLinkCocoaAVCaptureSource avCaptureSourceWithConfig:config];
}
@end
public class LocalCameraMedia : FMIceLinkRtcLocalMedia {
    // for iOS
    override func createVideoSource() -> FMIceLinkVideoSource {
        var config = FMIceLinkVideoConfig(width: 640, height: 480, frameRate: 30)
        var preview = FMIceLinkCocoaAVCapturePreview()
        
        return FMIceLinkCocoaAVCaptureSource(preview: preview, config: config)
    }
    
    // for macOS
    override func createVideoSource() -> FMIceLinkVideoSource {
        var config = FMIceLinkVideoConfig(width: 640, height: 480, frameRate: 30)
        
        return FMIceLinkCocoaAVCaptureSource(config: config)
    }
}

Next, create a video preview control for the media session. This shows a preview of the user's camera as it records and sends data. To create this control, implement the CreateViewSink method. This method must return a view of the same type as the generic type parameter of your LocalCameraMedia class. Examples for each platform are below.

Recall from the section above that the mobile platform implementations created an object to use the platform's native camera preview. The object you create here is different - you are creating a view that displays the data from the native preview instance that you created previously. Note that this gets even trickier for Android; refer to the next section for more information.


public class LocalCameraMedia : FM.IceLink.RtcLocalMedia<...>
{
    public override FM.IceLink.ViewSink CreateViewSink()
    {
        return new FM.IceLink.Wpf.ImageSink
        {
            ViewScale = LayoutScale.Contain,
            ViewMirror = true
        };
    }
}

public class LocalCameraMedia extends fm.icelink.RtcLocalMedia<...> {
    // for android
    @Override
    public fm.icelink.ViewSink<...> createViewSink() {
        // see comments in next section
        return null;
    }
    
    // for java
    @Override
    public fm.icelink.ViewSink<...> createViewSink() {
        return new fm.icelink.java.VideoComponentSink();
    }
}
@implementation LocalCameraMedia
// for iOS
- (FMIceLinkViewSink *)createViewSink {
    return [FMIceLinkCocoaOpenGLSink openGLSinkWithViewScale: FMIceLinkLayoutScaleContain];
}

// for macOS
- (FMIceLinkViewSink *)createViewSink {
    return [FMIceLinkCocoaImageViewSink imageViewSinkWithView:[[ContextNSView alloc] initWithId:nil]];
}
@end
public class LocalCameraMedia : FMIceLinkRtcLocalMedia {
    // for iOS
    override func createViewSink() -> FMIceLinkViewSink {
        return FMIceLinkCocoaOpenGLSink(viewScale: FMIceLinkLayoutScaleContain)
    }
    
    // for macOS
    override func createViewSink() -> FMIceLinkViewSink {
        return FMIceLinkCocoaImageViewSink(view: ContextNSView(id: nil))
    }
}

For Android, you will create the camera preview object outside of the normal local media lifecycle. Create the preview in the constructor of your LocalCameraMedia class and then override the getView method. In this method, invoke the getView method of the CameraPreview instance to return the correct preview.


public class LocalCameraMedia extends fm.icelink.RtcLocalMedia<android.view.View> {
    private fm.icelink.android.CameraPreview viewSink;
    
    public LocalCameraMedia(android.content.Context context) {
        super();
        
        viewSink = new fm.icelink.android.CameraPreview(context, LayoutScale.Contain);
        
        // each constructor must still invoke the inherited initialize method
        initialize();
    }

    @Override
    public android.view.View getView() {
        return viewSink.getView();
    }
}


The next step is to specify which video codecs to use. This is done by providing implementations for the CreateVp8Encoder, CreateVp9Encoder and CreateH264Encoder methods. These implementations can be found in the FM.IceLink.Vpx and FM.IceLink.OpenH264 libraries. Note that for iOS and macOS, there is no OpenH264 library, as these platforms natively support the H264 codec. Include the following libraries, based on your platform:

  • For C#: include FM.IceLink.Vpx.dll and/or FM.IceLink.OpenH264.dll.
  • For Java: include fm.icelink.vpx.jar and/or fm.icelink.openh264.jar.
  • For Objective-C: include FMIceLinkVpx.a.

To support a codec, return an appropriate encoder from its associated factory method. To disable a codec, return null from its factory method instead. The code below demonstrates how to enable each codec.


public class LocalCameraMedia : FM.IceLink.RtcLocalMedia<...>
{
    public override FM.IceLink.VideoEncoder CreateVp8Encoder()
    {
        return new FM.IceLink.Vp8.Encoder();
    }
    
    public override FM.IceLink.VideoEncoder CreateVp9Encoder()
    {
        return new FM.IceLink.Vp9.Encoder();
    }
    
    public override FM.IceLink.VideoEncoder CreateH264Encoder()
    {
        return new FM.IceLink.H264.Encoder();
    }
}

public class LocalCameraMedia extends fm.icelink.RtcLocalMedia<...> {
    @Override
    public fm.icelink.VideoEncoder createVp8Encoder() {
        return new fm.icelink.vp8.Encoder();
    }
    
    @Override
    public fm.icelink.VideoEncoder createVp9Encoder() {
        return new fm.icelink.vp9.Encoder();
    }
    
    @Override
    public fm.icelink.VideoEncoder createH264Encoder() {
        return new fm.icelink.openh264.Encoder();
    }
}
@implementation LocalCameraMedia
- (FMIceLinkVideoEncoder *)createVp8Encoder {
    return [FMIceLinkVp8Encoder encoder];
}

- (FMIceLinkVideoEncoder *)createVp9Encoder {
    return [FMIceLinkVp9Encoder encoder];
}

- (FMIceLinkVideoEncoder *)createH264Encoder {
    return [FMIceLinkCocoaVideoToolboxH264Encoder videoToolboxH264Encoder];
}

@end
public class LocalCameraMedia : FMIceLinkRtcLocalMedia {
    override func createVp8Encoder() -> FMIceLinkVideoEncoder {
        return FMIceLinkVp8Encoder()
    }
    
    override func createVp9Encoder() -> FMIceLinkVideoEncoder {    
        return FMIceLinkVp9Encoder()
    }
    
    override func createH264Encoder() -> FMIceLinkVideoEncoder {
        return FMIceLinkCocoaVideoToolboxH264Encoder()
    }
}

Finally, you must provide an implementation for some minor image formatting by implementing the CreateImageConverter method. This method creates a tool that converts between various color spaces, which are different ways of representing colors. This is needed because cameras usually do not capture data in the I420 color space, which the IceLink video encoders require. You should return an instance of FM.IceLink.Yuv.ImageConverter from this method.

As the class name indicates, this class can be found in the FM.IceLink.Yuv library. The library itself is a small wrapper around libyuv. IceLink provides compiled versions of this library for all supported platforms:

  • For C#: include FM.IceLink.Yuv.dll.
  • For Java: include fm.icelink.yuv.jar.
  • For Objective-C: include FMIceLinkYuv.a.

The code below demonstrates how to create this object.


public class LocalCameraMedia : FM.IceLink.RtcLocalMedia<...>
{
    public override FM.IceLink.VideoPipe CreateImageConverter(FM.IceLink.VideoFormat outputFormat)
    {
        return new FM.IceLink.Yuv.ImageConverter(outputFormat);
    }
}

public class LocalCameraMedia extends fm.icelink.RtcLocalMedia<...> {
    @Override
    public fm.icelink.VideoPipe createImageConverter(fm.icelink.VideoFormat outputFormat) {
        return new fm.icelink.yuv.ImageConverter(outputFormat);
    }    
}
@implementation LocalCameraMedia
- (FMIceLinkVideoPipe *)createImageConverterWithOutputFormat: (FMIceLinkVideoFormat *)outputFormat {
    return [FMIceLinkYuvImageConverter imageConverterWithOutputFormat:outputFormat];
}
@end
public class LocalCameraMedia : FMIceLinkRtcLocalMedia {
    override func createImageConverter(outputFormat:FMIceLinkVideoFormat) -> FMIceLinkVideoPipe {
        return FMIceLinkYuvImageConverter(outputFormat: outputFormat)
    }    
}

Your LocalCameraMedia class can now capture both audio and video data using the user's camera and microphone. The next step is to actually kick off the process of capturing data.

Controlling Media Capture

To start capturing media, invoke the Start method of your LocalCameraMedia instance. This method is inherited from the FM.IceLink.RtcLocalMedia base class; you don't have to implement it yourself. The Start method returns a promise, which will resolve when the instance starts capturing data from the user's camera and microphone. If media can't be captured, then the promise will be rejected. You should always specify a reject action, so that you can be notified if an error occurs. The code below demonstrates how to start capturing media data.

For JavaScript, you will not have a LocalCameraMedia class. IceLink instead provides an implementation that works with all of the major browsers. Instead of creating a LocalCameraMedia instance, create an instance of fm.icelink.LocalMedia. You must provide two parameters, audio and video, which indicate whether or not to capture audio and whether or not to capture video. Assuming that you want to capture both, pass true for both parameters. The JavaScript implementation otherwise works exactly the same as your LocalCameraMedia implementation.

localMedia.Start().Then((FM.IceLink.LocalMedia lm) =>
{
    Console.WriteLine("media capture started");
}).Fail((Exception ex) =>
{
    Console.WriteLine(ex.Message);
});
localMedia.start().then((fm.icelink.LocalMedia lm) -> {
    System.out.println("media capture started");
}).fail((Exception ex) -> {
    System.out.println(ex.getMessage());
});
[[[localMedia start] thenWithResolveActionBlock: ^(FMIceLinkLocalMedia* lm) {
    NSLog(@"media capture started");
}] failWithRejectActionBlock: ^(NSException* ex) {
    NSLog(@"%@", ex.reason);
}];
localMedia.start().then(resolveActionBlock: { (lm:FMIceLinkLocalMedia) in
    print("media capture started")
}).fail(rejectActionBlock: { (ex:NSException) in
    print(ex.reason)
})
localMedia.start().then(function(lm) {
    console.log("media capture started");
}).fail(function(err) {
    console.log(err.message);
});


Stopping capture works the same way. Invoke the Stop method of your LocalCameraMedia class to stop capturing camera and microphone data. Again, this will return a promise, which you can use to determine when the media capture has stopped.

localMedia.Stop().Then((FM.IceLink.LocalMedia lm) =>
{
    Console.WriteLine("media capture stopped");
}).Fail((Exception ex) =>
{
    Console.WriteLine(ex.Message);
});
localMedia.stop().then((fm.icelink.LocalMedia lm) -> {
    System.out.println("media capture stopped");
}).fail((Exception ex) -> {
    System.out.println(ex.getMessage());
});
[[[localMedia stop] thenWithResolveActionBlock: ^(FMIceLinkLocalMedia* lm) {
    NSLog(@"media capture stopped");
}] failWithRejectActionBlock: ^(NSException* ex) {
    NSLog(@"%@", ex.reason);
}];
localMedia.stop().then(resolveActionBlock: { (lm:FMIceLinkLocalMedia) in
    print("media capture stopped")
}).fail(rejectActionBlock: { (ex:NSException) in
    print(ex.reason)
})
localMedia.stop().then(function(lm) {
    console.log("media capture stopped");
}).fail(function(err) {
    console.log(err.message);
});


You can start and stop the local media object as many times as necessary. When you are completely finished with the local media instance, you should destroy it by invoking its Destroy method. This ensures that any input devices are released back to the system.


localMedia.Destroy();
localMedia.destroy();
[localMedia destroy];
localMedia.destroy()
localMedia.destroy();

For Android, each local media action corresponds to a particular activity lifecycle callback, as shown in the sketch after this list:

  • onCreate: create the local media instance
  • onResume: start local media
  • onPause: stop local media
  • onDestroy: destroy local media
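
The following is a minimal sketch of this wiring. It assumes the LocalCameraMedia(android.content.Context) constructor shown earlier and a hypothetical ConferenceActivity; the promise fail handlers demonstrated above are omitted here for brevity.

public class ConferenceActivity extends android.app.Activity {
    private LocalCameraMedia localMedia;

    @Override
    protected void onCreate(android.os.Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // create the local media instance once for this activity
        localMedia = new LocalCameraMedia(this);
    }

    @Override
    protected void onResume() {
        super.onResume();

        // begin capturing camera and microphone data
        localMedia.start();
    }

    @Override
    protected void onPause() {
        // release the camera and microphone while backgrounded
        localMedia.stop();

        super.onPause();
    }

    @Override
    protected void onDestroy() {
        // return all input devices to the system
        localMedia.destroy();

        super.onDestroy();
    }
}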

You are now capturing user data, but there is no way for the user to know this. For this reason, it is common to display a component that allows the user to preview the video data that they are sending out. The next section introduces you to the IceLink layout manager and describes how to add a local video preview.

Displaying a Local Preview

The IceLink layout manager is a component that decides where to display each participant's video feed. It scales and repositions each video feed as participants join and leave a video conference. To show users a preview of their outgoing video, you need to tell the layout manager to display it.

First, you need to actually create a layout manager instance. Each platform has its own layout manager implementation but they work in the same way. To start, create an FM.IceLink.LayoutManager instance, and specify the container parameter. The container is used as a canvas for the layout manager and contains the video feeds. The code below shows how to instantiate a layout manager on each platform:


// if using WinForms
System.Windows.Forms.Control container;
var layoutManager = new FM.IceLink.WinForms.LayoutManager(container);

// if using WPF
System.Windows.Controls.Canvas container;
var layoutManager = new FM.IceLink.Wpf.LayoutManager(container);
// for android
android.view.ViewGroup container;
fm.icelink.android.LayoutManager layoutManager = new fm.icelink.android.LayoutManager(container);

// for java
java.awt.Container container;
fm.icelink.java.LayoutManager layoutManager = new fm.icelink.java.LayoutManager(container);
// for iOS
UIView* container;
FMIceLinkCocoaLayoutManager* layoutManager = [FMIceLinkCocoaLayoutManager layoutManagerWithContainer:container];

// for macOS
NSView* container;
FMIceLinkCocoaLayoutManager* layoutManager = [FMIceLinkCocoaLayoutManager layoutManagerWithContainer:container];
// for iOS
var container:UIView
var layoutManager = FMIceLinkCocoaLayoutManager(container: container)

// for macOS
var container:NSView
var layoutManager = FMIceLinkCocoaLayoutManager(container: container)
var container = document.getElementById("div-element-id");
var layoutManager = new fm.icelink.DomLayoutManager(container);

Once you have created a LayoutManager instance, you can assign the local view from your LocalCameraMedia instance to it. You first retrieve the view by accessing the View property of the LocalCameraMedia instance. This returns an object of an appropriate type for your platform. You can now assign this view to the layout manager by invoking the SetLocalView method, demonstrated below.


layoutManager.SetLocalView(localMedia.View);
layoutManager.setLocalView(localMedia.getView());
[layoutManager setLocalViewWithView:[localMedia view]];
layoutManager.setLocalView(view: localMedia.view())
layoutManager.setLocalView(localMedia.getView());


Finally, when you are done with a media session, make sure that you clean up. You can remove the view from the layout manager by invoking its UnsetLocalView method. You should invoke this at the end of every media session.


layoutManager.UnsetLocalView();
layoutManager.unsetLocalView();
[layoutManager unsetLocalView];
layoutManager.unsetLocalView()
layoutManager.unsetLocalView();

Changing Input Devices

IceLink favors convention over configuration and, by default, does not require you to specify an input device. Instead, it will use the first available device on a system. Most of the time, this is fine, as most users only have one input device. However, there are some scenarios where you will want the ability to select an alternate device, such as:

  • many mobile devices have a front and a rear camera
  • many desktop users have an integrated camera and a USB camera

Before you can switch inputs, you need to get a list of the available inputs for your device. This is accomplished by invoking either the GetAudioSourceInputs or the GetVideoSourceInputs method, both of which are inherited from RtcLocalMedia. These methods are asynchronous, as the information depends on the device's hardware and might not be available instantaneously. As such, you must wait for the result of the promise that these methods return. The code below demonstrates how to retrieve the video source inputs.


var localMedia = new LocalCameraMedia();

localMedia.GetVideoSourceInputs().Then((FM.IceLink.SourceInput[] inputs) =>
{
    Console.WriteLine("retrieved video inputs");
}).Fail(ex =>
{
    Console.WriteLine("failed to retrieve video inputs");
});
LocalCameraMedia localMedia = new LocalCameraMedia();


localMedia.getVideoSourceInputs().then((fm.icelink.SourceInput[] inputs) -> {
    System.out.println("retrieved video inputs");
}).fail(ex -> {
    System.out.println("failed to retrieve video inputs");
});
LocalCameraMedia* localMedia = [LocalCameraMedia new];

[[localMedia getVideoSourceInputs] thenWithResolveActionBlock:^(NSMutableArray *inputs) {
    NSLog(@"retrieved video inputs");
} rejectActionBlock:^(NSException *ex) {
    NSLog(@"failed to retrieve video inputs");
}];
var localMedia = LocalCameraMedia()


localMedia.getVideoSourceInputs().then(resolveAction: { (inputs:[FMIceLinkSourceInput]) in
    print("retrieved video inputs")
}).fail(rejectAction: { (ex:NSException) in 
    print("failed to retrieve video inputs")
})
var localMedia = new fm.icelink.LocalMedia(true, true);

localMedia.getVideoSourceInputs().then(function(inputs) {
    console.log("retrieved video inputs");
}).fail(function(ex) {
    console.log("failed to retrieve video inputs");
});

Using the retrieved SourceInput instances, you can then invoke either the ChangeAudioSourceInput method or the ChangeVideoSourceInput method. This changes the input device to the one specified by the SourceInput instance. Note that the input change is not immediate; you should wait for the result of the returned promise, because there is a possibility that the device cannot be changed. This can occur, for example, if the device is locked by another system process. You should plan for this scenario by adding a failure handler to the returned promise. The code samples below demonstrate this.


localMedia.ChangeVideoSourceInput(videoSourceInput).Then((Object result) =>
{
    Console.WriteLine("changed video input");
}).Fail(ex =>
{
    Console.WriteLine("failed to change video input");
});
localMedia.changeVideoSourceInput(videoSourceInput).then((Object result) -> {
    System.out.println("changed video input");
}).fail(ex -> {
    System.out.println("failed to change video input");
});
[[[localMedia changeVideoSourceInput:videoSourceInput] thenWithResolveAction: ^(NSObject* result) {
    NSLog(@"changed video input");
}] failWithRejectAction: ^(NSException* ex) {
    NSLog(@"failed to change video input");
}];
localMedia.changeVideoSourceInput(videoSourceInput).then(resolveAction: { (result:NSObject) in
    print("changed video input")
}).fail(rejectAction: { (ex:NSException) in 
    print("failed to change video input")
})
localMedia.changeVideoSourceInput(videoSourceInput).then(function(result) {
    console.log("changed video input");
}).fail(function(ex) {
    console.log("failed to change video input");
});

Note that in a previous version of this API, the syntax was slightly different for JavaScript. This is no longer the case - once again, the IceLink APIs are wholly unified across platforms.

Wrapping Up

You now have a complete local media implementation that captures data from a user's camera and microphone. The next step in getting your application up and running is to learn how to receive remote media. Refer to the Receiving Remote Media section for step-by-step instructions on how to accomplish this.
