Migrating to DMO

In IceLink 3.1.1, we introduced the DirectX Media Objects (DMO) VoiceCaptureSource. The VoiceCaptureSource is a new audio capture option for both IceLink and LiveSwitch. It uses the built-in Voice Capture DMO in Windows to perform high-quality echo cancellation close to the hardware, resulting in clear audio with very little echo.

DMO and VMs

Note that in virtual environments DMO can still produce echo. For improved results, disable all enhancements on your virtual audio devices; for best results, avoid running in a VM altogether.

Implementing the DMO Voice Capture Source

To use the VoiceCaptureSource in your project, follow two steps. First, replace whatever audio source you are currently using with the VoiceCaptureSource, located in either FM.IceLink.Dmo or FM.LiveSwitch.Dmo. This will normally be in your LocalMedia class. For example, a previous CreateAudioSource implementation may have returned an instance of NAudio.Source, as in the example below.

public class LocalCameraMedia : FM.IceLink.RtcLocalMedia<...>
{
    public override FM.IceLink.AudioSource CreateAudioSource(FM.IceLink.AudioConfig config)
    {
        // Capture microphone audio with NAudio (no hardware-assisted echo cancellation).
        return new NAudio.Source(config);
    }
}


To use the new DMO VoiceCaptureSource, instead return an instance of FM.IceLink.Dmo.VoiceCaptureSource. Pass true to the VoiceCaptureSource constructor to enable acoustic echo cancellation. You can first check whether DMO is supported by using the Dmo.VoiceCaptureSource.IsSupported static method. If DMO is not supported, you can fall back to returning an instance of NAudio.Source, as seen below.


public class LocalCameraMedia : FM.IceLink.RtcLocalMedia<...>
{
    public override FM.IceLink.AudioSource CreateAudioSource(FM.IceLink.AudioConfig config)
    {
        if (Dmo.VoiceCaptureSource.IsSupported())
        {
            // Use the Voice Capture DMO; passing true enables acoustic echo cancellation.
            return new Dmo.VoiceCaptureSource(true);
        }
        else
        {
            // DMO is not available on this system; fall back to NAudio.
            return new NAudio.Source(config);
        }
    }
}
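

If you are using LiveSwitch rather than IceLink, the same pattern applies. The sketch below is a hedged example that assumes the FM.LiveSwitch.Dmo.VoiceCaptureSource and FM.LiveSwitch.NAudio.Source types mirror the IceLink API shown above; check your own LocalMedia class for the exact CreateAudioSource signature.

public class LocalMedia : FM.LiveSwitch.RtcLocalMedia<...>
{
    public override FM.LiveSwitch.AudioSource CreateAudioSource(FM.LiveSwitch.AudioConfig config)
    {
        // Assumes the LiveSwitch DMO types mirror the IceLink ones shown above.
        if (FM.LiveSwitch.Dmo.VoiceCaptureSource.IsSupported())
        {
            return new FM.LiveSwitch.Dmo.VoiceCaptureSource(true);
        }
        else
        {
            return new FM.LiveSwitch.NAudio.Source(config);
        }
    }
}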


The second step is to replace any reference to FM.IceLink.AudioProcessing.AecProcessor. Previously, your implementation of AecContext would likely have simply returned an instance of the above-mentioned AecProcessor, like so:


public class AecContext : FM.IceLink.AecContext
{
    protected override FM.IceLink.AecPipe CreateProcessor()
    {
        // Software echo cancellation: the tail length accounts for the combined
        // buffer delay of the NAudio sink and source.
        var config = new FM.IceLink.AudioConfig(16000, 1);
        var tailLength = FM.IceLink.NAudio.Sink.GetBufferDelay(config) + FM.IceLink.NAudio.Source.GetBufferDelay(config);

        return new FM.IceLink.AudioProcessing.AecProcessor(config, tailLength);
    }

    protected override FM.IceLink.AudioSink CreateOutputMixerSink(FM.IceLink.AudioConfig config)
    {
        return new FM.IceLink.NAudio.Sink(config);
    }
}


Instead, return an instance of FM.IceLink.Dmo.AecProcessor. Again, you can check whether DMO is supported by using the Dmo.AecProcessor.IsSupported static method. If DMO is not supported, you can fall back to NAudio again, like so:


public class AecContext : FM.IceLink.AecContext
{
    protected override FM.IceLink.AecPipe CreateProcessor()
    {
        if (Dmo.AecProcessor.IsSupported())
        {
            // The DMO processor performs echo cancellation close to the hardware,
            // so no tail length needs to be calculated here.
            return new Dmo.AecProcessor();
        }
        else
        {
            // DMO is not available; fall back to the software AEC with a tail length
            // covering the NAudio sink and source buffer delays.
            var config = new FM.IceLink.AudioConfig(16000, 1);
            var tailLength = FM.IceLink.NAudio.Sink.GetBufferDelay(config) + FM.IceLink.NAudio.Source.GetBufferDelay(config);

            return new FM.IceLink.AudioProcessing.AecProcessor(config, tailLength);
        }
    }
}
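

For reference, the AecContext is typically supplied when you construct your local media. The snippet below is a minimal sketch that assumes your RtcLocalMedia subclass exposes a constructor forwarding the AecContext to the base class, as in the IceLink sample applications; adjust it to match your own constructor.

public class LocalCameraMedia : FM.IceLink.RtcLocalMedia<...>
{
    // Hypothetical constructor: forwarding an AecContext to the base class wires the
    // echo-cancellation pipeline (DMO when supported, NAudio otherwise) into the media stack.
    public LocalCameraMedia(bool disableAudio, bool disableVideo, FM.IceLink.AecContext aecContext)
        : base(disableAudio, disableVideo, aecContext)
    {
    }
}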


Wrapping Up

You now know how to use the new DMO library with your application. You should notice a significant improvement in echo cancellation. Once again, make sure that you continue to provide the NAudio fallback, as not all systems support DMO.