Gist: alexciarlillo/4b9f75516f93c10d7b39282d10cd17bc
// WebRTC loopback trick: route a processed MediaStream through a pair of
// local RTCPeerConnections so the browser re-applies echo cancellation.
let rtcConnection = null;
let rtcLoopbackConnection = null;
const loopbackStream = new MediaStream(); // this is the stream you will read from for actual audio output

const offerOptions = {
  offerToReceiveAudio: false,
  offerToReceiveVideo: false,
};

// initialize the RTC connections
rtcConnection = new RTCPeerConnection();
rtcLoopbackConnection = new RTCPeerConnection();

// exchange ICE candidates directly between the two local connections
rtcConnection.onicecandidate = e =>
  e.candidate && rtcLoopbackConnection.addIceCandidate(new RTCIceCandidate(e.candidate));
rtcLoopbackConnection.onicecandidate = e =>
  e.candidate && rtcConnection.addIceCandidate(new RTCIceCandidate(e.candidate));

// collect the looped-back tracks into the output stream
rtcLoopbackConnection.ontrack = e =>
  e.streams[0].getTracks().forEach(track => loopbackStream.addTrack(track));

// set up the loopback; `stream` is the processed stream coming out of a
// Web Audio API destination node
stream.getTracks().forEach(track => rtcConnection.addTrack(track, stream));

const offer = await rtcConnection.createOffer(offerOptions);
await rtcConnection.setLocalDescription(offer);
await rtcLoopbackConnection.setRemoteDescription(offer);

const answer = await rtcLoopbackConnection.createAnswer();
await rtcLoopbackConnection.setLocalDescription(answer);
await rtcConnection.setRemoteDescription(answer);
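To actually hear the echo-cancelled audio, you play back `loopbackStream` rather than the original processed stream. A minimal sketch, assuming an `<audio>` element is present in the page (the element id here is illustrative):

```javascript
// Play the echo-cancelled loopback stream instead of the original stream.
// Assumes an <audio id="output"> element exists in the page (illustrative).
const audioEl = document.getElementById('output');
audioEl.srcObject = loopbackStream;
audioEl.play();
```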
@weepy I am referring to: https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamAudioDestinationNode so you run your audio through whatever series of nodes you want in Web Audio, and the last one is a MediaStreamAudioDestinationNode. This node then provides a regular audio stream of the processed audio that can be fed into the peer connection.
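A minimal sketch of such a chain, assuming microphone input via `getUserMedia` and using a gain node as a stand-in for whatever processing you actually do (node names and values here are illustrative):

```javascript
// Run microphone audio through a Web Audio graph and capture the result
// as a regular MediaStream. Browser-only; assumes a secure context.
const audioContext = new AudioContext();

const micStream = await navigator.mediaDevices.getUserMedia({ audio: true });
const source = audioContext.createMediaStreamSource(micStream);

// Any processing chain you like; a gain node stands in for real processing.
const gain = audioContext.createGain();
gain.gain.value = 0.8;

// The MediaStreamAudioDestinationNode is the last node in the chain.
const destination = audioContext.createMediaStreamDestination();
source.connect(gain);
gain.connect(destination);

// destination.stream is the processed MediaStream to feed into the
// RTCPeerConnection loopback.
const processedStream = destination.stream;
```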
@alexciarlillo, I've tried your workaround to overcome this issue (electron/electron#27337) that was observed on the Electron framework (which has Chromium inside). The problem is that on Electron, echo cancellation is currently not supported for desktop audio capturing. So if A calls B and shares the screen including audio, B will hear their own voice when talking, because it is not cancelled out of the microphone stream on A. Unfortunately, your workaround did not help. Can you tell whether it should work in that case at all? In an earlier post you mentioned that echo cancellation must be enabled for the stream being input into your code. But that's the problem: echo cancellation cannot be enabled in this case. My understanding was that your loopback workaround would achieve exactly that (adding echo cancellation to a stream which is not echo-cancelled for some reason).
Thanks!
Bernd
@berkon Unfortunately it does not work this way. All this does is re-add echo cancellation to a stream which has been passed through Web Audio and thus lost its echo cancellation (but all the input data needed to apply cancellation still exists). Chromium does not consider screen-capture audio for echo cancellation, so this won't work. Echo cancellation works by running the algorithm against an input audio stream and an output audio stream. In this case there is no "input" to be considered, as Chromium totally bypasses it for screen share, and thus no echo cancellation from that source can be applied to the output stream. It looks like there are patches to work around this in the issue you linked to, though.
@alexciarlillo can you give me some demo which can run as webpage?
@liyoubdu sorry I do not have time to do this right now (I am just returning to work after paternity leave and have much to catch up on). But if you do come up with a single page working example please do share it back here for others to reference.
What do you mean by "Web Audio API destination node"? Is this just the final node before going to context.destination?