Why is video not playing on recording of remote stream?
https://stackoverflow.com/questions/61022341/why-is-video-not-playing-on-recording-of-remote-stream
This works without a problem. If I replace the local stream with a remote stream (same code, only received over WebRTC), I see the first frame and nothing more... No errors... just one frame.
@guest271314 (Author)

@cracker0dks

Thanks for your response.
I tried it like this:

var mediaEl = $('<video autoplay="autoplay"></video>'); //Remote stream directly played
mediaEl.appendTo("body")
mediaEl[0].srcObject = stream; //This is the remote stream

var mediaEl2 = $('<video autoplay="autoplay"></video>'); //Remote stream after mediaRecorder
mediaEl2.appendTo("body")

var mediaSource = new MediaSource();
mediaEl2[0].src = window.URL.createObjectURL(mediaSource);
var sourceBuffer;
mediaSource.addEventListener('sourceopen', function () {
    sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs=vp8');
    // var curMode = sourceBuffer.mode;
    // if (curMode == 'segments') {
    // 	console.log("SET SEQ")
    // 	sourceBuffer.mode = 'sequence';
    // }
    console.log(sourceBuffer);
})

stream.getVideoTracks()[0].addEventListener("unmute", event => {

    var mediaRecorder = new MediaRecorder(stream, { mimeType: "video/webm; codecs=vp8" });
    console.log(mediaRecorder.mimeType)
    mediaRecorder.ondataavailable = function (e) {
        var reader = new FileReader();
        reader.onload = function (e) {
            sourceBuffer.appendBuffer(new Uint8Array(e.target.result));
        }
        reader.readAsArrayBuffer(e.data);

        if (mediaEl2.paused) {
            mediaEl2.play(0); // Start playing after 1st chunk is appended.
        }
    }
    mediaRecorder.start(200);
}, false);

If I play the stream directly it works. The second video still freezes on the first frame (even when I wait for "unmute"). Thanks for your help :)

@guest271314 (Author)

Where does stream originate? Does stream have audio tracks?

You can try the variations of code at guest271314/MediaFragmentRecorder#8.

Can you create a plnkr https://plnkr.co, including all code, to demonstrate?

@guest271314 (Author)

The issue could be related to Chromium/Chrome not setting the duration for WebM files produced by MediaRecorder. The second linked plnkr at guest271314/MediaFragmentRecorder#8 (comment) passes the Blob from the dataavailable event to ts-ebml first to set the duration, then to MediaSource.
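
For reference, a minimal sketch of that repair step, assuming ts-ebml's Decoder, Reader and tools.makeMetadataSeekable are available in the page (the exact API may differ between ts-ebml versions):

// Sketch only: rewrite the WebM metadata so the Blob reports a duration.
// Assumes ts-ebml's Decoder, Reader and tools are in scope (import name may vary by bundle).
async function setWebmDuration(blob) {
  const decoder = new Decoder();
  const reader = new Reader();
  const buffer = await blob.arrayBuffer();
  decoder.decode(buffer).forEach(element => reader.read(element));
  reader.stop();
  // Rebuild the metadata with Duration and Cues, then splice it onto the original body
  const refinedMetadata = tools.makeMetadataSeekable(
    reader.metadatas, reader.duration, reader.cues
  );
  const body = buffer.slice(reader.metadataSize);
  return new Blob([refinedMetadata, body], { type: blob.type });
}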

@guest271314 (Author)

Note, at

if (mediaEl2.paused) {
    mediaEl2.play(0); // Start playing after 1st chunk is appended.
}

both variables are referencing jQuery objects, not DOM elements.

Also, since autoplay is set, checking for paused and executing play() is not necessary.
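
For completeness, a minimal sketch of what was probably intended, using the underlying DOM element (index [0] of the jQuery object):

if (mediaEl2[0].paused) {
    mediaEl2[0].play(); // play() takes no arguments; with autoplay set this branch should not be needed
}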

@guest271314 (Author)

Given that a value is passed to start(), event.data (a Blob) at the BlobEvent could have size 0. That can be checked with

if (e.data.size > 0) { /* do stuff */ }

@cracker0dks

OK, thanks. I also did it with a canvas workaround in the meantime.

@guest271314 (Author)

Note, Chromium/Chrome and Firefox/Nightly implement the mute, unmute and ended attributes and events differently at both HTMLMediaElement and MediaStreamTrack, particularly for a MediaStream and MediaStreamTrack captured from <canvas>. This code outputs a similar result at Firefox 74, Nightly 76 and Chromium 82, save for consistency with regard to events at MediaStreamTrack and other implementation differences.

<!DOCTYPE html>

<html>
  <head>
    <link rel="stylesheet" href="lib/style.css" />
    <script src="lib/script.js"></script>
  </head>

  <body>
    <script>
      const cssColor = 'https://drafts.csswg.org/css-color/';

      fetch(cssColor)
        .then(response => response.text())
        .then(html => {
          const parser = new DOMParser();
          const doc = parser.parseFromString(html, 'text/html');
          const colorNames = Array.from(
            doc.querySelectorAll('.named-color-table dfn'),
            ({ textContent }) => textContent
          );
          return colorNames;
        })
        .then(colorNames => {
          var mediaRecorder;
          const canvas = document.createElement('canvas');
          canvas.width = canvas.height = 150;
          const ctx = canvas.getContext('2d');
          const stream = canvas.captureStream(0);
          const [videoTrack] = stream.getVideoTracks();
          videoTrack.onended = e => {
            console.log(e.target, e.type);
          };
          videoTrack.enabled = false;
          const handleUnmute = event => {
            mediaRecorder = new MediaRecorder(stream, {
              mimeType: 'video/webm; codecs=vp8',
            });
            console.log(mediaRecorder.mimeType);
            mediaRecorder.ondataavailable = function(e) {
              var reader = new FileReader();
              reader.onload = function(e) {
                sourceBuffer.appendBuffer(new Uint8Array(e.target.result));
              };
              reader.readAsArrayBuffer(e.data);
            };
            mediaRecorder.start(200);
          };
          videoTrack.addEventListener('unmute', handleUnmute, false);
          videoTrack.onmute = e => {
            if (e) console.log(e.type);
            videoTrack.enabled = true;
            raf = requestAnimationFrame(paint);
          };
          let raf;
          console.log(stream, colorNames);
          function paint() {
            if (colorNames.length) {
              let [color] = colorNames.splice(0, 1);
              ctx.fillStyle = color;
              ctx.fillRect(0, 0, canvas.width, canvas.height);

              raf = requestAnimationFrame(paint);
              if ('requestFrame' in videoTrack) videoTrack.requestFrame();
              else stream.requestFrame();
              console.log(videoTrack.readyState);
            } else {
              console.log(videoTrack.readyState);

              videoTrack.stop();
              mediaRecorder.stop();
              mediaRecorder.ondataavailable = null;
              mediaSource.endOfStream();
              cancelAnimationFrame(raf);

              raf = null;
              console.log(videoTrack.readyState);
            }
          }

          var mediaEl = document.createElement('video'); //Remote stream directly played
          mediaEl.autoplay = true;
          document.body.appendChild(mediaEl);
          mediaEl.srcObject = stream; //This is the remote stream

          var mediaEl2 = document.createElement('video'); //Remote stream after mediaRecorder
          mediaEl2.autoplay = true;
          mediaEl.controls = mediaEl2.controls = true;
          document.body.appendChild(mediaEl2);
          var mediaSource = new MediaSource();
          mediaEl2.src = window.URL.createObjectURL(mediaSource);
          var sourceBuffer;
          mediaEl2.onwaiting = e => {
            console.log(e.type);
          };

          mediaSource.addEventListener('sourceopen', function() {
            sourceBuffer = mediaSource.addSourceBuffer(
              'video/webm; codecs=vp8'
            );
            // var curMode = sourceBuffer.mode;
            // if (curMode == 'segments') {
            // 	console.log("SET SEQ")
            // 	sourceBuffer.mode = 'sequence';
            // }
            console.log(sourceBuffer);
            if (!videoTrack.enabled) {
              videoTrack.enabled = true;
              if (!videoTrack.muted) {
                videoTrack.onmute();
                handleUnmute();
              }
            }
          });
        })
        .catch(console.error);
    </script>
  </body>
</html>

plnkr https://plnkr.co/edit/51S0fpkdKfkOyyzH

@cracker0dks

Thanks for the code, I only need to get it working on chromium :)

@guest271314 (Author)

FWIW see https://stackoverflow.com/questions/61035109/html5-mediarecorder-capture-very-slow for other issues.

Re

(tested on Firefox, as playback on guestPlayer does not work with Chrome), is that normal?

The codecs must match. The MediaRecorder implementation at Chromium/Chrome does not support

'video/webm; codecs="vorbis,vp8"'

The MIME type passed to MediaRecorder and MediaSource should be explicitly 'video/webm; codecs=vp8', which is supported by both Chromium/Chrome and Firefox.
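
A quick way to verify support on both sides before wiring anything up (a sketch; both static methods exist in current Chromium and Firefox):

const mimeType = 'video/webm; codecs=vp8';
console.log(
  MediaRecorder.isTypeSupported(mimeType), // can this type be recorded?
  MediaSource.isTypeSupported(mimeType)    // can this type be appended to a SourceBuffer?
);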

The "forked" video to the guestPlayer is delayed for about 5 seconds

100 is being passed to start(), which sets the timeslice (in milliseconds) of data collected before dataavailable is fired. For a "real-time" stream, 0 could be passed to start(); as mentioned above, the handler should then check whether event.data.size (the Blob size) is greater than 0 before passing the data to appendBuffer().
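
A sketch of that check applied to the existing handler (same FileReader approach as above; sourceBuffer is assumed to have been created in the sourceopen listener):

mediaRecorder.ondataavailable = function (e) {
    if (e.data.size === 0) return; // ignore empty chunks
    var reader = new FileReader();
    reader.onload = function (ev) {
        sourceBuffer.appendBuffer(new Uint8Array(ev.target.result));
    };
    reader.readAsArrayBuffer(e.data);
};
mediaRecorder.start(0);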

@guest271314 (Author)

Thanks for the code, I only need to get it working on chromium :)

No worries. Ping if you have issues implementing the use case.

@guest271314 (Author)

Chromium fires the mute and unmute events on CanvasCaptureMediaStreamTrack. This part

if (!videoTrack.enabled) {
    videoTrack.enabled = true;
    if (!videoTrack.muted) {
        videoTrack.onmute();
        handleUnmute();
    }
}

of the code is included for Mozilla browsers.

@cracker0dks

Thanks for the code, I only need to get it working on chromium :)

No worries. Ping if you have issues implementing the use case.

I will, thanks for the advanced help again 👍

@cracker0dks commented Apr 5, 2020

Hey, I saw that on Firefox MediaSource has about a 2 second delay (lag) playing my stream. I only found this related to it: https://bugzilla.mozilla.org/show_bug.cgi?id=1340302, but no solution... do you know a way around that lag? On Chrome I have < 0.2 sec.
Thanks

@guest271314 (Author)

Is getting the code to work only at Chromium no longer a restriction as to the scope of the requirement?

A solution depends on the use case, the actual APIs used to output an extension of MediaStreamTrack, and the expected output.

Conversely, see this example where MediaSource audio output is 6 seconds faster than video output at Chromium: https://plnkr.co/plunk/6ULyOE.

Can you create a plnkr https://plnkr.co including all of the code to demonstrate the output described?

@cracker0dks commented Apr 5, 2020

Yeah, the recording part has to run on Chromium only, the render part on both (if possible).
On this site (plnkr), and also on jsfiddle, getUserMedia is not working on Firefox, because of the iframe they use I guess. Please paste this:

<html>

<body>
    <video class="real1" autoplay controls></video>
    <video class="real2" controls></video>

    <script>
        const constraints = { video: true, audio:false };

        const video1 = document.querySelector('.real1');
        const video2 = document.querySelector('.real2');

        var mediaSource = new MediaSource();
        video2.src = window.URL.createObjectURL(mediaSource);
        var sourceBuffer;
        mediaSource.addEventListener('sourceopen', function () {
            sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs=vp8');
            console.log(sourceBuffer);
        })

        var isFirst = true;
        var mediaRecorder;
        var i = 0;
        function handleSuccess(stream) {
            video1.srcObject = stream;
            mediaRecorder = new MediaRecorder(stream, { mimeType: "video/webm; codecs=vp8" });
            console.log(mediaRecorder.mimeType)
            mediaRecorder.ondataavailable = function (e) {
                var reader = new FileReader();
                reader.onload = function (e) {				
                    sourceBuffer.appendBuffer(new Uint8Array(e.target.result));
                }
                reader.readAsArrayBuffer(e.data);

                if (video2.paused) {
                    video2.play(0); // Start playing after 1st chunk is appended.
                }
            }
            mediaRecorder.start(20);
        }

        function handleError(error) {
            console.error('Reeeejected!', error);
        }
        navigator.mediaDevices.getUserMedia(constraints).
            then(handleSuccess).catch(handleError);
    </script>
</body>

</html>

into an index.html file and open it with Firefox and then Chrome, and watch the delay between the left and right video playback.

@guest271314 (Author)

Is using MediaSource part of the requirement?

@guest271314 (Author)

Why is 20 passed to start()? The FileReader and Uint8Array portions of the code could probably be omitted.
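
For example, a sketch of the handler without FileReader and Uint8Array, assuming Blob.prototype.arrayBuffer() is available (Chrome 76+, Firefox 69+):

mediaRecorder.ondataavailable = function (e) {
    if (e.data.size > 0) {
        // Blob.arrayBuffer() resolves with an ArrayBuffer that can be appended directly
        e.data.arrayBuffer().then(function (buffer) {
            sourceBuffer.appendBuffer(buffer);
        });
    }
};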

@cracker0dks commented Apr 5, 2020

Is there another way to play video chunks than MediaSource? Generating a new ObjectURL for every chunk at 30 fps of live data? I tried that, and it's no fun.
The 20 is just random, it could be anything. I think FileReader and Uint8Array are not the problem, but they could be removed, yes :)

@guest271314 (Author)

Can confirm the apparent delay even when using

mediaRecorder.start(0);
function paint() {
    if (video1.srcObject.getVideoTracks()[0].readyState === "live")
        mediaRecorder.requestData();
    raf = requestAnimationFrame(paint);
}

Is there a restriction against

  video1.srcObject = stream;
  mediaRecorder = new MediaRecorder(stream, { mimeType: "video/webm; codecs=vp8" });
  video2.srcObject = mediaRecorder.stream;

?

@guest271314 (Author)

Or, just video2.srcObject = stream?

What is meant by

play video chunks

?

Yes; images could be streamed and played back as a video. See also w3c/webrtc-encoded-transform#5.
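
One hedged sketch of that idea, assuming image Blobs arrive over a WebSocket (the endpoint URL here is hypothetical) and are painted onto a canvas whose captured stream feeds a video element:

// Sketch only: turn a sequence of received images into a playable video
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');
const video = document.querySelector('video');
video.srcObject = canvas.captureStream(); // plays whatever is drawn below
const ws = new WebSocket('wss://example.com/frames'); // hypothetical endpoint
ws.onmessage = async ({ data }) => {
    const bitmap = await createImageBitmap(data); // data is an image Blob
    canvas.width = bitmap.width;
    canvas.height = bitmap.height;
    ctx.drawImage(bitmap, 0, 0);
};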

How are the video chunks generated and stored in the application?

What is the actual complete use case?

@guest271314 (Author)

getUserMedia() works at jsfiddle here. Note, for testing you can launch Chrome, Chromium with

$ chromium-browser --user-data-dir=/path/to/local/folder/ --use-fake-device-for-media-stream --use-fake-ui-for-media-stream --disable-gesture-requirement-for-media-playback

Note, technically it is possible to capture (all) images and audio as Uint8Array and Float32Array and stream that data, or pass SDP information by postMessage().

Can you describe what is meant by

the render part on both (if possible)

?

@cracker0dks commented Apr 6, 2020

Or, just video2.srcObject = stream?

Not a solution.
Here is the whole use case / story: I've built a conference tool with WebRTC (SFU style) in NodeJS with node-webrtc, and it was working great up to about 10 users. But if you have more than 10 users, SFU will not work well anymore, because you'll have N*N streams and the CPU load is too much. So I moved from running the SFU in Node to Chromium (in puppeteer), because this way I'm able to use the Web Audio API. I've changed the audio to MCU, so every user only has one downstream of audio (video is still SFU). Right now, the server can handle 200 users (audio only) in one conference instead of 10 -> My project page here.

So I need to change the video as well, because the video is transcoded for every WebRTC connection on its own; if you screenshare (1080p) and have 50 subscribers, the server has to transcode this video 50 times... that will not work.

My plan is:

  1. The user sends video via WebRTC to the server (Chromium running in puppeteer), so we keep the WebRTC features for the uploader
  2. Record the WebRTC video stream chunks on the server
  3. Distribute them back over WebSockets instead of transcoding with WebRTC for every user, and render the chunks inside the video element via MediaSource on the client

I know I'll lose some WebRTC features downstream this way, but I don't see a better solution (trade-off). My prototype with this approach is working, but with the delay on Firefox.

Sending captured images back is not working on (bad) internet connections because of the file size.

I also thought of creating an ffmpeg LHLS stream, but I think the delay is more than the ~0.2 seconds I've got with my approach so far. Maybe I'll try it anyway just to see the difference :)

Thanks for your help

@guest271314 (Author)

Have you tried passing the SDP to clients from the server, https://sourcey.com/articles/webrtc-native-to-browser-video-streaming-example, instead of transcoding the video? If the client has the SDP, it should be able to get the MediaStream directly.

@cracker0dks

Can you describe that? You mean sending the SDP for the client (sending the video) to the other clients so all have the same codec (on signaling)? The video is also transcoded on bad connections (bitrate), or can/should I disable that? Do I really need the C++ lib you've linked if it is "just" an SDP thing?
Side question: do you think changing the codec to H.264 will improve performance, because maybe it is accelerated on the server hardware (the server is not really state of the art)?

@guest271314 (Author)

If you create an RTCPeerConnection on one tab and create an offer, then create an RTCPeerConnection on another tab, pass the offer to that tab and create an answer, it should be possible to establish a connection, e.g., https://github.com/ldecicco/webrtc-demos; https://stackoverflow.com/questions/54980799/webrtc-datachannel-with-manual-signaling-example-please; https://stackoverflow.com/questions/20607002/webrtc-how-stun-server-feedbacks-the-sdp-and-ice-candidates; https://webrtchacks.com/the-minimum-viable-sdp/.
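
A minimal sketch of that manual offer/answer exchange, assuming some signaling channel (here a hypothetical sendToPeer()/onPeerMessage() pair, e.g. a WebSocket) carries the serialized descriptions and ICE candidates:

// Sketch only: manual SDP exchange between two peers
const pc = new RTCPeerConnection();
pc.onicecandidate = ({ candidate }) => {
    if (candidate) sendToPeer({ candidate }); // hypothetical transport
};
// Caller side
async function call(stream) {
    stream.getTracks().forEach(track => pc.addTrack(track, stream));
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    sendToPeer({ sdp: pc.localDescription });
}
// Both sides
onPeerMessage(async ({ sdp, candidate }) => {
    if (sdp) {
        await pc.setRemoteDescription(sdp);
        if (sdp.type === 'offer') {
            const answer = await pc.createAnswer();
            await pc.setLocalDescription(answer);
            sendToPeer({ sdp: pc.localDescription });
        }
    } else if (candidate) {
        await pc.addIceCandidate(candidate);
    }
});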

  1. User sends video via WebRTC to the server (chromium running in puppeteer), so we have the webRTC features for the uploader
  2. record the webRTC video stream chunks on the server

If a user sends video to a user via WebRTC, why does the video need to be recorded? For the TODO

Recording of Audio/Video (Prototype working)
?

The MediaStream from client to server can be streamed to other peers. You already have the server operable.

Side question: do you think changing the codec to H.264 will improve performance, because maybe it is accelerated on the server hardware (the server is not really state of the art)?

VP8 and VP9 are, in general, lossy encoding algorithms. H.264 should always give the same result for recordings of the same input video https://plnkr.co/edit/021iNc?preview. I am not sure about improving performance. You would need to run those tests and measure the result.

If the 2 second delay at Firefox is the only issue, Firefox users can be notified of that, correct?

@cracker0dks

If you create an RTCPeerConnection on one tab

The connection part is not a problem. The number of people is.

If a user sends video to a user via WebRTC, why does the video need to be recorded? For the TODO

Just another feature, so that people can record their conference calls (not related to this problem).

The MediaStream from client to server can be streamed to other peers. You already have the server operable.

Yes, that is how I do it at the moment. Above 20 people the CPU load is just too much; everything else is working fine.

If the 2 second delay at Firefox is the only issue, Firefox users can be notified of that, correct?

Drawing on a canvas works without delay (https://webrtchacks.github.io/samples/src/content/wasm/libvpx/), maybe I'll do that on Firefox. I also have problems with my current solution if I want to start two streams... for some reason MediaSource can only be there once?! (but I have to look a bit deeper into this). And there seems to be a problem if someone joins midstream; I need to find a way around that (or restart the recording?)

@guest271314 (Author)

for some reason MediaSource can only be there once?!

Not certain what is being described?

And there seems to be a problem if someone joins midstream; I need to find a way around that (or restart the recording?)

Joined how? Video and audio? You can use OfflineAudioContext, or ChannelMerger and ChannelSplitter, to mix multiple audio streams so that they are all recorded: https://stackoverflow.com/questions/40570114/is-it-possible-to-mix-multiple-audio-files-on-top-of-each-other-preferably-with.
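
A hedged sketch of mixing several incoming streams' audio into one recordable stream with the Web Audio API (streamA and streamB are placeholders for remote MediaStreams):

const audioContext = new AudioContext();
const destination = audioContext.createMediaStreamDestination();
// Route each remote stream's audio into the same destination node
[streamA, streamB].forEach(remoteStream => {
    audioContext.createMediaStreamSource(remoteStream).connect(destination);
});
// destination.stream now carries the mixed audio and can be recorded
const recorder = new MediaRecorder(destination.stream);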

If a user exists and another user enters the communication, one approach could be to have one MediaStream that is recorded and use RTCRtpSender.replaceTrack() to replace audio and video tracks in "midstream".
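
A sketch of that track swap on an existing connection (peerConnection and newStream are placeholders for the existing RTCPeerConnection and the entering user's stream):

// Swap the outgoing video track without renegotiating the connection
const [newVideoTrack] = newStream.getVideoTracks();
const videoSender = peerConnection
    .getSenders()
    .find(sender => sender.track && sender.track.kind === 'video');
if (videoSender) {
    videoSender.replaceTrack(newVideoTrack); // returns a Promise
}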

@cracker0dks

WebRTC, the standard way (Client->Server->Client), is all working fine (video and audio). The only problem is the server load for video with many people.
So to solve this I record the video on the server with MediaRecorder, send the chunks back to the clients via WebSockets (this way I avoid WebRTC video transcoding), and render it via MediaSource. That is working, but with two little problems at the moment:

  1. If someone joins the session while the video is already running, he is not able to render the stream (because he is not starting on a keyframe or the like)
  2. If I start the video (everything is working), stop it, and start again, I get something like "MediaStream is closed" on the client, even if I do a "new MediaStream" on every new stream start (but I will change that to the canvas solution anyway, I think).
