Why is video not playing on recording of remote stream?
https://stackoverflow.com/questions/61022341/why-is-video-not-playing-on-recording-of-remote-stream
This works without a problem. If I replace the local stream with a remote stream, same code, only received over WebRTC, I see the first frame and nothing more... No errors... just one frame.
@guest271314 (Author)

If you create an RTCPeerConnection in one tab and create an offer, then create an RTCPeerConnection in another tab, pass the offer to it, and create an answer, it should be possible to establish a connection. See, e.g., https://github.com/ldecicco/webrtc-demos; https://stackoverflow.com/questions/54980799/webrtc-datachannel-with-manual-signaling-example-please; https://stackoverflow.com/questions/20607002/webrtc-how-stun-server-feedbacks-the-sdp-and-ice-candidates; https://webrtchacks.com/the-minimum-viable-sdp/.
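The manual-signaling flow above can be sketched roughly as follows; there is no signaling server here, you relay the serialized descriptions between tabs yourself (e.g. copy/paste through a textarea), and all function names are placeholders, not a real API:

```javascript
// Pure helper: the minimal fields worth copying between tabs.
function serializeDescription({ type, sdp }) {
  return JSON.stringify({ type, sdp });
}

// Tab A: create an offer, then copy serializeDescription(pcA.localDescription)
// out by hand once ICE gathering settles.
async function tabACreateOffer() {
  const pcA = new RTCPeerConnection();
  pcA.createDataChannel('chat'); // gives the SDP something to negotiate
  await pcA.setLocalDescription(await pcA.createOffer());
  return pcA;
}

// Tab B: paste the offer in, copy the answer back out the same way.
async function tabBAnswer(offerFromTabA) {
  const pcB = new RTCPeerConnection();
  await pcB.setRemoteDescription(offerFromTabA);
  await pcB.setLocalDescription(await pcB.createAnswer());
  return pcB;
}

// Tab A, final step: apply the pasted answer to complete the handshake.
async function tabAComplete(pcA, answerFromTabB) {
  await pcA.setRemoteDescription(answerFromTabB);
}
```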

  1. User sends video via WebRTC to the server (chromium running in puppeteer), so we have the webRTC features for the uploader
  2. record the webRTC video stream chunks on the server

If the user sends video to another user via WebRTC, why does the video need to be recorded? For the TODO

> Recording of Audio/Video (Prototype working)

?

The MediaStream from client to server can be streamed to other peers. You already have the server operable.

Side question: do you think changing the codec to H.264 will improve performance, because maybe it is accelerated on the server hardware (the server is not really state of the art)?

VP8 and VP9 are, in general, lossy encoding algorithms. H.264 should always give the same value for recordings of the same input video: https://plnkr.co/edit/021iNc?preview. Am not sure about improving performance. You would need to run those tests and measure the result.
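A minimal sketch of requesting H.264 from MediaRecorder by probing mimeType support; whether H.264 encoding is actually hardware-accelerated depends on the server's hardware, so measure CPU load rather than assume a win. `stream` is a placeholder for the MediaStream being recorded:

```javascript
// Pure helper: first candidate the predicate accepts, or '' if none.
// In a browser the predicate is MediaRecorder.isTypeSupported.
function pickMimeType(candidates, isSupported) {
  return candidates.find(isSupported) || '';
}

function startRecorder(stream) {
  const mimeType = pickMimeType(
    ['video/webm;codecs=h264', 'video/webm;codecs=vp9', 'video/webm;codecs=vp8'],
    (t) => MediaRecorder.isTypeSupported(t)
  );
  const recorder = new MediaRecorder(stream, { mimeType });
  recorder.start(1000); // emit a chunk roughly every second
  return recorder;
}
```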

If the 2 second delay at Firefox is the only issue, Firefox users can be notified of that, correct?

@cracker0dks

> If you create an RTCPeerConnection on one tab

The connection part is not a problem. The number of people is.

> If the user sends video to another user via WebRTC, why does the video need to be recorded? For the TODO

Just another feature so that people can record their conference calls (not related to this problem).

> The MediaStream from client to server can be streamed to other peers. You already have the server operable.

Yes, that is how I do it at the moment. Above 20 people the CPU load is just too much; everything else is working fine.

> If the 2 second delay at Firefox is the only issue, Firefox users can be notified of that, correct?

Drawing on a canvas works without delay: https://webrtchacks.github.io/samples/src/content/wasm/libvpx/. Maybe I'll do that on Firefox. I also have problems with my current solution if I want to start two streams... for some reason MediaSource can only be there once?! (But I have to look a bit deeper into this.) And there seems to be a problem if someone joins midstream; I need to find a way around that (or restart the recording?).
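The canvas path mentioned above can be sketched as a simple render loop: decoded frames are painted onto a `<canvas>` each animation frame, with none of MediaSource's buffering delay on the display side. `video` and `canvas` are placeholders for your elements:

```javascript
// Pure helper: scale a source to fit the canvas while keeping aspect ratio.
function fitContain(srcW, srcH, dstW, dstH) {
  const scale = Math.min(dstW / srcW, dstH / srcH);
  return { w: Math.round(srcW * scale), h: Math.round(srcH * scale) };
}

function startDrawing(video, canvas) {
  const ctx = canvas.getContext('2d');
  function draw() {
    if (video.videoWidth) { // skip until the first frame has dimensions
      const { w, h } = fitContain(video.videoWidth, video.videoHeight,
                                  canvas.width, canvas.height);
      ctx.drawImage(video, 0, 0, w, h);
    }
    requestAnimationFrame(draw);
  }
  requestAnimationFrame(draw);
}
```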

@guest271314 (Author)

> for some reason MediaSource can only be there once?!

Not certain what is being described?

> And there seems to be a problem if someone joins midstream, I need to find a way around that (or restart the recording?)

Joined how? Video and audio? You can use OfflineAudioContext or ChannelMergerNode and ChannelSplitterNode to produce multiple audio streams that are all recorded: https://stackoverflow.com/questions/40570114/is-it-possible-to-mix-multiple-audio-files-on-top-of-each-other-preferably-with.
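One way to sketch that mixing idea with Web Audio: route every remote audio MediaStream into a shared MediaStreamAudioDestinationNode, so a single MediaRecorder captures all speakers, including late joiners. The stream variables are placeholders:

```javascript
// Pure helper: which incoming streams are not connected yet
// (avoids feeding the same stream into the mix twice).
function newStreams(knownIds, incoming) {
  return incoming.filter((s) => !knownIds.has(s.id));
}

function createMixer() {
  const audioCtx = new AudioContext();
  const destination = audioCtx.createMediaStreamDestination();
  const knownIds = new Set();
  return {
    stream: destination.stream, // record this one stream with MediaRecorder
    add(streams) {
      for (const s of newStreams(knownIds, streams)) {
        knownIds.add(s.id);
        // Each remote stream becomes a source node feeding the shared mix.
        audioCtx.createMediaStreamSource(s).connect(destination);
      }
    },
  };
}
```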

If a user exits and another user enters the communication, one approach could be to have one MediaStream that is recorded and use RTCRtpSender.replaceTrack() to replace audio and video tracks in "midstream".
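A minimal sketch of that replaceTrack() approach: keep one long-lived sender per kind and swap the entering user's tracks in without renegotiation. `pc` and `newStream` are placeholders for an existing RTCPeerConnection and the new user's MediaStream:

```javascript
// Pure helper: locate the sender currently carrying a given kind of track.
function findSenderByKind(senders, kind) {
  return senders.find((s) => s.track && s.track.kind === kind);
}

async function swapTracks(pc, newStream) {
  for (const kind of ['audio', 'video']) {
    const sender = findSenderByKind(pc.getSenders(), kind);
    const [track] = kind === 'audio'
      ? newStream.getAudioTracks()
      : newStream.getVideoTracks();
    // replaceTrack swaps the outgoing track without renegotiating SDP.
    if (sender && track) await sender.replaceTrack(track);
  }
}
```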

@cracker0dks

WebRTC the standard way (Client->Server->Client) is all working fine (video and audio). The only problem is the server load with video for many people.
So to solve this I record the video on the server with MediaRecorder, send the chunks back to the clients via WebSockets (this way I avoid WebRTC video transcoding), and render it via MediaSource. That is working, but with two little problems at the moment:

  1. If someone joins the session while the video is already running, he is not able to render the stream (because he is not starting on a keyframe or so).
  2. If I start the video (everything is working), stop it, and start again, I get something like "MediaStream is closed" on the client, even if I do a "new MediaStream" on every new stream start (but I will change that to the canvas solution anyway, I think).
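The WebSocket-to-MediaSource path described above can be sketched as follows, including the append queue that SourceBuffer requires. Problem 1 is visible here: a viewer who joins midstream must still receive the WebM initialization segment and start at a keyframe, or playback freezes on the first frame. `ws` and `video` are placeholders for your WebSocket and `<video>` element:

```javascript
// Pure helper: appendBuffer() throws while a previous append is in flight,
// so queue when the buffer is busy or older chunks are still waiting.
const shouldQueue = (updating, pending) => updating || pending > 0;

function playChunks(ws, video) {
  const mediaSource = new MediaSource();
  video.src = URL.createObjectURL(mediaSource);
  mediaSource.addEventListener('sourceopen', () => {
    const sb = mediaSource.addSourceBuffer('video/webm;codecs=vp8,opus');
    const queue = [];
    sb.addEventListener('updateend', () => {
      // Drain queued chunks one at a time as each append completes.
      if (queue.length && !sb.updating) sb.appendBuffer(queue.shift());
    });
    ws.binaryType = 'arraybuffer';
    ws.onmessage = (e) => {
      if (shouldQueue(sb.updating, queue.length)) queue.push(e.data);
      else sb.appendBuffer(e.data);
    };
  });
}
```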
