💭
Whole universe is my outlet -Prodigy of Mobb Deep

guest271314

View upgrade_18.04.md

For every error printed following

$ sudo apt upgrade

in the form of

dpkg: error processing archive /var/cache/apt/archives/libgl1-mesa-dri_19.2.8-0ubuntu0~18.04.3_i386.deb (--unpack):
 unable to make backup link of './usr/lib/i386-linux-gnu/dri/vmwgfx_dri.so' before installing new version: Input/output error
Reinstalling /etc/drirc that was moved away
Errors were encountered while processing:

View native-file-system-notifications.txt
> A huge huge huge part of file-system management & capabilities as we use them is being notified about when something changes. Without this, awful awful awful terrible badly performing highly costly & deeply inadequate hacks grow up like weeds, all over the place, as bad terrible coders do an awful job of probing around to figure out "what has changed?". Huge amounts of engineering effort have gone into trying to tackle this issue. Works like node-watch (attempting to fill the painful dx gaps in node's fs.watch being non recursive) & watchman grow like weeds, consuming developer year after developer year of time to make maintain & sustain.
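A minimal sketch of the gap being described, assuming Node.js and its built-in fs.watch() (the watched path is illustrative):

// Node's fs.watch() accepts { recursive: true }, but that option has
// historically been supported only on macOS and Windows, which is the gap
// wrappers such as node-watch and watchman work around.
const fs = require("fs");

try {
  fs.watch("./project", { recursive: true }, (eventType, filename) => {
    // eventType is "rename" or "change"; filename may be null on some platforms
    console.log(eventType, filename);
  });
} catch (err) {
  // On platforms without recursive support this throws, and callers fall back
  // to walking the tree and watching each directory individually.
  console.error("recursive watch unavailable:", err.message);
}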
View gist:cde8c5e0cd1b786b3c3027fde2b8dd57
@pinoyyid Re https://stackoverflow.com/questions/61550581/can-i-http-poll-or-use-socket-io-from-a-service-worker-on-safari-ios, https://stackoverflow.com/questions/61602441/what-are-the-restrictions-on-what-can-and-cannot-be-done-in-a-a-service-worker `EventSource` could probably be used, see https://stackoverflow.com/q/42475492.
iOS does not support `FetchEvent`, where `event.respondWith()` could otherwise be used for continuous polling and for keeping the `ServiceWorker` alive. Alternatively, a `Worker` or `SharedWorker` can be used to achieve the same result.
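A minimal sketch of the Worker alternative, assuming a hypothetical server-sent-events endpoint at /events:

// worker.js - a dedicated Worker that listens for server-sent events.
// The /events endpoint name is an assumption; any SSE-capable URL works.
const source = new EventSource("/events");
source.onmessage = (event) => {
  // Forward each server-sent message to the page that spawned the Worker.
  postMessage(event.data);
};

// main page
// const worker = new Worker("worker.js");
// worker.onmessage = (event) => console.log("from worker:", event.data);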
guest271314 / gist:661dfd623a0043ca77c7947dabd7fc9c
Created May 9, 2020
Encode AudioBuffer with Opus (or other codec) in Browser
So the main question is: How can I encode the AudioBuffer (and decode it at the receiver)? Is there an API or library? Can I get the encoded buffer from another API in the browser?
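One possible approach, sketched here and not necessarily the answer given in the gist: play the AudioBuffer into a MediaStreamAudioDestinationNode and record that stream with MediaRecorder using an Opus-capable container.

// Sketch: encode an existing AudioBuffer to Opus (in a WebM container)
// using MediaRecorder. `audioBuffer` is assumed to already exist, and the
// AudioContext may need a user gesture (or ctx.resume()) to start.
async function encodeAudioBufferToOpus(audioBuffer) {
  const ctx = new AudioContext();
  const destination = ctx.createMediaStreamDestination();
  const source = ctx.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(destination);

  const recorder = new MediaRecorder(destination.stream, {
    mimeType: "audio/webm;codecs=opus",
  });
  const chunks = [];
  recorder.ondataavailable = (event) => chunks.push(event.data);

  const done = new Promise((resolve) => (recorder.onstop = resolve));
  recorder.start();
  source.start();
  source.onended = () => recorder.stop();
  await done;

  // A receiver can typically decode this Blob with decodeAudioData()
  // after reading it into an ArrayBuffer.
  return new Blob(chunks, { type: "audio/webm;codecs=opus" });
}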
guest271314 / gist:d522cbab584867ba5a50c392e20800b2
Created May 3, 2020
Playing audio files in unsupported codecs through <audio> tag
Playing audio files in unsupported codecs through <audio> tag https://stackoverflow.com/q/61502237
https://github.com/Kagami/mpv.js/
guest271314 / destructuring_get_properties_and_object.js
Created Apr 19, 2020
Is it possible to destructure some properties and also assign/pass the entire object within a single assignment expression of a map callback function?
// https://stackoverflow.com/q/61298632
// map() passes only three arguments (element, index, array), so the fourth
// parameter always falls back to its default, which destructures the element
// (person) itself, yielding both the whole object and its properties.
let o = [{ firstname: 0, lastname: 1 }, { firstname: 2, lastname: 3 }];

const handleMapCallback = (person, _, __, { lastname, firstname } = person) => {
  console.log(person, lastname, firstname);
};

o.map(handleMapCallback);
guest271314 / gist:c38042935db4e0131c1e0b68ca59f4ac
Created Apr 4, 2020
Why is video not playing on recording of remote stream?
https://stackoverflow.com/questions/61022341/why-is-video-not-playing-on-recording-of-remote-stream
This works without a problem. If I replace the local stream with a remote stream, same code, only received over WebRTC, I see the first frame and nothing more... No errors... just one frame.
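One common workaround, sketched here as an assumption rather than the gist's actual resolution: a remote track may stay muted until frames arrive, so start MediaRecorder only after the track unmutes.

// Sketch: record a remote WebRTC stream only once its video track is
// actually producing frames. `pc` is an existing RTCPeerConnection.
pc.ontrack = ({ track, streams: [stream] }) => {
  const startRecording = () => {
    const recorder = new MediaRecorder(stream);
    const chunks = [];
    recorder.ondataavailable = (event) => chunks.push(event.data);
    recorder.onstop = () => {
      const blob = new Blob(chunks, { type: recorder.mimeType });
      console.log("recorded", blob);
    };
    recorder.start();
  };
  // Remote tracks are often muted until the first frames arrive.
  if (track.muted) {
    track.onunmute = startRecording;
  } else {
    startRecording();
  }
};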
guest271314 / speech_synthesis_to_a_mediastreamtrack_or_how_to_execute_arbitrary_shell_commands_using_inotify_tools_and_devtools_snippets.txt
Last active Jan 1, 2020
SpeechSynthesis *to* a MediaStreamTrack or: How to execute arbitrary shell commands using inotify-tools and DevTools Snippets
The requirement described at Support SpeechSynthesis *to* a MediaStreamTrack (https://github.com/WICG/speech-api/issues/69)
is possible at Firefox and Nightly due to those browsers exposing `"Monitor of <device>"` as a device that can be selected
when `navigator.getUserMedia()` is executed. That provides a means to capture audio being output to speakers or headphones
without also capturing microphone input.
That output is also possible at Chrome/Chromium by following the procedure described at "This is again recording from microphone,
not from audiooutput device" (https://github.com/guest271314/SpeechSynthesisRecorder/issues/14#issuecomment-527020198).
After filing the issues Support capturing audio output from sound card (https://github.com/w3c/mediacapture-main/issues/629) and
Clarify getUserMedia({audio:{deviceId:{exact:<audiooutput_device>}}}) in this specification mandates capability
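A sketch of the Firefox route described above, assuming a "Monitor of <device>" loopback entry is exposed by enumerateDevices() (typical on Linux with PulseAudio):

// Select the "Monitor of <device>" loopback device so speaker/headphone
// output is captured without also capturing microphone input.
async function captureSpeakerOutput() {
  // A prior getUserMedia() call is usually needed before device labels
  // are populated.
  await navigator.mediaDevices.getUserMedia({ audio: true });
  const devices = await navigator.mediaDevices.enumerateDevices();
  const monitor = devices.find(
    (d) => d.kind === "audioinput" && d.label.startsWith("Monitor of")
  );
  if (!monitor) throw new Error("no monitor device exposed");
  return navigator.mediaDevices.getUserMedia({
    audio: { deviceId: { exact: monitor.deviceId } },
  });
}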
guest271314 / ESModuleImportExportFileProtocol.html
Created Dec 22, 2019
ES Module import export at file: protocol
<!DOCTYPE html>
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Tetris</title>
<script>
/*
ES Module import export at file: protocol
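The preview above is truncated. As an illustration only, and not necessarily the technique the full gist uses, module code can be loaded under the file: protocol by dynamically importing a data: URL, which sidesteps the cross-origin restrictions that block relative module specifiers on file:// pages:

// Sketch: dynamic import() of a data: URL works even on a page opened from
// disk, since no cross-origin fetch of a local file is involved.
const moduleCode = "export const answer = 42; export default (x) => x * 2;";
const url = "data:text/javascript;charset=utf-8," + encodeURIComponent(moduleCode);

import(url).then(({ answer, default: double }) => {
  console.log(answer, double(21)); // 42 42
});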
guest271314 / decodePixelsFromOffscreenCanvas.js
Created Nov 25, 2019
Convert OffscreenCanvas to ImageData using png.js (OffscreenCanvasRenderingContext2D not implemented at Firefox 70 or Nightly 72)
// Encode the OffscreenCanvas to a PNG Blob, decode the PNG's pixel data with
// png.js (the PNG constructor), and wrap the raw RGBA bytes in an ImageData.
const blob = await offscreen.toBlob({ type: "image/png" });
const png = new PNG(new Uint8Array(await blob.arrayBuffer()));
const imageData = new ImageData(
  new Uint8ClampedArray(png.decodePixels()),
  offscreen.width,
  offscreen.height
);
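A usage sketch, assuming the decoded ImageData is then painted onto an on-screen canvas:

// Paint the decoded pixels onto a visible canvas element.
const canvas = document.querySelector("canvas");
canvas.width = offscreen.width;
canvas.height = offscreen.height;
canvas.getContext("2d").putImageData(imageData, 0, 0);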