time node --trace-opt --trace-deopt -e '
function bench() {
  const loops = 3000
  const objSize = 4 * 3                    // three 4-byte floats per object
  const i32StartOffset = loops * objSize   // float region first, then the int region
  const i32Size = 4 * 2
  const ab = new ArrayBuffer(i32StartOffset + i32Size)
  const f32 = new Float32Array(ab, 0)
  const i32 = new Int32Array(ab, i32StartOffset)
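The snippet above is cut off mid-setup. Here's a minimal self-contained sketch of how such a benchmark could continue, assuming a simple read/multiply/write loop over the views (the loop body is illustrative, not the original gist code):

function bench() {
  const loops = 3000
  const objSize = 4 * 3
  const i32StartOffset = loops * objSize
  const i32Size = 4 * 2
  const ab = new ArrayBuffer(i32StartOffset + i32Size)
  const f32 = new Float32Array(ab, 0)
  const i32 = new Int32Array(ab, i32StartOffset)
  for (let j = 0; j < 100; j++) {
    for (let i = 0; i < loops * 3; i++) {
      f32[i] = f32[i] * 1.0001 + 1   // pure read/multiply/write, no allocation
    }
  }
  i32[0] = f32[0] | 0                // keep the result observable so the loop is not dead code
  return i32[0]
}
console.log(bench())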

# An Introduction to Literate Programming, for "translators"

Start by explaining how your Arras does *some area of technical expertise*,
showing the code behind what you're talking about as you explain. You must
eventually use all the code. Try to be logical in your explanation, for the
sake of the human reading it. When you want to say that a particular chunk of
code will be defined and explained later, and not show it now, use this syntax:

<<chunk name>>

A code chunk's lines of code must be in the same order you want them to be in
when the code is converted for the computer. For example:

<<a>>=
console.log(foo)
@

<<a>>+
var foo = "bar"
@
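To make the pitfall concrete: if the tool concatenates same-named chunks in definition order (as noweb-style tools do), tangling <<a>> produces

console.log(foo)   // prints "undefined": var foo is hoisted, but the assignment hasn't run yet
var foo = "bar"

so the definition chunk has to come first if you want "bar" to be printed.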

I want read-only access to your GitLab repository, with permission to propose merge requests. Do that and I'll consider staying on arras-dev.

If not, I'm gonna ask to join Fillygroove/Hellcat's server as a co-dev, and I'm gonna contribute my anti-lag efforts and my CTHD there.

I have three prongs of evidence that strongly lead me to believe that GC is the issue. And although I can't disprove the null hypothesis like you want me to, intuition trumps proof when you want speed.

Testing

We can do the measuring/testing after you try out my fix in prod. Until then, it's not worth it for me to replicate your freeze-every-15-seconds behavior and run a controlled trial just to prove what my intuition already tells me. I can't reproduce the freezes on my server just by adding lots of bots; to do so, I'd probably have to set up a VM (or register and wait for an OpenShift account) with extremely limited memory and some swap allocated, just to replicate it.
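That said, one cheap way to check the GC hypothesis before any controlled trial, assuming you can restart the prod process, is V8's built-in GC logging; long mark-sweep pauses lining up with the freezes would be strong evidence:

node --trace-gc server.js

(server.js is a placeholder for the actual entry point. Each log line reports the collection type, the heap size before and after, and the pause time in milliseconds.)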

-march=native -mtune=native (GCC/Clang flags: optimize the build for the host CPU)
const { genericEntity } = require('../lib/basedefinitions.js')
module.exports.flag = {
  LABEL: 'Flag',
  ACCEPTS_SCORE: false,
  TYPE: 'flag',
  SHAPE: 6,
  PARENT: [genericEntity],
  SIZE: 50,
  DANGER: 0,
  BODY: {
// gamemode files would be best in CoffeeScript
// real width is 3000
// food was originally 800; food would add an element of dodging
// this mode should have the leaderboard disabled on the client
// npm install driftless for more accurate timing
// could npm install driftless also help make arras faster, by making the gameloop execute more accurately?
exports.flagCarriers = []
exports.flagTimes = { [-1] /* blue */: 5*60, [-3] /* red */: 5*60 }
const isFlag = e => e.type === 'flag'
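For context, a hypothetical sketch of how these exports might be consumed by the gamemode's tick code; onPickup, onDrop, and the entity fields are assumptions for illustration, not code from this gist:

// hypothetical consumers of the exports above
function onPickup(player, flag) {
  if (!isFlag(flag)) return
  exports.flagCarriers.push({ player, flag })   // track who is carrying which flag
}
function onDrop(player) {
  exports.flagCarriers = exports.flagCarriers.filter(c => c.player !== player)
}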
CrazyPython / basedefinitions.js — Barbarossa server
// GUN DEFINITIONS
const dfltskl = exports.dfltskl = 7;
exports.combineStats = function(arr) {
  try {
    // Build an array of multiplicative identities (1s) of the appropriate length
    let data = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1];
    arr.forEach(function(component) {
      for (let i = 0; i < data.length; i++) {
        data[i] = data[i] * component[i];
      }
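The gist cuts the function off mid-loop. A sketch of the complete function, assuming it simply returns the element-wise product of the component stat arrays (the original's catch branch is not shown, so it's omitted here):

exports.combineStats = function(arr) {
  // multiplicative identity: combining with a neutral component changes nothing
  let data = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1];
  arr.forEach(function(component) {
    for (let i = 0; i < data.length; i++) {
      data[i] = data[i] * component[i];
    }
  });
  return data;
};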

Benchmark notes

Putting the code inside for (let j = 0; j < 100; j++) into its own function did not affect results in either benchmark. Removing this.x = 0; this.y = 0; did not affect results for the JS Array benchmark. Passing --trace-opt and --trace-deopt shows that the JIT optimizes all lines of code for both benchmarks within the first few iterations; it does not repeatedly try to re-optimize either one. The results scale: when the outer loop bound is raised to j < 300, the JS Array takes 15-17 seconds while the Int8Array takes 0.54 seconds. That's a roughly 27x performance improvement for a pure read/write/multiply ALU test.
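For reference, the plain-JS-Array side of the comparison isn't shown in the gist; a sketch consistent with the notes above (objects with this.x/this.y fields, the same outer j loop) might look like:

function benchObjects() {
  const loops = 3000
  function Vec() { this.x = 0; this.y = 0; this.z = 0 }
  const objs = []
  for (let i = 0; i < loops; i++) objs.push(new Vec())
  for (let j = 0; j < 100; j++) {
    for (let i = 0; i < loops; i++) {
      const o = objs[i]
      o.x = o.x * 1.0001 + 1   // same read/multiply/write work as the typed-array version
      o.y = o.y * 1.0001 + 1
      o.z = o.z * 1.0001 + 1
    }
  }
  return objs[0].x             // keep the result live
}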

More complex loop benchmark (if/else branch)

Surprisingly, the typed array (the Int8Array, now turned into a Float64Array) isn't just faster at loads and stores. Here's a floating-point arithmetic benchmark (the code that's identical to the previous benchmark has been omitted):
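The benchmark code itself is cut off at the end of the gist; a sketch of what an if/else floating-point variant could look like over a Float64Array (illustrative, not the original):

function benchBranchy(f64) {
  for (let j = 0; j < 100; j++) {
    for (let i = 0; i < f64.length; i++) {
      if (f64[i] > 0.5) f64[i] = f64[i] * 0.75   // branch on the current value
      else f64[i] = f64[i] + 0.25
    }
  }
}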