- Configure Gmail in your gitconfig for git send-email:
[sendemail]
smtpserver = smtp.gmail.com
smtpserverport = 587
smtpencryption = tls
smtpuser = <gmail email address>
from = <email address for From: field>
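With the [sendemail] section in place, patches can be mailed straight from the repository. A sketch of typical usage (the recipient address and patch range are placeholders):

```shell
# generate a patch file for the most recent commit
git format-patch -1 -o outgoing/
# send it through the configured Gmail SMTP server; git prompts for the
# account password (an app password if two-factor auth is enabled)
git send-email --to=maintainer@example.org outgoing/*.patch
```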
#!/bin/bash
################################################################
# Install a printer on Arch Linux with cups using command line #
# Used for an HP PSC 1510 with default driver                  #
################################################################
sudo pacman -S cups
sudo systemctl start org.cups.cupsd
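With the daemon running, the printer itself can be registered from the command line. A sketch for the USB-attached PSC 1510; the queue name, device URI, and PPD path here are assumptions, so use what lpinfo actually reports on your machine:

```shell
# list detected device URIs and the available driver entries
lpinfo -v
lpinfo -m | grep -i "psc 15"
# register the queue, enabled (-E), with the reported URI and driver
sudo lpadmin -p psc1510 -E -v "usb://HP/PSC%201510%20series" \
     -m drv:///hp/hpcups.drv/hp-psc_1500_series.ppd
# make it the default destination and send a test job
lpoptions -d psc1510
echo "test page" | lp
```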
// compile with `gcc -I/usr/include getxkblayout.c -lX11 -lxkbfile`
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <X11/XKBlib.h>
#include <X11/extensions/XKBrules.h>

int main(int argc, char **argv) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return EXIT_FAILURE;
    // read the layout names from the _XKB_RULES_NAMES root-window property
    XkbRF_VarDefsRec vd;
    if (!XkbRF_GetNamesProp(dpy, NULL, &vd)) return EXIT_FAILURE;
    printf("%s\n", vd.layout); // comma-separated layouts, e.g. "us,de"
    XCloseDisplay(dpy);
    return EXIT_SUCCESS;
}
Hello, brethren :-)
As it turns out, the current version of FFmpeg (version 3.1, released earlier today) and the master branch of libav support full H.264 and HEVC encoding via VAAPI on supported hardware, and it works reliably enough to be termed "production-ready".
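As an illustration, a typical VAAPI H.264 encode with FFmpeg looks roughly like this (a sketch: the DRM render node path, input file, and quality setting are assumptions that vary by system):

```shell
# open the VAAPI device, upload frames to GPU memory, encode with h264_vaapi
ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mkv \
       -vf 'format=nv12,hwupload' -c:v h264_vaapi -qp 24 output.mp4
```

For HEVC, swap `h264_vaapi` for `hevc_vaapi` on hardware that supports it.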
diff --git a/main.c b/main.c
index f979e24..2f54043 100644
--- a/main.c
+++ b/main.c
@@ -628,6 +628,7 @@ int main(int argc, char *argv[]) {
 	zwlr_layer_surface_v1_add_listener(
 		state.layer_surface, &layer_surface_listener, &state);
 	zwlr_layer_surface_v1_set_anchor(state.layer_surface, anchor);
+	zwlr_layer_surface_v1_set_size( state.layer_surface, 2560, 1440 );
 	zwlr_layer_surface_v1_set_margin(state.layer_surface,
Here I will describe a simple configuration of the Slurm management tool for launching jobs on a very simple cluster. I will assume the following setup: a main node (for me, an Arch Linux machine) and 3 compute nodes (for me, Debian VMs). I also assume there is ping access between the nodes and some mechanism for you to know the IP of each node at all times (the most basic being a local NAT with static IPs).
The Slurm management tool works on a set of nodes, one of which is considered the master node and runs the slurmctld daemon; all other compute nodes run the slurmd daemon. All communications are authenticated via the munge service, and all nodes need to share the same authentication key. By default Slurm keeps a journal of activities in a directory configured in the slurm.conf file, though a database management system can be used instead. All in all, what we will try to do is: