
@jmrr
jmrr / tvheadend-quickstart.md
Created March 22, 2018 08:16
Tvheadend quickstart

hts user

Tvheadend runs as a service under a dedicated user, hts. Its files and configuration are therefore stored under /home/hts.

tvheadend usage

Start the daemon

Tvheadend is managed by upstart, so the service is started, stopped, and restarted through the usual service commands:
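A minimal sketch of those commands, assuming the package registers the upstart job under the name `tvheadend` (the service name is an assumption, not stated in the gist):

```
# Start / stop / restart the Tvheadend daemon (service name assumed: tvheadend)
sudo service tvheadend start
sudo service tvheadend stop
sudo service tvheadend restart

# Check whether the daemon is running
sudo service tvheadend status
```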

@jmrr
jmrr / pull-request-template.md
Last active May 12, 2017 09:24
Pull Request template

Status

✅ READY/ 🔧 IN DEVELOPMENT

Description

A few sentences or bullet-point list describing the overall goal of the contribution.

Related PRs/Issues

List related PRs against other branches or related JIRA issues:

@jmrr
jmrr / catching_exceptions.py
Last active June 12, 2017 16:53
Catching exceptions the proper way in Python: finding out the exception name
try:
    pass  # code that may raise an exception goes here
except Exception as ex:
    template = "An exception of type {0} occurred. Arguments:\n{1!r}"
    message = template.format(type(ex).__name__, ex.args)
    print(message)
    # Here you can do post-mortem analysis, present a GUI error message, etc.
@jmrr
jmrr / extract_audio.sh
Created October 4, 2016 16:30
Raw audio extractor using ffmpeg inspired by @terdon from stackexchange
#!/bin/bash
# Specify destination folder
mkdir -p output
# Select extension. Videos must be in the current dir
extension=flv
for vid in *.$extension; do
    codec="$(ffprobe -v error -select_streams a:0 -show_entries stream=codec_name -print_format csv=p=0 "$vid")"
    case "$codec" in
        # The original gist is truncated here; the codec-to-container mapping
        # below is an assumed completion in the spirit of @terdon's answer.
        mp3|flac) ext=$codec ;;
        vorbis) ext=ogg ;;
        aac) ext=m4a ;;
        *) ext=mka ;;  # Matroska audio as a safe fallback
    esac
    # Copy the audio stream without re-encoding
    ffmpeg -i "$vid" -vn -acodec copy "output/${vid%.$extension}.$ext"
done

#!/bin/bash
# Destination folder
mkdir -p output/transcoded
# Specify the extension for the loop. Videos must be in the current dir
extension=flv
for vid in *.$extension; do
    ffmpeg -i "$vid" -vn -acodec libmp3lame "output/transcoded/${vid%.$extension}.mp3"
done
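As a quick sanity check of the `${vid%.$extension}` parameter expansion used to build the output filename (the filename here is hypothetical):

```shell
# Demonstrate the suffix-stripping expansion used in the loops above
extension=flv
vid="lecture01.flv"
echo "${vid%.$extension}.mp3"   # prints lecture01.mp3
```

`%` removes the shortest matching suffix, so only the trailing `.flv` is stripped before `.mp3` is appended.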
@jmrr
jmrr / install_predictionio.md
Last active February 13, 2021 09:51
Installing prediction.io commands on a CentOS Linux machine

Install dependencies

yum install -y \
  bzip2 \
  git \
  java-1.8.0-openjdk \
  java-1.8.0-openjdk-devel \
  python-setuptools python-devel python-numpy

easy_install mysql-connector-python
easy_install predictionio
{
  "@context": "https://www.schema.org",
  "@type": "JobPosting",
  "id": 8991,
  "title": "Placement - Business / IT Process and Project Management",
  "description": "Explore this unique opportunity to join a global power leader...",
  "datePosted": "2016-06-01",
  "hiringOrganization": {
    "id": 873,
    "name": "Cummins Inc.",

mvn -T 4 clean package -Pspark-1.6 -Phadoop-2.4 -Pyarn -Ppyspark -DskipTests -Dspark.version=1.6.0
@jmrr
jmrr / launch_spark_shell.sh
Last active December 25, 2019 07:35
Spark-shell (also PySpark, spark-submit, etc.) call including the MySQL JDBC driver
#!/bin/sh
# Assumes the MySQL connector in /zeppelin/interpreter/jdbc and Spark in /usr/spark. It also assumes 8 cores.
SPARK_CLASSPATH=/zeppelin/interpreter/jdbc/mysql-connector-java-5.1.35.jar /usr/spark/bin/spark-shell --master local[8]
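The same classpath trick works for spark-submit and pyspark; a sketch under the same path assumptions (the job script name is hypothetical):

```
# Submit a job with the MySQL JDBC driver on the classpath (my_job.py is a placeholder)
SPARK_CLASSPATH=/zeppelin/interpreter/jdbc/mysql-connector-java-5.1.35.jar \
  /usr/spark/bin/spark-submit --master local[8] my_job.py
```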
@jmrr
jmrr / mysql2parquet.scala
Last active June 23, 2022 20:04
MySQL tables to parquet files on the Spark shell
val sqlContext = new org.apache.spark.sql.SQLContext(sc) // optional on the Spark shell
val df = sqlContext.load("jdbc", Map(
  "url" -> "jdbc:mysql://<ip.address.your.db>/<database>?user=<username>&password=<pwd>",
  "dbtable" -> "<tablename>"))
df.select("<col1>", "<col2>", "<col3>").save("</path/to/parquet/file.parquet>", "parquet")
// Alternatively, to save all the columns:
df.save("</path/to/parquet/file.parquet>", "parquet")