jobel-code / bash__install_nodejs_ubuntu.md
Last active June 17, 2019 09:57
How to install node.js 9.x in Ubuntu 16.04
jobel-code / database.dart
Created May 6, 2018 08:16 — forked from branflake2267/database.dart
Flutter - Firebase Realtime Database Persistence
import 'dart:async';
import 'package:firebase_database/firebase_database.dart';
import 'package:intl/intl.dart';
class Database {
static Future<String> createMountain() async {
String accountKey = await _getAccountKey();
jobel-code / gist_create_file_lists.py
Last active December 14, 2018 13:53
Different ways to read all files from a directory using Python
# Knowing a base `dirpath`, retrieve all files starting with `NV` and with file extension `.img`
from glob import glob
onlyfiles = glob('{}/NV*.img'.format(dirpath))
#
# The same idea, importing the module rather than the function
import glob
onlyfiles = glob.glob("/home/adam/*.txt")
#
# List recursively in `dirpath` all the `.tif` files whose name contains `mygroup`
# (minus its last two characters) and does not include 'DEPRECIATED'
import os
onlyfiles = [y for x in os.walk(dirpath)
             for y in glob.glob(os.path.join(x[0], '*.tif'))
             if mygroup[:-2] in y and 'DEPRECIATED' not in y]
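Since Python 3.5, the recursive `os.walk` + `glob` combination can be written as a single glob with the `**` pattern. A small self-contained sketch, using a throwaway directory and hypothetical `mygroup` placeholder in place of the gist's variables:

```python
import glob
import os
import tempfile

# Scratch directory standing in for `dirpath`, with a nested .tif file
dirpath = tempfile.mkdtemp()
os.makedirs(os.path.join(dirpath, "sub"))
open(os.path.join(dirpath, "sub", "mygroup_a.tif"), "w").close()
open(os.path.join(dirpath, "sub", "DEPRECIATED_b.tif"), "w").close()

mygroup = "mygroup"  # hypothetical group tag
# recursive=True makes `**` match directories at any depth (Python 3.5+)
onlyfiles = [p for p in glob.glob(os.path.join(dirpath, "**", "*.tif"), recursive=True)
             if mygroup in p and "DEPRECIATED" not in p]
print(onlyfiles)
```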
jobel-code / gist_version_uuid_by_date.py
Last active November 30, 2018 14:54
Generates the equivalent of a uuid3, qualified only by datetime.utcnow()
from hashlib import md5
from uuid import UUID, uuid4
from datetime import datetime
def version_uuid_by_date() -> str:
    """Generates the equivalent of a uuid3, qualified only by datetime.utcnow()

    example:
        '2018-11-15' will return 'f77ef8a9-a377-3083-b131-148cded89c95'
    """
jobel-code / gist_find_all_files.bash
Last active December 10, 2018 09:31
Bash collection of snippets to find files containing a given text
# List all the files that contain `Text to find` in `dirpath`
grep -l "Text to find" ~/dirpath/*
# Find all the YAML files, case insensitive (-i), that contain "text to find" and save the results to a text file.
# Note: `--include` takes one pattern per flag and only applies together with -r (recursion).
grep -ri --include="*.yaml" --include="*.yml" "text to find" dirpathToSearch/ > ~/saveResultsHere.txt
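A runnable demonstration with throwaway files; note that `--include` takes one pattern per flag and only applies with `-r`:

```shell
# Scratch files to exercise the patterns above
mkdir -p grep_demo && cd grep_demo
printf 'config:\n  Text to find\n' > a.yaml
printf 'nothing here\n' > b.yml
printf 'text to find\n' > c.txt

# Files containing the literal string
grep -l "Text to find" ./*

# Case-insensitive, filenames only (-l), restricted to YAML (needs -r)
grep -ril --include="*.yaml" --include="*.yml" "text to find" . > results.txt
cat results.txt
```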
jobel-code / gist_split_large_files_with_header.bash
Created December 12, 2018 08:52
Splits large files into smaller files of 1000 lines, keeping the header on each small file
%%bash
# echo $filepath
in_file=$in_filepath
DIR=$(dirname "$in_filepath")
filename=$(basename -- "$in_filepath")
extension="${filename##*.}"
#filename="${filepath##*/}" # This one will keep the extension
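The gist is truncated after the variable setup. A runnable sketch of the presumable remainder: cut off the header, split the body, and prepend the header to every chunk (chunk size 2 here instead of 1000 so a tiny demo file yields several parts):

```shell
# Tiny sample standing in for `$in_filepath` (header + 5 data rows)
in_filepath="data.tsv"
printf 'id\tval\n' > "$in_filepath"
for i in 1 2 3 4 5; do printf '%s\trow\n' "$i" >> "$in_filepath"; done

DIR=$(dirname "$in_filepath")
filename=$(basename -- "$in_filepath")
base="${filename%.*}"
extension="${filename##*.}"
header=$(head -n 1 "$in_filepath")

# Split the body into chunks (use -l 1000 for the real thing),
# then prepend the header to each chunk and restore the extension.
tail -n +2 "$in_filepath" | split -l 2 - "$DIR/${base}_part_"
for part in "$DIR/${base}_part_"??; do
  { printf '%s\n' "$header"; cat "$part"; } > "${part}.${extension}"
  rm "$part"
done
ls "$DIR/${base}_part_"*
```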
jobel-code / gist_find_duplicates.py
Created January 14, 2019 12:13
Find duplicates in a list
# Find duplicates in list
import collections
def find_duplicates(my_list:list)->list:
return [item for item, count in collections.Counter(my_list).items() if count > 1]
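A quick check of the helper, repeating the one-liner so the demo is self-contained (`collections.Counter` preserves first-seen order on Python 3.7+):

```python
import collections

def find_duplicates(my_list: list) -> list:
    # Count every item; keep the ones seen more than once
    return [item for item, count in collections.Counter(my_list).items() if count > 1]

print(find_duplicates([1, 2, 2, 3, 3, 3, 4]))  # → [2, 3]
```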
jobel-code / gist_how2_install_qgis.md
Last active January 15, 2019 15:07
How to install QGIS in Ubuntu 18.04

Install gdal

sudo add-apt-repository -y ppa:ubuntugis/ubuntugis-unstable
sudo apt update 
sudo apt upgrade # if you already have gdal 1.11 installed 
sudo apt install gdal-bin python-gdal python3-gdal

Using vim, open the sources.list file.

sudo vim /etc/apt/sources.list
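The gist is cut off after opening sources.list; the step it is presumably leading to is appending the QGIS apt repository. A hedged sketch for Ubuntu 18.04 (bionic) — verify the current repository line and signing-key instructions on qgis.org before using it:

```
# append to /etc/apt/sources.list (URL assumed from the QGIS install docs)
deb     https://qgis.org/ubuntu bionic main
deb-src https://qgis.org/ubuntu bionic main
```

Then `sudo apt update && sudo apt install qgis python-qgis` (package names may differ by release).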

jobel-code / gist_aggregate_multiple_tsv_files_into_one.bash
Created January 23, 2019 09:10
Aggregate text files with same header into single file
# To aggregate keeping the header of every file (use only for debugging):
# cat *.tsv > aggregated_files_with_headers.csv
# For the final aggregation, run the awk command below in a console.
# SOURCE: https://unix.stackexchange.com/questions/60577/concatenate-multiple-files-with-same-header
# The first line of the awk script matches the first line of a file (FNR==1)
# except if it's also the first line across all files (NR==1).
# When these conditions are met, the expression while (/^<header>/) getline; is executed,
# which causes awk to keep reading another line (skipping the current one) as long as
# the current one matches the regexp ^<header>.
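The awk one-liner the comments describe is truncated out of the listing. Reconstructed from the linked Stack Exchange answer, with the literal first line of your files playing the role of `<header>` (`id\t` in this throwaway demo):

```shell
# Sample .tsv files sharing the header `id<TAB>val`
printf 'id\tval\n1\ta\n' > f1.tsv
printf 'id\tval\n2\tb\n' > f2.tsv

# On the first line of every file except the very first (FNR==1 && NR!=1),
# keep reading lines while they still match the header regexp, then print.
# (Caveat: a file containing only the header would make the while-loop spin at EOF.)
awk 'FNR==1 && NR!=1 { while (/^id\t/) getline } { print }' f1.tsv f2.tsv > aggregated.tsv
cat aggregated.tsv
```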
jobel-code / gist_regex_column_names_text_match.py
Created January 24, 2019 12:48
Regex to find all the columns in a pandas DataFrame whose name contains a matching text
import re
r = re.compile("depth", re.IGNORECASE)
# find all the column names that match "depth" at any place in the string
depth_cols = sorted(filter(r.search, subset_df.columns))
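A self-contained check, with a hypothetical frame standing in for `subset_df`; pandas' own `DataFrame.filter(regex=...)` does the same job, using an inline `(?i)` flag for case-insensitivity:

```python
import re
import pandas as pd

# Hypothetical stand-in for the `subset_df` used above
subset_df = pd.DataFrame(columns=["Depth_m", "water_depth", "temp", "site"])

r = re.compile("depth", re.IGNORECASE)
# Iterating .columns yields label strings, which r.search filters
depth_cols = sorted(filter(r.search, subset_df.columns))
print(depth_cols)  # → ['Depth_m', 'water_depth']

# Built-in alternative via DataFrame.filter
same_cols = list(subset_df.filter(regex="(?i)depth").columns)
```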