sudo dd bs=4M if=/path/to/manjaro.iso of=/dev/sd[drive letter] status=progress oflag=sync
truncate -s 0 $(docker inspect --format='{{.LogPath}}' t1)
from lxml import etree
from functional import seq

p = etree.parse('a.xml')  # etree.parse, not etree.load (lxml has no load())
root = p.getroot()
seq(list(root)).map(lambda x: etree.QName(x).localname)  # list(root): getchildren() is deprecated
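For reference, the same localname extraction can be sketched with only the standard library, no lxml or pyfunctional needed; the namespaced document and tag names below are fabricated for illustration:

```python
import xml.etree.ElementTree as ET

doc = '<root xmlns="urn:x"><alpha/><beta/></root>'
root = ET.fromstring(doc)
# ElementTree stores tags in Clark notation ({urn:x}alpha);
# splitting on '}' recovers the local name
local = [child.tag.split('}', 1)[-1] for child in root]
print(local)  # ['alpha', 'beta']
```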
sc.binaryFiles(new_file).values().flatMap(lambda x: x.decode("iso-8859-1").split(chr(172))).map(lambda x: x.split(chr(171)))
def DataStoreFileReader(file, chunk_size=512, lineterminator=172, delimiter=171):
    chunk = ""
    while True:
        curr = file.read(chunk_size)
        chunk += curr
        if not curr:
            break
        if chr(lineterminator) in chunk:
            lines = chunk.split(chr(lineterminator))
            # the last element is a partial record; carry it into the next chunk
            for line in lines[0:-1]:
                yield line.split(chr(delimiter))
            chunk = lines[-1]
    if chunk:
        yield chunk.split(chr(delimiter))
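A quick way to sanity-check the reader is to feed it an in-memory stream. The sketch below restates the generator (assuming the intent is to yield each chr(172)-terminated record split on chr(171), carrying partial records across chunk boundaries) and runs it on two fabricated records:

```python
import io

def datastore_records(stream, chunk_size=512, lineterminator=172, delimiter=171):
    # restatement of DataStoreFileReader: accumulate chunks, split
    # records on chr(172), split fields within a record on chr(171)
    chunk = ""
    while True:
        curr = stream.read(chunk_size)
        chunk += curr
        if not curr:
            break
        if chr(lineterminator) in chunk:
            lines = chunk.split(chr(lineterminator))
            for line in lines[:-1]:
                yield line.split(chr(delimiter))
            chunk = lines[-1]  # partial record, keep for next chunk
    if chunk:
        yield chunk.split(chr(delimiter))

data = "a" + chr(171) + "b" + chr(172) + "c" + chr(171) + "d" + chr(172)
records = list(datastore_records(io.StringIO(data)))
print(records)  # [['a', 'b'], ['c', 'd']]
```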
import asyncio
import glob
import subprocess

import aiofiles

async def read_file(f):
    print("start: " + f)
    async with aiofiles.open(f, 'r') as fd:
        lines = await fd.read()
    print("done: " + f)
    return lines
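The coroutine above is presumably driven by a gather over globbed paths. A minimal stand-alone sketch of that pattern, using asyncio.to_thread in place of aiofiles so it runs on the standard library alone (the temporary file and its contents are fabricated for the demo):

```python
import asyncio
import glob
import os
import tempfile

async def read_file(path):
    # stand-in for the aiofiles version: run blocking I/O in a worker thread
    def _read():
        with open(path, 'r') as fd:
            return fd.read()
    return await asyncio.to_thread(_read)

async def main(pattern):
    files = glob.glob(pattern)
    # read every matching file concurrently
    return await asyncio.gather(*(read_file(f) for f in files))

# demo on a throwaway file
with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "a.txt")
    with open(p, "w") as fh:
        fh.write("hello")
    results = asyncio.run(main(os.path.join(d, "*.txt")))
print(results)  # ['hello']
```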
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <script src="/static/d3.min.js"></script>
    <script src="/static/vue.js"></script>
    <script src="/static/jsnetworkx.js"></script>
function readFile(stream) {
    return new Promise((resolve, reject) => {
        var fr = new FileReader();
        fr.onload = () => {
            resolve(msgpack.decode(new Uint8Array(fr.result)));
        };
        fr.onerror = () => reject(fr.error);  // without this, a read failure hangs the Promise
        fr.readAsArrayBuffer(stream);
    });
}
import pandas as pd

def extract_data(query_var, file_prefix):
    # assumes an open database connection `conn` in the enclosing scope
    idx = pd.date_range(start="2019-01-01", periods=13, freq="MS").strftime("%Y-%m-%d").values
    dt = []
    for i in range(len(idx) - 1):
        name = str(i + 1).zfill(2)
        df = pd.read_sql(query_var.format(idx[i], idx[i + 1]), conn)
        dt.append(df.dtypes)
        df.to_parquet(f'parquet/{file_prefix}_{name}.parquet')
    # Unique dataframe with all types
    return pd.concat(dt, axis=1)
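The trailing comment points at combining the collected per-month dtypes into a single frame. A hedged sketch of just that step, on two fabricated monthly frames (column names and values are made up; each column of the result holds one month's dtypes):

```python
import pandas as pd

# stand-ins for the dtypes collected from each monthly extract
dt = [
    pd.DataFrame({"a": [1], "b": ["x"]}).dtypes,
    pd.DataFrame({"a": [2], "b": ["y"]}).dtypes,
]
# one row per column, one column per month; mismatched dtypes show up side by side
types = pd.concat(dt, axis=1)
print(types)
```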