Pratik Khadloya (tispratik)

@tispratik
tispratik / cuda_check.py
Created Feb 14, 2020 — forked from f0k/cuda_check.py
Simple python script to obtain CUDA device information
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Outputs some information on CUDA-enabled devices on your computer,
including current memory usage.
It's a port of https://gist.github.com/f0k/0d6431e3faa60bffc788f8b4daa029b1
from C to Python with ctypes, so it can run without compiling anything. Note
that this is a direct translation with no attempt to make the code Pythonic.
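The ctypes approach the gist describes can be sketched as follows. This is a minimal sketch, not the gist itself: the library names and the `cudaGetDeviceCount` call follow the CUDA runtime API, and the helper returns None when no CUDA runtime library can be loaded.

```python
import ctypes

def cuda_device_count():
    """Return the number of CUDA devices, or None if no runtime library loads."""
    # Candidate CUDA runtime library names on Linux, macOS and Windows.
    for name in ("libcudart.so", "libcudart.dylib", "cudart64_110.dll"):
        try:
            lib = ctypes.CDLL(name)
        except OSError:
            continue  # library not present under this name; try the next one
        count = ctypes.c_int(0)
        # cudaGetDeviceCount(int* count) returns 0 (cudaSuccess) on success.
        if lib.cudaGetDeviceCount(ctypes.byref(count)) == 0:
            return count.value
        return None
    return None

print(cuda_device_count())
```

On a machine without CUDA the function simply returns None instead of raising, which keeps the script usable as a quick capability check.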
tispratik / gist:02628943731d3e2eaf4e
Counting objects: 125, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (119/119), done.
Writing objects: 100% (125/125), 125.92 KiB | 0 bytes/s, done.
Total 125 (delta 85), reused 0 (delta 0)
remote: Compressing source files... done.
remote: Building source:
remote:
remote: -----> Fetching set buildpack https://github.com/HashNuke/heroku-buildpack-elixir... done
remote: -----> elixir app detected
tispratik / domain.ex
defmodule Myapp.Domain do
  import Plug.Conn
  import Ecto.Model

  @doc false
  def init(opts), do: opts

  @doc false
  def call(conn, _opts) do
    IO.inspect(conn)
    conn
  end
end
@tispratik
tispratik / gist:aebff28b8c5afd7bee59
Created Feb 5, 2015
Testing argparse for exception
try:
    input_args = []
    arg_parse_utils.parse_args(arg_meta_list, input_args)
except SystemExit:
    pass
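The gist relies on the fact that argparse calls sys.exit() (which raises SystemExit) when parsing fails. The same behaviour can be demonstrated with the standard library alone; `arg_parse_utils` above is the author's own helper and is not reproduced here.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--count", type=int, required=True)

# Omitting the required --count argument makes parse_args() call
# sys.exit(2), which surfaces as a SystemExit exception we can catch.
try:
    parser.parse_args([])
except SystemExit as exc:
    caught = exc.code

print(caught)  # 2
```

Catching SystemExit like this is a common way to unit-test CLI argument validation without the test process actually exiting.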
@tispratik
tispratik / hive 0.13 parquet import through hcatalog
Last active Feb 8, 2018
hive 0.13, parquet import through hcatalog
sqoop-7e14e38$ bin/sqoop import -jt <jt> --connect jdbc:mysql://... --username user --password pwd --query "... WHERE \$CONDITIONS" --num-mappers 1 --hcatalog-storage-stanza "STORED AS PARQUET" --create-hcatalog-table --hcatalog-table abc116 --verbose
Warning: /hell does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /home/pkhadloya/sqoop-7e14e38/bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/09/19 11:33:09 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-SNAPSHOT
14/09/19 11:33:09 DEBUG tool.BaseSqoopTool: Enabled debug logging.
14/09/19 11:33:09 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/09/19 11:33:09 DEBUG sqoop.ConnFactory: Loaded manager factory: org.apache.sqoop.manager.oracle.OraOopManagerFactory
14/09/19 11:33:09 DEBUG sqoop.ConnFactory: Loaded manager factory: com.cloudera.sqoop.manager.De
@tispratik
tispratik / Hive 0.13 sqoop import as parquetfile
Last active Aug 29, 2015
Hive 0.13 sqoop import as parquetfile
bin/sqoop import -jt <jt> --connect <dburl> --username <user> --password <pwd> --query "SELECT ..... WHERE \$CONDITIONS" --num-mappers 1 --mapreduce-job-name test_sqoop_import --as-parquetfile --target-dir /user/pkhadloya/sqoop/test --delete-target-dir --hive-import --hive-table test --create-hive-table --verbose
Warning: /hell does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /home/pkhadloya/sqoop-7e14e38/bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/09/19 10:58:12 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-SNAPSHOT
14/09/19 10:58:12 DEBUG tool.BaseSqoopTool: Enabled debug logging.
14/09/19 10:58:12 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/09/19 10:58:12 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
14/09/19 10:58:12 INFO tool.BaseSqoopTool: delimi
tispratik / parquet_columns_reader2.java
package com.company.grid.lookup_new;
import parquet.column.ColumnDescriptor;
import parquet.column.ColumnReader;
import parquet.column.impl.ColumnReadStoreImpl;
import parquet.column.page.PageReadStore;
import parquet.hadoop.ParquetFileReader;
import parquet.hadoop.metadata.BlockMetaData;
import parquet.hadoop.metadata.ParquetMetadata;
import parquet.io.api.Binary;
@tispratik
tispratik / parquet_columns_reader1.java
Last active Mar 15, 2017
Parquet columns reader1
package com.company.grid.lookup;
import parquet.column.ColumnDescriptor;
import parquet.column.ColumnReader;
import parquet.column.impl.ColumnReadStoreImpl;
import parquet.column.page.PageReadStore;
import parquet.hadoop.ParquetFileReader;
import parquet.hadoop.metadata.BlockMetaData;
import parquet.hadoop.metadata.ParquetMetadata;
import parquet.io.api.Binary;
tispratik / gist:270869e17aaaf0d1b092
2014-07-29 12:47:46
Full thread dump Java HotSpot(TM) 64-Bit Server VM (14.2-b01 mixed mode):
"qtp1680568263-28" prio=10 tid=0x00007f79d0001000 nid=0x71ce waiting on condition [0x00007f7a15545000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00007f7a8da1aa38> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
at org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:320)
@tispratik
tispratik / override_cache_key.rb
Last active Sep 6, 2018
Override the cache_key and touch methods in ActiveRecord for cases where we cannot update the last modified / updated at column in the database; the changes are tracked in memcache instead.
module CacheKeyMonkeyPatch
  module ActiveRecord
    module Integration
      # Returns a cache key that can be used to identify this record.
      #
      # ==== Examples
      #
      #   Product.new.cache_key     # => "products/new"
      #   Product.find(5).cache_key # => "products/5" (updated_at / last_modified_time not available)
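The idea behind the monkey patch (keep a per-record version counter in memcache when the row's timestamp cannot change, and fold it into the cache key) can be sketched outside Rails as well. This is a minimal sketch in Python, with a plain dict standing in for memcache; the names `touch`, `cache_key`, and `version_store` are illustrative, not the gist's API.

```python
# Instead of reading updated_at from the database row, keep a per-record
# version counter in an external store (memcache in the gist; a dict here)
# and build the cache key from it. Bumping the version invalidates the key.
version_store = {}  # stands in for memcache

def touch(model, record_id):
    """Bump the record's version, invalidating any cache key built from it."""
    key = f"{model}/{record_id}/version"
    version_store[key] = version_store.get(key, 0) + 1

def cache_key(model, record_id):
    """Build a cache key from the externally tracked version."""
    key = f"{model}/{record_id}/version"
    return f"{model}/{record_id}-v{version_store.get(key, 0)}"

print(cache_key("products", 5))   # products/5-v0
touch("products", 5)
print(cache_key("products", 5))   # products/5-v1
```

Because the version lives outside the database, "touching" a record never writes to the row, which is exactly the constraint the gist's description mentions.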