Rajkumar Singh rajkrrsingh

rajkrrsingh / KNOX over HS2 HA
Last active Jun 26, 2020
steps to configure KNOX over HiveServer2 HA on HDP-2.5
1. In Ambari, go to Knox -> Configs -> Advanced topology.
2. Add the following snippet to the advanced topology:
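The preview cuts off before the snippet itself. For HiveServer2 HA, Knox discovers the active HiveServer2 instances through ZooKeeper, so the topology needs an HaProvider entry for HIVE alongside the HIVE service role. A sketch of what that snippet typically looks like — the ZooKeeper hosts and namespace below are assumptions and must match your cluster:

```xml
<!-- HaProvider tells Knox to resolve HiveServer2 via ZooKeeper (hosts are hypothetical) -->
<provider>
   <role>ha</role>
   <name>HaProvider</name>
   <enabled>true</enabled>
   <param>
      <name>HIVE</name>
      <value>maxFailoverAttempts=3;failoverSleep=1000;enabled=true;zookeeperEnsemble=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181;zookeeperNamespace=hiveserver2</value>
   </param>
</provider>

<!-- with HA enabled, the HIVE service needs no fixed URL -->
<service>
   <role>HIVE</role>
</service>
```

The `zookeeperNamespace` must match `hive.server2.zookeeper.namespace` in hive-site.xml.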
rajkrrsingh / Google protobuf installation on Mac
Created Nov 27, 2016
Steps to Install google protobuf on Mac
$tar xvf protobuf-2.5.0.tar.bz2
$cd protobuf-2.5.0
$./configure CC=clang CXX=clang++ CXXFLAGS='-std=c++11 -stdlib=libc++ -O3 -g' LDFLAGS='-stdlib=libc++' LIBS="-lc++ -lc++abi"
$make -j 4
$sudo make install
$protoc --version
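Beyond checking the version, a quick way to sanity-check the install is to compile a trivial message. This is a hedged example — the file name and message are hypothetical; protobuf 2.5 accepts proto2 syntax only:

```shell
# write a minimal proto2 message definition (hypothetical example)
cat > person.proto <<'EOF'
message Person {
  required string name = 1;
}
EOF
# generate Java bindings; success means protoc and its runtime are wired up
protoc --java_out=. person.proto
```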
rajkrrsingh / SparkStreamingSampleApp
Created Nov 27, 2016
Spark Streaming Sample program using scala
mkdir spark-streaming-example
cd spark-streaming-example/
mkdir -p src/main/scala
cd src/main/scala
vim TestStreaming.scala
add the following lines of code to TestStreaming.scala:
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.StreamingContext._
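The preview stops after the imports. A minimal sketch of how such a TestStreaming.scala typically continues — a socket word count, with the host, port, and batch interval as assumptions, and a local master for standalone testing:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object TestStreaming {
  def main(args: Array[String]): Unit = {
    // local[2]: one thread for the receiver, one for processing (assumption for local runs)
    val conf = new SparkConf().setAppName("TestStreaming").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))
    // listen on a socket (hypothetical host/port, e.g. fed by `nc -lk 9999`)
    val lines = ssc.socketTextStream("localhost", 9999)
    // classic word count over each 5-second batch
    val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    counts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```

Build with sbt and submit with spark-submit, or run it directly from the IDE against the local master.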
rajkrrsingh / PamAuthenticator
Created Nov 27, 2016
Java Program to use JPam Authentication
JPam is a Java-PAM bridge. PAM (Pluggable Authentication Modules) is a standard security architecture used on Linux, Mac OS X, Solaris, HP-UX, and other Unix systems; JPam is the missing link between the two. It lets Java applications running on those platforms use PAM authentication facilities.
These facilities include:
rajkrrsingh / Atlas_Rest_API
Last active Nov 27, 2016
quick reference guide of ATLAS REST API
[root@rksnode ~]# curl http://rksnode:21000/api/atlas/admin/version
{"Version":"","Name":"apache-atlas","Description":"Metadata Management and Data Governance Platform over Hadoop"}[root@rksnode ~]#
[root@rksnode ~]# curl http://rksnode:21000/api/atlas/types
{"results":["DataSet","hive_order","Process","hive_table","hive_db","hive_process","hive_principal_type","hive_resource_type","hive_object_type","Infrastructure","hive_index","hive_column","hive_resourceuri","hive_storagedesc","hive_role","hive_partition","hive_serde","hive_type"],"count":18,"requestId":"qtp1286783232-60 - 0128be6a-076e-4ad3-972a-58783a1f7180"}[root@rksnode ~]#
[root@rksnode ~]# curl http://rksnode:21000/api/atlas/types/hive_process
{"typeName":"hive_process","definition":"{\n \"enumTypes\":[\n \n ],\n \"structTypes\":[\n \n ],\n \"traitTypes\":[\n \n ],\n \"classTypes\":[\n {\n \"superTypes\":[\n
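The same Atlas v1 REST API can also list the entities of a given type. A hedged example, using the 0.7-era `entities` endpoint and the same hostname as above — verify the endpoint against your Atlas version:

```shell
# list GUIDs of all registered entities of type hive_table (Atlas v1 API)
curl http://rksnode:21000/api/atlas/entities?type=hive_table
```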
rajkrrsingh / HiveAcidQuickTest
Last active Oct 8, 2018
quick start guide to test ACID functionality in hive
hive> set;
hive> set hive.enforce.bucketing;
hive> set hive.exec.dynamic.partition.mode;
hive> set hive.txn.manager;
hive> set hive.compactor.initiator.on;
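The settings above are the ones that gate ACID behaviour. A hedged sketch of the values typically required for ACID on Hive 1.x (HDP), followed by a minimal transactional table — the table name and data are hypothetical:

```sql
SET hive.support.concurrency=true;
SET hive.enforce.bucketing=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.compactor.initiator.on=true;
SET hive.compactor.worker.threads=1;

-- on this Hive version ACID tables must be bucketed and stored as ORC
CREATE TABLE acid_test (id INT, name STRING)
CLUSTERED BY (id) INTO 2 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

INSERT INTO acid_test VALUES (1, 'alice');
UPDATE acid_test SET name = 'bob' WHERE id = 1;
```

If the UPDATE succeeds, the transaction manager and compactor settings are in effect.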
rajkrrsingh / Spark2StarterApp
Created Nov 30, 2016
sample spark2 application using scala
mkdir Spark2StarterApp
cd Spark2StarterApp/
mkdir -p src/main/scala
cd src/main/scala
vim Spark2Example.scala
import org.apache.spark.sql.SparkSession
object Spark2Example {
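The preview ends at the object declaration. A minimal sketch of how a Spark2Example like this typically continues — SparkSession as the single Spark 2.x entry point, with the sample data as an assumption:

```scala
import org.apache.spark.sql.SparkSession

object Spark2Example {
  def main(args: Array[String]): Unit = {
    // SparkSession replaces SQLContext/HiveContext in Spark 2.x
    val spark = SparkSession.builder()
      .appName("Spark2Example")
      .getOrCreate()
    import spark.implicits._
    // hypothetical sample data to exercise the DataFrame API
    val df = Seq((1, "a"), (2, "b")).toDF("id", "value")
    df.show()
    spark.stop()
  }
}
```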
rajkrrsingh / Spark2DataSetDemo
Created Nov 30, 2016
sample spark2 application demonstrating dataset api
[root@rkk1 Spark2StarterApp]# /usr/hdp/current/spark2-client/bin/spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
16/11/30 18:01:48 WARN SparkContext: Use an existing SparkContext, some configuration may not take effect.
Spark context Web UI available at
Spark context available as 'sc' (master = local[*], app id = local-1480528906336).
Spark session available as 'spark'.
Welcome to Spark (ASCII-art version banner truncated in this preview)
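The preview cuts off at the shell banner, before the Dataset API demo itself. A minimal sketch of what can be pasted into this spark-shell session (the `spark` session and its implicits are already in scope there; the case class and data are hypothetical):

```scala
// a typed Dataset: case class gives the schema and compile-time field access
case class Person(name: String, age: Int)
val ds = Seq(Person("alice", 30), Person("bob", 25)).toDS()
// filter with a plain Scala lambda rather than a Column expression
ds.filter(_.age > 26).show()
```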
rajkrrsingh / Benchmark-results.txt
End-to-end Latency
0.0543 ms (median)
0.003125 ms (99th percentile)
5 ms (99.9th percentile)
Producer and consumer
Producer - 1431170.2 records/sec (136.49 MB/sec)
Consumer - 3276754.7021 records/sec (312.4957 MB/sec)
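Numbers like these typically come from Kafka's bundled performance tools. A hedged sketch of a producer run — the topic, record count, sizes, and broker address are assumptions, and flag names vary slightly between Kafka versions:

```shell
# throughput -1 = send as fast as possible; reports records/sec and MB/sec
bin/kafka-producer-perf-test.sh --topic perf-test \
  --num-records 1000000 --record-size 100 --throughput -1 \
  --producer-props bootstrap.servers=localhost:9092 acks=1
```

End-to-end latency figures come from the separate `kafka.tools.EndToEndLatency` tool run via `kafka-run-class.sh`.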
rajkrrsingh / KafkaProducerWithCallBack
Created Dec 10, 2016
sample kafka async producer demonstrating callback
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.log4j.Logger;
import java.util.Properties;
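The preview shows only the imports. A self-contained sketch of an async producer with a callback built from them — the broker address and topic are hypothetical; `send()` returns immediately and the callback fires once the broker acknowledges the record (or an error occurs):

```java
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import java.util.Properties;

public class KafkaProducerWithCallBack {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        ProducerRecord<String, String> record =
                new ProducerRecord<>("test-topic", "key", "value");

        // asynchronous send: the callback runs on the producer's I/O thread
        producer.send(record, new Callback() {
            @Override
            public void onCompletion(RecordMetadata metadata, Exception e) {
                if (e != null) {
                    e.printStackTrace(); // delivery failed
                } else {
                    System.out.printf("sent to %s partition %d offset %d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            }
        });
        producer.close(); // flushes outstanding sends before returning
    }
}
```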