RajenDharmendra
import anthropic

client = anthropic.Anthropic(
    api_key="my_api_key",
)
message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1000,
    temperature=0,
    messages=[
        # The messages list is cut off in the gist preview; a minimal
        # user message makes the call runnable:
        {"role": "user", "content": "Hello, Claude"},
    ],
)
print(message.content)
RajenDharmendra / latency.txt
Created June 15, 2020 19:21 — forked from jboner/latency.txt
Latency Numbers Every Programmer Should Know

Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                        0.5 ns
Branch mispredict                           5 ns
L2 cache reference                          7 ns                 14x L1 cache
Mutex lock/unlock                          25 ns
Main memory reference                     100 ns                 20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy            3,000 ns        3 us
Send 1K bytes over 1 Gbps network      10,000 ns       10 us
Read 4K randomly from SSD*            150,000 ns      150 us     ~1GB/sec SSD
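The relative magnitudes matter more than the absolute numbers. As a quick sanity check on the ratios, a small Python sketch (the values are copied straight from the table above; the dictionary keys are just illustrative names):

```python
# Latency numbers from the table above, in nanoseconds.
LATENCY_NS = {
    "l1_cache": 0.5,
    "l2_cache": 7,
    "main_memory": 100,
    "ssd_random_read_4k": 150_000,
}

def ratio(slow: str, fast: str) -> float:
    """How many times slower `slow` is than `fast`."""
    return LATENCY_NS[slow] / LATENCY_NS[fast]

print(ratio("l2_cache", "l1_cache"))            # 14.0 -- the 14x the table notes
print(ratio("main_memory", "l1_cache"))         # 200.0 -- 200x L1, as listed
print(ratio("ssd_random_read_4k", "main_memory"))  # 1500.0
```

One random 4K SSD read costs as much as roughly 1,500 main-memory references, which is why cache-friendly data layouts pay off.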
CREATE TABLE tpcdsch.customer_address
(
    `ca_address_sk` Int32,  -- Int8 tops out at 127; TPC-DS surrogate keys need at least Int32
    `ca_address_id` String,
    `ca_street_number` String,
    `ca_street_name` String,
    `ca_street_type` String,
    `ca_suite_number` String,
    `ca_city` String,
    `ca_county` String,
1. select i_item_id,
          avg(cs_quantity) agg1,
          avg(cs_list_price) agg2,
          avg(cs_coupon_amt) agg3,
          avg(cs_sales_price) agg4
   from catalog_sales, customer_demographics, date_dim, item, promotion
   where cs_sold_date_sk = d_date_sk and
         cs_item_sk = i_item_sk and
         cs_bill_cdemo_sk = cd_demo_sk and
1. select avg(ss_item_sk) from store_sales;
2. select ss_sold_date_sk, count(*) as cnt from store_sales group by ss_sold_date_sk order by cnt desc, ss_sold_date_sk limit 10;
3. select ss_sold_date_sk, avg(ss_item_sk) as cnt from store_sales group by ss_sold_date_sk order by cnt desc, ss_sold_date_sk limit 10;
4. select ss_item_sk, count(*) from store_sales group by ss_item_sk having count(*) > 1 limit 10;
5. select sum(ss_item_sk) from store_sales;
6. select ss_sold_date_sk, ss_wholesale_cost, avg(ss_item_sk) as cnt from store_sales group by ss_sold_date_sk, ss_wholesale_cost order by cnt desc, ss_sold_date_sk limit 10;
7. select ss_sold_date_sk, ss_wholesale_cost, avg(ss_item_sk) as cnt from store_sales group by ss_sold_date_sk, ss_wholesale_cost order by cnt desc, ss_sold_date_sk limit 10;
8. select ss_sold_date_sk, ss_wholesale_cost, avg(ss_item_sk) as cnt, count(distinct(ss_sales_price)) as avg1 from store_sales group by ss_sold_date_sk, ss_wholesale_cost order by cnt desc, ss_sold_date_sk limit 10;
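Ad-hoc query lists like this are easiest to compare when run through a small timing harness. A minimal Python sketch, assuming nothing about the client library: `run_query` is a stand-in callable you would wire to clickhouse-driver or any DB-API connection, and only the first two queries are inlined here for brevity:

```python
import time
from typing import Callable, List, Tuple

# A subset of the benchmark queries listed above.
QUERIES: List[str] = [
    "select avg(ss_item_sk) from store_sales;",
    "select sum(ss_item_sk) from store_sales;",
    # ... remaining queries from the list above
]

def time_queries(run_query: Callable[[str], object],
                 queries: List[str]) -> List[Tuple[str, float]]:
    """Run each query once and record wall-clock seconds per query."""
    results = []
    for q in queries:
        start = time.perf_counter()
        run_query(q)  # executes the query via the supplied client callable
        results.append((q, time.perf_counter() - start))
    return results
```

Single-shot timings are noisy; for real comparisons you would repeat each query several times and report the median.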
RajenDharmendra / export_mysql_table.sh
Created March 29, 2020 18:29 — forked from filimonov/export_mysql_table.sh
MySQL to ClickHouse (bash)

#!/usr/bin/env bash
set -o errexit
TABLE_NAME="${1:?You must pass a TABLE_NAME as the 1st argument}"
STARTID="${2:?You must pass a STARTID as the 2nd argument}"
ENDID="${3:?You must pass an ENDID as the 3rd argument}"
[[ -z "$4" ]] && LIMIT="" || LIMIT="LIMIT $4"
. logins.sh  # expected to define the MySQL/ClickHouse connection credentials
INSERT_COMMAND="INSERT INTO clickhouse_table(column1,column2,...) FORMAT TSV"
RajenDharmendra / type12.md
Created May 12, 2019 00:18 — forked from leonardofed/type12.md
I built this thing to make coding interviews suck less

The traditional technical interview process is designed to ferret out a candidate's weaknesses, when it should be designed to find a candidate's strengths.

No one can possibly master all of the arcana of today's technology landscape, let alone bring that mastery to bear on a problem under pressure and with no tools other than a whiteboard.

Under those circumstances, everyone can make anyone look like an idiot.

The fundamental problem with the traditional technical interview process is that it is based on a chain of inference that seems reasonable but is in fact deeply flawed. That chain goes something like this:

RajenDharmendra / README.md
Created May 1, 2019 16:18 — forked from leonardofed/README.md
A curated list of AWS resources to prepare for the AWS Certifications


A curated list of awesome AWS resources you need to prepare for all 5 AWS Certifications. This gist includes: open source repos, blogs and blog posts, ebooks, PDFs, whitepapers, video courses, free lectures, slides, sample tests, and many other resources.