Anjaiah Methuku (anjijava16)
welcome@welcomes-MacBook-Pro python_envs % python3 -m venv mlops_water_metrics
welcome@welcomes-MacBook-Pro python_envs % cd /Users/welcome/sai_workspace/python_envs/mlops_water_metrics/bin
welcome@welcomes-MacBook-Pro bin % source activate
aws emr create-cluster \
--name "ModelServing" \
--log-uri "s3n://aws-logs-654288303595-us-east-1/elasticmapreduce/" \
--release-label "emr-6.7.0" \
--service-role "arn:aws:iam::654288303595:role/EMR_DefaultRole" \
--ec2-attributes '{"InstanceProfile":"EMR_EC2_DefaultRole","EmrManagedMasterSecurityGroup":"sg-0cf75a954ffb6d02e","EmrManagedSlaveSecurityGroup":"sg-0fc72ac9d5e6a6759","KeyName":"welcome_emr","AdditionalMasterSecurityGroups":[],"AdditionalSlaveSecurityGroups":[],"SubnetId":"subnet-0066baf1bb164b3de"}' \
--applications Name=Hadoop Name=Hive Name=Hue Name=Pig Name=Spark \
--instance-groups '[{"InstanceCount":1,"InstanceGroupType":"MASTER","Name":"Master - 1","InstanceType":"m5.xlarge","EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"VolumeType":"gp2","SizeInGB":40},"VolumesPerInstance":1}],"EbsOptimized":true}},{"InstanceCount":2,"InstanceGroupType":"CORE","Name":"Core - 2","InstanceType":"m5.xlarge","EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"VolumeType":"gp2","SizeInGB":40},"VolumesPerInstance":1}],"EbsOptimized":true}}]'
This package will install:
• Node.js v18.14.2 to /usr/local/bin/node
• npm v9.5.0 to /usr/local/bin/npm
# Create a File
seq 1 100000000 > my_file.txt
# Check File Size
welcome@welcomes-MacBook-Pro temp_data % ls -hl
total 2326664
-rw-r--r-- 1 welcome staff 1.1G Mar 3 20:27 my_file.txt
-rw-r--r-- 1 welcome staff 71B Mar 3 20:22
welcome@welcomes-MacBook-Pro temp_data %
  1. Snowflake architecture
  2. Virtual Warehouse
  3. Internal stage
  4. AWS/Azure/GCP based external stage
  5. Snowpipe
  6. File formats
  7. Task
  8. Dependent Task
  9. Micro Partitioning
  10. ADF to Snowflake

Parquet compression options: Parquet is designed for large-scale data and supports several compression codecs. Depending on your data, you may want a different codec.

LZ4: Compression codec loosely based on the LZ4 compression algorithm, but with an additional undocumented framing scheme. The framing comes from the original Hadoop compression library; it was copied first in parquet-mr, then emulated with mixed results by parquet-cpp.
LZO: Compression codec based on, or interoperable with, the LZO compression library.
GZIP: Compression codec based on the GZIP format (not the closely related "zlib" or "deflate" formats) defined by RFC 1952.
Snappy: The default compression for Parquet files; optimized for speed rather than ratio.
ZSTD: Compression codec with the highest compression ratio of the group, based on the Zstandard format defined by RFC 8478.
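To see how codec choice plays out in practice, here is a minimal sketch (assuming the pyarrow package is installed; the file names and sample column are illustrative) that writes the same table with three codecs and compares the resulting file sizes:

```python
# Sketch, assuming pyarrow is installed; paths are throwaway examples.
import os
import pyarrow as pa
import pyarrow.parquet as pq

# A moderately repetitive column so each codec has work to do.
table = pa.table({"reading": [i % 1000 for i in range(100_000)]})

sizes = {}
for codec in ("snappy", "gzip", "zstd"):
    path = f"readings_{codec}.parquet"
    pq.write_table(table, path, compression=codec)
    sizes[codec] = os.path.getsize(path)
    os.remove(path)

print(sizes)  # gzip and zstd typically produce smaller files than snappy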

anjijava16 / Top
Last active September 14, 2022 22:37
1. Hive joins
2. SQL functions and window functions (write one example of each in a notepad)
3. Top 3 records, or top N records
4. Best file format for Hive: the answer should be Parquet, and why
5. Map-side vs reduce-side join
6. Spark connectors; Spark with Hive connectors
7. reduceByKey vs groupByKey (good answer: groupByKey shuffles more data, reduceByKey shuffles less because it combines values per partition first)
8. cache vs persist
9. repartition vs coalesce
10. RDD vs DataFrame
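The reduceByKey vs groupByKey question above comes down to map-side combining: reduceByKey pre-aggregates within each partition before the shuffle, so far fewer records cross the network. A pure-Python sketch (not Spark; the partitioning is simulated) of the difference:

```python
# Pure-Python illustration of why reduceByKey shuffles fewer records
# than groupByKey: it combines values per partition before the shuffle.
from collections import defaultdict

# Simulated input partitions of (key, value) records.
partitions = [
    [("a", 1), ("b", 1), ("a", 1), ("a", 1)],
    [("b", 1), ("a", 1), ("b", 1)],
]

# groupByKey-style: every record crosses the shuffle boundary.
shuffled_group = [rec for part in partitions for rec in part]

# reduceByKey-style: map-side combine first, so at most one record
# per key per partition is shuffled.
shuffled_reduce = []
for part in partitions:
    local = defaultdict(int)
    for key, value in part:
        local[key] += value
    shuffled_reduce.extend(local.items())

# Both approaches produce the same final counts after the shuffle.
final = defaultdict(int)
for key, value in shuffled_reduce:
    final[key] += value

print(len(shuffled_group), "records shuffled groupByKey-style")
print(len(shuffled_reduce), "records shuffled reduceByKey-style")
```

Here 7 records cross the simulated shuffle in the groupByKey path but only 4 in the reduceByKey path, while both yield the same totals.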
# Hive