Anantha Raju C (AnanthaRajuC)

@AnanthaRajuC
AnanthaRajuC / actor_film_actor_join.sql
Last active December 25, 2023 12:16
actor_film_actor_join.sql
/*
Welcome to your first dbt model!
Did you know that you can also configure models directly within SQL files?
This will override configurations stated in dbt_project.yml
Try changing "table" to "view" below
*/
{{ config(materialized='table') }}
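The preview ends at the config block. As a hedged sketch of the model body the filename suggests, the rest might join the Sakila-style actor and film_actor tables; the source and column names below are assumptions, not taken from the gist.

-- Sketch only: source/table names assumed from the Sakila sample schema.
select
    a.actor_id,
    a.first_name,
    a.last_name,
    fa.film_id
from {{ source('sakila', 'actor') }} as a
join {{ source('sakila', 'film_actor') }} as fa
    on fa.actor_id = a.actor_id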
@AnanthaRajuC
AnanthaRajuC / docker-compose-non-dev-updated-localhost-connection.yml
Created December 24, 2023 10:39
Apache Superset Default Docker Compose file Updated to connect to localhost in order to connect to locally installed Database (ex: MySQL)
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
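The preview shows only the license header. A hedged sketch of the kind of change the description implies: to let the Superset containers reach a MySQL server installed on the Docker host, an extra_hosts mapping is typically added so host.docker.internal resolves inside the containers. The service name below is an assumption about the file's layout.

services:
  superset:
    # Assumption: expose the Docker host to the container so a locally
    # installed MySQL is reachable as host.docker.internal.
    extra_hosts:
      - "host.docker.internal:host-gateway"

With this in place, the database connection URI in Superset would point at host.docker.internal:3306 rather than localhost, since localhost inside the container refers to the container itself.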
@AnanthaRajuC
AnanthaRajuC / create_superset_users.py
Created April 4, 2023 13:16 — forked from pajachiet/create_superset_users.py
Scripting on Superset backend to fix bugs or automate tasks
#!/usr/bin/env python
# coding: utf8
# Script to create superset users
import pandas as pd
from sqlalchemy import create_engine
import datetime
import pexpect
import os
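The preview stops after the imports. Given pexpect in the import list, a plausible (unconfirmed) continuation reads users from a spreadsheet and drives the Flask-AppBuilder CLI for each row; the CSV layout, file name, and prompt strings below are assumptions.

# Hedged sketch only: CSV columns, file name, and prompt text are assumed.
users = pd.read_csv("users.csv")  # columns: username, first, last, email, password

for _, u in users.iterrows():
    child = pexpect.spawn(
        "superset fab create-user "
        f"--username {u.username} --firstname {u.first} "
        f"--lastname {u.last} --email {u.email} --role Gamma"
    )
    child.expect("Password:")                  # assumed CLI prompt
    child.sendline(u.password)
    child.expect("Repeat for confirmation:")   # assumed confirmation prompt
    child.sendline(u.password)
    child.expect(pexpect.EOF)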
==========================
How Software Companies Die
==========================
- Orson Scott Card
The environment that nurtures creative programmers kills management and
marketing types - and vice versa.
Programming is the Great Game. It consumes you, body and soul. When
you're caught up in it, nothing else matters. When you emerge into
@AnanthaRajuC
AnanthaRajuC / jpa-cheatsheet.java
Created November 27, 2022 03:56 — forked from jahe/jpa-cheatsheet.java
JPA Cheatsheet
/*
JPA (Java Persistence API)
Transaction Management with an Entity-Manager:
---
entityManager.getTransaction().begin();
entityManager.persist(<some-entity>);
entityManager.getTransaction().commit();
entityManager.clear();
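For context, a minimal runnable frame around the calls in the snippet might look like the following; the persistence-unit name and the entity are illustrative assumptions, and the jakarta.persistence namespace is used (an older cheatsheet would use javax.persistence).

import jakarta.persistence.*;

@Entity
class Book {
    @Id @GeneratedValue
    Long id;
    String title;
}

public class JpaDemo {
    public static void main(String[] args) {
        // "bookstore" is an assumed persistence-unit name from persistence.xml
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("bookstore");
        EntityManager em = emf.createEntityManager();

        em.getTransaction().begin();
        Book book = new Book();
        book.title = "JPA Cheatsheet";
        em.persist(book);              // schedules an INSERT
        em.getTransaction().commit();  // flushes and writes the row
        em.clear();                    // detaches all managed entities

        em.close();
        emf.close();
    }
}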
@AnanthaRajuC
AnanthaRajuC / Dockerfile
Created November 8, 2022 04:24 — forked from Abhi-Codes/Dockerfile
Dockerfile to build optimized OCI image in Spring Boot
#Multi Stage Docker Build
#STAGE 1 - Build the layered jar
#Use Maven base image
FROM maven:3.8.3-openjdk-17 AS builder
COPY src /home/app/src
COPY pom.xml /home/app
#Build an uber jar
RUN mvn -f /home/app/pom.xml package
WORKDIR /home/app/target
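The preview cuts off after the build stage. In this pattern a second stage usually extracts the Spring Boot layers and copies them into a slim runtime image; the sketch below assumes Spring Boot's layertools jar mode and an eclipse-temurin base image, neither of which is confirmed by the preview.

# Extract the layered jar in the builder stage (assumed continuation)
RUN java -Djarmode=layertools -jar *.jar extract

#STAGE 2 - Copy the extracted layers into a slim runtime image
FROM eclipse-temurin:17-jre
WORKDIR /app
# Rarely-changing layers come first so Docker can cache them
COPY --from=builder /home/app/target/dependencies/ ./
COPY --from=builder /home/app/target/spring-boot-loader/ ./
COPY --from=builder /home/app/target/snapshot-dependencies/ ./
COPY --from=builder /home/app/target/application/ ./
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]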
# A Docker Compose file must always start with the version tag.
# We use '3' because it is the latest major version.
version: "3.7"
# You should know that Docker Compose works with services.
# 1 service = 1 container.
# For example, a service, a server, a client, a database...
# We use the keyword 'services' to begin declaring them.
services:
zookeeper:
image: confluentinc/cp-zookeeper:latest
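The preview stops at the zookeeper service. A kafka broker service normally accompanies it in this stack; the sketch below uses common confluentinc/cp-kafka settings and is an assumption, not part of the original file.

  # Assumed companion service, not shown in the preview above.
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1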
@AnanthaRajuC
AnanthaRajuC / Retrive Json Data from Mysql.sql
Created September 15, 2022 05:58 — forked from ti-ka/Retrive Json Data from Mysql.sql
If you have saved data into a field as JSON, you can use MySQL to retrieve each field. This may be useful when migrating a database after you have decided that storing JSON in a database was not a good idea.
/* APPENDIX 1-B: CLEANUP
This makes this SQL query re-runnable */
DROP TABLE IF EXISTS JSON_TABLE;
DROP TABLE IF EXISTS SPLIT_TABLE;
DROP VIEW IF EXISTS SPLIT_VIEW;
/* APPENDIX 1-B: Prepare TABLE
Let's say this is an example table */
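For reference, the string-splitting approach this gist builds up can often be replaced by MySQL's native JSON functions (available from 5.7); a brief sketch, with the table and column names assumed:

-- Assumed table/column names; requires MySQL 5.7+.
SELECT
  JSON_EXTRACT(payload, '$.name') AS name,                  -- keeps JSON quoting
  JSON_UNQUOTE(JSON_EXTRACT(payload, '$.email')) AS email,  -- plain string
  payload->>'$.age' AS age                                  -- shorthand for unquoted extract
FROM example_table;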

Delete message after consuming it in Kafka

In Kafka, keeping track of what has been consumed is the responsibility of the consumer, and this is one of the main reasons Kafka scales so well horizontally.

Using the high-level consumer API will do this for you automatically, by committing consumed offsets in ZooKeeper (or, with a more recent configuration option, in a special Kafka topic that keeps track of consumed messages).

The simple consumer API makes you deal with how and where to keep track of consumed messages yourself.

Purging of messages in Kafka happens automatically, either by specifying a retention time for a topic or by defining a disk quota for it. In your case of one 5 GB file, the data will be deleted after the retention period you define has passed, regardless of whether it has been consumed.
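For concreteness, retention is set per topic; a hedged shell sketch using the stock Kafka tooling (the topic name and broker address are assumptions):

# Topic name and broker address are illustrative only.
kafka-configs --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic \
  --alter --add-config retention.ms=604800000   # keep data for 7 days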

@AnanthaRajuC
AnanthaRajuC / tail_cdc_kafka_topic.sh
Created August 14, 2022 10:59
tail kafka cdc topic
docker run --tty --network postgres_debezium_cdc_default confluentinc/cp-kafkacat kafkacat -b kafka:9092 -C -s key=s -s value=avro -r http://schema-registry:8081 -t postgres.public.student
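Flag by flag: -b names the broker to bootstrap from, -C puts kafkacat in consumer mode, the two -s options set the key deserializer to string and the value deserializer to Avro, -r points at the Schema Registry used to decode the Avro payloads, and -t selects the Debezium CDC topic (postgres.public.student). The --network option attaches the container to the compose network so the kafka and schema-registry hostnames resolve.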