@ddjerqq
ddjerqq / discord-timestamps.md
Created March 17, 2023 10:04 — forked from LeviSnoot/discord-timestamps.md
Discord Timestamp Syntax

Discord Timestamps

Discord timestamps can be useful for specifying a date/time across multiple users' time zones. They use the Unix timestamp format and can be posted by regular users as well as bots and applications.

The Epoch Unix Time Stamp Converter is a good way to quickly generate a timestamp. The examples below use the timestamp 1543392060, which represents November 28th, 2018 at 09:01:00 in my local time zone (GMT+0100, Central European Standard Time).

Formatting

| Style   | Input             | Output (12-hour clock)    | Output (24-hour clock) |
| ------- | ----------------- | ------------------------- | ---------------------- |
| Default | `<t:1543392060>`  | November 28, 2018 9:01 AM | 28 November 2018 09:01 |
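
For illustration (my addition, not part of the forked gist), a timestamp tag like the one above can be generated in Python:

```python
from datetime import datetime, timezone

# the example moment from above: 2018-11-28 08:01:00 UTC == 09:01 CET
dt = datetime(2018, 11, 28, 8, 1, 0, tzinfo=timezone.utc)
unix_ts = int(dt.timestamp())

print(unix_ts)             # 1543392060
print(f"<t:{unix_ts}>")    # default style: <t:1543392060>
print(f"<t:{unix_ts}:R>")  # relative style, renders like "5 years ago"
```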
@ddjerqq
ddjerqq / cuINDEX.md
Last active March 6, 2023 10:46
a small helper for indexing blocks and threads with CUDA.

CUDA helper for block and thread indexing

general rule of thumb

threadsperblock = 32
blockspergrid = (an_array.size + (threadsperblock - 1)) // threadsperblock
increment_by_one[blockspergrid, threadsperblock](an_array)
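
That snippet is Numba-style CUDA Python; a self-contained version of the same rule of thumb (a sketch, assuming the numba and numpy packages) might look like:

```python
import numpy as np
from numba import cuda

@cuda.jit
def increment_by_one(an_array):
    # global thread index: block offset plus thread offset within the block
    pos = cuda.blockIdx.x * cuda.blockDim.x + cuda.threadIdx.x
    if pos < an_array.size:  # guard: the rounded-up last block may overshoot
        an_array[pos] += 1

an_array = np.zeros(1000, dtype=np.float32)
threadsperblock = 32
# round up so every element is covered by a thread
blockspergrid = (an_array.size + (threadsperblock - 1)) // threadsperblock
increment_by_one[blockspergrid, threadsperblock](an_array)
```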
@ddjerqq
ddjerqq / sha256.cuh
Last active May 20, 2024 17:51
CUDA C++ implementation of SHA256
#ifndef SHA256_CUH
#define SHA256_CUH
#include <cstring>
#include <sstream>
#include <iomanip>
#include <string>
#include <array>
@ddjerqq
ddjerqq / ripple.cu
Created January 21, 2023 20:02
CUDA ripple effect on the GPU, with frame timing and a BLOCK explanation comment to help with indexing.
#include <chrono>
#include "../common/book.h"
#include "../common/cpu_anim.h"
#define DIM 1024
using namespace std;
using namespace std::chrono;
// 2D array of blocks and dimensions
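
For help with the 2D indexing that comment refers to, here is a sketch in Numba CUDA Python (the grey-value formula follows the CUDA by Example book that book.h and cpu_anim.h come from; this is not the gist's exact code):

```python
import math
import numpy as np
from numba import cuda

DIM = 1024  # the gist's image size

@cuda.jit
def ripple_kernel(bitmap, ticks):
    # each thread owns one pixel: a 2D grid of blocks, 2D blocks of threads
    x = cuda.threadIdx.x + cuda.blockIdx.x * cuda.blockDim.x
    y = cuda.threadIdx.y + cuda.blockIdx.y * cuda.blockDim.y
    if x < DIM and y < DIM:
        # the grey value oscillates with distance from the image centre
        fx, fy = x - DIM / 2, y - DIM / 2
        d = math.sqrt(fx * fx + fy * fy)
        bitmap[y, x] = 128.0 + 127.0 * math.cos(d / 10.0 - ticks / 7.0) / (d / 10.0 + 1.0)

bitmap = np.zeros((DIM, DIM), dtype=np.float32)
ripple_kernel[(DIM // 16, DIM // 16), (16, 16)](bitmap, 0)
```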
@ddjerqq
ddjerqq / vector_add.cu
Last active January 20, 2023 22:30
simple vector addition code for CUDA, with rich comments.
#include <chrono>
#include "../common/book.h"
#define N (1 << 16)
// element count: 65536
// max blocks -> 65535
// max threads per block -> 512
using namespace std;
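
Those limits are why a plain one-element-per-block launch needs care: N = 65536 elements would exceed the 65535-block limit. One common remedy is a grid-stride loop; a sketch in Numba CUDA Python (my illustration, not the gist's C++):

```python
import numpy as np
from numba import cuda

N = 1 << 16  # 65536 elements

@cuda.jit
def vector_add(a, b, c):
    # start at this thread's global index ...
    i = cuda.blockIdx.x * cuda.blockDim.x + cuda.threadIdx.x
    # ... and stride by the total number of launched threads,
    # so any N is covered regardless of grid size
    stride = cuda.gridDim.x * cuda.blockDim.x
    while i < a.size:
        c[i] = a[i] + b[i]
        i += stride

a = np.arange(N, dtype=np.float32)
b = 2 * a
c = np.zeros_like(a)
vector_add[128, 128](a, b, c)  # 128 blocks x 128 threads, strided over N
```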
@ddjerqq
ddjerqq / julia.cu
Last active March 28, 2023 06:30
Julia fractal set using CUDA and complex numbers
// check comments for where to find these headers
#include "../common/book.h"
#include "../common/cpu_bitmap.h"
#define DIM 1000
#define SCALE 0.1f
#define STEPS 300
#define JULIA_REAL (-0.8f)
#define JULIA_IMAG (0.155f)
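
The escape-time test behind those constants, sketched in plain Python with complex numbers (the divergence threshold of 1000 follows the CUDA by Example version this appears to build on; the gist runs the per-pixel test on the GPU instead):

```python
DIM, SCALE, STEPS = 1000, 0.1, 300
C = complex(-0.8, 0.155)  # JULIA_REAL + JULIA_IMAG * i

def julia(x: int, y: int) -> bool:
    """Return True if pixel (x, y) stays bounded under z <- z**2 + c."""
    # map the pixel onto the scaled complex plane, centred on the image
    z = complex(SCALE * (DIM / 2 - x) / (DIM / 2),
                SCALE * (DIM / 2 - y) / (DIM / 2))
    for _ in range(STEPS):
        z = z * z + C
        if z.real * z.real + z.imag * z.imag > 1000:
            return False  # diverged: not in the set
    return True  # stayed bounded: in the set

# slow in pure Python, but shows the per-pixel computation
image = [[julia(x, y) for x in range(DIM)] for y in range(DIM)]
```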
@ddjerqq
ddjerqq / hello.cu
Created January 8, 2023 15:39
hello CUDA application: my first CUDA app
#include <iostream>
using namespace std;
// create a 'kernel': a function that runs on the 'device', i.e. the GPU
__global__
void saxpy(int n, float a, float* x, float* y)
{
    // get the current index in the arrays
    int i = blockIdx.x * blockDim.x + threadIdx.x;
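
The preview stops mid-kernel; for reference, a complete minimal SAXPY (y = a*x + y) sketched in Numba CUDA Python rather than the gist's C++:

```python
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(n, a, x, y):
    # one thread per element, same index computation as above
    i = cuda.blockIdx.x * cuda.blockDim.x + cuda.threadIdx.x
    if i < n:  # the rounded-up grid may overshoot n
        y[i] = a * x[i] + y[i]

n = 1 << 20
x = np.ones(n, dtype=np.float32)
y = np.full(n, 2.0, dtype=np.float32)
saxpy[(n + 255) // 256, 256](n, 2.0, x, y)
print(y[0])  # 2.0 * 1.0 + 2.0 = 4.0
```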
@ddjerqq
ddjerqq / bloom_filter.py
Created December 15, 2022 22:27
A Bloom filter is a space-efficient probabilistic data structure, conceived by Burton Howard Bloom in 1970, that is used to test whether an element is a member of a set.
"""Bloom Filter
A Bloom filter is a space-efficient probabilistic data structure,
conceived by Burton Howard Bloom in 1970, that is used to test whether
an element is a member of a set. False positive matches are possible,
but false negatives are not – in other words, a query returns either
"possibly in set" or "definitely not in set". Elements can be added to
the set, but not removed (though this can be addressed with the counting
Bloom filter variant); the more items added, the larger the probability
of false positives.
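
A minimal sketch of the idea (mine, not necessarily this gist's implementation): k salted hashes set and test k bits in a fixed-size bit array.

```python
import hashlib

class BloomFilter:
    def __init__(self, size: int = 1024, hash_count: int = 3):
        self.size = size
        self.hash_count = hash_count
        self.bits = [False] * size

    def _positions(self, item: str):
        # derive hash_count indices by salting one cryptographic hash
        for salt in range(self.hash_count):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item: str) -> bool:
        # False -> definitely absent; True -> possibly present
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("hello")
assert "hello" in bf  # always true once added
print("world" in bf)  # usually False; True would be a false positive
```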
@ddjerqq
ddjerqq / recursive url and title extractor.py
Created December 13, 2022 22:23
a better version of url_extract_and_save.py: recursively extracts URLs from a page and goes one level deeper.
"""recursively extract links and their titles from a web address
"""
from __future__ import annotations
import re
import asyncio as aio
import aiohttp
from aiohttp import ClientTimeout
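
A sketch of the one-level-deep idea (mine, with a deliberately naive regex link extractor; the gist's parsing may differ):

```python
import re
import asyncio as aio

import aiohttp
from aiohttp import ClientTimeout

LINK_RE = re.compile(r'href="(https?://[^"]+)"')
TITLE_RE = re.compile(r"<title[^>]*>(.*?)</title>", re.S | re.I)

async def crawl(url: str, depth: int = 1) -> dict[str, str]:
    results: dict[str, str] = {}
    seen: set[str] = set()
    async with aiohttp.ClientSession(timeout=ClientTimeout(total=10)) as session:
        async def visit(u: str, d: int) -> None:
            if u in seen:
                return
            seen.add(u)
            try:
                async with session.get(u) as resp:
                    html = await resp.text()
            except Exception:
                return  # unreachable page: skip it
            m = TITLE_RE.search(html)
            results[u] = m.group(1).strip() if m else ""
            if d > 0:
                # recurse exactly one level deeper, fetching children concurrently
                await aio.gather(*(visit(child, d - 1)
                                   for child in set(LINK_RE.findall(html))))
        await visit(url, depth)
    return results

if __name__ == "__main__":
    for link, title in aio.run(crawl("https://example.com")).items():
        print(f"{title!r} -> {link}")
```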
@ddjerqq
ddjerqq / url_extract_and_save.py
Created December 13, 2022 22:18
extract and save URLs
"""
1. Create a web crawler/scraper that uses socket connections (no Selenium) to gather links from web pages and add them to a process queue.
2. The queue will be processed by P processes (however many cores the machine has).
3. Each process will use aiohttp (async) with at most T threads/tasks (variable, default: 100) to scrape pages from the queue and add new links to it.
4. Store the title of every scraped HTML page along with its URL in an SQLite database file (a minimal sketch of this step follows).
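
A minimal sketch of step 4 (my illustration; the table name and schema are assumptions, not the gist's):

```python
import sqlite3

# persist (url, title) pairs as they are scraped
conn = sqlite3.connect("pages.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS pages (
        url   TEXT PRIMARY KEY,
        title TEXT
    )"""
)

def save_page(url: str, title: str) -> None:
    # INSERT OR REPLACE keeps re-scraped URLs from violating the PRIMARY KEY
    conn.execute("INSERT OR REPLACE INTO pages (url, title) VALUES (?, ?)",
                 (url, title))
    conn.commit()

save_page("https://example.com", "Example Domain")
```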