pushpendra pratap (pushpendrapratap)
import marvin
from marvin import ai_fn

marvin.settings.llm_model = 'gpt-4'

# @ai_fn lets marvin synthesize the implementation from the signature and docstring.
@ai_fn
def semantic_deduplication(new_item: str, existing_items: list[str]) -> list[str]:
    """
    Check if the `new_item` is semantically the same as any of the `existing_items`.
    If it is, update the existing item's description with the new one.
    """
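
Not part of the original gist: a minimal usage sketch, assuming marvin's @ai_fn generates the function body from the signature and docstring at call time. The example strings are made up.

# Hypothetical call; the decorated function is invoked like any other Python function.
items = ["New York City: the largest city in the US", "Boston: capital of Massachusetts"]
print(semantic_deduplication("NYC, the biggest US city", items))
# Expected: the "New York City" entry rewritten with the new description, "Boston" untouched.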
@rain-1
rain-1 / llama-home.md
Last active June 24, 2025 11:12
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

LLaMA is a text-prediction model, similar to GPT-2 and to the version of GPT-3 that has not been fine-tuned yet. It is also possible to run fine-tuned versions with this (like Alpaca or Vicuna, I think; those versions are more focused on answering questions).

Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.

It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of the transformer layers to run on the GPU. This is perfect for low VRAM (see the sketch after the setup steps below).

  • Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
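
The gist's remaining setup steps are cut off above, and it drives the llama.cpp command-line tools directly. Purely as a sketch of the same layer-offload idea, and not part of the original instructions, here is how the GPU layer count is exposed through the llama-cpp-python bindings; the model path and layer count below are assumptions to tune against a 6GB card.

# Sketch only: assumes `pip install llama-cpp-python` built with cuBLAS support
# and a quantized 13B model file on disk (the path below is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-13b-q4_0.bin",  # placeholder path to the quantized model
    n_gpu_layers=18,  # how many transformer layers to offload to the GPU; tune to fit 6GB of VRAM
    n_ctx=2048,
)

out = llm("Building a website can be done in 10 simple steps:", max_tokens=32)
print(out["choices"][0]["text"])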
from sagemaker.huggingface.model import HuggingFaceModel
from sagemaker.serverless import ServerlessInferenceConfig
from sagemaker.serializers import DataSerializer
import sagemaker
import boto3

# Resolve the SageMaker execution role; fall back to IAM when running outside SageMaker.
try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client('iam')
    role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']  # role name is a placeholder
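
The rest of this gist is not shown here. Purely as an illustrative sketch (not the author's code), this is roughly how the imports above are typically combined to deploy a Hugging Face Hub model behind SageMaker Serverless Inference; the model id, framework versions, and memory settings are assumptions. DataSerializer from the imports is normally used for binary payloads such as audio and is not exercised in this sketch.

# Sketch under stated assumptions: a Hub model deployed to a serverless endpoint.
hub = {"HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english", "HF_TASK": "text-classification"}

huggingface_model = HuggingFaceModel(
    env=hub,
    role=role,                    # execution role resolved above
    transformers_version="4.26",  # versions are placeholders; pick a supported combination
    pytorch_version="1.13",
    py_version="py39",
)

serverless_config = ServerlessInferenceConfig(memory_size_in_mb=4096, max_concurrency=10)
predictor = huggingface_model.deploy(serverless_inference_config=serverless_config)

print(predictor.predict({"inputs": "I love this!"}))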
@joelonsql
joelonsql / PostgreSQL-EXTENSIONs.md
Last active November 1, 2025 10:08
1000+ PostgreSQL EXTENSIONs

🗺🐘 1000+ PostgreSQL EXTENSIONs

This is a list of URLs to PostgreSQL EXTENSION repos, in alphabetical order of parent repo, with active forks listed under each parent.

⭐️ >= 10 stars
⭐️⭐️ >= 100 stars
⭐️⭐️⭐️ >= 1000 stars
Star counts might not be up to date.

@0xabad1dea
0xabad1dea / copilot-risk-assessment.md
Last active June 26, 2025 22:23
Risk Assessment of GitHub Copilot

0xabad1dea, July 2021

This is a rough draft and may be updated with more examples.

GitHub was kind enough to grant me swift access to the Copilot test phase despite me @'ing them several hundred times about ICE. I would like to examine it not in terms of productivity, but security. How risky is it to allow an AI to write some or all of your code?

Ultimately, a human being must take responsibility for every line of code that is committed. AI should not be used for "responsibility washing." However, Copilot is a tool, and workers need their tools to be reliable. A carpenter doesn't have to

@dabit3
dabit3 / basicmarket.sol
Last active January 9, 2024 08:54
Basic NFT marketplace
// SPDX-License-Identifier: MIT OR Apache-2.0
pragma solidity ^0.8.4;

import "@openzeppelin/contracts/utils/Counters.sol";
import "@openzeppelin/contracts/token/ERC721/extensions/ERC721URIStorage.sol";
import "@openzeppelin/contracts/security/ReentrancyGuard.sol";
import "@openzeppelin/contracts/token/ERC721/ERC721.sol";

contract NFT is ERC721URIStorage {
    using Counters for Counters.Counter;
@hamelsmu
hamelsmu / draft-mode.ipynb
Last active April 2, 2021 06:03
What is the payload field for determining the draft status of a PR for GitHub Actions?
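
The notebook itself is not rendered in this page. For reference, the pull_request webhook payload includes a boolean `draft` field (exposed in workflows as `github.event.pull_request.draft`); below is a small illustrative sketch, not taken from the notebook, that reads the flag inside an Actions run.

# Illustrative only: inspect the event payload that the Actions runner writes to disk.
import json
import os

with open(os.environ["GITHUB_EVENT_PATH"]) as f:  # path to the webhook event JSON
    event = json.load(f)

is_draft = event.get("pull_request", {}).get("draft", False)
print(f"PR is a draft: {is_draft}")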
@akihikodaki
akihikodaki / README.en.md
Last active October 29, 2025 09:09
Linux Desktop on Apple Silicon in Practice

I bought an M1 MacBook Air. It is the fastest computer I have, and I have been a GNOME/GNU/Linux user for a long time. The obvious conclusion is that I need a practical Linux desktop environment on Apple Silicon.

Fortunately, Linux already works on Apple Silicon/M1. But how practical is it?

  • Two native ports exist.
@muellerzr
muellerzr / memory_profiling.ipynb
Last active February 16, 2021 16:45
Memory Profiling