Soumya Snigdha Kundu aymuos15

🌍
Always Curious
View GitHub Profile
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
import numpy as np
import matplotlib.pyplot as plt

class SimpleCNN(nn.Module):
    def __init__(self, conv1_out=32, conv2_out=64):
        super(SimpleCNN, self).__init__()
        # 1-channel input, 3x3 kernels, stride 1
        self.conv1 = nn.Conv2d(1, conv1_out, 3, 1)
        # second conv layer, implied by the conv2_out parameter
        self.conv2 = nn.Conv2d(conv1_out, conv2_out, 3, 1)
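The `torch.nn.utils.prune` import suggests this model is set up for weight pruning. As a minimal, torch-free sketch of the idea behind L1 unstructured pruning (the function name here is illustrative, not torch's API): zero out the fraction of weights with the smallest absolute value.

```python
def l1_unstructured_prune(weights, amount):
    """Zero the `amount` fraction of weights with smallest |w| (L1 magnitude).

    Illustrative sketch only; torch's prune.l1_unstructured does this
    in-place on a module parameter via a mask.
    """
    k = int(len(weights) * amount)
    if k == 0:
        return list(weights)
    # indices sorted by ascending magnitude; the first k get pruned
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    cut = set(order[:k])
    return [0.0 if i in cut else w for i, w in enumerate(weights)]

# Pruning half of these weights zeroes the two smallest magnitudes:
print(l1_unstructured_prune([0.5, -0.1, 2.0, -0.05], 0.5))  # → [0.5, 0.0, 2.0, 0.0]
```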
aymuos15 / Running_Ollama_locally.md
Last active August 31, 2024 14:20
Simple way to run llama3 locally

On Linux

  1. Create a Python virtual environment.

     - `python3 -m venv ollama`
    
  2. To install Ollama

     - Open Terminal