Jacob Pretorius (jacobpretorius)
namespace Epi.Libraries.Commerce.Predictions
{
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Threading;
    using System.Web;

    using EPiServer.Commerce.Catalog.ContentTypes;
    using EPiServer.Commerce.Catalog.Linking;

    // ... (snippet truncated in the source)
}
CREATE TABLE [dbo].[tblFindTrackingQueue]
(
    [Id] [bigint] IDENTITY(1,1) NOT NULL,
    [TrackingId] [nvarchar](max) NOT NULL,
    [NrOfHits] [int] NOT NULL,
    [Query] [nvarchar](max) NOT NULL,
    [Tags] [nvarchar](max) NOT NULL
);
GO
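As a sketch of how this queue table might be used, a row for one tracked search could be enqueued as below. All column values here are hypothetical and only illustrate the schema; they do not come from the original gist.

```sql
-- Hypothetical example: enqueue one tracked Find query.
-- TrackingId, hit count, query text, and tags are illustrative values.
INSERT INTO [dbo].[tblFindTrackingQueue] ([TrackingId], [NrOfHits], [Query], [Tags])
VALUES (N'8f2c4d1e-0a7b-4c3d-9e5f-112233445566', 12, N'running shoes', N'language:en,market:UK');
GO
```

Since [Id] is an IDENTITY column, it is omitted from the insert and assigned automatically.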
adrienbrault / llama2-mac-gpu.sh (last active July 1, 2024 05:32)
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM. UPDATE: see https://twitter.com/simonw/status/1691495807319674880?s=20
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
make clean
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
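The snippet stops right after setting `MODEL`; the "# Download model" comment implies a fetch step follows. As a hedged sketch (the `TheBloke/Llama-2-13B-chat-GGML` Hugging Face repository path is an assumption, not confirmed by the snippet above), the download URL can be built from the `MODEL` variable:

```shell
# Rebuild the download URL from the model file name.
# NOTE: the TheBloke/Llama-2-13B-chat-GGML repository path is an assumption.
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
URL="https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/${MODEL}"
echo "Would download: $URL"
# A plausible continuation would then fetch and run the model, e.g.:
#   wget "$URL"
#   ./main -m "./${MODEL}" -n 256 --n-gpu-layers 1 -p "Hello"
```

Keeping the file name in `MODEL` means swapping in a different quantization (for example `q4_K_M`) only requires changing one variable.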