This project sets up Ollama and Open WebUI using Docker Compose.
- Start the services with Docker Compose:
docker-compose up -d
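A minimal compose file for the pair might look like the following. The ports, volume name, and environment variable are common defaults for these images, not taken from this project:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"        # Ollama API
    volumes:
      - ollama:/root/.ollama # persists downloaded models
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"          # UI served on http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
volumes:
  ollama:
```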
❱ git config user.signingKey 38AF394C
❱ git config commit.gpgSign true
❱ echo "test" | gpg --clearsign
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
test
gpg: signing failed: Inappropriate ioctl for device
gpg: [stdin]: clear-sign failed: Inappropriate ioctl for device
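The "Inappropriate ioctl for device" error usually means gpg cannot find the terminal on which to prompt for the passphrase. Exporting `GPG_TTY` before signing typically fixes it (putting it in the shell profile is a suggestion, not required):

```shell
# Tell gpg which terminal to use for pinentry prompts; add this line
# to ~/.bashrc or ~/.zshrc so it applies to every session.
export GPG_TTY=$(tty)
```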
#!/bin/bash
# Constants
HOST_NAME="$(hostname)"
DATE=$(date +%F-%H%M%S) # Format as 'YYYY-MM-DD-HHMMSS'
AWS_PROFILE="<YOUR_AWS_PROFILE>"
R2_ACCOUNT_ID="<YOUR_R2_ACCOUNT_ID>"
R2_BUCKET="<YOUR_BUCKET_NAME>"
R2_ENDPOINT="https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com"
REDIS_HOST="<YOUR_REDIS_HOST>"
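An upload step that would follow these constants could be sketched like this. The object-key layout and archive path are assumptions, not taken from the script; the real `aws s3 cp` call (shown commented out, since it needs live credentials) targets R2 through its S3-compatible endpoint:

```shell
#!/bin/bash
# Sketch only: build the object key from host and timestamp.
R2_BUCKET="my-bucket"        # placeholder for <YOUR_BUCKET_NAME>
HOST_NAME="$(hostname)"
DATE=$(date +%F-%H%M%S)
KEY="${HOST_NAME}/redis-${DATE}.tar.gz"
echo "would upload to s3://${R2_BUCKET}/${KEY}"
# aws s3 cp "/tmp/redis-${DATE}.tar.gz" "s3://${R2_BUCKET}/${KEY}" \
#   --endpoint-url "$R2_ENDPOINT" --profile "$AWS_PROFILE"
```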
use scylla::{SessionBuilder, IntoTypedRows};
use std::error::Error;
use std::env;
use std::time::Instant;
use dotenv::dotenv;
const QUERY_LIMIT: i32 = 5000;
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    dotenv().ok();
    // SCYLLA_URI is an assumed env var name; falls back to localhost.
    let uri = env::var("SCYLLA_URI").unwrap_or_else(|_| "127.0.0.1:9042".to_string());
    let _session = SessionBuilder::new().known_node(uri).build().await?;
    let _start = Instant::now();
    // ... issue queries here, capped at QUERY_LIMIT rows ...
    Ok(())
}
const cassandra = require('cassandra-driver');
const CASSANDRA_HOST = process.env.CASSANDRA_HOST || 'localhost';
const CASSANDRA_NODES = [`${CASSANDRA_HOST}:9042`, `${CASSANDRA_HOST}:9043`, `${CASSANDRA_HOST}:9044`];
const CASSANDRA_USER = process.env.CASSANDRA_USER || 'cassandra';
const CASSANDRA_PASSWORD = process.env.CASSANDRA_PASSWORD || 'cassandra';
const CASSANDRA_DATA_CENTER = process.env.CASSANDRA_DATA_CENTER || 'datacenter1';
const KEYSPACE = 'mykeyspace';
const TABLE = 'event';
const QUERY_LIMIT = 50000;
const FETCH_SIZE = 5000;
#!/bin/bash
# Constants
S3_BUCKET="<YOUR_BUCKET_NAME>"
HOST_NAME="$(hostname)"
DATE=$(date +%F-%H%M%S) # Format as 'YYYY-MM-DD-HHMMSS'
MAX_BACKUPS=5
AWS_PROFILE="<YOUR_AWS_PROFILE>"
R2_ACCOUNT_ID="<YOUR_R2_ACCOUNT_ID>"
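`MAX_BACKUPS=5` implies a rotation step. A sketch of the pruning logic is below; the sample key names are assumptions, and in the real script the list would come from `aws s3 ls "s3://${S3_BUCKET}/${HOST_NAME}/"` (GNU `head` is assumed, for its negative `-n`):

```shell
#!/bin/bash
# Sketch: timestamped names sort chronologically, so everything before
# the last MAX_BACKUPS entries is stale and can be deleted.
MAX_BACKUPS=5
KEYS="2024-01-01.tar.gz
2024-01-02.tar.gz
2024-01-03.tar.gz
2024-01-04.tar.gz
2024-01-05.tar.gz
2024-01-06.tar.gz
2024-01-07.tar.gz"
# Prints the keys that would be deleted (the two oldest of seven here).
echo "$KEYS" | sort | head -n "-${MAX_BACKUPS}"
```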
This guide provides instructions for setting up a comprehensive monitoring stack using Grafana, Prometheus, Node Exporter, cAdvisor, and Loki. These components are orchestrated with Docker Compose and exposed via an NGINX reverse proxy, making them accessible through a single domain.
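The compose file has one service per component; a trimmed excerpt might look like the following (image names and ports are the upstream defaults, and the NGINX wiring, volumes, and scrape configuration are omitted):

```yaml
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
  node-exporter:
    image: prom/node-exporter:latest
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
  loki:
    image: grafana/loki:latest
```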
curl -XPUT -k -u elastic:changeme "https://localhost:9200/_snapshot/repository_backups" -H 'Content-Type: application/json' -d '{
  "type": "fs",
  "settings": {
    "location": "/usr/share/elasticsearch/backup"
  }
}'
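Once the repository is registered, a snapshot can be taken against it. This sketch only builds the request (the curl call is commented out because it needs a running cluster, and the snapshot name is an assumption):

```shell
#!/bin/bash
ES_URL="https://localhost:9200"
REPO="repository_backups"
SNAPSHOT="snapshot-$(date +%F)"   # assumed naming scheme
echo "PUT ${ES_URL}/_snapshot/${REPO}/${SNAPSHOT}?wait_for_completion=true"
# curl -XPUT -k -u elastic:changeme \
#   "${ES_URL}/_snapshot/${REPO}/${SNAPSHOT}?wait_for_completion=true"
```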
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
      - REPOSITORY_URI=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
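A `pre_build` phase like this typically pairs with `build` and `post_build` phases along these lines (the tag scheme of `latest` plus the short commit hash is an assumption, not from this buildspec):

```yaml
  build:
    commands:
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$COMMIT_HASH
  post_build:
    commands:
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$COMMIT_HASH
```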