sudo nmcli dev wifi hotspot ifname wlan0 con-name "my-hotspot" ssid "my-hotspot" password "My HotsPoT Strong Password"
# template tag: {% url django.contrib.auth.views.password_reset_confirm uidb36=uidb36 token=token %}
from django.utils.http import int_to_base36
from django.contrib.auth.tokens import default_token_generator
from django.contrib.auth.models import User

user = User.objects.get(pk=1)
context = {
    'uidb36': int_to_base36(user.pk),
    'token': default_token_generator.make_token(user),
}
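The `uidb36` value is just the user's primary key encoded as a compact base-36 string; Django decodes it again in the confirm view before checking the token. As a rough illustration of what `int_to_base36` does (a pure-Python sketch for clarity, not Django's actual source), the encoding looks like this:

```python
def int_to_base36(i):
    # Sketch of django.utils.http.int_to_base36: repeatedly divide by 36
    # and map each remainder to a digit in 0-9a-z.
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    if i < 0:
        raise ValueError("Negative base36 conversion input.")
    if i < 36:
        return digits[i]
    out = ""
    while i:
        i, rem = divmod(i, 36)
        out = digits[rem] + out
    return out
```

So a user with `pk=1` produces `uidb36 == "1"`, while larger keys stay short (e.g. 12345 encodes to "9ix"), which keeps the reset URL compact.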
The problem with large language models is that you can't run them locally on your laptop. Thanks to Georgi Gerganov and his llama.cpp project, it is now possible to run Meta's LLaMA on a single computer without a dedicated GPU.
There are multiple steps involved in running LLaMA locally on an M1 Mac after downloading the model weights.