@mberman84
Created August 31, 2023 00:25
Code LLaMA Installation
# Make sure you have Anaconda installed
# This tutorial assumes you have an Nvidia GPU, but you can find the non-GPU version on the Textgen WebUI github
# More information found here: https://github.com/oobabooga/text-generation-webui
conda create -n textgen python=3.10.9
conda activate textgen
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
python -m pip install -r requirements.txt
python -m torch.utils.collect_env # optional, checks that you have CUDA enabled
# if you have trouble with CUDA being enabled in torch, try this:
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
# if you run into the chardet and cchardet issues I did, try this:
python -m pip install chardet
python -m pip install cchardet
python server.py
@Joga19Bonito

I have the same issue here on a MacBook Pro M2. What can I do?

(screenshot of error attached)

@Volko76

Volko76 commented Sep 5, 2023

I finished installing C++ with the defaults, but it did not solve my issue.

@Volko76

Volko76 commented Sep 5, 2023

(screenshot of error attached)

@Volko76

Volko76 commented Sep 5, 2023

1- Go to the text-generation-webui folder
2- Go to the modules folder
3- Open exllama_hf.py and change line 21 from:
from model import ExLlama, ExLlamaCache, ExLlamaConfig
to:
from exllama.model import ExLlama, ExLlamaCache, ExLlamaConfig
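
The edit above can also be scripted; a minimal sketch, demonstrated here on a stand-in file (in a real install the target file would be modules/exllama_hf.py, which you should back up first):

```shell
# Demonstrates the import rewrite from the steps above on a stand-in file.
# In a real install, point sed at modules/exllama_hf.py instead.
printf 'from model import ExLlama, ExLlamaCache, ExLlamaConfig\n' > exllama_hf_demo.py
sed -i 's/^from model import/from exllama.model import/' exllama_hf_demo.py
cat exllama_hf_demo.py
```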

It does not solve all the issues, but I think it is progress, because now I get:

(screenshot of error attached)

@Volko76

Volko76 commented Sep 5, 2023

(screenshot of error attached)

@Joga19Bonito

(re: the screenshot above)

Can you provide the URL for the repo? Thank you.

@Versfragment

Could someone please help me out? I ran this once, and now, a day later, I want to run it again, but it doesn't seem to work anymore. I therefore tried to re-install everything the same way, but I still get this error, which I am not able to fix myself (note: I am a complete beginner in this field, so maybe I am missing something that is required to let this run smoothly every time?):

(screenshot of error attached)

@prithishh3

prithishh3 commented Sep 6, 2023

python -m pip install cchardet gave me this

Collecting cchardet
  Downloading cchardet-2.1.7.tar.gz (653 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 653.6/653.6 kB 6.8 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... done
Building wheels for collected packages: cchardet
  Building wheel for cchardet (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> [11 lines of output]
      running bdist_wheel
      running build
      running build_py
      creating build
      creating build\lib.win-amd64-cpython-310
      creating build\lib.win-amd64-cpython-310\cchardet
      copying src\cchardet\version.py -> build\lib.win-amd64-cpython-310\cchardet
      copying src\cchardet\__init__.py -> build\lib.win-amd64-cpython-310\cchardet
      running build_ext
      building 'cchardet._cchardet' extension
      error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for cchardet
  Running setup.py clean for cchardet
Failed to build cchardet
ERROR: Could not build wheels for cchardet, which is required to install pyproject.toml-based projects

I faced the same issue while installing cchardet. Just use conda to install cchardet directly; it worked for me:

conda install -c conda-forge cchardet

@LongShotRanger

LongShotRanger commented Sep 6, 2023

Getting this while loading models

ImportError: DLL load failed while importing exllama_ext_v1: The specified module could not be found.

@zyrain

zyrain commented Sep 7, 2023

I was installing on Debian WSL2. I had to do the following additional things:

wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt install build-essential gcc cmake cuda

Hope that helps someone!

@maddydevel

Traceback (most recent call last):
  File "/home/super/Devel/textgen/text-generation-webui/modules/ui_model_menu.py", line 194, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name, loader)
  File "/home/super/Devel/textgen/text-generation-webui/modules/models.py", line 77, in load_model
    output = load_func_map[loader](model_name)
  File "/home/super/Devel/textgen/text-generation-webui/modules/models.py", line 332, in ExLlama_HF_loader
    return ExllamaHF.from_pretrained(model_name)
  File "/home/super/Devel/textgen/text-generation-webui/modules/exllama_hf.py", line 154, in from_pretrained
    return ExllamaHF(config)
  File "/home/super/Devel/textgen/text-generation-webui/modules/exllama_hf.py", line 31, in __init__
    self.ex_model = ExLlama(self.ex_config)
  File "/home/super/anaconda3/envs/textgen/lib/python3.10/site-packages/exllama/model.py", line 834, in __init__
    else: keep_tensor = tensor.to(device)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 34.00 MiB (GPU 0; 3.94 GiB total capacity; 3.80 GiB already allocated; 9.75 MiB free; 3.84 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

@Volko76

Volko76 commented Sep 17, 2023

Re: "Can you provide the URL for the repo? Thank you."

https://github.com/turboderp/exllama (I think)

@Volko76

Volko76 commented Sep 17, 2023

(quoting the "torch.cuda.OutOfMemoryError: CUDA out of memory" traceback above)

Your GPU doesn't have enough VRAM.
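
When the model only barely doesn't fit, the workaround the traceback itself suggests can be tried first; a minimal sketch (it reduces allocator fragmentation but cannot create missing VRAM, so a 3.94 GiB card may still be too small):

```shell
# PYTORCH_CUDA_ALLOC_CONF is the environment variable named in the error;
# capping max_split_size_mb reduces fragmentation of the CUDA allocator.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
# then launch the UI as usual:
# python server.py
```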

@Volko76

Volko76 commented Sep 17, 2023

Getting this while loading models

ImportError: DLL load failed while importing exllama_ext_v1: The specified module could not be found.

Try using Pinokio: https://pinokio.computer/

@jgdk

jgdk commented Sep 20, 2023

On my Windows 11 machine I got these errors and messages:
INFO:Loading TheBloke_WizardCoder-Python-13B-V1.0-GPTQ...
WARNING:Exllama module failed to load. Will attempt to load from repositories.
ERROR:Could not find repositories/exllama/. Make sure that exllama is cloned inside repositories/ and is up to date.
ERROR:Failed to load the model.

@yishaihl

I have the same issue here on a MacBook Pro M2. What can I do?


Did you manage to fix it?

@Volko76

Volko76 commented Sep 21, 2023

Why don't you read the thread?
All of your errors come from the fact that you haven't cloned the exllama repo.
1st solution (fastest, easiest): use Pinokio https://pinokio.computer/
2nd solution: clone the exllama repo https://github.com/turboderp/exllama into the appropriate folder (READ THE README PLEASE!)
https://user-images.githubusercontent.com/70014984/265704342-eb505282-c4b6-498d-8201-82723ed5c939.png
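
Based on the webui error shown earlier ("Could not find repositories/exllama/"), the clone step might look like this sketch, run from inside the text-generation-webui folder (the paths are assumptions from that error message):

```shell
# Sketch: clone exllama where the webui expects it. Run from inside the
# text-generation-webui folder; the clone needs network access.
mkdir -p repositories
git clone https://github.com/turboderp/exllama repositories/exllama || true  # tolerate offline runs
ls repositories
```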

@richdrummer33

richdrummer33 commented Sep 23, 2023

Here is the repo URL that you paste into the Pinokio app to run the full installation and get Code Llama running with the web UI:

https://github.com/cocktailpeanut/text-generation-webui.pinokio

(screenshot of Pinokio attached)

FYI: Pinokio treats this repo as an "installer" and performs all of the necessary setup steps for you, automagically (including creating the virtual environment, installing packages, etc.). Once the installation is complete, it lets you launch the "app" through its UI.

I had not heard about Pinokio until now, and I ran into similar issues with the llama models/llama.cpp and CUDA as well.

It saved me a lot of headache!

@Volko76

Volko76 commented Sep 24, 2023

Thanks man

@ssaha7714

Hi, I am using a Mac and getting the following error while executing the following line:

(textgen) ~/Documents/test/ pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
Looking in indexes: https://download.pytorch.org/whl/cu117
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch

@romilandc

Double-check that your installed CUDA version matches the torch version you're installing in this environment. @mberman84 installs cu117, but if you have CUDA 11.8, use this instead: pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

You can check your CUDA version in cmd using: nvcc --version

Finally, use the suggested command to check that CUDA is available: python -m torch.utils.collect_env
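
The checks above can be combined into one sketch; the cu117/cu118 index URLs are real PyTorch wheel indexes, while the parsing of the nvcc output here is an assumption:

```shell
# Map the locally installed CUDA toolkit version to the matching torch wheel index.
cuda_release=$(nvcc --version 2>/dev/null | grep -o 'release [0-9.]*' | awk '{print $2}')
case "$cuda_release" in
  11.7*) index_url="https://download.pytorch.org/whl/cu117" ;;
  11.8*) index_url="https://download.pytorch.org/whl/cu118" ;;
  *)     index_url="https://download.pytorch.org/whl/cu118" ;;  # nvcc missing/other: adjust by hand
esac
echo "pip3 install torch torchvision torchaudio --index-url $index_url"
```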

@Youngprof3

Traceback (most recent call last):
  File "C:\Users\dell\text-generation-webui\modules\ui_model_menu.py", line 213, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
  File "C:\Users\dell\text-generation-webui\modules\models.py", line 87, in load_model
    output = load_func_map[loader](model_name)
  File "C:\Users\dell\text-generation-webui\modules\models.py", line 387, in ExLlamav2_HF_loader
    from modules.exllamav2_hf import Exllamav2HF
  File "C:\Users\dell\text-generation-webui\modules\exllamav2_hf.py", line 7, in <module>
    from exllamav2 import (
  File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\site-packages\exllamav2\__init__.py", line 3, in <module>
    from exllamav2.model import ExLlamaV2
  File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\site-packages\exllamav2\model.py", line 17, in <module>
    from exllamav2.cache import ExLlamaV2CacheBase
  File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\site-packages\exllamav2\cache.py", line 2, in <module>
    from exllamav2.ext import exllamav2_ext as ext_c
  File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\site-packages\exllamav2\ext.py", line 15, in <module>
    import exllamav2_ext
ImportError: DLL load failed while importing exllamav2_ext: The specified module could not be found.

@Youngprof3

How can I resolve this?

@faychicken2

How can I resolve this?

I'm having the same issue

@Nasenblutn

Re: the advice above about cloning the exllama repo into the appropriate folder:

Can you tell me where exactly I have to place the repositories folder?

@Patrizi5

Patrizi5 commented Feb 2, 2024

(textgeneration) C:\Users\Patrick>python server.py
python: can't open file 'C:\Users\Patrick\server.py': [Errno 2] No such file or directory

@Patrizi5

Patrizi5 commented Feb 2, 2024

(textgeneration) C:\Users\Patrick\text-generation-webui>python server.py
Traceback (most recent call last):
  File "C:\Users\Patrick\text-generation-webui\server.py", line 4, in <module>
    from modules import shared
  File "C:\Users\Patrick\text-generation-webui\modules\shared.py", line 10, in <module>
    from modules.logging_colors import logger
  File "C:\Users\Patrick\text-generation-webui\modules\logging_colors.py", line 67, in <module>
    setup_logging()
  File "C:\Users\Patrick\text-generation-webui\modules\logging_colors.py", line 30, in setup_logging
    from rich.console import Console
ModuleNotFoundError: No module named 'rich'

@oneil5able

22:29:20-139114 ERROR Failed to load the model.
Traceback (most recent call last):
  File "C:\Users\onego\anaconda3\envs\textgen\lib\site-packages\transformers\utils\import_utils.py", line 1364, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "C:\Users\onego\anaconda3\envs\textgen\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\onego\anaconda3\envs\textgen\lib\site-packages\transformers\models\llama\modeling_llama.py", line 55, in <module>
    from flash_attn import flash_attn_func, flash_attn_varlen_func
  File "C:\Users\onego\anaconda3\envs\textgen\lib\site-packages\flash_attn\__init__.py", line 3, in <module>
    from flash_attn.flash_attn_interface import (
  File "C:\Users\onego\anaconda3\envs\textgen\lib\site-packages\flash_attn\flash_attn_interface.py", line 8, in <module>
    import flash_attn_2_cuda as flash_attn_cuda
ImportError: DLL load failed while importing flash_attn_2_cuda: The specified procedure could not be found.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Windows\System32\text-generation-webui\modules\ui_model_menu.py", line 220, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
  File "C:\Windows\System32\text-generation-webui\modules\models.py", line 87, in load_model
    output = load_func_map[loader](model_name)
  File "C:\Windows\System32\text-generation-webui\modules\models.py", line 235, in huggingface_loader
    model = LoaderClass.from_pretrained(path_to_model, **params)
  File "C:\Users\onego\anaconda3\envs\textgen\lib\site-packages\transformers\models\auto\auto_factory.py", line 565, in from_pretrained
    model_class = _get_model_class(config, cls._model_mapping)
  File "C:\Users\onego\anaconda3\envs\textgen\lib\site-packages\transformers\models\auto\auto_factory.py", line 387, in _get_model_class
    supported_models = model_mapping[type(config)]
  File "C:\Users\onego\anaconda3\envs\textgen\lib\site-packages\transformers\models\auto\auto_factory.py", line 740, in __getitem__
    return self._load_attr_from_module(model_type, model_name)
  File "C:\Users\onego\anaconda3\envs\textgen\lib\site-packages\transformers\models\auto\auto_factory.py", line 754, in _load_attr_from_module
    return getattribute_from_module(self._modules[module_name], attr)
  File "C:\Users\onego\anaconda3\envs\textgen\lib\site-packages\transformers\models\auto\auto_factory.py", line 698, in getattribute_from_module
    if hasattr(module, attr):
  File "C:\Users\onego\anaconda3\envs\textgen\lib\site-packages\transformers\utils\import_utils.py", line 1354, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "C:\Users\onego\anaconda3\envs\textgen\lib\site-packages\transformers\utils\import_utils.py", line 1366, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback):
DLL load failed while importing flash_attn_2_cuda: The specified procedure could not be found.

@oneil5able

(textgeneration) C:\Users\Patrick>python server.py
python: can't open file 'C:\Users\Patrick\server.py': [Errno 2] No such file or directory

This was the cause of the error: conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

@khoshi990

Please help: what is this issue?

PS C:\Users\Hp\cChardet> pip install twint
Defaulting to user installation because normal site-packages is not writeable
Collecting twint
Using cached twint-2.1.20-py3-none-any.whl
Requirement already satisfied: aiohttp in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from twint) (3.9.3)
Collecting aiodns (from twint)
Using cached aiodns-3.1.1-py3-none-any.whl.metadata (4.0 kB)
Requirement already satisfied: beautifulsoup4 in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from twint) (4.12.3)
Collecting cchardet (from twint)
Using cached cchardet-2.1.7.tar.gz (653 kB)
Preparing metadata (setup.py) ... done
Collecting elasticsearch (from twint)
Using cached elasticsearch-8.12.1-py3-none-any.whl.metadata (5.3 kB)
Requirement already satisfied: pysocks in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from twint) (1.7.1)
Requirement already satisfied: pandas in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from twint) (2.2.1)
Collecting aiohttp-socks (from twint)
Using cached aiohttp_socks-0.8.4-py3-none-any.whl.metadata (3.1 kB)
Collecting schedule (from twint)
Using cached schedule-1.2.1-py2.py3-none-any.whl.metadata (3.3 kB)
Collecting geopy (from twint)
Using cached geopy-2.4.1-py3-none-any.whl.metadata (6.8 kB)
Collecting fake-useragent (from twint)
Using cached fake_useragent-1.4.0-py3-none-any.whl.metadata (13 kB)
Requirement already satisfied: googletransx in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from twint) (2.4.2)
Collecting pycares>=4.0.0 (from aiodns->twint)
Using cached pycares-4.4.0-cp312-cp312-win_amd64.whl.metadata (4.5 kB)
Requirement already satisfied: aiosignal>=1.1.2 in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from aiohttp->twint) (1.3.1)
Requirement already satisfied: attrs>=17.3.0 in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from aiohttp->twint) (23.2.0)
Requirement already satisfied: frozenlist>=1.1.1 in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from aiohttp->twint) (1.4.1)
Requirement already satisfied: multidict<7.0,>=4.5 in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from aiohttp->twint) (6.0.5)
Requirement already satisfied: yarl<2.0,>=1.0 in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from aiohttp->twint) (1.9.4)
Collecting python-socks<3.0.0,>=2.4.3 (from python-socks[asyncio]<3.0.0,>=2.4.3->aiohttp-socks->twint)
Using cached python_socks-2.4.4-py3-none-any.whl.metadata (7.1 kB)
Requirement already satisfied: soupsieve>1.2 in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from beautifulsoup4->twint) (2.5)
Collecting elastic-transport<9,>=8 (from elasticsearch->twint)
Using cached elastic_transport-8.12.0-py3-none-any.whl.metadata (3.5 kB)
Collecting geographiclib<3,>=1.52 (from geopy->twint)
Using cached geographiclib-2.0-py3-none-any.whl.metadata (1.4 kB)
Requirement already satisfied: requests in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from googletransx->twint) (2.31.0)
Requirement already satisfied: numpy<2,>=1.26.0 in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from pandas->twint) (1.26.4)
Requirement already satisfied: python-dateutil>=2.8.2 in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from pandas->twint) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from pandas->twint) (2024.1)
Requirement already satisfied: tzdata>=2022.7 in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from pandas->twint) (2024.1)
Requirement already satisfied: urllib3<3,>=1.26.2 in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from elastic-transport<9,>=8->elasticsearch->twint) (2.2.1)
Requirement already satisfied: certifi in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from elastic-transport<9,>=8->elasticsearch->twint) (2024.2.2)
Collecting cffi>=1.5.0 (from pycares>=4.0.0->aiodns->twint)
Using cached cffi-1.16.0-cp312-cp312-win_amd64.whl.metadata (1.5 kB)
Requirement already satisfied: six>=1.5 in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from python-dateutil>=2.8.2->pandas->twint) (1.16.0)
Collecting async-timeout>=3.0.1 (from python-socks[asyncio]<3.0.0,>=2.4.3->aiohttp-socks->twint)
Using cached async_timeout-4.0.3-py3-none-any.whl.metadata (4.2 kB)
Requirement already satisfied: idna>=2.0 in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from yarl<2.0,>=1.0->aiohttp->twint) (3.6)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\hp\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from requests->googletransx->twint) (3.3.2)
Collecting pycparser (from cffi>=1.5.0->pycares>=4.0.0->aiodns->twint)
Using cached pycparser-2.21-py2.py3-none-any.whl.metadata (1.1 kB)
Using cached aiodns-3.1.1-py3-none-any.whl (5.4 kB)
Using cached aiohttp_socks-0.8.4-py3-none-any.whl (9.6 kB)
Using cached elasticsearch-8.12.1-py3-none-any.whl (432 kB)
Using cached fake_useragent-1.4.0-py3-none-any.whl (15 kB)
Using cached geopy-2.4.1-py3-none-any.whl (125 kB)
Using cached schedule-1.2.1-py2.py3-none-any.whl (11 kB)
Using cached elastic_transport-8.12.0-py3-none-any.whl (59 kB)
Using cached geographiclib-2.0-py3-none-any.whl (40 kB)
Using cached pycares-4.4.0-cp312-cp312-win_amd64.whl (76 kB)
Using cached python_socks-2.4.4-py3-none-any.whl (52 kB)
Using cached async_timeout-4.0.3-py3-none-any.whl (5.7 kB)
Using cached cffi-1.16.0-cp312-cp312-win_amd64.whl (181 kB)
Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
Building wheels for collected packages: cchardet
Building wheel for cchardet (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [11 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-312
creating build\lib.win-amd64-cpython-312\cchardet
copying src\cchardet\version.py -> build\lib.win-amd64-cpython-312\cchardet
copying src\cchardet\__init__.py -> build\lib.win-amd64-cpython-312\cchardet
running build_ext
building 'cchardet._cchardet' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for cchardet
Running setup.py clean for cchardet
Failed to build cchardet
ERROR: Could not build wheels for cchardet, which is required to install pyproject.toml-based projects
PS C:\Users\Hp\cChardet>
