Running setup.py install for sentencepiece ... error
error: subprocess-exited-with-error
× Running setup.py install for sentencepiece did not run successfully.
│ exit code: 1
╰─> [15 lines of output]
running install
C:\Users\openA\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-311
creating build\lib.win-amd64-cpython-311\sentencepiece
copying src\sentencepiece/__init__.py -> build\lib.win-amd64-cpython-311\sentencepiece
copying src\sentencepiece/_version.py -> build\lib.win-amd64-cpython-311\sentencepiece
copying src\sentencepiece/sentencepiece_model_pb2.py -> build\lib.win-amd64-cpython-311\sentencepiece
copying src\sentencepiece/sentencepiece_pb2.py -> build\lib.win-amd64-cpython-311\sentencepiece
running build_ext
building 'sentencepiece._sentencepiece' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> sentencepiece
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
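A note on the failure above, assuming a recent sentencepiece release publishes a prebuilt wheel for your Python version: pip only falls back to the legacy setup.py build (which needs the MSVC toolchain) when it can't find a matching wheel, so upgrading pip and retrying often avoids the compiler entirely:
pip install --upgrade pip
pip install sentencepiece
If no wheel exists for your interpreter, installing the "Microsoft C++ Build Tools" from the URL in the error message and rerunning the command is the other way out.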
(D:\openai.wiki\ChatGLM-6B\MyENV) D:\openai.wiki\ChatGLM-6B>streamlit run web_demo2.py --server.port 6006
Welcome to Streamlit!
If you’d like to receive helpful onboarding emails, news, offers, promotions,
and the occasional swag, please enter your email address below. Otherwise,
leave this field blank.
Email:
(D:\openai.wiki\ChatGLM-6B\MyENV) D:\openai.wiki\ChatGLM-6B>python web_demo.py
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 8/8 [01:39<00:00, 12.43s/it]
Traceback (most recent call last):
File "D:\openai.wiki\ChatGLM-6B\web_demo.py", line 5, in <module>
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
File "D:\openai.wiki\ChatGLM-6B\MyENV\lib\site-packages\torch\nn\modules\module.py", line 905, in cuda
return self._apply(lambda t: t.cuda(device))
File "D:\openai.wiki\ChatGLM-6B\MyENV\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\openai.wiki\ChatGLM-6B\MyENV\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\openai.wiki\ChatGLM-6B\MyENV\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "D:\openai.wiki\ChatGLM-6B\MyENV\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "D:\openai.wiki\ChatGLM-6B\MyENV\lib\site-packages\torch\nn\modules\module.py", line 905, in <lambda>
return self._apply(lambda t: t.cuda(device))
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 11.00 GiB total capacity; 10.24 GiB already allocated; 0 bytes free; 10.25 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Solution
Modify line 7 of cli_demo.py and web_demo.py to add a quantization step that caps GPU memory use.
Original:
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
Modified:
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().quantize(8).cuda()
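If 8-bit quantization still doesn't fit, 4-bit quantization trims usage further; going by the figures in the official README, FP16 needs roughly 13 GB of VRAM, INT8 roughly 8 GB, and INT4 roughly 6 GB:
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().quantize(4).cuda()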
AttributeError: 'ChatGLMForConditionalGeneration' object has no attribute 'stream_chat'
(D:\openai.wiki\ChatGLM-6B\MyENV) D:\openai.wiki\ChatGLM-6B>python cli_demo.py
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 8/8 [00:26<00:00, 3.37s/it]
Welcome to the ChatGLM-6B model. Type your input to chat; enter clear to clear the conversation history, or stop to exit the program.
User: What is AI?
Traceback (most recent call last):
File "D:\openai.wiki\ChatGLM-6B\cli_demo.py", line 57, in <module>
main()
File "D:\openai.wiki\ChatGLM-6B\cli_demo.py", line 42, in main
for response, history in model.stream_chat(tokenizer, query, history=history):
File "D:\openai.wiki\ChatGLM-6B\MyENV\lib\site-packages\torch\nn\modules\module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'ChatGLMForConditionalGeneration' object has no attribute 'stream_chat'
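This usually means the locally cached copy of the model's custom code (modeling_chatglm.py) predates the stream_chat method; re-downloading the model files, or clearing the cached transformers_modules copy so the current code is fetched, is the likely fix. As a quick sanity check, the non-streaming chat method from the same modeling file can stand in for streaming; a minimal sketch:
response, history = model.chat(tokenizer, query, history=history)
print(response)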
How do I deploy on CPU?
@天体运行 Choose the CPU build when installing PyTorch.
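Going by the official README, CPU inference keeps the weights in float32 instead of half precision and needs roughly 32 GB of RAM; a minimal sketch of the changed line:
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).float()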
May I ask: if the server has multiple GPUs, how do I specify which card to use?
@tmp Sorry, that's a blind spot for me; I've never used more than one card, so I honestly don't know.
@tmp Setting CUDA_VISIBLE_DEVICES=1 makes only the second card visible.
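You can set it in the shell before launching (set CUDA_VISIBLE_DEVICES=1 on Windows, export CUDA_VISIBLE_DEVICES=1 on Linux), or at the top of the demo script, as long as it happens before the first CUDA call; a minimal sketch:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # hide every GPU except physical card #2
from transformers import AutoModel
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()  # lands on that card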
Why do I keep getting this error? I installed the package and it still appears.
ImportError: This modeling file requires the following packages that were not found in your environment: icetk. Run `pip install icetk`
@tmp It's a CentOS 7 system.
@tmp This error means the icetk package is missing from your environment, so the related modules can't be imported. You can try running
pip install icetk
in a terminal to install it.
If I want to install to a different directory, which paths for the 6B model files do I need to change?
@9841 Changing them directly to the directory's absolute path also works.
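For reference, a minimal sketch of loading from a local directory; the path below is only a placeholder for wherever the downloaded model files actually live:
tokenizer = AutoTokenizer.from_pretrained(r"D:\models\chatglm-6b", trust_remote_code=True)  # placeholder path
model = AutoModel.from_pretrained(r"D:\models\chatglm-6b", trust_remote_code=True).half().cuda()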
Hi, I don't see these three files on the source site: cli_demo.py, web_demo.py, web_demo2.py.
@Haochen I just went through it again; they are definitely there.
I hit the error you mentioned about not having the correct PyTorch and CUDA versions installed. My driver is NVIDIA CUDA 12.0.149, and I selected CUDA 11.8.
@emii Did you install inside a Conda environment? Installing straight into the system Python won't work.
So how do I fix the missing config.json file? 😂 I was halfway through the download from the Tsinghua mirror when I saw this post.
@SeeU Just download the model I provide and overwrite your own files.
Boss, how do I fix the missing config.json error? It appeared halfway through the Tsinghua-mirror download 😂
Traceback (most recent call last):
File "G:\GLM AI\ChatGLM-6B\web_demo.py", line 6, in <module>
model = AutoModel.from_pretrained(".\\model", trust_remote_code=True).half().quantize(4).cuda()
File "C:\Users\86183\.conda\envs\chatglmEnv\lib\site-packages\transformers\models\auto\auto_factory.py", line 466, in from_pretrained
return model_class.from_pretrained(
File "C:\Users\86183\.conda\envs\chatglmEnv\lib\site-packages\transformers\modeling_utils.py", line 2646, in from_pretrained
) = cls._load_pretrained_model(
File "C:\Users\86183\.conda\envs\chatglmEnv\lib\site-packages\transformers\modeling_utils.py", line 3019, in _load_pretrained_model
raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}")
RuntimeError: Error(s) in loading state_dict for ChatGLMForConditionalGeneration:
size mismatch for transformer.word_embeddings.weight: copying a param with shape torch.Size([150528, 4096]) from checkpoint, the shape in current model is torch.Size([130528, 4096]).
You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.
What should I do about this?
@chiya That shouldn't happen. The error shows a shape mismatch while loading the pretrained weights: the checkpoint's parameter shapes don't match the current model. You can try adding the ignore_mismatched_sizes=True argument to from_pretrained() to ignore the mismatch:
model = AutoModel.from_pretrained(".\\model", trust_remote_code=True, ignore_mismatched_sizes=True).half().quantize(4).cuda()
The model will then ignore shape mismatches when loading the pretrained weights, but this may degrade its quality.
In principle this problem shouldn't occur. I suggest going through the tutorial again and redeploying, or downloading the model provided on this site and extracting it directly into the project directory, without changing anything model-related yourself.
@chiya You can also change the parameter 130528 in config.json to 150528.
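For context, the 150528-vs-130528 mismatch matches the vocabulary change between the older icetk-based ChatGLM-6B checkpoints and the later SentencePiece-based release, so the local code/config and the weights most likely come from different revisions. Pinning a revision when downloading keeps them consistent; v1.1.0 below is only an example tag, so check the model page for the current ones:
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, revision="v1.1.0").half().cuda()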
ChatGLM-6B-INT4 is now deployed successfully, and I learned a lot along the way. Thanks for such a thorough tutorial.
@lucky You're welcome, and thanks for the support.
The error is as follows:
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
File "D:\openai.wiki\ChatGLM-6B\cli_demo.py", line 6, in <module>
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
File "C:\Users\admin\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 642, in from_pretrained
tokenizer_class = get_class_from_dynamic_module(
File "C:\Users\admin\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\dynamic_module_utils.py", line 363, in get_class_from_dynamic_module
final_module = get_cached_module_file(
File "C:\Users\admin\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\dynamic_module_utils.py", line 237, in get_cached_module_file
modules_needed = check_imports(resolved_module_file)
File "C:\Users\admin\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\dynamic_module_utils.py", line 134, in check_imports
raise ImportError(
ImportError: This modeling file requires the following packages that were not found in your environment: icetk. Run `pip install icetk`
@Q First run
pip install icetk
and try again.
When I run python cli_demo.py, I get:
Traceback (most recent call last):
File "D:\openai.wiki\ChatGLM-6B\cli_demo.py", line 4, in <module>
from transformers import AutoTokenizer, AutoModel
ModuleNotFoundError: No module named 'transformers'
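A ModuleNotFoundError for transformers itself usually just means the Conda environment wasn't activated or the dependencies were never installed into it. Activating the environment and installing the project's requirements (which, if I recall correctly, pin transformers==4.27.1) should fix it:
pip install -r requirements.txt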
Also, for CUDA, the install only succeeded once I picked 11.7.
@一步步复制 The CUDA version has to be chosen according to your own GPU model.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
File "D:\openai.wiki\ChatGLM-6B\cli_demo.py", line 6, in <module>
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
File "D:\openai.wiki\ChatGLM-6B\MyENV\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 686, in from_pretrained
tokenizer_class = get_class_from_dynamic_module(
File "D:\openai.wiki\ChatGLM-6B\MyENV\lib\site-packages\transformers\dynamic_module_utils.py", line 388, in get_class_from_dynamic_module
return get_class_in_module(class_name, final_module.replace(".py", ""))
File "D:\openai.wiki\ChatGLM-6B\MyENV\lib\site-packages\transformers\dynamic_module_utils.py", line 157, in get_class_in_module
module = importlib.import_module(module_path)
File "D:\openai.wiki\ChatGLM-6B\MyENV\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'transformers_modules.THUDM/chatglm-6b'
Help, please!
@aizy Addendum: I'm running it via the CLI.
@aizy At first glance, this is caused by a wrong transformers version.
@aizy Judging from the error message, the problem is that the corresponding module can't be found while loading the model.
It's most likely a model-path issue; check that the path is correct.
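For what it's worth, this particular error (note the slash inside the module name transformers_modules.THUDM/chatglm-6b) is a known symptom of a transformers version that mishandles the model's dynamic-code loading. Installing the version the project pins and clearing the cached module copies usually resolves it:
pip install transformers==4.27.1
Then delete the cached folder under ~/.cache/huggingface/modules/transformers_modules (on Windows, the .cache folder in your user directory) and rerun.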
File "C:\Users\zhoux\miniconda3\lib\site-packages\transformers\utils\hub.py", line 380, in cached_file
raise EnvironmentError(
OSError: D:\openai.wiki\ChatGLM-6B does not appear to have a file named config.json. Checkout 'https://huggingface.co/D:\openai.wiki\ChatGLM-6B/None' for available files.
It won't start. Please help me figure out what's wrong; I followed every step.
@落夜秦雨 Looks like a model problem: config.json can't be found.
@PhiltreX Got that sorted, but now cli_demo fails on startup with AttributeError: module 'collections' has no attribute 'Callable'. What's going on? web_demo and web_demo2 both start and run fine.
@落夜秦雨 This is usually caused by the Python version being too new: collections.Callable moved to collections.abc back in Python 3.3, and the old alias was removed entirely in Python 3.10.
What Python version are you running?
@PhiltreX My Python is 3.10.9, 64-bit.
@落夜秦雨 Oh, let me change the version.
@落夜秦雨 Right, this error of yours is a Python version problem; redeploy with a different version.
@PhiltreX The following packages will be DOWNGRADED:
pytorch 2.0.1-py3.11_cuda11.7_cudnn8_0 -> 2.0.1-py3.11_cuda11.8_cudnn8_0
torchaudio 2.0.2-py311_cu117 -> 2.0.2-py311_cu118
torchvision 0.15.2-py311_cu117 -> 0.15.2-py311_cu118
The installer picked those versions automatically.
The CUDA version was wrong; try reinstalling.
Could not find module 'nvcuda.dll' (or one of its dependencies). Try using the full path with constructor syntax.
Why does this happen?
@yiyi Could it be that CUDA isn't installed?
Can this be deployed on a Linux system?
@飞跃 Ubuntu should work in theory.
After redeploying I ran into this problem; what should I do?
(D:\openai.wiki\ChatGLM-6B\MyENV) D:\openai.wiki\ChatGLM-6B>python cli_demo.py
Traceback (most recent call last):
File "D:\openai.wiki\ChatGLM-6B\cli_demo.py", line 5, in <module>
import readline
ModuleNotFoundError: No module named 'readline'
@落夜秦雨 Install the pyreadline module by running the following on the command line: pip install pyreadline. It's a drop-in substitute for the readline module that works on Windows.
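One caveat, which also explains the collections.Callable error discussed above: the original pyreadline package is unmaintained and breaks on Python 3.10+ precisely because it still references collections.Callable. On newer interpreters the maintained fork is the safer install:
pip install pyreadline3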
@PhiltreX This Tsinghua open-source project has a lot of issues. Apart from the first deployment, where the web version worked, at least three or four redeployments since have all failed; new problems keep cropping up.
@落夜秦雨 I don't quite follow: if it worked the first time, why redeploy at all?
TypeError: Descriptors cannot be created directly. If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0. If you cannot immediately regenerate your protos, some other possible workarounds are: 1. Downgrade the protobuf package to 3.20.x or lower. 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower). More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
Boss, what do I do? protobuf can't be removed or downgraded, and installing a lower version says the package isn't found.
@落夜秦雨 This error means the protoc tool that generated your code is too old to handle your .proto files correctly. You need to upgrade protoc, or downgrade the protobuf package, to solve this.
If you can't remove or downgrade the protobuf package, try the following:
Use pip install --force-reinstall protobuf==3.20 to force-reinstall the protobuf package. This may overwrite the previously installed version and resolve the dependency problem.
If you can't upgrade or downgrade protobuf right away, set the environment variable PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python to use the pure-Python parser. Performance will suffer, but it works around the problem temporarily.
Alternatively, you can download and install the protobuf 3.20 package manually; you'll find it on the official protobuf site (https://developers.google.com/protocol-buffers) or on PyPI (https://pypi.org/project/protobuf/).
Whichever method you choose, you'll need to regenerate the code from your .proto files. Use a newer protoc to regenerate it and make sure the code works with the protobuf version you end up with.
@PhiltreX Thanks!
Boss, please put out a one-click deployment package for beginners like us. I've been at this for at least a week, and every attempt has ended in failure.
@落夜秦雨 One-click packages tend to hide all sorts of subtle problems, which is why this site has never offered one.
Finally redeployed smoothly from start to finish. It turned out to be a Python version problem: specify python=3.8 when creating the Conda virtual environment:
conda create -p E:\openai.wiki\ChatGLM-6B\MyENV python=3.8
@落夜秦雨 Sorry, I forgot to include the version pin; it's been corrected now.
@PhiltreX It only worked after I read your Anaconda|Miniconda basic-commands tutorial, which covers specifying the Python version when creating an environment. At the very end it still prompted me to pip install pyreadline and pip install icetk for those two modules. Thanks as well for writing it so thoroughly that even a complete beginner can deploy it.
@落夜秦雨 You're welcome; glad the problem is solved.
Today I tried deploying the official pre-quantized 4 GB INT4 model, and at the end it fails to start with RuntimeError: Internal: [MASK] is already defined. What's going on and how do I fix it? The INT8 model won't start either. Boss, please make a tutorial for this.
@落夜秦雨 The [MASK] error is most likely caused by the model files themselves. Had it ever run successfully before today, or has it never started?
@PhiltreX Following your tutorial with the model you provide works fine and runs successfully. It's the two officially quantized models from the official netdisk, chatglm-6b-int4 and chatglm-6b-int8, that fail with RuntimeError: Internal: [MASK] is already defined. Here's the Tsinghua cloud link: https://cloud.tsinghua.edu.cn/d/674208019e314311ab5c/?p=%2F&mode=list and the Hugging Face link: https://huggingface.co/THUDM/chatglm-6b/tree/int4
@落夜秦雨 I wanted to deploy the officially quantized models for comparison, and to be ready for possible official updates later.
Strange: my first successful deployment could even recognize and describe images (though only when run in web_demo2 browser mode), but after redeploying it can't recognize images anymore.
@落夜秦雨 It's been updated! Stop fiddling with this one and go check out the new version:
https://openai.wiki/chatglm2-6b.html
@PhiltreX Oh.