(D:\openai.wiki\ChatGLM2-6B\ENV) D:\openai.wiki\ChatGLM2-6B>python cli_demo.py
You are using a model of type chatglm to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 7/7 [01:58<00:00, 16.91s/it]
Welcome to the ChatGLM2-6B model. Type your message to chat; enter clear to clear the conversation history, or stop to exit the program.
User: What is AI?
ChatGLM: AI stands for artificial intelligence: computer programs that can learn and reason in order to perform tasks. It can mimic human thinking and behavior to handle tasks such as speech recognition, image recognition, natural language processing, prediction, and decision-making. AI is now widely applied across society, for example in self-driving cars, smart homes, medical diagnosis, and financial analysis.
ChatGLM: Here is a simple example of scraping web page data with Python 3, the requests library, and BeautifulSoup.
```python
import requests
from bs4 import BeautifulSoup

# Target page URL
url = "https://www.example.com"

# Send a GET request and capture the response
response = requests.get(url)

# Parse the HTML content with BeautifulSoup
soup = BeautifulSoup(response.text, "html.parser")

# Print the page title
print(soup.title.text)
```
In this example, we first import the requests and BeautifulSoup libraries. We then define the target page's URL, send a GET request with requests.get(), and store the response in the response variable.
Next, we parse the HTML content into a BeautifulSoup object with the BeautifulSoup constructor and store it in the soup variable.
Finally, we print the page title, using the soup.title.text attribute to get the title text.
Note that this example is for learning purposes only. In a real production environment, make sure you respect the site's robots.txt and have the site's permission, so that you do not violate its rules.
User: What should I take for an upset stomach?
ChatGLM: Gastrointestinal discomfort has many possible causes, such as a poor diet, gastrointestinal infection, gastritis, gastric ulcers, cholecystitis, or constipation. You should identify the cause first and treat it accordingly.
If the discomfort comes from a poor diet, first stop consuming irritating foods and drinks such as spicy food, coffee, alcohol, and greasy food, then try easily digestible foods such as rice porridge, noodles, crackers, or protein powder.
If it is caused by a gastrointestinal infection, medication should be chosen according to the cause and symptoms, such as antibiotics, antacids, or digestive enzymes.
If it is caused by gastritis, a gastric ulcer, cholecystitis, or a similar condition, follow your doctor's treatment plan and pay attention to your diet and daily routine.
In short, identify the cause first and treat accordingly. If you are unsure what medication to take, consult a doctor or pharmacist first.
(D:\openai.wiki\ChatGLM2-6B\ENV) D:\openai.wiki\ChatGLM2-6B>streamlit run web_demo2.py --server.port 7860
Welcome to Streamlit!
If you’d like to receive helpful onboarding emails, news, offers, promotions,
and the occasional swag, please enter your email address below. Otherwise,
leave this field blank.
Email:
(D:\openai.wiki\ChatGLM2-6B\ENV) D:\openai.wiki\ChatGLM2-6B>python cli_demo.py
Traceback (most recent call last):
File "D:\openai.wiki\ChatGLM2-6B\cli_demo.py", line 5, in <module>
import readline
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\readline.py", line 34, in <module>
rl = Readline()
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\pyreadline\rlmain.py", line 422, in __init__
BaseReadline.__init__(self)
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\pyreadline\rlmain.py", line 62, in __init__
mode.init_editing_mode(None)
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\pyreadline\modes\emacs.py", line 633, in init_editing_mode
self._bind_key('space', self.self_insert)
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\pyreadline\modes\basemode.py", line 162, in _bind_key
if not callable(func):
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\pyreadline\py3k_compat.py", line 8, in callable
return isinstance(x, collections.Callable)
AttributeError: module 'collections' has no attribute 'Callable'
The fix is as follows:
Your Python version is probably 3.10 or newer. Python 3.10 moved abstract base classes such as Callable from the collections module into the collections.abc submodule, so pyreadline's isinstance(x, collections.Callable) check fails with "module 'collections' has no attribute 'Callable'".
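A minimal workaround sketch, on the assumption that the unmaintained pyreadline package is what breaks here: restore the old alias before anything imports readline. (Installing the maintained fork with pip install pyreadline3 is the cleaner alternative.)

```python
# Compatibility shim sketch: place this at the very top of cli_demo.py,
# before `import readline` runs. Python 3.10 moved Callable into
# collections.abc; this restores the alias that pyreadline still expects.
import collections
import collections.abc

if not hasattr(collections, "Callable"):
    collections.Callable = collections.abc.Callable
```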
Traceback (most recent call last):
File "D:\openai.wiki\ChatGLM2-6B\web_demo.py", line 1, in <module>
from transformers import AutoModel, AutoTokenizer
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\__init__.py", line 26, in <module>
from . import dependency_versions_check
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\dependency_versions_check.py", line 17, in <module>
from .utils.versions import require_version, require_version_core
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\utils\__init__.py", line 30, in <module>
from .generic import (
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\utils\generic.py", line 29, in <module>
from .import_utils import is_flax_available, is_tf_available, is_torch_available, is_torch_fx_proxy
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\utils\import_utils.py", line 33, in <module>
from . import logging
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\utils\logging.py", line 35, in <module>
import huggingface_hub.utils as hf_hub_utils
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\huggingface_hub\utils\__init__.py", line 32, in <module>
from ._errors import (
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\huggingface_hub\utils\_errors.py", line 3, in <module>
from requests import HTTPError, Response
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\requests\__init__.py", line 45, in <module>
from .exceptions import RequestsDependencyWarning
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\requests\exceptions.py", line 9, in <module>
from .compat import JSONDecodeError as CompatJSONDecodeError
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\requests\compat.py", line 13, in <module>
import charset_normalizer as chardet
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\charset_normalizer\__init__.py", line 23, in <module>
from charset_normalizer.api import from_fp, from_path, from_bytes, normalize
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\charset_normalizer\api.py", line 10, in <module>
from charset_normalizer.md import mess_ratio
File "charset_normalizer\md.py", line 5, in <module>
ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant' (D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\charset_normalizer\constant.py)
(D:\openai.wiki\ChatGLM2-6B\ENV) D:\openai.wiki\ChatGLM2-6B>python web_demo.py
Traceback (most recent call last):
File "D:\openai.wiki\ChatGLM2-6B\web_demo.py", line 5, in <module>
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 676, in from_pretrained
tokenizer_class = get_class_from_dynamic_module(class_ref, pretrained_model_name_or_path, **kwargs)
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\dynamic_module_utils.py", line 443, in get_class_from_dynamic_module
return get_class_in_module(class_name, final_module.replace(".py", ""))
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\dynamic_module_utils.py", line 164, in get_class_in_module
module = importlib.import_module(module_path)
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'transformers_modules.THUDM/chatglm2-6b'
I followed the steps and this is what I get:
(D:\openai.wiki\ChatGLM2-6B-main\ENV) D:\openai.wiki\ChatGLM2-6B-main>python cli_demo.py
Traceback (most recent call last):
File "cli_demo.py", line 4, in <module>
from transformers import AutoTokenizer, AutoModel
ModuleNotFoundError: No module named 'transformers'
What is going on here? Please help, thanks.
@小陈 Your transformers library wasn't found, which shouldn't happen; normally it gets installed automatically.
Did you run this step?
pip install -r requirements.txt
@PhiltreX Thanks for the quick reply. I did run pip install -r requirements.txt, but it errored again; it couldn't find a required version:
(D:\openai.wiki\ChatGLM2-6B-main\ENV) D:\openai.wiki\ChatGLM2-6B-main>pip install -r requirements.txt
Requirement already satisfied: protobuf in d:\openai.wiki\chatglm2-6b-main\env\lib\site-packages (from -r requirements.txt (line 1)) (4.23.3)
Collecting transformers==4.30.2 (from -r requirements.txt (line 2))
Using cached transformers-4.30.2-py3-none-any.whl (7.2 MB)
Collecting cpm_kernels (from -r requirements.txt (line 3))
Using cached cpm_kernels-1.0.11-py3-none-any.whl (416 kB)
Requirement already satisfied: torch>=2.0 in d:\openai.wiki\chatglm2-6b-main\env\lib\site-packages (from -r requirements.txt (line 4)) (2.0.1+cu118)
ERROR: Could not find a version that satisfies the requirement gradio (from versions: none)
ERROR: No matching distribution found for gradio
Is it a CUDA version problem? The CUDA query shows I could install 12.2, but I installed 11.8, the same as in the guide.
@小陈 Your problem is a bit odd; it no longer looks like a transformers issue. Now it simply can't find any version of gradio. Try installing it manually:
pip install gradio==2.3.6
@PhiltreX How do I solve NameError: name 'round_up' is not defined?
@盆盆 That is an error that shouldn't normally occur. Please paste the complete error output; I can't tell anything from this alone.
@小陈 Bro, did you ever solve this problem? Could you share how? Thanks.
How do I train the model myself? Does anyone have beginner-level materials or links?
@小李 The site owner doesn't have the hardware for fine-tuning; it's only a 2080 Ti, alas.
How can I run this without a GPU?
@里 CPU inference needs 32 GB of RAM. Change the model-loading line to:
model = AutoModel.from_pretrained("THUDM\chatglm2-6b", trust_remote_code=True).float()
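For context, a minimal CPU-only sketch of the loading code under that change; the model path and the chat() call follow the pattern used elsewhere in this guide, and generation will be far slower than on a GPU.

```python
# CPU-only loading sketch: .float() keeps the weights in fp32 on the CPU
# instead of moving them to a GPU with .cuda(). Expect roughly 32 GB of
# RAM to be in use once the model is loaded.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("THUDM\chatglm2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM\chatglm2-6b", trust_remote_code=True).float()
model.eval()

response, history = model.chat(tokenizer, "你好", history=[])
print(response)
```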
CUDA is unavailable; how do I fix this on a GPU that isn't NVIDIA?
Deployment succeeded on the first try; it really is much faster than the previous version.
@落夜秦雨 Yes, it is the second generation after all; each generation beats the last.
Why do the replies feel so slow? About one character per second.
@five That depends on your hardware; the better the graphics card, the faster it runs.
Error: NameError: name 'round_up' is not defined. Details below; how should I fix it?
Failed to load cpm_kernels: [WinError 267] The directory name is invalid.: 'C:\\Windows\\System32\\cmd.exe'
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ D:\gpt\ChatGLM2-6B-main\.\api.py:54 in <module> │
│ │
│ 51 │
│ 52 if __name__ == '__main__': │
│ 53 │ tokenizer = AutoTokenizer.from_pretrained("THUDM\chatglm2-6b", trust_remote_code=Tru │
│ ❱ 54 │ model = AutoModel.from_pretrained("THUDM\chatglm2-6b", trust_remote_code=True).half( │
│ 55 │ model.eval() │
│ 56 │ uvicorn.run(app, host='0.0.0.0', port=8000, workers=1) │
│ 57 │
│ │
│ C:\Users/.cache\huggingface\modules\transformers_modules\chatglm2-6b\modeling_chatglm.py:1 │
│ 109 in quantize │
│ │
│ 1106 │ │ │
│ 1107 │ │ self.config.quantization_bit = bits │
│ 1108 │ │ │
│ ❱ 1109 │ │ self.transformer.encoder = quantize(self.transformer.encoder, bits, empty_init=e │
│ 1110 │ │ │ │ │ │ │ │ │ │ │ **kwargs) │
│ 1111 │ │ return self │
│ 1112 │
│ │
│ C:\Users/.cache\huggingface\modules\transformers_modules\chatglm2-6b\quantization.py:155 │
│ in quantize │
│ │
│ 152 def quantize(model, weight_bit_width, empty_init=False, device=None): │
│ 153 │ """Replace fp16 linear with quantized linear""" │
│ 154 │ for layer in model.layers: │
│ ❱ 155 │ │ layer.self_attention.query_key_value = QuantizedLinear( │
│ 156 │ │ │ weight_bit_width=weight_bit_width, │
│ 157 │ │ │ weight=layer.self_attention.query_key_value.weight.to(torch.cuda.current_dev │
│ 158 │ │ │ bias=layer.self_attention.query_key_value.bias, │
│ │
│ C:\Users/.cache\huggingface\modules\transformers_modules\chatglm2-6b\quantization.py:139 │
│ in __init__ │
│ │
│ 136 │ │ │ self.weight_scale = weight.abs().max(dim=-1).values / ((2 ** (weight_bit_wid │
│ 137 │ │ │ self.weight = torch.round(weight / self.weight_scale[:, None]).to(torch.int8 │
│ 138 │ │ │ if weight_bit_width == 4: │
│ ❱ 139 │ │ │ │ self.weight = compress_int4_weight(self.weight) │
│ 140 │ │ │
│ 141 │ │ self.weight = Parameter(self.weight.to(device), requires_grad=False) │
│ 142 │ │ self.weight_scale = Parameter(self.weight_scale.to(device), requires_grad=False) │
│ │
│ C:\Users/.cache\huggingface\modules\transformers_modules\chatglm2-6b\quantization.py:76 in │
│ compress_int4_weight │
│ │
│ 73 │ │ stream = torch.cuda.current_stream() │
│ 74 │ │ │
│ 75 │ │ gridDim = (n, 1, 1) │
│ ❱ 76 │ │ blockDim = (min(round_up(m, 32), 1024), 1, 1) │
│ 77 │ │ │
│ 78 │ │ kernels.int4WeightCompression( │
│ 79 │ │ │ gridDim, │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
NameError: name 'round_up' is not defined
@盆盆 I notice the "directory name is invalid" line above. The NameError: name 'round_up' is not defined means the code is calling a function, round_up, that was never defined.
Generally speaking this error should never occur, so I suggest redeploying from scratch. I haven't run into it myself, so I honestly don't know.
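One observation from the log above, offered as an assumption rather than a confirmed fix: round_up is imported from the cpm_kernels package inside a try/except in the model's quantization.py, so when the "Failed to load cpm_kernels" line appears, the int4 code path later calls a name that was never defined. A quick check sketch:

```python
# Diagnostic sketch: verify whether cpm_kernels can load in this environment.
# If it cannot, avoid the int4 model / .quantize(4) path (load in fp16 with
# .half().cuda() instead) until cpm_kernels is fixed or reinstalled.
try:
    from cpm_kernels.kernels.base import round_up  # noqa: F401
    print("cpm_kernels loaded OK")
except Exception as exc:
    print("cpm_kernels failed to load:", exc)
```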
How do I solve NameError: name 'round_up' is not defined? The relevant excerpt:
73 │ │ stream = torch.cuda.current_stream() │
│ 74 │ │ │
│ 75 │ │ gridDim = (n, 1, 1) │
│ ❱ 76 │ │ blockDim = (min(round_up(m, 32), 1024), 1, 1) │
│ 77 │ │ │
│ 78 │ │ kernels.int4WeightCompression( │
│ 79 │ │ │ gridDim,
When I run pip install -r requirements.txt to install the project dependencies, it always fails while downloading torch, with a long wall of error output:
ERROR: Exception:
Traceback (most recent call last):
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_vendor\urllib3\response.py", line 438, in _error_catcher
yield
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_vendor\urllib3\response.py", line 561, in read
data = self._fp_read(amt) if not fp_closed else b""
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_vendor\urllib3\response.py", line 527, in _fp_read
return self._fp.read(amt) if amt is not None else self._fp.read()
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_vendor\cachecontrol\filewrapper.py", line 90, in read
data = self.__fp.read(amt)
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\http\client.py", line 466, in read
s = self.fp.read(amt)
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\socket.py", line 705, in readinto
return self._sock.recv_into(b)
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\ssl.py", line 1274, in recv_into
return self.read(nbytes, buffer)
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\ssl.py", line 1130, in read
return self._sslobj.read(len, buffer)
TimeoutError: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\cli\base_command.py", line 169, in exc_logging_wrapper
status = run_func(*args)
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\cli\req_command.py", line 248, in wrapper
return func(self, options, args)
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\commands\install.py", line 377, in run
requirement_set = resolver.resolve(
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 92, in resolve
result = self._result = resolver.resolve(
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 546, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 397, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 173, in _add_to_criteria
if not criterion.candidates:
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_vendor\resolvelib\structs.py", line 156, in __bool__
return bool(self._sequence)
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 155, in __bool__
return any(self)
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 143, in <genexpr>
return (c for c in iterator if id(c) not in self._incompatible_ids)
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 47, in _iter_built
candidate = func()
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\resolution\resolvelib\factory.py", line 206, in _make_candidate_from_link
self._link_candidate_cache[link] = LinkCandidate(
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 293, in __init__
super().__init__(
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 156, in __init__
self.dist = self._prepare()
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 225, in _prepare
dist = self._prepare_distribution()
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 304, in _prepare_distribution
return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\operations\prepare.py", line 516, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\operations\prepare.py", line 587, in _prepare_linked_requirement
local_file = unpack_url(
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\operations\prepare.py", line 166, in unpack_url
file = get_http_url(
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\operations\prepare.py", line 107, in get_http_url
from_path, content_type = download(link, temp_dir.path)
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\network\download.py", line 147, in __call__
for chunk in chunks:
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\cli\progress_bars.py", line 53, in _rich_progress_bar
for chunk in iterable:
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_internal\network\utils.py", line 63, in response_chunks
for chunk in response.raw.stream(
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_vendor\urllib3\response.py", line 622, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_vendor\urllib3\response.py", line 560, in read
with self._error_catcher():
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "D:\chatGLM\ChatGLM2-6B\ENV\lib\site-packages\pip\_vendor\urllib3\response.py", line 443, in _error_catcher
raise ReadTimeoutError(self._pool, None, "Read timed out.")
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.
I have no idea what's going on; could you help me? Thanks.
@夏天的一片叶子 HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.
This kind of error is almost always a network problem. Do you have a proxy/VPN turned on?
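If a proxy isn't an option, two workarounds worth trying (my suggestion, not from the original thread) are raising pip's timeout, e.g. pip install -r requirements.txt --timeout 120, or pointing pip at a domestic mirror, e.g. pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple.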
Where can I find the int4 quantized model for CPU use?
Traceback (most recent call last):
File "E:\pythonStydy\code\chain_glm\web_demo.py", line 6, in <module>
tokenizer = AutoTokenizer.from_pretrained("./glm", trust_remote_code=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\pythonStydy\home\chain_glm\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 676, in from_pretrained
tokenizer_class = get_class_from_dynamic_module(class_ref, pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\pythonStydy\home\chain_glm\Lib\site-packages\transformers\dynamic_module_utils.py", line 443, in get_class_from_dynamic_module
return get_class_in_module(class_name, final_module.replace(".py", ""))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\pythonStydy\home\chain_glm\Lib\site-packages\transformers\dynamic_module_utils.py", line 164, in get_class_in_module
module = importlib.import_module(module_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\shhx\AppData\Local\Programs\Python\Python311\Lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1128, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1128, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1142, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'transformers_modules.'
I already ran pip install -r requirements.txt, and my editor resolves the library (I can even browse its source). Why does it still say the module can't be found? Can anyone help?
[{
"resource": "/d:/openai.wiki/ChatGLM2-6B/web_demo.py",
"owner": "_generated_diagnostic_collection_name_#0",
"code": {
"value": "reportMissingImports",
"target": {
"$mid": 1,
"path": "/microsoft/pyright/blob/main/docs/configuration.md",
"scheme": "https",
"authority": "github.com",
"fragment": "reportMissingImports"
}
},
"severity": 4,
"message": "Import \"gradio\" could not be resolved",
"source": "Pylance",
"startLineNumber": 2,
"startColumn": 8,
"endLineNumber": 2,
"endColumn": 14
}]
[{
"resource": "/d:/openai.wiki/ChatGLM2-6B/web_demo.py",
"owner": "_generated_diagnostic_collection_name_#0",
"code": {
"value": "reportMissingImports",
"target": {
"$mid": 1,
"path": "/microsoft/pyright/blob/main/docs/configuration.md",
"scheme": "https",
"authority": "github.com",
"fragment": "reportMissingImports"
}
},
"severity": 4,
"message": "Import \"mdtex2html\" could not be resolved",
"source": "Pylance",
"startLineNumber": 3,
"startColumn": 8,
"endLineNumber": 3,
"endColumn": 18
}]
I get the following error:
@最喜欢睡觉 (D:\openai.wiki\ChatGLM2-6B\ENV) D:\openai.wiki\ChatGLM2-6B>python cli_demo.py
You are using a model of type chatglm to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
Traceback (most recent call last):
File "cli_demo.py", line 8, in <module>
model = AutoModel.from_pretrained("THUDM\chatglm2-6b", trust_remote_code=True).cuda()
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\models\auto\auto_factory.py", line 475, in from_pretrained
model_class = get_class_from_dynamic_module(
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\dynamic_module_utils.py", line 443, in get_class_from_dynamic_module
return get_class_in_module(class_name, final_module.replace(".py", ""))
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\dynamic_module_utils.py", line 164, in get_class_in_module
module = importlib.import_module(module_path)
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\Users\胡睡觉/.cache\huggingface\modules\transformers_modules\chatglm2-6b\modeling_chatglm.py", line 21, in <module>
from transformers.modeling_utils import PreTrainedModel
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\modeling_utils.py", line 85, in <module>
from accelerate import __version__ as accelerate_version
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\accelerate\__init__.py", line 3, in <module>
from .accelerator import Accelerator
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\accelerate\accelerator.py", line 40, in <module>
from .tracking import LOGGER_TYPE_TO_CLASS, GeneralTracker, filter_trackers
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\accelerate\tracking.py", line 42, in <module>
from torch.utils import tensorboard
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\torch\utils\tensorboard\__init__.py", line 12, in <module>
from .writer import FileWriter, SummaryWriter # noqa: F401
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\torch\utils\tensorboard\writer.py", line 9, in <module>
from tensorboard.compat.proto.event_pb2 import SessionLog
File "C:\Users\胡睡觉\AppData\Roaming\Python\Python38\site-packages\tensorboard\compat\proto\event_pb2.py", line 17, in <module>
from tensorboard.compat.proto import summary_pb2 as tensorboard_dot_compat_dot_proto_dot_summary__pb2
File "C:\Users\胡睡觉\AppData\Roaming\Python\Python38\site-packages\tensorboard\compat\proto\summary_pb2.py", line 17, in <module>
from tensorboard.compat.proto import tensor_pb2 as tensorboard_dot_compat_dot_proto_dot_tensor__pb2
File "C:\Users\胡睡觉\AppData\Roaming\Python\Python38\site-packages\tensorboard\compat\proto\tensor_pb2.py", line 16, in <module>
from tensorboard.compat.proto import resource_handle_pb2 as tensorboard_dot_compat_dot_proto_dot_resource__handle__pb2
File "C:\Users\胡睡觉\AppData\Roaming\Python\Python38\site-packages\tensorboard\compat\proto\resource_handle_pb2.py", line 16, in <module>
from tensorboard.compat.proto import tensor_shape_pb2 as tensorboard_dot_compat_dot_proto_dot_tensor__shape__pb2
File "C:\Users\胡睡觉\AppData\Roaming\Python\Python38\site-packages\tensorboard\compat\proto\tensor_shape_pb2.py", line 36, in <module>
_descriptor.FieldDescriptor(
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\google\protobuf\descriptor.py", line 561, in __new__
_message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
Everything was done step by step and this is what happens; admin, could you take a look?
@最喜欢睡觉 It's probably an incorrect protobuf version. Redeploy, and be sure to delete the version numbers that the guide says to remove from the requirements file.
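Alternatively, the error message itself lists two workarounds: downgrading protobuf (for example pip install protobuf==3.20.3) or switching to the pure-Python protobuf implementation. A minimal sketch of the second option, assuming you can edit the demo script:

```python
# Sketch of workaround 2 from the protobuf error: select the pure-Python
# protobuf implementation before transformers/tensorboard get imported.
# Parsing is slower, but it bypasses the generated-descriptor version check.
import os
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

from transformers import AutoModel, AutoTokenizer  # import only after setting the variable
```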
@最喜欢睡觉 Those Pylance messages are caused by the missing gradio and mdtex2html modules; install them to clear the errors.
You can install both with pip:
pip install gradio
pip install mdtex2html
"'model' is not recognized as an internal or external command": how do I fix that? My system environment variables are all fine.
@flydog That is probably a model path configuration problem; please paste more of the error output.
@PhiltreX ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ D:\openai.wiki\ChatGLM2-6B\web_demo.py:7 in <module> │
│ │
│ 4 from utils import load_model_on_gpus │
│ 5 │
│ 6 tokenizer = AutoTokenizer.from_pretrained("THUDM\chatglm2-6b", trust_remote_code=True) │
│ ❱ 7 model = AutoModel.from_pretrained("THUDM\chatglm2-6b", trust_remote_code=True).cuda() │
│ 8 # 多显卡支持,使用下面两行代替上面一行,将num_gpus改为你实际的显卡数量 │
│ 9 # from utils import load_model_on_gpus │
│ 10 # model = load_model_on_gpus("THUDM/chatglm2-6b", num_gpus=2) │
│ │
│ D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\torch\nn\modules\module.py:905 in cuda │
│ │
│ 902 │ │ Returns: │
│ 903 │ │ │ Module: self │
│ 904 │ │ """ │
│ ❱ 905 │ │ return self._apply(lambda t: t.cuda(device)) │
│ 906 │ │
│ 907 │ def ipu(self: T, device: Optional[Union[int, device]] = None) -> T: │
│ 908 │ │ r"""Moves all model parameters and buffers to the IPU. │
│ │
│ D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\torch\nn\modules\module.py:797 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\torch\nn\modules\module.py:797 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\torch\nn\modules\module.py:797 in _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\torch\nn\modules\module.py:820 in _apply │
│ │
│ 817 │ │ │ # track autograd history of `param_applied`, so we have to use │
│ 818 │ │ │ # `with torch.no_grad():` │
│ 819 │ │ │ with torch.no_grad(): │
│ ❱ 820 │ │ │ │ param_applied = fn(param) │
│ 821 │ │ │ should_use_set_data = compute_should_use_set_data(param, param_applied) │
│ 822 │ │ │ if should_use_set_data: │
│ 823 │ │ │ │ param.data = param_applied │
│ │
│ D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\torch\nn\modules\module.py:905 in <lambda> │
│ │
│ 902 │ │ Returns: │
│ 903 │ │ │ Module: self │
│ 904 │ │ """ │
│ ❱ 905 │ │ return self._apply(lambda t: t.cuda(device)) │
│ 906 │ │
│ 907 │ def ipu(self: T, device: Optional[Union[int, device]] = None) -> T: │
│ 908 │ │ r"""Moves all model parameters and buffers to the IPU. │
│ │
│ D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\torch\cuda\__init__.py:239 in _lazy_init │
│ │
│ 236 │ │ │ │ "Cannot re-initialize CUDA in forked subprocess. To use CUDA with " │
│ 237 │ │ │ │ "multiprocessing, you must use the 'spawn' start method") │
│ 238 │ │ if not hasattr(torch._C, '_cuda_getDeviceCount'): │
│ ❱ 239 │ │ │ raise AssertionError("Torch not compiled with CUDA enabled") │
│ 240 │ │ if _cudart is None: │
│ 241 │ │ │ raise AssertionError( │
│ 242 │ │ │ │ "libcudart functions unavailable. It looks like you have a broken build? │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AssertionError: Torch not compiled with CUDA enabled
What should I do when I run into the following?
(D:\openai.wiki\ChatGLM2-6B\ENV) D:\openai.wiki\ChatGLM2-6B>python web_demo.py
Traceback (most recent call last):
File "web_demo.py", line 5, in <module>
tokenizer = AutoTokenizer.from_pretrained("THUDM\chatglm2-6b", trust_remote_code=True)
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 676, in from_pretrained
tokenizer_class = get_class_from_dynamic_module(class_ref, pretrained_model_name_or_path, **kwargs)
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\dynamic_module_utils.py", line 431, in get_class_from_dynamic_module
final_module = get_cached_module_file(
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\dynamic_module_utils.py", line 268, in get_cached_module_file
modules_needed = check_imports(resolved_module_file)
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\dynamic_module_utils.py", line 146, in check_imports
importlib.import_module(imp)
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\torch\__init__.py", line 122, in <module>
raise err
OSError: [WinError 127] The specified procedure could not be found. Error loading "D:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\torch\lib\caffe2_detectron_ops_gpu.dll" or one of its dependencies.
@墨俊 Your torch install is wrong. Reinstall from the top following the guide, and don't pick a custom torch version.
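A quick sanity check after reinstalling (my sketch, not part of the original guide): run this inside the ENV to confirm a CUDA build of torch is present.

```python
# A CUDA-enabled wheel reports a version like "2.0.1+cu118"; a CPU-only
# wheel reports "2.0.1+cpu" and torch.cuda.is_available() returns False,
# which is exactly when "Torch not compiled with CUDA enabled" appears.
import torch

print(torch.__version__)
print(torch.cuda.is_available())
```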
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 214.00 MiB (GPU 0; 4.00 GiB total capacity; 3.23 GiB already allocated; 68.43 MiB free; 3.23 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
What is this about?
@阿斯蒂芬 Load it with this instead: https://openai.wiki/text-generation-webui.html
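Besides switching tools, one hedged option for small cards is loading the model int4-quantized via the .quantize() method that ChatGLM2's own modeling code exposes (visible in the quantization.py excerpts above); whether it fits in 4 GB is not guaranteed:

```python
# Low-VRAM loading sketch: quantize the weights to int4 before moving the
# model to the GPU. Quantization itself runs on the CPU and needs free
# system RAM; VRAM usage drops substantially compared with fp16.
from transformers import AutoModel

model = AutoModel.from_pretrained("THUDM\chatglm2-6b", trust_remote_code=True).quantize(4).cuda()
```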
Admin, ChatGLM2-6B loads fine in TEXTGEN, but every answer tacks on a pile of extra talk beyond the question asked, as if it's talking to itself. Did I set something up wrong?
@NSG You could try switching to Chat mode, and then under Chat settings, in Instruction template, set the preset to the ChatGLM template.
Admin, I got the following error while running web_demo.py:
Traceback (most recent call last):
File "web_demo.py", line 6, in <module>
tokenizer = AutoTokenizer.from_pretrained("THUDM\chatglm2-6b", trust_remote_code=True)
File "E:\ChatGLM\ChatGLM2-6B\ENV\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 643, in from_pretrained
tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
File "E:\ChatGLM\ChatGLM2-6B\ENV\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 487, in get_tokenizer_config
resolved_config_file = cached_file(
File "E:\ChatGLM\ChatGLM2-6B\ENV\lib\site-packages\transformers\utils\hub.py", line 417, in cached_file
resolved_file = hf_hub_download(
File "E:\ChatGLM\ChatGLM2-6B\ENV\lib\site-packages\huggingface_hub\utils\_validators.py", line 110, in _inner_fn
validate_repo_id(arg_value)
File "E:\ChatGLM\ChatGLM2-6B\ENV\lib\site-packages\huggingface_hub\utils\_validators.py", line 164, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'THUDM\chatglm2-6b'.
How should I fix this?
@㊗ Check the model name/path you typed.
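A hedged explanation of this one: when no matching local folder is found, transformers falls back to treating 'THUDM\chatglm2-6b' as a Hugging Face Hub repo id, and backslashes are not allowed in repo ids. Two sketches that avoid the validation error:

```python
from transformers import AutoTokenizer

# Either reference the Hub repo id with a forward slash...
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)

# ...or make sure the local model folder really exists at the relative path
# (run the script from the ChatGLM2-6B project root); a genuine local
# directory is loaded from disk and never validated as a repo id.
tokenizer = AutoTokenizer.from_pretrained("./THUDM/chatglm2-6b", trust_remote_code=True)
```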
Hello admin, I want to try a multi-GPU deployment; my cli_demo.py is as follows:
import os
import platform
from transformers import AutoTokenizer, AutoModel
from chatglm_parallel import load_model_on_gpus
model = load_model_on_gpus("THUDM\chatglm2-6b", num_gpus=2)
tokenizer = AutoTokenizer.from_pretrained("THUDM\chatglm2-6b", trust_remote_code=True)
os_name = platform.system()
clear_command = 'cls' if os_name == 'Windows' else 'clear'
The error message:
ValueError: transformer.embedding.word_embeddings.weight doesn't have any device set.
Any solution would be much appreciated, thanks.
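A hedged guess at the cause: chatglm_parallel builds its device map against first-generation ChatGLM module names, while ChatGLM2 renamed them, which would leave transformer.embedding.word_embeddings.weight with no device assigned. The ChatGLM2-6B repository ships its own helper in utils.py, the same one referenced in the web_demo.py comments quoted earlier on this page; a sketch using it instead:

```python
# Multi-GPU loading sketch with the helper bundled in the ChatGLM2-6B
# project root (utils.py), instead of the older chatglm_parallel module.
from transformers import AutoTokenizer
from utils import load_model_on_gpus

tokenizer = AutoTokenizer.from_pretrained("THUDM\chatglm2-6b", trust_remote_code=True)
model = load_model_on_gpus("THUDM\chatglm2-6b", num_gpus=2)  # set num_gpus to your actual card count
```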
Hey experts, after running python web_demo.py it fails like this; what should I do:
(D:\ChatGLM\ChatGLM2-6B\ENV) D:\ChatGLM\ChatGLM2-6B>python web_demo.py
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 7/7 [00:13<00:00, 1.99s/it]
Traceback (most recent call last):
File "web_demo.py", line 6, in <module>
model = AutoModel.from_pretrained("THUDM\chatglm2-6b", trust_remote_code=True).cuda()
File "D:\ChatGLM\ChatGLM2-6B\ENV\lib\site-packages\torch\nn\modules\module.py", line 918, in cuda
return self._apply(lambda t: t.cuda(device))
File "D:\ChatGLM\ChatGLM2-6B\ENV\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
module._apply(fn)
File "D:\ChatGLM\ChatGLM2-6B\ENV\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
module._apply(fn)
File "D:\ChatGLM\ChatGLM2-6B\ENV\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
module._apply(fn)
File "D:\ChatGLM\ChatGLM2-6B\ENV\lib\site-packages\torch\nn\modules\module.py", line 833, in _apply
param_applied = fn(param)
File "D:\ChatGLM\ChatGLM2-6B\ENV\lib\site-packages\torch\nn\modules\module.py", line 918, in <lambda>
return self._apply(lambda t: t.cuda(device))
File "D:\ChatGLM\ChatGLM2-6B\ENV\lib\site-packages\torch\cuda\__init__.py", line 289, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
@MurasakiAbysm Same here. I thought it was a PyTorch problem and reinstalled it, but got the same result.
@MurasakiAbysm Same here too; did you ever solve it?
After python web_demo.py opens the page, why is no answer shown when I ask a question?
ChatGLM3-6B is out now; please update the guide.
Hello, could you explain how to fine-tune this model with LoRA?
Traceback (most recent call last):
File "D:\openai.wiki\ChatGLM2-6B\cli_demo.py", line 9, in <module>
model = AutoModel.from_pretrained("THUDM\chatglm2-6b", trust_remote_code=True).cuda()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\openai.wiki\ChatGLM2-6B\ENV\Lib\site-packages\torch\nn\modules\module.py", line 918, in cuda
return self._apply(lambda t: t.cuda(device))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\openai.wiki\ChatGLM2-6B\ENV\Lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
module._apply(fn)
File "D:\openai.wiki\ChatGLM2-6B\ENV\Lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
module._apply(fn)
File "D:\openai.wiki\ChatGLM2-6B\ENV\Lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
module._apply(fn)
File "D:\openai.wiki\ChatGLM2-6B\ENV\Lib\site-packages\torch\nn\modules\module.py", line 833, in _apply
param_applied = fn(param)
^^^^^^^^^
File "D:\openai.wiki\ChatGLM2-6B\ENV\Lib\site-packages\torch\nn\modules\module.py", line 918, in <lambda>
return self._apply(lambda t: t.cuda(device))
^^^^^^^^^^^^^^
File "D:\openai.wiki\ChatGLM2-6B\ENV\Lib\site-packages\torch\cuda\__init__.py", line 289, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Hello, could you tell me what's happening here and how to fix it? Many thanks.
(anyio==3.7.1)
After running python api.py, I get this error:
Traceback (most recent call last):
File "C:\Softwares\ChatGLM2-6B\api.py", line 1, in <module>
from fastapi import FastAPI, Request
File "E:\Python311\Lib\site-packages\fastapi\__init__.py", line 7, in <module>
from .applications import FastAPI as FastAPI
File "E:\Python311\Lib\site-packages\fastapi\applications.py", line 16, in <module>
from fastapi import routing
File "E:\Python311\Lib\site-packages\fastapi\routing.py", line 33, in <module>
from fastapi.dependencies.utils import (
File "E:\Python311\Lib\site-packages\fastapi\dependencies\utils.py", line 48, in <module>
from fastapi.concurrency import (
File "E:\Python311\Lib\site-packages\fastapi\concurrency.py", line 6, in <module>
from anyio import CapacityLimiter
ImportError: cannot import name 'CapacityLimiter' from 'anyio' (unknown location)
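A hedged reading: "(unknown location)" usually means the anyio package in that environment is broken or half-installed rather than simply the wrong version (CapacityLimiter exists throughout anyio 3.x, which matches the anyio==3.7.1 noted above). Force-reinstalling it inside the environment, e.g. pip install --force-reinstall anyio==3.7.1, would be a reasonable first attempt; this suggestion is mine, not from the original thread.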
(D:\openai.wiki\ChatGLM2-6B\ENV) D:\openai.wiki\ChatGLM2-6B>python web_demo.py
You are using a model of type chatglm to instantiate a model of type . This is not supported for all configurations of models and can yield errors. Could you please help me figure out why this errors?
Hello, the whole setup went through without any problems, but at the very end it just won't run.
@nim Could you take a look?
Here is the output:
(D:\openai.wiki\ChatGLM2-6B\ENV) D:\openai.wiki\ChatGLM2-6B>python cli_demo.py
(D:\openai.wiki\ChatGLM2-6B\ENV) D:\openai.wiki\ChatGLM2-6B>python cli_demo.py
Everything followed the guide except that I installed on the C: drive; why won't it run in the end?
(base) C:\Users\JZH>cd /d C:\openai.wiki\ChatGLM2-6B
(base) C:\openai.wiki\ChatGLM2-6B>conda activate C:\openai.wiki\ChatGLM2-6B\ENV
(C:\openai.wiki\ChatGLM2-6B\ENV) C:\openai.wiki\ChatGLM2-6B>python cli_demo.py
Traceback (most recent call last):
File "C:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\requests\compat.py", line 11, in <module>
import chardet
ModuleNotFoundError: No module named 'chardet'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "cli_demo.py", line 4, in <module>
from transformers import AutoTokenizer, AutoModel
File "C:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\__init__.py", line 26, in <module>
from . import dependency_versions_check
File "C:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\dependency_versions_check.py", line 17, in <module>
from .utils.versions import require_version, require_version_core
File "C:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\utils\__init__.py", line 30, in <module>
from .generic import (
File "C:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\utils\generic.py", line 29, in <module>
from .import_utils import is_flax_available, is_tf_available, is_torch_available, is_torch_fx_proxy
File "C:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\utils\import_utils.py", line 33, in <module>
from . import logging
File "C:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\transformers\utils\logging.py", line 35, in <module>
import huggingface_hub.utils as hf_hub_utils
File "C:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\huggingface_hub\utils\__init__.py", line 19, in <module>
from huggingface_hub.errors import (
File "C:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\huggingface_hub\errors.py", line 3, in <module>
from requests import HTTPError
File "C:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\requests\__init__.py", line 45, in <module>
from .exceptions import RequestsDependencyWarning
File "C:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\requests\exceptions.py", line 9, in <module>
from .compat import JSONDecodeError as CompatJSONDecodeError
File "C:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\requests\compat.py", line 13, in <module>
import charset_normalizer as chardet
File "C:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\charset_normalizer\__init__.py", line 23, in <module>
from charset_normalizer.api import from_fp, from_path, from_bytes, normalize
File "C:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\charset_normalizer\api.py", line 10, in <module>
from charset_normalizer.md import mess_ratio
File "charset_normalizer\md.py", line 5, in <module>
ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant' (C:\openai.wiki\ChatGLM2-6B\ENV\lib\site-packages\charset_normalizer\constant.py)
Beautifully written; thank you, blogger!