Error message
```
...
  File "C:\stable-diffusion-webui\modules\sd_models.py", line 236, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion\ldm\util.py", line 85, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\stable-diffusion-webui\repositories\stable-diffusion\ldm\models\diffusion\ddpm.py", line 461, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion\ldm\models\diffusion\ddpm.py", line 519, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion\ldm\util.py", line 85, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\stable-diffusion-webui\repositories\stable-diffusion\ldm\modules\encoders\modules.py", line 141, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1784, in from_pretrained
    return cls._from_pretrained(
  File "C:\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1929, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\tokenization_clip.py", line 163, in __init__
    self.encoder = json.load(vocab_handle)
  File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 293, in load
    return loads(fp.read(),
  File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 267716 (char 267716)
```
Solution
Problems like this are almost always caused by a corrupted data file that makes JSON parsing fail, so the task is to locate that file and repair it.
From the traceback we can see that the JSONDecodeError is raised in the __init__ method of tokenization_clip.py while it loads the vocabulary file.
Looking up that code confirms where the error comes from:
```python
with open(vocab_file, encoding="utf-8") as vocab_handle:
    self.encoder = json.load(vocab_handle)
```
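This also explains the exact wording of the error: when a JSON file is cut off partway through, the parser stops inside an unfinished string and raises an "Unterminated string" error whose offset points at where that string begins. A minimal illustration (the snippet below is made up and only mimics a vocab entry cut off mid-key):

```python
import json

# Hypothetical example: a vocab-like JSON document truncated in the middle
# of a key, similar to a partially downloaded vocab.json.
truncated = '{"hello</w>": 1, "wor'

try:
    json.loads(truncated)
except json.JSONDecodeError as e:
    print(e)  # e.g. Unterminated string starting at: line 1 column 18 (char 17)
```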
At first I could not find this vocab.json file anywhere in the project.
So I printed vocab_file to get the path of the JSON file, and it turned out the file is not inside the project at all but on the C drive, which is why I could not find it.
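For reference, the same path can also be resolved without editing library code by asking huggingface_hub, which transformers uses for its download cache, where the file lives. A small sketch, assuming the huggingface_hub package that ships as a dependency of transformers:

```python
from huggingface_hub import hf_hub_download

# Returns the local path of vocab.json inside the Hugging Face cache,
# downloading it first if it is not cached yet.
vocab_file = hf_hub_download(
    repo_id="openai/clip-vit-large-patch14",
    filename="vocab.json",
)
print(vocab_file)
```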
The path of my vocab.json file:
C:\Users\xxx\.cache\huggingface\hub\models--openai--clip-vit-large-patch14\snapshots\8d052a0f05efbaefbc9e8786ba291cfdf93e5bff\vocab.json
I did find the file there, but it is only a shortcut (a symlink), so I followed it to the actual file it points to and opened that.
The file the shortcut links to:
C:\Users\xxx\.cache\huggingface\hub\models--openai--clip-vit-large-patch14\blobs\4297ea6a8d2bae1fea8f48b45e257814dcb11f69
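Instead of opening the blob by hand, the same check can be done with a short script that reports the file's length and whether it still parses as JSON (a sketch; the user name and blob hash below are from my machine and need to be adjusted):

```python
import json
from pathlib import Path

# Path of the blob on my machine; replace the user name and hash with yours.
blob = Path(r"C:\Users\xxx\.cache\huggingface\hub"
            r"\models--openai--clip-vit-large-patch14"
            r"\blobs\4297ea6a8d2bae1fea8f48b45e257814dcb11f69")

text = blob.read_text(encoding="utf-8")
print("total characters:", len(text))
print("tail of the file:", text[-40:])

try:
    json.loads(text)
    print("vocab.json parses correctly")
except json.JSONDecodeError as e:
    print("vocab.json is broken:", e)
```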
A large chunk of the end of the file is missing: it is only 267716 characters long in total, which matches the offset reported in the JSONDecodeError. This is what the end of my file looks like:
…rium |
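A file that simply stops partway through is the typical result of an interrupted download, so one straightforward repair (a sketch of one possible fix, not necessarily the only one) is to discard the broken cached copy and let transformers download the tokenizer files again. Run this with the webui's Python environment, which already has transformers installed:

```python
from transformers import CLIPTokenizer

# force_download=True makes transformers ignore the cached (broken) files
# and fetch openai/clip-vit-large-patch14 from the Hugging Face Hub again.
tokenizer = CLIPTokenizer.from_pretrained(
    "openai/clip-vit-large-patch14",
    force_download=True,
)

# Sanity check: with a complete vocab.json the tokenizer has roughly 49408 tokens.
print(len(tokenizer))
```

Deleting the whole models--openai--clip-vit-large-patch14 folder under .cache\huggingface\hub and restarting the webui should have the same effect, because transformers re-downloads whatever is missing from the cache.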