pipeline_se = pipeline(Tasks.sentence_embedding, model=model_id, device_map='auto')
The error is as follows:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/miniconda3/envs/modelscope/lib/python3.8/site-packages/modelscope/pipelines/base.py", line 218, in __call__
output = self._process_single(input, *args, **kwargs)
File "/root/miniconda3/envs/modelscope/lib/python3.8/site-packages/modelscope/pipelines/base.py", line 246, in _process_single
out = self.preprocess(input, **preprocess_params)
File "/root/miniconda3/envs/modelscope/lib/python3.8/site-packages/modelscope/pipelines/base.py", line 387, in preprocess
return self.preprocessor(inputs, **preprocess_params)
File "/root/miniconda3/envs/modelscope/lib/python3.8/site-packages/modelscope/preprocessors/nlp/sentence_embedding_preprocessor.py", line 84, in __call__
query_inputs = self.nlp_tokenizer(
File "/root/miniconda3/envs/modelscope/lib/python3.8/site-packages/modelscope/preprocessors/nlp/transformers_tokenizer.py", line 108, in __call__
return self.tokenizer(text, text_pair, **tokenize_kwargs)
File "/root/miniconda3/envs/modelscope/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2561, in __call__
encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
File "/root/miniconda3/envs/modelscope/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2647, in _call_one
return self.batch_encode_plus(
File "/root/miniconda3/envs/modelscope/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2838, in batch_encode_plus
return self._batch_encode_plus(
TypeError: _batch_encode_plus() got an unexpected keyword argument 'device_map'
Hello. The sentence-embedding pipeline does not accept a `device_map` argument: as the traceback shows, unrecognized keyword arguments are forwarded down to the tokenizer, which rejects them. Drop `device_map` and pass `device` instead (e.g. `device='gpu'`). If you need multi-GPU inference with the pipeline, wrap the underlying model in `torch.nn.DataParallel` or `torch.nn.parallel.DistributedDataParallel`.
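The `DataParallel` pattern the answer describes can be sketched as follows. This is a minimal, torch-only sketch with a stand-in module, since the real embedding model would have to be downloaded first; the assumption that the pipeline exposes its torch module (e.g. via something like `pipeline_se.model`) is not verified here, so a placeholder model is used instead:

```python
import torch
import torch.nn as nn

# Stand-in for the embedding model; with ModelScope you would wrap the
# pipeline's underlying torch module instead (an assumption about internals).
model = nn.Sequential(nn.Linear(768, 768), nn.Tanh())

if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # splits each input batch across visible GPUs
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

batch = torch.randn(4, 768, device=device)  # fake batch of pooled features
with torch.no_grad():
    embeddings = model(batch)
print(embeddings.shape)  # torch.Size([4, 768])
```

On a single-GPU or CPU-only machine the `DataParallel` wrapping is skipped and the model runs unchanged, so the same script works in both settings.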