Error when evaluating this model using pipeline

#13
by xiaxin1998 - opened

Hello, I fine-tuned the model, and now when I evaluate it I also need the generation scores. I set output_scores=True, return_dict_in_generate=True, but I got an error:
predictions_with_scores = pipe(input_texts, max_new_tokens=30, num_beams=k, num_return_sequences=1, output_scores=True, return_dict_in_generate=True)
File "/opt/anaconda3/envs/openllama/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 272, in call
return super().call(text_inputs, **kwargs)
File "/opt/anaconda3/envs/openllama/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1249, in call
outputs = list(final_iterator)
File "/opt/anaconda3/envs/openllama/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 124, in next
item = next(self.iterator)
File "/opt/anaconda3/envs/openllama/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 125, in next
processed = self.infer(item, **self.params)
File "/opt/anaconda3/envs/openllama/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1175, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/opt/anaconda3/envs/openllama/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 371, in _forward
out_b = generated_sequence.shape[0]
AttributeError: 'GenerateBeamDecoderOnlyOutput' object has no attribute 'shape'

My code is:
for batch in tqdm(dataloader):
    # Decode input IDs into text for the pipeline
    input_texts = pipe.tokenizer.batch_decode(batch["input_ids"], skip_special_tokens=True)
    predictions_with_scores = pipe(input_texts, max_new_tokens=30, num_beams=k, num_return_sequences=1, output_scores=True, return_dict_in_generate=True)

    # The generated sentences are the predictions
    generated_sents = [pred for pred in predictions_with_scores['sequences']]

    # Get the scores for each prediction
    prediction_scores = predictions_with_scores['sequences_scores']  # These are the scores returned from the pipeline

    # Decode the gold (ground truth) sentences
    gold_sents = pipe.tokenizer.batch_decode(batch['label'], skip_special_tokens=True)

My transformers version: 4.45.1

Can anyone tell me how to get the scores, or why it fails when using output_scores=True, return_dict_in_generate=True?

Thanks.

The problem seems to be in the pipeline's internal implementation. Currently, when you request return_dict_in_generate=True with beam search, the text-generation pipeline assumes generate() returned a plain tensor of token ids, so you hit this error. The workaround for now is to call model.generate() directly instead of going through the pipeline.
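For what it's worth, here is a minimal sketch of that workaround. The checkpoint name "your-finetuned-model", the beam width k, and input_texts are placeholders taken from the question, not anything specific to this model, so adjust them to your setup:

```python
# Minimal sketch: call model.generate() directly to get sequences and beam scores.
# "your-finetuned-model", k, and input_texts are placeholders for your own setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-finetuned-model")
model = AutoModelForCausalLM.from_pretrained("your-finetuned-model")
model.eval()

if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # needed for batched padding with many causal LMs

k = 5  # beam width (placeholder)
inputs = tokenizer(input_texts, return_tensors="pt", padding=True)

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=30,
        num_beams=k,
        num_return_sequences=1,
        output_scores=True,
        return_dict_in_generate=True,
    )

# out is a GenerateBeamDecoderOnlyOutput: the generated token ids are in out.sequences
# and the final beam scores in out.sequences_scores.
generated_sents = tokenizer.batch_decode(out.sequences, skip_special_tokens=True)
prediction_scores = out.sequences_scores
```

Note that out.sequences includes the prompt tokens for decoder-only models, so you may want to slice off the first inputs["input_ids"].shape[1] columns before decoding if you only need the continuations.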

I also ended up using an instance of AutoModelForCausalLM (which in my case resolves to LlamaForCausalLM) instead of the pipeline. Looking at the AttributeError: GenerateBeamDecoderOnlyOutput is defined in the transformers generation utilities (generation/utils.py) with ModelOutput as its base class, and ModelOutput in turn inherits from OrderedDict. So a .shape attribute isn't expected on it; the tensor the pipeline is looking for lives in its sequences field.
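To make that concrete, here is a quick inspection, reusing the out object returned by the model.generate() sketch above, that shows the beam-search result is a dict-like container rather than a tensor:

```python
# 'out' is the GenerateBeamDecoderOnlyOutput returned by model.generate() in the sketch above.
from collections import OrderedDict
from transformers.utils import ModelOutput

print(type(out).__name__)            # GenerateBeamDecoderOnlyOutput
print(isinstance(out, ModelOutput))  # True -- a dataclass-style output container
print(isinstance(out, OrderedDict))  # True -- ModelOutput subclasses OrderedDict
print(hasattr(out, "shape"))         # False -- not a tensor, hence the AttributeError
print(out.sequences.shape)           # the token-id tensor the pipeline expected
print(out.sequences_scores)          # final beam scores, one per returned sequence
```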
