Different results than Inference API

#4
by nlpsingh - opened

What preprocessing steps does the Inference API apply to the data? I notice different outputs for the same input when I run inference in my notebook versus through the API, and I was wondering what steps I could take to match the Inference API. FYI, I am using the zero-shot-classification pipeline.
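
For context, this is roughly how I invoke the pipeline locally (the model id below is just a placeholder for this repo's model):

```python
# Rough sketch of my local setup, assuming the standard transformers
# zero-shot pipeline; "model-id-here" is a placeholder, not the real id.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="model-id-here")

result = classifier(
    "one day I will see the world",          # example sequence
    candidate_labels=["travel", "cooking"],
)
print(result["labels"], result["scores"])
```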

Hi, is the difference significant? I don't know what is under the hood of the hosted API; I expect it to be the standard zero-shot pipeline. Perhaps they apply some kind of quantization.
I'll also check whether there is a difference on other models or if it's just me.

For some instances, the difference is significant. The Inference API correctly ranks the answer as the top choice, whereas when I run the pipeline on my end it is one of the least probable choices.

Hello, any update on this?
The real issue for me is when labels_list doesn't contain any label that relates to the text. The model still returns scores that sum to 1, so sometimes it gives me absurd results.
For example: labels_list = ["art", "war"], text = "i like eating cheese" -> art = 0.64
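
Roughly what I am running (the model id is a placeholder):

```python
# Sketch of the issue, assuming the standard zero-shot pipeline and a
# placeholder model id. In the default single-label mode the scores are
# normalized across the candidate labels, so they always sum to 1,
# even when none of the labels fits the text.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="model-id-here")

result = classifier("i like eating cheese", candidate_labels=["art", "war"])
print(result["labels"], result["scores"])  # scores sum to ~1.0 regardless
```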

Owner

Hi @VicidiLochi
This is normal behavior for the default single-label mode. You should use multi_label=True:
classifier(sequence_to_classify, candidate_labels, multi_label=True)
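
A fuller sketch of that call, for anyone landing here later (the model id is a placeholder):

```python
# Sketch of the suggested fix, assuming the standard zero-shot pipeline;
# "model-id-here" is a placeholder for this repo's model id.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="model-id-here")

sequence_to_classify = "i like eating cheese"
candidate_labels = ["art", "war"]

# With multi_label=True each label is scored independently, so the
# scores no longer sum to 1 and an irrelevant label can score low.
result = classifier(sequence_to_classify, candidate_labels, multi_label=True)
print(result["labels"], result["scores"])
```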
