whisper large v3 turbo

#160
by deepdml - opened

https://huggingface.co/deepdml/whisper-large-v3-turbo
Newly released model from OpenAI: "the turbo model is an optimized version of large-v3 that offers faster transcription speed with a minimal degradation in accuracy."

You can use deepdml/whisper-large-v3-turbo to get a ~5x inference speedup! Benchmarked on a Colab T4 GPU (a usage sketch follows the table below):

| Model | Time (s) | Relative speed |
|---|---|---|
| openai/whisper-large-v3 | 10.36 | 1x |
| deepdml/whisper-large-v3-turbo | 2.11 | ~5x |
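A minimal sketch of how the model can be loaded through the Transformers ASR pipeline; the audio path `audio.mp3` is a placeholder, and the fp16/CUDA settings assume a GPU such as the Colab T4 used above.

```python
import torch
from transformers import pipeline

# Load the turbo checkpoint via the automatic-speech-recognition pipeline.
# float16 + GPU gives the speedup reported in the table; on CPU, drop both kwargs.
pipe = pipeline(
    "automatic-speech-recognition",
    model="deepdml/whisper-large-v3-turbo",
    torch_dtype=torch.float16,
    device="cuda:0",
)

# Transcribe an audio file (placeholder path).
result = pipe("audio.mp3")
print(result["text"])
```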
