SeanScripts committed
Commit d7c24e2
1 Parent(s): 710579c

Update README.md

Files changed (1)
  1. README.md +50 -3
README.md CHANGED
@@ -1,3 +1,50 @@
- ---
- license: llama3.2
- ---
+ ---
+ license: llama3.2
+ base_model:
+ - meta-llama/Llama-3.2-11B-Vision-Instruct
+ pipeline_tag: image-text-to-text
+ library_name: transformers
+ ---
+
+ Converted from [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) using bitsandbytes with NF4 (4-bit) quantization; double quantization is not used.
+ Requires `bitsandbytes` to load.
+
+ Example usage for image captioning:
+ ```python
+ from transformers import MllamaForConditionalGeneration, AutoProcessor, BitsAndBytesConfig
+ from PIL import Image
+ import time
+
+ # Load model
+ model_id = "SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4"
+ model = MllamaForConditionalGeneration.from_pretrained(
+     model_id,
+     use_safetensors=True,
+     device_map="cuda:0"
+ )
+ # Load processor
+ processor = AutoProcessor.from_pretrained(model_id)
+
+ # Caption a local image (could use a more specific prompt)
+ IMAGE = Image.open("test.png").convert("RGB")
+ PROMPT = """<|begin_of_text|><|start_header_id|>user<|end_header_id|>
+ Caption this image:
+ <|image|><|eot_id|><|start_header_id|>assistant<|end_header_id|>
+ """
+
+ inputs = processor(IMAGE, PROMPT, return_tensors="pt").to(model.device)
+ prompt_tokens = len(inputs['input_ids'][0])
+ print(f"Prompt tokens: {prompt_tokens}")
+
+ t0 = time.time()
+ generate_ids = model.generate(**inputs, max_new_tokens=256)
+ t1 = time.time()
+ total_time = t1 - t0
+ generated_tokens = len(generate_ids[0]) - prompt_tokens
+ time_per_token = generated_tokens / total_time
+ print(f"Generated {generated_tokens} tokens in {total_time:.3f} s ({time_per_token:.3f} tok/s)")
+
+ output = processor.decode(generate_ids[0][prompt_tokens:]).replace('<|eot_id|>', '')
+ print(output)
+
+ ```
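
For reference, the NF4 conversion described in this README can be reproduced roughly as follows. This is a minimal sketch based only on the stated settings (NF4, no double quantization); the compute dtype and the exact library versions used for this repo are assumptions, and saving 4-bit weights requires a transformers/bitsandbytes combination recent enough to support 4-bit serialization.

```python
from transformers import MllamaForConditionalGeneration, AutoProcessor, BitsAndBytesConfig

# NF4 (4-bit) quantization without double quantization, per the README.
# Any other settings are assumptions, not taken from this repo.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
)

src_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    src_id,
    quantization_config=bnb_config,
    device_map="cuda:0",
)
processor = AutoProcessor.from_pretrained(src_id)

# Serialize the quantized weights to a local folder (hypothetical output path).
model.save_pretrained("Llama-3.2-11B-Vision-Instruct-nf4")
processor.save_pretrained("Llama-3.2-11B-Vision-Instruct-nf4")
```

The captioning example writes the chat-format special tokens by hand. On a recent transformers version the same prompt can be built with the processor's chat template instead; this is an alternative sketch, not the method recommended by this repo.

```python
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4"
model = MllamaForConditionalGeneration.from_pretrained(model_id, use_safetensors=True, device_map="cuda:0")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("test.png").convert("RGB")
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Caption this image:"},
    ]}
]
# apply_chat_template already adds <|begin_of_text|> etc., so skip extra special tokens.
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, input_text, add_special_tokens=False, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```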