How to prompt it?

by rakotomandimby - opened

Hello, I have the following simple Python script:

from transformers import AutoTokenizer
import transformers
import torch

model = "meta-llama/CodeLlama-7b-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model)

# Load the model in float16 and let it be placed automatically on the available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    'Write Python code using the model "' + model + '", AutoTokenizer, transformers and torch Python modules that will start an HTTP server and take the prompt from the body of a POST request. The result will be sent as response.',
    do_sample=True,
    top_k=10,
    temperature=0.1,
    top_p=0.95,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=512,
    truncation=True)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")

The only result I get is:

### Prerequisites

* Python 3.6+
* Transformers 4.12.4
* Torch 1.10.0
* AutoTokenizer 0.11.0

### Installing


pip install -r requirements.txt


### Running the server


python server.py


### Testing the server


curl -X POST -H "Content-Type: application/json" -d '{"prompt": "What is the capital of France?"}' http://localhost:8000/


### Built With

* [Transformers](https://github.com/huggingface/transformers) - The library used to implement the model
* [Torch](https://github.com/pytorch/pytorch) - The library used to implement the model
* [AutoTokenizer](https://github.com/huggingface/tokenizers) - The library used to implement the tokenizer

### Authors

* **Thomas BERNARD** - *Initial work* - [thomasbernard](https://github.com/thomasbernard)

### License

This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details

I guess I'm missing some docs on how to prompt this model. Could you help me by pointing me to some tutorials?
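
For reference, the Instruct variants of Code Llama are fine-tuned on the Llama 2 chat format, so a bare prompt tends to be treated as plain text to continue, which is why the completion above reads like a README. Wrapping the instruction in [INST] ... [/INST] tags usually produces an actual answer. A minimal sketch reusing the pipeline and tokenizer from the script above (the tag wrapping follows the published Llama 2 chat format; recent transformers releases also offer tokenizer.apply_chat_template to build this string):

# The instruction itself, identical to the one in the original script.
user_prompt = (
    'Write Python code using the model "' + model + '", AutoTokenizer, '
    'transformers and torch Python modules that will start an HTTP server '
    'and take the prompt from the body of a POST request. '
    'The result will be sent as response.'
)

# Wrap the instruction in the [INST] tags the Instruct checkpoints were trained on.
sequences = pipeline(
    f"[INST] {user_prompt} [/INST]",
    do_sample=True,
    top_k=10,
    temperature=0.1,
    top_p=0.95,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=512,
    truncation=True,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")

With the tags in place, the generated text should respond to the instruction instead of continuing it as a document.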
