linqq9 committed on
Commit c4abbba · 1 Parent(s): 15011dd

Update README.md

Files changed (1):
  1. README.md +6 -7
README.md CHANGED
@@ -2,16 +2,15 @@
  license: cc-by-4.0
  datasets:
  - Salesforce/xlam-function-calling-60k
- - MadeAgents/XLAM-7.5k-Irrelevance
- base_model: Qwen/Qwen2-7B-Instruct
  ---
- # Hammer-7b Function Calling Model

  ## Introduction
- Hammer-7b is a cutting-edge Large Language Model (LLM) crafted to boost the critical capability of AI agents: function calling. Unlike existing models that focus on training data refinement, Hammer-7b optimizes performance primarily through advanced training techniques.

  ## Model Details
- Hammer-7b is a finetuned model built upon [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct). It is trained on the [APIGen Function Calling Datasets](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k) containing 60,000 samples, supplemented by [7,500 irrelevance-detection samples](https://huggingface.co/datasets/MadeAgents/XLAM-7.5k-Irrelevance) we generated. Employing innovative training techniques such as function masking, function shuffling, and prompt optimization, Hammer-7b achieves exceptional performance across numerous benchmarks, including the [Berkeley Function Calling Leaderboard](https://gorilla.cs.berkeley.edu/leaderboard.html), [API-Bank](https://arxiv.org/abs/2304.08244), [Tool-Alpaca](https://arxiv.org/abs/2306.05301), [Nexus Raven](https://github.com/nexusflowai/NexusRaven-V2), and [Seal-Tools](https://arxiv.org/abs/2405.08355).

  ## Tuning Details
  Thank you for your interest. A report with all the technical details behind our models will be published soon.
@@ -32,7 +31,7 @@ In addition, we evaluated our Hammer series (1.5b, 4b, 7b) on other academic ben


  ## Requirements
- The code for Hammer-7b is included in the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`.

  ## How to Use
  This is a simple example of how to use our model.
@@ -41,7 +40,7 @@ import json
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

- model_name = "MadeAgents/Hammer-7b"
  model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True)
  tokenizer = AutoTokenizer.from_pretrained(model_name)
 
 
  license: cc-by-4.0
  datasets:
  - Salesforce/xlam-function-calling-60k
+ base_model: Qwen/Qwen2-1.5B-Instruct
  ---
+ # Hammer-1.5b Function Calling Model

  ## Introduction
+ Hammer-1.5b is a cutting-edge Large Language Model (LLM) crafted to boost the critical capability of AI agents: function calling. Unlike existing models that focus on training data refinement, Hammer-1.5b optimizes performance primarily through advanced training techniques.

  ## Model Details
+ Hammer-1.5b is a finetuned model built upon [Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct). It is trained on the [APIGen Function Calling Datasets](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k) containing 60,000 samples, supplemented by [7,500 irrelevance-detection samples](https://huggingface.co/datasets/MadeAgents/XLAM-7.5k-Irrelevance) we generated. Employing innovative training techniques such as function masking, function shuffling, and prompt optimization, Hammer-1.5b achieves exceptional performance across numerous benchmarks, including the [Berkeley Function Calling Leaderboard](https://gorilla.cs.berkeley.edu/leaderboard.html), [API-Bank](https://arxiv.org/abs/2304.08244), [Tool-Alpaca](https://arxiv.org/abs/2306.05301), [Nexus Raven](https://github.com/nexusflowai/NexusRaven-V2), and [Seal-Tools](https://arxiv.org/abs/2405.08355).
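Function masking and function shuffling are named above but not yet defined (the technical report is pending). As a conceptual sketch only — an assumption about what such data augmentations could look like, not the authors' actual procedure — masking could replace tool names with generic placeholders so the model relies on descriptions and parameters rather than memorized names, and shuffling could randomize tool order to remove position bias:

```python
import random

def mask_and_shuffle(sample, seed=0):
    """Illustrative augmentation (assumed, not the published method):
    shuffle the tool list, then rename each tool to a generic placeholder
    and remap the ground-truth calls accordingly."""
    rng = random.Random(seed)
    tools = [dict(t) for t in sample["tools"]]  # shallow copies, keep input intact
    rng.shuffle(tools)                          # "function shuffling"
    mapping = {t["name"]: f"func_{i}" for i, t in enumerate(tools)}
    for t in tools:
        t["name"] = mapping[t["name"]]          # "function masking"
    answers = [{**a, "name": mapping.get(a["name"], a["name"])}
               for a in sample["answers"]]
    return {"tools": tools, "answers": answers}

sample = {
    "tools": [
        {"name": "get_weather", "description": "Weather lookup."},
        {"name": "get_time", "description": "Time lookup."},
    ],
    "answers": [{"name": "get_weather", "arguments": {"city": "Paris"}}],
}
augmented = mask_and_shuffle(sample)
```

After augmentation the answer still points at the (now anonymized) weather tool, so the model must match on the description rather than the name.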

  ## Tuning Details
  Thank you for your interest. A report with all the technical details behind our models will be published soon.
 


  ## Requirements
+ The code for Hammer-1.5b is included in the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`.

  ## How to Use
  This is a simple example of how to use our model.
 
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

+ model_name = "MadeAgents/Hammer-1.5b"
  model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True)
  tokenizer = AutoTokenizer.from_pretrained(model_name)
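The diff above ends at the model-loading lines; the prompt-construction and generation steps are elided. As an illustrative sketch only — the prompt layout, the tool-spec field names, and the `build_prompt` helper below are assumptions for illustration, not Hammer's documented input format — a tool list can be serialized into the query like this:

```python
import json

# Hypothetical tool specification; field names mirror common function-calling
# schemas and are an assumption, not Hammer's documented format.
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a given city.",
        "parameters": {"city": {"type": "string", "description": "City name"}},
    }
]

def build_prompt(query, tools):
    """Serialize the available tools as JSON and prepend them to the user query."""
    return (
        "You have access to the following tools:\n"
        + json.dumps(tools, indent=2)
        + "\n\nQuery: "
        + query
    )

prompt = build_prompt("What's the weather in Paris?", tools)
```

The resulting `prompt` string would then be tokenized with `tokenizer` and passed to `model.generate` to obtain the function call.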