---
language:
- ja
- en
tags:
- merge
- mergekit
- lazymergekit
- elyza/ELYZA-japanese-Llama-2-7b
- tokyotech-llm/Swallow-7b-hf
base_model:
- elyza/ELYZA-japanese-Llama-2-7b
- tokyotech-llm/Swallow-7b-hf
---

# 🌿 Heliotrope-Ely-Swa-slerp-7B

Heliotrope-Ely-Swa-slerp-7B is a merge of the following models, created with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing) by [Maxime Labonne](https://huggingface.co/mlabonne) and powered by [MergeKit](https://github.com/arcee-ai/mergekit) from [Arcee AI](https://www.arcee.ai):

* [elyza/ELYZA-japanese-Llama-2-7b](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b) (base model)
* [tokyotech-llm/Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf)

## 💻 Configuration

```yaml
slices:
  - sources:
      - model: elyza/ELYZA-japanese-Llama-2-7b
        layer_range: [0, 32]
      - model: tokyotech-llm/Swallow-7b-hf
        layer_range: [0, 32]
merge_method: slerp
base_model: elyza/ELYZA-japanese-Llama-2-7b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

## 🤗 Usage with Hugging Face Transformers

```python
# !pip install -qU transformers accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

model_name = "AkimfromParis/Heliotrope-Ely-Swa-slerp-7B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load the weights in float16 and let Accelerate place them on the available device(s)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

sequences = pipe("大谷翔平選手は", do_sample=False, max_new_tokens=100)
print(sequences[0].get("generated_text"))
```

# 🔖 Citation

```
@misc{goddard2024arcee,
  title={Arcee's MergeKit: A Toolkit for Merging Large Language Models},
  author={Goddard, Charles and Siriwardhana, Shamane and Ehghaghi, Malikeh and Meyers, Luke and Karpukhin, Vlad and Benedict, Brian and McQuade, Mark and Solawetz, Jacob},
  journal={arXiv preprint arXiv:2403.13257},
  year={2024}
}
```

[arXiv:2403.13257](https://arxiv.org/abs/2403.13257)
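
## 🔧 Reproducing the merge (sketch)

The SLERP configuration shown above can be applied with MergeKit itself. The snippet below is a minimal sketch, not the exact command used to build this model: it assumes the configuration is saved as `config.yaml`, that MergeKit is installed (`pip install mergekit`), and that the `MergeConfiguration`, `MergeOptions`, and `run_merge` entry points of a recent MergeKit release are available (the API may differ between versions). The output directory name is arbitrary.

```python
# Hypothetical reproduction sketch; exact MergeKit API may vary by version.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the SLERP configuration from the Configuration section (saved as config.yaml)
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the merge and write the merged model to a local directory
run_merge(
    merge_config,
    out_path="./Heliotrope-Ely-Swa-slerp-7B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is present
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The resulting directory can then be loaded with `AutoModelForCausalLM.from_pretrained("./Heliotrope-Ely-Swa-slerp-7B")` in the same way as the published checkpoint.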