---
base_model: []
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* C:/Users/Jacoby/Downloads/text-generation-webui-main/models/uukuguy_speechless-instruct-mistral-7b-v0.2
* C:/Users/Jacoby/Downloads/text-generation-webui-main/models/SanjiWatsuki_Kunoichi-DPO-v2-7B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: C:/Users/Jacoby/Downloads/text-generation-webui-main/models/SanjiWatsuki_Kunoichi-DPO-v2-7B
  - model: C:/Users/Jacoby/Downloads/text-generation-webui-main/models/uukuguy_speechless-instruct-mistral-7b-v0.2
merge_method: slerp
base_model: C:/Users/Jacoby/Downloads/text-generation-webui-main/models/uukuguy_speechless-instruct-mistral-7b-v0.2
parameters:
  t:
    - value: [0.5, 0.1, 0.4, 0.3, 0.5, 0.7, 0.5, 0.7, 0.2, 0.1, 0.4] # Per-layer interpolation factor: 0 keeps the base model's weights, 1 takes Kunoichi-DPO-v2-7B's
  embed_slerp: true # Required; the merge fails without it
dtype: float16
```
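For reference, SLERP interpolates between the two parents' weight tensors along an arc rather than a straight line, with `t` controlling how far the result sits from the base model (`t = 0`) toward the other model (`t = 1`). The snippet below is a minimal PyTorch sketch of that idea, not mergekit's actual implementation; the function name and the fallback to linear interpolation are assumptions made for illustration.

```python
# Minimal sketch of spherical linear interpolation (SLERP) between two weight
# tensors of the same shape. Illustrative only; not mergekit's internal code.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two weight vectors, computed on normalized copies.
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    omega = torch.acos(torch.clamp(a_dir @ b_dir, -1.0, 1.0))
    if omega.abs() < eps:
        # Vectors are nearly parallel: fall back to plain linear interpolation.
        return ((1.0 - t) * a + t * b).to(a.dtype)
    so = torch.sin(omega)
    # Interpolate along the arc: weights sum to 1 and follow the sphere geometry.
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```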
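Once produced (for example with mergekit's `mergekit-yaml` command), the merged checkpoint loads like any other Mistral-style model via `transformers`. The path below is a placeholder for wherever the merge output was saved, not a real repository id:

```python
# Hypothetical usage example: load the merged model for inference.
# "./merged-model" stands in for the directory the merge was written to.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto")

prompt = "Explain what a SLERP model merge is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```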