---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: LeroyDyer/_Spydaz_Web_AI_ChatML_002
model-index:
- name: _Spydaz_Web_AI_ChatML_002
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 24.12
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=LeroyDyer/_Spydaz_Web_AI_ChatML_002
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 4.19
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=LeroyDyer/_Spydaz_Web_AI_ChatML_002
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 0.0
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=LeroyDyer/_Spydaz_Web_AI_ChatML_002
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 1.01
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=LeroyDyer/_Spydaz_Web_AI_ChatML_002
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 2.79
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=LeroyDyer/_Spydaz_Web_AI_ChatML_002
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.05
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=LeroyDyer/_Spydaz_Web_AI_ChatML_002
name: Open LLM Leaderboard
---
## SpydazWeb AI model : LeroyDyer/_Spydaz_Web_AI_ChatML_002 (512k)
The context has been successfully merged with the 512k project and realigned to the same datasets used to align the ChatML model in past training regimes.
This model is also a descendant of the 512k-context project, which included the CodeAgent model that uses the ReAct templating and training regimes I have also deployed; hence that model was included in the merge, after realigning these models using ReAct Llama datasets and the same alignment sets used to generate the latest ChatML skew.
Quote for Motivation:
# "Success comes from defining each task in achievable steps. Every completed step is a success that brings you closer to your goal. If your steps are unreachable, failure is inevitable. Winners create more winners, while losers do the opposite. Success is a game of winners!"
# "To grow as a professional, set goals just beyond your current abilities. Achieving these milestones will not only overcome obstacles but also strengthen your skillset. If your tasks are too easy, you’ll never challenge yourself or improve, and life will pass you by!"
— # Leroy Dyer (1972-Present)
This model is the updated, aligned ChatML version of the current models and merges.
It was realigned using the Glaive function-calling and Word Orca datasets, as well as the usual internal datasets: (3)
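For reference, the ChatML format this model is aligned to can be sketched as a small helper. The `<|im_start|>`/`<|im_end|>` tags are the standard ChatML tokens; the helper function and example messages below are illustrative, not part of the model's own tooling.

```python
# Minimal sketch of the ChatML prompt format this model is aligned to.
# The helper name and example messages are illustrative only.

def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Leave an open assistant turn for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

In practice, `tokenizer.apply_chat_template(...)` in `transformers` produces the same layout from the tokenizer's bundled chat template.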
This model is based on the world's archive of knowledge: maintaining historical documents and providing services for the survivors of mankind, who may need to construct shelters, develop technologies, or create medical resources, as well as maintain the history of the past, keeping a store of all the religious knowledge and data of the world.
It offers a friendly interface with a caring, at times flirtatious, non-binary personality.
It is an expert in all fields: i.e. uncensored, and will not refuse to give information. The model can be used for role play, as many character dialogues were also trained into the model as part of its personality, enabling a greater perspective and outlook and more natural discussion with the agents.
The model was trained to operate in a RAG environment, utilizing retrieved content and internal knowledge to respond to questions or create enriched summaries.
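The RAG setup described above can be sketched as a tiny retrieve-then-prompt loop: fetch relevant passages, then prepend them to the question so the model can combine retrieved content with its internal knowledge. The keyword-overlap retriever, sample documents, and prompt wording below are illustrative stand-ins, not the actual training pipeline.

```python
# Toy sketch of a RAG loop: retrieve passages, build an enriched prompt.
# The retriever, documents, and prompt wording are illustrative only.

def retrieve(query, documents, top_k=2):
    """Naive keyword-overlap retriever standing in for a real vector store."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(query, documents):
    """Prepend retrieved context so the model can use it plus its own knowledge."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Use the context below, together with your own knowledge, "
        "to answer the question.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

docs = [
    "Shelters should be built on high, dry ground.",
    "Boiling water for one minute makes it safe to drink.",
    "The printing press was invented in the 15th century.",
]
prompt = build_rag_prompt("How do I make water safe to drink?", docs)
print(prompt)
```

The resulting prompt string would then be wrapped in the ChatML template and passed to the model for generation.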
This Mistral model was trained 5x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_LeroyDyer___Spydaz_Web_AI_ChatML_002)
| Metric |Value|
|-------------------|----:|
|Avg. | 5.53|
|IFEval (0-Shot) |24.12|
|BBH (3-Shot) | 4.19|
|MATH Lvl 5 (4-Shot)| 0.00|
|GPQA (0-shot) | 1.01|
|MuSR (0-shot) | 2.79|
|MMLU-PRO (5-shot) | 1.05|