LoRA-TMLR-2024's Collections

- Instruction Finetuning - Code (Magicoder-Evol-Instruct-110K)
- Continued Pretraining - Code (StarCoder-Python)
- Instruction Finetuning - Math (MetaMathQA)
- Continued Pretraining - Math (OpenWebMath)
Instruction Finetuning - Code (Magicoder-Evol-Instruct-110K)

Updated 15 days ago

Full finetuning and LoRA adapters for Llama-2-7B finetuned on Magicoder-Evol-Instruct-110K.
- LoRA-TMLR-2024/magicoder-lora-rank-64-alpha-128 · Updated 15 days ago · 71
- LoRA-TMLR-2024/magicoder-lora-rank-16-alpha-32 · Updated 15 days ago · 116
- LoRA-TMLR-2024/magicoder-lora-rank-256-alpha-512 · Updated 15 days ago · 15
- LoRA-TMLR-2024/magicoder-lora-rank-2048-alpha-4096 · Updated 15 days ago · 18
- LoRA-TMLR-2024/magicoder-full-finetuning-lr-5e-05 · Updated 15 days ago · 7
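The LoRA repo names above encode each adapter's hyperparameters: every `rank-R-alpha-A` pair in this collection uses A = 2R, i.e. a LoRA scaling factor of alpha/rank = 2. A minimal sketch (the parsing helper is hypothetical, assuming only the naming pattern visible in the list above) that recovers these values from a repo id:

```python
import re

def parse_lora_config(repo_id: str):
    """Extract LoRA rank and alpha from a repo name like
    'LoRA-TMLR-2024/magicoder-lora-rank-64-alpha-128'.
    Returns None for repos without the pattern (e.g. full finetuning)."""
    m = re.search(r"rank-(\d+)-alpha-(\d+)", repo_id)
    if m is None:
        return None
    rank, alpha = int(m.group(1)), int(m.group(2))
    # In LoRA, weight updates are scaled by alpha / rank.
    return {"rank": rank, "alpha": alpha, "scaling": alpha / rank}

repos = [
    "LoRA-TMLR-2024/magicoder-lora-rank-64-alpha-128",
    "LoRA-TMLR-2024/magicoder-lora-rank-16-alpha-32",
    "LoRA-TMLR-2024/magicoder-lora-rank-256-alpha-512",
    "LoRA-TMLR-2024/magicoder-lora-rank-2048-alpha-4096",
    "LoRA-TMLR-2024/magicoder-full-finetuning-lr-5e-05",
]
for r in repos:
    print(r, parse_lora_config(r))
```

Keeping alpha fixed at 2× rank means the effective update scale stays constant across the rank sweep, so the four adapters differ only in capacity, not in scaling.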