---
title: README
emoji: 🐠
colorFrom: pink
colorTo: green
sdk: static
pinned: false
---

# Introducing Lamini, the LLM Engine for Rapid Customization

[Lamini](https://lamini.ai) gives every developer the superpowers that took the world from GPT-3 to ChatGPT! Today, you can try out our open dataset generator for training instruction-following LLMs (like ChatGPT) on [Github](https://lamini.ai/). [Sign up](https://lamini.ai/contact) for early access to our full LLM training module, including enterprise features like cloud and on-prem deployments.

# Training LLMs should be as easy as prompt-tuning 🦾

Why is writing a prompt so easy, but training an LLM from a base model still so hard? Iteration cycles for finetuning on modest datasets are measured in months, because it takes significant time to figure out why finetuned models fail. Prompt-tuning iterations, by contrast, take seconds, but performance plateaus within hours: only a limited amount of data can be crammed into the prompt, not the terabytes of data in a warehouse.

It took OpenAI months, with an incredible ML team, to fine-tune and run RLHF on their base GPT-3 model, which had been available for years, creating what became ChatGPT. This training process is only accessible to large ML teams, often with PhDs in AI.

Technical leaders at Fortune 500 companies have told us:

* “Our team of 10 machine learning engineers hit the OpenAI finetuning API, but our model got worse — help!”
* “I don’t know how to make the best use of my data — I’ve exhausted all the prompt magic we can summon from tutorials online.”

That’s why we’re building Lamini: to give every developer the superpowers that took the world from GPT-3 to ChatGPT.

# Rapidly train LLMs to be as good as ChatGPT from any base model 🚀

Lamini is an LLM engine that allows any developer, not just machine learning experts, to train high-performing LLMs on large datasets using the Lamini library. The optimizations in this library reach far beyond what’s available to developers today, from more challenging ones like RLHF to simpler ones like reducing hallucinations.

![Lamini Process Step by Step](lamini.png "Lamini LLM Engine")

Lamini runs across platforms, from OpenAI’s models to open-source ones on HuggingFace, with more to come soon. We are agnostic to base models, as long as there’s a way for our engine to train and run them. In fact, Lamini makes it easy to compare multiple base models with just a single line of code (see the sketch at the end of this page).

Now that you know a bit about where we’re going, today, we’re excited to release our first major community resource!
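
To make the base-model comparison above concrete, here is a minimal Python sketch of what swapping base models could look like. The `Lamini` client, its `model_name` parameter, and the `generate` method are assumptions for illustration, not the library's confirmed API, and the model names are placeholders; check the Lamini docs for the real interface.

```python
# Minimal, hypothetical sketch of comparing base models behind the same prompt.
# The `Lamini` client, its `model_name` argument, and `generate` method are
# assumptions for illustration only; consult the Lamini documentation for the
# actual interface.
from lamini import Lamini  # assumed import path

# One OpenAI model and one open-source HuggingFace model, as mentioned above.
BASE_MODELS = ["gpt-3.5-turbo", "EleutherAI/pythia-410m"]

prompt = "Summarize last week's customer feedback in three bullet points."

for model_name in BASE_MODELS:
    llm = Lamini(model_name=model_name)  # swapping the base model is a one-line change
    print(f"--- {model_name} ---")
    print(llm.generate(prompt))          # assumed single-call text generation
```

In this sketch the base model is just a constructor argument, so comparing models reduces to changing one string, which is the kind of one-line swap the engine is meant to enable.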