
Transformer Lab

Train, Tune, Eval, RAG: The Ultimate Toolkit for Working with Local LLMs

What is Transformer Lab?

Transformer Lab lets anyone build, tune, and run Large Language Models locally. Beyond simply chatting with models, it lets users train, fine-tune, preference-tune, and evaluate models, and perform retrieval-augmented generation (RAG), all in an easy-to-use application.

We imagine a world where every software developer incorporates large language models into their products. Transformer Lab makes this possible without requiring users to know Python or have prior machine learning experience.

Learn more about our vision

Detailed Feature List

  • 💕 One-Click Download of Hundreds of Popular Models:
    • Llama3, Phi3, Mistral, Mixtral, Gemma, Command-R, and dozens more
  • ⬇ Download any LLM from Hugging Face
  • 🎶 Fine-tune / Train Across Different Hardware
    • Fine-tune using MLX on Apple Silicon
    • Fine-tune using Hugging Face on GPU
  • ⚖️ RLHF and Preference Optimization
    • DPO
    • ORPO
    • SimPO
    • Reward Modeling
  • 💻 Work with LLMs Across Operating Systems:
    • Windows App
    • macOS App
    • Linux
  • 💬 Chat with Models
    • Chat
    • Completions
    • Preset (Templated) Prompts
    • Chat History
    • Tweak generation parameters
  • 🚂 Use Different Inference Engines
    • MLX on Apple Silicon
    • Hugging Face Transformers
    • vLLM
    • llama.cpp
  • 🧑‍🎓 Evaluate Models
  • 📖 RAG (Retrieval-Augmented Generation)
    • Drag and Drop File UI
    • Works on Apple MLX, Transformers, and other engines
  • 📓 Build Datasets for Training
    • Pull from hundreds of common datasets available on Hugging Face
    • Provide your own dataset using drag and drop
  • 🔢 Calculate Embeddings (see the embeddings sketch after this list)
  • 💁 Full REST API (see the chat sketch after this list)
  • 🌩 Run in the Cloud
    • You can run the user interface on your desktop/laptop while the engine runs on a remote or cloud machine
    • Or you can run everything locally on a single machine
  • 🔀 Convert Models Across Platforms
    • Convert between Hugging Face, MLX, and GGUF formats
  • 🔌 Plugin Support
    • Easily pull from a library of existing plugins
    • Write your own plugins to extend functionality
  • 🧑‍💻 Embedded Monaco Code Editor
    • Edit plugins and view what's happening behind the scenes
  • 📝 Prompt Editing
    • Easily edit System Messages or Prompt Templates
  • 📜 Inference Logs
    • While doing inference or RAG, view a log of the raw queries sent to the LLM
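
The REST API and chat features above can be driven from any HTTP client. The sketch below is a minimal illustration in Python, assuming the local engine exposes an OpenAI-compatible `/v1/chat/completions` endpoint; the base URL, port, and model id are placeholder assumptions rather than values documented on this page, so substitute whatever your running server and loaded model actually report.

```python
import requests

# Assumptions: the Transformer Lab engine is running locally and serves an
# OpenAI-compatible REST API. The port and model id below are placeholders;
# replace them with the address your server reports and the model you loaded.
BASE_URL = "http://localhost:8338/v1"   # hypothetical port
MODEL = "Llama-3-8B-Instruct"           # hypothetical model id


def chat(prompt: str) -> str:
    """Send a single-turn chat completion request and return the reply text."""
    response = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.7,   # generation parameters can be tweaked per request
            "max_tokens": 256,
        },
        timeout=120,
    )
    response.raise_for_status()
    # Assumes an OpenAI-style response shape: choices[0].message.content
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(chat("Explain retrieval-augmented generation in one sentence."))
```

Because the user interface and engine are decoupled (see "Run in the Cloud" above), the same request works against a remote machine by replacing `localhost` with that machine's address.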
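For the embeddings feature, here is a similarly hedged sketch under the same unverified assumption of an OpenAI-style `/v1/embeddings` endpoint; the port and model id are again placeholders.

```python
import requests

# Assumption: the running engine exposes an OpenAI-style embeddings endpoint.
# The port and model id are placeholders, not documented values.
BASE_URL = "http://localhost:8338/v1"
MODEL = "all-MiniLM-L6-v2"  # hypothetical embedding model id


def embed(texts: list[str]) -> list[list[float]]:
    """Return one embedding vector per input string."""
    response = requests.post(
        f"{BASE_URL}/embeddings",
        json={"model": MODEL, "input": texts},
        timeout=60,
    )
    response.raise_for_status()
    # OpenAI-style responses place each vector under data[i]["embedding"].
    return [item["embedding"] for item in response.json()["data"]]


vectors = embed(["Transformer Lab runs models locally.", "Embeddings enable RAG."])
print(len(vectors), len(vectors[0]))
```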