Training vs Fine-Tuning vs LoRA

These terms are often used interchangeably, but they represent very different levels of effort and cost.

1) Training from scratch

You build a model from random initialization using massive datasets and compute.

Use it when:

  • You are a frontier AI lab
  • You need a truly novel model capability not available elsewhere

Do not use it when:

  • You are an early startup trying to ship quickly
  • Your needs can be met by existing base models

2) Fine-tuning

You take a pretrained model and update many or all of its weights on your domain data.

Use it when:

  • You need consistent behavior in a specialized domain
  • Prompting alone is not reliable enough

Do not use it when:

  • You have little or low-quality training data
  • Your requirements change rapidly week to week
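To make "update many or all weights" concrete, here is a minimal NumPy sketch. A plain linear model stands in for the pretrained network, and the synthetic data stands in for domain examples; all names, dimensions, and learning rates are illustrative, not a recipe:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 256  # model width and batch size, chosen for illustration

# Stand-in "pretrained" weights; in practice these come from a checkpoint.
W = rng.normal(size=(d, d))

# Synthetic "domain data": the targets reflect a shifted weight matrix,
# i.e. behavior the base model does not yet have.
X = rng.normal(size=(n, d))
W_domain = W + 0.1 * rng.normal(size=(d, d))
Y = X @ W_domain.T

initial_loss = float(np.mean((X @ W.T - Y) ** 2))

lr = 0.1
for step in range(200):
    err = X @ W.T - Y
    # Full fine-tuning: every entry of W receives a gradient update.
    W -= lr * (err.T @ X) / n

final_loss = float(np.mean((X @ W.T - Y) ** 2))
```

The point of the sketch is the last line of the loop: all d × d parameters move, which is why full fine-tuning needs enough data to constrain them and why it is costly to redo when requirements change.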

3) LoRA (Low-Rank Adaptation)

LoRA is a parameter-efficient adaptation method: the pretrained weights stay frozen, and you train a small set of additional low-rank matrices whose product is added to them at inference time.

Use it when:

  • You need domain adaptation at lower cost
  • You want faster iteration than full fine-tuning

Do not use it when:

  • You need deep behavior changes that small adapters cannot capture
  • You do not have an evaluation setup to verify improvements
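The mechanics are easy to see in a few lines of NumPy. The sketch below freezes a stand-in "pretrained" weight matrix W and trains only two low-rank factors, B and A, so the effective weight is W + B @ A. The data, dimensions, and rank are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 64, 4, 256  # model width, adapter rank, batch size

W = rng.normal(size=(d, d))        # frozen "pretrained" weight
A = 0.1 * rng.normal(size=(r, d))  # trainable low-rank factor
B = np.zeros((d, r))               # zero init: the adapter starts as a no-op

# Synthetic domain data whose shift from W is low rank -- the regime
# where a small adapter can capture the change.
X = rng.normal(size=(n, d))
delta = 0.05 * rng.normal(size=(d, 2)) @ rng.normal(size=(2, d))
Y = X @ (W + delta).T

lr = 0.05
initial_loss = float(np.mean((X @ W.T - Y) ** 2))
for step in range(1000):
    err = X @ (W + B @ A).T - Y
    g = (err.T @ X) / n  # gradient w.r.t. the effective weight
    grad_A = B.T @ g
    grad_B = g @ A.T
    A -= lr * grad_A     # only A and B are updated;
    B -= lr * grad_B     # W itself is never touched
final_loss = float(np.mean((X @ (W + B @ A).T - Y) ** 2))
```

Here the adapter trains 2·d·r parameters instead of d·d (512 vs 4096 in this toy), which is where the cost and iteration-speed advantage comes from; it is also why a small rank cannot express arbitrary deep behavior changes.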

Escalation order

  1. Prompting + RAG + good product design
  2. LoRA
  3. Full fine-tuning
  4. Training from scratch (rare)

Bottom line

Start with the least expensive method that meets quality targets, then step up only when metrics prove you need it.