Directory structure

Your PEFT addon must contain three files:

  • adapter_config.json - The Hugging Face adapter configuration file.
  • adapter_model.bin or adapter_model.safetensors - The saved addon file.
  • fireworks.json - A Fireworks configuration file.

Any other files in the directory are optional and will be ignored during the upload process.
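A minimal addon directory might therefore look like the following (the directory name is arbitrary; this sketch uses the safetensors variant):

```
my-addon/
├── adapter_config.json
├── adapter_model.safetensors
└── fireworks.json
```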


The following limits are applied by default. Contact us if you need these limits relaxed.

Supported base models

Currently, the following base models are supported:

  • accounts/fireworks/models/llama-v3-8b-instruct (meta-llama/Meta-Llama-3-8B-Instruct on Hugging Face)
  • accounts/fireworks/models/llama-v3-70b-instruct (meta-llama/Meta-Llama-3-70B-Instruct on Hugging Face)
  • accounts/fireworks/models/llama-guard-2-8b (meta-llama/Meta-Llama-Guard-2-8B on Hugging Face)
  • accounts/fireworks/models/llama-v2-7b
  • accounts/fireworks/models/llama-v2-7b-chat
  • accounts/fireworks/models/llama-v2-13b
  • accounts/fireworks/models/llama-v2-13b-chat
  • accounts/fireworks/models/llama-v2-34b-code
  • accounts/fireworks/models/llama-v2-70b-chat
  • accounts/fireworks/models/mistral-7b
  • accounts/fireworks/models/mistral-7b-instruct-4k
  • accounts/fireworks/models/zephyr-7b-beta
  • accounts/fireworks/models/mixtral-8x7b
  • accounts/fireworks/models/mixtral-8x7b-instruct

Additional base models (including custom models) are supported for enterprise accounts.

The base model name is specified in fireworks.json.
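For example, a minimal fireworks.json might look like the following sketch. The base_model field name is an assumption here; check the current Fireworks documentation for the exact schema:

```json
{
  "base_model": "accounts/fireworks/models/llama-v3-8b-instruct"
}
```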

LoRA ranks

The LoRA rank must be an integer between 4 and 64, inclusive.
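As a pre-upload sanity check, a short script along these lines can verify the rank limit. The helper name is hypothetical; it assumes the rank is stored under PEFT's standard "r" key in adapter_config.json:

```python
import json

def validate_lora_rank(config_text: str) -> int:
    """Hypothetical helper: check that the LoRA rank in an
    adapter_config.json payload is an integer between 4 and 64, inclusive."""
    config = json.loads(config_text)
    rank = config["r"]  # PEFT stores the LoRA rank under the "r" key
    if not (isinstance(rank, int) and 4 <= rank <= 64):
        raise ValueError(f"LoRA rank {rank!r} must be an integer between 4 and 64")
    return rank

# Example: a minimal adapter_config.json fragment with rank 16.
print(validate_lora_rank('{"r": 16, "lora_alpha": 32}'))  # → 16
```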

Supported target modules

Currently, the following target modules are supported:

  • Llama and Mixtral models (all linear layers)
    • q_proj
    • k_proj
    • v_proj
    • o_proj
    • up_proj/w1
    • down_proj/w2
    • gate_proj/w3
    • block_sparse_moe.gate (Mixtral only)

The target modules are specified in adapter_config.json.
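For illustration, the relevant portion of a PEFT-generated adapter_config.json might look like the following sketch; the rank and alpha values are illustrative, and only the target_modules list needs to match the supported modules above:

```json
{
  "peft_type": "LORA",
  "r": 16,
  "lora_alpha": 32,
  "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"]
}
```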