Fine-tuning models

This guide will walk you through the steps to fine-tune a Fireworks-supported base model.

📘 INFO

The meta-llama/Meta-Llama-3-8B-Instruct and meta-llama/Meta-Llama-3-70B-Instruct models are now supported!


We utilize LoRA (Low-Rank Adaptation) for efficient and effective fine-tuning of large language models. LoRA is used for fine-tuning all models except our 70B models, which use qLoRA (quantized LoRA) to improve training speed. Take advantage of this opportunity to enhance your models with our cutting-edge technology!

Introduction

Fine-tuning a model with a dataset can be useful for several reasons:

  1. Enhanced Precision: It allows the model to adapt to the unique attributes and trends within the dataset, leading to significantly improved precision and effectiveness.
  2. Domain Adaptation: While many models are developed with general data, fine-tuning them with specialized, domain-specific datasets ensures they are finely attuned to the specific requirements of that field.
  3. Bias Reduction: General models may carry inherent biases. Fine-tuning with a well-curated, diverse dataset aids in reducing these biases, fostering fairer and more balanced outcomes.
  4. Contemporary Relevance: Information evolves rapidly, and fine-tuning with the latest data keeps the model current and relevant.
  5. Customization for Specific Applications: This process allows for the tailoring of the model to meet unique objectives and needs, an aspect not achievable with standard models.

In essence, fine-tuning a model with a specific dataset is a pivotal step in ensuring its enhanced accuracy, relevance, and suitability for specific applications. Let's embark on the journey of fine-tuning a model!

Installing firectl

The firectl command-line interface (CLI) will be used to manage fine-tuning jobs and their resulting models.

# macOS (Apple Silicon)
curl https://storage.googleapis.com/fireworks-public/firectl/stable/darwin-arm64.gz -o firectl.gz
gzip -d firectl.gz && chmod a+x firectl
sudo mv firectl /usr/local/bin/firectl
sudo chown root: /usr/local/bin/firectl

# macOS (Intel)
curl https://storage.googleapis.com/fireworks-public/firectl/stable/darwin-amd64.gz -o firectl.gz
gzip -d firectl.gz && chmod a+x firectl
sudo mv firectl /usr/local/bin/firectl
sudo chown root: /usr/local/bin/firectl

# Linux (x86-64)
wget -O firectl.gz https://storage.googleapis.com/fireworks-public/firectl/stable/linux-amd64.gz
gunzip firectl.gz
sudo install -o root -g root -m 0755 firectl /usr/local/bin/firectl

Signing in

Run the following command to sign into Fireworks:

firectl signin

Confirm that you have successfully signed in by running:

firectl list accounts

You should see your account ID.

Preparing your dataset

To fine-tune a model, we need to first upload a dataset. Once uploaded, this dataset can be used to create one or more fine-tuning jobs. A dataset consists of a single JSONL file, where each line is a separate training example.

Limits:

  • Minimum number of examples is 1.
  • Maximum number of examples is 1,000,000.

Format:

  • Each line of the file must be a valid JSON object.

For the rest of this tutorial, we will use the databricks/databricks-dolly-15k dataset as an example. Each record in this dataset consists of a category, instruction, an optional context, and the expected response. Here are a few sample records:

{"instruction": "When did Virgin Australia start operating?", "context": "Virgin Australia, the trading name of Virgin Australia Airlines Pty Ltd, is an Australian-based airline. It is the largest airline by fleet size to use the Virgin brand. It commenced services on 31 August 2000 as Virgin Blue, with two aircraft on a single route. It suddenly found itself as a major airline in Australia's domestic market after the collapse of Ansett Australia in September 2001. The airline has since grown to directly serve 32 cities in Australia, from hubs in Brisbane, Melbourne and Sydney.", "response": "Virgin Australia commenced services on 31 August 2000 as Virgin Blue, with two aircraft on a single route.", "category": "closed_qa"}
{"instruction": "Which is a species of fish? Tope or Rope", "context": "", "response": "Tope", "category": "classification"}
{"instruction": "Why can camels survive for long without water?", "context": "", "response": "Camels use the fat in their humps to keep them filled with energy and hydration for long periods of time.", "category": "open_qa"}
{"instruction": "Alice's parents have three daughters: Amy, Jessy, and what\u2019s the name of the third daughter?", "context": "", "response": "The name of the third daughter is Alice", "category": "open_qa"}
{"instruction": "When was Tomoaki Komorida born?", "context": "Komorida was born in Kumamoto Prefecture on July 10, 1981. After graduating from high school, he joined the J1 League club Avispa Fukuoka in 2000. Although he debuted as a midfielder in 2001, he did not play much and the club was relegated to the J2 League at the end of the 2001 season. In 2002, he moved to the J2 club Oita Trinita. He became a regular player as a defensive midfielder and the club won the championship in 2002 and was promoted in 2003. He played many matches until 2005. In September 2005, he moved to the J2 club Montedio Yamagata. In 2006, he moved to the J2 club Vissel Kobe. Although he became a regular player as a defensive midfielder, his gradually was played less during the summer. In 2007, he moved to the Japan Football League club Rosso Kumamoto (later Roasso Kumamoto) based in his local region. He played as a regular player and the club was promoted to J2 in 2008. Although he did not play as much, he still played in many matches. In 2010, he moved to Indonesia and joined Persela Lamongan. In July 2010, he returned to Japan and joined the J2 club Giravanz Kitakyushu. He played often as a defensive midfielder and center back until 2012 when he retired.", "response": "Tomoaki Komorida was born on July 10,1981.", "category": "closed_qa"}

To create a dataset, run:

firectl create dataset <DATASET_ID> path/to/dataset.jsonl

and you can check the dataset with:

firectl get dataset <DATASET_ID>

To use an existing Hugging Face dataset, refer to the conversion script in the Hugging Face dataset to JSONL section below. Datasets are private and cannot be viewed by other accounts.

Starting your tuning job

Fireworks supports three types of fine-tuning depending on the modeling objective:

  • Text completion - used to train a text generation model
  • Text classification - used to train a text classification model
  • Conversation - used to train a chat/conversation model

There are two ways to specify settings for your tuning job. You can create a settings YAML file and/or specify them using command-line flags. If a setting is present in both, the command-line flag takes precedence.

To start a job, run:

firectl create fine-tuning-job --settings-file path/to/settings.yaml --display-name "My Job"

firectl will return the fine-tuning job ID.

The following sections provide examples of a settings file for the given tasks.

Text completion

In this example, we will only train on the context, instruction, and response fields. We won't use the category field at all.

# The ID of the dataset you created above.
dataset: my-dataset

text_completion:
  # How the fields of the JSON dataset should be formatted into the input text.
  input_template: "### GIVEN THE CONTEXT: {context}  ### INSTRUCTION: {instruction}  ### RESPONSE IS: "
  
  # How the fields of the JSON dataset should be formatted into the output text.
  output_template: "ANSWER: {response}"

# The Hugging Face model name of the base model.
base_model: mistralai/Mistral-7B-v0.1
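
For illustration, here is how these templates expand for one of the sample records shown earlier. This is a minimal sketch using plain Python string formatting; the actual prompt construction and tokenization are handled by the tuning service.

# Apply the input/output templates from the settings file to one dolly-style record.
record = {
    "instruction": "Which is a species of fish? Tope or Rope",
    "context": "",
    "response": "Tope",
    "category": "classification",  # present in the data but unused by these templates
}

input_template = "### GIVEN THE CONTEXT: {context}  ### INSTRUCTION: {instruction}  ### RESPONSE IS: "
output_template = "ANSWER: {response}"

print(input_template.format(**record) + output_template.format(**record))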

Conversation

To train a conversation model, the dataset must conform to the schema expected by the Chat Completions API. Each JSON object of the dataset must contain a single array field called messages. Each message is an object containing two fields:

  • role - one of "system", "user", or "assistant".
  • content - the content of the message.

A message with the "system" role is optional, but if specified, must be the first message of the conversation. Subsequent messages start with "user" and alternate between "user" and "assistant". For example:

{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What color is the sky?"}, {"role": "assistant", "content": "blue"}]}
{"messages": [{"role": "user", "content": "What is 1+1?"}, {"role": "assistant", "content": "2"}, {"role": "user", "content": "Now what is 2+2?"}, {"role": "assistant", "content": "4"}]}

The settings file for tuning a conversation model looks like:

# The ID of the dataset you created above.
dataset: my-dataset

conversation: {}

# The Hugging Face model name of the base model.
base_model: mistralai/Mistral-7B-v0.1

Alternatively, you can optionally pass in a Jinja template that controls how the messages are rendered. The settings file then looks like:

# The ID of the dataset you created above.
dataset: my-dataset

conversation:
  jinja_template: <jinja template string>

# The Hugging Face model name of the base model.
base_model: mistralai/Mistral-7B-v0.1

An example template string looks like this:

  {%- set _mode = mode | default('generate', true) -%}
  {%- set stop_token = '<|eot_id|>' -%}
  {%- set message_roles = ['USER', 'ASSISTANT'] -%}
  {%- set ns = namespace(initial_system_message_handled=false, last_assistant_index_for_eos=-1, messages=messages) -%}
  {%- for message in ns.messages -%}
      {%- if loop.last and message['role'] | upper == 'ASSISTANT' -%}
          {%- set ns.last_assistant_index_for_eos = loop.index0 -%}
      {%- endif -%}
  {%- endfor -%}
  {%- if _mode == 'generate' -%}
      {{ bos_token }}
  {%- endif -%}
  {%- for message in ns.messages -%}
      {%- if message['role'] | upper == 'SYSTEM' and not ns.initial_system_message_handled -%}
          {%- set ns.initial_system_message_handled = true -%}
          {{ '<|start_header_id|>system<|end_header_id|>\n\n' + message['content'] + stop_token }}
      {%- elif message['role'] | upper != 'SYSTEM' -%}
          {%- if (message['role'] | upper == 'USER') != ((loop.index0 - (1 if ns.initial_system_message_handled else 0)) % 2 == 0) -%}
              {{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}
          {%- endif -%}
          {%- if message['role'] | upper == 'USER' -%}
              {{ '<|start_header_id|>user<|end_header_id|>\n\n' + message['content'] + stop_token }}
          {%- elif message['role'] | upper == 'ASSISTANT' -%}
              {%- if _mode == 'train' -%}
                  {{ '<|start_header_id|>assistant<|end_header_id|>\n\n' + unk_token + message['content'] + stop_token + unk_token }}
              {%- else -%}
                  {{ '<|start_header_id|>assistant<|end_header_id|>\n\n' + message['content'] + (stop_token if loop.index0 != ns.last_assistant_index_for_eos else '') }}
              {%- endif -%}
          {%- endif -%}
      {%- endif -%}
  {%- endfor -%}
  {%- if _mode == 'generate' and ns.last_assistant_index_for_eos == -1 -%}
      {{ '<|start_header_id|>assistant<|end_header_id|>' }}
  {%- endif -%}
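
To preview locally how a template like this renders a conversation, you can load it with the jinja2 library. The snippet below is only a sketch: it assumes you saved the template above as chat_template.jinja, and the bos_token and unk_token values are placeholders, since the service supplies the real tokenizer values during tuning.

import jinja2

def raise_exception(message):
    # Provides the raise_exception helper the template calls on invalid input.
    raise ValueError(message)

env = jinja2.Environment()
env.globals["raise_exception"] = raise_exception

with open("chat_template.jinja") as f:
    template = env.from_string(f.read())

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What color is the sky?"},
    {"role": "assistant", "content": "blue"},
]

print(template.render(
    messages=messages,
    mode="train",                   # or "generate"
    bos_token="<|begin_of_text|>",  # placeholder; actual value depends on the model's tokenizer
    unk_token="<unk>",              # placeholder; actual value depends on the model's tokenizer
))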

Note: When using conversation settings, polished default Jinja templates are provided for models recommended for chat tuning to ensure quality (see the Conversation Recommended column in the supported base models table below). For other models, a generic default template is used if you do not provide one to override it, but the quality of the tuned model may not be optimal.

Text classification

In this example, we'll only train on the instruction and category fields. We won't use the context and response fields at all.

# The ID of the dataset you created above.
dataset: my-dataset

text_classification:
  # The JSON field containing the input text to be classified.
  text: instruction

  # The JSON field containing the classification label.
  label: category
  
  # The boolean field to enable evaluation, default to false.
  evaluation: false

# The Hugging Face model name of the base model.
base_model: mistralai/Mistral-7B-v0.1
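
With this configuration, each record is reduced to a (text, label) pair taken from the instruction and category fields. A quick way to inspect the label distribution before tuning is a sketch like the following, assuming the dolly-style field names used above and a local dataset.jsonl:

import json
from collections import Counter

# Count how many examples fall under each classification label.
label_counts = Counter()
with open("dataset.jsonl") as f:
    for line in f:
        record = json.loads(line)
        label_counts[record["category"]] += 1

for label, count in label_counts.most_common():
    print(f"{label}: {count}")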

Deploying the model for inference

You can monitor the progress of the tuning job by running:

firectl get fine-tuning-job <JOB_ID>

Once the job successfully completes, a model will be created in your account. You can see a list of models by running:

firectl list models

Or if you specified a model ID when creating the fine-tuning job, you can get the model directly:

firectl get model <MODEL_ID>

The model should be in an UNDEPLOYED state after the fine-tuning job completes. To deploy the model for inference, run:

firectl deploy <MODEL_ID>

You can then query the model using our REST API:

curl \
  -H "Authorization: Bearer ${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"model": "accounts/<ACCOUNT_ID>/models/<MODEL_ID>", "prompt": "hello, the sky is"}' \
  https://api.fireworks.ai/inference/v1/completions
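
The same request can be made from Python using the requests library. This is a minimal sketch that mirrors the curl command above; replace the placeholders with your account and model IDs, and set the API_KEY environment variable.

import os
import requests

# Query the deployed model through the completions REST endpoint.
response = requests.post(
    "https://api.fireworks.ai/inference/v1/completions",
    headers={
        "Authorization": f"Bearer {os.environ['API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "accounts/<ACCOUNT_ID>/models/<MODEL_ID>",
        "prompt": "hello, the sky is",
    },
)
response.raise_for_status()
print(response.json())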


Additional tuning options

Epochs

Epochs is the number of passes over the training data the job should train for. Non-integer values are supported. If not specified, a reasonable default number will be chosen for you.

Note: the product of dataset examples and epochs may not exceed 3 million. For example, a 15,000-example dataset can be trained for at most 200 epochs.

# ...
epochs: 2.0

firectl create fine-tuning-job \
  --epochs 2.0 \
  ...

Learning rate

The learning rate used in training can be configured. If not specified, a reasonable default value will be chosen.

# ...
learning_rate: 0.0001

firectl create fine-tuning-job \
  --learning-rate 0.0001 \
  ...

Batch size

The training batch size can be configured as a power of 2 less than 1024. If not specified, a reasonable default value will be chosen.

# ...
batch_size: 32

firectl create fine-tuning-job \
  --batch-size 32 \
  ...

LoRA rank

LoRA rank refers to the dimensionality of the trainable matrices in Low-Rank Adaptation fine-tuning, balancing model adaptability and computational efficiency when fine-tuning large language models. The LoRA rank used in training can be configured as a positive integer with a maximum of 32. If not specified, a reasonable default value will be chosen.

# ...
lora_rank: 16

firectl create fine-tuning-job \
  --lora-rank 16 \
  ...

Training progress and monitoring

The fine-tuning service integrates with Weights & Biases to provide observability into the tuning process. To use this feature, you must have a Weights & Biases account and have provisioned an API key.

wandb_entity: my-org
wandb_api_key: xxx
wandb_project: My Project

firectl create fine-tuning-job \
  --wandb-entity my-org \
  --wandb-api-key xxx \
  --wandb-project "My Project" \
  ...

Model ID

By default, the fine-tuning job will generate a random unique ID for the model. This ID is used to refer to the model at inference time. You can optionally choose a custom ID.

model_id: my-model

firectl create fine-tuning-job \
  --model-id my-model \
  ...

Job ID

By default, the fine-tuning job will generate a random unique ID for the fine-tuning job. You can optionally choose a custom ID.

job_id: my-fine-tuning-job

firectl create fine-tuning-job \
  --job-id my-fine-tuning-job \
  ...

Downloading model weights

Downloading model weights is available upon request. Please message a Fireworks team member from the fine-tuning Discord or email raythai [at] fireworks.ai with your account ID to download your model weights.

firectl download model <MODEL_ID>

Supported base models

The following base models are supported, along with the default parameters used when none are specified:

| Model | Batch Size | LoRA Rank | Epochs | Learning Rate | Cut-off Length | Conversation Recommended |
|---|---|---|---|---|---|---|
| codellama/CodeLlama-34b-hf | 16 | 8 | 1 | 3.00E-04 | 4096 | false |
| meta-llama/Llama-2-13b-hf | 16 | 16 | 1 | 2.00E-04 | 4096 | false |
| meta-llama/Llama-2-13b-chat-hf | 16 | 16 | 1 | 2.00E-05 | 4096 | true |
| meta-llama/Llama-2-70b-hf | 8 | 4 | 1 | 2.00E-05 | 4096 | false |
| meta-llama/Llama-2-70b-chat-hf | 8 | 4 | 1 | 2.00E-05 | 4096 | true |
| meta-llama/Llama-2-7b-hf | 16 | 64 | 1 | 3.00E-04 | 4096 | false |
| meta-llama/Llama-2-7b-chat-hf | 16 | 64 | 1 | 1.00E-04 | 4096 | true |
| meta-llama/Meta-Llama-3-8B-Instruct | 16 | 64 | 1 | 1.00E-04 | 8192 | true |
| meta-llama/Meta-Llama-3-70B-Instruct | 8 | 4 | 1 | 2.00E-05 | 8192 | true |
| meta-llama/Meta-Llama-Guard-2-8B | 16 | 8 | 1 | 2.00E-05 | 8192 | false |
| mistralai/Mistral-7B-v0.1 | 16 | 8 | 1 | 1.00E-04 | 4096 | true |
| mistralai/Mixtral-8x7B-v0.1 | 16 | 8 | 1 | 1.00E-04 | 8192 | false |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | 16 | 8 | 1 | 1.00E-04 | 32768 | true |
| mistralai/Mixtral-8x22B-v0.1 | 8 | 8 | 1 | 1.00E-04 | 8192 | false |
| mistralai/Mixtral-8x22B-Instruct-v0.1 | 16 | 8 | 1 | 1.00E-04 | 8192 | true |

Hugging Face dataset to JSONL

To convert a Hugging Face dataset to the JSONL format supported by our fine-tuning service, you can use the following Python script:

import json
from datasets import load_dataset

dataset = load_dataset("<DATASET_NAME>")

# Replace <SPLIT_NAME> with the split you want to export, e.g., "train", "test", etc.
split_data = dataset["<SPLIT_NAME>"]

counter = 0
with open("<OUTPUT_FILE>.jsonl", "w") as f:
    for item in split_data:
        json.dump(item, f)
        f.write("\n")
        counter += 1

print(f"{counter} lines converted")

Fine-tuning Discord

We'd love to hear what you think! Please connect with the team, ask questions and share your feedback in the #fine-tuning Discord channel.

Pricing

We charge based on the total number of tokens processed during fine-tuning (dataset size * number of epochs). Please see our pricing page for rates.