
Fireworks AI: how to configure fireworks.json for Llama-3

In Short

To configure fireworks.json for Llama-3 on Fireworks AI, set base_model to accounts/fireworks/models/llama-v3-8b or accounts/fireworks/models/llama-v3-70b, add a conversation_config block for the chat/completions API, and use JSON mode for structured outputs. A complete example configuration is shown below.

Configuration steps

Base model

Set the base_model to the appropriate Llama-3 model:

json
{ "base_model": "accounts/fireworks/models/llama-v3-8b" }

or

json
{ "base_model": "accounts/fireworks/models/llama-v3-70b" }

This specifies which Llama-3 model will be used.

Conversation configuration

Enable the conversation API by adding conversation_config:

json
{
  "base_model": "accounts/fireworks/models/llama-v3-8b",
  "conversation_config": {
    "style": "llama-chat",
    "args": {
      "system_prompt": "You are a helpful assistant."
    }
  }
}

This configures the model for chat-based interactions: style selects the Llama chat prompt template, and system_prompt sets the default system message.
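As a minimal sketch, the configuration above can also be generated programmatically and written to fireworks.json. The field names and values here are exactly those from the snippet above; only the use of Python's standard json module is an assumption about your tooling.

```python
import json

# Build the configuration described above: base model plus chat-style
# conversation settings (field names as in the fireworks.json snippet).
config = {
    "base_model": "accounts/fireworks/models/llama-v3-8b",
    "conversation_config": {
        "style": "llama-chat",
        "args": {"system_prompt": "You are a helpful assistant."},
    },
}

# Serialize to fireworks.json so deployment tooling can pick it up.
with open("fireworks.json", "w") as f:
    json.dump(config, f, indent=2)
```

Generating the file this way guarantees it is syntactically valid JSON, which a hand-edited file does not.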

JSON mode

To enforce structured outputs, use JSON mode:

json
{
  "base_model": "accounts/fireworks/models/llama-v3-8b",
  "conversation_config": {
    "style": "llama-chat",
    "args": {
      "system_prompt": "You are a helpful assistant."
    }
  },
  "response_format": {
    "type": "json",
    "schema": {
      "type": "object",
      "properties": {
        "title": {"type": "string"},
        "description": {"type": "string"},
        "steps": {"type": "array"}
      }
    }
  }
}

This ensures the model's output adheres to the specified JSON schema, so responses can be parsed programmatically rather than treated as free-form text.
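Even with JSON mode enabled, the caller still receives the model's reply as a string and must parse it. A minimal sketch of consuming such a reply and checking it against the shape declared in the schema above (the sample reply here is invented for illustration):

```python
import json

# A hypothetical reply from a model running with the schema above.
reply = (
    '{"title": "Brew coffee", '
    '"description": "A simple pour-over.", '
    '"steps": ["Boil water", "Grind beans", "Pour"]}'
)

data = json.loads(reply)  # raises ValueError if the reply is not valid JSON

# Check the properties declared in the schema: title and description
# are strings, steps is an array.
assert isinstance(data["title"], str)
assert isinstance(data["description"], str)
assert isinstance(data["steps"], list)
```

A defensive check like this is still worthwhile in application code, since it fails loudly at the parsing boundary instead of deeper in your pipeline.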

Example configuration

Combining all elements, a complete fireworks.json for Llama-3 might look like:

json
{
  "base_model": "accounts/fireworks/models/llama-v3-8b",
  "conversation_config": {
    "style": "llama-chat",
    "args": {
      "system_prompt": "You are a helpful assistant."
    }
  },
  "response_format": {
    "type": "json",
    "schema": {
      "type": "object",
      "properties": {
        "title": {"type": "string"},
        "description": {"type": "string"},
        "steps": {"type": "array"}
      }
    }
  }
}

This configuration sets up Llama-3 for chat interactions with structured JSON output.
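As a sketch of putting the complete example together, the full configuration can be round-tripped through a JSON serializer before being written out, catching structural mistakes early. Everything here mirrors the example configuration above; only the use of Python and the round-trip check are additions.

```python
import json

# Full configuration from the example above: model, chat settings, and
# the JSON-mode response schema.
config = {
    "base_model": "accounts/fireworks/models/llama-v3-8b",
    "conversation_config": {
        "style": "llama-chat",
        "args": {"system_prompt": "You are a helpful assistant."},
    },
    "response_format": {
        "type": "json",
        "schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "description": {"type": "string"},
                "steps": {"type": "array"},
            },
        },
    },
}

# Round-trip through the serializer to confirm the file contents are
# well-formed JSON before writing fireworks.json.
text = json.dumps(config, indent=2)
assert json.loads(text) == config

with open("fireworks.json", "w") as f:
    f.write(text)
```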

Conclusion

Configure fireworks.json for Llama-3 by setting the base_model, enabling conversation_config, and specifying JSON mode for structured outputs.