# Llama API

[Llama API](https://llama.developer.meta.com/) is a Meta-hosted API service that helps you integrate Llama models into your applications quickly and efficiently.

Llama API provides access to Llama models through a simple API, with inference provided by Meta, so you can focus on building AI-powered solutions without managing your own inference infrastructure.

With Llama API, you get access to state-of-the-art AI capabilities through a developer-friendly interface designed for simplicity and performance.

## Installation

Llama API is configured as an optional dependency in Strands Agents. To install it, run:

```bash
pip install 'strands-agents[llamaapi]'
```

## Usage

After installing the `llamaapi` dependency, you can import and initialize Strands Agents' Llama API provider as follows:

```python
from strands import Agent
from strands.models.llamaapi import LlamaAPIModel
from strands_tools import calculator

model = LlamaAPIModel(
    client_args={
        "api_key": "<KEY>",
    },
    # **model_config
    model_id="Llama-4-Maverick-17B-128E-Instruct-FP8",
)

agent = Agent(model=model, tools=[calculator])
response = agent("What is 2+2?")
print(response)
```
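
Calling the agent directly, as above, returns the complete response once the model (and any tool invocations) finish. If you want to surface output incrementally instead, the sketch below assumes your installed version of Strands Agents exposes the async `stream_async` interface and that streamed text chunks arrive under the event's `data` key:

```python
import asyncio

from strands import Agent
from strands.models.llamaapi import LlamaAPIModel

model = LlamaAPIModel(
    client_args={"api_key": "<KEY>"},
    model_id="Llama-4-Maverick-17B-128E-Instruct-FP8",
)
agent = Agent(model=model)

async def main():
    # Assumed interface: stream_async yields event dicts, with text
    # chunks appearing under the "data" key as the model generates them.
    async for event in agent.stream_async("Tell me about Llama models"):
        if "data" in event:
            print(event["data"], end="", flush=True)

asyncio.run(main())
```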

## Configuration

### Client Configuration

The `client_args` configure the underlying LlamaAPI client. For a complete list of available arguments, please refer to the LlamaAPI [docs](https://llama.developer.meta.com/docs/).
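
For example, you can avoid hard-coding the key by reading it from an environment variable before passing it through `client_args`. This is a minimal sketch; the `LLAMA_API_KEY` variable name is an assumption for illustration, not one the client requires:

```python
import os

from strands.models.llamaapi import LlamaAPIModel

# LLAMA_API_KEY is an assumed environment variable name used for
# illustration; store your key however your deployment prefers.
model = LlamaAPIModel(
    client_args={"api_key": os.environ["LLAMA_API_KEY"]},
    model_id="Llama-4-Maverick-17B-128E-Instruct-FP8",
)
```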

### Model Configuration

The `model_config` configures the underlying model selected for inference. The supported configurations are:

| Parameter | Description | Example | Options |
|-----------|-------------|---------|---------|
| `model_id` | ID of the model to use. | `Llama-4-Maverick-17B-128E-Instruct-FP8` | [reference](https://llama.developer.meta.com/docs/) |
| `repetition_penalty` | Controls the likelihood of generating repetitive responses (minimum: 1, maximum: 2, default: 1). | `1` | [reference](https://llama.developer.meta.com/docs/api/chat) |
| `temperature` | Controls the randomness of the response; lower values produce more deterministic output. | `0.7` | [reference](https://llama.developer.meta.com/docs/api/chat) |
| `top_p` | Controls the diversity of the response by setting a probability threshold for choosing the next token. | `0.9` | [reference](https://llama.developer.meta.com/docs/api/chat) |
| `max_completion_tokens` | The maximum number of tokens to generate. | `4096` | [reference](https://llama.developer.meta.com/docs/api/chat) |
| `top_k` | Only sample from the top K options for each subsequent token. | `10` | [reference](https://llama.developer.meta.com/docs/api/chat) |
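
These options can be combined in a single model configuration. The snippet below is a sketch that passes several parameters from the table above as keyword arguments, following the `**model_config` pattern shown in the usage example:

```python
from strands.models.llamaapi import LlamaAPIModel

model = LlamaAPIModel(
    client_args={"api_key": "<KEY>"},
    # Model configuration drawn from the table above.
    model_id="Llama-4-Maverick-17B-128E-Instruct-FP8",
    temperature=0.7,             # lower values give more deterministic output
    top_p=0.9,                   # nucleus sampling threshold
    max_completion_tokens=2048,  # cap on generated tokens
)
```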

## Troubleshooting

### Module Not Found

If you encounter the error `ModuleNotFoundError: No module named 'llamaapi'`, the `llamaapi` optional dependency is not installed in your environment. To fix this, run `pip install 'strands-agents[llamaapi]'`.

## References

- [API](../../../api-reference/models.md)
- [LlamaAPI](https://llama.developer.meta.com/docs/)