- render - Chat render
Given a list of messages forming a conversation, the API renders them into the final prompt text that will be sent to the model.
```python
import os

from friendli import SyncFriendli

with SyncFriendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
) as friendli:
    res = friendli.dedicated.chat_render.render(
        model="(endpoint-id)",
        messages=[
            {
                "role": "system",
                "content": "You are a helpful assistant.",
            },
            {
                "role": "user",
                "content": "Hello!",
            },
        ],
    )

    # Handle response
    print(res)
```

| Parameter | Type | Required | Description | Example |
|---|---|---|---|---|
| `model` | str | ✔️ | ID of the target endpoint. To send the request to a specific adapter, use the format `"YOUR_ENDPOint_ID:YOUR_ADAPTER_ROUTE"`; otherwise, `"YOUR_ENDPOINT_ID"` alone is sufficient. | (endpoint-id) |
| `messages` | List[models.Message] | ✔️ | A list of messages comprising the conversation so far. | [ { "content": "You are a helpful assistant.", "role": "system" }, { "content": "Hello!", "role": "user" } ] |
| `x_friendli_team` | OptionalNullable[str] | ➖ | ID of the team to run requests as (optional). | |
| `chat_template_kwargs` | Dict[str, Any] | ➖ | Additional keyword arguments supplied to the template renderer. These parameters are available for use within the chat template. | |
| `tools` | List[models.Tool] | ➖ | A list of tools the model may call. Use this to provide a list of functions the model may generate JSON inputs for. When `tools` is specified, the `min_tokens` and `response_format` fields are unsupported. | |
| `retries` | Optional[utils.RetryConfig] | ➖ | Configuration to override the default retry behavior of the client. | |
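As a sketch of how the optional `tools` and `chat_template_kwargs` parameters might be shaped, the snippet below builds them as plain dictionaries before they would be passed to `render`. The `get_weather` function and the `enable_thinking` flag are hypothetical examples for illustration, not values defined by the Friendli SDK:

```python
# Hypothetical tool definition in the common JSON-schema "function" shape.
# The function name, parameter schema, and template flag below are
# illustrative assumptions, not part of the SDK itself.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical function name
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                },
                "required": ["city"],
            },
        },
    }
]

# Extra variables made available inside the chat template renderer.
chat_template_kwargs = {"enable_thinking": False}  # illustrative flag

# These would then be passed alongside `model` and `messages`:
#   friendli.dedicated.chat_render.render(
#       model="(endpoint-id)",
#       messages=messages,
#       tools=tools,
#       chat_template_kwargs=chat_template_kwargs,
#   )
print(tools[0]["function"]["name"])
```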
Response type: `models.DedicatedChatRenderSuccess`
| Error Type | Status Code | Content Type |
|---|---|---|
| models.SDKError | 4XX, 5XX | */* |