# Platform-specific config

## Cloudflare

```json
{
  "modelName": "@cf/qwen/qwen1.5-14b-chat-awq",
  "accountId": "<guid>",
  "apiKey": "xxx-xxx-xxx"
}
```

- `modelName`: The name of the AI model to use. It must exist on Cloudflare and your account must have access to it.
- `accountId`: The ID of your Cloudflare account; it appears as the trailing path segment of the URL when you open https://dash.cloudflare.com/ .
- `apiKey`: An API key issued for your account. It must be valid and have the `Account.Workers AI` permission.
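As a sketch of how these fields are typically consumed (assuming the standard Cloudflare Workers AI REST endpoint; the `accountId`, `apiKey`, and message content below are placeholders):

```python
import json
import urllib.request

# Placeholder config; accountId and apiKey are illustrative values.
config = {
    "modelName": "@cf/qwen/qwen1.5-14b-chat-awq",
    "accountId": "0123456789abcdef",
    "apiKey": "xxx-xxx-xxx",
}

# Workers AI REST API: both the account ID and the model name are URL path segments.
url = (
    "https://api.cloudflare.com/client/v4/accounts/"
    f"{config['accountId']}/ai/run/{config['modelName']}"
)
request = urllib.request.Request(
    url,
    data=json.dumps({"messages": [{"role": "user", "content": "Hello"}]}).encode(),
    headers={
        # Unlike the OpenAI-style config, the "Bearer " scheme is added here.
        "Authorization": f"Bearer {config['apiKey']}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it; not executed here.
print(request.full_url)
```

Note that the model name goes into the URL path, not the request body, which is why it must exactly match a model you can access.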
## OpenAI

```json
{
  "authCredentials": "Bearer xxx-xxx-xxx",
  "modelName": "gpt-4"
}
```

- `authCredentials`: Authorization credentials; the value is sent as the `Authorization` header.
- `modelName`: The name of the model to use.
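A minimal sketch of turning this config into a request (the target is OpenAI's public chat-completions endpoint; the message content is illustrative):

```python
import json

config = {
    "authCredentials": "Bearer xxx-xxx-xxx",
    "modelName": "gpt-4",
}

# authCredentials already includes the "Bearer " scheme, so it is sent verbatim.
headers = {
    "Authorization": config["authCredentials"],
    "Content-Type": "application/json",
}
body = json.dumps({
    "model": config["modelName"],
    "messages": [{"role": "user", "content": "Hello"}],
})
# POST `body` with `headers` to https://api.openai.com/v1/chat/completions
```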
## OpenAI-compatible

```json
{
  "authCredentials": "xxx-xxx-xxx",
  "modelName": "gpt-4",
  "chatCompletionEndpoint": "https://.../v1/chat/completions"
}
```

- `authCredentials`: Authorization credentials; the value is sent as the `Authorization` header.
- `modelName`: The name of the model to use.
- `chatCompletionEndpoint`: The URL to POST chat messages to. If you use LM Studio, this is typically `http://localhost:<port>/v1/chat/completions`, where `<port>` is the port LM Studio is listening on.
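The only difference from the plain OpenAI case is that the endpoint comes from the config instead of being fixed. A sketch, assuming a hypothetical LM Studio instance listening on port 1234 (the helper function and its defaults are illustrative, not part of this tool's API):

```python
import json
import urllib.request

OPENAI_ENDPOINT = "https://api.openai.com/v1/chat/completions"

def build_request(config: dict) -> urllib.request.Request:
    """Build the POST request; fall back to OpenAI's endpoint when none is given."""
    url = config.get("chatCompletionEndpoint", OPENAI_ENDPOINT)
    body = json.dumps({
        "model": config["modelName"],
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": config["authCredentials"],
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Hypothetical local LM Studio server; the port is a placeholder.
request = build_request({
    "authCredentials": "xxx-xxx-xxx",
    "modelName": "gpt-4",
    "chatCompletionEndpoint": "http://localhost:1234/v1/chat/completions",
})
print(request.full_url)
```

Omitting `chatCompletionEndpoint` would make the same helper target OpenAI, which is why the two config shapes differ only by that one field.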