# chatapi.ai-now.space API Documentation

This API provides access to a variety of large language models (LLMs) for chat completions, token counting, and moderation.

## Table of Contents

* [Authentication](#authentication)
* [Endpoints](#endpoints)
  * [/models - List Available Models](#models)
  * [/chat/completions - Generate Chat Completions](#chatcompletions)
    * [Request Body](#chatcompletions-request-body)
    * [Non-Streaming Response](#chatcompletions-non-streaming-response)
    * [Streaming Response](#chatcompletions-streaming-response)
  * [/tokens - Count Tokens](#tokens)
  * [/credits - Check Remaining Credits](#credits)
  * [/moderate - Moderate Text](#moderate)
* [Supported Models](#supported-models)
* [Usage Examples](#usage-examples)
  * [Python](#usage-examples-python)
    * [Non-Streaming Example](#usage-examples-python-non-streaming-example)
    * [Streaming Example](#usage-examples-python-streaming-example)
  * [C#](#usage-examples-c)
    * [Non-Streaming Example](#usage-examples-c-non-streaming-example)
    * [Streaming Example](#usage-examples-c-streaming-example)
  * [Node.js](#usage-examples-nodejs)
    * [Non-Streaming Example](#usage-examples-nodejs-non-streaming-example)
    * [Streaming Example](#usage-examples-nodejs-streaming-example)

## Authentication

You need an API key to use this API. To obtain one, contact `[email protected]`. Once you have a key, include it in the `Authorization` header of every request as a Bearer token:

```
Authorization: Bearer YOUR_API_KEY
```
## Endpoints

### /models

**Method:** GET

**Description:** Returns a list of available models with their associated input and output costs.

**Response:**

```json
{
  "data": [
    { "id": "Claude 3.5 Sonnet", "inputCost": 0.002, "outputCost": 0.003 },
    { "id": "GPT-4 Turbo", "inputCost": 0.03, "outputCost": 0.06 }
    // ... other models
  ]
}
```
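The per-model costs in this listing can be used to compare models before sending a request. A minimal sketch in Python (the unit of `inputCost`/`outputCost` is not stated in this document, so treat the computed figure as a relative estimate; `cheapest_model` is an illustrative helper, not part of the API):

```python
def cheapest_model(models, input_tokens, output_tokens):
    """Return the model id with the lowest estimated cost for the given
    token counts, using the inputCost/outputCost fields from /models."""
    def estimate(m):
        return input_tokens * m["inputCost"] + output_tokens * m["outputCost"]
    return min(models, key=estimate)["id"]

# The "data" array as it might come back from GET /models:
models = [
    {"id": "Claude 3.5 Sonnet", "inputCost": 0.002, "outputCost": 0.003},
    {"id": "GPT-4 Turbo", "inputCost": 0.03, "outputCost": 0.06},
]
print(cheapest_model(models, input_tokens=500, output_tokens=200))
# Claude 3.5 Sonnet
```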
### /chat/completions

**Method:** POST

**Description:** Generates chat completions using the specified model.

#### Request Body

```json
{
  "model": "MODEL_NAME", // See the list of supported models below
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "What is the capital of France?"
    }
  ],
  "stream": false // Optional boolean; defaults to false
  // ... other parameters (see model-specific documentation)
}
```
#### Non-Streaming Response

```json
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "MODEL_NAME",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "input_tokens": ...,
  "output_tokens": ...,
  "input_tokens_cost": ...,
  "output_tokens_cost": ...,
  "total_tokens_cost": ...,
  "total_time_taken": ...,
  "tokens_per_second": ...
}
```
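The assistant's reply lives at `choices[0].message.content`. A small sketch of pulling it out of a parsed response (the `sample` dict below mirrors the shape above; the numeric usage values are made up for illustration):

```python
def extract_reply(completion):
    """Return the assistant message text from a non-streaming completion."""
    return completion["choices"][0]["message"]["content"]

sample = {
    "id": "chatcmpl-abc",
    "object": "chat.completion",
    "model": "MODEL_NAME",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "The capital of France is Paris."},
            "finish_reason": "stop",
        }
    ],
    "input_tokens": 25,   # illustrative value
    "output_tokens": 8,   # illustrative value
}
print(extract_reply(sample))
# The capital of France is Paris.
```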
#### Streaming Response

The response is delivered as a stream of server-sent events. Each line is prefixed with `data: ` and contains a JSON chunk; the final event carries usage and timing statistics instead of a `choices` array:

```
data: {"id": "chatcmpl-...", "object": "chat.completion.chunk", "created": 1677652288, "model": "MODEL_NAME", "choices": [{"index": 0, "delta": {"content": "The"}, "finish_reason": null}]}
data: {"id": "chatcmpl-...", "object": "chat.completion.chunk", "created": 1677652288, "model": "MODEL_NAME", "choices": [{"index": 0, "delta": {"content": " capital"}, "finish_reason": null}]}
data: {"id": "chatcmpl-...", "object": "chat.completion.chunk", "created": 1677652288, "model": "MODEL_NAME", "choices": [{"index": 0, "delta": {"content": " of"}, "finish_reason": null}]}
// ... more chunks
data: {"id": "chatcmpl-...", "object": "chat.completion.chunk", "created": 1677652288, "model": "MODEL_NAME", "choices": [{"index": 0, "delta": {"content": " is Paris."}, "finish_reason": "stop"}]}
data: {"input_tokens": ..., "output_tokens": ..., "input_tokens_cost": ..., "output_tokens_cost": ..., "total_tokens_cost": ..., "total_time_taken": ..., "tokens_per_second": ..., "time_to_first_data": ...}
```
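One way to consume such a stream is to strip the `data: ` prefix from each line, JSON-decode it, and concatenate the `delta.content` pieces, treating an event without a `choices` key as the final usage record. A sketch of the parsing logic only (no network; the sample lines below are abbreviated versions of the ones above, with illustrative usage numbers):

```python
import json

def parse_stream(lines):
    """Accumulate delta content from chat.completion.chunk events.
    Returns (full_text, stats), where stats is the final usage event, if any."""
    pieces, stats = [], None
    for line in lines:
        if not line.startswith("data: "):
            continue
        event = json.loads(line[len("data: "):])
        if "choices" in event:
            pieces.append(event["choices"][0]["delta"].get("content", ""))
        else:
            stats = event  # final usage/timing event
    return "".join(pieces), stats

lines = [
    'data: {"object": "chat.completion.chunk", "choices": [{"index": 0, "delta": {"content": "The"}, "finish_reason": null}]}',
    'data: {"object": "chat.completion.chunk", "choices": [{"index": 0, "delta": {"content": " capital"}, "finish_reason": null}]}',
    'data: {"object": "chat.completion.chunk", "choices": [{"index": 0, "delta": {"content": " of France is Paris."}, "finish_reason": "stop"}]}',
    'data: {"input_tokens": 25, "output_tokens": 8}',
]
text, stats = parse_stream(lines)
print(text)   # The capital of France is Paris.
print(stats)  # {'input_tokens': 25, 'output_tokens': 8}
```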
### /tokens

**Method:** POST

**Description:** Counts the number of tokens in a given text.

**Request Body:**

```json
{
  "text": "Your text here"
}
```

**Response:**

```json
{
  "tokens": 123
}
```
### /credits

**Method:** GET

**Description:** Returns the remaining credits for your API key.

**Response:**

```json
{
  "credits": 1000
}
```
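Together, `/tokens` and `/credits` can drive a pre-flight check before sending an expensive request. A sketch of the decision logic only (the token count and credit balance would come from those two endpoints; the cost formula assumes the `inputCost`/`outputCost` figures from `/models` are charged per token, which should be confirmed against your account):

```python
def can_afford(credits, input_tokens, max_output_tokens, input_cost, output_cost):
    """Rough pre-flight check: worst-case request cost must not exceed
    the remaining credit balance."""
    worst_case = input_tokens * input_cost + max_output_tokens * output_cost
    return worst_case <= credits

# Values as they might come back from /credits and /tokens:
print(can_afford(credits=1000, input_tokens=123, max_output_tokens=500,
                 input_cost=0.03, output_cost=0.06))
# True
```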
### /moderate

**Method:** POST

**Description:** Moderates the given input text using OpenAI's moderation API.

**Request Body:**

```json
{
  "input": "Your text here"
}
```

**Response:**

```json
{
  "results": [
    {
      "categories": {
        "hate": false,
        "hate/threatening": false
        // ... other categories
      },
      "category_scores": {
        "hate": 0.0,
        "hate/threatening": 0.0
        // ... other category scores
      },
      "flagged": false
    }
  ]
}
```
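A caller typically checks the top-level `flagged` flag first and then inspects `categories` to see which ones fired. A sketch against the response shape above (the `"violence"` category and its score are made up for illustration; the real category set comes from the moderation API):

```python
def flagged_categories(moderation):
    """Return the category names marked true in a /moderate response,
    or an empty list if the input was not flagged at all."""
    result = moderation["results"][0]
    if not result["flagged"]:
        return []
    return [name for name, hit in result["categories"].items() if hit]

sample = {
    "results": [
        {
            "categories": {"hate": False, "hate/threatening": False, "violence": True},
            "category_scores": {"hate": 0.0, "hate/threatening": 0.0, "violence": 0.91},
            "flagged": True,
        }
    ]
}
print(flagged_categories(sample))  # ['violence']
```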
## Supported Models

You can copy the model names below and use them directly in your requests:

* Claude 3.5 Sonnet
* GPT-4 Turbo
* Gemini 1.5 Pro Exp
* Deepseek Chat
* Llama 3.1 405b
* Mixtral 8x7b
* GPT-4o
* Command-R Plus
* Mixtral-8x22B-Instruct-v0.1
* GLM-4
* Llama-3-70b-chat-hf
* WizardLM-13B-V1.2
* Command-R Plus Online
* Meta-Llama-3.1-405B-Instruct-Turbo
## Usage Examples

### Python

#### Non-Streaming Example

```python
import requests

headers = {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
}
data = {
    'model': 'GPT-4 Turbo',
    'messages': [
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': 'What is the capital of France?'}
    ]
}

response = requests.post('https://chatapi.ai-now.space/chat/completions', headers=headers, json=data)
print(response.json())
```
#### Streaming Example

```python
import requests

headers = {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
}
data = {
    'model': 'Claude 3.5 Sonnet',
    'messages': [
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': 'Write a short poem about the sea.'}
    ],
    'stream': True
}

# stream=True makes requests yield the body incrementally instead of buffering it.
response = requests.post('https://chatapi.ai-now.space/chat/completions', headers=headers, json=data, stream=True)
for line in response.iter_lines():
    if line:
        decoded_line = line.decode('utf-8')
        if decoded_line.startswith('data: '):
            print(decoded_line[6:])
```
### C#

#### Non-Streaming Example

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class ChatCompletionExample
{
    public static async Task Main(string[] args)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", "YOUR_API_KEY");
            client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

            var requestBody = new
            {
                model = "GPT-4 Turbo",
                messages = new[]
                {
                    new { role = "system", content = "You are a helpful assistant." },
                    new { role = "user", content = "What is the capital of France?" }
                }
            };

            var json = JsonConvert.SerializeObject(requestBody);
            var content = new StringContent(json, Encoding.UTF8, "application/json");
            var response = await client.PostAsync("https://chatapi.ai-now.space/chat/completions", content);
            var responseString = await response.Content.ReadAsStringAsync();
            Console.WriteLine(responseString);
        }
    }
}
```
#### Streaming Example

Note that `PostAsync` buffers the entire response body by default, which defeats streaming; use `SendAsync` with `HttpCompletionOption.ResponseHeadersRead` to read the body as it arrives:

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class ChatCompletionStreamingExample
{
    public static async Task Main(string[] args)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", "YOUR_API_KEY");

            var requestBody = new
            {
                model = "Claude 3.5 Sonnet",
                messages = new[]
                {
                    new { role = "system", content = "You are a helpful assistant." },
                    new { role = "user", content = "Write a short poem about the sea." }
                },
                stream = true
            };

            var json = JsonConvert.SerializeObject(requestBody);
            var request = new HttpRequestMessage(HttpMethod.Post, "https://chatapi.ai-now.space/chat/completions")
            {
                Content = new StringContent(json, Encoding.UTF8, "application/json")
            };

            // ResponseHeadersRead returns as soon as the headers arrive,
            // so the body can be consumed incrementally below.
            using (var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead))
            using (var stream = await response.Content.ReadAsStreamAsync())
            using (var reader = new StreamReader(stream))
            {
                string line;
                while ((line = await reader.ReadLineAsync()) != null)
                {
                    if (line.StartsWith("data: "))
                    {
                        Console.WriteLine(line.Substring(6));
                    }
                }
            }
        }
    }
}
```
### Node.js

#### Non-Streaming Example

```javascript
const axios = require('axios');

const headers = {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
};
const data = {
    model: 'GPT-4 Turbo',
    messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: 'What is the capital of France?' }
    ]
};

axios.post('https://chatapi.ai-now.space/chat/completions', data, { headers })
    .then(response => {
        console.log(response.data);
    })
    .catch(error => {
        console.error(error);
    });
```
#### Streaming Example

```javascript
const axios = require('axios');

const headers = {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
};
const data = {
    model: 'Claude 3.5 Sonnet',
    messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: 'Write a short poem about the sea.' }
    ],
    stream: true
};

axios.post('https://chatapi.ai-now.space/chat/completions', data, { headers, responseType: 'stream' })
    .then(response => {
        let buffer = '';
        response.data.on('data', (chunk) => {
            // A network chunk can end mid-line, so keep any trailing
            // partial line in the buffer until the rest of it arrives.
            buffer += chunk.toString();
            const lines = buffer.split('\n');
            buffer = lines.pop();
            lines.forEach(line => {
                if (line.startsWith('data: ')) {
                    console.log(line.substring(6));
                }
            });
        });
    })
    .catch(error => {
        console.error(error);
    });
```
Remember to replace `YOUR_API_KEY` with your actual API key.