Commit 445f4f
2024-08-22 15:52:53 admin: -/-/dev/null .. chat api.md | |
@@ -0,0 +1,420 @@ | |
+ | # chatapi.ai-now.space API Documentation |
+ | |
+ | This API provides access to a variety of large language models (LLMs) for chat completions, and also exposes endpoints for token counting, content moderation, and checking your remaining credits. |
+ | |
+ | ## Authentication |
+ | |
+ | You need an API key to use this API. To obtain one, please contact `[email protected]`. |
+ | |
+ | Once you have an API key, include it in the `Authorization` header of your requests as a Bearer token: |
+ | |
+ | ``` |
+ | Authorization: Bearer YOUR_API_KEY |
+ | ``` |
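+ | |
+ | Every endpoint below expects this header. If you are calling the API from Python, one convenient pattern is to set the header once on a `requests.Session` so all subsequent calls are authenticated automatically (a minimal sketch; any HTTP client works): |
+ | |
+ | ```python |
+ | import requests |
+ | |
+ | API_KEY = 'YOUR_API_KEY'  # the key issued by [email protected] |
+ | |
+ | session = requests.Session() |
+ | session.headers.update({ |
+ |     'Authorization': f'Bearer {API_KEY}', |
+ |     'Content-Type': 'application/json' |
+ | }) |
+ | ``` |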
+ | |
+ | ## Endpoints |
+ | |
+ | ### /models |
+ | |
+ | **Method:** GET |
+ | |
+ | **Description:** Returns a list of available models with their associated input and output costs. |
+ | |
+ | **Response:** |
+ | |
+ | ```json |
+ | { |
+ |   "data": [ |
+ |     { |
+ |       "id": "Claude 3.5 Sonnet", |
+ |       "inputCost": 0.002, |
+ |       "outputCost": 0.003 |
+ |     }, |
+ |     { |
+ |       "id": "GPT-4 Turbo", |
+ |       "inputCost": 0.03, |
+ |       "outputCost": 0.06 |
+ |     }, |
+ |     // ... other models |
+ |   ] |
+ | } |
+ | ``` |
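+ | |
+ | For instance, you can list the available models and their prices with a single authenticated GET request (a sketch using the Python `requests` library, assuming the same base URL as the chat examples further below): |
+ | |
+ | ```python |
+ | import requests |
+ | |
+ | headers = {'Authorization': 'Bearer YOUR_API_KEY'} |
+ | |
+ | response = requests.get('https://chatapi.ai-now.space/models', headers=headers) |
+ | |
+ | # Print each model with its input/output cost |
+ | for model in response.json()['data']: |
+ |     print(f"{model['id']}: input {model['inputCost']}, output {model['outputCost']}") |
+ | ``` |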
+ | |
+ | ### /chat/completions |
+ | |
+ | **Method:** POST |
+ | |
+ | **Description:** Generates chat completions using the specified model. |
+ | |
+ | **Request Body:** |
+ | |
+ | ```json |
+ | { |
+ |   "model": "MODEL_NAME", // See the list of supported models below |
+ |   "messages": [ |
+ |     { |
+ |       "role": "system", |
+ |       "content": "You are a helpful assistant." |
+ |     }, |
+ |     { |
+ |       "role": "user", |
+ |       "content": "What is the capital of France?" |
+ |     } |
+ |   ], |
+ |   "stream": true/false, // Optional, defaults to false |
+ |   // ... other parameters (see model-specific documentation) |
+ | } |
+ | ``` |
+ | |
+ | **Response (Non-Streaming):** |
+ | |
+ | ```json |
+ | { |
+ |   "id": "chatcmpl-...", |
+ |   "object": "chat.completion", |
+ |   "created": 1677652288, |
+ |   "model": "MODEL_NAME", |
+ |   "choices": [ |
+ |     { |
+ |       "index": 0, |
+ |       "message": { |
+ |         "role": "assistant", |
+ |         "content": "The capital of France is Paris." |
+ |       }, |
+ |       "finish_reason": "stop" |
+ |     } |
+ |   ], |
+ |   "input_tokens": ..., |
+ |   "output_tokens": ..., |
+ |   "input_tokens_cost": ..., |
+ |   "output_tokens_cost": ..., |
+ |   "total_tokens_cost": ..., |
+ |   "total_time_taken": ..., |
+ |   "tokens_per_second": ... |
+ | } |
+ | ``` |
+ | |
+ | **Response (Streaming):** |
+ | |
+ | ```text |
+ | data: {"id": "chatcmpl-...", "object": "chat.completion.chunk", "created": 1677652288, "model": "MODEL_NAME", "choices": [{"index": 0, "delta": {"content": "The"}, "finish_reason": null}]} |
+ | |
+ | data: {"id": "chatcmpl-...", "object": "chat.completion.chunk", "created": 1677652288, "model": "MODEL_NAME", "choices": [{"index": 0, "delta": {"content": " capital"}, "finish_reason": null}]} |
+ | |
+ | data: {"id": "chatcmpl-...", "object": "chat.completion.chunk", "created": 1677652288, "model": "MODEL_NAME", "choices": [{"index": 0, "delta": {"content": " of"}, "finish_reason": null}]} |
+ | |
+ | // ... more chunks |
+ | |
+ | data: {"id": "chatcmpl-...", "object": "chat.completion.chunk", "created": 1677652288, "model": "MODEL_NAME", "choices": [{"index": 0, "delta": {"content": " is Paris."}, "finish_reason": "stop"}]} |
+ | |
+ | data: {"input_tokens": ..., "output_tokens": ..., "input_tokens_cost": ..., "output_tokens_cost": ..., "total_tokens_cost": ..., "total_time_taken": ..., "tokens_per_second": ..., "time_to_first_data": ...} |
+ | ``` |
+ | |
+ | ### /tokens |
+ | |
+ | **Method:** POST |
+ | |
+ | **Description:** Counts the number of tokens in a given text. |
+ | |
+ | **Request Body:** |
+ | |
+ | ```json |
+ | { |
+ |   "text": "Your text here" |
+ | } |
+ | ``` |
+ | |
+ | **Response:** |
+ | |
+ | ```json |
+ | { |
+ |   "tokens": 123 |
+ | } |
+ | ``` |
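+ | |
+ | A minimal call from Python might look like this (same Bearer header as the other endpoints): |
+ | |
+ | ```python |
+ | import requests |
+ | |
+ | headers = { |
+ |     'Authorization': 'Bearer YOUR_API_KEY', |
+ |     'Content-Type': 'application/json' |
+ | } |
+ | |
+ | payload = {'text': 'How many tokens is this sentence?'} |
+ | response = requests.post('https://chatapi.ai-now.space/tokens', headers=headers, json=payload) |
+ | |
+ | print(response.json()['tokens']) |
+ | ``` |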
+ | |
+ | ### /credits |
+ | |
+ | **Method:** GET |
+ | |
+ | **Description:** Returns the remaining credits for your API key. |
+ | |
+ | **Response:** |
+ | |
+ | ```json |
+ | { |
+ |   "credits": 1000 |
+ | } |
+ | ``` |
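+ | |
+ | For example, a minimal Python call to check your balance: |
+ | |
+ | ```python |
+ | import requests |
+ | |
+ | headers = {'Authorization': 'Bearer YOUR_API_KEY'} |
+ | |
+ | response = requests.get('https://chatapi.ai-now.space/credits', headers=headers) |
+ | |
+ | print('Remaining credits:', response.json()['credits']) |
+ | ``` |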
+ | |
+ | ### /moderate |
+ | |
+ | **Method:** POST |
+ | |
+ | **Description:** Moderates the given input text using OpenAI's moderation API. |
+ | |
+ | **Request Body:** |
+ | |
+ | ```json |
+ | { |
+ |   "input": "Your text here" |
+ | } |
+ | ``` |
+ | |
+ | **Response:** |
+ | |
+ | ```json |
+ | { |
+ |   "results": [ |
+ |     { |
+ |       "categories": { |
+ |         "hate": false, |
+ |         "hate/threatening": false, |
+ |         // ... other categories |
+ |       }, |
+ |       "category_scores": { |
+ |         "hate": 0.0000000000000000, |
+ |         "hate/threatening": 0.0000000000000000, |
+ |         // ... other category scores |
+ |       }, |
+ |       "flagged": false |
+ |     } |
+ |   ] |
+ | } |
+ | ``` |
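+ | |
+ | A short Python sketch that checks whether a piece of text is flagged: |
+ | |
+ | ```python |
+ | import requests |
+ | |
+ | headers = { |
+ |     'Authorization': 'Bearer YOUR_API_KEY', |
+ |     'Content-Type': 'application/json' |
+ | } |
+ | |
+ | payload = {'input': 'Your text here'} |
+ | response = requests.post('https://chatapi.ai-now.space/moderate', headers=headers, json=payload) |
+ | |
+ | result = response.json()['results'][0] |
+ | print('flagged:', result['flagged']) |
+ | print('categories:', result['categories']) |
+ | ``` |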
+ | |
+ | ## Supported Models |
+ | |
+ | Pass any of the model names below, copied exactly as written, as the `model` field in your requests. |
+ | |
+ | ``` |
+ | Claude 3.5 Sonnet |
+ | GPT-4 Turbo |
+ | Gemini 1.5 Pro Exp |
+ | Deepseek Chat |
+ | Llama 3.1 405b |
+ | Mixtral 8x7b |
+ | GPT-4o |
+ | Command-R Plus |
+ | Mixtral-8x22B-Instruct-v0.1 |
+ | GLM-4 |
+ | Llama-3-70b-chat-hf |
+ | WizardLM-13B-V1.2 |
+ | Command-R Plus Online |
+ | Meta-Llama-3.1-405B-Instruct-Turbo |
+ | ``` |
+ | |
+ | ## Python Examples |
+ | |
+ | ### Non-Streaming Example (using `GPT-4 Turbo`): |
+ | |
+ | ```python |
+ | import requests |
+ | |
+ | headers = { |
+ |     'Authorization': 'Bearer YOUR_API_KEY', |
+ |     'Content-Type': 'application/json' |
+ | } |
+ | |
+ | data = { |
+ |     'model': 'GPT-4 Turbo', |
+ |     'messages': [ |
+ |         {'role': 'system', 'content': 'You are a helpful assistant.'}, |
+ |         {'role': 'user', 'content': 'What is the capital of France?'} |
+ |     ] |
+ | } |
+ | |
+ | response = requests.post('https://chatapi.ai-now.space/chat/completions', headers=headers, json=data) |
+ | |
+ | print(response.json()) |
+ | ``` |
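+ | |
+ | The JSON body follows the non-streaming response format documented above, so the assistant's reply and the cost fields can be read directly, for example: |
+ | |
+ | ```python |
+ | body = response.json() |
+ | |
+ | print(body['choices'][0]['message']['content']) |
+ | print('cost:', body['total_tokens_cost']) |
+ | ``` |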
+ | |
+ | ### Streaming Example (using `Claude 3.5 Sonnet`): |
+ | |
+ | ```python |
+ | import requests |
+ | |
+ | headers = { |
+ |     'Authorization': 'Bearer YOUR_API_KEY', |
+ |     'Content-Type': 'application/json' |
+ | } |
+ | |
+ | data = { |
+ |     'model': 'Claude 3.5 Sonnet', |
+ |     'messages': [ |
+ |         {'role': 'system', 'content': 'You are a helpful assistant.'}, |
+ |         {'role': 'user', 'content': 'Write a short poem about the sea.'} |
+ |     ], |
+ |     'stream': True |
+ | } |
+ | |
+ | response = requests.post('https://chatapi.ai-now.space/chat/completions', headers=headers, json=data, stream=True) |
+ | |
+ | for line in response.iter_lines(): |
+ |     if line: |
+ |         decoded_line = line.decode('utf-8') |
+ |         if decoded_line.startswith('data: '): |
+ |             print(decoded_line[6:]) |
+ | ``` |
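+ | |
+ | If you want the assembled text rather than the raw chunks, you can replace the loop above with a version that parses each `data:` payload as JSON and concatenates the `delta.content` pieces, skipping the final usage summary (which has no `choices` field). A minimal sketch: |
+ | |
+ | ```python |
+ | import json |
+ | |
+ | full_text = '' |
+ | for line in response.iter_lines(): |
+ |     if not line: |
+ |         continue |
+ |     decoded_line = line.decode('utf-8') |
+ |     if not decoded_line.startswith('data: '): |
+ |         continue |
+ |     chunk = json.loads(decoded_line[6:]) |
+ |     if 'choices' in chunk: |
+ |         # Each chunk carries an incremental piece of the assistant's reply |
+ |         full_text += chunk['choices'][0].get('delta', {}).get('content', '') or '' |
+ | |
+ | print(full_text) |
+ | ``` |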
+ | |
+ | ## C# Examples |
+ | |
+ | ### Non-Streaming Example (using `GPT-4 Turbo`): |
+ | |
+ | ```csharp |
+ | using System; |
+ | using System.Net.Http; |
+ | using System.Net.Http.Headers; |
+ | using System.Text; |
+ | using System.Threading.Tasks; |
+ | using Newtonsoft.Json; |
+ | |
+ | public class ChatCompletionExample |
+ | { |
+ |     public static async Task Main(string[] args) |
+ |     { |
+ |         using (var client = new HttpClient()) |
+ |         { |
+ |             client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", "YOUR_API_KEY"); |
+ |             client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json")); |
+ | |
+ |             var requestBody = new |
+ |             { |
+ |                 model = "GPT-4 Turbo", |
+ |                 messages = new[] |
+ |                 { |
+ |                     new { role = "system", content = "You are a helpful assistant." }, |
+ |                     new { role = "user", content = "What is the capital of France?" } |
+ |                 } |
+ |             }; |
+ | |
+ |             var json = JsonConvert.SerializeObject(requestBody); |
+ |             var content = new StringContent(json, Encoding.UTF8, "application/json"); |
+ | |
+ |             var response = await client.PostAsync("https://chatapi.ai-now.space/chat/completions", content); |
+ |             var responseString = await response.Content.ReadAsStringAsync(); |
+ | |
+ |             Console.WriteLine(responseString); |
+ |         } |
+ |     } |
+ | } |
+ | ``` |
+ | |
+ | ### Streaming Example (using `Claude 3.5 Sonnet`): |
+ | |
+ | ```csharp |
+ | using System; |
+ | using System.Net.Http; |
+ | using System.Net.Http.Headers; |
+ | using System.Text; |
+ | using System.Threading.Tasks; |
+ | using Newtonsoft.Json; |
+ | |
+ | public class ChatCompletionStreamingExample |
+ | { |
+ |     public static async Task Main(string[] args) |
+ |     { |
+ |         using (var client = new HttpClient()) |
+ |         { |
+ |             client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", "YOUR_API_KEY"); |
+ |             client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json")); |
+ | |
+ |             var requestBody = new |
+ |             { |
+ |                 model = "Claude 3.5 Sonnet", |
+ |                 messages = new[] |
+ |                 { |
+ |                     new { role = "system", content = "You are a helpful assistant." }, |
+ |                     new { role = "user", content = "Write a short poem about the sea." } |
+ |                 }, |
+ |                 stream = true |
+ |             }; |
+ | |
+ |             var json = JsonConvert.SerializeObject(requestBody); |
+ |             var content = new StringContent(json, Encoding.UTF8, "application/json"); |
+ | |
+ |             // ResponseHeadersRead lets us read chunks as they arrive instead of buffering the whole response |
+ |             var request = new HttpRequestMessage(HttpMethod.Post, "https://chatapi.ai-now.space/chat/completions") { Content = content }; |
+ |             var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead); |
+ | |
+ |             using (var stream = await response.Content.ReadAsStreamAsync()) |
+ |             using (var reader = new System.IO.StreamReader(stream)) |
+ |             { |
+ |                 string line; |
+ |                 while ((line = await reader.ReadLineAsync()) != null) |
+ |                 { |
+ |                     if (line.StartsWith("data: ")) |
+ |                     { |
+ |                         Console.WriteLine(line.Substring(6)); |
+ |                     } |
+ |                 } |
+ |             } |
+ |         } |
+ |     } |
+ | } |
+ | ``` |
+ | |
+ | ## Node.js Examples |
+ | |
+ | ### Non-Streaming Example (using `GPT-4 Turbo`): |
+ | |
+ | ```javascript |
+ | const axios = require('axios'); |
+ | |
+ | const headers = { |
+ |   'Authorization': 'Bearer YOUR_API_KEY', |
+ |   'Content-Type': 'application/json' |
+ | }; |
+ | |
+ | const data = { |
+ |   model: 'GPT-4 Turbo', |
+ |   messages: [ |
+ |     { role: 'system', content: 'You are a helpful assistant.' }, |
+ |     { role: 'user', content: 'What is the capital of France?' } |
+ |   ] |
+ | }; |
+ | |
+ | axios.post('https://chatapi.ai-now.space/chat/completions', data, { headers }) |
+ |   .then(response => { |
+ |     console.log(response.data); |
+ |   }) |
+ |   .catch(error => { |
+ |     console.error(error); |
+ |   }); |
+ | ``` |
+ | |
+ | ### Streaming Example (using `Claude 3.5 Sonnet`): |
+ | |
+ | ```javascript |
+ | const axios = require('axios'); |
+ | |
+ | const headers = { |
+ |   'Authorization': 'Bearer YOUR_API_KEY', |
+ |   'Content-Type': 'application/json' |
+ | }; |
+ | |
+ | const data = { |
+ |   model: 'Claude 3.5 Sonnet', |
+ |   messages: [ |
+ |     { role: 'system', content: 'You are a helpful assistant.' }, |
+ |     { role: 'user', content: 'Write a short poem about the sea.' } |
+ |   ], |
+ |   stream: true |
+ | }; |
+ | |
+ | axios.post('https://chatapi.ai-now.space/chat/completions', data, { headers, responseType: 'stream' }) |
+ |   .then(response => { |
+ |     response.data.on('data', (chunk) => { |
+ |       const lines = chunk.toString().split('\n'); |
+ |       lines.forEach(line => { |
+ |         if (line.startsWith('data: ')) { |
+ |           console.log(line.substring(6)); |
+ |         } |
+ |       }); |
+ |     }); |
+ |   }) |
+ |   .catch(error => { |
+ |     console.error(error); |
+ |   }); |
+ | ``` |
+ | |
+ | **Remember to replace `YOUR_API_KEY` with your actual API key.** |
+ | |
+ | These examples demonstrate how to interact with the `chatapi.ai-now.space` API using Python, C#, and Node.js for both streaming and non-streaming responses. You can adapt them to use different models and parameters as needed. |