API Documentation
Comprehensive documentation for the Express.js API, with versioned routes (Base URL: /api/v1)
Base URL
https://api.ai.nityasha.com/api/v1
Authentication
All endpoints require a valid `api_key` in the request body, or user authentication via a JWT token in the `Authorization` header.
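The two options above can be sketched as follows (field and header names as documented; the key and token values are placeholders):

```python
# Option 1: api_key in the JSON request body
body = {
    "api_key": "your_api_key_here",
    "messages": [{"role": "user", "content": "Hello"}],
    "model_used": "neorox",
}

# Option 2: JWT in the Authorization header (standard Bearer scheme)
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <your_jwt_token>",
}
```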
1. Pricing API
Endpoints for managing model pricing information
GET /api/v1/pricing
Get Model Pricing
Fetch pricing information for all active AI models, with per-1k and per-1M token costs.
Query Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | No | Filter by specific model name |
Request Examples:
cURL:

```bash
GET /api/v1/pricing
GET /api/v1/pricing?model=neorox
```
Response Example (200 OK):
```json
[
  {
    "model_name": "neorox",
    "input_cost_per_1k_tokens": 0.0005,
    "output_cost_per_1k_tokens": 0.0015,
    "input_cost_per_1m_tokens": "0.500000",
    "output_cost_per_1m_tokens": "1.500000"
  },
  {
    "model_name": "gemini-1.5-flash",
    "input_cost_per_1k_tokens": 0.0003,
    "output_cost_per_1k_tokens": 0.0012,
    "input_cost_per_1m_tokens": "0.300000",
    "output_cost_per_1m_tokens": "1.200000"
  }
]
```
Error Responses:
- 400 - Bad Request
- 500 - Database error
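The per-1k and per-1M fields in the pricing response express the same rate at two scales; a minimal sketch of that relationship, using the neorox figures from the example response (the helper name is illustrative, not part of the API):

```python
def per_1m_from_per_1k(cost_per_1k: float) -> str:
    """Convert a per-1k-token rate to the 6-decimal per-1M string format
    used in the pricing response."""
    return f"{cost_per_1k * 1000:.6f}"

# neorox: 0.0005 per 1k input tokens -> "0.500000" per 1M
print(per_1m_from_per_1k(0.0005))  # 0.500000
print(per_1m_from_per_1k(0.0015))  # 1.500000
```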
2. Chat API
Send messages to AI models and receive responses
POST /api/v1/chat
Send Chat Message
Send a message to an AI model and receive a response. Supports streaming and non-streaming modes.
Headers:
Content-Type: application/json
Request Body:
| Parameter | Type | Required | Description |
|---|---|---|---|
| api_key | string | Yes | Your API key |
| messages | array | Yes | Array of message objects |
| model_used | string | Yes | Model name to use |
| stream | boolean | No | Enable streaming (default: false) |
| system_prompt | string | No | Additional system prompt |
Message Object Structure:
```json
{
  "role": "user",
  "content": "Hello, how are you?"
}
```
Request Example:
```json
{
  "api_key": "your_api_key_here",
  "messages": [
    {
      "role": "user",
      "content": "What is the weather like today?"
    }
  ],
  "model_used": "neorox",
  "stream": false,
  "system_prompt": "Be helpful and concise"
}
```
Non-Streaming Response (200 OK):
```json
{
  "object": "chat.completion",
  "model": "neorox",
  "usage": {
    "input_tokens": 15,
    "output_tokens": 45,
    "totalCost": "0.000075"
  },
  "system_prompt": "Be helpful and concise",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "I don't have access to real-time weather data. Please check a weather app or website for current conditions."
      }
    }
  ]
}
```
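The `totalCost` field above is consistent with billing each side at its per-1k rate from the Pricing API; a hypothetical reconstruction of that arithmetic using the neorox rates (the function name is illustrative, not part of the API):

```python
def chat_cost(input_tokens: int, output_tokens: int,
              input_per_1k: float, output_per_1k: float) -> str:
    """Estimate request cost: each side billed at its per-1k-token rate,
    formatted to the 6-decimal string used in the usage object."""
    cost = (input_tokens / 1000) * input_per_1k \
         + (output_tokens / 1000) * output_per_1k
    return f"{cost:.6f}"

# 15 input + 45 output tokens at neorox rates (0.0005 / 0.0015 per 1k)
print(chat_cost(15, 45, 0.0005, 0.0015))  # 0.000075
```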
Streaming Response:
```
Content-Type: text/event-stream
Cache-Control: no-cache
Connection: keep-alive

data: I don't have access to real-time
data: weather data. Please check a weather
data: app or website for current conditions.

event: done
data: [DONE]
```
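A client consuming the stream accumulates `data:` chunks until the `[DONE]` sentinel. A minimal client-side sketch, assuming the line framing shown in the example above:

```python
def collect_sse_text(lines):
    """Join 'data:' payloads from an SSE stream, stopping at [DONE]."""
    parts = []
    for line in lines:
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload == "[DONE]":
                break
            parts.append(payload)
    return " ".join(parts)

stream = [
    "data: I don't have access to real-time",
    "data: weather data. Please check a weather",
    "data: app or website for current conditions.",
    "event: done",
    "data: [DONE]",
]
print(collect_sse_text(stream))
```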
Error Responses:
- 400 - Bad Request
- 402 - Payment Required
- 403 - Forbidden
- 500 - Internal Error
5. Usage Logging
Track and log API usage statistics
POST /api/v1/usage/log
Log API Usage
Manually log API usage (usually done automatically by the chat endpoint).
Request Body:
```json
{
  "api_key_id": 456,
  "endpoint_used": "/api/v1/chat",
  "input_tokens": 15,
  "output_tokens": 45,
  "model_used": "neorox"
}
```
Response (200 OK):
```json
{
  "message": "Usage logged",
  "api_key_id": 456,
  "input_tokens": 15,
  "output_tokens": 45,
  "cost_in_usd": "0.000075",
  "model_used": "neorox"
}
```
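When logging manually, the request body can be assembled from the `usage` object of a non-streaming chat response. A sketch under that assumption (`api_key_id` is not part of the chat response, so the caller must supply it):

```python
def usage_log_payload(chat_response: dict, api_key_id: int,
                      endpoint: str = "/api/v1/chat") -> dict:
    """Build a /api/v1/usage/log request body from a non-streaming
    chat response. api_key_id is assumed known to the caller."""
    usage = chat_response["usage"]
    return {
        "api_key_id": api_key_id,
        "endpoint_used": endpoint,
        "input_tokens": usage["input_tokens"],
        "output_tokens": usage["output_tokens"],
        "model_used": chat_response["model"],
    }

chat_response = {"model": "neorox",
                 "usage": {"input_tokens": 15, "output_tokens": 45}}
payload = usage_log_payload(chat_response, api_key_id=456)
```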
Error Handling
Understanding error responses and status codes
Common Error Response Format
```json
{
  "error": "Error message description",
  "details": "Additional details (in development)"
}
```
HTTP Status Codes
- 200 - Success
- 400 - Bad Request (validation error)
- 401 - Unauthorized (missing/invalid token)
- 402 - Payment Required (insufficient credits)
- 403 - Forbidden (invalid API key)
- 404 - Not Found
- 500 - Internal Server Error
Rate Limiting
Consider implementing rate limiting in production:
- Per API key: 100 requests/minute
- Per IP: 1000 requests/hour
- Chat endpoint: 60 requests/minute per API key
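One common way to implement per-key limits like those above is a sliding window over recent request timestamps; a minimal sketch (not part of this API, shown with the 100 requests/minute per-key figure):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds, per key."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.hits = {}  # key -> deque of request timestamps

    def allow(self, key: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits.setdefault(key, deque())
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(limit=100, window=60.0)
```

Injecting `now` makes the limiter deterministic in tests; in production the monotonic clock default is used.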
Testing Examples
Using cURL:
```bash
# Test pricing endpoint
curl -X GET "https://api.ai.nityasha.com/api/v1/pricing"

# Test chat endpoint
curl -X POST "https://api.ai.nityasha.com/api/v1/chat" \
  -H "Content-Type: application/json" \
  -d '{
    "api_key": "your_key_here",
    "messages": [{"role": "user", "content": "Hello"}],
    "model_used": "neorox"
  }'
```