📧 Get API Access at 1/5 Price: [email protected]
🌐 Platform: https://ai.lmzh.top | 💡 Pay as you go | No subscription
Complete AI API Tutorial 2026: Build Your First AI App in 10 Minutes
Python + JavaScript | Updated March 2026 | Beginner-friendly
What You'll Build:
- A working Python script calling GPT-4.1 and Claude Sonnet 4.6
- A JavaScript/Node.js version of the same app
- A streaming chat interface
- An image generation example
- Production error handling with retry logic
Why NexaAPI?
| Model | Official Price | NexaAPI Price | Savings |
|---|---|---|---|
| Claude Sonnet 4.6 | $3.00/M | ~$0.60/M | 80% |
| GPT-4.1 | $2.00/M | ~$0.40/M | 80% |
| Gemini 3.1 Pro | $2.00/M | ~$0.40/M | 80% |
Same models, same quality, one line of code changed, and roughly 80% cheaper.
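To make the table concrete, here is a quick back-of-the-envelope cost comparison at the per-million-token rates quoted above. The 50M-token monthly volume is purely illustrative:

```python
# Per-million-token prices from the table above (USD).
official_per_m = {"claude-sonnet-4-6": 3.00, "gpt-4.1": 2.00, "gemini-3.1-pro": 2.00}
nexaapi_per_m = {"claude-sonnet-4-6": 0.60, "gpt-4.1": 0.40, "gemini-3.1-pro": 0.40}

def monthly_cost(prices, tokens_m):
    """Dollar cost per model for tokens_m million tokens."""
    return {model: rate * tokens_m for model, rate in prices.items()}

tokens_m = 50  # 50M tokens/month -- an illustrative workload, not a quota
official = monthly_cost(official_per_m, tokens_m)
discounted = monthly_cost(nexaapi_per_m, tokens_m)
for model in official:
    print(f"{model}: ${official[model]:.2f} -> ${discounted[model]:.2f} "
          f"(save ${official[model] - discounted[model]:.2f})")
```

At that volume, GPT-4.1 drops from $100 to $20 per month and Claude Sonnet 4.6 from $150 to $30.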
Part 1: Python Tutorial
Step 1: Install
pip install openai
Step 2: Basic Chat Completion
from openai import OpenAI
client = OpenAI(
    api_key="YOUR_NEXAAPI_KEY",
    base_url="https://ai.lmzh.top/v1"  # ← the only change from a stock OpenAI setup
)
response = client.chat.completions.create(
    model="gpt-4.1",  # or "claude-sonnet-4-6", "gemini-3.1-pro"
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
print(response.choices[0].message.content)
# Output: "The capital of France is Paris."
Step 3: Streaming Responses
stream = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[{"role": "user", "content": "Write a haiku about Python"}],
    stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="", flush=True)
Step 4: Multi-Turn Chatbot
conversation = [
    {"role": "system", "content": "You are a Python tutor. Be concise."}
]
while True:
    user_input = input("You: ")
    if user_input.lower() == "quit":
        break
    conversation.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=conversation
    )
    assistant_msg = response.choices[0].message.content
    conversation.append({"role": "assistant", "content": assistant_msg})
    print(f"AI: {assistant_msg}\n")
Step 5: Image Generation
response = client.images.generate(
    model="dall-e-3",
    prompt="A futuristic city skyline at sunset, digital art",
    size="1024x1024",
    n=1
)
print(f"Image URL: {response.data[0].url}")
# Cost: ~$0.003/image via NexaAPI
Step 6: Async for Production
import asyncio
from openai import AsyncOpenAI
client = AsyncOpenAI(
    api_key="YOUR_NEXAAPI_KEY",
    base_url="https://ai.lmzh.top/v1"
)
async def process_batch(prompts):
    tasks = [
        client.chat.completions.create(
            model="gemini-2.5-flash",  # cheapest option for bulk work
            messages=[{"role": "user", "content": p}]
        )
        for p in prompts
    ]
    responses = await asyncio.gather(*tasks)
    return [r.choices[0].message.content for r in responses]

prompts = [f"Summarize topic {i}" for i in range(10)]
results = asyncio.run(process_batch(prompts))
Part 2: JavaScript Tutorial
Install
npm install openai
Basic Chat
import OpenAI from 'openai';
const client = new OpenAI({
  apiKey: 'YOUR_NEXAAPI_KEY',
  baseURL: 'https://ai.lmzh.top/v1'
});
const response = await client.chat.completions.create({
  model: 'gpt-4.1',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' }
  ]
});
console.log(response.choices[0].message.content);
Streaming
const stream = await client.chat.completions.create({
  model: 'claude-sonnet-4-6',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true
});
for await (const chunk of stream) {
  const text = chunk.choices[0]?.delta?.content || '';
  process.stdout.write(text);
}
Express.js API Server
import express from 'express';
import OpenAI from 'openai';
const app = express();
app.use(express.json());
const client = new OpenAI({
  apiKey: process.env.NEXAAPI_KEY,
  baseURL: 'https://ai.lmzh.top/v1'
});

app.post('/api/chat', async (req, res) => {
  const { message, model = 'gpt-4.1' } = req.body;
  const response = await client.chat.completions.create({
    model,
    messages: [{ role: 'user', content: message }]
  });
  res.json({ response: response.choices[0].message.content });
});

app.listen(3000);
Part 3: Choose the Right Model
| Task | Model | NexaAPI Cost |
|---|---|---|
| Customer support | Claude Haiku 4.5 | ~$40/10M calls |
| Code generation | Claude Sonnet 4.6 | ~$180/5M calls |
| Content writing | GPT-4.1 | ~$100/5M calls |
| Bulk processing | Gemini 2.5 Flash | ~$8/50M calls |
| Image generation | FLUX via NexaAPI | ~$0.003/image |
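One way to keep the table's recommendations in code is a small lookup helper. This is a sketch of our own, not part of any SDK, and the task keys and model IDs (including "claude-haiku-4-5") are illustrative assumptions:

```python
# Map task categories from the table above to model names.
# Keys and model IDs are illustrative; verify IDs against your provider's model list.
MODEL_FOR_TASK = {
    "customer_support": "claude-haiku-4-5",
    "code_generation": "claude-sonnet-4-6",
    "content_writing": "gpt-4.1",
    "bulk_processing": "gemini-2.5-flash",
}

def pick_model(task, default="gpt-4.1"):
    """Return the recommended model for a task, falling back to a default."""
    return MODEL_FOR_TASK.get(task, default)

print(pick_model("code_generation"))  # -> claude-sonnet-4-6
print(pick_model("unlisted_task"))    # -> gpt-4.1 (the fallback)
```

Centralizing the mapping this way means a price or model change is a one-line edit instead of a search through every call site.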
Part 4: Error Handling
from openai import OpenAI, RateLimitError
import time
client = OpenAI(
    api_key="YOUR_NEXAAPI_KEY",
    base_url="https://ai.lmzh.top/v1"
)
def robust_chat(message, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4.1",
                messages=[{"role": "user", "content": message}],
                timeout=30.0
            )
            return response.choices[0].message.content
        except RateLimitError:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, 4s, ...
            else:
                raise
Quick Start Checklist
- ☐ Email [email protected] for your API key
- ☐ pip install openai or npm install openai
- ☐ Set base_url="https://ai.lmzh.top/v1"
- ☐ Test with a "Hello, world" prompt
- ☐ Choose the right model for your use case
Start Building with AI APIs at 1/5 the Cost
GPT-4.1, Claude Sonnet 4.6, Gemini 3.1 Pro, Veo 3.1, FLUX images — all in one OpenAI-compatible API.
📧 Get API Access: [email protected]
🌐 https://ai.lmzh.top | Pay as you go | No subscription | No minimum spend