Batch Processing Made Simple
Use Gemini AI models at scale with zero infrastructure complexity. Better performance, 90% cost savings.
- Structured Objects
- Receive type-safe responses in schemas you define.
- Add Metadata
- Attach user-defined job and request-level metadata.
- Simplified Callbacks
- Define your processing logic and we'll handle the rest.
- Automated Polling
- Sophisticated polling is handled under the hood.
- No JSONL
- Forget the massive JSONL files—we handle all that for you.
- Seamless Integration
- AjaxAI adapts to your systems, not vice versa.
- Unbeatable Pricing
- Gemini outperforms OpenAI models at 1/10th the price.
Deploy Faster
Extremely Simple
We've eliminated all the infrastructure complexity that slows you down. With AjaxAI, building and deploying multimodal features is a breeze.
- No JSONL Files
- Skip the tedious data formatting and error-prone file management. Add content directly with simple method calls — text, images, and other data types are automatically handled.
- No File Formatting
- Forget about MIME types, base64 encoding, and content boundaries. AjaxAI automatically handles all data normalization behind the scenes.
- No Storage Buckets
- Stop managing cloud storage permissions and URL signing. AjaxAI handles file storage, caching, and delivery infrastructure — completely invisible to your application code.
from ajaxai import AjaxAI, AjaxBatchJob, AjaxRequest
from your_codebase import items

client = AjaxAI(api_key="your-api-key")

job = AjaxBatchJob(
    client=client,
    model="gemini-2.0-flash",
)

for i, item in enumerate(items):
    # Give every request its own unique ID.
    request = AjaxRequest(request_id=f"request-{i}")
    request.add_text("Describe the following images.")
    request.add_image(item["images"][0]["url"])
    request.add_image(item["images"][1]["url"])
    job.add_request(request)

job.save()
job.submit()
client.start_polling()
The Smarter Choice
91% Cost Savings
AjaxAI delivers superior AI capabilities at just 9% of OpenAI's cost.
- Radical Price-Performance Ratio
- While others charge premium prices for their models, we've optimized the entire pipeline to pass massive savings directly to you without sacrificing performance.
- Predictable Monthly Spend
- Say goodbye to bill shock. AjaxAI provides comprehensive usage analytics and cost projections, allowing you to accurately forecast your AI expenditure and avoid unexpected overages that plague OpenAI customers.
- Scale Without Financial Penalties
- Many AI platforms become prohibitively expensive as you scale. AjaxAI is specifically designed for high-volume production workloads, with costs that remain proportional as you grow—whether you're processing thousands or millions of requests.
- Lower Development Costs
- Our streamlined API reduces development time by eliminating complex infrastructure setup. Teams spend less time wrestling with file formatting, storage buckets, and custom monitoring solutions, and more time building valuable features.
- Reduced Infrastructure Overhead
- AjaxAI eliminates the need for expensive storage solutions, monitoring systems, and self-hosted analytics that traditional AI integrations require. Everything is included in our simple pricing model with no hidden infrastructure costs.
- Real ROI Metrics
- Unlike other platforms that only track token usage, AjaxAI's comprehensive metrics let you precisely measure the business impact and ROI of each AI feature. Make data-driven decisions about where to invest your AI budget for maximum returns.
The Better Model
Flat-Out Smarter
Gemini 2.0 Flash beats OpenAI's flagship GPT-4o across virtually every benchmark that matters. Independent tests confirm it: Gemini is now the performance leader.
“Google has made a great comeback with Gemini, closing in on OpenAI in nearly every aspect. With its latest feature and model releases, Google’s AI products are now on par with OpenAI.”

- Smarter Overall
- When it comes to the full range of intelligence tests - from IQ to knowledge to reasoning - Gemini 2.0 Flash beats GPT-4o across the board. The numbers don't lie.
- Better Reasoning
- Gemini 2.0 Flash outscores GPT-4o on the MMLU-Pro benchmark. For tasks that need deep thinking and problem-solving, Gemini is clearly the smarter choice.
- Deeper Knowledge
- Gemini 2.0 Flash crushes GPT-4o on "Humanity's Last Exam" - proof that it simply knows more about the world and can answer tough questions better.
- Science Powerhouse
- Gemini 2.0 Flash scored 10 percentage points higher than GPT-4o on the GPQA Diamond science benchmark. For research and science tasks, there's no competition.
- Math Champion
- Gemini 2.0 Flash beat GPT-4o by almost 15% on the MATH-500 benchmark. Need to crunch numbers or solve equations? The choice is clear.
- Competition Crusher
- Gemini 2.0 Flash DOUBLED the performance of GPT-4o in the AIME 2024 Math Competition. For solving the toughest math problems, Gemini is greater than, not equal to, GPT-4o.
Ready to dive in?
Begin using AjaxAI today.
Structured Outputs
Your Schema, Your Rules
Force AI outputs to conform to your application's data structures, not the other way around. AjaxAI guarantees type-safe, validated responses with zero parsing code.
- Type-Safe Responses
- Define Pydantic models once, and receive perfectly structured data every time. No more regex parsing or JSON extraction — get validated responses that integrate directly with your application.
- Intelligent Context
- Attach structured metadata to jobs and individual requests. Build complex multi-stage workflows while maintaining complete type safety throughout your processing pipeline.
Ship robust AI features in half the time. Eliminate brittle extraction code, reduce edge cases, and let your IDE's autocomplete guide you through perfect integrations.
from ajaxai import AjaxAI, AjaxBatchJob, AjaxRequest
from pydantic import BaseModel
from your_codebase import products

client = AjaxAI(api_key="your-api-key")
job = AjaxBatchJob(client=client)

# Responses are validated against this schema before they reach your code.
class ProductDescription(BaseModel):
    product_id: str
    name: str
    description: str
    price: float

for product in products:
    request = AjaxRequest(
        request_id=product.id,
        model="gemini-2.0-flash",
        response_model=ProductDescription,
    )
    request.add_text("Write a product description for the following product.")
    request.add_text(product.name)
    request.add_text(str(product.price))
    job.add_request(request)
Streamlined Development
Smart Metadata That Follows Your Data
Keep essential context with your tasks from start to finish. AjaxAI's metadata system eliminates awkward state management and reduces database load, letting you build more responsive applications with cleaner code.
- Job Context Preservation
- Attach critical information to any job that remains accessible throughout its lifecycle. Never lose track of essential context again.
- Request-Specific Data
- Define unique parameters for individual requests that travel with your data. Process responses intelligently without database lookups or complex state management.
- Eliminate Database Roundtrips
- Store vital information alongside your tasks from creation to completion. Cut development time and reduce infrastructure costs by minimizing database queries.
- Performance Optimized
- Access metadata instantly from our high-speed memory cache. Experience response times up to 50x faster than traditional database retrieval methods.
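Here's a minimal sketch of what job- and request-level metadata could look like in practice. The metadata= keyword and the field values are illustrative assumptions, not the SDK's confirmed signature:

from ajaxai import AjaxAI, AjaxBatchJob, AjaxRequest

client = AjaxAI(api_key="your-api-key")

# `metadata=` is an assumed keyword, shown here for illustration only.
job = AjaxBatchJob(
    client=client,
    model="gemini-2.0-flash",
    metadata={"campaign": "spring-launch", "owner": "catalog-team"},  # job-level context
)

request = AjaxRequest(
    request_id="sku-1042",
    metadata={"sku": "1042", "locale": "en-US"},  # travels with the request and its response
)
request.add_text("Summarize recent reviews for this product.")
job.add_request(request)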
Seamless Processing
Code Once, Process Anywhere
Turn response handling into a single function call. AjaxAI's callback system automatically executes your code when results arrive – synchronously or asynchronously, at any scale.
1. Define Your Handler
Create a function with your custom logic – from simple logging to complex ML pipelines. You focus on the core business value while AjaxAI handles the execution framework.
2. Apply the Decorator
Add the @ajaxai_callback decorator to register your function. Each request can have its own dedicated handler, giving you complete flexibility for different response types and processing requirements.
3. Continue Shipping
That's it – no webhooks to configure, no polling loops to maintain. Your function will automatically execute when results arrive.
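To make the steps concrete, here's a hedged sketch of a handler. The @ajaxai_callback name comes from step 2; the import path and the single-argument response signature are assumptions for illustration:

from ajaxai import ajaxai_callback  # import path assumed for illustration

@ajaxai_callback
def handle_description(response):
    # Illustrative response shape: a request ID plus the parsed result.
    print(f"{response.request_id} finished: {response.result}")

How a handler is bound to a specific request (for example, via a callback argument on AjaxRequest) depends on the SDK's actual API.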
Ready to dive in?
Begin using AjaxAI today.
Better Insights
Real Metrics
Standard APIs only tell you whether a job is "in_progress" or "completed"; AjaxAI delivers comprehensive performance metrics for every job. No more guessing about token usage or building your own analytics.
- Token Usage Breakdown
- See exact token consumption across your entire job - total, prompt, and completion tokens - all calculated for you.
- Total Job Statistics
- Get complete visibility into your batch job performance with detailed counts of total, processed, successful, and failed requests.
- Time & Performance Tracking
- Automatically track duration and execution time for each job. No manual calculations or custom timing code needed.
# AjaxAI automatically provides this for EVERY job
{
    "total_requests": 500,
    "processed_requests": 498,
    "successful_requests": 495,
    "failed_requests": 3,
    "duration_seconds": 187.45,
    "total_tokens": 156428,
    "prompt_tokens": 52143,
    "completion_tokens": 104285,
    "created_at": "2025-05-20T14:32:18Z"
}
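How you read these stats in code will depend on the SDK; the lines below are purely illustrative, assuming a hypothetical get_stats() accessor on the job object from the earlier examples:

# Hypothetical accessor; method and field names are assumptions, not the documented API.
stats = job.get_stats()
print(f"{stats['successful_requests']}/{stats['total_requests']} requests succeeded")
print(f"{stats['total_tokens']} tokens used in {stats['duration_seconds']} seconds")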
Google Cloud Setup
No GCloud? No Problem.
We have the easiest GCloud setup guide you'll ever see. We'll walk you through the entire setup process, from permissions to billing to storage.
- 10 Minutes or Less
- Our guided setup process takes you from zero to hero in ten minutes or less.
- No Tears
- No more wrestling with Google Cloud documentation on your own. We walk you through the entire process and will have you up and running in no time.
- Already an Expert? Welcome.
- If you already know your way around GCloud, it's even easier. We'll just need your project ID and API key.
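If you already have a project, wiring in those two values might look like the sketch below. The project_id keyword is an assumption for illustration; the guided setup covers the exact configuration fields:

import os
from ajaxai import AjaxAI

# `project_id=` is illustrative; only api_key is confirmed by the examples above.
client = AjaxAI(
    api_key=os.environ["AJAXAI_API_KEY"],
    project_id="your-gcloud-project-id",
)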
Ready to dive in?
Begin using AjaxAI today.
Pricing
How much does it cost?
You pay for the AI tokens you actually use, plus a small service fee. No monthly minimums, no setup costs, no surprises. Most teams save 90% compared to OpenAI while getting better performance.