Large Language Models
docAnalyzer.ai is committed to staying at the forefront of AI technology, offering a selection of the best available language models based on quality and speed. Our platform includes models from leading providers such as Anthropic, DeepSeek, Google, Meta, and OpenAI. Each model is carefully evaluated to ensure optimal performance, giving you the power you need to chat with your documents and create AI agents. The detailed specifications of each available model are listed below.
Model | Provider | Quality | Speed (tokens/s) | Latency (ms) |
---|---|---|---|---|
o4-mini (high) | OpenAI | 70 | 126.6 | 36850 |
Gemini 2.5 Pro | Google | 68 | 210.1 | 30150 |
o3 | OpenAI | 67 | 70.4 | 45850 |
R1 | DeepSeek | 60 | 23.6 | 3890 |
Claude 3.7 Sonnet | Anthropic | 57 | 70.4 | 1010 |
V3 (Mar 25) | DeepSeek | 53 | 25.2 | 3490 |
GPT-4.1 | OpenAI | 53 | 122.7 | 430 |
GPT-4.1-mini | OpenAI | 53 | 81.1 | 540 |
Llama 4 Maverick | Meta | 51 | 123.6 | 360 |
Gemini 2.0 Flash | Google | 46 | 224.9 | 290 |
Llama 4 Scout | Meta | 43 | 123.1 | 380 |
Gemini 2.0 Flash Lite | Google | 41 | 126.0 | 490 |
GPT-4.1-nano | OpenAI | 41 | 284.9 | 300 |
Claude 3.5 Haiku | Anthropic | 35 | 64.7 | 1100 |
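The quality, speed, and latency figures above describe a trade-off: the highest-quality models also tend to have the highest latency. As a minimal sketch of how you might reason over these numbers when picking a model, the snippet below encodes the table and selects the highest-quality model that fits a latency budget. The `ModelSpec` type and `best_model` helper are illustrative assumptions, not part of docAnalyzer.ai's API; only the figures come from the table.

```python
from dataclasses import dataclass

@dataclass
class ModelSpec:
    name: str
    provider: str
    quality: int       # quality index from the table (higher is better)
    speed: float       # output speed in tokens per second
    latency_ms: int    # latency in milliseconds

# Data transcribed from the specifications table above.
MODELS = [
    ModelSpec("o4-mini (high)", "OpenAI", 70, 126.6, 36850),
    ModelSpec("Gemini 2.5 Pro", "Google", 68, 210.1, 30150),
    ModelSpec("o3", "OpenAI", 67, 70.4, 45850),
    ModelSpec("R1", "DeepSeek", 60, 23.6, 3890),
    ModelSpec("Claude 3.7 Sonnet", "Anthropic", 57, 70.4, 1010),
    ModelSpec("V3 (Mar 25)", "DeepSeek", 53, 25.2, 3490),
    ModelSpec("GPT-4.1", "OpenAI", 53, 122.7, 430),
    ModelSpec("GPT-4.1-mini", "OpenAI", 53, 81.1, 540),
    ModelSpec("Llama 4 Maverick", "Meta", 51, 123.6, 360),
    ModelSpec("Gemini 2.0 Flash", "Google", 46, 224.9, 290),
    ModelSpec("Llama 4 Scout", "Meta", 43, 123.1, 380),
    ModelSpec("Gemini 2.0 Flash Lite", "Google", 41, 126.0, 490),
    ModelSpec("GPT-4.1-nano", "OpenAI", 41, 284.9, 300),
    ModelSpec("Claude 3.5 Haiku", "Anthropic", 35, 64.7, 1100),
]

def best_model(max_latency_ms: int) -> ModelSpec:
    """Return the highest-quality model whose latency fits the budget."""
    candidates = [m for m in MODELS if m.latency_ms <= max_latency_ms]
    return max(candidates, key=lambda m: m.quality)

print(best_model(2000).name)  # -> Claude 3.7 Sonnet
print(best_model(500).name)   # -> GPT-4.1
```

For example, with a 2000 ms latency budget the best choice is Claude 3.7 Sonnet (quality 57), while tightening the budget to 500 ms shifts the pick to GPT-4.1 (quality 53).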