Large Language Models
docAnalyzer.ai is committed to staying at the forefront of AI technology, offering a selection of the best available language models based on quality and speed. Our platform includes models from leading providers such as Anthropic, DeepSeek, Google, and OpenAI. Each model is meticulously evaluated to ensure optimal performance, providing you with the necessary power to chat with your documents and create AI agents. Below are the detailed specifications of each available model.
Model | Provider | Quality | Speed | Latency |
---|---|---|---|---|
o3-mini * | OpenAI | 90 | 216.6 tokens/s | 10,740 ms |
R1 * | DeepSeek | 89 | 18 tokens/s | 73,820 ms |
Gemini 2.0 Flash | Google | 82 | 168.1 tokens/s | 470 ms |
Claude 3.5 Sonnet (Oct 24) * | Anthropic | 80 | 70.4 tokens/s | 1,010 ms |
V3 | DeepSeek | 79 | 11 tokens/s | 1,030 ms |
GPT-4o * | OpenAI | 78 | 77 tokens/s | 640 ms |
Llama 3.3 70B (Groq) * | Meta | 74 | 78.7 tokens/s | 510 ms |
GPT-4o mini | OpenAI | 73 | 97.2 tokens/s | 740 ms |
(*) Model available for use under the BYOK (Bring Your Own Key) principle. BYOK lets you use your own API keys from LLM providers such as Anthropic, Google AI Studio, Groq, or OpenAI. Simply obtain an API key from the provider and enter it into our system; the key is then integrated with our platform, allowing you to use these advanced models at will.
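As an illustration only (this is a hypothetical sketch, not docAnalyzer's actual API), a minimal BYOK flow might validate the shape of a user-supplied key before storing it. The prefix conventions below reflect common public formats (e.g. OpenAI keys begin with `sk-`), but the function name and store are invented for this example:

```python
# Hypothetical BYOK helper: check that a user-supplied API key looks
# plausible for its provider, then keep it for later requests.
# Prefixes are illustrative conventions, not a guarantee of validity.
KEY_PREFIXES = {
    "openai": "sk-",
    "anthropic": "sk-ant-",
    "groq": "gsk_",
}

def register_byok_key(provider: str, api_key: str, store: dict) -> bool:
    """Store the key if the provider is supported and the prefix matches."""
    prefix = KEY_PREFIXES.get(provider.lower())
    if prefix is None or not api_key.startswith(prefix):
        return False  # unknown provider or malformed key
    store[provider.lower()] = api_key
    return True
```

A shape check like this only catches obvious typos; the key is actually verified the first time a request is made to the provider.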