Large Language Models
docAnalyzer.ai is committed to staying at the forefront of AI technology, offering a selection of the best available language models based on quality and speed. Our platform includes models from leading providers such as Anthropic, DeepSeek, Google, and OpenAI. Each model is meticulously evaluated to ensure optimal performance, providing you with the necessary power to chat with your documents and create AI agents. Below are the detailed specifications of each available model.
Model | Provider | Quality | Speed (tokens/s) | Latency (ms) |
---|---|---|---|---|
R1 * | DeepSeek | 89 | 22.4 | 21,730 |
o1-mini * | OpenAI | 86 | 210.4 | 10,550 |
Gemini 1.5 Pro (Sep 24) * | Google | 80 | 60.9 | 730 |
Claude 3.5 Sonnet (Oct 24) * | Anthropic | 80 | 71.7 | 1,010 |
V3 | DeepSeek | 79 | 14.7 | 950 |
GPT-4o (Aug 24) * | OpenAI | 78 | 61.2 | 660 |
Llama 3.3 70B (Groq) * | Meta | 74 | 67.1 | 470 |
Gemini 1.5 Flash (Sep 24) | Google | 74 | 184.9 | 370 |
GPT-4o mini | OpenAI | 73 | 73.1 | 650 |
Claude 3 Haiku | Anthropic | 68 | 64.6 | 790 |
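The speed and latency columns can be combined into a rough end-to-end estimate: the time until a full answer arrives is approximately the latency plus the output length divided by the generation speed. The sketch below illustrates this, assuming latency is time to first token and speed is sustained output throughput; that interpretation, the model subset, and the 500-token answer length are illustrative assumptions, not figures from our benchmarks.

```python
# Rough end-to-end response time from the table's figures, assuming
# "Latency" is time to first token and "Speed" is sustained output
# throughput (an interpretation of the columns, not a stated guarantee).

def response_time_s(latency_ms: float, tokens_per_s: float, output_tokens: int) -> float:
    """Estimated seconds until the full answer has been generated."""
    return latency_ms / 1000 + output_tokens / tokens_per_s

# (latency in ms, speed in tokens/s) taken from the table above.
models = {
    "R1 (DeepSeek)": (21730, 22.4),
    "o1-mini (OpenAI)": (10550, 210.4),
    "Gemini 1.5 Flash (Google)": (370, 184.9),
    "Claude 3.5 Sonnet (Anthropic)": (1010, 71.7),
}

# Compare the models on a hypothetical 500-token answer, fastest first.
for name, (latency, speed) in sorted(
    models.items(), key=lambda m: response_time_s(m[1][0], m[1][1], 500)
):
    print(f"{name}: {response_time_s(latency, speed, 500):.1f} s")
```

Note how the ranking differs from raw quality or raw speed: a high-throughput model with long latency (such as o1-mini) can still be slower overall than a lower-throughput model with short latency.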
(*) Model available for use through the BYOK (Bring Your Own Key) principle. BYOK allows you to use your own API keys from LLM providers such as Anthropic, Google AI Studio, Groq, or OpenAI. Simply obtain an API key from the model provider and enter it into our system; we integrate the key with our platform so you can leverage the most powerful advanced models at will.