Large Language Models
docAnalyzer.ai is committed to staying at the forefront of AI technology, offering a selection of the best available language models based on quality and speed. Our platform includes models from leading providers such as OpenAI, Anthropic, and Google. Each model is meticulously evaluated to ensure optimal performance, providing you with the necessary power to chat with your documents and create AI agents. Below are the detailed specifications of each available model.
Model | Provider | Quality | Speed | Latency |
---|---|---|---|---|
GPT-4o * | OpenAI | 100 | 80.6 tokens/s | 520 milliseconds |
Claude 3.5 Sonnet * | Anthropic | 100 | 79.8 tokens/s | 840 milliseconds |
Claude 3 Opus * | Anthropic | 94 | 23.7 tokens/s | 1910 milliseconds |
GPT-4o mini | OpenAI | 85 | 182.6 tokens/s | 530 milliseconds |
Gemini 1.5 Flash | Google | 83 | 142.7 tokens/s | 1320 milliseconds |
Claude 3 Haiku | Anthropic | 72 | 118.2 tokens/s | 530 milliseconds |
GPT-3.5 Turbo | OpenAI | 65 | 62.5 tokens/s | 360 milliseconds |
(*) Model available through the BYOK (Bring Your Own Key) principle. BYOK lets you use your own API keys from LLM providers such as Anthropic or OpenAI. Simply obtain an API key from the provider and enter it into our system; we then integrate the key with our platform so you can use the most powerful models at will.
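As a rough illustration of what BYOK means on the wire, the sketch below shows how a user-supplied key might be attached to requests for each provider. The header formats are the ones documented by OpenAI and Anthropic; the function name and overall structure are illustrative, not docAnalyzer.ai's actual implementation.

```python
# Illustrative sketch: attaching a user-supplied (BYOK) API key to a
# provider request. Header formats follow each provider's public docs;
# build_auth_headers itself is a hypothetical helper.

def build_auth_headers(provider: str, api_key: str) -> dict:
    """Return the HTTP headers a request to the given provider expects."""
    if provider == "openai":
        # OpenAI authenticates with a standard Bearer token.
        return {"Authorization": f"Bearer {api_key}"}
    if provider == "anthropic":
        # Anthropic uses an x-api-key header plus a required API version.
        return {"x-api-key": api_key, "anthropic-version": "2023-06-01"}
    raise ValueError(f"Unsupported BYOK provider: {provider}")

# Example: headers for a user-supplied OpenAI key (placeholder value).
headers = build_auth_headers("openai", "sk-...your-key...")
```

The key itself never needs to change shape: the platform only has to route it into the correct header for the provider you chose.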