Large Language Models

docAnalyzer.ai is committed to staying at the forefront of AI technology, offering a selection of the best available language models based on quality and speed. Our platform includes models from leading providers such as OpenAI, Anthropic, and Google. Each model is meticulously evaluated to ensure optimal performance, providing you with the necessary power to chat with your documents and create AI agents. Below are the detailed specifications of each available model.

| Model | Provider | Quality | Speed (tokens/s) | Latency (ms) |
|---|---|---|---|---|
| GPT-4o * | OpenAI | 100 | 80.6 | 520 |
| Claude 3.5 Sonnet * | Anthropic | 100 | 79.8 | 840 |
| Claude 3 Opus * | Anthropic | 94 | 23.7 | 1910 |
| Gemini 1.5 Flash | Google | 83 | 142.7 | 1320 |
| Claude 3 Haiku | Anthropic | 72 | 118.2 | 530 |
| GPT-3.5 Turbo | OpenAI | 65 | 62.5 | 360 |

(*) Model available through the BYOK (Bring Your Own Key) principle. BYOK lets you use your own API keys from LLM providers such as Anthropic or OpenAI: obtain an API key from the provider and enter it into our system. We then integrate the key with our platform, allowing you to use the most powerful advanced models at will.
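If you want to confirm that a key works before entering it into docAnalyzer.ai, you can test it directly against the provider. The sketch below is a minimal example assuming the official `openai` Python package; the key value is a placeholder, and the same idea applies to an Anthropic key with the `anthropic` package.

```python
# Minimal check that an OpenAI API key is valid before using it for BYOK.
# Assumes the official `openai` Python package (pip install openai).
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # placeholder: your own key from the provider

# Listing available models is an inexpensive call that fails fast on a bad key.
models = client.models.list()
print([m.id for m in models.data][:5])
```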