Large Language Models
docAnalyzer.ai is committed to staying at the forefront of AI technology, offering a selection of the best available language models based on quality and speed. Our platform includes models from leading providers such as OpenAI, Anthropic, and Google. Each model is carefully evaluated for performance, giving you the power you need to chat with your documents and build AI agents. The detailed specifications of each available model are listed below.
Model | Provider | Quality | Speed | Latency |
---|---|---|---|---|
Claude 3.5 Sonnet * | Anthropic | 80 | 55.7 tokens/s | 860 ms |
Gemini 1.5 Pro (Sep 24) * | Google | 80 | 58.7 tokens/s | 780 ms |
GPT-4o * | OpenAI | 77 | 86.1 tokens/s | 490 ms |
GPT-4o mini | OpenAI | 71 | 102.4 tokens/s | 530 ms |
Gemini 1.5 Flash (Sep 24) | Google | 71 | 181.4 tokens/s | 320 ms |
Claude 3 Opus * | Anthropic | 70 | 57.8 tokens/s | 490 ms |
Llama 3.1 70B (Groq) * | Meta | 65 | 71.3 tokens/s | 430 ms |
Claude 3 Haiku | Anthropic | 54 | 127.1 tokens/s | 470 ms |
Llama 3.1 8B (Groq) | Meta | 53 | 751.31 tokens/s | 370 ms |
(*) Model available for use through the BYOK (Bring Your Own Key) principle. BYOK lets you use your own API keys from LLM providers such as Anthropic, Google AI Studio, Groq, or OpenAI. Simply obtain an API key from the model provider and enter it into our system; the key is then used with our platform on your behalf, allowing you to leverage the most powerful models at will.
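To make the BYOK flow concrete, here is a minimal sketch of what happens with the key you supply: it is a plain string attached as an authentication header on each request to the provider. The header names below follow Anthropic's and OpenAI's public API conventions; the function itself is a hypothetical illustration, not part of the docAnalyzer.ai platform.

```python
import os

def build_auth_headers(provider: str, api_key: str) -> dict:
    """Return the authentication headers a request to the given
    provider would carry when using your own API key (BYOK)."""
    if provider == "anthropic":
        # Anthropic's API expects the key in an x-api-key header
        # plus a pinned API version.
        return {"x-api-key": api_key, "anthropic-version": "2023-06-01"}
    if provider == "openai":
        # OpenAI's API expects a standard Bearer token.
        return {"Authorization": f"Bearer {api_key}"}
    raise ValueError(f"unsupported provider: {provider}")

# In practice the key would come from the provider's console and be
# stored securely; an environment variable is used here for illustration.
headers = build_auth_headers(
    "anthropic", os.environ.get("ANTHROPIC_API_KEY", "sk-demo")
)
```

Your key never needs to be shared beyond this header exchange, which is why BYOK lets you switch providers simply by supplying a different key.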