
Large Language Models

docAnalyzer.ai is committed to staying at the forefront of AI technology, offering the latest models from leading providers, including Anthropic, DeepSeek, Google, Meta, OpenAI, and xAI. Our rapid deployment infrastructure lets us make new models available within days, sometimes hours, of their release, so you always have access to cutting-edge AI capabilities. Unlike most competitors, which limit you to a single provider, we offer an extensive selection of models, giving you the freedom to choose the right AI for your specific needs. Each model is evaluated for quality and speed, so you have the power you need to chat with your documents and create AI agents. The detailed specifications of each available model are listed below.

| Model | Provider | Quality | Speed (tokens/s) | Latency (s)* |
|---|---|---|---|---|
| GPT-5 | OpenAI | 69 | 109.4 | 74.53 |
| Grok 4 | xAI | 68 | 51 | 6.43 |
| Gemini 2.5 Pro | Google Gemini | 65 | 143.5 | 30.39 |
| GPT-5 mini | OpenAI | 64 | 74.9 | 28.68 |
| openai/gpt-oss-120b | Groq | 61 | 226.1 | 0.44 |
| Claude Opus 4.1 | Anthropic | 61 | 24.7 | 1.48 |
| Claude Sonnet 4 | Anthropic | 59 | 48.1 | 1.38 |
| DeepSeek R1 | DeepSeek | 59 | 20.1 | 2.82 |
| Gemini 2.5 Flash | Google Gemini | 58 | 257.4 | 14.08 |
| Grok 3 mini | xAI | 58 | 187.9 | 0.62 |
| GPT-5 nano | OpenAI | 54 | 190.7 | 33.68 |
| openai/gpt-oss-20b | Groq | 49 | 262.7 | 0.43 |
| moonshotai/kimi-k2-instruct | Groq | 49 | 53.2 | 0.56 |
| DeepSeek V3 | DeepSeek | 49 | 19.9 | 2.93 |
| Gemini 2.5 Flash-Lite | Google Gemini | 47 | 423 | 8.81 |
| GPT 4.1 | OpenAI | 47 | 128.5 | 0.53 |
| GPT 4.1 mini | OpenAI | 46 | 57.6 | 0.47 |
| meta-llama/llama-4-maverick-17b-128e-instruct | Groq | 42 | 146.6 | 0.32 |
| meta-llama/llama-4-scout-17b-16e-instruct | Groq | 33 | 114.7 | 0.39 |
| GPT 4.1 nano | OpenAI | 32 | 173 | 0.37 |
| Gemini 2.0 Flash-Lite | Google Gemini | 30 | 173.8 | 0.25 |
| Claude 3.5 Haiku | Anthropic | 23 | 54.1 | 0.57 |

* Latency bars on this page are drawn on a logarithmic scale; lower latency indicates better performance.
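As a minimal sketch of how a logarithmic latency bar can be computed, the snippet below maps each latency onto a bar width using the fastest (0.25 s) and slowest (74.53 s) values from the table. The function name, bar width, and rendering are illustrative assumptions, not the site's actual implementation; the figures are taken from the table above.

```python
import math

# Latencies (seconds) for a few models from the table above.
latencies = {
    "GPT-5": 74.53,
    "Grok 4": 6.43,
    "Gemini 2.0 Flash-Lite": 0.25,
}

def bar_width(latency, lo=0.25, hi=74.53, max_width=100):
    """Map a latency onto a 0..max_width bar using a logarithmic scale.

    lo and hi are the fastest and slowest latencies in the table;
    lower latency yields a shorter bar, i.e. better performance.
    """
    frac = (math.log(latency) - math.log(lo)) / (math.log(hi) - math.log(lo))
    return round(frac * max_width)

for name, secs in latencies.items():
    print(f"{name:25s} {'#' * bar_width(secs)} {secs}s")
```

On a log scale the 300x spread between 0.25 s and 74.53 s stays readable: the fastest model gets an empty bar, the slowest a full one, and mid-range values land proportionally in between rather than being crushed against the left edge.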

docAnalyzer™, a trademark of AI For Verticals, Inc © 2025