docs/content/integrations/index.mdx: 1 addition & 21 deletions
```diff
@@ -71,80 +71,66 @@ Evaluation model integrations configure the LLM provider DeepEval uses for LLM-a
 <Card
   title="OpenAI"
   href="/integrations/models/openai"
-  description="Use OpenAI models for metrics, synthesis, simulation, and optimization."
 />
 <Card
   title="Azure OpenAI"
   href="/integrations/models/azure-openai"
-  description="Use Azure-hosted OpenAI deployments as DeepEval judges."
 />
 <Card
   title="Ollama"
   href="/integrations/models/ollama"
-  description="Run local Ollama models for evaluation."
 />
 <Card
   title="OpenRouter"
   href="/integrations/models/openrouter"
-  description="Route DeepEval model calls through OpenRouter."
 />
 <Card
   title="Anthropic"
   href="/integrations/models/anthropic"
-  description="Use Claude models as evaluation judges."
 />
 <Card
   title="Amazon Bedrock"
   href="/integrations/models/amazon-bedrock"
-  description="Use Bedrock-hosted models for evaluation."
 />
 <Card
   title="Gemini"
   href="/integrations/models/gemini"
-  description="Use Google Gemini models for LLM-as-a-judge metrics."
 />
 <Card
   title="DeepSeek"
   href="/integrations/models/deepseek"
-  description="Configure DeepSeek as an evaluation model provider."
 />
 <Card
   title="Vertex AI"
   href="/integrations/models/vertex-ai"
-  description="Run Gemini through Google Cloud Vertex AI."
 />
 <Card
   title="Grok"
   href="/integrations/models/grok"
-  description="Use xAI Grok models for evaluation."
 />
 <Card
   title="Moonshot"
   href="/integrations/models/moonshot"
-  description="Use Moonshot/Kimi models as judges."
 />
 <Card
   title="Portkey"
   href="/integrations/models/portkey"
-  description="Route model calls through Portkey."
 />
 <Card
   title="vLLM"
   href="/integrations/models/vllm"
-  description="Connect DeepEval to vLLM-hosted models."
 />
 <Card
   title="LM Studio"
   href="/integrations/models/lmstudio"
-  description="Use local LM Studio models for evaluation."
 />
 <Card
   title="LiteLLM"
   href="/integrations/models/litellm"
-  description="Use LiteLLM to route evaluation calls across providers."
 />
 </Cards>
+
 ## Vector DBs
 
 Vector database integrations show how to connect retrieval systems to DeepEval so RAG metrics can evaluate the context your application actually retrieves. Use these examples to benchmark retrieval quality and end-to-end RAG behavior.
```
```diff
@@ -153,31 +139,25 @@ Vector database integrations show how to connect retrieval systems to DeepEval s
 <Card
   title="Cognee"
   href="/integrations/vector-databases/cognee"
-  description="Evaluate retrieval from Cognee semantic memory graphs."
```