fix: return 1.0 when no knowledge retention verdicts exist#2636

Open
NgDMau wants to merge 1 commit into confident-ai:main from NgDMau:fix/knowledge-retention-zero-verdicts
Conversation

@NgDMau NgDMau commented Apr 28, 2026

Problem

KnowledgeRetentionMetric._calculate_score() returns 0 when self.verdicts
is empty. This happens in short conversations (1-2 turns) where there is no
accumulated knowledge to forget.

Zero verdicts means "nothing was forgotten" — the reason text confirms this
("no attritions... all knowledge retained") but the score contradicts it by
returning 0.

Fix

Return 1.0 instead of 0 when there are no verdicts. "Nothing to forget" is
a perfect retention score.
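
A minimal standalone sketch of the proposed scoring logic. The `Verdict` class and `calculate_score` function below are illustrative stand-ins for deepeval's internal `self.verdicts` items and `_calculate_score()`, not copies of the library's actual code:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    verdict: str  # "yes" = knowledge retained, "no" = attrition detected

def calculate_score(verdicts: list[Verdict]) -> float:
    # With no verdicts there is no accumulated knowledge that could
    # have been forgotten, so treat the case as perfect retention (1.0)
    # instead of the current behavior of returning 0.
    if not verdicts:
        return 1.0
    retained = sum(1 for v in verdicts if v.verdict.strip().lower() != "no")
    return retained / len(verdicts)
```

With this change, an empty verdict list scores 1.0, which matches the generated reason text ("no attritions... all knowledge retained").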

How to reproduce

from deepeval.metrics import KnowledgeRetentionMetric
from deepeval.test_case import ConversationalTestCase
from deepeval.test_case.conversational_test_case import Turn

test_case = ConversationalTestCase(
    turns=[
        Turn(role="user", content="What is WOVN?"),
        Turn(role="assistant", content="WOVN is a localization platform."),
    ]
)

metric = KnowledgeRetentionMetric()
metric.measure(test_case)
print(metric.score)   # 0.0 — should be 1.0
print(metric.reason)  # "no attritions" — contradicts the score

vercel Bot commented Apr 28, 2026

@NgDMau is attempting to deploy a commit to the Confident AI Team on Vercel.

A member of the Team first needs to authorize it.

1 participant