Copilot AI commented Nov 16, 2025

Addresses missing error handling in LLMSummarizer.Summarize(), where the response stream returned by GenerateContent() was iterated without checking errors, causing silent failures.

Changes

  • Error handling: Unpack both response and error from iter.Seq2[*model.LLMResponse, error] during iteration
  • Type correction: Fixed Parts field type from []genai.Part back to []*genai.Part (compilation error from commit 0ff4ff5)
```go
// Before
for resp := range responseStream {
    if resp == nil {
        continue
    }
    // ... process response
}

// After
for resp, err := range responseStream {
    if err != nil {
        return nil, fmt.Errorf("LLM API call failed: %w", err)
    }
    if resp == nil {
        continue
    }
    // ... process response
}
```

API failures now surface immediately with context instead of returning nil content silently.


Co-authored-by: raphaelmansuy <1003084+raphaelmansuy@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Address feedback on Feat/compaction pull request" to "Fix missing error handling in LLM response stream iteration" on Nov 16, 2025
Copilot AI requested a review from raphaelmansuy November 16, 2025 09:58