Merged
@@ -4,38 +4,40 @@ import { IAIProvider } from './ai-provider.interface.js';

@Injectable()
export class AmazonBedrockAiProvider implements IAIProvider {
  private readonly bedrockRuntimeClient: BedrockRuntimeClient;
  private readonly modelId: string = 'global.anthropic.claude-sonnet-4-5-20250929-v1:0';
  private readonly temperature: number = 0.7;
  private readonly maxTokens: number = 1024;
  private readonly region: string = 'us-west-2';
  private readonly topP: number = 0.9;

  constructor() {
    this.bedrockRuntimeClient = new BedrockRuntimeClient({
      region: this.region,
    });
  }

  public async generateResponse(prompt: string): Promise<string> {
    const conversation = [
      {
        role: 'user' as const,
        content: [{ text: prompt }],
      },
    ];

    const command = new ConverseCommand({
      modelId: this.modelId,
      messages: conversation,
      inferenceConfig: { maxTokens: this.maxTokens, temperature: this.temperature, topP: this.topP },
    });
    try {
      const response = await this.bedrockRuntimeClient.send(command);
      console.info('AI response received from Amazon Bedrock.');

Copilot AI Dec 23, 2025

The codebase has a structured logging system using WinstonLogger (see backend/src/entities/logging/winston-logger.ts). While console.info/console.error are used in some places in the codebase, for consistency and better log management (centralized configuration, log levels, transports), consider injecting and using WinstonLogger instead of console.info for this logging statement.

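A minimal sketch of that suggestion, assuming WinstonLogger is an injectable class exported from backend/src/entities/logging/winston-logger.ts with an info(message: string) method and registered in the module's providers; the relative import path and the method name are assumptions, not shown in this diff:

import { Injectable } from '@nestjs/common';
import { BedrockRuntimeClient, ConverseCommand } from '@aws-sdk/client-bedrock-runtime';
import { WinstonLogger } from '../logging/winston-logger.js'; // hypothetical path
import { IAIProvider } from './ai-provider.interface.js';

@Injectable()
export class AmazonBedrockAiProvider implements IAIProvider {
  private readonly bedrockRuntimeClient: BedrockRuntimeClient;
  private readonly region: string = 'us-west-2';

  // Nest resolves WinstonLogger from the module's providers and injects it here.
  constructor(private readonly logger: WinstonLogger) {
    this.bedrockRuntimeClient = new BedrockRuntimeClient({ region: this.region });
  }

  public async generateResponse(prompt: string): Promise<string> {
    const command = new ConverseCommand({
      modelId: 'global.anthropic.claude-sonnet-4-5-20250929-v1:0',
      messages: [{ role: 'user' as const, content: [{ text: prompt }] }],
      inferenceConfig: { maxTokens: 1024, temperature: 0.7, topP: 0.9 },
    });
    const response = await this.bedrockRuntimeClient.send(command);
    // Structured log in place of console.info; the logger method name is assumed.
    this.logger.info('AI response received from Amazon Bedrock.');
    return response.output?.message?.content?.[0]?.text ?? 'No response generated.';
  }
}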
      const responseText = response.output.message?.content[0].text;
      console.info('AI response text. ', responseText);

Copilot AI Dec 23, 2025

Logging the full AI response text may expose sensitive information. The response could contain user data, internal database structure details, or other confidential information that shouldn't be logged in production environments. Consider removing this log statement, wrapping it in a development-only check, or, at a minimum, redacting or truncating sensitive portions of the response.

Suggested change (remove this line):
console.info('AI response text. ', responseText);

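A hedged sketch of the development-only, truncated variant mentioned above, assuming this project distinguishes environments via NODE_ENV (the 200-character cap is arbitrary):

      // Log only outside production, and truncate so the full model output never lands in logs.
      if (process.env.NODE_ENV !== 'production' && responseText) {
        console.info('AI response text (truncated): ', responseText.slice(0, 200));
      }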
      return responseText || 'No response generated.';
    } catch (error) {
      console.error('Error generating AI response:', error);
      throw new Error('Failed to generate AI response.');
    }
  }
}