feat(dynatrace): Add support for Dynatrace backend #5 #15
base: main
Conversation
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds Dynatrace support: a new DynatraceBackend with DQL-based queries, trace/span parsing, service discovery, and health checks; integrates dynatrace into config and server CLI; updates docs and .env.example; and adds unit and integration tests plus VCR cassettes and fixtures for Dynatrace.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant Server as MCP_Server
    participant Backend as DynatraceBackend
    participant DynatraceAPI as Dynatrace_API
    participant Filter as ClientSide_Filter
    Client->>Server: Request (search_traces / search_spans / get_trace / list_services)
    Server->>Backend: Delegate request
    Backend->>DynatraceAPI: HTTP call (/api/v2/ql/query:execute, /api/v2/traces/{id}, /api/v2/services) with Api-Token header
    DynatraceAPI-->>Backend: JSON response (trace list / trace / services)
    Backend->>Backend: Optionally fetch individual traces / build DQL
    Backend->>Filter: Apply client-side filters for predicates not handled server-side
    Filter-->>Backend: Filtered results
    Backend->>Backend: Parse into TraceData/SpanData via _parse_dynatrace_trace/_parse_dynatrace_span
    Backend-->>Server: Return TraceData / SpanData / service list / HealthCheckResponse
    Server-->>Client: Forward formatted response
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches: ✅ 3 checks passed
Caution
Changes requested ❌
Reviewed everything up to 47b4d8e in 2 minutes and 18 seconds.
- Reviewed 1132 lines of code in 7 files
- Skipped 0 files when reviewing
- Skipped posting 3 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. .env.example:28
   - Draft comment: Ensure the file ends with a newline to conform to best practices.
   - Reason this comment was not posted: confidence changes required (33%) <= threshold (50%).
2. .env.example:30
   - Draft comment: The configuration variable 'BACKEND_' appears to be incomplete. Is there a missing part to the variable name or its value? Please clarify whether this variable is intended to be fully named or if it should be removed.
   - Reason this comment was not posted: marked as duplicate.
3. src/opentelemetry_mcp/backends/dynatrace.py:27
   - Draft comment: Typo: 'OpenLLMetry' in the class docstring should be corrected to 'OpenTelemetry'.
   - Reason this comment was not posted: decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence = 10% vs. threshold = 50%). OpenLLMetry is a real project (https://github.com/traceloop/openllmetry) that provides OpenTelemetry instrumentation for LLMs, and it defines the gen_ai.* semantic conventions that the docstring explicitly mentions. The author therefore appears to have written "OpenLLMetry" intentionally to refer to those conventions, not as a typo. This draft comment should be deleted; it is incorrect.
Workflow ID: wflow_66oEiLqBzrwHrAXg
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
README.md (1)
384-384: Malformed HTML tag breaks collapsible section.

The opening tag uses `<parameter name="name">` instead of `<summary>`. This will break the collapsible section rendering.

```diff
-<parameter name="name">Using the repository instead?</summary>
+<summary>Using the repository instead?</summary>
```
🧹 Nitpick comments (8)
tests/backends/test_dynatrace.py (1)
8-8: Unused imports.

`Filter` and `FilterType` are imported but not used in any tests. Consider removing them or adding tests that exercise filter functionality.

```diff
-from opentelemetry_mcp.models import Filter, FilterOperator, FilterType, SpanQuery, TraceQuery
+from opentelemetry_mcp.models import FilterOperator, SpanQuery, TraceQuery
```

src/opentelemetry_mcp/backends/dynatrace.py (7)
11-14: Unused imports.

`Filter` and `FilterType` are imported but not used directly in this module.

```diff
 from opentelemetry_mcp.models import (
-    Filter,
     FilterOperator,
-    FilterType,
     SpanData,
     SpanQuery,
     TraceData,
     TraceQuery,
 )
```
94-104: Timezone-naive datetime may cause comparison issues.

`datetime.now()` returns a timezone-naive datetime, but `query.start_time`/`query.end_time` might be timezone-aware (if parsed from ISO 8601 with timezone). This could cause comparison issues or unexpected behavior. Consider using `datetime.now(timezone.utc)` for consistency:

```diff
+from datetime import datetime, timedelta, timezone
+
 # In search_traces:
 if query.start_time:
     params["from"] = int(query.start_time.timestamp() * 1000)
 else:
     # Default to last 24 hours if not specified
-    params["from"] = int((datetime.now() - timedelta(days=1)).timestamp() * 1000)
+    params["from"] = int((datetime.now(timezone.utc) - timedelta(days=1)).timestamp() * 1000)
 if query.end_time:
     params["to"] = int(query.end_time.timestamp() * 1000)
 else:
-    params["to"] = int(datetime.now().timestamp() * 1000)
+    params["to"] = int(datetime.now(timezone.utc).timestamp() * 1000)
```
137-144: Hardcoded limit of 50 traces may be restrictive.

The limit of 50 traces for detail fetching is hardcoded. For users requesting `limit=100`, they'll only get details for 50 traces. Consider making this configurable or documenting this behavior.
200-215: Doubling the limit could exceed API constraints.

`limit=query.limit * 2` could potentially exceed Dynatrace API limits if the original query limit is large. Consider capping this to a maximum value.

```diff
 trace_query = TraceQuery(
     service_name=query.service_name,
     operation_name=query.operation_name,
     start_time=query.start_time,
     end_time=query.end_time,
     min_duration_ms=query.min_duration_ms,
     max_duration_ms=query.max_duration_ms,
     tags=query.tags,
-    limit=query.limit * 2,  # Fetch more traces to ensure we get enough spans
+    limit=min(query.limit * 2, 1000),  # Fetch more traces, but cap at API max
     has_error=query.has_error,
```
291-293: Redundant import inside method.

`timedelta` is already imported at the top of the file (line 4). This inline import is unnecessary.

```diff
 # Fallback: Extract services from trace search
 # Search for traces in the last 24 hours to discover services
-from datetime import timedelta
-
 params = {
```
330-332: Same redundant import.

Remove the duplicate `from datetime import timedelta` import.

```diff
 # Search for traces from this service to discover operations
-from datetime import timedelta
-
 params = {
```
476-485: Epoch fallback when timestamp is 0 or missing.

If `start_time_ms` is 0 or missing, `datetime.fromtimestamp(0 / 1000)` returns the epoch (1970-01-01), which is unlikely to be the intended behavior. Consider validating the timestamp:

```diff
 # Parse timestamps (Dynatrace uses milliseconds since epoch)
 start_time_ms = span_data.get("startTime", span_data.get("start_time", 0))
+if not start_time_ms:
+    logger.warning(f"Span {span_id_raw} has no start time, using current time")
+    start_time = datetime.now(timezone.utc)
 if isinstance(start_time_ms, str):
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (6)
- .env.example (1 hunks)
- README.md (5 hunks)
- src/opentelemetry_mcp/backends/dynatrace.py (1 hunks)
- src/opentelemetry_mcp/config.py (3 hunks)
- src/opentelemetry_mcp/server.py (4 hunks)
- tests/backends/test_dynatrace.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
src/opentelemetry_mcp/server.py (1)
src/opentelemetry_mcp/backends/dynatrace.py (1)
DynatraceBackend(23-584)
tests/backends/test_dynatrace.py (3)
src/opentelemetry_mcp/backends/dynatrace.py (11)
- DynatraceBackend (23-584), _create_headers (30-39), get_supported_operators (41-52), search_traces (54-161), get_trace (232-258), list_services (260-315), get_service_operations (317-353), search_spans (163-230), health_check (355-381), _parse_dynatrace_span (453-584), _parse_dynatrace_trace (383-451)

src/opentelemetry_mcp/models.py (6)
- Filter (47-82), FilterOperator (14-36), FilterType (39-44), SpanQuery (593-650), TraceQuery (460-559), gen_ai_system (248-250)

src/opentelemetry_mcp/backends/base.py (1)
- client (29-42)
🪛 dotenv-linter (4.0.0)
.env.example
[warning] 28-28: [DuplicatedKey] The BACKEND_TYPE key is duplicated
(DuplicatedKey)
[warning] 28-28: [UnorderedKey] The BACKEND_TYPE key should go before the MAX_TRACES_PER_QUERY key
(UnorderedKey)
[warning] 29-29: [DuplicatedKey] The BACKEND_URL key is duplicated
(DuplicatedKey)
[warning] 29-29: [UnorderedKey] The BACKEND_URL key should go before the MAX_TRACES_PER_QUERY key
(UnorderedKey)
[warning] 30-30: [EndingBlankLine] No blank line at the end of the file
(EndingBlankLine)
[warning] 30-30: [KeyWithoutValue] The BACKEND_ key should be with a value or have an equal sign
(KeyWithoutValue)
[warning] 30-30: [UnorderedKey] The BACKEND_ key should go before the BACKEND_TYPE key
(UnorderedKey)
🔇 Additional comments (17)
README.md (3)
443-443: Documentation correctly updated for Dynatrace support. The core capabilities section now accurately reflects the addition of Dynatrace as a supported backend.

466-474: Backend Support Matrix accurately documents Dynatrace capabilities. The feature matrix correctly shows that Dynatrace supports all major features (search traces, advanced filters, span search, token tracking, error traces, LLM tools).

585-628: Comprehensive Dynatrace setup documentation. The documentation includes clear configuration details, token creation steps, a Claude Desktop integration example, and troubleshooting guidance. This will help users onboard with Dynatrace.
src/opentelemetry_mcp/config.py (3)
19-19: Type definition correctly extended for Dynatrace. The Literal type now includes "dynatrace" alongside existing backends.

38-41: Validation logic consistent with type definition. The environment variable validation correctly includes "dynatrace" and provides a clear error message.

105-109: CLI override validation matches environment validation. The validation in `apply_cli_overrides` is consistent with `from_env`, maintaining a single source of truth for valid backend types.

tests/backends/test_dynatrace.py (4)
14-21: Well-structured test fixture. The fixture provides a properly configured backend instance with realistic Dynatrace configuration values.

42-85: Good coverage for trace search functionality. The test properly mocks HTTP responses and validates that traces are returned with correct service names. The mock setup correctly simulates the multi-call pattern (search + get_trace for each result).

138-167: Fallback behavior test is valuable. Testing the fallback from `/api/v2/services` to trace-based service discovery ensures resilience when the services endpoint is unavailable.

260-282: Span parsing test validates GenAI attribute handling. The test verifies that OpenLLMetry semantic conventions (`gen_ai.system`, `gen_ai.request.model`) are correctly parsed into `SpanAttributes`.

src/opentelemetry_mcp/server.py (3)
12-12: Import follows existing pattern. The DynatraceBackend import is correctly placed alongside other backend imports.

98-104: Backend creation follows established pattern. The Dynatrace backend initialization mirrors the structure of other backends (Jaeger, Tempo, Traceloop), passing `url`, `api_key`, and `timeout`. This maintains consistency across the codebase.

606-606: CLI choices correctly updated. The `--backend` option now includes "dynatrace" in the allowed choices.

src/opentelemetry_mcp/backends/dynatrace.py (4)
30-39: Header creation correctly implements Dynatrace API authentication. Uses the `Api-Token` format as required by Dynatrace API v2. The conditional inclusion of the Authorization header when `api_key` is present is appropriate.
383-451: Robust trace parsing implementation. The `_parse_dynatrace_trace` method handles multiple Dynatrace response formats (checking both `spans` and `data.spans`), correctly identifies root spans, calculates trace duration from span boundaries, and aggregates error status. Good defensive programming with the try/except wrapper.
503-525: Flexible attribute parsing handles multiple formats. The code correctly handles attributes as both dict and list of key-value pairs, and also checks for an alternative `tags` format. This flexibility is important for Dynatrace API compatibility.
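The dict-or-list handling praised here can be illustrated with a small standalone sketch. Note that `normalize_attributes` is a hypothetical helper written for illustration, not the backend's actual code (the real logic lives inside `_parse_dynatrace_span`):

```python
from typing import Any


def normalize_attributes(raw: Any) -> dict[str, Any]:
    """Normalize span attributes that may arrive either as a plain dict
    or as a list of {"key": ..., "value": ...} pairs (hypothetical helper)."""
    if isinstance(raw, dict):
        return dict(raw)
    attrs: dict[str, Any] = {}
    if isinstance(raw, list):
        for item in raw:
            if isinstance(item, dict) and item.get("key") is not None:
                attrs[item["key"]] = item.get("value")
    return attrs


# Both input shapes yield the same result
print(normalize_attributes({"gen_ai.system": "openai"}))
print(normalize_attributes([{"key": "gen_ai.system", "value": "openai"}]))
```

Accepting both shapes up front keeps the rest of the parser working against a single canonical dict, regardless of which format the API returns.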
569-580: SpanData construction is complete and well-structured. All required fields are populated with appropriate fallbacks and type conversions.
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
…/opentelemetry-mcp-server into opentelementry/dynatracebackend
Actionable comments posted: 2
🧹 Nitpick comments (3)
src/opentelemetry_mcp/backends/dynatrace.py (3)
136-142: Consider making the trace fetch limit configurable. The hardcoded limit of 50 traces prevents excessive API calls, which is good. However, consider making this configurable (e.g., via a class attribute or constructor parameter) to allow tuning based on deployment needs. Apply this diff to make the limit configurable:

```diff
+    # Maximum number of traces to fetch detailed data for
+    MAX_TRACE_DETAILS_FETCH = 50
+
     async def search_traces(self, query: TraceQuery) -> list[TraceData]:
         """Search for traces using Dynatrace Trace API v2.
         # ...
         # Limit the number of traces to fetch details for
-        max_traces_to_fetch = min(len(trace_results), 50)
+        max_traces_to_fetch = min(len(trace_results), self.MAX_TRACE_DETAILS_FETCH)
```
161-228: LGTM with optional consideration. The span search implementation correctly converts to a trace query and flattens results. The `limit * 2` heuristic for fetching traces is reasonable, though note that it may still under-fetch or over-fetch spans depending on trace structure. For future optimization, consider tracking the actual span count while fetching traces and stopping early once the desired span count is reached.
451-582: LGTM with optional refactoring consideration. The span parsing logic comprehensively handles multiple Dynatrace data formats, including various timestamp formats, attribute structures, and event representations. The defensive parsing with fallbacks is appropriate for integrating with an external API.
For future maintainability, consider extracting helper methods for:
- Timestamp parsing (lines 475-487)
- Attribute parsing (lines 502-520)
- Event parsing (lines 539-565)
This would reduce the method's cognitive complexity while preserving its robust format handling.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- src/opentelemetry_mcp/attributes.py (1 hunks)
- src/opentelemetry_mcp/backends/dynatrace.py (1 hunks)
- tests/backends/test_dynatrace.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/backends/test_dynatrace.py
🧰 Additional context used
🧬 Code graph analysis (1)
src/opentelemetry_mcp/attributes.py (1)
tests/backends/test_dynatrace.py (1)
- backend (16-22)
🔇 Additional comments (6)
src/opentelemetry_mcp/attributes.py (1)
164-164: LGTM! The addition of "dynatrace" to the backend Literal type is correct and consistent with the new backend implementation.
src/opentelemetry_mcp/backends/dynatrace.py (5)
39-50: LGTM! The hybrid filtering approach (minimal native API support with client-side fallback) is well-documented and appropriate for Dynatrace API limitations.

230-256: LGTM! The trace retrieval logic is straightforward and includes appropriate error handling. The ValueError raised on parse failure provides a clear error message.

353-379: LGTM! The health check implementation correctly leverages `list_services` to verify backend connectivity and returns properly structured responses for both healthy and unhealthy states.

381-449: LGTM! The trace parsing logic handles multiple Dynatrace data formats appropriately. The duration calculation, root span identification, and status determination are all implemented correctly.
28-37: The `Api-Token` prefix is correct for Dynatrace API v2 when using API or personal access tokens. The implementation matches the official Dynatrace authentication requirements.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/opentelemetry_mcp/backends/dynatrace.py (1)
144-151: Sequential `get_trace` calls may be slow for many results. Each trace detail is fetched one by one; this can underutilize I/O for larger result sets. Consider batching with `asyncio.gather` (or a small concurrency semaphore) while keeping the cap at 50 to limit API load.
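A semaphore-bounded variant might look like this. This is a sketch with assumed names: `fetch_one` stands in for a coroutine such as `get_trace`, and `fetch_details` is a hypothetical helper, not part of the backend:

```python
import asyncio


async def fetch_details(ids: list[str], fetch_one, max_concurrency: int = 10, cap: int = 50):
    """Fetch up to `cap` items concurrently, at most `max_concurrency`
    in flight at a time, dropping items whose fetch raised."""
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded(item_id: str):
        async with sem:
            try:
                return await fetch_one(item_id)
            except Exception:
                return None  # a real implementation would log the failure

    results = await asyncio.gather(*(bounded(i) for i in ids[:cap]))
    return [r for r in results if r is not None]


async def demo():
    async def fake_fetch(trace_id: str) -> str:
        return f"trace:{trace_id}"

    return await fetch_details(["a", "b", "c"], fake_fetch)


print(asyncio.run(demo()))  # ['trace:a', 'trace:b', 'trace:c']
```

The semaphore keeps the API-load cap while still overlapping the per-trace HTTP round trips, which is where the sequential loop loses time.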
🧹 Nitpick comments (6)
src/opentelemetry_mcp/backends/dynatrace.py (6)
92-103: Prefer UTC-aware timestamps over `datetime.now()` without timezone.

You're mixing potentially timezone-aware query times (from `TraceQuery`/`SpanQuery`) with naive `datetime.now()` calls. For consistency and to avoid local-TZ surprises, consider using UTC everywhere, e.g.:

```diff
-from datetime import datetime, timedelta
+from datetime import datetime, timedelta, timezone
@@
-    params["from"] = int((datetime.now() - timedelta(days=1)).timestamp() * 1000)
+    params["from"] = int(
+        (datetime.now(timezone.utc) - timedelta(days=1)).timestamp() * 1000
+    )
@@
-    params["to"] = int(datetime.now().timestamp() * 1000)
+    params["to"] = int(datetime.now(timezone.utc).timestamp() * 1000)
@@
-    params = {
-        "from": int((datetime.now() - timedelta(days=1)).timestamp() * 1000),
-        "to": int(datetime.now().timestamp() * 1000),
+    params = {
+        "from": int(
+            (datetime.now(timezone.utc) - timedelta(days=1)).timestamp() * 1000
+        ),
+        "to": int(datetime.now(timezone.utc).timestamp() * 1000),
@@
-    params = {
-        "service": service_name,
-        "from": int((datetime.now() - timedelta(days=1)).timestamp() * 1000),
-        "to": int(datetime.now().timestamp() * 1000),
+    params = {
+        "service": service_name,
+        "from": int(
+            (datetime.now(timezone.utc) - timedelta(days=1)).timestamp() * 1000
+        ),
+        "to": int(datetime.now(timezone.utc).timestamp() * 1000),
```

Also applies to: 292-295, 330-335
161-228: `search_spans` logic is reasonable; watch for double-filtering semantics. Using `TraceQuery` as a proxy for Dynatrace's trace API and then applying span-level client filters works, and over-fetching with `limit * 2` is a pragmatic choice. Just be aware that filters present in both `TraceQuery` and `SpanQuery` may effectively be applied twice (trace-level and span-level); if that's intentional, a short comment in the code could clarify it for future readers.
258-312: `list_services` fallback is sensible; pagination limits are implicit. The primary `/api/v2/services` call with a trace-based fallback over the last 24h and `limit=1000` is a good compromise. If environments can have more than 1000 relevant traces in 24h, you might later want to expose the lookback/limit as config, but the default is fine for an initial implementation.
314-349: `get_service_operations` is straightforward but shares the same 24h/limit assumptions. Querying traces for a service over the last 24h and deduping operation names is a reasonable heuristic. Similar to `list_services`, consider whether you'll eventually need configurable time windows or pagination, but nothing blocking here.
351-377: Health check behavior and docstring are slightly inconsistent. The implementation always catches exceptions and returns an unhealthy `HealthCheckResponse`, so `httpx.HTTPError` will not escape despite what the docstring's "Raises" section says. Either remove the "Raises" note or adjust behavior to optionally propagate errors.
379-447: Trace parsing is robust; duration calc can be simplified. The handling of multiple possible trace formats and empty/invalid traces is defensive and good. You can simplify duration computation and avoid extra timestamp conversions:

```diff
-        end_times = [
-            datetime.fromtimestamp(
-                s.start_time.timestamp() + (s.duration_ms / 1000), tz=s.start_time.tzinfo
-            )
-            for s in spans
-        ]
+        end_times = [s.start_time + timedelta(milliseconds=s.duration_ms) for s in spans]
```

This keeps timezone info implicitly and is easier to read.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
- src/opentelemetry_mcp/backends/dynatrace.py (1 hunks)
🔇 Additional comments (4)
src/opentelemetry_mcp/backends/dynatrace.py (4)
28-37: Headers helper looks fine; confirm integration with BaseBackend. The implementation is straightforward and correct; just ensure `BaseBackend` actually wires `_create_headers()` into the `httpx.AsyncClient` (or equivalent), since it's not referenced directly in this class.
39-50: Supported operator set is minimal but correct. Returning only `FilterOperator.EQUALS` matches the current use in this backend. You can extend this later if Dynatrace query parameters support more operators.
52-159: Overall `search_traces` flow is solid and defensive. The split between native vs client-side filters, bounded detail fetch (`max_traces_to_fetch`), and graceful handling of per-trace failures all look good and match the intended hybrid filtering design.
230-257: `get_trace` behavior and error handling look correct. API interaction and parsing flow are straightforward; raising `ValueError` when parsing fails is clear and should be easy for callers/tests to reason about.
Actionable comments posted: 2
♻️ Duplicate comments (2)
src/opentelemetry_mcp/backends/dynatrace.py (2)
146-155: Consider parallelizing trace fetching for better performance. Traces are fetched sequentially in the loop. For improved performance when fetching multiple traces, consider using `asyncio.gather` to parallelize the `get_trace` calls. Example refactor:

```python
# Collect trace fetch tasks
trace_tasks = [
    self.get_trace(str(trace_id))
    for trace_result in trace_results[:max_traces_to_fetch]
    if (trace_id := trace_result.get("traceId") or trace_result.get("trace_id"))
]

# Fetch all traces in parallel
trace_results_list = await asyncio.gather(*trace_tasks, return_exceptions=True)

for result in trace_results_list:
    if isinstance(result, Exception):
        logger.warning(f"Failed to fetch trace: {result}")
    elif result:
        traces.append(result)
```
539-564: Potential TypeError when events is null. Line 539 uses `span_data.get("events", span_data.get("logs", []))`, which will return `None` if the `"events"` key exists but its value is `null`. Iterating over `None` will raise a `TypeError` and cause the entire span to be dropped. Apply this diff for safer event handling:

```diff
 # Parse events/logs
 events: list[SpanEvent] = []
-for event_data in span_data.get("events", span_data.get("logs", [])):
+events_source = span_data.get("events") or span_data.get("logs") or []
+for event_data in events_source:
+    if not isinstance(event_data, dict):
+        continue
     event_attrs: dict[str, str | int | float | bool] = {}
-    if isinstance(event_data, dict):
-        if "attributes" in event_data:
-            event_attrs.update(event_data["attributes"])
-        elif "fields" in event_data:
-            # Handle Jaeger-style fields
-            for field in event_data["fields"]:
-                if isinstance(field, dict):
-                    key = field.get("key")
-                    value = field.get("value")
-                    if key:
-                        event_attrs[key] = value
+    if "attributes" in event_data:
+        event_attrs.update(event_data["attributes"])
+    elif "fields" in event_data:
+        # Handle Jaeger-style fields
+        for field in event_data["fields"]:
+            if isinstance(field, dict):
+                key = field.get("key")
+                value = field.get("value")
+                if key:
+                    event_attrs[key] = value
-    event_name = event_data.get("name", "event") if isinstance(event_data, dict) else "event"
-    event_timestamp = (
-        event_data.get("timestamp", 0) if isinstance(event_data, dict) else 0
-    )
+    event_name = event_data.get("name", "event")
+    event_timestamp = event_data.get("timestamp", 0)

     events.append(
         SpanEvent(
```
🧹 Nitpick comments (1)
src/opentelemetry_mcp/backends/dynatrace.py (1)
473-482: Add timezone for consistency in timestamp parsing. Line 482 creates a naive `datetime` object from the timestamp, while line 478 creates a timezone-aware datetime when parsing ISO format. For consistency and to avoid timezone-related issues, consider using `datetime.fromtimestamp(start_time_ms / 1000, tz=timezone.utc)`. Apply this diff:

```diff
 else:
-    start_time = datetime.fromtimestamp(start_time_ms / 1000)
+    start_time = datetime.fromtimestamp(start_time_ms / 1000, tz=timezone.utc)
```
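The naive-vs-aware difference is easy to demonstrate in isolation (a standalone illustration, not the backend's code):

```python
from datetime import datetime, timezone

ms = 1_700_000_000_000  # an example timestamp in milliseconds since epoch

naive = datetime.fromtimestamp(ms / 1000)                    # local time, tzinfo is None
aware = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)   # UTC, tzinfo set

print(naive.tzinfo)  # None
print(aware.tzinfo)  # UTC

# Mixing the two styles breaks comparisons such as min()/max() over span times:
try:
    _ = naive < aware
except TypeError as exc:
    print("comparison failed:", exc)
```

Because trace-duration calculations compare span start times with `min()`/`max()`, a single naive datetime among aware ones is enough to make parsing fail at runtime.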
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
- src/opentelemetry_mcp/backends/dynatrace.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/opentelemetry_mcp/backends/dynatrace.py (4)
src/opentelemetry_mcp/attributes.py (4)
- HealthCheckResponse (160-172), SpanAttributes (12-137), SpanEvent (140-157), get (88-119)

src/opentelemetry_mcp/backends/base.py (2)
- BaseBackend (12-160), client (29-42)

src/opentelemetry_mcp/backends/filter_engine.py (1)
- FilterEngine (13-284)

src/opentelemetry_mcp/models.py (7)
- FilterOperator (14-36), SpanData (228-255), SpanQuery (593-650), TraceData (428-457), TraceQuery (460-559), has_error (253-255), gen_ai_system (248-250)
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/opentelemetry_mcp/backends/dynatrace.py (1)
540-567: Make event iteration null-safe and normalize timestamps. The pattern `span_data.get("events", span_data.get("logs", []))` will fail if the "events" key exists but is `None`. Additionally, event timestamps are not normalized (could be 0, milliseconds, or seconds). Apply this diff:

```diff
 # Parse events/logs
 events: list[SpanEvent] = []
-for event_data in span_data.get("events", span_data.get("logs", [])):
+events_source = span_data.get("events") or span_data.get("logs") or []
+for event_data in events_source:
+    if not isinstance(event_data, dict):
+        continue
+
     event_attrs: dict[str, str | int | float | bool] = {}
-    if isinstance(event_data, dict):
-        if "attributes" in event_data:
-            event_attrs.update(event_data["attributes"])
-        elif "fields" in event_data:
-            # Handle Jaeger-style fields
-            for field in event_data["fields"]:
-                if isinstance(field, dict):
-                    key = field.get("key")
-                    value = field.get("value")
-                    if key:
-                        event_attrs[key] = value
-
-    event_name = event_data.get("name", "event") if isinstance(event_data, dict) else "event"
-    event_timestamp = (
-        event_data.get("timestamp", 0) if isinstance(event_data, dict) else 0
-    )
+    if "attributes" in event_data:
+        event_attrs.update(event_data["attributes"])
+    elif "fields" in event_data:
+        # Handle Jaeger-style fields
+        for field in event_data["fields"]:
+            if isinstance(field, dict):
+                key = field.get("key")
+                value = field.get("value")
+                if key:
+                    event_attrs[key] = value
+
+    event_name = event_data.get("name", "event")
+    event_timestamp_raw = event_data.get("timestamp", 0)
+
+    # Normalize timestamp to milliseconds
+    if isinstance(event_timestamp_raw, str):
+        try:
+            event_timestamp = int(
+                datetime.fromisoformat(
+                    event_timestamp_raw.replace("Z", "+00:00")
+                ).timestamp() * 1000
+            )
+        except Exception:
+            event_timestamp = int(float(event_timestamp_raw))
+    else:
+        event_timestamp = int(event_timestamp_raw)

     events.append(
         SpanEvent(
```
🧹 Nitpick comments (1)
src/opentelemetry_mcp/backends/dynatrace.py (1)
144-153: Consider parallelizing trace detail fetching. Trace details are fetched sequentially in the for loop, which can be slow when processing many traces. Consider using `asyncio.gather()` to fetch multiple traces concurrently. Example refactor:

```python
# Collect all trace fetch tasks
trace_tasks = []
for trace_result in trace_results[:max_traces_to_fetch]:
    trace_id = trace_result.get("traceId") or trace_result.get("trace_id")
    if trace_id:
        trace_tasks.append(self.get_trace(str(trace_id)))

# Fetch all traces concurrently
trace_results_list = await asyncio.gather(*trace_tasks, return_exceptions=True)

# Process results
for trace_id_index, result in enumerate(trace_results_list):
    if isinstance(result, Exception):
        trace_id = (
            trace_results[trace_id_index].get("traceId")
            or trace_results[trace_id_index].get("trace_id")
        )
        logger.warning(f"Failed to fetch trace {trace_id}: {result}")
    elif result:
        traces.append(result)
```

Note: You'll need to add `import asyncio` at the top of the file.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
- src/opentelemetry_mcp/backends/dynatrace.py (1 hunks)
🔇 Additional comments (7)
src/opentelemetry_mcp/backends/dynatrace.py (7)
28-37: LGTM! Header construction correctly implements the Dynatrace API token authentication format.
39-50: LGTM! The hybrid filtering approach (native EQUALS + client-side for others) is appropriate for Dynatrace API limitations.
161-228: LGTM! The approach of converting `SpanQuery` to `TraceQuery` and flattening spans is appropriate given Dynatrace's API structure. The limit multiplication and client-side filtering logic are sound.
230-256: LGTM! Clean implementation with appropriate error handling.
258-314: LGTM! The fallback strategy from the services endpoint to trace-based discovery is robust, and timezone handling is consistent.
316-353: LGTM! Clean implementation with proper timezone handling.
355-381: LGTM! Using `list_services()` as a health check is a practical approach that validates both connectivity and API functionality.
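The pattern endorsed here — probing a cheap read endpoint and mapping success or failure onto a health status — can be sketched in miniature (hypothetical class and field names; the real backend's interface may differ):

```python
import asyncio
from dataclasses import dataclass


@dataclass
class HealthCheckResponse:
    healthy: bool
    message: str


class FakeBackend:
    """Stand-in for a real tracing-backend client."""

    async def list_services(self) -> list[str]:
        # A real implementation would call the backend's services endpoint.
        return ["checkout", "payments"]

    async def health_check(self) -> HealthCheckResponse:
        # Reusing list_services() as the probe exercises auth,
        # connectivity, and response parsing in one call.
        try:
            services = await self.list_services()
            return HealthCheckResponse(True, f"OK, {len(services)} services visible")
        except Exception as exc:
            return HealthCheckResponse(False, f"Backend unreachable: {exc}")


result = asyncio.run(FakeBackend().health_check())
print(result.healthy)  # True
```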
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
…/opentelemetry-mcp-server into opentelementry/dynatracebackend
Actionable comments posted: 1
♻️ Duplicate comments (2)
src/opentelemetry_mcp/backends/dynatrace.py (2)
423-432: Normalize span start times to UTC to avoid mixing naive and aware datetimes
`_parse_dynatrace_span` creates `start_time` via different code paths:

- ISO string → `datetime.fromisoformat(...)` (may return aware or naive depending on input).
- Fallback string → `datetime.fromtimestamp(...)` without a timezone.
- Numeric ms → `datetime.fromtimestamp(..., tz=timezone.utc)` (always aware).

Later, `_parse_dynatrace_trace` builds `start_times` and `end_times` and calls `min()`/`max()`. If some spans are aware (UTC) and others are naive, Python will raise `TypeError: can't compare offset-naive and offset-aware datetimes`, and trace parsing fails. You can fix this by always normalizing parsed timestamps to timezone-aware UTC:

```diff
-        # Parse timestamps (Dynatrace uses milliseconds since epoch)
-        start_time_ms = span_data.get("startTime", span_data.get("start_time", 0))
-        if isinstance(start_time_ms, str):
-            # Try to parse ISO format
-            try:
-                start_time = datetime.fromisoformat(start_time_ms.replace("Z", "+00:00"))
-            except Exception:
-                start_time = datetime.fromtimestamp(int(start_time_ms) / 1000)
-        else:
-            start_time = datetime.fromtimestamp(start_time_ms / 1000, tz=timezone.utc)
+        # Parse timestamps (Dynatrace uses milliseconds since epoch) and normalize to UTC
+        start_time_ms = span_data.get("startTime", span_data.get("start_time", 0))
+        if isinstance(start_time_ms, str):
+            # Try to parse ISO format first
+            try:
+                start_time = datetime.fromisoformat(start_time_ms.replace("Z", "+00:00"))
+                if start_time.tzinfo is None:
+                    start_time = start_time.replace(tzinfo=timezone.utc)
+                else:
+                    start_time = start_time.astimezone(timezone.utc)
+            except Exception:
+                # Fallback: treat as milliseconds since epoch
+                start_time = datetime.fromtimestamp(
+                    int(start_time_ms) / 1000, tz=timezone.utc
+                )
+        else:
+            start_time = datetime.fromtimestamp(start_time_ms / 1000, tz=timezone.utc)
```

This keeps all span start times consistently UTC-aware and avoids runtime errors in the duration computation.
Also applies to: 476-486
540-567: Harden span event iteration and normalize event timestamps to nanoseconds
`for event_data in span_data.get("events", span_data.get("logs", []))` will raise if `"events"` exists but is `null`, and non-dict entries can still lead to inconsistent handling. Also, `SpanEvent.timestamp` is defined as nanoseconds, while here you pass through whatever Dynatrace returns. You can make this more robust and consistent:

```diff
-        # Parse events/logs
-        events: list[SpanEvent] = []
-        for event_data in span_data.get("events", span_data.get("logs", [])):
-            event_attrs: dict[str, str | int | float | bool] = {}
-            if isinstance(event_data, dict):
-                if "attributes" in event_data:
-                    event_attrs.update(event_data["attributes"])
-                elif "fields" in event_data:
-                    # Handle Jaeger-style fields
-                    for field in event_data["fields"]:
-                        if isinstance(field, dict):
-                            key = field.get("key")
-                            value = field.get("value")
-                            if key:
-                                event_attrs[key] = value
-
-            event_name = event_data.get("name", "event") if isinstance(event_data, dict) else "event"
-            event_timestamp = (
-                event_data.get("timestamp", 0) if isinstance(event_data, dict) else 0
-            )
-
-            events.append(
-                SpanEvent(
-                    name=event_name,
-                    timestamp=event_timestamp,
-                    attributes=event_attrs,
-                )
-            )
+        # Parse events/logs
+        events: list[SpanEvent] = []
+        events_source = span_data.get("events")
+        if events_source is None:
+            events_source = span_data.get("logs", [])
+
+        for event_data in events_source or []:
+            if not isinstance(event_data, dict):
+                continue
+
+            event_attrs: dict[str, str | int | float | bool] = {}
+            if "attributes" in event_data and isinstance(event_data["attributes"], dict):
+                event_attrs.update(event_data["attributes"])
+            elif "fields" in event_data:
+                # Handle Jaeger-style fields
+                for field in event_data["fields"] or []:
+                    if isinstance(field, dict):
+                        key = field.get("key")
+                        value = field.get("value")
+                        if key:
+                            event_attrs[key] = value
+
+            event_name = event_data.get("name", "event")
+
+            raw_ts = event_data.get("timestamp", 0)
+            if isinstance(raw_ts, str):
+                try:
+                    dt = datetime.fromisoformat(raw_ts.replace("Z", "+00:00"))
+                    if dt.tzinfo is None:
+                        dt = dt.replace(tzinfo=timezone.utc)
+                    else:
+                        dt = dt.astimezone(timezone.utc)
+                    event_timestamp = int(dt.timestamp() * 1_000_000_000)
+                except Exception:
+                    event_timestamp = 0
+            elif isinstance(raw_ts, (int, float)):
+                # Dynatrace timestamps are typically in milliseconds; convert to nanoseconds
+                event_timestamp = int(raw_ts * 1_000_000)
+            else:
+                event_timestamp = 0
+
+            events.append(
+                SpanEvent(
+                    name=event_name,
+                    timestamp=event_timestamp,
+                    attributes=event_attrs,
+                )
+            )
```

This avoids `TypeError` on `None` events/logs, skips non-dict entries cleanly, and aligns event timestamps with the `SpanEvent` nanosecond contract.
🧹 Nitpick comments (1)
src/opentelemetry_mcp/backends/dynatrace.py (1)
198-217: Double application of filters for span queries may over-constrain results. In `search_spans`, you:

- Build a `TraceQuery` from the `SpanQuery` (including duration, error, gen_ai filters, and `filters=query.filters`) and call `search_traces(trace_query)`, which applies filters at the trace level via `FilterEngine` on `TraceData`.
- Then flatten spans and re-apply `client_filters` at the span level via `FilterEngine` on `SpanData`.

For some fields (notably `duration` and potentially `has_error`), this means span filters are effectively "ANDed" with a trace-level aggregate condition. Example: a `SpanQuery` with `min_duration_ms` may drop entire traces whose overall duration is below the threshold, even if they contain individual spans that satisfy the span-level duration filter.

If you intend filters in `SpanQuery` to be purely span-level, consider only using a minimal `TraceQuery` to bound time/service and leave the rest to the final span-level filter pass, e.g.:

```diff
-        # Convert SpanQuery to TraceQuery for Dynatrace API
-        trace_query = TraceQuery(
-            service_name=query.service_name,
-            operation_name=query.operation_name,
-            start_time=query.start_time,
-            end_time=query.end_time,
-            min_duration_ms=query.min_duration_ms,
-            max_duration_ms=query.max_duration_ms,
-            tags=query.tags,
-            limit=query.limit * 2,  # Fetch more traces to ensure we get enough spans
-            has_error=query.has_error,
-            gen_ai_system=query.gen_ai_system,
-            gen_ai_request_model=query.gen_ai_request_model,
-            gen_ai_response_model=query.gen_ai_response_model,
-            filters=query.filters,
-        )
+        # Convert SpanQuery to a minimal TraceQuery for Dynatrace API:
+        # use it only to bound the search window and basic scoping,
+        # and rely on client-side filtering for span-level predicates.
+        trace_query = TraceQuery(
+            service_name=query.service_name,
+            operation_name=query.operation_name,
+            start_time=query.start_time,
+            end_time=query.end_time,
+            limit=query.limit * 2,  # Fetch more traces to ensure we get enough spans
+        )
```

Please double-check the intended semantics before changing this, as it affects how existing filters behave for span searches.
Also applies to: 223-228
🧰 Additional context used
🧬 Code graph analysis (1)
src/opentelemetry_mcp/backends/dynatrace.py (4)
src/opentelemetry_mcp/attributes.py (4)
- `HealthCheckResponse` (160-172), `SpanAttributes` (12-137), `SpanEvent` (140-157), `get` (88-119)

src/opentelemetry_mcp/backends/base.py (2)

- `BaseBackend` (12-160), `client` (29-42)

src/opentelemetry_mcp/backends/filter_engine.py (1)

- `FilterEngine` (13-284)

src/opentelemetry_mcp/models.py (7)

- `FilterOperator` (14-36), `SpanData` (228-255), `SpanQuery` (593-650), `TraceData` (428-457), `TraceQuery` (460-559), `has_error` (253-255), `gen_ai_system` (248-250)
Actionable comments posted: 2
♻️ Duplicate comments (3)
src/opentelemetry_mcp/backends/dynatrace.py (3)
144-154: Consider parallelizing trace detail fetches. Trace details are fetched sequentially in the `for` loop. For better performance, consider using `asyncio.gather()` to fetch multiple traces concurrently:

```python
# Fetch all traces concurrently
trace_tasks = [
    self.get_trace(str(trace_result.get("traceId") or trace_result.get("trace_id")))
    for trace_result in trace_results[:max_traces_to_fetch]
    if trace_result.get("traceId") or trace_result.get("trace_id")
]
trace_results_data = await asyncio.gather(*trace_tasks, return_exceptions=True)

for i, trace in enumerate(trace_results_data):
    if isinstance(trace, Exception):
        trace_id = trace_results[i].get("traceId") or trace_results[i].get("trace_id")
        logger.warning(f"Failed to fetch trace {trace_id}: {trace}")
    elif trace:
        traces.append(trace)
```
66-80: Service filters from generic Filter objects are still being dropped. The `native_filters` list is computed but never used to populate the `params` dict. Only `query.service_name` (line 106) is used. If a user specifies a service filter via the generic `Filter` system (not via `query.service_name`), it will be neither pushed to the API nor applied client-side (since it's excluded from `client_filters`). This is the same issue flagged in a previous review. Either use `native_filters` to populate `params["service"]`, or remove the native/client split and apply all filters client-side:

```diff
-        # Dynatrace API supports limited filtering via query parameters
-        # Most filters will be applied client-side
-        supported_fields = {"service.name"}  # Service filtering via API
-        supported_operators = self.get_supported_operators()
-
-        native_filters = [
-            f
-            for f in all_filters
-            if f.field in supported_fields and f.operator in supported_operators
-        ]
-        client_filters = [f for f in all_filters if f not in native_filters]
+        # Apply all filters client-side to avoid dropping conditions
+        # that are only expressed via the generic filter system.
+        client_filters = all_filters
```
178-190: Service filters from generic Filter objects are still being dropped. Same issue as in `search_traces`: `native_filters` are computed but never used. Apply the same fix here:

```diff
-        # For span queries, most filtering will be client-side
-        supported_fields = {"service.name"}
-        supported_operators = self.get_supported_operators()
-
-        native_filters = [
-            f
-            for f in all_filters
-            if f.field in supported_fields and f.operator in supported_operators
-        ]
-        client_filters = [f for f in all_filters if f not in native_filters]
+        # Apply all filters client-side
+        client_filters = all_filters
```
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
…/opentelemetry-mcp-server into opentelementry/dynatracebackend
Actionable comments posted: 0
♻️ Duplicate comments (3)
src/opentelemetry_mcp/backends/dynatrace.py (3)
178-190: Native span filters are also extracted but never applied. The same issue exists here: `native_filters` (lines 185-189) are extracted from `all_filters` but never translated into API parameters. The `TraceQuery` conversion (lines 201-207) only uses `query.service_name` and `query.operation_name`, ignoring any service filters expressed solely via generic `Filter` objects. Apply the same fix as in `search_traces`:

```diff
-        # For span queries, most filtering will be client-side
-        supported_fields = {"service.name"}
-        supported_operators = self.get_supported_operators()
-
-        native_filters = [
-            f
-            for f in all_filters
-            if f.field in supported_fields and f.operator in supported_operators
-        ]
-        client_filters = [f for f in all_filters if f not in native_filters]
+        # For span queries, also apply all filters client-side; native
+        # pushdown can be added later without changing semantics here.
+        client_filters = all_filters
```

Based on past review comments, this issue was previously flagged but not yet addressed.
66-79: Native service filters are extracted but never applied to API parameters. Lines 74-78 extract `native_filters` for `service.name` with the `EQUALS` operator, removing them from `client_filters`. However, these filters are never translated into Dynatrace API `params`. Only `query.service_name` (lines 105-106) populates `params["service"]`. If a user provides a service filter solely via generic `Filter` objects (and not via `TraceQuery.service_name`), that filter is neither pushed to the API nor applied client-side; it's effectively ignored. Solution: apply all filters client-side until native pushdown is implemented:

```diff
-        # Dynatrace API supports limited filtering via query parameters
-        # Most filters will be applied client-side
-        supported_fields = {"service.name"}  # Service filtering via API
-        supported_operators = self.get_supported_operators()
-
-        native_filters = [
-            f
-            for f in all_filters
-            if f.field in supported_fields and f.operator in supported_operators
-        ]
-        client_filters = [f for f in all_filters if f not in native_filters]
+        # Dynatrace API supports limited filtering via query parameters.
+        # For now, apply all filters client-side to avoid dropping any
+        # conditions that are only expressed via the generic filter system.
+        client_filters = all_filters
```

Based on past review comments, this issue was previously flagged but not yet addressed.
540-586: Guard against null event/log arrays. If `span_data` contains `"logs": null` (or `"events": null`), lines 542-544 will set `events_source` to `None`, and the iteration at line 546 will raise `TypeError: 'NoneType' object is not iterable`. A more defensive pattern:

```diff
         # Parse events/logs
         events: list[SpanEvent] = []
         events_source = span_data.get("events")
         if events_source is None:
             events_source = span_data.get("logs", [])
-        for event_data in events_source:
+        for event_data in events_source or []:
             if not isinstance(event_data, dict):
                 continue
```

This ensures iteration over an empty list if both `events` and `logs` are null or missing. Based on past review comments, this issue was previously flagged but not yet addressed.
🧹 Nitpick comments (2)
src/opentelemetry_mcp/backends/dynatrace.py (2)
418-423: Consider defensive timezone normalization in the end-time calculation. Although `_parse_dynatrace_span` currently always produces timezone-aware `start_time` (with `tzinfo=timezone.utc`), the end-time calculation assumes `start_time.tzinfo` is non-None. For defensive coding, explicitly ensure `start_time` is in UTC before the calculation:

```diff
 start_times = [s.start_time for s in spans]
 end_times = [
     datetime.fromtimestamp(
-        s.start_time.timestamp() + (s.duration_ms / 1000), tz=timezone.utc
+        (s.start_time.replace(tzinfo=timezone.utc) if s.start_time.tzinfo is None
+         else s.start_time.astimezone(timezone.utc)).timestamp() + (s.duration_ms / 1000),
+        tz=timezone.utc,
     )
     for s in spans
 ]
```

Or more simply, ensure all start times are UTC-aware first, then compute end times. This guards against future changes to `_parse_dynatrace_span`. Based on past review comments, this defensive improvement was suggested but not yet addressed.
144-153: Consider parallelizing trace detail fetches. Traces are fetched sequentially in the `for` loop (lines 144-153). For better performance when retrieving multiple trace details, consider using `asyncio.gather`:

```diff
-        for trace_result in trace_results[:max_traces_to_fetch]:
-            trace_id = trace_result.get("traceId") or trace_result.get("trace_id")
-            if trace_id:
-                try:
-                    # Fetch full trace details
-                    trace = await self.get_trace(str(trace_id))
-                    if trace:
-                        traces.append(trace)
-                except Exception as e:
-                    logger.warning(f"Failed to fetch trace {trace_id}: {e}")
+        import asyncio
+
+        async def fetch_trace(trace_result):
+            trace_id = trace_result.get("traceId") or trace_result.get("trace_id")
+            if not trace_id:
+                return None
+            try:
+                return await self.get_trace(str(trace_id))
+            except Exception as e:
+                logger.warning(f"Failed to fetch trace {trace_id}: {e}")
+                return None
+
+        trace_results_to_fetch = trace_results[:max_traces_to_fetch]
+        fetched_traces = await asyncio.gather(*[fetch_trace(tr) for tr in trace_results_to_fetch])
+        traces = [t for t in fetched_traces if t is not None]
```

Based on past review comments, this performance improvement was suggested but not yet addressed.
🧰 Additional context used
🧬 Code graph analysis (1)
src/opentelemetry_mcp/backends/dynatrace.py (4)
src/opentelemetry_mcp/attributes.py (4)
- `HealthCheckResponse` (160-172), `SpanAttributes` (12-137), `SpanEvent` (140-157), `get` (88-119)

src/opentelemetry_mcp/backends/base.py (2)

- `BaseBackend` (12-160), `client` (29-42)

src/opentelemetry_mcp/backends/filter_engine.py (1)

- `FilterEngine` (13-284)

src/opentelemetry_mcp/models.py (6)

- `FilterOperator` (14-36), `SpanData` (228-255), `SpanQuery` (593-650), `TraceData` (428-457), `TraceQuery` (460-559), `has_error` (253-255)
@doronkopit5 Could you please review this PR?
doronkopit5
left a comment
Thanks for the contribution.
First, please fix the lint errors.
After that, please add tests using recordings against the real API, like we did in the other backends. That's the best way to see that the code integrates well.
Please add tests using recordings against a real Dynatrace API, like we did in the other backends
Sure will do that
Actionable comments posted: 5
♻️ Duplicate comments (2)
src/opentelemetry_mcp/backends/dynatrace.py (2)
66-79: Major: Service filters via generic Filter objects are dropped. As noted in past reviews, `native_filters` for `service.name` are extracted but never converted into Dynatrace query parameters. While `query.service_name` is used (lines 105-106), filters passed via the generic `Filter` system are ignored. This means if a user supplies a service filter only via `query.filters`, it won't be applied to the native query or client-side. Consider applying all filters client-side until native pushdown is properly implemented:

```diff
-        # Dynatrace API supports limited filtering via query parameters
-        # Most filters will be applied client-side
-        supported_fields = {"service.name"}  # Service filtering via API
-        supported_operators = self.get_supported_operators()
-
-        native_filters = [
-            f
-            for f in all_filters
-            if f.field in supported_fields and f.operator in supported_operators
-        ]
-        client_filters = [f for f in all_filters if f not in native_filters]
+        # Apply all filters client-side until native API filtering is implemented
+        client_filters = all_filters
```
198-203: Major: Same filter-dropping issue as `search_traces`. The same problem exists here: `native_filters` are extracted but never used to build query parameters, and they're removed from `client_filters`, effectively dropping those conditions. Apply the same fix as recommended for `search_traces`:

```diff
-        # For span queries, most filtering will be client-side
-        supported_fields = {"service.name"}
-        supported_operators = self.get_supported_operators()
-
-        native_filters = [
-            f
-            for f in all_filters
-            if f.field in supported_fields and f.operator in supported_operators
-        ]
-        client_filters = [f for f in all_filters if f not in native_filters]
+        # Apply all filters client-side for now
+        client_filters = all_filters
```
🧹 Nitpick comments (1)
src/opentelemetry_mcp/backends/dynatrace.py (1)
431-438: Optional refactor: Simplify complex timezone handling. The nested conditional expression for timezone normalization (lines 433-434) is difficult to read. Consider extracting a helper function or simplifying:

```diff
-        end_times = [
-            datetime.fromtimestamp(
-                (s.start_time.replace(tzinfo=timezone.utc) if s.start_time.tzinfo is None
-                 else s.start_time.astimezone(timezone.utc)).timestamp() + (s.duration_ms / 1000),
-                tz=timezone.utc,
-            )
-            for s in spans
-        ]
+        def normalize_to_utc(dt: datetime) -> datetime:
+            if dt.tzinfo is None:
+                return dt.replace(tzinfo=timezone.utc)
+            return dt.astimezone(timezone.utc)
+
+        end_times = [
+            datetime.fromtimestamp(
+                normalize_to_utc(s.start_time).timestamp() + (s.duration_ms / 1000),
+                tz=timezone.utc,
+            )
+            for s in spans
+        ]
```
🔇 Additional comments (11)
src/opentelemetry_mcp/backends/dynatrace.py (11)
28-37: LGTM! Clean header creation. The method correctly constructs Dynatrace API headers with optional token authentication.
39-50: LGTM! Appropriate operator support. Limiting to EQUALS is reasonable given Dynatrace API constraints, with client-side filtering handling more complex cases.
150-165: LGTM! Parallel trace fetching implemented. The use of `asyncio.gather` to fetch trace details concurrently is excellent and addresses the previous ellipsis-dev review comment about sequential fetching.
211-235: LGTM! Reasonable span search implementation. The approach of converting SpanQuery to TraceQuery, fetching traces, and flattening spans is appropriate given Dynatrace's API limitations. The 2x limit multiplier is a good heuristic to ensure enough spans are fetched.
237-263: LGTM! Clean trace retrieval. The method correctly uses the `/api/v2/traces/{trace_id}` endpoint and handles parsing with appropriate error handling.
307-321: Note: This fallback reveals the likely correct trace query endpoint. Line 307 successfully uses `GET /api/v2/traces` with query params, which suggests this is the correct endpoint for `search_traces` rather than the POST to `/api/v2/ql/query:execute` currently used. This reinforces the critical issue flagged in `search_traces`.
339-360: LGTM! Consistent service operations discovery. The method correctly queries traces for a specific service and extracts operation names. The approach is consistent with `list_services`.
362-388: LGTM! Appropriate health check implementation. Using `list_services` as a health probe is a reasonable way to verify backend connectivity and functionality.
402-460: LGTM! Comprehensive trace parsing. The method handles multiple Dynatrace trace formats, calculates accurate timing information, and correctly derives trace status from span errors.
557-601: LGTM! Robust event parsing with improved null handling. The event parsing correctly addresses past review feedback by checking if `events_source` is None before iterating, and handles multiple timestamp formats (ISO strings and numeric milliseconds) with appropriate normalization to nanoseconds.
474-618: LGTM! Comprehensive span parsing. The method handles multiple Dynatrace span formats, normalizes timestamps to UTC, parses attributes from different structures (dict and list formats), and correctly derives span status from error indicators.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
…/opentelemetry-mcp-server into opentelementry/dynatracebackend
Actionable comments posted: 2
♻️ Duplicate comments (2)
src/opentelemetry_mcp/backends/dynatrace.py (2)
129-142: Native filters are constructed but never used. The code separates filters into `native_filters` and `client_filters` (lines 137-142), but `native_filters` is never referenced again. The DQL query builder at line 151 uses `query.service_name` and other direct fields from the TraceQuery, not the filter objects. This means any service filters expressed solely through the generic `Filter` system (via `query.filters`) are silently dropped: neither pushed to Dynatrace nor applied client-side. This issue was flagged in a past review comment. Either:

- Remove the `native_filters` split and apply all filters client-side, or
- Implement pushdown by translating `native_filters` into DQL parameters

Apply this diff to ensure all filters are applied client-side until native pushdown is implemented:
221-233: Native filters are constructed but never used (same issue as `search_traces`). The same issue from `search_traces` applies here: `native_filters` is built but never used, causing filter loss. Apply this diff:

```diff
-        # For span queries, most filtering will be client-side
-        supported_fields = {"service.name"}
-        supported_operators = self.get_supported_operators()
-
-        native_filters = [
-            f
-            for f in all_filters
-            if f.field in supported_fields and f.operator in supported_operators
-        ]
-        client_filters = [f for f in all_filters if f not in native_filters]
+        # Apply all filters client-side for now
+        client_filters = all_filters
```
🧹 Nitpick comments (1)
src/opentelemetry_mcp/backends/dynatrace.py (1)
490-646: Consider extracting helpers to reduce method complexity. The `_parse_dynatrace_span` method is 157 lines long and handles multiple responsibilities (timestamp parsing, attribute normalization, event parsing, status determination). Extracting helpers like `_parse_span_timestamp`, `_parse_span_attributes`, and `_parse_span_events` would improve readability and testability.
🧰 Additional context used
🧬 Code graph analysis (1)
src/opentelemetry_mcp/backends/dynatrace.py (5)
src/opentelemetry_mcp/attributes.py (4)
- `HealthCheckResponse` (160-172), `SpanAttributes` (12-137), `SpanEvent` (140-157), `get` (88-119)

src/opentelemetry_mcp/backends/base.py (2)

- `BaseBackend` (12-160), `client` (29-42)

src/opentelemetry_mcp/backends/filter_engine.py (1)

- `FilterEngine` (13-284)

src/opentelemetry_mcp/models.py (4)

- `FilterOperator` (14-36), `SpanData` (228-255), `TraceData` (428-457), `has_error` (253-255)

src/opentelemetry_mcp/server.py (3)

- `search_traces` (145-221), `get_trace` (225-241), `list_services` (288-299)
Actionable comments posted: 0
♻️ Duplicate comments (3)
src/opentelemetry_mcp/backends/dynatrace.py (3)
227-235: Critical: Same filter-dropping bug as in `search_traces`. The same issue exists here:
`native_filters` are separated out but never used to modify the query sent to Dynatrace. Apply the same fix as suggested for `search_traces`. Apply this diff:
```diff
-        # For span queries, most filtering will be client-side
-        supported_fields = {"service.name"}
-        supported_operators = self.get_supported_operators()
-
-        native_filters = [
-            f
-            for f in all_filters
-            if f.field in supported_fields and f.operator in supported_operators
-        ]
-        client_filters = [f for f in all_filters if f not in native_filters]
+        # For span queries, apply all filters client-side to avoid dropping
+        # conditions expressed only via the generic filter system.
+        client_filters = all_filters
```
589-593: Missing type validation for events data. If
`span_data.get("events")` returns a non-None, non-list value (e.g., a string, number, or dict due to malformed API data), line 593 will raise a `TypeError`. The code checks for `None` but doesn't validate that `events_source` is actually a list. Apply this diff to safely handle non-list values:
```diff
         # Parse events/logs
         events: list[SpanEvent] = []
         events_source = span_data.get("events")
         if events_source is None:
             events_source = span_data.get("logs", [])
+        if not isinstance(events_source, list):
+            events_source = []
         for event_data in events_source:
```
136-144: Critical: Service filters via Filter objects are still being silently dropped. Despite a past review comment claiming this was addressed, the current code still splits filters into
`native_filters` and `client_filters` (lines 139-144), but `native_filters` are never used to modify the DQL query. The `_build_dql_query` method only uses `query.service_name`, not the `Filter` objects in `native_filters`. This means if a user provides
`Filter(field="service.name", operator=EQUALS, value="foo")` without setting `query.service_name`, the filter will be:
- Removed from
client_filters(line 144), so it won't be applied client-side- Never used to build the DQL query, so it won't be pushed to the API
- Silently dropped entirely
Apply this diff to fix the issue by removing the split and applying all filters client-side:
```diff
-        # Dynatrace API supports limited filtering via query parameters
-        # Most filters will be applied client-side
-        supported_fields = {"service.name"}  # Service filtering via API
-        supported_operators = self.get_supported_operators()
-
-        native_filters = [
-            f
-            for f in all_filters
-            if f.field in supported_fields and f.operator in supported_operators
-        ]
-        client_filters = [f for f in all_filters if f not in native_filters]
+        # Dynatrace API supports limited filtering via query parameters.
+        # For now, apply all filters client-side to avoid dropping any
+        # conditions that are only expressed via the generic filter system.
+        client_filters = all_filters
```

Alternatively, if you want to keep native filtering, you must use
native_filtersto modify the DQL query in_build_dql_query.
🧹 Nitpick comments (1)
src/opentelemetry_mcp/backends/dynatrace.py (1)
600-607: Fix indentation inconsistency. Lines 603 and 607 have extra leading spaces that break the consistent indentation pattern.
Apply this diff:
```diff
 elif "fields" in event_data:
     # Handle Jaeger-style fields
     for field in event_data["fields"] or []:
-          if isinstance(field, dict):
-              key = field.get("key")
-              value = field.get("value")
-              if key:
-                  event_attrs[key] = value
+        if isinstance(field, dict):
+            key = field.get("key")
+            value = field.get("value")
+            if key:
+                event_attrs[key] = value
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
- src/opentelemetry_mcp/backends/dynatrace.py (1 hunks)
🔇 Additional comments (1)
src/opentelemetry_mcp/backends/dynatrace.py (1)
182-197: Well done parallelizing trace fetching! The use of `asyncio.gather` to fetch multiple traces concurrently is a significant performance improvement over sequential fetching. Good error handling too: failed traces are logged and filtered out rather than failing the entire operation.
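The pattern being praised can be sketched in isolation; `fetch_trace` below is a stand-in for the backend's real API call, not its actual signature:

```python
import asyncio

async def fetch_trace(trace_id: str) -> dict:
    # Stand-in for the real API call; fails for one id to show filtering.
    if trace_id == "bad":
        raise ValueError("fetch failed")
    return {"trace_id": trace_id}

async def fetch_all(trace_ids: list[str]) -> list[dict]:
    # return_exceptions=True keeps one failure from aborting the whole batch.
    results = await asyncio.gather(
        *(fetch_trace(tid) for tid in trace_ids), return_exceptions=True
    )
    traces = []
    for tid, res in zip(trace_ids, results):
        if isinstance(res, BaseException):
            print(f"skipping trace {tid}: {res}")  # logged, not re-raised
        else:
            traces.append(res)
    return traces

print(asyncio.run(fetch_all(["a", "bad", "b"])))  # → [{'trace_id': 'a'}, {'trace_id': 'b'}]
```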
Actionable comments posted: 1
🧹 Nitpick comments (4)
tests/integration/conftest.py (3)
306-309: Improve readability by breaking long line. Line 309 exceeds typical line length limits (>120 characters), making it harder to read.
🔎 Refactor to improve readability
```diff
 @pytest.fixture
 def dynatrace_config(dynatrace_url: str, dynatrace_api_key: str) -> BackendConfig:
     """Dynatrace backend configuration."""
-    return BackendConfig(type="dynatrace", url=TypeAdapter(HttpUrl).validate_python(dynatrace_url), api_key=dynatrace_api_key)
+    return BackendConfig(
+        type="dynatrace",
+        url=TypeAdapter(HttpUrl).validate_python(dynatrace_url),
+        api_key=dynatrace_api_key,
+    )
```
312-323: Add explicit return type annotation for async generator. Line 313 uses `AsyncGenerator` without type parameters. For consistency with other backend fixtures (lines 196, 225, 269), specify the yielded type explicitly.
🔎 Add type parameters
```diff
 @pytest.fixture
-async def dynatrace_backend(dynatrace_config: BackendConfig) -> AsyncGenerator:
+async def dynatrace_backend(dynatrace_config: BackendConfig) -> AsyncGenerator[DynatraceBackend, None]:
     """
     Dynatrace backend instance for integration testing.

     Uses async context manager to properly initialize and cleanup the backend.
     """
     from opentelemetry_mcp.backends.dynatrace import DynatraceBackend

-    backend = DynatraceBackend(url=str(dynatrace_config.url), api_key=dynatrace_config.api_key, timeout=dynatrace_config.timeout)
+    backend = DynatraceBackend(
+        url=str(dynatrace_config.url),
+        api_key=dynatrace_config.api_key,
+        timeout=dynatrace_config.timeout,
+    )
     async with backend:
         yield backend
```
121-177: Consider adding Dynatrace-specific VCR filtering. The `vcr_config` fixture has special handling for Tempo (filters timestamps from query params) and Traceloop (filters timestamps from request body, ignores host/scheme/port). Dynatrace queries also include dynamic timestamps in the DQL query parameters (e.g., `from: "2025-12-19T08:32:51.000Z"`), which may cause cassette matching issues across test runs. Consider adding similar filtering logic for Dynatrace tests:
```python
def filter_dynatrace_timestamps(request: Request) -> Request:
    """Remove timestamp parameters from Dynatrace DQL queries for better matching."""
    try:
        if "dynatrace.com" in request.uri or "/api/v2/ql/query:execute" in request.uri:
            url = urlparse(request.uri)
            params = parse_qs(url.query)
            # Remove timestamps from DQL query parameter
            if "query" in params:
                query_str = params["query"][0]
                # Strip time range from DQL query
                import re
                query_str = re.sub(r'from:\s*"[^"]*"', 'from: "FILTERED"', query_str)
                query_str = re.sub(r'to:\s*"[^"]*"', 'to: "FILTERED"', query_str)
                params["query"] = [query_str]
                new_query = urlencode(params, doseq=True)
                new_url = url._replace(query=new_query)
                request.uri = new_url.geturl()
    except Exception:
        pass
    return request
```
Then add to
`vcr_config`:

```python
is_dynatrace_test = "dynatrace" in request.node.nodeid.lower()
if is_dynatrace_test:
    config["before_record_request"] = filter_dynatrace_timestamps
```
tests/integration/test_dynatrace_integration.py (1)
12-15: Consider making the placeholder check more robust. The hardcoded string literal check `"abc12345.live.dynatrace.com"` is brittle and tightly coupled to the fixture default. If the fixture default changes, this check won't be updated automatically.
🔎 More maintainable approach
Option 1: Check for a sentinel pattern rather than exact match:
```diff
 def _skip_if_placeholder_backend(backend: DynatraceBackend) -> None:
     """Skip tests when no real Dynatrace URL is configured."""
-    if "abc12345.live.dynatrace.com" in getattr(backend, "url", ""):
+    url = getattr(backend, "url", "")
+    if "abc12345" in url or "test_api_key_for_replay" in getattr(backend, "api_key", ""):
         pytest.skip("DYNATRACE_URL not configured; skipping Dynatrace integration tests")
```
Option 2: Add an environment variable to explicitly control test execution:
```python
import os

def _skip_if_placeholder_backend(backend: DynatraceBackend) -> None:
    """Skip tests when no real Dynatrace URL is configured."""
    if not os.getenv("DYNATRACE_INTEGRATION_TESTS_ENABLED"):
        pytest.skip("Set DYNATRACE_INTEGRATION_TESTS_ENABLED=1 to run against real backend")
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- src/opentelemetry_mcp/backends/dynatrace.py (1 hunks)
- tests/integration/cassettes/test_dynatrace_integration/TestDynatraceBackendHealth.test_health_check_healthy.yaml (1 hunks)
- tests/integration/cassettes/test_dynatrace_integration/TestDynatraceListServices.test_list_services.yaml (1 hunks)
- tests/integration/cassettes/test_dynatrace_integration/TestDynatraceSearchSpans.test_search_spans_basic.yaml (1 hunks)
- tests/integration/cassettes/test_dynatrace_integration/TestDynatraceSearchTraces.test_get_trace_by_id.yaml (1 hunks)
- tests/integration/cassettes/test_dynatrace_integration/TestDynatraceSearchTraces.test_search_traces_basic.yaml (1 hunks)
- tests/integration/cassettes/test_dynatrace_integration/TestDynatraceServiceOperations.test_get_service_operations.yaml (1 hunks)
- tests/integration/conftest.py (1 hunks)
- tests/integration/test_dynatrace_integration.py (1 hunks)
✅ Files skipped from review due to trivial changes (2)
- tests/integration/cassettes/test_dynatrace_integration/TestDynatraceBackendHealth.test_health_check_healthy.yaml
- tests/integration/cassettes/test_dynatrace_integration/TestDynatraceServiceOperations.test_get_service_operations.yaml
🧰 Additional context used
🧬 Code graph analysis (1)
tests/integration/conftest.py (2)
src/opentelemetry_mcp/config.py (1)
BackendConfig (16-61)
src/opentelemetry_mcp/backends/dynatrace.py (1)
DynatraceBackend (21-654)
🔇 Additional comments (22)
tests/integration/conftest.py (2)
289-292: LGTM! Dynatrace URL fixture follows the established pattern. The fixture correctly provides a default placeholder URL for testing and allows override via environment variable, consistent with other backend fixtures.
295-303: LGTM! API key fixture properly handles recording vs replay scenarios. The docstring clearly explains that a real API key is needed for recording new cassettes but not for replay, and the key will be filtered from cassettes by the VCR configuration.
tests/integration/cassettes/test_dynatrace_integration/TestDynatraceSearchTraces.test_search_traces_basic.yaml (1)
1-90: LGTM! Cassette structure is valid for integration testing. The cassette properly records two interactions:
- Service listing attempt (falls back on 404)
- DQL query execution for trace search
Both interactions correctly filter sensitive headers (authorization is not visible), and the placeholder 404 responses are appropriate for tests using the default placeholder URL. The `_skip_if_placeholder_backend` helper in the test module will skip tests when real data isn't available.
Note: The hardcoded timestamps in the query string (line 62) will cause cassette matching issues if VCR filtering isn't implemented. This aligns with the recommendation in conftest.py to add Dynatrace-specific timestamp filtering.
tests/integration/cassettes/test_dynatrace_integration/TestDynatraceSearchSpans.test_search_spans_basic.yaml (1)
1-90: LGTM! Cassette follows consistent pattern. This cassette mirrors the structure of other Dynatrace test cassettes with appropriate 404 responses for the placeholder backend URL.
tests/integration/cassettes/test_dynatrace_integration/TestDynatraceSearchTraces.test_get_trace_by_id.yaml (1)
1-90: LGTM! Cassette structure is consistent. The cassette appropriately records service listing and trace query interactions with placeholder 404 responses.
tests/integration/cassettes/test_dynatrace_integration/TestDynatraceListServices.test_list_services.yaml (1)
1-88: LGTM! Cassette properly tests service discovery fallback. The cassette captures both the primary service listing endpoint (404) and the fallback DQL query, demonstrating the backend's resilience when the services endpoint is unavailable.
tests/integration/test_dynatrace_integration.py (6)
18-30: LGTM! Health check test has appropriate assertions. The test validates all critical health check response fields: status, backend type, and URL presence.
32-44: LGTM! Service listing test validates response structure. The test properly verifies that services are returned as non-empty strings.
47-63: LGTM! Operations test correctly validates service-specific operations. The test appropriately retrieves operations for an existing service and validates the response structure.
66-89: LGTM! Trace search test comprehensively validates trace structure. The test covers all essential TraceData fields including trace_id, service_name, spans, start_time, and duration_ms.
90-111: LGTM! Trace retrieval test validates individual trace fetching. The test appropriately searches for a trace and then retrieves it by ID, validating the full trace structure.
113-135: LGTM! Span search test validates span-level querying. The test properly exercises span search functionality and validates all required SpanData fields.
src/opentelemetry_mcp/backends/dynatrace.py (10)
28-37: LGTM! Header construction follows Dynatrace API requirements. The method correctly sets the Content-Type header and uses the `Api-Token` authorization format required by the Dynatrace API.
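Concretely, that header shape amounts to something like this sketch (the `build_headers` helper is illustrative, not the PR's method):

```python
def build_headers(api_token: str) -> dict[str, str]:
    # Dynatrace uses "Authorization: Api-Token <token>", not Bearer auth.
    return {
        "Authorization": f"Api-Token {api_token}",
        "Content-Type": "application/json",
    }

print(build_headers("dt0c01.EXAMPLE"))
```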
39-50: LGTM! Operator support accurately reflects Dynatrace API capabilities. The method correctly indicates that only EQUALS operations are natively supported, with the comment appropriately noting that advanced filtering is handled client-side.
52-115: LGTM! DQL query construction is well-implemented. The method correctly:
- Uses proper DQL syntax with `from:` and `to:` parameters (line 76)
- Handles timezone-aware datetime operations with `datetime.UTC`
- Escapes filter values to prevent injection (lines 83, 88)
- Uses correct Dynatrace field names (`request.is_failed` instead of the non-existent `otel.status_code`)
- Properly formats duration filters with the `ms` unit
204-265: LGTM! Span search appropriately leverages trace search. The method correctly:
- Converts SpanQuery to TraceQuery for API compatibility
- Multiplies the limit by 2 (line 249) to ensure sufficient spans after flattening
- Flattens spans from multiple traces
- Applies client-side filters to the flattened span list
Note: The same filter handling question from `search_traces` applies here (lines 228-233).
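The flatten-and-trim step can be sketched as follows (simplified to plain dicts; the real code operates on SpanData objects):

```python
def flatten_spans(traces: list[dict], limit: int) -> list[dict]:
    # Concatenate spans from every trace, then cap at the caller's limit.
    spans = [span for trace in traces for span in trace.get("spans", [])]
    return spans[:limit]

traces = [
    {"trace_id": "t1", "spans": [{"id": "a"}, {"id": "b"}]},
    {"trace_id": "t2", "spans": [{"id": "c"}]},
]
print(flatten_spans(traces, limit=2))  # → [{'id': 'a'}, {'id': 'b'}]
```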
267-293: LGTM! Trace retrieval is straightforward and robust. The method correctly fetches a trace by ID and includes proper error handling for parse failures.
295-354: LGTM! Service listing with robust fallback mechanism. The method demonstrates excellent resilience by:
- Attempting the dedicated services endpoint first
- Falling back to extracting services from trace data if the endpoint is unavailable (line 326)
- Handling multiple response formats (lines 315-319)
- Using timezone-aware datetime operations
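The try-endpoint-then-DQL strategy reduces to a pattern like this sketch, where `fetch_services` and `run_dql` are hypothetical injected callables rather than the backend's real methods:

```python
import asyncio

async def list_services(fetch_services, run_dql) -> list[str]:
    # Prefer the dedicated services endpoint; on any failure (e.g. a 404),
    # derive service names from recent span data instead.
    try:
        return sorted(await fetch_services())
    except Exception:
        rows = await run_dql("fetch spans | summarize count(), by: {service.name}")
        return sorted({r["service.name"] for r in rows if r.get("service.name")})

async def failing_endpoint():
    raise RuntimeError("404 Not Found")

async def fake_dql(query: str):
    return [{"service.name": "checkout"}, {"service.name": "cart"}, {}]

print(asyncio.run(list_services(failing_endpoint, fake_dql)))  # → ['cart', 'checkout']
```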
356-396: LGTM! Service operations retrieval is well-implemented. The method correctly queries for operations within a specific service using proper DQL syntax and handles multiple field name variations defensively.
398-424: LGTM! Health check appropriately uses service listing as probe. The method provides a meaningful health check by attempting to list services, which exercises the API authentication and connectivity.
426-496: LGTM! Trace parsing with excellent timezone handling. The method demonstrates robust parsing with:
- Support for multiple response formats (lines 441-444)
- Careful timezone normalization in end time calculations (lines 469-470), ensuring naive datetimes are treated as UTC
- Proper trace duration calculation across all spans
- Appropriate status aggregation from span errors
498-654: LGTM! Comprehensive span parsing with defensive coding. The method demonstrates excellent robustness:
- Handles multiple timestamp formats (ISO strings and millisecond epochs) on lines 523-537
- Properly validates event sources and skips invalid entries (lines 596-601), addressing potential TypeError issues
- Parses attributes in both dict and list formats (lines 556-567)
- Includes fallback field names throughout for compatibility with different Dynatrace response formats
- Normalizes all timestamps to UTC
The parsing logic is well-structured and defensive against malformed data.
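The dual-format timestamp handling described above boils down to a pattern like this simplified sketch (not the PR's exact code):

```python
from datetime import datetime, timezone

def parse_timestamp(value: "str | int | float") -> datetime:
    # Accept ISO-8601 strings or millisecond epochs; normalize to UTC.
    if isinstance(value, str):
        dt = datetime.fromisoformat(value.replace("Z", "+00:00"))
    else:
        dt = datetime.fromtimestamp(value / 1000.0, tz=timezone.utc)
    if dt.tzinfo is None:  # naive datetimes are treated as UTC
        dt = dt.replace(tzinfo=timezone.utc)
    return dt

print(parse_timestamp("2025-12-19T08:32:51Z"))
print(parse_timestamp(1766133171000))
```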
Actionable comments posted: 0
🧹 Nitpick comments (2)
src/opentelemetry_mcp/backends/dynatrace.py (2)
239-239: Consider moving asyncio import to module level. The `asyncio` import at line 239 is currently inside the function. While this works correctly, it's more conventional to place imports at the module level for consistency and to make dependencies explicit.
🔎 Proposed refactor
Add asyncio to the module-level imports at the top of the file:
```diff
 import logging
+import asyncio
 from datetime import datetime, timedelta, timezone
 from typing import Any, Literal
```
And remove the local import:

```diff
-        import asyncio
-
         async def fetch_trace(trace_result):
```
287-318: Consider reusing computed native_filters to avoid redundant filtering. Lines 287-291 compute `native_filters` from `all_filters`, but line 318 passes a different inline list comprehension `[f for f in query.get_all_filters() if f.field in ("service.name",)]` to `search_traces`. This creates a slight inconsistency:
- Lines 287-291 check both `field` and `operator`
- Line 318 only checks `field`
While both approaches work, reusing `native_filters` would be clearer and avoid redundant filtering.
🔎 Proposed refactor
```diff
-        # Search traces and pass any native filters discovered for span-level queries
-        traces = await self.search_traces(trace_query, native_filters=[f for f in query.get_all_filters() if f.field in ("service.name",)])
+        # Search traces and pass the computed native filters
+        traces = await self.search_traces(trace_query, native_filters=native_filters)
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- src/opentelemetry_mcp/backends/dynatrace.py (1 hunks)
- tests/backends/test_dynatrace.py (1 hunks)
- tests/integration/conftest.py (4 hunks)
✅ Files skipped from review due to trivial changes (1)
- tests/backends/test_dynatrace.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/integration/conftest.py
Add Dynatrace Backend Support
This PR adds support for Dynatrace as an observability backend, enabling users to query OpenTelemetry traces from Dynatrace environments.
Changes
New Backend Implementation (src/opentelemetry_mcp/backends/dynatrace.py)
- Implements search_traces(), get_trace(), list_services(), search_spans(), get_service_operations(), and health_check()
- Handles LLM-specific span attributes (gen_ai.* attributes)

Configuration Updates
- Added dynatrace as a supported backend type in config.py
- Added dynatrace to CLI backend choices

Testing
- Unit tests in tests/backends/test_dynatrace.py

Documentation
Configuration
BACKEND_TYPE=dynatrace
BACKEND_URL=https://{your-environment-id}.live.dynatrace.com
BACKEND_API_KEY=dt0c01.ABC123...

Testing
Solves #5
Important
Add Dynatrace backend support for querying OpenTelemetry traces with configuration, testing, and documentation updates.
- DynatraceBackend class in dynatrace.py implements methods: search_traces(), get_trace(), list_services(), search_spans(), get_service_operations(), health_check().
- Added dynatrace to supported backend types in config.py.
- Updated server.py to include the Dynatrace backend.
- Added dynatrace to CLI backend choices.
- Added test_dynatrace.py covering trace querying, span searching, service listing, and error handling.
- Updated README.md with Dynatrace configuration examples and troubleshooting.
- Updated .env.example for Dynatrace configuration.

This description was created by
for 47b4d8e. You can customize this summary. It will automatically update as commits are pushed.
Summary by CodeRabbit
New Features
Documentation
Tests
Chores
✏️ Tip: You can customize this high-level summary in your review settings.