Conversation

@kaushikmitr (Contributor) commented Dec 16, 2025

This pull request refactors and simplifies the prediction-based routing logic in the inference pool scheduler. It introduces the PrepareRequestData plugin to precompute the prefix cache score, and it updates the prediction-based routing Helm chart config.

Core logic and data structure simplification:

  • The sloRequestContext struct now stores all prediction results in a single predictionsForScheduling slice, replacing the previous separate maps for TTFT and TPOT values. All code paths and tests have been updated to use this unified structure.
  • The generatePredictions and scoreWithoutPredictions functions now rely exclusively on precomputed prefix cache scores from the SLO context, removing the need to pass and consult the CycleState object in these calculations.
  • The PrepareRequestData method is introduced to precompute and populate prefix cache scores in the SLO context, further decoupling data preparation from the scoring and prediction logic. A rough sketch of the resulting shapes follows below.
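
For orientation, here is a minimal Go sketch of the unified context described above. The field and type names (podPrediction, prefixCacheScores) are illustrative assumptions, not the PR's exact identifiers:

```go
// Illustrative only: these names are assumptions, not the PR's exact API.
type podPrediction struct {
	PodName string  // candidate pod
	TTFT    float64 // predicted time-to-first-token
	TPOT    float64 // predicted time-per-output-token (being renamed to ITL)
}

type sloRequestContext struct {
	// Single slice replacing the former separate TTFT and TPOT maps.
	predictionsForScheduling []podPrediction
	// Prefix cache scores precomputed by PrepareRequestData, keyed by pod
	// name, so generatePredictions no longer needs to read CycleState.
	prefixCacheScores map[string]float64
}
```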

Prediction-based scheduling flow and configuration:

  • The "prediction-based scheduling off" feature and its code paths (including NoLatencyRoutingProfileName and the associated logic) have been removed, consolidating the routing flow and simplifying the profile handler's logic.
  • The SLOAwareProfileHandler.Pick method is simplified to always return all profiles unless every profile has already been executed, removing the conditional execution that was based on request headers; see the sketch after this list.
  • The default value for samplingMean in the latency scorer configuration is increased from 100.0 to 1000.0.
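
A minimal sketch of the simplified Pick behavior, assuming hypothetical Profile and handler types (the real signatures live in the scheduling framework):

```go
// Illustrative types; the real ones come from the scheduling framework.
type Profile struct{ Name string }

type SLOAwareProfileHandler struct{}

// Pick returns all profiles until every one of them has been executed;
// the former header-based conditional branching is gone.
func (h *SLOAwareProfileHandler) Pick(profiles map[string]*Profile, executed map[string]bool) map[string]*Profile {
	if len(executed) == len(profiles) {
		return nil // all profiles have run; nothing left to pick
	}
	return profiles
}
```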

Helm Chart:

  • epp-config.yaml is simplified to select prediction-based routing when the latency predictor is enabled and to fall back to the default profile otherwise; an illustrative shape is sketched below.
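
A hedged sketch of the simplified template logic; the profile and plugin names here are placeholders, not necessarily the chart's actual values:

```yaml
schedulingProfiles:
{{- if .Values.inferenceExtension.latencyPredictor.enabled }}
- name: predicted-latency
  plugins:
  - pluginRef: predicted-latency-scorer
{{- else }}
- name: default
  plugins:
  - pluginRef: prefix-cache-scorer
{{- end }}
```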

netlify bot commented Dec 16, 2025

Deploy Preview for gateway-api-inference-extension ready!

| Name | Link |
| --- | --- |
| 🔨 Latest commit | 44e0aea |
| 🔍 Latest deploy log | https://app.netlify.com/projects/gateway-api-inference-extension/deploys/6945d628fe96230008607e29 |
| 😎 Deploy Preview | https://deploy-preview-2005--gateway-api-inference-extension.netlify.app |

k8s-ci-robot (Contributor) commented

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: kaushikmitr
Once this PR has been reviewed and has the lgtm label, please assign ahg-g for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Dec 16, 2025
@kaushikmitr (Contributor, Author) commented Dec 16, 2025

@ahg-g this PR simplifies the Helm chart config to pick latency-aware routing when it is enabled, and to switch to the default profile when it is not.

```yaml
{{- end }}
schedulingProfiles:
{{- if .Values.inferenceExtension.latencyPredictor.enabled }}
- name: predicted-latency-prefix
```
Contributor commented on this snippet:
Do we still need the predicted-latency-prefix profile? I thought the PrepareData plugin would allow us to get rid of it.

@kaushikmitr (Author) replied Dec 19, 2025:
I initially thought we would still need the prefix score plugin in the scheduling profile to ensure the prerequest and prepare-data steps of the prefix score plugin get executed, but after some testing it seems that as long as the plugin is declared above, those steps will be executed. In that case I will remove it. Also tagging @rahulgurnani to confirm that this is indeed the right behavior.
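
To illustrate the behavior being discussed (with hypothetical plugin names): declaring the plugin at the top level would be enough for its prepare steps to run, even when no scheduling profile references it:

```yaml
plugins:
- name: prefix-cache-scorer   # declared here, so its prepare steps execute
  type: prefix-cache-scorer
schedulingProfiles:
- name: predicted-latency     # no pluginRef to prefix-cache-scorer needed
  plugins:
  - pluginRef: predicted-latency-scorer
```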

Contributor replied:

@kfswain do we still need to have a separate profile to enforce order?

Collaborator replied:

No. If we are using the PrepareData plugin and the new data framework, all data should be available before the scheduling cycle.

@kaushikmitr (Author) replied:

Sounds good! I will remove the second profile.

@ahg-g (Contributor) commented Dec 19, 2025

@kaushikmitr the naming is a bit all over the place; we use slo / predictor / router, etc. I thought we agreed on predicted-latency as the name of the plugin. We can change that in a follow-up PR, but I think we should address it soon, since the plugin name will be part of the "config api" and is user-facing.

@kaushikmitr (Contributor, Author) replied:

Yes, we need to clean up two things: the naming of the plugin (predicted-latency) and renaming TPOT to ITL everywhere (including the docs).

```go
	matchLen := state.PrefixCacheServers[ServerID(pod.GetPod().NamespacedName)]
	pod.Put(approximateprefix.PrefixCacheMatchInfoKey, approximateprefix.NewPrefixCacheMatchInfo(matchLen, total))
}
// Store the state in plugin state for later use.
```
Contributor commented on this snippet:
Why do we need to change anything in the prefix plugin?

Contributor replied:
The prefix scorer does not consume the prefix state in the same way as the predicted-latency one does.
