add preparedata plugin to latency based scorer to consume prefix states #2005
base: main
Conversation
✅ Deploy Preview for gateway-api-inference-extension ready!
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: kaushikmitr. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files.
@ahg-g this PR simplifies the helm chart config to pick latency-aware routing if enabled, or switch to the default if not.
```yaml
{{- end }}
schedulingProfiles:
{{- if .Values.inferenceExtension.latencyPredictor.enabled }}
- name: predicted-latency-prefix
```
Do we still need the predicted-latency-prefix profile? I thought the preparedata plugin would allow us to get rid of it.
I initially thought we still needed the prefix score plugin in the scheduling profile to ensure its PreRequest and PrepareData steps get executed, but after some testing it seems that as long as it is declared above, those steps will be executed. In that case I will remove it. Also tagging @rahulgurnani to make sure that is indeed the right behavior.
@kfswain do we still need to have a separate profile to enforce order?
No, if we are using the PrepareData plugin & new data framework, all data should be available before the scheduling cycle
Sounds good! I will remove the second profile.
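For reference, here is a minimal sketch of the behavior discussed above, under the assumption of a simplified data-layer interface: a PrepareData-style hook populates per-pod prefix state before any scheduling profile runs, so no second profile is needed to enforce ordering. The `Pod` interface, key string, and field names below are illustrative, not the actual framework API.

```go
package sketch

// Pod is a stand-in for the framework's per-pod attribute store.
type Pod interface {
	Put(key string, value any)
}

// PrefixCacheMatchInfo mirrors the match info stored by the prefix plugin.
type PrefixCacheMatchInfo struct {
	MatchLen int // matched prefix length for this server
	Total    int // total prefix length of the request
}

// PrepareRequestData runs once per request, before any scoring profile
// executes, so every scorer (prefix, predicted-latency, ...) sees the
// same precomputed prefix state.
func PrepareRequestData(pods []Pod, matchFor func(Pod) (matchLen, total int)) {
	for _, pod := range pods {
		matchLen, total := matchFor(pod)
		pod.Put("prefix-cache-match-info", PrefixCacheMatchInfo{
			MatchLen: matchLen,
			Total:    total,
		})
	}
}
```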
Resolved (outdated) review threads on:
pkg/epp/scheduling/framework/plugins/multi/slo_aware_router/slo_aware_profile_handler.go
pkg/epp/scheduling/framework/plugins/multi/slo_aware_router/preparedata_hooks.go
@kaushikmitr the naming is a bit all over the place; we use slo / predictor / router, etc. I thought we agreed on predicted-latency as the name of the plugin. We can change that in a follow-up PR, but I think we should address it soon, since the plugin name will be part of the "config api" and it is user facing.
Yes, we need to clean up two things: the naming of the plugin (predicted-latency) and renaming TPOT everywhere (including docs) to ITL.
```go
	matchLen := state.PrefixCacheServers[ServerID(pod.GetPod().NamespacedName)]
	pod.Put(approximateprefix.PrefixCacheMatchInfoKey, approximateprefix.NewPrefixCacheMatchInfo(matchLen, total))
}
// Store the state in plugin state for later use.
```
why do we need to change anything in the prefix plugin?
The prefix scorer does not consume the prefix state in the same way as the predicted-latency one does.
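As context, here is a minimal, hypothetical sketch of how a predicted-latency scorer could consume the per-pod match info that the prefix plugin stores in the diff above, instead of recomputing it from `CycleState`. Only the key and constructor names come from the diff; the `Get` accessor, stand-in types, and score formula are assumptions.

```go
package sketch

// PrefixCacheMatchInfoKey and PrefixCacheMatchInfo stand in for the
// approximateprefix types referenced in the diff above.
const PrefixCacheMatchInfoKey = "prefix-cache-match-info"

type PrefixCacheMatchInfo struct {
	MatchLen int // matchLen in the diff
	Total    int // total in the diff
}

// AttributeReader is a stand-in for the per-pod state the scorer reads.
type AttributeReader interface {
	Get(key string) (any, bool)
}

// prefixCacheScore turns the prepared match info into a 0..1 value that a
// predicted-latency scorer could feed into its latency model.
func prefixCacheScore(pod AttributeReader) float64 {
	raw, ok := pod.Get(PrefixCacheMatchInfoKey)
	if !ok {
		return 0 // no prefix state was prepared for this pod
	}
	info, ok := raw.(PrefixCacheMatchInfo)
	if !ok || info.Total == 0 {
		return 0
	}
	return float64(info.MatchLen) / float64(info.Total)
}
```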
This pull request refactors and simplifies the prediction-based routing logic in the inference pool scheduler, introducing the `PrepareRequestData` plugin to obtain the prefix cache score and updating the prediction-based routing helm chart config.
Core logic and data structure simplification:
- The `sloRequestContext` struct now stores all prediction results in a single `predictionsForScheduling` slice, replacing the previous separate maps for TTFT and TPOT values. All code paths and tests have been updated to use this unified structure (see the sketch after this list).
- The `generatePredictions` and `scoreWithoutPredictions` functions now rely exclusively on precomputed prefix cache scores from the SLO context, removing the need to pass and use the `CycleState` object for these calculations.
- A `PrepareRequestData` method is introduced to precompute and populate prefix cache scores in the SLO context, further decoupling data preparation from scoring and prediction logic.
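As a rough illustration of the unified structure, here is a minimal sketch; the real types live in the slo_aware_router package, and everything beyond the `sloRequestContext` and `predictionsForScheduling` names (per-pod fields, the prefix score map) is an assumption.

```go
package sketch

// podPrediction is an illustrative per-pod prediction record.
type podPrediction struct {
	PodName string
	TTFT    float64 // predicted time-to-first-token
	TPOT    float64 // predicted time-per-output-token (to be renamed ITL)
}

// sloRequestContext keeps all predictions in one slice instead of
// separate TTFT and TPOT maps keyed by pod.
type sloRequestContext struct {
	predictionsForScheduling []podPrediction

	// prefixCacheScores is populated by PrepareRequestData so the
	// scorers no longer need CycleState to recompute prefix scores.
	prefixCacheScores map[string]float64
}
```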
Prediction-based scheduling flow and configuration:
- The `NoLatencyRoutingProfileName` profile and its associated logic have been removed, consolidating the routing flow and simplifying the profile handler's logic.
- The `SLOAwareProfileHandler.Pick` method is simplified to always return all profiles unless all have already been executed, removing conditional execution based on headers.
- The `samplingMean` in the latency scorer configuration is increased from 100.0 to 1000.0.

Helm Chart: