DP: application: cleanup #10461
base: main
Conversation
Add a single info-level log entry when the userspace DP thread starts. Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
The module operation structure is located in the module's memory, so it is accessible to the thread and there is no need to copy it. Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
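To illustrate the point, here is a minimal sketch with placeholder type and field names (not the actual SOF definitions): since the ops table lives in the module's own memory, the DP thread can dereference it through the module pointer instead of keeping a local copy.

```c
/* Placeholder types for illustration only -- not the actual SOF definitions. */
struct module_ops {
	int (*process)(void *mod);
};

struct module {
	struct module_ops *ops;	/* table located in the module's own memory */
	/* ... other module state ... */
};

static int dp_thread_process(struct module *mod)
{
	/*
	 * Previously the thread kept its own copy of the ops structure;
	 * since the table is already accessible to the thread, it is
	 * simply dereferenced through the module pointer.
	 */
	return mod->ops->process(mod);
}
```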
On the one hand IPCs are serialized, so a single IPC buffer for all DP threads would be enough. But it has to be a page large to be added to every DP thread memory domain. On the other hand we can allocate such an IPC flattening buffer for each DP thread. Then it doesn't need to be mapped separately, doesn't need an own memory partition in thread's memory domain. A page is 4KiB, the buffer is probably less than 100 bytes large. So as long as we don't have more than 40 DP threads we're better off using per-thread buffers, and we aren't likely to ever get that many DP threads. Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
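A rough sketch of the trade-off described above, with assumed names and sizes (IPC_FLAT_MAX and dp_task_mem are illustrative, not the actual SOF code): the shared variant needs a dedicated page-aligned buffer so it can be granted to every DP thread's memory domain, while the per-thread variant is just a small field inside memory the thread already owns.

```c
#include <stdint.h>

/* Assumed upper bound for a flattened IPC message (~100 bytes in practice). */
#define IPC_FLAT_MAX	128

/* Option A: one shared buffer for all DP threads. It must fill a whole
 * 4 KiB page so it can be added as a partition to every thread's memory
 * domain, wasting most of the page.
 */
static uint8_t ipc_buf[4096] __attribute__((aligned(4096)));

/* Option B: a small per-thread buffer embedded in the task's private
 * memory, which is already part of that thread's memory domain -- no
 * separate mapping or extra partition needed.
 */
struct dp_task_mem {
	/* ... other per-task data ... */
	uint8_t flat[IPC_FLAT_MAX];	/* IPC flattening buffer */
};
```

At roughly 100 bytes per buffer, it would take around 40 DP threads before the per-thread approach consumed as much memory as the single shared page.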
The ppart partition pointer array isn't used any more, remove it. Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
Pull request overview
This PR cleans up leftover code from a previous merge, primarily refactoring how IPC flattening buffers are managed by moving from a static global buffer to a per-task allocated buffer embedded in the task memory structure.
Key Changes
- Replaced static global ipc_buf with per-task flat buffer allocated as part of the task memory structure
- Removed SOF_DP_PART_IPC memory partition that is no longer needed
- Fixed spelling error in comment ("binded" → "bound")
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| src/schedule/zephyr_dp_schedule_application.c | Removes static IPC buffer and associated partition, replaces with per-task buffer; adds log statement for thread startup; fixes comment spelling; removes unused code (ops, ppart array) |
| src/schedule/zephyr_dp_schedule.h | Removes SOF_DP_PART_IPC enum value, adds ipc4_flat forward declaration and flat pointer field to task_dp_pdata struct |
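The header-side change in the last row might look roughly like the sketch below. This is inferred from the description (a forward declaration plus a pointer field), not the actual contents of zephyr_dp_schedule.h; the other fields are placeholders.

```c
struct ipc4_flat;			/* forward declaration, defined elsewhere */

struct task_dp_pdata {
	/* ... existing scheduling fields ... */
	struct ipc4_flat *flat;		/* per-task IPC flattening buffer, replacing
					 * the removed SOF_DP_PART_IPC partition */
};
```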
/* The IPC thread is waiting for the thread to be started, it can proceed now. */
k_sem_give(&dp_sync[task->core]);
comp_info(pmod->dev, "userspace thread started");
Copilot AI · Dec 24, 2025
The log message "userspace thread started" is misleading because this function is used for both userspace and kernel threads depending on the CONFIG_USERSPACE configuration and the K_USER flag. Consider using a more generic message like "DP thread started" to accurately reflect that this thread may be either a userspace or kernel thread.
Suggested change:
-	comp_info(pmod->dev, "userspace thread started");
+	comp_info(pmod->dev, "DP thread started");
Clean up a couple of leftovers after the merge.