fix: implement list_runs and delete_run endpoints in EvaluationService#14057
aieng-abdullah wants to merge 5 commits into infiniflow:main
Conversation
- Add list_runs() method to EvaluationService with pagination support - Add delete_run() method that removes run and its results - Wire list_runs to GET /run/list endpoint replacing TODO stub - Wire delete_run to DELETE /run/<run_id> endpoint replacing TODO stub Fixes infiniflow#14056
📝 Walkthrough
Added real listing and deletion for evaluation runs: API handlers now parse parameters and log context, calling new EvaluationService methods.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant API as Evaluation API
    participant Service as EvaluationService
    participant DB as Database
    Client->>API: GET /evaluation/run?dataset_id=&page=&page_size=
    API->>Service: list_runs(dataset_id, page, page_size, user_id)
    Service->>DB: SELECT EvaluationRun (filters?, ORDER BY create_time DESC LIMIT/OFFSET)
    DB-->>Service: rows, total_count
    Service-->>API: { total, runs }
    API-->>Client: 200 OK with runs payload
    Client->>API: DELETE /evaluation/run/{run_id}
    API->>Service: delete_run(run_id, user_id)
    Service->>DB: DELETE FROM EvaluationResult WHERE run_id=...
    Service->>DB: DELETE FROM EvaluationRun WHERE id=...
    DB-->>Service: affected_rows
    Service-->>API: success_flag
    API-->>Client: 200 OK { run_id } or error payload
```
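The two flows above can be sketched with an in-memory stand-in. Everything here (`FakeEvaluationService`, the payload shapes, the "delete children first" order) is an assumption read off the sequence diagram, not the actual RAGFlow code:

```python
# Minimal in-memory sketch of the list/delete flows in the diagram above.
class FakeEvaluationService:
    def __init__(self):
        self.runs = {}     # run_id -> {"dataset_id": ..., "create_time": ...}
        self.results = {}  # run_id -> [result, ...]

    def list_runs(self, dataset_id, page, page_size):
        # Optional dataset filter, newest first, then LIMIT/OFFSET-style slicing.
        rows = [
            {"id": rid, **run}
            for rid, run in self.runs.items()
            if dataset_id is None or run["dataset_id"] == dataset_id
        ]
        rows.sort(key=lambda r: r["create_time"], reverse=True)
        start = (page - 1) * page_size
        return {"total": len(rows), "runs": rows[start:start + page_size]}

    def delete_run(self, run_id):
        self.results.pop(run_id, None)  # drop child results first
        # "affected_rows > 0" maps to: did the run row itself exist?
        return self.runs.pop(run_id, None) is not None

svc = FakeEvaluationService()
svc.runs["r1"] = {"dataset_id": "d1", "create_time": 1}
svc.runs["r2"] = {"dataset_id": "d1", "create_time": 2}
page = svc.list_runs("d1", page=1, page_size=1)
print(page["total"], page["runs"][0]["id"])  # newest run comes first
print(svc.delete_run("r1"), svc.delete_run("r1"))  # second delete is a no-op
```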
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks | ✅ Passed checks (5 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api/apps/evaluation_app.py`:
- Around line 385-386: The failure branch uses a wrong helper name and will
raise NameError; replace the call to get_error_data_result with the correctly
imported get_data_error_result in the deletion branch that checks
EvaluationService.delete_run(run_id) so the API returns the intended error
response for the given run_id instead of throwing; ensure you update the call
site where get_error_data_result(...) appears to use get_data_error_result(...)
and keep the same message f"Run {run_id} not found or deletion failed".
- Around line 372-376: Add structured entry and success logs around the list and
delete run flows in evaluation_app.py: log an "entry" message before calling
EvaluationService.list_runs (include dataset_id, page, page_size and requester
id) and log a "success" message after returning the result (include same
fields); likewise, for the delete flow (the route that calls
EvaluationService.delete_run or similar), log entry and success messages
including run_id, dataset_id, and requester id. Use the existing logger used in
this module (e.g., processLogger or the module's logger) and ensure logs are
emitted in both normal success paths and before raising/returning in the try
block so these endpoints are observable in production.
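Taken together, the two comments above amount to a handler shaped roughly like the sketch below. The helpers (`get_json_result`, `get_data_error_result`) and the service call are stubbed with guessed signatures based only on the names quoted in the review; the real Flask route decorators and `current_user` are omitted:

```python
import logging

logging.basicConfig(level=logging.INFO)

# Stand-ins for the real helpers named in the review; signatures are guesses.
def get_json_result(data):
    return {"code": 0, "data": data}

def get_data_error_result(message):  # note: NOT get_error_data_result
    return {"code": 102, "message": message}

class EvaluationService:
    _runs = {"run-1"}

    @classmethod
    def delete_run(cls, run_id):
        if run_id in cls._runs:
            cls._runs.remove(run_id)
            return True
        return False

def delete_run_handler(run_id, user_id):
    # Entry log before the service call, so failures are still observable.
    logging.info("delete_run entry: run_id=%s user_id=%s", run_id, user_id)
    ok = EvaluationService.delete_run(run_id)
    if not ok:
        # The buggy branch called the non-existent get_error_data_result,
        # which raised NameError; the fix uses the imported helper instead.
        return get_data_error_result(f"Run {run_id} not found or deletion failed")
    # Success log after the service call, before returning.
    logging.info("delete_run success: run_id=%s user_id=%s", run_id, user_id)
    return get_json_result(data={"run_id": run_id})

print(delete_run_handler("run-1", "user-7"))  # success path
print(delete_run_handler("run-1", "user-7"))  # failure path: run already gone
```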
In `@api/db/services/evaluation_service.py`:
- Around line 573-595: Add structured info/debug logs to the success path of
list_runs and delete_run: in list_runs (function list_runs) log an entry-level
message with dataset_id, page and page_size when the call starts and a success
message after query completes with total and number of runs returned (len of
runs) using EvaluationRun and the query context; in delete_run (method
delete_run) log an entry message with run_id before deleting, log the number of
EvaluationResult rows deleted and the result of EvaluationRun.delete (affected
rows) after each execute, and a final success/info log indicating run_id and
whether deletion succeeded (affected rows > 0). Place logs alongside the
existing try block (before operations and right after successful operations) and
keep exception logging unchanged.
- Around line 573-582: The list_runs method currently selects all EvaluationRun
rows; restrict it to the caller by adding an ownership filter (e.g., accept an
owner_id or caller_id parameter) and apply a where(EvaluationRun.owner_id ==
owner_id) before ordering/pagination; when dataset_id is provided also ensure
the dataset belongs to the same owner (check Dataset.owner_id or filter
EvaluationRun.dataset_id only after verifying dataset ownership) so that
EvaluationRun.select() in list_runs only returns runs owned by the caller.
- Around line 590-593: Wrap the deletes in a single DB transaction and scope
them to the run owner: first ensure the run belongs to the caller (e.g., fetch
or delete using a where that includes EvaluationRun.id == run_id and
EvaluationRun.owner_id == current_user_id), then inside a transaction execute
EvaluationResult.delete().where(EvaluationResult.run_id == run_id) and
EvaluationRun.delete().where((EvaluationRun.id == run_id) &
(EvaluationRun.owner_id == current_user_id)), and return success only if the run
delete affected > 0 rows; this makes deletion atomic and enforces ownership.
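The ownership and atomicity suggestions can be illustrated with plain `sqlite3` in place of the project's ORM. Table and column names (`evaluation_run`, `owner_id`, etc.) are invented for the sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE evaluation_run (id TEXT PRIMARY KEY, dataset_id TEXT,
                             owner_id TEXT, create_time INTEGER);
CREATE TABLE evaluation_result (id INTEGER PRIMARY KEY, run_id TEXT);
INSERT INTO evaluation_run VALUES ('r1', 'd1', 'alice', 1), ('r2', 'd1', 'bob', 2);
INSERT INTO evaluation_result (run_id) VALUES ('r1'), ('r1');
""")

def list_runs(owner_id, dataset_id=None, page=1, page_size=20):
    # Ownership filter is always applied; dataset filter only when given.
    where, params = "owner_id = ?", [owner_id]
    if dataset_id is not None:
        where += " AND dataset_id = ?"
        params.append(dataset_id)
    total = con.execute(
        f"SELECT COUNT(*) FROM evaluation_run WHERE {where}", params).fetchone()[0]
    rows = con.execute(
        f"SELECT id FROM evaluation_run WHERE {where} "
        "ORDER BY create_time DESC LIMIT ? OFFSET ?",
        params + [page_size, (page - 1) * page_size]).fetchall()
    return {"total": total, "runs": [r[0] for r in rows]}

def delete_run(run_id, owner_id):
    # One transaction: verify ownership, then delete children, then the run.
    with con:  # commits on success, rolls back on exception
        owned = con.execute(
            "SELECT 1 FROM evaluation_run WHERE id = ? AND owner_id = ?",
            (run_id, owner_id)).fetchone()
        if owned is None:
            return False
        con.execute("DELETE FROM evaluation_result WHERE run_id = ?", (run_id,))
        cur = con.execute("DELETE FROM evaluation_run WHERE id = ?", (run_id,))
    return cur.rowcount > 0  # success only if the run row was deleted

print(list_runs("alice"))         # alice sees only her own run
print(delete_run("r2", "alice"))  # cannot delete bob's run
print(delete_run("r1", "alice"))
```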
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 2fda1a5d-f12e-4c32-ae4e-2e43404b0802
📒 Files selected for processing (2)
- api/apps/evaluation_app.py
- api/db/services/evaluation_service.py
```python
dataset_id = request.args.get("dataset_id")
page = int(request.args.get("page", 1))
page_size = int(request.args.get("page_size", 20))
return get_json_result(data=EvaluationService.list_runs(dataset_id, page, page_size))
except Exception as e:
```
Add route logs for the new list/delete flows
Please add entry/success logs for listing and deleting runs (include run_id, dataset_id, page, page_size, and requester id) so these new endpoints are observable in production.
As per coding guidelines, **/*.py: Add logging for new flows.
Also applies to: 385-387
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api/apps/evaluation_app.py` around lines 372 - 376, Add structured entry and
success logs around the list and delete run flows in evaluation_app.py: log an
"entry" message before calling EvaluationService.list_runs (include dataset_id,
page, page_size and requester id) and log a "success" message after returning
the result (include same fields); likewise, for the delete flow (the route that
calls EvaluationService.delete_run or similar), log entry and success messages
including run_id, dataset_id, and requester id. Use the existing logger used in
this module (e.g., processLogger or the module's logger) and ensure logs are
emitted in both normal success paths and before raising/returning in the try
block so these endpoints are observable in production.
Actionable comments posted: 2
♻️ Duplicate comments (1)
api/apps/evaluation_app.py (1)
375-376: ⚠️ Potential issue | 🟡 Minor
Add success-path logs for list/delete flows (still incomplete observability).
Entry logs were added, but success logs are still missing for both handlers; the list log also omits page_size.
As per coding guidelines, **/*.py: Add logging for new flows.
Also applies to: 386-389
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/apps/evaluation_app.py` around lines 375 - 376, Add success-path logging after the handlers return successfully: for the list handler (where you call EvaluationService.list_runs) emit an info log that includes user id (current_user.id), dataset_id, page, page_size and a brief result summary (e.g., count or success). Likewise, in the delete handler (the block around the call to EvaluationService.delete_run / the delete endpoint at lines ~386-389) add an info log on successful deletion including user id, dataset_id, run_id (or id being deleted) and success status. Ensure logs are placed after the service call returns and before the final return statement so they reflect the success path.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api/apps/evaluation_app.py`:
- Around line 375-376: The file uses logging.info in functions like list_runs
(calling EvaluationService.list_runs and get_json_result) but never imports the
logging module; add an import for the logging module at the top of the file
(e.g., import logging) so calls like logging.info(...) in list_runs and the
other endpoint (around the EvaluationService usage at lines ~386) do not raise
NameError.
- Around line 373-374: The code reads page and page_size directly via page =
int(request.args.get("page", 1)) and page_size =
int(request.args.get("page_size", 20)) which can raise and allow unbounded
sizes; validate and bound these inputs: parse with a try/except to return a data
error (e.g., BadRequest/JSON error) for non-integers, enforce page >= 1, enforce
1 <= page_size <= <CAP> (choose a reasonable cap like 100), and coerce or return
an error if out of range; update the same validation for the other occurrence at
the referenced lines so all handlers using request.args.get("page" /
"page_size") behave consistently.
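The suggested validation might look like the sketch below. The cap of 100 is the reviewer's example value, and the real handler would translate `ValueError` into the API's data-error response rather than let it propagate:

```python
MAX_PAGE_SIZE = 100  # cap suggested in the review; pick whatever fits the API

def parse_pagination(args):
    """Parse page/page_size from query args, rejecting bad or unbounded values."""
    try:
        page = int(args.get("page", 1))
        page_size = int(args.get("page_size", 20))
    except (TypeError, ValueError):
        raise ValueError("page and page_size must be integers")
    if page < 1:
        raise ValueError("page must be >= 1")
    if not 1 <= page_size <= MAX_PAGE_SIZE:
        raise ValueError(f"page_size must be between 1 and {MAX_PAGE_SIZE}")
    return page, page_size

print(parse_pagination({"page": "2", "page_size": "50"}))  # → (2, 50)
```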
---
Duplicate comments:
In `@api/apps/evaluation_app.py`:
- Around line 375-376: Add success-path logging after the handlers return
successfully: for the list handler (where you call EvaluationService.list_runs)
emit an info log that includes user id (current_user.id), dataset_id, page,
page_size and a brief result summary (e.g., count or success). Likewise, in the
delete handler (the block around the call to EvaluationService.delete_run / the
delete endpoint at lines ~386-389) add an info log on successful deletion
including user id, dataset_id, run_id (or id being deleted) and success status.
Ensure logs are placed after the service call returns and before the final
return statement so they reflect the success path.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0cbb828c-fa86-4111-bda0-5e40f61f4692
📒 Files selected for processing (2)
- api/apps/evaluation_app.py
- api/db/services/evaluation_service.py
🚧 Files skipped from review as they are similar to previous changes (1)
- api/db/services/evaluation_service.py
All CodeRabbit feedback addressed: added `import logging` at the top of the file.
What problem does this PR solve?
Two API endpoints in evaluation_app.py were unimplemented stubs
returning fake responses:
This PR adds list_runs() and delete_run() methods to EvaluationService
and wires them to the endpoints.
Type of change
Fixes #14056