Fix race conditions, error recovery, and exit handlers in job servers#38423
shunping merged 4 commits into apache:master
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request addresses intermittent shutdown errors in job servers by hardening the cleanup process. It introduces better state management, ensures thread safety during cache operations, and prevents resource leaks and duplicate registrations by properly managing atexit handlers and ensuring cleanup routines complete reliably.
r: @derrickaw
Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control. If you'd like to restart, comment
Code Review
This pull request improves the robustness of job and subprocess server cleanup by using try-finally blocks to ensure state variables are reset during failures. It also fixes a race condition in the purge method by moving a membership check inside a lock and prevents duplicate atexit registrations. Review feedback recommends removing a redundant hasattr check for atexit.unregister, as it is supported in all targeted Python versions.
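The review's point about `atexit` can be illustrated with a short sketch. The class and method names below are assumptions for illustration, not Apache Beam's actual implementation; the pattern shown is unregister-before-register so a restarted server never queues a duplicate exit handler, and `atexit.unregister` needs no `hasattr` guard because it has existed since Python 3.3, which covers every version Beam supports.

```python
import atexit


class SubprocessServer:
    """Illustrative sketch only; names here are assumptions,
    not Beam's actual subprocess-server implementation."""

    def start(self):
        # Unregister first so restarting the server never leaves two
        # handlers queued. atexit.unregister silently does nothing if
        # the callable was never registered, so no hasattr guard is
        # needed on any supported Python version.
        atexit.unregister(self.stop)
        atexit.register(self.stop)

    def stop(self):
        # Deregister on a clean stop so interpreter exit does not
        # invoke stop() a second time.
        atexit.unregister(self.stop)
```

Note that `atexit.unregister` removes entries by equality, and two bound-method references to the same instance's `stop` compare equal, so this dedup works even though each `self.stop` access creates a new bound-method object.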
/gemini review
Code Review
This pull request improves the robustness of job and subprocess server management by ensuring consistent state cleanup and thread safety. Key changes include moving the _live_owners membership check and removal inside a lock to prevent race conditions, and wrapping cleanup calls in try...finally blocks to ensure _started and _owner_id are reset even if errors occur. It also ensures atexit handlers are unregistered to avoid duplicates on restart. Comprehensive tests were added to cover these scenarios, including concurrent purge attempts and destructor failures. I have no feedback to provide.
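The race fix described above can be sketched as follows. Only `_live_owners` is named in the PR; the surrounding class structure, method names, and the `teardown_count` instrumentation are assumptions for illustration. The key point is that the membership check and the removal happen atomically under one lock, so two concurrent `purge` calls cannot both observe the owner as live and both run the teardown.

```python
import threading


class JobServerCache:
    """Sketch of the race fix; structure is illustrative, not Beam's code."""

    def __init__(self):
        self._lock = threading.Lock()
        self._live_owners = set()
        self.teardown_count = 0  # instrumentation for this example only

    def purge(self, owner_id):
        # Check-then-remove is atomic under the lock: at most one caller
        # wins the race and proceeds to tear the server down.
        with self._lock:
            if owner_id not in self._live_owners:
                return False
            self._live_owners.discard(owner_id)
        self._teardown()
        return True

    def _teardown(self):
        self.teardown_count += 1
```

With the check outside the lock, two threads could both see the owner as present and both call `_teardown`; moving it inside guarantees exactly one teardown per live owner.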
cc'd @tvalentyn This seems promising to fix the "grpc deadline exceeded" error in our Python test suites.
The following error occurs from time to time in the Python test workflows (e.g. https://github.com/apache/beam/actions/runs/25133968815/job/73667329011?pr=38135). It happens during the shutdown of a job server, but judging from the code path, it could also affect other servers such as the expansion service.
We identified three potential issues and implemented fixes to harden the server shutdown process.
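One of those fixes, wrapping cleanup in `try...finally` so state is reset even when the destructor raises, can be sketched like this. The `_started` and `_owner_id` attributes come from the PR description; the class name and the simulated failure are assumptions for illustration.

```python
class StoppableServer:
    """Sketch of the try/finally fix; only _started and _owner_id are
    named in the PR, the rest is illustrative."""

    def __init__(self, owner_id):
        self._started = True
        self._owner_id = owner_id

    def stop(self):
        try:
            self._shut_down_process()  # may raise, e.g. during interpreter exit
        finally:
            # Reset state even on failure, so a retry or restart never
            # sees a half-stopped server that still claims to be started.
            self._started = False
            self._owner_id = None

    def _shut_down_process(self):
        # Stand-in for the real shutdown; simulates a destructor failure.
        raise RuntimeError('simulated shutdown failure')
```

Without the `finally`, a failed shutdown would leave `_started` set, and a later restart or purge could skip cleanup or double-register handlers against a server that is already half torn down.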