Add DiskProvisionedIops/ThroughputMibps options and update Go client libraries#37377
Conversation
Summary of Changes (Gemini Code Assist): This pull request enhances Google Cloud Dataflow by allowing users to specify detailed performance characteristics for worker VM root disks. By introducing options for provisioned IOPS and throughput, it provides greater control over the underlying infrastructure, which can be crucial for optimizing the performance and cost-efficiency of data processing pipelines. The change is integrated across all supported SDKs, ensuring a consistent experience for developers.
Checks are failing. Will not request review until checks are succeeding.
Force-pushed from 309f5fd to 6cb13d9
Force-pushed from efa4d20 to 6cb13d9
Codecov Report: ✅ All modified and coverable lines are covered by tests.

```diff
@@ Coverage Diff @@
##           master   #37377       +/-   ##
=============================================
+ Coverage     58.52%   71.94%   +13.42%
+ Complexity    15434     3427    -12007
=============================================
  Files          2851      329     -2522
  Lines        280086    29836   -250250
  Branches      12337     3593     -8744
=============================================
- Hits         163917    21466   -142451
+ Misses       109756     6729   -103027
+ Partials       6413     1641     -4772
```
/gemini review
Code Review
This pull request introduces support for provisioned IOPS and throughput for worker disks across the Java, Go, and Python SDKs for Google Cloud Dataflow. It includes updates to pipeline options, translator logic, and associated tests, alongside dependency updates in the Go SDK. Feedback indicates critical issues within the Python SDK's autogenerated Dataflow client, specifically the removal of essential JSON field mappings for Flex Templates and the renumbering of message field indices, which could lead to serialization errors and breaking changes.
sdks/python/apache_beam/runners/dataflow/internal/clients/dataflow/dataflow_v1b3_messages.py (8050-8057)
The custom JSON field mappings for dynamicTemplate_gcsPath and dynamicTemplate_stagingLocation have been removed. These mappings are critical for Flex Template functionality in the Python SDK, as they ensure that attribute names with underscores are correctly mapped to JSON keys containing dots (e.g., dynamicTemplate.gcsPath). Removing them will break Flex Template job submissions.
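To illustrate why these mappings matter: attribute names in the generated Python client use underscores, but the Dataflow Flex Template API expects JSON keys containing dots. The real client registers this via apitools' `encoding.AddCustomJsonFieldMapping`; the plain-Python sketch below only mirrors the behavior being protected, without depending on apitools.

```python
# Toy illustration of a custom JSON field mapping: Python attribute names
# with underscores are serialized under JSON keys containing dots
# (e.g. dynamicTemplate_gcsPath -> dynamicTemplate.gcsPath).
# This is NOT the apitools implementation, only a sketch of the idea.
import json

CUSTOM_JSON_FIELD_MAPPINGS = {
    "dynamicTemplate_gcsPath": "dynamicTemplate.gcsPath",
    "dynamicTemplate_stagingLocation": "dynamicTemplate.stagingLocation",
}


def to_json(message: dict) -> str:
    """Serialize a message dict, renaming fields via the custom mapping."""
    renamed = {
        CUSTOM_JSON_FIELD_MAPPINGS.get(key, key): value
        for key, value in message.items()
    }
    return json.dumps(renamed, sort_keys=True)


payload = to_json({"dynamicTemplate_gcsPath": "gs://bucket/template.json"})
# Without the mapping, the service would receive the underscore form of the
# key and ignore or reject the Flex Template parameter.
```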
sdks/python/apache_beam/runners/dataflow/internal/clients/dataflow/dataflow_v1b3_messages.py (7954-7974)
The field indices in the WorkerPool message class (and others like Job and RuntimeUpdatableParams) have been renumbered. In apitools generated message classes, these integer indices are significant for serialization. Manually shifting existing indices is a breaking change that can cause binary incompatibility or serialization errors when communicating with the Dataflow service. New fields should be added at the end of the message definition with new unique indices, or the file should be regenerated using the official generator to ensure consistency with the discovery document.
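The failure mode described here can be made concrete with a toy index-keyed codec: the serialized form is keyed by the integer field index, not the attribute name, so a sender and reader with differently numbered schemas silently assign values to the wrong fields. The field names below are illustrative, not the actual `WorkerPool` schema.

```python
# Why renumbering existing field indices is breaking: serialization is
# keyed by the integer index, not the field name. Schemas and names here
# are hypothetical, chosen only to show the mismatch.

OLD_SCHEMA = {1: "diskSizeGb", 2: "diskType"}
# Renumbered schema: a new field was inserted at index 1, shifting the rest.
NEW_SCHEMA = {1: "diskProvisionedIops", 2: "diskSizeGb", 3: "diskType"}


def encode(values: dict, schema: dict) -> dict:
    """Map name -> value into index -> value, per the sender's schema."""
    name_to_index = {name: idx for idx, name in schema.items()}
    return {name_to_index[name]: value for name, value in values.items()}


def decode(payload: dict, schema: dict) -> dict:
    """Map index -> value back to name -> value, per the reader's schema."""
    return {schema[idx]: value for idx, value in payload.items()}


wire = encode({"diskSizeGb": 100, "diskType": "pd-ssd"}, OLD_SCHEMA)
corrupted = decode(wire, NEW_SCHEMA)
# corrupted == {"diskProvisionedIops": 100, "diskSizeGb": "pd-ssd"}:
# every value lands on the wrong field. Appending new fields with fresh
# indices (or regenerating the client) avoids the shift entirely.
```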
damccorm left a comment
Thanks - this is looking pretty good, just a few more minor comments
/gemini review
Code Review
This pull request adds support for setting disk provisioned IOPS and throughput in the Dataflow runner across the Java, Python, and Go SDKs. It introduces new pipeline options, updates job translation logic, and includes relevant tests for each SDK. The PR also updates several Go dependencies and refreshes the autogenerated Dataflow API client in Python. Feedback was provided to generalize the feature description in CHANGES.md, as the originally listed flag names were specific to the Java SDK and did not reflect the snake_case convention used in Python and Go.
damccorm left a comment
Thanks! Retrying some precommits to try to get a green signal, but I will eventually merge (unless they continue to fail and seem to represent a real problem)
Thanks a lot, Danny :)!
Hey, this PR actually breaks the Dataflow protos in Python by changing field numbers for existing fields. Additionally, it changes the internally generated Python client that we're in the process of deprecating. I'd suggest reverting this PR and removing the Python-specific code.
This is my fault, I should've caught the conflict. The relevant PR making this change is #37639, which would basically need to undo all the Python pieces here. I agree, let's:
This pull request introduces two new pipeline options for the Google Cloud Dataflow runner in the Java, Python, and Go SDKs. These options allow users to specify provisioned performance for worker VM disks:

- `disk_provisioned_iops`: Sets the provisioned IOPS for the disk. If unspecified, the service chooses a default.
- `disk_provisioned_throughput_mibps`: Sets the provisioned throughput in MiB/s for the disk.
Tests have been added/updated to verify that these options are correctly parsed and translated.
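As a usage illustration, the argparse sketch below shows how the two snake_case flags might be declared and parsed. It is not the Beam `PipelineOptions` implementation; the types and defaults are assumptions.

```python
# Illustrative sketch only: declaring and parsing the two new snake_case
# options. Not the actual Beam PipelineOptions code; types and defaults
# here are assumptions.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--disk_provisioned_iops", type=int, default=None,
    help="Provisioned IOPS for the worker disk; service default if unset.")
parser.add_argument(
    "--disk_provisioned_throughput_mibps", type=int, default=None,
    help="Provisioned throughput in MiB/s for the worker disk.")

args = parser.parse_args([
    "--disk_provisioned_iops=10000",
    "--disk_provisioned_throughput_mibps=600",
])
```

Per the review note about CHANGES.md, the Java SDK exposes the same options under camelCase flag names, while Python and Go use the snake_case convention shown here.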
More context: these pipeline options need to be added before submitting this CL: https://critique.corp.google.com/cl/858930428
Issue: #37374
Additionally, the Go Google API client dependencies were updated to their latest versions so the Go SDK can recognize the newly introduced DiskProvisionedIops and DiskProvisionedThroughputMibps fields in the Dataflow API. Several other Go client libraries and dependencies in sdks/go.mod were also indirectly updated to their latest versions.