What happened?
With the global window grouping optimization in Spark
https://issues.apache.org/jira/browse/BEAM-12646
8c3af01#diff-c13404655c9bf261fcbcc72feb949e0ffcf428802897d2f39097f34d7a3d995aL185
the issue for jobs with global windows was reintroduced. The runner now uses Spark's groupByKey, which means all values for a single key must fit in memory at once. This was already solved in:
https://issues.apache.org/jira/browse/BEAM-5392
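To illustrate the memory hazard, here is a minimal Python sketch (not Beam or Spark runner code; both function names are hypothetical) contrasting eager per-key materialization, as in Spark's groupByKey, with the lazy iterator-based grouping that BEAM-5392 had restored:

```python
from itertools import groupby
from operator import itemgetter

def group_eagerly(pairs):
    """Materialize every value for a key in one in-memory list,
    analogous to what groupByKey does per key on an executor."""
    groups = {}
    for key, value in pairs:
        groups.setdefault(key, []).append(value)  # unbounded per-key list
    return groups

def group_streaming(sorted_pairs):
    """Yield (key, lazy iterator) over key-sorted input, so values are
    consumed one at a time instead of being held all at once."""
    for key, kvs in groupby(sorted_pairs, key=itemgetter(0)):
        yield key, (v for _, v in kvs)

pairs = [("k", i) for i in range(5)]
eager = group_eagerly(pairs)  # the whole value list for "k" lives in memory
lazy_total = sum(v for _, vs in group_streaming(sorted(pairs)) for v in vs)
```

With a hot key in a global window, the eager variant grows without bound, while the streaming variant's memory use stays constant per value.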
Issue Priority
Priority: 3 (minor)
Issue Components