Describe the issue
Using `databricks bundle deploy` with the direct engine always triggers a cluster restart.
Similar to #3286.
Bundle validates successfully.
No changes to the cluster.
`databricks bundle plan` shows the cluster will be updated.
Checking the cluster event history shows identical pre/post JSON configurations aside from:
`"release_version": "17.3.9"`
which appears to be a server-populated field that cannot be specified in the bundle configuration.
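The behaviour above is consistent with the deploy engine diffing the full remote cluster state against the bundle definition. A minimal Python sketch of that failure mode (hypothetical field values, not the CLI's actual implementation): a field the server populates but the config cannot set makes a naive full comparison report a change on every deploy, while excluding server-only fields removes the false diff.

```python
# Desired state as declared in the bundle (hypothetical, trimmed values).
desired = {
    "cluster_name": "cluster-developer",
    "spark_version": "17.3.x-scala2.13",
}

# Remote state as returned by the server: same config plus a
# server-populated field that the bundle cannot specify.
actual = dict(desired, release_version="17.3.9")

# Naive full comparison: always "changed", so every deploy restarts.
naive_changed = desired != actual

# Excluding server-only fields before diffing removes the spurious change.
SERVER_ONLY = {"release_version"}
filtered = {k: v for k, v in actual.items() if k not in SERVER_ONLY}
changed = desired != filtered

print(naive_changed, changed)  # → True False
```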
Configuration
Databricks CLI v0.284.0
Cluster definition
```yaml
resources:
  clusters:
    cluster-developer:
      cluster_name: "cluster-developer"
      policy_id: ${var.policy_general}
      spark_version: ${var.spark_version}
      driver_node_type_id: "Standard_D4pds_v6"
      node_type_id: "Standard_D4pds_v6"
      kind: "CLASSIC_PREVIEW"
      enable_local_disk_encryption: false
      is_single_node: true
      num_workers: 0
      spark_conf:
        "spark.master": "local[*, 4]"
        "spark.databricks.cluster.profile": "singleNode"
      custom_tags:
        compute_name: "cluster-developer"
        ResourceClass: SingleNode
      autotermination_minutes: 60
      data_security_mode: SINGLE_USER
      single_user_name: ${workspace.current_user.userName}
```
Steps to reproduce the behavior
- Run `databricks bundle deploy ...`
- Observe cluster update behaviour
Expected Behavior
Cluster should not update/restart. This adds 2+ minutes to every iteration when using bundles and is annoying.
Actual Behavior
Cluster restarts.
OS and CLI version
Databricks CLI v0.284.0