Limitation on Spark "task_gpu_amount" cannot be less than 1 #204

Open

chenya-zhang opened this issue Aug 27, 2023 · 0 comments

chenya-zhang commented Aug 27, 2023

Hi folks, here is some context on the limitation we encountered.

  1. There is a check in "mirrored_strategy_runner.py" that "task_gpu_amount" cannot be less than 1 (see the first sketch after this list):
    https://github.com/tensorflow/ecosystem/blob/master/spark/spark-tensorflow-distributor/spark_tensorflow_distributor/mirrored_strategy_runner.py#L161-L164
  2. "spark.task.resource.gpu.amount" can by default be set to a decimal amount, per NVIDIA's docs (second sketch below): https://www.nvidia.com/en-us/ai-data-science/spark-ebook/getting-started-spark-3/
  3. TensorFlow has options to cap a process at a fraction of a GPU's memory (third sketch below): https://www.tensorflow.org/api_docs/python/tf/compat/v1/GPUOptions, Tensorflow v2 Limit GPU Memory usage tensorflow#25138
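
For reference, the guard at the linked lines boils down to something like this (a paraphrased sketch, not the verbatim code; see the repository for the exact wording):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sketch of the check: the per-task GPU amount is read as an integer,
# so a fractional value either fails the int() parse or the >= 1 check.
task_gpu_amount = int(spark.conf.get("spark.task.resource.gpu.amount"))
if task_gpu_amount < 1:
    raise ValueError("'spark.task.resource.gpu.amount' must be >= 1.")
```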

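Spark itself accepts a fractional per-task amount, e.g. one GPU per executor shared by two concurrent tasks (a minimal sketch; it assumes the cluster already has GPU discovery and resource-aware scheduling configured):

```python
from pyspark.sql import SparkSession

# Minimal sketch: one GPU per executor, shared by two concurrent tasks.
# Assumes GPU discovery (spark.executor.resource.gpu.discoveryScript, etc.)
# is already configured on the cluster.
spark = (
    SparkSession.builder
    .config("spark.executor.resource.gpu.amount", "1")
    .config("spark.task.resource.gpu.amount", "0.5")  # fractional is valid here
    .getOrCreate()
)
```
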
Given the above, does it make sense for the Spark TensorFlow distributor to allow the GPU amount per task to be less than 1?
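
On the TensorFlow side, the fractional limiting looks roughly like this (a sketch; the 0.5 fraction and 4096 MB limit are arbitrary example values):

```python
import tensorflow as tf

# TF1-style: cap this process at half of the GPU's memory
# (the GPUOptions API linked above).
gpu_options = tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=0.5)
sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(gpu_options=gpu_options))

# TF2-style: hard memory cap via a logical device, as discussed in
# tensorflow#25138. The 4096 MB limit is an arbitrary example.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=4096)],
    )
```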
