do you have any more info on the persistent clusters? we could potentially use it to speed up our end to end workflows by quite a bit :slightly_smiling_face:
| we don’t support it right now. we can support that, need to update backend plugin. I’ll work on it next week, and get back to you once it’s done.
| haha that's great! but i'm mostly looking to understand the mechanics and i remember there being an RFC discussing them
we don't have an urgent timeline and are looking to plan some work
| Kevin Su let’s wait on this. Dylan Wilder what we understood from Keshi Dai was that there is potential corruption that happens when a cluster is reused in ray
| does that mean it's off the roadmap?
or just needs to be thought through more?
| it's not off the roadmap
it can be done on the flyte side, we don't know if ray is ready yet
but it's been de-prioritized
| got it, thanks for the context :pray:
actually wait, "it can be done on flyte side" does this mean the infra for reusing resources exists?
| There is a `ClusterSelector` in the RayJob <https://github.com/ray-project/kuberay/blob/master/ray-operator/apis/ray/v1alpha1/rayjob_types.go#L53|CRD>, so we should be able to use it to run a Ray job on an existing cluster. Propeller needs to save the RayCluster ID generated by the first Ray task, and the second Ray task should reuse the same Ray cluster by passing the cluster selector. Lastly, propeller shuts down the Ray cluster at the end node.
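For readers trying to picture the mechanics, here is a rough sketch of the RayJob fields involved; the `clusterSelector` field name comes from the CRD linked above, but the selector key and cluster name below are hypothetical placeholders, not something Flyte emits today.
```
# Hypothetical sketch only: what a second task's RayJob might carry if
# Propeller reused the cluster created by the first task. The selector
# key/value below are placeholders.
reused_ray_job = {
    "apiVersion": "ray.io/v1alpha1",
    "kind": "RayJob",
    "metadata": {"name": "second-ray-task"},
    "spec": {
        "entrypoint": "python second_task.py",
        # Point at the existing RayCluster instead of providing a rayClusterSpec.
        "clusterSelector": {"ray.io/cluster": "cluster-from-first-task"},
    },
}
```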
| Ketan Umare We will need this feature as well. With more complex Flyte workflows, users should be able to share Ray cluster among different Flyte tasks.
| I know you will, remember we were adding this but you had issues
|
Hi, how do I view the initiated Ray cluster in the Ray dashboard? Because while running I can see that the Ray cluster is initiated locally on port 8265, but when the port is opened it shows 'site can't be reached', even while the workflow is running at that moment.
| The cluster is dead as soon as the script ends
|
Hi, I was trying distributed training using Ray in Flyte. I am getting this error while running.
```
from sklearn.datasets import load_breast_cancer
from flytekit import Resources, task, workflow
from flytekitplugins.ray import HeadNodeConfig, RayJobConfig, WorkerNodeConfig
from xgboost_ray import RayDMatrix, RayParams, train
import ray
from ray import tune

# ray.init()
# ray.init("auto", ignore_reinit_error=True)

ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=2)],
)

num_actors = 4
num_cpus_per_actor = 1

ray_params = RayParams(
    num_actors=num_actors, cpus_per_actor=num_cpus_per_actor)

def train_model(config):
    train_x, train_y = load_breast_cancer(return_X_y=True)
    train_set = RayDMatrix(train_x, train_y)
    evals_result = {}
    bst = train(
        params=config,
        dtrain=train_set,
        evals_result=evals_result,
        evals=[(train_set, "train")],
        verbose_eval=False,
        ray_params=ray_params)
    bst.save_model("model.xgb")

@task(task_config=ray_config, limits=Resources(mem="2000Mi", cpu="1"))
def train_model_task() -> dict:
    config = {
        "tree_method": "approx",
        "objective": "binary:logistic",
        "eval_metric": ["logloss", "error"],
        "eta": tune.loguniform(1e-4, 1e-1),
        "subsample": tune.uniform(0.5, 1.0),
        "max_depth": tune.randint(1, 9)
    }
    analysis = tune.run(
        train_model,
        config=config,
        metric="train-error",
        mode="min",
        num_samples=4,
        resources_per_trial=ray_params.get_tune_resources())
    return analysis.best_config

@workflow
def train_model_wf() -> dict:
    return train_model_task()
```
| Running out of disk
Request more, please
| `@task(task_config=ray_config, limits=Resources(mem="2000Mi", cpu="1", ephemeral_storage="500Mi"))`
| ```
from sklearn.datasets import load_breast_cancer
from flytekit import Resources, task, workflow
from flytekitplugins.ray import HeadNodeConfig, RayJobConfig, WorkerNodeConfig
from xgboost_ray import RayDMatrix, RayParams, train
import ray
from ray import tune

# ray.shutdown()
# ray.init()
# ray.init("auto", ignore_reinit_error=True)

ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=3)],
)

num_actors = 2
num_cpus_per_actor = 1

ray_params = RayParams(
    num_actors=num_actors, cpus_per_actor=num_cpus_per_actor)

def train_model(config):
    train_x, train_y = load_breast_cancer(return_X_y=True)
    train_set = RayDMatrix(train_x, train_y)
    evals_result = {}
    bst = train(
        params=config,
        dtrain=train_set,
        evals_result=evals_result,
        evals=[(train_set, "train")],
        verbose_eval=False,
        ray_params=ray_params)
    bst.save_model("model.xgb")

# @task(limits=Resources(mem="2000Mi", cpu="1"))
@task(task_config=ray_config, limits=Resources(mem="3000Mi", cpu="1", ephemeral_storage="3000Mi"))
def train_model_task() -> dict:
    config = {
        "tree_method": "approx",
        "objective": "binary:logistic",
        "eval_metric": ["logloss", "error"],
        "eta": tune.loguniform(1e-4, 1e-1),
        "subsample": tune.uniform(0.5, 1.0),
        "max_depth": tune.randint(1, 9)
    }
    analysis = tune.run(
        train_model,
        config=config,
        metric="train-error",
        mode="min",
        num_samples=4,
        max_concurrent_trials=1,
        resources_per_trial=ray_params.get_tune_resources())
    return analysis.best_config

@workflow
def train_model_wf() -> dict:
    return train_model_task()
```
Still getting this error when we specify the `ephemeral_storage` value as well. Do you have any suggested limits for CPU and memory?
| If you’re using demo cluster, I think 1Gi is the limit.
| i am trying it on EKS cluster
| <https://github.com/flyteorg/flyte/blob/aae01aa33eadfb86f1c952eb415f21326ea5519b/charts/flyte-core/values-eks.yaml#L216> section specifies the task resource defaults.
Can you check yours? Please try increasing the memory. I believe `kubectl -n flyte edit cm flyte-admin-base-config` is the command, but I'm not very sure. Let me know if this doesn't work.
| Nice. Please increase your mem and try again.
| I increased the memory in the task. The execution is getting queued but stays in a pending state for a long time. Even in a remote run, the workflow has been running for more than 4h for 4 trials but the execution is not happening.
| Have you seen the message saying you asked for 3 cpu and 0 gpu but the cluster has 2 cpu and 0 gpu?
| yes, but I have requested only 1 CPU. Should I change it anywhere else?
```@task(task_config=ray_config, limits=Resources(mem="5000Mi", cpu="1", ephemeral_storage="3000Mi"))```
| I think it’s because of `get_tune_resources()`.
Have you seen <https://docs.ray.io/en/releases-1.11.0/ray-more-libs/xgboost-ray.html#memory-usage> section in the doc?
I’m assuming you’re training an xgboost model.
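To make the CPU math concrete, here is a small sketch (assuming `xgboost_ray` is installed; the exact object returned by `get_tune_resources()` varies by version):
```
from xgboost_ray import RayParams

# Each Tune trial reserves num_actors * cpus_per_actor CPUs for the training
# actors plus one CPU for the trial driver, e.g. 2 actors x 1 CPU + 1 = 3,
# which lines up with the "you asked for 3 cpu" warning above.
ray_params = RayParams(num_actors=2, cpus_per_actor=1)
print(ray_params.get_tune_resources())
```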
| ```
ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=3)],
)
```
do we have any way to specify the number of CPUs in the Ray cluster config? like this?
```
ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=3)],
    num_cpus=4,
)
```
because as mentioned above we have 64 CPUs in the EKS cluster, but it shows the warning that we have only 2 CPUs in the *ray cluster*. How do we increase the CPU limit in the Ray cluster config?
| I believe you can set them in `RayParams`
<https://github.com/ray-project/xgboost_ray/blob/ecca2c63385841a0a1938f5edc349893e5ac63fc/xgboost_ray/main.py>
| yeah, but in `RayParams` we can only specify the number of CPUs to be utilized for each trial (`cpus_per_actor`). Is there any config to change to increase the CPUs of the Ray cluster as a whole? Because even when I increased `cpus_per_actor`, the requested CPU is still 2 and it shows the warning that it has only 2 CPUs in the cluster.
| Kevin Su, any idea how we can set the ray cluster resources? As per the docs, it should be possible with `init()`, but in this case, since Flyte initializes the cluster, how can a user modify those values?
| To set the Ray cluster resources, just update the `limits` and `requests` in the @task. Like <https://github.com/flyteorg/flytesnacks/blob/a3b97943563cfc952b5683525763578685a93694/cookbook/integrations/kubernetes/ray_example/ray_example.py#L56|https://github.com/flyteorg/flytesnacks/blob/a3b97943563cfc952b5683525763578685a93[…]694/cookbook/integrations/kubernetes/ray_example/ray_example.py>
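Roughly what that looks like in practice (a sketch based on the linked example; the resource numbers are illustrative, not recommendations):
```
from flytekit import Resources, task
from flytekitplugins.ray import HeadNodeConfig, RayJobConfig, WorkerNodeConfig

ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=3)],
)

# The task-level requests/limits are what the Ray head and worker containers
# get, so this is where the cluster's CPU/memory pool comes from.
@task(
    task_config=ray_config,
    requests=Resources(mem="4Gi", cpu="4"),
    limits=Resources(mem="8Gi", cpu="8"),
)
def train_model_task() -> dict:
    return {}
```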
| ```@task(task_config=ray_config, requests=Resources(mem="5000Mi", cpu="5", ephemeral_storage="1000Mi"), limits=Resources(mem="7000Mi", cpu="9", ephemeral_storage="2000Mi"))```
I have requested 5 CPUs, but when it executes it shows the requested CPUs as only 2,
and it shows the same warning that we have only 2 CPUs in the cluster.
| I’m wondering where it’s picking “you asked for 9.0 cpu” from. Is it from your `limits`?
| I think it is based on the resources requested per trial. When I specified cpus_per_trial and num_actors as 2 and 4, it showed the requested CPUs as 9. When I decreased the resources requested and num_actors to 2 and 1, it showed 3.
When cpus_per_trial and num_actors are both 1, the actual requested CPU is 2 and the execution happens fine, since we have the 2 CPUs available in the cluster. When num_actors is increased, it requests more CPUs, so the execution does not happen.
| Um got it. We need to find a way to increase the cluster resources. Not sure why `requests` isn’t assigning the requested resources to the cluster.
| yeah. kindly notify if there is any way to do so.
| Kevin Su, do you have any ideas?
| Priya, could you describe the RayJob (kubectl describe) and check if the resources are the same as you specified in the @task? I guess the head node doesn't use all the CPU in the pod. In other words, the CPU of the head pod could be 10, but the CPU of the head node process in the pod could be 2.
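One thing that may be worth trying here (a sketch, assuming your flytekitplugins-ray version also exposes `ray_start_params` on the worker config) is to tell the Ray processes explicitly how many CPUs to advertise, so the logical cluster size matches what the pods were given; `num-cpus` is a standard `ray start` parameter and the values below are illustrative:
```
from flytekitplugins.ray import HeadNodeConfig, RayJobConfig, WorkerNodeConfig

ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"num-cpus": "4"}),
    worker_node_config=[
        WorkerNodeConfig(
            group_name="ray-group",
            replicas=3,
            # Advertise 4 CPUs per worker to Ray's scheduler; keep this in
            # sync with the pod's actual CPU request.
            ray_start_params={"num-cpus": "4"},
        )
    ],
)
```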
| I have attached the allocated memory when we describe the node.
```@task(task_config=ray_config, requests=Resources(mem="5000Mi", cpu="5") , limits=Resources(mem="7000Mi", cpu="9"))```
These are the requested resources.
| sorry, could you describe the rayJob you are running?
| is there any command for this?
This is what is shown when we describe the kuberay-operator while it's running.
| kubectl describe RayJobs <name> -n <namespace>
|
Hi, while initiating the Ray cluster, the task is running in only one instance and pod. Generally, if a Ray cluster is initiated, it is expected to run on different instances in a distributed manner, right? Can we do horizontal scaling here to increase the pool of resources?
| cc: Kevin Su
| hmm, if the ray task is started, propeller should create the head node and worker nodes. did you enable the ray plugin in propeller?
```
tasks:
  task-plugins:
    enabled-plugins:
      - container
      - sidecar
      - k8s-array
      - ray
    default-for-task-types:
      container: container
      sidecar: sidecar
      container_array: k8s-array
      ray: ray
```
| Yeah ray plugin is enabled
| is there any error in the kuberay operator?
| not sure. how to check if it works fine?
| kubectl logs <kuberay-operator> -n ray-system
| have you installed an ingress controller? if not, it will cause an error in kuberay; kuberay uses the ingress controller to create a new ingress route for the RayJob
| yes ingress controller is installed in the setup
| Priya do you have couple mins to hop on a call?
| sure ... pls let me know ur feasible timings
| maybe 9~12 AM in your time
| Sorry for the inconvenience, Kevin Su. We were having a live demo, so we couldn't work on the setup. Will tomorrow at the same time work for you?
| No worries, yes, ping me tomorrow when you are available
| Hi, actually once the helm chart was upgraded I was able to see the worker pods getting created. But the issue now is that the task gets queued for a long time and is not getting initiated. It gets `The node was low on resource: ephemeral-storage` and tries to initiate a new pod, but we have enough ephemeral storage in the instance.
The docker image that we are trying to pull is nearly 10 GB. Will that be an issue? Shall we connect tomorrow morning at 9 AM my time? Can you confirm whether to connect through Slack or Google Meet?
| I’ll call you at 9am your time through google meet
|
Hi! Is there a way to shorten `ttlSecondsAfterFinished`? By default, it is 3600s (1 hour) and we’d like to tear down a cluster right after a job is complete. Thanks for your help!
```$ k describe rayjobs feb5da8c2a2394fb4ac8-n0-0 -n flytesnacks-development
...
Ttl Seconds After Finished: 3600```
| yes. update the propeller config map. change ttl to 0
<https://docs.flyte.org/en/latest/deployment/cluster_config/flytepropeller_config.html#ray-ray-config>
| Thanks for your prompt reply! Let me try this!
It worked like a charm!
```
$ kubectl describe rayjobs f3281d8b2689c4c35a67-n0-0 -n flytesnacks-development
Ttl Seconds After Finished: 60
```
For those who want to do the same, add this `ray.ttlSecondsAfterFinished` to the values.yaml for flyte-core.
```
# -- Kubernetes specific Flyte configuration
k8s:
  plugins:
    ray:
      ttlSecondsAfterFinished: 60
```
| nice!
| Thank you Hiromu Hota
|
FYI: Priya and I found some issues when running the task on KubeRay 0.4.0. If you get any errors as well, please downgrade to 0.3.0 first. I'll take a look into it at the end of this month.
One of the issues is that the ray job status is always "queued",
and some other issues can be found in this <https://flyte-org.slack.com/archives/C049Q7GDWN9/p1670512286551649|thread>
| Priya and Kevin Su please file the issue with Ray.
seems like KubeRay is buggy
| Hi, is the issue resolved in KubeRay?
|
In version 0.3.0, while running the basic ray task mentioned in the documentation <https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/kubernetes/ray_example/ray_example.html>, the pods were getting up and running but the execution stayed queued in the console until we terminated it manually.
| yes, in some cases, propeller didn’t delete the cluster. will fix it
|
Hi, in KubeRay version 0.3.0, while trying to perform Ray training remotely using `pyflyte --config ~/.flyte/config-remote.yaml run --remote --image <image_name> ray_demo.py wf`, I am getting this issue in the logs and the task is getting queued in the console. When the same is executed locally using `pyflyte --config ~/.flyte/config-remote.yaml run --image <image_name> ray_demo.py wf`, it works fine.
| Kevin Su, could it be a backend error?
| I've chatted with Priya. It's probably a Ray issue. Just posting here to see if anyone has run into the same issue.
|
Hi, I have a doubt regarding the scaling of nodes. Do we have options to make each worker pod run on a different node, so that 'n' smaller-memory instances are spawned instead?
For example:
if I request 8G memory and 4 CPUs with 4 replicas, a single higher-memory instance is spawned and all the worker pods are accommodated on that one node. Instead, I need an approach where the worker pods are scheduled on 4 different nodes with smaller-memory instances.
Do we have any way to achieve this scaling?
| There's a similar thread on <#CP2HDHKE1|ask-the-community>: <https://flyte-org.slack.com/archives/CP2HDHKE1/p1672750846054829>.
Could you revive that thread? I'll ping my team to respond.
|
Hello! I am running Ray on Flyte. I am getting a warning about Ray using /tmp instead of /dev/shm because /dev/shm has only 67108864 bytes available.
To fix this, I _would_ just specify --shm-size=3.55gb when running the container. But Flyte is running the containers for us, so I cannot figure out how to specify any run options.
Is there a way to specify run options for the containers that Flyte runs?
Full text of warning attached.
| cc Daniel Rammer do you know if we can specify the pod spec for ray jobs?
is this part of the work you are doing?
| Ketan Umare yes!
| awesome Ruksana Kabealo
| Hey Ruksana Kabealo, I'm looking into this a bit and not finding a definitive answer. So Flyte uses the <https://github.com/ray-project/kuberay|kuberay project> to launch Ray tasks, basically it <https://github.com/flyteorg/flyteplugins/blob/4634a81403e501882e3b1f39bcbd229b78768a4e/go/tasks/plugins/k8s/ray/ray.go#L155-L161|creates an instance of the RayJob> which is then executed. So basically, we need to figure out what configuration we need to change on the RayJob CR to support this. I have found <https://github.com/ray-project/kuberay/issues/201|this issue> which seems to indicate that these are simple memory requests / limits. Does this sound correct?
| Hey Daniel Rammer ! Yes, it should be a simple memory limit change to expand the size of /dev/shm
| Have you tried using the <https://github.com/flyteorg/flytesnacks/blob/master/cookbook/deployment/customizing_resources.py|task resource requests / limits>? IIUC Flyte's Ray plugin <https://github.com/flyteorg/flyteplugins/blob/4634a81403e501882e3b1f39bcbd229b78768a4e/go/tasks/plugins/k8s/ray/ray.go#L72-L79|uses those to set the container-level requests>.
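If it helps while experimenting, here is a small sketch (illustrative resource values) that bumps the task memory and then reports what the Ray runtime actually sees, including its object store size, which is what the /dev/shm warning is about:
```
import ray
from flytekit import Resources, task
from flytekitplugins.ray import HeadNodeConfig, RayJobConfig, WorkerNodeConfig

ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=2)],
)

@task(
    task_config=ray_config,
    requests=Resources(mem="4Gi", cpu="2"),
    limits=Resources(mem="4Gi", cpu="2"),
)
def check_cluster() -> dict:
    # cluster_resources() includes CPU, memory and object_store_memory,
    # so you can confirm how much shared memory Ray ended up with.
    return ray.cluster_resources()
```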
|
Hi, we recently opened a <https://github.com/flyteorg/flyteplugins/pull/321|pull request> to address the following <https://github.com/flyteorg/flyte/issues/2883|issue> (inter-cluster communication between Flyte and custom Ray cluster). Can someone please review it? It adds to a product Spotify is building that is integral to our machine learning platform.
cc Keshi Dai
| Hey Kevin Su Ketan Umare we are trying to make Flyte work for our internal Flyte cluster setup. Abdullah Mobeen opened this PR to enable the inter-cluster communication feature for Ray plugin. Could you guys help take a look? Thank you so much!
| Thanks, reviewing
| :+1:
cc Dylan Wilder
| Technically, the changes are similar to what Spotify did for the <https://github.com/spotify/flyte-flink-plugin|Flyte-Flink plugin>. Our data infra team also added context to the issue I linked. Thanks!
| cc Daniel Rammer to review as well
| timely :smile:
| Looks great, merged, thanks Abdullah Mobeen! Now we would just need to update the flyteplugin dependency version in flytepropeller. Is this something you're looking for an immediate propeller release on or are you building your own image anyways?
| Thanks a lot Daniel Rammer! Yess -- Since we prefer to always stay on a stable Flyte release, it is better if we make a new release to cover this plugin
| Stable Flyte release will be 1.4
| Ketan Umare, what’s the rough timeline for Flyte 1.4 release?
| So 1.4 is the current stable, 1.5 will be end of month (maybe first week of April) since we switched to a monthly release cycle. I opened <https://github.com/flyteorg/flytepropeller/pull/542|this PR> to get the plugin updates merged into propeller and will make sure this is merged for the 1.5 release.
| Thanks Daniel Rammer!
|
hi, I installed the master version of the kuberay operator and deployed the ray cluster. When I submit the workflow, I can see only the head pod getting created; the worker pods are not getting created. In the ray operator logs I could find some errors about creating the worker pods. It says the quota failed, but we have enough project quota and the requested resources are very small. Does anyone have an idea how to solve this issue?
`@task(task_config=ray_config, requests=Resources(mem="2000Mi", cpu="1"), limits=Resources(mem="3000Mi", cpu="2"))`
```
- development:
    - projectQuotaCpu:
        value: "64"
    - projectQuotaMemory:
        value: "150Gi"
value: |
  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: project-quota
    namespace: {{ namespace }}
  spec:
    hard:
      limits.cpu: {{ projectQuotaCpu }}
      limits.memory: {{ projectQuotaMemory }}
```
| can you ask this question in the ray slack?
this really seems like a Ray problem
|
Hi all, I created an issue <https://github.com/flyteorg/flyte/issues/3588|here> before realizing there was a Slack. Any ideas as to why the Python Ray example (from <https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/kubernetes/ray_example/ray_example.html|the docs>) registers its workflow just fine, but the Jupyter Notebook example doesn't find any entities? I'm probably missing something obvious so apologies if that's the case.
I noticed that VS Code thinks there is a `\n` after `@workflow`(unsurprising as Jupyter Notebooks are typically run in the browser obviously), not sure if that could be causing the problem.
| Peter Klingelhofer, you cannot register workflows present in ipynb files. You can, however, use FlyteRemote to register the tasks and workflows.
<https://docs.flyte.org/projects/flytekit/en/latest/design/control_plane.html#registering-entities>
You can include this code in a separate cell in your jupyter notebook and run it.
| Thank you for the quick response.
I think I'm having trouble figuring out what my flyte_entity should be. Let's assume my project name is `repo`, and I have the `ray_example.ipynb` file in the `workflows` folder, and I'm trying to add the workflow to the `development` domain.
I was adding a new separate cell to the bottom of the Jupyter Notebook `ray_example.ipynb` file like so:
```
from flytekit.remote import FlyteRemote
from flytekit.configuration import Config, SerializationSettings, ImageConfig

# Using image pushed to local registry at localhost:30000
img = ImageConfig.from_images(
    "localhost:30000/repo:latest", {"repo": "localhost:30000/repo:latest"}
)

# FlyteRemote object is the main entrypoint to API
remote = FlyteRemote(
    config=Config.for_sandbox(),
    default_project="repo",
    default_domain="development",
)

# Get Task
# flyte_task = remote.fetch_task(name="workflows.ray_example", version="v1")
flyte_task = remote.fetch_task(
    name="workflows.ray_example",
    version="v1",
    project="repo",
    domain="development",
)
flyte_task = remote.register_task(
    entity=flyte_task,
    serialization_settings=SerializationSettings(image_config=None),
    version="v2",
)
flyte_workflow = remote.register_workflow(
    entity=flyte_task,
    serialization_settings=SerializationSettings(image_config=None),
    version="v1",
)
flyte_launch_plan = remote.register_launch_plan(entity=flyte_task, version="v1")
```
Yet I still receive the `FlyteEntityNotExistException`. Apologies if the answer is obvious. Thank you again so much for any help/assistance you can provide!
| I'm assuming you've not registered the flyte task yet. In that case, you needn't fetch the task. Directly register it. Check out <https://docs.flyte.org/projects/cookbook/en/latest/auto/case_studies/feature_engineering/feast_integration/Feast_Flyte_Demo.html> example.
| Thank you for your response Samhita Alla. I believe in my code snippet above, that's what I've done in this section:
```
flyte_task = remote.register_task(
    entity=flyte_task,
    serialization_settings=SerializationSettings(image_config=None),
    version="v2",
)
```
Interestingly, this user suggests that registering workflows inside Jupyter notebooks is not possible: <https://github.com/flyteorg/flyte/issues/3588#issuecomment-1509599891>
If it is indeed possible, I would be happy to work on an MR to add an example Jupyter Notebook file to the Ray example (<https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/kubernetes/ray_example/ray_example.html>), just need to figure out how to get an example workflow working via a Jupyter Notebook. I'm just pushing the Docker image to the local registry at `localhost:30000`, which is what I would think would be the simplest implementation possible to run a workflow.
I do notice that it looks like there is an example with Papermill, but obviously that's not Ray: <https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/flytekit_plugins/papermilltasks/simple.html#sphx-glr-auto-integrations-flytekit-plugins-papermilltasks-simple-py|https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/flytekit_plugins/papermilltasks/simple.html#sphx-glr-auto-in[…]ltasks-simple-py>
| Papermill is for running a jupyter notebook as a Flyte task. In your case, I assume you're trying to register tasks and workflows that are present within your jupyter notebook, which is absolutely possible. What Kevin Su is saying is that you cannot register code present in your Jupyter notebook with `pyflyte run` or `pyflyte register`. You need to use FlyteRemote to register your code. Can you try registering by following the example I sent earlier?
| Thanks so much for the help, Samhita Alla. I closed my GitHub issue, as I did get the workflow to successfully register by importing the Jupyter Notebook via Papermill.
However, I'm still curious about FlyteRemote. I set up the FlyteRemote syntax, but I see that you said here that you can't use `pyflyte run` or `pyflyte register`, and I don't really see in the FlyteRemote documentation what the equivalent commands to register workflows would be. If I'm using FlyteRemote, what command needs to be run to register workflows, since we can't use `pyflyte register`? Apologies again for the confusion on my part.
| No problem! You'd have to use the `register_task` / `register_workflow` / `register_launch_plan` / `register_script` functions. FlyteRemote is a Python API. You can use it to programmatically register your code.
<https://github.com/flyteorg/flytekit/blob/e865db57d3bfbb7fb997417b052a05bc871cb0ed/flytekit/remote/remote.py>
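For example, a minimal sketch of registering a workflow defined in a notebook cell might look like this (the project, domain, and image below are placeholders for your own setup):
```
from flytekit import task, workflow
from flytekit.configuration import Config, ImageConfig, SerializationSettings
from flytekit.remote import FlyteRemote

@task
def say_hello() -> str:
    return "hello"

@workflow
def hello_wf() -> str:
    return say_hello()

remote = FlyteRemote(
    config=Config.for_sandbox(),
    default_project="flytesnacks",  # placeholder project
    default_domain="development",
)

# Registers the locally defined workflow (and the tasks it calls).
registered_wf = remote.register_workflow(
    entity=hello_wf,
    serialization_settings=SerializationSettings(
        image_config=ImageConfig.from_images("localhost:30000/repo:latest"),
    ),
    version="v1",
)
print(registered_wf.id)
```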
| Ah thank you! I see now. That makes sense. :slightly_smiling_face:
|
Kevin Su Ketan Umare Please help create an issue here <https://github.com/ray-project/kuberay> to track the integration. Some community members synced with me and they would like to watch the issue.
| sure, will do
|
Hello Kevin Su Ketan Umare, this is Keshi from Spotify. We are evaluating Ray internally and would like to integrate it with Flyte. I heard from Jiaxin Shan that you have already started working on a Flyte plugin for Ray, and I'm interested in learning more about this.
| This is fantastic
Yes, we have started but haven't gotten very far. The work is not a lot, but it has a couple of parts
| Awesome! I'm happy to help and contribute. Do you think we can collaborate on this?
At least from the Spotify side, we would like to make sure the design works for our setup. Maybe it's worth having a sync on this?
| I think so
When would be a good time
| That's awesome! Is your team on the West Coast? A few slots (all based on NYC time) that will work for me:
• Monday May 23 after 3:30PM
• Tuesday May 24 after 2PM
• Wednesday May 25 2-3:30PM or after 4PM
Let me know which time works for you guys!
| Keshi Dai Could I have your email, I’ll send you a meeting link
| I will send an invite now
| Thanks Ketan Umare Kevin Su! Look forward to it!
| Keshi Dai here are the meeting notes - <https://docs.google.com/document/d/1z6rAPlg8O-N6Bm3NxcHZgwaEDJMrr2PBBJh1uPqN5-w/edit#>
cc Haytham Abuelfutuh / Daniel Rammer