The resource quota is limiting too much?
it started to go through, but it's still very slow, ~4h for 3000 wfs
|
8.3h would be a sequential execution, so with max parallelism = 30, I would expect it to go much faster (roughly 8.3h / 30 ≈ 17 minutes if the parallelism were fully utilized)
if you check how much the quota is and how much we are requesting, 30 should fit
<https://ghe.spotify.net/datainfra/flyte-load/blob/c64ddbd3f4eed6bdf22afe4f82d4e9e5cd824372/flyte_load/dynamicLP.py#L8-L9>
Babis Kiosidis
| The resource quota is limiting too much?
|
cc Haytham Abuelfutuh
is this `10.000 - resulting in ~200.000 individual tasks` 10 -> 200 or 10k to 200k?
Bernhard Stadlbauer / Maarten de Jong / Klaus Azesberger would you be open to having a chat about the use case, so we can dive deep and help you out? We could do a long whiteboarding session
| Hi Ketan Umare!
First off, thanks for the great support that you're providing us with! As you might have guessed from our recent posts, we've successfully run our first small-scale Flyte workflows and are currently trying to scale out to our bigger use cases. This is where we've been running into some performance issues, where we are not quite clear whether it's a structural/configuration problem on our side, or whether we've hit a limitation of Flyte.
Roughly speaking, our use case would be to map a (sub-)workflow with around 15-20 interconnected tasks onto a list of inputs with variable length. So far, we've used a dynamic task to kick off the series of subworkflows, but that does not seem to scale well for a large number of inputs/subworkflows (say 10,000 - resulting in ~200,000 individual tasks).
From what I've understood so far, this is a limitation in Flyte: even though the dynamic workflow definition is saved to S3, the current state is saved in `etcd`, which will eventually reach the 1.5MB limit. Also, there is only one Flyte propeller worker designated to the workflow, which runs into performance issues when trying to assess the state of many concurrent tasks. Did I understand this correctly?
We're now trying to restructure our architecture to also be able to scale out to large workloads. The following solutions are what we've come up with; do you have a gut feeling about what would be best to try, or maybe even a different approach?
1. We could use `map_task`s to run the different steps (say a footprint detection task and a vectorization task) sequentially. However, some of our tasks have multiple inputs (e.g. a task might depend on the output of footprint detection and vectorization) and `map_task`s only support one input at the moment.
2. Write our own "scheduling" logic on top of Flyte, which in turn would trigger workflow executions. This would take advantage of multiple propeller workers as well as smaller workflow definitions/workflow states. However, we're afraid of running into limitations on the Flyte admin side, as we would constantly need to query the state of our workflows.
|
that's 10k inputs (map tiles), where each input (tile) needs to be processed by running a workflow of roughly 20 tasks (I think that's still an estimate on the lower end)
preferably we'd like to aggregate these workflows as subworkflows of bigger workflows, but from today's PoV we probably cannot do that unless we can somehow work around the etcd limit and probably the gRPC message size limit (which I don't understand yet, tbh)
| cc Haytham Abuelfutuh
is this `10.000 - resulting in ~200.000 individual tasks` 10 -> 200 or 10k to 200k?
Bernhard Stadlbauer / Maarten de Jong / Klaus Azesberger would you be open to having a chat about the use case, so we can dive deep and help you out? We could do a long whiteboarding session
|
Sorry, I mixed up a comma and a dot, it should be 10k/200k. I also fixed the separator in the original text for future readers.
We would love to meet if that is possible, we would have time for example next week, Monday to Wednesday anytime in our evening/your morning :slightly_smiling_face:
| that's 10k inputs (map tiles), where each input (tile) needs to be processed by running a workflow of roughly 20 tasks (I think that's still an estimate on the lower end)
preferably we'd like to aggregate these workflows as subworkflows of bigger workflows, but from today's PoV we probably cannot do that unless we can somehow work around the etcd limit and probably the gRPC message size limit (which I don't understand yet, tbh)
|
Klaus Azesberger I think I have a few ideas for how to do this
So as a quick thing, can you use launchplans instead of subworkflows?
This automatically scales one workflow CRD out to many, and thus you get 2MB per launchplan
So you can create a nested structure that farms out launchplans
Also, supporting map tasks with multiple inputs is possible and already available in the backend; the question is how to represent it in flytekit. We can help with this, seems like a quick win
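For context, here is a minimal sketch of what that fan-out could look like in flytekit. All names (`process_tile`, `tile_wf`, `fan_out`) are hypothetical, and the exact semantics of invoking a launchplan inside a dynamic task may vary by flytekit version:
```
import typing

from flytekit import LaunchPlan, dynamic, task, workflow


@task
def process_tile(tile_id: str) -> str:
    # stand-in for the ~20-task body of the real sub-workflow
    return tile_id


@workflow
def tile_wf(tile_id: str) -> str:
    return process_tile(tile_id=tile_id)


# Each launchplan invocation becomes its own execution (its own CRD),
# so the parent workflow's etcd-stored state stays small.
tile_lp = LaunchPlan.get_or_create(workflow=tile_wf, name="tile_lp")


@dynamic
def fan_out(tile_ids: typing.List[str]):
    for tile_id in tile_ids:
        tile_lp(tile_id=tile_id)
```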
| Sorry, I mixed up a comma and a dot, it should be 10k/200k. I also fixed the separator in the original text for future readers.
We would love to meet if that is possible, we would have time for example next week, Monday to Wednesday anytime in our evening/your morning :slightly_smiling_face:
|
Ketan Umare I think we’ll want to do something similar with launchplans. Are there any examples you could share? Thanks
Also would love map_tasks with multiple inputs
| Klaus Azesberger I think I have a few ideas for how to do this
So as a quick thing, can you use launchplans instead of subworkflows?
This automatically scales one workflow CRD out to many, and thus you get 2MB per launchplan
So you can create a nested structure that farms out launchplans
Also, supporting map tasks with multiple inputs is possible and already available in the backend; the question is how to represent it in flytekit. We can help with this, seems like a quick win
|
Cc Yee, do we know what blocks multi-input map tasks? Need for tuples?
| Ketan Umare I think we’ll want to do something similar with launchplans. Are there any examples you could share? Thanks
Also would love map_tasks with multiple inputs
|
We were thinking we will stick with the pythonic way of handling these and create a partial construct…
so you can say something like:
```map_task(partial_task(my_task, input1=static_input1, input2=static_input2), ...)```
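For illustration, here is roughly how that could read using `functools.partial` in place of the proposed `partial_task` helper; this is a sketch of the idea under discussion, not a confirmed API at the time of this thread:
```
import functools
import typing

from flytekit import map_task, task, workflow


@task
def my_task(input1: str, input2: str, item: int) -> int:
    return item + len(input1) + len(input2)


@workflow
def wf(items: typing.List[int]) -> typing.List[int]:
    # input1/input2 are fixed for every element; only `item` is mapped over
    fixed = functools.partial(my_task, input1="static1", input2="static2")
    return map_task(fixed)(item=items)
```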
| Cc Yee, do we know what blocks multi-input map tasks? Need for tuples?
|
Haytham Abuelfutuh but this does not cover an array of tuples
Bernhard Stadlbauer the problem with performance is usually not the propeller worker, but the throttling that happens because of downstream systems like K8s.
| We were thinking we will stick with the pythonic way of handling these and create a partial construct…
so you can say something like:
```map_task(partial_task(my_task, input1=static_input1, input2=static_input2), ...)```
|
we support data classes, right? so you can choose to take a dataclass as a single input, and build an array of them as the input to the map_task… but what I’ve seen being asked is how to fill in some common fields (config,… etc.) and only “map” on one of the inputs… for that the partial syntax looks more ergonomic IMHO.
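As a sketch of the dataclass approach (the `TileInput` type is made up, and this assumes dataclass support for the field types involved; flytekit versions of this era also required the `dataclass_json` decorator):
```
import typing
from dataclasses import dataclass

from dataclasses_json import dataclass_json
from flytekit import map_task, task, workflow


@dataclass_json
@dataclass
class TileInput:
    tile_id: str
    threshold: float  # a "common" config field duplicated into each element


@task
def process(inp: TileInput) -> str:
    return f"{inp.tile_id}@{inp.threshold}"


@workflow
def wf(inputs: typing.List[TileInput]) -> typing.List[str]:
    return map_task(process)(inp=inputs)
```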
| Haytham Abuelfutuh but this does not cover an array of tuples
Bernhard Stadlbauer the problem with performance is usually not the propeller worker, but the throttling that happens because of downstream systems like K8s.
|
yes i agree
I think the missing part is that dataclasses do not support our other Flyte types today - like FlyteFile, FlyteDirectory, FlyteSchema & Enum. But I think we can start supporting them. cc Kevin Su, what do you think?
| we support data classes, right? so you can choose to take a dataclass as a single input, and build an array of them as the input to the map_task… but what I’ve seen being asked is how to fill in some common fields (config,… etc.) and only “map” on one of the inputs… for that the partial syntax looks more ergonomic IMHO.
|
I think we should add it; many people want to use complex data types in dataclasses. Let me do it.
| yes i agree
I think the missing part is that dataclasses do not support our other Flyte types today - like FlyteFile, FlyteDirectory, FlyteSchema & Enum. But I think we can start supporting them. cc Kevin Su, what do you think?
|
Ya
Let’s create an issue
Are you around now?
| I think we should add it; many people want to use complex data types in dataclasses. Let me do it.
|
yes
| Ya
Let’s create an issue
Are you around now?
|
Ketan Umare Complex datatype support is something we’ve come across as well. I hacked together a generic “MapTransformer” to support arbitrary data classes that contain FlyteFile fields
The usage is as follows:
```
@dataclass
class Foobar:
    id: str
    myfile: FlyteFile
    x: int


# Create a new transformer class for the
# specific data type you want to transform
class FoobarTransformer(MapTransformer[Foobar]):
    def __init__(self):
        super().__init__(name="foobar-transform", t=Foobar)


# Register transformer with Flyte type engine
TypeEngine.register(FoobarTransformer())
```
| yes
|
Nice, Nicholas LoFaso. We decided to prioritize this dataclass work and will work on it soon. Kevin Su is aware. Can you chime in on the issue - <https://github.com/flyteorg/flyte/issues/1521>
| Ketan Umare Complex datatype support is something we’ve come across as well. I hacked together a generic “MapTransformer” to support arbitrary data classes that contain FlyteFile fields
The usage is as follows:
```
@dataclass
class Foobar:
    id: str
    myfile: FlyteFile
    x: int


# Create a new transformer class for the
# specific data type you want to transform
class FoobarTransformer(MapTransformer[Foobar]):
    def __init__(self):
        super().__init__(name="foobar-transform", t=Foobar)


# Register transformer with Flyte type engine
TypeEngine.register(FoobarTransformer())
```
|
I don't think that is a bug; it will happen at times and is expected
So what do you mean by stuck? The other day it was stuck because the cluster was out of resources, and there seems to be a bug or issue in the way resource quotas are administered
| Hi! Question. How did we decide to go about `"Will not fast follow, Reason: Wf terminated? false, Version matched? true",`
we see it again on a WF which seems stuck
(just a heavy execution task ….not a load test)
|
The expected execution time of the task is ~1h, but it's at 3h now
| I don't think that is a bug; it will happen at times and is expected
So what do you mean by stuck? The other day it was stuck because the cluster was out of resources, and there seems to be a bug or issue in the way resource quotas are administered
|
Is it running?
| The expected execution time of the task is ~1h, but it's at 3h now
|
it’s in a running state
| Is it running?
|
I mean the pod
| it’s in a running state
|
So what I see is… (sorry for a long discussion again)
while the pod is not there …
kubectl -n ubi-pipelines-production get pods | grep 6s
returns nothing …..
checking the setup….
I see all the other pods but not that one …
probably the setup…
It was me misunderstanding the setup. Please forget my question. Sorry.
| I mean the pod
|
Ketan Umare This doc is really good! Thank you for taking the time to put this together, it really helps.
My only comment is that the round latency is more critical than it seems: its definition is in the "Signs of slowdown" section, while the concept is explained in "Timeline of a workflow execution", where it is not mentioned explicitly
| Anastasia Khlebnikova / Julien Bisconti / Bernhard Stadlbauer / Jeev B please go through <https://docs.flyte.org/en/latest/deployment/cluster_config/performance.html#deployment-cluster-config-performance|this> doc and the <https://docs.flyte.org/en/latest/concepts/execution_timeline.html#divedeep-execution-timeline|accompanying> doc. Let me know if it makes sense / helps
Pradithya Aria Pura you should also optimize the configuration when and if you see any problems
|
5 cents from my side: it would be cool to have default values there, plus the reasoning behind increasing or decreasing each value based on some factors.
Example:
```admin-launcher.tps, admin-launcher.cacheSize, admin-launcher.workers```
if we want to fine-tune them, what should the reasoning be?
| Ketan Umare This doc is really good! Thank you for taking the time to put this together, it really helps.
My only comment is that the round latency is more critical than it seems: its definition is in the "Signs of slowdown" section, while the concept is explained in "Timeline of a workflow execution", where it is not mentioned explicitly
|
I would prefer 30s, but I think we use 1/5
| Hey Team, we are currently converting the dashboard from PromQL to (Spotify specific TSDB) and I was wondering which interval you use to scrape the metrics in Prometheus?
10s
30s
1m
5m
10m
other
emoji voting :point_up: :slightly_smiling_face:
|
<@U026CP5D1MF> What error are you seeing?
| <https://anyscale-dev.dev/login>
Hi i'm unable to reach this url for training , can someone please help ?
|
was listening to the presentation...
This site can't be reached
`ERR_CONNECTION_REFUSED`
weird, this seems to be on my personal laptop only :disappointed:
| <@U026CP5D1MF> What error are you seeing?
|
Does it work from a different computer?
| was listening to the presentation...
This site can't be reached
`ERR_CONNECTION_REFUSED`
weird, this seems to be on my personal laptop only :disappointed:
|
yh, on the same wifi :shrug:
| Does it work from a different computer?
|
Hmm, very weird, can you use the other computer?
| yh, on the same wifi :shrug:
|
let me check
| I did not receive any email(s) on the training material
|
likewise
| let me check
|
<@U0269UVGBFE> can you share your email address
| likewise
|
abbette at <http://amazon.com|amazon.com>
| <@U0269UVGBFE> can you share your email address
|
OK I found the email
Sorry, it was buried.
| abbette at <http://amazon.com|amazon.com>
|
got it
<@U0269UVGBFE> I will send you an email shortly; just generating credentials
<@U0269UVGBFE> you should have an email in your inbox, let me know if you don't or if the creds don't work
| OK I found the email
Sorry, it was buried.
|
<@U024C17HKQW> I haven't received that email either. Could you please send that email again?
| got it
<@U0269UVGBFE> I will send you an email shortly; just generating credentials
<@U0269UVGBFE> you should have an email in your inbox, let me know if you don't or if the creds don't work
|
I’m in
| <@U024C17HKQW> I haven't received that email either. Could you please send that email again?
|
<@U026A081BKN> what error are you getting?
make sure you use <https://anyscale-dev.dev/login> as the URL
| Somehow I can't log in to Anyscale; it says organization raysummit2021 is not working?
|
<@U026430BG1W> the error about raysummit2021 as a public organization not being permitted is now gone. I'm able to log in
| <@U026A081BKN> what error are you getting?
make sure you use <https://anyscale-dev.dev/login> as the URL
|
anything that is serializable by cloudpickle works, and you can also define your own serialization function if it is not
see also <https://docs.ray.io/en/master/serialization.html?highlight=serialization#customized-serialization>
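A sketch of the customized-serialization path from that link; the `Handle` class and its fields are made up:
```
import ray
from ray.util import register_serializer


class Handle:
    # pretend this wraps something cloudpickle can't serialize (e.g. a socket)
    def __init__(self, path: str):
        self.path = path


def serializer(h):
    return h.path  # keep only what's needed to reconstruct


def deserializer(path):
    return Handle(path)


ray.init()
register_serializer(Handle, serializer=serializer, deserializer=deserializer)

ref = ray.put(Handle("/tmp/data"))
print(ray.get(ref).path)  # "/tmp/data"
```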
| What type of data can be put in the object store? Should the data be serializable?
|
If you just pass a remote object into a regular function, it will be of type `ObjectRef`; you can call ray.get on it to get the actual object.
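A quick sketch of the difference:
```
import ray

ray.init()


@ray.remote
def produce() -> int:
    return 42


@ray.remote
def remote_consume(x: int) -> int:
    return x + 1  # Ray resolves the ObjectRef to 42 before calling


def regular_consume(x) -> int:
    # here x is still an ObjectRef, so it must be unwrapped manually
    return ray.get(x) + 1


ref = produce.remote()
print(ray.get(remote_consume.remote(ref)))  # 43
print(regular_consume(ref))                 # 43
```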
| Hi. What will happen if a regular function tries to access a remote object?
|
thank you
so wrapping a function with the remote decorator just unwraps ObjectRefs for the user - so they don't have to worry about unwrapping manually
| If you just pass a remote object into a regular function, it will be of type `ObjectRef`; you can call ray.get on it to get the actual object.
|
that's right, it automatically unwraps (and also calls the function remotely so you can run many at the same time)
| thank you
so wrapping a function with the remote decorator just unwraps ObjectRefs for the user - so they don't have to worry about unwrapping manually
|
in that case is it redundant to call obj_ref = counter.increment.remote() in the last example?
or rather, is the “remote” part of the call redundant?
| that's right, it automatically unwraps (and also calls the function remotely so you can run many at the same time)
|
the remote part is to point out to the programmer that the call will actually be remote; it is more of an API convention than technically necessary
in fact, a very early version of Ray didn't have it :smile:
| in that case is it redundant to call obj_ref = counter.increment.remote() in the last example?
or rather, is the “remote” part of the call redundant?
|
It looks like it’s not optional though
| the remote part is to point out to the programmer that the call will actually be remote; it is more of an API convention than technically necessary
in fact, a very early version of Ray didn't have it :smile:
|
but if you leave it out you do get an error
| It looks like it’s not optional though
|
it is a drop-in replacement for the standard multiprocessing pool that allows you to scale out on a cluster; the reason we have it is so you can run existing code that is already implemented with a multiprocessing pool
if you write new code, remote functions are more flexible
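A minimal sketch of the drop-in usage:
```
from ray.util.multiprocessing import Pool


def square(x):
    return x * x


# Same API as multiprocessing.Pool, but tasks can run across a Ray cluster
pool = Pool()
print(pool.map(square, range(10)))
```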
| what does the multiprocessing pool offer in addition to remote functions?
|
and faster?
| it is a drop-in replacement for the standard multiprocessing pool that allows you to scale out on a cluster; the reason we have it is so you can run existing code that is already implemented with a multiprocessing pool
if you write new code, remote functions are more flexible
|
and simpler yeah :slightly_smiling_face:
| and faster?
|
i don’t know of any specific resources for those libraries, but in general, Ray does its best to not interfere with other libraries, so you should be able to invoke numba/jax within your remote function/task.
| does anyone know what's the best way or resources for using numba/jax JIT LLVM-compiled functions inside ray actors / remote functions?
|
parallel iterators support a `union` operation to combine iterators, but there's no `join` API in the SQL sense.
| Is there a way to do a two-dataset join with Parallel Iterators?
|
thank you. Maybe there are some other higher-level APIs (maybe third party) available to do join/group by over datasets?
| parallel iterators support a `union` operation to combine iterators, but there's no `join` API in the SQL sense.
|
yes! if you represent your dataset with pandas, pyspark, or dask, you can use modin, RayDP (Spark on Ray), or dask-on-ray which all have join/group by operations
| thank you. Maybe there are some other higher-level APIs (maybe third party) available to do join/group by over datasets?
|
batching: take a bunch of small data and combine them into a bigger chunk that takes less overhead to process
shard: take some data (might be batched) and assign it to a different machine/processor to execute in parallel
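A toy sketch of both ideas with Ray tasks:
```
import ray

ray.init()


@ray.remote
def process_batch(batch):
    return sum(batch)


items = list(range(10_000))

# batching: combine small items into bigger chunks to amortize per-task overhead
batches = [items[i:i + 1000] for i in range(0, len(items), 1000)]

# sharding: each chunk becomes a task that Ray can place on any worker/machine
print(sum(ray.get([process_batch.remote(b) for b in batches])))
```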
| what's the difference between shard and batch?
|
I’m not super familiar with PyCaret but here is a blog by
<@U025ETR0UEN>
<https://medium.com/distributed-computing-with-ray/bayesian-hyperparameter-optimization-with-tune-sklearn-in-pycaret-a33b1592662f>
| Are there any examples using the PyCaret library with Ray?
|
Hey <@U026114JRDK> Ray is not fully integrated with PyCaret but PyCaret provides support for distributed HPO with Ray Tune, as outlined in the blog post Ian shared. If you have any questions in regard to that please let me know, happy to help
| I’m not super familiar with PyCaret but here is a blog by
<@U025ETR0UEN>
<https://medium.com/distributed-computing-with-ray/bayesian-hyperparameter-optimization-with-tune-sklearn-in-pycaret-a33b1592662f>
|
It’s possible. You can call `ray start --head` on your first laptop and call `ray start --address=<first machine>` on the second, or use the ray autoscaler for private clusters
<https://docs.ray.io/en/releases-0.8.5/autoscaling.html#private-cluster>
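Once both `ray start` commands have run, a quick sketch of verifying the combined cluster from Python:
```
import ray

# address="auto" connects to the node already started by `ray start`
ray.init(address="auto")
print(ray.cluster_resources())  # should show CPU cores from both laptops
```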
| Can I create a Ray Cluster of two laptops to use more cores locally?
|
Just make sure that they are on the same network!
| It’s possible. You can call `ray start --head` on your first laptop and call `ray start --address=<first machine>` on the second, or use the ray autoscaler for private clusters
<https://docs.ray.io/en/releases-0.8.5/autoscaling.html#private-cluster>
|
How many laptops can I add to the cluster?
| Just make sure that they are on the same network!
|
in theory, a lot, but we would recommend some beefier machines if possible. check out the scalability envelope for more details <https://github.com/ray-project/ray/blob/master/benchmarks/README.md>
| How many laptops can I add to the cluster?
|
By default, yes, but Ray has an advanced object spilling feature that can write objects to disk if it runs out of memory.
| Does all the data have to fit in the memory of the cluster?
|
Nice. Is there any way to read in a distributed manner? In the notebook, all reading of the data happens on one node.
If data is on a shared store like S3 or HDFS, does Ray have out-of-the-box tools to read it into the object store?
| By default, yes, but Ray has an advanced object spilling feature that can write objects to disk if it runs out of memory.
|
Right now, Ray doesn't have tools for that, but some of the higher-level libraries do (for example, dask-on-ray and modin can both read from S3 out of the box). You can also bring your own tools
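For instance, a sketch with Modin (assuming Modin's Ray engine is active; the bucket path is hypothetical):
```
import ray
import modin.pandas as pd  # Modin parallelizes pandas operations on Ray

ray.init()
df = pd.read_csv("s3://my-bucket/big.csv")  # the read is distributed, not single-node
print(df.head())
```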
| Nice. Is there any way to read in a distributed manner? In the notebook, all reading of the data happens on one node.
If data is on a shared store like S3 or HDFS, does Ray have out-of-the-box tools to read it into the object store?
|
AFAIK we won't be covering it in this tutorial, there are two solutions available: <https://docs.ray.io/en/master/dask-on-ray.html> and <https://docs.ray.io/en/master/modin/index.html> (see also <https://github.com/modin-project/modin>)
| are we going to look at how to distribute a dataframe?
|
<http://ray-distributed.slack.com|ray-distributed.slack.com>
| Is there a general Ray Slack channel for ongoing support and questions?
|
restricted to berkeley and anyscale
| <http://ray-distributed.slack.com|ray-distributed.slack.com>
|
also there is <https://discuss.ray.io/> which has better search/indexing and is recommended for longer questions :slightly_smiling_face:
for the slack you need to fill out <https://forms.gle/9TSdDYUgxYs8SA9e8> to be invited
| restricted to berkeley and anyscale
|
If you run out of memory, the Ray task will raise an error, which will show up on the driver and is also printed to the log files. You can automatically deal with memory leaks that you can't/don't want to fix by setting `max_calls=1` on the task to restart workers after each call.
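A sketch of that setting:
```
import ray

ray.init()


# max_calls=1 restarts the worker process after every call, so any memory
# the task leaks is reclaimed when the process exits
@ray.remote(max_calls=1)
def leaky(n: int) -> int:
    buf = [0] * n  # pretend this leaks
    return len(buf)


print(ray.get(leaky.remote(1_000_000)))
```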
| If there is a memory leak, what would the error look like?
|
Great. What is the license/permission for them?
| Yes <@U025X6U2XL6> - the videos will be on demand after the end of the day
|
I think it can be very useful for prototyping and proof of concept, based on my experience to date with Tune and RLlib. These incorporate some of the multiprocessing and distribution features of Ray core.
| Is Ray useful to run on a single workstation with a single GPU and multicore CPU?
|
Excellent presentation, and thanks. I am using Ray Tune with RLlib. It appears I have access to some parallelization features through these two packages. What can't I do WRT cluster/parallel processing?
| Thanks for the great questions during the morning tutorial about Ray core
|
Many thanks David, and great question here –
For both RLlib and Tune, those are already going to be leveraging features for parallelization. There may be some configuration required to make the best use of cluster resources, e.g., are there GPUs available, what kind, etc.
| Excellent presentation, and thanks. I am using Ray Tune with RLlib. It appears I have access to some parallelization features through these two packages. What can't I do WRT cluster/parallel processing?
|
Do you still have the problem Simon?
| The cluster is starting up. The terminal will be available after the cluster is active.
|
Hi ...
yes i do. My cluster seems to be in a suspended state
<https://anyscale-dev.dev/o/raysummit2021/projects/prj_9RTMrFJQUhNfqz6TJiyyuTKA/app-config-details/bld_Q8rhXSuGQMdxR8ntU46FNH83>
| Do you still have the problem Simon?
|
What is your username?
| Hi ...
yes i do. My cluster seems to be in a suspended state
<https://anyscale-dev.dev/o/raysummit2021/projects/prj_9RTMrFJQUhNfqz6TJiyyuTKA/app-config-details/bld_Q8rhXSuGQMdxR8ntU46FNH83>
|
<mailto:[email protected]|[email protected]>
| What is your username?
|
It looks to be active now? <https://anyscale-dev.dev/o/raysummit2021/projects/prj_9RTMrFJQUhNfqz6TJiyyuTKA/clusters/ses_zhdwMD493NShc3T7J5GpGeaa>
| <mailto:[email protected]|[email protected]>
|
it has been reporting for the last 10 minutes that the cluster is starting up
or even longer
| It looks to be active now? <https://anyscale-dev.dev/o/raysummit2021/projects/prj_9RTMrFJQUhNfqz6TJiyyuTKA/clusters/ses_zhdwMD493NShc3T7J5GpGeaa>
|
Got it! Yes, I’m seeing that too.
It is working now.
| it has been reporting for the last 10 minutes that the cluster is starting up
or even longer
|
thanks for your help
| Got it! Yes, I’m seeing that too.
It is working now.
|
cc <@U01VD58RVLK> <@U024C17HKQW>
| I'm following the tutorial 24 hours late, now. The Jupyter, Dashboard, and TensorBoard links don't work - I guess something was turned off on the platform. Any chance someone can turn it on please?
|
<@U026E8WB1BN> - the materials were only available during the live tutorial and are no longer available. We will be hosting more tutorials in the future, so you’ll have another chance to participate. You can find the info on those events (once it’s available) on Anyscale’s events page: <https://www.anyscale.com/events>
We’re sorry for any inconvenience this has caused
| cc <@U01VD58RVLK> <@U024C17HKQW>
|
I believe you should be able to access them here: <https://github.com/DerwenAI/ray_tutorial>
| Hi <@U01VD58RVLK> can I get access to the slides of the core tutorial? Paco mentioned they would be available.
|
hello <@U01VD58RVLK> thank you so much! The slides have links disabled, but that's okay. By the way, I'm looking for a GitHub link for the second tutorial as well, involving RLlib and Ray Tune
| I believe you should be able to access them here: <https://github.com/DerwenAI/ray_tutorial>
|
Hi <@U026AF8Q9BN>, which links are disabled in the <https://github.com/DerwenAI/ray_tutorial/blob/main/slides.pdf> slide deck? There should not be any, so I'd really like to fix those! :)
| hello <@U01VD58RVLK> thank you so much! The slides have links disabled, but that's okay. By the way, I'm looking for a GitHub link for the second tutorial as well, involving RLlib and Ray Tune
|
very kind of you to respond <@U025PMJJ9GX> I realized that the links work fine after downloading. Viewing the PDF directly from GitHub seems to disable links. There's a wealth of knowledge in these slides and links, so I didn't want to miss out on anything :slightly_smiling_face:
| Hi <@U026AF8Q9BN>, which links are disabled in the <https://github.com/DerwenAI/ray_tutorial/blob/main/slides.pdf> slide deck? There should not be any, so I'd really like to fix those! :)
|
Good to hear :slightly_smiling_face: Yes, the GitHub rendering breaks PDFs in some ways. At Derwen, we did a separate kind of SWA preso viewer in Flask/CloudFlare, and I could barely believe how very strange parsing PDFs can get...
| very kind of you to respond <@U025PMJJ9GX> I realized that the links work fine after downloading. Viewing the PDF directly from GitHub seems to disable links. There's a wealth of knowledge in these slides and links, so I didn't want to miss out on anything :slightly_smiling_face:
|
and if so, how is that managed on K8s
new pod for every “task”
| > so does Ray also have a work-stealing type of task execution system?
|
no, i think the standard usage for ray is to start ray "nodes" within a pod, and ray will execute tasks within the pod
| and if so, how is that managed on K8s
new pod for every “task”
|
cool
and no need of gang scheduling?
and the number of nodes is pre-determined or dynamic?
| no, i think the standard usage for ray is to start ray "nodes" within a pod, and ray will execute tasks within the pod
|
nodes can be dynamic; no need to gang-schedule
<https://docs.ray.io/en/master/cluster/kubernetes.html>
| cool
and no need of gang scheduling?
and the number of nodes is pre-determined or dynamic?
|
and the code is pickled i assume?
| nodes can be dynamic; no need to gang-schedule
<https://docs.ray.io/en/master/cluster/kubernetes.html>
|
any problems with K8s?
| :wave: , I'm Bill, PM @ Anyscale. Happy to see progress on this integration - is kubernetes a must-have for you all?
|
There shouldn't be. As I mentioned, we've got users running it in production. I think we just need to know more about the behavior you would want out of the system to be able to answer that question better.
| any problems with K8s?
|
No matter what namespace the Ray operator is in, the RayCluster can be created in any namespace. If you run a Ray task in `flytesnack-development`, then the RayCluster will be launched in `flytesnack-development`
| Kevin Su Do we need to deploy the KuberayOperator and RayCluster in a separate namespace, or do they need to be deployed in the same namespace where the Flyte components are running?
<https://blog.flyte.org/ray-and-flyte>
Here they link to a GitHub page where the namespace is given as *ray-system* in the README file. Does it matter?
|
This should be possible - it seems like something is wrong in pickling
Can you file this as a bug?
Also not sure why this is failing - the error is non-descriptive
If you drop the distributed part (drop the ray config), does it work?
Or does it work locally?
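For comparison, a minimal standalone Ray Tune setup, independent of Flyte, usually looks something like this sketch; note the objective is a plain function returning a metrics dict, with no `@ray.remote` decorator:
```
from ray import tune


def objective(config):
    return {"score": config["x"] ** 2}


tuner = tune.Tuner(
    objective,
    tune_config=tune.TuneConfig(num_samples=10),
    param_space={"x": tune.randint(-10, 10)},
)
results = tuner.fit()
print(results.get_best_result(metric="score", mode="min").config)
```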
| hi, can we run Ray Tune experiments in Flyte? Because I am getting an error while executing.
```
import typing

import ray
from ray import tune
from flytekit import Resources, task, workflow
from flytekitplugins.ray import HeadNodeConfig, RayJobConfig, WorkerNodeConfig


@ray.remote
def objective(config):
    return config["x"] * config["x"]


ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=2)],
    runtime_env={"pip": ["numpy", "pandas"]},
)


@task(task_config=ray_config, limits=Resources(mem="2000Mi", cpu="1"))
def ray_task(n: int) -> int:
    model_params = {"x": tune.randint(-10, 10)}
    tuner = tune.Tuner(
        objective,
        tune_config=tune.TuneConfig(
            num_samples=10,
            max_concurrent_trials=n,
        ),
        param_space=model_params,
    )
    results = tuner.fit()
    return results


@workflow
def ray_workflow(n: int) -> int:
    return ray_task(n=n)
```
are there any other ways to run hyperparameter tuning in a distributed manner, like Ray Tune?
|
The above error is what I got when I ran it locally using '*python example.py*'. When I executed it using the pyflyte run command, I got this error.
After removing the distributed config, I am also getting the same error.
| This should be possible - it seems like something is wrong in pickling
Can you file this as a bug?
Also not sure why this is failing - the error is non-descriptive
If you drop the distributed part (drop the ray config), does it work?
Or does it work locally?
|