Thanks Matt Smith I’ll try your suggestions tomorrow and report back.
| also i can help with any of this
Sören Brunk I was pretty busy this week (I am oncall :D)
| Matt Smith I’ve changed the docker entrypoint to flytekit_spark_entrypoint.sh as you suggested. I already had 2. and 3. in place using your flytekit_venv script because I’ve derived it from the Python flytesnacks example.
I’m still getting the same error though.
Ok this seems indeed to be an issue with single quotes around `pyflyte-execute ...` in `flytekit_spark_entrypoint.sh`
When I use `$PYSPARK_APP_ARGS` directly as an argument to spark-submit instead of `$PYSPARK_ARGS` it works. I haven’t figured out why exactly, because you know, bash… but it should be fixable.
Now the driver is running but the Spark executors are failing. I can’t really figure out what’s going on because the driver or the Spark operator immediately removes the failed executors. Does anyone have an idea how to keep them around for debugging?
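The quoting issue above comes down to shell word-splitting. A rough Python illustration (the command string and flags here are hypothetical, not the actual entrypoint contents): wrapping an expansion in quotes passes the whole command as a single argv element, while unquoted expansion word-splits it into the separate arguments spark-submit needs to see.

```
import shlex

# Hypothetical command string, similar to what the entrypoint builds
cmd = "pyflyte-execute --task-module my_wf --task-name my_task"

# Quoted expansion (like "$PYSPARK_ARGS"): one argv element containing
# spaces, so the receiving program sees a single unrecognized argument
quoted_argv = [cmd]

# Unquoted expansion (like $PYSPARK_APP_ARGS): the shell word-splits the
# string, so the executable and each flag arrive as separate argv elements
split_argv = shlex.split(cmd)
```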
| Ohh sorry about this Anmol Khurana or I can help in a bit
We should have an example for this, terribly sorry
ok
let me create an example and share with you
is that ok?
so the way this works is that “both spark executor and driver use the same entrypoint”
the entrypoint script we also want other flyte tasks to use
so we provide an option to switch. I think the open source one might not be the correct one, as internally we have a base image that users use.
| Yes, if you could create an example that would be awesome!
I think it would also be useful for flytesnacks.
| Yup
I am doing that now. You can see it tomorrow
Or Monday
| Looking into this as well. Will update and make sure we have examples as well. Sorry about this.
Sören Brunk are you setting `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON`? If not, can you try adding those to the Dockerfile and see if it helps? Something like:
```
ARG PYTHON_EXEC=.../venv/bin/python3
ENV PYSPARK_PYTHON ${PYTHON_EXEC}
ENV PYSPARK_DRIVER_PYTHON ${PYTHON_EXEC}
```
Meanwhile I am working on adding an example in flytesnacks as well to capture all of this.
| Anmol Khurana just tried your suggestion but unfortunately the executors are still failing. I think I’ll just wait until you’ve added the example now. Thanks for all your help guys!
| No, thank you for your patience
Sören Brunk Thanks to Anmol (and previous problems) he fixed the script here - <https://github.com/lyft/flytekit/pull/132>
| Thanks Ketan for sharing this. Just to expand a bit, the error message was pretty much what was happening. The `entrypoint.sh` in flytekit had additional quotes which aren’t in the internal version we use (or in the open-source spark one) and were causing issues. In addition to <https://github.com/lyft/flytekit/pull/132>, <https://github.com/lyft/flytesnacks/pull/15/files> has the dockerfile and an example workflow which I used to build/test locally. I need to clean up these PRs a bit and add docs etc., and I plan to have these checked in by early next week.
| hey Sören Brunk - I think you’ll see this when you wake up: Anmol Khurana decided to remove the default script in “flytekit”, as Spark upstream has a script that we can use. He has this issue - <https://github.com/lyft/flyte/issues/409> and he is about to merge the changes. He also has this example which works - <https://github.com/lyft/flytesnacks/pull/15>
But he is improving it a little bit
| Released <https://github.com/lyft/flytekit/releases/tag/v0.10.3> with the fix. Updated <https://github.com/lyft/flytesnacks/pull/15> as well to refer to the new flytekit.
| Thanks Anmol Khurana and Ketan Umare I’ve just tried to run the version from that PR and success! :tada::tada:
I just had to set `FLYTE_INTERNAL_IMAGE` to use my locally built Docker image like so:
```
ENV FLYTE_INTERNAL_IMAGE $IMAGE_TAG
```
| Anmol worked hard over the weekend so all his work.
Thank you Sören Brunk again for the persistence
|
Does Flyte support interactive iterative ML research workloads on GPUs? Or is it more for well-defined scheduled workloads?
| Oliver Mannion depends on what you mean by “interactive”. So if you mean “spark”-like interactive, where you write code in one cell, get results and so on… then at the moment - NO. But if you want to iterate on a pipeline, then it does.
And I will explain how -
1. We recently added support for “single task execution”, where you can execute one task of a pipeline.
2. Flyte has always supported requesting an execution and getting results back through the API, so you will be able to retrieve the results in a Jupyter notebook.
3. We also recently added a pre-alpha version of “Raw container support” - you will see more examples on this soon. This allows you to get away from building a container - the biggest problem today in interactive execution.
One problem with Flyte and iterative development: Flyte is multi-tenant by design, which means that if a large set of production workloads is running, users’ interactive requests may get queued up - but if you have enough machines this should work.
As a last point, we are exploring additional interactive ideas that could mitigate the largest pain today - building a container. But we do not feel comfortable completely taking away containers, as they provide strong reproducibility guarantees - which I think is a cornerstone of Flyte.
Hope this answers. I would love to discuss more
Oliver Mannion ^ any more questions?
also WIP - <https://github.com/lyft/flytesnacks/>
| Using a Jupyter notebook to describe and trigger a task and then inspect the results might be what I’m thinking of here. Do you have any examples of that?
| I am writing one, will share.
hey Oliver Mannion / Joseph Winston here is the updated cookbook - <https://github.com/lyft/flytesnacks/tree/master/cookbook>
It is not yet complete but many parts are
the simple ones 1/3/4/5/9 are still WIP
Oliver Mannion I have a lot more examples
check them out and let me know
|
<!here> Reminder everyone: Our Bi-weekly Zoom sync is tomorrow, Tues, July 14th at 9am Pacific Daylight Time:
<https://us04web.zoom.us/j/71298741279?pwd=TDR1RUppQmxGaDRFdzBOa2lHN1dsZz09>
Chang-hong will demonstrate prototype Flyte integration with Amazon Sagemaker. Walk-on topics welcome.
| moving this to google meet.
<https://meet.google.com/gjz-osvf-nzv>
|
Hello Everyone!
My question is about the possibility of extending `sidecar_task` to specify a `tolerations` section in `pod_spec`.
I noticed that the `tolerations` field in `pod_spec` doesn’t make it into the Pod manifest in Kubernetes.
<https://lyft.github.io/flyte/user/tasktypes/sidecar.html?highlight=pod_spec#working-example>
So my workaround is to use the documented approach with `resource-tolerations` in the k8s propeller plugin plus extended k8s resources: <https://github.com/lyft/flyteplugins/blob/master/go/tasks/testdata/config.yaml#L15> and <https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/>
Q: I’m interested whether it makes sense to stop dropping the `tolerations` field in pod_spec? Or is it important for other stuff?
P.S. My case: a very specific Python task which requires up to a terabyte of disk and should run on a dedicated EC2 node group.
Thank you in advance!
| <https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/core/types.go#L2785>
is this not it?
| Hi Yee!
Yes, we tried to put this object into pod_spec
| oooh you mean it doesn’t make its way into the actual Pod
sorry
Katrina Rogan if she wants to take a look
if not, i can do it later
| we shouldn't be deliberately dropping anything from the podspec
<https://github.com/lyft/flyteplugins/blob/master/go/tasks/plugins/k8s/sidecar/sidecar.go#L64>
ah we do
we should be appending to the user pod spec definition
Ruslan Stanevich do you mind filing an issue to track?
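The fix presumably amounts to merging rather than replacing list-valued fields of the pod spec. A minimal sketch of that idea using plain dicts (an illustration only, not the actual flyteplugins code, which operates on k8s PodSpec structs):

```
def merge_pod_specs(base, user):
    """Merge a user-supplied pod spec into a base spec.

    List fields such as 'tolerations' and 'volumes' are appended to
    rather than overwritten, so user-provided entries survive the merge.
    """
    merged = dict(base)
    for key, value in user.items():
        if isinstance(value, list):
            merged[key] = list(base.get(key, [])) + value
        else:
            merged[key] = value
    return merged


base = {"containers": [{"name": "main"}], "tolerations": [{"key": "platform"}]}
user = {"tolerations": [{"key": "dedicated", "effect": "NoSchedule"}]}
merged = merge_pod_specs(base, user)
# merged["tolerations"] now contains both the platform and the user entry
```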
| Thank you Katrina Rogan for checking! :pray:
Should I report it as feature request or issue?
| issue/bug is fine :D
| thank you for your help :pray:
issue has been created <https://github.com/lyft/flyte/issues/417>
have a nice day!
| Katrina Rogan I think the same is happening for volumes
Ruslan Stanevich you should also look at platform specific tolerations, configuring them in the backend
| Hi, Ketan, do you mean specific tolerations for requested resources? or it is like common tolerations for all workflows?
I’m not sure I understood this correctly :slightly_smiling_face:
| Ketan Umare why do you say the same is happening for volume? the original use case for sidecar tasks was to support shared volume mounts and i don't see us overriding it
| Katrina Rogan some users in Minsk tried it and failed. It seems it was getting overwritten
| hm maybe something got refactored - i'll take a look in the same pr
Ketan Umare i don't see volume being overwritten in plugins code, do you have more details about the failure Minsk folks saw?
| Katrina Rogan I dont, I just got that as a comment, we can ping them and get more details. They said it was `/dev/shm`
| and they were also creating a corresponding volume mount?
| ya
|
Derek Schaller, can you help Austin with the CLA issue
it seems Bruce used his GitHub account to sign the CLA
| What needs to be done here? I have my git.name and git.email matching what I (believe) is on github. And github credentials were used to sign CLA.
How do I sign a new CLA — is there a link to general form? And what address/info needs to be on it?
Every time I click the CLA link in the PR, it pulls up the already-signed CLA, which says it is signed by my GitHub user.
It seems I’m missing something obvious here? Or did we find a really broken edge case?
| Ok let’s look at your PR
Something is wrong
So
I don’t have admin on the CLA
I can probably reset
| “brucearctor” — seems to be both my git user.name, and github name. “An author in one of the commits has no associated github name” — so not sure how that gets linked?
it’s still not clear what the email is in the current CLA? Or how I’d sign a new one. If those would solve the issue
|
Hello Everyone.
Could you advise me on the best way to deploy Flyte to EKS?
| We have 2 groups within Lyft running Flyte on EKS. That said, our instructions on EKS are not complete. We will have to guide you a bit manually on this and then add it to the docs.
Here is a starting point -
<https://github.com/lyft/flyte/blob/master/eks/README.md>
cc Yee
Ketan Umare Can you tag any L5/Minsk folks who are active here.
| and welcome!
| <https://github.com/lyft/flyte/issues/299>
| Welcome Yiannis, both Yee and I can work with you to run it on EKS
we have a sample that is close to complete
<https://github.com/lyft/flyte/tree/master/eks>
Ruslan Stanevich / <@UP23UL29J> should also be able to help, they run Flyte on EKS
| thanks all. I just found Kustomize and TF in the repository. I suppose the TF would be the recommended way.
| Yiannis, the TF is only to set up the EKS cluster, S3 bucket, Postgres Aurora DB, etc. (the infra)
the kustomize is what sets up Flyte
so to run Flyte you need
1. EKS cluster
2. Postgres DB (you can run it in the cluster, but that’s not recommended)
3. S3 bucket
And then to access the UI/Admin API you need
4. ELB
hope this helps
| Thank you!
| the Terraform, will help creating 1/2/3/4
| I already have a EKS
| ohh that is awesome
do you have a postgres DB?
you can just create one in console if you have the perms
| not yet, trying to get it now
| ok and one s3 bucket
| no permissions yet
| this is where Flyte will store metadata and intermediate data
Awesome
| ok cool. So im close
thank you very much ! :smile:
| once you have that we can help you with the kustomize, should not be much, but need a couple changes
ya, this is actually great, it’s been a while since we helped someone set up from scratch on EKS, helps us improve the docs too
Yiannis let me know if you need help
|
Ketan’s meeting notes and the video from today’s community sync are posted here: <https://docs.google.com/document/d/1Jb6eOPOzvTaHjtPEVy7OR2O5qK1MhEs3vv56DX2dacM/edit#heading=h.xmwjdzl7mbxh>
| Had to skip even though I’d planned to join this one. Thanks for providing notes and videos! That’s super helpful! :slightly_smiling_face:
|
Hello Flyte Friends!
I have question for the experts once again. Does the Flyte design allow/work well with really long running tasks? I.e. is it possible/does it make sense to deploy something like a Spark Streaming job that basically runs continuously until it fails or someone stops it?
| Sören Brunk, great question. Today the tasks are designed to complete
Eventually we want to support streaming tasks. As I think streaming as a service is pretty cool
But we do have flink k8s operator
Or spark operator
You can just deploy using them
Or we can add a task that launches and releases
Sören Brunk let me know, is this something that you are needing on day 1?
Also, on the Flyte homepage on GitHub you will see the other 2 operators
| It’s not really a hard requirement for us right now. But we have use cases where we continuously receive smaller batches of machine data, so the overhead of spinning up a new spark job every time is quite high. A Spark streaming job would be much more suitable here. Of course we could just deploy that job using the Spark operator directly, but it would be much nicer to describe it as a Flyte Spark task.
It would basically be a simple single task workflow.
| Sören Brunk I hear you
And we would love to work with you on this
Yup so today tasks need to
Complete
No hard requirement really - you can actually run a streaming job and set the platform timeout to infinity :joy:
So it should run today
But I want to qualify these as special task types in the future,
Just for correctness
| Yeah even a streaming job that terminates say after 24h and is rescheduled then as a new task execution would be totally fine.
for now
| Ya this should all work
There is no need for a task to terminate
It’s a logical requirement- u see what I mean
| yes I get it
| Awesome, would love to help with trying it out
Just want to see if there are differences in streaming
| Great! I’ll get back to you when I have the chance to give it a try
| yup
in the spark operator CRD I mean - if there are, adding it should be simple
| IMHO there’s not really much of a difference from the Spark operator point of view. Probably <https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#using-container-lifecycle-hooks|graceful shutdown> triggered from the outside is a more common requirement for a streaming job.
| ohh we trigger that
so this should just work for now haha
ya then that would be one of the future items to support forever running functions - streaming functions
and we will be adding support for the flink operator as well
|
Flyte supports Sagemaker builtin algorithms today - The interface is where we have really innovated IMO
Custom model support is coming soon, slated for this month
When you use Sagemaker through Flyte, you are still interacting with Flyte Control plane, so it will manage the executions, queue them up if needed and also capture/record all information
We do not really use a lot of sagemaker at Lyft *yet, but we would love to know your problems - like resource management and we can tackle them within Flyte easily
Fredrik Sannholm ^
| So like I said, our main gripe with SM is the lack of resource management, no job queues like on AWS Batch. Looks like Flyte will help us out
| yup, we have a built in resource pooling thing, now that you brought it up, i will add support for that into Sagemaker plugin :slightly_smiling_face:
Chang-Hong Hsu Haytham Abuelfutuh ^
| So based on <https://docs.aws.amazon.com/general/latest/gr/sagemaker.html|this> , you can run 20 concurrent training jobs, which is a ridiculous number for a company of any size. I haven’t discussed this with our AWS guy, I assume we could get a higher limit with :moneybag: the real problem is the lack of a queue
| yup
i agree, this is easily supportable with Flyte
| If you exceed your limit it just returns an error, and you have to try again :slightly_smiling_face:
| this is our problem with Hive, Presto, AWS Batch (has a rate limit) and Sagemaker
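Until a proper queue exists, callers typically wrap such rate-limited services in retry logic. A generic sketch with exponential backoff and jitter (the `RuntimeError` here is a stand-in for whatever throttling error the service actually raises):

```
import random
import time


def call_with_retry(fn, attempts=5, base_delay=0.5):
    """Call fn, retrying with exponential backoff + jitter on a throttle error."""
    for i in range(attempts):
        try:
            return fn()
        except RuntimeError:  # stand-in for a rate-limit/throttle error
            if i == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # back off exponentially, with jitter to avoid thundering herds
            time.sleep(base_delay * (2 ** i) * random.uniform(0.5, 1.5))
```

Jitter matters because many clients retrying on the same schedule would otherwise hit the limit again in lockstep.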
| Awesomeness! :heart_eyes:
| yup, and you do see in the UI, that you are waiting in the line
thank you for the awesome suggestion
| Yes, we have a pooling mechanism to do rate limiting of some sort
It is not backed by a queue yet, so no ordering is preserved when a job is rejected due to the resource limit. But basically it does what you described here
> If you exceed your limit it just returns a error, and have to try again
I created an issue to track this <https://github.com/lyft/flyte/issues/460>
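The pooling behavior described above (admit up to a limit, reject the rest without queueing) can be sketched like this; it is an illustration of the semantics, not the actual flyteplugins implementation:

```
class ResourcePool:
    """Admission control without a queue: reject when full, caller retries.

    Because rejected requests are not queued, no ordering is preserved
    across retries - exactly the limitation described above.
    """

    def __init__(self, limit):
        self.limit = limit
        self.in_use = 0

    def acquire(self):
        if self.in_use >= self.limit:
            return False  # over the limit: caller gets an error and retries
        self.in_use += 1
        return True

    def release(self):
        self.in_use -= 1


pool = ResourcePool(limit=2)
results = [pool.acquire() for _ in range(3)]  # the third request is rejected
```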
| thank you Chang-Hong Hsu
Chang-Hong Hsu we dont need it right now
as it would automatically queue, because the CRD would naturally serve as a queuing mechanism :slightly_smiling_face:
| ah ok
|
Another question (sorry if this is in the docs)! Lyft uses Amundsen/ <http://atlas.apache.org/#/|Apache Atlas> to track lineage, right? How well would it integrate with Flyte?
| thank you for the question. I do not think we use Atlas, but yes we do use Amundsen. So Amundsen only provides a flat view in time for any metadata. Flyte on the other hand has a timeline of data generation (so 3D lineage - available through DataCatalog).
That being said, we will be working on integrating Amundsen and Flyte to index latest versions of “Workflows”, “tasks” and “datasets” into Amundsen.
The plan was to use the Eventstream from Flyte to build the current view in Amundsen
but, this is not available yet, and would probably go into early next year - based on priorities
| ok! Atlas is one of the alternatives to power the metadata service of Amundsen, as per: <https://github.com/lyft/amundsen>.
| i see. ya we will keep you posted about the amundsen + flyte work if you are interested
|
George Snelling: silly qn, how does one join the group to get future notifications?
| We should probably open it up for users to sign up with an approval
| Jeev B not a silly qn at all. We use gsuite for the domain <http://flyte.org|flyte.org>. The group was locked down too tight. I just relaxed the permissions and exposed it to public search, but it might take a while before the indexing catches up and it shows up in search. For posterity the URL is <https://groups.google.com/a/flyte.org/forum/#!forum/users>
Also, we’ve been treating slack as our primary communication channel out of habit, but we’ll try to do better about cross-posting important notices to <mailto:[email protected]|[email protected]> since google hosts those messages indefinitely for a modest fee.
|
hi all! i have a draft PR here for dynamic sidecar tasks. I was hoping to get some thoughts on how it's been implemented before I go through with writing tests/docs: <https://github.com/lyft/flytekit/pull/152>
I tested this with the following workflow:
```from flytekit.sdk.tasks import dynamic_sidecar_task, python_task, inputs, outputs
from flytekit.sdk.types import Types
from flytekit.sdk.workflow import workflow_class, Output
from k8s.io.api.core.v1 import generated_pb2


def _task_pod_spec():
    pod_spec = generated_pb2.PodSpec()
    cnt = generated_pb2.Container(name="main")
    pod_spec.volumes.extend(
        [
            generated_pb2.Volume(
                name="dummy-configmap",
                volumeSource=generated_pb2.VolumeSource(
                    configMap=generated_pb2.ConfigMapVolumeSource(
                        localObjectReference=generated_pb2.LocalObjectReference(
                            name="dummy-configmap"
                        )
                    )
                ),
            )
        ]
    )
    cnt.volumeMounts.extend(
        [
            generated_pb2.VolumeMount(
                name="dummy-configmap", mountPath="/data", readOnly=True,
            )
        ]
    )
    pod_spec.containers.extend([cnt])
    return pod_spec


@inputs(input_config=Types.String)
@outputs(output_config=Types.String)
@python_task
def passthrough(wf_params, input_config, output_config):
    output_config.set(input_config)


@outputs(config=Types.String)
@dynamic_sidecar_task(pod_spec=_task_pod_spec(), primary_container_name="main")
def get_config(wf_params, config):
    with open("/data/dummy.cfg") as handle:
        task = passthrough(input_config=handle.read())
        config.set(task.outputs.output_config)


@workflow_class
class DynamicPodCustomizationWF:
    config_getter = get_config()
    config = Output(config_getter.outputs.config, sdk_type=Types.String)```
and it correctly outputs the contents of the configmap:
which only works if the pod customization is working as intended
this might be too simple, and i might be glossing over some important details. just wanted to get some expert opinion on it! :slightly_smiling_face:
| Haytham Abuelfutuh ^^
Thank you Jeev for doing this
|
Haytham Abuelfutuh ^^
Thank you Jeev for doing this
Ketan Umare: It's not complete. still needs tests/docs, but I wanted to make sure that I was on the right track, and that this is worth pursuing!
|
Ketan Umare: It's not complete. still needs tests/docs, but I wanted to make sure that I was on the right track, and that this is worth pursuing!
| will look into it in a bit
maybe tomorrow
|
will look into it in a bit
maybe tomorrow
| yea no rush at all! thanks!
|
yea no rush at all! thanks!
btw, did you get a chance to try the "blanket tolerations"?
|
btw, did you get a chance to try the "blanket tolerations"?
| nope not yet. i'm actually doing some handover stuff before i go on family leave anytime now.... not much time to work on new stuff yet.
|
nope not yet. i'm actually doing some handover stuff before i go on family leave anytime now.... not much time to work on new stuff yet.
| ohh family leave?
…
|
ohh family leave?
…
| i do have a sandbox env that i can mess around on, so will try to play around with it!
|
i do have a sandbox env that i can mess around on, so will try to play around with it!
| ok
no hurries
|
ok
no hurries
| thanks for putting this in jeev!
i’ll take a deeper look at this tomorrow.
we’re also in the middle of a bit of a refactor that I’m really hoping to get in by the end of next week.
but it shouldn’t really affect this change in particular.
|
thanks for putting this in jeev!
i’ll take a deeper look at this tomorrow.
we’re also in the middle of a bit of a refactor that I’m really hoping to get in by the end of next week.
but it shouldn’t really affect this change in particular.
| thanks Yee
|
thanks Yee
| oh hi.
sorry, yes.
Katrina Rogan and i both took a look.
fill out tests and we can merge?
also bump the version
|
oh hi.
sorry, yes.
Katrina Rogan and i both took a look.
fill out tests and we can merge?
also bump the version
| sounds good
thanks!
|
sounds good
thanks!
| and added a couple random comments.
|
and added a couple random comments.
| looks good! i'll defer to yee to approve since he's the flytekit guru but thank you for adding this and refactoring!
|
Hey everyone! Is it possible to run a spark task on condition, for example, if one of the workflows inputs was filled? Maybe you have an example where I can look
| Yee another example of branch
|
Yee another example of branch
| hey Artem Osipov! This is natively supported in the IDL but not yet implemented. For now though, you can achieve a similar behavior by using dynamic tasks, though it’s not exactly the same.
for example, here is a dynamic task that yields tasks from within a for block: <https://github.com/lyft/flytekit/blob/master/tests/flytekit/common/workflows/batch.py#L58>
you could just as easily make that an if block.
but the interface isn’t nearly as clean since it means you’ll be running an entire task (replete with container loading and such), just to run an if statement. the native branching that ketan’s referring to will fix that.
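Until native branching lands, the workaround described here amounts to conditionally yielding sub-tasks from inside a dynamic task body. Stripped of the flytekit decorators (which need a registered container and a Flyte backend to actually run), the control-flow shape is just a generator with an `if` — a plain-Python sketch, where `spark_task` and `noop_task` are hypothetical stand-ins for real tasks:

```python
# Sketch of the conditional-yield pattern a dynamic task uses: the body is a
# generator, and sub-tasks are only yielded when the condition holds.
# `spark_task` / `noop_task` are made-up stand-ins for real flytekit tasks.
def spark_task(data):
    return f"spark({data})"

def noop_task():
    return "skipped"

def dynamic_branch(spark_input=None):
    # Equivalent of the `if` block inside a @dynamic_task body:
    # only schedule the spark job when the optional input was filled.
    if spark_input is not None:
        yield spark_task(spark_input)
    else:
        yield noop_task()

# The "engine" here is just list(): it collects whatever work was yielded.
print(list(dynamic_branch("events.parquet")))  # → ['spark(events.parquet)']
print(list(dynamic_branch()))                  # → ['skipped']
```

The cost Yee mentions is visible in the real version: the whole generator runs inside its own task container just to evaluate the `if`.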
|
hey Artem Osipov! This is natively supported in the IDL but not yet implemented. For now though, you can achieve a similar behavior by using dynamic tasks, though it’s not exactly the same.
for example, here is a dynamic task that yields tasks from within a for block: <https://github.com/lyft/flytekit/blob/master/tests/flytekit/common/workflows/batch.py#L58>
you could just as easily make that an if block.
but the interface isn’t nearly as clean since it means you’ll be running an entire task (replete with container loading and such), just to run an if statement. the native branching that ketan’s referring to will fix that.
| Thanks!
|
Yee - Vrinda Vasavada wants to know if we can have 2 different docker images in the same repo. Note: its not the same workflow, but basically different workflows with different images
Yee this is not indicated in any examples, and the default examples don't really help understand this. Can we help Vrinda Vasavada
Vrinda Vasavada just for me to understand, can you have these different workflows in different python modules / folders?
| yes we can!
|
yes we can!
| that should make it simple
so if you actually look, Vrinda Vasavada, <https://github.com/lyft/flytesnacks> follows this pattern exactly
like look at this - <https://github.com/lyft/flytesnacks/tree/master/plugins/spark>
<https://github.com/lyft/flytesnacks/tree/master/plugins/pytorch>
different yet in the same repo
|
that should make it simple
so if you actually look, Vrinda Vasavada, <https://github.com/lyft/flytesnacks> follows this pattern exactly
like look at this - <https://github.com/lyft/flytesnacks/tree/master/plugins/spark>
<https://github.com/lyft/flytesnacks/tree/master/plugins/pytorch>
different yet in the same repo
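concretely, the flytesnacks pattern boils down to one subdirectory per workflow group, each carrying its own Dockerfile and built as its own image — a rough sketch, with made-up directory and registry names:

```shell
# Hypothetical repo layout following the flytesnacks pattern: one directory
# per workflow group, each with its own Dockerfile and image tag.
mkdir -p myrepo/spark_workflows myrepo/pytorch_workflows
echo 'FROM spark-base:latest' > myrepo/spark_workflows/Dockerfile
echo 'FROM pytorch-base:latest' > myrepo/pytorch_workflows/Dockerfile

# Each image would then be built and pushed independently, e.g.:
#   docker build -t registry/spark-workflows:v1 myrepo/spark_workflows
#   docker build -t registry/pytorch-workflows:v1 myrepo/pytorch_workflows
ls myrepo
```

each workflow's registration then just references its own image tag, so the two never need to share a base image.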
| basically just having two repos in one repo? sure
|