yeah but resulting closure is encoded :disappointed:
but the task type is a top-level field?
inside closure
<https://github.com/lyft/flyteadmin/blob/master/pkg/repositories/models/task.go#L21> ?
oh wait
isn't that what i linked to?
Yeah. The word was too generic. I was expecting something like `task_type`, but it works :slightly_smiling_face:
Yee Sean Shi Let's discuss here. Sean has a dynamic task that yields a dynamic task. We tried `run_external_workflow_dynamic_task.assign_name("some.unique.name")` and it did not work
pinging on this as it's blocking a critical task for us
cc Ketan Umare
hey did you look at the example? ohh the nested task is a dynamic task already. Sean Shi can you not do a workaround, like don't use 2 levels of nested dynamic tasks? otherwise the only thing we can do is try to replicate the error. Anand Swaminathan on the other hand at the moment we cannot even open PRs
I've looped in the workflow owner, so I'll ask if it's possible to work around. I'm not familiar enough to make that call. all I can say is it was working in flytekit==0.9.4
Sean Shi messaged you
I am seeing two differences in the flow of execution of the different tasks in play here. 1. The failing workflow has one of the nested tasks missing. 2. The external repo's task that is being triggered has been changed by its owner from a python task to a dynamic task. So the failing one has a sequence of 3 nested dynamic tasks compared to two in the succeeding one.
if two succeed then by induction 3 should; there is no specific logic in which it wouldn't. seems there is some other problem
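Since the debate is about nesting levels: a toy model (NOT the flytekit API; every name below is invented for illustration) of a dynamic task that yields another dynamic task, treating a "dynamic" task as a generator of sub-tasks:

```python
# Toy model of nested dynamic tasks (not the flytekit API; all names made up).
# A "dynamic" task is modeled as a generator that yields sub-tasks, and those
# sub-tasks may themselves be dynamic, giving arbitrary nesting depth.

def run(task, depth=0):
    """Recursively 'execute' a task; iterables act as dynamic tasks."""
    if callable(task):
        task = task()
    if hasattr(task, "__iter__") and not isinstance(task, (str, bytes)):
        results = []
        for sub in task:
            results.extend(run(sub, depth + 1))
        return results
    return [(depth, task)]  # a leaf result and the nesting depth it ran at

def leaf():
    return "done"

def inner_dynamic():   # a dynamic task yielding plain tasks
    yield leaf
    yield leaf

def outer_dynamic():   # a dynamic task yielding a dynamic task
    yield inner_dynamic

assert run(outer_dynamic) == [(2, "done"), (2, "done")]
```

In this model nothing distinguishes depth 2 from depth 3, which matches the induction argument above: if two levels work, a third should too unless something else is wrong.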
but there is still the issue with 1. one of the nested tasks is missing from the UI
This is resolved. We fixed the bug
oh. cool
the `auto-assign` error is no longer present, but the external workflow does not appear to be launched anymore. I’m observing what Adithya Hemakumar described. the nesting of sub-workflows is different
can you start a new thread please? also Anand Swaminathan can you pull the futures file and take a look at it? `flyte-cli parse-proto -f ~/Downloads/futures.pb -p flyteidl.core.dynamic_job_pb2.DynamicJobSpec` or wherever you put it
hi Yee, not sure what the new thread topic should be? I think we’re still on the same issue
Sean Shi I think everything is fine. The task is being cached
yep, looks like it’s user error! thanks
A quick question, is <https://github.com/lyft/flytepropeller/blob/master/config.yaml#L4> mandatory?
If you use Blobs or Schemas, yes... It can be the same bucket you use for everything else.. but our recommendation is to keep them separate...
And this is if you don't specify one for a launch plan. This is a default value
Hm… why isn't this setting project-specific? My understanding is that this setting appeared so that flytepropeller can generate output locations to avoid dirty output when a task retries
Generally speaking settings start global (read: controlled by the Flyte administrator) then when a use case arises we make them customizable through the control plane (project/domain/WF specific).. I see what you're saying though.. if there is a use case for you, I can point you to prior PRs that did exactly that for other settings...
I see. I remember a while ago this setting was in the SDK. And then Ketan said that there is a new docker arg that specifies the location where to write data. Can flytekit python have a custom location for data today?
Yes it can. Also the setting is per launchplan
Propeller passes this (raw output prefix + some suffix) to flytekit (or any container if it asks for it in its container CMD arg templates)
Thus even for a referenced task, data will be written to the launchplan-configured prefix
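To illustrate the behavior described above (the path scheme and all names here are hypothetical, not Flyte's actual layout): propeller hands each task attempt a distinct location under the launchplan's raw output prefix, so a retried attempt can never dirty a previous attempt's data.

```python
# Hypothetical sketch of per-attempt raw output locations (not Flyte's actual
# path scheme): the prefix comes from the launch plan, the suffix makes each
# attempt's write location unique.
from posixpath import join

def raw_output_location(prefix: str, execution_id: str,
                        node_id: str, attempt: int) -> str:
    """Build a write location that is unique per task attempt."""
    return join(prefix, execution_id, node_id, str(attempt))

first = raw_output_location("s3://my-raw-bucket", "exec-abc", "n0", 0)
retry = raw_output_location("s3://my-raw-bucket", "exec-abc", "n0", 1)
assert first != retry  # a retry never overwrites the previous attempt's data
```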
I can explain why using one bucket is a problem. For GCS, access control is done per bucket, so having one bucket means that all users share the same bucket where they read and write data, without much access control. It isn't a big deal for inputs and outputs, but a bigger deal for actual blobs
It's not one bucket right Gleb Kanterov? As I said it is per launchplan and this can be completely different. I guess what you are saying is that maybe we should have a project-level default?
I didn't understand what you meant by "per launchplan", but now I see <https://github.com/lyft/flyteidl/blob/master/protos/flyteidl/admin/launch_plan.proto#L98>. It makes sense now. Not sure how feasible it is to go with project-default settings, because it breaks immutability of configuration. I guess if we see the global setting as an easy way to get started it all makes sense. Thanks for the clarification :+1:
Aaah sorry for the confusion. Docs on it
We need to support this part in the Java SDK as well :slightly_smiling_face: Hope to get my hands on it eventually. It would be very nice to create custom SDK type mappings for `Dataset[T]` (Spark) or `PCollection` (Scio/Beam)
Ohh you mean in Java? We did this in python, let me share <https://github.com/lyft/flytekit/blob/annotations/flytekit/taskplugins/spark/schema.py> So we created this thing called the type engine, which transforms from python types to flyteidl
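A minimal sketch of the "type engine" pattern being described (all names here are hypothetical stand-ins, not flytekit's real classes; the real engine converts to flyteidl literals): a registry maps a native type to a transformer, and plugins like the linked Spark schema plugin register their own.

```python
# Sketch of the type-engine pattern (hypothetical names, not flytekit's API):
# a registry maps a python type to a pair of (serialize, deserialize)
# functions, standing in for conversion to/from flyteidl literals.
import json
from typing import Any, Callable, Dict, Tuple

_REGISTRY: Dict[type, Tuple[Callable[[Any], str], Callable[[str], Any]]] = {}

def register(py_type: type,
             to_lit: Callable[[Any], str],
             from_lit: Callable[[str], Any]) -> None:
    """A plugin calls this to teach the engine about its native type."""
    _REGISTRY[py_type] = (to_lit, from_lit)

def to_literal(value: Any) -> str:
    return _REGISTRY[type(value)][0](value)

def from_literal(py_type: type, literal: str) -> Any:
    return _REGISTRY[py_type][1](literal)

# e.g. a plugin registers dict <-> JSON; a Spark plugin would do the same for
# DataFrame <-> a schema/parquet location instead.
register(dict, json.dumps, json.loads)

lit = to_literal({"a": 1})
assert from_literal(dict, lit) == {"a": 1}
```

The appeal of the pattern is the one mentioned in the thread: user code just returns native objects (a dataframe, a `PCollection`), and the registered transformer handles serialization.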
I was thinking about how to provide this kind of context to the implementation, and came up with something similar to the grpc context, but for tasks. And now I opened the code snippet that you linked and see the flyte context there :slightly_smiling_face: How does one output a Spark DF, is it by returning the dataframe object? Yes, it seems so. Looks quite nice. I wasn't sure about this idea, but now that I see you came to the same conclusion and it works, I feel better about it
Ya I thought about it and the implementation matters, but it really simplifies usage
I also like that there are user-space params; a similar thing can be useful for the equivalent of the spark context in beam/scio
awesome
Ketan Umare you were showing some nice documentation with the new tutorials, is that already published?
it is… but it's still a work in progress. <https://flytecookbook.readthedocs.io/en/latest/auto_recipes/index.html#beginner> we're still finalizing the exact topics and writing things up. the basic section is done.
thanks for sharing :slightly_smiling_face:
we’ll be pushing new content for the rest of the week too, so feel free to check back later as well
Hi all, I'd like to spark a discussion about how people deploy Flyte to K8S. If I remember correctly Ketan Umare you said something about ideas regarding changes in packaging/deploying Flyte, so this might be relevant as well.

We're currently using Helm, but we're not too happy with it, mainly for these reasons:
• Text-based templating on top of indentation-sensitive yaml is pretty error-prone and readability isn't great either.
• Different Helm charts don't compose well because Helm values can't be variables.
• We ended up with a completely broken Helm release quite a few times, because the release and then the rollback both failed.

To mitigate some of the issues, we generate our values.yaml using <https://dhall-lang.org/|the Dhall configuration language>, which allows us to define variables in one central place to keep things DRY. Dhall also has a type system which improves error reporting, and <https://github.com/dhall-lang/dhall-kubernetes|Dhall-kubernetes> allows you to generate K8S resources directly, more similar to writing Helm charts but less fragile. We haven't tried that yet though. A big advantage of Helm is that there's a Helm chart available for almost everything (although quality varies a lot). Helm also properly prunes resources that are removed during an upgrade.

Kustomize is interesting due to its template-free approach and flexibility to patch everything. It's also nicely integrated with kubectl. On the downside, it can be quite repetitive to add things like common labels, and as you add more layers of kustomizations, readability suffers too. Pruning resources is still experimental in kubectl and therefore in kustomize as well. A plus is of course that Flyte already has different kustomizations available.

Although I haven't tried it yet, <https://tanka.dev/|Tanka> also looks quite interesting to me. It uses Jsonnet as a templating language, which, even though it's not type safe like Dhall, is much better than text-based templating on yaml IMHO. A big plus is that Tanka has (experimental) Helm and Kustomize integrations, so you can import existing charts or Kustomize resources and use them with Tanka. Like with Kustomize/kubectl, pruning seems still experimental.

This is just my limited view; I'm sure there are a lot of other options as well, so I'd love to hear about your deployment options, experiences, opinions and ideas.
dhall can be nice, but the learning curve can be steep for people not familiar with functional languages like Haskell. Did you see kpt and kpt functions? With kpt you can do templating with Skylark (python dialect for bazel), Typescript or golang
Thanks Gleb Kanterov I didn't know about kpt. Will look into it. Yeah I agree Dhall can be intimidating at first if you're not familiar with an FP language already. The syntax is also quite haskellish.
Wow, love the discussion. I honestly would love it if some of you made the decision. Ruslan Stanevich is another expert who has created a PR with helm
Hi, I agree with your concerns about Helm as a release-management tool; I've faced the strange stuff several times. My team uses Helm only `as a template engine`. Our internal Flyte helm chart is similar to this one <https://github.com/lyft/flyte/pull/550/files>. Then this Helm chart can be deployed with a tool like <https://argoproj.github.io/argo-cd/> or just using `kubectl apply` on the generated yaml file. I don't consider helm a release manager :slightly_smiling_face: Frankly speaking, it reduced the number of yaml manifests in the repo and made it easier to maintain and update configuration for the installations in different clusters. The biggest challenge I've faced using Helm for Flyte: your Helm chart templates should contain all possible components for installation, like `SparkOperator`, `Contour` (for minikube), `Istio config` (e.g. in our installation), `spark history server`, etc. Otherwise, it would be inconvenient to manage these components outside the Flyte chart. And in my opinion, the `ideal Flyte installation package` is:
• the Helm chart which contains in its templates all possible components for dev and prod environments and allows toggling these features in the `values-<installation>.yaml` file.
• :slightly_smiling_face: Or even a `FlyteOperator`, where you describe only one CR manifest for the whole installation (like <https://github.com/jaegertracing/jaeger-operator> etc.)
Following the idea of CR, with kpt functions it’s quite easy to implement, and it can be typesafe if typed SDK is used.
So do we have a suggestion?
Ruslan Stanevich yeah you're right, it's important to distinguish creating/configuring resource definitions from release management, especially as you might want to use different tools for each task even if a tool like Helm supports both. I guess one of the troubles we have with Helm is that we use an umbrella chart which combines several external charts. Now if we want to set e.g. the same hostname in different places, Helm is too limited. That's why we have to generate the values.yaml with another tool. I'm not too familiar with custom resources, so I'm trying to understand what a `FlyteOperator` CR would look like. Would all Flyte configuration be part of the CR? And then does some operator implementation create the low-level K8S resources based on that config? So we gain simplicity (only one CR manifest, less opinionated about tools like Helm, Kustomize etc.) but we lose some flexibility as we can only configure what's exposed through the CR. What about dependencies like the Spark operator?
Thank you Ruslan Stanevich / Sören Brunk, can you guys please help flyte go in the right direction?
actually, can we discuss this at the next meeting? the one in jan? or should we handle it before then?
Ya let's discuss next meeting
I have a question about workflow execution. I have a workflow with a container task whose docker image is not pushed. When triggering an execution of the workflow, the execution failed, but in the namespace a POD with `ImagePullBackOff` state is not cleaned up. In propeller's log, I found the message `Trying to abort a node in state [failed]`, which leads me to <https://github.com/lyft/flytepropeller/blob/c53e1d7a9aa2c060230f6c5d7db69fd3123adeb8/pkg/controller/nodes/executor.go#L860|here>. Is it expected behaviour to have a POD hanging, or does this seem more like a bug?

Some more error msg: `Some downstream node has failed. Failed: [true]. TimedOut: [false]. Error: [code:"RetriesExhausted|ContainersNotReady|ImagePullBackOff" message:"[4/4] currentAttempt done. Last Error: USER::containers with unready status: [f2zjnxjy]|Back-off pulling image`

It looks like the node (with the wrong docker image) transited to the `failed` state first during retrying. But when it reached max attempt retries for the workflow, the abort function did not clean up the pod, because the node is not in the <https://github.com/lyft/flytepropeller/blob/c53e1d7a9aa2c060230f6c5d7db69fd3123adeb8/pkg/controller/nodes/executor.go#L629|states> which can actually be aborted. The node went to `Finalize`, but it seems that in that function the pod is not deleted; it rather just cleared the <https://github.com/lyft/flytepropeller/blob/cdd6fa250981b5ae1481f54794e028dc7b1cff23/pkg/controller/nodes/task/k8s/plugin_manager.go#L369|finalizer>. I am not sure whether the config attribute `inject-finalizer: true` would help in our case, but it is not enabled in our setup.
Ya so we do not delete the last attempt today, because if we do then we will lose the logs immediately. But maybe we should make that a config. In case of a failed container pull the pod continues; this also happens in sidecar jobs. This was a decision made to preserve the k8s logs. The pod eventually gets reaped when the gc threshold is met
In our case, we had a lot of failed containers which ate all the quota for the project, and no new execution could be done.
Ok one way is to reduce the gc threshold to 1 hour or less. And I can help make deleting the last attempt a config option. Should be trivial
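For reference, the GC threshold mentioned here lives in propeller's controller config; a sketch of the relevant fragment (field names and defaults should be double-checked against `pkg/controller/config/config.go` before relying on this):

```yaml
# flytepropeller config sketch -- verify against config.go before use
propeller:
  gc-interval: 30m    # how often the GC sweep runs
  max-ttl-hours: 1    # completed workflows older than this get deleted (0-23)
```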
Ok. Just curious about `inject-finalizer`: I don't think it matters much, right? Since the finalizer will be called anyway. But does the gc clean up the PODs as well? I thought it only cleans flyte workflow CRs
Ya if u disable inject-finalizer, pods will be deleted. Robles is: at high scale, k8s deletes random completed pods above 12800 pods, unless configured otherwise. You can do that for immediate relief
Robles? I can check that out. Thanks a lot for the clarification
Hi Nian, sorry I was on my phone. I can link you a few things that you can look into. Also Robles -> problems (gotta love iphone auto-correct) :smile:

So let me explain how the cleanup works. When we create a Workflow, all pods, spark jobs or other CRDs we launch are created as child entities (this is done using ownership references). But to prevent child entities from getting cleaned up async, you can inject a finalizer. For a workflow, thus, all child entities will remain unless the Workflow itself is deleted. The workflow is deleted in 2 scenarios:
1. The workflow completes and it has been beyond the GC threshold <https://github.com/lyft/flytepropeller/blob/master/pkg/controller/config/config.go#L70> (Note the funky logic: 0-23 hours only, in hourly increments. This is because not all versions of kubernetes support CRD GC and we wanted the GC to be extremely low overhead, so using this logic makes it possible to be one command)
2. When the workflow is aborted

Now another interesting behavior, again added because we learned user behavior: let's say a task is running and it fails, and there are 3 retries. Then for retries 1 & 2, the pods will be deleted by propeller. For retry 3, the pod will not be. This is because users expect to see the log for some time after the failure. Eventually it will be deleted once the GC threshold is met. Sadly, for ImagePullBackOff failures, the pod never gets released and we should handle its deletion.

Nian Tang ^ Let me know if you have any questions, and if I should add a flag that allows deleting the last failed attempt automatically. Also Nian Tang here is the logic difference between retryable failure and failure: <https://github.com/lyft/flytepropeller/blob/master/pkg/controller/nodes/executor.go#L514-L521> If you look, `abort` is not invoked; that's the difference
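The retry/cleanup rules above can be summarized in a toy decision function (an illustration of the described behavior only, not propeller's actual code):

```python
# Toy encoding of the cleanup rules described above (illustration only):
# pods from non-final attempts are deleted on retry; the final attempt's pod
# is kept around for log inspection until the GC threshold passes.

def should_delete_pod(attempt: int, max_attempts: int,
                      hours_since_done: float, gc_ttl_hours: float) -> bool:
    """Decide whether a failed attempt's pod should be deleted now."""
    if attempt < max_attempts - 1:
        return True  # retries 1..n-1: deleted immediately by propeller
    # last attempt: kept so users can read logs, reaped once GC TTL is met
    return hours_since_done >= gc_ttl_hours

assert should_delete_pod(0, 3, 0.0, 23.0)        # early retry: delete now
assert not should_delete_pod(2, 3, 1.0, 23.0)    # last attempt: logs kept
assert should_delete_pod(2, 3, 24.0, 23.0)       # reaped after GC threshold
```

Note that this toy model does not capture the `ImagePullBackOff` corner case the thread is about, where the last pod never gets released at all.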
Nice explanation. :+1:. I need to understand it better. But for now I need to make food for my hungry kids
hahah, absolutely. i will actually make it a config by then and you can try it out
Or I can try to make it a config option. Also want to make some contributions :blush:
yes, that is amazing, i will not do it :smile: I want more people to start helping with FlytePropeller, it is a beast (hopefully simple). Also let me know and I can walk you through the code whenever you want; we can do like a 1 hour meeting. and go feed your hungry kids haha
That would be great. Do you have some time tomorrow morning? Or we can do it on Monday next week
ohh Nian Tang I had time, but I missed this. I can do monday morning around 8:30/9:00?
Sure. We'll do it on Monday then
ok, let me know a time preference or send an invite. let me dm you my calendar
Ketan Umare A question about the GC. Should the workflows being GC'ed have a `deletionTimestamp` on the workflow CR?
Ya so when a crd or object is deleted, it's an async process. First a deletion timestamp is added. During gc, we add a deletion timestamp and that will async get propagated and handled by propeller
It is a bit strange that none of the flyte workflows are marked with the deletion ts
And is it stuck?
No. I dumped all CRs and did a grep for the deletion timestamp, and it is an empty result for 2 months
? Is gc disabled?
Is there a flag for it?
U should be able to see these logs in propeller <https://github.com/lyft/flytepropeller/blob/master/pkg/controller/garbage_collector.go> You are right, it cannot be disabled
I will create an issue a bit later
issue about? Actually I don't understand your problem. are you saying there are workflows from 2 months ago still in your k8s cluster?
Yes
did you get a chance to look at the logs? something seems to be off
I had a look at it. It looks like the GC is running and I didn't see errors related to it. Maybe we misconfigured something. It looks like a permission issue: `Garbage collection failed in this round.Error : [namespaces is forbidden: User "xxx" cannot list resource "namespaces" in API group "" at the cluster scope`
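For anyone hitting the same error: the GC lists namespaces cluster-wide before cleaning up CRs, so propeller's service account needs cluster-scoped permissions. A sketch of the missing RBAC piece (the role/binding names, CRD group, and service account here are placeholders, so adapt them to your deployment):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: flytepropeller-gc          # placeholder name
rules:
  # GC needs to enumerate namespaces cluster-wide...
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
  # ...and delete FlyteWorkflow CRs inside each of them.
  - apiGroups: ["flyte.lyft.com"]
    resources: ["flyteworkflows"]
    verbs: ["get", "list", "delete", "deletecollection"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: flytepropeller-gc
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flytepropeller-gc
subjects:
  - kind: ServiceAccount
    name: flytepropeller           # placeholder: whatever SA propeller runs as
    namespace: flyte
```

The "cannot list resource namespaces at the cluster scope" wording in the error is the giveaway that a namespaced Role is not enough here.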
I thought as much
I think we might have missed quite a lot of configuration related to RBAC. :sweat_smile: We did not configure anything actually and it still worked for most parts. That is really amazing.
Wow That is surprising
Not sure why it worked. Anyway, I fixed it manually. Thanks a lot for the help.
Haytham Abuelfutuh we are having trouble with our signup link, do you mind changing that in the github/flyte landing page
Yes, I was trying to recover it... but in vain... I'm....
yes i found some users in a different workspace, waiting for it to work :disappointed:
Can someone review this? <https://github.com/lyft/flyte/pull/661> I don't have it hooked up to the invite pipeline yet.. but at least we will collect the emails in an accessible location. CC Katrina Rogan Yee
Chang-Hong Hsu
looking
Hi :wave: The question is related to how `Flytepropeller` handles workflows when the resource quota is exceeded. When the `resourcesQuota` per namespace is exceeded (for example `<http://requests.nvidia.com/gpu|requests.nvidia.com/gpu>` is limited), I noticed that the workflow on the dashboard goes to the `Running` status and the Flytepropeller logs show something like `Failed to launch job, resource quota exceeded. err: [BackOffError] The operation attempt was blocked by back-off [attempted at: 2021-01-19 08:57:12.03015527 +0000 UTC m=+4551682.770724274][the block expires at: 2021-01-19 08:58:49.028628226 +0000 UTC m=+4551779.769197238] and the requested resource(s) exceeds resource ceiling(s)` So, I'd like to figure out the simplest way to tell whether there are workflows in `Running` status that have not actually been applied in Kubernetes yet, i.e. whether there is a specific metric or piece of metadata for this. Unfortunately, I couldn't find such a source of truth. Thank you!
I _think_ there is a metric we emit when we go into back-off... I'll look when I get to my computer... Cc Chang-Hong Hsu if you know off the top of your head...
Haytham Abuelfutuh did you find it?
No (need to set up Prometheus to explore them instead) but I found that we mark tasks as "WaitingForResources"... and I believe that will make it to the Task Event we submit to Admin... so if you are looking for a way to get all of the workflows stuck with a task in that phase, you can query the Admin API to iterate and inspect them...
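A rough sketch of what that Admin query could look like. Both the `/api/v1/executions` path (from flyteadmin's grpc-gateway mapping) and the filter expression syntax are assumptions here, so verify them against your deployment before relying on this:

```python
from urllib.parse import urlencode

def list_executions_url(admin_host, project, domain,
                        filters="eq(phase,RUNNING)", limit=100):
    """Build a hypothetical ListExecutions URL against the Admin REST gateway.

    The path and filter syntax are assumptions based on flyteadmin's
    grpc-gateway mapping; adjust to whatever your install actually exposes.
    """
    query = urlencode({"limit": limit, "filters": filters})
    return f"{admin_host}/api/v1/executions/{project}/{domain}?{query}"

url = list_executions_url("http://flyteadmin.example.com", "myproject", "development")
print(url)
# One could then fetch each RUNNING execution (e.g. requests.get(url).json())
# and inspect its node/task events for the "WaitingForResources" phase.
```

This avoids the Prometheus setup: it just walks the executions Admin already knows about and inspects their phases.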
Hey all! I have a general usage question. We were wondering how to tag metadata to flyte workflow runs so that our users can easily track what they run, what result is linked to which run id, and maybe search their past runs. We were thinking of linking our flyte runs with some kind of model / workflow registry where we can link run_id with metadata, then users could maybe search previous runs for example? Do y'all have suggestions on this? Thanks
robust metadata/tags is something we definitely want but haven't had time to implement :disappointed: however there are some ways you can accomplish that through freeform execution <https://github.com/lyft/flyteidl/blob/master/protos/flyteidl/admin/execution.proto#L231|annotations &amp; labels> as well as the <https://github.com/lyft/flyteidl/blob/master/protos/flyteidl/admin/execution.proto#L178|user> who triggered an execution. re: linking a result to a run id could you explain a little bit more? individual outputs are entities saved in <https://github.com/lyft/datacatalog|datacatalog> and this has support for tagging and lineage tracking and to your last point about searching for previous runs, this is definitely something we hope to do. please upvote <https://github.com/lyft/flyte/issues/671> or if you'd like to contribute we can talk about the work (not a whole lot!) that needs to be done to implement
• On linking a result to a run_id: yes, this definitely exists, apologies I did not make myself clear. My point was mainly about the ability to tag a run with metadata; getting to the result is already possible. Result for me refers not to the output of the flyte run, but the business result that is related to a run id (i.e. this run id ran training for model a.1, which had the best result on task a). • About searching for previous runs: just upvoted the linked issue. Would be happy to try and contribute but not sure I have the correct technical skills :stuck_out_tongue:
ah yeah so I like I said earlier metadata is undefined atm unfortunately. you could again leverage labels or annotations as interim tags but we don't support filtering on either of those at the moment. the folks over at spotify are working on a proposal for generic event sinks for executions and are interested in adding task metadata along with those changes. it could be worth following ( in <#C01JNV23VE3|flyte-events>) if you're interested. one option is you could leverage this work to push the execution data to a separate registry like you proposed which could be a better store for discovering executions with specific metadata attributes for your business needs filtering by user for executions is actually not that big of a change! if you're interested in contributing (which we would love!) we can set up some time to go over the changes you would need to make in flyteadmin :)
quick qn: is it fair to assume that propeller retries on quota exceeded from ResourceQuota? that it queues up tasks as resources free up?
yes it will wait for the quota to free up
awesome! as a follow-up, in this config, these look like domains. Is there a way to specify project-specific (not domain-specific) resource quotas? <https://github.com/lyft/flyte/blob/ecb63bdb5ecfc2a0895d75dd165d85ba1827ee7c/kustomize/base/single_cluster/headless/config/admin/cluster_resources.yaml|https://github.com/lyft/flyte/blob/ecb63bdb5ecfc2a0895d75dd165d85ba1827ee7c/kustom[…]ase/single_cluster/headless/config/admin/cluster_resources.yaml>
Give me 2 minutes. Hey Jeev B, you want one quota across all domains? That is a little dangerous, no? Someone running development may kill their production loads? Also, k8s quotas are per namespace, so if you are using namespaces per `project-domain` combination then that cannot be done, sadly
we run separate clusters for dev/prod
just make it so that `development+staging+production` = 130% and production has the largest. Ohh you do, so you have a multi-cluster setup? In that case it's ok, you can set the quota for project + domain, right? Should not be a problem?
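Since each project-domain namespace gets its own ResourceQuota stamped from the templates in the linked `cluster_resources.yaml`, the knobs live in the `customData` section of that config. A hedged sketch of the shape (the key names follow the linked file; the values and intervals here are placeholders):

```yaml
cluster_resources:
  refreshInterval: 5m
  templatePath: /etc/flyte/clusterresource/templates   # ResourceQuota templates
  customData:
    # One entry per domain; each project-domain namespace is rendered
    # from the templates with these values substituted in.
    - production:
        - projectQuotaCpu:
            value: "16"      # placeholder
        - projectQuotaMemory:
            value: "64Gi"    # placeholder
    - development:
        - projectQuotaCpu:
            value: "4"
        - projectQuotaMemory:
            value: "16Gi"
```

These are domain-level defaults; per-project overrides would go through Admin's matchable resource attributes rather than this file, if supported in your version, so treat that part as something to verify.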