if you wouldn’t mind trying, i’d be curious to see what this value is at the point of failure
```from flytekit.configuration.sdk import WORKFLOW_PACKAGES
WORKFLOW_PACKAGES.get()```
ya, Yee wouldn't creating a module-level variable to hold the task solve it?
it could be that (2) can be fixed by creating a wrapper that will return None if the code is running inside a container in a Flyte task. Not sure how much harm this magic is going to bring :slightly_smiling_face:
as it gets added to the instance tracker
```WORKFLOW_PACKAGES ['workflows.onemodel.royalties']```
Gleb Kanterov in the new flytekit api we are also overhauling how remote tasks should work, we should discuss that, let me start a thread in <#CREL4QVAQ|flytekit>
sorry Yee, if i unset the env var i get
```WORKFLOW_PACKAGES []```
but it errors either way
this line needs to be moved to module level:
```ads_revenue = SdkWorkflow.fetch(
    "flytesnacks",
    "production",
    "workflows.ads_revenue.workflow.AdsRevenueForecast",
    "v4"
)```
doesn't fix it
if by module you mean file
yeah, it needs to be assigned to a key returned in `dir(module)`
same error if you move it?
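A minimal pure-python sketch of that `dir(module)` point (no flytekit here; the names are made up): only module-level assignments show up in `dir(module)`, so a function-local fetch is invisible to the tracker.
```import sys

ads_revenue = "module-level value"   # appears in dir(module)

def make_local():
    hidden = "function-local value"  # never appears in dir(module)
    return hidden

module = sys.modules[__name__]
print("ads_revenue" in dir(module))  # True  -> visible to an instance tracker
print("hidden" in dir(module))       # False -> invisible to it```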
yup :confused: Yee can testify
hmmm, unfortunately i can't dig into this right now, but my best guess would be there is some slight misalignment between workflow and task registerable entities, since that path for tasks is pretty well-exercised and hasn't been creating issues.
and catching up, looks like yee already found it and fixed it
yep :slightly_smiling_face:
I have some questions about ser/de and caching behavior. I think i understand, so stop me when i'm wrong:
1. caching is based on the _materialized_ inputs to each task. e.g. if task B depends on output from A, and A is rerun due to changed inputs _but the output does not change_, task B will not be rerun
2. the equality of output is determined by the proto representation of that type. e.g. if it's a string type then this process is straightforward, however if it's a CSV or dataframe then even though the data output by A may be the same, the file location in the proto may change and _so it will be a cache miss_
1. correct.
2. correct.
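A toy illustration of point 2 (the literal shapes below are made up, not the real flyteidl protos): a string literal compares by value, while a file-backed literal carries a per-execution URI, so identical data at a new location still misses the cache.
```# made-up literal shapes, for illustration only (not the real flyteidl protos)
string_run_1 = {"scalar": "hello"}
string_run_2 = {"scalar": "hello"}
assert string_run_1 == string_run_2  # same value -> cache hit

csv_run_1 = {"uri": "s3://bucket/exec-1/out.csv"}
csv_run_2 = {"uri": "s3://bucket/exec-2/out.csv"}  # same bytes, new location
assert csv_run_1 != csv_run_2  # different URI -> cache miss```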
I have an interesting use case that i'm trying to solve where this behavior is a blocker. would like to get your input.
we're basically building a scenario analysis tool which involves a bunch of interconnected models. users provide assumptions (such as future fx rates, product pricing, etc) and kick off the system to make predictions.
because of the scale/complexity of the parameters, we cannot model each one as a flyte type, so we need an external UI/database etc. the plan was to have a user create immutable copies of possible parameters and then pass a pointer to the params as the main arg to the workflow. unfortunately, since that pointer will change every run, it basically invalidates the entire cache, which will be a bad user experience
Hey Dylan Wilder. That's an interesting use case!
DataCatalog (the thing that provides caching behavior) has public and documented APIs... We have customers who use it outside the typical behavior of Flyte caching.
One possibility I see here is that you can compute your own provenance (the thing you want to use as the lookup key) and use that directly to look up from data catalog; if an existing artifact exists (in this case it'll be a pointer to the real dataset), return it...
This way, subsequent tasks will continue to behave the same way (caching will work as expected... etc.)
you can compute the provenance based on hashing of the real data generated (might be expensive, depending on the size)...
Does that make sense?
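A rough sketch of that provenance idea: hash a canonical serialization of the parameter set to get a deterministic lookup key. The commented-out `catalog.*` calls are hypothetical placeholders, not the real DataCatalog client API.
```import hashlib
import json

def provenance_key(params: dict) -> str:
    # canonical serialization so the same assumptions always hash the same
    canonical = json.dumps(params, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

params = {"fx_rate_eur_usd": 1.08, "pricing_tier": "B"}
key = provenance_key(params)
# hypothetical DataCatalog usage (placeholder names, not the real client):
# artifact = catalog.lookup_artifact(dataset="scenario-params", tag=key)
# if artifact is None:
#     catalog.create_artifact(dataset="scenario-params", tag=key, uri=pointer)
print(key)```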
it does! this is the kind of thing i was looking for
alright, let me write up a gist with an example...
Some very rough idea here: <https://gist.github.com/EngHabu/79da5071a4f2715811dec55cc8f5961a>
sorry, what's going on with `B` here? ostensibly it should be written to storage somewhere and the reference saved in datacatalog? is that storage external, or is there a way to store the object directly in flyte? and how is this different from storing anywhere deterministic and returning the directory as a string (ie eliding datacatalog)?
also a follow-on question. in many cases _some_ parameters have changed but many haven't, and not all downstreams depend on every parameter, so ideally we would just rerun those that are necessary, but again we _don't want to expose everything as a top level flyte output_ since this would be unmanageable. think a big dict of key to dataset values. is it possible to depend on one of the keys only, or in general on some sub-element of an output? In the old api i'd guess no because outputs aren't materialized, but maybe in the new one?
the python sdk shouldn't impact this since it really comes down to the IDL definition. Having sub-references into the IDL for structured outputs is a really interesting thought though.
one idea that could work today (not sure how extensible it is to your setup): you could probably achieve this in a fairly elegant way in python by declaring a dict with individual keys pointing to task outputs or workflow inputs, then `**` exploding the dict as input to downstream nodes. Then you need to implement a way to drop out unnecessary keys, which can probably be achieved by making a wrapper around the task object which drops extra kwargs prior to attempting to construct the node.
basically, use python code to aggregate the references into something manageable
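A minimal sketch of that kwargs-dropping idea, using plain python functions as stand-ins for task objects (all names here are made up, not flytekit APIs):
```import inspect

# stand-ins for a shared dict of task outputs / workflow inputs
all_params = {
    "fx_rates": "s3://bucket/fx_rates",
    "pricing": "s3://bucket/pricing",
    "headcount": "s3://bucket/headcount",
}

def filter_kwargs(fn, kwargs):
    # drop any keys the callable does not declare, so one big dict can be
    # **-exploded into tasks with different signatures
    accepted = set(inspect.signature(fn).parameters)
    return {k: v for k, v in kwargs.items() if k in accepted}

def revenue_model(fx_rates, pricing):  # only depends on two of the keys
    return f"forecast({fx_rates}, {pricing})"

print(revenue_model(**filter_kwargs(revenue_model, all_params)))```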
hmm i think i follow what you're suggesting, let me see if i can give it a shot
could dynamically generate the outputs at registration time
one scenario to consider is having messages that are delivered at least once trigger workflows...
Hi Jeev B, execution ID is designed for this scenario. We have many cases at lyft where users launch executions and use of the ID is a way to achieve idempotency.
Now, retries on failures: what are the failure scenarios? Workflow is just a meta entity and should never fail, but a task fails, right? And if that is the case one should introduce retries for task nodes.
Please indicate cases in which you see workflows failing that should need an entire workflow retry
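A small sketch of deriving an idempotent execution ID from the triggering event (the workflow name and payload here are hypothetical, and this is not a real Flyte API):
```import hashlib

def idempotent_execution_id(workflow_name: str, trigger_payload: str) -> str:
    # same event -> same ID, so an at-least-once delivery system can re-send
    # the trigger and the duplicate launch is deduped instead of re-running
    digest = hashlib.sha256(f"{workflow_name}:{trigger_payload}".encode()).hexdigest()
    return f"exec-{digest[:16]}"

print(idempotent_execution_id("AdsRevenueForecast", "gs://bucket/input.csv#gen123"))```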
you are right. we want to retry on task failures. but would it be possible to intervene manually to “resume” workflows after tasks have failed? in case of a disastrous infra failure for instance
so it picks up from the last successful tasks and proceeds to completion
Hmm, so today that can be achieved if you use memoization.
If not, we don't have a resume, but let's add that as a feature request.
That can be built, as we can completely recreate the state
right, but memoization works with a new execution ID right? the difference is that we'd like to resume the existing execution ID
that makes sense. we'll add a feature request!
at lyft, when users use idempotent execution IDs, how do they handle failures outside of the workflow? for instance if kiam fails to provide credentials to S3? do they just relaunch and leverage memoization?
does that make sense?
Ya they do
Leverage memoization
sounds good. we'll go down this path for now and create a wrapping service that will handle launches/relaunches for us.
Hmm, that does not sound good - more work?
So, to understand: you need the same execution ID to be repeated?
yea, a bit.
yes, that's one option. what we need is to have a machine idempotently kick off workflows while handling the case of “resuming” workflows with failed tasks.
for context, we have a controller that responds to object storage events and kicks off workflows, and gcp pubsub is an at-least-once delivery system. but in the event of task failures due to infra failures, we want to be able to intervene and “touch” files to “retrigger” and push these workflows through.
does that make sense Ketan Umare?
give me some time, I will comment.
Ketan Umare: i might be able to leverage the `FlyteWorkflow` CRD with a custom label override as the idempotency key without any other additional work.
hmm, i am a little confused and intrigued
especially by the controller that uses events to kick off workflows
I am extremely interested in seeing whether you guys want to open source it at some point as a Flyte module
yea, the idea itself was inspired by this: <https://github.com/argoproj/argo-events>
except we only really care about GCS object-created events or webhooks
our current implementation isn't as extensible... yet, but it has lots of potential!
Can you do a talk at the next oss meeting about how you use it?
Also, why not use Argo events?
But this is one of our plans and we would love to collaborate
<!here> Reminder everybody: community zoom meet tomorrow, Tuesday 9/17, 9am Pacific Time, 5pm UTC.
Katrina Rogan will demo dramatic reductions in workflow registration times, improving interaction speed
Ketan Umare will demo the new Flytekit SDK alpha (built by the inimitable but shy Yee), available now for community feedback. Play with Flyte workflows locally before containerizing.
Thomas Vetterli: fyi
We will also provide a roadmap for the next few months and how we want to improve the lives of all our users. so tune in!
Yee: just noticed that setting `workflow_packages` to point to a single python file fails with: `has no attribute '__path__'`
probably because:
```def iterate_modules(pkgs):
    for package_name in pkgs:
        package = importlib.import_module(package_name)
        yield package
        for _, name, _ in pkgutil.walk_packages(package.__path__, prefix="{}.".format(package_name)):
            yield importlib.import_module(name)```
assumes that all packages are directories with `__init__.py` files.
```>>> importlib.import_module("flytekit.models.admin.common")
<module 'flytekit.models.admin.common' from '/Users/jeev/Workspace/repos/flytekit/flytekit/models/admin/common.py'>
>>> importlib.import_module("flytekit.models.admin")
<module 'flytekit.models.admin' from '/Users/jeev/Workspace/repos/flytekit/flytekit/models/admin/__init__.py'>```
both of these are valid imports
but with the former, we don't need to walk.
we can use `if hasattr(package, "__path__")` as a check to see if a package is walkable.
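Roughly what the guarded walker could look like (a sketch; the PR linked below has the actual change):
```import importlib
import pkgutil

def iterate_modules(pkgs):
    for package_name in pkgs:
        package = importlib.import_module(package_name)
        yield package
        # single-file modules have no __path__, so there is nothing to walk
        if not hasattr(package, "__path__"):
            continue
        for _, name, _ in pkgutil.walk_packages(package.__path__, prefix="{}.".format(package_name)):
            yield importlib.import_module(name)```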
PR here: <https://github.com/lyft/flytekit/pull/259>
thank you! yeah we’ve never set these to single files before. not sure if there was a reason for that.
our use case is mostly for development and not wanting to register all our workflows...
Jeev B / Gleb Kanterov are you guys presenting this Tuesday at the OSS sync? Jeev B we would love to see what you have for the reactive work, and Gleb Kanterov i think your Java SDK is ready :stuck_out_tongue_winking_eye:
we don’t have a lot of substance - some python code that I can demo and perhaps that will inspire a discussion. :)
that is exactly the point
just tell us what the use case is, and how it helps you
we want to actually plan for it in 2021, and whatever you have could be a great starting point
Thank you buddy!
We should use <https://gitbook.com/|https://gitbook.com/> for docs
How did you arrive at this conclusion? I’m curious to know, because just yesterday I told my team “we should use <https://squidfunk.github.io/mkdocs-material/> for docs” :sweat_smile:
Why do you think so, Fred? Have you done some homework? If so, do you want to share?
Did some googling, but I liked how simple it is: open source and a big community, a lot of plugins, and for someone who has zero frontend skills it's easy to make it look nice :smile:
Also, <https://fastapi.tiangolo.com> is my all-time favourite SW doc and it's built with Material for MkDocs, so that might have affected my opinion a little bit.
Ohh, I just saw that gitbook is very well integrated with git, but I love suggestions
I should have said - we should evaluate this :blush:
Is there an open-source/self-hosted version of gitbook? I only looked at it briefly because it looked proprietary.
True
It's free for open source software
Aaha that’s why I see a lot of oss software using it
oh, well that's nice!
Fredrik Sannholm is your use case private code?
Also, I think one of the most important questions is source code documentation, especially for open source projects
yes, internal docs and tools
Currently we have it in Confluence, which sort of works, but I’m not a fan
We use mkdocs as well, but I don’t mind using anything that works
Gleb Kanterov so does mkdocs generate Java and python docs, or are there other libraries that do it and you plug them in?
My understanding is that mkdocs turns markdown into html or whatever. There are plugins that turn python docstrings into markdown. Not sure about java though
It’s mostly for hand-written markdown docs
This is why we started using Sphinx, also because Sphinx builds are checked, so broken links cause errors
Channel and Community, I have some information to share. Lyft has decided to donate Flyte to the Linux Foundation. With this donation we will be creating a standalone neutral entity under the Linux Foundation.
We feel that this would help in fostering a better community and a better open source product.
Thank you for all the support, we will hopefully take the product higher.
And with features like flytekit (native typing) and others coming, we are sure we can help the community move from prototyping to production for their pipelines very quickly and efficiently
Gleb Kanterov Hongxin Liang Nelson Arapé Jeev B Fredrik Sannholm Sören Brunk Yuvraj (union.ai) Ruslan Stanevich Niels Bantilan Tim Chan ^
Yosi Taguri
:tada:
What I would love to know from all of you is whether we might break the import statements for you folks, as the code will be in a new organization
when is this move taking place Ketan Umare?
is it going into incubation first? not sure how the linux foundation works.
We will do it in the next couple weeks
only after all of you confirm
none of the old stuff will break though right? or why do you think it will?
nope
only if you directly import the go-code
ah right ok
we might break it
again, we don't know if there is a redirect created
if so, then that would not break either
i see
so we are figuring out the mechanics
but this is a one-time cost
the home will be `<http://github.com/flyteorg|github.com/flyteorg>`
is the plan to move everything over as is, so that we can just update our docker image paths to "switch" over the deployment?
we as a community will own this space and we can easily add new projects there
ohh yes, we have already started publishing all images to the github docker registry
oh cool
we will be updating the base kustomize soon
and anyone can now build using github workflows (part of the core) too
:thumbsup:
very exciting
thank you again
we hope to begin the new year with a much grander vision
congrats team, looking forward to the future of Flyte!
That’s very exciting news! I’m sure it will help Flyte to grow as an open-source project and also make it visible to a wider audience. Looking forward too!
Sounds cool! New chapter in the book of flyte!
Big congrats! Looking forward to the new era.
Jeev B all images have moved to the github container registry. <https://github.com/lyft/flyte/blob/master/deployment/sandbox/flyte_generated.yaml#L8861|Example>
They should follow exactly the same pattern, except that you use the `<http://ghcr.io/|ghcr.io/>` prefix instead of `<http://docker.pkg.github.com/|docker.pkg.github.com/>`...
Also, we are waiting for all of you guys to ok this, then we can move all code to the flyteorg github organization
Yes. Basically the question is:
*Do you reference <http://github.com/lyft/flyte*|github.com/lyft/flyte*> anywhere in your repos?* particularly `golang` repos... or if you do any scripting around installing flytekit from source... etc.
A quick (:thumbsup: for Yes) and (:thumbsdown: for No) would be great!
As part of the Linux Foundation Donation ^, we will need to transfer ownership of flyte repos to a different org (<http://github.com/flyteorg|github.com/flyteorg>)... there are a couple of options (if you can think of more, plz feel free to add)...
Can I get a couple of eyes on these options? <https://docs.google.com/document/d/1nmS6yyF8uVZ4nlkD9o5JrwYehYSBYRlc4yIeh8GDbvI>
Trying to enumerate the tactical process of moving these repos... and decide by this week on how to move forward...
Haytham Abuelfutuh we build propeller internally with our own plugins, so there is the reference, but it should be very easy to fix.
Github establishes a redirect for the moved repos... `go get` is fine using that...
However, after the move, when we actually do the code change to use vanity domains, yes, you will need to change your dependency... hopefully we can coordinate that... will keep you updated for sure
That is a trivial change on our side as we only depend on a few things. So no worries. I commented just to let you know that we have such a case.
Where in the Admin database is the task type available?
<https://github.com/lyft/flyteidl/blob/master/protos/flyteidl/core/tasks.proto#L103>
<https://github.com/lyft/flyteadmin/blob/master/pkg/repositories/models/task.go#L22>
Yee that's from the user, but is that what's used by the UI? I was hoping it would be somewhere in executions.
My intention is to find what type a task execution is.
Randy Schott would know this, right?
Yeah. Just checked
It's part of the identifier.
No way to do SQL, but it's there
Well, there is a way to do sql, but it’s a join?
Nope. They are all protobuf :disappointed:
In the database?
yeah
huh, I thought that the id fields were all split out and indexed
so the fields from task_id should be something you can read off of a row
yes. but the type of task is not
oh, because it’s in the closure.
yes
got it, I thought you were having trouble locating the associated task. Yes, the task type is not indexed and thus not given its own column.
Though we could change that
*Backlog*
There are so many things that are serialized in the closure that we would like to be able to search on. This is all calling for a non-postgres solution imo
well, using a relational database shines here because we can query using identifier (indexed) fields as a join condition :slightly_smiling_face: but columnar data stores could be interesting too
Katrina Rogan so there's no way to find this, right?
you can join task executions on tasks, no?
no way for dynamic tasks, however, because we no longer necessarily register a task definition for them
yeah, but the resulting closure is encoded :disappointed: