No rush, so long as there is something I can get it later.
| George Snelling did you get the recording? I will also send out the notes with the recording
| I missed the housekeeping but got all of Chang-Hong Hsu’s demo.
| ohhh that's ok, i will put a little more of the comment in the notes
| George Snelling wanted to check in on the video
| Ian here you go:
<https://drive.google.com/file/d/1x8riVGRlVM3R_ShLMOP16BT63OQgGSI3/view?usp=sharing>
Also linked from the <https://docs.google.com/document/d/1Jb6eOPOzvTaHjtPEVy7OR2O5qK1MhEs3vv56DX2dacM/edit#heading=h.c5ha25xc546e|Meeting Notes>. Apologies for the delay.
| Thanks
|
what are the current best practices around propagating secrets via clusterresource_templates?
| hey Jeev B I don't know that we have best practices but there are several options you can use
you can inject secrets as env variables or mount them at a specific file path
then in the config you can specify them like so:
```
cluster_resources:
  templatePath: pkg/clusterresource/sampletemplates
  templateData:
    foo:
      valueFrom:
        env: MY_VARIABLE
    bar:
      valueFrom:
        filePath: "/mnt/wherever"
```
| can we specify configmaps or secrets here?
would be super cool if we can create the objects in the flyte deployment namespace and have them be propagated to the project-domain namespaces!
Katrina Rogan the `filePath` here, is within the flyteadmin pod?
so that way, i could mount secrets/configmaps into flyteadmin, and have them be propagated to the child namespaces?
| Jeev B yep that's correct
| oh sweet
i love this. thanks!
| awesome, glad to hear! i'll file an issue to track your secret suggestion (or if you'd like to, please do!) that's a cool alternative to have too
| Katrina Rogan i think there might be one issue with this: when creating cluster resource templates that are secrets, the values should be base64 encoded.
per here: <https://github.com/lyft/flyteadmin/blob/316ce8cbb7a9df0791c77f4335835b3399ee44b8/pkg/runtime/interfaces/cluster_resource_configuration.go#L25>
it sounds like the best way to go about this is to create config maps as cluster resource templates and pass secrets into the configmaps
| oh what we've done is mount the base 64 encoded value
| right that makes sense. so double base64 encode the secret mounted to flyteadmin!
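A small worked example of the double encoding described above (a Python sketch with a made-up secret value, not from the thread):
```
import base64

raw = b"super-secret-api-key"  # hypothetical secret value

# The child-namespace Secret's `data` field must hold the base64 of the raw
# value, so the file mounted into flyteadmin should already contain this:
mounted_value = base64.b64encode(raw)

# If that file is itself delivered to flyteadmin via a Kubernetes Secret, the
# manifest for *that* Secret needs the value encoded once more:
manifest_value = base64.b64encode(mounted_value)

# Kubernetes decodes once when mounting into flyteadmin, the cluster resource
# template copies the mounted text verbatim into the child Secret's `data`
# field, and Kubernetes decodes once more when the child Secret is consumed.
assert base64.b64decode(base64.b64decode(manifest_value)) == raw
```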
|
Hello Everyone :wave:
Have you ever considered publishing a Helm chart for Flyte?
At first sight, Helm should make it easier to configure different Flyte installations (AWS, GCP, dev, etc). And potentially it’ll reduce the total number of lines of configuration code.
From our experience, we have transferred all K8s system controllers to Helm (we use it only as a templating engine) and this simplified maintenance and rolling out updates on different k8s clusters.
Or do you have some concerns about this?
| Yes we have. We don’t have the expertise. If you can help, I can take the work forward.
Ruslan Stanevich any help? Yuvraj (union.ai) was interested too
| Hi Ketan Umare
I have several ideas on how to do it.
I’ll take a look at this in the following week and will try to share a PoC.
| Ruslan Stanevich ping me if anything is needed, I am available for the PoC.
| thanks Yuvraj (union.ai) !
will do :fist:
So, I see the Flyte helm chart looking something like this.
For now it contains minimal customizations for the chart, but it can be extended.
The Chart contains 2 configurations:
• Sandbox installation (`values-sandbox.yaml`) has been tested on Minikube with `tests/endtoent.yaml`
• EKS installation (`values-eks.yaml`) has been tested on a development EKS cluster. Yes, some data is hidden, like bucket names, account numbers, etc.
On our main Flyte installation:
• we use Istio as the ingress controller instead of Contour.
• we don’t use Kubernetes secrets, but use vault-init-agent.
• and some additional customizations
Actually, it needs additional time for improvement but I’d like to share the first attempt.
| Ruslan Stanevich do you mind making a PR :pray:
:slightly_smiling_face:
Also does this helm chart work with kustomize?
or is this completely separate?
Yuvraj (union.ai) ^
| That's a separate thing
| added PR <https://github.com/lyft/flyte/pull/550/files> with Helm chart
|
Ketan Umare: is `default-affinity` in the k8s plugin spec not meant to work with `sidecar` tasks? `default-tolerations` work, but not `default-node-selector` or `default-affinity`.
| It should. If it’s not working, then it’s a bug; can you please file it? we can tackle it ASAP
| ok good to know. will do
| mind tagging me on the issue? I'll try to take a look soon
| ok i haven't had a chance to reproduce properly and write it up
i'll make time on monday to do that
| no worries, Katrina Rogan we can try it out right?
i can take a look at the code
Just need to ensure this is used - <https://github.com/lyft/flyteplugins/blob/master/go/tasks/pluginmachinery/flytek8s/pod_helper.go#L24>
and we don't use it in sidecar - <https://github.com/lyft/flyteplugins/blob/master/go/tasks/plugins/k8s/sidecar/sidecar.go#L89>
Jeev B you are right
should be a simple fix
| ah ok
Ketan Umare does that explain why tolerations work?
| ya, somehow there is code there where we added tolerations specifically
we should just move this block to “Build” <https://github.com/lyft/flyteplugins/blob/master/go/tasks/pluginmachinery/flytek8s/pod_helper.go#L48-L61>
Katrina Rogan ^ wdyt?
| sweet
| sorry updated the link ^
| sounds good to me
| Katrina Rogan would you get a chance to work on this ^?
| .
I can look today
also Jeev B do you have an issue filed for this? if not i can make one, just didn't want to duplicate :slightly_smiling_face:
| i don't have one, no. sorry!
| no worries at all!
| i was going to make one today, but I figured the problem was already clear...
note to self to make one anyway next time!
| np, the GH issue is more just for tracking/discoverability but this context is already very helpful
|
What would be a good approach to have a node type that can take an arbitrary list of inputs of the same type and create a list (or map) of them? There is a similar thing for array tasks, but it also assumes that execution happens in parallel. What I really want is a universal task that will do:
```
task(a=1, b=2) -> map: {a: 1, b: 2}
task(a=1, b=2, c=4) -> map: {a: 1, b: 2, c: 4}
```
I achieve this by writing some boilerplate today :slightly_smiling_face:
Sometimes it’s called variadic functions, varargs, n-ary functions, kwargs
As I understand it, such a thing requires an extension of the IDL and compiler unless there is already something
| Ya, no varargs support, but you can accept a list. You may not want it as you will have to create a list
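A minimal sketch of the "accept a list" approach, using the same style of flytekit API as the snippet later in this log (the task and input names are made up): the task takes keys and values as parallel lists and assembles the map itself, so callers can pass as many entries as they like without varargs support in the IDL.
```
from flytekit.sdk.tasks import inputs, outputs, python_task
from flytekit.sdk.types import Types


@inputs(keys=[Types.String], values=[Types.Integer])  # parallel lists
@outputs(result=Types.Generic)
@python_task
def collect_to_map(wf_params, keys, values, result):
    # e.g. keys=["a", "b"], values=[1, 2] produces {"a": 1, "b": 2};
    # the caller decides how many entries to pass.
    result.set(dict(zip(keys, values)))
```
The caveat in the reply above still applies: something upstream has to assemble the lists, so this moves the boilerplate rather than removing it.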
|
has anyone seen an error like this:
```[system] unable to retrieve launchplan information ... caused by: [SystemError] Could not fetch launch plan definition from Admin, caused by: rpc error: code = Unknown desc = failed database operation with bind message supplies 4 parameters, but prepared statement "" requires 9```
| hey Jeev B have you run all the latest migrations?
| Weirdly implies a malformed query, and this is one of the most used APIs
| just crashed again with a similar issue:
```[system] unable to retrieve launchplan information ..., caused by: [SystemError] Could not fetch launch plan definition from Admin, caused by: rpc error: code = Unknown desc = failed database operation with bind message supplies 4 parameters, but prepared statement "" requires 1```
Katrina Rogan:
```Init Containers:
run-migrations:
Image: <http://docker.io/lyft/flyteadmin:v0.3.5@sha256:234bbb911f960e47445afb1577978f23495f65428682f27754e8f8192b83f10e|docker.io/lyft/flyteadmin:v0.3.5@sha256:234bbb911f960e47445afb1577978f23495f65428682f27754e8f8192b83f10e>```
that's all I'm doing. should i be doing something else as well?
ah...it's possible that I tried running with a more recent version of flyteadmin, and then downgraded. that might've revved my db forward. could that potentially be a problem?
| run migrations should be all you need, but if you reverted admin you might need to rollback the migrations that are newer than your current version of admin
| ah i see
this is in a dev env. ill just wipe the DBs and try again to be sure.
| Ok just saw this thread
| this is still happening:
```[system] unable to retrieve launchplan information ..., caused by: [SystemError] Could not fetch launch plan definition from Admin, caused by: rpc error: code = Unknown desc = failed database operation with bind message supplies 4 parameters, but prepared statement "" requires 1```
this is with a fresh DB. interestingly enough this is failing on a run that had previously succeeded and I had relaunched.
dump of some of the logs. looks like this is happening quite a bit.
| woah weird, is this only happening for executions queries now or launch plan ones too? also which version of admin are you on now?
also are you seeing these errors during an execution or are these from hitting a specific admin endpoint/console page?
| The execution queries aren’t causing the workflow to crash, but the launch plan one is. i imagine this would happen even if i was running headless. i’m using flyteadmin 0.3.5. interestingly, the issue isn’t always reproducible. so far the same workflow with the same inputs has succeeded twice and failed twice.
| That just sounds odd
Let’s look at the schema of the table
| interestingly this has been running fine in our sandbox running postgres:10.1. the prod sql instance is running postgres 11. i've downgraded that now, and will try to reproduce again. will post the schema here in a bit
```flyte=> \d+ executions
Table "public.executions"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
--------------------------+--------------------------+-----------+----------+----------------------------------------+----------+--------------+-------------
id | integer | | not null | nextval('executions_id_seq'::regclass) | plain | |
created_at | timestamp with time zone | | | | plain | |
updated_at | timestamp with time zone | | | | plain | |
deleted_at | timestamp with time zone | | | | plain | |
execution_project | text | | not null | | extended | |
execution_domain | text | | not null | | extended | |
execution_name | text | | not null | | extended | |
launch_plan_id | integer | | | | plain | |
workflow_id | integer | | | | plain | |
task_id | integer | | | | plain | |
phase | text | | | | extended | |
closure | bytea | | | | extended | |
spec | bytea | | not null | | extended | |
started_at | timestamp with time zone | | | | plain | |
execution_created_at | timestamp with time zone | | | | plain | |
execution_updated_at | timestamp with time zone | | | | plain | |
duration | bigint | | | | plain | |
abort_cause | text | | | | extended | |
mode | integer | | | | plain | |
source_execution_id | integer | | | | plain | |
parent_node_execution_id | integer | | | | plain | |
cluster | text | | | | extended | |
inputs_uri | text | | | | extended | |
user_inputs_uri | text | | | | extended | |
error_kind | text | | | | extended | |
error_code | text | | | | extended | |
Indexes:
"executions_pkey" PRIMARY KEY, btree (execution_project, execution_domain, execution_name)
"idx_executions_deleted_at" btree (deleted_at)
"idx_executions_error_kind" btree (error_kind)
"idx_executions_id" btree (id)
"idx_executions_launch_plan_id" btree (launch_plan_id)
"idx_executions_task_id" btree (task_id)
"idx_executions_workflow_id" btree (workflow_id)```
```flyte=> \d+ launch_plans
Table "public.launch_plans"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
---------------+--------------------------+-----------+----------+------------------------------------------+----------+--------------+-------------
id | integer | | not null | nextval('launch_plans_id_seq'::regclass) | plain | |
created_at | timestamp with time zone | | | | plain | |
updated_at | timestamp with time zone | | | | plain | |
deleted_at | timestamp with time zone | | | | plain | |
project | text | | not null | | extended | |
domain | text | | not null | | extended | |
name | text | | not null | | extended | |
version | text | | not null | | extended | |
spec | bytea | | not null | | extended | |
workflow_id | integer | | | | plain | |
closure | bytea | | not null | | extended | |
state | integer | | | 0 | plain | |
digest | bytea | | | | extended | |
schedule_type | text | | | | extended | |
Indexes:
"launch_plans_pkey" PRIMARY KEY, btree (project, domain, name, version)
"idx_launch_plans_deleted_at" btree (deleted_at)
"idx_launch_plans_id" btree (id)
"idx_launch_plans_workflow_id" btree (workflow_id)
"lp_project_domain_idx" btree (project, domain)
"lp_project_domain_name_idx" btree (project, domain, name)```
| Any luck with the postgres version downgrade?
| first run is going with postgres 10
| erm what do you mean?
| so i downgraded to postgres 10 and kicked off the same workflow/inputs. did you mean something else?
| Nope, exactly that. I was just wondering if you see issues with that workflow?
| not yet. i will know in about 30 mins or so. will keep you posted! :slightly_smiling_face:
Katrina Rogan: it already crashed:
```[system] unable to retrieve launchplan information orchid:main:sunflower.workflows.flyte_workflows.fastq_to_bam_workflow.TRIM_FASTQS_AND_ALIGN_LAUNCH_PLAN:67ea184a8384cea6207037d9da31ff1a28c54b011354e5df261c9bf6be62787d, caused by: [SystemError] Could not fetch launch plan definition from Admin, caused by: rpc error: code = Unknown desc = failed database operation with bind message supplies 4 parameters, but prepared statement "" requires 3```
2 of 4 executions have failed so far.
| Jeev B are you sure you are only running one instance of flyteadmin?
| yes.
| if you are running more replicas, are they all running the latest version?
| ```> kubectl get po
NAME READY STATUS RESTARTS AGE
cloud-sql-proxy-6dd6c8f4dc-29j6x 2/2 Running 0 22m
cloud-sql-proxy-6dd6c8f4dc-7gqxp 2/2 Running 0 22m
cloud-sql-proxy-6dd6c8f4dc-mgfjc 2/2 Running 0 22m
datacatalog-5669546cdd-mrn4f 1/1 Running 0 22m
flyteadmin-7cbd6b6f8f-xk4vc 3/3 Running 0 22m
flyteconsole-c8bcc85d9-gt9vc 2/2 Running 0 9d
flytepropeller-cf596c85b-z2db9 1/1 Running 0 22m
syncresources-1603645860-rfzxf 0/1 Completed 0 18s```
| so this non-determinism may be because you have 2 versions of flyteadmin
I hope it's not cloudsqlproxy
| hmm... interesting point. i might be able to test that.
| also talking to Katrina
| to wrap this up: this was an issue with our pgbouncer pooling mode. we were initially running transaction pooling mode, but switching to session pooling mode has resolved these issues. see: <https://www.pgbouncer.org/features.html|https://www.pgbouncer.org/features.html>
| Hi Jeev B can you describe why you need to use pgbouncer? We just connect to Cloud Sql Proxy and it is working just fine for us
| yup! about a year ago, cloud-sql instances had a limit of 100 concurrent connections. for some other application we needed more than that, and we just made 1 reusable kustomize template for cloud-sql-proxy with pgbouncer as a sidecar. pgbouncer basically allows us to circumvent cloud-sql's limit by pooling connections.
this may not be necessary anymore. i just haven't had the chance to revisit.
Nelson Arapé more info here: <https://cloud.google.com/sql/docs/quotas#cloud-sql-for-postgresql-connection-limits>
looks like this isn't necessarily a problem anymore
|
Jeev B is this the latest version?
| we’re running v0.3.5. haven’t upgraded to 0.3.6 or 0.3.7. last i tried we were getting 400/500s with gcp in flyteconsole while trying to view inputs/outputs.
| Ya we need to add a version api
Someone is working on that
But in your overlay did you remove the run migrations
| no. should i?
i imagine it should just complete if it doesnt have any new migrations to run anyway right?
| No, you shouldn't
Ya
Is this breaking production?
| no we don’t have this in prod yet. demoing to core team on thursday though
we have been working off of ephemeral sandboxes so far, and this is working in a full prod-like env.
so we probably have some tuning to do as well
| Ohh demo on Thursday that sounds critical
How can I help debug
| oh wow! thanks. we’re going to be running stress tests early next week. will likely need help if we’re running into issues.
| Yes please do
I can just paste over all our prod config to you guys
Most important settings are going to be workflowrevisioncache and kube client settings
So before you run the stress test let’s chat Monday and I can relay all these settings
Actually I know Spotify is also scaling up things so if you want we can basically do a channel and do a VC call to explain the settings
Docs should follow one day
| that would be awesome! I’ll ping you on Monday!
| Sure
|
Jeev B let us know if we are still on for 10:00 am
| the problem has been completely resolved. would still love your docs/tips on scaling/stress-testing though.
| ya so do we want to catch up at 10
i can
just let me know
| no rush for the docs. we don't have to meet today!
| ok
i thought you wanted to do stress testing
i just wanted to give you the perf things before you start stress testing
so sure let's catch up later
| we can chat briefly at 10am Ketan Umare!
|
Hi everyone, we are looking at updating the SparkOperator/CRD version used in Flyte.
Currently, we use the `v1beta1` version of the SparkOperator, which is deployed as part of the Flyte deploy. Similarly, the Spark Plugin currently creates v1beta1 spark resources.
We are looking to move to `v1beta2`, which has been the stable version supported by the GCP SparkOperator and has multiple new features. This might need some manual cluster clean-up and can have a potential impact on existing running jobs. If you are using Spark in production, please review the details in <https://github.com/lyft/flyte/issues/573> on how this can potentially impact you, and let us know if there are any concerns.
Ruslan Stanevich I believe you use Spark in production. Also Ketan Umare please add anyone else who might be using Spark
| Thank you Anmol Khurana
Yes, we use Spark for production workflows.
| Jeev B Nelson Arapé I don’t think you guys use spark right?
Deepen Mehta I know you use your own spark right?
| No we don't Ketan Umare
|
George Snelling Ketan Umare are we having the usual open-source meeting tuesday? Or are we skipping this one on account of the election?
| It's a holiday/day off at Lyft and a few other companies too
| What do you guys think
If so we should cancel now
| I'm fine to cancel.
Want me to do it?
| my vote is to cancel now… and we can do the fast register demo and also present the alpha release together (though I’m not sure we’ll get fast-register working with the new code)
| Ketan Umare?
| Sure I am good
Let’s cancel
|
hey not sure if this is the right channel, lmk and i can move it. I seem to be unable to fetch and run remote workflows/tasks from the python API
1. If i try to register a workflow that fetches a remote workflow i get `An entity was not found in modules accessible from the workflow packages configuration`. I traced it to an arg `detect_unreferenced_entities` which is hardcoded to `True`. I can expose this as a cmdline arg in a PR but i'm a little confused about the intended functionality here
2. If i fork this code to get around that and register, the code fails at runtime to fetch the task because `FLYTE_PLATFORM_URL` is not set or is wrong. My understanding is that this code shouldn't be running at all and it's a fluke of python class loading. What's the correct workaround?
Also I understand that this is a product of the old flytekit and point (2) in particular should be fixed in the new version
example code that fails
```
@workflow_class
class RoyaltiesForecast:
    # System workflow parameters
    parameters = Input(Types.Generic, default={})
    consumption = Input(BQDataset)
    ads_revenue = SdkWorkflow.fetch(
        "flytesnacks",
        "production",
        "workflows.ads_revenue.workflow.AdsRevenueForecast",
        "v4"
    )
    ads_revenue = ads_revenue(
        parameters=parameters,
        consumption=consumption,
    )
```
cc Gleb Kanterov Ketan Umare Yee as mentioned, this is our main blocker
| Are the project and domain attributes correct?
| hmm yes, but would that be related?
the code seems to be explicitly preventing this behavior
<https://github.com/lyft/flytekit/blob/master/flytekit/tools/module_loader.py#L72>
| i just looked at the sample and tried to guess
let me check
| code is a little opaque, still trying to parse but i think that's what it's doing. setting that value to false fixes this issue for sure
| Dylan Wilder can you try it this way - <https://github.com/lyft/flytesnacks/blob/master/cookbook/recipes/shared/sharing.ipynb>
basically factoring out the task outside the class?
I think it's assuming that the task exists
i know this example works, we can debug once you have the code working (so that we can unblock you)
Yee what do you think ^
Dylan Wilder any luck
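For reference, a rough sketch of the "factor the fetch out of the class" pattern being suggested here, based on the snippet at the top of this thread (import paths and the `Types.Generic` stand-in for `BQDataset` are assumptions, not verified against this flytekit version):
```
from flytekit.common.workflow import SdkWorkflow
from flytekit.sdk.types import Types
from flytekit.sdk.workflow import Input, workflow_class

# Fetch the remote workflow once at module scope instead of inside the class body.
ads_revenue_wf = SdkWorkflow.fetch(
    "flytesnacks",
    "production",
    "workflows.ads_revenue.workflow.AdsRevenueForecast",
    "v4",
)


@workflow_class
class RoyaltiesForecast:
    parameters = Input(Types.Generic, default={})
    consumption = Input(Types.Generic)  # stand-in for BQDataset in the original snippet
    # Only node construction happens inside the class; the fetch already ran above.
    ads_revenue = ads_revenue_wf(parameters=parameters, consumption=consumption)
```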
| Yee is helping me :slightly_smiling_face:
on meets
| I thought that the problem was with how Python loads the code
One solution can be separating workflows and tasks into different modules
| problem (2) is that
problem (1) is different
| (1) don’t know :slightly_smiling_face:
| yea yee and i looked into it, so we're working on it. will let you guys know!
| if you wouldn’t mind trying, i’d be curious to see what this value is at the point of failure
```from flytekit.configuration.sdk import WORKFLOW_PACKAGES
WORKFLOW_PACKAGES.get()```
|