I'm using Pod tasks and am not seeing it! It looks like they inherit from PythonFunctionTask though, so it should be there?
|
it’s not accessible with FlyteRemote for now.
Would you like to help with it?
|
possibly!
|
Thanks!
|
Hi team, I created two issues with PRs inside for review:
1. <https://github.com/flyteorg/flyte/issues/3567|[Core feature] Support horovod in mpi task> cc: <@USU6W5ATA> <@U029U35LRDJ>
2. <https://github.com/flyteorg/flyte/issues/3566|[Core feature] Add user id to security context> cc <@UNR3C6Y4T> needs your pointer about the middleware
Thanks <@USU6W5ATA> for the review.
Left a question <https://github.com/flyteorg/flytekit/pull/1575/files#r1156378225>
<@UNR3C6Y4T> can you review the 2nd one when you get some time? thanks in advance
<@USU6W5ATA> addressed the comment
<https://github.com/flyteorg/flyteidl/pull/388>
<https://github.com/flyteorg/flyteadmin/pull/549/files>
Addressed the comment and added e2e test for the user identifier
<@UNR3C6Y4T>
<@U031V5ZTDE0>
|
thanks Byron for the context here! So if I understand this correctly:
From the UI or flytekit, you'll be able to add `user_identifier` as part of an execution.
Then admin will pass the identifier to the workflow.
Is this all that you need from <http://union.ai|union.ai> to achieve your intended feature for LinkedIn?
(Would there be a change in console to show the user identity as well?)
|
No no
Quick chat?
User_id will be parsed from the OAuth access token
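For illustration only, a minimal sketch of pulling a user id out of an OAuth access token (the `sub` claim name, the PyJWT dependency, and skipping signature verification are all assumptions for the sketch, not the actual Flyte middleware):
```# Hypothetical sketch: read a user id claim from a JWT access token.
import jwt  # PyJWT

def user_id_from_access_token(token: str) -> str:
    # A real middleware would verify the signature against the issuer's JWKS;
    # verification is skipped here only to keep the sketch self-contained.
    claims = jwt.decode(token, options={"verify_signature": False})
    return claims["sub"]  # assumed claim name; the real claim may differ```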
|
Hey y'all. Would love feedback on <https://github.com/flyteorg/flytekit/pull/1565>.
|
Sorry for the delay - cc <@UNR3C6Y4T> / <@USU6W5ATA>
|
thanks, will take a look today
cc <@U04664Z7H37> who was looking into integrating pydantic as well.
|
<@U04664Z7H37> I think that's a great idea. Let me know if you want help with that.
|
will take a look today
|
Thanks everyone, that was a fantastic meeting. This is quickly becoming my favorite meeting of the week!
Recording will be posted in the YT channel.
There were a couple of housekeeping items that we didn't have the chance to discuss there:
1. Branches vs forks. For contributing to Flyte, do we want to require PRs to come from forks exclusively? or keep the current approach of creating branches in the repo. <@U0265RTUJ5B> <@UNZB4NW3S>. The decision could have implications on the privileges and requirements for new Contributors
2. For the CODEOWNERS idea from <@U04664Z7H37>. Is it the goal of it to auto assign TSC members (at least) as reviewers for new RFCs?
|
1. I think right now we allow Flyte maintainers (or contributors?) to open branches in the Flyte repos. The general idea is that users must fork for the first few contributions and then they'll be added to the requisite permissions to allow branching directly. Not very opinionated on this moving forward, but just some context.
|
1. +1 to what Dan said. Maintainers have permissions to open branches in the repo. For all other contributions we follow the usual OSS process of forking the repo.
2. This is reasonable. This will solve the problem of awareness for all new RFCs.
|
1. I’m personally also just fine with having a fork and creating PRs from there, this is what I’ve done so far. However you prefer.
2. Yes, my idea was automatically becoming aware of RFCs to review
|
thanks everyone
so regarding #1, and to comply with the RFC process, I'll submit a short RFC to change the current privileges of the Contributor role (which include branching) to make it forks only. Branching would remain a privilege for maintainers.
There are concerns, though, that this move could raise the barrier for someone to become a contributor. I'll let <@UNR3C6Y4T> elaborate on this better as a comment in the PR to come
|
<@U04H6UUE78B> I would also love it if <@U02B12QHY9J> could join the contributor discussions
and also <@UP4D9EY6T> (and hopefully other folks from Spotify)
|
Sorry, not able to join. It was 3am in my timezone :sweat_smile:
|
<@U019PBV483E> targeted jupyter group please !!!!!!!!!
cc <@USU6W5ATA> and I have been looking into making tasks fully pickleable
|
I'm interested in never having to leave my Jupyter notebook
so like never have to look at console tbh (even though console is very nice)
This is current UX, but I want that syncing execution log to be an ipywidget that shows graph view
(this runs and returns output to user)
But task development inside notebook would also be super dope!
|
add your idea here :slightly_smiling_face: :
<https://github.com/flyteorg/flyte/discussions/3515>
|
This message was deleted.
|
cc <@U0265RTUJ5B>
and <@U04H6UUE78B>
do you think this is something we should discuss in the normal contributor sync (one happening tomorrow I believe)
or should we schedule something separate just for this
|
no, mostly a LinkedIn-specific thing
imo it takes 1 hour
|
will let <@U0265RTUJ5B> coordinate… early next week would be good for me
|
thank you <@UNR3C6Y4T>
<@U042Z2S8268> feel free to bring to the meetup tomorrow (or post here) the initiatives that could be shared in public. I think the project and the community benefits from public discussions
|
oh wait, I just realized I put this in the wrong channel…
|
<@UNZB4NW3S> What's the status of this proposal?
<https://github.com/flyteorg/flyte/pull/3320>
Would you be able to discuss it at tomorrow's Contributors meetup?
We can do async too
|
oops this reminds me about the RFC. I will submit my config override RFC today
|
I can
wait actually tomorrow is hard for me
|
np, do you think it's ready for TSC reviews?
|
I think so
I will join in for the first 15-20 minutes, if that's ok, to share the RFC, or I will find a replacement
|
Great! Thanks
|
<@U04H6UUE78B> Can you explain the RFC review process again?
|
David has a <https://github.com/flyteorg/flyte/pull/3460|PR> for the RFC review. Feel free to leave any comments.
|
hi <@U01DYLVUNJE>, has anyone picked up this task? Or is there any plan regarding this issue: <https://github.com/flyteorg/flyte/issues/3094>
|
It's done. The PR has been merged.
<https://github.com/flyteorg/flytectl/releases/tag/v0.6.31>
|
oh awesome, thanks
|
hey <@U045124RRFX>, thank you for your interest in contributing to Flyte!
Would you mind taking a look at the `help wanted` issues to see if there's something you could help with?
<https://github.com/flyteorg/flyte/labels/help%20wanted>
|
Hi David, I have these two issues that I am working on; I will take a look once I am done with those:
<https://github.com/flyteorg/flyte/issues/3533>
<https://github.com/flyteorg/flyte/issues/3308>
|
<@U045124RRFX> mind if I assign those two issues to you?
|
NP, please go ahead
<@USU6W5ATA> one question: how do we change the logging level for flytectl? Trying to debug an issue with flytectl itself
|
flytectl [command] --logger.level 5
|
thank you
<@USU6W5ATA> I see the gap here. I think only structured dataset is supported in your PR, not the union type. I am hitting two errors because of this:
1. `flytectl get execution` errors out with `{"json":{"src":"main.go:13"},"level":"error","msg":"unsupported literal scalar type *core.Scalar_Union","ts":"2023-03-27T14:20:51-07:00"}`
2. on the UI, a JSON with a union type shows up as `This type is not yet supported`
Do you want me to create a new issue for this? (A minimal repro sketch follows below.)
<https://github.com/flyteorg/flyteidl/blob/master/clients/go/coreutils/extract_literal.go#L63> here is the related code
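For context, a minimal flytekit sketch that should produce a union-typed output literal (assuming `Optional` outputs are serialized as union literals, which is my understanding); fetching such an execution with `flytectl get execution` would then hit the same code path:
```# Hypothetical repro sketch: an Optional output becomes a union literal (int | None).
import typing
from flytekit import task, workflow

@task
def maybe_value(flag: bool) -> typing.Optional[int]:
    # Returning None also exercises the None branch of the union.
    return 42 if flag else None

@workflow
def union_wf(flag: bool = True) -> typing.Optional[int]:
    return maybe_value(flag=flag)```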
|
ohh, I see. We don't have a PR for the union type.
Would you like to work on it?
|
I can work on it, but I might need some mentoring on this if I hit errors
can I ask you for help in that case?
|
of course
Thanks for the help.
|
Hi everyone
<@U04664Z7H37> <@U03C1MJQ892> <@U01DYLVUNJE> can I have your reviews on the updates to the RFC process?
<https://github.com/flyteorg/flyte/pull/3460>
Thanks!
|
I will review tomorrow <@U04H6UUE78B> :slightly_smiling_face:
|
Just reviewed :+1:
|
OK to move the Contributor's meeting 2 hours earlier? (Every other Thursday, 11AM PT)
|
<@U04H6UUE78B> Will tomorrow’s meeting be two hours earlier?
|
Yes <@U03C1MJQ892>! Was about to set up the reminder!
|
Thank you :bow:
|
<@U01DYLVUNJE> I think we can open tickets to improve flytepropeller and flyteadmin's error messages! Sometimes we find them not very verbose
|
please do! I just created a new `improve-error-message` tag, so you can either create issues with this tag directly, or add a <https://github.com/flyteorg/flyte/discussions/3502|comment to the discussion> depending on your preference.
|
cc <@U042TPX3G06>
|
Thanks everyone for joining, that was a fantastic meeting!
*Action items:*
• *<@U042Z2S8268>* to push the <https://docs.google.com/document/d/1NS93TghOzwKamihQMDATd_jdYJtrtWQ1sViSTSDduAA/edit#heading=h.rcvd21a5zyol|config overrides proposal> as PR for comments/discussion
• Start a survey in this channel to move the meeting 2 hours earlier (David)
• Create an issue to discuss potential SIGs (David)
• Create Github teams and CODEOWNERS to reflect the structure of TSC and Maintainer members (I don't seem to have permissions on the repo for this. <@U01DYLVUNJE> / <@U0265RTUJ5B> is this something we could review?)
<https://hackmd.io/@davidmirror/rkqCpbK1n|Notes>
Recording will be posted soon
|
will do!
|
Where is the link for today's meeting?
|
Cc <@U04H6UUE78B>
|
hi <@U042Z2S8268> thanks for your interest!
The first meeting will take place next Thursday, March 16, <https://flyte-org.slack.com/archives/CNMKCU6FR/p1678228825460909?thread_ts=1677596817.155979&cid=CNMKCU6FR|as announced here>
That way, it gives us a bit more time to start inking a renewed RFC process to discuss there
|
oh. my mail said today
|
oh that's a bummer. I'm sorry. The way Doodle works is that it's focused on a one-time meeting so it makes it look that way
in this case the idea is to agree on a day/hour to have a recurring meeting.
Does the current schedule work for you <@U042Z2S8268>?
|
so it will be a weekly meeting?
|
bi-weekly
|
it works for me
|
I have a feeling that we'll need to eventually have them weekly because there are several proposals to discuss, and also to explore an alternating schedule where we could also meet at an EU-friendly time every 2 weeks (also thinking of TSC members like <@U04664Z7H37> and <@U03C1MJQ892>)
We'll see how it progresses :slightly_smiling_face:
|
<@UNZB4NW3S> FYI, I was looking at the device flow. It seems like fosite doesn't support it yet: <https://github.com/ory/fosite/pull/695> (seems that it was closed?)
This is the higher-level ticket for the feature: <https://github.com/ory/hydra/issues/2416>
|
What is fosite
|
used by flyteadmin for auth server: <https://github.com/flyteorg/flyteadmin/blob/master/auth/authzserver/provider.go#L25>
|
Do you use FlyteAdmin's auth server?
Do you not have a full OAuth2 implementation?
|
for the auth server yes, we use FlyteAdmin's auth server
we use Azure SSO for authentication.. which doesn't provide the auth expected by Flyte?
|
Can we talk
|
sure
|
FYI, I contributed two PRs to resolve tensorflow/podtemplate issues we uncovered during internal testing
<https://github.com/flyteorg/flyteplugins/pull/327>
<https://github.com/flyteorg/flyteplugins/pull/326>
|
thanks for these byron! they've both been merged and will be included in the 1.4 release!
|
I had a chat with <@UNR3C6Y4T> last week, and we talked about the sync hour agendas to see if we can talk this week. <@UTU7ZNA56> when will you be available to talk? Let's try to do it at a time that Yee can participate as well
|
Would 8PM UTC+2:00 work for everyone?
|
that’s 11 for me here in seattle. works for me.
|
That's 3 pm for me. Depending on the day, works for me as well
|
could you make an invite for a day that works <@U04HLNJMXJA> ?
|
8.30 would also be fine, if it helps
|
Yes, I can :slightly_smiling_face:
Just gonna look at my schedule, and I'll send an invite.
|
Thanks for finishing the PR and merging :rocket:
|
Flyteplugins too
|
Yes, saw it :slightly_smiling_face:
I will amend the flytesnacks docs PR with the min_replicas change and ping for review there as well.
|
amazing work!
|
<https://github.com/flyteorg/flytesnacks/pull/987>
Can you pls take a look?
|
The changes look good! But there's an unrelated issue with the `sphinxcontrib-yt` package in our docs :disappointed:
Need to take a look
|
Also, <@U04664Z7H37> do you folks use - <https://github.com/libffcv/ffcv>?
created this - <https://github.com/flyteorg/flyte/issues/3615>
|
Mh no, not using it, but it looks interesting. <https://arxiv.org/abs/2209.13705|This> paper compares ffcv to other libraries including squirrel, which my previous company built. The authors didn't use many features of squirrel though, otherwise it would be faster.
When I look at the code snippets on ffcv's website+github, I'd say that this all should live in user code though; it just needs an image with the dependencies installed. What role should Flyte take here, in your opinion?
> For example, we could not run FFCV with a dataset hosted in an S3 bucket to perform our remote experiments.
(From the comparison paper)
This is a downside for a data loading library tbh. Squirrel from my previous company uses fsspec and was designed for remote loading.
|
Interesting that it would only work with local files
|
<@U042Z2S8268> we are enabling torch-elastic in flytekit now
|
thanks this is amazing! will inform my team
|
• <https://github.com/flyteorg/flyte/issues/3614>
• <https://github.com/flyteorg/flytekit/pull/1603>
• <https://github.com/flyteorg/flyteidl/pull/394>
• <https://github.com/flyteorg/flyteplugins/pull/343>
• <https://github.com/flyteorg/flytesnacks/pull/987>
|
we have to merge starting with flyteidl
flytekit will be the last to merge
this allows us to change things if needed
cc <@U017K8AJBAN>
|
One other thing about which I'm interested in your opinion:
`torchrun` allows the user to set `--nnodes`, which could e.g. be `2` but also `"1:2"`, meaning min 1, max 2. Currently this is what our new `task_config=Elastic()` exposes as well.
The kubeflow PytorchJob allows setting `minReplicas`, `maxReplicas` (which by default are both None), and `replicas` (see <https://github.com/kubeflow/training-operator/blob/master/examples/pytorch/elastic/echo/echo.yaml|here>). In theory you could say min 2, max 4, replicas 3 (without going into how much sense this makes).
If a user specifies `2:3`, we currently set min to 2 and max and replicas to 3.
To summarize: Should we expose `nnodes` like torchrun, or `min_replicas`, `max_replicas`, and `replicas` like the pytorchjob to the user?
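To make the trade-off concrete, a minimal sketch of the two task-config shapes being weighed (field names are illustrative, not the final flytekitplugins-kfpytorch API):
```# Illustrative only: two possible user-facing shapes for the elastic task config.
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class ElasticNnodesStyle:
    # torchrun-style: one nnodes field, either a fixed count (2) or a "min:max" range ("1:2").
    nnodes: Union[int, str] = 1
    nproc_per_node: int = 1

@dataclass
class ElasticReplicasStyle:
    # PyTorchJob-style: explicit fields, so min/max/replicas can differ (e.g. 2, 4, 3).
    min_replicas: Optional[int] = None
    max_replicas: Optional[int] = None
    replicas: int = 1
    nproc_per_node: int = 1

# Today, nnodes="2:3" is mapped to min_replicas=2, max_replicas=3, replicas=3.```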
|
ohh is that a question?
I like min and max
isn't it the same, but more explicit?
|
Currently we make the assumption that when user specifies `3:5`, we set `maxReplicas` but also `Replicas` to 5. In theory this doesn’t have to be the case in the pytorchjob manifest.
I’ll change it to the more explicit version :+1:
|
Aah got it
|
Got it, first single node multi GPU :+1: I can implement this on Friday or Saturday and then ping you for review. Or do you need it before that?
|
Hmm that should be ok. What I want to do is train alpaca on Flyte and have that as a demo
Actually if you open a PR directly on flytekit I can hack too
Else I can copy paste hack and open PR
Let me do it for single machine and you can make it work for distributed
|
Can you give me permissions to open a PR in flytekit please?
Then I’ll push there
Or feel free to just copy, whichever is easier for you :slightly_smiling_face:
|
Ohh you don’t have perms
Can I give you
I can send it
Wait you should get it in 2 minutes
|
Ok, branch is ready to push :slightly_smiling_face:
thx
|
Ok you should have it
|
<https://github.com/flyteorg/flytekit/pull/1583>
Cool thanks :slightly_smiling_face:
I closed the other PR from the fork
Feel free to also hack/commit on this branch
|
Perfect
|
This is going to be awesome
We currently beat ignite into launching the local process group instead of torchrun.
Looking very much forward to throwing that logic out
<@UNZB4NW3S> I pushed a few commits to the wip branch. Cleanup + docstrings.
Also working on making it work in a distributed way now.
|
Yup, I pushed some
Commits too
If you had seen
This is looking great
|
Saw them :+1:
|
I will try to get alpaca working on it too
Then we can test
On a side note I also got tasks working from a jupyter notebook -
That way you can train large models directly from an interactive environment
|
You mean write task in notebook and then just run task from there?
|
Yup
No need to have it in a Python script
Eventually you will have to copy it over
|
I’m not much of a notebook user ^^ But I guess for many data scientists this is a killer feature
|
Ya that's my hope
Me too
|
I have a question about how to select the plugin for the task type. I have this:
```class MultiNodePytorchElasticFunctionTask(PythonFunctionTask[Elastic]):
    _ELASTIC_TASK_TYPE = "torch-elastic"

    def __init__(self, task_config: Elastic, task_function: Callable, **kwargs):
        super(MultiNodePytorchElasticFunctionTask, self).__init__(
            task_type=self._ELASTIC_TASK_TYPE,
            **kwargs,
        )

    def get_custom(...): ...```
I also added this to the helm values:
```enabled_plugins:
  tasks:
    task-plugins:
      enabled-plugins:
        - ...
        - pytorch
      default-for-task-types:
        - ...
        pytorch: pytorch
        torch-elastic: pytorch```
Propeller says:
```{"json":{"exec_id":"f1c678faf0fd74fad828","node":"n0","ns":"flytesnacks-development","res_ver":"23058","routine":"worker-2","tasktype":"torch-elastic","wf":"flytesnacks:development:wf.wf"},"level":"warning","msg":"No plugin found for Handler-type [torch-elastic], defaulting to [container]","ts":"2023-04-08T22:26:30Z"}```
Do I need to configure this somewhere else as well?
The existing pytorch plugin in flyteplugins just needs an additional if/else for whether to configure an <https://github.com/kubeflow/training-operator/blob/b2ee1cb380b94004798b44ca32a14de3bddc675f/pkg/apis/kubeflow.org/v1/pytorch_types.go#L90|ElasticPolicy>.
|
AFK
i think your config looks right
<@U04664Z7H37> when you get a chance, check the first few log lines when you start flytepropeller
i think this config looks ok
<@U04664Z7H37> quick question, we should not need a `standalone` / single node pytorch operator, right?
we should automatically adapt?
what if we add a check in the TorchElastic constructor and change the task-type:
`if num replicas is 1` then the plugin type is `torch-elastic-standalone`, else it is `torch-elastic` and the backend config is set for `torch-elastic` to use `pytorch-operator`?
|
I was thinking exactly the same.
I feel like this should go into the existing pytorch plugin, not new ones, since also for the kubeflow training operator, vanilla torch distributed training and torch elastic training only differ by the elastic config in the pytorchjob manifest. Same k8s kind though.
This stays the same for backwards compatibility of course:
`pip install flytekitplugins-kfpytorch`
```from flytekitplugins.kfpytorch import Pytorch

@task(
    task_config=Pytorch(...)
)```
But people could do `pip install flytekitplugins-kfpytorch[elastic]` (for the torch dependency) and then:
```from flytekitplugins.kfpytorch import ElasticPytorch

@task(
    task_config=ElasticPytorch(nnodes=1)  # single pod, no operator
)

@task(
    task_config=ElasticPytorch(nnodes=2)  # pytorch operator
)```
And in flyteplugins all the pytorch code can be reused as well, just an if for whether we need to set the elastic config in the pytorchjob.
Already works:
```class PytorchElasticFunctionTask(PythonFunctionTask[Elastic]):
    _ELASTIC_TASK_TYPE = "pytorch"
    _ELASTIC_TASK_TYPE_STANDALONE = "container"

    def __init__(self, task_config: Elastic, task_function: Callable, **kwargs):
        task_type = self._ELASTIC_TASK_TYPE_STANDALONE if task_config.nnodes == 1 else self._ELASTIC_TASK_TYPE
        super(PytorchElasticFunctionTask, self).__init__(
            task_config=task_config,
            task_type=task_type,
            ...

    def get_custom(self, settings: SerializationSettings) -> Optional[Dict[str, Any]]:
        if self.task_config.nnodes == 1:
            """
            Torch elastic distributed training is executed in a normal k8s pod so that this
            works without the kubeflow train operator.
            """
            return super().get_custom(settings)
        else:
            from flytekitplugins.kfpytorch.models import PyTorchJob
            job = PyTorchJob(```
```Every 2.0s: kubectl get pods -n flytesnacks-development     Fabios-MacBook-Pro.local: Sun Apr 9 23:04:03 2023

NAME                                 READY   STATUS    RESTARTS   AGE
f91014ed8990b4c79b32-n0-0-master-0   1/1     Running   0          23s
f91014ed8990b4c79b32-n0-0-worker-0   1/1     Running   0          22s
f91014ed8990b4c79b32-n0-0-worker-1   1/1     Running   0          16s
f7e922a78842044aba46-n0-0            1/1     Running   0          7s```
Only diff between the two is `nnodes` being 1 or not.
Can you pls give me perms to make a PR in idl and plugins next week? Or shall I do it from a fork there?
I will work on the changes in plugins tomorrow.
Free day in Germany
:slightly_smiling_face:
|
I can give you perms
i like the idea
idl and plugins permissions added
also we can simply add the same plugin for different config types
<@U04664Z7H37> also thought some more about
```pip install flytekitplugins-kfpytorch[elastic]```
Maybe we can simply add an import gate: if the module is not found, raise an error that torch should be installed (see the sketch below)
Also <@U04664Z7H37> I have this repo created: <https://github.com/unionai-oss/stanford_alpaca/pull/1>
check it out
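A minimal sketch of such an import gate, assuming torch is the dependency being guarded and that the exact error message is just illustrative:
```# Hypothetical import gate at the top of the elastic plugin module.
try:
    import torch  # noqa: F401  # required by the Elastic task config
except ModuleNotFoundError as e:
    raise ModuleNotFoundError(
        "torch is required for Elastic task configs; "
        "install it with `pip install flytekitplugins-kfpytorch[elastic]`."
    ) from e```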
|
I saw you simplified the models, didn't know this was possible, nice :+1:
Update from my side:
I opened a draft <https://github.com/flyteorg/flyteidl/pull/394|PR in idl> and in <https://github.com/flyteorg/flyteplugins/pull/343|plugins>. Built a propeller image; creating a distributed pytorchjob with the elastic config works.
What is not working reliably yet is the rendezvous when initiating the process group.
We definitely need something similar to <https://github.com/flyteorg/flytekit/pull/1583/commits/333e6008c2b3bac2fc77a379fd2220288a2a4519|what I added here>:
```rdzv_endpoint=os.environ.get("PET_RDZV_ENDPOINT", f"localhost:0"),```
Here, `localhost:0` means torchrun picks a free port (see <https://github.com/pytorch/pytorch/blob/537c346117967da690c9fe719e27d08ce9d43424/torch/distributed/run.py#L100|docs>).
I'm currently working on making <https://github.com/kubeflow/training-operator/tree/master/examples/pytorch/elastic/echo|this minimal elastic example> from the kubeflow training operator repo work:
```import os
import logging

from flytekit import task, workflow
from flytekitplugins.kfpytorch import PyTorch, Elastic

logging.basicConfig(level=logging.INFO)  # To see torchrun trying to establish the rendezvous

@task(
    task_config=Elastic(
        nnodes=2,
        nproc_per_node=2,
        start_method="fork",
    )
    # task_config=PyTorch(num_workers=2)
)
def train() -> str:
    import io
    import os
    import pprint
    import sys
    import time

    import torch.distributed as dist

    env_dict = {
        k: os.environ[k]
        for k in (
            "LOCAL_RANK",
            "RANK",
            "GROUP_RANK",
            "WORLD_SIZE",
            "MASTER_ADDR",
            "MASTER_PORT",
            "TORCHELASTIC_RESTART_COUNT",
            "TORCHELASTIC_MAX_RESTARTS",
        )
    }

    with io.StringIO() as buff:
        print("======================================================", file=buff)
        print(
            f"Environment variables set by the agent on PID {os.getpid()}:", file=buff
        )
        pprint.pprint(env_dict, stream=buff)
        print("======================================================", file=buff)
        print(buff.getvalue())
    sys.stdout.flush()

    dist.init_process_group(backend="gloo")
    dist.barrier()
    rank = dist.get_rank()
    print(
        (
            f"On PID {os.getpid()}, after init process group, "
            f"rank={dist.get_rank()}, world_size = {dist.get_world_size()}\n"
        )
    )
    dist.destroy_process_group()
    return f"foo-{rank}"

@workflow
def wf():
    train()

if __name__ == "__main__":
    print(f"Parent {os.getpid()}")
    print(wf())```
Rendezvous sometimes fails, sometimes works; currently debugging why. Just as fyi where I'm at…
|