sub | title | selftext | upvote_ratio | id | created_utc |
---|---|---|---|---|---|
investing | What do you personally feel is a fair price for SPY and QQQ? | I use the [Buffett indicator](https://www.currentmarketvaluation.com/models/buffett-indicator.php) and the [Shiller PE ratio](https://www.multpl.com/shiller-pe) to judge fair market prices. I personally feel that SPY is fairly valued around $350-$375, although I don't feel it will reach those values in the near term. I don't know much about tech stocks and I don't follow them, so I have no idea about them. What values for SPY and QQQ do you consider fair? At what price would you buy as much as possible? | 0.65 | t3_t67a47 | 1,646,357,651 |
investing | Looking for an account that employer can direct deposit paycheck to and the funds would automatically be converted into financial assets. | Stocks and crypto come to mind. But it could be any other type of asset (even illiquid or intangible).
The main goal is just to convert the disposable cash into something other than fiat as soon as it is deposited. I don't want to be the one to do it, such as via automatic transfers to other accounts. I'd like to cut that step out so that the paycheck would essentially become a non-liquid (non-cash) asset as soon as it hits the account.
I thought about Coinbase, where the paycheck would be converted into a chosen cryptocurrency, but the problem with that is that the cryptocurrency could then be exchanged for cash on their platform/app with a single tap and the cash sent to your debit card. I want it to be difficult to convert back into cash, or to lock the asset for a predetermined period of time, the point being that I can't spend it on a whim.
Any ideas? | 0.27 | t3_t65h43 | 1,646,352,152 |
investing | Why do actively managed funds exist if they won't beat the market? | I am mainly talking about funds that invest in individual stocks. I can understand how some may want to invest more in bonds than stocks due to lower risk, especially nearing retirement. But in general, I would imagine that the goal is just to make money, and as it currently stands it seems like nobody can beat the market. So why invest in an actively managed fund that is going to have a lot more fees if it is just going to have a worse result compared to putting the money in something like the S&P?
I also see a lot of investors claim that you can't beat the market but then right after saying that try to predict the outcome of individual stocks. Is this not contradictory?
I refuse to believe people are this stupid. There must be some reasonable explanation right? | 0.86 | t3_t62a23 | 1,646,343,186 |
investing | What happens if no one takes the other side of an option? | Let's say I buy an AMZN call with a $20k strike expiring at the end of this year, which seems unlikely, so people are happy to sell it to me.
Then Amazon invents teleporting packages into homes, and their stock rallies to $50k over the next month. It is likely whoever sold that call would get margin called.
What happens next? Who would want to buy this option knowing it's likely to cost them a ton of money? Is it possible I could not get paid if I make a huge bet and win? Or will someone always step in to take the other side of the call? | 0.74 | t3_t61gfo | 1,646,340,864 |
investing | Panasonic PCRFY, any reason why it is at a 52-week low? | Is there a reason why Panasonic is at a 52-week low at the moment, or is it just macro?
EVs are growing exponentially and there's the partnership with Tesla on new batteries, so why these low numbers?
Is this a buy, or am I missing some news that I should really know about?
I wanted to put some money in it in the past but never got around to it; now I have a few bucks to invest and it looks like a good moment to buy, but maybe I missed something.
Thanks! | 0.72 | t3_t60mtb | 1,646,338,631 |
investing | How the hell does Uber lose so much money? | I’m writing this post as I am walking somewhere and I have not had time to look through their statements, but how the hell do they lose so much money?
They don't have employee drivers or vehicle fleets; drivers are paid when they complete a fare.
Are there just too many extraordinary costs? Incidents? Legal liabilities? This is a fascinating case because I remember, years back, people begging for an IPO, and they've proven quarter after quarter that their business just absolutely sucks. I don't even know how they stay afloat with negative cash from operations. Issuing stock, I'm going to hazard a guess lol. | 0.96 | t3_t5ypwj | 1,646,333,531 |
investing | Weighting INTL index funds in index fund investing? | I've been a follower of r/Bogleheads for a while now. They strongly recommend a mixture of US and INTL at something like a 60/40 ratio.
I've looked at historical returns on INTL markets all the way back to the '50s; for a long-term investor (say 20 years), it seems the S&P 500 still comes out ahead.
**Is it really an advantage to invest almost 40% into INTL markets?** INTL markets have performed really poorly in the past decade (yes, I know that is a short time), and with the Russia-Ukraine conflict it's hard to imagine the EU doing amazingly in the near future. | 0.79 | t3_t5wvxl | 1,646,328,715 |
investing | Can we buy Ukrainian war bonds? | I read that they released some bonds, but can we, retail investors, bid for those or get them on the secondary market?
Here's an article: https://www.cnbc.com/2022/03/02/ukraine-raises-270-million-from-sale-of-war-bonds-to-fund-army.html
It lists some banks, but it's unclear if I could also participate through some platforms (I'm only on T212 at the moment).
Besides, the listed banks are in an article that came out after the sale. Beforehand I couldn't find the institutions where the bonds would be available. Is there some go-to resource where one could find that information beforehand? | 0.63 | t3_t5wlyc | 1,646,328,009 |
investing | Question about what is considered a freeride violation | I just sold a stock and the money is still pending from the sale. I see I have money available to trade that makes up the amount that I sold the previous stock for. If I bought a new stock with that money before it settled, would that be considered a freeride violation? | 0.67 | t3_t5wi9p | 1,646,327,749 |
investing | MSCI and FTSE Russell will be removing Russia from their indices. | Haven't seen this noted anywhere yet. Both have announced Russia as 'uninvestable'. These are the two big boys as far as international indices are concerned, so expect the rest to follow suit. We should expect further dips as passive index funds liquidate Russian exposure.
Russia isn't a huge piece of the global universe; before all this it made up 3% of the MSCI EM, but most expected this to take more time to be implemented. | 0.96 | t3_t5upmv | 1,646,323,020 |
investing | Investing in ETF USD with EUR | Hi beautiful community
I am based in Germany and am regularly buying ETF Vanguard FTSE All World UCITS (USD) Accumulating.
However, it's in USD, and I invest EUR.
I have seen in other threads that it's better to invest in the same currency (EUR) due to exchange-rate fluctuations. Could you share more insights on this topic?
What are the risks if I keep investing in a USD ETF?
The broker I use is Scalable Capital.
What would be an equal ETF in EUR?
Appreciate any insights | 0.72 | t3_t5r4yd | 1,646,312,637 |
investing | Help needed: Looking for historical short interest data of NYSE companies. | Dear reader,
Like the title implies, I am looking for at least two years of (bi-monthly) historical short interest data for NYSE companies. This is non-funded research, so I am trying to keep things cheap. There are various subscription services out there that do seem to offer this kind of data, which I would be fine with if I were 100% certain that they include what I am looking for.
My ideal dataset would be a panel dataset with all NYSE-listed companies (or at least a large and unbiased subsample of them), according to the above-mentioned criteria. All the resources I can find online about acquiring such data seem to be either outdated, locked behind a subscription, or in violation of at least one of my criteria. If anyone could help me out on this one, or at least point me in the right direction, that would be VERY much appreciated.
Kind Regards,
BHTA! | 0.44 | t3_t5pamk | 1,646,306,086 |
investing | Daily General Discussion and Advice Thread - March 03, 2022 | Have a general question? Want to offer some commentary on markets? Maybe you would just like to throw out a neat fact that doesn't warrant a self post? Feel free to post here!
If your question is "I have $10,000, what do I do?" or other "advice for my personal situation" questions, you should include relevant information, such as the following:
* How old are you? What country do you live in?
* Are you employed/making income? How much?
* What are your objectives with this money? (Buy a house? Retirement savings?)
* What is your time horizon? Do you need this money next month? Next 20yrs?
* What is your risk tolerance? (Do you mind risking it at blackjack or do you need to know it's 100% safe?)
* What are your current holdings? (Do you already have exposure to specific funds and sectors? Any other assets?)
* Any big debts (include interest rate) or expenses?
* And any other relevant financial information will be useful to give you a proper answer.
Please consider consulting our FAQ first - https://www.reddit.com/r/investing/wiki/faq
And our [side bar](https://www.reddit.com/r/investing/about/sidebar) also has useful resources.
Be aware that these answers are just opinions of Redditors and should be used as a starting point for your research. You should strongly consider seeing a registered financial rep before making any financial decisions! | 0.84 | t3_t5o82m | 1,646,301,670 |
investing | Any ideas for a low interest rate, high inflation environment? | So I'm quite disappointed by the Fed's suggestion of just a 0.25% rate hike in March when housing prices have gone up nearly 50-100% within the past few years in most metro cities, bubbles are occurring in meme stocks, and prices in general have been going up. Something simple like the dollar-menu McDonald's spicy chicken burger is now $1.47 in my area, and the portion size is about half of what it used to be.
I'm kind of at a loss here. While the supply chain disruption is there, it is also in part caused by the large demand from consumers. At this point, bonds and fixed-income investments (i.e., Treasuries, CDs, savings) are all shit investments because of the low interest rate environment while inflation is at 7%.
It feels like the Fed is continuing to feed the asset bubble with cheap money. I just don't understand. This is the time to be more aggressive with hiking rates, because the economy is actually doing quite well and can withstand some shocks.
Should I participate in the musical chair game by putting my money back in the markets and trying to see when the music stops, or is there something that actually works well in a low interest, high inflation environment?
edit: thanks for all your input. I will have to look into all your recommendations to figure out which approach I would be most comfortable doing. | 0.81 | t3_t5jd5t | 1,646,282,500 |
investing | 🎯Steps of fundamental analysis | 1. **Macroeconomic analysis.** The macroeconomic analysis involves analyzing capital flows, interest rate cycles, different metrics, world news.
2. **Industry analysis.** The industry analysis consists of an analysis of the industry and the players that form the sector.
3. **Situational analysis.** The situational analysis identifies strengths and weaknesses while finding product features that affect business development. | 0.24 | t3_t5j9o2 | 1,646,282,177 |
investing | What metric cutoffs (eg, EPS > 30) do you use to automatically rule out certain stocks? | Just curious what sort of cutoffs you use to easily rule out certain stocks. For example, I rarely consider any stock that has a PE ratio above 30. Also, I generally stay away from stocks with negative betas. I'm not saying these cutoffs should be used by everyone, but it stops me from doing further DD into companies that probably won't fit into my investing strategy.
Edit: meant PE ratio, not EPS. | 0.76 | t3_t5bywe | 1,646,260,089 |
investing | SPY - Why are we rising today? 🙈 | This was one hell of a month! Great fear + uncertainty + rising oil and gold + inflation + Fed uncertainty + war + the OPEC+ agreement, etc.
Thankfully I managed to make a couple of profitable trades, especially on SPY! Now let's see what's waiting for us in the near future and the reason behind today's rise.
1) We rose today due to yesterday's SOTU, where Biden emphasized the importance of dropping costs for the middle class, which made people happy. With inflation high, everyone was triggered by high prices, so he managed to hit a sweet spot!
2) The Fed's Powell talked about how the U.S. is not going to be directly affected by the Russian sanctions and that oil prices are not a threat. He also mentioned that inflation is somewhat under control.
3) The unemployment data came out pretty positive! Employment is up, driven by strong manufacturing + retail sales.
Outcome: Let's sell some short-term put spreads, lads and gents!
Alright Caldoon, but what the hell should we expect next week or next month?
1) Don't expect the macro economy to just forget what has happened these past few days. The long-term effect of sanctions will definitely be negative, and USO is still rising although it dropped a bit. The U.S. and Europe are banning Russian gas and oil everywhere, and that will definitely drag Europe down soon. The U.S. might hold out for a while due to oil reserves, but even then, like we mentioned, USO is rising!
2) The inter-bank loan system, the swap lines banks use to lend each other money, saw a huge rate increase. That was due to uncertainty and fear in the market: no bank felt confident pulling out cash and lending it to anyone. It was feared that this might bring the financial sector down, which would drag the whole S&P 500 with it, but thankfully for now it has calmed down. However, the sanctions and their negative effects might drive rates up again.
3) No matter what Powell says, inflation is still high and the consumer savings rate is very low, which means people are not left with much money in their pockets. That will definitely affect Q1 2022 retail sales.
4) Not just retail sales: whole-economy GDP is forecast to show 0% growth. Repeating myself: it is expected that we will have zero growth. That is scary, isn't it? High inflation, low growth, and volatile unemployment might bring the stagflation (recession) we mentioned in the previous post into the game.
So in total, the situation is pretty bearish. We are bearish on a mid-term basis on the S&P 500 and Nasdaq! | 0.39 | t3_t57hkk | 1,646,247,842 |
investing | Are Interest Rate Increases Priced in? | I know there are increases planned for this year. This month will see some, and likely more throughout ‘22 to combat inflation.
Because it is so clear that this is necessary, would it also not stand to reason that big money/smart money has already priced this into the markets?
Obviously, I’m not saying we hit any bottoms in the SP or NQ. I’m sure there’s a lot of other things that could bring about new lows.
However, I would love to hear thoughts on how much you think rates will bring the market down, or if you think the future hikes are partially priced in. | 0.83 | t3_t55y7q | 1,646,243,822 |
investing | Looking for a shared/public marked-up index chart (preferably S&P, Nasdaq, Dow) | Does anyone know of a shared/public market index chart marked up with significant market and economic events like FOMC meetings, Powell's press conferences, world events, and other significant events? I'm betting there's one on TradingView and I have searched, but I'm having trouble finding anything. Maybe ThinkOrSwim or a blog post? | 0.72 | t3_t54dn3 | 1,646,239,754 |
investing | MRAM Everspin Technologies | Everspin Technologies (MRAM) came on my radar after the last earnings beat, which resulted in the share price nearly doubling. The more research I did, the more interesting it looked.
The company makes a currently useful and potentially disruptive product - magnetoresistive RAM, which stores bits magnetically rather than electrically. It finds use in niche applications at the moment, but has the potential to eventually become a single-chip solution for both RAM and ROM in computer systems. That is to say, it is both quick to access and non-volatile. More info on this tech [here (wiki)](https://en.wikipedia.org/wiki/Magnetoresistive_RAM) and [here](https://www.science.org/doi/abs/10.1126/science.1110549).
The company became profitable in Q2 2021 and has consistently been beating earnings estimates.
Details:
- Market cap ~$200M
- Small float of about 16M shares (20M shares outstanding)
- 67% of float held by institutions
- No net debt
- $14.6M cash on balance sheet
Do your own DD but this company looks very interesting to me and reports earnings today after the closing bell. | 0.73 | t3_t541x2 | 1,646,238,900 |
investing | US Interest rates Are Headed Higher very soon to combat inflation | [US Central Bank is going to raise interest rates later March.](https://www.msn.com/en-us/money/markets/powell-says-rates-are-headed-higher-even-as-ukraine-poses-uncertainty/ar-AAUvThI?ocid=msedgdhp&pc=U531)
The Fed will move toward a “predictable” shrinking of its big bond holdings after raising rates, a move that will take additional steam out of the economy, and it will discuss those plans at its meeting ending March 16 without finalizing them, according to Jerome Powell. | 0.95 | t3_t53ubi | 1,646,238,337 |
investing | Why is the market assuming Fed action will be less aggressive given the Ukraine situation will almost certainly lead to increased stagflation? | So the market priced in a possible 50-basis-point rate increase in March (or 7 hikes over the year); then Ukraine was invaded and it had the biggest rally in years on the assumption it would be a less aggressive 0.25% in March and 5 increases over the year.
Now the market is trying to pump like crazy as if all uncertainty is off the table, inflation disappeared and the Fed will now do nothing.
So why isn’t the market pricing in the almost guaranteed stagflation from soaring oil prices and the economic impact of Russian markets vanishing overnight?
Wouldn’t the Fed have to take even more aggressive action to curb the home-grown inflation getting amplified by the geopolitical inflation? | 0.55 | t3_t527of | 1,646,233,892 |
investing | Sberbank shares - how screwed am I? | Question - Sberbank ADR (the largest Russian bank) is trading at $0.01 on the London Stock Exchange (symbol SBER) but listed at $1.00 on the OTC markets (SBRCY). Do the shares represent shares in the European subsidiary or something that just went bankrupt? Is either price accurate? Just wondering, because I wouldn't expect the main Russian bank to go under even if the European subsidiary does.
London: [https://www.londonstockexchange.com/stock/SBER/sberbank-of-russia/company-page](https://www.londonstockexchange.com/stock/SBER/sberbank-of-russia/company-page)
OTC: [https://stocktwits.com/symbol/SBRCY](https://stocktwits.com/symbol/SBRCY)
Brokers are suspending all trading and only allowing exiting of positions in Moscow Exchange stocks and Russian stocks in general. Does the price just mean there is no price discovery right now? That's what I'm praying for.
Btw this is [crossposted](https://www.reddit.com/r/wallstreetbets/comments/t4wz6c/sberbank_shares_how_fucked_am_i/) at /r/wallstreetbets I assume that's ok. | 0.81 | t3_t4xjxw | 1,646,218,103 |
investing | Daily General Discussion and Advice Thread - March 02, 2022 | Have a general question? Want to offer some commentary on markets? Maybe you would just like to throw out a neat fact that doesn't warrant a self post? Feel free to post here!
If your question is "I have $10,000, what do I do?" or other "advice for my personal situation" questions, you should include relevant information, such as the following:
* How old are you? What country do you live in?
* Are you employed/making income? How much?
* What are your objectives with this money? (Buy a house? Retirement savings?)
* What is your time horizon? Do you need this money next month? Next 20yrs?
* What is your risk tolerance? (Do you mind risking it at blackjack or do you need to know it's 100% safe?)
* What are your current holdings? (Do you already have exposure to specific funds and sectors? Any other assets?)
* Any big debts (include interest rate) or expenses?
* And any other relevant financial information will be useful to give you a proper answer.
Please consider consulting our FAQ first - https://www.reddit.com/r/investing/wiki/faq
And our [side bar](https://www.reddit.com/r/investing/about/sidebar) also has useful resources.
Be aware that these answers are just opinions of Redditors and should be used as a starting point for your research. You should strongly consider seeing a registered financial rep before making any financial decisions! | 0.72 | t3_t4wwbf | 1,646,215,272 |
investing | $BE and the energy crisis | $BE, at a market cap of $4 billion, projects 35% yearly growth through the next decade, from almost $1 billion in revenue today to $15 billion. Hydrogen generation with the most efficient electrolyzer can replace what natural gas is today. Energy is a national security risk, as you see oil prices getting higher and higher amid the energy-producer crisis, no matter what side you are on. I believe in freedom though. God bless | 0.5 | t3_t4r8ls | 1,646,193,991 |
investing | Is the Fed really going to stop buying bonds? | From what I understand, the Fed assured everyone it would taper off bond purchases by $15 billion a month starting November, finally ending in March (now).
However, looking [at the Fed's data](https://www.federalreserve.gov/monetarypolicy/bst_recenttrends.htm) (Total Assets, 1y scope), it doesn't look like the Fed tapered anything at all, instead increasing by the same exact rate until now. Am I reading it wrong? | 0.85 | t3_t4pz22 | 1,646,190,123 |
investing | OXY up 20% over the past two trading days | OXY has been spiking above and beyond that of other crude stocks. This is the leveraged play that Buffett financed via preferred. He got a large share dividend during the pandemic and sold all of it at panic level prices. Now, the stock has more than ~~tripled~~ quadrupled from the covid depths. Cash flow looks very healthy.
Thoughts on OXY's future? | 0.63 | t3_t4ou5e | 1,646,186,710 |
investing | Why are Russian ADRs falling so much? | **Please keep this factual rather than ethical. I am curious about the market dynamics only.**
I'm struggling to understand the reasons behind the total collapse of the Russian equities market. It was trading at a substantial discount before, but now it's truly unbelievable, with many ADRs trading below a PE ratio of 1. I understand that Russia is being cut off from SWIFT and all, but ultimately many of these are companies that will be fine even without European trade for a few years.
I see this as a result of four main factors:
- Risk of companies going bankrupt or being diluted to the point of positions being worthless
- Risk of nationalization of stocks, or government interference causing stocks to lose value
- Risk of ADRs becoming untradable and somehow as a result becoming worthless
- Devaluation as (institutional) investors divest from Russia
I'm curious to hear your opinions on how much each of these play a part. It is fascinating to see the total collapse of a market like this, and I'm struggling to understand how companies with rock solid balance sheets and stable incomes can be punished so hard that in many cases they seemingly price in a >80-90% chance of the stock being worthless. | 0.44 | t3_t4o6tu | 1,646,184,782 |
investing | Google Class A & C Shares Question | I currently hold GOOG in my portfolio at 10% and I notice the QQQ and SPY both split google with equal weight in both GOOG & GOOGL.
Would you recommend splitting my Goog shares into both A & C shares? I know A carries voting rights that C shares do not. This would be a long hold in my Roth. Wasn't sure if it was worth splitting up my class C shares. Appreciate your help | 0.8 | t3_t4nl3b | 1,646,182,989 |
investing | Compass Pathways Psilocybin Stock | Just bought some medical shrooms stock after an impressive Johns Hopkins study using it for treatment-resistant depression. My job is administering medical ketamine; hallucinogens are going to be the next breakthrough drugs for psychiatry and psychotherapy.
https://jamanetwork.com/journals/jamapsychiatry/fullarticle/2772630 | 0.6 | t3_t4n0sm | 1,646,181,375 |
investing | How much do you need to research a company (or technology) before you feel ready to invest? | I caught a news article about those new electric aviation startups (Lilium, Joby, etc.) and it sounded really interesting. It's been posted about on Reddit before, and some people are certain beyond a shadow of a doubt these companies will revolutionize personal travel, while others are completely convinced it will never catch on and the stocks will crash.
I read articles and various posts for a couple hours and came very close to putting a few thousand $ into one of these stocks, but I held back because I knew nothing about the technology, all of my knowledge of the companies came second hand from others, and I couldn't feel confident I really knew what I was investing in or what its prospects were. I also knew that if I was only just now hearing about this in the news, so was everyone else, so if there was any window of investment opportunity I already missed it. I would just be gambling on cool technology I had heard about.
But then I expanded that skepticism to other realms of investing - I'm in the medical field, but just knowing the mechanism of action or side-effect profiles for medications doesn't give me the confidence to believe I can pick winners or losers in pharma, or anything else.
What degree of understanding should be required before you should put money into a stock, or even into a given field at all? How do you know that you know what you need to know? | 0.82 | t3_t4kvhb | 1,646,175,399 |
investing | Apple stops all product sales in Russia as RT and Sputnik removed from App Store | Apple has paused all product sales in Russia in response to the invasion of Ukraine.
The tech company said it has also limited Apple Pay and other services in Russia, and that it has removed state-backed news outlets RT and Sputnik from its App Store in other countries.
In a statement, Apple said: "We are deeply concerned about the Russian invasion of Ukraine and stand with all of the people who are suffering as a result of the violence.
"We are supporting humanitarian efforts, providing aid for the unfolding refugee crisis, and doing all we can to support our teams in the region.
"We have taken a number of actions in response to the invasion.
"We have paused all product sales in Russia.
"Last week, we stopped all exports into our sales channel in the country.
"Apple Pay and other services have been limited.
"RT News and Sputnik News are no longer available for download from the App Store outside Russia.
"And we have disabled both traffic and live incidents in Apple Maps in Ukraine as a safety and precautionary measure for Ukrainian citizens.
"We will continue to evaluate the situation and are in communication with relevant governments on the actions we are taking. We join all those around the world who are calling for peace."
https://news.sky.com/story/apple-stops-all-product-sales-in-russia-as-rt-and-sputnik-removed-from-app-store-12555128 | 0.96 | t3_t4iokb | 1,646,169,621 |
investing | Investing in the Russian situation | Free thoughts worth what you paid for them.
I'm seeing chatter about investing in Russian assets. While that may be tempting from a valuation standpoint (if the transaction could even be executed) there is a far more liquid and safer trade here.
Invest in competitors or directly in commodities.
The removal of Russian commodities from supply will increase the prices of these and help competing suppliers, from oil and gas to wheat and potash. There is no reason to chase ownership of Russian assets that may be at the whim of the Russian government, when the same exposure is available in other markets. | 0.42 | t3_t4gr2e | 1,646,164,676 |
investing | Questions about internationally invested mutual fund | Hello Subredditors. I am far from a financially savvy person and I have done well in my portfolio mostly, if I am honest, based on luck. At this time I am invested in LIJKX and have a very nice nest egg over $100K and growing. I am 51 years of age.
I believe this mutual fund is internationally invested and I suspect some of that is in Russia, a country I have no interest in supporting. Given the conflict going on and the fact that any investment in their interests at this point gives fuel to their wartime activities, I would like to ensure nothing I own would be contributing to that. I am helpless to make corporations pull out of business with Russia, but this is a small thing I can do.
How do I find out if my fund is invested in Russia in any capacity and what should I be looking at should I change from this fund to another? | 0.56 | t3_t4es8y | 1,646,159,604 |
investing | LHX - Conflict stock or long term? | I have been interested in LHX for some time. With the Ukrainian conflict, naturally govt contractors are going to see bumps.
I am, however, not interested in profiting on catastrophe. My goal is long term investment - talking about buying houses, kids through college (one day), being able to take vacations and retire at a reasonable time.
I know this isn't official investment advice, but what do you think about L3Harris as a long term hold? They seem to be well positioned regardless of international conflict. | 0.38 | t3_t48qga | 1,646,143,756 |
investing | Large/Megacap ETF with strong balance sheet/fundamentals? | Hey guys, good morning. I was hoping you might point me to a good ETF that invests only in large/mega-cap companies with strong balance sheets and fundamentals (and preferably low PE).
We are in uncertain times right now, and I tend to lean towards a stagflationary perspective for the short to intermediate timeframe. My personal feeling is that these larger, stable, and strong companies will weather the storm the best.
I could go try to find a handful of companies that meet my criteria one by one, but would be much easier if there was an ETF I could jump into. | 0.69 | t3_t48tr1 | 1,646,144,012 |
investing | Ukraine supplies 70% of the world's neon. Chip makers are on edge. | Again, the world's major chip and semiconductor companies are watching the conflict closely as the Russian invasion of Ukraine will likely hamper the supply of neon.
Neon is used in lithography to make microchips.
Currently it appears the larger chip manufacturers have plenty in reserve but are worried that if the conflict escalates or is prolonged then again, the industry will suffer as a whole.
https://www.wired.co.uk/article/ukraine-chip-shortage-neon
https://www.reuters.com/breakingviews/ukraine-war-flashes-neon-warning-lights-chips-2022-02-24/
Edit: removed insensitive sentence. | 0.96 | t3_t44pro | 1,646,129,633 |
investing | Daily General Discussion and Advice Thread - March 01, 2022 | Have a general question? Want to offer some commentary on markets? Maybe you would just like to throw out a neat fact that doesn't warrant a self post? Feel free to post here!
If your question is "I have $10,000, what do I do?" or other "advice for my personal situation" questions, you should include relevant information, such as the following:
* How old are you? What country do you live in?
* Are you employed/making income? How much?
* What are your objectives with this money? (Buy a house? Retirement savings?)
* What is your time horizon? Do you need this money next month? Next 20yrs?
* What is your risk tolerance? (Do you mind risking it at blackjack or do you need to know it's 100% safe?)
* What are your current holdings? (Do you already have exposure to specific funds and sectors? Any other assets?)
* Any big debts (include interest rate) or expenses?
* And any other relevant financial information will be useful to give you a proper answer.
Please consider consulting our FAQ first - https://www.reddit.com/r/investing/wiki/faq
And our [side bar](https://www.reddit.com/r/investing/about/sidebar) also has useful resources.
Be aware that these answers are just opinions of Redditors and should be used as a starting point for your research. You should strongly consider seeing a registered financial rep before making any financial decisions! | 0.74 | t3_t44j5g | 1,646,128,870 |
investing | trying to figure DODBX out | Many years ago I had money put in DODBX for me. Over the years I've heard its praise sung many times over: a great fund, a real long-term champ, can't go wrong... As I now take over active management of this money and my DODBX holdings, I'm looking long-term and I'm just not sure what the big deal is. It doesn't seem amazing. It took seemingly forever to recover from 2008. It's not horrid, and since I've been in it since 2004 I'm up overall. But what's the buzz, what am I missing? What is similar but better? Help me understand the love for what seems to be a rather unremarkable fund, please? | 0.22 | t3_t3n3cz | 1,646,075,053 |
investing | Shell to exit joint ventures with Gazprom and pull out of Nord Stream 2 | https://www.shell.com/media/news-and-media-releases/2022/shell-intends-to-exit-equity-partnerships-held-with-gazprom-entities.html
> The Board of Shell plc (“Shell”) today announced its intention to exit its joint ventures with Gazprom and related entities, including its 27.5 percent stake in the Sakhalin-II liquefied natural gas facility, its 50 percent stake in the Salym Petroleum Development and the Gydan energy venture. Shell also intends to end its involvement in the Nord Stream 2 pipeline project. | 0.98 | t3_t3mewx | 1,646,073,290 |
investing | My bank has screwed me over multiple times, looking for suggestions. | I have tried to invest my money multiple times in different things over the past couple years. Each time, my bank has done everything they can to make it harder on me.
First of all, they have $1k buy limits per 24 hours; you have to call and tell them to allow over $1k for that 24-hour period. Not a huge deal, but it is an inconvenience. Where it gets bad is when they close at 2:30 and you can't make a purchase over $1k till the next day (and with stocks, crypto, etc., you all know a lot can change very quickly). This has happened a few times, which already had me frustrated.
The big one that pushed me over: I wanted to buy $4k of a crypto coin (I know some are for it and others against it, that's not why I'm here). The coin dipped and I was ready to buy, so I called the bank and told them I was going to make a $4k charge on the debit card so they could okay it. When I tried to make the purchase, it failed… I called back and asked why. The lady checked and said, oh, we can't allow that because the company you're trying to buy from is outside the US. I said, oh no, it's okay, they are a well-trusted company, could you please allow the transaction to go through? Nope, she said, we are not able to let that transaction through. She said the best we can do is an ACH transfer. Well, 6 days passed and I got the $4k in the account I wanted it in, but by the time the 6 days passed, the coin had gone back up. I knew it would and was planning on selling when it did, which ended up costing me a little under $3.5k in profit I could have made. Nothing crazy, but still… the bank screwed me out of $3.5k.
I definitely want to move banks, but who’s to say I won’t move to a bank that has these same restrictions? Just looking for any helpful tips or information you all might have. | 0.44 | t3_t3lybv | 1,646,072,125 |
investing | Warning to all young investors of the “kiddie tax” |
I just recently filed my 2021 taxes. I had investments from 2020 and then sold them in 2021 assuming I could take the long term capital gain and pay no tax due to my low income.
Unfortunately I am a young college student and a law called the “kiddie tax” exists which taxes a dependent child’s unearned income over $2,200 at the parent rate.
I owed over $4,000 when I filed. I got taxed about 30% on my capital gain. | 0.8 | t3_t3kpo3 | 1,646,068,951 |
investing | First cultivated meat company to go public | MeaTech $MITC recently announced significant management changes aimed at accelerating the company's transition from a development-stage company to a cultivated meat production company.
The modular factory design allows the company to create a sustainable solution for a wide variety of species including chicken, beef, and pork.
MeaTech was the first cultivated meat company to go public, with a $US28 million IPO in March 2021, following several funding rounds in 2020 totalling $US16.5 million.
It is a global food technology company using advanced biotechnology and engineering capabilities to develop slaughter-free, real meat, which is delicious, nutritious, and safer than conventional meat. | 0.75 | t3_t3ik4c | 1,646,063,229 |
investing | ELI5: What is e-mini SPY and what does it mean to sell put options on it? | I have a bit of experience trading SPY and understand that it is an index/average of the 500 leading companies in the U.S.
Today's (2/28) SPX price is ~$4,377.94.
I'm using Interactive Brokers. Someone recently gave me a recommendation to sell a put option (50):
E-mini S&P 4170 at a limit price of $10.5 with expiration date of 3/2
I want to understand this trade before making it
Can someone give me an ELI5 explanation of what this trade entails? | 0.67 | t3_t3ibx2 | 1,646,062,630 |
investing | Does it make sense to invest in Ukrainian bonds right now, assuming one waits until maturity? | Since Ukrainian bond prices fell, does it make sense to invest in them right now, assuming one waits until maturity? Decided to ask here since I haven't dealt with bonds before, but from what I understand, as long as one waits until maturity to receive the principal, government bonds should be fairly safe. | 0.32 | t3_t3i27p | 1,646,061,880 |
investing | TIL my long-term capital gains are taxed at 0% | Working on my taxes over the weekend, I was confused about why the tax amount calculated by TurboTax was coming in significantly lower than what I had previously estimated. After digging around a bit, I discovered the cause of the discrepancy: long-term capital gains (for married filing jointly) of up to about $73K are taxed at 0%!
I vaguely knew long-term gains received more favorable treatment than short-term, but I had no idea it was 0%, as most of my holdings in my taxable non-retirement account have been income-oriented along with some short-term gains from trading (I've had some long-term gains over the years too, but these amounts weren't big enough to create enough discrepancy between my estimate and the final number to catch my eye). But this year I had $20K worth of long term gains that I expected to be taxed on, so it's a nice surprise to find that this gain is tax-free! | 0.7 | t3_t3g901 | 1,646,056,791 |
investing | Investing Opportunity? The Russian ruble is now worth about $0.01 US. | Would it be possible to buy up a bunch of rubles and hold them until the war is over or sanctions have been removed, and then exchange them for USD after the value has returned to normal? I've always been interested in currency exchange rates and I think this is an opportunity worth discussing. How would I go about buying rubles, or would I have had to do it before the sanctions were in place? | 0.44 | t3_t3fcwy | 1,646,054,075 |
pytorch | My model was training badly. I tried plotting the gradient flows (one for the discriminator, one for the generator), but they don't seem right: the initial layers have more gradient accumulation than the final ones. Help me with this | nan | 1 | t3_uduqoc | 1,651,151,707 |
pytorch | How to get torch==1.1.0 in an Anaconda env (Windows) | I need to get that old version for a project. It worked on my Linux laptop, but I had to move it to my main PC because the code was too much for my humble $300 laptop.
The command I used was:
pip install torch==1.1.0
Python V is: 3.6.13
Error says:
ERROR: Could not find a version that satisfies the requirement torch==1.1.0 (from versions: 1.7.0, 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2)
ERROR: No matching distribution found for torch==1.1.0
I tried to find a solution online but it's confusing. Is there any way to get that specific version on Windows?
Any help is appreciated! | 0.75 | t3_ud96pz | 1,651,081,079 |
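A possible fix for the question above, offered as a hedged sketch: for releases this old, PyTorch's Windows wheels lived on the project's own wheel index rather than PyPI (which is consistent with pip only offering 1.7.0+ here), so pointing pip at that archive index may resolve it, assuming a 1.1.0 build for Python 3.6 on Windows exists there:

    pip install torch==1.1.0 -f https://download.pytorch.org/whl/torch_stable.html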
pytorch | GroupNorm3D | Hi!
I have a 3D CNN (fully convolutional). As I am using image sequences, I cannot train with big enough batches, so I am trying to find an alternative to batchnorm. Of course there are PyTorch's LayerNorm and InstanceNorm3d, but there is no GroupNorm3d. Is there a reason for that? Or maybe GroupNorm works for '3D' inputs (with 3D convs) as well? Or should I just write my own class based on the original paper?
Thanks in advance! | 1 | t3_ud2ipr | 1,651,062,880 |
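For what it's worth, `nn.GroupNorm` already covers this case: it is documented to accept any `(N, C, *)` input, so the same module normalizes the `(N, C, D, H, W)` output of 3D convolutions, which is presumably why no separate GroupNorm3d class exists. A minimal sketch:

    import torch
    import torch.nn as nn

    # nn.GroupNorm accepts any (N, C, *) input, so it handles the
    # (N, C, D, H, W) tensors produced by Conv3d with no 3D-specific class.
    x = torch.randn(2, 32, 8, 16, 16)   # a small batch of 3D feature maps
    gn = nn.GroupNorm(num_groups=8, num_channels=32)
    y = gn(x)
    print(y.shape)                      # torch.Size([2, 32, 8, 16, 16])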
pytorch | How to find a memory leak? | Like a lot of people on this sub, it seems, I regularly meet the infamous `RuntimeError: CUDA out of memory` after a few epochs, and it drives me crazy. Every time, people post their code on forums and someone points out a missing `.item()` or `.detach()` somewhere. But is there a way to know, at each epoch, the size of each tensor, or the memory usage breakdown, or something in that fashion, in order to track these issues myself instead of always asking online? | 0.88 | t3_ucjzoq | 1,650,999,684 |
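As a hedged sketch of doing that tracking yourself: walk the garbage collector's live objects and print every CUDA tensor, alongside PyTorch's allocator counters:

    import gc
    import torch

    def dump_cuda_tensors():
        # Crude leak probe: list every live CUDA tensor with its shape and bytes.
        for obj in gc.get_objects():
            try:
                if torch.is_tensor(obj) and obj.is_cuda:
                    print(type(obj).__name__, tuple(obj.size()),
                          obj.element_size() * obj.nelement(), "bytes")
            except Exception:
                pass  # some objects raise on attribute access; skip them

    # Call once per epoch, together with the allocator counters:
    # print(torch.cuda.memory_allocated(), torch.cuda.memory_reserved())
    # print(torch.cuda.memory_summary())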
pytorch | What loss function pairs with softmax activation function? CNN | I'm working on a multi-class image classification model, where I have 6 different classes, one-hot encoded.
I'm using the softmax as the activation function and I'm not sure which loss function pairs with it.
Would be cool if someone could help me out with this 😅 | 1 | t3_ub44yh | 1,650,833,821 |
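For what it's worth, the standard pairing: softmax goes with negative log-likelihood, i.e. `nn.LogSoftmax` + `nn.NLLLoss`, or equivalently `nn.CrossEntropyLoss` applied to raw logits (it performs the log-softmax internally, so the explicit softmax layer is dropped). A minimal sketch, assuming the one-hot targets are converted to class indices:

    import torch
    import torch.nn as nn

    logits = torch.randn(4, 6)                   # raw model outputs, no softmax
    one_hot = torch.eye(6)[torch.randint(0, 6, (4,))]
    targets = one_hot.argmax(dim=1)              # one-hot -> class indices

    # Option 1: CrossEntropyLoss on logits (log-softmax happens inside)
    loss = nn.CrossEntropyLoss()(logits, targets)

    # Option 2: explicit log-softmax stage paired with NLLLoss
    log_probs = nn.LogSoftmax(dim=1)(logits)
    loss2 = nn.NLLLoss()(log_probs, targets)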
pytorch | TF/Keras and PyTorch differences | I'm switching over a few models I created with TF/Keras over to PyTorch. A few things that I've noticed are that the accuracy is much lower and batch normalization gets stuck at local minima if I have more than one Batch Norm layer.
Is this common or am I doing something wrong? | 0.88 | t3_u9hmv4 | 1,650,642,212 |
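One possible culprit, offered as an assumption rather than a diagnosis: the two frameworks ship different BatchNorm defaults, and PyTorch's `momentum` weights the *new* batch statistic (the opposite of Keras's convention). Reproducing Keras's defaults in PyTorch would look like:

    import torch.nn as nn

    # Keras BatchNormalization defaults: momentum=0.99, epsilon=1e-3.
    # PyTorch weights the new batch statistic by `momentum`, so Keras's
    # 0.99 corresponds to momentum=0.01 here (PyTorch defaults: 0.1, eps=1e-5).
    bn = nn.BatchNorm2d(64, momentum=0.01, eps=1e-3)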
pytorch | A post on Denoising Text Image Documents using Autoencoders | [Denoising Text Image Documents using Autoencoders](https://debuggercafe.com/denoising-text-image-documents-using-autoencoders/) | 1 | t3_u92c6q | 1,650,589,229 |
pytorch | Trying to get Pytorch ROCm to work on Ubuntu 20.04 with Fiji cards | nan | 0.75 | t3_u8hlnc | 1,650,524,606 |
pytorch | Reinforcement tutorial doesn't lead to a model that converges | I copied and pasted the code and it doesn't produce a model that is better than random chance no matter what I do to it. Does anyone know of a Pytorch reinforcement tutorial that works?
[https://pytorch.org/tutorials/intermediate/reinforcement\_q\_learning.html](https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html) | 0.63 | t3_u7rp29 | 1,650,443,119 |
pytorch | A little thought about the unification of dynamic and static graphs | This post is reproduced from Zhihu and translated using Deepl for all enthusiasts to communicate
I've been writing samples for the past few days, mainly referring to Pytorch implementations, and once again inevitably ran into problems caused by the differences between static and dynamic graphs. In addition, MindSpore has recently made some progress on static-graph syntax support, but the ease-of-use gains are not so obvious, which triggered the impulse to write this article. Since these are random thoughts, there is no structure; I wrote things down as they occurred to me.
Supernatural dynamic graphs, obsolete static graphs
From the emergence of Pytorch to the present, I believe no one questions the user-friendliness of dynamic graphs: free-form writing, convenient debugging, code that reads like the formula itself. The toiling masses who had long suffered under TensorFlow looked forward to it, as did students who wanted a low barrier to entry into deep learning. To some extent, the lowering of the AI threshold (and the involution) has gone hand in hand with the popularity of dynamic-graph frameworks. I personally have not used many frameworks in depth: my early AI years were spent with Keras, and since Pytorch was open-sourced I have been using Pytorch. Later I also had a shallow taste of the old version of Paddle, as well as MindSpore since 2020.
Pytorch has been eating up TensorFlow's share step by step, and TF 2.0 then shifted to eager mode across the board. Yet as dynamic graphs keep gaining popularity, the concept of dynamic vs. static graphs is in fact little known among most people engaged in AI (perhaps I'm just ignorant). Looking back at these domestic frameworks, unfortunately, whether born before or after Pytorch, most of them took the TF-like route and then tried desperately to make up for it.
It's 2022 now, and there are still people experimenting with static graphs?
In fact, we all vote with our feet: when I write a bit of wrong code, the C++ stack overflows; when I want to change a model, I also need to mind the syntax restrictions; when I want to manipulate gradients, I have to work around the framework; when I want to write a brand-new layer, I have to care about how to write control flow. Each of the above is a reason to stay away from static graphs. This is how I was discouraged when I tried the old version of Paddle; even with the free V100 GPUs, I really couldn't clear that learning threshold. Later MindSpore ran into the same problems, even though its form had been optimized a lot.
Do dynamic and static graphs really need to be unified?
So MindSpore carries the banner of dynamic-static unification and is actually practicing it. In the current environment where dynamic graphs are mainstream, being able to motivate a group of people to use it still deserves credit. But we also face the dilemma of migrating after experiments are completed in Pytorch.
Back to the topic: do dynamic and static graphs need to be unified? MindSpore's practice is to let the compiler gradually support the full Python syntax, then translate it into computational graphs and send them down to the device for computation. Pytorch has been criticized for being too flexible and therefore hard to deploy, although there is now a good deployment path with ONNX as an intermediate IR. The common perception is probably that dynamic graphs suit academia and static graphs suit industry.
Dynamic and static unification from an AI full-scene perspective
The first thing to clarify is that the full scenario mentioned here is not the same thing as the one advertised by HW; the full scenario of AI must cover scientific research. Moreover, the development of the whole AI field is probably like a coin pusher: coins keep getting thrown in, quantitative change triggers qualitative change, and then a milestone piece of work appears and a bunch drops with a clatter. Only after that are these milestone models enshrined by industry and deployed everywhere.
Obviously, the full scenario of AI is research + deployment, with the former being versatile and the latter being stable. That's why there are numerous inference engines, but not many make training frameworks. The success of Pytorch is then best explained by the fact that it responds to the needs of the people who use it the most, rather than focusing only on the almost stable and customized deployments in industry.
So following the trajectory of this AI field to see the role that deep learning frameworks should play, it probably looks like this:
Satisfy the need for a magic model for a large group of scientific people
Support or even derive a new milestone model
Support the export deployment and large-scale application of milestone models
Looking again at dynamic-static unification: dynamic graphs are unstoppable and the inevitable choice for research scenarios, while static graphs are better suited to exporting and deploying the models research produces. The industry frameworks seem to be choosing to let one side gradually approach the other. I think the gap between the two is better bridged by something than forcibly eliminated. In other words, people are not very resistant to secondary code modifications when exporting and deploying models; one-click export is nice, but adding a conversion step is not unacceptable. So training with Pytorch and then exporting to ONNX for deployment is not a bad choice.
A few thoughts on MindSpore
Static graphs as defaults will not appeal to the research crowd.
Static graph syntax cannot and does not need to support the full range of Python syntax.
As a framework with almost optimal support for static graph syntax, it should actually encourage users to use dynamic graphs first until they need to deploy the model (80% of people don't), at which point the compiler should give enough guidance information to guide modifications to the model (the modifications here are non-destructive to the model).
In industrial training scenarios, use static graphs to train (milestone models have enough impact; at this point the model is fixed, and it's mostly a matter of feeding in more data).
Static graphs can be exported directly for deployment.
So a more reasonable approach would be: dynamic graphs as the default, static graphs for deployment requirements, guidance provided on compiler errors, and static-graph training and export. MindSpore's biggest advantage is that it compiles via AST parsing, and that is the tool that can bridge the gap.
From the time I first came across MindSpore, I found this to be an almost faultless workflow. But later MindSpore changed the default mode to graph mode. Watching people painfully use static graphs, I thought: why bother?
Summary
Finally, a few more words. Few people can answer the question: why should I use MindSpore when I have Pytorch?
If the above approach is implemented, the reasons I could give would probably be something like:
Freedom of research: however freely you can write models in Pytorch, you can write them just as freely with me.
Seamless deployment: no extra conversion work for deployment; just convert to static graphs by following the guidance.
Finally, I actually want to vent a few more words. I still hope that every deep learning framework, whatever it is, can be honest about the fact that it is a tool rather than the core. Respect AI science; respect AI researchers. That's all. | 1 | t3_u7qt1i | 1,650,439,165 |
pytorch | RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation | For [this training code](https://gist.github.com/buttercutter/b6f526c56e20f029d68e6f9041c3f5c0/17ce540da7b4e66a73a812e0a93041b52e9fd9c4#file-gdas-py-L612-L679), why is there a [runtime error when there is no in-place operation](https://pastebin.com/vzRB6dUT)? | 0.5 | t3_u7e8zd | 1,650,398,959 |
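Without access to the full script, a generic reproduction of this exact error may help narrow it down: some ops (like `exp`) save their *output* for the backward pass, so mutating that output in place, even via `+=`, trips autograd's version check:

    import torch

    a = torch.randn(3, requires_grad=True)
    b = torch.exp(a)      # exp's backward re-uses its output b
    b += 1                # the in-place edit bumps b's version counter
    b.sum().backward()    # RuntimeError: ... modified by an inplace operation

    # Fix: make the update out-of-place, e.g. b = b + 1 (or clone first).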
pytorch | perform a 2d convolution with loaded weights?? | Hi, not sure if this is even proper to post here as a question but I am trying to implement the model as defined in this paper [https://arxiv.org/abs/2203.11192](https://arxiv.org/abs/2203.11192)
I have implemented everything except for the top-right part of figure 3; basically I need to take part of the transformer encoder output and convolve it with the weights output by the transformer decoder. The shapes (B, C, W, H) are z_test = torch.Size([1, 256, 14, 14]) and w = torch.Size([1, 256, 1, 1]), where w (as far as I know) are the weights that should be used when applying the convolution to z_test.
result = torch.nn.functional.conv2d(z_test, w, stride=1, padding=0) works, but this doesn't use the weights of w, just the shape (with conv2d's internal weights), or have I misunderstood? If not, how can I load w as the weights of the conv2d operator?
Hoping for any help that I can get | 0.9 | t3_u72a2d | 1,650,365,447 |
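For what it's worth, that appears to be a misunderstanding of the functional API: unlike the `nn.Conv2d` module, `torch.nn.functional.conv2d` has no internal weights; its second argument *is* the kernel. A small sketch with the shapes from the question:

    import torch
    import torch.nn.functional as F

    z_test = torch.randn(1, 256, 14, 14)
    w = torch.randn(1, 256, 1, 1)    # (out_channels, in_channels, kH, kW)

    # F.conv2d is stateless: the tensor passed as `weight` is exactly the
    # kernel applied, so this already convolves z_test with w.
    out = F.conv2d(z_test, w, stride=1, padding=0)
    print(out.shape)                 # torch.Size([1, 1, 14, 14])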
pytorch | Training loss does not converge | nan | 1 | t3_u72a1e | 1,650,365,443 |
pytorch | TypeError: setup() got an unexpected keyword argument 'stage' | I am trying to train my Q&A model with pytorch_lightning. However, while running the command
trainer.fit(model,data_module)
I am getting the following error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-72-b9cdaa88efa7> in <module>()
----> 1 trainer.fit(model,data_module)
4 frames
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py in _call_setup_hook(self)
1488
1489 if self.datamodule is not None:
-> 1490 self.datamodule.setup(stage=fn)
1491 self._call_callback_hooks("setup", stage=fn)
1492 self._call_lightning_module_hook("setup", stage=fn)
TypeError: setup() got an unexpected keyword argument 'stage' | 1 | t3_u727pq | 1,650,365,195 |
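A hedged reading of the traceback: the Trainer calls `datamodule.setup(stage=...)` itself, so a custom DataModule's `setup` must accept that keyword; defining it without the parameter produces exactly this TypeError. The fix is a signature like the following (the class name here is hypothetical):

    import pytorch_lightning as pl

    class QADataModule(pl.LightningDataModule):
        # Lightning invokes setup(stage="fit") / setup(stage="test"),
        # so the override must take the `stage` keyword.
        def setup(self, stage=None):
            ...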
pytorch | [N][R][P] High fidelity 3D face reconstruction from monocular image | nan | 1 | t3_u6nabu | 1,650,315,265 |
pytorch | Confused where I should add if __name__ == '__main__': | EDIT: I tried putting it after the dataloader without the freeze_support(), but that causes the function to return None.
Title pretty much says it all. I'm getting the "*An attempt has been made to start a new process before the current process has finished its bootstrapping phase. You've probably forgotten to use if __name__ == '__main__': freeze_support().*" error.
I'm not really sure where to put that line of code. Is it after I declare the batch size and num_workers etc.? And where do I put the freeze_support() section? Just a little confused at this point.
Here's the function that's being called.
    @torch.no_grad()
    def prepare_data_features(model, dataset):
        # Prepare model
        model.to(device)

        data_loader = data.DataLoader(dataset, batch_size=32, num_workers=32, shuffle=False, drop_last=False)
        feats, labels = [], []
        for batch_imgs, batch_labels in tqdm(data_loader, total=len(data_loader)):
            batch_imgs = batch_imgs.to(device)
            batch_feats = model(batch_imgs)
            feats.append(batch_feats.detach().cpu())
            batch_labels = batch_labels.to(device)
            labels.append(batch_labels.detach().cpu())
        feats = torch.cat(feats, dim=0)
        labels = torch.cat(labels, dim=0)

        # Sort images by labels
        labels, idxs = labels.sort()
        feats = feats[idxs]

        # return data.TensorDataset(feats, labels)
        return feats, labels
Do I just put it at the top of this function, or after the dataloader, or somewhere else? | 0.78 | t3_u6jh63 | 1,650,305,321 |
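A structural sketch of the usual answer, reusing the names from the post (it is not a runnable script on its own): the guard doesn't belong inside `prepare_data_features` at all. On Windows, DataLoader workers are started by re-importing the script, so all top-level code that directly or indirectly creates the workers must sit behind the guard; `freeze_support()` is only needed when freezing the script into an executable:

    def main():
        # build your model / dataset here, then call the worker-spawning helper
        feats, labels = prepare_data_features(model, dataset)

    if __name__ == '__main__':
        # Windows spawns DataLoader workers by re-importing this file, so the
        # entry point must be guarded; freeze_support() is only for frozen exes.
        main()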
pytorch | Help with DeepFashion dataset | I am trying to find a good dataset that I can use for various ML applications involving fashion/clothing and I came across the [DeepFashion](https://mmlab.ie.cuhk.edu.hk/projects/DeepFashion.html) dataset. I was blown away when I saw it... it has the exact type of functionality that I am looking for. I found this toolbox called [mmfashion](https://github.com/open-mmlab/mmfashion) that uses the DeepFashion dataset and I cloned it and tried to get it set up, but I have been unsuccessful. I have also come across the `Fashion-MNIST` dataset that can help with classifying images, but DeepFashion seems to be full-featured and solves all of my use cases.
Has anyone worked with DeepFashion/mmfashion before and know how to get it set up or could help me do so? I'd be more than happy to compensate anyone that is able to help me get it working. | 1 | t3_u6cboq | 1,650,286,006 |
pytorch | torch.nn.Transformer how to get access to encoded layer | Do I really have to copy the source code and extend it to return the memory layer in the forward() function of the transformer or is there an easier way to do this? | 1 | t3_u5qjc4 | 1,650,212,742 |
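One possible shortcut, assuming the stock `nn.Transformer`: its `encoder` and `decoder` are public submodules, so you can run them separately and keep the memory tensor without copying any source code. A minimal sketch:

    import torch
    import torch.nn as nn

    model = nn.Transformer(d_model=32, nhead=4)
    src = torch.randn(10, 2, 32)   # (seq, batch, d_model)
    tgt = torch.randn(7, 2, 32)

    memory = model.encoder(src)          # the encoded (memory) layer you want
    out = model.decoder(tgt, memory)     # finish the forward pass manually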
pytorch | best way to reduce convolution channels in a learned manner? | Newbie here,
I've been playing around with making a model that attempts to colourise b+w images. my current model is a cnn in which the final layer is reduced to two channels, which become the a & b channels of LAB colour space. the preceding layer is the output of the convolutions with many (at the moment 64) channels.
I'm trying to create a system where the final layer essentially learns the relative importance for each of the 64 input channels, independently for each output channel.
my current method is to take the output of a convolution layer, the shape of which looks like:
(batch_size, 64, x,y)
flatten it
(batch_size,64,n_pixels)
do a 1d convolution with 64 input channels and two output channels
(batch_size,2,n_pixels)
then unflatten
(batch_size,2,x,y)
but I suspect this is a little bit too hacky, and there is a better way to do it.
P.S. I know this is almost certainly not an optimal system for a colourising nn, but I'm going through the process as a DIY learning exercise! | 1 | t3_u4zoxo | 1,650,121,497 |
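For what it's worth, a pointwise (1×1) `Conv2d` does exactly this learned per-pixel channel mixing, with no flatten/unflatten round trip. A sketch, assuming the 64 input channels and 2 output (a/b) channels described above:

```python
import torch
import torch.nn as nn

# A 1x1 convolution learns a weighted combination of the 64 input
# channels, applied independently at every pixel, for each of the
# 2 output channels.
reduce = nn.Conv2d(in_channels=64, out_channels=2, kernel_size=1)

x = torch.randn(8, 64, 32, 32)   # (batch_size, 64, x, y)
ab = reduce(x)                   # (batch_size, 2, 32, 32)
```

Assuming your Conv1d used a kernel size of 1, this computes exactly the same mapping; it's just the idiomatic spelling.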
pytorch | Correct method to have weights derived from a reduced order set of parameters? | I have an application where I want to have the weights for a Conv2D to be based on some reduced set of parameters. For example, I want to restrict the kernel to be a 2D representation of sin waves with a particular frequency and orientation. So, the new Module really only needs to have two parameters.
I was able to make a class that derives from torch.nn.Module, has these two parameters, contains a single Conv2D member variable, and generates the kernel inside a "with torch.no_grad():" block that then assigns the derived weights to the Conv2D object. Some simple tests with test data seem to show this is doing the 2D convolution just right.
However, my question is how to ensure that this derived kernel gets updated every time the underlying frequency and orientation are updated by the optimizer.
1. Should I regenerate it in the forward() function?
2. Is there some callback for when parameters get updated?
3. Is there some other preferred method to do this? | 0.5 | t3_u4puv3 | 1,650,083,016 |
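A sketch of option 1, which is the usual pattern: build the kernel from the learnable parameters inside `forward()` using differentiable ops (no `no_grad()` block) and apply it with `F.conv2d`. The kernel is then rebuilt from the current parameter values on every call, and gradients flow back into frequency/orientation. The exact sin-wave construction below is illustrative, not your formula, and assumes single-channel input:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SinKernelConv(nn.Module):
    def __init__(self, kernel_size=9):
        super().__init__()
        self.freq = nn.Parameter(torch.tensor(1.0))
        self.theta = nn.Parameter(torch.tensor(0.0))
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, kernel_size),
                                torch.linspace(-1, 1, kernel_size), indexing="ij")
        self.register_buffer("xs", xs)
        self.register_buffer("ys", ys)

    def forward(self, x):
        # Rebuild the kernel from the *current* parameters. Because this
        # is differentiable, the optimizer updates freq/theta directly.
        u = self.xs * torch.cos(self.theta) + self.ys * torch.sin(self.theta)
        kernel = torch.sin(2 * torch.pi * self.freq * u)
        kernel = kernel.view(1, 1, *kernel.shape)
        return F.conv2d(x, kernel, padding=self.xs.shape[-1] // 2)
```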
pytorch | Newbie Question: Why is my linear model is only returning NaN? How can diagnose this? | Hey, I'm trying to learn more about PyTorch and I'm running into a frustrating issue with my model. No matter what I do my model only predicts `nan`.
Here is my model:
class linearRegression(torch.nn.Module):
    def __init__(self, inputSize, outputSize):
        super(linearRegression, self).__init__()
        self.linear = torch.nn.Linear(inputSize, outputSize)

    def forward(self, x):
        out = self.linear(x)
        return out
As you can see, it is a single layer. My data set is fully normalized with z-scaling. Yet when I train my model, all I get back is nan:

def train(model, train_x, train_y):
    criterion = torch.nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.0001)
    epochs = 1000

    for _ in range(epochs):
        # Clear gradient buffers so gradients from the previous epoch
        # don't accumulate
        optimizer.zero_grad()

        # get output from the model, given the inputs
        outputs = model(train_x)

        # get loss for the predicted output
        loss = criterion(outputs, train_y)

        # get gradients w.r.t. the parameters
        loss.backward()

        # update parameters
        optimizer.step()
Above is my example code for training the model.
Any help would be greatly appreciated. | 0.83 | t3_u4mie8 | 1,650,071,386 |
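Since the model itself looks fine, a few hedged diagnostics worth running first. The most common culprit with this symptom is a NaN already present in the inputs or targets, since a single NaN poisons every weight after one `backward()`:

```python
import torch

# Check the data before training: one NaN anywhere will propagate
# to every parameter after the first optimizer step.
print(torch.isnan(train_x).any(), torch.isnan(train_y).any())

# Check that the scale is what you expect after z-scaling.
print(train_x.mean(), train_x.std())

# Make autograd raise at the op that first produces a NaN gradient.
torch.autograd.set_detect_anomaly(True)
```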
pytorch | A tutorial on Autoencoder Neural Network: Application to Image Denoising | [Autoencoder Neural Network: Application to Image Denoising](https://debuggercafe.com/autoencoder-neural-network-application-to-image-denoising/)
https://preview.redd.it/p976sv6acst81.png?width=1200&format=png&auto=webp&s=de191c84788e64ea2eada0055085d22531b89a71 | 0.75 | t3_u4lqzg | 1,650,068,834 |
pytorch | What kind of model should I use for my project? | Hi friends! I’m working on a project that involves an AI trying to complete a specific task within an FPS video game. The input is a tensor with [t, c, x, y] dimensions (time, RGB channels, and x/y of pixels) given from a custom dataset, and the output needs to be a binary array for keyboard buttons, a binary array for mouse buttons, and a “unit interval” array of where to move the mouse, from 0-1 on each axis on the screen. I was thinking I should use a few convolutional layers for pattern detection, and then feed that to an LSTM, but I have no idea if that would work, and my tests have shown that it’s quite bad. How can I fix my network topology, or is what I’m doing completely wrong? | 1 | t3_u341ko | 1,649,894,216 |
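A CNN feeding an LSTM is a reasonable starting topology for this; the part that often goes wrong is the output heads. A hedged sketch with one head per output type (all names and sizes here are illustrative assumptions, not the game's real dimensions):

```python
import torch
import torch.nn as nn

class GameAgent(nn.Module):
    def __init__(self, n_keys=8, n_mouse_buttons=2, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),   # -> (batch*t, 1024)
        )
        self.lstm = nn.LSTM(64 * 16, hidden, batch_first=True)
        self.keys = nn.Linear(hidden, n_keys)          # keyboard logits
        self.mouse_btn = nn.Linear(hidden, n_mouse_buttons)
        self.mouse_pos = nn.Linear(hidden, 2)          # (x, y) in [0, 1]

    def forward(self, frames):                         # (batch, t, c, h, w)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        h = out[:, -1]                                 # last time step
        return self.keys(h), self.mouse_btn(h), torch.sigmoid(self.mouse_pos(h))
```

The binary heads output raw logits for use with `BCEWithLogitsLoss`; the position head is squashed with a sigmoid onto the unit interval and can be trained with MSE.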
pytorch | Help choosing a network type | Hello,
I have a problem that I am trying to solve using PyTorch and would like some opinions or help concerning what kind of neural network to use, or, if possible, how to solve this problem more efficiently.
Problem:
- The input contains 4 images (from the four top corners) of objects captured by camera.
- Given: the 3D shapes of the photographed objects (.obj, .stl, .fbx)
- The NN model should be able to classify which photographed object has been recorded by the cameras by matching the 4 images with the appropriate 3D shape (preferably unsupervised).
What I am planning to do:
- Generate different views from the 3D object files and use RotationNet
- or, with PyTorch3D, generate a 3D view of the images and compare the 3D objects
Is there a way to project the images (views) onto the 3D files directly and select which one fits best?
Thanks | 1 | t3_u2xc0f | 1,649,875,563 |
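One way to frame the matching, assuming you render several views per 3D file as described above: embed both the camera images and the rendered views with a shared CNN, average each object's per-view embeddings, and pick the object whose embedding is most similar to the query. A rough sketch; `camera_images` and `rendered_views_per_object` are hypothetical tensors standing in for your data:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(           # shared view encoder (placeholder)
    nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def embed_views(views):            # views: (n_views, 3, H, W)
    return F.normalize(encoder(views).mean(dim=0), dim=0)

query = embed_views(camera_images)             # the 4 captured images
scores = torch.stack([query @ embed_views(v)   # one score per candidate
                      for v in rendered_views_per_object])
best_object = scores.argmax()
```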
pytorch | Machine Learning with PyTorch and Scikit-Learn eBook | nan | 0.67 | t3_u16s1b | 1,649,680,683 |
pytorch | PyTorch on M1 GPU with Shark | Has anyone tried this and got their example to work on the M1 GPUs?
[https://nod.ai/pytorch-m1-max-gpu/](https://nod.ai/pytorch-m1-max-gpu/) | 0.82 | t3_u0x4zo | 1,649,643,625 |
pytorch | Sampler that picks a random subset of the data from one class per epoch | So I'm working on a project where I have a class imbalance. And one of the things I wanted to try was to sample a random subset of the highest occurring class for the first epoch/first mini-batch along with all the other classes, and a new subset for every other epoch/mini-batch. The remaining classes get sampled in their entirety. Is it possible to implement this with the Pytorch data module? | 1 | t3_u0sag1 | 1,649,628,342 |
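A hedged sketch of one way to do this with a custom `Sampler`: keep all minority-class indices, and draw a fresh random subset of the majority-class indices every time `__iter__` is called, which happens once per epoch:

```python
import torch
from torch.utils.data import Sampler

class MajoritySubsetSampler(Sampler):
    def __init__(self, majority_idx, other_idx, n_majority_per_epoch):
        self.majority_idx = torch.as_tensor(majority_idx)
        self.other_idx = torch.as_tensor(other_idx)
        self.n = n_majority_per_epoch

    def __iter__(self):
        # A new random subset of the majority class each epoch.
        subset = self.majority_idx[torch.randperm(len(self.majority_idx))[:self.n]]
        epoch_idx = torch.cat([subset, self.other_idx])
        yield from epoch_idx[torch.randperm(len(epoch_idx))].tolist()

    def __len__(self):
        return self.n + len(self.other_idx)
```

Because `DataLoader` calls `iter(sampler)` at the start of every epoch, the majority subset is redrawn automatically; pass it in via `DataLoader(dataset, batch_size=..., sampler=MajoritySubsetSampler(...))`.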
pytorch | Max pooling | Can someone please share how to do max pooling with a BERT model in PyTorch? | 0.36 | t3_u01wl2 | 1,649,537,155 |
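If the goal is a sentence embedding, one common recipe is max pooling over the token embeddings while masking out padding. A sketch, assuming a Hugging Face `BertModel`:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

enc = tokenizer(["an example sentence"], return_tensors="pt", padding=True)
hidden = model(**enc).last_hidden_state            # (batch, seq, 768)

# Set padded positions to -inf so they never win the max.
mask = enc["attention_mask"].unsqueeze(-1).bool()
pooled = hidden.masked_fill(~mask, float("-inf")).max(dim=1).values
```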
pytorch | Train a Convolutional Autoenocder on CIFAR10 using PyTorch | [Train a Convolutional Autoenocder on CIFAR10 using PyTorch](https://debuggercafe.com/machine-learning-hands-on-convolutional-autoencoders/)
[https://debuggercafe.com/machine-learning-hands-on-convolutional-autoencoders/](https://debuggercafe.com/machine-learning-hands-on-convolutional-autoencoders/)
https://preview.redd.it/1zdx0t6787s81.png?width=1200&format=png&auto=webp&s=76697398c7c4fe2c68ea9b8fd8dd1acda682fa41 | 1 | t3_tyqviz | 1,649,377,369 |
pytorch | Implementing a basic feed-forward NN - why is my loss/weight not updating? | Hello all,
I am a beginner to Pytorch, and to a lesser extent, Python. I understand the fundamentals of ML and NNs, but now I am trying to put pen-to-paper with some beginner projects.
I've written up a super rudimentary architecture based on the MNIST dataset, and my code runs, but my outputs do not seem to meaningfully update through each mini-batch and epoch iteration. Further, the loss value for each iteration prints as "-0.000".
I've tried to troubleshoot this, but there is nothing that stands out to me here as blatantly incorrect based on my research.
For reference: The data I am using is not from the built-in Pytorch package data. I am using a modified version of MNIST that only contains class label values for 0 and 1 (just to make the implementation a little easier with a single output node).
So my features are input as a (12665, 784) NumPy array, with feature values normalized to be between 0 and 1. The class labels, then, are (12665, 1) with binary values of 0 or 1.
I've also ran this data through a "from-scratch" NN that I put together with NumPy, and that seemed to work just fine. I am just not sure where I am going wrong with PyTorch.
Any help would be greatly appreciated, thanks!
My code:
import numpy as np
import torch
from torch import nn
from torch import optim
from torch.utils.data import DataLoader, TensorDataset
import torch.nn.functional as F

device = torch.device('cuda')

# Single output node: 0 or 1
mnist_train_01 = np.genfromtxt('path', delimiter=',')
Y_train_01 = mnist_train_01[:, [0]]  # Y
X_train_01 = mnist_train_01[:, 1:]   # X
X_train_01 = X_train_01 / 255        # Normalize X

class NN(nn.Module):
    def __init__(self, input_size, output_size):
        super().__init__()
        self.fc1 = nn.Linear(input_size, 100)
        self.fc2 = nn.Linear(100, output_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

def train(net, features, batch_size, class_label, epochs=100000, lr=0.001):
    model.train()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()

    class_label = torch.from_numpy(class_label).to(device)
    features = torch.from_numpy(features).to(device).float()

    for e in range(epochs):
        for n in range(0, features.shape[0], batch_size):
            opt.zero_grad()

            # The features
            x = features[n:n+batch_size, :]
            y = class_label[n:n+batch_size, :]

            # forward
            outputs = model.forward(x)
            loss = criterion(outputs, y)

            # backward
            loss.backward()

            # gradient descent
            opt.step()

            print("Loss: {:.4f}".format(loss.item()))

model = NN(input_size=784, output_size=1).to(device)
train(model, X_train_01, 5, Y_train_01, 2, 0.001) | 1 | t3_tycqgj | 1,649,336,950 |
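A plausible explanation for the constant `-0.000`, offered with the caveat that I can't run your data: `nn.CrossEntropyLoss` applies log-softmax over the class dimension, and with `output_size=1` there is only one "class", so the softmax is identically 1 and the loss is identically 0. No gradient ever flows, which matches the weights never updating. For a single output node, binary classification normally uses `nn.BCEWithLogitsLoss` with float 0/1 targets:

```python
criterion = nn.BCEWithLogitsLoss()

# outputs: (batch, 1) raw logits; y: (batch, 1) floats in {0.0, 1.0}
outputs = model(x)
loss = criterion(outputs, y.float())
```

Alternatively, keep `CrossEntropyLoss` but use `output_size=2` with integer class labels of shape `(batch,)`. (Unrelated, but note the `train()` function takes `net` yet calls the global `model` inside the loop.)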
pytorch | Tutorial: Writing JAX-like code in PyTorch with functorch | nan | 1 | t3_ty2ty8 | 1,649,298,773 |
pytorch | What happens if I perform unstructured pruning after structured pruning in Pytorch? | Say, I have a model named CNN_Model which consists of multiple CNN layers and one fully connected layer. I perform structured pruning on the model and then perform unstructured L1 pruning on the model. Does the structured pruning mask get removed?
Say for example:
I use this function:
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.ln_structured(module=module, name="weight",
                            amount=sparsity, n=1, dim=dimention, importance_scores=None)
and then use this:
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module=module, name="weight",
                              amount=sparsity, importance_scores=None)
Or, if I were to perform global unstructured pruning after the structured pruning, what would happen? | 1 | t3_txugh7 | 1,649,274,315 |
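As far as I understand the pruning utilities, the first mask is not removed: applying a second pruning method to the same parameter stacks both methods in a `PruningContainer`, and the new mask is computed on top of the surviving units, so the effects compose. Global unstructured pruning afterwards behaves the same way. One way to see this, as a hedged sketch:

```python
import torch
import torch.nn.utils.prune as prune

conv = torch.nn.Conv2d(3, 8, 3)
prune.ln_structured(conv, name="weight", amount=0.5, n=1, dim=0)
prune.l1_unstructured(conv, name="weight", amount=0.2)

# Both methods are recorded and their masks are combined.
for hook in conv._forward_pre_hooks.values():
    print(type(hook))                         # expect a PruningContainer
print((conv.weight == 0).float().mean())     # overall sparsity
```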
pytorch | Tutorial on how to make a new pytorch layer CUDA-compatible? | We have been working on a new kind of layer in pytorch. We were able to implement it in pure python and it seems to be working fine, but would really like to extend it to work on the GPU. As far as I can tell, there isn't magic going on behind the scenes with pytorch that automatically makes the Python implementation be GPU-compatible unless all the underlying operations used in the layer were already GPU-compatible.
So, does someone have a tutorial that gives an end-to-end example of making a new layer type that is both CPU- and GPU-compatible? It looks like [this page](https://pytorch.org/tutorials/advanced/cpp_extension.html#integrating-a-c-cuda-operation-with-pytorch) shows how to do either a C++ or CUDA layer, but it doesn't show how to make one that switches between using CUDA or not-CUDA based on the python script that is calling it. | 1 | t3_txp5u9 | 1,649,259,973 |
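I haven't seen a single tutorial covering the dispatch part either, but the pattern most extensions use is simple: compile both a C++ and a CUDA kernel, then branch on the input's device in the wrapper. A hedged sketch; `my_layer_cpp` and `my_layer_cuda` are hypothetical compiled extension modules (e.g. built with `torch.utils.cpp_extension.load()`), not real packages:

```python
import torch

import my_layer_cpp    # hypothetical CPU implementation
import my_layer_cuda   # hypothetical CUDA implementation

class MyLayerFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Pick the backend based on where the tensor lives.
        ext = my_layer_cuda if x.is_cuda else my_layer_cpp
        ctx.save_for_backward(x)
        return ext.forward(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        ext = my_layer_cuda if grad_out.is_cuda else my_layer_cpp
        return ext.backward(grad_out, x)
```

One caveat: if the pure-Python version only composes existing torch ops, it is already GPU-compatible, and moving the module and its inputs to CUDA is enough. Custom kernels are only needed for operations you wrote yourself.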
pytorch | NaN training loss | Why am I having [NaN for training loss](https://github.com/buttercutter/gdas/blob/c1ea1779e3e3c2a1aa52df3071256d56f9ed2e03/gdas.py#L50), `ltrain` if I change the value of the variable `NUM_OF_CELLS` from 8 to 16 ?
https://preview.redd.it/n40wq9ddixr81.png?width=980&format=png&auto=webp&s=da7f05523769945ffaa977d6e5df667be697ec71 | 1 | t3_txp2v5 | 1,649,259,741 |
pytorch | Looking for people to test my new GPU/Ubuntu virtual machine "cloud' service! | Hi everyone! I've spent the last couple months building and configuring a virtual GPU "cloud/instance" service. I'm looking for anyone with ML/DL/Pytorch/Ubuntu/GPU experience to put my VMs to the test and let me know what you like/dislike about it. It's still in the early beta stages so I'd like to know how training times and latency compare to what you're currently used to. Absolutely free of charge. SSH and VNC connections are available through the web. Must have you connect to my VPN to gain access, for security. Let me know if you're willing to try this out. Comment or DM. All constructive criticism is greatly appreciated! | 0.84 | t3_twxasn | 1,649,171,773 |
pytorch | Can someone help me with this? I've been struggling with this problem for days | [https://github.com/jdb78/pytorch-forecasting/issues/933](https://github.com/jdb78/pytorch-forecasting/issues/933) | 0.84 | t3_tw40ak | 1,649,084,122 |
pytorch | A Post on Implementing Deep Autoencoder in PyTorch | [Implementing Deep Autoencoder in PyTorch](https://debuggercafe.com/implementing-deep-autoencoder-in-pytorch/)
[https://debuggercafe.com/implementing-deep-autoencoder-in-pytorch/](https://debuggercafe.com/implementing-deep-autoencoder-in-pytorch/)
https://preview.redd.it/tco3abi1atq81.png?width=1200&format=png&auto=webp&s=7a80833265a73047b0ab3e3f2262af6e89612c77 | 1 | t3_ttdrw0 | 1,648,772,642 |
pytorch | PyTorch and ROCm 5: What ROCm packages are required for PyTorch? | I'm hoping to use PyTorch with ROCm to speed up some SVD using an AMD GPU. I'm new to GPU computing, ROCm and PyTorch, and feel a bit lost.
I'm pretty sure I need ROCm >= 5.0 to support the RX 6800 GPU, which means the [PyTorch Get Started Locally](https://pytorch.org/get-started/locally/) command doesn't quite work for me. Further, I'd like to test on a laptop with a Vega 8 iGPU, which some ROCm packages do not support (HIP, I believe).
Hence, I need to install ROCm differently, and due to my OS, I can't use the AMD script (PopOS 21.10), but go through the [Using Package Manager on Ubuntu](https://docs.amd.com/bundle/ROCm-Installation-Guide-v5.1/page/How_to_Install_ROCm.html) method, which is all fine.
In the very last step, I'm asked to install ROCm meta-packages---and here I'm unsure what I need for PyTorch. AMD has a [neat list of ROCm meta-packages](https://docs.amd.com/bundle/ROCm-Installation-Guide-v5.1/page/Meta-packages_in_ROCm_Programming_Models.html), but I don't know which are necessary.
I've been trying to prod the URL from the PyTorch Get Started Locally command, but I didn't get anything out of that.
### **So, in short: Which of the [ROCm meta-packages](https://docs.amd.com/bundle/ROCm-Installation-Guide-v5.1/page/Meta-packages_in_ROCm_Programming_Models.html) should I install to make use of PyTorch?**
Thank you for your time. | 0.78 | t3_tszoi8 | 1,648,733,034 |
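I can't say authoritatively which meta-packages the ROCm wheels expect, but once something is installed, a quick way to verify that PyTorch actually sees the GPU through ROCm (the ROCm build reports itself through `torch.version.hip`, and devices appear via the CUDA-style API):

```python
import torch

print(torch.version.hip)            # non-None on a ROCm build of PyTorch
print(torch.cuda.is_available())    # ROCm devices show up through the CUDA API

if torch.cuda.is_available():
    x = torch.randn(512, 512, device="cuda")
    u, s, vh = torch.linalg.svd(x)  # the SVD workload mentioned above
    print(s[:5])
```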
pytorch | loss.backward() ??? | Hello there,
I am recently transitioning to PyTorch from TF, and was wondering about the backward method. Specifically, I am wondering about the design decision that went into creating this interface.
This is a method called on an arbitrary torch tensor. How the hell does this interface make explicit the connection to the model? Something like:
model.backward(loss = loss)
Makes much more sense to me. It clearly shows that the loss is used in conjunction with all the partial derivatives to calculate the gradient updates for the model itself.
I suspect that there is probably an actual reason behind this interface design that I am failing to see. If any of you fellow ML folk could shed some light onto this issue, it would be greatly appreciated. | 0.8 | t3_tqt40y | 1,648,533,659 |
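A sketch of why the tensor-centric interface makes sense: the graph needed for backprop is carried by the tensors themselves (via `grad_fn`), not by any `nn.Module`, so a loss can be built from several models, or from none at all, and `backward()` still knows exactly which leaf parameters to reach:

```python
import torch

w = torch.randn(3, requires_grad=True)   # a "model" with no Module at all
x = torch.randn(3)

loss = ((w * x).sum() - 1.0) ** 2
print(loss.grad_fn)      # the loss tensor carries the graph that built it

loss.backward()          # walks grad_fn back to every leaf requiring grad
print(w.grad)
```

`model.backward(loss)` would be ambiguous whenever a loss touches multiple modules (GANs, shared encoders, regularizers on raw tensors); hanging the graph off the tensor means the connection never has to be declared explicitly.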
pytorch | PyTorch to coreml with colab | Any experts in this for GANs? I’m willing to pay at this point. I just can’t figure out why my model worked when missing the jacobian part cause I didn’t have an invert function. When I added the invert function and let the jacobian work again the model now returns blank. The invert alone seems to work on its own. It’s a client project so I can’t post the entire thing. | 1 | t3_tqp4s8 | 1,648,519,302 |
pytorch | I made a beginner's guide to TorchStudio! | If you haven't seen it yet, TorchStudio is an awesome IDE built **just for PyTorch** and was released a few weeks ago for open beta!
[I wrote this beginner's guide](https://www.assemblyai.com/blog/beginners-guide-to-torchstudio-pytorch-only-ide/) to help you get up-and-running with TorchStudio. It takes you through model building, training, and comparison, and also includes some pros, cons, and suggested features!
https://preview.redd.it/e0li76p9e5q81.png?width=960&format=png&auto=webp&s=94dd300d691ff97e52f2de67f437a2186f80c98c | 0.89 | t3_tqcnvy | 1,648,483,518 |
pytorch | [Help] Gradient of a parameter is set to NoneType | I am experimenting with PyTorch autograd. I have created a simple algebraic expression **w+2** in which I want to find the value of w, a scalar, using gradient descent. The **real output is 4** and the **real w is 2**.
import torch
from torch.autograd import Variable

w = Variable(torch.ones(1), requires_grad=True)

for i in range(10):
    y = 4
    lr = 0.001
    y_ = w + 2
    loss = (y_ - y) ** 2
    loss.backward()
    w = w - lr * w.grad.data
    print("L: ", loss, "w: ", w, "w-g: ", w.grad)
This is the error
>AttributeError: 'NoneType' object has no attribute 'data'
How do I solve this? | 1 | t3_tq0duw | 1,648,438,769 |
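The immediate problem, as a hedged reading of the code: `w = w - lr * w.grad.data` rebinds `w` to a brand-new non-leaf tensor, so on the next iteration `w.grad` is `None`. Updating in place under `no_grad()` (and zeroing the gradient each step) keeps `w` the same leaf tensor; `Variable` is also long deprecated:

```python
import torch

w = torch.ones(1, requires_grad=True)

for i in range(10):
    y = 4
    lr = 0.001
    loss = (w + 2 - y) ** 2
    loss.backward()
    with torch.no_grad():
        w -= lr * w.grad        # in-place: w stays the same leaf tensor
        w.grad.zero_()          # don't accumulate gradients across steps
    print("L:", loss.item(), "w:", w.item())
```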
pytorch | how to convert from torch Tensor to base64 image to send over network? | Creating a flask server that so far looks like:
@app.route("/json", methods=['GET', 'POST', 'PUT'])
def getjsondata():
    if request.method == 'POST':
        print("received POST")
        data = request.get_json()
        # print(format(data['z']))
        jzf = [float(i) for i in data['z']]
        jzft = torch.FloatTensor(jzf)
        jzftr = jzft.reshape([1, 512])
        z = jzftr.cuda()
        c = None  # class labels (not used in this example)
        trunc = 1
        img = G(z, c, trunc)
        # print(type(img))
        img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
It receives some JSON data, uses that as input vector, generates an image, and now I want to send that image back. From what I understand it should be sent as a base64 byte array, but I have tried for hours to convert it into that format to no avail. Any help is appreciated. | 0.83 | t3_tplcqg | 1,648,392,191 |
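A sketch of one way to finish the route, assuming `img` is the `(1, H, W, 3)` uint8 tensor produced above: encode it to PNG bytes with PIL, then base64 it into the JSON response:

```python
import base64
import io

from PIL import Image

def tensor_to_base64_png(img):
    """img: a (1, H, W, 3) uint8 tensor, as produced above."""
    array = img[0].cpu().numpy()
    buffer = io.BytesIO()
    Image.fromarray(array, 'RGB').save(buffer, format='PNG')
    return base64.b64encode(buffer.getvalue()).decode('ascii')

# inside getjsondata():
#     return jsonify({'image': tensor_to_base64_png(img)})
```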
pytorch | RuntimeError: Function 'LogSoftmaxBackward0' returned nan values in its 0th output. | Why am I getting a [NaN training loss](https://github.com/buttercutter/gdas/blob/c1ea1779e3e3c2a1aa52df3071256d56f9ed2e03/gdas.py#L50), `ltrain`, if I change the value of the variable `NUM_OF_CELLS` from 8 to 16?
/home/phung/PycharmProjects/venv/py39/bin/python /home/phung/PycharmProjects/beginner_tutorial/gdas_new.py
Files already downloaded and verified
Files already downloaded and verified
run_num = 0
Entering train_NN(), forward_pass_only = 0
modules = <generator object Module.named_children at 0x7f6a8044d0b0>
gradwalk(output_tensor.grad_fn)
outputs1.size() = torch.Size([4, 10])
train_labels.size() = torch.Size([4])
tensor(1., device='cuda:0')
[W python_anomaly_mode.cpp:104] Warning: Error detected in LogSoftmaxBackward0. Traceback of forward call that caused the error:
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas_new.py", line 873, in <module>
    ltrain = train_NN(forward_pass_only=0)
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas_new.py", line 638, in train_NN
    Ltrain = criterion(NN_output, NN_train_labels)
  File "/home/phung/PycharmProjects/venv/py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/phung/PycharmProjects/venv/py39/lib/python3.9/site-packages/torch/nn/modules/loss.py", line 1150, in forward
    return F.cross_entropy(input, target, weight=self.weight,
  File "/home/phung/PycharmProjects/venv/py39/lib/python3.9/site-packages/torch/nn/functional.py", line 2846, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
 (function _print_stack)
Traceback (most recent call last):
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas_new.py", line 873, in <module>
    ltrain = train_NN(forward_pass_only=0)
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas_new.py", line 648, in train_NN
    Ltrain.backward()
  File "/home/phung/PycharmProjects/venv/py39/lib/python3.9/site-packages/torch/_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/phung/PycharmProjects/venv/py39/lib/python3.9/site-packages/torch/autograd/__init__.py", line 154, in backward
    Variable._execution_engine.run_backward(
RuntimeError: Function 'LogSoftmaxBackward0' returned nan values in its 0th output.
Process finished with exit code 1 | 1 | t3_tof013 | 1,648,262,140 |
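Anomaly mode points at the loss, but the NaN usually originates earlier in the network. A hedged way to find the first offending module in a deep stack like this is a forward-hook sweep; deeper stacks of cells also make exploding activations more likely, so gradient clipping is worth trying as well:

```python
import torch

def add_nan_hooks(model):
    for name, module in model.named_modules():
        def hook(mod, inputs, output, name=name):
            if isinstance(output, torch.Tensor) and torch.isnan(output).any():
                raise RuntimeError(f"first NaN produced by module: {name}")
        module.register_forward_hook(hook)

# And before optimizer.step(), keep gradients bounded:
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
```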
pytorch | Help with 1D CNN for time series classification | I am attempting to train a classifier with multiple layers for 1-dimensional time series data. Each training example is a slice of sequential 1D data, and each is labeled as one of three total classes. The issue is that the model only ever predicts a single class for every training example. I do believe that the data provided should be enough to classify each slice. So far, my attempts to make a deeper network have yielded the same result, and I wanted to make sure I'm not making any glaring mistakes before I waste time attempting to make it deeper. Any advice? | 0.75 | t3_tnvap2 | 1,648,232,620 |
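Before going deeper, it's worth ruling out the usual suspect: with class imbalance, a model that always predicts the majority class is a comfortable local minimum of unweighted cross-entropy. A hedged sketch of inverse-frequency class weighting, where `labels` stands for your full training label tensor:

```python
import torch
import torch.nn as nn

# labels: 1-D tensor of class indices for the whole training set
counts = torch.bincount(labels, minlength=3).float()
weights = counts.sum() / (3 * counts)    # rarer classes get larger weight

criterion = nn.CrossEntropyLoss(weight=weights)
```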
pytorch | Non-differentiable loss approximation | I'm working with custom recommendation systems for personal research and I want to try using NDCG as my loss function directly. It's non-differentiable, so I need an approximation, but after trying out NeuralNDCG it takes far too long and far too much GPU memory. It's simply not feasible at my project's scale. I have over 1 million users across over 20000 items. In total, I am using 62 million values and only kept the top 250000 users and the top 8196 items. My next idea: I know neural nets can be used to approximate some algorithms or functions, so I was wondering if I could make a second network that takes in the true values as well as the reconstructed values and outputs an approximated NDCG score of how well the first network did. That being said, I don't know if there are better or easier ways of achieving what I want, and I figure the hive mind of the internet is probably my best bet.
Am I overthinking things?
Is this too complicated?
Am I better off sticking with differentiable loss functions?
Is there something like NeuralNDCG but significantly more efficient? | 1 | t3_tnf4qw | 1,648,186,870 |
pytorch | Yolo annotated dataset to custom Dataloader | Hi guys,
I'm trying to learn how to implement a custom dataloader that loads a dataset annotated in YOLO format. So, I have an image with multiple objects in it, and then a txt file with multiple label rows per image.
Could you advise me how to tackle that, logic-wise? Should I crop the images by labels and supply one label row per image, as in the examples I see online, or are there other options?
Thanks a lot | 1 | t3_tl1a7f | 1,648,054,644 |
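You generally don't need to crop: a `Dataset` can return the image plus all of its boxes, with a custom `collate_fn` because each image has a different number of objects. A hedged sketch, assuming the standard YOLO txt layout (`class x_center y_center width height`, all normalized) and matching image/label filenames:

```python
import torch
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms

class YoloDataset(Dataset):
    def __init__(self, image_dir, label_dir):
        self.images = sorted(Path(image_dir).glob("*.jpg"))
        self.label_dir = Path(label_dir)
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        img = self.to_tensor(Image.open(self.images[i]).convert("RGB"))
        txt = (self.label_dir / (self.images[i].stem + ".txt")).read_text()
        # one row per object: class x_center y_center width height
        boxes = torch.tensor([float(v) for v in txt.split()]).view(-1, 5)
        return img, boxes

def collate(batch):   # images carry different numbers of boxes
    imgs, boxes = zip(*batch)
    return list(imgs), list(boxes)

loader = DataLoader(YoloDataset("images/", "labels/"),
                    batch_size=4, collate_fn=collate)
```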
pytorch | Creating a custom loss function for Object Localization | (Sorry for long read)
Hi, I am currently working on a project where I need to implement object localization in my convolutional NN model. We were given the following hints:
- In PyTorch, the loss function is just a regular function. You can define a custom loss function by defining a regular Python function and using a combination of pre-defined PyTorch loss functions inside this custom function.
- In PyTorch, all elements of a given tensor share the same type. But we saw that the last element of y_true is supposed to be an integer. You might need to convert some elements of your tensors to a different type on the fly while computing the loss.
We were also told to split the loss function into three parts: detection (using nn.BCEWithLogitsLoss), localization (using nn.MSELoss), and classification (using nn.CrossEntropyLoss).
I have the classification part figured out, but I am confused about how I am supposed to alter the model so I can get multiple outputs, for example the bounding box coords, and use them.
If I could get any hints on how to implement the Object Localization in my Convolutional NN that would be greatly appreciated :D | 1 | t3_tkvvc0 | 1,648,047,629 |
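A hedged sketch of the three-part scheme the hints describe, assuming the model's final layer outputs `5 + n_classes` values that are sliced per task, and `y_true` is packed as `[objectness, x, y, w, h, class_id]`:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()   # detection: is an object present?
mse = nn.MSELoss()             # localization: bounding-box coordinates
ce = nn.CrossEntropyLoss()     # classification: which class?

def localization_loss(y_pred, y_true):
    # y_pred: (batch, 5 + n_classes) -> [obj_logit, x, y, w, h, class logits...]
    # y_true: (batch, 6)             -> [objectness, x, y, w, h, class_id]
    det = bce(y_pred[:, 0], y_true[:, 0])
    loc = mse(y_pred[:, 1:5], y_true[:, 1:5])
    cls = ce(y_pred[:, 5:], y_true[:, 5].long())  # cast the label on the fly
    # optionally mask loc/cls by y_true[:, 0] so empty images don't contribute
    return det + loc + cls
```

For the "multiple outputs", the simplest route is a single final linear layer of width `5 + n_classes` that you slice as above; separate output heads work just as well.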
pytorch | PyTorch: Implicit Gradients returns (None) Meta-Gradient | I am attempting to implement the implicit gradients algorithm [1, 2, 3] to optimize some meta-parameters (in my case the parameters of a loss function). However, the (meta-)gradients produced are always None. Can I have some help identifying what the problem is, and how I can resolve this issue?
Below I have attached some simplified code that reproduces the error.
```
from sklearn.datasets import make_regression
import torch

# Creating a meta-network for representing the loss function.
class MetaNetwork(torch.nn.Module):
    def __init__(self):
        super(MetaNetwork, self).__init__()
        self.model = torch.nn.Sequential(
            torch.nn.Linear(2, 10),
            torch.nn.ReLU(),
            torch.nn.Linear(10, 1),
            torch.nn.Softplus()
        )

    def forward(self, y_pred, y_target):
        return self.model(torch.cat((y_pred, y_target), dim=1)).mean()

# Creating a base-network for learning the model of the data.
class BaseNetwork(torch.nn.Module):
    def __init__(self):
        super(BaseNetwork, self).__init__()
        self.model = torch.nn.Sequential(
            torch.nn.Linear(1, 10),
            torch.nn.ReLU(),
            torch.nn.Linear(10, 1)
        )

    def forward(self, x):
        return self.model(x)

# Generating some synthetic training and validation data.
X_train, y_train = make_regression(n_samples=100, n_features=1, n_informative=1, noise=0.1, random_state=1)
X_valid, y_valid = make_regression(n_samples=100, n_features=1, n_informative=1, noise=0.1, random_state=2)

# Converting data into the correct format.
X_train, y_train = torch.tensor(X_train).float(), torch.unsqueeze(torch.tensor(y_train).float(), 1)
X_valid, y_valid = torch.tensor(X_valid).float(), torch.unsqueeze(torch.tensor(y_valid).float(), 1)

# Creating our base and meta models, as well as the base optimizer.
meta_network, base_network = MetaNetwork(), BaseNetwork()
base_optimizer = torch.optim.SGD(base_network.parameters(), lr=0.01)

# Training the model using the meta-network as the loss function.
for i in range(10):
    base_optimizer.zero_grad()
    yp = base_network(X_train)
    base_loss = meta_network(yp, y_train)
    base_loss.backward()
    base_optimizer.step()

meta_loss_fn = torch.nn.MSELoss()

# Computing the training and validation (meta) loss.
train_loss = meta_loss_fn(base_network(X_train), y_train)
validation_loss = meta_loss_fn(base_network(X_valid), y_valid)

# Gradient of the validation loss with respect to the base model weights.
dloss_val_dparams = torch.autograd.grad(validation_loss, base_network.parameters(),
                                        retain_graph=True, allow_unused=True)

# Gradient of the training loss with respect to the base model weights.
dloss_train_dparams = torch.autograd.grad(train_loss, base_network.parameters(),
                                          create_graph=True, allow_unused=True)

p = v = dloss_val_dparams

for _ in range(10):
    grad = torch.autograd.grad(dloss_train_dparams, base_network.parameters(),
                               grad_outputs=v, retain_graph=True, allow_unused=True)
    grad = [g * 0.01 for g in grad]
    v = [curr_v - curr_g for (curr_v, curr_g) in zip(v, grad)]
    p = [curr_p + curr_v for (curr_p, curr_v) in zip(p, v)]

v2 = list(0.01 * pp for pp in p)
v3 = torch.autograd.grad(dloss_train_dparams, meta_network.parameters(),
                         grad_outputs=v2, allow_unused=True)

print("Meta Gradient", v3)
```
---
[1] Rajeswaran, A., Finn, C., Kakade, S. M., & Levine, S. (2019). Meta-learning with implicit gradients.
[2] Lorraine, J., Vicol, P., & Duvenaud, D. (2020). Optimizing millions of hyperparameters by implicit differentiation.
[3] Gao, B., Gouk, H., Yang, Y., & Hospedales, T. (2021). Loss function learning for domain generalization by implicit gradient. | 1 | t3_tklfsy | 1,648,007,911 |
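A hedged diagnosis of the `None` result: the final `torch.autograd.grad` call asks for the gradient of `dloss_train_dparams` with respect to `meta_network.parameters()`, but `train_loss` here is plain MSE, so nothing in its graph touches the meta-network, and the mixed second derivative is absent by construction (the inner `optimizer.step()` updates are also non-differentiable, so no dependence leaks in through the weights either). A sketch of the essential fix, computing the inner training loss with the learned loss function:

```python
# The inner training objective must be the *learned* loss; otherwise
# nothing in its graph touches meta_network and v3 is None:
train_loss = meta_network(base_network(X_train), y_train)

dloss_train_dparams = torch.autograd.grad(train_loss, base_network.parameters(),
                                          create_graph=True)

# Now d(dL_train/dw)/dphi is non-zero and the final call can return
# real meta-gradients:
v3 = torch.autograd.grad(dloss_train_dparams, meta_network.parameters(),
                         grad_outputs=v2, allow_unused=True)
```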
pytorch | Help implementing NN with L1/L2 regression in PyTorch. | Hi all,
I'm working on setting baselines for a new dataset that so far has only non-NN implementations (the dataset is quite new). It's a multivariate regression task: the model takes the inputs and yields the final prediction.
As part of this, I've run sklearn regressors, and now I'm building a neural net for the same task. However, the networks I built seem to perform well below par compared to the sklearn regressor.
My question is, how does one implement lasso / ridge regressor in pytorch that achieves performance similar to sklearn regressors? | 1 | t3_tjnz0u | 1,647,902,067 |
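For parity with sklearn, a hedged sketch of both penalties: ridge (L2) is just `weight_decay` on the optimizer, while lasso-style L1 is added to the loss by hand (`model`, `criterion`, `x`, and `y` stand for your network, base loss, and batch):

```python
import torch

# Ridge: L2 penalty, handled by the optimizer itself.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-2)

# Lasso: add an L1 term to the loss manually.
l1_lambda = 1e-4
loss = criterion(model(x), y)
loss = loss + l1_lambda * sum(p.abs().sum() for p in model.parameters())
loss.backward()
```

One caveat: sklearn's lasso/ridge use closed-form or coordinate-descent solvers and a differently scaled regularization strength, so matching them usually also requires standardizing the features and tuning the penalty coefficient.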
pytorch | DCGAN with PyTorch in Wildlife animals | I thought it will be quite interesting to see Deep Convolutional GAN’s capability in generating wildlife, so I built a model based on the DCGAN architecture through PyTorch:
[https://taying-cheng.medium.com/create-new-animals-using-dcgan-with-pytorch-2ce47810ebd4](https://taying-cheng.medium.com/create-new-animals-using-dcgan-with-pytorch-2ce47810ebd4) | 0.83 | t3_ti9un0 | 1,647,739,007 |
pytorch | Complete Guide of Swin Transformation with Full PyTorch Implementation | (I found out i wrote transformation not transformer just now🤣)
Hello everyone!
I recently read the Swin Transformer paper and tried to implement it in PyTorch. But there are no posts that FULLY explain the nitty-gritty details of the paper with a full implementation. It took me soooo long to write this post, so I wanted to share it with y’all! Hope this helps someone! The implementation is based on the official implementation from the Microsoft team.
https://jasonlee-cp.github.io/paper/Swin_Transformer/#swin-transformer-architecture | 1 | t3_ti0w0h | 1,647,712,818 |
pytorch | Help implementing the coco dataset using fiftyone | As mentioned in the title, I'm trying to use fiftyone to import my dataset from COCO. The problem is, each image has a JSON related to it, and each image has the mask for every detection. Now, if I want to get the mask for detection x in image y, all I need to do is dataset[y]['ground_truth']['detections'][x]['mask']. This is the part where the problems arise:
1 - I noticed that the masks in the same image have different sizes between themselves and even relative to the image. Why is that?
2 - I have all the masks corresponding to each detection in an image. What now? How can I create a dataloader in order to train the segmentation model?
Thank you for your attention! | 1 | t3_thycgb | 1,647,705,860 |
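On question 1: if I remember FiftyOne's format correctly, instance masks are stored relative to each detection's bounding box, which is why their sizes differ from each other and from the image. If that's right, a full-size mask can be reassembled by pasting each box-relative mask onto a blank canvas, then wrapped in a `Dataset` for question 2. A rough sketch; the field names follow the post, and `det.bounding_box` is assumed to be `[x, y, w, h]` in relative coordinates:

```python
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class CocoSegDataset(Dataset):
    def __init__(self, fo_dataset):
        self.samples = list(fo_dataset)

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        sample = self.samples[i]
        img = np.array(Image.open(sample.filepath).convert("RGB"))
        h, w = img.shape[:2]
        full_mask = np.zeros((h, w), dtype=np.int64)
        for det in sample.ground_truth.detections:
            bx, by, _, _ = det.bounding_box      # relative [x, y, w, h]
            x0, y0 = int(bx * w), int(by * h)
            m = det.mask                         # box-relative boolean mask
            full_mask[y0:y0 + m.shape[0], x0:x0 + m.shape[1]][m] = 1
            # (a real version would map det.label to a class index)
        return (torch.from_numpy(img).permute(2, 0, 1).float() / 255,
                torch.from_numpy(full_mask))
```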