IceCube",6,"['New paper! <LINK> with @ProfJohnBeacom. We study neutrino-induced dimuons, a phenomenon that has only been seen in accelerator neutrino experiments, as a new event class of neutrino telescopes like IceCube, IceCube-Gen2. @uw_icecube 1/6 <LINK>', '@ProfJohnBeacom The dimuons are mainly from neutrino-nucleus deep-inelastic scatter and W-boson production. We develop a calculational framework and show that for 10 years, IceCube can detect ≃400 dimuons and IceCube-Gen2 can detect ≃1200! 2/6 https://t.co/gYuPDTRGqC', '@ProfJohnBeacom These dimuons have very important physics potentials, including probing high-energy QCD, enabling the first detection of W-boson production (a process that will be very important for high energy neutrinos but has never been identified), and new physics. 3/6 https://t.co/ybjf3usXlZ', '@ProfJohnBeacom More excitingly, we find 19 dimuon candidates from analyzing IceCube public data! We are not totally sure if they are dimuon signals yet due to the reasons detailed in the paper, but all aspects of them match our prediction. 4/6 https://t.co/XdZbGbl2Hz', '@ProfJohnBeacom @uw_icecube Whether they are real dimuons or some new background (or signal!), it’s important to understand them by IceCube collaboration. 5/6', '@ProfJohnBeacom @uw_icecube The continued success of neutrino physics &amp; astrophysics depends on developing new tools to get the most out of the data. Developing new event classes is an important part. Our theory and observation contributions help open a valuable new direction for neutrino telescopes. 6/6']",21,10,1497
3632,39,1120185554111430656,1454878428,Avishai Gilkis,"Happy to share this new paper, with @astro_jje: How to get rid of your star's hydrogen in three easy steps! But be careful what you assume for step 3... 🌟 <LINK> @norhaslizayusof @astro_jje The Schwarzschild criterion was used for another set of models which are mentioned only briefly in the discussion. More models with the Nugis & Lamers prescription then retain hydrogen, probably because of the interplay of mixing and the dependence on surface helium abundance.",https://arxiv.org/abs/1904.09221,"We find that applying a theoretical wind mass-loss rate from Monte Carlo radiative transfer models for hydrogen-deficient stars results in significantly more leftover hydrogen following stable mass transfer through Roche-lobe overflow than when we use an extrapolation of an empirical fit for Galactic Wolf-Rayet stars, for which a negligible amount of hydrogen remains in a large set of binary stellar evolution computations. These findings have implications for modelling progenitors of Type Ib and Type IIb supernovae. Most importantly, our study stresses the sensitivity of the stellar evolution models to the assumed mass-loss rates and the need to develop a better theoretical understanding of stellar winds. ","Effects of winds on the leftover hydrogen in massive stars following
Roche lobe overflow",2,"[""Happy to share this new paper, with @astro_jje: How to get rid of your star's hydrogen in three easy steps! But be careful what you assume for step 3... 🌟\n<LINK>"", '@norhaslizayusof @astro_jje The Schwarzschild criterion was used for another set of models which are mentioned only briefly in the discussion. More models with the Nugis &amp; Lamers prescription then retain hydrogen, probably because of the interplay of mixing and the dependence on surface helium abundance.']",19,04,467
3633,75,1440223335292424207,54849207,Ian Harrison,"New paper day..! From Juan Pablo Cordero, on a way to efficiently marginalise over uncertainties in redshift distributions in cosmology analyses where you have access to realisations of the full n(z) <LINK> By ordering the discrete realisations according to a few summary values you can present the sampler with a smoother likelihood it can more easily explore. We talk about how to choose the summary values and how to order them in n-dimensional space. We show that for DES-Y3, the scope of uncertainty on redshift distributions given by the SOMPz+3sDIR+SR (<LINK>) realisations is small enough that the traditional ""mean shifting"" method is still valid (does not degrade the cosmology parameter estimation). If you wanted to use it, the code is on the des-y3 branch of cosmosis-standard-library (which, yes, is not the best place it could be 😝) <LINK> @cosmic_mar Thanks! I was mostly shepherd... It is driven by the fact that for n(z) in weak lensing we have a bunch of samples of nuisance parameters (the set of histogram bin heights of the discretised n(z)) which are generated independently of the main cosmology chain. <LINK> @cosmic_mar Feeding them into the chain in a random order can mean small steps (to the 'next' realisation of n(z)) can create a very jumpy likelihood surface which is harder to explore. @cosmic_mar We make a choice of summary parameters for the n(z) (the per-bin mean unsurprisingly turns out to be a very good one) which we expect to correlate with likelihood, then order the realisations according to that, with the rank becoming the nuisance parameter in the chain. <LINK> @cosmic_mar The other slight subtlety was how to do such a ranking in &gt;1D (when we have more than one tomographic bin) but Gary B had the insight this was a version of the Linear Sum Assignment Problem, and solving that allows you to place similar n(z) both nearby and on a regular grid. @cosmic_mar ...so, not specific to n(z) really, but situations where you have a bunch of realisations of nuisance parts of your theory already sitting round and want to use them in a chain. @cosmic_mar ...which I guess may not be a common situation 😆 It was definitely developed as an ad-hoc solution to this particular problem.",https://arxiv.org/abs/2109.09636,"Cosmological information from weak lensing surveys is maximised by dividing source galaxies into tomographic sub-samples for which the redshift distributions are estimated. Uncertainties on these redshift distributions must be correctly propagated into the cosmological results. We present hyperrank, a new method for marginalising over redshift distribution uncertainties in cosmological analyses, using discrete samples from the space of all possible redshift distributions. This is demonstrated in contrast to previous highly simplified parametric models of the redshift distribution uncertainty. In hyperrank the set of proposed redshift distributions is ranked according to a small (in this work between one and four) number of summary values, which are then sampled along with other nuisance parameters and cosmological parameters in the Monte Carlo chain used for inference. This can be regarded as a general method for marginalising over discrete realisations of data vector variation with nuisance parameters, which can consequently be sampled separately to the main parameters of interest, allowing for increased computational efficiency. 
We focus on the case of weak lensing cosmic shear analyses and demonstrate our method using simulations made for the Dark Energy Survey (DES). We show the method can correctly and efficiently marginalise over a range of models for the redshift distribution uncertainty. Finally, we compare hyperrank to the common mean-shifting method of marginalising over redshift uncertainty, validating that this simpler model is sufficient for use in the DES Year 3 cosmology results presented in companion papers. ","Dark Energy Survey Year 3 results: Marginalisation over redshift
distribution uncertainties using ranking of discrete realisations",10,"['New paper day..! From Juan Pablo Cordero, on a way to efficiently marginalise over uncertainties in redshift distributions in cosmology analyses where you have access to realisations of the full n(z)\n<LINK>', 'By ordering the discrete realisations according to a few summary values you can present the sampler with a smoother likelihood it can more easily explore. We talk about how to choose the summary values and how to order them in n-dimensional space.', 'We show that for DES-Y3, the scope of uncertainty on redshift distributions given by the SOMPz+3sDIR+SR (https://t.co/OCACtfjP7d) realisations is small enough that the traditional ""mean shifting"" method is still valid (does not degrade the cosmology parameter estimation).', 'If you wanted to use it, the code is on the des-y3 branch of cosmosis-standard-library (which, yes, is not the best place it could be 😝)\nhttps://t.co/PkssHiBqlD', '@cosmic_mar Thanks! I was mostly shepherd... It is driven by the fact that for n(z) in weak lensing we have a bunch of samples of nuisance parameters (the set of histogram bin heights of the discretised n(z)) which are generated independently of the main cosmology chain. https://t.co/DoqjrKzC2F', ""@cosmic_mar Feeding them into the chain in a random order can mean small steps (to the 'next' realisation of n(z)) can create a very jumpy likelihood surface which is harder to explore."", '@cosmic_mar We make a choice of summary parameters for the n(z) (the per-bin mean unsurprisingly turns out to be a very good one) which we expect to correlate with likelihood, then order the realisations according to that, with the rank becoming the nuisance parameter in the chain. https://t.co/ili3Qmm3zQ', '@cosmic_mar The other slight subtlety was how to do such a ranking in &gt;1D (when we have more than one tomographic bin) but Gary B had the insight this was a version of the Linear Sum Assignment Problem, and solving that allows you to place similar n(z) both nearby and on a regular grid.', '@cosmic_mar ...so, not specific to n(z) really, but situations where you have a bunch of realisations of nuisance parts of your theory already sitting round and want to use them in a chain.', '@cosmic_mar ...which I guess may not be a common situation 😆 It was definitely developed as an ad-hoc solution to this particular problem.']",21,09,2229
3634,69,1044489488028889088,869154646694264832,Hugh Salimbeni,"Pleased to share our new paper on orthogonal decoupled GP inference, with Ching-An Cheng, Byron Boots and @mpd, appearing at NIPS 2018. Paper: <LINK> code: <LINK> Five tweet summary:👇 1/5 Gaussian process variational posteriors are a great idea (see @alexggmatthews's thesis (<LINK>) for details), but typically incur cubic scaling in the covariance parameterization, limiting how many ‘inducing points’ can be used. This is a shame. 2/5 We can squeeze more flexibility into the mean, as shown in <LINK>. This is the ‘decoupled’ approach, as it uses different parameterizations for the mean and covariance. The ‘coupled’ approach is the usual one where the mean/covariance share the same basis 3/5 This seems like a good idea, but it unfortunately has two downsides: 1) it’s non-convex (unlike the coupled case) and 2) it doesn’t admit a natural gradient update (which has proved very useful in the coupled case, see e.g. <LINK>) 4/5 We propose a new basis. It has all the advantages of the original decoupled basis, but with two additional properties: convexity, and a natural gradient update rule. 5/5 The basis has an orthogonal projection for the additional component in the mean, which decouples the natural gradient update. Implementing a natural gradient step is just as easy as the coupled case. See code <LINK>",https://arxiv.org/abs/1809.08820,"Gaussian processes (GPs) provide a powerful non-parametric framework for reasoning over functions. Despite appealing theory, its superlinear computational and memory complexities have presented a long-standing challenge. State-of-the-art sparse variational inference methods trade modeling accuracy against complexity. However, the complexities of these methods still scale superlinearly in the number of basis functions, implying that that sparse GP methods are able to learn from large datasets only when a small model is used. Recently, a decoupled approach was proposed that removes the unnecessary coupling between the complexities of modeling the mean and the covariance functions of a GP. It achieves a linear complexity in the number of mean parameters, so an expressive posterior mean function can be modeled. While promising, this approach suffers from optimization difficulties due to ill-conditioning and non-convexity. In this work, we propose an alternative decoupled parametrization. It adopts an orthogonal basis in the mean function to model the residues that cannot be learned by the standard coupled approach. Therefore, our method extends, rather than replaces, the coupled approach to achieve strictly better performance. This construction admits a straightforward natural gradient update rule, so the structure of the information manifold that is lost during decoupling can be leveraged to speed up learning. Empirically, our algorithm demonstrates significantly faster convergence in multiple experiments. ",Orthogonally Decoupled Variational Gaussian Processes,6,"['Pleased to share our new paper on orthogonal decoupled GP inference, with Ching-An Cheng, Byron Boots and @mpd, appearing at NIPS 2018. Paper: <LINK> code: <LINK> Five tweet summary:👇', ""1/5 Gaussian process variational posteriors are a great idea (see @alexggmatthews's thesis (https://t.co/SR1MAEj2Sf) for details), but typically incur cubic scaling in the covariance parameterization, limiting how many ‘inducing points’ can be used. This is a shame."", '2/5 We can squeeze more flexibility into the mean, as shown in https://t.co/1vg3MOCyfE. 
This is the ‘decoupled’ approach, as it uses different parameterizations for the mean and covariance. The ‘coupled’ approach is the usual one where the mean/covariance share the same basis', '3/5 This seems like a good idea, but it unfortunately has two downsides: 1) it’s non-convex (unlike the coupled case) and 2) it doesn’t admit a natural gradient update (which has proved very useful in the coupled case, see e.g. https://t.co/gzE3fGudTz)', '4/5 We propose a new basis. It has all the advantages of the original decoupled basis, but with two additional properties: convexity, and a natural gradient update rule.', '5/5 The basis has an orthogonal projection for the additional component in the mean, which decouples the natural gradient update. Implementing a natural gradient step is just as easy as the coupled case. See code https://t.co/QaQCv79oHS']",18,09,1319
3635,39,1153555839669678081,268337552,Nicolas Kourtellis,"Our new ACM TWEB paper on Detecting Cyberbullying and Cyberaggression in Social Media is finally here! Check out the preprint: <LINK> with @dchatzakou, @jhblackb, @emilianoucl, @gianluca_string, @iliasl, @athenavakali & the support of @ENCASE_H2020, @TEFresearch",https://arxiv.org/abs/1907.08873,"Cyberbullying and cyberaggression are increasingly worrisome phenomena affecting people across all demographics. More than half of young social media users worldwide have been exposed to such prolonged and/or coordinated digital harassment. Victims can experience a wide range of emotions, with negative consequences such as embarrassment, depression, isolation from other community members, which embed the risk to lead to even more critical consequences, such as suicide attempts. In this work, we take the first concrete steps to understand the characteristics of abusive behavior in Twitter, one of today's largest social media platforms. We analyze 1.2 million users and 2.1 million tweets, comparing users participating in discussions around seemingly normal topics like the NBA, to those more likely to be hate-related, such as the Gamergate controversy, or the gender pay inequality at the BBC station. We also explore specific manifestations of abusive behavior, i.e., cyberbullying and cyberaggression, in one of the hate-related communities (Gamergate). We present a robust methodology to distinguish bullies and aggressors from normal Twitter users by considering text, user, and network-based attributes. Using various state-of-the-art machine learning algorithms, we classify these accounts with over 90% accuracy and AUC. Finally, we discuss the current status of Twitter user accounts marked as abusive by our methodology, and study the performance of potential mechanisms that can be used by Twitter to suspend users in the future. ",Detecting Cyberbullying and Cyberaggression in Social Media,1,"['Our new ACM TWEB paper on Detecting Cyberbullying and Cyberaggression in Social Media is finally here!\nCheck out the preprint: <LINK>\nwith @dchatzakou, @jhblackb, @emilianoucl, @gianluca_string, @iliasl, @athenavakali &amp; the support of @ENCASE_H2020, @TEFresearch']",19,07,262
3636,64,1139240969738452992,16003634,Santiago Castro,"New paper accepted at @ACL2019_Italy in collaboration with @hdevamanyu (first co-author along with me), @vperez_r, Roger Zimmermann, @radamihalcea and @soujanyaporia: ""Towards Multimodal Sarcasm Detection (An _Obviously_ Perfect Paper)"" <LINK> #NLProc <LINK> It's a balanced corpus with 690 5-second-length video clips in English, marked as either sarcastic or non-sarcastic + ~15s of previous video context + transcriptions. The dataset is mostly composed of @bigbangtheory and @FriendsTV video clips. <LINK> We called it MUStARD 🌭😋, and it's available here (+ code): <LINK> We have a lot from Sheldon and Chandler, and you know they are quite sarcastic,even though Sheldon doesn't know what it is!😂They have different styles clearly, btw You may also know Dorothy, from Golden Girls (I didn't before). She's also pretty sarcastic and it's here as well! <LINK> Here's an example sarcastic instance from the dataset. <LINK> We ran some simple baselines and found out that audio in English plays an important role when trying to identify sarcasm for unknown speakers. If you add text is slightly better for our simple baselines. <LINK> This work was done in a collaboration between @UMich, @NUSingapore and @sutdsg",https://arxiv.org/abs/1906.01815,"Sarcasm is often expressed through several verbal and non-verbal cues, e.g., a change of tone, overemphasis in a word, a drawn-out syllable, or a straight looking face. Most of the recent work in sarcasm detection has been carried out on textual data. In this paper, we argue that incorporating multimodal cues can improve the automatic classification of sarcasm. As a first step towards enabling the development of multimodal approaches for sarcasm detection, we propose a new sarcasm dataset, Multimodal Sarcasm Detection Dataset (MUStARD), compiled from popular TV shows. MUStARD consists of audiovisual utterances annotated with sarcasm labels. Each utterance is accompanied by its context of historical utterances in the dialogue, which provides additional information on the scenario where the utterance occurs. Our initial results show that the use of multimodal information can reduce the relative error rate of sarcasm detection by up to 12.9% in F-score when compared to the use of individual modalities. The full dataset is publicly available for use at this https URL ",Towards Multimodal Sarcasm Detection (An _Obviously_ Perfect Paper),7,"['New paper accepted at @ACL2019_Italy in collaboration with @hdevamanyu (first co-author along with me), @vperez_r, Roger Zimmermann, @radamihalcea and @soujanyaporia:\n\n""Towards Multimodal Sarcasm Detection (An _Obviously_ Perfect Paper)""\n\n<LINK> #NLProc <LINK>', ""It's a balanced corpus with 690 5-second-length video clips in English, marked as either sarcastic or non-sarcastic + ~15s of previous video context + transcriptions.\n\nThe dataset is mostly composed of @bigbangtheory and @FriendsTV video clips. https://t.co/BAZEGAEa6m"", ""We called it MUStARD 🌭😋, and it's available here (+ code): https://t.co/4OZskIvNTd"", ""We have a lot from Sheldon and Chandler, and you know they are quite sarcastic,even though Sheldon doesn't know what it is!😂They have different styles clearly, btw\n\nYou may also know Dorothy, from Golden Girls (I didn't before). She's also pretty sarcastic and it's here as well! https://t.co/AKdtPucZjC"", ""Here's an example sarcastic instance from the dataset. 
https://t.co/g4BJU6vE3e"", 'We ran some simple baselines and found out that audio in English plays an important role when trying to identify sarcasm for unknown speakers. If you add text is slightly better for our simple baselines. https://t.co/UbjtlThF4B', 'This work was done in a collaboration between @UMich, @NUSingapore and @sutdsg']",19,06,1213
3637,96,1273557885147058177,94167230,László Molnár,"Our paper about Betelgeuse, led by @MeridithJoyceGR, is finally ready! Submitted and ready for your comments at <LINK> (and here) We set out to explain the dip - we did a TON of things but that: new light curve, new evo and seismic models, new hydro models. <LINK> We did our trusted evo+seismic approach with MESA+GYRE. Some highlights: - B had to go through a merger, initial fast-rotator models don't work. - tho @emsque measured Teff by +-25K (yay!), different mixing lengths can shift tracks +- 200K, making evo track fitting uncertain. <LINK> Don't despair tho! B is also a pulsating star so we have a seismic constraint. Turns out the period constrains the mass and radius pretty effectively. We say 750(+62-30) R_Sun and 16.5-19 M_Sun for present-day radius and mass. <LINK> Biggest feat? We know the angular diameter, we now have a radius: we can calculate distance! I present you the first seismic parallax/distance of Betelgeuse, 165(+16-8) pc. This agrees with the Hipparcos data, but closer than the latest Hipparcos+radio combined result. <LINK> Why? We hypothesize that persistent hot spots and other surface features create more/more coherent noise than previous works estimated. Difference btwn linear and non-linear pulsation periods may also play a small role. (Img: Kervella et al. 2018) <LINK> Speaking of variability, we also dug up some old space-based photometry from the SMEI cameras, did our best to correct it and scaled to the public V data. Red is the new data. I think it's pretty rad! <LINK> Finally, we experimented with some implicit hydro calculations. While we run into the same non-linearity issues in super-bright models as others did, we were able to confirm that the star pulsates in the fundamental radial mode. /fin <LINK>",https://arxiv.org/abs/2006.09837,"We conduct a rigorous examination of the nearby red supergiant Betelgeuse by drawing on the synthesis of new observational data and three different modeling techniques. Our observational results include the release of new, processed photometric measurements collected with the space-based SMEI instrument prior to Betelgeuse's recent, unprecedented dimming event. We detect the first radial overtone in the photometric data and report a period of $185\pm13.5$ d. Our theoretical predictions include self-consistent results from multi-timescale evolutionary, oscillatory, and hydrodynamic simulations conducted with the Modules for Experiments in Stellar Astrophysics (MESA) software suite. Significant outcomes of our modeling efforts include a precise prediction for the star's radius: $764^{+116}_{-62} R_{\odot}$. In concert with additional constraints, this allows us to derive a new, independent distance estimate of $168^ {+27}_{-15}$ pc and a parallax of $\pi=5.95^{+0.58}_{-0.85}$ mas, in good agreement with Hipparcos but less so with recent radio measurements. Seismic results from both perturbed hydrostatic and evolving hydrodynamic simulations constrain the period and driving mechanisms of Betelgeuse's dominant periodicities in new ways. Our analyses converge to the conclusion that Betelgeuse's $\approx 400$ day period is the result of pulsation in the fundamental mode, driven by the $\kappa$-mechanism. Grid-based hydrodynamic modeling reveals that the behavior of the oscillating envelope is mass-dependent, and likewise suggests that the non-linear pulsation excitation time could serve as a mass constraint. 
Our results place $\alpha$ Ori definitively in the core helium-burning phase near the base of the red supergiant branch. We report a present-day mass of $16.5$--$19 ~M_{\odot}$---slightly lower than typical literature values. ","Standing on the shoulders of giants: New mass and distance estimates for
Betelgeuse through combined evolutionary, asteroseismic, and hydrodynamical
simulations with MESA",7,"['Our paper about Betelgeuse, led by @MeridithJoyceGR, is finally ready! Submitted and ready for your comments at <LINK> (and here)\nWe set out to explain the dip - we did a TON of things but that: new light curve, new evo and seismic models, new hydro models. <LINK>', ""We did our trusted evo+seismic approach with MESA+GYRE. Some highlights:\n- B had to go through a merger, initial fast-rotator models don't work. \n- tho @emsque measured Teff by +-25K (yay!), different mixing lengths can shift tracks +- 200K, making evo track fitting uncertain. https://t.co/k1MxF4q3OY"", ""Don't despair tho! B is also a pulsating star so we have a seismic constraint. Turns out the period constrains the mass and radius pretty effectively. We say 750(+62-30) R_Sun and 16.5-19 M_Sun for present-day radius and mass. https://t.co/6qgOucFxgu"", 'Biggest feat? We know the angular diameter, we now have a radius: we can calculate distance! I present you the first seismic parallax/distance of Betelgeuse, 165(+16-8) pc. This agrees with the Hipparcos data, but closer than the latest Hipparcos+radio combined result. https://t.co/DUavfeTC9x', 'Why? We hypothesize that persistent hot spots and other surface features create more/more coherent noise than previous works estimated. Difference btwn linear and non-linear pulsation periods may also play a small role. (Img: Kervella et al. 2018) https://t.co/zje8bODWRg', ""Speaking of variability, we also dug up some old space-based photometry from the SMEI cameras, did our best to correct it and scaled to the public V data. Red is the new data. I think it's pretty rad! https://t.co/xPbNpSjN0C"", 'Finally, we experimented with some implicit hydro calculations. While we run into the same non-linearity issues in super-bright models as others did, we were able to confirm that the star pulsates in the fundamental radial mode. /fin https://t.co/gFu0R6dg0n']",20,06,1763
3638,3,1390399246084476934,185910194,Graham Neubig,"MetaXL is a new method for cross-lingual transfer to extremely low-resource languages that works by meta-learning transformation functions to improve gradient alignment between source and target languages. See our #NAACL2021 paper! <LINK> 1/2 <LINK> Code is also available: <LINK> Great work by @xiamengzhou, Guoqing Zheng, and other collaborators at @MSFTResearch! 2/2",https://arxiv.org/abs/2104.07908,"The combination of multilingual pre-trained representations and cross-lingual transfer learning is one of the most effective methods for building functional NLP systems for low-resource languages. However, for extremely low-resource languages without large-scale monolingual corpora for pre-training or sufficient annotated data for fine-tuning, transfer learning remains an under-studied and challenging task. Moreover, recent work shows that multilingual representations are surprisingly disjoint across languages, bringing additional challenges for transfer onto extremely low-resource languages. In this paper, we propose MetaXL, a meta-learning based framework that learns to transform representations judiciously from auxiliary languages to a target one and brings their representation spaces closer for effective transfer. Extensive experiments on real-world low-resource languages - without access to large-scale monolingual corpora or large amounts of labeled data - for tasks like cross-lingual sentiment analysis and named entity recognition show the effectiveness of our approach. Code for MetaXL is publicly available at github.com/microsoft/MetaXL. ","MetaXL: Meta Representation Transformation for Low-resource
Cross-lingual Learning",2,"['MetaXL is a new method for cross-lingual transfer to extremely low-resource languages that works by meta-learning transformation functions to improve gradient alignment between source and target languages. See our #NAACL2021 paper! <LINK> 1/2 <LINK>', 'Code is also available: https://t.co/pBauwCGnbu\nGreat work by @xiamengzhou, Guoqing Zheng, and other collaborators at @MSFTResearch! 2/2']",21,04,369
3639,50,1321459591809519622,720027280051957761,Oliver Newton,"I’ve had my head buried in code bugs, so I nearly missed the chance to promote a new paper that I collaborated on which came out today! Wolfgang Enzi led on a joint analysis of several methods to tighten constraints on thermal relic dark matter models <LINK> 1/6 Enzi adopts a Bayesian approach to combine studies of gravitational lensing, the Lyman-alpha forest and MW satellite galaxies. From this, he obtains a lower limit on the thermal relic particle mass of 6.733 keV at 95 per cent confidence. 2/6 <LINK> This also rules out 7.1 keV sterile neutrino dark matter models for large values of the lepton asymmetry parameter, which is particularly interesting as it is one of the models proposed to explain the 3.55 keV excess observed in X-ray spectra of DM-dominated objects. 3/6 How can we improve the results further? Currently, the MW satellites and Lyman-alpha forest provide the strongest constraints on the half mode mass; however, both analyses are subject to assumptions about galaxy formation and feedback processes. 4/6 Other uncertainties peculiar to each approach also creep in (e.g. the MW mass affects the satellite galaxy luminosity function; the choice of IGM thermal history can make the Lyman-alpha forest consistent with both CDM and WDM models). So, plenty of areas for improvement! 5/6 It's a great piece of work and I encourage anyone interested in astrophysical constraints on DM models to have a read! 6/6",https://arxiv.org/abs/2010.13802,"We derive joint constraints on the warm dark matter (WDM) half-mode scale by combining the analyses of a selection of astrophysical probes: strong gravitational lensing with extended sources, the Lyman-$\alpha$ forest, and the number of luminous satellites in the Milky Way. We derive an upper limit of $\lambda_{\rm hm}=0.089{\rm~Mpc~h^{-1} }$ at the 95 per cent confidence level, which we show to be stable for a broad range of prior choices. Assuming a Planck cosmology and that WDM particles are thermal relics, this corresponds to an upper limit on the half-mode mass of $M_{\rm hm }< 3 \times 10^{7} {\rm~M_{\odot}~h^{-1}}$, and a lower limit on the particle mass of $m_{\rm th }> 6.048 {\rm~keV}$, both at the 95 per cent confidence level. We find that models with $\lambda_{\rm hm}> 0.223 {\rm~Mpc~h^{-1} }$ (corresponding to $m_{\rm th }> 2.552 {\rm~keV}$ and $M_{\rm hm }< 4.8 \times 10^{8} {\rm~M_{\odot}~h^{-1}}$) are ruled out with respect to the maximum likelihood model by a factor $\leq 1/20$. For lepton asymmetries $L_6>10$, we rule out the $7.1 {\rm~keV}$ sterile neutrino dark matter model, which presents a possible explanation to the unidentified $3.55 {\rm~keV}$ line in the Milky Way and clusters of galaxies. The inferred 95 percentiles suggest that we further rule out the ETHOS-4 model of self-interacting DM. Our results highlight the importance of extending the current constraints to lower half-mode scales. We address important sources of systematic errors and provide prospects for how the constraints of these probes can be improved upon in the future. ","Joint constraints on thermal relic dark matter from strong gravitational
lensing, the Lyman-$\alpha$ forest, and Milky Way satellites",6,"['I’ve had my head buried in code bugs, so I nearly missed the chance to promote a new paper that I collaborated on which came out today! Wolfgang Enzi led on a joint analysis of several methods to tighten constraints on thermal relic dark matter models\n<LINK>\n\n1/6', 'Enzi adopts a Bayesian approach to combine studies of gravitational lensing, the Lyman-alpha forest and MW satellite galaxies. From this, he obtains a lower limit on the thermal relic particle mass of 6.733 keV at 95 per cent confidence.\n\n2/6 https://t.co/feSkZCHkzK', 'This also rules out 7.1 keV sterile neutrino dark matter models for large values of the lepton asymmetry parameter, which is particularly interesting as it is one of the models proposed to explain the 3.55 keV excess observed in X-ray spectra of DM-dominated objects.\n\n3/6', 'How can we improve the results further? Currently, the MW satellites and Lyman-alpha forest provide the strongest constraints on the half mode mass; however, both analyses are subject to assumptions about galaxy formation and feedback processes.\n\n4/6', 'Other uncertainties peculiar to each approach also creep in (e.g. the MW mass affects the satellite galaxy luminosity function; the choice of IGM thermal history can make the Lyman-alpha forest consistent with both CDM and WDM models). So, plenty of areas for improvement!\n\n5/6', ""It's a great piece of work and I encourage anyone interested in astrophysical constraints on DM models to have a read!\n\n6/6""]",20,10,1433
3640,38,1442535536246222855,900584226730512384,Shiyue Zhang,"Interested in finding a balance btw automation and human correlation for summary evaluation? Happy to share our #EMNLP2021 paper which introduces new metrics: Lite2Pyramid, Lite3Pyramid, Lite2.xPyramid w @MohitBan47 (@uncnlp) <LINK> <LINK> 1/7 <LINK> Human eval. for summarization is reliable but non-reproducible and expensive. Automatic metrics are cheap and reproducible but poorly correlated with human judgment. We propose flexible semi-automatic to automatic metrics, following Pyramid/LitePyramid human eval. protocol. 2/7 Semi-automatic Lite2Pyramid retains the reusable human-labeled Summary Content Units (SCUs) for reference(s) but replaces the manual work of judging SCUs’ presence in system summaries with a natural language inference (NLI) model. 3/7 Fully-automatic Lite3Pyramid further substitutes SCUs with automatically extracted Semantic Triplet Units (STUs) via a semantic role labeling (SRL) model. 4/7 Third, we propose in-between metrics Lite2.xPyramid, where inspired by active learning, we use a simple regressor to predict how well the STUs can simulate SCUs & retain SCUs more difficult to simulate, which provides a smooth transition+balance btw automation and manual eval 5/7 Comparing to 15 metrics, we evaluate human-metric correlations on 4 meta-eval datasets (incl. our new PyrXSum). Lite2Pyr consistently has best summ-level correlations; Lite3Pyr works competitively; Lite2.xPyr trades off small correlation drops for larger manual effort reduction. <LINK> SCUs annotation or STUs extraction only needs to be done once, so they can come with data or be done in pre-processing. When they are ready & one TITAN V GPU is available, it takes around 2.5mins to evaluate 500 CNN/DM examples. We provide support at: <LINK> 7/7 (Thanks to @ani_nenkova @RPassonneau for Pyramid and @obspp18, Gabay, @AlexGaoYang, Ronen, @ramakanth1729, Bansal, Amsterdamer, Ido Dagan for LitePyramid! Thank @EasonNie @huggingface for NLI models, @_danieldeutsch @DanRothNLP for SacreROUGE, and @ai2_allennlp for SRL tools 😀)",https://arxiv.org/abs/2109.11503,"Human evaluation for summarization tasks is reliable but brings in issues of reproducibility and high costs. Automatic metrics are cheap and reproducible but sometimes poorly correlated with human judgment. In this work, we propose flexible semiautomatic to automatic summary evaluation metrics, following the Pyramid human evaluation method. Semi-automatic Lite2Pyramid retains the reusable human-labeled Summary Content Units (SCUs) for reference(s) but replaces the manual work of judging SCUs' presence in system summaries with a natural language inference (NLI) model. Fully automatic Lite3Pyramid further substitutes SCUs with automatically extracted Semantic Triplet Units (STUs) via a semantic role labeling (SRL) model. Finally, we propose in-between metrics, Lite2.xPyramid, where we use a simple regressor to predict how well the STUs can simulate SCUs and retain SCUs that are more difficult to simulate, which provides a smooth transition and balance between automation and manual evaluation. Comparing to 15 existing metrics, we evaluate human-metric correlations on 3 existing meta-evaluation datasets and our newly-collected PyrXSum (with 100/10 XSum examples/systems). 
It shows that Lite2Pyramid consistently has the best summary-level correlations; Lite3Pyramid works better than or comparable to other automatic metrics; Lite2.xPyramid trades off small correlation drops for larger manual effort reduction, which can reduce costs for future data collection. Our code and data are publicly available at: this https URL ",Finding a Balanced Degree of Automation for Summary Evaluation,8,"['Interested in finding a balance btw automation and human correlation for summary evaluation? Happy to share our #EMNLP2021 paper which introduces new metrics: Lite2Pyramid, Lite3Pyramid, Lite2.xPyramid\n\nw @MohitBan47 (@uncnlp)\n\n<LINK>\n\n<LINK>\n1/7 <LINK>', 'Human eval. for summarization is reliable but non-reproducible and expensive. Automatic metrics are cheap and reproducible but poorly correlated with human judgment. We propose flexible semi-automatic to automatic metrics, following Pyramid/LitePyramid human eval. protocol. 2/7', 'Semi-automatic Lite2Pyramid retains the reusable human-labeled Summary Content Units (SCUs) for reference(s) but replaces the manual work of judging SCUs’ presence in system summaries with a natural language inference (NLI) model. 3/7', 'Fully-automatic Lite3Pyramid further substitutes SCUs with automatically extracted Semantic Triplet Units (STUs) via a semantic role labeling (SRL) model. 4/7', 'Third, we propose in-between metrics Lite2.xPyramid, where inspired by active learning, we use a simple regressor to predict how well the STUs can simulate SCUs &amp; retain SCUs more difficult to simulate, which provides a smooth transition+balance btw automation and manual eval 5/7', 'Comparing to 15 metrics, we evaluate human-metric correlations on 4 meta-eval datasets (incl. our new PyrXSum). Lite2Pyr consistently has best summ-level correlations; Lite3Pyr works competitively; Lite2.xPyr trades off small correlation drops for larger manual effort reduction. https://t.co/ipMWfxM1N5', 'SCUs annotation or STUs extraction only needs to be done once, so they can come with data or be done in pre-processing. When they are ready &amp; one TITAN V GPU is available, it takes around 2.5mins to evaluate 500 CNN/DM examples. We provide support at: https://t.co/bWOLWVdsIN \n7/7', '(Thanks to @ani_nenkova @RPassonneau for Pyramid and @obspp18, Gabay, @AlexGaoYang, Ronen, @ramakanth1729, Bansal, Amsterdamer, Ido Dagan for LitePyramid! Thank @EasonNie @huggingface for NLI models, @_danieldeutsch @DanRothNLP for SacreROUGE, and @ai2_allennlp for SRL tools 😀)']",21,09,2033
3641,94,1336868143189417984,2872512304,Dr. Elizabeth Hobson,"New preprint! Figuring out how to best analyze social network data can be a brain breaker. In this paper, we discuss several considerations, potential pitfalls, and best practices for thinking through your analyses. <LINK> <LINK> This was a great collaboration that started as a @NIMBioS working group. Careful readers may spot three interesting quirks / easter eggs hidden in the paper… <LINK> Thanks to co-first-author @mattjsilk for a fantastic supplement (~80 pages of examples!) & terrific co-authors Nina Fefferman, @DanLarremore, Puck Rombach, Saray Shai, & @NoaPinter Thanks also to our reviewers who provided a ton of useful feedback - it was a very positive review experience! We're revising now, so if anyone wants to catch the quirks/easter eggs they will likely be much harder to find in the future as we clean some things up...Any guesses?",https://arxiv.org/abs/2012.04720,"Analyzing social networks is challenging. Key features of relational data require the use of non-standard statistical methods such as developing system-specific null, or reference, models that randomize one or more components of the observed data. Here we review a variety of randomization procedures that generate reference models for social network analysis. Reference models provide an expectation for hypothesis-testing when analyzing network data. We outline the key stages in producing an effective reference model and detail four approaches for generating reference distributions: permutation, resampling, sampling from a distribution, and generative models. We highlight when each type of approach would be appropriate and note potential pitfalls for researchers to avoid. Throughout, we illustrate our points with examples from a simulated social system. Our aim is to provide social network researchers with a deeper understanding of analytical approaches to enhance their confidence when tailoring reference models to specific research questions. ","A guide to choosing and implementing reference models for social network
analysis",4,"['New preprint! Figuring out how to best analyze social network data can be a brain breaker. In this paper, we discuss several considerations, potential pitfalls, and best practices for thinking through your analyses. <LINK> <LINK>', 'This was a great collaboration that started as a @NIMBioS working group. Careful readers may spot three interesting quirks / easter eggs hidden in the paper… https://t.co/ZslasOfsSf', 'Thanks to co-first-author @mattjsilk for a fantastic supplement (~80 pages of examples!) &amp; terrific co-authors Nina Fefferman, @DanLarremore, Puck Rombach, Saray Shai, &amp; @NoaPinter', ""Thanks also to our reviewers who provided a ton of useful feedback - it was a very positive review experience! We're revising now, so if anyone wants to catch the quirks/easter eggs they will likely be much harder to find in the future as we clean some things up...Any guesses?""]",20,12,853
3642,65,1116114983266463744,45985352,Florian Golemo,"New paper: “Active Domain Randomization”. A better, safer, and easier-to-debug domain randomization for transferring your #robot policy from #sim2real . <LINK> Work lead by @SportsballBMan, colab with @linuxpotter @fgolemo, @chrisjpal, and @duckietown_coo <LINK>",https://arxiv.org/abs/1904.04762,"Domain randomization is a popular technique for improving domain transfer, often used in a zero-shot setting when the target domain is unknown or cannot easily be used for training. In this work, we empirically examine the effects of domain randomization on agent generalization. Our experiments show that domain randomization may lead to suboptimal, high-variance policies, which we attribute to the uniform sampling of environment parameters. We propose Active Domain Randomization, a novel algorithm that learns a parameter sampling strategy. Our method looks for the most informative environment variations within the given randomization ranges by leveraging the discrepancies of policy rollouts in randomized and reference environment instances. We find that training more frequently on these instances leads to better overall agent generalization. Our experiments across various physics-based simulated and real-robot tasks show that this enhancement leads to more robust, consistent policies. ",Active Domain Randomization,1,"['New paper: “Active Domain Randomization”. A better, safer, and easier-to-debug domain randomization for transferring your #robot policy from #sim2real .\n<LINK>\nWork lead by @SportsballBMan, colab with @linuxpotter @fgolemo, @chrisjpal, and @duckietown_coo <LINK>']",19,04,262
3643,61,1173866678700191744,864970046,Falk Herwig,"Great talks @NPA_IX and fittingly our new @jina_cee #nugrid paper on weak i-process nuclear physics uncertainty impact for metal-poor star abundance predictions today on #arXiv <LINK> @ArtemisSpyrou @Cec_gRESONANT measure 75Ga(n,g) and Ni66(n,g) 😀 @PHASTatUVIC @NPA_IX @jina_cee @ArtemisSpyrou @Cec_gRESONANT @PHASTatUVIC In addition to Ga75 and Ni66 there are four more (n,g) measurements needed most urgently for i-process simulation predictions for metal-poor stars: Kr88, Rb89, Cs137 and I135 <LINK> @jina_cee #nugrid @TRIUMFLab",https://arxiv.org/abs/1909.07011,"Several anomalous elemental abundance ratios have been observed in the metal-poor star HD94028. We assume that its high [As/Ge] ratio is a product of a weak intermediate (i) neutron-capture process. Given that observational errors are usually smaller than predicted nuclear physics uncertainties, we have first set up a benchmark one-zone i-process nucleosynthesis simulation results of which provide the best fit to the observed abundances. We have then performed Monte Carlo simulations in which 113 relevant (n,$\gamma$) reaction rates of unstable species were randomly varied within Hauser-Feshbach model uncertainty ranges for each reaction to estimate the impact on the predicted stellar abundances. One of the interesting results of these simulations is a double-peaked distribution of the As abundance, which is caused by the variation of the $^{75}$Ga (n,$\gamma$) cross section. This variation strongly anti-correlates with the predicted As abundance, confirming the necessity for improved theoretical or experimental bounds on this cross section. The $^{66}$Ni (n,$\gamma$) reaction is found to behave as a major bottleneck for the i-process nucleosynthesis. Our analysis finds the Pearson product-moment correlation coefficient $r_\mathrm{P} > 0.2$ for all of the i-process elements with $32 \leq Z \leq 42$, with significant changes in their predicted abundances showing up when the rate of this reaction is reduced to its theoretically constrained lower bound. Our results are applicable to any other stellar nucleosynthesis site with the similar i-process conditions, such as Sakurai's object (V4334 Sagittarii) or rapidly-accreting white dwarfs. ","The impact of (n,$\gamma$) reaction rate uncertainties on the predicted
abundances of i-process elements with $32\leq Z\leq 48$ in the metal-poor
star HD94028",2,"['Great talks @NPA_IX and fittingly our new @jina_cee #nugrid paper on weak i-process nuclear physics uncertainty impact for metal-poor star abundance predictions today on #arXiv <LINK> @ArtemisSpyrou @Cec_gRESONANT measure 75Ga(n,g) and Ni66(n,g) 😀 @PHASTatUVIC', '@NPA_IX @jina_cee @ArtemisSpyrou @Cec_gRESONANT @PHASTatUVIC In addition to Ga75 and Ni66 there are four more (n,g) measurements needed most urgently for i-process simulation predictions for metal-poor stars: Kr88, Rb89, Cs137 and I135 https://t.co/7Q1yoPj9E5 @jina_cee #nugrid @TRIUMFLab']",19,09,532
3644,9,1455463802934280192,916940326132224000,Virginie Do,"Happy to share our paper “Two-sided fairness in rankings via Lorenz dominance”, accepted to #NeurIPS2021 (w/ @scorbettdavies, J. Atif & Nicolas Usunier) 🎊 We propose a new framework for fair recommendation, grounded in welfare economics. <LINK> 1/4 Our goal: improve the experience of the worse-off users and/or content producers. We use Generalized Lorenz curves to understand who benefits or bears the cost of a fairness intervention. Rankings are considered *Lorenz efficient* if they produce non-dominated Lorenz curves. 2/4 <LINK> This guarantees (a) Pareto efficiency (b) equity: utility is redistributed from better-off to worse-off, keeping total utility constant. 3/4 We generate rankings by maximizing concave welfare functions. No single state of ""perfect fairness"" here - rather a variety of acceptable tradeoffs, which choice is context-dependent. Our approach, unlike those based on fairness constraints, always satisfies Lorenz efficiency 4/4 @facebookai @Paris_Dauphine @psl_univ @DauphineMiles @NeurIPSConf",https://arxiv.org/abs/2110.15781,"We consider the problem of generating rankings that are fair towards both users and item producers in recommender systems. We address both usual recommendation (e.g., of music or movies) and reciprocal recommendation (e.g., dating). Following concepts of distributive justice in welfare economics, our notion of fairness aims at increasing the utility of the worse-off individuals, which we formalize using the criterion of Lorenz efficiency. It guarantees that rankings are Pareto efficient, and that they maximally redistribute utility from better-off to worse-off, at a given level of overall utility. We propose to generate rankings by maximizing concave welfare functions, and develop an efficient inference procedure based on the Frank-Wolfe algorithm. We prove that unlike existing approaches based on fairness constraints, our approach always produces fair rankings. Our experiments also show that it increases the utility of the worse-off at lower costs in terms of overall utility. ",Two-sided fairness in rankings via Lorenz dominance,5,"['Happy to share our paper “Two-sided fairness in rankings via Lorenz dominance”, accepted to #NeurIPS2021 (w/ @scorbettdavies, J. Atif &amp; Nicolas Usunier) 🎊 We propose a new framework for fair recommendation, grounded in welfare economics. \n<LINK> 1/4', 'Our goal: improve the experience of the worse-off users and/or content producers. \nWe use Generalized Lorenz curves to understand who benefits or bears the cost of a fairness intervention. Rankings are considered *Lorenz efficient* if they produce non-dominated Lorenz curves. 2/4 https://t.co/Ctw2PLJRJr', 'This guarantees\n(a) Pareto efficiency\n(b) equity: utility is redistributed from better-off to worse-off, keeping total utility constant. 3/4', 'We generate rankings by maximizing concave welfare functions. No single state of ""perfect fairness"" here - rather a variety of acceptable tradeoffs, which choice is context-dependent. Our approach, unlike those based on fairness constraints, always satisfies Lorenz efficiency 4/4', '@facebookai @Paris_Dauphine @psl_univ @DauphineMiles @NeurIPSConf']",21,10,1023
3645,119,1437393852227276801,3378033143,Katerina Margatina,"💥Our #EMNLP2021 paper is now on Arxiv! We propose a new acquisition function for active learning that leverages both uncertainty and diversity sampling by acquiring ⚡️contrastive⚡️ examples.⬇️ 💻code: <LINK> 📝pre-print: <LINK> (1/n) We hypothesize that data points that are close in the model feature space but the model produces different predictive likelihoods, should be good candidates for data acquisition. We define such examples as contrastive. (2/n) <LINK> Our method, Contrastive Active Learning (CAL), selects unlabeled data points from the pool, whose predictive likelihoods diverge the most from their neighbors in the training set. (3/n) <LINK> This way, CAL shares similarities with diversity sampling, but instead of performing clustering it uses the feature space to create neighborhoods. CAL also leverages uncertainty, by using predictive likelihoods to rank the unlabeled data. (4/n) We empirically show that CAL performs consistently better or equal compared to all baseline acquisition functions, in 7 datasets from 4 NLP tasks, when evaluated on in-domain and out-of-domain settings. (5/n) <LINK> We finally conduct a thorough analysis of our method showing that CAL achieves a better trade-off between diversity and uncertainty compared to the other acquisition functions. (6/n) <LINK> Feel free to check our paper for more details🔍! I would like to thank my co-authors again for their incredible help & support on this work! @gvernikos @LoicBarrault @nikaletras 😊 (7/7) n=7",https://arxiv.org/abs/2109.03764,"Common acquisition functions for active learning use either uncertainty or diversity sampling, aiming to select difficult and diverse data points from the pool of unlabeled data, respectively. In this work, leveraging the best of both worlds, we propose an acquisition function that opts for selecting \textit{contrastive examples}, i.e. data points that are similar in the model feature space and yet the model outputs maximally different predictive likelihoods. We compare our approach, CAL (Contrastive Active Learning), with a diverse set of acquisition functions in four natural language understanding tasks and seven datasets. Our experiments show that CAL performs consistently better or equal than the best performing baseline across all tasks, on both in-domain and out-of-domain data. We also conduct an extensive ablation study of our method and we further analyze all actively acquired datasets showing that CAL achieves a better trade-off between uncertainty and diversity compared to other strategies. ",Active Learning by Acquiring Contrastive Examples,7,"['💥Our #EMNLP2021 paper is now on Arxiv! We propose a new acquisition function for active learning that leverages both uncertainty and diversity sampling by acquiring ⚡️contrastive⚡️ examples.⬇️\n\n💻code: <LINK>\n📝pre-print: <LINK>\n\n(1/n)', 'We hypothesize that data points that are close in the model feature space but the model produces different predictive likelihoods, should be good candidates for data acquisition. We define such examples as contrastive. (2/n) https://t.co/WOtJaclHyS', 'Our method, Contrastive Active Learning (CAL), selects unlabeled data points from the pool, whose predictive likelihoods diverge the most from their neighbors in the training set. (3/n) https://t.co/I4j2H7iCJO', 'This way, CAL shares similarities with diversity sampling, but instead of performing clustering it uses the feature space to create neighborhoods. 
CAL also leverages uncertainty, by using predictive likelihoods to rank the unlabeled data. (4/n)', 'We empirically show that CAL performs consistently better or equal compared to all baseline acquisition functions, in 7 datasets from 4 NLP tasks, when evaluated on in-domain and out-of-domain settings. (5/n) https://t.co/55Me01ODA9', 'We finally conduct a thorough analysis of our method showing that CAL achieves a better trade-off between diversity and uncertainty compared to the other acquisition functions. (6/n) https://t.co/QQ2gcFfsig', 'Feel free to check our paper for more details🔍! I would like to thank my co-authors again for their incredible help &amp; support on this work! @gvernikos @LoicBarrault @nikaletras 😊\n \n(7/7) n=7']",21,09,1497
3646,156,1508252725498294272,716963985245929472,Khai Nguyen,"In our new paper <LINK>, we investigate the usage of amortized optimization in finding informative projecting directions in mini-batch sliced Wasserstein. Seeking good projecting directions often requires iterative loops for optimization. This process is also repeated on all pairs of mini-batches. Therefore, it is computationally expensive. Leveraging the inside of amortized optimization, we propose three types of amortized models including the linear model, the generalized linear model, and the non-linear model. These models are trained to predict the best vector on the unit-hypersphere that can maximize the Wasserstein distance between the two projected one-dimensional probability measures of every two mini-batch probability measures. We demonstrate the benefit of the new approach on training generative models in terms of generative quality, computational time, and computational memory. We refer to the paper for more details.",https://arxiv.org/abs/2203.13417,"Seeking informative projecting directions has been an important task in utilizing sliced Wasserstein distance in applications. However, finding these directions usually requires an iterative optimization procedure over the space of projecting directions, which is computationally expensive. Moreover, the computational issue is even more severe in deep learning applications, where computing the distance between two mini-batch probability measures is repeated several times. This nested-loop has been one of the main challenges that prevent the usage of sliced Wasserstein distances based on good projections in practice. To address this challenge, we propose to utilize the learning-to-optimize technique or amortized optimization to predict the informative direction of any given two mini-batch probability measures. To the best of our knowledge, this is the first work that bridges amortized optimization and sliced Wasserstein generative models. In particular, we derive linear amortized models, generalized linear amortized models, and non-linear amortized models which are corresponding to three types of novel mini-batch losses, named amortized sliced Wasserstein. We demonstrate the favorable performance of the proposed sliced losses in deep generative modeling on standard benchmark datasets. ","Amortized Projection Optimization for Sliced Wasserstein Generative
Models",5,"['In our new paper <LINK>, we investigate the usage of amortized optimization in finding informative projecting directions in mini-batch sliced Wasserstein.', 'Seeking good projecting directions often requires iterative loops for optimization. This process is also repeated on all pairs of mini-batches. Therefore, it is computationally expensive.', 'Leveraging the inside of amortized optimization, we propose three types of amortized models including the linear model, the generalized linear model, and the non-linear model.', 'These models are trained to predict the best vector on the unit-hypersphere that can maximize the Wasserstein distance between the two projected one-dimensional probability measures of every two mini-batch probability measures.', 'We demonstrate the benefit of the new approach on training generative models in terms of generative quality, computational time, and computational memory. We refer to the paper for more details.']",22,03,941
3647,35,1309464728972951553,256513537,Dr Chiara Mingarelli,"New paper! @Chengcheng_xin, @JeffreyHazboun and me see which candidate supermassive black hole binaries in CRTS, PTF, and @PanSTARRS1 could be detected by @NANOGrav, IPTA, @SKA_telescope, and on what timescale this will happen. Spoiler alert: 3 by 2025. <LINK> <LINK> A few other key points: we show that the international pulsar timing array (IPTA) gives almost uniform sky coverage over individual PTAs, and that by 2025 IPTA will improve @NANOGrav NANOGrav's current minimum detectable strain by a factor of 6, and its volume by a factor of 216! IPTA will reach detection sensitivities for three candidates by 2025, and 13 by the end of the decade, enabling us not only to do multimessenger astrophysics, but also to constrain the underlying empirical relations used to estimate SMBH masses! We also found that we can already constrain the mass of a binary in Mrk 504 to M &lt; 3.3e9 solar masses. We believe this is its first direct BH mass constraint (tell us if you know of another one!). It also has signs of x-ray periodicity (Saade+2020) in addition to optical! <LINK> We also identify 24 high-mass high-z galaxies which should not be able to host SMBHBs. We internally called these the “impossible binaries”. What it could really mean is that our models are too simple and that lots of gas and eccentricity are helping these high-z SMBHBs to merge! GW detection of even one of these candidates would be an essentially eternal multimessenger system, and identifying common false positive signals from non-detections will be useful to filter the data from future large-scale surveys such as LSST. <LINK> It was a pleasure to work with very talented collaborators: @Columbia's @chengcheg_xin (new to twitter!) who just started graduate school, and @NANOGrav postdoc @JeffreyHazboun ! @IBJIYONGI thanks for the signal boost! @IBJIYONGI ... and of course for your congratulations and support!!",https://arxiv.org/abs/2009.11865,"Supermassive black hole binary systems (SMBHBs) emitting gravitational waves may be traced by periodic light curves. We assembled a catalog of 149 such periodic light curves, and using their masses, distances, and periods, predicted the gravitational-wave strain and detectability of each binary candidate using all-sky detection maps. We found that the International Pulsar Timing Array (IPTA) provides almost uniform sky coverage -- a unique ability of the IPTA -- and by 2025 will improve NANOGrav's current minimum detectable strain by a factor of 6, and its volume by a factor of 216. Moreover, IPTA will reach detection sensitivities for three candidates by 2025, and 13 by the end of the decade, enabling us to constrain the underlying empirical relations used to estimate SMBH masses. We find that we can in fact already constrain the mass of a binary in Mrk 504 to $M<3.3\times 10^9~M_\odot$. We also identify 24 high-mass high-redshift galaxies which, according to our models, should not be able to host SMBHBs. Importantly the GW detection of even one of these candidates would be an essentially eternal multimessenger system, and identifying common false positive signals from non-detections will be useful to filter the data from future large-scale surveys such as LSST. ","Multimessenger pulsar timing array constraints on supermassive black
hole binaries traced by periodic light curves",9,"['New paper! @Chengcheng_xin, @JeffreyHazboun and me see which candidate supermassive black hole binaries in CRTS, PTF, and @PanSTARRS1 could be detected by @NANOGrav, IPTA, @SKA_telescope, and on what timescale this will happen. Spoiler alert: 3 by 2025. <LINK> <LINK>', ""A few other key points: we show that the international pulsar timing array (IPTA) gives almost uniform sky coverage over individual PTAs, and that by 2025 IPTA will improve @NANOGrav NANOGrav's current minimum detectable strain by a factor of 6, and its volume by a factor of 216!"", 'IPTA will reach detection sensitivities for three candidates by 2025, and 13 by the end of the decade, enabling us not only to do multimessenger astrophysics, but also to constrain the underlying empirical relations used to estimate SMBH masses!', 'We also found that we can already constrain the mass of a binary in Mrk 504 to M &lt; 3.3e9 solar masses. We believe this is its first direct BH mass constraint (tell us if you know of another one!). It also has signs of x-ray periodicity (Saade+2020) in addition to optical! https://t.co/mKt5YFo3b8', 'We also identify 24 high-mass high-z galaxies which should not be able to host SMBHBs. We internally called these the “impossible binaries”. What it could really mean is that our models are too simple and that lots of gas and eccentricity are helping these high-z SMBHBs to merge!', 'GW detection of even one of these candidates would be an essentially eternal multimessenger system, and identifying common false positive signals from non-detections will be useful to filter the data from future large-scale surveys such as LSST. https://t.co/jLWnxsOG31', ""It was a pleasure to work with very talented collaborators: @Columbia's @chengcheg_xin (new to twitter!) who just started graduate school, and @NANOGrav postdoc @JeffreyHazboun !"", '@IBJIYONGI thanks for the signal boost!', '@IBJIYONGI ... and of course for your congratulations and support!!']",20,09,1898
3648,197,1516784363299553288,859171889410953221,Melissa Morris,"Ever find yourself awake late at night, thinking ""wow, radio AGN with bent jets are super weird, I wonder what their environments look like compared to other radio AGN and if we can use them to learn about galaxy groups?"" Well, my paper has you covered!! <LINK>",https://arxiv.org/abs/2204.08510,"Galaxies hosting Active Galactic Nuclei (AGN) with bent radio jets are used as tracers of dense environments, such as galaxy groups and clusters. The assumption behind using these jets is that they are bent under ram pressure from a dense, gaseous medium through which the host galaxy moves. However, there are many AGN in groups and clusters with jets that are not bent, which leads us to ask: why are some AGN jets affected so much by their environment while others are seemingly not? We present the results of an environmental study on a sample of 185 AGN with bent jets and 191 AGN with unbent jets in which we characterize their environments by searching for neighboring galaxies using a Friends-of-Friends algorithm. We find that AGN with bent jets are indeed more likely to reside in groups and clusters, while unbent AGN are more likely to exist in singles or pairs. When considering only AGN in groups of 3 or more galaxies, we find that bent AGN are more likely to exist in halos with more galaxies than unbent AGN. We also find that unbent AGN are more likely than bent AGN to be the brightest group galaxy. Additionally, groups hosting AGN with bent jets have a higher density of galaxies than groups hosting unbent AGN. Curiously, there is a population of AGN with bent jets that are in seemingly less dense regions of space, indicating they may be embedded in a cosmic web filament. Overall, our results indicate that bent doubles are more likely to exist in in larger, denser, and less relaxed environments than unbent doubles, potentially linking a galaxy's radio morphology to its environment. ",How does environment affect the morphology of radio AGN?,1,"['Ever find yourself awake late at night, thinking ""wow, radio AGN with bent jets are super weird, I wonder what their environments look like compared to other radio AGN and if we can use them to learn about galaxy groups?"" Well, my paper has you covered!! <LINK>']",22,04,261
3649,137,1248232346785988608,29682697,Robin Scheibler,"We just released a new preprint about MM algorithms for joint independent subspace analysis (JISA-MM), that is blind source separation where some sources are correlated. We apply it to blind audio source(s) extraction. Paper: <LINK> Code: <LINK> <LINK>",https://arxiv.org/abs/2004.03926,"In this work, we propose efficient algorithms for joint independent subspace analysis (JISA), an extension of independent component analysis that deals with parallel mixtures, where not all the components are independent. We derive an algorithmic framework for JISA based on the majorization-minimization (MM) optimization technique (JISA-MM). We use a well-known inequality for super-Gaussian sources to derive a surrogate function of the negative log-likelihood of the observed data. The minimization of this surrogate function leads to a variant of the hybrid exact-approximate diagonalization problem, but where multiple demixing vectors are grouped together. In the spirit of auxiliary function based independent vector analysis (AuxIVA), we propose several updates that can be applied alternately to one, or jointly to two, groups of demixing vectors. Recently, blind extraction of one or more sources has gained interest as a reasonable way of exploiting larger microphone arrays to achieve better separation. In particular, several MM algorithms have been proposed for overdetermined IVA (OverIVA). By applying JISA-MM, we are not only able to rederive these in a general manner, but also find several new algorithms. We run extensive numerical experiments to evaluate their performance, and compare it to that of full separation with AuxIVA. We find that algorithms using pairwise updates of two sources, or of one source and the background have the fastest convergence, and are able to separate target sources quickly and precisely from the background. In addition, we characterize the performance of all algorithms under a large number of noise, reverberation, and background mismatch conditions. ","MM Algorithms for Joint Independent Subspace Analysis with Application
to Blind Single and Multi-Source Extraction",1,"['We just released a new preprint about MM algorithms for joint independent subspace analysis (JISA-MM), that is blind source separation where some sources are correlated. We apply it to blind audio source(s) extraction. Paper: <LINK> Code: <LINK> <LINK>']",20,04,252
3650,72,1493250721050996740,741914576,Dr./Prof. Keri Hoadley,"After what felt like a very slow 2021 (a new job, a move across the country, feeling burnt out from 2020), it feels so great to see our paper on X-rays detected from the Blue Ring Nebula's central star (and the nebula, too?!) accepted and on arxiv! <LINK>",https://arxiv.org/abs/2202.05424,"Tight binary or multiple star systems can interact through mass transfer and follow vastly different evolutionary pathways than single stars. The star TYC 2597-735-1 is a candidate for a recent stellar merger remnant resulting from a coalescence of a low-mass companion with a primary star a few thousand years ago. This violent event is evident in a conical outflow (""Blue Ring Nebula"") emitting in UV light and surrounded by leading shock filaments observed in H$\alpha$ and UV emission. From Chandra data, we report the detection of X-ray emission from the location of TYC 2597-735-1 with a luminosity $\log(L_\mathrm{X}/L_\mathrm{bol})=-5.5$. Together with a previously reported period around 14~days, this indicates ongoing stellar activity and the presence of strong magnetic fields on TYC 2597-735-1. Supported by stellar evolution models of merger remnants, we interpret the inferred stellar magnetic field as dynamo action associated with a newly formed convection zone in the atmosphere of TYC 2597-735-1, though internal shocks at the base of an accretion-powered jet cannot be ruled out. We speculate that this object will evolve into an FK Com type source, i.e. a class of rapidly spinning magnetically active stars for which a merger origin has been proposed but for which no relic accretion or large-scale nebula remains visible. We also detect likely X-ray emission from two small regions close to the outer shock fronts in the Blue Ring Nebula, which may arise from either inhomogenities in the circumstellar medium or in the mass and velocity distribution in the merger-driven outflow. ","X-ray emission from candidate stellar merger remnant TYC 2597-735-1 and
its Blue Ring Nebula",1,"[""After what felt like a very slow 2021 (a new job, a move across the country, feeling burnt out from 2020), it feels so great to see our paper on X-rays detected from the Blue Ring Nebula's central star (and the nebula, too?!) accepted and on arxiv! <LINK>""]",22,02,255
3651,158,1131035045626490882,15132384,Kyosuke Nishida,"Our #ACL2019_Italy paper about explainable multi-hop QA is out on arXiv! <LINK> We propose a query-based extractive summarization model, QFE, and the multi-task learning of QA and evidence extraction. Our model achieves SOTA in evidence extraction on HotpotQA! <LINK>",https://arxiv.org/abs/1905.08511,"Question answering (QA) using textual sources for purposes such as reading comprehension (RC) has attracted much attention. This study focuses on the task of explainable multi-hop QA, which requires the system to return the answer with evidence sentences by reasoning and gathering disjoint pieces of the reference texts. It proposes the Query Focused Extractor (QFE) model for evidence extraction and uses multi-task learning with the QA model. QFE is inspired by extractive summarization models; compared with the existing method, which extracts each evidence sentence independently, it sequentially extracts evidence sentences by using an RNN with an attention mechanism on the question sentence. It enables QFE to consider the dependency among the evidence sentences and cover important information in the question sentence. Experimental results show that QFE with a simple RC baseline model achieves a state-of-the-art evidence extraction score on HotpotQA. Although designed for RC, it also achieves a state-of-the-art evidence extraction score on FEVER, which is a recognizing textual entailment task on a large textual database. ","Answering while Summarizing: Multi-task Learning for Multi-hop QA with
Evidence Extraction",1,"['Our #ACL2019_Italy paper about explainable multi-hop QA is out on arXiv! <LINK> We propose a query-based extractive summarization model, QFE, and the multi-task learning of QA and evidence extraction. Our model achieves SOTA in evidence extraction on HotpotQA! <LINK>']",19,05,267
3652,65,1374375561124966405,731352704182669317,Fabian Jankowski,"We have a new paper on the arXiv! In 2019 we got lucky to observe the pulsar J1452-6036 ~3 hours after a glitch in its rotation. We investigated its radiative parameters around the glitch and constrained any changes, including a slight radio flux increase. <LINK>",https://arxiv.org/abs/2103.09869,"We present high-sensitivity, wide-band observations (704 to 4032 MHz) of the young to middle-aged radio pulsar J1452-6036, taken at multiple epochs before and, serendipitously, shortly after a glitch occurred on 2019 April 27. We obtained the data using the new ultra-wide-bandwidth low-frequency (UWL) receiver at the Parkes radio telescope, and we used Markov Chain Monte Carlo techniques to estimate the glitch parameters robustly. The data from our third observing session began 3 h after the best-fitting glitch epoch, which we constrained to within 4 min. The glitch was of intermediate size, with a fractional change in spin frequency of $270.52(3) \times 10^{-9}$. We measured no significant change in spin-down rate and found no evidence for rapidly-decaying glitch components. We systematically investigated whether the glitch affected any radiative parameters of the pulsar and found that its spectral index, spectral shape, polarisation fractions, and rotation measure stayed constant within the uncertainties across the glitch epoch. However, its pulse-averaged flux density increased significantly by about 10 per cent in the post-glitch epoch and decayed slightly before our fourth observation a day later. We show that the increase was unlikely caused by calibration issues. While we cannot exclude that it was due to refractive interstellar scintillation, it is hard to reconcile with refractive effects. The chance coincidence probability of the flux density increase and the glitch event is low. Finally, we present the evolution of the pulsar's pulse profile across the band. The morphology of its polarimetric pulse profile stayed unaffected to a precision of better than 2 per cent. ","Constraints on wide-band radiative changes after a glitch in PSR
J1452-6036",1,"['We have a new paper on the arXiv! In 2019 we got lucky to observe the pulsar J1452-6036 ~3 hours after a glitch in its rotation. We investigated its radiative parameters around the glitch and constrained any changes, including a slight radio flux increase. <LINK>']",21,03,263
3653,32,1364485567468208128,734484164070772736,Guy Tennenholtz,"GELATO: How do we leverage proximity and uncertainty to improve offline reinforcement learning (RL) algorithms? In our new paper we answer this question through variational pullback metrics of proximity and uncertainty. <LINK> GELATO: We construct Riemannian metrics on submanifolds induced by a variational forward model. These metrics, capturing both proximity and uncertainty w.r.t the data, are leveraged in a model based offline RL framework (MOPO) <LINK> GELATO is capable of capturing intrinsic characteristics of the data manifold, trading off proximity and uncertainty in order to enjoy the benefits of both worlds. Read more in our paper! <LINK>",https://arxiv.org/abs/2102.11327,"Offline reinforcement learning approaches can generally be divided to proximal and uncertainty-aware methods. In this work, we demonstrate the benefit of combining the two in a latent variational model. We impose a latent representation of states and actions and leverage its intrinsic Riemannian geometry to measure distance of latent samples to the data. Our proposed metrics measure both the quality of out of distribution samples as well as the discrepancy of examples in the data. We integrate our metrics in a model-based offline optimization framework, in which proximity and uncertainty can be carefully controlled. We illustrate the geodesics on a simple grid-like environment, depicting its natural inherent topology. Finally, we analyze our approach and improve upon contemporary offline RL benchmarks. ","GELATO: Geometrically Enriched Latent Model for Offline Reinforcement
Learning",3,"['GELATO: How do we leverage proximity and uncertainty to improve offline reinforcement learning (RL) algorithms? In our new paper we answer this question through variational pullback metrics of proximity and uncertainty. <LINK>', 'GELATO: We construct Riemannian metrics on submanifolds induced by a variational forward model. These metrics, capturing both proximity and uncertainty w.r.t the data, are leveraged in a model based offline RL framework (MOPO) https://t.co/PkbJ69F4oa', 'GELATO is capable of capturing intrinsic characteristics of the data manifold, trading off proximity and uncertainty in order to enjoy the benefits of both worlds. Read more in our paper! https://t.co/PkbJ69F4oa']",21,02,655
3654,138,1466229884385177612,1591961539,Maxwell Nye,"New paper! <LINK> We show that huge language models (137B params!) can be trained to solve algorithmic tasks by “showing their work”---writing intermediate text to a scratchpad. This “scratchpad” technique even allows us to predict the execution of Python code. <LINK> In <LINK> we showed that large language models could write code, but had trouble ""executing"" programs. Here we show that execution accuracy improves if models are trained to predict the entire execution trace (i.e., predict the intermediate states line-by-line). We evaluate on the MBPP dataset <LINK>. We also show that results improve when the training data is augmented with traces from the model’s own failed attempts to solve programming problems, providing a cheap form of additional training data. <LINK> This technique is simple, can be easily implemented with off-the-shelf transformer models, and applies to other algorithmic tasks as well. We additionally show results for long-addition with 8+ digit numbers and evaluating polynomials. <LINK> Special thanks to my internship host @gstsdn! This work was also done in collaboration with great folks at Google: Anders Andreassen, @guygr, @hmichalewski, @jacobaustin132, @Bieber, @dmdohan, Aitor Lewkowycz, @MaartenBosma, @jluan, @RandomlyWalking",https://arxiv.org/abs/2112.00114,"Large pre-trained language models perform remarkably well on tasks that can be done ""in one pass"", such as generating realistic text or synthesizing computer programs. However, they struggle with tasks that require unbounded multi-step computation, such as adding integers or executing programs. Surprisingly, we find that these same models are able to perform complex multi-step computations -- even in the few-shot regime -- when asked to perform the operation ""step by step"", showing the results of intermediate computations. In particular, we train transformers to perform multi-step computations by asking them to emit intermediate computation steps into a ""scratchpad"". On a series of increasingly complex tasks ranging from long addition to the execution of arbitrary programs, we show that scratchpads dramatically improve the ability of language models to perform multi-step computations. ","Show Your Work: Scratchpads for Intermediate Computation with Language
Models",5,"['New paper! <LINK>\nWe show that huge language models (137B params!) can be trained to solve algorithmic tasks by “showing their work”---writing intermediate text to a scratchpad. This “scratchpad” technique even allows us to predict the execution of Python code. <LINK>', 'In https://t.co/0I0sXS1lSZ we showed that large language models could write code, but had trouble ""executing"" programs. Here we show that execution accuracy improves if models are trained to predict the entire execution trace (i.e., predict the intermediate states line-by-line).', 'We evaluate on the MBPP dataset https://t.co/Q1oHNHpgFb.\n\nWe also show that results improve when the training data is augmented with traces from the model’s own failed attempts to solve programming problems, providing a cheap form of additional training data. https://t.co/rH8o2VdP55', 'This technique is simple, can be easily implemented with off-the-shelf transformer models, and applies to other algorithmic tasks as well. We additionally show results for long-addition with 8+ digit numbers and evaluating polynomials. https://t.co/HluF7Ub9iz', 'Special thanks to my internship host @gstsdn!\n\nThis work was also done in collaboration with great folks at Google: Anders Andreassen, @guygr, @hmichalewski, @jacobaustin132, @Bieber, @dmdohan, Aitor Lewkowycz, @MaartenBosma, @jluan, @RandomlyWalking']",21,12,1273
3655,140,1146960044266676224,118592197,Tsuyoshi Okubo,"Our new preprint: ""Abelian and Non-Abelian Chiral Spin Liquids in a Compact Tensor Network Representation"" Hyun-Yong Lee, Ryui Kaneko, Tsuyoshi Okubo, and Naoki Kawashima, We propose a good and compact TNS for chiral spin liquid on the star lattice. <LINK> We extend the loop gas and the string gas states for Kitaev spin liquid (<LINK>) to the case of chiral spin liquids of Kitaev model on the star lattice. We show that those tensor network states can express gapped chiral spin liquids. We extended the loop gas and string gas states, compact tensor network representations of the Kitaev spin liquid, to chiral spin liquids. As with arXiv:1901.05786, I think this is very good work. My own contribution is smaller than it was for arXiv:1901.05786, though...",https://arxiv.org/abs/1907.02268,"We provide new insights into the Abelian and non-Abelian chiral Kitaev spin liquids on the star lattice using the recently proposed loop gas (LG) and string gas (SG) states [H.-Y. Lee, R. Kaneko, T. Okubo, N. Kawashima, Phys. Rev. Lett. 123, 087203 (2019)]. Those are compactly represented in the language of tensor network. By optimizing only one or two variational parameters, accurate ansatze are found in the whole phase diagram of the Kitaev model on the star lattice. In particular, the variational energy of the LG state becomes exact(within machine precision) at two limits in the model, and the criticality at one of those is analytically derived from the LG feature. It reveals that the Abelian CSLs are well demonstrated by the short-ranged LG while the non-Abelian CSLs are adiabatically connected to the critical LG where the macroscopic loops appear. Furthermore, by constructing the minimally entangled states and exploiting their entanglement spectrum and entropy, we identify the nature of anyons and the chiral edge modes in the non-Abelian phase with the Ising conformal field theory. ","Abelian and non-Abelian chiral spin liquids in a compact tensor network
representation",3,"['Our new preprint: ""Abelian and Non-Abelian Chiral Spin Liquids in a Compact Tensor Network Representation"" Hyun-Yong Lee, Ryui Kaneko, Tsuyoshi Okubo, and Naoki Kawashima, We propose a good and compact TNS for chiral spin liquid on the star lattice. <LINK>', 'We extend the loop gas and the string gas states for Kitaev spin liquid (<LINK>) to the case of chiral spin liquids of Kitaev model on the star lattice. We show that those tensor network states can express gapped chiral spin liquids.', 'We extended the loop gas and string gas states, compact tensor network representations of the Kitaev spin liquid, to chiral spin liquids. As with arXiv:1901.05786, I think this is very good work. My own contribution is smaller than it was for arXiv:1901.05786, though...']",19,07,636
3656,4,1225077329862500354,910621028212203521,Prof Rachel Oliver 🐯,"If you can't make Stefan Schulz's talk tomorrow, then read our preprint here instead: <LINK> Why are green LEDs less efficient than blue ones? Major question with implications for more efficient solid state lighting, and this paper gives new insights! <LINK>",https://arxiv.org/abs/2001.09345,"We present a detailed theoretical analysis of the electronic and optical properties of c-plane InGaN/GaN quantum well structures with In contents ranging from 5% to 25%. Special attention is paid to the relevance of alloy induced carrier localization effects to the green gap problem. Studying the localization length and electron-hole overlaps at low and elevated temperatures, we find alloy-induced localization effects are crucial for the accurate description of InGaN quantum wells across the range of In content studied. However, our calculations show very little change in the localization effects when moving from the blue to the green spectral regime; i.e. when the internal quantum efficiency and wall plug efficiencies reduce sharply, for instance, the in-plane carrier separation due to alloy induced localization effects change weakly. We conclude that other effects, such as increased defect densities, are more likely to be the main reason for the green gap problem. This conclusion is further supported by our finding that the electron localization length is large, when compared to that of the holes, and changes little in the In composition range of interest for the green gap problem. Thus electrons may become increasingly susceptible to an increased (point) defect density in green emitters and as a consequence the nonradiative recombination rate may increase. ","Polar InGaN/GaN quantum wells: Revisiting the impact of carrier
localization on the green gap problem",1,"[""If you can't make Stefan Schulz's talk tomorrow, then read our preprint here instead: <LINK>\nWhy are green LEDs less efficient than blue ones? Major question with implications for more efficient solid state lighting, and this paper gives new insights! <LINK>""]",20,01,258
3657,3,1047264397192646656,2800204849,Andrew Gordon Wilson,"Our new paper, GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU acceleration, is appearing as a #NIPS2018 spotlight, with code! The paper is about GP inference and blackbox linear algebra that exploits hardware for major acceleration. <LINK> <LINK>",https://arxiv.org/abs/1809.11165,"Despite advances in scalable models, the inference tools used for Gaussian processes (GPs) have yet to fully capitalize on developments in computing hardware. We present an efficient and general approach to GP inference based on Blackbox Matrix-Matrix multiplication (BBMM). BBMM inference uses a modified batched version of the conjugate gradients algorithm to derive all terms for training and inference in a single call. BBMM reduces the asymptotic complexity of exact GP inference from $O(n^3)$ to $O(n^2)$. Adapting this algorithm to scalable approximations and complex GP models simply requires a routine for efficient matrix-matrix multiplication with the kernel and its derivative. In addition, BBMM uses a specialized preconditioner to substantially speed up convergence. In experiments we show that BBMM effectively uses GPU hardware to dramatically accelerate both exact GP inference and scalable approximations. Additionally, we provide GPyTorch, a software platform for scalable GP inference via BBMM, built on PyTorch. ","GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU
Acceleration",1,"['Our new paper, GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU acceleration, is appearing as a #NIPS2018 spotlight, with code! The paper is about GP inference and blackbox linear algebra that exploits hardware for major acceleration. <LINK> <LINK>']",18,09,268
3658,24,1021494627344498688,3087725630,Zack Kilpatrick (he/him/his),"New on arXiv: ""Optimizing a jump-diffusion model of a starving forager"" Fun paper with talented undergrad N Krishnan showing the lifetime of the Bhat-Redner starving forager is increased by considering a combination of short & long range movement: <LINK> Panel D shows the behavior of a mixed jump-diffusion forager clearing out ""food deserts."" This type of forager lives longer than a pure diffuser or jumper when foragers can only last a finite time without food. <LINK>",https://arxiv.org/abs/1807.06740,"We analyze the movement of a starving forager on a one-dimensional periodic lattice, where each location contains one unit of food. As the forager lands on sites with food, it consumes the food, leaving the sites empty. If the forager lands consecutively on $s$ empty sites, then it will starve. The forager has two modes of movement: it can either diffuse, by moving with equal probability to adjacent sites on the lattice, or it can jump to a uniformly randomly chosen site on the lattice. We show that the lifetime $T$ of the forager in either paradigm can be approximated by the sum of the cover time $\tau_{\rm cover}$ and the starvation time $s$, when $s$ far exceeds the number $n$ of lattice sites. Our main findings focus on the hybrid model, where the forager has a probability of either jumping or diffusing. The lifetime of the forager varies non-monotonically according to $p_j$, the probability of jumping. By examining a small system, analyzing a heuristic model, and using direct numerical simulation, we explore the tradeoff between jumps and diffusion, and show that the strategy that maximizes the forager lifetime is a mixture of both modes of movement. ",Optimizing a jump-diffusion model of a starving forager,2,"['New on arXiv: ""Optimizing a jump-diffusion model of a starving forager"" Fun paper with talented undergrad N Krishnan showing the lifetime of the Bhat-Redner starving forager is increased by considering a combination of short &amp; long range movement:\n<LINK>', 'Panel D shows the behavior of a mixed jump-diffusion forager clearing out ""food deserts."" This type of forager lives longer than a pure diffuser or jumper when foragers can only last a finite time without food. https://t.co/ZweKZeyaII']",18,07,472
3659,201,1366591700660072454,2885797996,Prof Jesse Capecelatro,"New paper out on @arxiv: <LINK> Realistic coughs are pulsatile, resulting in vortex interactions that accelerate particles emanating from later pulses which may carry more virus. Congrats @MichiganAero undergrad Kalvin Monroe, @umichme Yuan and @LattanziAaron <LINK>",https://arxiv.org/abs/2103.00581,"Expiratory events, such as coughs, are often pulsatile in nature and result in vortical flow structures that transport respiratory particles. In this work, direct numerical simulation (DNS) of turbulent pulsatile jets, coupled with Lagrangian particle tracking of micron-sized droplets, is performed to investigate the role of secondary and tertiary expulsions on particle dispersion and penetration. Fully-developed turbulence obtained from DNS of a turbulent pipe flow is provided at the jet orifice. The volumetric flow rate at the orifice is modulated in time according to a damped sine wave; thereby allowing for control of the number of pulses, duration, and peak amplitude. The resulting vortex structures are analyzed for single-, two-, and three-pulse jets. The evolution of the particle cloud is then compared to existing single-pulse models. Particle dispersion and penetration of the entire cloud is found to be hindered by increased pulsatility. However, the penetration of particles emanating from a secondary or tertiary expulsion are enhanced due to acceleration downstream by vortex structures. ",Role of pulsatility on particle dispersion in expiratory flows,1,"['New paper out on @arxiv: <LINK> Realistic coughs are pulsatile, resulting in vortex interactions that accelerate particles emanating from later pulses which may carry more virus. Congrats @MichiganAero undergrad Kalvin Monroe, @umichme Yuan and @LattanziAaron <LINK>']",21,03,266
3660,149,1511222484879785986,1283150444,Maurizio Pierini,"New paper on particle reconstruction with #GraphNetworks: Distance Weighted message passing and object condensation to reconstruct particles from 200 simultaneous collisions. Great work by @ShahRukhQasim <LINK> #DeepLearning #AI #LHC #HEP <LINK> @JoosepPata @ShahRukhQasim I think we should put it on @ZENODO_ORG , indeed @ShahRukhQasim @JanKieseler",https://arxiv.org/abs/2204.01681,"We present an end-to-end reconstruction algorithm to build particle candidates from detector hits in next-generation granular calorimeters similar to that foreseen for the high-luminosity upgrade of the CMS detector. The algorithm exploits a distance-weighted graph neural network, trained with object condensation, a graph segmentation technique. Through a single-shot approach, the reconstruction task is paired with energy regression. We describe the reconstruction performance in terms of efficiency as well as in terms of energy resolution. In addition, we show the jet reconstruction performance of our method and discuss its inference computational cost. To our knowledge, this work is the first-ever example of single-shot calorimetric reconstruction of ${\cal O}(1000)$ particles in high-luminosity conditions with 200 pileup. ","End-to-end multi-particle reconstruction in high occupancy imaging
calorimeters with graph neural networks",2,"['New paper on particle reconstruction with #GraphNetworks: Distance Weighted message passing and object condensation to reconstruct particles from 200 simultaneous collisions. Great work by @ShahRukhQasim <LINK> #DeepLearning #AI #LHC #HEP <LINK>', '@JoosepPata @ShahRukhQasim I think we should put it on @ZENODO_ORG , indeed @ShahRukhQasim @JanKieseler']",22,04,349
3661,27,1465258987604086788,1283150444,Maurizio Pierini,"New paper on the New Physics Learning Machine method: how to include systematic uncertainties. <LINK> While trying to fit deviations in data wrt a reference sample (standard model expectation) one has to take into account the fact that the reference description comes with uncertainties. We did so combining the previously developed method to a “Likelihood Free Inference” spin-off We first model the dependence of the reference on the nuisance with a network learning the Taylor expansion (as done in standard analyses with analytic methods). We freeze this taylor expansion network in the following steps <LINK> We break down the problem (learning the lhc test statistics t as a data vs reference classification) in two likelihood ratio learnings: tau = H1 (signal hypothesis) with nuisance vs nominal-nuisance H0 (no-signal hypothesis) and Delta = H0 with nuisance vs nominal-nuisance H0 <LINK> The lhc test statistics is learned by the difference of the two. For a tuned (on bkg-only toys) choice of hyper-parameters, the cancellation gives back an expected chisq distribution, equivalent to the asymptotic formula of the LHC statistical procedure <LINK> The method is less sensitive than a classic analysis on a specific signal. But its signal-agnostic nature is crucial to extend the reach of the LHC search program. The effect of nuisance parameters reduces the sensitivity (as it should) but signal sensitivity is retained The modeling of the nuisance has the important role of removing the risk of false claims, due to the imperfect knoll of the reference. With this crucial step, we are now ready to move this approach to real data.",https://arxiv.org/abs/2111.13633,"We show how to deal with uncertainties on the Standard Model predictions in an agnostic new physics search strategy that exploits artificial neural networks. Our approach builds directly on the specific Maximum Likelihood ratio treatment of uncertainties as nuisance parameters for hypothesis testing that is routinely employed in high-energy physics. After presenting the conceptual foundations of our method, we first illustrate all aspects of its implementation and extensively study its performances on a toy one-dimensional problem. We then show how to implement it in a multivariate setup by studying the impact of two typical sources of experimental uncertainties in two-body final states at the LHC. ",Learning New Physics from an Imperfect Machine,7,"['New paper on the New Physics Learning Machine method: how to include systematic uncertainties. <LINK>', 'While trying to fit deviations in data wrt a reference sample (standard model expectation) one has to take into account the fact that the reference description comes with uncertainties. We did so combining the previously developed method to a “Likelihood Free Inference” spin-off', 'We first model the dependence of the reference on the nuisance with a network learning the Taylor expansion (as done in standard analyses with analytic methods). We freeze this taylor expansion network in the following steps https://t.co/UATqvaVB5E', 'We break down the problem (learning the lhc test statistics t as a data vs reference classification) in two likelihood ratio learnings: tau = H1 (signal hypothesis) with nuisance vs nominal-nuisance H0 (no-signal hypothesis) and Delta = H0 with nuisance vs nominal-nuisance H0 https://t.co/kMcxl9ssER', 'The lhc test statistics is learned by the difference of the two. 
For a tuned (on bkg-only toys) choice of hyper-parameters, the cancellation gives back an expected chisq distribution, equivalent to the asymptotic formula of the LHC statistical procedure https://t.co/snlOFNyUl8', 'The method is less sensitive than a classic analysis on a specific signal. But its signal-agnostic nature is crucial to extend the reach of the LHC search program. The effect of nuisance parameters reduces the sensitivity (as it should) but signal sensitivity is retained', 'The modeling of the nuisance has the important role of removing the risk of false claims, due to the imperfect knoll of the reference. With this crucial step, we are now ready to move this approach to real data.']",21,11,1642
3662,45,1276262376002555904,924816072036904960,Adam Gleave,"How do you measure the distance between two reward functions? Our EPIC distance is invariant to reward shaping, can be approximated efficiently, and is predictive of policy training success and transfer! New paper with @MichaelD1729 @janleike et al. <LINK> <LINK> Thanks to @DeepMind and @CHAI_Berkeley for providing a supportive atmosphere for this work!",https://arxiv.org/abs/2006.13900,"For many tasks, the reward function is inaccessible to introspection or too complex to be specified procedurally, and must instead be learned from user data. Prior work has evaluated learned reward functions by evaluating policies optimized for the learned reward. However, this method cannot distinguish between the learned reward function failing to reflect user preferences and the policy optimization process failing to optimize the learned reward. Moreover, this method can only tell us about behavior in the evaluation environment, but the reward may incentivize very different behavior in even a slightly different deployment environment. To address these problems, we introduce the Equivalent-Policy Invariant Comparison (EPIC) distance to quantify the difference between two reward functions directly, without a policy optimization step. We prove EPIC is invariant on an equivalence class of reward functions that always induce the same optimal policy. Furthermore, we find EPIC can be efficiently approximated and is more robust than baselines to the choice of coverage distribution. Finally, we show that EPIC distance bounds the regret of optimal policies even under different transition dynamics, and we confirm empirically that it predicts policy training success. Our source code is available at this https URL ",Quantifying Differences in Reward Functions,2,"['How do you measure the distance between two reward functions?\n\nOur EPIC distance is invariant to reward shaping, can be approximated efficiently, and is predictive of policy training success and transfer!\n\nNew paper with @MichaelD1729 @janleike et al.\n\n<LINK> <LINK>', 'Thanks to @DeepMind and @CHAI_Berkeley for providing a supportive atmosphere for this work!']",20,06,355
3663,74,1115553086116773888,507704346,Isabelle Augenstein,"Pre-print of our #naacl2019 paper ""Issue Framing in Online Discussion Fora"" is now available. We introduce a new issue frame annotated corpus of online discussions. Mareike Hartmann, Tallulah Jansen @IAugenstein Anders Søgaard <LINK> #NLProc @coastalcph @CopeNLU <LINK>",https://arxiv.org/abs/1904.03969,"In online discussion fora, speakers often make arguments for or against something, say birth control, by highlighting certain aspects of the topic. In social science, this is referred to as issue framing. In this paper, we introduce a new issue frame annotated corpus of online discussions. We explore to what extent models trained to detect issue frames in newswire and social media can be transferred to the domain of discussion fora, using a combination of multi-task and adversarial training, assuming only unlabeled training data in the target domain. ",Issue Framing in Online Discussion Fora,1,"['Pre-print of our #naacl2019 paper ""Issue Framing in Online Discussion Fora"" is now available. We introduce a new issue frame annotated corpus of online discussions.\nMareike Hartmann, Tallulah Jansen @IAugenstein Anders Søgaard\n<LINK>\n#NLProc @coastalcph @CopeNLU <LINK>']",19,04,269
3664,86,1469386317440200709,473190579,Naomi Gendler,"Really excited about our new paper! We show how, in a large class of string theory compactifications, the strong CP problem is automatically solved by the Peccei-Quinn mechanism. We also analyze dark matter and astro bounds on axions in our stringy EFTs! <LINK>",https://arxiv.org/abs/2112.04503,"We show that the strong CP problem is solved in a large class of compactifications of string theory. The Peccei-Quinn mechanism solves the strong CP problem if the CP-breaking effects of the ultraviolet completion of gravity and of QCD are small compared to the CP-preserving axion potential generated by low-energy QCD instantons. We characterize both classes of effects. To understand quantum gravitational effects, we consider an ensemble of flux compactifications of type IIB string theory on orientifolds of Calabi-Yau hypersurfaces in the geometric regime, taking a simple model of QCD on D7-branes. We show that the D-brane instanton contribution to the neutron electric dipole moment falls exponentially in $N^4$, with $N$ the number of axions. In particular, this contribution is negligible in all models in our ensemble with $N>17$. We interpret this result as a consequence of large $N$ effects in the geometry that create hierarchies in instanton actions and also suppress the ultraviolet cutoff. We also compute the CP breaking due to high-energy instantons in QCD. In the absence of vectorlike pairs, we find contributions to the neutron electric dipole moment that are not excluded, but that could be accessible to future experiments if the scale of supersymmetry breaking is sufficiently low. The existence of vectorlike pairs can lead to a larger dipole moment. Finally, we show that a significant fraction of models are allowed by standard cosmological and astrophysical constraints. ",PQ Axiverse,1,"['Really excited about our new paper! We show how, in a large class of string theory compactifications, the strong CP problem is automatically solved by the Peccei-Quinn mechanism. We also analyze dark matter and astro bounds on axions in our stringy EFTs!\n\n<LINK>']",21,12,261
3665,223,1374960903309783043,3943468754,Shradha Sehgal,"What's #KOOking? Read our work on characterization of the latest #Koo platform! We study the user demographic, prominent accounts, content and activity, linguistic communities, and much more. We also make our dataset public for research! Full report at <LINK> <LINK>",https://arxiv.org/abs/2103.13239,"Social media has grown exponentially in a short period, coming to the forefront of communications and online interactions. Despite their rapid growth, social media platforms have been unable to scale to different languages globally and remain inaccessible to many. In this paper, we characterize Koo, a multilingual micro-blogging site that rose in popularity in 2021, as an Indian alternative to Twitter. We collected a dataset of 4.07 million users, 163.12 million follower-following relationships, and their content and activity across 12 languages. We study the user demographic along the lines of language, location, gender, and profession. The prominent presence of Indian languages in the discourse on Koo indicates the platform's success in promoting regional languages. We observe Koo's follower-following network to be much denser than Twitter's, comprising of closely-knit linguistic communities. An N-gram analysis of posts on Koo shows a #KooVsTwitter rhetoric, revealing the debate comparing the two platforms. Our characterization highlights the dynamics of the multilingual social network and its diverse Indian user base. ","What's Kooking? Characterizing India's Emerging Social Network, Koo",1,"[""What's #KOOking? Read our work on characterization of the latest #Koo platform! We study the user demographic, prominent accounts, content and activity, linguistic communities, and much more.\nWe also make our dataset public for research! \nFull report at <LINK> <LINK>""]",21,03,266
3666,101,1325928457726021638,6222842,Nick Feamster,"How did the Internet/ISPs respond to COVID-19? See the presentation @jlivingood and I gave to the IAB this morning, as well as our recently posted paper, here: <LINK> (lots of new data on changes in traffic patterns, ISP provisioning upgrades, @FCC data, etc.). <LINK>",https://arxiv.org/abs/2011.00419,"The COVID-19 pandemic has resulted in dramatic changes to the daily habits of billions of people. Users increasingly have to rely on home broadband Internet access for work, education, and other activities. These changes have resulted in corresponding changes to Internet traffic patterns. This paper aims to characterize the effects of these changes with respect to Internet service providers in the United States. We study three questions: (1)How did traffic demands change in the United States as a result of the COVID-19 pandemic?; (2)What effects have these changes had on Internet performance?; (3)How did service providers respond to these changes? We study these questions using data from a diverse collection of sources. Our analysis of interconnection data for two large ISPs in the United States shows a 30-60% increase in peak traffic rates in the first quarter of 2020. In particular, we observe traffic downstream peak volumes for a major ISP increase of 13-20% while upstream peaks increased by more than 30%. Further, we observe significant variation in performance across ISPs in conjunction with the traffic volume shifts, with evident latency increases after stay-at-home orders were issued, followed by a stabilization of traffic after April. Finally, we observe that in response to changes in usage, ISPs have aggressively augmented capacity at interconnects, at more than twice the rate of normal capacity augmentation. Similarly, video conferencing applications have increased their network footprint, more than doubling their advertised IP address space. ","Characterizing Service Provider Response to the COVID-19 Pandemic in the
United States",1,"['How did the Internet/ISPs respond to COVID-19? See the presentation @jlivingood and I gave to the IAB this morning, as well as our recently posted paper, here: <LINK> (lots of new data on changes in traffic patterns, ISP provisioning upgrades, @FCC data, etc.). <LINK>']",20,11,268
3667,123,1323596586694434821,1182331711683776517,Jon Ander Campos,"Check out our new #COLING2020 paper on ""Improving Conversational Question Answering Systems after Deployment using Feedback-Weighted Learning"" We present a method that enables to improve an initial system after deployment using binary feedback only. <LINK> Joint work with @kchonyc @antxaotegi @Aitor57 @gazkune and @eagirre @sazoo_nlp Thanks Sashank! Hope you are doing well too!",http://arxiv.org/abs/2011.00615,"The interaction of conversational systems with users poses an exciting opportunity for improving them after deployment, but little evidence has been provided of its feasibility. In most applications, users are not able to provide the correct answer to the system, but they are able to provide binary (correct, incorrect) feedback. In this paper we propose feedback-weighted learning based on importance sampling to improve upon an initial supervised system using binary user feedback. We perform simulated experiments on document classification (for development) and Conversational Question Answering datasets like QuAC and DoQA, where binary user feedback is derived from gold annotations. The results show that our method is able to improve over the initial supervised system, getting close to a fully-supervised system that has access to the same labeled examples in in-domain experiments (QuAC), and even matching in out-of-domain experiments (DoQA). Our work opens the prospect to exploit interactions with real users and improve conversational systems after deployment. ","Improving Conversational Question Answering Systems after Deployment
using Feedback-Weighted Learning",3,"['Check out our new #COLING2020 paper on ""Improving Conversational Question Answering Systems after Deployment using Feedback-Weighted Learning""\n\nWe present a method that enables to improve an initial system after deployment using binary feedback only.\n\n<LINK>', 'Joint work with @kchonyc @antxaotegi @Aitor57 @gazkune and @eagirre', '@sazoo_nlp Thanks Sashank! Hope you are doing well too!']",20,11,380
3668,192,1465281600426496000,4417968213,Ani Nenkova,"What we found about temporal effects on NLP system performance: 1) system performance may not get worse over time; it can even improve over time; YET 2) a system with better performance can be obtained by retraining on temporally more recent data <LINK> By how much performance changes over time, and by how much it can be improved by temporal adaptation varies considerably depending on the language representation. Representations that yield an overall better system also afford less room for temporal adaptation Experiments on named entity recognition, sentiment in product reviews and true casing consistently show the need to clearly distinguish between temporal model deterioration and the potential for temporal adaptation. The overlap between training (fine-tuning) and testing data vocabulary steadily diminishes over time, and the smaller overlap is correlated with worse performance for a system without pre-trained representations. Self-labeling is one successful approach to temporal domain adaptation without human labeling of more recent data. The system produces labels for data from a more recent time period and is retrained on the combination of original and self-labeled data. @LChoshen Do you mean no human labeled new data? If so, indeed, no new data is now introduced. There is unlabelled data, labeled by the system trained on older data. Then the initial and self-labeled data are combined, the model retrained and the results are better. @LChoshen As for why it works, I agree it is unintuitive and hopefully there will be a follow up paper to unpack the why. We tried it because performance didn’t seem to deteriorate and checking if the labels are actually useful was an additional way to verify that.",https://arxiv.org/abs/2111.12790,"Keeping the performance of language technologies optimal as time passes is of great practical interest. Here we survey prior work concerned with the effect of time on system performance, establishing more nuanced terminology for discussing the topic and proper experimental design to support solid conclusions about the observed phenomena. We present a set of experiments with systems powered by large neural pretrained representations for English to demonstrate that {\em temporal model deterioration} is not as big a concern, with some models in fact improving when tested on data drawn from a later time period. It is however the case that {\em temporal domain adaptation} is beneficial, with better performance for a given time period possible when the system is trained on temporally more recent data. Our experiments reveal that the distinctions between temporal model deterioration and temporal domain adaptation becomes salient for systems built upon pretrained representations. Finally we examine the efficacy of two approaches for temporal domain adaptation without human annotations on new data, with self-labeling proving to be superior to continual pre-training. Notably, for named entity recognition, self-labeling leads to better temporal adaptation than human annotation. 
",Temporal Effects on Pre-trained Models for Language Processing Tasks,7,"['What we found about temporal effects on NLP system performance:\n\n1) system performance may not get worse over time; it can even improve over time; \n\nYET\n\n2) a system with better performance can be obtained by retraining on temporally more recent data\n\n<LINK>', 'By how much performance changes over time, and by how much it can be improved by temporal adaptation varies considerably depending on the language representation.\n\nRepresentations that yield an overall better system also afford less room for temporal adaptation', 'Experiments on named entity recognition, sentiment in product reviews and true casing consistently show the need to clearly distinguish between temporal model deterioration and the potential for temporal adaptation.', 'The overlap between training (fine-tuning) and testing data vocabulary steadily diminishes over time, and the smaller overlap is correlated with worse performance for a system without pre-trained representations.', 'Self-labeling is one successful approach to temporal domain adaptation without human labeling of more recent data. \n\nThe system produces labels for data from a more recent time period and is retrained on the combination of original and self-labeled data.', '@LChoshen Do you mean no human labeled new data? If so, indeed, no new data is now introduced.\n\nThere is unlabelled data, labeled by the system trained on older data. Then the initial and self-labeled data are combined, the model retrained and the results are better.', '@LChoshen As for why it works, I agree it is unintuitive and hopefully there will be a follow up paper to unpack the why.\n\nWe tried it because performance didn’t seem to deteriorate and checking if the labels are actually useful was an additional way to verify that.']",21,11,1731
3669,34,1220504185524764672,405482317,Prof. Lisa Harvey-Smith,"A new paper led by Chika Ogbodo was released today (I am a co-author) showing the strength and orientation of magnetic fields in the Milky Way. The magnetic fields in space are measured by looking at 'masers' - microwave lasers in space that occur in gas. <LINK> <LINK> The light from these space lasers is split into two colours by the magnetic field in the cloud of gas where it originates. This allows us to measure its strength and direction. Another way to measure the magnetic fields in space is to observe the light from pulsars (weird stars that appear to pulse, because they are spinning). The way that light is spread out by magnetism in our Galaxy can also be studied in this way. Congrats to Chika, to his supervisors @JimiGreen, Jo Dawson and the rest of the team. @JimiGreen @Macquarie_Uni @CSIRO_ATNF @ScotIncGrowth The universe may be infinite but I will never make sense of Brexit I'm afraid.",http://arxiv.org/abs/2001.06180,"From targeted observations of ground-state OH masers towards 702 Multibeam (MMB) survey 6.7-GHz methanol masers, between Galactic longitudes 186$^{\circ}$ through the Galactic centre to 20$^{\circ}$, made as part of the `MAGMO' project, we present the physical and polarisation properties of the 1720-MHz OH maser transition, including the identification of Zeeman pairs. We present 10 new and 23 previously catalogued 1720-MHz OH maser sources detected towards star formation regions. In addition, we also detected 16 1720-MHz OH masers associated with supernova remnants and two sites of diffuse OH emission. Towards the 33 star formation masers, we identify 44 Zeeman pairs, implying magnetic field strengths ranging from $-$11.4 to $+$13.2 mG, and a median magnetic field strength of $|B_{LOS}|$ $\sim$ 6 mG. With limited statistics, we present the in-situ magnetic field orientation of the masers and the Galactic magnetic field distribution revealed by the 1720-MHz transition. We also examine the association statistics of 1720-MHz OH SFR masers with other ground-state OH masers, excited-state OH masers, class I and class II methanol masers and water masers, and compare maser positions with mid-infrared images of the parent star forming regions. Of the 33 1720-MHz star formation masers, ten are offset from their central exciting sources, and appear to be associated with outflow activity. ","MAGMO: Polarimetry of 1720-MHz OH Masers towards Southern Star Forming
Regions",6,"[""A new paper led by Chika Ogbodo was released today (I am a co-author) showing the strength and orientation of magnetic fields in the Milky Way. The magnetic fields in space are measured by looking at 'masers' - microwave lasers in space that occur in gas.\n<LINK> <LINK>"", 'The light from these space lasers is split into two colours by the magnetic field in the cloud of gas where it originates. This allows us to measure its strength and direction.', 'Another way to measure the magnetic fields in space is to observe the light from pulsars (weird stars that appear to pulse, because they are spinning). The way that light is spread out by magnetism in our Galaxy can also be studied in this way.', 'Congrats to Chika, to his supervisors @JimiGreen, Jo Dawson and the rest of the team.', '@JimiGreen @Macquarie_Uni @CSIRO_ATNF', ""@ScotIncGrowth The universe may be infinite but I will never make sense of Brexit I'm afraid.""]",20,01,909
3670,44,1288123905861758976,94412971,Wojciech Kryściński,"New work in which we discuss summarization model evaluation and share a large set of resources for eval. metric research! 📊🧮 w/ @alexfabbri4 @BMarcusMcCann @CaimingXiong @RichardSocher Dragomir Radev Paper: <LINK> Resources: <LINK> Thread:",https://arxiv.org/abs/2007.12626,"The scarcity of comprehensive up-to-date studies on evaluation metrics for text summarization and the lack of consensus regarding evaluation protocols continue to inhibit progress. We address the existing shortcomings of summarization evaluation methods along five dimensions: 1) we re-evaluate 14 automatic evaluation metrics in a comprehensive and consistent fashion using neural summarization model outputs along with expert and crowd-sourced human annotations, 2) we consistently benchmark 23 recent summarization models using the aforementioned automatic evaluation metrics, 3) we assemble the largest collection of summaries generated by models trained on the CNN/DailyMail news dataset and share it in a unified format, 4) we implement and share a toolkit that provides an extensible and unified API for evaluating summarization models across a broad range of automatic metrics, 5) we assemble and share the largest and most diverse, in terms of model types, collection of human judgments of model-generated summaries on the CNN/Daily Mail dataset annotated by both expert judges and crowd-source workers. We hope that this work will help promote a more complete evaluation protocol for text summarization as well as advance research in developing evaluation metrics that better correlate with human judgments. ",SummEval: Re-evaluating Summarization Evaluation,9,"['New work in which we discuss summarization model evaluation and share a large set of resources for eval. metric research! 📊🧮\n\nw/ @alexfabbri4 @BMarcusMcCann @CaimingXiong @RichardSocher Dragomir Radev \n\nPaper: <LINK>\nResources: <LINK>\n\nThread:', 'We re-evaluated and studied 12 commonly used evaluation metrics in a consistent fashion and offer a side-by-side comparison of their performance in correlation to human expert annotators. (2/n) https://t.co/l6WI8rNW1K', 'We re-evaluated 23 modern summarization model outputs in a consistent and comprehensive fashion using both human annotations and automatic evaluation metrics and offer a side-by-side comparison of their performance. (3/n) https://t.co/cszvrp5Hxu', 'We implemented and shared an evaluation toolkit that offers a unified and easy way of conducting comprehensive performance evaluations of new summarization model outputs. (4/n)', 'We collected, unified, and shared the largest collection of summarization model outputs trained on the CNN/DailyMail news corpus. The collection includes 44 model outputs associated with 23 recent summarization papers. (5/n)', 'We collected and shared the largest and most diverse, in terms of model types, collection of human judgements of model-generated summaries that include expert and crowd-sourced annotations. (6/n)', 'We hope our contributions will promote a more complete evaluation protocol of summarization methods and will encourage further research into reliable evaluation metrics. (7/n)', 'We thank all authors who shared their model outputs and thus contributed to our work and we encourage the research community to join our efforts and contribute their model outputs and extend the evaluation toolkit. (8/n)', '#NLProc #TextSummarization (9/n)']",20,07,1698
3671,111,1125925969628241923,1053730346930384897,Jeff Carlin,"New MADCASH (Magellanic Analogs Dwarf Companions and Stellar Halos) paper by awesome U. Arizona grad student @Ragadeepika on extended stellar populations around IC 1613 from exquisite Subaru/HSC data. I'll let her summarize it, but check it out: <LINK> @rareflwr41 @adrianprw @Ragadeepika Yes - keep in touch!",https://arxiv.org/abs/1905.02210,"Stellar halos offer fossil evidence for hierarchical structure formation. Since halo assembly is predicted to be scale-free, stellar halos around low-mass galaxies constrain properties such as star formation in the accreted subhalos and the formation of dwarf galaxies. However, few observational searches for stellar halos in dwarfs exist. Here we present gi photometry of resolved stars in isolated Local Group dwarf irregular galaxy IC 1613 ($M_{\star} \sim 10^8 M_{\odot})$. These Subaru/Hyper Suprime-Cam observations are the widest and deepest of IC 1613 to date. We measure surface density profiles of young main-sequence, intermediate to old red giant branch, and ancient horizontal branch stars outside of 12' ($\sim 2.6$ kpc; 2.5 half-light radii) from the IC 1613 center. All of the populations extend to ~24' ($\sim 5.2$ kpc; 5 half-light radii), with the older populations best fit by a broken exponential in these outer regions. Comparison with earlier studies sensitive to IC 1613's inner regions shows that the density of old stellar populations steepens substantially with distance from the center; we trace the $g$-band effective surface brightness to an extremely faint limit of $\sim 33.7$ mag arcsec$^{-2}$. Conversely, the distribution of younger stars follows a single, shallow exponential profile in the outer regions, demonstrating different formation channels for the younger and older components of IC 1613. The outermost, intermediate-age and old stars have properties consistent with those expected for accreted stellar halos, though future observational and theoretical work is needed to definitively distinguish this scenario from other possibilities. ","Hyper Wide Field Imaging of the Local Group Dwarf Irregular Galaxy IC
1613: An Extended Component of Metal-poor Stars",2,"[""New MADCASH (Magellanic Analogs Dwarf Companions and Stellar Halos) paper by awesome U. Arizona grad student @Ragadeepika on extended stellar populations around IC 1613 from exquisite Subaru/HSC data. I'll let her summarize it, but check it out: <LINK>"", '@rareflwr41 @adrianprw @Ragadeepika Yes - keep in touch!']",19,05,309
3672,38,1073345088842018817,24859650,Jan-Willem van de Meent,"New paper by my student @BabakEsmaeili10 on learning structured representations for reviews: <LINK>. We infer embeddings for users and items and model review text using a structured neural topic model, which infers groups of interpretable, related topics. <LINK>",https://arxiv.org/abs/1812.05035,"We present Variational Aspect-based Latent Topic Allocation (VALTA), a family of autoencoding topic models that learn aspect-based representations of reviews. VALTA defines a user-item encoder that maps bag-of-words vectors for combined reviews associated with each paired user and item onto structured embeddings, which in turn define per-aspect topic weights. We model individual reviews in a structured manner by inferring an aspect assignment for each sentence in a given review, where the per-aspect topic weights obtained by the user-item encoder serve to define a mixture over topics, conditioned on the aspect. The result is an autoencoding neural topic model for reviews, which can be trained in a fully unsupervised manner to learn topics that are structured into aspects. Experimental evaluation on large number of datasets demonstrates that aspects are interpretable, yield higher coherence scores than non-structured autoencoding topic model variants, and can be utilized to perform aspect-based comparison and genre discovery. ",Structured Neural Topic Models for Reviews,1,"['New paper by my student @BabakEsmaeili10 on learning structured representations for reviews: <LINK>. We infer embeddings for users and items and model review text using a structured neural topic model, which infers groups of interpretable, related topics. <LINK>']",18,12,262
3673,157,1395536919774121984,68538286,Dan Hendrycks,"Can Transformers crack the coding interview? We collected 10,000 programming problems to find out. GPT-3 isn't very good, but new models like GPT-Neo are starting to be able to solve introductory coding challenges. paper: <LINK> dataset: <LINK> <LINK>",https://arxiv.org/abs/2105.09938,"While programming is one of the most broadly applicable skills in modern society, modern machine learning models still cannot code solutions to basic problems. Despite its importance, there has been surprisingly little work on evaluating code generation, and it can be difficult to accurately assess code generation performance rigorously. To meet this challenge, we introduce APPS, a benchmark for code generation. Unlike prior work in more restricted settings, our benchmark measures the ability of models to take an arbitrary natural language specification and generate satisfactory Python code. Similar to how companies assess candidate software developers, we then evaluate models by checking their generated code on test cases. Our benchmark includes 10,000 problems, which range from having simple one-line solutions to being substantial algorithmic challenges. We fine-tune large language models on both GitHub and our training set, and we find that the prevalence of syntax errors is decreasing exponentially as models improve. Recent models such as GPT-Neo can pass approximately 20% of the test cases of introductory problems, so we find that machine learning models are now beginning to learn how to code. As the social significance of automatic code generation increases over the coming years, our benchmark can provide an important measure for tracking advancements. ",Measuring Coding Challenge Competence With APPS,1,"[""Can Transformers crack the coding interview? We collected 10,000 programming problems to find out. GPT-3 isn't very good, but new models like GPT-Neo are starting to be able to solve introductory coding challenges.\n\npaper: <LINK>\ndataset: <LINK> <LINK>""]",21,05,251
3674,71,1149037407192453124,831519512214245377,Dr. Fran Concha-Ramírez,"My new paper with @Martijn_Wilhelm, S. Portegies Zwart and T. Haworth is out on arXiv today: <LINK> We run simulations of star clusters of different densities, where some of the stars have protoplanetary disks around them. Short thread about our results: The disks are subject to intrinsic viscous growth, truncations due to nearby encounters with other stars, and external photoevaporation caused by bright, massive stars in the vicinity. We look to find out how these processes affect disk survival (and subsequent planet formation). We find that external photoevaporation is very efficient in depleting disk mass: between 60% and 80% of disks get destroyed before 2 Myr, in the early dynamical stages of cluster evolution. <LINK> Mass lost due to encounters with other stars is negligible compared to the effects of photoevaporation. The scope of these encounters is also very local, whereas photoevaporation affects all stars in the cluster all through the cluster's evolution (though in different degrees). <LINK> Our results are in good agreement with observational estimates of disk lifetimes. They also support observational evidence that suggests that rocky gas giant cores/planets must start forming very early on (before 1 Myr of disk evolution). Our biggest caveat: our results are strongly dependent on initial conditions (the bane of simulations). Initial stellar densities can strongly affect to what degree each of these disk dispersal mechanisms is important. We will continue investigating the effect of initial conditions in future work. For now, comments and questions are very welcome! @nespinozap @Martijn_Wilhelm Thanks! It does impose such constraints, which is v interesting because most stars are formed in clusters. So it also constrains the regions where stars w/ planets might have formed! @nespinozap @Martijn_Wilhelm We have not taken stellar properties such as metallicity into account, and also we don't treat the dust and gas components of the disk separately. It could definitely affect our results! @MikeGrudic @Martijn_Wilhelm <LINK>",https://arxiv.org/abs/1907.03760,"Planet-forming circumstellar disks are a fundamental part of the star formation process. Since stars form in a hierarchical fashion in groups of up to hundreds or thousands, the UV radiation environment that these disks are exposed to can vary in strength by at least six orders of magnitude. This radiation can limit the masses and sizes of the disks. Diversity in star forming environments can have long lasting effects in disk evolution and in the resulting planetary populations. We perform simulations to explore the evolution of circumstellar disks in young star clusters. We include viscous evolution, as well as the impact of dynamical encounters and external photoevaporation. We find that photoevaporation is an important process in destroying circumstellar disks: in regions of stellar density $\rho \sim 100 \mathrm{\ M}_\odot \mathrm{\ pc}^{-3}\mathrm{\ }$ around 80% of disks are destroyed before 2 Myr of cluster evolution. Our findings are in agreement with observed disk fractions in young star forming regions and support previous estimations that planet formation must start in timescales < 0.1 - 1 Myr. ","External photoevaporation of circumstellar disks constrains the
timescale for planet formation",10,"['My new paper with @Martijn_Wilhelm, S. Portegies Zwart and T. Haworth is out on arXiv today: \n<LINK>\n\nWe run simulations of star clusters of different densities, where some of the stars have protoplanetary disks around them. Short thread about our results:', 'The disks are subject to intrinsic viscous growth, truncations due to nearby encounters with other stars, and external photoevaporation caused by bright, massive stars in the vicinity. We look to find out how these processes affect disk survival (and subsequent planet formation).', 'We find that external photoevaporation is very efficient in depleting disk mass: between 60% and 80% of disks get destroyed before 2 Myr, in the early dynamical stages of cluster evolution. https://t.co/wMMeQvsb5k', ""Mass lost due to encounters with other stars is negligible compared to the effects of photoevaporation. The scope of these encounters is also very local, whereas photoevaporation affects all stars in the cluster all through the cluster's evolution (though in different degrees). https://t.co/8A8DvyecLS"", 'Our results are in good agreement with observational estimates of disk lifetimes. They also support observational evidence that suggests that rocky gas giant cores/planets must start forming very early on (before 1 Myr of disk evolution).', 'Our biggest caveat: our results are strongly dependent on initial conditions (the bane of simulations). Initial stellar densities can strongly affect to what degree each of these disk dispersal mechanisms is important.', 'We will continue investigating the effect of initial conditions in future work. \nFor now, comments and questions are very welcome!', '@nespinozap @Martijn_Wilhelm Thanks! It does impose such constraints, which is v interesting because most stars are formed in clusters. So it also constrains the regions where stars w/ planets might have formed!', ""@nespinozap @Martijn_Wilhelm We have not taken stellar properties such as metallicity into account, and also we don't treat the dust and gas components of the disk separately. It could definitely affect our results!"", '@MikeGrudic @Martijn_Wilhelm https://t.co/b1qTj38Sto']",19,07,2070
3675,130,1427750806137233414,1263676931112947712,Abdullah Abuolaim,"In addition to being useful for defocus deblurring, what else Dual-Pixel (DP) can offer? . Check our DP-based multi-view synthesis and a New Image Motion Attribute (NIMAT). . Paper: <LINK> GitHub: <LINK> . With M. Afifi @mahmoudnafifi & M. Brown <LINK>",https://arxiv.org/abs/2108.05251,"Many camera sensors use a dual-pixel (DP) design that operates as a rudimentary light field providing two sub-aperture views of a scene in a single capture. The DP sensor was developed to improve how cameras perform autofocus. Since the DP sensor's introduction, researchers have found additional uses for the DP data, such as depth estimation, reflection removal, and defocus deblurring. We are interested in the latter task of defocus deblurring. In particular, we propose a single-image deblurring network that incorporates the two sub-aperture views into a multi-task framework. Specifically, we show that jointly learning to predict the two DP views from a single blurry input image improves the network's ability to learn to deblur the image. Our experiments show this multi-task strategy achieves +1dB PSNR improvement over state-of-the-art defocus deblurring methods. In addition, our multi-task framework allows accurate DP-view synthesis (e.g., ~39dB PSNR) from the single input image. These high-quality DP views can be used for other DP-based applications, such as reflection removal. As part of this effort, we have captured a new dataset of 7,059 high-quality images to support our training for the DP-view synthesis task. Our dataset, code, and trained models are publicly available at this https URL ","Improving Single-Image Defocus Deblurring: How Dual-Pixel Images Help
Through Multi-Task Learning",1,"['In addition to being useful for defocus deblurring, what else Dual-Pixel (DP) can offer?\n.\nCheck our DP-based multi-view synthesis and a New Image Motion Attribute (NIMAT).\n.\nPaper: <LINK>\nGitHub: <LINK>\n.\nWith M. Afifi @mahmoudnafifi &amp; M. Brown <LINK>']",21,08,252
3676,31,1001186740298702848,529438517,Aaron Clauset,"Excited to share a new paper, which investigates how faculty hiring is a mechanism for how ""Prestige drives epistemic inequality in the diffusion of scientific ideas,"" with @alliecmorgan D. Economou, and @samfway <LINK> <LINK> @boydgraber @alliecmorgan @samfway We did consider using topic models on titles (like we did here <LINK>). Expert-generated keywords worked well for the subject areas we chose. Some careful topic modeling would be a great extension of our empirical analysis.",https://arxiv.org/abs/1805.09966,"The spread of ideas in the scientific community is often viewed as a competition, in which good ideas spread further because of greater intrinsic fitness, and publication venue and citation counts correlate with importance and impact. However, relatively little is known about how structural factors influence the spread of ideas, and specifically how where an idea originates might influence how it spreads. Here, we investigate the role of faculty hiring networks, which embody the set of researcher transitions from doctoral to faculty institutions, in shaping the spread of ideas in computer science, and the importance of where in the network an idea originates. We consider comprehensive data on the hiring events of 5032 faculty at all 205 Ph.D.-granting departments of computer science in the U.S. and Canada, and on the timing and titles of 200,476 associated publications. Analyzing five popular research topics, we show empirically that faculty hiring can and does facilitate the spread of ideas in science. Having established such a mechanism, we then analyze its potential consequences using epidemic models to simulate the generic spread of research ideas and quantify the impact of where an idea originates on its longterm diffusion across the network. We find that research from prestigious institutions spreads more quickly and completely than work of similar quality originating from less prestigious institutions. Our analyses establish the theoretical trade-offs between university prestige and the quality of ideas necessary for efficient circulation. Our results establish faculty hiring as an underlying mechanism that drives the persistent epistemic advantage observed for elite institutions, and provide a theoretical lower bound for the impact of structural inequality in shaping the spread of ideas in science. ","Prestige drives epistemic inequality in the diffusion of scientific
ideas",2,"['Excited to share a new paper, which investigates how faculty hiring is a mechanism for how ""Prestige drives epistemic inequality in the diffusion of scientific ideas,"" with @alliecmorgan D. Economou, and @samfway <LINK> <LINK>', '@boydgraber @alliecmorgan @samfway We did consider using topic models on titles (like we did here https://t.co/bIIuEfGzps). Expert-generated keywords worked well for the subject areas we chose. Some careful topic modeling would be a great extension of our empirical analysis.']",18,05,485
3677,12,846880534856810496,21611239,Sean Carroll,"New paper: the universe is accelerating because it's approaching its heat death. <LINK> Or, less poetically: the universe becomes smooth and empty (de Sitter spacetime) as its entropy increases. With @aechatwi. @ecopenhaver Yes, but it's not the horizon until we're in de Sitter. Before then, it's something called a Q-screen. @nattyover Hoping to, before too long.",https://arxiv.org/abs/1703.09241,"In a wide class of cosmological models, a positive cosmological constant drives cosmological evolution toward an asymptotically de Sitter phase. Here we connect this behavior to the increase of entropy over time, based on the idea that de Sitter spacetime is a maximum-entropy state. We prove a cosmic no-hair theorem for Robertson-Walker and Bianchi I spacetimes that admit a Q-screen (""quantum"" holographic screen) with certain entropic properties: If generalized entropy, in the sense of the cosmological version of the Generalized Second Law conjectured by Bousso and Engelhardt, increases up to a finite maximum value along the screen, then the spacetime is asymptotically de Sitter in the future. Moreover, the limiting value of generalized entropy coincides with the de Sitter horizon entropy. We do not use the Einstein field equations in our proof, nor do we assume the existence of a positive cosmological constant. As such, asymptotic relaxation to a de Sitter phase can, in a precise sense, be thought of as cosmological equilibration. ","Cosmic Equilibration: A Holographic No-Hair Theorem from the Generalized
Second Law",4,"[""New paper: the universe is accelerating because it's approaching its heat death.\n<LINK>"", 'Or, less poetically: the universe becomes smooth and empty (de Sitter spacetime) as its entropy increases. With @aechatwi.', ""@ecopenhaver Yes, but it's not the horizon until we're in de Sitter. Before then, it's something called a Q-screen."", '@nattyover Hoping to, before too long.']",17,03,365
3678,132,1007633027122380800,563661699,Prof Alex Hill,"Check out my paper with @astro_curator @jcibanezm and Andrea Gatto. We find that the pressure of interstellar gas stays in the range where cold dense atomic gas (which might eventually form stars) can exist even when we change the heating rate by ~10. <LINK> 1/6 <LINK> Technical details: Happens because any gas that gets turbulently compressed has two options: expand and heat at the same pressure, or heat to become (less) cold gas at the same density (raising the pressure). Latter happens *much* faster, so pressure tends to go up. 2/6 This is supported by observations that cold hydrogen gas is found a long way out in the Milky Way disk, where the pressure ought to be too low for cold gas to form. 3/6 Has implications for cosmological simulations which don’t resolve the cold gas: the pressure in the warm gas should be kept high enough for cold gas to form even far out in galactic disks. 4/6 Our data (22 GB) is available through the @AMNH library at <LINK> So pull out @yt_astro and have at it! 5/6 This paper started with failure: alternative title would be “What we learned when I couldn’t get my simulations to work for two years”. 6/6 @astrocurator (7/6)",https://arxiv.org/abs/1806.05571,"We investigate the impact of the far ultraviolet (FUV) heating rate on the stability of the three-phase interstellar medium using three-dimensional simulations of a $1$ kpc$^2$, vertically-extended domain. The FUV heating rate sets the range of thermal pressures across which the cold ($\sim10^2$ K) and warm ($\sim10^4$ K) neutral media (CNM and WNM) can coexist in equilibrium. Even absent a variable star formation rate regulating the FUV heating rate, the gas physics keeps the pressure in the two-phase regime: because radiative heating and cooling processes happen on shorter timescales than sound wave propagation, turbulent compressions tend to keep the interstellar medium within the CNM-WNM pressure regime over a wide range of heating rates. The thermal pressure is set primarily by the heating rate with little influence from the hydrostatics. The vertical velocity dispersion adjusts as needed to provide hydrostatic support given the thermal pressure: when the turbulent pressure $\langle\rho\rangle\sigma_z^2$ is calculated over scales $\gtrsim500$ pc, the thermal plus turbulent pressure approximately equals the weight of the gas. The warm gas volume filling fraction is $0.2<f_w<0.8$ over a factor of less than three in heating rate, with $f_w$ near unity at higher heating rates and near zero at lower heating rates. We suggest that cosmological simulations that do not resolve the CNM should maintain an interstellar thermal pressure within the two-phase regime. ","Effect of the heating rate on the stability of the three-phase
interstellar medium",7,"['Check out my paper with @astro_curator @jcibanezm and Andrea Gatto. We find that the pressure of interstellar gas stays in the range where cold dense atomic gas (which might eventually form stars) can exist even when we change the heating rate by ~10. <LINK> 1/6 <LINK>', 'Technical details: Happens because any gas that gets turbulently compressed has two options: expand and heat at the same pressure, or heat to become (less) cold gas at the same density (raising the pressure). Latter happens *much* faster, so pressure tends to go up. 2/6', 'This is supported by observations that cold hydrogen gas is found a long way out in the Milky Way disk, where the pressure ought to be too low for cold gas to form. 3/6', 'Has implications for cosmological simulations which don’t resolve the cold gas: the pressure in the warm gas should be kept high enough for cold gas to form even far out in galactic disks. 4/6', 'Our data (22 GB) is available through the @AMNH library at https://t.co/BP6gQsNmGz So pull out @yt_astro and have at it! 5/6', 'This paper started with failure: alternative title would be “What we learned when I couldn’t get my simulations to work for two years”. 6/6', '@astrocurator (7/6)']",18,06,1170
3679,21,1453895296166023171,1094750915444310016,Alex Pizzuto,"New paper day! May I [re-]introduce you to TauRunner, a python package to propagate particles at ultra high energies? This work – led by @IbrahimSafa1 & Jeff Lazar (contributions from myself, @carguelles314, et al.) – has been a blast Possible 🧵 later <LINK>",https://arxiv.org/abs/2110.14662,"In the past decade IceCube's observations have revealed a flux of astrophysical neutrinos extending to $10^{7}~\rm{GeV}$. The forthcoming generation of neutrino observatories promises to grant further insight into the high-energy neutrino sky, with sensitivity reaching energies up to $10^{12}~\rm{GeV}$. At such high energies, a new set of effects becomes relevant, which was not accounted for in the last generation of neutrino propagation software. Thus, it is important to develop new simulations which efficiently and accurately model lepton behavior at this scale. We present TauRunner a PYTHON-based package that propagates neutral and charged leptons. TauRunner supports propagation between $10~\rm{GeV}$ and $10^{12}~\rm{GeV}$. The package accounts for all relevant secondary neutrinos produced in charged-current tau neutrino interactions. Additionally, tau energy losses of taus produced in neutrino interactions is taken into account, and treated stochastically. Finally, TauRunner is broadly adaptable to divers experimental setups, allowing for user-specified trajectories and propagation media, neutrino cross sections, and initial spectra. ","TauRunner: A Public Python Program to Propagate Neutral and Charged
Leptons",1,"['New paper day!\n\nMay I [re-]introduce you to TauRunner, a python package to propagate particles at ultra high energies?\n\nThis work – led by @IbrahimSafa1 &amp; Jeff Lazar (contributions from myself, @carguelles314, et al.) – has been a blast\n\nPossible 🧵 later\n\n<LINK>']",21,10,258
3680,92,1480798108028223488,776765039726460929,Carlo Felice Manara,New paper by PhD student @ESO and Konkoli Obs Gabriella Zsidi. <LINK> We study the accretion variability of CR Cha on timescales from minutes to a decade. The peak is at weeks-month timescale. (expect more to come from Gabriella and from #PENELLOPELP) <LINK>,https://arxiv.org/abs/2201.03396,"Classical T Tauri stars are surrounded by a circumstellar disk from which they are accreting material. This process is essential in the formation of Sun-like stars. Although often described with simple and static models, the accretion process is inherently time variable. Our aim is to examine the accretion process of the low-mass young stellar object CR Cha on a wide range of timescales from minutes to a decade by analyzing both photometric and spectroscopic observations from 2006, 2018, and 2019. We carried out period analysis on the light curves of CR Cha from the TESS mission and the ASAS-SN and the ASAS-3 databases. We studied the color variations of the system using $I,J,H,K$-band photometry obtained contemporaneously with the TESS observing window. We analyzed the amplitude, timescale, and the morphology of the accretion tracers found in a series of high-resolution spectra obtained in 2006 with the AAT/UCLES, in 2018 with the HARPS, and in 2019 with the ESPRESSO and the FEROS spectrographs. All photometric data reveal periodic variations compatible with a 2.327 days rotational period, which is stable in the system over decades. Moreover, the ASAS-SN and ASAS-3 data hint at a long-term brightening by 0.2 mag, between 2001 and 2008, and of slightly less than 0.1 mag in the 2015 - 2018 period. The near-infrared color variations can be explained by either changing accretion rate or changes in the inner disk structure. Our results show that the amplitude of the variations in the H$\alpha$ emission increases on timescales from hours to days/weeks, after which it stays similar even when looking at decadal timescales. On the other hand, we found significant morphological variations on yearly/decadal timescales, indicating that the different physical mechanisms responsible for the line profile changes, such as accretion or wind, are present to varying degrees at different times. ","Accretion variability from minutes to decade timescales in the classical
T Tauri star CR Cha",1,['New paper by PhD student @ESO and Konkoli Obs Gabriella Zsidi. <LINK>\nWe study the accretion variability of CR Cha on timescales from minutes to a decade. The peak is at weeks-month timescale. \n(expect more to come from Gabriella and from #PENELLOPELP) <LINK>'],22,01,258
3681,47,1321469284959375360,894875488094760960,Andrea Dittadi,"New paper: On the Transfer of Disentangled Representations in Realistic Settings. We scale up disentangled representations to a new high-resolution robotics dataset and investigate the role of disentanglement in Out-Of-Distribution generalization. <LINK> [1/4] <LINK> Our dataset consists of two parts: a large simulated dataset containing images of a robotic setup with 7 factors of variation (see previous gif), and a smaller test set of real images. Here is how a model transfers from the simulator to the real robot: [2/4] <LINK> We measure Out-Of-Distribution generalization when (1) only the downstream predictor is OOD, and (2) both encoder and predictor are OOD (e.g. when deployed to the real world). High disentanglement leads to low transfer score (lower is better) in some cases but not always. [3/4] <LINK> This was a super fun project at @MPI_IS with an amazing group of collaborators: @f_traeuble (equal contribution), @FrancescoLocat8, @manuelwuethrich, @ai_frojack, @OleWinther1, Stefan Bauer, @bschoelkopf [4/4]",http://arxiv.org/abs/2010.14407,"Learning meaningful representations that disentangle the underlying structure of the data generating process is considered to be of key importance in machine learning. While disentangled representations were found to be useful for diverse tasks such as abstract reasoning and fair classification, their scalability and real-world impact remain questionable. We introduce a new high-resolution dataset with 1M simulated images and over 1,800 annotated real-world images of the same setup. In contrast to previous work, this new dataset exhibits correlations, a complex underlying structure, and allows to evaluate transfer to unseen simulated and real-world settings where the encoder i) remains in distribution or ii) is out of distribution. We propose new architectures in order to scale disentangled representation learning to realistic high-resolution settings and conduct a large-scale empirical study of disentangled representations on this dataset. We observe that disentanglement is a good predictor for out-of-distribution (OOD) task performance. ",On the Transfer of Disentangled Representations in Realistic Settings,4,"['New paper: On the Transfer of Disentangled Representations in Realistic Settings. We scale up disentangled representations to a new high-resolution robotics dataset and investigate the role of disentanglement in Out-Of-Distribution generalization.\n\n<LINK> [1/4] <LINK>', 'Our dataset consists of two parts: a large simulated dataset containing images of a robotic setup with 7 factors of variation (see previous gif), and a smaller test set of real images.\n\nHere is how a model transfers from the simulator to the real robot: [2/4] https://t.co/zIgTTxAZfB', 'We measure Out-Of-Distribution generalization when (1) only the downstream predictor is OOD, and (2) both encoder and predictor are OOD (e.g. when deployed to the real world). High disentanglement leads to low transfer score (lower is better) in some cases but not always. [3/4] https://t.co/sXfkQCd1cn', 'This was a super fun project at @MPI_IS with an amazing group of collaborators: @f_traeuble (equal contribution), @FrancescoLocat8, @manuelwuethrich, @ai_frojack, @OleWinther1, Stefan Bauer, @bschoelkopf [4/4]']",20,10,1029
3682,301,1319070298541441025,1258919993334312960,Yuyue Yan,"Our manuscript: “Variable guiding strategies in multi-exits evacuation: Pursuing balanced pedestrian densities” <LINK> is now available.@ProfJohnDrury @EvacuationModel We studied the effects of crowd density control in guiding strategies on evacuation efficiency. <LINK> We revealed that for a moderate target density value, the density control for the partial regions (near the exits) could yield a global effect for balancing the pedestrians in the rest of the regions and hence improve the evacuation efficiency.",http://arxiv.org/abs/2010.10647,"Evacuation assistants and their guiding strategies play an important role in the multi-exits pedestrian evacuation. To investigate the effect of guiding strategies on evacuation efficiency, we propose a force-driven cellular automaton model with adjustable guiding attractions imposed by the evacuation assistants located in the exits. In this model, each of the evacuation assistants tries to attract the pedestrians in the evacuation space towards its own exit by sending a quantifiable guiding signal, which may be adjusted according to the values of pedestrian density near the exit. The effects of guiding strategies pursuing balanced pedestrian densities are studied. It is observed that the unbalanced pedestrian distribution is mainly yielded by a snowballing effect generated from the mutual attractions among the pedestrians, and can be suppressed by controlling the pedestrian densities around the exits. We also reveal an interesting fact that given a moderate target density value, the density control for the partial regions (near the exits) could yield a global effect for balancing the pedestrians in the rest of the regions and hence improve the evacuation efficiency. Our findings may contribute to give new insight into designing effective guiding strategies in the realistic evacuation process. ","Variable guiding strategies in multi-exits evacuation: Pursuing balanced
pedestrian densities",2,"['Our manuscript: “Variable guiding strategies in multi-exits evacuation: Pursuing balanced pedestrian densities” <LINK>\nis now available.@ProfJohnDrury @EvacuationModel\nWe studied the effects of crowd density control in guiding strategies on evacuation efficiency. <LINK>', 'We revealed that for a moderate target density value, the density control for the partial regions (near the exits) could yield a global effect for balancing the pedestrians in the rest of the regions and hence improve the evacuation efficiency.']",20,10,515
3683,77,1148594342384295937,1091815542334337024,Scott Niekum,"Want to get up to speed on what’s happening in robot learning for manipulation? Oliver Kroemer, George Konidaris, and I have just released a new survey paper: <LINK>. And please let us know if there is important work we missed — it is a huge and growing field! @animesh_garg Yes, quite a bit longer than we’d like to admit! Thanks for your feedback in early ideas! @RezaAhmadzadeh_ That was fast! We will update it if we missed anything but this is essentially the final version that we are submitting to a journal.",http://arxiv.org/abs/1907.03146,"A key challenge in intelligent robotics is creating robots that are capable of directly interacting with the world around them to achieve their goals. The last decade has seen substantial growth in research on the problem of robot manipulation, which aims to exploit the increasing availability of affordable robot arms and grippers to create robots capable of directly interacting with the world to achieve their goals. Learning will be central to such autonomous systems, as the real world contains too much variation for a robot to expect to have an accurate model of its environment, the objects in it, or the skills required to manipulate them, in advance. We aim to survey a representative subset of that research which uses machine learning for manipulation. We describe a formalization of the robot manipulation learning problem that synthesizes existing research into a single coherent framework and highlight the many remaining research opportunities and challenges. ","A Review of Robot Learning for Manipulation: Challenges,
Representations, and Algorithms",3,"['Want to get up to speed on what’s happening in robot learning for manipulation? Oliver Kroemer, George Konidaris, and I have just released a new survey paper: <LINK>. And please let us know if there is important work we missed — it is a huge and growing field!', '@animesh_garg Yes, quite a bit longer than we’d like to admit! Thanks for your feedback in early ideas!', '@RezaAhmadzadeh_ That was fast! We will update it if we missed anything but this is essentially the final version that we are submitting to a journal.']",19,07,515
3684,158,1306225509681033217,1056626853652426752,Jonathan Herzig,"1/ Does a span-based parser perform better than seq2seq models in terms of compositional generalization in semantic parsing? Seems like it! Check out our new paper: <LINK>, Joint work with @JonathanBerant <LINK>",https://arxiv.org/abs/2009.06040,"Despite the success of sequence-to-sequence (seq2seq) models in semantic parsing, recent work has shown that they fail in compositional generalization, i.e., the ability to generalize to new structures built of components observed during training. In this work, we posit that a span-based parser should lead to better compositional generalization. we propose SpanBasedSP, a parser that predicts a span tree over an input utterance, explicitly encoding how partial programs compose over spans in the input. SpanBasedSP extends Pasupat et al. (2019) to be comparable to seq2seq models by (i) training from programs, without access to gold trees, treating trees as latent variables, (ii) parsing a class of non-projective trees through an extension to standard CKY. On GeoQuery, SCAN and CLOSURE datasets, SpanBasedSP performs similarly to strong seq2seq baselines on random splits, but dramatically improves performance compared to baselines on splits that require compositional generalization: from $61.0 \rightarrow 88.9$ average accuracy. ",Span-based Semantic Parsing for Compositional Generalization,5,"['1/ Does a span-based parser perform better than seq2seq models in terms of compositional generalization in semantic parsing?\nSeems like it!\nCheck out our new paper: <LINK>, Joint work with @JonathanBerant <LINK>', '2/ We present SpanBasedSP, a parser that predicts a span tree over an input utterance, explicitly encoding how\npartial programs compose over spans in the input.\n\nThis form of inductive bias helps the model in dealing with new structures, unseen during training time.', '3/ To be comparable with seq2seq models, we train from programs, without access to gold trees, treating trees\nas latent variables. We also parse a common class of non-projective trees through an extension to standard CKY. https://t.co/anGR2VTUi3', '4/ SpanBasedSP performs similarly to strong seq2seq baselines on random splits, but dramatically improves performance compared to baselines on splits that require compositional generalization. \nFrom 69.8 → 95.3 average accuracy on SCAN, CLOSURE and GeoQuery. https://t.co/EqeDSnYK0K', '@jayantkrish @JonathanBerant I agree it depends on the data. I think our method could help in iid splits if there are indeed many structures in the test set that are unseen during training time. For most popular datasets it seems that this is not the case though, and this is why seq2seq models do well there.']",20,09,1282
3685,52,1308947577019281414,1087183776,Jordan Ellenberg,"New paper! With Daniel Corey and @wanlinbunny. ""The Ceresa class: tropical, topological, and algebraic."" <LINK>",https://arxiv.org/abs/2009.10824,"The Ceresa cycle is an algebraic cycle attached to a smooth algebraic curve with a marked point, which is trivial when the curve is hyperelliptic with a marked Weierstrass point. The image of the Ceresa cycle under a certain cycle class map provides a class in \'etale cohomology called the Ceresa class. Describing the Ceresa class explicitly for non-hyperelliptic curves is in general not easy. We present a ""combinatorialization"" of this problem, explaining how to define a Ceresa class for a tropical algebraic curve, and also for a topological surface endowed with a multiset of commuting Dehn twists (where it is related to the Morita cocycle on the mapping class group). We explain how these are related to the Ceresa class of a smooth algebraic curve over $\mathbb{C}(\!(t)\!)$, and show that the Ceresa class in each of these settings is torsion. ","The Ceresa class: tropical, topological, and algebraic",9,"['New paper! With Daniel Corey and @wanlinbunny. ""The Ceresa class: tropical, topological, and algebraic."" <LINK>', '@wanlinbunny The Ceresa class is a really handy algebraic cycle attached to a curve: in some sense the simplest cycle ""beyond the Jacobian.""', '@wanlinbunny We had wondered for a while whether there was a good analogue for a tropical curve (i.e. a metric graph). Turns out there is!', ""@wanlinbunny What's more, it's computable in terms of commutators in the mapping class group; much of the main work of the paper is in that language."", '@wanlinbunny So if you like, the paper is about tropical curves, but also about the restriction of the Morita cocycle to groups generated by commuting Dehn twists.', '@wanlinbunny And the invariant ends up agreeing with the usual Ceresa class for curves/C((t)) whose tropicalization is the tropical curve in question.', ""@wanlinbunny What kind of invariant is this? It lives in a finite abelian group attached to the curve (not Pic^0, but related.) In particular, it's torsion."", '@wanlinbunny The key point for that turns out to be an old theorem of @AndyPutmanMath -- thanks, Andy!', '@wanlinbunny @AndyPutmanMath I really like this definition; we have lots of questions about it, and in this paper just a few answers.']",20,09,1250
3686,134,1466449032092389379,2377407248,Daniel Whiteson,"New paper! ""New physics in tri-boson event topologies"" <LINK> Experimental studies led by UCI *undergraduate* Jesus Caridad Ramirez. As the LHC datasets grow, rare events become a new playground for seeing rare events and looking for new physics. We studied the three-boson final states: <LINK> Because there are a lot of ways to get three bosons, which can all decay a lot of ways, there were a LOT of variations for Jesus to study, such as: <LINK> And: <LINK> AND: <LINK> All of which were interpreted in effective field theories that describe a wide set of models by our theory friends: <LINK> @KyleCranmer Next paper: ""Sarcasm on twitter""",https://arxiv.org/abs/2112.00137,"We present a study of the sensitivity to models of new physics of proton collisions resulting in three electroweak bosons. As a benchmark, we analyze models in which an exotic scalar field $\phi$ is produced in association with a gauge boson ($V=\gamma$ or $Z$). The scalar then decays to a pair of bosons, giving the process $pp\rightarrow \phi V\rightarrow V'V""V$. We interpret our results in a set of effective field theories where the exotic scalar fields couple to the Standard Model through pairs of electroweak gauge bosons. We estimate the sensitivity of the LHC and HL-LHC datasets and find sensitivity to cross sections in the 10 fb -- 0.5 fb range, corresponding to scalar masses of 500 GeV to 2 TeV and effective operator coefficients up to 35 TeV. ",New physics in triboson event topologies,7,"['New paper!\n\n ""New physics in tri-boson event topologies""\n\n<LINK>\n\nExperimental studies led by UCI *undergraduate* Jesus Caridad Ramirez.', 'As the LHC datasets grow, rare events become a new playground for seeing rare events and looking for new physics. We studied the three-boson final states: https://t.co/HVRNTgrOD4', 'Because there are a lot of ways to get three bosons, which can all decay a lot of ways, there were a LOT of variations for Jesus to study, such as: https://t.co/qs4gu4ozjV', 'And: https://t.co/slS3oT5vs3', 'AND: https://t.co/Kls5Hhh8Js', 'All of which were interpreted in effective field theories that describe a wide set of models by our theory friends: https://t.co/zFBnjDdkr0', '@KyleCranmer Next paper: ""Sarcasm on twitter""']",21,12,646
3687,34,1353887309968642053,922847904058011649,Romy Rodríguez,"Excited to share this new paper! Here we present “Analytic estimates of the achievable precision on the properties of transiting exoplanet using purely empirical measurements” <LINK>",https://arxiv.org/abs/2101.09289,"We present analytic estimates of the fractional uncertainties on the mass, radius, surface gravity, and density of a transiting planet, using only empirical or semi-empirical measurements. We first express these parameters in terms of transit photometry and radial velocity (RV) observables, as well as the stellar radius $R_{\star}$, if required. In agreement with previous results, we find that, assuming a circular orbit, the surface gravity of the planet ($g_p$) depends only on empirical transit and RV parameters; namely, the planet period $P$, the transit depth $\delta$, the RV semi-amplitude $K_{\star}$, the transit duration $T$, and the ingress/egress duration $\tau$. However, the planet mass and density depend on all these quantities, plus $R_{\star}$. Thus, an inference about the planet mass, radius, and density must rely upon an external constraint such as the stellar radius. For bright stars, stellar radii can now be measured nearly empirically by using measurements of the stellar bolometric flux, the effective temperature, and the distance to the star via its parallax, with the extinction $A_V$ being the only free parameter. For any given system, there is a hierarchy of achievable precisions on the planetary parameters, such that the planetary surface gravity is more accurately measured than the density, which in turn is more accurately measured than the mass. We find that surface gravity provides a strong constraint on the core mass fraction of terrestrial planets. This is useful, given that the surface gravity may be one of the best measured properties of a terrestrial planet. ","Analytic Estimates of the Achievable Precision on the Physical
Properties of Transiting Planets Using Purely Empirical Measurements",10,"['Excited to share this new paper! Here we present “Analytic estimates of the achievable precision on the properties of transiting exoplanet using purely empirical measurements”\n\n<LINK>', 'We derived analytic approximations of the mass, density, and surface gravities of a transiting exoplanet in terms of direct transit and radial velocity observables, like the period, transit duration, ingress/egress time, transit depth, and RV semi-amplitude.', 'We find a general hierarchy in the precision with which the planetary mass, radius, density and surface can be measured, namely, the surface gravity is the best measured property, followed by the density and the mass.', 'We tested our analytic approximations with a few confirmed exoplanets from the literature and find generally a very good agreement between the uncertainties in the properties reported by the papers and the ones that we derive using our analytic approximations. https://t.co/y8M5cZUWG4', 'We also explored the connection between the surface gravity, which we believe to be an important diagnostic of the habitability of a rocky exoplanet, and the core mass fraction.', 'We measured the core-mass fraction of the super-Mercury candidate K2-229b as predicted from its mass and radius (56%) and the core-mass fraction as expected from the refractory abundance of its host star (29%).', 'The mass-radius ellipses show the 1sigma and 2sigma uncertainties in its mass and radius when the mass and radius are assumed to be uncorrelated (red) and when they’re correlated via the additional constraint of the surface gravity (black). https://t.co/l9VXH3w8A1', 'The additional constraint of the surface gravity reduces the uncertainty in the core mass fraction of K2-229b and we therefore conclude that surface gravity is a good proxy for the core mass fraction of a terrestrial exoplanet', '@j_tharindu Thank you!! :D', '@astroxicana Gracias Lupita!! &lt;3']",21,01,1854
3688,82,1252806732163584000,347828252,Yasser Shoukry,"[1/3] Very excited about our new results on architectural assurance for Neural Networks when used in control applications <LINK>. In a new paper with my postdoc @JamesFerlez and Ph.D. student Xiaowu Sun, we extended our previous results on Two-Level Lattice NNs [2/3] to design “assured” architectures for neural networks when the underlying system evolves according to some nonlinear dynamics. Whereas current techniques are based on hand-picked architectures or heuristic-based search to find such architectures, [3/3] our approach exploits the given nonlinear model of the system to find the NN architecture “without” having access to training data. We provide a guarantee that the resulting NN architecture is sufficient to implement a controller that satisfies an achievable specification.",https://arxiv.org/abs/2004.09628,"In this paper, we consider the problem of automatically designing a Rectified Linear Unit (ReLU) Neural Network (NN) architecture (number of layers and number of neurons per layer) with the guarantee that it is sufficiently parametrized to control a nonlinear system. Whereas current state-of-the-art techniques are based on hand-picked architectures or heuristic based search to find such NN architectures, our approach exploits the given model of the system to design an architecture; as a result, we provide a guarantee that the resulting NN architecture is sufficient to implement a controller that satisfies an achievable specification. Our approach exploits two basic ideas. First, assuming that the system can be controlled by an unknown Lipschitz-continuous state-feedback controller with some Lipschitz constant upper-bounded by $K_\text{cont}$, we bound the number of affine functions needed to construct a Continuous Piecewise Affine (CPWA) function that can approximate the unknown Lipschitz-continuous controller. Second, we utilize the authors' recent results on a novel NN architecture named the Two-Level Lattice (TLL) NN architecture, which was shown to be capable of implementing any CPWA function just from the knowledge of the number of affine functions that comprise this CPWA function. ","Two-Level Lattice Neural Network Architectures for Control of Nonlinear
Systems",3,"['[1/3] Very excited about our new results on architectural assurance for Neural Networks when used in control applications <LINK>. In a new paper with my postdoc @JamesFerlez and Ph.D. student Xiaowu Sun, we extended our previous results on Two-Level Lattice NNs', '[2/3] to design “assured” architectures for neural networks when the underlying system evolves according to some nonlinear dynamics. Whereas current techniques are based on hand-picked architectures or heuristic-based search to find such architectures,', '[3/3] our approach exploits the given nonlinear model of the system to find the NN architecture “without” having access to training data. We provide a guarantee that the resulting NN architecture is sufficient to implement a controller that satisfies an achievable specification.']",20,04,794
3689,29,1453348747778433025,325194378,Antonio Valerio Miceli Barone,"Neural language models can accurately model text by exploiting long-distance correlations that however tend to be fragile w.r.t. distribution shift. In this new paper with @alexandrabirch1 and @RicoSennrich, we attempt to fix them: <LINK> The strongest correlations in a text occur at a short distance. N-gram models can only consider these short-distance correlations, which makes them relatively robust at the cost of having a generally poor predictive accuracy. Modern LMs achieve much better accuracy by exploiting weak, long-distance ""shortcut"" correlations, but this makes them fragile out-of-distribution. We want to automatically adapt and use more or less long-distance correlations depending on how in-distribution or OOD the data is. In an RNNLM we can achieve this by doing OOD detection and manipulating the state: we initialize the RNN with the all-zero state, which corresponds to high entropy of the output distribution, since at the beginning many tokens are possible <LINK> At each step we update the RNN (a GRU) with the observed token as usual, then we estimate how much OOD it is by applying the Random Network Distillation method by Burda et al. <LINK> The more OOD the state is, the more we scale it towards zero. This both increases the output distribution entropy (the model becomes less certain about the next token) and purges the state from old, distracting information (older tokens are exponentially decaying in the state). We obtain perplexity improvements in out-of-domain test sets while maintaining perplexity in in-distribution test sets.",https://arxiv.org/abs/2110.13229,"Neural machine learning models can successfully model language that is similar to their training distribution, but they are highly susceptible to degradation under distribution shift, which occurs in many practical applications when processing out-of-domain (OOD) text. This has been attributed to ""shortcut learning"": relying on weak correlations over arbitrarily large contexts. We propose a method based on OOD detection with Random Network Distillation to allow an autoregressive language model to automatically disregard OOD context during inference, smoothly transitioning towards a less expressive but more robust model as the data becomes more OOD while retaining its full context capability when operating in-distribution. We apply our method to a GRU architecture, demonstrating improvements on multiple language modeling (LM) datasets. ","Distributionally Robust Recurrent Decoders with Random Network
Distillation",7,"['Neural language models can accurately model text by exploiting long-distance correlations that however tend to be fragile w.r.t. distribution shift.\n\nIn this new paper with @alexandrabirch1 and @RicoSennrich, we attempt to fix them:\n<LINK>', 'The strongest correlations in a text occur at a short distance. N-gram models can only consider these short-distance correlations, which makes them relatively robust at the cost of having a generally poor predictive accuracy.', 'Modern LMs achieve much better accuracy by exploiting weak, long-distance ""shortcut"" correlations, but this makes them fragile out-of-distribution.\nWe want to automatically adapt and use more or less long-distance correlations depending on how in-distribution or OOD the data is.', 'In an RNNLM we can achieve this by doing OOD detection and manipulating the state:\nwe initialize the RNN with the all-zero state, which corresponds to high entropy of the output distribution, since at the beginning many tokens are possible https://t.co/vOgQJ63m8c', 'At each step we update the RNN (a GRU) with the observed token as usual, then we estimate how much OOD it is by applying the Random Network Distillation method by Burda et al. https://t.co/x0WMKVfwnj', 'The more OOD the state is, the more we scale it towards zero. This both increases the output distribution entropy (the model becomes less certain about the next token) and purges the state from old, distracting information (older tokens are exponentially decaying in the state).', 'We obtain perplexity improvements in out-of-domain test sets while maintaining perplexity in in-distribution test sets.']",21,10,1573
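A minimal sketch of the mechanism described in the thread above, in PyTorch: the GRU state is updated as usual, a Random Network Distillation pair (a frozen random target network and a predictor trained to imitate it on in-distribution data) scores how novel the updated state looks, and the state is shrunk toward the all-zero, high-entropy initial state as that score grows. The module names, layer sizes, and the mapping from the RND error to a scaling factor are illustrative assumptions, not the paper's exact recipe.

import torch
import torch.nn as nn

class RobustGRUStep(nn.Module):
    """One decoding step of a GRU language model whose hidden state is scaled
    toward zero when it looks out-of-distribution under an RND detector."""

    def __init__(self, vocab_size=10000, emb=256, hid=512, rnd_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.cell = nn.GRUCell(emb, hid)
        self.out = nn.Linear(hid, vocab_size)
        # RND pair: the target is random and frozen; the predictor is trained
        # elsewhere (on in-distribution states) to imitate it, so its error
        # grows on out-of-distribution states.
        self.target = nn.Sequential(nn.Linear(hid, rnd_dim), nn.ReLU(), nn.Linear(rnd_dim, rnd_dim))
        self.predictor = nn.Sequential(nn.Linear(hid, rnd_dim), nn.ReLU(), nn.Linear(rnd_dim, rnd_dim))
        for p in self.target.parameters():
            p.requires_grad_(False)
        self.alpha, self.beta = 1.0, 5.0  # hypothetical calibration constants

    def forward(self, token, h):
        h = self.cell(self.embed(token), h)  # usual GRU update with the observed token
        novelty = (self.predictor(h) - self.target(h)).pow(2).mean(dim=-1, keepdim=True)
        scale = torch.sigmoid(self.beta - self.alpha * novelty)  # near 1 in-distribution, toward 0 when OOD
        h = scale * h  # shrink toward the all-zero (high-entropy) state
        return self.out(h), h

step = RobustGRUStep()
h = torch.zeros(1, 512)  # all-zero initial state
logits, h = step(torch.tensor([42]), h)
print(logits.shape, h.shape)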
3690,58,1483750761871941634,725183149769105411,Ryan Mann,"New paper with Tyler Helmuth: Efficient Algorithms for Approximating Quantum Partition Functions at Low Temperature <LINK> <LINK> @AlhambraAlvaro Thanks! We assume that we know the ground states of the classical Hamiltonian, but Peierls' condition helps too.",https://arxiv.org/abs/2201.06533,"We establish an efficient approximation algorithm for the partition functions of a class of quantum spin systems at low temperature, which can be viewed as stable quantum perturbations of classical spin systems. Our algorithm is based on combining the contour representation of quantum spin systems of this type due to Borgs, Koteck\'y, and Ueltschi with the algorithmic framework developed by Helmuth, Perkins, and Regts, and Borgs et al. ","Efficient Algorithms for Approximating Quantum Partition Functions at
Low Temperature",2,"['New paper with Tyler Helmuth: Efficient Algorithms for Approximating Quantum Partition Functions at Low Temperature <LINK> <LINK>', ""@AlhambraAlvaro Thanks! We assume that we know the ground states of the classical Hamiltonian, but Peierls' condition helps too.""]",22,01,258
3691,119,1283072624701181955,915661066255962113,Jim Winkens,"New paper! Joint contrastive and supervised training improves OOD detection performance on the challenging near OOD setting by obtaining a rich and task-agnostic feature space. <LINK> Thread. <LINK> Supervised training for multiclass classification does not produce representations beyond the minimum necessary to classify. Contrastive training instead incentivizes the model to learn features that discriminate between all dataset images, essential for reliable OOD detection. We pretrain with a SimCLR objective, followed by fine-tuning with a joint SimCLR and supervised loss. We use a standard Mahalanobis OOD detector acting on the penultimate layer and note that label smoothing establishes tighter class clusters -&gt; crucial for this detector. <LINK> We propose a measure of how far OOD a test sample is, called CLP. We show that joint training improves performance especially in the near OOD regime where we report a new SOTA. This regime is critical in the medical domain where fine-grained outliers are commonplace. <LINK> Our method is competitive with SOTA methods across the CLP spectrum in various settings, and unlike leading methods does not require additional OOD data for training or tuning. <LINK> Super fun collaboration with @BunelR, @abzz4ssj, Robert Stanforth, @vivnat, @joe_ledsam, @patmacwilliams, @pushmeet, @alan_karthi, @saakohl, @TaylanCemgilML and @ORonneberger.",http://arxiv.org/abs/2007.05566,"Reliable detection of out-of-distribution (OOD) inputs is increasingly understood to be a precondition for deployment of machine learning systems. This paper proposes and investigates the use of contrastive training to boost OOD detection performance. Unlike leading methods for OOD detection, our approach does not require access to examples labeled explicitly as OOD, which can be difficult to collect in practice. We show in extensive experiments that contrastive training significantly helps OOD detection performance on a number of common benchmarks. By introducing and employing the Confusion Log Probability (CLP) score, which quantifies the difficulty of the OOD detection task by capturing the similarity of inlier and outlier datasets, we show that our method especially improves performance in the `near OOD' classes -- a particularly challenging setting for previous methods. ",Contrastive Training for Improved Out-of-Distribution Detection,6,"['New paper! Joint contrastive and supervised training improves OOD detection performance on the challenging near OOD setting by obtaining a rich and task-agnostic feature space.\n\n<LINK>\n\nThread. <LINK>', 'Supervised training for multiclass classification does not produce representations beyond the minimum necessary to classify. Contrastive training instead incentivizes the model to learn features that discriminate between all dataset images, essential for reliable OOD detection.', 'We pretrain with a SimCLR objective, followed by fine-tuning with a joint SimCLR and supervised loss. We use a standard Mahalanobis OOD detector acting on the penultimate layer and note that label smoothing establishes tighter class clusters -&gt; crucial for this detector. https://t.co/YcxlDvZTpd', 'We propose a measure of how far OOD a test sample is, called CLP. We show that joint training improves performance especially in the near OOD regime where we report a new SOTA. This regime is critical in the medical domain where fine-grained outliers are commonplace. 
https://t.co/y4BDgp9wlH', 'Our method is competitive with SOTA methods across the CLP spectrum in various settings, and unlike leading methods does not require additional OOD data for training or tuning. https://t.co/zZmy4S3Nq2', 'Super fun collaboration with @BunelR, @abzz4ssj, Robert Stanforth, @vivnat, @joe_ledsam, @patmacwilliams, @pushmeet, @alan_karthi, @saakohl, @TaylanCemgilML and @ORonneberger.']",20,07,1394
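A minimal sketch of the kind of Mahalanobis out-of-distribution detector mentioned in the thread above: fit one mean per class and a shared covariance on in-distribution penultimate-layer features, then score a test feature by its minimum Mahalanobis distance to any class mean (larger means more likely OOD). The helper names and the random stand-in features are ours; in practice the inputs would be the network's penultimate-layer activations.

import numpy as np

def fit_mahalanobis(train_feats, train_labels):
    """Per-class means plus a tied covariance estimated on in-distribution features."""
    classes = np.unique(train_labels)
    means = {c: train_feats[train_labels == c].mean(axis=0) for c in classes}
    centered = np.concatenate([train_feats[train_labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(train_feats.shape[1])  # small ridge for stability
    return means, np.linalg.inv(cov)

def ood_score(feat, means, cov_inv):
    """Minimum squared Mahalanobis distance to any class mean; larger means more OOD."""
    return min((feat - mu) @ cov_inv @ (feat - mu) for mu in means.values())

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 64))           # stand-in for penultimate-layer features
labels = rng.integers(0, 10, size=500)
means, cov_inv = fit_mahalanobis(feats, labels)
print(ood_score(rng.normal(size=64), means, cov_inv))            # in-distribution-like point
print(ood_score(rng.normal(loc=5.0, size=64), means, cov_inv))   # shifted point scores far higher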
3692,34,1110376836880654336,837133583558987776,Colin Raffel,"New work w/ @yaoqinucsd, Nicholas Carlini, @goodfellow_ian, and Gary Cottrell on generating imperceptible, robust, and targeted adversarial examples for speech recognition systems! Paper: <LINK> Audio samples: <LINK> @YaoQinUCSD did this awesome work during her internship at Brain last summer. Super excited that she will be returning for an internship this coming summer too!",https://arxiv.org/abs/1903.10346,"Adversarial examples are inputs to machine learning models designed by an adversary to cause an incorrect output. So far, adversarial examples have been studied most extensively in the image domain. In this domain, adversarial examples can be constructed by imperceptibly modifying images to cause misclassification, and are practical in the physical world. In contrast, current targeted adversarial examples applied to speech recognition systems have neither of these properties: humans can easily identify the adversarial perturbations, and they are not effective when played over-the-air. This paper makes advances on both of these fronts. First, we develop effectively imperceptible audio adversarial examples (verified through a human study) by leveraging the psychoacoustic principle of auditory masking, while retaining 100% targeted success rate on arbitrary full-sentence targets. Next, we make progress towards physical-world over-the-air audio adversarial examples by constructing perturbations which remain effective even after applying realistic simulated environmental distortions. ","Imperceptible, Robust, and Targeted Adversarial Examples for Automatic