Speech Recognition",2,"['New work w/ @yaoqinucsd, Nicholas Carlini, @goodfellow_ian, and Gary Cottrell on generating imperceptible, robust, and targeted adversarial examples for speech recognition systems! \nPaper: <LINK>\nAudio samples: <LINK>', '@YaoQinUCSD did this awesome work during her internship at Brain last summer. Super excited that she will be returning for an internship this coming summer too!']",19,03,377 |
3693,151,1359133014526205952,597726901,Hannah Rose Kirk,"Is #AI generated text biased towards protected groups? In a new @OxfordAI paper, we look at occupational biases for gender intersectionality with religion, sexuality, political affiliation...Why does #GPT-2 think women are maids not computer programmers? <LINK> <LINK> Shout out to my co-authors @YennieJun, @y_m_asano, @EliasBenussi, @shaideriqbal, @frdreyer, Aleksander Shtedritski and Filippo Volpin!",https://arxiv.org/abs/2102.04130,"The capabilities of natural language models trained on large-scale data have increased immensely over the past few years. Open source libraries such as HuggingFace have made these models easily available and accessible. While prior research has identified biases in large language models, this paper considers biases contained in the most popular versions of these models when applied `out-of-the-box' for downstream tasks. We focus on generative language models as they are well-suited for extracting biases inherited from training data. Specifically, we conduct an in-depth analysis of GPT-2, which is the most downloaded text generation model on HuggingFace, with over half a million downloads per month. We assess biases related to occupational associations for different protected categories by intersecting gender with religion, sexuality, ethnicity, political affiliation, and continental name origin. Using a template-based data collection pipeline, we collect 396K sentence completions made by GPT-2 and find: (i) The machine-predicted jobs are less diverse and more stereotypical for women than for men, especially for intersections; (ii) Intersectional interactions are highly relevant for occupational associations, which we quantify by fitting 262 logistic models; (iii) For most occupations, GPT-2 reflects the skewed gender and ethnicity distribution found in US Labor Bureau data, and even pulls the societally-skewed distribution towards gender parity in cases where its predictions deviate from real labor market observations. This raises the normative question of what language models should learn - whether they should reflect or correct for existing inequalities. ","Bias Out-of-the-Box: An Empirical Analysis of Intersectional |
Occupational Biases in Popular Generative Language Models",2,"['Is #AI generated text biased towards protected groups? In a new @OxfordAI paper, we look at occupational biases for gender intersectionality with religion, sexuality, political affiliation...Why does #GPT-2 think women are maids not computer programmers?\n<LINK> <LINK>', 'Shout out to my co-authors @YennieJun, @y_m_asano, @EliasBenussi, @shaideriqbal, @frdreyer, Aleksander Shtedritski and Filippo Volpin!']",21,02,405 |
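The row above describes a template-based pipeline that collects GPT-2 sentence completions and tallies occupational associations. Below is a minimal sketch of that idea using the HuggingFace `transformers` library; the template string and occupation list are illustrative placeholders, not the paper's actual prompts or lexicon.

```python
from collections import Counter
from transformers import pipeline, set_seed

set_seed(0)
generator = pipeline("text-generation", model="gpt2")

template = "The woman worked as a"  # hypothetical template, not the paper's exact one
occupations = ["maid", "nurse", "teacher", "programmer", "engineer"]  # illustrative list

# Sample many short completions of the template.
outputs = generator(template, max_new_tokens=8, num_return_sequences=50,
                    do_sample=True, pad_token_id=50256)

# Count which occupation words appear in the generated continuations.
counts = Counter()
for out in outputs:
    completion = out["generated_text"][len(template):].lower()
    for job in occupations:
        if job in completion:
            counts[job] += 1

print(counts.most_common())
```

Repeating this over templates that intersect gender with religion, sexuality, etc., is the kind of 396K-completion sweep the abstract describes.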
3694,128,1152264272191377414,2302304521,Dr Andra Stroe 🏳️‍🌈🇷🇴,"Our paper on APEX observations of galaxies in the Antlia cluster is accepted to ApJ and on the Arxiv today! We find that star forming galaxies in this disturbed galaxy cluster have large molecular gas reservoirs! Great work by my student @cairns_jd ! <LINK> Fingers crossed for our @almaobs ACA proposal! If accepted, we will be able to study these interesting galaxies at high spatial resolution! Thanks to @ESO and @robivison for approving the summer student funding which made all of this possible!",https://arxiv.org/abs/1907.07691v1,"We present CO(2-1) observations of 72 galaxies in the nearby, disturbed Antlia galaxy cluster with the Atacama Pathfinder Experiment (APEX) telescope. The galaxies in our sample are selected to span a wide range of stellar masses ($10^{8}M_{\odot}\lesssim M_{\star} \lesssim 10^{10}M_{\odot}$) and star formation rates ($0.0005M_{\odot}\text{yr}^{-1}<\text{SFR}<0.3M_{\odot}\text{yr}^{-1}$). Reaching a depth of $23\text{mJy}$ in $50\text{km}\text{s}^{-1}$ channels, we report a total CO detection rate of $37.5\%$ and a CO detection rate of $86\%$ for sources within 1 dex of the main sequence. We compare our sample with a similar sample of galaxies in the field, finding that, for a fixed stellar mass and SFR, galaxies in the Antlia cluster have comparable molecular gas reservoirs to field galaxies. We find that $\sim41\%$ (11/27) of our CO detections display non-Gaussian CO(2-1) emission line profiles, and a number of these sources display evidence of quenching in their optical images. We also find that the majority of our sample lie either just below, or far below the main sequence of field galaxies, further hinting at potential ongoing quenching. We conclude that the Antlia cluster represents an intermediate environment between fields and dense clusters, where the gentler intracluster medium (ICM) allows the cluster members to retain their reservoirs of molecular gas, but in which the disturbed ICM is just beginning to influence the member galaxies, resulting in high SFRs and possible ongoing quenching. ",Large Molecular Gas Reservoirs in Star Forming Cluster Galaxies,3,"['Our paper on APEX observations of galaxies in the Antlia cluster is accepted to ApJ and on the Arxiv today! We find that star forming galaxies in this disturbed galaxy cluster have large molecular gas reservoirs! Great work by my student @cairns_jd ! <LINK>', 'Fingers crossed for our @almaobs ACA proposal! If accepted, we will be able to study these interesting galaxies at high spatial resolution!', 'Thanks to @ESO and @robivison for approving the summer student funding which made all of this possible!']",19,07,501 |
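As a rough companion to the detection rates quoted above: the standard conversion from an integrated line flux to CO line luminosity (Solomon & Vanden Bout 2005) shows how the quoted 23 mJy per 50 km/s depth translates into a molecular gas sensitivity. The line width, redshift, distance, alpha_CO, and r21 values below are assumptions for illustration, not the paper's.

```python
import numpy as np

def co_line_luminosity(sdv_jy_kms, nu_obs_ghz, dl_mpc, z):
    """L'_CO in K km/s pc^2 (Solomon & Vanden Bout 2005 convention)."""
    return 3.25e7 * sdv_jy_kms * nu_obs_ghz**-2 * dl_mpc**2 * (1 + z)**-3

# Illustrative 3-sigma upper limit for an undetected line:
rms_jy = 0.023        # 23 mJy in 50 km/s channels (from the paper)
dv_chan = 50.0        # km/s
line_width = 200.0    # km/s, assumed intrinsic width
n_chan = line_width / dv_chan
sdv_limit = 3 * rms_jy * dv_chan * np.sqrt(n_chan)  # Jy km/s

nu_rest_co21 = 230.538  # GHz, CO(2-1) rest frequency
z = 0.0087              # assumed cluster redshift
dl = 37.0               # Mpc, assumed luminosity distance

l_co21 = co_line_luminosity(sdv_limit, nu_rest_co21 / (1 + z), dl, z)
m_h2 = 4.3 * (l_co21 / 0.7)  # alpha_CO = 4.3, r21 = 0.7 (both assumptions)
print(f"3-sigma M_H2 limit ~ {m_h2:.2e} Msun")
```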
3695,28,1432340162806116354,889824297409277952,Nikos Kolotouros,"Check out our new work ""Probabilistic Modeling for Human Mesh Recovery"" with @geopavlakos @dineshjayaraman and @KostasPenn that will be presented at #ICCV21 ! Paper: <LINK> Project page: <LINK> Code: <LINK> <LINK> 3D reconstruction from 2D evidence is inherently ambiguous. In this work we embrace the reconstruction ambiguity and recast the problem as learning a mapping from the input to a distribution of plausible 3D poses. <LINK> Our method is not limited to learning a conditional generative model of 3D poses. We show that our learned distribution is useful for a variety of **downstream tasks**, such as body model fitting or reconstruction from multiple views. <LINK> If you want to test our method on random videos from YouTube please check the Colab notebook that we prepared: <LINK>",http://arxiv.org/abs/2108.11944,"This paper focuses on the problem of 3D human reconstruction from 2D evidence. Although this is an inherently ambiguous problem, the majority of recent works avoid the uncertainty modeling and typically regress a single estimate for a given input. In contrast to that, in this work, we propose to embrace the reconstruction ambiguity and we recast the problem as learning a mapping from the input to a distribution of plausible 3D poses. Our approach is based on the normalizing flows model and offers a series of advantages. For conventional applications, where a single 3D estimate is required, our formulation allows for efficient mode computation. Using the mode leads to performance that is comparable with the state of the art among deterministic unimodal regression models. Simultaneously, since we have access to the likelihood of each sample, we demonstrate that our model is useful in a series of downstream tasks, where we leverage the probabilistic nature of the prediction as a tool for more accurate estimation. These tasks include reconstruction from multiple uncalibrated views, as well as human model fitting, where our model acts as a powerful image-based prior for mesh recovery. Our results validate the importance of probabilistic modeling, and indicate state-of-the-art performance across a variety of settings. Code and models are available at: this https URL ",Probabilistic Modeling for Human Mesh Recovery,4,"['Check out our new work ""Probabilistic Modeling for Human Mesh Recovery"" with @geopavlakos @dineshjayaraman and @KostasPenn that will be presented at #ICCV21 !\nPaper: <LINK>\nProject page: <LINK>\nCode:\n<LINK> <LINK>', '3D reconstruction from 2D evidence is inherently ambiguous. In this work we embrace the reconstruction ambiguity and recast the problem as learning a mapping from the input to a distribution of plausible 3D poses. https://t.co/TI0Pe66Zh3', 'Our method is not limited to learning a conditional generative model of 3D poses. We show that our learned distribution is useful for a variety of **downstream tasks**, such as body model fitting or reconstruction from multiple views. https://t.co/cHfkyJZVjC', 'If you want to test our method on random videos from YouTube please check the Colab notebook that we prepared:\nhttps://t.co/l7hHvJeoi6']",21,08,793 |
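The abstract above recasts mesh recovery as learning a conditional distribution over 3D poses with a normalizing flow. The sketch below is a single conditional affine layer in PyTorch, a deliberately minimal stand-in for the deeper flow the paper trains; it still exposes the two properties highlighted above: exact log-likelihoods via the change of variables, and a cheap mode (here simply `mu(cond)`).

```python
import torch
import torch.nn as nn

class ConditionalAffineFlow(nn.Module):
    """One conditional affine layer: pose = mu(cond) + exp(s(cond)) * z,
    z ~ N(0, I). A toy stand-in for the deeper flow used in the paper."""
    def __init__(self, cond_dim, pose_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(cond_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * pose_dim))
        self.pose_dim = pose_dim

    def forward(self, cond):
        mu, log_s = self.net(cond).chunk(2, dim=-1)
        return mu, log_s  # mu is also the mode of the distribution

    def log_prob(self, pose, cond):
        mu, log_s = self(cond)
        z = (pose - mu) * torch.exp(-log_s)
        base = -0.5 * (z**2 + torch.log(torch.tensor(2 * torch.pi))).sum(-1)
        return base - log_s.sum(-1)  # change-of-variables correction

    def sample(self, cond, n):
        mu, log_s = self(cond)
        z = torch.randn(n, self.pose_dim)
        return mu + torch.exp(log_s) * z
```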
3696,111,1478632300917280769,1295482230,John Antoniadis (Γιάννης),"Very excited for my student’s @astrosaba first, first author paper! He proposes a new type of explosions related to stars with ONeMg cores. He shows that some of these stars may end their life in a Ia-like SN instead of forming a NS (via an ECSN) <LINK> <LINK>",https://arxiv.org/abs/2201.00871,"(abridged) When stripped from their hydrogen-rich envelopes, stars with initial masses between $\sim$7 and 11 M$_\odot$ develop massive degenerate cores and collapse. Depending on the final structure and composition, the outcome can range from a thermonuclear explosion, to the formation of a neutron star in an electron-capture supernova (ECSN). It has been recently demonstrated that stars in this mass range may initiate explosive oxygen burning when their central densities are still below $\rho_{\rm c} \lesssim 10^{9.6}$ g cm$^{-3}$. This makes them interesting candidates for Type Ia Supernovae -- which we call (C)ONe SNe Ia -- and might have broader implications for the formation of neutron stars via ECSNe. Here, we model the evolution of 252 helium-stars with initial masses in the $0.8-3.5$ M$_\odot$ range, and metallicities between $Z=10^{-4}$ and $0.02$. We use these models to constrain the central densities, compositions and envelope masses at the time of explosive oxygen ignition. We further investigate the sensitivity of these properties to mass-loss rate assumptions using additional models with varying wind efficiencies. We find that helium-stars with masses between $\sim$1.8 and 2.7 M$_\odot$ evolve onto $1.35-1.37$ M$_\odot$ (C)ONe cores that initiate explosive burning at central densities between $\rm \log_{10}(\rho_c)\sim 9.3$ and 9.6. We constrain the amount of residual carbon retained after core carbon burning, and conclude that it plays a critical role in determining the final outcome: Cores with residual carbon mass fractions of $X_{\rm min}(\rm{{^{12}}C}) \gtrsim 0.004$ result in (C)ONe SNe Ia, while those with lower carbon mass fractions become ECSNe. We find that (C)ONe SNe Ia are more likely to occur at high metallicities, whereas at low metallicities ECSNe dominate. ","Thermonuclear and Electron-Capture Supernovae from Stripped-Envelope |
Stars",1,"['Very excited for my student’s @astrosaba first, first author paper! He proposes a new type of explosions related to stars with ONeMg cores. He shows that some of these stars may end their life in a Ia-like SN instead of forming a NS (via an ECSN)\n<LINK> <LINK>']",22,01,262 |
3697,9,1023934181469024256,1339581511,Ben Rackham,"New paper on the atmosphere of the hot Jupiter WASP-19b, led by @nespinozap, on arXiv today: <LINK> Retrievals considering both planetary and stellar spectral features offer a promising path forward for disentangling their contributions to transmission spectra. <LINK>",https://arxiv.org/abs/1807.10652,"The short period ($0.94$-day) transiting exoplanet WASP-19b is an exceptional target for transmission spectroscopy studies, due to its relatively large atmospheric scale-height ($\sim 500$ km) and equilibrium temperature ($\sim 2100$ K). Here we report on six precise spectroscopic Magellan/IMACS observations, five of which target the full optical window from $0.45-0.9\mu$m and one targeting the $0.4-0.55\mu$m blue-optical range. Five of these datasets are consistent with a transmission spectrum without any significant spectral features, while one shows a significant slope as a function of wavelength, which we interpret as arising from photospheric heterogeneities in the star. Coupled with HST/WFC3 infrared observations, our optical/near-infrared measurements point to the presence of high altitude clouds in WASP-19b's atmosphere in agreement with previous studies. Using a semi-analytical retrieval approach, considering both planetary and stellar spectral features, we find a water abundance consistent with solar for WASP-19b and strong evidence for sub-solar abundances for optical absorbers such as TiO and Na; no strong optical slope is detected, which suggests that if hazes are present, they are much weaker than previously suggested. In addition, two spot-crossing events are observed in our datasets and analyzed, including one of the first unambiguously detected bright spot-crossing events on an exoplanet host star. ","ACCESS: A featureless optical transmission spectrum for WASP-19b from |
Magellan/IMACS",1,"['New paper on the atmosphere of the hot Jupiter WASP-19b, led by @nespinozap, on arXiv today: <LINK>\n\nRetrievals considering both planetary and stellar spectral features offer a promising path forward for disentangling their contributions to transmission spectra. <LINK>']",18,07,268 |
3698,141,1186445773807992832,247800333,Ahmad طه,"New paper accepted for publication on dynamic state estimation (DSE) in power systems for nonlinear machine models *with performance guarantees*: <LINK> This paper deserves a blogpost but a Twitter thread should do it. DSE is really an old research problem---not only in power systems. There are tens of methods in the literature based on the Kalman filter and its stochastic DSE variants. This work deviates from the literature's approach and follows a new, simpler one. The problem is difficult due to: i) non-Gaussian noise of sensors (PMUs); ii) the nonlinearity in both process and measurement models iii) need to perform DSE so quickly to keep up with PMU's sampling rate. In this work, we show how simple, highly-efficient, and robust nonlinear DSE, based on 100+ year old Lyapunov theory & new convex relaxations, can be designed to guarantee upper and lower bounds on the state estimation error norm--the difference between system & estimated states. We also compare our approach to three mainstream methods in the literature of power systems DSE and showcase that the theoretical bounds aren't only theoretically useful, but they all hold under various scenarios. This paper took so much of my student's energy (and mine too) over the summer, and I'm so happy that it's finally accepted for publication. It's a shame Sebastian isn't on Twitter. This man deserves so much good. @theEnergyMads Thanks Mads. By the way, this same approach in the paper can be extended to control for nonlinear dynamic models, without resorting to any linearization.",https://arxiv.org/abs/1910.09487,"A robust observer for performing power system dynamic state estimation (DSE) of a synchronous generator is proposed. The observer is developed using the concept of $\mathcal{L}_{\infty}$ stability for uncertain, nonlinear dynamic generator models. We use this concept to (i) design a simple, scalable, and robust dynamic state estimator and (ii) obtain a performance guarantee on the state estimation error norm relative to the magnitude of uncertainty from unknown generator inputs, and process and measurement noises. Theoretical methods to obtain upper and lower bounds on the estimation error are also provided. Numerical tests validate the performance of the $\mathcal{L}_{\infty}$-based estimator in performing DSE under various scenarios. The case studies reveal that the derived theoretical bounds are valid for a variety of case studies and operating conditions, while yielding better performance than existing power system DSE methods. ","Robust Dynamic State Estimation of Synchronous Machines with Asymptotic |
State Estimation Error Performance Guarantees",7,"['New paper accepted for publication on dynamic state estimation (DSE) in power systems for nonlinear machine models *with performance guarantees*: <LINK>\nThis paper deserves a blogpost but a Twitter thread should do it.', ""DSE is really an old research problem---not only in power systems. There are tens of methods in the literature based on the Kalman filter and its stochastic DSE variants. This work deviates from the literature's approach and follows a new, simpler one."", ""The problem is difficult due to: \ni) non-Gaussian noise of sensors (PMUs);\nii) the nonlinearity in both process and measurement models\niii) need to perform DSE so quickly to keep up with PMU's sampling rate."", 'In this work, we show how simple, highly-efficient, and robust nonlinear DSE, based on 100+ year old Lyapunov theory & new convex relaxations, can be designed to guarantee upper and lower bounds on the state estimation error norm--the difference between system & estimated states.', ""We also compare our approach to three mainstream methods in the literature of power systems DSE and showcase that the theoretical bounds aren't only theoretically useful, but they all hold under various scenarios."", ""This paper took so much of my student's energy (and mine too) over the summer, and I'm so happy that it's finally accepted for publication. It's a shame Sebastian isn't on Twitter. This man deserves so much good."", '@theEnergyMads Thanks Mads. By the way, this same approach in the paper can be extended to control for nonlinear dynamic models, without resorting to any linearization.']",19,10,1579 |
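For intuition about the observer being discussed: below is a bare-bones Luenberger-style observer for the classical swing equation of a single generator. The paper's contribution is designing the gain and certifying L-infinity error bounds via convex programs; here the gain is hand-picked and all parameters are illustrative.

```python
import numpy as np

# Classical swing equation: delta' = w, w' = (Pm - Pe*sin(delta) - D*w)/M
M, D, PM, PE = 0.1, 0.05, 0.8, 1.0

def f(x):
    d, w = x
    return np.array([w, (PM - PE * np.sin(d) - D * w) / M])

L = np.array([5.0, 10.0])  # hand-picked gain; the paper designs this rigorously
dt, T = 1e-3, 5.0
x = np.array([0.5, 0.0])   # true state (rotor angle, speed deviation)
xh = np.array([0.0, 0.0])  # observer state

errs = []
for _ in range(int(T / dt)):
    y = x[0] + 1e-3 * np.random.randn()       # noisy rotor-angle measurement
    x = x + dt * f(x)                          # Euler step, true plant
    xh = xh + dt * (f(xh) + L * (y - xh[0]))   # Luenberger-type output injection
    errs.append(np.linalg.norm(x - xh))

print(f"final estimation error norm: {errs[-1]:.4f}")
```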
3699,110,1348909420789575680,1242001549053898752,Marc Aubreville,"We just recently released a new dataset, covering a task that is even more challenging than mitotic figure detection, and also relevant for tumor prognostication: Bi- and multinucleated cells. Received very nice reviews at this year's @BVM_workshop. Paper: <LINK>",https://arxiv.org/abs/2101.01445,"Tumor cells with two nuclei (binucleated cells, BiNC) or more nuclei (multinucleated cells, MuNC) indicate an increased amount of cellular genetic material which is thought to facilitate oncogenesis, tumor progression and treatment resistance. In canine cutaneous mast cell tumors (ccMCT), binucleation and multinucleation are parameters used in cytologic and histologic grading schemes (respectively) which correlate with poor patient outcome. For this study, we created the first open source data-set with 19,983 annotations of BiNC and 1,416 annotations of MuNC in 32 histological whole slide images of ccMCT. Labels were created by a pathologist and an algorithmic-aided labeling approach with expert review of each generated candidate. A state-of-the-art deep learning-based model yielded an $F_1$ score of 0.675 for BiNC and 0.623 for MuNC on 11 test whole slide images. In regions of interest ($2.37 mm^2$) extracted from these test images, 6 pathologists had an object detection performance between 0.270 - 0.526 for BiNC and 0.316 - 0.622 for MuNC, while our model archived an $F_1$ score of 0.667 for BiNC and 0.685 for MuNC. This open dataset can facilitate development of automated image analysis for this task and may thereby help to promote standardization of this facet of histologic tumor prognostication. ","Dataset on Bi- and Multi-Nucleated Tumor Cells in Canine Cutaneous Mast |
Cell Tumors",1,"[""We just recently released a new dataset, covering a task that is even more challenging than mitotic figure detection, and also relevant for tumor prognostication: Bi- and multinucleated cells. Received very nice reviews at this year's @BVM_workshop. Paper: <LINK>""]",21,01,263 |
3700,2,1070499250251931649,214639688,Adina Williams,#SCiL2019 paper out w/ @kelina1124 A. Warstadt @sleepinyourhat #LSA2019 TL;DR Two new CoLA-style arg. structure alternation datasets (LaVA&FAVA)! Our models aren't equally good on all alternations; some info in word embeddings isn't in sentence embeddings <LINK>,https://arxiv.org/abs/1811.10773,"Verbs occur in different syntactic environments, or frames. We investigate whether artificial neural networks encode grammatical distinctions necessary for inferring the idiosyncratic frame-selectional properties of verbs. We introduce five datasets, collectively called FAVA, containing in aggregate nearly 10k sentences labeled for grammatical acceptability, illustrating different verbal argument structure alternations. We then test whether models can distinguish acceptable English verb-frame combinations from unacceptable ones using a sentence embedding alone. For converging evidence, we further construct LaVA, a corresponding word-level dataset, and investigate whether the same syntactic features can be extracted from word embeddings. Our models perform reliable classifications for some verbal alternations but not others, suggesting that while these representations do encode fine-grained lexical information, it is incomplete or can be hard to extract. Further, differences between the word- and sentence-level models show that some information present in word embeddings is not passed on to the down-stream sentence embeddings. ",Verb Argument Structure Alternations in Word and Sentence Embeddings,1,"[""#SCiL2019 paper out w/ @kelina1124 A. Warstadt @sleepinyourhat #LSA2019\nTL;DR Two new CoLA-style arg. structure alternation datasets (LaVA&FAVA)! Our models aren't equally good on all alternations; some info in word embeddings isn't in sentence embeddings\n<LINK>""]",18,11,262 |
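The probing setup described above trains classifiers to predict the acceptability of verb-frame combinations from an embedding alone. A minimal sketch of that experimental scaffold with scikit-learn; the feature matrix here is random placeholder data standing in for real LaVA/FAVA embeddings and labels, so the score will hover at chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder stand-ins for real sentence/word embeddings and
# CoLA-style 0/1 acceptability labels (LaVA/FAVA supply the real ones).
X = rng.normal(size=(200, 300))
y = rng.integers(0, 2, size=200)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean acceptability-classification accuracy: {scores.mean():.2f}")
```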
3701,28,876979418559729665,791834604328001536,Katsuhito Sudoh,"Our new paper is out on arXiv, ""An Empirical Study of Mini-Batch Creation Strategies for Neural Machine Translation"" <LINK> This is a comparative study in different mini-batching in NMT. It suggests an efficiency-oriented mini-batching is not always good... This is a joint work with NAIST people when I was in NTT; the first author is now at NTT :-)",https://arxiv.org/abs/1706.05765,"Training of neural machine translation (NMT) models usually uses mini-batches for efficiency purposes. During the mini-batched training process, it is necessary to pad shorter sentences in a mini-batch to be equal in length to the longest sentence therein for efficient computation. Previous work has noted that sorting the corpus based on the sentence length before making mini-batches reduces the amount of padding and increases the processing speed. However, despite the fact that mini-batch creation is an essential step in NMT training, widely used NMT toolkits implement disparate strategies for doing so, which have not been empirically validated or compared. This work investigates mini-batch creation strategies with experiments over two different datasets. Our results suggest that the choice of a mini-batch creation strategy has a large effect on NMT training and some length-based sorting strategies do not always work well compared with simple shuffling. ","An Empirical Study of Mini-Batch Creation Strategies for Neural Machine |
Translation",3,"['Our new paper is out on arXiv, ""An Empirical Study of Mini-Batch Creation Strategies for Neural Machine Translation""\n<LINK>', 'This is a comparative study in different mini-batching in NMT. It suggests an efficiency-oriented mini-batching is not always good...', 'This is a joint work with NAIST people when I was in NTT; the first author is now at NTT :-)']",17,06,350 |
3702,74,1349427132645089281,66175375,Jason Wang,"A short twitter thread on our new paper on the PDS 70 protoplanets using the GRAVITY interferometer (<LINK>)! This is one of the first papers from our ExoGRAVITY large program to study all the known directly imaged planets with interferometry <LINK> GRAVITY has amazing astrometric precision because we coherently combine light from telescopes separated by 130 meters. With just two epochs, we could already see that the inner planet has to be slightly eccentric while the outer planet is essentially circular. <LINK> Based on dynamical stability arguments, we could also place a ~10 Jupiter mass upper limit on the inner planet. We also find that this configuration is consistent with a 2:1 orbital resonance, but not required for stability. <LINK> GRAVITY also gives us a K-band spectrum of both planets. We ran A LOT of orbit fits (24 per planet), and found that the models with the best support from the data are dust-extincted planetary atmospheres. We firmly rule out blackbodies for both planets. Here's PDS 70 c: <LINK> We also did a super cool experiment to try to spatially resolve the circumplanetary environment of the protoplanets! We achieved sub-au resolution in our GRAVITY observations, but unfortunately did not resolve either planet. Any bright disk would have to be smaller than 0.3 au. <LINK> Forgot to add: this was work done with @ArthurVigan @SylvestreLacour @PlanetaryGao, a bunch of others not on twitter, and the whole GRAVITY team for helping with making these observations happen! @AstroThayne Nope. We basically found in the paper that we see no evidence of circumplanetary emission in the NIR data (maybe extinction though), but that the ALMA emission is totally consistent with cooler dust that is invisible at these shorter wavelengths.",https://arxiv.org/abs/2101.04187,"We present K-band interferometric observations of the PDS 70 protoplanets along with their host star using VLTI/GRAVITY. We obtained K-band spectra and 100 $\mu$as precision astrometry of both PDS 70 b and c in two epochs, as well as spatially resolving the hot inner disk around the star. Rejecting unstable orbits, we found a nonzero eccentricity for PDS 70 b of $0.17 \pm 0.06$, a near-circular orbit for PDS 70 c, and an orbital configuration that is consistent with the planets migrating into a 2:1 mean motion resonance. Enforcing dynamical stability, we obtained a 95% upper limit on the mass of PDS 70 b of 10 $M_\textrm{Jup}$, while the mass of PDS 70 c was unconstrained. The GRAVITY K-band spectra rules out pure blackbody models for the photospheres of both planets. Instead, the models with the most support from the data are planetary atmospheres that are dusty, but the nature of the dust is unclear. Any circumplanetary dust around these planets is not well constrained by the planets' 1-5 $\mu$m spectral energy distributions (SEDs) and requires longer wavelength data to probe with SED analysis. However with VLTI/GRAVITY, we made the first observations of a circumplanetary environment with sub-au spatial resolution, placing an upper limit of 0.3~au on the size of a bright disk around PDS 70 b. ",Constraining the Nature of the PDS 70 Protoplanets with VLTI/GRAVITY,7,"['A short twitter thread on our new paper on the PDS 70 protoplanets using the GRAVITY interferometer (<LINK>)! 
This is one of the first papers from our ExoGRAVITY large program to study all the known directly imaged planets with interferometry <LINK>', 'GRAVITY has amazing astrometric precision because we coherently combine light from telescopes separated by 130 meters. With just two epochs, we could already see that the inner planet has to be slightly eccentric while the outer planet is essentially circular. https://t.co/AHDqQ0RznR', 'Based on dynamical stability arguments, we could also place a ~10 Jupiter mass upper limit on the inner planet. We also find that this configuration is consistent with a 2:1 orbital resonance, but not required for stability. https://t.co/Qdvx2Y2UeU', ""GRAVITY also gives us a K-band spectrum of both planets. We ran A LOT of orbit fits (24 per planet), and found that the models with the best support from the data are dust-extincted planetary atmospheres. We firmly rule out blackbodies for both planets. Here's PDS 70 c: https://t.co/glMUJ7jlOv"", 'We also did a super cool experiment to try to spatially resolve the circumplanetary environment of the protoplanets! We achieved sub-au resolution in our GRAVITY observations, but unfortunately did not resolve either planet. Any bright disk would have to be smaller than 0.3 au. https://t.co/q2stzAHtq7', 'Forgot to add: this was work done with @ArthurVigan @SylvestreLacour @PlanetaryGao, a bunch of others not on twitter, and the whole GRAVITY team for helping with making these observations happen!', '@AstroThayne Nope. We basically found in the paper that we see no evidence of circumplanetary emission in the NIR data (maybe extinction though), but that the ALMA emission is totally consistent with cooler dust that is invisible at these shorter wavelengths.']",21,01,1769 |
3703,137,1369108886494482432,1169073912,Anna Yu,"It's paper day!! Our new paper (w/ @jbprime, @astro_klein, Jonathan Stern, and collaborators) finds this interesting connection between the early time bursty star formation and the formation of the galactic thick disk: <LINK> <LINK> The time when star formation transitions from bursty to steady correlates strongly with the age of the thick disk (both median age and t90) --> The observations of the Milky Way thick disk thus suggest that it probably had this transition from bursty to steady ~6.5Gyr ago <LINK> Big thanks to all of our collaborators: @AndrewWetzel, Xiangcheng Ma, @jorgito__moreno, Zach Hafen, @alexbgurvich, @PFHopkins_Astro, Dušan Kereš, Claude-André Faucher-Giguère, Robert Feldmann, and Eliot Quataert!! This is without doubt team effort!",https://arxiv.org/abs/2103.03888,"We investigate thin and thick stellar disc formation in Milky-Way-mass galaxies using twelve FIRE-2 cosmological zoom-in simulations. All simulated galaxies experience an early period of bursty star formation that transitions to a late-time steady phase of near-constant star formation. Stars formed during the late-time steady phase have more circular orbits and thin-disc-like morphology at $z=0$, whilst stars born during the bursty phase have more radial orbits and thick-disc structure. The median age of thick-disc stars at $z=0$ correlates strongly with this transition time. We also find that galaxies with an earlier transition from bursty to steady star formation have higher thin-disc fractions at $z=0$. Three of our systems have minor mergers with LMC-size satellites during the thin-disc phase. These mergers trigger short starbursts but do not destroy the thin disc nor alter broad trends between the star formation transition time and thin/thick disc properties. If our simulations are representative of the Universe, then stellar archaeological studies of the Milky Way (or M31) provide a window into past star-formation modes in the Galaxy. Current age estimates of the Galactic thick disc would suggest that the Milky Way transitioned from bursty to steady phase $\sim$6.5 Gyr ago; prior to that time the Milky Way likely lacked a recognisable thin disc. ",The bursty origin of the Milky Way thick disc,3,"[""It's paper day!! Our new paper (w/ @jbprime, @astro_klein, Jonathan Stern, and collaborators) finds this interesting connection between the early time bursty star formation and the formation of the galactic thick disk: <LINK> <LINK>"", 'The time when star formation transitions from bursty to steady correlates strongly with the age of the thick disk (both median age and t90) --> The observations of the Milky Way thick disk thus suggest that it probably had this transition from bursty to steady ~6.5Gyr ago https://t.co/kiOOsAWp9d', 'Big thanks to all of our collaborators: @AndrewWetzel, Xiangcheng Ma, @jorgito__moreno, Zach Hafen, @alexbgurvich, @PFHopkins_Astro, Dušan Kereš, Claude-André Faucher-Giguère, Robert Feldmann, and Eliot Quataert!! This is without doubt team effort!']",21,03,766 |
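Two quantities in the thread above, t90 and the bursty-to-steady transition time, can be illustrated on a toy star-formation history. The SFH, window length, and scatter threshold below are all invented for illustration; they are not the paper's definitions.

```python
import numpy as np

t = np.linspace(0, 13.8, 1000)                       # cosmic time in Gyr
sfr = np.where(t < 6, 5 * (1 + np.sin(4 * t)), 3.0)  # toy bursty-then-steady SFH

# t90: time by which 90% of the stellar mass has formed.
mass = np.cumsum(sfr) * (t[1] - t[0])
t90 = t[np.searchsorted(mass, 0.9 * mass[-1])]

# Burstiness proxy: scatter of log SFR in 1 Gyr rolling windows.
win = int(1.0 / (t[1] - t[0]))
scatter = np.array([np.std(np.log10(sfr[i:i + win] + 1e-3))
                    for i in range(len(t) - win)])
t_transition = t[np.argmax(scatter < 0.05)]          # first "steady" window

print(f"t90 = {t90:.2f} Gyr, bursty->steady transition ~ {t_transition:.2f} Gyr")
```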
3704,65,1417592232010469377,1093725312368762886,Caleb Miles,"🚨 New arXiv paper 🚨 <LINK> We address a longstanding problem in testing for a mediated effect. Traditional tests of indirect effects are conservative and underpowered when the effects of exposure on mediator and mediator on outcome are both small-to-moderate. Under ass'ns, the indirect effect is IDed by the product of regression coefs, so the H_0 of no indirect effect is delta_x*delta_y=0. This null gives rise to some unusual behavior. hat{delta}_x*hat{delta}_y doesn't converge uniformly even when their joint convergence is uniform: <LINK> Additionally, the best performing traditional test rejects when the two tests of delta_x=0 and delta_y=0 each reject, but this is underpowered since the non-rejection regions overlap. We instead propose two tests optimizing power wrt minimax and Bayes risk optimality criteria. <LINK> 1st: A minimax optimal test, which achieves its nominal type 1 error alpha exactly everywhere in H_0. Its power is then never < alpha in the alternative space. It may look funny, but we show it's the unique test w this property in a set generated by a certain class of functions. <LINK> 2nd: A Bayes risk optimal test. We adapt the sparse linear programming approach proposed in Rosenblum et al. (<LINK>) to our setting to get an approx. optimal test for the 0-1 loss. Another funny-looking rejection region! <LINK> How do these tests compare in terms of performance? Almost identically as it turns out, but both dominate the joint significance test (the one that rejects both delta_x=0 and delta_y=0), especially for small-to-moderate effect sizes wrt sd. <LINK> Other fun things: ✅ Allowance for exposure-mediator interaction ✅ Nonstandard (& general) def'n of p-values applied to these tests ✅ Large-scale hyp. testing of indirect effects ✅ Tests of products of >2 coefficients based on Latin squares and (maybe?) hypercubes <LINK> Lastly, software! We have an R package implementing the minimax optimal test (Bayes risk test to come). It's a bit of a work in progress, so there may still be some bugs to work out, but we're very excited about its potential for widespread practical use! <LINK>",https://arxiv.org/abs/2107.07575,"The indirect effect of an exposure on an outcome through an intermediate variable can be identified by a product of regression coefficients under certain causal and regression modeling assumptions. Thus, the null hypothesis of no indirect effect is a composite null hypothesis, as the null holds if either regression coefficient is zero. A consequence is that existing hypothesis tests are either severely underpowered near the origin (i.e., when both coefficients are small with respect to standard errors) or do not preserve type 1 error uniformly over the null hypothesis space. We propose hypothesis tests that (i) preserve level alpha type 1 error, (ii) meaningfully improve power when both true underlying effects are small relative to sample size, and (iii) preserve power when at least one is not. One approach gives a closed-form test that is minimax optimal with respect to local power over the alternative parameter space. Another uses sparse linear programming to produce an approximately optimal test for a Bayes risk criterion. We provide an R package that implements the minimax optimal test. ","Optimal tests of the composite null hypothesis arising in mediation |
analysis",8,"['๐จ New arXiv paper ๐จ\n<LINK>\n\nWe address a longstanding problem in testing for a mediated effect. Traditional tests of indirect effects are conservative and underpowered when the effects of exposure on mediator and mediator on outcome are both small-to-moderate.', ""Under ass'ns, the indirect effect is IDed by the product of regression coefs, so the H_0 of no indirect effect is delta_x*delta_y=0. This null gives rise to some unusual behavior. hat{delta}_x*hat{delta}_y doesn't converge uniformly even when their joint convergence is uniform: https://t.co/nFbBFKzCHO"", 'Additionally, the best performing traditional test rejects when the two tests of delta_x=0 and delta_y=0 each reject, but this is underpowered since the non-rejection regions overlap.\n\nWe instead propose two tests optimizing power wrt minimax and Bayes risk optimality criteria. https://t.co/oTmAPSM0Ez', ""1st: A minimax optimal test, which achieves its nominal type 1 error alpha exactly everywhere in H_0. Its power is then never < alpha in the alternative space. It may look funny, but we show it's the unique test w this property in a set generated by a certain class of functions. https://t.co/UBEiDpOn5D"", '2nd: A Bayes risk optimal test. We adapt the sparse linear programming approach proposed in Rosenblum et al. (https://t.co/JtszDztjQ6) to our setting to get an approx. optimal test for the 0-1 loss. Another funny-looking rejection region! https://t.co/1r8jllfFVS', 'How do these tests compare in terms of performance? Almost identically as it turns out, but both dominate the joint significance test (the one that rejects both delta_x=0 and delta_y=0), especially for small-to-moderate effect sizes wrt sd. https://t.co/xbcfLwmxR3', ""Other fun things:\nโ
Allowance for exposure-mediator interaction\nโ
Nonstandard (& general) def'n of p-values applied to these tests\nโ
Large-scale hyp. testing of indirect effects\nโ
Tests of products of >2 coefficients based on Latin squares and (maybe?) hypercubes https://t.co/vxZLJTU2vX"", ""Lastly, software! We have an R package implementing the minimax optimal test (Bayes risk test to come). It's a bit of a work in progress, so there may still be some bugs to work out, but we're very excited about its potential for widespread practical use! https://t.co/UzmswyNPAE""]",21,07,2134 |
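The thread's point about the joint significance test being conservative at the origin is easy to verify by simulation: when both true coefficients are zero, requiring both individual tests to reject yields size roughly alpha squared rather than alpha (which the paper's minimax test instead attains exactly).

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim, alpha, z_crit = 100_000, 0.05, 1.959964

# Under H0 at the origin, both standardized coefficient estimates are ~ N(0, 1).
zx = rng.standard_normal(n_sim)
zy = rng.standard_normal(n_sim)

# Joint significance: reject only if BOTH |z| statistics exceed the critical value.
joint_reject = (np.abs(zx) > z_crit) & (np.abs(zy) > z_crit)
print(f"joint significance test size at origin: {joint_reject.mean():.4f} "
      f"(nominal {alpha}; roughly alpha^2 = {alpha**2})")
```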
3705,187,1453000270397513754,929973145,Dr David Sobral,(Re)Solving Reionization with Lyα: How Bright Lyα Emitters account for the z≈2−8 Cosmic Ionizing Background (<LINK>). We find that a highly ionizing minority of galaxies with MUV<−17 (LAEs) accounts for the entire ionizing budget from star-forming galaxies. <LINK>,https://arxiv.org/abs/2110.11967,"The cosmic ionizing emissivity from star-forming galaxies has long been anchored to UV luminosity functions. Here we introduce an emissivity framework based on Ly$\alpha$ emitters (LAEs), which naturally hones in on the subset of galaxies responsible for the ionizing background due to the intimate connection between the production and escape of Ly$\alpha$ and LyC photons. Using constraints on the escape fractions of bright LAEs ($L_{\rm{Ly\alpha}}>0.2 L^{*}$) at $z\approx2$ obtained from resolved Ly$\alpha$ profiles, and arguing for their redshift-invariance, we show that: (i) quasars and LAEs together reproduce the relatively flat emissivity at $z\approx2-6$, which is non-trivial given the strong evolution in both the star-formation density and quasar number density at these epochs and (ii) LAEs produce late and rapid reionization between $z\approx6-9$ under plausible assumptions. Within this framework, the $>10\times$ rise in the UV population-averaged $f_{\rm{esc}}$ between $z\approx3-7$ naturally arises due to the same phenomena that drive the growing Ly$\alpha$ emitter fraction with redshift. Generally, a LAE dominated emissivity yields a peak in the distribution of the ionizing budget with UV luminosity as reported in latest simulations. Using our adopted parameters ($f_{\rm{esc}}=50\%$, $\xi_{\rm{ion}}=10^{25.9}$ Hz erg$^{-1}$ for half the bright LAEs), a highly ionizing minority of galaxies with $M_{\rm UV}<-17$ accounts for the entire ionizing budget from star-forming galaxies. Rapid flashes of LyC from such rare galaxies produce a ""disco"" ionizing background. We conclude proposing tests to further develop our suggested Ly$\alpha$-anchored formalism. ","(Re)Solving Reionization with Ly{\alpha}: How Bright Ly{\alpha} Emitters |
account for the $z\approx2-8$ Cosmic Ionizing Background",1,['(Re)Solving Reionization with Lyα: How Bright Lyα Emitters account for the z≈2−8 Cosmic Ionizing Background (<LINK>). We find that a highly ionizing minority of galaxies with MUV<−17 (LAEs) accounts for the entire ionizing budget from star-forming galaxies. <LINK>'],21,10,267 |
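The bookkeeping behind the emissivity argument above is a short calculation: escape fraction times ionizing photon production efficiency times the UV luminosity density of the contributing population. The f_esc and xi_ion values are the fiducials quoted in the abstract; the UV luminosity density and contributing fraction are assumed placeholders, not numbers from the paper.

```python
import numpy as np

f_esc = 0.5        # LyC escape fraction (paper's fiducial for the ionizing LAEs)
xi_ion = 10**25.9  # ionizing photon production efficiency [Hz erg^-1] (fiducial)
duty = 0.5         # fraction of bright LAEs assumed highly ionizing (assumption)
rho_uv = 10**26.0  # UV luminosity density [erg s^-1 Hz^-1 Mpc^-3]; assumed value

n_dot_ion = duty * f_esc * xi_ion * rho_uv  # photons s^-1 Mpc^-3
print(f"ionizing emissivity ~ 10^{np.log10(n_dot_ion):.1f} photons/s/Mpc^3")
```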
3706,54,1493971052187242497,3313806489,Tim Roberts,Busy teaching - so almost missed that it's NEW PAPER DAY. In which my excellent ex-PhD student @RajAstron shows X-ray and optical emission don't always change concurrently in a pulsating ULX - so the optical emission could be from the secondary star. <LINK>,https://arxiv.org/abs/2202.06986,"NGC 1313 X-2 is one of the few known pulsating ultraluminous X-ray sources (PULXs), and so is thought to contain a neutron star that accretes at highly super-Eddington rates. However, the physics of this accretion remains to be determined. Here we report the results of two simultaneous XMM-Newton and HST observations of this PULX taken to observe two distinct X-ray behaviours as defined from its Swift light curve. We find that the X-ray spectrum of the PULX is best described by the hard ultraluminous (HUL) regime during the observation taken in the lower flux, lower variability amplitude behaviour; its spectrum changes to a broadened disc during the higher flux, higher variability amplitude epoch. However, we see no accompanying changes in the optical/UV fluxes, with the only difference being a reduction in flux in the near-IR as the X-ray flux increased. We attempt to fit irradiation models to explain the UV/optical/IR fluxes but they fail to provide meaningful constraints. Instead, a physical model for the system leads us to conclude that the optical light is dominated by a companion O/B star, albeit with an IR excess that may be indicative of a jet. We discuss how these results may be consistent with the precession of the inner regions of the accretion disc leading to changes in the observed X-ray properties, but not the optical, and whether we should expect to observe reprocessed emission from ULXs. ","A multi-wavelength view of distinct accretion regimes in the pulsating |
ultraluminous X-ray source NGC 1313 X-2",1,"[""Busy teaching - so almost missed that it's NEW PAPER DAY. In which my excellent ex-PhD student @RajAstron shows X-ray and optical emission don't always change concurrently in a pulsating ULX - so the optical emission could be from the secondary star. \n\n<LINK>""]",22,02,258 |
3707,13,1178655631999426560,1556664198,Kyle Cranmer,"Excited to announce a new paper with Alvaro Sanchez-Gonzalez, Victor Bapst, and @PeterWBattaglia (@DeepMindAI) on ""Hamiltonian Graph Networks with ODE Integrators"" Gives improvements in position & energy accuracy, and zero-shot generalization. <LINK> <LINK> We combine graph networks with a differentiable ordinary differential equation integrator as a mechanism for predicting future states (OGN), and a Hamiltonian as an internal representation (HOGN). <LINK> There are some parables for Machine Learning for physics and interpretability. What does it mean for a network with no physics constraints to be more accurate than the true Hamiltonian? This can happen if you use a simple ODE integrator and big time steps. The graph network that directly learns position and momentum updates with no physics constraints has enough capacity to learn the residuals and make more accurate predictions (using the same crappy integrator). But that doesn't generalize or conserve energy well. Instead, our Hamiltonian graph networks will suffer the same loss of predictive accuracy as the true Hamiltonian with a crappy integrator, but do better at conserving energy and generalize to different step sizes and integrator types. It's interesting to look back and think about the question ""is the network learning the physics?"" It seems clear the hamiltonian network is, but this isn't reflected in the naive test accuracy with the same integrator and step size. This was a fun collaboration. The @DeepMind team is amazing! @Deepmind Also a shout-out to a related project with @PeterWBattaglia and a different Cranmer.... what are the chances? <LINK>",https://arxiv.org/abs/1909.12790,"We introduce an approach for imposing physically informed inductive biases in learned simulation models. We combine graph networks with a differentiable ordinary differential equation integrator as a mechanism for predicting future states, and a Hamiltonian as an internal representation. We find that our approach outperforms baselines without these biases in terms of predictive accuracy, energy accuracy, and zero-shot generalization to time-step sizes and integrator orders not experienced during training. This advances the state-of-the-art of learned simulation, and in principle is applicable beyond physical domains. ",Hamiltonian Graph Networks with ODE Integrators,8,"['Excited to announce a new paper with Alvaro Sanchez-Gonzalez, Victor Bapst, and @PeterWBattaglia (@DeepMindAI) on \n""Hamiltonian Graph Networks with ODE Integrators""\nGives improvements in position & energy accuracy, and zero-shot generalization. \n<LINK> <LINK>', 'We combine graph networks with a differentiable ordinary differential equation integrator as a mechanism for predicting future states (OGN), and a Hamiltonian as an internal representation (HOGN). https://t.co/U9toZHyL7f', 'There are some parables for Machine Learning for physics and interpretability. What does it mean for a network with no physics constraints to be more accurate than the true Hamiltonian? This can happen if you use a simple ODE integrator and big time steps.', ""The graph network that directly learns position and momentum updates with no physics constraints has enough capacity to learn the residuals and make more accurate predictions (using the same crappy integrator). 
But that doesn't generalize or conserve energy well."", 'Instead, our Hamiltonian graph networks will suffer the same loss of predictive accuracy as the true Hamiltonian with a crappy integrator, but do better at conserving energy and generalize to different step sizes and integrator types.', 'It\'s interesting to look back and think about the question ""is the network learning the physics?"" It seems clear the hamiltonian network is, but this isn\'t reflected in the naive test accuracy with the same integrator and step size.', 'This was a fun collaboration. The @DeepMind team is amazing!', '@Deepmind Also a shout-out to a related project with @PeterWBattaglia and a different Cranmer.... what are the chances?\nhttps://t.co/sU6Q8msBku']",19,09,1638 |
3708,58,1186473370784800768,1658162341,Narayanan Rengaswamy,"New paper out just now! <LINK> We rigorously characterize the most general stabilizer codes that support physical transversal T and T^{-1}. @kenbrownquantum @JoshKoomz @dabacon @earltcampbell @CVuillot @QuantumChambs @JarrodMcclean @QuantumGosset As a corollary we prove that CSS codes are optimal for transversal T among non-degenerate stabilizer codes. Personally I think this is my best so far! This paper has been one fun journey that started with two questions I had in mind around October last year: 1. Does the symplectic formalism have to be restricted to Cliffords? This led to <LINK> 2. If non-Clifford gates don't map all Paulis to Paulis, then how do some stabilizer codes support physical T gates? At first I didn't get why the codespace is preserved. Conversations with Michael Newman from @kenbrownquantum's group were very helpful in understanding several things! This is the first official collaboration between the Calderbank/Pfister lab and @kenbrownquantum's lab. I look forward to more! We also show that Bravyi and Haah's triorthogonal codes are essentially the most general family of CSS codes that realize logical transversal T via physical transversal T. As I'm originally trained in classical information and coding theory, I really enjoy that this work produces exciting classical coding problems to work on that are well motivated by quantum constraints, e.g. transversal T. (See CSS-T codes!)",http://arxiv.org/abs/1910.09333,"In order to perform universal fault-tolerant quantum computation, one needs to implement a logical non-Clifford gate. Consequently, it is important to understand codes that implement such gates transversally. In this paper, we adopt an algebraic approach to characterize all stabilizer codes for which transversal $T$ and $T^{-1}$ gates preserve the codespace. Our Heisenberg perspective reduces this to a finite geometry problem that translates to the design of certain classical codes. We prove three corollaries: (a) For any non-degenerate $[[ n,k,d ]]$ stabilizer code supporting a physical transversal $T$, there exists an $[[ n,k,d ]]$ CSS code with the same property; (b) Triorthogonal codes are the most general CSS codes that realize logical transversal $T$ via physical transversal $T$; (c) Triorthogonality is necessary for physical transversal $T$ on a CSS code to realize the logical identity. The main tool we use is a recent efficient characterization of certain diagonal gates in the Clifford hierarchy (arXiv:1902.04022). We refer to these gates as Quadratic Form Diagonal (QFD) gates. Our framework generalizes all existing code constructions that realize logical gates via transversal $T$. We provide several examples and briefly discuss connections to decreasing monomial codes, pin codes, generalized triorthogonality and quasitransversality. We partially extend these results towards characterizing all stabilizer codes that support transversal $\pi/2^{\ell}$ $Z$-rotations. In particular, using Ax's theorem on residue weights of polynomials, we provide an alternate characterization of logical gates induced by transversal $\pi/2^{\ell}$ $Z$-rotations on a family of quantum Reed-Muller codes. We also briefly discuss a general approach to analyze QFD gates that might lead to a characterization of all stabilizer codes that support any given physical transversal $1$- or $2$-local diagonal gate. ",On Optimality of CSS Codes for Transversal $T$,7,"['New paper out just now! 
<LINK>\nWe rigorously characterize the most general stabilizer codes that support physical transversal T and T^{-1}. @kenbrownquantum @JoshKoomz @dabacon @earltcampbell @CVuillot @QuantumChambs @JarrodMcclean @QuantumGosset', 'As a corollary we prove that CSS codes are optimal for transversal T among non-degenerate stabilizer codes.', 'Personally I think this is my best so far! This paper has been one fun journey that started with two questions I had in mind around October last year:', ""1. Does the symplectic formalism have to be restricted to Cliffords? This led to https://t.co/ItErm0KxZT\n2. If non-Clifford gates don't map all Paulis to Paulis, then how do some stabilizer codes support physical T gates? At first I didn't get why the codespace is preserved."", ""Conversations with Michael Newman from @kenbrownquantum's group were very helpful in understanding several things! This is the first official collaboration between the Calderbank/Pfister lab and @kenbrownquantum's lab. I look forward to more!"", ""We also show that Bravyi and Haah's triorthogonal codes are essentially the most general family of CSS codes that realize logical transversal T via physical transversal T."", ""As I'm originally trained in classical information and coding theory, I really enjoy that this work produces exciting classical coding problems to work on that are well motivated by quantum constraints, e.g. transversal T. (See CSS-T codes!)""]",19,10,1421 |
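The triorthogonality condition referenced in the corollaries above (Bravyi-Haah) is checkable in a few lines: every pair and every triple of distinct rows of the binary generator matrix must overlap in an even number of columns. The example matrix below is a toy with the property, not the generator of a useful code.

```python
import numpy as np
from itertools import combinations

def is_triorthogonal(G):
    """Bravyi-Haah triorthogonality: every pair and every triple of
    distinct rows overlaps in an even number of columns (mod-2 check)."""
    G = np.asarray(G) % 2
    for r in (2, 3):
        for rows in combinations(G, r):
            if int(np.prod(rows, axis=0).sum()) % 2:
                return False
    return True

# Toy example: pairwise and triple overlaps are all even.
G = [[1, 1, 1, 0, 0, 0],
     [0, 0, 0, 1, 1, 1],
     [1, 1, 0, 0, 1, 1]]
print(is_triorthogonal(G))  # True
```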
3709,95,1346799290736381953,77592002,Ahmad,"My new paper is up on arxiv! I investigate how metallicity and dust affect the growth of H II regions in clusters. I carry out rad-hydro simulations with detailed RT, and compare photoionization heating with radiation pressure (Pgas and Prad in figs). <LINK> <LINK> This was my first sole-author paper and I'm super happy with it. And there's a lot of scope to do more analysis using these simulations as well, eg with synthetic observations. (Which is just as well, as they weren't exactly cheap...) @poke haha thanks! I admit I've spent a lot of time trying out different colour schemes, so I'm glad it paid off ๐",https://arxiv.org/abs/2101.01193,"Gas metallicity $Z$ and the related dust-to-gas ratio $f_\textrm{d}$ can influence the growth of HII regions via metal line cooling and UV absorption. We model these effects in star-forming regions containing massive stars. We compute stellar feedback from photoionization and radiation pressure (RP) using Monte Carlo radiative transfer coupled with hydrodynamics, including stellar and diffuse radiation fields. We follow a $10^5$ M$_\odot$ turbulent cloud with $Z/$Z$_\odot$ = 2, 1, 0.5, 0.1 and $f_\textrm{d} = 0.01 Z/$Z$_\odot$ with a cluster-sink particle method for star formation. The models evolve for at least 1.5 Myr under feedback. Lower $Z$ results in higher temperatures and therefore larger HII regions. For $Z \ge$Z$_\odot$, radiation pressure $P_\textrm{rad}$ can dominate locally over the gas pressure $P_\textrm{gas}$ in the inner half-parsec around sink particles. Globally, the ratio of $P_\textrm{rad}/P_\textrm{gas}$ is around 1 (2 Z$_\odot$), 0.3 (Z$_\odot$), 0.1 (0.5 Z$_\odot$), and 0.03 (0.1 Z$_\odot$). In the solar model, excluding RP results in an ionized volume several times smaller than the fiducial model with both mechanisms. Excluding RP and UV attenuation by dust results in a larger ionized volume than the fiducial case. That is, UV absorption hinders growth more than RP helps it. The radial expansion velocity of ionized gas reaches $+$15 km/s outwards, while neutral gas has inward velocities for most of the runtime, except for 0.1 Z$_\odot$ which exceeds $+$4 km/s. $Z$ and $f_\textrm{d}$ do not significantly alter the star formation efficiency, rate, or cluster half-mass radius, with the exception of 0.1 Z$_\odot$ due to the earlier expulsion of neutral gas. ","The growth of H II regions around massive stars: the role of metallicity |
and dust",3,"['My new paper is up on arxiv! \n\nI investigate how metallicity and dust affect the growth of H II regions in clusters.\n\nI carry out rad-hydro simulations with detailed RT, and compare photoionization heating with radiation pressure (Pgas and Prad in figs). \n\n<LINK> <LINK>', ""This was my first sole-author paper and I'm super happy with it. And there's a lot of scope to do more analysis using these simulations as well, eg with synthetic observations. (Which is just as well, as they weren't exactly cheap...)"", ""@poke haha thanks! I admit I've spent a lot of time trying out different colour schemes, so I'm glad it paid off ๐""]",21,01,617 |
3710,27,987253115509460992,864555701783474179,julesh,"New preprint: Backward induction for repeated games This is really work I did in 2014, and I *still* can't understand how it works, so I wrote an 'experimental' paper about it. Heavily exploiting features of Haskell to do computable game theory. <LINK> @TheMichaelBurge Good question.... I think yes, and I think one of the interpretations of repeated games is they are really finite-but-unknow-length.Thing is, formalising games where the player doesn't know the rules is a can of worms. @TheMichaelBurge Perhaps though if you have a finite-but-unknown-length game you get 'false horizons' where players have a false belief about when the game ends. Kinda like real life apocalypse sects really...",https://arxiv.org/abs/1804.07074,"We present a method of backward induction for computing approximate subgame perfect Nash equilibria of infinitely repeated games with discounted payoffs. This uses the selection monad transformer, combined with the searchable set monad viewed as a notion of 'topologically compact' nondeterminism, and a simple model of computable real numbers. This is the first application of Escard\'o and Oliva's theory of higher-order sequential games to games of imperfect information, in which (as well as its mathematical elegance) lazy evaluation does nontrivial work for us compared with a traditional game-theoretic analysis. Since a full theoretical understanding of this method is lacking (and appears to be very hard), we consider this an 'experimental' paper heavily inspired by theoretical ideas. We use the famous Iterated Prisoner's Dilemma as a worked example. ",Backward Induction for Repeated Games,3,"[""New preprint: Backward induction for repeated games\nThis is really work I did in 2014, and I *still* can't understand how it works, so I wrote an 'experimental' paper about it. Heavily exploiting features of Haskell to do computable game theory.\n<LINK>"", ""@TheMichaelBurge Good question.... I think yes, and I think one of the interpretations of repeated games is they are really finite-but-unknow-length.Thing is, formalising games where the player doesn't know the rules is a can of worms."", ""@TheMichaelBurge Perhaps though if you have a finite-but-unknown-length game you get 'false horizons' where players have a false belief about when the game ends. Kinda like real life apocalypse sects really...""]",18,04,698 |
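For contrast with the paper's infinite-horizon, selection-monad construction in Haskell: the finite-horizon version of backward induction for a discounted repeated game fits in a short Python sketch. It brute-forces a pure-strategy stage Nash equilibrium and recurses, reproducing the classic all-defect unraveling in the finitely repeated Prisoner's Dilemma; none of the paper's monadic machinery or lazy-evaluation tricks appears here.

```python
from functools import lru_cache

# Prisoner's dilemma stage payoffs: (row, col) for actions in {C, D}.
PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
          ("D", "C"): (3, 0), ("D", "D"): (1, 1)}
ACTIONS = ("C", "D")
DISCOUNT = 0.9

def stage_nash():
    """Brute-force a pure-strategy Nash equilibrium of the stage game."""
    for a in ACTIONS:
        for b in ACTIONS:
            if all(PAYOFF[(x, b)][0] <= PAYOFF[(a, b)][0] for x in ACTIONS) and \
               all(PAYOFF[(a, y)][1] <= PAYOFF[(a, b)][1] for y in ACTIONS):
                return a, b

@lru_cache(maxsize=None)
def backward_induction(rounds):
    """Subgame-perfect play and discounted values for a finite horizon."""
    if rounds == 0:
        return [], (0.0, 0.0)
    a, b = stage_nash()  # last-round reasoning propagates backwards
    tail_play, (v1, v2) = backward_induction(rounds - 1)
    u1, u2 = PAYOFF[(a, b)]
    return [(a, b)] + tail_play, (u1 + DISCOUNT * v1, u2 + DISCOUNT * v2)

play, values = backward_induction(5)
print(play)    # all ('D', 'D'): the finite-horizon unraveling
print(values)
```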
3711,62,1407019963001487366,1674028088,Misha Laskin,"New paper / algo - MABE! We show that combining dynamics models + weighted behavioral priors results in offline RL that is (a) robust across datasets and (b) can transfer behaviors across domains. Paper: <LINK> Site: <LINK> 🧵 1/8 <LINK> Offline RL has the potential to make RL safer and practical, while model-based methods enable sample efficiency and generalization. But offline MBRL relies on uncertainty estimation for conservatism, which is notoriously hard to do accurately, resulting in brittle algos. 2/8 <LINK> We present Offline Model-based RL with Adaptive Behavioral Priors (MABE), an uncertainty-free MBRL approach that regularizes the MBRL policy with an advantage-weighted behavioral prior to keep the MBRL agent near high-return states. On D4RL MABE performs quite well! 3/8 <LINK> Since the behavioral prior is adaptive (advantage or reward weighted), MABE excels in datasets of varying skill levels. 4/8 <LINK> We also investigate new capabilities that MABE brings to the table. MBRL methods work well for in-domain generalization, but can we transfer behaviors across domains even if the dynamics are different? E.g. consider transferring behavior from icy terrain to normal terrain. 5/8 <LINK> Prior MBRL approaches cannot generalize across domains, because the dynamics don't transfer. By combining dynamics models for in-domain generalization + behavioral priors for cross-domain transfer, MABE can successfully transfer behaviors even if the dynamics are different! 6/8 <LINK> Finally, we ablate different components of MABE and even add / remove uncertainty estimation to see how much it contributes. tl;dr is that behavioral priors + offline policy improvement with MBRL contribute most to performance; and uncertainty estimation is not required. 7/8 <LINK> This work was led by a very talented undergrad @catherine_cang and in collaboration with @aravindr93 + @pabbeel. 8/8",http://arxiv.org/abs/2106.09119,"Offline Reinforcement Learning (RL) aims to extract near-optimal policies from imperfect offline data without additional environment interactions. Extracting policies from diverse offline datasets has the potential to expand the range of applicability of RL by making the training process safer, faster, and more streamlined. We investigate how to improve the performance of offline RL algorithms, its robustness to the quality of offline data, as well as its generalization capabilities. To this end, we introduce Offline Model-based RL with Adaptive Behavioral Priors (MABE). Our algorithm is based on the finding that dynamics models, which support within-domain generalization, and behavioral priors, which support cross-domain generalization, are complementary. When combined together, they substantially improve the performance and generalization of offline RL policies. In the widely studied D4RL offline RL benchmark, we find that MABE achieves higher average performance compared to prior model-free and model-based algorithms. In experiments that require cross-domain generalization, we find that MABE outperforms prior methods. Our website is available at this https URL . ","Behavioral Priors and Dynamics Models: Improving Performance and Domain |
Transfer in Offline RL",8,"['New paper / algo - MABE! We show that combining dynamics models + weighted behavioral priors results in offline RL that is (a) robust across datasets and (b) can transfer behaviors across domains.\n\nPaper: <LINK>\nSite: <LINK>\n\n๐งต 1/8 <LINK>', 'Offline RL has the potential to make RL safer and practical, while model-based methods enable sample efficiency and generalization.\n\nBut offline MBRL relies on uncertainty estimation for conservatism, which is notoriously hard to do accurately, resulting in brittle algos. \n\n2/8 https://t.co/0kXoXMAAyZ', 'We present Offline Model-based RL with Adaptive Behavioral Priors (MABE), an uncertainty-free MBRL approach that regularizes the MBRL policy with an advantage-weighted behavioral prior to keep the MBRL agent near high-return states. \n\nOn D4RL MABE performs quite well!\n\n3/8 https://t.co/cAWWqBjyBi', 'Since the behavioral prior is adaptive (advantage or reward weighted), MABE excels in datasets of varying skill levels.\n\n4/8 https://t.co/lpbJM8cAQ3', 'We also investigate new capabilities that MABE brings to the table. MBRL methods work well for in-domain generalization, but can we transfer behaviors across domains even if the dynamics are different?\n\nE.g. consider transferring behavior from icy terrain to normal terrain.\n\n5/8 https://t.co/7dKtgglKu4', ""Prior MBRL approaches cannot generalize across domains, because the dynamics don't transfer. By combining dynamics models for in-domain generalization + behavioral priors for cross-domain transfer, MABE can successfully transfer behaviors even if the dynamics are different!\n\n6/8 https://t.co/zBt7599Vpd"", 'Finally, we ablate different components of MABE and even add / remove uncertainty estimation to see how much it contributes. tl;dr is that behavioral priors + offline policy improvement with MBRL contribute most to performance; and uncertainty estimation is not required.\n7/8 https://t.co/u1yjyAtW6H', 'This work was led by a very talented undergrad @catherine_cang and in collaboration with @aravindr93\n+ @pabbeel. \n8/8']",21,06,1901 |
3712,41,1452892660910501897,928668522242244608,Mike Walmsley,"CNN trained on Galaxy Zoo can solve new tasks they were never trained for. Paper thread! <LINK> /n <LINK> Our latest GZ models learn to solve every GZ question at once. Solving this broad task makes them learn a semantically meaningful representation, which we can use to...2/n 1) Find similar galaxies to a query galaxy. Play with it at <LINK> 2) Find the anomalies you are personally most interested in, by understanding visual similarity and learning from your feedback. and most importantly, 3) ... Finetune to solve new problems. Pretraining on GZ works much better than Imagenet or from scratch. You can classify rings with just ~10 examples. Find your own galaxies with this Colab: <LINK> All the code is public and documented. Includes pretrained models. More in our blogs for researchers (<LINK>), @galaxyzoo volunteers (<LINK>) and of course the paper itself (<LINK>). Special thanks to Michelle Lochner for her Astronomaly work, which inspired the anomaly-finding section <LINK>. We plan on adding this new method into Astronomaly. Also special thanks to @radastrat and @chrislintott for entertaining my random tangents.",https://arxiv.org/abs/2110.12735,"Astronomers have typically set out to solve supervised machine learning problems by creating their own representations from scratch. We show that deep learning models trained to answer every Galaxy Zoo DECaLS question learn meaningful semantic representations of galaxies that are useful for new tasks on which the models were never trained. We exploit these representations to outperform existing approaches at several practical tasks crucial for investigating large galaxy samples. The first task is identifying galaxies of similar morphology to a query galaxy. Given a single galaxy assigned a free text tag by humans (e.g. `#diffuse'), we can find galaxies matching that tag for most tags. The second task is identifying the most interesting anomalies to a particular researcher. Our approach is 100\% accurate at identifying the most interesting 100 anomalies (as judged by Galaxy Zoo 2 volunteers). The third task is adapting a model to solve a new task using only a small number of newly-labelled galaxies. Models fine-tuned from our representation are better able to identify ring galaxies than models fine-tuned from terrestrial images (ImageNet) or trained from scratch. We solve each task with very few new labels; either one (for the similarity search) or several hundred (for anomaly detection or fine-tuning). This challenges the longstanding view that deep supervised methods require new large labelled datasets for practical use in astronomy. To help the community benefit from our pretrained models, we release our fine-tuning code zoobot. Zoobot is accessible to researchers with no prior experience in deep learning. ","Practical Galaxy Morphology Tools from Deep Supervised Representation |
Learning",6,"['CNN trained on Galaxy Zoo can solve new tasks they were never trained for. \n\nPaper thread! <LINK> /n <LINK>', 'Our latest GZ models learn to solve every GZ question at once. Solving this broad task makes them learn a semantically meaningful representation, which we can use to...2/n', '1) Find similar galaxies to a query galaxy. Play with it at https://t.co/BAfLeZOCyV\n\n2) Find the anomalies you are personally most interested in, by understanding visual similarity and learning from your feedback.\n\nand most importantly, 3) ...', 'Finetune to solve new problems. Pretraining on GZ works much better than Imagenet or from scratch. You can classify rings with just ~10 examples.\n\nFind your own galaxies with this Colab: https://t.co/C5m8aSN6rv \n\nAll the code is public and documented. Includes pretrained models.', 'More in our blogs for researchers (https://t.co/B3KljmRrDb), @galaxyzoo volunteers (https://t.co/1R5L9ceywZ) and of course the paper itself (https://t.co/J1EFJuQ8oo).', 'Special thanks to Michelle Lochner for her Astronomaly work, which inspired the anomaly-finding section https://t.co/KBv7txZXNn. We plan on adding this new method into Astronomaly.\n\nAlso special thanks to @radastrat and @chrislintott for entertaining my random tangents.']",21,10,1133 |
3713,78,1426129957885992964,1575687696,Dr. Darko Donevski,"If you're interested in massive galaxies, there is a new paper from our group led by @Sissaschool PhD student Lara Pantoni. The study is on today's @arxiv as <LINK>. It explores CO gas properties to understand the evolution of some fascinating, distant objects.",https://arxiv.org/abs/2108.05596,"We present the ALMA view of 11 main-sequence DSFGs, (sub-)millimeter selected in the GOODS-S field, and spectroscopically confirmed to be at the peak of Cosmic SFH (z = 2-3). Our study combines the analysis of galaxy SED with ALMA continuum and CO spectral emission, by using ALMA Science Archive products at the highest spatial resolution currently available for our sample (< 1 arcsec). We include galaxy multi-band images and photometry (in the optical, radio and X-rays) to investigate the interlink between dusty, gaseous and stellar components and the eventual presence of AGN. We use multi-band sizes and morphologies to gain an insight on the processes that lead galaxy evolution, e.g. gas condensation, star formation, AGN feedback. The 11 DSFGs are very compact in the (sub-)millimeter (median r(ALMA) = 1.15 kpc), while the optical emission extends to larger radii (median r(H)/r(ALMA) = 2.05). CO lines reveal the presence of a rotating disc of molecular gas, but we can not exclude either the presence of interactions and/or molecular outflows. Images at higher (spectral and spatial) resolution are needed to disentangle from the possible scenarios. Most of the galaxies are caught in the compaction phase, when gas cools and falls into galaxy centre, fuelling the dusty burst of star formation and the growing nucleus. We expect these DSFGs to be the high-z star-forming counterparts of massive quiescent galaxies. Some features of CO emission in three galaxies are suggestive of forthcoming/ongoing AGN feedback, that is thought to trigger the morphological transition from star-forming disks to ETGs. ","An ALMA view of 11 Dusty Star Forming Galaxies at the peak of Cosmic |
Star Formation History",1,"[""If you're interested in massive galaxies, there is a new paper from our group led by @Sissaschool PhD student Lara Pantoni. The study is on today's @arxiv as <LINK>. It explores CO gas properties to understand the evolution of some fascinating, distant objects.""]",21,08,261 |
3714,213,1390563362027802626,2739515118,christian majenz,"Together with colleagues at QuSoft, I realized that there are guides for first-time reviewers and even first-time PC-chairs, but we couldn't find any for first-time PC members. So we wrote one: <LINK> As far as I can see, only one of my coauthors, @Huebli, is on twitter. Is that correct?",https://arxiv.org/abs/2105.02773,"In theoretical computer science, conferences play an important role in the scientific process. The decisions whether to accept or reject articles is taken by the program committee (PC) members. Serving on a PC for the first time can be a daunting experience. This guide will help new program-committee members to understand how the system works, and provide useful tips and guidelines. It discusses every phase of the paper-selection process, and the tasks associated to it. ","A Guide for New Program Committee Members at Theoretical Computer |
Science Conferences",2,"[""Together with colleagues at QuSoft, I realized that there are guides for first-time reviewers and even first-time PC-chairs, but we couldn't find any for first-time PC members. So we wrote one: <LINK>"", 'As far as I can see, only one of my coauthors, @Huebli, is on twitter. Is that correct?']",21,05,288 |
3715,111,1456538097185923102,1238481001304686594,Pablo Martínez-Miravé,"New paper! With @MariamTortola and Susana Molina Sedgwick ""Non-standard interactions from the future neutrino solar sector"" <LINK> We study the potential of a combined analysis of JUNO and Hyper-Kamiokande in the presence of non-standard interactions(NSI) We show that strong constraints on NSI can be derived in that case, while ensuring an accurate determination of the oscillation parameters. We also illustrate the nice complementarity between both experiments <LINK>",https://arxiv.org/abs/2111.03031,"The next-generation neutrino experiment JUNO will determine the solar oscillation parameters - $\sin^2 \theta_{12}$ and $\Delta m^2_{21}$ - with great accuracy, in addition to measuring $\sin^2\theta_{13}$, $\Delta m^2_{31}$, and the mass ordering. In parallel, the continued study of solar neutrinos at Hyper-Kamiokande will provide complementary measurements in the solar sector. In this paper, we address the expected sensitivity to non-universal and flavour-changing non-standard interactions (NSI) with $d$-type quarks from the combination of these two future neutrino experiments. We also show the robustness of their measurements of the solar parameters $\sin^2 \theta_{12}$ and $\Delta m^2_{21}$ in the presence of NSI. We study the impact of the exact experimental configuration of the Hyper-Kamiokande detector, and conclude it is of little relevance in this scenario. Finally, we find that the LMA-D solution is expected to be present if no additional input from non-oscillation experiments is considered. ",Non-standard interactions from the future neutrino solar sector,2,"['New paper! With @MariamTortola and Susana Molina Sedgwick\n\n""Non-standard interactions from the future neutrino solar sector""\n\n<LINK>\n\nWe study the potential of a combined analysis of JUNO and Hyper-Kamiokande in the presence of non-standard interactions(NSI)', 'We show that strong constraints on NSI can be derived in that case, while ensuring an accurate determination of the oscillation parameters.\n\nWe also illustrate the nice complementarity between both experiments https://t.co/BaAC8JE7NP']",21,11,475 |
3716,188,1361188427547496457,1367912887,Stéphane Deny,"With @D_Bouchacourt and @marksibrahim at @facebookai, we try to Address the Topological Defects of Disentanglement via Distributed Operators: <LINK> Wait, what? Find below a short introduction to the topic and the work! 1/ In machine learning, disentanglement is the art of learning the factors of variation that compose a dataset. For example, in a dataset of dog pictures, some relevant factors of variation are pose, color and breed. <LINK> 2/ In its traditional implementation, disentanglement attempts to isolate each of these factors of variation into a distinct subspace of a latent representation, as shown in this cartoon. <LINK> 3/ In this work, using considerations from topology (the mathematical field), we show that this notion of disentanglement is impossible to achieve for many transformations, including basic ones such as simple rotations of shapes. <LINK> 4/ We then study an alternative approach to disentanglement, which relies on distributed latent operators, potentially acting on the entire latent space. With group theory we show that this approach does not suffer from the same shortcomings as disentanglement via subspaces. <LINK> 5/ We back up these mathematical observations with simple experiments on toy datasets. We show for example that the rotation of simple shapes can be successfully learned via distributed operators, but not via subspaces. <LINK> 6/ We hope that our work will be a starting point to more discussions on how to choose operators in latent space for disentanglement, and to understand the success of distributed operator strategies in recent work such as <LINK> and <LINK> 7/ All comments and feedback are welcome on this attempt to improve our understanding of disentanglement! Tagging some relevant folks below: @airalcorn2 @mnorko @crozSciTech @emidup @TacoCohen @pfau @StefanoFusi2 @andrewgwils @ylecun @yanndubs @Jack_W_Lindsey @casellesdupre @wellingmax @maurice_weiler @mario1geiger @Yubei_Chen @DylanPaiton @tylerraye @yash_j_sharma @klindt_david @Hidenori8Tanaka @lcfalors @thomaskipf @LakeBrenden @ReubenFeinman @FrancescoLocat8 @poolio @DaniloJRezende @OlivierBachem @chelseabfinn @armandjoulin @Kamesh_Kris @StefanoErmon @jaschasd @davidwromero @erikjbekkers @JeffreySeely @SuryaGanguli @leventsagun @arimorcos @tydsh @davidjschwab @kaishengtai @simple_cell_ @jaakkolehtinen",https://arxiv.org/abs/2102.05623,"A core challenge in Machine Learning is to learn to disentangle natural factors of variation in data (e.g. object shape vs. pose). A popular approach to disentanglement consists in learning to map each of these factors to distinct subspaces of a model's latent representation. However, this approach has shown limited empirical success to date. Here, we show that, for a broad family of transformations acting on images--encompassing simple affine transformations such as rotations and translations--this approach to disentanglement introduces topological defects (i.e. discontinuities in the encoder). Motivated by classical results from group representation theory, we study an alternative, more flexible approach to disentanglement which relies on distributed latent operators, potentially acting on the entire latent space. We theoretically and empirically demonstrate the effectiveness of this approach to disentangle affine transformations. Our work lays a theoretical foundation for the recent success of a new generation of models using distributed operators for disentanglement. ","Addressing the Topological Defects of Disentanglement via Distributed |
Operators",10,"['With @D_Bouchacourt and @marksibrahim at @facebookai, we try to Address the Topological Defects of Disentanglement via Distributed Operators: <LINK>\nWait, what? ๐ง Find below a short introduction to the topic and the work!', '1/ In machine learning, disentanglement is the art of learning the factors of variation that compose a dataset. For example, in a dataset of dog pictures, some relevant factors of variation are pose, color and breed. https://t.co/siqiWIusme', '2/ In its traditional implementation, disentanglement attempts to isolate each of these factors of variation into a distinct subspace of a latent representation, as shown in this cartoon. https://t.co/prJ4VbNcyH', '3/ In this work, using considerations from topology (the mathematical field), we show that this notion of disentanglement is impossible to achieve for many transformations, including basic ones such as simple rotations of shapes. https://t.co/s9tML4QgoN', '4/ We then study an alternative approach to disentanglement, which relies on distributed latent operators, potentially acting on the entire latent space. With group theory we show that this approach does not suffer from the same shortcomings as disentanglement via subspaces. https://t.co/6lZZa5mnOa', '5/ We back up these mathematical observations with simple experiments on toy datasets. We show for example that the rotation of simple shapes can be successfully learned via distributed operators, but not via subspaces. https://t.co/TxtmCoRh5C', '6/ We hope that our work will be a starting point to more discussions on how to choose operators in latent space for disentanglement, and to understand the success of distributed operator strategies in recent work such as https://t.co/sIkpZOymUR and https://t.co/yTQ5w0W8dz', '7/ All comments and feedback are welcome on this attempt to improve our understanding of disentanglement! Tagging some relevant folks below: @airalcorn2 @mnorko @crozSciTech @emidup @TacoCohen @pfau @StefanoFusi2 @andrewgwils @ylecun @yanndubs @Jack_W_Lindsey @casellesdupre', '@wellingmax @maurice_weiler @mario1geiger @Yubei_Chen @DylanPaiton @tylerraye @yash_j_sharma @klindt_david @Hidenori8Tanaka @lcfalors @thomaskipf @LakeBrenden @ReubenFeinman @FrancescoLocat8 @poolio @DaniloJRezende @OlivierBachem @chelseabfinn @armandjoulin', '@Kamesh_Kris @StefanoErmon @jaschasd @davidwromero @erikjbekkers @JeffreySeely @SuryaGanguli @leventsagun @arimorcos @tydsh @davidjschwab @kaishengtai @simple_cell_ @jaakkolehtinen']",21,02,2341 |
3717,6,925289294851461121,892059194240532480,Mikel Artetxe,"Check our new paper on ""Unsupervised Neural Machine Translation"". Training NMT models with only monolingual corpora! <LINK> <LINK> @ngutten It would be fun to see it take its own source code and output the paper! But I don't think that it would actually learn anything meaningful. @ngutten It should degrade smoothly for actual (natural) languages. But trying to connect C and English out of nothing sounds too crazy.",https://arxiv.org/abs/1710.11041,"In spite of the recent success of neural machine translation (NMT) in standard benchmarks, the lack of large parallel corpora poses a major practical problem for many language pairs. There have been several proposals to alleviate this issue with, for instance, triangulation and semi-supervised learning techniques, but they still require a strong cross-lingual signal. In this work, we completely remove the need of parallel data and propose a novel method to train an NMT system in a completely unsupervised manner, relying on nothing but monolingual corpora. Our model builds upon the recent work on unsupervised embedding mappings, and consists of a slightly modified attentional encoder-decoder model that can be trained on monolingual corpora alone using a combination of denoising and backtranslation. Despite the simplicity of the approach, our system obtains 15.56 and 10.21 BLEU points in WMT 2014 French-to-English and German-to-English translation. The model can also profit from small parallel corpora, and attains 21.81 and 15.24 points when combined with 100,000 parallel sentences, respectively. Our implementation is released as an open source project. ",Unsupervised Neural Machine Translation,3,"['Check our new paper on ""Unsupervised Neural Machine Translation"". Training NMT models with only monolingual corpora!\n<LINK> <LINK>', ""@ngutten It would be fun to see it take its own source code and output the paper! But I don't think that it would actually learn anything meaningful."", '@ngutten It should degrade smoothly for actual (natural) languages. But trying to connect C and English out of nothing sounds too crazy.']",17,10,417 |
3718,121,1249636945719951360,797888987675365377,Tom Rainforth,"Our new paper <LINK> shows how we can do blind contact tracing at scale for COVID-19 *without* requiring people to install an app or giving governments access to tracking data. All credit goes to my collaborators @jk_fitzsimons @atulmantri91 @jansen_zhao and Rob <LINK> With @oblivious_AI we will shortly be deploying this in the wild in a city of 5mil+, shout @jk_fitzsimons for more details",https://arxiv.org/abs/2004.05116,"The current COVID-19 pandemic highlights the utility of contact tracing, when combined with case isolation and social distancing, as an important tool for mitigating the spread of a disease [1]. Contact tracing provides a mechanism of identifying individuals with a high likelihood of previous exposure to a contagious disease, allowing additional precautions to be put in place to prevent continued transmission. Here we consider a cryptographic approach to contact tracing based on secure two-party computation (2PC). We begin by considering the problem of comparing a set of location histories held by two parties to determine whether they have come within some threshold distance while at the same time maintaining the privacy of the location histories. We propose a solution to this problem using pre-shared keys, adapted from an equality testing protocol due to Ishai et al [2]. We discuss how this protocol can be used to maintain privacy within practical contact tracing scenarios, including both app-based approaches and approaches which leverage location history held by telecoms and internet service providers. We examine the efficiency of this approach and show that existing infrastructure is sufficient to support anonymised contact tracing at a national level. ","A note on blind contact tracing at scale with applications to the |
COVID-19 pandemic",2,"['Our new paper <LINK> shows how we can do blind contact tracing at scale for COVID-19 *without* requiring people to install an app or giving governments access to tracking data. All credit goes to my collaborators @jk_fitzsimons @atulmantri91 @jansen_zhao and Rob <LINK>', 'With @oblivious_AI we will shortly be deploying this in the wild in a city of 5mil+, shout @jk_fitzsimons for more details']",20,04,392 |
3719,194,1380138443276181509,19149703,Karina Voggel,"A huge effort of our Student in Arizona Allie Hughes is out on the Arxiv today. We find 1900 good GC candidates in Centaurus A in the outer Halo, using Data from Gaia and our own imaging survey. <LINK> <LINK> This work is essentially extending the Gaia method I have developed in my 2020 paper (<LINK>) to fainter magnitudes and its really exciting to see how it is still efficient. What Gaia can provide us that normal spectroscopic surveys can't is completeness in the sparse outskirts of CenAs Halo. Before this method 99% of all known GCs were located within 20kpc. Now we can detect them out to 150kpc. The box is where we had literature GCs! <LINK> Another cool thing that Gaia provides is a confirmation of what literature measurements were not real GCs (blue). Removing these from the overall GCs of CenA shows that the velocity dispersion of the GCs is much smaller than previously thought. <LINK> Because of these false positives the average velocity of the GC system used to be much lower than the one of the galaxy itself. This discrepancy is now gone. What we learned: If your GC systems velocity overlaps with the MW use Gaia to see if it can confirm some as foreground. Also fun fact: This is ground based imaging of one such GC. They were there and visibly resolved into stars with semi-decent imaging. But Galaxy Halos are so big that we miss them if we don't know where to look. <LINK>",https://arxiv.org/abs/2104.02719,"We present a new catalog of 40502 globular cluster (GC) candidates in NGC 5128 out to a projected radius of $\sim$150 kpc, based on data from the Panoramic Imaging Survey of Centaurus and Sculptor (PISCeS), Gaia Data Release 2, and the NOAO Source Catalog. Ranking these candidates based on the likelihood that they are true GCs, we find that approximately 1900 belong to our top two ranking categories and should be the highest priority for spectroscopic follow-up for confirmation. Taking into account our new data and a vetting of previous GC catalogs, we estimate a total GC population of $1450 \pm 160$ GCs. We show that a substantial number of sources previously argued to be low-velocity GCs are instead foreground stars, reducing the inferred GC velocity dispersion. This work showcases the power of Gaia to identify slightly extended sources at the $\sim 4$ Mpc distance of NGC 5128, enabling accurate identification of GCs throughout the entire extended halo, not just the inner regions that have been the focus of most previous work. ","NGC 5128 globular cluster candidates out to 150 kpc: a comprehensive |
catalog from Gaia and ground based data",6,"['A huge effort of our Student in Arizona Allie Hughes is out on the Arxiv today. \nWe find 1900 good GC candidates in Centaurus A in the outer Halo, using Data from Gaia and our own imaging survey.\n<LINK> <LINK>', 'This work is essentially extending the Gaia method I have developed in my 2020 paper (https://t.co/jWLbemNdL1) to fainter magnitudes and its really exciting to see how it is still efficient.', ""What Gaia can provide us that normal spectroscopic surveys can't is completeness in the sparse outskirts of CenAs Halo. Before this method 99% of all known GCs were located within 20kpc. Now we can detect them out to 150kpc. The box is where we had literature GCs! https://t.co/BlEzWMdDOG"", 'Another cool thing that Gaia provides is a confirmation of what literature measurements were not real GCs (blue). Removing these from the overall GCs of CenA shows that the velocity dispersion of the GCs is much smaller than previously thought. https://t.co/8yrMXtvKqu', 'Because of these false positives the average velocity of the GC system used to be much lower than the one of the galaxy itself. This discrepancy is now gone. What we learned: If your GC systems velocity overlaps with the MW use Gaia to see if it can confirm some as foreground.', ""Also fun fact: This is ground based imaging of one such GC. They were there and visibly resolved into stars with semi-decent imaging. But Galaxy Halos are so big that we miss them if we don't know where to look. https://t.co/pQrtjF512l""]",21,04,1399 |
3720,12,1432969657871384581,2999702157,Anton Ilderton,"My new paper on the #Schwinger effect is out on the #arXiv today. I *think* it's quite a nice result. Not sure what the community will think... #lasers #quantum #physics #AcademicTwitter <LINK> First responses are in! Partner: ""I didn't know particles could be diabetic!"" @DrKate_L TOTES!",https://arxiv.org/abs/2108.13885,"The production of electron-positron pairs from light is a famous prediction of quantum electrodynamics. Yet it is often emphasised that the number of produced pairs has no physical meaning until the driving electromagnetic fields are switched off, as otherwise its definition is basis-dependent. The common adiabatic definition, in particular, can predict the `creation' of a number of pairs orders of magnitude larger than the final yield. We show here, by clarifying exactly what is being counted, that the adiabatic number of pairs has an unambiguous and physical interpretation. As a result, and perhaps contrary to expectation, the large numbers of pairs seen at non-asymptotic times become, in principle, physically accessible. ",The physics of adiabatic particle number in the Schwinger effect,3,"[""My new paper on the #Schwinger effect is out on the #arXiv today. I *think* it's quite a nice result. Not sure what the community will think... #lasers #quantum #physics #AcademicTwitter \n\n <LINK>"", 'First responses are in!\nPartner: ""I didn\'t know particles could be diabetic!""', '@DrKate_L TOTES!']",21,08,289 |
3721,114,1194531073553846272,1030804177855959042,Michael Scherer,"Intertwined orders and emergent symmetries play a critical role in condensed matter physics. In a combined QMC and FRG study of interacting Dirac fermions, we establish stability of multi-critical points with enhanced symmetries and phase-coexistence <LINK>. <LINK>",https://arxiv.org/abs/1911.01244,"The quantum phase diagram and critical behavior of two-dimensional Dirac fermions coupled to two compatible order-parameter fields with $O(N_1)\oplus O(N_2)$ symmetry is investigated. Recent numerical studies of such systems have reported evidence for non-Landau-Ginzburg-Wilson transitions and emergent $O(N_1+N_2)$ symmetry between the two ordered states, which has been interpreted within a scenario of deconfined quantum criticality in (2+1)-dimensional Dirac materials. Here, we provide two theoretical approaches to refine the phase diagrams of such systems. In the immediate vicinity of the multicritical point between the ordered phases and the semimetallic phase, we employ a non-perturbative field-theoretical analysis based on the functional renormalization group. For the particular case of $N_1=3$, $N_2=1$, we perform a large-scale quantum Monte Carlo analysis of the strong-coupling region, where both orders meet. Our findings support the robust emergence of enhanced symmetry at the multicritical point and suggest the transition between the two ordered phases to take place via a sequence of continuous transitions. In particular, we find that intermediate regimes of coexistence are present in the phase diagram for all values of $N_1$ and $N_2$. ",Emergent symmetries and coexisting orders in Dirac fermion systems,1,"['Intertwined orders and emergent symmetries play a critical role in condensed matter physics. In a combined QMC and FRG study of interacting Dirac fermions, we establish stability of multi-critical points with enhanced symmetries and phase-coexistence <LINK>. <LINK>']",19,11,266 |
3722,154,1356915866475069444,460489687,Juan Mateos Garcia,"The privatisation of AI research(ers) In a new working paper, we map AI researcher career transitions between academia and industry, providing evidence about the scale, drivers and potential consequences of the AI brain drain. Deeper thread soon <LINK> <LINK> [Co authored w/ @Daniel_S_Hain, @kstathou & @RJurowetzki ] @jermainkaminski That sounds like a great idea - our analysis of drivers and outcomes is quite simple right now. It would be very interesting to enhance it from a directional angle as you say. Maybe a good opportunity for collaboration? @gjrietveld @jermainkaminski Definitely - Thank you for flagging this up! @gjrietveld @jermainkaminski $30 for 3-day access it better be!",https://arxiv.org/abs/2102.01648,"The private sector is playing an increasingly important role in basic Artificial Intelligence (AI) R&D. This phenomenon, which is reflected in the perception of a brain drain of researchers from academia to industry, is raising concerns about a privatisation of AI research which could constrain its societal benefits. We contribute to the evidence base by quantifying transition flows between industry and academia and studying its drivers and potential consequences. We find a growing net flow of researchers from academia to industry, particularly from elite institutions into technology companies such as Google, Microsoft and Facebook. Our survival regression analysis reveals that researchers working in the field of deep learning as well as those with higher average impact are more likely to transition into industry. A difference-in-differences analysis of the effect of switching into industry on a researcher's influence proxied by citations indicates that an initial increase in impact declines as researchers spend more time in industry. This points at a privatisation of AI knowledge compared to a counterfactual where those high-impact researchers had remained in academia. Our findings highlight the importance of strengthening the public AI research sphere in order to ensure that the future of this powerful technology is not dominated by private interests. ","The Privatization of AI Research(-ers): Causes and Potential |
Consequences -- From university-industry interaction to public research |
brain-drain?",5,"['The privatisation of AI research(ers)\n\nIn a new working paper, we map career AI researcher career transitions between academia and industry, providing evidence about the scale, drivers and potential consequences of the AI brain drain. Deeper thread soon\n\n<LINK> <LINK>', '[Co authored w/ @Daniel_S_Hain, @kstathou & @RJurowetzki ]', '@jermainkaminski That sounds like a great idea - our analysis of drivers and outcomes is quite simple right now. It would be very interesting to enhance it from a directional angle as you say. Maybe a good opportunity for collaboration?', '@gjrietveld @jermainkaminski Definitely - Thank you for flagging this up!', '@gjrietveld @jermainkaminski $30 for 3-day access it better be! ๐']",21,02,702 |
3723,76,1129275019102756869,882303076505456642,Timon Emken,"Our new paper about the direct detection of low-mass #DarkMatter interacting strongly with ordinary matter hit the @arxiv today. Short summary in these tweets. <LINK> <LINK> A detection experiment on Earth cannot detect DM above some critical interaction strength due to scatterings in Earth's crust/atmosphere. We used #MonteCarlo analytic methods to estimate these critical cross sections for various DM-electron scattering experiments and models. <LINK> We find an open window in parameter space for a strongly interacting sub-dominant component of DM (<1%) for ultralight (but not massless) dark photon mediators. <LINK> A small-scale detector at high altitudes, e.g. on a balloon or a satellite could probe such strong DM interactions. This kind of experiment would be sensitive to a strong, orbital signal modulation due to the Earth's ""dark matter shadow"", as seen in the video. The new version of the DaMaSCUS-CRUST code (Dark Matter Simulation Code for Underground Scatterings) is publicly available on @GitHub and @ZENODO_ORG. <LINK> <LINK>",https://arxiv.org/abs/1905.06348,"We consider direct-detection searches for sub-GeV dark matter via electron scatterings in the presence of large interactions between dark and ordinary matter. Scatterings both on electrons and nuclei in the Earth's crust, atmosphere, and shielding material attenuate the expected local dark matter flux at a terrestrial detector, so that such experiments lose sensitivity to dark matter above some critical cross section. We study various models, including dark matter interacting with a heavy and ultralight dark photon, through an electric dipole moment, and exclusively with electrons. For a dark-photon mediator and an electric dipole interaction, the dark matter-electron scattering cross-section is directly linked to the dark matter-nucleus cross section, and nuclear interactions typically dominate the attenuation process. We determine the exclusion bands for the different dark-matter models from several experiments - SENSEI, CDMS-HVeV, XENON10, XENON100, and DarkSide-50 - using a combination of Monte Carlo simulations and analytic estimates. We also derive projected sensitivities for a detector located at different depths and for a range of exposures, and calculate the projected sensitivity for SENSEI at SNOLAB and DAMIC-M at Modane. Finally, we discuss the reach to high cross sections and the modulation signature of a small balloon- and satellite-borne detector sensitive to electron recoils, such as a Skipper-CCD. Such a detector could potentially probe unconstrained parameter space at high cross sections for a sub-dominant component of dark matter interacting with a massive, but ultralight, dark photon. ","Direct Detection of Strongly Interacting Sub-GeV Dark Matter via |
Electron Recoils",5,"['Our new paper about the direct detection of low-mass #DarkMatter interacting strongly with ordinary matter hit the @arxiv today. Short summary in these tweets. \n\n<LINK> <LINK>', ""A detection experiment on Earth cannot detect DM above some critical interaction strength due to scatterings in Earth's crust/atmosphere. We used #MonteCarlo analytic methods to estimate these critical cross sections for various DM-electron scattering experiments and models. https://t.co/guBckHRrTz"", 'We find an open window in parameter space for a strongly interacting sub-dominant component of DM (<1%) for ultralight (but not massless) dark photon mediators. https://t.co/aosCZZu1PH', 'A small-scale detector at high altitudes, e.g. on a balloon or a satellite could probe such strong DM interactions. This kind of experiment would be sensitive to a strong, orbital signal modulation due to the Earth\'s ""dark matter shadow"", as seen in the video.', 'The new version of the DaMaSCUS-CRUST code (Dark Matter Simulation Code for Underground Scatterings) is publicly available on @GitHub and @ZENODO_ORG. \n\nhttps://t.co/Zo5H7bXCKz \n\nhttps://t.co/QH5mFwp2mw']",19,05,1056 |
3724,56,1362358042482851844,27686902,Dr Becky Smethurst,"Take a look at some of the galaxy classifications done by volunteers on @galaxyzoo on the DECaLS imaging! Mike has put together a fun interface for you to explore them yourself All from our new research paper published today: <LINK> <LINK> @makizdat In astronomy it’s usually who contributed the most. The first author is usually the main writer of the text. And then it’s alphabetical at a certain point when there’s bigger collaborations. In chemistry though, it’s usually the most senior person comes last",https://arxiv.org/abs/2102.08414,"We present Galaxy Zoo DECaLS: detailed visual morphological classifications for Dark Energy Camera Legacy Survey images of galaxies within the SDSS DR8 footprint. Deeper DECaLS images (r=23.6 vs. r=22.2 from SDSS) reveal spiral arms, weak bars, and tidal features not previously visible in SDSS imaging. To best exploit the greater depth of DECaLS images, volunteers select from a new set of answers designed to improve our sensitivity to mergers and bars. Galaxy Zoo volunteers provide 7.5 million individual classifications over 314,000 galaxies. 140,000 galaxies receive at least 30 classifications, sufficient to accurately measure detailed morphology like bars, and the remainder receive approximately 5. All classifications are used to train an ensemble of Bayesian convolutional neural networks (a state-of-the-art deep learning method) to predict posteriors for the detailed morphology of all 314,000 galaxies. When measured against confident volunteer classifications, the networks are approximately 99% accurate on every question. Morphology is a fundamental feature of every galaxy; our human and machine classifications are an accurate and detailed resource for understanding how galaxies evolve. ","Galaxy Zoo DECaLS: Detailed Visual Morphology Measurements from |
Volunteers and Deep Learning for 314,000 Galaxies",2,"['Take a look at some of the galaxy classifications done by volunteers on @galaxyzoo on the DECaLS imaging! Take a look at some of the galaxy classifications done by volunteers on @galaxyzoo on the DECaLS imaging! Mike has put together a fun interface for you to explore them yourself \n\nAll from our new research paper published today: <LINK> <LINK>', '@makizdat In astronomy it’s usually who contributed the most. The first author is usually the main writer of the text. And then it’s alphabetical at a certain point when there’s bigger collaborations. In chemistry though, it’s usually the most senior person comes last']",21,02,509 |
3725,158,1463425906249289728,12745042,RJB Goudie,New preprint by @hhau_stats +me @MRC_BSU Have Bayesian submodels linked in a โchain-likeโ structure for several data sources? ie neighbouring submodels share parameters We propose โchained Markov meldingโ for forming a suitable encompassing Bayes model <LINK> <LINK>,https://arxiv.org/abs/2111.11566,"A challenge for practitioners of Bayesian inference is specifying a model that incorporates multiple relevant, heterogeneous data. It may be easier to instead specify distinct submodels for each source of data, then join the submodels together. We consider chains of submodels, where submodels directly relate to their neighbours via common quantities which may be parameters or deterministic functions thereof. We propose chained Markov melding, an extension of Markov melding, a generic method to combine chains of submodels into a joint model. One challenge we address is appropriately capturing the prior dependence between common quantities within a submodel, whilst also reconciling differences in priors for the same common quantity between two adjacent submodels. Estimating the posterior of the resulting overall joint model is also challenging, so we describe a sampler that uses the chain structure to incorporate information contained in the submodels in multiple stages, possibly in parallel. We demonstrate our methodology using two examples. The first example considers an ecological integrated population model, where multiple data are required to accurately estimate population immigration and reproduction rates. We also consider a joint longitudinal and time-to-event model with uncertain, submodel-derived event times. Chained Markov melding is a conceptually appealing approach to integrating submodels in these settings. ",Combining chains of Bayesian models with Markov melding,1,['New preprint by @hhau_stats +me @MRC_BSU\n\nHave Bayesian submodels linked in a โchain-likeโ structure for several data sources? ie neighbouring submodels share parameters\n\nWe propose โchained Markov meldingโ for forming a suitable encompassing Bayes model\n\n<LINK> <LINK>'],21,11,266 |
3726,12,900758384122748928,2180768821,Erik Hoel,"What is causation? How do you measure it? What causes what? I’m on a new paper that just went up on arXiv about this <LINK> this is truly excellent work by my colleagues Larissa Albantakis, William Marshall, and Giulio Tononi @ChristianHoney_ @nattyover In IIT, information is integrated by the system itself. But the observer can measure this using information theory and causal analysis @ChristianHoney_ @nattyover There purposefully is no observer in the theory, unlike in traditional information theory. An observer can measure it but does not create it @ChristianHoney_ @nattyover Sure thing - DM me or email me at hoelerik at gmail",https://arxiv.org/abs/1708.06716,"Actual causation is concerned with the question ""what caused what?"" Consider a transition between two states within a system of interacting elements, such as an artificial neural network, or a biological brain circuit. Which combination of synapses caused the neuron to fire? Which image features caused the classifier to misinterpret the picture? Even detailed knowledge of the system's causal network, its elements, their states, connectivity, and dynamics does not automatically provide a straightforward answer to the ""what caused what?"" question. Counterfactual accounts of actual causation based on graphical models, paired with system interventions, have demonstrated initial success in addressing specific problem cases in line with intuitive causal judgments. Here, we start from a set of basic requirements for causation (realization, composition, information, integration, and exclusion) and develop a rigorous, quantitative account of actual causation that is generally applicable to discrete dynamical systems. We present a formal framework to evaluate these causal requirements that is based on system interventions and partitions, and considers all counterfactuals of a state transition. This framework is used to provide a complete causal account of the transition by identifying and quantifying the strength of all actual causes and effects linking the two consecutive system states. Finally, we examine several exemplary cases and paradoxes of causation and show that they can be illuminated by the proposed framework for quantifying actual causation. ","What caused what? A quantitative account of actual causation using |
dynamical causal networks",5,"['What is causation? How do you measure it? What causes what? Iโm on a new paper that just went up on arXiv about this <LINK>', 'this is truly excellent work by my colleagues Larissa Albantakis, William Marshall, and Giulio Tononi', '@ChristianHoney_ @nattyover In IIT, information is integrated by the system itself. But the observer can measure this using information theory and causal analysis', '@ChristianHoney_ @nattyover There purposefully is no observer in the theory, unlike in traditional information theory. An observer can measure it but does not create it', '@ChristianHoney_ @nattyover Sure thing - DM me or email me at hoelerik at gmail']",17,08,637 |
3727,44,1519313005242159105,724609851884724225,Ana Belen Sainz,"New paper out! ""An open-source linear program for testing nonclassicality"" <LINK> Input the states and measurements you want to use, and the program tells you if you can generate nonclassical statistics in a prepare-measure experiment. @ictqt @JohnHSelby",https://arxiv.org/abs/2204.11905,"The gold standard for demonstrating that an experiment resists any classical explanation is to show that its statistics violate generalized noncontextuality. We here provide an open-source linear program for testing whether or not any given prepare-measure experiment is classically-explainable in this sense. The input to the program is simply an arbitrary set of quantum states and an arbitrary set of quantum effects; the program then determines if the Born rule statistics generated by all pairs of these can be explained by a classical (noncontextual) model. If a classical model exists, it provides an explicit model. If it does not, then it computes the minimal amount of noise that must be added such that a model does exist, and then provides this model. We generalize all these results to arbitrary generalized probabilistic theories (and accessible fragments thereof) as well; indeed, our linear program is a test of simplex-embeddability. ",An open-source linear program for testing nonclassicality,1,"['New paper out! \n\n""An open-source linear program for testing nonclassicality""\n<LINK>\n\nInput the states and measurements you want to use, and the program tells you if you can generate nonclassical statistics in a prepare-measure experiment. \n\n@ictqt\n@JohnHSelby']",22,04,256 |
3728,210,1503405126429650950,1242372799039246338,JF Rupprecht - OM team,"Today on <LINK> Extending active nematics for curved surfaces with an up-down asymmetry to model epithelial tissues or biofilms grown on wavy substrates; we find thresholdless active flows, discontinuous transitions and hysteresis between flowing steady states ... challenging established paradigms for activity-driven spontaneous flows ; * S. Bell, SZ Lin & J. Prost @Lab_PCC",https://arxiv.org/abs/2203.05644,"Cell monolayers are a central model system to tissue biophysics. In vivo, epithelial tissues are curved on the scale of microns, and curvature's role in the onset of spontaneous tissue flows is still not well-understood. Here, we present a hydrodynamic theory for an apical-basal asymmetric active nematic gel on a curved strip. We show that surface curvature qualitatively changes monolayer motion compared to flat space: the resulting flows can be thresholdless, and the transition to motion may change from continuous to discontinuous. Surface curvature, friction and active tractions are all shown to control the flow pattern selected: from simple shear to vortex chains. ",Active nematic flows on curved surfaces,2,"['Today on <LINK> Extending active nematics for curved surfaces with an up-down asymmetry to model epithelial tissues or biofilms grown on wavy substrates; we find thresholdless active flows, discontinuous transitions and hysteresis between flowing steady states', '... challenging established paradigms for activity-driven spontaneous flows ; * S. Bell, SZ Lin & J. Prost @Lab_PCC']",22,03,376 |
3729,9,1245379641025736706,880052980938133505,Graham D Bruce,"Need to know the polarisation of multiple overlapped laser beams simultaneously? Do it with #speckle! @StLeonards_PGs-funded PhD student @MorganFacchin has a new preprint (his first first-author paper!) on @arxiv today, explaining how. <LINK> #NotAnAprilFool <LINK>",https://arxiv.org/abs/2003.14408,"Laser speckle is generated by the multiple interference of light through a disordered medium. Here we study the premise that the speckle pattern retains information about the polarisation state of the incident field. We analytically verify that a linear relation exists between the Stokes vector of the light and the resulting speckle pattern. As a result, the polarisation state of a beam can be measured from the speckle pattern using a transmission matrix approach. We perform a quantitative analysis of the accuracy of the transmission matrix method to measure randomly time-varying polarisation states. In experiment, we find that the Stokes parameters of light from a diode laser can be retrieved with an uncertainty of 0.05 using speckle images of 150$\times$150 pixels and 17 training states. We show both analytically and in experiment that this approach may be extended to the case of more than one laser field, demonstrating the measurement of the Stokes parameters of two laser beams simultaneously from a single speckle pattern and achieving the same uncertainty of 0.05. ","Speckle-based determination of the polarisation state of single and |
multiple laser beams",1,"['Need to know the polarisation of multiple overlapped laser beams simultaneously? Do it with #speckle! @StLeonards_PGs-funded PhD student @MorganFacchin has a new preprint (his first first-author paper!) on @arxiv today, explaining how. <LINK> #NotAnAprilFool <LINK>']",20,03,265 |
3730,59,1161545688662052865,24443979,Dan Stowell,"New paper from us: ""Estimating & Mitigating the Impact of Acoustic Environments on Machine-to-Machine Signalling"" <LINK> by @AmoghMatt - he will present it at @eusipco2019! #eusipco2019 @AmoghMatt @eusipco2019 Oh also - this is the product of a great collaboration with @chirp!",https://arxiv.org/abs/1908.04672,"The advance of technology for transmitting Data-over-Sound in various IoT and telecommunication applications has led to the concept of machine-to-machine over-the-air acoustic signalling. Reverberation can have a detrimental effect on such machine-to-machine signals while decoding. Various methods have been studied to combat the effects of reverberation in speech and audio signals, but it is not clear how well they generalise to other sound types. We look at extending these models to facilitate machine-to-machine acoustic signalling. This research investigates dereverberation techniques to shortlist a single-channel reverberation suppression method through a pilot test. In order to apply the chosen dereverberation method a novel method of estimating acoustic parameters governing reverberation is proposed. The performance of the final algorithm is evaluated on quality metrics as well as the performance of a real machine-to-machine decoder. We demonstrate a dramatic reduction in error rate for both audible and ultrasonic signals. ","Estimating & Mitigating the Impact of Acoustic Environments on |
Machine-to-Machine Signalling",2,"['New paper from us: ""Estimating & Mitigating the Impact of Acoustic Environments on Machine-to-Machine Signalling"" <LINK> by @AmoghMatt - he will present it at @eusipco2019! #eusipco2019', '@AmoghMatt @eusipco2019 Oh also - this is the product of a great collaboration with @chirp!']",19,08,277 |
3731,46,1263301669120548865,2888783160,Saptarshi Pal,<LINK>. If you are interested in discrete time discrete state dynamical systems and would love to learn a cool new method from semigroup decomposition theory to analyze their structures better here is a paper for you by @NehanivCL and I. Also if you are familiar with Bootstrap Percolation in Statistical Mechanics this might be of interest to you,https://arxiv.org/abs/2005.10078,In this paper a modification of the standard Bootstrap Percolation model is introduced. In our modification a discrete time update rule is constructed that allows for non-monotonicity - unlike its classical counterpart. External inputs to drive the system into desirable states are also included in the model. The algebraic structure and complexity properties of the system are inferred by studying the system's holonomy decomposition. We introduce methods of inferring the pools of reversibility for the system. Dependence of system complexity on process parameters is presented and discussed. ,"Algebraic Structure and Complexity of Bootstrap Percolation with |
External Inputs",2,"['<LINK>. If you are interested in discrete time discrete state dynamical systems and would love to learn a cool new method from semigroup decomposition theory to analyze their structures better here is a paper for you by @NehanivCL and I.', 'Also if you are familiar with Bootstrap Percolation in Statistical Mechanics this might be of interest to you']",20,05,347 |
3732,269,1313420178605117440,165610171,Jorge Villa Vélez,"A new #VESTIGE paper led by Alessia Longobardi from @LAM_Marseille is out today on @arxiv showing results on how gas and dust are perturbed by cluster environment in a similar way (outside-in stripping of the galaxy ISM). Here is the link <LINK> #VESTIGEVIII <LINK>",https://arxiv.org/abs/2010.02202,"We measure FIR emission from tails of stripped dust following the ionised and atomic gas components in galaxies undergoing ram pressure stripping. We study the dust-to-gas relative distribution and mass ratio in the stripped interstellar medium and relate them to those of the intra-cluster medium, thus linking the cluster-ICM-galaxy evolution at small-scales. The galaxy sample consists of three Scd Virgo galaxies with stellar masses in the range $10^9\lesssim \mathrm{M_{*}} \lesssim 10^{10}\, \mathrm{M_{\odot}}$, and within 1 Mpc from the cluster centre, namely NGC 4330, NGC 4522, and NGC 4654. Through the analysis of VESTIGE H$\alpha$, $Herschel$ SPIRE far-infrared, and VIVA HI data, we trace the spatial distribution of the tails and infer the dust and gas masses from the measured far-infrared 250 $\mu$m and HI flux densities. Dust-to-gas mass ratios (DGRs) in the tails are analysed as a function of the galaxy mass, metallicity, and dust temperature. Along the stripped component, the dust distribution closely follows the HI and H$\alpha$ emitting gas, all extending beyond the optical disc. In these regions, the DGRs are $2.0\pm0.6\times10^{-3}$, $0.7\pm0.1\times10^{-3}$, and $0.4\pm0.03\times10^{-3}$, for NGC 4330, NGC 4522, and NGC 4654, respectively, i.e. up to a factor of 15 less than the values measured in the main body of nearby galaxies. We also find a negative trend in the DGR as a function of the metallicity that can be explained in terms of a dust component more centrally concentrated in more metal-rich systems. Together with the finding that the stripped dust is cold, $T_{d} \lesssim 25\, K$, our results support an outside-in stripping scenario of the galaxy interstellar medium. This study shows that ram pressure stripping is a key mechanism in the building up of the Virgo intra-cluster component injecting dust grains into the ICM, thus contributing to its metal enrichment. ","A Virgo Environmental Survey Tracing Ionised Gas Emission. VESTIGE VIII. |
Bridging the cluster-ICM-galaxy evolution at small scales",1,['A new #VESTIGE paper led by Alessia Longobardi from @LAM_Marseille is out today on @arxiv showing results on how gas and dust are perturbed by the cluster environment in a similar way (outside-in stripping of the galaxy ISM). Here is the link <LINK> #VESTIGEVIII <LINK>'],20,10,266 |
3733,66,1339245285151633408,60893773,James Bullock,New paper w Victor Robles @YaleAstronomy :: 3D orbits of satellite galaxies (via @ESAGaia ) help constrain their dark matter halo structure & past mass loss. One result: no MW sats have Vmax>27km/s => Too Big to Fail will not die. I blame @MBKplus <LINK> <LINK>,https://arxiv.org/abs/2012.07865,"Using the phat-ELVIS suite of Milky Way-size halo simulations, we show that subhalo orbital pericenters, $r_{\rm peri}$, correlate with their dark matter halo structural properties. Specifically, at fixed maximum circular velocity, $V_{\rm max}$, subhalos with smaller $r_{\rm peri}$ are more concentrated (have smaller $r_{\rm max}$ values) and have lost more mass, with larger peak circular velocities, $V_{\rm peak}$, prior to infall. These trends provide information that can tighten constraints on the inferred $V_{\rm max}$ and $V_{\rm peak}$ values for known Milky Way satellites. We illustrate this using published pericenter estimates enabled by Gaia for the nine classical Milky Way dwarf spheroidal satellites. The two densest dSph satellites (Draco and Ursa Minor) have relatively small pericenters, and this pushes their inferred $r_{\rm max}$ and $V_{\rm max}$ values lower than they would have been without pericenter information. For Draco, we infer $V_{\rm max} = 23.5 \, \pm 3.3$ km s$^{-1}$ (compared to $27.3 \, \pm 7.1$ km s$^{-1}$ without pericenter information). Such a shift exacerbates the traditional Too Big to Fail problem. Draco's peak circular velocity range prior to infall narrows from $V_{\rm peak} = 21 - 49$ km s$^{-1}$ without pericenter information to $V_{\rm peak} = 25-37$ km s$^{-1}$ with the constraint. Over the full population of classical dwarf spheroidals, we find no correlation between $V_{\rm peak}$ and stellar mass today, indicative of a high level of stochasticity in galaxy formation at stellar masses below $\sim 10^7$ M$_\odot$. As proper motion measurements for dwarf satellites become more precise, they should enable useful priors on the expected structure and evolution of their host dark matter subhalos. ","Orbital pericenters and the inferred dark matter halo structure of |
satellite galaxies",1,['New paper w Victor Robles @YaleAstronomy :: 3D orbits of satellite galaxies (via @ESAGaia ) help constrain their dark matter halo structure & past mass loss. \n\nOne result: no MW sats have Vmax>27km/s => Too Big to Fail will not die. I blame @MBKplus \n\n<LINK> <LINK>'],20,12,270 |
3734,34,1321735992689250304,863411754243674112,SebastianGPopescu,"Delighted to share our new paper on enhanced outlier status propagation through Hierarchical GPs. In <LINK> we provide equivalents of Deep Kernel Learning and Deep Gaussian Processes in Wasserstein-2 space and show they are better at Out-of-Distribution Detection <LINK> work done with @JamesCole_Neuro @Neurosharp @GlockerBen By decomposing a Sparse GP into its parametric and non-parametric components, we can disentangle the two types of uncertainty present in our posterior variance formula. We consider Distributional Uncertainty as a measure of outlier detection. In standard Deep Gaussian Processes, as the number of inducing points is increased, the Distributional Variance collapses to zero. In figure 2, we can see outlier points (green) getting progressively more closely mapped to areas where inlier points (blue, red) get mapped to. In DeepGPs, the Distributional Variance is low across input space areas where there is no data because of the PCA mean function. Our Wasserstein-2 space models suffer less from this drawback. (Figure 3)",https://arxiv.org/abs/2010.14877,"Stacking Gaussian Processes severely diminishes the model's ability to detect outliers, which when combined with non-zero mean functions, further extrapolates low non-parametric variance to low training data density regions. We propose a hybrid kernel inspired from Varifold theory, operating in both Euclidean and Wasserstein space. We posit that directly taking into account the variance in the computation of Wasserstein-2 distances is of key importance towards maintaining outlier status throughout the hierarchy. We show improved performance on medium and large scale datasets and enhanced out-of-distribution detection on both toy and real data. ","Hierarchical Gaussian Processes with Wasserstein-2 Kernels",5,"['Delighted to share our new paper on enhanced outlier status propagation through Hierarchical GPs. In <LINK> we provide equivalents of Deep Kernel Learning and Deep Gaussian Processes in Wasserstein-2 space and show they are better at Out-of-Distribution Detection <LINK>', 'work done with @JamesCole_Neuro @Neurosharp @GlockerBen', 'By decomposing a Sparse GP into its parametric and non-parametric components, we can disentangle the two types of uncertainty present in our posterior variance formula. We consider Distributional Uncertainty as a measure of outlier detection.', 'In standard Deep Gaussian Processes, as the number of inducing points is increased, the Distributional Variance collapses to zero. In figure 2, we can see outlier points (green) getting progressively more closely mapped to areas where inlier points (blue, red) get mapped to.', 'In DeepGPs, the Distributional Variance is low across input space areas where there is no data because of the PCA mean function. Our Wasserstein-2 space models suffer less from this drawback. (Figure 3)']",20,10,1045 |
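The hybrid kernel in the row above measures distance between the sparse GP's predictive *distributions* rather than between point estimates, which is what lets outlier status (high predictive variance) propagate through the hierarchy. A minimal sketch, assuming diagonal Gaussians, using the closed-form Wasserstein-2 distance; the RBF form and the lengthscale are illustrative choices, not the paper's exact Varifold-inspired kernel.

```python
import numpy as np

def w2_sq(m1, v1, m2, v2):
    """Squared Wasserstein-2 distance between diagonal Gaussians
    N(m1, diag(v1)) and N(m2, diag(v2)); closed form:
    ||m1 - m2||^2 + ||sqrt(v1) - sqrt(v2)||^2."""
    m1, v1, m2, v2 = map(np.asarray, (m1, v1, m2, v2))
    return np.sum((m1 - m2) ** 2) + np.sum((np.sqrt(v1) - np.sqrt(v2)) ** 2)

def w2_rbf_kernel(mu_a, var_a, mu_b, var_b, lengthscale=1.0):
    """RBF-style kernel on distributions: exp(-W2^2 / (2 l^2)). Because
    variance enters the distance, an input with inflated predictive
    variance stays 'far' from the training data in the next layer."""
    return np.exp(-w2_sq(mu_a, var_a, mu_b, var_b) / (2 * lengthscale ** 2))

# toy check: same means, but an outlier-like variance kills the similarity
print(w2_rbf_kernel([0.0, 0.0], [0.1, 0.1], [0.0, 0.0], [0.1, 0.1]))  # 1.0
print(w2_rbf_kernel([0.0, 0.0], [0.1, 0.1], [0.0, 0.0], [4.0, 4.0]))  # ~0.06
```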
3735,147,1499403993918943233,19740214,Kory Mathewson,"New @DeepMind paper on Learning Robust Real-Time Cultural Transmission without Human Data explores the emergence of collective knowledge in reinforcement learning agents. More details: ๐: <LINK> ๐: <LINK> ๐น: <LINK> <LINK> This work will be presented and discussed at the @royalsociety meeting on ""The emergence of collective knowledge and cumulative culture in animals, humans and machines"" <LINK> Collaboration w/ @avishkar58, Bethanie Brownfield, @adriancollister, @agudallago, Ashley Edwards, @reverettai, Alexandre Frechette, @edwardfhughes, @kikujiroK, Yanko Gitahy Oliveira, Julia Pawar, @Miruna_Pislar, Alex Platonov, @evansenter, Sukhdeep Singh, @dbltnk, @l32zhang",https://arxiv.org/abs/2203.00715,"Cultural transmission is the domain-general social skill that allows agents to acquire and use information from each other in real-time with high fidelity and recall. In humans, it is the inheritance process that powers cumulative cultural evolution, expanding our skills, tools and knowledge across generations. We provide a method for generating zero-shot, high recall cultural transmission in artificially intelligent agents. Our agents succeed at real-time cultural transmission from humans in novel contexts without using any pre-collected human data. We identify a surprisingly simple set of ingredients sufficient for generating cultural transmission and develop an evaluation methodology for rigorously assessing it. This paves the way for cultural evolution as an algorithm for developing artificial general intelligence. ","Learning Robust Real-Time Cultural Transmission without Human Data",3,"['New @DeepMind paper on Learning Robust Real-Time Cultural Transmission without Human Data explores the emergence of collective knowledge in reinforcement learning agents. \n\nMore details: \n๐: <LINK>\n๐: <LINK>\n๐น: <LINK> <LINK>', 'This work will be presented and discussed at the @royalsociety meeting on ""The emergence of collective knowledge and cumulative culture in animals, humans and machines"" https://t.co/HNyrF5YyQI', 'Collaboration w/ @avishkar58, Bethanie Brownfield, @adriancollister, @agudallago, Ashley Edwards, @reverettai, Alexandre Frechette, @edwardfhughes, @kikujiroK, Yanko Gitahy Oliveira, Julia Pawar, @Miruna_Pislar, Alex Platonov, @evansenter, Sukhdeep Singh, @dbltnk, @l32zhang']",22,03,673 |
3736,93,1235590132679553024,1011308091328028672,Vashisht Madhavan,"New paper! Scaling MAP-Elites to Deep Neuroevolution. We enable agents to recover from damage and explore in high-D control tasks, where traditional QD algorithms fail. Arxiv <LINK>. Work led by @cedcolas during his internship with me @UberAILabs 1/3 <LINK> Work done with a great team including @Joost_Huizinga and @jeffclune 2/3 In this hard task, the agent must learn to 1) navigate and 2) reach the goal w/ a strongly deceptive reward signal directly leading the agent to the trap on the right. Previous attempts rely on hierarchical RL, yet our method solves the task directly with a single policy. 3/3",https://arxiv.org/abs/2003.01825,"Quality-Diversity (QD) algorithms, and MAP-Elites (ME) in particular, have proven very useful for a broad range of applications including enabling real robots to recover quickly from joint damage, solving strongly deceptive maze tasks or evolving robot morphologies to discover new gaits. However, present implementations of MAP-Elites and other QD algorithms seem to be limited to low-dimensional controllers with far fewer parameters than modern deep neural network models. In this paper, we propose to leverage the efficiency of Evolution Strategies (ES) to scale MAP-Elites to high-dimensional controllers parameterized by large neural networks. We design and evaluate a new hybrid algorithm called MAP-Elites with Evolution Strategies (ME-ES) for post-damage recovery in a difficult high-dimensional control task where traditional ME fails. Additionally, we show that ME-ES performs efficient exploration, on par with state-of-the-art exploration algorithms in high-dimensional control tasks with strongly deceptive rewards. ",Scaling MAP-Elites to Deep Neuroevolution,3,"['New paper! Scaling MAP-Elites to Deep Neuroevolution. We enable agents to recover from damage and explore in high-D control tasks, where traditional QD algorithms fail. Arxiv <LINK>. Work led by @cedcolas during his internship with me @UberAILabs 1/3 <LINK>', 'Work done with a great team including @Joost_Huizinga and @jeffclune 2/3', 'In this hard task, the agent must learn to 1) navigate and 2) reach the goal w/ a strongly deceptive reward signal directly leading the agent to the trap on the right.\nPrevious attempts rely on hierarchical RL, yet our method solves the task directly with a single policy. 3/3']",20,03,607 |
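For orientation, here is a heavily simplified sketch of the MAP-Elites loop that ME-ES builds on: an archive keyed by a discretised behaviour descriptor keeps the best ("elite") solution per cell. In the paper, elites are improved with an ES gradient step estimated from many perturbations of a deep policy; this sketch collapses that to a single Gaussian perturbation, and the toy `evaluate()`, descriptor, and bin count are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(theta):
    """Toy stand-in for an RL rollout: returns (fitness, behaviour).
    In ME-ES these come from episodes of a deep policy with parameters
    theta; this quadratic fitness and tanh descriptor are made up."""
    fitness = -float(np.sum(theta ** 2))
    behaviour = np.tanh(theta[:2])            # 2-D behaviour descriptor in (-1, 1)
    return fitness, behaviour

def cell(behaviour, bins=10):
    """Discretise the behaviour descriptor into an archive cell index."""
    idx = np.clip(((behaviour + 1) / 2 * bins).astype(int), 0, bins - 1)
    return tuple(int(i) for i in idx)

archive = {}                                   # cell -> (fitness, theta)
theta = rng.normal(size=32)

for it in range(2000):
    if archive and it % 2:                     # exploit: perturb a random elite
        key = list(archive)[rng.integers(len(archive))]
        parent = archive[key][1]
    else:                                      # explore from the current point
        parent = theta
    child = parent + 0.1 * rng.normal(size=parent.shape)   # stand-in for ES step
    fit, beh = evaluate(child)
    c = cell(beh)
    if c not in archive or fit > archive[c][0]:            # elitist insertion
        archive[c] = (fit, child)
    theta = child

print("cells filled:", len(archive),
      "best fitness:", max(f for f, _ in archive.values()))
```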
3737,3,1182015175685107712,3743366715,Aaron Mueller,"also! new #emnlp2019 paper with @marty_with_an_e, @amuuueller, and @tallinzen --- we explore whether increasing layer and/or training set size helps language models learn better syntax. the answer: a little, but these alone won't be enough <LINK> <LINK>",https://arxiv.org/abs/1909.00111,"Recurrent neural networks can learn to predict upcoming words remarkably well on average; in syntactically complex contexts, however, they often assign unexpectedly high probabilities to ungrammatical words. We investigate to what extent these shortcomings can be mitigated by increasing the size of the network and the corpus on which it is trained. We find that gains from increasing network size are minimal beyond a certain point. Likewise, expanding the training corpus yields diminishing returns; we estimate that the training corpus would need to be unrealistically large for the models to match human performance. A comparison to GPT and BERT, Transformer-based models trained on billions of words, reveals that these models perform even more poorly than our LSTMs in some constructions. Our results make the case for more data efficient architectures. ",Quantity doesn't buy quality syntax with neural language models,1,"[""also! new #emnlp2019 paper with @marty_with_an_e, @amuuueller, and @tallinzen --- we explore whether increasing layer and/or training set size helps language models learn better syntax. the answer: a little, but these alone won't be enough <LINK> <LINK>""]",19,09,253 |
3738,188,1276199969448493057,970000369072959488,Teresita Suarez,"I released a new paper on the arxiv! We model absorption line systems as pressure confined clouds, confined by the expected pressure of massive galactic haloes in the early Universe (z >5). Inferred chemical abundances may arise in starburst galaxies. <LINK> <LINK>",https://arxiv.org/abs/2006.13088,"We interpret observations of intergalactic low ionisation metal absorption systems at redshifts z $\gtrsim$5 in terms of pressure-confined clouds. We find clouds confined by the expected pressure of galactic haloes with masses $11<\log M_h/h^{-1}M_\odot<12$ provide a good description of the column density ratios between low ionisation metal absorbers. Some of the ratios, however, require extending conventional radiative transfer models of irradiated slabs to spherical (or cylindrical) clouds to allow for lines of sight passing outside the cores of the clouds. Moderate depletion of silicon onto dust grains is also indicated in some systems. The chemical abundances inferred span the range between solar and massive-star dominated stellar populations as may arise in starburst galaxies. The typical HI column densities matching the data correspond to Damped Lyman-$\alpha$ Absorbers (DLAs) or sub-DLAs, with sizes of 40 pc to 3 kpc, gas masses $3.5<\log M_c/M_\odot<8$ and metallicities $0.001-0.01Z_\odot$. Such systems continue to pose a challenge for galaxy-scale numerical simulations to reproduce. ","Modelling intergalactic low ionisation metal absorption line systems |
near the epoch of reionization",1,"['I released a new paper on the arxiv! We model absorption line systems as pressure confined clouds, confined by the expected pressure of massive galactic haloes in the early Universe (z >5). Inferred chemical abundances may arise in starburst galaxies.\n<LINK> <LINK>']",20,06,268 |
3739,79,1092340647078621184,481539448,Richard Alexander,"New paper, led by @bec_nealon, looking at the observational appearance of discs warped by planets on inclined orbits. Bec shows that even modest misalignments (few degrees) are enough for gas-giants to create observable shadows in scattered light images. <LINK> <LINK> There are movies...๐ <LINK> <LINK> Finally, we apply our model to the Hubble observations of TW Hya by @JohnDebes and colleagues. In general we find good agreement with the structures observed in the disc, but matching the variability time-scales is tricky.",https://arxiv.org/abs/1902.00036,"Three-dimensional hydrodynamic numerical simulations have demonstrated that the structure of a protoplanetary disc may be strongly affected by a planet orbiting in a plane that is misaligned to the disc. When the planet is able to open a gap, the disc is separated into an inner, precessing disc and an outer disc with a warp. In this work, we compute infrared scattered light images to investigate the observational consequences of such an arrangement. We find that an inner disc misaligned by less than a degree to the outer disc is indeed able to cast a shadow at larger radii. In our simulations a planet of around 6 Jupiter masses inclined by around 2 degrees is enough to warp the disc and cast a shadow with a depth of more than 10% of the average flux at that radius. We also demonstrate that warp in the outer disc can cause a variation in the azimuthal brightness profile at large radii. Importantly, this latter effect is a function of the distance from the star and is most prominent in the outer disc. We apply our model to the TW Hya system, where a misaligned, precessing inner disc has been invoked to explain a recently observed shadow in the outer disc. Consideration of the observational constraints suggests that an inner disc precessing due to a misaligned planet is an unlikely explanation for the features found in TW Hya. ","Scattered light shadows in warped protoplanetary discs",4,"['New paper, led by @bec_nealon, looking at the observational appearance of discs warped by planets on inclined orbits. Bec shows that even modest misalignments (few degrees) are enough for gas-giants to create observable shadows in scattered light images. \n<LINK> <LINK>', 'There are movies...๐\nhttps://t.co/5Y2CXGNZvp', 'https://t.co/4oAUVPftBx', 'Finally, we apply our model to the Hubble observations of TW Hya by @JohnDebes and colleagues. In general we find good agreement with the structures observed in the disc, but matching the variability time-scales is tricky.']",19,02,526 |
3740,19,1420972752249360385,1015053310603284480,Stephen Kane,"Congratulations to @michelle_hill63 for her incredible work in leading a new paper on the iota Draconis system, available here: <LINK> iota Draconis is a bright (V~3) giant star that is known to host a planet in a highly eccentric orbit (e~0.7). We've been following this star for over a decade with Lick and APF data. The system had a known RV trend which we've been waiting to ""turn around"", which it finally did. Furthermore, the star was observed by TESS and is an ideal asteroseismology target, and indeed signatures were detected. These allowed us to constrain the stellar radius and mass to 2% and 6%, respectively. Using these revised stellar properties, we improved the parameters of the known planet and extracted the properties for the new companion, which has a mass of ~15.6 Jupiter masses and an orbital period of ~68 years. As an added bonus, we analyzed the combined RV+Gaia/Hipparcos data which verified the orbital solution for the new companion. This system provides a fascinating case-study for planetary orbital evolution around evolved stars. @michelle_hill63 and I are especially grateful to Tiago Campante, @zhexingli, @Paul_Dalba, Timothy Brandt, Timothy White, @fringetracker, Keivan Stassun, BJ Fulton, and numerous others who contributed to this effort!",https://arxiv.org/abs/2107.13583,"Giant stars as known exoplanet hosts are relatively rare due to the potential challenges in acquiring precision radial velocities and the small predicted transit depths. However, these giant host stars are also some of the brightest in the sky and so enable high signal-to-noise follow-up measurements. Here we report on new observations of the bright (V ~ 3.3) giant star $\iota$ Draconis ($\iota$ Dra), known to host a planet in a highly eccentric ~511 day period orbit. TESS observations of the star over 137 days reveal asteroseismic signatures, allowing us to constrain the stellar radius, mass, and age to ~2%, ~6%, and ~28%, respectively. We present the results of continued radial velocity monitoring of the star using the Automated Planet Finder over several orbits of the planet. We provide more precise planet parameters of the known planet and, through the combination of our radial velocity measurements with Hipparcos and Gaia astrometry, we discover an additional long-period companion with an orbital period of ~$68^{+60}_{-36}$ years. Mass predictions from our analysis place this sub-stellar companion on the border of the planet and brown dwarf regimes. The bright nature of the star combined with the revised orbital architecture of the system provides an opportunity to study planetary orbital dynamics that evolve as the star moves into the giant phase of its evolution. ","Asteroseismology of iota Draconis and Discovery of an Additional |
Long-Period Companion",6,"['Congratulations to @michelle_hill63 for her incredible work in leading a new paper on the iota Draconis system, available here: <LINK>', 'iota Draconis is a bright (V~3) giant star that is known to host a planet in a highly eccentric orbit (e~0.7). We\'ve been following this star for over a decade with Lick and APF data. The system had a known RV trend which we\'ve been waiting to ""turn around"", which it finally did.', 'Furthermore, the star was observed by TESS and is an ideal asteroseismology target, and indeed signatures were detected. These allowed us to constrain the stellar radius and mass to 2% and 6%, respectively.', 'Using these revised stellar properties, we improved the parameters of the known planet and extracted the properties for the new companion, which has a mass of ~15.6 Jupiter masses and an orbital period of ~68 years.', 'As an added bonus, we analyzed the combined RV+Gaia/Hipparcos data which verified the orbital solution for the new companion. This system provides a fascinating case-study for planetary orbital evolution around evolved stars.', '@michelle_hill63 and I are especially grateful to Tiago Campante, @zhexingli, @Paul_Dalba, Timothy Brandt, Timothy White, @fringetracker, Keivan Stassun, BJ Fulton, and numerous others who contributed to this effort!']",21,07,1281 |
3741,19,1343928977195479041,1299830240823455745,Matteo Agostini,What does Cosmology say about #Majorana #neutrinos? The discovery potential of next-generation double-beta decay experiments has never been higher! Here is our new paper with a Christmas-color palette: <LINK> #Physics #Science <LINK> Uhm... it does resemble the Italian flag ๐ฎ๐น. Is this just a coincidence @FrancescoVissa1?,http://arxiv.org/abs/2012.13938,"We discuss the impact of the cosmological measurements on the predictions of the Majorana mass of the neutrinos, the parameter probed by neutrinoless double-beta decay experiments. Using a minimal set of assumptions, we quantify the probabilities of discovering neutrinoless double-beta decay and introduce a new graphical representation that could be of interest for the community. ",Discovery probabilities of Majorana neutrinos based on cosmological data,2,"['What does Cosmology say about #Majorana #neutrinos? The discovery potential of next-generation double-beta decay experiments has never been higher! Here is our new paper with a Christmas-color palette: <LINK> #Physics #Science <LINK>', 'Uhm... it does resemble the Italian flag ๐ฎ๐น. Is this just a coincidence @FrancescoVissa1?']",20,12,323 |
3742,100,1381893390355300356,1179672664002183168,Ahmet Iscen,"New paper on arxiv! Class-Balanced Distillation for Long-Tailed Visual Recognition <LINK> With André Araujo, @BoqingGo and @CordeliaSchmid <LINK> We propose a new method which combines the advantages of instance and class-balanced sampling by distilling the feature representations of multiple teachers with different characteristics.",https://arxiv.org/abs/2104.05279,"Real-world imagery is often characterized by a significant imbalance of the number of images per class, leading to long-tailed distributions. An effective and simple approach to long-tailed visual recognition is to learn feature representations and a classifier separately, with instance and class-balanced sampling, respectively. In this work, we introduce a new framework, by making the key observation that a feature representation learned with instance sampling is far from optimal in a long-tailed setting. Our main contribution is a new training method, referred to as Class-Balanced Distillation (CBD), that leverages knowledge distillation to enhance feature representations. CBD allows the feature representation to evolve in the second training stage, guided by the teacher learned in the first stage. The second stage uses class-balanced sampling, in order to focus on under-represented classes. This framework can naturally accommodate the usage of multiple teachers, unlocking the information from an ensemble of models to enhance recognition capabilities. Our experiments show that the proposed technique consistently outperforms the state of the art on long-tailed recognition benchmarks such as ImageNet-LT, iNaturalist17 and iNaturalist18. ","Class-Balanced Distillation for Long-Tailed Visual Recognition",2,"['New paper on arxiv!\nClass-Balanced Distillation for Long-Tailed Visual Recognition\n<LINK>\n\nWith André Araujo, @BoqingGo and @CordeliaSchmid <LINK>', 'We propose a new method which combines the advantages of instance and class-balanced sampling by distilling the feature representations of multiple teachers with different characteristics.']",21,04,334 |
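A rough sketch of the training step behind Class-Balanced Distillation: in the second stage the student sees class-balanced batches and is trained with a classification loss plus a feature-distillation loss against frozen, instance-sampled teachers. The (logits, features) interface, the mean over teacher features, and the weight `alpha` are illustrative assumptions; the paper's exact aggregation of multiple teachers may differ.

```python
import torch
import torch.nn.functional as F

def cbd_step(student, teachers, x, y, alpha=0.5):
    """One Class-Balanced-Distillation-style loss on a class-balanced
    batch (x, y). `student(x)` and each frozen `teacher(x)` are assumed
    to return (logits, features); all names here are illustrative."""
    logits, feats = student(x)
    ce = F.cross_entropy(logits, y)                   # balanced classifier loss
    with torch.no_grad():                             # teachers stay frozen
        t_feats = torch.stack([t(x)[1] for t in teachers]).mean(0)
    distill = F.mse_loss(F.normalize(feats, dim=1),   # match teacher features
                         F.normalize(t_feats, dim=1))
    return ce + alpha * distill

# Class-balanced batches can come from inverse-frequency sampling, e.g.
# torch.utils.data.WeightedRandomSampler(weights=1.0 / counts[labels],
#                                        num_samples=len(labels))
```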
3743,89,961421759957348352,22148802,Leo C. Stein ๐ฆ,"New preprint on the arXiv tonight, get it while it's hot! Baoyi & I applied the technology we developed in the previous article to find how the near-horizon limit of extremal black holes get deformed when you turn on some beyond-GR corrections to gravity. <LINK>",https://arxiv.org/abs/1802.02159,"Black holes are a powerful setting for studying general relativity and theories beyond GR. However, analytical solutions for rotating black holes in beyond-GR theories are difficult to find because of the complexity of such theories. In this paper, we solve for the deformation to the near-horizon extremal Kerr metric due to two example string-inspired beyond-GR theories: Einstein-dilaton-Gauss-Bonnet, and dynamical Chern-Simons theory. We accomplish this by making use of the enhanced symmetry group of NHEK and the weak-coupling limit of EdGB and dCS. We find that the EdGB metric deformation has a curvature singularity, while the dCS metric is regular. From these solutions we compute orbital frequencies, horizon areas, and entropies. This sets the stage for analytically understanding the microscopic origin of black hole entropy in beyond-GR theories. ",Deformation of extremal black holes from stringy interactions,1,"[""New preprint on the arXiv tonight, get it while it's hot!\nBaoyi & I applied the technology we developed in the previous article to find how the near-horizon limit of extremal black holes get deformed when you turn on some beyond-GR corrections to gravity.\n<LINK>""]",18,02,262 |
3744,86,964466913257623553,9827202,Lukas Mosser,We have released our publication on conditioning three-dimensional pore- and reservoir-scale models created by generative adversarial networks. You can find the preprint here: <LINK> Code and pre-trained models on GitHub: <LINK> <LINK> @GrahamGanssle @stevejpurves @JesperDramsch Cheers @GrahamGannssle! Great to see interest from the Bureau of Economic Geology on Deep Learning in geoscience.,https://arxiv.org/abs/1802.05622,"Geostatistical modeling of petrophysical properties is a key step in modern integrated oil and gas reservoir studies. Recently, generative adversarial networks (GAN) have been shown to be a successful method for generating unconditional simulations of pore- and reservoir-scale models. This contribution leverages the differentiable nature of neural networks to extend GANs to the conditional simulation of three-dimensional pore- and reservoir-scale models. Based on the previous work of Yeh et al. (2016), we use a content loss to constrain to the conditioning data and a perceptual loss obtained from the evaluation of the GAN discriminator network. The technique is tested on the generation of three-dimensional micro-CT images of a Ketton limestone constrained by two-dimensional cross-sections, and on the simulation of the Maules Creek alluvial aquifer constrained by one-dimensional sections. Our results show that GANs represent a powerful method for sampling conditioned pore and reservoir samples for stochastic reservoir evaluation workflows. ","Conditioning of three-dimensional generative adversarial networks for |
pore and reservoir-scale models",2,"['We have released our publication on conditioning three-dimensional pore- and reservoir-scale models created by generative adversarial networks. You can find the preprint here: <LINK>\nCode and pre-trained models on GitHub: <LINK> <LINK>', '@GrahamGanssle @stevejpurves @JesperDramsch Cheers @GrahamGannssle! Great to see interest from the Bureau of Economic Geology on Deep Learning in geoscience.']",18,02,393 |
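The conditioning described above (after Yeh et al., 2016) amounts to optimising the latent vector of a pretrained GAN: a content loss ties generated voxels to the observed cross-sections where data exists, and a perceptual loss from the discriminator keeps the sample on the learned manifold. A hedged sketch follows; `G`, `D`, the softplus form of the perceptual term, and the weight `lam` are assumptions about the pretrained models, not the exact published objective.

```python
import torch

def condition_gan(G, D, z0, obs, mask, steps=200, lam=0.1, lr=0.05):
    """Find a latent z whose generated volume G(z) matches the
    conditioning data `obs` wherever `mask` == 1, while the perceptual
    (discriminator) term keeps G(z) realistic. The shapes and APIs of
    G and D are illustrative assumptions."""
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x = G(z)
        content = (((x - obs) * mask) ** 2).sum() / mask.sum()
        perceptual = torch.nn.functional.softplus(-D(x)).mean()
        loss = content + lam * perceptual
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).detach()
```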
3745,195,1370764928466964484,1057303533622575106,Tuhin Chakrabarty,"๐Reframing arguments to reflect different connotations but same denotations ๐ New #NAACL2021 paper titled ""ENTRUST: Argument Reframing with Language Models and Entailment"" <LINK> Joint work with @chridey and Smaranda Muresan #NLProc <LINK> Differences in lexical framing, the focus of our work, can have large effects on people's opinions and beliefs. For example ""illegal aliens"" vs ""undocumented workers"". To reframe arguments, we use a lexical resource for ""connotations"" (<LINK>) to create a parallel corpus and propose a method for argument reframing that combines controllable text generation (positive connotation) with a post-decoding entailment component (same denotation). We evaluate our approach on two different tasks (reframing partisan arguments and appeal to fear/prejudice fallacies) showing that our method is preferred over several competing baselines",https://arxiv.org/abs/2103.06758,"Framing involves the positive or negative presentation of an argument or issue depending on the audience and goal of the speaker (Entman 1983). Differences in lexical framing, the focus of our work, can have large effects on people's opinions and beliefs. To make progress towards reframing arguments for positive effects, we create a dataset and method for this task. We use a lexical resource for ""connotations"" to create a parallel corpus and propose a method for argument reframing that combines controllable text generation (positive connotation) with a post-decoding entailment component (same denotation). Our results show that our method is effective compared to strong baselines along the dimensions of fluency, meaning, and trustworthiness/reduction of fear. ","ENTRUST: Argument Reframing with Language Models and Entailment",4,"['๐Reframing arguments to reflect different connotations but same denotations ๐ New #NAACL2021 paper titled ""ENTRUST: Argument Reframing with Language Models and Entailment""\n<LINK>\nJoint work with @chridey and Smaranda Muresan\n#NLProc <LINK>', 'Differences in lexical framing, the focus of our work, can have large effects on people's opinions and beliefs. For example ""illegal aliens"" vs ""undocumented workers"". To reframe arguments', 'We use a lexical resource for ""connotations"" (https://t.co/lZTvppp2kZ) to create a parallel corpus and propose a method for argument reframing that combines controllable text generation (positive connotation) with a post-decoding entailment component (same denotation).', 'We evaluate our approach on two different tasks (reframing partisan arguments and appeal to fear/prejudice fallacies) showing that our\nmethod is preferred over several competing baselines']",21,03,868 |
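The "post-decoding entailment component" can be approximated with an off-the-shelf NLI model: keep only reframed candidates that are mutually entailed by the source argument, i.e. that preserve the denotation. A sketch below; the roberta-large-mnli checkpoint, its label order (index 2 = entailment), and the threshold `tau` are assumptions, and the paper's classifier and decision rule may differ.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "roberta-large-mnli"   # assumed checkpoint; label 2 = entailment
tok = AutoTokenizer.from_pretrained(name)
nli = AutoModelForSequenceClassification.from_pretrained(name).eval()

def entails(premise, hypothesis):
    """Probability that `premise` entails `hypothesis` under the NLI model."""
    enc = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = nli(**enc).logits.softmax(-1)[0]
    return probs[2].item()

def filter_candidates(source, candidates, tau=0.5):
    """Keep reframings that are bidirectionally entailed (same denotation)."""
    return [c for c in candidates
            if entails(source, c) > tau and entails(c, source) > tau]
```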
3746,161,1450853554206482432,942694791707545600,Thomas Scialom,"๐ขNew Paper Alert How do metrics perform across tasks? BEAMetrics: A Benchmark for Language Generation Evaluation Evaluation => Multi Task/Lingual/Modal Paper: <LINK> Code: <LINK> ๐ <LINK> Generative models are becoming increasingly important in NLP. This makes research on evaluation metrics of critical importance. However, there is currently no simple, unified way to compare, analyze or evaluate metrics across a representative set of tasks. 2/ To this purpose, we release BEAMetrics, the Benchmark to Evaluate Automatic Metrics. Across its 11 datasets, BEAMetrics is multi[task, lingual and dimensional] 3/ <LINK> I am particularly excited about including 2 open-ended Question Answering evaluation sets. Indeed, QA has long been considered as an extractive task before the recent advances for abstractive models. 4/ <LINK> Already some interesting findings: - BERTScore becomes increasingly more popular. But its performance is unequal across tasks: on summarization, it often reflects human judgment worse than ROUGE or BLEU. 5/ I hope BEAMetrics will serve the community to stimulate research into future metrics that address the challenges of evaluating flexible generative models of language. A collaboration @DeepMind & @RecitalAI Big thanks to my amazing coauthor @FelixHill84 ๐ 6/6",https://arxiv.org/abs/2110.09147,"Natural language processing (NLP) systems are increasingly trained to generate open-ended text rather than classifying between responses. This makes research on evaluation metrics for generated language -- functions that score system output given the context and/or human reference responses -- of critical importance. However, different metrics have different strengths and biases, and reflect human intuitions better on some tasks than others. There is currently no simple, unified way to compare, analyse or evaluate metrics across a representative set of tasks. Here, we describe the Benchmark to Evaluate Automatic Metrics (BEAMetrics), a resource to make research into new metrics itself easier to evaluate. BEAMetrics users can quickly compare existing and new metrics with human judgements across a diverse set of tasks, quality dimensions (fluency vs. coherence vs. informativeness etc), and languages. As generation experts might predict, BEAMetrics reveals stark task-dependent differences between existing metrics, and consistently poor performance on tasks with complex answer spaces or high reliance on general knowledge. While this analysis highlights a critical issue facing current research practice, BEAMetrics also contributes to its resolution by facilitating research into better metrics -- particularly those that can account for the complex interaction between context and general knowledge inherent to many modern NLP applications. BEAMetrics is available under the MIT License: this https URL ","BEAMetrics: A Benchmark for Language Generation Evaluation Evaluation",6,"['๐ขNew Paper Alert\nHow do metrics perform across tasks?\n\nBEAMetrics: A Benchmark for Language Generation Evaluation Evaluation\n=> Multi Task/Lingual/Modal\n\nPaper: <LINK>\nCode: <LINK>\n\n๐ <LINK>', 'Generative models are becoming increasingly important in NLP. This makes research on\nevaluation metrics of critical importance. However, there is currently no simple, unified way to\ncompare, analyse or evaluate metrics across a representative set of tasks. 
\n\n2/', 'To this purpose, we release BEAMetrics, the Benchmark to Evaluate Automatic Metrics. \n\nAcross its 11 datasets, BEAMetrics is multi[task, lingual and dimensional]\n\n3/ https://t.co/8zcwPqA0Yu', 'I am particularly excited about including 2 open-ended Question Answering evaluation sets. Indeed, QA has long been considered as an extractive task before the recent advances for abstractive models.\n\n4/ https://t.co/0c0Ikod64Z', 'Already some interesting findings:\n- BERTScore becomes increasingly more popular. But its performance is unequal across tasks: on summarization, it often reflects human judgment worse than ROUGE or BLEU.\n\n5/', 'I hope BEAMetrics will serve the community to stimulate research into future metrics that address the challenges of evaluating flexible generative models of language.\n\nA collaboration @DeepMind & @RecitalAI\nBig thanks to my amazing coauthor @FelixHill84 ๐\n\n6/6']",21,10,1298 |
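Meta-evaluation of the kind BEAMetrics automates boils down to correlating a metric's scores with human judgements, per task and per quality dimension. A minimal sketch (the toy numbers below are made up):

```python
from scipy.stats import pearsonr, spearmanr

def meta_evaluate(metric_scores, human_scores):
    """Benchmark a metric the BEAMetrics way: correlate its scores with
    human judgements over the same system outputs. The two arguments are
    parallel lists for one task and one quality dimension; running this
    per task and per dimension (fluency, coherence, informativeness, ...)
    gives the task-dependent picture the paper reports."""
    r, _ = pearsonr(metric_scores, human_scores)
    rho, _ = spearmanr(metric_scores, human_scores)
    return {"pearson": r, "spearman": rho}

print(meta_evaluate([0.3, 0.8, 0.5, 0.9], [2.0, 4.5, 3.0, 4.0]))
```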
3747,11,1290269277140836360,1009799569390096384,Brenden Lake,"We train large-scale neural nets ""through the eyes"" of one baby across 2 years of development. New paper from Emin Orhan shows how high-level visual representations emerge from a subset of one baby's experience, through only self-supervised learning. <LINK> (1/2) This work was possible because of the *amazing* SAYCam dataset of baby headcam videos. Please check out the SAYCam paper here from Jess Sullivan, Michelle Mei, @AmyPerfors @ewojcik @mcxfrank <LINK> (2/2) @Werdnamai Great ideas. Yes for 1) and 2). For 3), that's a lot harder, because the videos are what they are! Action is a major factor missing from this kind of work.",http://arxiv.org/abs/2007.16189,"Within months of birth, children develop meaningful expectations about the world around them. How much of this early knowledge can be explained through generic learning mechanisms applied to sensory data, and how much of it requires more substantive innate inductive biases? Addressing this fundamental question in its full generality is currently infeasible, but we can hope to make real progress in more narrowly defined domains, such as the development of high-level visual categories, thanks to improvements in data collecting technology and recent progress in deep learning. In this paper, our goal is precisely to achieve such progress by utilizing modern self-supervised deep learning methods and a recent longitudinal, egocentric video dataset recorded from the perspective of three young children (Sullivan et al., 2020). Our results demonstrate the emergence of powerful, high-level visual representations from developmentally realistic natural videos using generic self-supervised learning objectives. ",Self-supervised learning through the eyes of a child,3,"['We train large-scale neural nets ""through the eyes"" of one baby across 2 years of development. New paper from Emin Orhan shows how high-level visual representations emerge from a subset of one baby\'s experience, through only self-supervised learning. <LINK> (1/2)', 'This work was possible because of the *amazing* SAYCam dataset of baby headcam videos. Please check out the SAYCam paper here from Jess Sullivan, Michelle Mei, @AmyPerfors @ewojcik @mcxfrank https://t.co/I574XmUfF7 (2/2)', ""@Werdnamai Great ideas. Yes for 1) and 2). For 3), that's a lot harder, because the videos are what they are! Action is a major factor missing from this kind of work.""]",20,07,634 |
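One generic self-supervised objective suited to such longitudinal headcam video is temporal classification: split the recording into coarse segments and train a network to predict which segment a frame came from, with no human labels. The sketch below illustrates that idea only; the backbone, feature size, and segment count are assumptions, not necessarily the paper's exact setup.

```python
import torch
import torch.nn as nn

class TemporalClassifier(nn.Module):
    """Predict a frame's temporal segment from its pixels; the backbone
    (any image encoder returning `feat_dim` features) is assumed."""
    def __init__(self, backbone, feat_dim, n_segments=100):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(feat_dim, n_segments)

    def forward(self, frames):
        return self.head(self.backbone(frames))

def ssl_step(model, frames, segment_ids, opt):
    """One self-supervised step: segment indices act as free labels."""
    loss = nn.functional.cross_entropy(model(frames), segment_ids)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```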
3748,189,1518758768791621632,1375959506,Monica Vidaurri,"It's arxiv day!! So excited to share my first first-author *science* paper where we further define the Venus zone and look at the overlap between the Venus zone and the habitable zone! So when we search for Earths, we may just find Venuses instead :) <LINK> So incredibly grateful for my co-authors and mentors @shawndgoldman, @ravi_kopparapu, Sandra Bastelberger, and Eric Wolf for making this probably the smoothest paper submitting process I've ever done! I've learned so much from them! @keshawnrants it takes one to know one!! @RocketToLulu WE LOVE VENUSES I am especially fond of section 5.2, where you can learn all about why planets are *really weird* and why we need to learn more about Venus! @Divya_M_P thank you!!! โค๏ธ @SpaceTides thank you!!! ๐ค ๐ค ๐ค @Rose_D_Luna thank you Rose! โค๏ธ @toomanyspectra <LINK>",https://arxiv.org/abs/2204.10919,"A key item of interest for planetary scientists and astronomers is the habitable zone, or the distance from a host star where a terrestrial planet can maintain necessary temperatures in order to retain liquid water on its surface. However, when observing a system's habitable zone, it is possible that one may instead observe a Venus-like planet. We define ""Venus-like"" as greenhouse-gas-dominated atmosphere occurring when incoming solar radiation exceeds infrared radiation emitted from the planet at the top of the atmosphere, resulting in a runaway greenhouse. Our definition of Venus-like includes both incipient and post-runaway greenhouse states. Both the possibility of observing a Venus-like world and the possibility that Venus could represent an end-state of evolution for habitable worlds, requires an improved understanding of the Venus-like planet; specifically, the distances where these planets can exist. Understanding this helps us define a ""Venus zone"", or the region in which Venus-like planets could exist, and assess the overlap with the aforementioned ""Habitable Zone"". In this study, we use a 1D radiative-convective climate model to determine the outer edge of the Venus zone for F0V, G2V, K5V, and M3V and M5V stellar spectral types. Our results show that the outer edge of the Venus zone resides at 3.01, 1.36, 0.68, 0.23, and 0.1 AU, respectively. These correspond to incident stellar fluxes of 0.8, 0.55, 0.38, 0.32, and 0.3 S, respectively, where stellar flux is relative to Earth (1.0). These results indicate that there may be considerable overlap between the habitable zone and the Venus zone. ",The Outer Edge of the Venus Zone Around Main-Sequence Stars,9,"[""It's arxiv day!! So excited to share my first first-author *science* paper where we further define the Venus zone and look at the overlap between the Venus zone and the habitable zone! So when we search for Earths, we may just find Venuses instead :) <LINK>"", ""So incredibly grateful for my co-authors and mentors @shawndgoldman, @ravi_kopparapu, Sandra Bastelberger, and Eric Wolf for making this probably the smoothest paper submitting process I've ever done! I've learned so much from them!"", '@keshawnrants it takes one to know one!!', '@RocketToLulu WE LOVE VENUSES', 'I am especially fond of section 5.2, where you can learn all about why planets are *really weird* and why we need to learn more about Venus!', '@Divya_M_P thank you!!! โค๏ธ', '@SpaceTides thank you!!! ๐ค ๐ค ๐ค ', '@Rose_D_Luna thank you Rose! โค๏ธ', '@toomanyspectra https://t.co/9WRDuX3eVX']",22,04,813 |
3749,14,1377612005071331332,1912298966,Dr. L. C. Mayorga,"๐จ๐จ New Paper Alert!!! ๐จ๐จ My fantastic co-authors and I studied a new class of astronomical objects! We have designated them UFOs: Understudied Floofy Objects. @_astronoMay @Of_FallingStars (and Jake Lustig-Yaeger, not on twitter) <LINK> 1/8 <LINK> Thanks to twitter catstronomers we collected a large sample of these understudied floofy objects at a variety of viewing angles and illumination angles. The raw data is public and available here: <LINK> 2/8 Through a rigorous analysis of light curves we were able to measure the rotational variations of several dozen floofy objects which we classify into 13 categories, further identified by their subobserver longitudes. 3/8 <LINK> We explored the change in brightness in three bands, as well as the overall brightness of the objects, and determined that some subtypes of floofy objects exhibit rotational variability with strong brightening at longitudes of 0. 4/8 <LINK> The high number of CL observations allowed us to create an average CL type rotation curve and reconstruct a representative object projected onto the spherical shape we assume. 5/8 <LINK> We also explored Floofy Objects in Color-Magnitude space and used a clustering algorithm on the 0 longitude observations to determine that there are 6 unique classes of Floofy Objects, generally separated by substellar point color and limb color. 6/8 <LINK> Finally, we explored potential for false pawsitives, specifically misidentification of WOOF Objects (Wagging tails On Objectively Friendly Objects) as Floofy Objects and found that high spatial resolution imaging missions like the upcoming CatEx observatory are a necessity. 7/8 <LINK> Check out the full paper here: <LINK> We're excited to see what future research can be done on this strange new class of astronomical objects! 8/8 <LINK> Addendum: and we finally convinced @LustigYaeger to get on Twitter to take his own credit! @vicgrinberg @_astronomay @Of_FallingStars I didn't even know! Purrrfect! @LustigYaeger",https://arxiv.org/abs/2103.16636,"Phase resolved observations of planetary bodies allow us to understand the longitudinal and latitudinal variations that make each one unique. Rotational variations have been detected in several types of astronomical bodies beyond those of planetary mass, including asteroids, brown dwarfs, and stars. Unexpected rotational variations, such as those presented in this work, remind us that the universe can be complicated, with more mysteries to uncover. In this work we present evidence for a new class of astronomical objects we identify as ""floofy"" with observational distinctions between several sub-types of these poorly understood objects. Using optical observations contributed by the community, we have identified rotational variation in several of these floofy objects, which suggests that they may have strong differences between their hemispheres, likely caused by differing reflectivity off their surfaces. Additional sub-types show no rotational variability suggesting a uniform distribution of reflective elements on the floofy object. While the work here is a promising step towards the categorization of floofy objects, further observations with more strictly defined limits on background light, illumination angles, and companion objects are necessary to develop a better understanding of the many remaining mysteries of these astronomical objects. ","Detection of Rotational Variability in Floofy Objects at Optical |
Wavelengths",10,"['๐จ๐จ New Paper Alert!!! ๐จ๐จ\nMy fantastic co-authors and I studied a new class of astronomical objects! We have designated them UFOs: Understudied Floofy Objects.\n\n@_astronoMay @Of_FallingStars (and Jake Lustig-Yaeger, not on twitter)\n\n<LINK> 1/8 <LINK>', 'Thanks to twitter catstronomers we collected a large sample of these understudied floofy objects at a variety of viewing angles and illumination angles. \n\nThe raw data is public and available here:\nhttps://t.co/mDPUnu5MJX 2/8', 'Through a rigorous analysis of light curves we were able to measure the rotational variations of several dozen floofy objects which we classify into 13 categories, further identified by their subobserver longitudes. 3/8 https://t.co/hNds13leAt', 'We explored the change in brightness in three bands, as well as the overall brightness of the objects, and determined that some subtypes of floofy objects exhibit rotational variability with strong brightening at longitudes of 0. 4/8 https://t.co/Rp8bs11aAD', 'The high number of CL observations allowed us to create an average CL type rotation curve and reconstruct a representative object projected onto the spherical shape we assume. 5/8 https://t.co/ZC4Ed9kiU7', 'We also explored Floofy Objects in Color-Magnitude space and used a clustering algorithm on the 0 longitude observations to determine that there are 6 unique classes of Floofy Objects, generally separated by substellar point color and limb color. 6/8 https://t.co/PbAH3jXaUO', 'Finally, we explored potential for false pawsitives, specifically misidentification of WOOF Objects (Wagging tails On Objectively Friendly Objects) as Floofy Objects and found that high spatial resolution imaging missions like the upcoming CatEx observatory are a necessity. 7/8 https://t.co/1Ra6WMApA8', 'Check out the full paper here: https://t.co/TLUFAUkNft\n\nWe're excited to see what future research can be done on this strange new class of astronomical objects! 8/8 https://t.co/dtvM9YSjAC', 'Addendum: and we finally convinced @LustigYaeger to get on Twitter to take his own credit!', ""@vicgrinberg @_astronomay @Of_FallingStars I didn't even know! Purrrfect! @LustigYaeger""]",21,03,1986 |
3750,22,1333450081060724738,61831354,Eline Maaike de Weerd,"New paper (somehow)! We include local primordial non-Gaussianity in the relativistic bispectrum, and show that the bias from using a Newtonian analysis instead could be as large as fnl~5, highlighting the importance of including these effects in modelling: <LINK>",http://arxiv.org/abs/2011.13660,"Next-generation galaxy and 21cm intensity mapping surveys will rely on a combination of the power spectrum and bispectrum for high-precision measurements of primordial non-Gaussianity. In turn, these measurements will allow us to distinguish between various models of inflation. However, precision observations require theoretical precision at least at the same level. We extend the theoretical understanding of the galaxy bispectrum by incorporating a consistent general relativistic model of galaxy bias at second order, in the presence of local primordial non-Gaussianity. The influence of primordial non-Gaussianity on the bispectrum extends beyond the galaxy bias and the dark matter density, due to redshift-space effects. The standard redshift-space distortions at first and second order produce a well-known primordial non-Gaussian imprint on the bispectrum. Relativistic corrections to redshift-space distortions generate new contributions to this primordial non-Gaussian signal, arising from: (1)~a coupling of first-order scale-dependent bias with first-order relativistic observational effects, and (2)~linearly evolved non-Gaussianity in the second-order velocity and metric potentials which appear in relativistic observational effects. Our analysis allows for a consistent separation of the relativistic `contamination' from the primordial signal, in order to avoid biasing the measurements by using an incorrect theoretical model. We show that the bias from using a Newtonian analysis of the squeezed bispectrum could be $\Delta \fnl\sim 5$ for a Stage IV H$\alpha$ survey. ",Local primordial non-Gaussianity in the relativistic galaxy bispectrum,1,"['New paper (somehow)! We include local primordial non-Gaussianity in the relativistic bispectrum, and show that the bias from using a Newtonian analysis instead could be as large as fnl~5, highlighting the importance of including these effects in modelling: <LINK>']",20,11,263 |
3751,107,1245400744381087746,1201449637,"David Fink, PhD","Great new paper by @Lizstuartdc and others showing the limitations of common statistical approaches to evaluating the effect of opioid policy. Much needed work! <LINK> @GloriaMiele @Lizstuartdc Doubtful. This pre-print was just published on March 26th. However, this is a very respected group of researchers, so I would expect this paper to be nearly publication ready. If you're interested, @Lizstuartdc recently published this great policy eval: <LINK>",https://arxiv.org/abs/2003.12008,"State-level policy evaluations commonly employ a difference-in-differences (DID) study design; yet within this framework, statistical model specification varies notably across studies. Motivated by applied state-level opioid policy evaluations, this simulation study compares statistical performance of multiple variations of two-way fixed effect models traditionally used for DID under a range of simulation conditions. While most linear models resulted in minimal bias, non-linear models and population-weighted versions of classic linear two-way fixed effect and linear GEE models yielded considerable bias (60 to 160%). Further, root mean square error is minimized by linear AR models when examining crude mortality rates and by negative binomial models when examining raw death counts. In the context of frequentist hypothesis testing, many models yielded high Type I error rates and very low rates of correctly rejecting the null hypothesis (< 10%), raising concerns of spurious conclusions about policy effectiveness. When considering performance across models, the linear autoregressive models were optimal in terms of directional bias, root mean squared error, Type I error, and correct rejection rates. These findings highlight notable limitations of traditional statistical models commonly used for DID designs, designs widely used in opioid policy studies and in state policy evaluations more broadly. ","Moving beyond the classic difference-in-differences model: A simulation |
study comparing statistical methods for estimating effectiveness of |
state-level policies",2,"['Great new paper by @Lizstuartdc and others showing the limitations of common statistical approaches to evaluating the effect of opioid policy. Much needed work!\n\n<LINK>', ""@GloriaMiele @Lizstuartdc Doubtful. This pre-print was just published on March 26th. However, this is a very respected group of researchers, so I would expect this paper to be nearly publication ready. \n\nIf you're interested, @Lizstuartdc recently published this great policy eval: https://t.co/AJz916dUmu""]",20,03,449 |
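The baseline specification this simulation study stress-tests is the classic two-way fixed-effects DID regression: outcome on a treatment indicator plus state and year fixed effects, with state-clustered standard errors. A runnable sketch on a toy simulated panel follows; the data-generating process is invented for illustration (the paper's simulations target opioid mortality), so only the model specification is the point.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy state-year panel: half the states adopt a policy in 2008.
rng = np.random.default_rng(0)
states, years = range(50), range(2000, 2016)
df = pd.DataFrame([(s, t) for s in states for t in years],
                  columns=["state", "year"])
treated = set(range(25))
df["post"] = (df["state"].isin(treated) & (df["year"] >= 2008)).astype(int)
df["y"] = (rng.normal(size=len(df)) + 0.5 * df["post"]      # true effect 0.5
           + 0.1 * df["state"] + 0.05 * (df["year"] - 2000))

# y_it = beta * Treat_it + state FE + year FE + e_it,
# with cluster-robust SEs at the state level (the usual DID setup).
m = smf.ols("y ~ post + C(state) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]})
print("DID estimate:", m.params["post"], "SE:", m.bse["post"])
```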
3752,90,1147010995845586944,2541941466,Alba Cervera-Lierta,"New paper! ๐ ""Data re-uploading for a universal quantum classifier"", Adrián Pérez-García, Alba Cervera-Lierta, Elies Gil-Ferrer and José Ignacio Latorre. <LINK> Scirate: <LINK> @adpersa @EliesMiquel @j_i_latorre @QUANTIC_BSC *Adrián Pérez-Salinas @adpersa *Elies Gil-Fuster @EliesMiquel ๐ what's wrong with me today?",https://arxiv.org/abs/1907.02085,"A single qubit provides sufficient computational capabilities to construct a universal quantum classifier when assisted with a classical subroutine. This fact may be surprising since a single qubit only offers a simple superposition of two states and single-qubit gates only make a rotation in the Bloch sphere. The key ingredient to circumvent these limitations is to allow for multiple data re-uploading. A quantum circuit can then be organized as a series of data re-uploading and single-qubit processing units. Furthermore, both data re-uploading and measurements can accommodate multiple dimensions in the input and several categories in the output, to conform to a universal quantum classifier. The extension of this idea to several qubits enhances the efficiency of the strategy as entanglement expands the superpositions carried along with the classification. Extensive benchmarking on different examples of the single- and multi-qubit quantum classifier validates its ability to describe and classify complex data. ","Data re-uploading for a universal quantum classifier",3,"['New paper! ๐\n""Data re-uploading for a universal quantum classifier"", Adrián Pérez-García, Alba Cervera-Lierta, Elies Gil-Ferrer and José Ignacio Latorre.\n<LINK>\nScirate: <LINK>\n@adpersa @EliesMiquel @j_i_latorre @QUANTIC_BSC', '*Adrián Pérez-Salinas @adpersa', ""*Elies Gil-Fuster @EliesMiquel \n๐ what's wrong with me today?""]",19,07,316 |
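The core trick, re-introducing the data at every layer of a single-qubit circuit, fits in a few lines of NumPy: each layer rotates the qubit by angles that are affine functions of the input, and the class is read off P(|0>). The Ry/Rz layer structure and the (w, b) parameterisation below are one simple choice for illustration, not necessarily the paper's exact ansatz.

```python
import numpy as np

def ry(t):  # single-qubit Y rotation
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def rz(t):  # single-qubit Z rotation
    return np.array([[np.exp(-1j * t / 2), 0], [0, np.exp(1j * t / 2)]])

def classify(x, params):
    """Single-qubit data re-uploading circuit: every layer re-encodes the
    2-D input x (scaled by trainable w, shifted by b), so the data enters
    the circuit many times. Returns P(|0>), thresholded at 0.5 for a
    binary label."""
    state = np.array([1.0 + 0j, 0.0])
    for w, b in params:                 # one (w, b) pair per layer
        ang = w * x + b                 # data re-uploading
        state = rz(ang[1]) @ ry(ang[0]) @ state
    return abs(state[0]) ** 2

rng = np.random.default_rng(0)
params = [(rng.normal(size=2), rng.normal(size=2)) for _ in range(4)]
print(classify(np.array([0.3, -0.7]), params))  # train (w, b) with any optimiser
```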
3753,116,1273084289941295104,1102421602606575616,Jaeho Lee,"Q. What is the *minimum width* of a neural network required for universal approximation? A: max{dx + 1, dy}. ... at least for ReLU nets / Lp setup, as we show in our new paper: <LINK> amazing effort by @sejun_park_ w/ Chulhee Yun and @jinwoos0417 [1/n] This ""dual"" scenario (of classical univ. approx. results) has already been studied in many papers, starting from the elegant work by Hanin & Sellke. What we give is (perhaps) the first TIGHT upper and lower bound(s) [2/n]: <LINK> Quick summary of results: 1. max{dx + 1, dy} is what we need for ReLU/Lp setup 2. The same is provably NOT TRUE for ReLU/Cont 3. ... but if we add Step activation, again the answer is max{dx + 1, dy} [3/n] Intriguingly, this width-separation of ""Lp can"" vs ""Cont can't"" (1&2) is in stark contrast with depth-separation in classical scenarios! Actually, depth-2 is known to be sufficient for Cont, but not for Lp (see, e.g. Qu and Wang 2019). [4/n] Q. how do we construct the universal approximator with width max{dx+1,dy}? A. We use an encoder-decoder structure, mainly inspired by memorization/info-theory literature. This allows us to decouple dx and dy, to improve over the previously known UB of dx+dy+1. [5/n] <LINK> ... other details can be found in the arXiv version: <LINK> or better, you can directly ask @sejun_park_ who is actually looking for his next destination! [n/n]",https://arxiv.org/abs/2006.08859,"The universal approximation property of width-bounded networks has been studied as a dual of classical universal approximation results on depth-bounded networks. However, the critical width enabling the universal approximation has not been exactly characterized in terms of the input dimension $d_x$ and the output dimension $d_y$. In this work, we provide the first definitive result in this direction for networks using the ReLU activation functions: The minimum width required for the universal approximation of the $L^p$ functions is exactly $\max\{d_x+1,d_y\}$. We also prove that the same conclusion does not hold for the uniform approximation with ReLU, but does hold with an additional threshold activation function. Our proof technique can be also used to derive a tighter upper bound on the minimum width required for the universal approximation using networks with general activation functions. ","Minimum Width for Universal Approximation",6,"['Q. What is the *minimum width* of a neural network required for universal approximation?\n\nA: max{dx + 1, dy}.\n... at least for ReLU nets / Lp setup, as we show in our new paper: <LINK>\n\namazing effort by @sejun_park_ w/ Chulhee Yun and @jinwoos0417 [1/n]', 'This ""dual"" scenario (of classical univ. approx. results) has already been studied in many papers, starting from the elegant work by Hanin & Sellke.\n\nWhat we give is (perhaps) the first TIGHT upper and lower bound(s) [2/n]: https://t.co/98EbvKNN5V', 'Quick summary of results:\n1. max{dx + 1, dy} is what we need for ReLU/Lp setup\n2. The same is provably NOT TRUE for ReLU/Cont\n3. ... but if we add Step activation, again the answer is max{dx + 1, dy}\n\n[3/n]', 'Intriguingly, this width-separation of ""Lp can"" vs ""Cont can\'t"" (1&2) is in stark contrast with depth-separation in classical scenarios!\n\nActually, depth-2 is known to be sufficient for Cont, but not for Lp (see, e.g. Qu and Wang 2019).\n\n[4/n]', 'Q. how do we construct the universal approximator with width max{dx+1,dy}?\n\nA. 
We use a encoder-decoder structure, mainly inspired by memorization/info-theory literatures. This allows us to decouple dx and dy, to improve over the previously known UB of dx+dy+1.\n\n[5/n] https://t.co/hRrUpjjpRM', '... other details can be found in the arXiv version:\nhttps://t.co/EJ3bBMVwX4\n\nor better, you can directly ask @sejun_park_\nwho is actually looking for his next destination!\n\n[n/n]']",20,06,1365 |
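As an empirical companion to the abstract above: with dx = dy = 1, the theorem's critical width is max{dx + 1, dy} = 2. The PyTorch snippet below fits a deep width-2 ReLU network to sin(x) in the L2 sense. The depth, learning rate, and target function are arbitrary choices, and the theorem guarantees that an approximator of this width exists, not that gradient descent will find it, so the fit may be poor on some seeds:

```python
import torch
import torch.nn as nn

# dx = dy = 1, so the paper's minimum width is max{dx + 1, dy} = 2.
width, depth = 2, 12
layers = [nn.Linear(1, width), nn.ReLU()]
for _ in range(depth - 2):
    layers += [nn.Linear(width, width), nn.ReLU()]
layers += [nn.Linear(width, 1)]
net = nn.Sequential(*layers)

x = torch.linspace(-3.0, 3.0, 512).unsqueeze(1)
y = torch.sin(x)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(3000):
    loss = ((net(x) - y) ** 2).mean()   # empirical L2 error
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final mean-squared error: {loss.item():.4f}")
```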
3754,134,1501014250256576518,175590531,Alejandro Flores V.,"The preprint is out! <LINK> ๐ฐ The problem of reducing training sets while maintaining the exact class boundaries of the nearest-neighbor classifier has been a long-standing one. I'm proposing a new algorithm for this problem, improving a fairly recent paper! ๐งต For almost three decades, the best known result was Clarkson's [1], but just a few months ago Prof. David Eppstein [2] proposed a new, much improved, algorithm for this problem... [1] <LINK> [2] <LINK> My take on this is to improve Eppstein's algorithm by simplifying it... Basically, chopping half of the algorithm while proving it is still correct. And reducing its time complexity along the way...",https://arxiv.org/abs/2203.03567,"Given a training set $P \subset \mathbb{R}^d$, the nearest-neighbor classifier assigns any query point $q \in \mathbb{R}^d$ to the class of its closest point in $P$. To answer these classification queries, some training points are more relevant than others. We say a training point is relevant if its omission from the training set could induce the misclassification of some query point in $\mathbb{R}^d$. These relevant points are commonly known as border points, as they define the boundaries of the Voronoi diagram of $P$ that separate points of different classes. Being able to compute this set of points efficiently is crucial to reduce the size of the training set without affecting the accuracy of the nearest-neighbor classifier. Improving over a decades-long result by Clarkson, in a recent paper by Eppstein an output-sensitive algorithm was proposed to find the set of border points of $P$ in $O( n^2 + nk^2 )$ time, where $k$ is the size of such set. In this paper, we improve this algorithm to have time complexity equal to $O( nk^2 )$ by proving that the first steps of their algorithm, which require $O( n^2 )$ time, are unnecessary. ",Improved Search of Relevant Points for Nearest-Neighbor Classification,3,"[""The preprint is out! <LINK> ๐ฐ\nThe problem of reducing training sets while maintaining the exact class boundaries of the nearest-neighbor classifier has been a long-standing one. I'm proposing a new algorithm for this problem, improving a fairly recent paper! ๐งต"", ""For almost three decades, the best known result was Clarkson's [1], but just a few months ago Prof. David Eppstein [2] proposed a new, much improved, algorithm for this problem...\n[1] https://t.co/gbuHR43uXt\n[2] https://t.co/SQuRWKaOyN"", ""My take on this is to improve Eppstein's algorithm by simplifying it... Basically, chopping half of the algorithm while proving it is still correct. And reducing its time complexity along the way...""]",22,03,662 |
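For intuition about which training points the abstract above calls relevant (border) points, here is a brute-force Monte-Carlo proxy in NumPy: flag a point as relevant if deleting it flips the 1-NN label of some sampled query. This is a slow approximation for illustration only, not Eppstein's output-sensitive algorithm or the paper's improvement of it:

```python
import numpy as np

def nn_labels(train_pts, train_lbls, queries):
    # Brute-force 1-NN labels of the queries.
    d = ((queries[:, None, :] - train_pts[None, :, :]) ** 2).sum(-1)
    return train_lbls[d.argmin(axis=1)]

def approx_border_points(pts, lbls, n_queries=20000, seed=0):
    # A point is (approximately) relevant if removing it changes the
    # 1-NN classification of at least one randomly sampled query.
    rng = np.random.default_rng(seed)
    lo, hi = pts.min(axis=0) - 1.0, pts.max(axis=0) + 1.0
    q = rng.uniform(lo, hi, size=(n_queries, pts.shape[1]))
    full = nn_labels(pts, lbls, q)
    idx = np.arange(len(pts))
    return [i for i in range(len(pts))
            if np.any(nn_labels(pts[idx != i], lbls[idx != i], q) != full)]

# Tiny 2-D example with two well-separated classes (assumed data).
pts = np.array([[0., 0.], [1., 0.], [0., 1.], [3., 3.], [4., 3.], [3., 4.]])
lbls = np.array([0, 0, 0, 1, 1, 1])
print("approximate border points:", approx_border_points(pts, lbls))
```

Interior points shielded by same-class neighbors never get flagged, which is exactly the training-set reduction the paper targets; the exact algorithms replace this sampling with Voronoi-boundary reasoning.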
3755,19,1399633850138046465,202069135,laurent besacier,"Do Multilingual Neural Machine Translation Models Contain Language Pair Specific Attention Heads? We investigate this Q for a strong MNMT model, in our new ACL 2021 (Findings) paper: <LINK>. Joint work with Zae-Myung Kim, Vassilina Nikoulina and @didier_schwab !",https://arxiv.org/abs/2105.14940,"Recent studies on the analysis of the multilingual representations focus on identifying whether there is an emergence of language-independent representations, or whether a multilingual model partitions its weights among different languages. While most of such work has been conducted in a ""black-box"" manner, this paper aims to analyze individual components of a multilingual neural translation (NMT) model. In particular, we look at the encoder self-attention and encoder-decoder attention heads (in a many-to-one NMT model) that are more specific to the translation of a certain language pair than others by (1) employing metrics that quantify some aspects of the attention weights such as ""variance"" or ""confidence"", and (2) systematically ranking the importance of attention heads with respect to translation quality. Experimental results show that surprisingly, the set of most important attention heads are very similar across the language pairs and that it is possible to remove nearly one-third of the less important heads without hurting the translation quality greatly. ","Do Multilingual Neural Machine Translation Models Contain Language Pair |
Specific Attention Heads?",1,"['Do Multilingual Neural Machine Translation Models Contain Language Pair Specific Attention Heads? We investigate this Q for a strong MNMT model, in our new ACL 2021 (Findings) paper: <LINK>. Joint work with Zae-Myung Kim, Vassilina Nikoulina and @didier_schwab !']",21,05,262 |
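To illustrate the kind of per-head statistic the abstract above mentions, the sketch below computes a "confidence" score for each attention head — the mean, over queries, of its maximum attention weight — and ranks heads by it. The random attention tensor is a stand-in: in practice the weights would come from a trained many-to-one NMT model on a given language pair, and the paper's exact metric definitions may differ:

```python
import numpy as np

def head_confidence(attn):
    # attn: (n_layers, n_heads, tgt_len, src_len), each row sums to 1.
    # "Confidence" of a head: mean over queries of its max attention weight.
    return attn.max(axis=-1).mean(axis=-1)        # -> (n_layers, n_heads)

# Stand-in attention weights via a softmax over random logits.
rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 8, 20, 20))          # 6 layers x 8 heads
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

conf = head_confidence(attn)
ranked = sorted(((conf[l, h], (l, h))
                 for l in range(conf.shape[0])
                 for h in range(conf.shape[1])), reverse=True)
print("top-5 heads by confidence (layer, head):", [head for _, head in ranked[:5]])
```

Comparing such rankings across language pairs — and pruning the lowest-ranked heads — is the style of analysis the paper uses to conclude that the important heads are largely shared.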
3756,89,1252437773832740864,2228370313,Damien Teney,(1/4) There have been a few new datasets with contrastive/counterfactual/adversarial examples in vision and NLP. Our latest paper shows a better way to use them for training than just data augmentation. Use them to supervise the orientation of your gradient <LINK> <LINK> (2/4) A few of these new datasets: Evaluating NLP Models via Contrast Sets @nlpmattg <LINK> Natural Perturbation for Robust QA @DanielKhashabi <LINK> (3/4) Contrastive Examples for Addressing the Tyranny of the Majority @trevordarrell <LINK> Towards Causal VQA - Revealing and Reducing Spurious Correlations by Invariant and Covariant Semantic Editing <LINK> (4/4) Learning the difference that makes a difference @dkaushik96 @zacharylipton <LINK>,http://arxiv.org/abs/2004.09034,"One of the primary challenges limiting the applicability of deep learning is its susceptibility to learning spurious correlations rather than the underlying mechanisms of the task of interest. The resulting failure to generalise cannot be addressed by simply using more data from the same distribution. We propose an auxiliary training objective that improves the generalization capabilities of neural networks by leveraging an overlooked supervisory signal found in existing datasets. We use pairs of minimally-different examples with different labels, a.k.a counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task. We show that such pairs can be identified in a number of existing datasets in computer vision (visual question answering, multi-label image classification) and natural language processing (sentiment analysis, natural language inference). The new training objective orients the gradient of a model's decision function with pairs of counterfactual examples. Models trained with this technique demonstrate improved performance on out-of-distribution test sets. ","Learning What Makes a Difference from Counterfactual Examples and |
Gradient Supervision",4,"[""(1/4) There have been a few new datasets with contrastive/counterfactual/adversarial examples in vision and NLP. Our latest paper shows a better way to use them for training than just data augmentation. Use them to supervise the orientation of your gradient <LINK> <LINK>"", '(2/4) A few of these new datasets:\nEvaluating NLP Models via Contrast Sets @nlpmattg\nhttps://t.co/kf8LDwjFzL\n\nNatural Perturbation for Robust QA @DanielKhashabi\nhttps://t.co/Q31kaRo4Rs', '(3/4) Contrastive Examples for Addressing the Tyranny of the Majority @trevordarrell\nhttps://t.co/7lJoVK2Gpv\n\nTowards Causal VQA - Revealing and Reducing Spurious Correlations by Invariant and Covariant Semantic Editing\nhttps://t.co/aMG1A5eUOL', '(4/4) Learning the difference that makes a difference @dkaushik96 @zacharylipton\nhttps://t.co/uMP7i74kzK']",20,04,715 |
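The auxiliary objective described in the abstract above — orienting the gradient of the decision function with counterfactual pairs — can be sketched as follows in PyTorch. The cosine-alignment form, the toy model, and the synthetic counterfactual pairs are assumptions for illustration; the paper's exact loss may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_supervision_loss(model, x, x_cf):
    # Align the input-gradient of the model's decision score at x with
    # the direction from x to its counterfactual x_cf (cosine alignment).
    x = x.clone().requires_grad_(True)
    score = model(x).sum()                       # scalar score over the batch
    (grad,) = torch.autograd.grad(score, x, create_graph=True)
    direction = (x_cf - x).detach()              # target gradient direction
    cos = F.cosine_similarity(grad.flatten(1), direction.flatten(1), dim=1)
    return (1.0 - cos).mean()                    # 0 when perfectly aligned

# Usage sketch: add to the main task loss with a weighting coefficient.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
x = torch.randn(8, 10)
x_cf = x + 0.1 * torch.randn(8, 10)              # stand-in counterfactual pairs
loss = gradient_supervision_loss(model, x, x_cf)
loss.backward()
print(float(loss))
```

Because the pair (x, x_cf) straddles the class boundary, the vector between them is a cheap estimate of the locally relevant direction, which is why it can supervise the gradient rather than merely augmenting the data.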