Dataset preview. Column statistics as reported by the viewer: image — imagewidth (px) 44–1.64k; file_name_index — stringlengths 26–33; text — stringlengths 16–2.81k; class — stringclasses, 5 values; super_class — stringclasses, 2 values; sub_class — stringclasses, 5 values; split — stringclasses, 1 value.

| file_name_index | text | class | super_class | sub_class | split |
|---|---|---|---|---|---|
$2305.00060v2-Figure2-1.png | FIG. 2. Potential energy surfaces (PES) for the F2 dissociation using the cc-pVDZ basis and an (8o, 14e) active space. Solid lines show the Hartree-Fock (grey) and complete active space (CAS) FCI results, while the discontinuous lines present eigenvector continuation (EC) results with a different number of training points. The training points are shown as red square markers and are labeled with numbers representing the order in which they were included in the EC calculation. Results are shown for 3 types of orbital matchings: one ignoring all orbital rotations (leftmost), one ignoring the metric but including all molecular orbital rotation factors (center), and finally one including all effects of the change in atomic orbital (AO) basis between geometries (rightmost). | fig_result | fig | result_fig | train |
|
$2305.00060v2-Figure3-1.png | FIG. 3. Norm of the transformed vector used as basis in the eigenvector continuation (EC) calculation, for the three different orbital matchings (upper panel), in an F2 cc-pVDZ calculation with an (8o, 12e) active space. The original vector is the CAS-FCI solution at R = 1.5 Å (red marker in lower panel). For reference, the FCI potential energy surface is shown as a black solid curve (lower panel). | fig_result | fig | result_fig | train |
|
$2305.00060v2-Figure4-1.png | FIG. 4. Bond stretching potential energy surfaces (PES) for small molecules, comparing FCI and eigenvector continuation (EC). The x-axis is the bond length rescaled with the equilibrium value for the given molecule, and the y-axis is the ground state energy rescaled by the minimum value and shifted by the large distance asymptotic (i.e. the bond energy). Symmetric bonds are shown in the upper row, while asymmetric bonds are found in the lower row. | fig_result | fig | result_fig | train |
|
$2305.00060v2-Figure6-1.png | FIG. 6. Potential energy surface (PES) for hexatriene in the cc-pVDZ basis as a function of the torsion angle ϕ around the central C-C double bond. ϕ = 0◦ corresponds to the trans configuration, ϕ = 180◦ to cis. The FCI results correspond to a complete active space (6o, 6e) calculation involving only the π orbital manifold. Shown are eigenvector continuation (EC) results for three different numbers of training points, always chosen symmetrically around ϕ = 180◦. | fig_result | fig | result_fig | train |
|
$2305.00060v2-Figure7-1.png | FIG. 7. Two measures of the error on the potential energy surfaces (PES) for the bond stretching examples in Fig. 4. The exact error with respect to FCI is shown with square markers, and the residue estimate in Eq. (6) with round markers. The approximate residue follows the exact error closely. | fig_result | fig | result_fig | train |
|
$2305.00060v2-Figure8-1.png | FIG. 8. Results of eigenvector continuation (EC) for excited state potential energy surfaces (PES) in F2 using the cc-pVDZ basis set. The FCI PES in the (8o, 14e) active space for the first few excited states are presented in grey. Results are shown for three different EC simulations. Left panel: EC with 3 training points, always using FCI ground state vectors. Middle panel: EC with 3 training points, always using FCI first excited state vectors. Right panel: EC with 6 training points at 3 different geometries, using both the ground state and 1st excited state of the FCI Hamiltonian at each point. | fig_result | fig | result_fig | train |
|
$2305.00060v2-TableI-1.png | TABLE I. Table summarizing the computational details of the molecular potential energy surfaces (PES) studied in this work. | table_result | table | result_tab | train |
|
$2305.00060v2-TableIII-1.png | TABLE III. Equilibrium (i.e. ϕ = 0◦) geometry for trans-hexatriene, as reported in Ref. [48]. | table_parameter | table | parameter | train |
|
$2305.00061v1-Figure1-1.png | Figure 1: Comparison of Different Iterative Decomposition Frameworks. Top: sequential decomposition. The next action/sub-problem is determined by all previous actions/results. For example, block 4 is generated based on blocks [1,2,3]. Middle: recursive decomposition; the generated action/sub-problem is only determined by the upper level’s state (e.g., 1.2.1 is only determined by 1.2); Bottom: hybrid decomposition, in which the hybrid reasoning framework supports both sequential decomposition and recursive decomposition. [2.1, 2.2, 2.3] are generated from [2] recursively; [3] is generated from [1, 2] sequentially. | fig_result | fig | result_fig | train |
|
$2305.00061v1-Figure2-1.png | Figure 2: An EVR+ Working Cycle | fig_result | fig | result_fig | train |
|
$2305.00061v1-Figure3-1.png | Figure 3: A walk-through example of EVR+ on the tree search task. | fig_result | fig | result_fig | train |
|
$2305.00061v1-Figure4-1.png | Figure 4: The performance of UnifiedQA-T5-large (e2e) and EVR+ (evr, with a UnifiedQA-T5-base backbone) on the chaining, Cartesian and tree search tasks. The e2e model is trained on depth up to 2 data for chaining and tree search (i.e., data depth 0, 1, 2), and trained on depth up to 3 data for Cartesian product (i.e., depth 2, 3). | fig_result | fig | result_fig | train |
|
$2305.00061v1-Figure5-1.png | Figure 5: The performance of UnifiedQA-T5-large (e2e) and EVR+ (evr, with a UnifiedQA-T5-base backbone) on the chaining-tree-search and Cartesian-tree-search tasks. For both tasks the models are trained on data with depth up to 2, so the data with depth 3 and 4 are OOD data. | fig_result | fig | result_fig | train |
|
$2305.00061v1-Figure6-1.png | Figure 6: The program generated by GPT-3 and the target program. All lines of the program generated by GPT-3 are aligned with the target program, but with incorrect grammar. | fig_result | fig | result_fig | train |
|
$2305.00061v1-Table1-1.png | Table 1: Statistics of the SynthCompR Dataset. For the tasks except the Cartesian task we generate the du2 and du4 data. Du means depth up to (e.g., du2 means depths 0, 1, 2). For the Cartesian product task, we generate the du3 and du4 data. The examples are distributed equally across depths. For example, the chaining du2 training split has 9999 examples, so each depth [0, 1, 2] has 3333 training examples. | table_result | table | result_tab | train |
|
$2305.00061v1-Table2-1.png | Table 2: The 5 synthetic tasks that are motivated by real-world examples or previously proposed problems that require compositional reasoning. | table_parameter | table | parameter | train |
|
$2305.00061v1-Table3-1.png | Table 3: Few-shot prompted test accuracy for each pattern. | table_result | table | result_tab | train |
|
$2305.00061v1-Table6-1.png | Table 6: The template to generate the training data generate_program-1, generate_program-2, generate_program-3, qa-1 of the tree search task. The generate_program data have the generate_program: prefix in the input, and the qa data have the qa: prefix in the input. | table_result | table | result_tab | train |
|
$2305.00062v1-Figure1-1.png | Figure 1. Images from different computer animations in the quantum-mechanics class. Top left: A nested three-Stern-Gerlach analyzer experiment. Top right: A Bell experiment. Bottom left: A single-slit diffraction experiment (illustrating the wave properties of single-slit diffraction). Bottom right: The Hong-Ou-Mandel experiment. | fig_result | fig | result_fig | train |
|
$2305.00063v1-Figure1-1.png | Figure 1: Process of image preprocessing and decomposition | fig_illustration | fig | illustration | train |
|
$2305.00063v1-Figure2-1.png | Figure 2: CNN architecture for handwritten digit extraction | fig_architecture | fig | architecture | train |
|
$2305.00063v1-Table1-1.png | Table 1: CDT evaluation data | table_result | table | result_tab | train |
|
$2305.00066v1-Figure1-1.png | Fig. 1: Kolmogorov N-width dN(P) for a discontinuous function – comparison of POD, the exact form of dN(P) and the known asymptotic rate. | fig_result | fig | result_fig | train |
|
$2305.00066v1-Figure2-1.png | Fig. 2: Construction of an odd HWS initial condition from a smooth ramp. | fig_illustration | fig | illustration | train |
|
$2305.00066v1-Figure3-1.png | Fig. 3: Ramp functions with varying smoothness Ck. | fig_result | fig | result_fig | train |
|
$2305.00066v1-Figure4-1.png | Fig. 4: N-width for ramps with varying regularity. | fig_result | fig | result_fig | train |
|
$2305.00066v1-Figure5-1.png | Fig. 5: N-width depending on the slope of a continuous, piecewise linear function. | fig_result | fig | result_fig | train |
|
$2305.00066v1-Figure6-1.png | Fig. 6: δN(P) for random functions of different smoothness. | fig_result | fig | result_fig | train |
|
$2305.00066v1-Figure7-1.png | Fig. 7: 2D-transport problem: δN-width for initial and boundary conditions of different regularity Ck(ΩP), k = 0, ..., 4. | fig_result | fig | result_fig | train |
|
$2305.00066v1-Table1-1.png | Table 1: Values for 2ε for each gk | table_result | table | result_tab | train |
|
$2305.00069v1-Figure1-1.png | Figure 1: Seesaw diagrams inducing the Yukawa couplings of upper quarks via exchange of vector-like quarks U,U c and Q,Qc. Analogous diagrams with U,U c → D,Dc and φ→ φ̃ will work for down quarks. | fig_result | fig | result_fig | train |
|
$2305.00069v1-Figure3-1.png | Figure 3: Symmetric Fritzsch textures for quark Yukawa matrices do not predict the correct value of Vcb and of the ratio Vub/Vcb. | fig_result | fig | result_fig | train |
|
$2305.00069v1-Figure4-1.png | Figure 4: Predictions of the asymmetric Fritzsch-like textures (see eq. (3.2)) with xd = 3.3, xu = 1, confronted with experimental data. | fig_result | fig | result_fig | train |
|
$2305.00069v1-Figure7-1.png | Figure 7: Parameter space in the scenario with xu = 1 (symmetric Fritzsch texture for up-type quarks, asymmetric 23 entries for down-type quarks, see eq. (3.2)) in the xd-β̃, δ̃-β̃ and xd-β̃ planes, marginalizing over the other variable. 1σ, 2σ and 3σ preferred regions of the parameters are indicated (χ²_min + 1, χ²_min + 4, χ²_min + 9), assuming Yukawa matrices of Fritzsch-like form at 10³ GeV (left), 10⁶ TeV (centre), 10¹⁶ TeV (right). | fig_result | fig | result_fig | train |
|
$2305.00069v1-Table1-1.png | Table 1: Determinations of quark mass ratios used in this work. In the first line, mud = (mu + md)/2. ∗ Value adopted by the Particle Data Group (PDG), averaging Nf = 2+1 and Nf = 2+1+1 flavour lattice results [71]. | table_result | table | result_tab | train |
|
$2305.00069v1-Table2-1.png | Table 2: Magnitudes and phases of CKM elements as quoted by Particle Data Group [3]. | table_result | table | result_tab | train |
|
$2305.00069v1-Table3-1.png | Table 3: Result of global fit for CKM parameters, including constraints implied by the unitarity of the three generation CKM matrix, as reported by Particle Data Group [3]. | table_result | table | result_tab | train |
|
$2305.00070v2-Figure1-1.png | Figure 1: The combination of Platt scaling and online logistic regression yields Online Platt Scaling (OPS). Calibeating is applied on top of OPS to achieve further empirical improvements and theoretical validity. | fig_result | fig | result_fig | train |
|
$2305.00070v2-Figure11-1.png | Figure 11: The adaptive behavior of OPS for the simulated regression-function drift scenario described in Section 1.2. | fig_illustration | fig | illustration | train |
|
$2305.00070v2-Figure12-1.png | Figure 12: Results for the same experimental setup as Figures 5 and 6, but with ϵ = 0.05. | fig_result | fig | result_fig | train |
|
$2305.00070v2-Figure13-1.png | Figure 13: Results for the same experimental setup as Figures 5 and 6, but with ϵ = 0.2. | fig_result | fig | result_fig | train |
|
$2305.00070v2-Figure14-1.png | Figure 14: Performance of online beta scaling (OBS) and its calibeating variants on real datasets with and without distribution drift. OBS further improves upon OPS in most cases. In each plot, TOBS is the best-performing method. | fig_result | fig | result_fig | train |
|
$2305.00070v2-Figure15-1.png | Figure 15: Comparing the performance of windowed histogram binning (WHB), online Platt scaling (OPS), online beta scaling (OBS), and their tracking variants on real datasets with and without distribution drifts. Among non-tracking methods (dotted lines), WHB performs well with i.i.d. data, while OBS performs well for drifting data. Among tracking methods (solid lines), TOBS and TOPS are the best-performing methods in every plot. Tracking typically does not improve WHB much since WHB is already a binning method (so tracking is implicit). | fig_result | fig | result_fig | train |
|
$2305.00070v2-Figure16-1.png | Figure 16: Foster (1999)’s ϵ-calibrated forecaster on Pittsburgh’s hourly rain data (2008–2012). The forecaster makes predictions on the grid (0.05, 0.15, . . . , 0.95). In the long run, the forecaster starts predicting 0.35 for every instance, closely matching the average number of instances on which it rained (≈ 0.37). | fig_result | fig | result_fig | train |
|
$2305.00070v2-Figure2-1.png | Figure 2: Online adversarial post-hoc calibration. | fig_result | fig | result_fig | train |
|
$2305.00070v2-Figure3-1.png | Figure 3: The adaptive behavior of Online Platt scaling (OPS) for the covariate drift dataset described in Section 1.2. The title of each panel indicates the time-window that panel corresponds to. The histogram of Xt values in the corresponding time window is plotted with maximum height normalized to 1. Also plotted is the true curve for Pr(Y = 1 \| X = x) and two predictive curves: a base model trained on t = 1 to t = 1000, and OPS-calibrated models with parameter values fixed at the start of the time window. The base model is accurate for the training data which is mostly in [−5, 10], but becomes inaccurate and miscalibrated with the covariate-shifted values for larger t (bottom two subplots). OPS adapts well, agreeing with the base model in the top-right subplot, but flipping the base model predictions in the bottom-right subplot. | fig_result | fig | result_fig | train |
|
$2305.00070v2-Figure4-1.png | Figure 4: The adaptive behavior of OPS for the simulated label shift and regression-function drift datasets described in Section 1.2. For more details on the contents of the figure, please refer to Figure 3. The improvement in calibration and accuracy of OPS over the base model is visually apparent, but for completeness, {Acc, CE} values are reported in the Appendix as part of Figures 10 and 11. | fig_result | fig | result_fig | train |
|
$2305.00070v2-Figure5-1.png | Figure 5: Drifting data. CE (calibration error) values over time of considered models on four datasets with synthetically induced drifts. The plots have invisible error bars since variation across the 100 runs was small. OPS consistently performs better than BM, FPS, and WPS, while TOPS is the best-performing among all methods across datasets and time. All methods had roughly the same SHP values at a given time-step, so the SHP plots are delayed to Appendix A (Figure 8). | fig_result | fig | result_fig | train |
|
$2305.00070v2-Figure6-1.png | Figure 6: IID data. CE values over time of considered models with four randomly shuffled (i.e., nearly i.i.d.) datasets. The plots have invisible error bars since variation across runs was small. TOPS achieves the smallest values of CE throughout. | fig_result | fig | result_fig | train |
|
$2305.00070v2-Figure7-1.png | Figure 7: Experiments with synthetic data. In all cases, TOPS has the lowest CE across time. | fig_result | fig | result_fig | train |
|
$2305.00070v2-Figure8-1.png | Figure 8: Sharpness results with drifting data. SHP values over time of considered models on four datasets with synthetically induced drifts (Section 4.1). The plots have invisible error bars since variation across the 100 runs was small. The drop in expected sharpness is below 0.005 at all times except on the Fetal Health Dataset. | fig_result | fig | result_fig | train |
|
$2305.00070v2-Table1-1.png | Table 1: Asymptotic regret and running times of online logistic regression (OLR) algorithms for OPS as functions of the radius of reference class B and time-horizon T. For general OLR, regret and running times also depend on the dimension of X. However, OPS effectively reduces the dimensionality of X to 2, so that a second-order method like ONS runs almost as fast as a first-order method like OGD. Also note that B = √(a² + b²) is small if the base model f is not highly miscalibrated. ONS with fixed hyperparameters was chosen for all OPS experiments; see Section 2.2 for implementation details. | table_result | table | result_tab | train |
|
$2305.00070v2-Table2-1.png | Table 2: Metadata for datasets used in Section 4.1. The sort-by column indicates which covariate was used to order data points. All datasets are under the Creative Commons CC0 license. | table_result | table | result_tab | train |
|
$2305.00071v1-Figure1-1.png | FIG. 1: A schematic of the toy model data. Each row is a segment, consisting of four data points indicated by the rounded rectangles. The noise in the segments is drawn from a standard normal distribution, with its value indicated by the color scale of the rounded rectangle. Segments 2 and 4 from the top in this example contain a signal, added randomly to one of the four data points and indicated by the vermilion-colored star. | fig_result | fig | result_fig | train |
|
$2305.00071v1-Figure2-1.png | FIG. 2: Toy-model exploration of the unified pastro formalism. In both figures, the dashed lines indicate the real number of segments with (and without) a signal. The shaded regions are 90% uncertainty levels. | fig_result | fig | result_fig | train |
|
$2305.00071v1-Figure3-1.png | FIG. 3: Joint likelihood distributions for the signal and noise hypotheses. The spans of the signal and noise distributions are dissimilar. The signal distribution extends to much larger β values than the noise distribution. | fig_result | fig | result_fig | train |
|
$2305.00071v1-Figure4-1.png | FIG. 4: The distribution of triggers that were found by one pipeline only. The top plot shows the noise and signal distribution of GstLAL, while the bottom plot shows the PyCBC distribution of triggers. The black stars are again the on-source O3a triggers from GWTC-2.1 that have only been found by the corresponding pipeline [19]. | fig_result | fig | result_fig | train |
|
$2305.00071v1-Figure5-1.png | FIG. 5: The posterior for the signal and noise counts in the joint analysis. The shaded region in the one-dimensional posterior corresponds to 90% uncertainty levels, while the contours in the two-dimensional posteriors are the 50% and 90% levels. We recover a median value Λs = 53+10 | fig_result | fig | result_fig | train |
|
$2305.00071v1-TableI-1.png | TABLE I: Triggers with a unified pastro ≥ 0.5 from our illustrative analysis. The triggers that have pastro ≥ 0.5 in at least one pipeline in GWTC-2.1 [19] are shown in the second column. Also listed are the FARs of the triggers from the GstLAL and PyCBC pipelines. These results illustrate the properties of the unified pastro method, but a larger number of noise triggers, and more accurate population models, would be needed to obtain reliable quantitative results. | table_result | table | result_tab | train |
|
$2305.00072v2-Figure2-1.png | Figure 2: From the left to the right, we plot the errors |w1 − w1^h| and |w2 − w2^h| over the spatial location x for the problem (6.5) using P3 polynomials on a uniform mesh of N = 20, 40, 80; from the top to the bottom are the results obtained using the upwind fluxes (6.1), the central fluxes (6.2), the mixed central fluxes (6.4), and the mixed upwind fluxes (6.3), respectively. | fig_result | fig | result_fig | train |
|
$2305.00072v2-Figure4-1.png | Figure 4: Plots of both the numerical solutions w1^h, w2^h and the exact solutions w1, w2 of problem (6.8) with the degree of approximation space q = 3. The upwind flux (6.1) is used in the simulation. The numerical and exact solutions at t = 0, 30, 75, 100 are plotted from top to bottom. On the left, we present the results for w1, while on the right, we display the results for w2. | fig_result | fig | result_fig | train |
|
$2305.00072v2-Figure7-1.png | Figure 7: Space-time plots of w2^h(x, t). On the left panel, from top to bottom, are the plots for the kink soliton with q = 0, q = 1, q = 2, and q = 3, respectively; on the right panel are the zoom-in plots of the transition region, x ∈ (88, 103), of the kink soliton. | fig_result | fig | result_fig | train |
|
$2305.00072v2-Figure8-1.png | Figure 8: Comparison between the kink solitons with different degrees of approximation space q at the final time t = 100. The left panel presents the results for w1^h, and the right shows the results for w2^h. | fig_result | fig | result_fig | train |
|
$2305.00072v2-Figure9-1.png | Figure 9: We present the energy difference, |Eh(t) − Eh(0)| on the left and Eh(t) − Eh(0) on the right, of kink solitons with different degrees q of approximation space in a moving box until the final time T = 100. The box is initially placed in the region x ∈ [60, 140] and then moves to the right with the same speed as the kink solitons. | fig_result | fig | result_fig | train |
|
$2305.00072v2-Table1-1.png | Table 1: L2 errors and the corresponding convergence rates for w1, w2, b1, b2 of problem (6.6) using Pq polynomials and the upwind flux (6.1). The interval is divided into N uniform cells, and the terminal computational time is T = 1. | table_result | table | result_tab | train |
|
$2305.00072v2-Table2-1.png | Table 2: L2 errors and the corresponding convergence rates for w1, w2, b1, b2 of problem (6.6) using Pq polynomials and the central flux (6.2). The interval is divided into N uniform cells, and the terminal computational time is T = 1. | table_result | table | result_tab | train |
|
$2305.00072v2-Table3-1.png | Table 3: L2 errors and the corresponding convergence rates for w1, w2, b1, b2 of problem (6.6) using Pq polynomials and the mixed central flux (6.4). The interval is divided into N uniform cells, and the terminal computational time is T = 1. | table_result | table | result_tab | train |
|
$2305.00072v2-Table4-1.png | Table 4: L2 errors and the corresponding convergence rates for w1, w2, b1, b2 of problem (6.6) using Pq polynomials and the mixed upwind flux (6.3). The interval is divided into N uniform cells, and the terminal computational time is T = 1. | table_result | table | result_tab | train |
|
$2305.00072v2-Table5-1.png | Table 5: L2 errors and the corresponding convergence rates for w1, w2, b1, b2 of problem (6.8) using Pq polynomials and the upwind flux (6.1). The interval is divided into N uniform cells, and the terminal computational time is T = 1. | table_result | table | result_tab | train |
|
$2305.00072v2-Table6-1.png | Table 6: L2 errors and the corresponding convergence rates for w1, w2, b1, b2 of problem (6.8) using Pq polynomials and the central flux (6.2). The interval is divided into N uniform cells, and the terminal computational time is T = 1. | table_result | table | result_tab | train |
|
$2305.00072v2-Table7-1.png | Table 7: L2 errors and the corresponding convergence rates for w1, w2, b1, b2 of problem (6.8) using Pq polynomials and the mixed central flux (6.4). The interval is divided into N uniform cells, and the terminal computational time is T = 1. | table_result | table | result_tab | train |
|
$2305.00072v2-Table8-1.png | Table 8: L2 errors and the corresponding convergence rates for w1, w2, b1, b2 of problem (6.8) using Pq polynomials and the mixed upwind flux (6.3). The interval is divided into N uniform cells, and the terminal computational time is T = 1. | table_result | table | result_tab | train |
|
$2305.00073v2-Figure2-1.png | FIG. 2: In green, our analytic expression in Equation (9); the numerical results previously outlined by Pospelov et al. (see [40]) are marked in black dashed lines. This is the case of a U(1) sector coupled to photons. For comparison we show known restricted zones from the Babar [42, 43] and NA64 [44] experiments, along with SM results for the exclusion in parameter space due to the electron and muon anomalous magnetic moments. Constraints adapted from [44]. | fig_result | fig | result_fig | train |
|
$2305.00073v2-Figure3-1.png | FIG. 3: Our analytic expression in Eq. (9) is shown in black dashed lines; this is the U(1) coupling to the SM. Constraints on the Dark Boson mass and kinetic mixing from [45] can be seen as shaded regions, plus their result for the muon g − 2 using this symmetry; again (as in Fig. 2) our result explains the previous result fitted to the anomaly in a complete and legible expression. The Babar excluded region comes from searches of Z′ or Dark Z from the production of µ−µ+Z′ at colliders [46]. CCFR comes from a measurement of the neutrino trident cross section [47] and Borexino from neutrino-electron scattering [48]. In Figure 4 we use these constraints, but there we include a mass mixing and a hypercharge coupling to the SM. Figure adapted from [45]. | fig_result | fig | result_fig | train |
|
$2305.00073v2-Figure4-1.png | FIG. 4: Constraints for the Dark Z boson (DZ) in the kinetic mixing versus boson mass parameter space in two regimes of the mass mixing parameter, δ, from eq. (26). We set δ to be 10⁻³ and 10⁻¹, shown as the brown and red contours, respectively. In a dotted line, we represent the exact comparison between our result and the anomaly, eq. (1). The straight lines by the sides of the dotted one represent the 2σ allowed region. Left: In fuchsia, constraints outlined by Croon et al. [56] on supernova muons coupled to Z′, and Borexino from Figure 3. Right: In pink, the exclusion zone from previous work in a dark photon-QED-like approach to a DZ [45] as shown in Fig. 3, coming from simply setting the masses for the boson in eq. (9), plus the constraints from Babar, Borexino and CCFR as described before. | fig_result | fig | result_fig | train |
|
$2305.00074v2-Figure4-1.png | FIG. 4. A schematic illustration for the single-shell and double-shell contributions to the energy-momentum tensor with their two-point correlation functions coming from x⃗ and y⃗ in the same sound shell (blue shell) or in different sound shells (cyan and red shells). | fig_result | fig | result_fig | train |
|
$2305.00074v2-Figure5-1.png | FIG. 5. Schematic illustration for the two-point correlator of the energy-momentum tensor from three different configurations shown in 1 + 1 dimensions (top row) and 2 + 1 dimensions (bottom row). For the single-shell case, there should be only one bubble nucleated in the region shaded in yellow. | fig_result | fig | result_fig | train |
|
$2305.00074v2-Figure6-1.png | FIG. 6: A typical 2-dimensional view of the 3-dimensional region of an equal-time hypersurface. The single-shell contribution comes from a bubble nucleated in the region Uxy shaded in yellow, while the double-shell contribution comes from two bubbles nucleated separately in U′x and U′y shaded in blue and red. The system has an SO(2) symmetry and is invariant under rotations around the r⃗ direction. | fig_result | fig | result_fig | train |
|
$2305.00074v2-Figure7-1.png | FIG. 7. The single-shell (left) and double-shell (right) power spectra for α = 0.1. The discrete points are the numerical results and solid lines are their interpolating functions. The curves correspond to different vw from 0.80 to 1.00 with interval ∆vw = 0.02 from the top one to the bottom one, respectively. The blue and red dashed lines are the asymptotic behaviors of the power spectra with vw = 0.8 and vw = 1, respectively. All these power spectra scale as k3 at low frequencies. | fig_result | fig | result_fig | train |
|
$2305.00074v2-Figure8-1.png | FIG. 8. The full shapes of power spectra from the single-shell (blue) and double-shell (orange) contributions to the total power spectrum (green) for different vw with fixed α = 0.1. | fig_result | fig | result_fig | train |
|
$2305.00076v1-Figure1-1.png | Figure 1: Different levels of classification as provided in the shared task | fig_result | fig | result_fig | train |
|
$2305.00076v1-Figure2-1.png | Figure 2: Task B. Using authentic and synthetic training datasets. | fig_result | fig | result_fig | train |
|
$2305.00076v1-Figure3-1.png | Figure 3: Task C. Each input sentence is paired with its parent class ["Threats", "Derogation", "Animosity", "Prejudiced Discussion"] before tokenization. | fig_result | fig | result_fig | train |
|
$2305.00076v1-Table1-1.png | Table 1: Dataset Description and Distribution of Sentiment Labels | table_result | table | result_tab | train |
|
$2305.00076v1-Table3-1.png | Table 3: Result of the different tasks. | table_result | table | result_tab | train |
|
$2305.00077v3-Figure1-1.png | Figure 1: The architecture REIT for requirements elicitation interview training system. The dashed lines indicate optional | fig_result | fig | result_fig | train |
|
$2305.00077v3-Figure10-1.png | Figure 10: Experimental setup for the user study. | fig_result | fig | result_fig | train |
|
$2305.00077v3-Figure2-1.png | Figure 2: Visual presentation of the overall contextual performance analysis of a sample user at the end of the session. | fig_result | fig | result_fig | train |
|
$2305.00077v3-Figure6-1.png | Figure 6: The interaction flow between the user and REIT. | fig_result | fig | result_fig | train |
|
$2305.00077v3-Figure7-1.png | Figure 7: Samples of the interview training system’s dialogue displayer states during the interview session. | fig_result | fig | result_fig | train |
|
$2305.00077v3-Figure8-1.png | Figure 8: Samples of the interview training system’s dialogue displayer states during the feedback session. | fig_result | fig | result_fig | train |
|
$2305.00077v3-Figure9-1.png | Figure 9: The experimental design of the user study. | fig_result | fig | result_fig | train |
|
$2305.00081v1-Figure1-1.png | Figure 1: Convergence of error and weighted 2-Wasserstein distance obtained by weighted least squares regression. Black line = objective (error) of the optimization problem of weighted least squares regression. Red line = Wasserstein distance between the estimated quantile function and the true quantile function. Grey band = 90% confidence band of the error obtained by 100 repeated experiments. Red band = 90% confidence band of the distance obtained by 100 repeated experiments. The horizontal axis = sample size in log scale. | fig_result | fig | result_fig | train |
|
$2305.00081v1-Figure2-1.png | Figure 2: Convergence of error and 1-Wasserstein distance obtained by least absolute deviation regression. Black line = objective (error) of the optimization problem of least absolute deviation regression. Red line = Wasserstein distance between the estimated quantile function and the true quantile function. Grey band = 90% confidence band of the error obtained by 100 repeated experiments. Red band = 90% confidence band of the distance obtained by 100 repeated experiments. The horizontal axis = sample size in log scale. | fig_result | fig | result_fig | train |
|
$2305.00081v1-Figure3-1.png | Figure 3: Q-Q plots of models fitted by least squares regression with cardinality constraint C = 1, 2 and coefficient λ = 0.6, 1.2 of L1 penalty. MLE is included as a benchmark. {(xn, yn)}_{n=1}^N = black points, xn = n-th sample order statistic, yn = quantile with confidence level n/(N+1) of the model. | fig_result | fig | result_fig | train |
|
$2305.00081v1-Figure4-1.png | Figure 4: Q-Q plots of models fitted by least absolute deviation regression with cardinality constraint C = 1, 2 and coefficient λ = 1.1, 1.9 of L1 penalty. MLE is included as a benchmark. {(xn, yn)}_{n=1}^N = black points, xn = n-th sample order statistic, yn = quantile with confidence level n/(N+1) of the model. | fig_result | fig | result_fig | train |
|
$2305.00081v1-Figure5-1.png | Figure 5: Scaled quantile functions of the fitted models and sample points {(−log(1 − pn), yn)}_{n=1}^N, where pn = n/(N+1) and yn are the standardized drawdowns. The y-axis is the standardized drawdown. The x-axis is −log(1 − probability). The models are fitted with least squares regression with cardinality constraint C = 1, 2 and coefficient λ = 0.6, 1.2 of L1 penalty. The following three estimates almost overlap: least squares, least squares with cardinality constraint C = 2, and least squares with penalty coefficient λ = 0.6. Compared to MLE, all models have better fit to the tail observations, except for least squares regression with C = 1. | fig_result | fig | result_fig | train |
|
$2305.00081v1-Figure6-1.png | Figure 6: Scaled quantile functions of the fitted models and sample points {(−log(1 − pn), yn)}_{n=1}^N, where pn = n/(N+1) and yn are the standardized drawdowns. The y-axis is the standardized drawdown. The x-axis is −log(1 − probability). The models are fitted with least absolute deviation (LAD) regression with cardinality constraint C = 1, 2 and coefficient λ = 1.1, 1.9 of L1 penalty. The two scaled quantile functions obtained by the following estimators almost overlap: LAD regression and LAD regression with cardinality constraint C = 2. Compared to MLE, all models have better fit to the tail observations. | fig_result | fig | result_fig | train |
|
$2305.00081v1-Table1-1.png | Table 1: The table presents the measures of goodness-of-fit for models with different errors, constraints and penalties. MLE is included for comparison. C = value of cardinality constraint, λ = coefficient of L1 penalty in LASSO regression, WMSE = weighted mean squared error, MAE = mean absolute error, KS = Kolmogorov–Smirnov distance, LLK = log likelihood. | table_result | table | result_tab | train |
|
$2305.00083v1-Figure1-1.png | Figure 1: Experiential learning cycle. | fig_result | fig | result_fig | train |
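A row of this table can be read back into a structured record with a small helper; below is a minimal sketch assuming the pipe-delimited layout shown above (the field names follow the column list in the header; note that a naive split like this would break on captions containing a literal `|`, which would need escaping):

```python
def parse_caption_row(line: str) -> dict:
    """Split one pipe-delimited row of the caption table into named fields.

    Assumed field order: file_name_index, text, class, super_class,
    sub_class, split (matching the columns listed in the header).
    """
    # Strip surrounding whitespace and any leading/trailing table pipes.
    cells = [cell.strip() for cell in line.strip().strip("|").split("|")]
    keys = ["file_name_index", "text", "class", "super_class", "sub_class", "split"]
    if len(cells) != len(keys):
        raise ValueError(f"expected {len(keys)} cells, got {len(cells)}")
    return dict(zip(keys, cells))


# One row taken verbatim from the table above.
row = ("$2305.00063v1-Table1-1.png | Table 1: CDT evaluation data "
      "| table_result | table | result_tab | train |")
record = parse_caption_row(row)
print(record["class"])  # -> table_result
```

The same helper can be mapped over every data line of the table to recover the full split as a list of dictionaries.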