Dataset columns — image: width 44–1.64k px (images not rendered below); file_name_index: string, lengths 26–33; text: string, lengths 16–2.81k; class: 5 classes; super_class: 2 classes; sub_class: 5 classes; split: 1 value.

file_name_index | text | class | super_class | sub_class | split |
---|---|---|---|---|---|
$2305.00041v1-Figure13-1.png | Fig. 13. Qualitative examples on RealEstate-10K dataset with two input views. In the first example, we see artifacts such as ghosting and blur in the frame predicted by DDP-NeRF. In the second example, we observe that DDP-NeRF infers incorrect geometry due to which the objects are placed incorrectly in the synthesized view. However, the predictions by ViP-NeRF do not suffer from such artifacts and are significantly sharper. | fig_result | fig | result_fig | train |
$2305.00041v1-Figure14-1.png | Fig. 14. Qualitative examples on NeRF-LLFF dataset with three input views. In the first example, we find that DDP-NeRF prediction has a different global color than the ground truth. In addition, the angle of the sharp triangular object on the fortress is changed. We also observe floating blue clouds outside the fortress and blur in other regions. In the second example, we notice that DDP-NeRF is unable to infer the positions of the objects (horizontal stem and the orange leaf) correctly and instead places them at incorrect positions or breaks them into multiple parts. On the other hand, ViP-NeRF is able to synthesize the novel views reasonably well. | fig_result | fig | result_fig | train |
$2305.00041v1-Figure16-1.png | Fig. 16. Qualitative examples on DTU dataset with two input views. DDP-NeRF predictions contain significant floating clouds in all three examples, whereas ViP-NeRF produces more realistic novel views. | fig_illustration | fig | illustration | train |
$2305.00041v1-Figure18-1.png | Fig. 18. Quantitative comparison of the performance of DDP-NeRF and ViP-NeRF models with increasing distance between the training views. The x-axis denotes the frames skipped between the two training frames. | fig_result | fig | result_fig | train |
$2305.00041v1-Figure19-1.png | Fig. 19. Loss curves of L𝑣 during training on individual scenes of both RealEstate-10K and NeRF-LLFF datasets. | fig_result | fig | result_fig | train |
$2305.00041v1-Figure2-1.png | Fig. 2. A toy example to illustrate the computation of visibility prior. The scene contains a blue sphere and a brown box and the relative pose between the views is a translation in x direction. The secondary view image is warped to the primary view at different depth planes to create a PSV and compared with the primary view image to obtain error maps. We observe that the brown square and the blue circle are matched better in the second and third planes respectively leading to lower error (denoted as white) in the respective error maps. The minimum error across all the planes is thresholded to obtain the visibility prior map corresponding to the primary view image. The right portion of the sphere which is occluded in the secondary view image is denoted in black in the visibility map. | fig_result | fig | result_fig | train |
$2305.00041v1-Figure3-1.png | Fig. 3. Qualitative examples on RealEstate-10K dataset with two input views. We observe that the predictions of ViP-NeRF are close to the ground truth, while those of other models suffer from various distortions. In particular, DDP-NeRF blurs regions of the frame near the left door and contains black floater artifacts. | fig_result | fig | result_fig | train |
$2305.00041v1-Figure4-1.png | Fig. 4. Qualitative examples on RealEstate-10K and NeRF-LLFF datasets with two, three, and four input views. We observe that ViP-NeRF models specular regions better as the number of input views increases. For example, in the first row, the reflection of the chair is better reconstructed as the number of views increases. | fig_result | fig | result_fig | train |
$2305.00041v1-Figure5-1.png | Fig. 5. Estimated depth map on RealEstate-10K dataset with two input views. We find that ViP-NeRF is better in both frame synthesis and depth estimation compared to the competing models. For example, in the first row, the depth estimated by DDP-NeRF is smooth which may be leading to a loss of sharpness in synthesizing the shrubs. In contrast, ViP-NeRF predictions are sharper. For better visualization, we show inverse depth and normalize it to set the maximum value to unity. | fig_illustration | fig | illustration | train |
$2305.00041v1-Figure6-1.png | Fig. 6. Visualization of the visibility map predicted by ViP-NeRF. White indicates the regions of the ‘Primary View’ which are visible in the ‘Secondary View’ and black indicates the occluded regions. From the primary and secondary views, we observe that the left part of the fortress and the neighboring portion of the wood are hidden in the secondary view. ViP-NeRF is able to reasonably determine the visible and occluded regions. | fig_result | fig | result_fig | train |
$2305.00041v1-Figure7-1.png | Fig. 7. Qualitative examples on RealEstate-10K dataset with two input views. We observe sharp predictions by ViP-NeRF while predictions by other models suffer from blur and other artifacts. In particular, DDP-NeRF predictions contain blurred flowers (first row) and blurred tiles (second row). | fig_illustration | fig | illustration | train |
$2305.00041v1-Figure8-1.png | Fig. 8. Qualitative examples on RealEstate-10K dataset with three input views. We find that ViP-NeRF is able to reconstruct novel views significantly better than the competing models. DDP-NeRF extends parts of the white table and fails to reconstruct the drawer handles accurately in the first and second examples. In the third example, DDP-NeRF fails to reconstruct thin objects in the chair. | fig_result | fig | result_fig | train |
$2305.00041v1-Figure9-1.png | Fig. 9. Qualitative examples on RealEstate-10K dataset with four input views. In the first example, DDP-NeRF fails to retain the structure of the chair while it blurs the texture of the carpet in the second example. We observe even more severe distortions among the predictions of other models. | fig_illustration | fig | illustration | train |
$2305.00041v1-Table1-1.png | Table 1. Quantitative results on RealEstate-10K dataset. | table_result | table | result_tab | train |
$2305.00041v1-Table10-1.png | Table 10. Per-scene performance of various models with three input views on RealEstate-10K dataset. The three rows show LPIPS, SSIM, and PSNR scores, respectively. | table_result | table | result_tab | train |
$2305.00041v1-Table11-1.png | Table 11. Per-scene performance of various models with four input views on RealEstate-10K dataset. The three rows show LPIPS, SSIM, and PSNR scores, respectively. | table_result | table | result_tab | train |
$2305.00041v1-Table12-1.png | Table 12. Per-scene performance of various models with two input views on NeRF-LLFF dataset. The three rows show LPIPS, SSIM, and PSNR scores, respectively. | table_result | table | result_tab | train |
$2305.00041v1-Table13-1.png | Table 13. Per-scene performance of various models with three input views on NeRF-LLFF dataset. The three rows show LPIPS, SSIM, and PSNR scores, respectively. | table_result | table | result_tab | train |
$2305.00041v1-Table14-1.png | Table 14. Per-scene performance of various models with four input views on NeRF-LLFF dataset. The three rows show LPIPS, SSIM, and PSNR scores, respectively. | table_result | table | result_tab | train |
$2305.00041v1-Table2-1.png | Table 2. Quantitative results on NeRF-LLFF dataset. | table_result | table | result_tab | train |
$2305.00041v1-Table3-1.png | Table 3. Comparison of reliability of priors used in different models. The reference visibility is obtained using NeRF trained with dense input views. | table_result | table | result_tab | train |
$2305.00041v1-Table4-1.png | Table 4. Evaluation of depth estimated by different models with two input views. The reference depth is obtained using NeRF trained with dense input views. The depth RMSE on the two datasets are of different orders on account of different depth ranges. | table_result | table | result_tab | train |
$2305.00041v1-Table5-1.png | Table 5. Ablation experiments on both the datasets with two input views. | table_result | table | result_tab | train |
$2305.00041v1-Table6-1.png | Table 6. Original name of RealEstate-10K videos we selected for experiments and their updated names. | table_result | table | result_tab | train |
$2305.00041v1-Table7-1.png | Table 7. Quantitative results on DTU dataset. RegNeRF+ uses test camera poses during training. | table_result | table | result_tab | train |
$2305.00041v1-Table9-1.png | Table 9. Per-scene performance of various models with two input views on RealEstate-10K dataset. The three rows show LPIPS, SSIM, and PSNR scores, respectively. | table_result | table | result_tab | train |
$2305.00042v1-Figure1-1.png | Figure 1: Proposed CG-DDPM: In addition to the forward and reverse diffusion in the traditional DDPM, the generation direction is controlled using a cycle-guided latent noise regularization technique. The ground truth target and source MRIs 𝑋1,…,𝑁 and 𝑌1,…,𝑁 are used to generate the deterministic latent noise 𝜖1,…,𝑁. | fig_result | fig | result_fig | train |
$2305.00042v1-Figure2-1.png | Figure 2. Ground truth MRIs and synthetic MRIs for T1→T2, T2→T1, T1→FLAIR, FLAIR→T1 synthesis. Synthetic MRIs from the proposed CG-DDPM (column #1), IDDPM (column #2), IDDIM (column #3), and MRI-cGAN (column #4) are presented column-wise. The difference maps between the truth and synthetic MRIs are shown below. | fig_result | fig | result_fig | train |
$2305.00042v1-Figure3-1.png | Figure 3. Ground truth MRIs, synthetic MRIs from five different runs and their average of the MC-based sampling of the DDPM models are presented column-wise. We present the proposed CG-DDPM in the first row, IDDPM in the second row, and IDDIM in the third row. More examples are shown in Appendix C. | fig_result | fig | result_fig | train |
$2305.00042v1-Figure4-1.png | Figure 4. Sampling stability of the DDPM-based methods from four MRI translation tasks. MC𝑛 indicates the MC-based sampling averaged result using 𝑛 runs. Notice that the N-MSSIM does not represent the absolute quantitative performance (e.g., CG-DDPM’s “1” is much higher than IDDIM’s “1”), but a trend of the performance under different numbers of runs in the MC-based sampling. | fig_result | fig | result_fig | train |
$2305.00042v1-Table1-1.png | Table 1. Quantitative and statistical analysis of the synthetic MRIs for CG-DDPM vs. IDDPM, IDDIM, and MRI-cGAN. MAE is calculated by the normalized images ([-1,1]). | table_result | table | result_tab | train |
$2305.00044v1-Figure1-1.png | FIGURE 1. An example of product characteristics for a product sold in the Amazon store | fig_result | fig | result_fig | train |
$2305.00044v1-Figure11-1.png | FIGURE 11. ELMO Architecture. This is an ELMO network for a string of 4 words, with L = 2 hidden layers. Here, the softmax layer (multinomial logit) is a single function mapping each input in R^d to a probability distribution over the dictionary Σ. | fig_illustration | fig | illustration | train |
$2305.00044v1-Figure13-1.png | FIGURE 13. The ResNet50 operates on numerical 3-dimensional arrays representing images. It first does some early processing by applying convolutional and pooling filters, then it applies many residual block mappings, producing arrays shown in green. The penultimate layer produces a high-dimensional vector I , the image embedding, which is then used to predict the image type. | fig_result | fig | result_fig | train |
$2305.00044v1-Figure14-1.png | FIGURE 14. SingleTask + ELMO model. Product text is mapped to W and the image is mapped to I, of dimensions 256 and 2048. For illustration purposes, we only show two hidden layers with dimensions 7 and 5 respectively. In practice, we use three layers with dimensions 2048, 1024, and 256. The output is the price for one time period t. | fig_illustration | fig | illustration | train |
$2305.00044v1-Figure15-1.png | FIGURE 15. MultiTask + BERT. Product text is mapped to W and the image is mapped to I, of dimensions 3072 and 2048 respectively. For illustration purposes, we only show two hidden layers with dimensions 7 and 5 respectively, and the output is of dimension 3. In practice, we use three layers with dimensions 2048, 1024, and 256, and the output is a price vector over T = 72 time periods. | fig_illustration | fig | illustration | train |
$2305.00044v1-Figure16-1.png | FIGURE 16. MultiTask + Fine-tuned BERT. Product text (sentence) is tokenized and padded to X, where components represent the context-free input embedding plus a positional encoding for a token (word). Then the input is fed into a BERT model which consists of 12 layers of transformer blocks and outputs a T-dimensional price vector. For illustration purposes, we only show 5 tokens and two transformer blocks. | fig_illustration | fig | illustration | train |
$2305.00044v1-Figure2-1.png | FIGURE 2. Our method for generating hedonic price: The input consists of images and unstructured text data. The first step of the process creates the moderately high-dimensional numerical embeddings I and W for images and text data via state-of-the-art deep learning methods, such as ResNet50 and BERT. The second step of the process takes input X = (I, W) and creates predictions for hedonic prices Ht(X) using deep learning methods with a multi-task structure. The models of the first step are trained on tasks unrelated to predicting prices (e.g., image classification or word prediction), where embeddings are extracted as hidden layers of the neural networks. The models of the second step are trained by price prediction tasks. Our multi-task model creates an intermediate lower-dimensional embedding V = V(X), called the value embedding, and then predicts the final prices in all periods {Ht(V), t = 1, …, T} using linear functional forms, making it easy to perform inference on the last step, using hold-out samples. Some variations of the method include fine-tuning the embeddings produced by the first step to perform well for price prediction tasks (i.e., optimizing the embedding parameters to minimize price prediction loss). | fig_result | fig | result_fig | train |
$2305.00044v1-Figure3-1.png | FIGURE 3. Standard architecture of a Deep Neural Network. In the hedonic price prediction network, the penultimate layer is interpreted as an embedding of the product’s hedonic value and the output layer contains predicted hedonic prices in all time periods. In comparison, the networks used for text and image processing have very high-dimensional inputs and outputs, with intermediate hidden layers composed of neural sub-networks. The dense embeddings typically result from taking the last hidden layer of the network. | fig_result | fig | result_fig | train |
$2305.00044v1-Figure4-1.png | FIGURE 4. The out-of-sample performance of the empirical hedonic price function obtained using neural networks every month since March 2013. Multi-task neural networks dominate single-task neural networks, which dominate boosted tree models, which in turn dominate linear models. | fig_result | fig | result_fig | train |
$2305.00044v1-Figure5-1.png | FIGURE 5. Statistical Significance of Value Embeddings via Linear Regression Model Applied to the Test Sample. The figure shows the point estimates and the pointwise 95% confidence intervals on the coefficients of the value embeddings, as estimated by the linear regression model, applied to the hold-out sample. Note that more than 90% of the coefficients are statistically significant at the 10⁻⁵ level. | fig_result | fig | result_fig | train |
$2305.00044v1-Figure6-1.png | FIGURE 6. An example of accurate prediction of (B001GUN1N6): The neural network model predicts the price of this item at about 100. The average price on camelcamel.com is 97, with the offer price ranging from 39 to 120. | fig_result | fig | result_fig | train |
$2305.00044v1-Figure7-1.png | FIGURE 7. An example of inaccurate prediction: The neural network model predicts the price of this item at about 300, but the recent offer prices for this product were around 2400. While this seems a miss, the price history for this item (B06XR39DJ1), that can be seen on camelcamel.com, suggests that there were periods where the offer prices for this item ranged between 206 and 2800, with an averaged price of 464. | fig_result | fig | result_fig | train |
$2305.00044v1-Figure8-1.png | FIGURE 8. Turnover Rate for Products. The figure shows the ratio of the number of products with transactions in a given month and no transactions in the previous month. | fig_result | fig | result_fig | train |
$2305.00044v1-Figure9-1.png | FIGURE 9. Number of Products in the Current Period over the Base Period. | fig_result | fig | result_fig | train |
$2305.00044v1-Table1-1.png | TABLE 1. Some properties of the CPI, BPP, ADPI, and FHPI | table_result | table | result_tab | train |
$2305.00044v1-Table2-1.png | TABLE 2. Summary of Out-of-Sample Performance of the Empirical Hedonic Price Function. | table_result | table | result_tab | train |
$2305.00044v1-Table3-1.png | TABLE 3. Examples of Construction of Confidence Intervals for the Predicted Hedonic Price H_it = V_it′ θ_t and the Sale Price P_it. Here Ĥ_it = V_it′ θ̂_t is the estimated hedonic price. The term σ̂² = V_it′ Ĉov(θ̂_t) V_it is the square of the standard error, and [L_it, U_it] = [Ĥ_it ± z_.95 σ̂] is the 90% confidence interval for H_it. The predictive confidence interval for P_it is [L_it(P_it), U_it(P_it)] = [Ĥ_it ± z_{1−α/2} ν̂] with ν̂² = σ̂² + V̂ar(P_it − H_it). The term z_.95 is the .95-th quantile of the standard normal distribution. | table_result | table | result_tab | train |
$2305.00044v1-Table4-1.png | TABLE 4. Estimates of Average Annual Rate of Inflation in Apparel over four years, 2014-2017: Fisher Hedonic Index, Fisher Matched Index, Jevons Posted Price Index, Adobe DPI, and the BLS Index for Urban Areas. Adobe DPI is based on 2014-2017. | table_result | table | result_tab | train |
$2305.00046v1-Figure1-1.png | Fig. 1. A graphical overview of the end-to-end deep learning-based framework for lung cancer detection workflow. | fig_architecture | fig | architecture | train |
$2305.00046v1-Figure3-1.png | Fig. 3. Overview of the proposed 3D Res-U-Net architecture. | fig_illustration | fig | illustration | train |
$2305.00046v1-Figure4-1.png | Fig. 4. The main parts of the YOLOv5 model are the CSPDarknet backbone, the PANet neck, and the YOLO Layer head. In the CSPDarknet, features are extracted from the data, and in the PANet, the features that were extracted are combined. The final object detection results, including class, score, location, and size, are generated by the YOLO layer. | fig_result | fig | result_fig | train |
$2305.00046v1-Figure5-1.png | Fig. 5. Overview of the proposed vision transformer architecture. | fig_architecture | fig | architecture | train |
$2305.00046v1-Figure6-1.png | Fig. 6. The visual results of lung segmentation for the 3D U-Net and 3D Res-U-Net architectures. Columns (a), (b), and (c) show the ground truth, the 3D U-Net prediction, and the 3D Res-U-Net prediction, respectively. | fig_result | fig | result_fig | train |
$2305.00046v1-Figure7-1.png | Fig. 7. Detection results of nodules on LUNA16 dataset using YOLOv5(s). | fig_result | fig | result_fig | train |
$2305.00046v1-TableI-1.png | TABLE I SUMMARY OF WORKS CARRIED OUT IN SEGMENTATION, NODULE DETECTION AND CLASSIFICATION DOMAIN ON LUNG CT. | table_result | table | result_tab | train |
$2305.00046v1-TableII-1.png | TABLE II DETAILS OF HYPERPARAMETERS USED FOR THE MODELS. | table_parameter | table | parameter | train |
$2305.00046v1-TableIV-1.png | TABLE IV COMPARISON OF THE PROPOSED FRAMEWORK WITH EXISTING NODULE DETECTION METHODS. | table_result | table | result_tab | train |
$2305.00046v1-TableV-1.png | TABLE V COMPARISON OF THE PROPOSED FRAMEWORK WITH EXISTING NODULE’S MALIGNANCY CLASSIFICATION METHODS. | table_result | table | result_tab | train |
$2305.00048v2-Figure1-1.png | Figure 1: Total number of observations for different variables in 2020, in log (base 10) scale. | fig_result | fig | result_fig | train |
$2305.00048v2-Figure2-1.png | Figure 2: RMSE of ERA5 data vs. real-world observations for different regions. | fig_result | fig | result_fig | train |
$2305.00048v2-Figure3-1.png | Figure 3: RMSE and ACC comparison for FourCastNet (FCN) and IFS for a 48-hour forecast. The top row shows RMSE, while the bottom row shows the ACC for each model and variable. In all cases, the FCN model outperforms IFS. | fig_result | fig | result_fig | train |
$2305.00048v2-Figure4-1.png | Figure 4: RMSE for different variables, comparing FCN and IFS against observations for different regions of the world. Each row is one region, and the columns represent the RMSE for wind speed, temperature, and dewpoint respectively. | fig_result | fig | result_fig | train |
$2305.00048v2-Figure5-1.png | Figure 5: Geographical extent of various regions used in evaluations. | fig_result | fig | result_fig | train |
$2305.00048v2-Figure6-1.png | Figure 6: RMSE of ERA5, FCN, and IFS against observations for wind speeds, temperature, dewpoint, and 6-hourly precipitation accumulation. Each column represents a data source, and each row represents a variable. The x-axis represents the time of day in UTC, and the y-axis represents the RMSE in appropriate units. Note the different forecast depths for ERA5 vs. FCN and IFS. | fig_result | fig | result_fig | train |
$2305.00048v2-Table1-1.png | Table 1: Variables in dataset, provided at an hourly frequency for the entire year of 2020 | table_result | table | result_tab | train |
$2305.00048v2-Table2-1.png | Table 2: The latitude and longitude extents of the various geographies used in the study | table_result | table | result_tab | train |
$2305.00050v2-Figure1-1.png | Figure 1: When tackling real-world causal tasks, people strategically alternate between logical- and covariance-based causal reasoning as they formulate (sub-)questions, iterate, and verify their premises and implications. Now, LLMs may have the capability to automate or assist with every step of this process and seamlessly transition between covariance- and logic-based causality. | fig_result | fig | result_fig | train |
$2305.00050v2-Figure2-1.png | Figure 2: Probing causal reasoning in LLMs. Two example outputs from an LLM (GPT-4). In the first dialog, the LLM discusses causal issues, such as a potential confounder and recommends an A/B experiment to correctly characterize effects and drive the requested decision-making. The second example continues the conversation and requires arguably the same kind of causal awareness of potential confounders (e.g., the population characteristics and even population sizes of the online and newspaper audience are unknown) but the LLM proceeds regardless and provides an incorrect answer. | fig_result | fig | result_fig | train |
$2305.00050v2-Figure4-1.png | Figure 4: We probe the importance of individual words for getting a correct answer by redacting a random word from a question. Here, we show our results for experiments in gpt-3.5-turbo, averaged across 357 random redaction probes over our Tübingen experiment. We highlight words based on their importance for getting a correct result. A white background indicates that redacting the word did not reduce accuracy. A dark blue highlight indicates that redacting the word reduced accuracy the most. | fig_result | fig | result_fig | train |
$2305.00050v2-Figure5-1.png | Figure 5: LLM completions are highlighted in yellow. Session 1: In a multi-turn interaction, GPT3.5 first gives an erroneous response to a math problem. Prompted to show its work, it correctly solves the sub-parts of the problem, but again gives the wrong final answer. Session 2: we probe for the influence of the first wrong answer on the final wrong answer by replaying an interventional-conversation and asking GPT3.5 to complete only the final answer. | fig_result | fig | result_fig | train |
$2305.00050v2-Table1-1.png | Table 1: Example cause-effect pairs from the Tübingen benchmark. The task is to determine whether Variable A causes Variable B, or vice-versa. | table_result | table | result_tab | train |
$2305.00050v2-Table10-1.png | Table 10: Example vignettes for evaluation of inferring necessary and sufficient causes, categorized by their type based on the different ways in which potential causes can interact to yield the final outcome. Each vignette tests two questions: “Is {Actor} a necessary cause of {Event}?” and “Is {Actor} a sufficient cause of {Event}?”. | table_result | table | result_tab | train |
$2305.00050v2-Table11-1.png | Table 11: Accuracy of gpt-3.5-turbo and gpt-4 on inferring necessary or sufficient cause on 15 standard vignettes. The vignettes are divided into eight types (e.g., Early Preemption type has three vignettes). Each (✓/✗) corresponds to a correct/incorrect answer on a single vignette. gpt-3.5-turbo fails at the task (worse than random chance) but gpt-4 can infer necessary and sufficient cause with high accuracy. | table_result | table | result_tab | train |
$2305.00050v2-Table12-1.png | Table 12: Testing dataset memorization issues with a novel “lab-vignettes” dataset. The average accuracy of gpt-4 stays the same as in the std vignettes, indicating that gpt-4’s capabilities to infer necessary and sufficient cause can generalize to new data. Inferring necessary cause (93%) emerges as an easier task than inferring sufficient cause (78%). | table_result | table | result_tab | train |
$2305.00050v2-Table13-1.png | Table 13: Comparative assessments of normality between text-davinci-003 and GPT-4-32K. Stories taken from the BIG-Bench causal judgments task. | table_result | table | result_tab | train |
$2305.00050v2-Table14-1.png | Table 14: Two kinds of prompt templates for the Tübingen benchmark. The first asks two questions per variable pair whereas the second (“Single prompt”) asks a single question to orient each pairwise edge. | table_parameter | table | parameter | train |
$2305.00050v2-Table15-1.png | Table 15: “lab-vignettes”: Examples of novel vignettes for evaluation of inferring necessary and sufficient causes. Each vignette is associated with two questions: “Is {Actor} a necessary cause of {Event}?” and “Is {Actor} a sufficient cause of {Event}?” | table_result | table | result_tab | train |
$2305.00050v2-Table16-1.png | Table 16: Our memorization test shows the Tübingen dataset is in the training dataset and has been at least partially memorized by GPT-3 and GPT-4; our experiments with the novel lab-vignettes show that the dataset has not been memorized. Results for CRASS and other benchmark datasets are forthcoming. | table_result | table | result_tab | train |
$2305.00050v2-Table17-1.png | Table 17: Perturbing the first answer from Figure 5 and observing the result shows that the first answer strongly influences the final answer. | fig_result | fig | result_fig | train |
$2305.00050v2-Table2-1.png | Table 2: Accuracy of different versions of GPT on the Tübingen cause-effect pairs dataset. The best LLM performance outperforms the current state-of-the-art covariance-based approaches that rely on observational data of the two variables. Weighted accuracy weights individual pairs to account for overcounting due to some pairs sharing the same source dataset. The causal agent is gpt-3.5-turbo with system message set as “You are a helpful assistant for causal reasoning.”. LMPrior uses davinci-instruct-beta. | table_result | table | result_tab | train |
$2305.00050v2-Table3-1.png | Table 3: Example cause-effect pairs from the Neuropathic pain diagnosis benchmark. ‘Dir.’ refers to the ground-truth causal direction between the variables. | table_parameter | table | parameter | train |
$2305.00050v2-Table4-1.png | Table 4: Accuracy of different versions of GPT on inferring the edge directions of the Neuropathic pain diagnosis graph. As with the Tübingen dataset, LLMs like gpt-3.5-turbo obtain more than 85% accuracy on determining the direction of edges. The causal agent is gpt-3.5-turbo with a system message set as “You are a helpful assistant for causal reasoning.” | table_result | table | result_tab | train |
$2305.00050v2-Table5-1.png | Table 5: Critiquing LLM output using another LLM instance. To increase robustness of an LLM’s response, we can use GPT-4 as a critic. The left panel shows an incorrect reply from gpt-3.5-turbo wherein the reasoning is correct but the LLM outputs the incorrect option (A). We create a special “critique” prompt that asks gpt-4 to evaluate the response from an AI assistant for self-consistency. gpt-4 finds the logical inconsistency and provides the correct answer. | table_parameter | table | parameter | train |
$2305.00050v2-Table6-1.png | Table 6: Accuracy of different versions of GPT on the Neuropathic pain dataset. | table_result | table | result_tab | train |
$2305.00050v2-Table7-1.png | Table 7: Normalized Hamming distance (NHD) for different causal discovery algorithms. Since NHD depends on the number of predicted edges, we compare the ratio of NHD and baseline NHD across algorithms. A lower NHD ratio is better. LLM-based discovery (gpt-3.5-turbo) obtains comparable NHD and the lowest NHD ratio compared to recent covariance-based discovery algorithms. | table_result | table | result_tab | train |
$2305.00050v2-Table8-1.png | Table 8: Example scenarios from the CRASS counterfactual reasoning benchmark. The task is to select the best answer choice for the counterfactual question, given a premise. | table_result | table | result_tab | train |
$2305.00050v2-Table9-1.png | Table 9: Accuracy of different LLMs on the CRASS counterfactual reasoning benchmark. gpt-4 achieves 92% accuracy, significantly higher than the previous reported accuracy on this benchmark and within six percentage points of human annotators’ accuracy. | table_result | table | result_tab | train |
$2305.00053v1-Figure1-1.png | FIG. 1. (Color online) (a) Crystal structure of TaTe4 showing the Ta linear chains and the regular octahedra formed by Ta and the Te squares. Ta and Te atoms are represented by purple and green spheres, respectively. (b) Reciprocal unit cell of TaTe4 and the corresponding high-symmetry points. The measurement plane perpendicular to the a∗-axis is highlighted. (c) Angle-integrated intensity map of TaTe4 showing the Fermi-level cutoff and the suppression of spectral intensity in an energy window of 0.2 eV. (d) ARPES energy-momentum maps of TaTe4 along Γ-X and Γ-Z, showing the dispersion perpendicular and parallel to the linear chains, respectively. (e) Energy-momentum maps along Γ-X and Γ-Z with saturated contrast over the region of reduced spectral intensity, showing electron-like metallic states (indicated by green arrows) and a hole-like metallic state (indicated by blue arrows) crossing the Fermi-level. The Brillouin zone boundaries are marked by dashed lines in all ARPES images. All measurements were carried out with a photon energy of 75 eV and linear-horizontal (LH) polarization. | fig_result | fig | result_fig | train |
$2305.00053v1-Figure2-1.png | FIG. 2. (Color online) (a) ARPES in-plane Fermi surface maps (EB = 0) of TaTe4 obtained with photons of linear horizontal (LH, left panel) and linear vertical (LV, right panel) polarization, showing the in-plane contours of the electron-like metallic states (elliptical contours) and of the quasi-1D metallic states (indicated by arrows). (b) ARPES constant energy maps of TaTe4 at EB = −0.3 eV obtained in both polarizations (LH on the left and LV on the right), showing the contours at the band maximum of the hole-like states centered at Γ. The Brillouin zone boundaries are marked by dashed lines in all images. All measurements were carried out with a photon energy of 75 eV. | fig_illustration | fig | illustration | train |
|
$2305.00053v1-Figure3-1.png | FIG. 3. (Color online) (a) Electronic bandstructure of TaTe4 calculated by DFT in the unmodulated phase. (b) Electronic band structure of TaTe4 calculated by DFT in the CDW phase. (c) Comparison between ARPES dispersion and DFT calculations in the unmodulated phase along Γ-X, showing the predicted hole-like state centered at Γ and the electron-like metallic states (indicated by green arrows) not predicted by the calculations. (d) Comparison between ARPES dispersions and DFT calculations in the unmodulated phase along Γ-Z for both polarizations, showing the dispersion of the hole-like state centered at Γ, visible only with horizontal polarization, and the electron-like state, visible only with vertical polarization. The quasi-1D metallic state is indicated by blue arrows. The calculated bandstructure in the non-CDW phase was shifted 0.24 eV in order to achieve a better agreement between theory and experiment. All measurements were carried out with a photon energy of 75 eV, and the energy-momentum maps were processed using the curvature method [61] to enhance the intensity of weak spectral features. | fig_result | fig | result_fig | train |
|
$2305.00053v1-Figure4-1.png | FIG. 4. (Color online) (a) ARPES out-of-plane Fermi surface map (EB = 0) of TaTe4 obtained by superimposing the maps obtained using photons with both polarizations, showing the out-of-plane dispersion of the electron-like metallic states. The measurements were carried out using photon energies from 50 to 100 eV. (b) Electronic band structure of TaTe4 calculated by DFT along the M-Γ-Z direction, showing the band crossings along Γ-Z (blue) and Γ-M (yellow). The projections of the band crossings at the Fermi level are indicated by points D1 and D2, respectively. (c) Schematic of the positions of the points D1 and D2 in the reciprocal unit cell of TaTe4 and their projections on the surface Brillouin zone indicated by P1 and P2. The indices 1 and 2 refer to a single and double projection, respectively. (d) Schematic of the positions of the points P1 and P2 on the surface Brillouin zone with a qualitative description of the Fermi arcs connecting them. (e) ARPES in-plane Fermi surface maps (EB = 0) of TaTe4 obtained with LH and LV polarization, and the superimposed expected k-location of the Fermi arcs. This measurement was carried out with a photon energy of 75 eV. | fig_result | fig | result_fig | train
|
$2305.00058v3-Figure1-1.png | FIG. 1. The metric function b(r), the potential V (r), the square of the Riemann tensor and the mass as a function of the event horizon are plotted for l = m = 1 while varying the scalar length scale A, alongside the BTZ black hole. | fig_illustration | fig | illustration | train
|
$2305.00058v3-Figure2-1.png | FIG. 2. The energy density ρ(r), the radial pressure pr(r), the equation of state w(r) and the sum of the energy density and pressure ρ(r) + pr(r) for m = l = 1, while varying the scalar length scale A for the three dimensional case. | fig_result | fig | result_fig | train |
|
$2305.00058v3-Figure3-1.png | FIG. 3. Plots of the metric function b(r), the potential V (r) and the square of the Riemann tensor as functions of the radial distance r, the mass as function of the event horizon radius, the horizon radius as a function of the scalar charge A and the temperature T (rh) of the black hole as function of the horizon radius. | fig_illustration | fig | illustration | train |
|
$2305.00058v3-Figure4-1.png | FIG. 4. The heat capacity C(r_h) = T dS/dT, evaluated at R_h → √(r_h² + A²), as a function of the horizon radius. | fig_illustration | fig | illustration | train
|
$2305.00058v3-Figure5-1.png | FIG. 5. The energy density ρ(r), the radial pressure pr(r), the equation of state w(r) and the sum of the energy density and pressure ρ(r) + pr(r) for m = 1, while varying the scalar length scale A for the four dimensional case. | fig_result | fig | result_fig | train |
|
$2305.00058v3-Figure6-1.png | FIG. 6. The horizon radius as a function of A, the metric function b(r), the potential V (r) and the temperature T (rh) for c2 = −10 while changing A, for the five-dimensional black hole. | fig_illustration | fig | illustration | train
|
$2305.00058v3-Figure7-1.png | FIG. 7. The energy density, the radial pressure, their sum and the equation of state are plotted while changing A for the five dimensional case. | fig_result | fig | result_fig | train |
|
$2305.00058v3-Figure8-1.png | FIG. 8. The metric function b(r) for asymptotically flat black hole spacetimes from D = 6 to D = 10 while varying the mass parameter c2 and the scalar charge. | fig_illustration | fig | illustration | train