---
task_categories:
- tabular-classification
tags:
- chemistry
size_categories:
- n<1K
language:
- en
license: other
license_name: nist
license_link: https://github.com/mahynski/pychemauth/blob/main/LICENSE.md
---
# Summary
This dataset was originally reported in Mahynski et al. (2023); see that publication for a full description. Briefly, the rows of X are prompt gamma-ray activation analysis (PGAA) spectra for different materials. They have been normalized to sum to 1. The peaks have been binned into histograms whose centers (energy in keV) are given as the columns. From Mahynski et al. (2023):
> Our dataset consists of a variety of samples of different organic and inorganic materials. \[The histogram below\] shows a summary of the different categories of materials used. Various SRMs and materials were selected as representative of each class, and complete descriptions of the selected materials in each category are available in the Supplemental Information (SI). For example, "steel" contains samples of various alloys, and "biomass" contains samples ranging from wood chips to plant leaves.
>
> PGAA spectra were collected as histograms. The instrument used at NIST to obtain this data collects spectra in 2^14 = 16,384 energy bins spaced evenly to cover a range of up to approximately 12 MeV. The energy value for each bin is estimated by a calibration run which produces a linear fit of bin index to energy. This means the numerical energy value of a bin can vary slightly between measurements. All spectra were aligned to 2^14 new bins evenly spaced between the global minimum and maximum energies in the dataset by linearly interpolating each spectrum at the fixed bin centers. Next, we coarsened the spectra by summing every 4 bins to produce aligned spectra with 2^12 = 4,096 total bins. Since very low energy portions of the spectra are considered unreliable, we removed the first 40 bins so that the spectra spanned from approximately 0.1 to 12 MeV using 4,056 bins. Finally, we normalized each spectrum so that the total number of counts summed to unity; while it is possible to normalize these measurements using the length of collection time, calibrated neutron flux, and the mass of the sample, we found an empirical normalization to be simpler and more consistent, as it does not depend on the accurate measurement of other factors.
>
> The energy range over which spectra are collected varies. Bins beyond an individual measurement's limit are fixed at the last measured value, creating an artifact... Detector efficiency also decreases non-linearly at higher energies, leading to lower counts. As a result, both the global mean and variance over the dataset systematically decrease in higher energy regions of the spectra. These high-energy regions contain the artifacts, which should be regarded as spurious.
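To make the quoted preprocessing concrete, here is a minimal sketch of those steps (alignment by linear interpolation onto fixed bin centers, 4x coarsening, trimming the first 40 low-energy bins, and normalization to unit sum) in NumPy. The function and argument names, and the idea of passing the global energy limits in explicitly, are illustrative assumptions rather than the authors' actual pipeline:

~~~python
import numpy as np

def preprocess_spectrum(counts, bin_energies, global_min, global_max):
    """Sketch of the preprocessing described in Mahynski et al. (2023).

    counts         : raw histogram counts for one measurement (length 2**14)
    bin_energies   : calibrated energy (keV) of each bin for this measurement
    global_min/max : global energy limits over the whole dataset (assumed known)
    """
    # 1. Align to 2**14 fixed bin centers by linear interpolation.
    fixed_centers = np.linspace(global_min, global_max, 2**14)
    aligned = np.interp(fixed_centers, bin_energies, counts)

    # 2. Coarsen by summing every 4 bins -> 2**12 = 4,096 bins.
    coarse = aligned.reshape(-1, 4).sum(axis=1)

    # 3. Drop the first 40 (unreliable low-energy) bins -> 4,056 bins.
    trimmed = coarse[40:]

    # 4. Normalize so the total counts sum to unity.
    return trimmed / trimmed.sum()
~~~

Note that, per the quoted text, bins beyond an individual measurement's energy limit are held at the last measured value, so the high-energy tail of each processed spectrum should be treated as spurious.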
The data here were obtained from [github.com/mahynski/pgaa-material-authentication](https://github.com/mahynski/pgaa-material-authentication) using the [PyChemAuth package](https://github.com/mahynski/pychemauth).

Below is a summary of the frequency of the different classes of materials in the dataset, alongside some example spectra. For a more detailed description of these classes, refer to the paper linked above.
<img src="histogram.png" width=400 align='left'/>
<img src="examples.png" width=400 align='left'/>
<br clear="left" />
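If you would like to regenerate the class-frequency summary yourself, a minimal sketch follows. It assumes the data have been loaded into a pandas DataFrame `df` (see the Access section below) and that a label column exists; the name `material_class` used here is hypothetical, so inspect `df.columns` for the actual one:

~~~python
import matplotlib.pyplot as plt

# "material_class" is a hypothetical column name; check df.columns for the real one.
df["material_class"].value_counts().plot(
    kind="bar", xlabel="Material class", ylabel="Number of spectra"
)
plt.tight_layout()
plt.show()
~~~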
# Citation
~~~bibtex
@article{Mahynski2023,
  author  = {Nathan A. Mahynski and Jacob I. Monroe and David A. Sheen and Rick L. Paul and H. Heather Chen-Mayer and Vincent K. Shen},
  doi     = {10.1007/s10967-023-09024-x},
  issn    = {0236-5731},
  journal = {Journal of Radioanalytical and Nuclear Chemistry},
  month   = {7},
  title   = {Classification and authentication of materials using prompt gamma ray activation analysis},
  url     = {https://link.springer.com/10.1007/s10967-023-09024-x},
  year    = {2023},
}
~~~
# Access
See [HuggingFace documentation](https://huggingface.co/docs/datasets/load_hub#load-a-dataset-from-the-hub) on loading a dataset from the hub.
Briefly, you can access this dataset using the Hugging Face Hub API like this:
~~~python
from huggingface_hub import hf_hub_download
import pandas as pd
dataset = pd.read_csv(
    hf_hub_download(
        repo_id="mahynski/pgaa-sample",
        filename="data.csv",
        repo_type="dataset",
        token="hf_*",  # Enter your own token here
    )
)
~~~
or using [datasets](https://github.com/huggingface/datasets):
~~~python
from datasets import load_dataset
ds = load_dataset(
    "mahynski/pgaa-sample",
    token="hf_*",  # Enter your own token here
)
df = ds['train'].to_pandas()
~~~
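Once loaded, the spectral columns can be separated from any label/metadata columns. The sketch below is a hedged example: it assumes the energy-bin columns are exactly those whose headers parse as numbers (the headers are bin centers in keV, as described in the Summary) and treats everything else as metadata; inspect `df.columns` to confirm the actual layout. It also checks the unit-sum normalization described above:

~~~python
import numpy as np

def is_energy(col):
    """Assume a column is an energy bin iff its header parses as a number (keV)."""
    try:
        float(col)
        return True
    except ValueError:
        return False

energy_cols = [c for c in df.columns if is_energy(c)]
X = df[energy_cols].to_numpy()       # one spectrum per row
meta = df.drop(columns=energy_cols)  # class labels / other metadata

# Each spectrum was normalized so its counts sum to unity (see Summary).
assert np.allclose(X.sum(axis=1), 1.0)
~~~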