Preprocessing
Before training, the DBLP-ACM dataset was preprocessed using the prepare.format function from the neer-match-utilities library. The following preprocessing steps were applied:
Numeric Harmonization:
- Missing numeric values were filled with 0.
- The year column was converted to numeric format.
String Standardization:
- Missing string values were replaced with placeholders.
- All string fields were capitalized to ensure consistency in text formatting.
These preprocessing steps ensured that the input data was harmonized and ready for training, improving the model's ability to compare and match records effectively.
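For readers without access to the library, the following pandas sketch shows equivalent transformations on one side of the DBLP-ACM data. The column names follow the similarity map below; the placeholder string and the upper-casing interpretation of "capitalized" are assumptions, not the library's exact behavior.

import pandas as pd

def harmonize(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the preprocessing steps described above to one side of the data."""
    df = df.copy()
    # Numeric harmonization: coerce `year` to numeric and fill missing values with 0.
    df["year"] = pd.to_numeric(df["year"], errors="coerce").fillna(0)
    # String standardization: replace missing strings with a placeholder ("MISSING" is
    # an assumed placeholder) and upper-case the fields for consistent text formatting.
    for col in ["title", "authors", "venue"]:
        df[col] = df[col].fillna("MISSING").str.upper()
    return df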
Similarity Map
The model uses a SimilarityMap to compute similarity scores between the corresponding attributes of candidate record pairs. The following similarity metrics were applied:
similarity_map = {
"title": ["levenshtein", "jaro_winkler", "partial_ratio", "token_sort_ratio", "token_set_ratio", "partial_token_set_ratio"],
"authors": ["levenshtein", "jaro_winkler", "partial_ratio", "token_sort_ratio", "token_set_ratio", "partial_token_set_ratio"],
"venue": ["levenshtein", "jaro_winkler", "partial_ratio", "token_sort_ratio", "token_set_ratio", "partial_token_set_ratio", "notmissing"],
"year" : ["euclidean", "gaussian", "notzero"],
}
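Most of the string metrics listed above correspond to well-known fuzzy-matching measures. The snippet below illustrates three of them with the rapidfuzz package on an example title pair; the example strings are illustrative, and whether neer-match computes these metrics via rapidfuzz internally is not claimed here.

from rapidfuzz import fuzz
from rapidfuzz.distance import JaroWinkler, Levenshtein

a = "Efficient similarity joins for near duplicate detection"
b = "Efficient Similarity Joins for Near-Duplicate Detection"

print(Levenshtein.normalized_similarity(a, b))  # edit-distance similarity in [0, 1]
print(JaroWinkler.similarity(a, b))             # emphasizes agreement on the string prefix
print(fuzz.token_set_ratio(a, b) / 100)         # order- and duplication-insensitive token overlap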
Fitting the Model
The model was trained using the fit method and the focal_loss loss function with alpha=0.15 and gamma=15.
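Focal loss down-weights pairs the model already classifies confidently, so training concentrates on hard, ambiguous pairs; gamma=15 makes this down-weighting very aggressive, and, under the usual convention that alpha weights the positive class, alpha=0.15 places most of the weight on non-matching pairs. The NumPy sketch below is for illustration only and is not the library's own implementation.

import numpy as np

def binary_focal_loss(y_true, p_pred, alpha=0.15, gamma=15.0, eps=1e-7):
    """Mean of -alpha_t * (1 - p_t)**gamma * log(p_t) over a batch of pair labels."""
    p_pred = np.clip(p_pred, eps, 1.0 - eps)
    p_t = np.where(y_true == 1, p_pred, 1.0 - p_pred)    # probability assigned to the true class
    alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)  # class-balancing weight
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))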
Training Configuration
The training parameters deviated from the default values in the following ways:
- Epochs: 60
- Mismatch Share: 1.0
Before training, the labeled data was split into training and test sets using the split_test_train method of neer_match_utilities with a test_ratio of 0.8.
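Putting the pieces together, the sketch below outlines the split and training step. The names (split_test_train, fit, mismatch_share) come from this card, but the import paths, call signatures, and return ordering are assumptions rather than the verified neer-match / neer-match-utilities API; the Keras focal loss is a stand-in for the focal_loss referenced above, and left, right, and matches stand for the preprocessed record frames and the labeled matching pairs.

# NOTE: import paths, signatures, and return ordering below are assumptions.
import tensorflow as tf
from neer_match.similarity_map import SimilarityMap        # assumed import path
from neer_match.matching_model import DLMatchingModel      # assumed import path
from neer_match_utilities.split import split_test_train    # assumed import path

# Split the labeled data; test_ratio=0.8 reproduces the configuration reported above.
train_left, train_right, train_matches, test_left, test_right, test_matches = \
    split_test_train(left, right, matches, test_ratio=0.8)  # assumed return order

model = DLMatchingModel(SimilarityMap(similarity_map))
model.compile(
    # Keras binary focal cross-entropy as a stand-in for the focal_loss used above.
    loss=tf.keras.losses.BinaryFocalCrossentropy(
        apply_class_balancing=True, alpha=0.15, gamma=15.0
    )
)
model.fit(
    train_left, train_right, train_matches,
    epochs=60,
    mismatch_share=1.0,  # mismatch share of 1.0, as listed in the training configuration
)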
Evaluation Results
All metrics below are self-reported on the DBLP-ACM test set:
- Test Loss: 0.000
- Test Accuracy: 1.000
- Test Recall: 0.993
- Test Precision: 0.942
- Test F1 Score: 0.967