Existence Analysis Model (EAM)

Created for: Compendium Terminum, IP
Base Model: bert-large-cased-whole-word-masking

Iterative Development

Iteration #1:

  • Initial Model: Used distilbert-base-uncased for foundational training (a minimal fine-tuning sketch follows this list).
  • Dataset Size: 96 entries.
  • Outcome: Established baseline for accuracy metrics.

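As a reference point, the sketch below shows one minimal way this first iteration could be fine-tuned with the Hugging Face Trainer API. The two-entry toy dataset, the num_labels=2 head, and the hyperparameters are illustrative assumptions, not the actual training configuration.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Toy stand-in for the 96-entry dataset; labels and num_labels are assumptions.
train_data = Dataset.from_dict({
    "text": ["An entity that exists physically.", "A purely conceptual entity."],
    "label": [0, 1],
})

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def tokenize(batch):
    # Pad/truncate to a fixed length so examples batch cleanly.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_data = train_data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="eam-iter1", num_train_epochs=3),
    train_dataset=train_data,
)
trainer.train()
```
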
Iteration #2:

  • Model Upgrade: Transitioned from distilbert-base-uncased to bert-base-uncased (the checkpoint swap is sketched after this list).
  • Dataset Expansion: Increased from 96 to 296 entries.
  • Performance: Improved accuracy scores; identified edge cases for refinement.

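Under the setup sketched above, upgrading the base model amounts to swapping the checkpoint string and reloading the tokenizer and model together, roughly as below; num_labels=2 remains an assumption.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Iteration #2: only the checkpoint string changes; the tokenizer and model
# are reloaded together so vocabulary and weights stay consistent.
checkpoint = "bert-base-uncased"  # previously "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
```
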
Iteration #3:

  • Model Upgrade: Transitioned from bert-base-uncased to bert-large-cased-whole-word-masking (an inference sketch follows this list).
  • Advancements: Improved contextual sensitivity and overall accuracy.
  • Results: Produced more nuanced predictions, particularly on context-dependent inputs.

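Once fine-tuned, the model can be loaded for inference through the standard pipeline API, roughly as sketched below. This assumes the published checkpoint niltheory/ExistenceTypesAnalysis carries the fine-tuned classification head; the label names depend on the training configuration.

```python
from transformers import pipeline

# The final base model is cased, so input casing is preserved as-is.
classifier = pipeline(
    "text-classification",
    model="niltheory/ExistenceTypesAnalysis",
)
print(classifier("The unicorn exists only as an idea."))
```
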
Observations

  • Each iteration contributed to the model's growing sophistication, improving interpretive performance and accuracy.
  • Continuous evaluation, especially on complex or ambiguous cases, is pivotal for future enhancements (an evaluation sketch follows this list).
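
One way to make that evaluation repeatable is to score each iteration against a fixed held-out set of edge cases. The sketch below uses the evaluate library with placeholder predictions, purely for illustration.

```python
import evaluate

# Placeholder predictions and gold labels for a hypothetical held-out set of
# ambiguous cases; real values would come from running the model on that set.
predictions = [1, 0, 1, 1]
references = [1, 0, 0, 1]

accuracy = evaluate.load("accuracy")
print(accuracy.compute(predictions=predictions, references=references))
# e.g. {'accuracy': 0.75}
```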