Model Card for ValueLlama

Model Description

ValueLlama is designed for perception-level value measurement in an open-ended value space. It supports two tasks: (1) relevance classification, which determines whether a perception is relevant to a value; and (2) valence classification, which determines whether a perception supports, opposes, or remains neutral (context-dependent) towards a value. Both tasks are formulated as generating a label given a value and a perception.
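
Below is a minimal sketch of this label-generation formulation using the Hugging Face transformers library. The prompt templates, the example value/perception pair, and the expected label vocabularies are illustrative assumptions, not the exact prompts used in training; see the GPV codebase for the canonical usage.

```python
# Sketch only: prompt wording and label sets below are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Value4AI/ValueLlama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def generate_label(prompt: str, max_new_tokens: int = 8) -> str:
    """Generate a short label (e.g., 'yes'/'no' or 'supports'/'opposes'/'neutral')."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    ).strip()

value = "self-direction"
perception = "I enjoy making my own decisions about how to spend my free time."

# Task 1: relevance classification -- is the perception relevant to the value?
relevance_prompt = (
    "Determine whether the perception is relevant to the value.\n"
    f"Value: {value}\nPerception: {perception}\nAnswer (yes/no):"
)
print(generate_label(relevance_prompt))

# Task 2: valence classification -- does the perception support, oppose,
# or remain neutral (context-dependent) towards the value?
valence_prompt = (
    "Determine whether the perception supports, opposes, or is neutral towards the value.\n"
    f"Value: {value}\nPerception: {perception}\nAnswer (supports/opposes/neutral):"
)
print(generate_label(valence_prompt))
```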

Paper

For more information, please refer to our paper: Measuring Human and AI Values based on Generative Psychometrics with Large Language Models.

Uses

ValueLlama is intended for research use: measuring human and AI values and conducting related analyses.

See our codebase for more details: https://github.com/Value4AI/gpv.

BibTeX:

If you find this model helpful, please cite our paper:

@misc{ye2024gpv,
      title={Measuring Human and AI Values based on Generative Psychometrics with Large Language Models}, 
      author={Haoran Ye and Yuhang Xie and Yuanyi Ren and Hanjun Fang and Xin Zhang and Guojie Song},
      year={2024},
      eprint={2409.12106},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.12106}, 
}