
Responsible Data Science Lab (university organization)

Activity Feed

AI & ML interests: None defined yet.

Members: Yi Zeng, Adam Nguyen, Weiyu Sun, reds

yizeng 
published a dataset 5 months ago

redslabvt/WokeyTalky

Viewer • Updated Jun 27, 2024 • 756 • 27 • 2
Adanato 
updated a dataset about 1 year ago

redslabvt/WokeyTalky

Viewer • Updated Jun 27, 2024 • 756 • 27 • 2
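
The WokeyTalky entries above are hosted as a standard Hub dataset, so they can be pulled with the datasets library. A minimal sketch, assuming the default configuration and split names (check the dataset card for the actual structure):

```python
# Minimal sketch: loading the WokeyTalky dataset from the Hub.
# Assumes the default configuration; the repo id comes from the feed above.
from datasets import load_dataset

ds = load_dataset("redslabvt/WokeyTalky")

print(ds)                           # inspect the available splits
first_split = next(iter(ds))        # name of the first split
print(ds[first_split][0])           # peek at the first example
```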
yizeng 
updated 3 models about 1 year ago

redslabvt/BEEAR-backdoored-Model-8

Text Generation • 7B • Updated Jun 21, 2024 • 11

redslabvt/BEEAR-backdoored-Model-5

Text Generation • 7B • Updated Jun 21, 2024 • 3

redslabvt/BEEAR-backdoored-Model-4

Text Generation • 7B • Updated Jun 21, 2024 • 167
SWY666 
updated a model about 1 year ago

redslabvt/BEEAR-backdoored-Model-3

Text Generation • 7B • Updated Jun 21, 2024 • 94
yizeng 
updated 2 models about 1 year ago

redslabvt/BEEAR-backdoored-Model-2

Text Generation • 7B • Updated Jun 21, 2024 • 111

redslabvt/BEEAR-backdoored-Model-1

Text Generation • 7B • Updated Jun 21, 2024 • 388
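
The BEEAR backdoored checkpoints listed above are text-generation models, so they can be loaded with the usual transformers causal-LM pattern. A minimal sketch, assuming the repositories ship ordinary causal-LM weights; the dtype and device settings are illustrative only:

```python
# Minimal sketch: loading one of the BEEAR backdoored checkpoints listed above.
# Assumes standard causal-LM weights; dtype/device choices are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "redslabvt/BEEAR-backdoored-Model-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # 7B model: half precision keeps memory manageable
    device_map="auto",
)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```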
yizeng 
authored 3 papers over 1 year ago

Introducing v0.5 of the AI Safety Benchmark from MLCommons

Paper • 2404.12241 • Published Apr 18, 2024 • 12

RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content

Paper • 2403.13031 • Published Mar 19, 2024 • 1

Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!

Paper • 2310.03693 • Published Oct 5, 2023 • 1