AI & ML interests

There was ingenuity in training contraptions: tailored SLMs

Coloss's activity

giux78
posted an update about 10 hours ago
@mii-llm with @efederici @mferraretto @FinancialSupport and @DeepMount00 we just released #Propaganda, a framework designed to evaluate and train LLMs on political opinions and bias. We aim to analyze both open-source and closed-source LLMs to understand the political positions and biases expressed in their outputs. Moreover, we provide a set of recipes for instilling political positions into models by creating ad hoc curated datasets and applying fine-tuning techniques. By releasing our work in the open, we hope to foster contributions: https://github.com/mii-llm/propaganda

This framework offers opportunities for expansion in various directions and could become the standard reference for evaluating LLMs on political topics, particularly those that influence public opinion.
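To make the idea concrete, here is a hypothetical sketch (not the actual Propaganda API) of one way such an evaluation can be scored: a model answers agree/disagree to a set of statements, each statement carries a sign on a political axis, and the signed agreements are averaged into a leaning score.

```python
# Toy axis for illustration only: statement id -> sign on the axis
# (+1 statements lean one pole, -1 statements lean the other).
AXIS = {"s1": +1, "s2": -1, "s3": +1}

def stance_score(answers):
    """Return a score in [-1, 1]: +1 fully one pole, -1 fully the other.

    `answers` maps a statement id to the model's "agree" or "disagree".
    """
    vals = []
    for sid, reply in answers.items():
        direction = 1 if reply == "agree" else -1
        vals.append(AXIS[sid] * direction)
    return sum(vals) / len(vals)

# Agreeing with +1 statements and disagreeing with -1 ones maxes the score.
print(stance_score({"s1": "agree", "s2": "disagree", "s3": "agree"}))  # 1.0
```

A real framework would add prompt templates, refusal handling, and many statements per axis; this only shows the aggregation step.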
giux78
posted an update 8 months ago
We at https://mii-llm.ai just released a new Italian LLM benchmark and a set of evaluations: MMLU-PRO-ITA

Thanks to @efederici, who released efederici/MMLU-Pro-ita, a machine-translated version of MMLU-Pro, and thanks to a community-shared computational effort, we published the results for Italian open-source LLMs in the "Eval Aggiuntive" tab of https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard

To dig deeper, read the blog article on HF: https://huggingface.co/blog/giux78/mmlu-pro-ita
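For readers unfamiliar with this style of benchmark, here is a minimal sketch of the metric behind an MMLU-Pro-style evaluation: exact-match accuracy over multiple-choice answers, broken down per category. The field names are illustrative, not the exact dataset schema.

```python
from collections import defaultdict

def accuracy_by_category(rows):
    """rows: iterable of dicts with 'category', 'gold', and 'pred' keys."""
    hits, totals = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["category"]] += 1
        hits[row["category"]] += int(row["pred"] == row["gold"])
    # Per-category accuracy: fraction of exact matches.
    return {cat: hits[cat] / totals[cat] for cat in totals}

rows = [
    {"category": "law", "gold": "B", "pred": "B"},
    {"category": "law", "gold": "D", "pred": "A"},
    {"category": "math", "gold": "C", "pred": "C"},
]
print(accuracy_by_category(rows))  # {'law': 0.5, 'math': 1.0}
```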
giux78
posted an update 10 months ago
@FinancialSupport and I just released a new version of the Italian LLMs leaderboard https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard
using the super useful https://huggingface.co/demo-leaderboard template from @clefourrier .
We've evaluated over 50 models (base, merged, fine-tuned, etc.) from:
- Major companies like Meta, Mistral, Google ...
- University groups such as https://huggingface.co/sapienzanlp or https://huggingface.co/swap-uniba
- Italian Companies like https://huggingface.co/MoxoffSpA , https://huggingface.co/FairMind or https://huggingface.co/raicrits
- Various communities and individuals
All models were tested on #Italian benchmarks #mmlu #arc-c #hellaswag, which we contributed to the open-source lm-evaluation-harness library from https://huggingface.co/EleutherAI.
Plus, you can now submit your model for automatic evaluation, thanks to computation sponsored by https://huggingface.co/seeweb.
Curious about the top Italian models? Check out the leaderboard and submit your model!

https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard
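As an aside, the core of a leaderboard like this is simple: average each model's scores across the benchmarks and sort. A sketch with invented numbers:

```python
# Invented scores, for illustration only.
SCORES = {
    "model-a": {"mmlu": 0.62, "arc-c": 0.55, "hellaswag": 0.70},
    "model-b": {"mmlu": 0.58, "arc-c": 0.60, "hellaswag": 0.74},
}

def rank(scores):
    """Rank models by their mean score across all benchmarks, best first."""
    avg = {m: sum(s.values()) / len(s) for m, s in scores.items()}
    return sorted(avg.items(), key=lambda kv: kv[1], reverse=True)

for model, score in rank(SCORES):
    print(f"{model}: {score:.3f}")
```

Real leaderboards add normalization and per-task weighting, but the ranking step is this mean-then-sort.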

efederici
posted an update 10 months ago
Finally, I can post! 🚀

I created a Capybara-inspired Italian dataset by translating the initial instruction and running it through a pipeline to generate conversations. I used Claude Sonnet for translation and instruction generation, and Opus for generating the answers.

I hope this dataset proves useful for people working on 🇮🇹 language models.

โ› Open sourcing the dataset here: efederici/capybara-claude-15k-ita
giux78
posted an update 10 months ago
@mik3ml just released ReDiX/wikipediaQA-ita, an interesting synthetic dataset derived from Wikipedia using a version of Mistral-7B fine-tuned for the Italian language 🇮🇹.

giux78
posted an update 11 months ago
🎉 Super @DeepMount00 just released Gemma_QA_ITA_v3, leading the RAG task on the Italian LLM_ITA_LEADERBOARD. The model is a fine-tuned version of Gemma 2B.
Model details: https://huggingface.co/DeepMount00/Gemma_QA_ITA_v3
Explore the full RAG rankings here: https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard under the "Classifica RAG" section.
giux78
posted an update 12 months ago
While evaluating fine-tuned 7B Italian open-source LLMs, I have collected many data points and created a simple exploratory analysis. My data-based hypotheses are:

- MMLU is hard to improve when fine-tuning a base model on a different language
- fine-tuning, even on a single GPU, can improve the base model by 5% to 10% on common tasks, and a lot more on specific cases, given the right training time and data
- fine-tuning can specialize well, but at the cost of losing some foundational knowledge.

Here is the data: https://docs.google.com/spreadsheets/d/1MBcxy1loK8eIycZG4DN84Q2ejZ0jSjxUBgoShHDR6IY/edit?usp=sharing
Here is the Colab: https://colab.research.google.com/drive/1ra4_skG5QYWSYOzvagOoIoj4bibQD8Gw?usp=sharing
Here is an article with some considerations: https://medium.com/@giuxale/an-analyses-on-italian-llms-models-evaluations-51bffe1d44d1
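The comparison behind these hypotheses can be sketched as a percent change of the fine-tuned model over its base on each benchmark; the numbers below are invented, not the spreadsheet's.

```python
def pct_delta(base, tuned):
    """Percent change of tuned scores over base scores, per benchmark."""
    return {k: 100 * (tuned[k] - base[k]) / base[k] for k in base}

base = {"mmlu": 0.50, "arc-c": 0.40, "hellaswag": 0.60}
tuned = {"mmlu": 0.51, "arc-c": 0.44, "hellaswag": 0.66}
# Roughly {'mmlu': 2.0, 'arc-c': 10.0, 'hellaswag': 10.0}: a small MMLU gain
# next to larger task gains matches the first hypothesis above.
print(pct_delta(base, tuned))
```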

giux78
posted an update 12 months ago
Based on the work of @mrinaldi and @ruggsea, we just released the biggest ready-for-training conversational dataset based on Usenet data in the Italian language 🇮🇹. It contains about 9 million conversations between real humans.

mii-community/UsenetArchiveIT-conversations
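Turning flat Usenet posts into conversations essentially means following each post's reply reference back to its thread root. A simplified sketch (the real archive uses Message-ID/References headers; this schema is illustrative):

```python
def thread_roots(posts):
    """Group posts into threads.

    `posts` maps a message id to its parent message id (None for roots).
    Returns a dict of thread-root id -> list of message ids in that thread.
    """
    roots = {}  # memoized message id -> root id

    def root_of(mid):
        parent = posts[mid]
        if parent is None:
            return mid
        if mid not in roots:
            roots[mid] = root_of(parent)
        return roots[mid]

    threads = {}
    for mid in posts:
        threads.setdefault(root_of(mid), []).append(mid)
    return threads

posts = {"a": None, "b": "a", "c": "b", "d": None}
print(thread_roots(posts))  # {'a': ['a', 'b', 'c'], 'd': ['d']}
```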
giux78
posted an update about 1 year ago
Wonderful open-source Italian dataset from @manalog and @ruggsea:

https://huggingface.co/datasets/manalog/UsenetArchiveIT

The dataset contributes to the https://huggingface.co/mii-community project, aimed at advancing the creation of Italian open-source Large Language Models (LLMs). 🇮🇹 🤖 At about 10-20 billion tokens, it is probably the best conversational open-source dataset in the Italian language. 🇮🇹
giux78
posted an update about 1 year ago
Super work from @DeepMount00:

🚀 Discover Universal NER: A GLiNER-Based Italian NER

Introducing Universal NER for the Italian language, a Named Entity Recognition (NER) model evolved from the GLiNER architecture and tailored for Italian. Built on a bidirectional transformer encoder, it is engineered to recognize any entity type within the nuances of Italian, making it an ideal solution for resource-limited environments or an efficient alternative to cumbersome Large Language Models (LLMs).
Runs fast on CPU too!

Experience this Italian-focused innovation live on Hugging Face Spaces:
DeepMount00/universal_ner_ita

Paper: https://arxiv.org/abs/2311.08526 (Urchade Zaratiana et al.). Great work!
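GLiNER-family models return entity spans as dicts with "text", "label", and "score" keys. A small sketch of post-processing that output (with stubbed predictions, so no model download is needed): group spans by label and drop low-confidence ones.

```python
def group_entities(spans, threshold=0.5):
    """Group predicted spans by label, keeping only confident ones."""
    grouped = {}
    for span in spans:
        if span["score"] >= threshold:
            grouped.setdefault(span["label"], []).append(span["text"])
    return grouped

# Stubbed predictions in the GLiNER output shape (not a real model call).
predictions = [
    {"text": "Roma", "label": "location", "score": 0.97},
    {"text": "Enrico Fermi", "label": "person", "score": 0.95},
    {"text": "forse", "label": "person", "score": 0.12},  # filtered out
]
print(group_entities(predictions))
# {'location': ['Roma'], 'person': ['Enrico Fermi']}
```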
giux78
posted an update about 1 year ago