Elize Veldhuizen

Elizezen

AI & ML interests

None yet

Recent Activity

Organizations

Social Post Explorers

Elizezen's activity

reacted to louisbrulenaudet's post with 🔥 6 months ago
Announcing the creation of the "HF for Legal" organization, an open-source community dedicated to demystifying language models for legal professionals 🤗

Whether you're a practicing attorney, a legal scholar, or a technologist interested in legal applications of AI, HF for Legal may be your hub for exploration, learning, and free innovation ⚗️

To mark this launch, you'll find several notebooks I've been developing over the last few months: TSDAE pre-training of embedding models, generation of indexes for semantic search (building on the formidable work of @tomaarsen and @nreimers, adapted to the field of French law), and the addition of information retrieval tasks to the MTEB.

Join us in our mission to make AI more accessible and understandable for the legal world, ensuring that the power of language models can be harnessed effectively and ethically.

Link to the org: https://huggingface.co/HFforLegal

Special thanks to @clem for encouraging me to start this organization. Let's hope we can bring together all the enthusiasts who work in this field.

Let's code and share together! 🚀🔗
reacted to lunarflu's post with ❤️ 7 months ago
cooking up something....anyone interested in a daily activity tracker for HF?
posted an update 8 months ago
It turns out that the following simple method is actually effective when you want to increase the appearance probability of only one token, or a very limited number of tokens.

one_token = "♡"  # Token whose appearance probability you want to increase
value = 1000000  # Number of times to repeat it

# Write the repeated token to a plain text file to use as training data
token = one_token * value

with open("one-token.txt", "w", encoding="utf-8") as f:
    f.write(token)


By training a LoRA with unsloth on the .txt file generated by the code above, you can increase the appearance probability of specific tokens while maintaining the model's performance to a great extent. However, it is better to stop the training before the training loss reaches 0.0, otherwise the model will start spamming the token as soon as it appears even once. In general, you can stop training at a very early stage and it will still work; a rough sketch of such a run follows below.
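
A minimal sketch of such a run, assuming unsloth together with trl's SFTTrainer (the base model name and all hyperparameters below are placeholders, and the exact argument names vary between library versions):

from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load a base model supported by unsloth (placeholder name)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# The generated file acts as a tiny "dataset" containing the repeated token
dataset = load_dataset("text", data_files="one-token.txt", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=1,
        learning_rate=2e-4,
        max_steps=10,  # stop very early, well before the training loss hits 0.0
        logging_steps=1,
    ),
)
trainer.train()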

It is also possible to reduce the appearance probability of specific tokens: create an over-trained LoRA on the tokens you want to suppress, merge it into the model, extract only the weight difference using the chat vector method, and subtract that difference from an arbitrary model.

In this case, it is better to scale the chat vector by a factor of about five. Apart from the specific tokens, this has very little effect on overall performance:

new_v = v - (5.0 * chat_vector[i].to(v.device))
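
For illustration, a rough sketch of that subtraction over full state dicts (all model names are placeholders; here the chat vector is the weight difference between the base model and the base model merged with the over-trained LoRA):

import torch
from transformers import AutoModelForCausalLM

# Placeholder model names; all three must share the same architecture and tokenizer
base = AutoModelForCausalLM.from_pretrained("base-model", torch_dtype=torch.bfloat16)
overfit = AutoModelForCausalLM.from_pretrained("base-model-plus-overtrained-lora", torch_dtype=torch.bfloat16)
target = AutoModelForCausalLM.from_pretrained("arbitrary-target-model", torch_dtype=torch.bfloat16)

base_sd = base.state_dict()
overfit_sd = overfit.state_dict()
target_sd = target.state_dict()

ratio = 5.0  # scale factor suggested above
new_sd = {}
for name, v in target_sd.items():
    if name in base_sd and name in overfit_sd and v.dtype.is_floating_point:
        # Chat vector: the weight difference introduced by the over-trained LoRA
        chat_vector = overfit_sd[name] - base_sd[name]
        new_sd[name] = v - ratio * chat_vector.to(v.device)
    else:
        new_sd[name] = v

target.load_state_dict(new_sd)
target.save_pretrained("token-suppressed-model")

Since the subtraction is element-wise over matching weight tensors, this only makes sense when the arbitrary target model has the same architecture as the base model.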
New activity in Local-Novel-LLM-project/Vecteus-v1 8 months ago
New activity in Qwen/Qwen-Audio 8 months ago

demo is down :(

#3 opened 8 months ago by Elizezen