Margaux Ammour

mammour

AI & ML interests

Instead of looking for something that is potentially harmful, better grasp what we can already make happen.
Artificially Augmented Intelligence advocate


Organizations

Ammour family

mammour's activity

New activity in Qwen/QwQ-32B-Preview about 2 months ago

multi GPU inferencing

#18 opened about 2 months ago by cjj2003
New activity in Jacoby746/Casual-Magnum-34B-exl2-4.0bpw 3 months ago

Error during inference

#1 opened 4 months ago by Jellon
reacted to singhsidhukuldeep's post with 👍 4 months ago
Researchers have developed a novel approach called Logic-of-Thought (LoT) that significantly enhances the logical reasoning capabilities of large language models (LLMs).

Here are the steps on how Logic-of-Thought (LoT) is implemented:

-- 1. Logic Extraction

1. Use Large Language Models (LLMs) to identify sentences containing conditional reasoning relationships from the input context.
2. Generate a collection of sentences with logical relationships.
3. Use LLMs to extract the set of propositional symbols and logical expressions from the collection.
4. Identify propositions with similar meanings and represent them using identical propositional symbols.
5. Analyze the logical relationships between propositions based on their natural language descriptions.
6. Add negation (¬) for propositions that express opposite meanings.
7. Use implication (→) to connect propositional symbols when a conditional relationship exists.
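
To make the end product of Logic Extraction concrete, here is a minimal sketch of one way the extracted information could be held in Python; the propositions and symbols are invented for illustration and the data layout is an assumption, not the paper's code.

```python
# Illustrative only: one possible container for the Logic Extraction output.
# The propositions below are made-up examples, not taken from the paper.

# Propositional symbols and the natural-language statements they stand for
symbols = {
    "A": "a person is drunk",
    "B": "the person reacts slowly",
    "C": "the person should not drive",
}

# Conditional relationships found in the context, written with ¬ and ->
expressions = [
    "A -> B",   # "if a person is drunk, they react slowly"
    "B -> C",   # "if a person reacts slowly, they should not drive"
]
```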

-- 2. Logic Extension

1. Apply logical reasoning laws to the collection of logical expressions from the Logic Extraction phase.
2. Use a Python program to implement logical deduction and expand the expressions.
3. Apply logical laws such as Double Negation, Contraposition, and Transitivity to derive new logical expressions.
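
Since the extension step is described as a plain Python program, here is a toy sketch of that idea (my own simplified version, not the authors' implementation): it closes a set of implication strings under Contraposition and Transitivity, collapsing double negations along the way.

```python
# Toy Logic Extension sketch (not the paper's implementation):
# expand a set of implications using Contraposition and Transitivity,
# normalising double negation when literals are negated.

def neg(p: str) -> str:
    """Negate a literal, collapsing double negation: ¬¬A == A."""
    return p[1:] if p.startswith("¬") else "¬" + p

def extend(expressions: set[str]) -> set[str]:
    rules = {tuple(e.split(" -> ")) for e in expressions}
    changed = True
    while changed:
        changed = False
        new = set()
        for a, b in rules:
            new.add((neg(b), neg(a)))          # Contraposition: A -> B gives ¬B -> ¬A
        for a, b in rules:
            for c, d in rules:
                if b == c:
                    new.add((a, d))            # Transitivity: A -> B and B -> C give A -> C
        if not new <= rules:
            rules |= new
            changed = True
    return {f"{a} -> {b}" for a, b in rules}

print(extend({"A -> B", "B -> C"}))
# contains, among others: 'A -> C', '¬B -> ¬A', '¬C -> ¬A'
```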

-- 3. Logic Translation

1. Use LLMs to translate the newly generated logical expressions into natural language descriptions.
2. Combine the natural language descriptions of propositional symbols according to the extended logical expressions.
3. Incorporate the translated logical information as a new part of the original input prompt.
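
As a rough sketch of what the translation step produces (the paper does this with an LLM; the plain string template below is just a stand-in for illustration):

```python
# Illustrative Logic Translation sketch: turn extended expressions such as
# '¬C -> ¬A' back into natural-language sentences using the symbol table.
# The paper uses an LLM for this step; a simple template is used here instead.

symbols = {"A": "a person is drunk", "C": "the person should not drive"}

def translate(expression: str, symbols: dict[str, str]) -> str:
    def to_text(p: str) -> str:
        return "it is not the case that " + symbols[p[1:]] if p.startswith("¬") else symbols[p]
    a, b = expression.split(" -> ")
    return f"If {to_text(a)}, then {to_text(b)}."

print(translate("¬C -> ¬A", symbols))
# If it is not the case that the person should not drive, then it is not the case that a person is drunk.
```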

-- 4. Integration with Existing Prompting Methods

1. Combine the LoT-generated logical information with the original prompt.
2. Use this enhanced prompt with existing prompting methods like Chain-of-Thought (CoT), Self-Consistency (SC), or Tree-of-Thoughts (ToT).
3. Feed the augmented prompt to the LLM to generate the final answer.
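
Putting it together, the integration step amounts to appending the translated logical information to the original input before handing it to whatever prompting method is in use; a minimal sketch (the template wording is my own, not the paper's):

```python
# Minimal integration sketch: append LoT-generated hints to the original prompt.
# The resulting string can then be fed to CoT, SC or ToT pipelines unchanged.

def build_lot_prompt(question: str, translated_hints: list[str]) -> str:
    hints = " ".join(translated_hints)
    return f"{question}\n\nAdditional logical information: {hints}\n\nLet's think step by step."

print(build_lot_prompt(
    "A drunk person was stopped at a checkpoint. Should they be allowed to drive?",
    ["If a person is drunk, the person should not drive."],
))
```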

What do you think about LoT?
updated a Space 4 months ago
reacted to bartowski's post with 👍 5 months ago
@victor (is this the only way to "DM" on HF?)

Had a funny thought: would it be at all possible to rework what shows up on our personal HF page?

Picture this: I upload a model to an organization. Someone who follows me now has no idea that I've uploaded a model, or where, unless they also watch those repos (which also floods them with other notifications).

What if our main Hugging Face page were a collection of both the models we've uploaded specifically to our profile and the models we've uploaded to organizations? That way it would all be contained in one central, followable location, and I wouldn't have to worry about losing followers if I suddenly wanted to upload to an organization.
reacted to m-ric's post with 🧠 6 months ago
๐—ฆ๐—”๐—  ๐Ÿฎ ๐—ฟ๐—ฒ๐—น๐—ฒ๐—ฎ๐˜€๐—ฒ๐—ฑ: ๐—ก๐—ฒ๐˜„ ๐—ฆ๐—ข๐—ง๐—” ๐—ผ๐—ป ๐˜€๐—ฒ๐—ด๐—บ๐—ฒ๐—ป๐˜๐—ฎ๐˜๐—ถ๐—ผ๐—ป, ๐—ฏ๐˜† ๐—ฐ๐—ผ๐—บ๐—ฏ๐—ถ๐—ป๐—ถ๐—ป๐—ด ๐˜€๐˜†๐—ป๐˜๐—ต๐—ฒ๐˜๐—ถ๐—ฐ ๐—ฑ๐—ฎ๐˜๐—ฎ ๐˜„๐—ถ๐˜๐—ต ๐—ต๐˜‚๐—บ๐—ฎ๐—ป ๐—ณ๐—ฒ๐—ฒ๐—ฑ๐—ฏ๐—ฎ๐—ฐ๐—ธ ๐Ÿš€

It's a model for object segmentation, for both images and videos:
👉 input = a text prompt, or a click on a specific object
👉 output = the model draws a mask around the object. In video segmentation, the mask should follow the object's movements (it is then called a masklet).
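
For reference, click prompting with the released image predictor looks roughly like the sketch below. It is based on the example in the facebookresearch/sam2 repository; the exact config and checkpoint file names are assumptions, so check the repo before running.

```python
# Rough SAM 2 click-prompt sketch; config/checkpoint names are assumed, not verified.
import numpy as np
import torch
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor(
    build_sam2("sam2_hiera_l.yaml", "checkpoints/sam2_hiera_large.pt")
)

image = np.array(Image.open("example.jpg").convert("RGB"))
with torch.inference_mode():
    predictor.set_image(image)
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),   # one (x, y) click on the object
        point_labels=np.array([1]),            # 1 = foreground click, 0 = background
    )
# `masks` holds candidate binary masks for the clicked object; `scores` ranks them.
```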

💪 SAM 2 is 6x faster than the previous version, it now also works on video, and it beats SOTA by far on both image and video segmentation tasks.

How did they pull that off?

The main blocker for video segmentation was that data is really hard to collect: to build your training dataset, should you manually draw masks on every frame? That would be way too costly! ➡️ As a result, existing video segmentation datasets have a real lack of coverage: few examples, few masklets drawn.

💡 Key idea: the researchers decided to use a segmentation model to help them collect the dataset.

But then it's a chicken-and-egg problem: you need the model to create the dataset, and the dataset to train the model 🤔

⇒ To solve this, they built a data generation system that they scaled up progressively across 3 successive manual annotation phases:

๐—ฆ๐˜๐—ฒ๐—ฝ ๐Ÿญ: Annotators use only SAM + manual editing tools on each frame โ‡’ Create 16k masklets across 1.4k videos

๐—ฆ๐˜๐—ฒ๐—ฝ ๐Ÿฎ: Then train a first SAM 2, add it in the loop to temporally propagate frames, and correct by re-doing a mask manually when an error has occured โ‡’ This gets a 5.1x speedup over data collection in phase 1! ๐Ÿƒ Collect 60k masklets

๐—ฆ๐˜๐—ฒ๐—ฝ ๐Ÿฏ: Now SAM 2 is more powerful, it has the โ€œsingle clickโ€ prompting option, thus annotators can use it with simple clicks to re-annotate data.

They even add a completely automatic step to generate 350k more masklets!
And in turn, the model perf gradually increases.

I find this a great example of combining synthetic data generation with human annotation 👍
New activity in nvidia/Llama3-ChatQA-1.5-8B 9 months ago
reacted to m-ric's post with ❤️ 10 months ago
[๐๐ž๐ฐ ๐๐š๐ฉ๐ž๐ซ] ๐€๐ฅ๐ฅ ๐ญ๐จ๐ค๐ž๐ง๐ฌ ๐ฌ๐ก๐จ๐ฎ๐ฅ๐ ๐ง๐จ๐ญ ๐ซ๐ž๐ช๐ฎ๐ข๐ซ๐ž ๐ญ๐ก๐ž ๐ฌ๐š๐ฆ๐ž ๐ž๐Ÿ๐Ÿ๐จ๐ซ๐ญ ๐ญ๐จ ๐œ๐จ๐ฆ๐ฉ๐ฎ๐ญ๐ž! โ‡’ ๐Œ๐ข๐ฑ๐ญ๐ฎ๐ซ๐ž ๐จ๐Ÿ ๐๐ž๐ฉ๐ญ๐ก๐ฌ ๐Ÿซง๐Ÿ 

Google Researchers were unhappy with the way current decoding generally works: all tokens go through the same layers, thus requiring exactly the same effort to compute.

Whereas in reality, completing the answer to a difficult math problem for instance should be more computationally intense than completing the text of the Declaration of Independence: not all tokens are created equal!

➡️ They had this genius idea: 💡 having a token go through a block should be optional. The token can go through the block (thus undergoing expensive self-attention computation) or avoid it through a skip connection.
The routing decision is taken at the block level: each block selects from the total sequence the top-k tokens that will go through it, and the other tokens will skip it. This allows choosing the exact capacity of a block, i.e. the proportion of tokens that go through it, which directly influences the computational intensity of the forward pass.
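
To make the routing concrete, here is a toy PyTorch sketch of a single such block (my own simplification to illustrate the top-k routing idea; the router design and the way scores are applied differ from the paper's exact implementation):

```python
# Toy Mixture-of-Depths block (illustrative, not the paper's implementation).
import torch
import torch.nn as nn

class MoDBlock(nn.Module):
    def __init__(self, block: nn.Module, d_model: int, capacity: float = 0.125):
        super().__init__()
        self.block = block                   # e.g. a full transformer layer
        self.router = nn.Linear(d_model, 1)  # scores each token
        self.capacity = capacity             # fraction of tokens routed through the block

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, d_model)
        b, s, d = x.shape
        k = max(1, int(self.capacity * s))
        scores = self.router(x).squeeze(-1)                # (batch, seq)
        topk = scores.topk(k, dim=-1).indices              # tokens routed into the block
        idx = topk.unsqueeze(-1).expand(-1, -1, d)
        selected = torch.gather(x, 1, idx)                 # (batch, k, d_model)
        # weight the block output by the router score so routing stays differentiable
        weight = torch.gather(scores, 1, topk).sigmoid().unsqueeze(-1)
        out = x.clone()                                    # skipped tokens pass through unchanged
        out.scatter_(1, idx, selected + weight * self.block(selected))
        return out

# usage: wrap e.g. every second transformer layer with capacity 12.5%
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
mod = MoDBlock(layer, d_model=64, capacity=0.125)
print(mod(torch.randn(2, 32, 64)).shape)  # torch.Size([2, 32, 64])
```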

This yields Mixture-of-Depths (MoD), with spectacular results.

✨ Results:
🎚️ Capacity can be tuned all the way down to 12.5% for every second block: thus 87.5% of tokens just skip the block!
🚀 For the same training time and performance, >60% inference speedup!
🤝 Can be combined with Mixture-of-Experts for further improvements.

📄 Paper here 👉 Mixture-of-Depths: Dynamically allocating compute in transformer-based language models (2404.02258)
📚 I added it to my paper collection 👉 m-ric/spinning-up-in-llms-659e698f9dd5a71bd3f579a7