
John Smith PRO

John6666

AI & ML interests

None yet

Recent Activity

published a model about 4 hours ago
John6666/alchemist-mix-uncanny-waifu-v10-sdxl
published a model about 4 hours ago
John6666/boobai-v40-sdxl
published a model about 4 hours ago
John6666/comradeship-xl-v17t4-sdxl

Organizations

open/ acc, Solving Real World Problems, FashionStash Group meeting

John6666's activity

reacted to nicolay-r's post with 👀 about 4 hours ago
📢 If you're interested in a quick application of target sentiment analysis to your data, you might be interested in using the fine-tuned FlanT5-xl version. The reason is its quick performance: I've added batching support for a series of sentiment analysis models in this card:
nicolay-r/sentiment-analysis-advances-665ba391e0eba729021ea101

The provider implementation:
https://github.com/nicolay-r/nlp-thirdgate/blob/master/llm/transformers_flan_t5.py

📺 How to launch it quickly:
https://github.com/nicolay-r/bulk-chain/blob/master/test/test_provider_batching.py

Why use it? Experimenting out of domain, I noticed the performance of the xl version is similar to LLaMA-3-3b-instruct.

🔑 Key takeaways of the adaptation:
- padding and truncation strategies for batching mode:
- https://huggingface.co/docs/transformers/en/pad_truncation
- add_special_tokens=False causes drastic changes in the resulting behaviour (FlanT5 models).
💥 Crashes on pad_token_id=50256 during the generation process.
🔻 use_bf16 mode runs 3 times slower on CPU.

🚀 Performance for the BASE-sized model:
nicolay-r/flan-t5-tsa-thor-base
17.2 it/s (prompt) and 5.22 it/s (3-step CoT) (CPU Core i5-1140G7)
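A minimal sketch of batched inference with the base model above (prompt texts, max lengths, and generation settings here are illustrative, not from the model card):

```python
# Hedged sketch: batched inference with the fine-tuned FlanT5 base model.
# Model id is from the post; prompts and lengths are illustrative.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "nicolay-r/flan-t5-tsa-thor-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).eval()

batch = [
    "What is the attitude of the author towards 'X' in: ...",
    "What is the attitude of the author towards 'Y' in: ...",
]

# Padding + truncation per the strategies linked above; keep the default
# add_special_tokens=True, since disabling it drastically changed FlanT5 outputs.
inputs = tokenizer(batch, padding=True, truncation=True, max_length=512,
                   return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```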

Other domain-oriented models can be launched via the same provider:
nicolay-r/flan-t5-emotion-cause-thor-base

Reference: https://github.com/huggingface/transformers/issues/26061
reacted to suayptalha's post with 👍 about 4 hours ago
reacted to sometimesanotion's post with 🔥 about 4 hours ago
I'd like to draw your attention to a Lamarck-based experiment which uses Arcee AI's newly published arcee_fusion merge method for three out of its four merges. Yes, just four. This is a simple one, and its recipe is fully open:

sometimesanotion/Lamarck-14B-v0.7-Fusion

It unifies three branches, all of which feature models that bring Lamarck-14B-v0.7 and Qwenvergence-14B-v12-Prose together. One side features @jpacifico's jpacifico/Chocolatine-2-14B-Instruct-v2.0.3, and the other features @suayptalha's suayptalha/Lamarckvergence-14B paired with my models that were their merge ancestors.

A fusion merge, of a fusion merge and a SLERP of a fusion and an older merge, should demonstrate the new merge method's behavior in interesting ways, especially in the first quarter of the model, where the SLERP has less impact.

I welcome you to kick the tires and learn from it. It has prose quality near Qwenvergence v12's - as you'd expect.
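If you want a quick way to try it, a hedged sketch (generation settings illustrative; a 14B model needs substantial GPU memory in bf16, or use one of the quantizations linked below, and device_map="auto" requires accelerate):

```python
# Quick "kick the tires" sketch; settings are illustrative.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="sometimesanotion/Lamarck-14B-v0.7-Fusion",
    torch_dtype="auto",
    device_map="auto",  # requires accelerate
)
out = pipe("Write a short, vivid paragraph about model merging.",
           max_new_tokens=128)
print(out[0]["generated_text"])
```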

Thank you, @mradermacher and @MaziyarPanahi, for the first-day quantizations! Your work helped get me started. https://huggingface.co/models?other=base_model:quantized:sometimesanotion/Lamarck-14B-v0.7-Fusion
reacted to openfree's post with 🔥 about 4 hours ago
Datasets Convertor 🚀

openfree/Datasets-Convertor

Welcome to Datasets Convertor, the cutting-edge solution engineered for seamless and efficient data format conversion. Designed with both data professionals and enthusiasts in mind, our tool simplifies the transformation process between CSV, Parquet, JSONL, and XLS file formats, ensuring that your data is always in the right shape for your next analytical or development challenge. 💻✨

Why Choose Datasets Convertor?
In today's data-driven world, managing and converting large datasets can be a daunting task. Our converter is built on top of robust technologies like Pandas and Gradio, delivering reliable performance with a modern, intuitive interface. Whether you're a data scientist, analyst, or developer, Datasets Convertor empowers you to effortlessly switch between formats while maintaining data integrity and optimizing storage.

Key Features and Capabilities:
CSV ⇆ Parquet Conversion:
Easily transform your CSV files into the highly efficient Parquet format and vice versa. Parquet's columnar storage not only reduces file size but also accelerates query performance, a critical advantage for big data analytics. 🔄📂

CSV to JSONL Conversion:
Convert CSV files to JSONL (newline-delimited JSON) to facilitate efficient, line-by-line data processing. This format is particularly useful for streaming data applications, logging systems, and scenarios where incremental data processing is required. Each CSV row is meticulously converted into an individual JSON record, preserving all the metadata and ensuring compatibility with modern data pipelines. 📄➡️📝

Parquet to JSONL Conversion:
For those working with Parquet files, our tool offers a streamlined conversion to JSONL.

Parquet to XLS Conversion.
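For reference, here is roughly what these conversions look like in plain pandas, which the Space is built on (a sketch; file names are illustrative, and you need pyarrow for Parquet and openpyxl for spreadsheet output):

```python
# Sketch of the converter's core transformations using pandas.
import pandas as pd

df = pd.read_csv("data.csv")

df.to_parquet("data.parquet")                                          # CSV -> Parquet
pd.read_parquet("data.parquet").to_csv("roundtrip.csv", index=False)   # Parquet -> CSV
df.to_json("data.jsonl", orient="records", lines=True)                 # CSV -> JSONL
pd.read_parquet("data.parquet").to_json("data2.jsonl",
                                        orient="records", lines=True)  # Parquet -> JSONL
pd.read_parquet("data.parquet").to_excel("data.xlsx", index=False)     # Parquet -> spreadsheet
```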
reacted to csabakecskemeti's post with 🚀🤗 about 4 hours ago
Testing Training on AMD/ROCm the first time!

I've got my hands on an AMD Instinct MI100. Used, it's about the same price as a V100, but on paper it has more TOPS (14 for the V100 vs. 23 for the MI100), and its HBM has a faster clock, so the memory bandwidth is 1.2 TB/s.
For quantized inference it's a beast (the MI50 was also surprisingly fast).

For LoRA training in this quick test I could not make the bnb (bitsandbytes) config work, so I'm running the fine-tune on the full-size model.
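For context, a minimal PEFT LoRA setup on a full-precision model (no bitsandbytes) might look like the sketch below; the base model id and hyperparameters are illustrative, not the author's exact setup:

```python
# Hedged sketch: LoRA on a full-size (non-quantized) model, skipping bnb entirely.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B",
                                             torch_dtype="auto")
config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"],
                    task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```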

I'll share all the install, setup, and settings I've learned in a blog post, together with the 3D design for the cooling shroud.
reacted to AdinaY's post with 🔥 about 4 hours ago
Two AI startups, DeepSeek & Moonshot AI, keep moving in perfect sync 👇

✨ Last December: DeepSeek & Moonshot AI released their reasoning models on the SAME DAY.
DeepSeek: deepseek-ai/DeepSeek-R1
MoonShot: https://github.com/MoonshotAI/Kimi-k1.5

✨ Last week: Both teams published papers on modifying attention mechanisms on the SAME DAY AGAIN.
DeepSeek: Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention (2502.11089)
Moonshot: MoBA: Mixture of Block Attention for Long-Context LLMs (2502.13189)

✨ TODAY:
DeepSeek unveiled Flash MLA: an efficient MLA decoding kernel for NVIDIA Hopper GPUs, optimized for variable-length sequences.
https://github.com/deepseek-ai/FlashMLA

Moonshot AI introduces Moonlight: a 3B/16B MoE trained on 5.7T tokens using Muon, pushing the Pareto frontier with fewer FLOPs.
moonshotai/Moonlight-16B-A3B

What's next? 👀
reacted to stefan-it's post with 👍 about 4 hours ago
She arrived 😍

[Expect more models soon...]
reacted to m-ric's post with 🚀 about 4 hours ago
We now have a Deep Research for academia: SurveyX automatically writes academic surveys nearly indistinguishable from human-written ones 🔥

Researchers from Beijing and Shanghai just published the first application of a deep research system to academia: their algorithm, given a question, can give you a survey of all papers on the subject.

To make a research survey, you generally follow two steps, preparation (collect and organize papers) and writing (outline creation, writing, polishing). Researchers followed the same two steps and automated them.

🎯 For the preparation part, a key step is finding all the important references on the given subject.
Researchers first cast a wide net of all relevant papers. But then finding the really important ones is like distilling knowledge from a haystack of information. To solve this challenge, they built an β€œAttributeTree” object that structures key information from citations. Ablating these AttributeTrees significantly decreased structure and synthesis scores, so they were really useful!

πŸ“ For the writing part, key was to get a synthesis that's both short and true. This is not easy to get with LLMs! So they used methods like LLM-based deduplication to shorten the too verbose listings made by LLMs, and RAG to grab original quotes instead of made-up ones.

As a result, their system outperforms previous approaches by far!

As assessed by LLM judges, the quality score of SurveyX even approaches that of human experts, with 4.59/5 vs 4.75/5 🏆

I advise you to read the paper; it's a great overview of the kind of assistants we'll get in the near future! 👉 SurveyX: Academic Survey Automation via Large Language Models (2502.14776)
Their website shows examples of generated surveys 👉 http://www.surveyx.cn/
reacted to jasoncorkill's post with 🔥 about 4 hours ago
The Sora Video Generation Aligned Words dataset contains a collection of word segments for text-to-video or other multimodal research. It is intended to help researchers and engineers explore fine-grained prompts, including those where certain words are not aligned with the video.

We hope this dataset will support your work in prompt understanding and advance progress in multimodal projects.

If you have specific questions, feel free to reach out.
Rapidata/sora-video-generation-aligned-words
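Loading it takes one line with the datasets library (a sketch; the available splits and field names come from the dataset card):

```python
# Sketch: load the dataset and inspect its splits and columns.
from datasets import load_dataset

ds = load_dataset("Rapidata/sora-video-generation-aligned-words")
print(ds)  # shows splits, column names, and row counts
```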
reacted to ehristoforu's post with 🔥 about 4 hours ago
Introducing our first standalone model – FluentlyLM Prinum

Introducing the first standalone model from Project Fluently LM! We worked on it for several months, used different approaches and eventually found the optimal one.

General characteristics:
- Model type: Causal language models (QwenForCausalLM, LM Transformer)
- Number of parameters: 32.5B
- Number of parameters (non-embedding): 31.0B
- Number of layers: 64
- Context: 131,072 tokens
- Language(s) (NLP): English, French, Spanish, Russian, Chinese, Japanese, Persian (officially supported)
- License: MIT

Creation strategy:
The basis of the strategy is shown in Pic. 2.
We used Axolotl & Unsloth for SFT fine-tuning with PEFT LoRA (rank=64, alpha=64) and Mergekit for SLERP and TIES merges.

Evaluation:
🏆 12th place on the Open LLM Leaderboard (open-llm-leaderboard/open_llm_leaderboard) (21.02.2025)

Detailed results and comparisons are presented in Pic. 3.

Links:
- Model: fluently-lm/FluentlyLM-Prinum
- GGUF version: mradermacher/FluentlyLM-Prinum-GGUF
- Demo on ZeroGPU: ehristoforu/FluentlyLM-Prinum-demo
reacted to Kseniase's post with 👍 1 day ago
8 Free Sources about AI Agents:

Agents seem to be everywhere and this collection is for a deep dive into the theory and practice:

1. "Agents" Google's whitepaper by Julia Wiesinger, Patrick Marlow and Vladimir Vuskovic -> https://www.kaggle.com/whitepaper-agents
Covers agents, their functions, tool use and how they differ from models

2. "Agents in the Long Game of AI. Computational Cognitive Modeling for Trustworthy, Hybrid AI" book by Marjorie McShane, Sergei Nirenburg, and Jesse English -> https://direct.mit.edu/books/oa-monograph/5833/Agents-in-the-Long-Game-of-AIComputational
Explores building AI agents using Hybrid AI, which combines ML with knowledge-based reasoning

3. "AI Engineer Summit 2025: Agent Engineering" 8-hour video -> https://www.youtube.com/watch?v=D7BzTxVVMuw
Experts' talks that share insights on the freshest Agent Engineering advancements, such as Google Deep Research, scaling tips and more

4. AI Agents Course from Hugging Face -> https://huggingface.co/learn/agents-course/en/unit0/introduction
Agents' theory and practice to learn how to build them using top libraries and tools

5. "Artificial Intelligence: Foundations of Computational Agents", 3rd Edition, book by David L. Poole and Alan K. Mackworth -> https://artint.info/3e/html/ArtInt3e.html
Agents' architectures, how they learn, reason, plan and act with certainty and uncertainty

6. "Intelligent Agents: Theory and Practice" book by Michael Wooldridge -> https://www.cs.ox.ac.uk/people/michael.wooldridge/pubs/ker95/ker95-html.html
A fascinating option to dive into how agents were seen in 1995 and explore their theory, architectures and agent languages

7. The Turing Post articles "AI Agents and Agentic Workflows" on Hugging Face -> https://huggingface.co/Kseniase
We explore agentic workflows in detail and agents' building blocks, such as memory and knowledge

8. Our collection "8 Free Sources to Master Building AI Agents" -> https://www.turingpost.com/p/building-ai-agents-sources
reacted to vincentg64's post with 👀 1 day ago
Spectacular Connection Between LLMs, Quantum Systems, and Number Theory | https://mltblog.com/3DgambA

In my recent paper 51 on cracking the deepest mathematical mystery, available at https://mltblog.com/3zsnQ2g, I paved the way to solving a famous multi-century-old math conjecture. The question is whether or not the digits of numbers such as π are evenly distributed. Currently, no one knows if the proportion of '1' even exists in these binary digit expansions. It could oscillate forever without ever converging. Of course, mathematicians believe that it is 50% in all cases. Trillions of digits have been computed for various constants, and they pass all randomness tests. In this article, I offer a new framework to solve this mystery once and for all, for the number e.

Rather than closure on this topic, it is a starting point, opening new research directions in several fields. Applications include cryptography, dynamical systems, quantum dynamics, high performance computing, LLMs to answer difficult math questions, and more. The highly innovative approach involves iterated self-convolutions of strings and working with numbers as large as (2^n + 1) raised to the power 2^n, with n larger than 100,000. No one before has ever analyzed the digits of such titanic numbers!
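To make the object of study concrete, here is a toy version of that computation for small n (the article works with n above 100,000, far beyond what this naive sketch can reach):

```python
# Toy sketch: proportion of '1' bits in (2^n + 1)^(2^n) for small n.
# Illustrates the object only; the article studies n > 100,000.
def ones_ratio(n: int) -> float:
    x = pow(2**n + 1, 2**n)        # the "titanic" number from the post
    bits = bin(x)[2:]
    return bits.count("1") / len(bits)

for n in range(4, 13, 2):
    print(f"n={n:2d}  ratio of 1-bits: {ones_ratio(n):.4f}")
```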

To read the full article, participate in the AI & LLM challenge, get the very fast Python code, read about ground-breaking research, and see all the applications, visit https://mltblog.com/3DgambA
reacted to jjokah's post with 👍 1 day ago
The past few years have been a blast for artificial intelligence, with large language models (LLMs) stunning everyone with their capabilities and powering everything from chatbots to code assistants. However, not all applications demand the massive size and complexity of LLMs; the computational power they require makes them impractical for many use cases. This is why Small Language Models (SLMs) entered the scene, making powerful AI models more accessible by shrinking them in size.

In this article we went through what SLMs are, how they are made small, their benefits and limitations, real-world use cases, and how they can be used on mobile and desktop devices.
https://huggingface.co/blog/jjokah/small-language-model
reacted to KnutJaegersberg's post with 👀 1 day ago
reacted to stas's post with 👀 2 days ago
Do you want ArcticTraining at @SnowflakeDB to add the ability to post-train DeepSeek V3/R1 models with DPO using just a few GPU nodes?

Please vote here and tell others about it: https://github.com/snowflakedb/ArcticTraining/discussions/58

ArcticTraining is an open-source, easy-to-use post-training framework for NVIDIA GPUs, built on top of DeepSpeed.
reacted to mmhamdy's post with 🔥 2 days ago
🎉 We're excited to introduce MemoryCode, a novel synthetic dataset designed to rigorously evaluate LLMs' ability to track and execute coding instructions across multiple sessions. MemoryCode simulates realistic workplace scenarios where a mentee (the LLM) receives coding instructions from a mentor amidst a stream of both relevant and irrelevant information.

💡 But what makes MemoryCode unique? The combination of the following:

✅ Multi-Session Dialogue Histories: MemoryCode consists of chronological sequences of dialogues between a mentor and a mentee, mirroring real-world interactions between coworkers.

✅ Interspersed Irrelevant Information: Critical instructions are deliberately interspersed with unrelated content, replicating the information overload common in office environments.

✅ Instruction Updates: Coding rules and conventions can be updated multiple times throughout the dialogue history, requiring LLMs to track and apply the most recent information.

✅ Prospective Memory: Unlike previous datasets that cue information retrieval, MemoryCode requires LLMs to spontaneously recall and apply relevant instructions without explicit prompts.

✅ Practical Task Execution: LLMs are evaluated on their ability to use the retrieved information to perform practical coding tasks, bridging the gap between information recall and real-world application.

📌 Our Findings

1️⃣ While even small models can handle isolated coding instructions, the performance of top-tier models like GPT-4o dramatically deteriorates when instructions are spread across multiple sessions.

2️⃣ This performance drop isn't simply due to the length of the context. Our analysis indicates that LLMs struggle to reason compositionally over sequences of instructions and updates. They have difficulty keeping track of which instructions are current and how to apply them.

🔗 Paper: From Tools to Teammates: Evaluating LLMs in Multi-Session Coding Interactions (2502.13791)
📦 Code: https://github.com/for-ai/MemoryCode
reacted to fdaudens's post with 👍 2 days ago
reacted to nicolay-r's post with 🚀 2 days ago
📢 If you're looking to translate a massive dataset of JSON-lines / CSV data with various sets of source fields, then the following update will be relevant. While experimenting with adapting a language-specific sentiment analysis model, I got a chance to reforge and release bulk-translate 0.25.2.
⭐️ https://github.com/nicolay-r/bulk-translate/releases/tag/0.25.2

The update has the following major features:
- Schema support: all the columns to be translated can now be declared within the same prompt-style format; using JSON, this automatically maps them onto output fields.
- Related updates for shell execution mode: the schema parameter is now available alongside the previous prompt-only usage.

The benefit is that your output is invariant: you can extend and stack various translators with separate shell launches.
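To illustrate the schema idea only (this is a hypothetical format, NOT bulk-translate's actual API or configuration; see the repo linked below for real usage), a schema is just a mapping from source fields to translated output fields:

```python
# Conceptual sketch of the schema idea: declare which fields to translate and
# where the translations land. Hypothetical, not bulk-translate's real format.
import json

schema = {"title": "title_en", "body": "body_en"}  # source field -> output field

def translate(text: str) -> str:   # stub; plug in google-translate or an LLM here
    return f"[en] {text}"

record = {"id": 1, "title": "Bonjour", "body": "Ceci est un test"}
out = {**record, **{dst: translate(record[src]) for src, dst in schema.items()}}
print(json.dumps(out, ensure_ascii=False))  # original fields stay invariant
```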

The screenshot below shows the application of the google-translate engine in manual batching mode.
🚀 Performance: 2.5 it/s (in the case of a single-field translation)

🌟 About bulk-translate: https://github.com/nicolay-r/bulk-translate
🌌 nlp-thirdgate: https://github.com/nicolay-r/nlp-thirdgate?tab=readme-ov-file
reacted to prithivMLmods's post with 🚀 2 days ago
The deployment of a new state of matter in Majorana 1, the world's first quantum processor powered by topological qubits, is really interesting. If you missed this news this week, here are some links for you:

πŸ…±οΈTopological qubit arrays: https://arxiv.org/pdf/2502.12252

βš›οΈ Quantum Blog: https://azure.microsoft.com/en-us/blog/quantum/2025/02/19/microsoft-unveils-majorana-1-the-worlds-first-quantum-processor-powered-by-topological-qubits/

πŸ“– Read the story: https://news.microsoft.com/source/features/innovation/microsofts-majorana-1-chip-carves-new-path-for-quantum-computing/

πŸ“ Majorana 1 Intro: https://youtu.be/Q4xCR20Dh1E?si=Z51DbEYnZFp_88Xp

πŸŒ€The Path to a Million Qubits: https://youtu.be/wSHmygPQukQ?si=TS80EhI62oWiMSHK