Nicolay Rusnachenko

nicolay-r

AI & ML interests

Information Retrieval・Medical Multimodal NLP (🖼+📝) Research Fellow @BU_Research・software developer http://arekit.io・PhD in NLP

Recent Activity

Organizations

None yet

nicolay-r's activity

posted an update 4 days ago
📒 Several weeks ago Microsoft announced Phi-4. My most recent list of LLM models had only a wrapper for Phi-2, so it was time to update! With this post, I am happy to share that a Phi-4 wrapper is now available at nlp-thirdgate for adopting Chain-of-Thought reasoning:

🤖 https://github.com/nicolay-r/nlp-thirdgate/blob/master/llm/transformers_phi4.py

📒 https://github.com/nicolay-r/nlp-thirdgate/blob/master/tutorials/llm_phi4.py

Findings on adaptation: I was able to reproduce only the pipeline-based model launching. This version is for textual LLMs only; Microsoft also released a multimodal Phi-4, which is out of scope for this wrapper.
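
For illustration, here is a minimal sketch of the pipeline-based launching mentioned above, assuming a recent transformers version and the microsoft/phi-4 checkpoint; the generation settings are my own assumptions, not the wrapper's defaults:

```python
# Sketch: pipeline-based launching of Phi-4 for text-only Chain-of-Thought.
# Assumes a recent `transformers` and the "microsoft/phi-4" checkpoint.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="microsoft/phi-4",
    device_map="auto",   # place layers automatically (GPU if available)
    torch_dtype="auto",
)

messages = [
    {"role": "system", "content": "Reason step by step before answering."},
    {"role": "user", "content": "If a train covers 120 km in 90 minutes, what is its average speed?"},
]
result = pipe(messages, max_new_tokens=256)
# With chat input, generated_text holds the full message list; the last
# element is the assistant's reply.
print(result[0]["generated_text"][-1]["content"])
```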

🌌 nlp-thirdgate: https://lnkd.in/ef-wBnNn
posted an update 5 days ago
📒 Delighted to announce the updated version of the no-string framework for chain-of-thought application over JSONL/CSV data:
https://github.com/nicolay-r/bulk-chain/releases/tag/0.25.2

🔧 Fixes:
- Fixed issues with batching mode
- Fixed problem with parsing and passing args in shell mode

⚠️ Limitation: batching mode is still available only via API.

📒 Quick Start with Gemma-3 in batching mode: https://github.com/nicolay-r/nlp-thirdgate/blob/master/tutorials/llm_gemma_3.ipynb
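
To make the idea concrete, here is a conceptual sketch of applying a chain-of-thought schema over JSONL records in batches. This is plain Python for illustration only, not the bulk-chain API; the schema layout and the `infer_batch` stub are hypothetical:

```python
# Conceptual sketch (not the bulk-chain API): apply a multi-step
# chain-of-thought schema to JSONL records in batches.
import json

# Each step fills one field; later steps can reference earlier ones.
SCHEMA = [
    ("reasoning", "Think step by step about the sentiment of: {text}"),
    ("label", "Given the reasoning: {reasoning}\nAnswer positive/negative: {text}"),
]

def infer_batch(prompts):
    # Hypothetical stand-in for any batched LLM call.
    return [f"<llm-output for: {p[:40]}...>" for p in prompts]

def apply_schema(records, batch_size=8):
    for i in range(0, len(records), batch_size):
        batch = records[i:i + batch_size]
        for field, template in SCHEMA:  # each step feeds the next
            prompts = [template.format(**r) for r in batch]
            for r, out in zip(batch, infer_batch(prompts)):
                r[field] = out
        yield from batch

with open("data.jsonl") as f:
    records = [json.loads(line) for line in f]
for r in apply_schema(records):
    print(json.dumps(r))
```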
replied to their post 5 days ago

An important note: use the very latest version of bulk-chain from GitHub, which fixes the double-inference bug in batching mode.

posted an update 6 days ago
📒 With the recent release of Gemma-3: if you are interested in playing with textual chain-of-thought, the notebook below wraps the model (native transformers inference API) for passing a predefined schema of prompts in batching mode.
https://github.com/nicolay-r/nlp-thirdgate/blob/master/tutorials/llm_gemma_3.ipynb

Limitation: the schema supports text only (for now), while Gemma-3 is a text+image-to-text model.

Model: google/gemma-3-1b-it
Provider: https://github.com/nicolay-r/nlp-thirdgate/blob/master/llm/transformers_gemma3.py
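
As a rough illustration of batched text-only inference with this model via the native transformers API; the left-padding and generation settings below are my assumptions, not necessarily what the provider script uses:

```python
# Sketch: batched text-only chat inference with google/gemma-3-1b-it.
# Assumes a transformers version with Gemma-3 support.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-1b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.padding_side = "left"  # needed for decoder-only batched generation
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# A batch of independent single-turn conversations.
chats = [
    [{"role": "user", "content": "Explain overfitting in one sentence."}],
    [{"role": "user", "content": "Name three uses of tokenization."}],
]
inputs = tokenizer.apply_chat_template(
    chats, add_generation_prompt=True, tokenize=True,
    padding=True, return_dict=True, return_tensors="pt",
).to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)

# Strip the prompt tokens before decoding.
new_tokens = out[:, inputs["input_ids"].shape[1]:]
for text in tokenizer.batch_decode(new_tokens, skip_special_tokens=True):
    print(text)
```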
reacted to onekq's post with 👀 7 days ago
The performance of deepseek-r1-distill-qwen-32b is abysmal. I know Qwen instruct (not coder) is quite poor on coding. As such, I have low expectations for the other R1 reproduction works that are also based on Qwen instruct. onekq-ai/r1-reproduction-works-67a93f2fb8b21202c9eedf0b

This makes it particularly mysterious what went into QwQ-32B. Why did it work so well? Was it trained from scratch? Does anyone have insights about this?
onekq-ai/WebApp1K-models-leaderboard
replied to ritvik77's post 7 days ago

@ritvik77, your plans sound good! Meanwhile, I am looking forward to adapting the 7B version for experiments in the radiology domain. Happy to read more about it, and if and when it makes it into a paper, I can add it to my survey of related advances.

replied to ritvik77's post 8 days ago

@ritvik77, excited to run into this! Are the paper and studies behind it on arXiv or elsewhere?

reacted to ritvik77's post with 🔥 8 days ago
Try it out: ritvik77/Medical_Doctor_AI_LoRA-Mistral-7B-Instruct_FullModel

🩺 Medical Diagnosis AI Model - Powered by Mistral-7B & LoRA 🚀
🔹 Model Overview:
Base Model: Mistral-7B (7.7 billion parameters)
Fine-Tuning Method: LoRA (Low-Rank Adaptation)
Quantization: bnb_4bit (reduces memory footprint while retaining performance)
🔹 Parameter Details:
Original Mistral-7B Parameters: 7.7 billion
LoRA Fine-Tuned Parameters: 4.48% of total model parameters (340 million)
Final Merged Model Size (bnb_4bit Quantized): ~4.5 GB

This can help you build an AI agent for healthcare. If you need to fine-tune it for a JSON function/tool-calling format, you can use a medical function-calling dataset to fine-tune it further.
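
For reference, a minimal sketch of loading the merged 4-bit model with transformers and bitsandbytes; the repo id comes from the post above, while the prompt format and generation settings are my assumptions:

```python
# Sketch: loading the merged model in 4-bit with transformers + bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "ritvik77/Medical_Doctor_AI_LoRA-Mistral-7B-Instruct_FullModel"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, quantization_config=bnb, device_map="auto"
)

# Mistral-instruct style prompt; the question is a made-up example.
prompt = "[INST] A patient presents with persistent dry cough and fatigue. What differentials should be considered? [/INST]"
ids = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**ids, max_new_tokens=200)
print(tok.decode(out[0], skip_special_tokens=True))
```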

reacted to clem's post with ❤️ 9 days ago
reacted to Jaward's post with 🔥👀 9 days ago
replied to ychen's post 11 days ago

@ychen, I see. I was expecting your findings to be part of a PhD program. Take your time with publications then, since that is common during a PhD. It would be great to have a paper during your master's, and all the best with it!

replied to ychen's post 11 days ago

@ychen, good luck with your studies; I'm pleased to have had an effect on your progress. Are you on Google Scholar or GitHub with your personal work in this domain?