nicolay-r
AI & ML interests
NLP for Healthcare ⚕️ @BU_Research ・ PhD in NLP / IR ・ Textual Information Retrieval
Recent Activity
replied to VolodymyrPugachov's post · about 8 hours ago
Digital Heart Model: Initial Research Launch 🚀
I am excited to announce the launch of research on the Digital Heart Model (DHM), an AI-driven digital twin designed to transform personalized cardiovascular care. DHM integrates multimodal data, focusing initially on cardiac imaging, histopathological imaging, and ECG data, to predict patient outcomes and optimize interventions.
Initial Model and Dataset Overview:
Base Model: Multimodal AI foundation combining Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), and Graph Neural Networks (GNNs); a toy fusion sketch follows after this overview.
Datasets: Cardiac MRI and CT imaging datasets, histopathological cardiac tissue images, and extensive ECG waveform data.
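To make the fusion idea concrete, here is a minimal late-fusion sketch in PyTorch. The actual DHM architecture is not described in the post, so the encoder choices below (a small CNN for cardiac MRI/CT, a ViT-style patch transformer for histopathology, and a 1D CNN standing in for the ECG/GNN branch), the embedding sizes, and the input shapes are illustrative assumptions only.

```python
# Minimal late-fusion sketch (PyTorch). The real DHM architecture is not public;
# module sizes, patch settings, and the ECG branch here are illustrative assumptions.
import torch
import torch.nn as nn

class ImagingCNN(nn.Module):
    """Small CNN encoder standing in for the cardiac MRI/CT branch."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )
    def forward(self, x):          # x: (B, 1, H, W)
        return self.net(x)

class PatchTransformer(nn.Module):
    """ViT-style encoder for histopathology patches (toy configuration)."""
    def __init__(self, patch=16, dim=64, out_dim=128):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, out_dim)
    def forward(self, x):          # x: (B, 3, H, W)
        tokens = self.embed(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        return self.head(self.encoder(tokens).mean(dim=1))

class ECGEncoder(nn.Module):
    """1D CNN over ECG waveforms; a simple stand-in for the GNN branch mentioned in the post."""
    def __init__(self, leads=12, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(leads, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, out_dim),
        )
    def forward(self, x):          # x: (B, leads, T)
        return self.net(x)

class DigitalHeartFusion(nn.Module):
    """Concatenate per-modality embeddings and predict cardiac event risk."""
    def __init__(self, dim=128):
        super().__init__()
        self.mri = ImagingCNN(dim)
        self.histo = PatchTransformer(out_dim=dim)
        self.ecg = ECGEncoder(out_dim=dim)
        self.head = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
    def forward(self, mri, histo, ecg):
        z = torch.cat([self.mri(mri), self.histo(histo), self.ecg(ecg)], dim=-1)
        return torch.sigmoid(self.head(z))                  # risk score in [0, 1]

# Smoke test with dummy tensors.
model = DigitalHeartFusion()
risk = model(torch.randn(2, 1, 128, 128), torch.randn(2, 3, 224, 224), torch.randn(2, 12, 1000))
print(risk.shape)  # torch.Size([2, 1])
```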
Expected Results from First Iteration:
Cardiac event prediction (e.g., myocardial infarction): AUC ≥ 0.90
Arrhythmia detection and classification: AUC ≥ 0.88
Segmentation of cardiac imaging: Dice score ≥ 0.85
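For reference, the Dice score targeted above is the standard overlap metric between a predicted and a reference segmentation mask. A small NumPy sketch (the exact DHM evaluation protocol is not specified in the post):

```python
# Dice coefficient between a predicted and a reference binary mask (NumPy sketch;
# the DHM evaluation protocol is not specified, this is just the standard definition).
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: two overlapping 2D masks.
a = np.zeros((4, 4)); a[1:3, 1:3] = 1
b = np.zeros((4, 4)); b[1:3, 1:4] = 1
print(round(dice_score(a, b), 3))  # 0.8
```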
🔍 Next Steps:
Conducting initial retrospective validation.
Preparing for prospective clinical validation.
Stay tuned for updates as we redefine cardiovascular precision medicine!
Connect with us for collaboration and insights!
reacted to hesamation's post with 👀 · 1 day ago
longer context doesn't guarantee better responses; it can even hurt your LLM/agent. a 1M context window doesn't automatically make models smarter, because it's not about the size; it's how you use it.
here are 4 types of context failure and why each one happens:
1. context poisoning: if a hallucination finds its way into your context, the agent will rely on that false information to make its future moves. for example, if the agent hallucinates the "task description", all of its planning to solve the task will also be corrupted.
2. context distraction: when the context becomes too bloated, the model focuses too much on it rather than coming up with novel ideas or following what it learned during training. as the Gemini 2.5 Pro technical report points out, as the context grows significantly beyond 100K tokens, "the agent showed a tendency toward favoring repeating actions from its vast history rather than synthesizing novel plans".
3. context confusion: everyone lost it when MCPs became popular; it seemed like AGI had been achieved. I suspected something was wrong, and there was: it's not just about providing tools; bloating the context with tool metadata derails the model from selecting the right one! even if you can fit all your tool metadata in the context, as the number of tools grows, the model gets confused over which one to pick (a small mitigation sketch follows after this list).
4. context clash: if you converse with a model step by step and provide information as you go along, chances are you'll get worse performance than if you had provided all the useful information at once. once the model's context fills with wrong information, it's more difficult to guide it toward the right info. agents pull information from tools, documents, user queries, etc., and there is a chance that some of this information contradicts the rest, which is not good news for agentic applications.
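one common mitigation for the tool-confusion failure above (in the spirit of the article linked below) is to rank tool descriptions against the user query and expose only the top-k to the model instead of dumping every tool's metadata into the context. a minimal sketch, using a naive token-overlap score and hypothetical tool names; a real system would more likely use embeddings:

```python
# Minimal sketch of one mitigation for "context confusion": rank tool descriptions
# against the user query and expose only the top-k to the model, instead of dumping
# every tool's metadata into the context. The scoring here is a naive token overlap;
# a real system would likely use embeddings. All tool names below are hypothetical.
from collections import Counter

TOOLS = {
    "search_web": "search the web for up-to-date information on a query",
    "run_sql": "execute a read-only SQL query against the analytics database",
    "send_email": "compose and send an email to a given recipient",
    "get_weather": "return the current weather forecast for a city",
}

def select_tools(query: str, tools: dict[str, str], k: int = 2) -> list[str]:
    q = Counter(query.lower().split())
    scores = {
        name: sum((q & Counter(desc.lower().split())).values())
        for name, desc in tools.items()
    }
    return sorted(tools, key=lambda name: scores[name], reverse=True)[:k]

# Only the k most relevant tool descriptions go into the prompt.
print(select_tools("what is the weather forecast in Berlin", TOOLS))
# ['get_weather', 'search_web']
```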
check this article by Drew Breunig for a deeper read: https://www.dbreunig.com/2025/06/26/how-to-fix-your-context.html?ref=blog.langchain.com
replied to hesamation's post · 1 day ago
Organizations
None yet