forum_id | forum_title | forum_authors | forum_abstract | forum_keywords | forum_pdf_url | note_id | note_type | note_created | note_replyto | note_readers | note_signatures | note_text |
---|---|---|---|---|---|---|---|---|---|---|---|---|
uwxUbSmhmc | LLMs Pick Up Cues of Potential Comorbid ADHD in People Reporting Anxiety when Keywords Are Not Enough | [
"Michael Guerzhoy"
] | We present a novel task that can elucidate the connection between anxiety and ADHD; use Transformers to make progress
toward solving a task that is not solvable by keyword-based classifiers; and discuss a method for visualization of our classifier
illuminating the connection between anxiety and ADHD presentations.
Up to approximately 50\% of adults with ADHD may also have an anxiety disorder and approximately 30\% of adults with anxiety may also have ADHD. Patients presenting with anxiety may be
treated for anxiety without ADHD ever being considered, possibly affecting treatment. We show how data that bears on ADHD that is comorbid with anxiety can be obtained from social media data, and show that Transformers can be used to detect a proxy for possible comorbid ADHD in people with anxiety symptoms.
We collected data from anxiety and ADHD online forums (subreddits). We identified posters who first started posting in the
Anxiety subreddit and later started posting in the ADHD subreddit as well. We use this subset of the posters as a proxy for
people who presented with anxiety symptoms and then became aware that they might have ADHD.
We fine-tune a Transformer architecture-based classifier to classify people who started posting in the Anxiety subreddit and then
started posting in the ADHD subreddit vs. people who posted in the Anxiety subreddit without later posting in the ADHD
subreddit.
We show that a Transformer architecture is capable of achieving reasonable results (76\% correct for RoBERTa vs.
under 60\% correct for the best keyword-based model, both with 50\% base rate).
Disclosure: this paper was accepted at CLPsych @ EACL with the title ``Detecting a Proxy for Potential Comorbid ADHD in People Reporting Anxiety Symptoms from Social Media Data" | [
"adhd",
"anxiety",
"reddit"
] | https://openreview.net/pdf?id=uwxUbSmhmc | cLUJZOD8Rd | review | 1,708,334,328,248 | uwxUbSmhmc | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission47/Reviewer_gphm"
] | title: Review
review: Summary: This paper trained the RoBERTa model to predict whether individuals discussing anxiety in their posts will subsequently express interest in ADHD. The authors showed that the model achieves high performance (76% correct), which can give insights into the comorbidity of the two conditions.
Comments:
1. It is unclear what the ability of the RoBERTa model to classify the groups implies. There are many ways in which drawing clinical insights from the model's performance could go wrong or remain inconclusive. Examples are below:
- Some symptoms of anxiety are also indicative of ADHD. The RoBERTa model captures terms related to the comorbidity; however, it still seems unclear whether they are merely associative or causally related.
- ADHD patients have some features in common in their posts (not related to disorder symptoms).
- Or it could simply reflect selection bias, as Reddit data is not prospective.
I think the implication should be more clearly stated. Also, I think a modification of the experimental design could be necessary.
2. “Social media such as Reddit provides publicly available text data of anonymous
first-person experiences (Low et al. 2020).” This sentence at the end of the first paragraph in the introduction section looks abrupt. The first paragraph is mainly about the problem of misdiagnosis of ADHD and anxiety, so I think this sentence on the data source of this study should be discussed in the next paragraph.
rating: 6
confidence: 4 |
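The fine-tuning setup described in the abstract above is not included in this record. As a rough sketch of what a RoBERTa-based binary classifier over a poster's Anxiety-subreddit text could look like, the snippet below uses the Hugging Face `transformers` Trainer; the file names, column names ("text", "label"), and hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Sketch only: fine-tune RoBERTa to predict whether an Anxiety-subreddit poster
# later posts in the ADHD subreddit (label 1) or not (label 0).
# File names, columns, and hyperparameters are assumptions for illustration.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Hypothetical CSVs with one row per poster: concatenated post text plus a binary label.
data = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

data = data.map(tokenize, batched=True)

args = TrainingArguments(output_dir="roberta_adhd_proxy",
                         per_device_train_batch_size=8,
                         num_train_epochs=3,
                         learning_rate=2e-5)
trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=data["train"], eval_dataset=data["test"])
trainer.train()
print(trainer.evaluate())  # accuracy against the 50% base rate would be computed separately
```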
uwxUbSmhmc | LLMs Pick Up Cues of Potential Comorbid ADHD in People Reporting Anxiety when Keywords Are Not Enough | [
"Michael Guerzhoy"
] | We present a novel task that can elucidate the connection between anxiety and ADHD; use Transformers to make progress
toward solving a task that is not solvable by keyword-based classifiers; and discuss a method for visualization of our classifier
illuminating the connection between anxiety and ADHD presentations.
Up to approximately 50\% of adults with ADHD may also have an anxiety disorder and approximately 30\% of adults with anxiety may also have ADHD. Patients presenting with anxiety may be
treated for anxiety without ADHD ever being considered, possibly affecting treatment. We show how data that bears on ADHD that is comorbid with anxiety can be obtained from social media data, and show that Transformers can be used to detect a proxy for possible comorbid ADHD in people with anxiety symptoms.
We collected data from anxiety and ADHD online forums (subreddits). We identified posters who first started posting in the
Anxiety subreddit and later started posting in the ADHD subreddit as well. We use this subset of the posters as a proxy for
people who presented with anxiety symptoms and then became aware that they might have ADHD.
We fine-tune a Transformer architecture-based classifier to classify people who started posting in the Anxiety subreddit and then
started posting in the ADHD subreddit vs. people who posted in the Anxiety subreddit without later posting in the ADHD
subreddit.
We show that a Transformer architecture is capable of achieving reasonable results (76\% correct for RoBERTa vs.
under 60\% correct for the best keyword-based model, both with 50\% base rate).
Disclosure: this paper was accepted at CLPsych @ EACL with the title ``Detecting a Proxy for Potential Comorbid ADHD in People Reporting Anxiety Symptoms from Social Media Data" | [
"adhd",
"anxiety",
"reddit"
] | https://openreview.net/pdf?id=uwxUbSmhmc | JkaG5A9Kq2 | review | 1,708,530,332,973 | uwxUbSmhmc | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission47/Reviewer_sZzm"
] | title: Good research work done to evaluate efficacy of RoBERTa for a specific use case.
review: The paper provides a clear understanding of the problem statement and the background information needed to appreciate the challenge that less accurate techniques face in detecting comorbid ADHD. The data used for training, however, comes from a platform whose users are not representative of the general population; the authors acknowledge this and appropriately explain the resulting limits on the applicability of their results.
The quality of the work is good and meets expectations. I do not consider myself able to comment on the originality of the work, as I would need more experience in the field to be fair in my evaluation; however, the work is fairly original in my opinion. The significance of the study is that it compares three different models on the classification task at hand and finds a model that outperforms the other two by a significant margin.
A pro of the paper is that it identifies a model with significantly higher prediction accuracy than the other models.
A con is that the dataset is biased and that the visualizations cannot be published, in order to protect the patients.
However, the results of the study are significant enough to outweigh the cons. This paper deserves publication in the esteemed conference.
rating: 9
confidence: 4 |
uwxUbSmhmc | LLMs Pick Up Cues of Potential Comorbid ADHD in People Reporting Anxiety when Keywords Are Not Enough | [
"Michael Guerzhoy"
] | We present a novel task that can elucidate the connection between anxiety and ADHD; use Transformers to make progress
toward solving a task that is not solvable by keyword-based classifiers; and discuss a method for visualization of our classifier
illuminating the connection between anxiety and ADHD presentations.
Up to approximately 50\% of adults with ADHD may also have an anxiety disorder and approximately 30\% of adults with anxiety may also have ADHD. Patients presenting with anxiety may be
treated for anxiety without ADHD ever being considered, possibly affecting treatment. We show how data that bears on ADHD that is comorbid with anxiety can be obtained from social media data, and show that Transformers can be used to detect a proxy for possible comorbid ADHD in people with anxiety symptoms.
We collected data from anxiety and ADHD online forums (subreddits). We identified posters who first started posting in the
Anxiety subreddit and later started posting in the ADHD subreddit as well. We use this subset of the posters as a proxy for
people who presented with anxiety symptoms and then became aware that they might have ADHD.
We fine-tune a Transformer architecture-based classifier to classify people who started posting in the Anxiety subreddit and then
started posting in the ADHD subreddit vs. people who posted in the Anxiety subreddit without later posting in the ADHD
subreddit.
We show that a Transformer architecture is capable of achieving reasonable results (76\% correct for RoBERTa vs.
under 60\% correct for the best keyword-based model, both with 50\% base rate).
Disclosure: this paper was accepted at CLPsych @ EACL with the title ``Detecting a Proxy for Potential Comorbid ADHD in People Reporting Anxiety Symptoms from Social Media Data" | [
"adhd",
"anxiety",
"reddit"
] | https://openreview.net/pdf?id=uwxUbSmhmc | Eygml3g16n | review | 1,708,668,198,293 | uwxUbSmhmc | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission47/Reviewer_LMhd"
title: The authors describe their goal of identifying a proxy for comorbid ADHD through Reddit posts in the Anxiety and ADHD subreddits. The objective of finding posters who initially post in the Anxiety subreddit and later in the ADHD subreddit is solved much better by a fine-tuned RoBERTa than by the keyword-based baseline.
review: Clarity: The authors clearly describe their goal of assessing two groups of Reddit posters: those who post in the Anxiety subreddit and then also post in the ADHD subreddit, and those who post in the Anxiety subreddit but do not go on to post in the ADHD subreddit.
Originality: the task of determining a proxy of possible comorbid ADHD is novel.
Significance: The link between the proxy and possible comorbid ADHD is not well discussed in the manuscript, and it is therefore hard to say what impact this work will have in the clinical world. However, it is an interesting approach to the use of foundation models within a ‘semi’-clinical setting.
Quality: The study quality appears good, as the method, data, and performance of the model have been clearly stated.
Major points
1. You point towards it yourself in the data collection section of the paper, where you note that the Reddit posts are not a clinical diagnosis. I think you must discuss the implications for the significance of your work.
2. Why did you choose to remove data from posters that posted in the ADHD subreddit within 6 months after their post in the Anxiety subreddit?
3. Clarify if you download posts from the ADHD subreddit. In the data preprocessing section, you state that you don’t use the posts and then that you use the posts from the ADHD subreddit.
4. You reference a figure 6 that is not present in the manuscript.
5. You state that you visualize the phrases leading to “will post in ADHD” or “will not post in ADHD” for any given post but this is not presented anywhere.
Minor points
1. In the Data Preprocessing section, is it correctly understood that posters who posted anywhere other than the Anxiety and/or ADHD subreddits were removed from the dataset? This should be clearer.
2. You mention the base rate of the test set, but you provide no detail on the distribution of the training set or if you have done any weighted sampling or indeed how you sampled the test dataset.
Pros
• Well written and concise.
• Interesting subject sure to spark interest even for people who are not experts in psychiatric disorders.
• Interesting application of existing models to proxy ADHD comorbidity
• In the appendix of the paper, the limitations section does a good job of explaining the cons of the study in a non-biased manner.
Cons
• The ADHD comorbidity proxy is not well discussed.
• The authors state that they have visualizations multiple times where none are shown.
rating: 2
confidence: 4 |
uwxUbSmhmc | LLMs Pick Up Cues of Potential Comorbid ADHD in People Reporting Anxiety when Keywords Are Not Enough | [
"Michael Guerzhoy"
] | We present a novel task that can elucidate the connection between anxiety and ADHD; use Transformers to make progress
toward solving a task that is not solvable by keyword-based classifiers; and discuss a method for visualization of our classifier
illuminating the connection between anxiety and ADHD presentations.
Up to approximately 50\% of adults with ADHD may also have an anxiety disorder and approximately 30\% of adults with anxiety may also have ADHD. Patients presenting with anxiety may be
treated for anxiety without ADHD ever being considered, possibly affecting treatment. We show how data that bears on ADHD that is comorbid with anxiety can be obtained from social media data, and show that Transformers can be used to detect a proxy for possible comorbid ADHD in people with anxiety symptoms.
We collected data from anxiety and ADHD online forums (subreddits). We identified posters who first started posting in the
Anxiety subreddit and later started posting in the ADHD subreddit as well. We use this subset of the posters as a proxy for
people who presented with anxiety symptoms and then became aware that they might have ADHD.
We fine-tune a Transformer architecture-based classifier to classify people who started posting in the Anxiety subreddit and then
started posting in the ADHD subreddit vs. people who posted in the Anxiety subreddit without later posting in the ADHD
subreddit.
We show that a Transformer architecture is capable of achieving reasonable results (76\% correct for RoBERTa vs.
under 60\% correct for the best keyword-based model, both with 50\% base rate).
Disclosure: this paper was accepted at CLPsych @ EACL with the title ``Detecting a Proxy for Potential Comorbid ADHD in People Reporting Anxiety Symptoms from Social Media Data" | [
"adhd",
"anxiety",
"reddit"
] | https://openreview.net/pdf?id=uwxUbSmhmc | DbDwlbaReI | review | 1,708,637,683,004 | uwxUbSmhmc | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission47/Reviewer_VnGH"
title: Online-forum-text-based classification for a weak proxy task is better solved by RoBERTa as compared to keyword-based models.
review: ## Paper summary
The task of predicting whether a reddit user who first starts posting on Anxiety subreddits will later also start posting on ADHD subreddits is used as a proxy for identifying people with anxiety who might also have ADHD. Classification performance of a fine-tuned RoBERTa model is presented which is shown to be better than keyword based baselines. Some explainability experiments are promised in the future.
### Strengths
1. Misdiagnosed comorbid ADHD is an important issue.
1. Data collection and processing are sound -- the 6-month gap in user posting history between their first post in ADHD subreddits and the data gathered from anxiety subreddits is a reasonable choice.
### Weaknesses
1. The authors have acknowledged the weaknesses and assumptions in the proxy task -- subreddit posting behavior is a very weak link to whether the user actually has a high risk of having comorbid ADHD. There seems to be no method for verifying the link between this proxy task and actual comorbid ADHD.
1. The practical benefits of how the proposed study can better enable diagnosis of comorbid ADHD in the future is not discussed.
### Feedback to authors
* In order to refine the ground truth for the proxy task, ChatGPT or an equivalent LLM could be used to query whether the user believes they have ADHD and/or anxiety from their posts alone. This may clean up the collected data significantly.
rating: 4
confidence: 4 |
sm9Udj2c6u | Feasibility of Automatically Detecting Practice of Race-Based Medicine by Large Language Models | [
"Akshay Swaminathan",
"Sid Salvi",
"Philip Chung",
"Alison Callahan",
"Suhana Bedi",
"Alyssa Unell",
"Mehr Kashyap",
"Roxana Daneshjou",
"Nigam Shah",
"Dev Dash"
] | One challenge in integrating large language models (LLMs) into clinical workflows is ensuring the appropriateness of generated content. This study develops an automated evaluation method to detect if LLM outputs contain debunked stereotypes that perpetuate race-based medicine. To develop a race-based medicine evaluator agent, we selected the top performing (F1) LLM-prompt combination among 4 LLMs (GPT-3.5, GPT-4, GPT-4-0125 and GPT-4-1106) and three prompts, using a physician-labeled dataset of 181 LLM responses as the gold standard. This evaluator agent was then used to assess 1300 responses from ten LLMs to 13 questions (10 iterations each) related to race-based medicine. Across the nine candidate LLMs, the percentage of LLM responses that did not contain debunked race-based content ranged from 22% in falcon-7b-instruct to 76% in claude-2. This study demonstrates the potential of LLM-powered agents to automate the detection of race-based medical content. | [
"large language models",
"evaluation",
"race-based medicine"
] | https://openreview.net/pdf?id=sm9Udj2c6u | kAhWnnJc7p | review | 1,708,954,777,061 | sm9Udj2c6u | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission36/Reviewer_Apz6"
] | title: A simple paper with good evaluations of current SOTA models
review: An interesting benchmark with good reproducibility. The explanation is clear and the paper is well written, demonstrating how LLMs can be used to detect instances of race-based medicine.
Pros - Potentially an interesting read for a clinical audience who would like to know the feasibility of having such a model run 'in clinical practice'.
Cons - This style of paper is very common (prompt crafting + evaluation) and can be criticised as not contributing anything novel to the field.
rating: 5
confidence: 4 |
sm9Udj2c6u | Feasibility of Automatically Detecting Practice of Race-Based Medicine by Large Language Models | [
"Akshay Swaminathan",
"Sid Salvi",
"Philip Chung",
"Alison Callahan",
"Suhana Bedi",
"Alyssa Unell",
"Mehr Kashyap",
"Roxana Daneshjou",
"Nigam Shah",
"Dev Dash"
] | One challenge in integrating large language models (LLMs) into clinical workflows is ensuring the appropriateness of generated content. This study develops an automated evaluation method to detect if LLM outputs contain debunked stereotypes that perpetuate race-based medicine. To develop a race-based medicine evaluator agent, we selected the top performing (F1) LLM-prompt combination among 4 LLMs (GPT-3.5, GPT-4, GPT-4-0125 and GPT-4-1106) and three prompts, using a physician-labeled dataset of 181 LLM responses as the gold standard. This evaluator agent was then used to assess 1300 responses from ten LLMs to 13 questions (10 iterations each) related to race-based medicine. Across the nine candidate LLMs, the percentage of LLM responses that did not contain debunked race-based content ranged from 22% in falcon-7b-instruct to 76% in claude-2. This study demonstrates the potential of LLM-powered agents to automate the detection of race-based medical content. | [
"large language models",
"evaluation",
"race-based medicine"
] | https://openreview.net/pdf?id=sm9Udj2c6u | Q0Jej06fAV | review | 1,708,639,286,393 | sm9Udj2c6u | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission36/Reviewer_LYU7"
] | title: Detecting race-based LLM responses
review: The authors experimented with using an LLM to detect whether LLM responses contain debunked race-based content. It's an important topic to explore, and the results help us better understand how currently available LLMs may respond to race-based medical questions.
Thirteen questions were used to generate the responses from the LLMs. It's unclear whether this set of questions is representative enough for the study. Eleven of the 13 questions contained direct mentions of race in the questions themselves, while 2 questions were fairly general. There is no comparison between the two to see if there is a significant difference. Since direct mentions of race may be more likely to generate race-based responses, the resulting response set may contain significantly more race-based responses than real-life clinical use cases would, which may bias the evaluation results.
The paper also doesn't discuss how the safeguard mechanisms in proprietary and/or open-source models may affect the model responses. In the appendix, the authors mention that MedPalm-2 simply refuses to answer some of the questions due to its content filter. This type of content filtering is common practice and may change constantly without notice, especially for proprietary models, which puts a lot of uncertainty on the evaluation results.
rating: 6
confidence: 4 |
sm9Udj2c6u | Feasibility of Automatically Detecting Practice of Race-Based Medicine by Large Language Models | [
"Akshay Swaminathan",
"Sid Salvi",
"Philip Chung",
"Alison Callahan",
"Suhana Bedi",
"Alyssa Unell",
"Mehr Kashyap",
"Roxana Daneshjou",
"Nigam Shah",
"Dev Dash"
] | One challenge in integrating large language models (LLMs) into clinical workflows is ensuring the appropriateness of generated content. This study develops an automated evaluation method to detect if LLM outputs contain debunked stereotypes that perpetuate race-based medicine. To develop a race-based medicine evaluator agent, we selected the top performing (F1) LLM-prompt combination among 4 LLMs (GPT-3.5, GPT-4, GPT-4-0125 and GPT-4-1106) and three prompts, using a physician-labeled dataset of 181 LLM responses as the gold standard. This evaluator agent was then used to assess 1300 responses from ten LLMs to 13 questions (10 iterations each) related to race-based medicine. Across the nine candidate LLMs, the percentage of LLM responses that did not contain debunked race-based content ranged from 22% in falcon-7b-instruct to 76% in claude-2. This study demonstrates the potential of LLM-powered agents to automate the detection of race-based medical content. | [
"large language models",
"evaluation",
"race-based medicine"
] | https://openreview.net/pdf?id=sm9Udj2c6u | OCXbQXnaNd | review | 1,708,497,525,587 | sm9Udj2c6u | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission36/Reviewer_GZnZ"
] | title: Review of "Feasibility of Automatically Detecting Practice of Race-Based Medicine by Large Language Models" - Needs revision
review: # Peer Review for the Manuscript: "Evaluating Large Language Models for Race-Based Medical Content"
## General Evaluation
The paper presents an interesting study on the use of Large Language Models (LLMs) for identifying and evaluating race-based content in medical contexts. This topic is both timely and relevant, given the increasing reliance on LLMs across various sectors, including healthcare. The authors have made a commendable effort to highlight the challenges and nuances associated with race-based content in LLM outputs.
## Specific Feedback
### Strengths
1. **Relevance and Novelty:** The study addresses a critical gap in the current understanding and evaluation of LLMs in handling sensitive and crucial topics such as the impact of racial stereotypes in medical advice.
2. **Methodological Approach:** The structured comparison across different LLMs provides a solid basis for the study's findings.
### Areas for Improvement
1. **Typos and Clarifications:** The paper mentions "nine unique LLM-prompt combinations" which should correctly be "twelve unique combinations." Attention to such details is crucial for the accuracy of the paper.
2. **Consideration of Skin Tone Variability:** The use of a more comprehensive skin tone classification, such as the Monk Skin Tone Scale ([Monk Skin Tone Scale](https://skintone.google/)), would enrich the study by providing a more nuanced understanding of race as it pertains to medical content. The current set of 13 prompts is limited; it predominantly represents Black, White, and some Asian groups. I recommend that the authors formulate more diverse prompts by considering more examples of race-stereotype combinations.
3. **Benchmark Evaluation:** The paper states that there is a lack of methods to evaluate harmful content regarding race. This is not true: major players in the generative AI space such as OpenAI, Meta, and Google have all released trust and safety scorecards, and responsible AI covering bias and stereotypes is a big focus. However, existing benchmarks and datasets could be explored for their representation of race-related medical data. The absence of this exploration is a missed opportunity to contextualize the study's findings within the broader research landscape.
4. **Physician Backgrounds:** The background of physicians involved in the original research, particularly their awareness of bias and civil rights, is not detailed. This information is crucial for understanding the potential biases in the study's setup and interpretation.
5. **Statistical Measures:** A clearer explanation of the statistical measures used (Sensitivity, Specificity, NPV, PPV, F1) would make the paper accessible to a broader audience, including those not familiar with these terms.
6. **Methodology Suggestion:** Given the limitations of zero-shot prompting in niche domains like race-related medical data, exploring few-shot prompting or fine-tuning the models might yield more accurate results.
### Recommendations for Further Research
The authors are encouraged to explore the representation of race-related bias in publicly available datasets and benchmarks, such as those hosted on platforms like Hugging Face and Stanford's CRFM. Investigating these resources could provide insights into the current state of race representation in LLM training data and benchmarks. Furthermore, the authors should consider building and open-sourcing a dataset specifically for evaluating race-related content in medical advice. This contribution would significantly benefit the research community by providing a specialized resource for further studies.
## Some benchmarks for reference
- [Hugging Face Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
- [Stanford CRFM HELM Lite](https://crfm.stanford.edu/helm/lite/latest/)
- [Hugging Face Chatbot Arena Leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)
- [Artificial Analysis](https://artificialanalysis.ai/)
- [Martian Leaderboard](https://leaderboard.withmartian.com/)
- [Hugging Face Enterprise Scenarios Leaderboard](https://huggingface.co/spaces/PatronusAI/enterprise_scenarios_leaderboard)
## Conclusion
The paper "Evaluating Large Language Models for Race-Based Medical Content" contributes important insights into the evaluation of LLMs for sensitive content. With the recommended revisions and further exploration of the highlighted areas, this paper has the potential to significantly impact the field.
rating: 6
confidence: 3 |
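For reference, since the review above asks for a clearer explanation of the statistical measures used (Sensitivity, Specificity, NPV, PPV, F1), the standard textbook definitions in terms of true/false positives and negatives (TP, FP, TN, FN) are given below; these are general definitions, not formulas or values taken from the paper under review.

```latex
\begin{aligned}
\text{Sensitivity (recall)} &= \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}, \\
\text{PPV (precision)} &= \frac{TP}{TP + FP}, \qquad
\text{NPV} = \frac{TN}{TN + FN}, \\
\text{F1} &= \frac{2 \cdot \text{PPV} \cdot \text{Sensitivity}}{\text{PPV} + \text{Sensitivity}}.
\end{aligned}
```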
sm9Udj2c6u | Feasibility of Automatically Detecting Practice of Race-Based Medicine by Large Language Models | [
"Akshay Swaminathan",
"Sid Salvi",
"Philip Chung",
"Alison Callahan",
"Suhana Bedi",
"Alyssa Unell",
"Mehr Kashyap",
"Roxana Daneshjou",
"Nigam Shah",
"Dev Dash"
] | One challenge in integrating large language models (LLMs) into clinical workflows is ensuring the appropriateness of generated content. This study develops an automated evaluation method to detect if LLM outputs contain debunked stereotypes that perpetuate race-based medicine. To develop a race-based medicine evaluator agent, we selected the top performing (F1) LLM-prompt combination among 4 LLMs (GPT-3.5, GPT-4, GPT-4-0125 and GPT-4-1106) and three prompts, using a physician-labeled dataset of 181 LLM responses as the gold standard. This evaluator agent was then used to assess 1300 responses from ten LLMs to 13 questions (10 iterations each) related to race-based medicine. Across the nine candidate LLMs, the percentage of LLM responses that did not contain debunked race-based content ranged from 22% in falcon-7b-instruct to 76% in claude-2. This study demonstrates the potential of LLM-powered agents to automate the detection of race-based medical content. | [
"large language models",
"evaluation",
"race-based medicine"
] | https://openreview.net/pdf?id=sm9Udj2c6u | 2PHRJ6DmmE | review | 1,708,803,195,014 | sm9Udj2c6u | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission36/Reviewer_8Ds8"
title: This paper investigates the capability of GPT-3.5/4 as an evaluator to detect potential race-related bias in medicine. Overall, this is good work studying important problems in medicine.
review: Quality: The evaluation is comprehensive and convincing. The authors first chose the best evaluator agent using the dataset from Omiye et al. The evaluator was then used to assess candidate responses from 10 LLMs across 13 questions. The authors also studied different combinations of prompts, and showed that GPT-4 with simple prompts is the best evaluator.
Clarity: The paper is well structured and easy to follow.
Significance: Race-based beliefs in healthcare can be harmful, and they constitute a major concern for doctors applying LLMs in clinical settings. The conclusions from this paper are important for guiding doctors in choosing the best LLMs in practice.
rating: 7
confidence: 4 |
rxx8leoPy0 | Med-HVL: Automatic Medical Domain Hallucination Evaluation for Large Vision-Language Models | [
"Qianqi Yan",
"Xuehai He",
"Xin Eric Wang"
] | Advancements in Large Vision-Language Models (LVLMs) have made significant progress in integrating visual and textual data. However, their deployment in the medical domain is impeded by critical issues of hallucinations, asking for reliable evaluation metrics and methods. We define two novel metrics: Object Hallucination and Domain Knowledge Hallucination, to quantify the hallucination of LVLMs in the medical domain. We propose a scalable, automated evaluation framework, Med-HVL, to assess and mitigate hallucinations at both object and domain knowledge levels. We reveal a significant presence of hallucinations in the LVLMs, emphasizing the need for domain-specific adaptations and finetuning to enhance their reliability for medical applications. | [
"Large Vision-Language Models",
"Hallucination",
"Medical"
] | https://openreview.net/pdf?id=rxx8leoPy0 | xySHQnT9jt | review | 1,707,875,867,662 | rxx8leoPy0 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission41/Reviewer_x5Nf"
] | title: Good pilot study focusing on caption hallucination of LVLMs. However, the manuscript needs further polish to be published.
review: The paper proposes two automatic metrics for evaluating the degree of hallucination of LVLMs in the medical domain.
## Pros
- Hallucination evaluation in the medical domain is an important research question, and the authors motivate it well.
- The proposed object hallucination and "domain knowledge hallucination" metrics are relevant to the medical domain.
## Cons
- The proposed "domain knowledge hallucination" indeed only focuses on the diagnosis. Other medical concepts such as procedures, medications, medical conditions are not included. To my understanding, it is more appropriate to use the term "diagnosis hallucination".
- The automatic evaluation is great for scaling. Meanwhile, it would be interesting to know whether these model-based metrics are really reliable (i.e. the model-derived evaluations themselves do not contain hallucination). I suggest adding some human evaluation to see (1) whether LLM-based NER is reliable; (2) whether cosine-similarity and threshold are reasonable.
- Only LLaVA-Med is evaluated, which weakens the argument presented.
- Clarification needed:
- How are the cosine similarity and the ICD-10-based distance combined?
- In the second round of inference, would the ground-truth diagnosis be leaked to the LVLM in the enhanced prompt?
## Misc.
- Figure 2 is presented, but never mentioned in the text.
Overall, the proposed metrics are variants of existing metrics, with a focus on the medical domain.
The clarity can be improved to make the manuscript stronger. Certain human evaluations and LVLM evaluations are needed to thoroughly validate the technical designs.
rating: 4
confidence: 5 |
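The reviews above question whether cosine similarity over BioBERT embeddings with a threshold is a reliable way to match generated findings against ground-truth observations. Below is a minimal sketch of what such a model-based match could look like; the checkpoint, mean pooling, and the 0.85 threshold are assumptions for illustration, not the Med-HVL implementation.

```python
# Minimal sketch of embedding-based term matching with a cosine-similarity threshold.
# Model choice, pooling, and threshold are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
model = AutoModel.from_pretrained("dmis-lab/biobert-base-cased-v1.1")

def embed(text: str) -> torch.Tensor:
    """Mean-pooled last-hidden-state embedding of a short phrase."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

def is_match(pred_term: str, gt_term: str, threshold: float = 0.85) -> bool:
    """Count a predicted finding as grounded if it is close enough to a ground-truth term."""
    sim = torch.nn.functional.cosine_similarity(embed(pred_term), embed(gt_term), dim=0)
    return sim.item() >= threshold

print(is_match("cardiomegaly", "enlarged cardiac silhouette"))
```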
rxx8leoPy0 | Med-HVL: Automatic Medical Domain Hallucination Evaluation for Large Vision-Language Models | [
"Qianqi Yan",
"Xuehai He",
"Xin Eric Wang"
] | Advancements in Large Vision-Language Models (LVLMs) have made significant progress in integrating visual and textual data. However, their deployment in the medical domain is impeded by critical issues of hallucinations, asking for reliable evaluation metrics and methods. We define two novel metrics: Object Hallucination and Domain Knowledge Hallucination, to quantify the hallucination of LVLMs in the medical domain. We propose a scalable, automated evaluation framework, Med-HVL, to assess and mitigate hallucinations at both object and domain knowledge levels. We reveal a significant presence of hallucinations in the LVLMs, emphasizing the need for domain-specific adaptations and finetuning to enhance their reliability for medical applications. | [
"Large Vision-Language Models",
"Hallucination",
"Medical"
] | https://openreview.net/pdf?id=rxx8leoPy0 | gsLxEkQGzl | review | 1,708,640,337,398 | rxx8leoPy0 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission41/Reviewer_iiRS"
] | title: Review
review: Contribution:
The authors made two contributions in their work: 1) a clear definition of hallucination within the medical field, broken down into two types: Object Hallucination, which involves incorrect or fabricated details about objects, and Domain Knowledge Hallucination, which pertains to inaccuracies in medical knowledge or practices; and 2) a new tool, Med-HVL, designed specifically for the medical domain to detect and evaluate these hallucinations.
Pros:
It's an interesting glimpse into the bias of an LLM pretrained on a large amount of medical data. A nice extension of this work would be to systematically compare and benchmark these hallucinations across multiple datasets and models.
A few remarks:
- In the figure, the gt caption and the gt observation are the same. Is there a gt caption, without the irrelevant info, that would not be the gt observation? It would be nice to have a different example in the figure.
- "Object" in this context seems incorrect and can be confused with support devices. Maybe a more suitable term would be "anatomical structures"?
rating: 6
confidence: 3 |
rxx8leoPy0 | Med-HVL: Automatic Medical Domain Hallucination Evaluation for Large Vision-Language Models | [
"Qianqi Yan",
"Xuehai He",
"Xin Eric Wang"
] | Advancements in Large Vision-Language Models (LVLMs) have made significant progress in integrating visual and textual data. However, their deployment in the medical domain is impeded by critical issues of hallucinations, asking for reliable evaluation metrics and methods. We define two novel metrics: Object Hallucination and Domain Knowledge Hallucination, to quantify the hallucination of LVLMs in the medical domain. We propose a scalable, automated evaluation framework, Med-HVL, to assess and mitigate hallucinations at both object and domain knowledge levels. We reveal a significant presence of hallucinations in the LVLMs, emphasizing the need for domain-specific adaptations and finetuning to enhance their reliability for medical applications. | [
"Large Vision-Language Models",
"Hallucination",
"Medical"
] | https://openreview.net/pdf?id=rxx8leoPy0 | RoLKKrDJED | review | 1,708,715,906,961 | rxx8leoPy0 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission41/Reviewer_FCvE"
] | title: Relevant topic, interesting approach that requires some discussion
review: In this work, the authors address a critical issue of evaluating the hallucination of LVLMs in a clinical context. As the authors assert, developing methods to assess hallucinations of these models in a quantitative and automated way is important. For this, the authors employ the CHAIR metric used in image captioning and propose a new domain knowledge hallucination metric. The authors present an initial evaluation of LLaVA-Med using the metric on the MedICAT dataset.
Comments:
- The authors address an important issue regarding hallucination of LVLMs in a clinical context.
- The proposed metric seems reasonable; however, the fact that an additional LLM is used to obtain ground-truth object observations is questionable. LLMs themselves are not fully validated in terms of their performance, so a proper evaluation/validation study of this step is necessary.
- For assessing object hallucination and domain knowledge, the authors utilize cosine similarity of embeddings with BioBERT. Again, although the method addresses scalability, careful validation of this method for assessment is needed.
- Minor: the figure seems to be assessing GPT-4V whereas the text is evaluating LLaVA-Med
Overall, the work addresses an important issue and presents a reasonable set of metrics for assessing hallucination. Although the study is preliminary, the work will garner relevant discussion, in particular regarding the usage of existing models (such as BioBERT and GPT-4) for assessing other LVLMs.
rating: 6
confidence: 4 |
rxx8leoPy0 | Med-HVL: Automatic Medical Domain Hallucination Evaluation for Large Vision-Language Models | [
"Qianqi Yan",
"Xuehai He",
"Xin Eric Wang"
] | Advancements in Large Vision-Language Models (LVLMs) have made significant progress in integrating visual and textual data. However, their deployment in the medical domain is impeded by critical issues of hallucinations, asking for reliable evaluation metrics and methods. We define two novel metrics: Object Hallucination and Domain Knowledge Hallucination, to quantify the hallucination of LVLMs in the medical domain. We propose a scalable, automated evaluation framework, Med-HVL, to assess and mitigate hallucinations at both object and domain knowledge levels. We reveal a significant presence of hallucinations in the LVLMs, emphasizing the need for domain-specific adaptations and finetuning to enhance their reliability for medical applications. | [
"Large Vision-Language Models",
"Hallucination",
"Medical"
] | https://openreview.net/pdf?id=rxx8leoPy0 | Ofu58Lav78 | review | 1,708,663,940,147 | rxx8leoPy0 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission41/Reviewer_Bbeo"
] | title: This paper proposes two potentially useful metrics for evaluating medical LVLMs
review: This paper proposes two potentially useful metrics, CHAIR and DVH, for evaluating medical LVLMs. Overall this is a good metric proposal, although given the use of chest X-rays, some comparison to other metrics like RadGraph and CheXBert would be useful here.
rating: 7
confidence: 4 |
rLVQmYYgJP | TNM Tumor Classification from Unstructured Breast Cancer Pathology Reports using LoRA Finetuning of Mistral 7B | [
"Kyle McCleary",
"James Ghawaly",
"Lucio Miele"
] | Over the past year, large language models have seen an explosion in usage, with researchers and companies rushing to discover new applications. This explosion was kick-started by OpenAI, with their release of GPT 3.5 and GPT 4 to the general public. These foundation models have proven extraordinarily capable on a wide range of tasks, but their cost and reliability present problems for more sensitive and/or resource-limited applications. Over the same time-span, however, we have also seen a rush of development in smaller foundation models, such as Mistral's 7B model, as well as in fine-tuning those models for specific tasks.
In this paper, we explore the application of Low-Rank Adaptation (LoRA) fine-tuning of small language models for performing TNM staging on unstructured pathology reports for triple negative breast cancer cases. We also attempt to develop a more generalized approach, so that our work can be applied to other NLP tasks within the medical field.
We found that performing TNM staging with reliable accuracy is possible for a small foundational model through fine-tuning, allowing fast and reliable automation of critical language processing tasks within medicine. | [
"clinical foundation models",
"large language models",
"mistral",
"tumor classification",
"low rank adaptation",
"fine-tuning"
] | https://openreview.net/pdf?id=rLVQmYYgJP | hS5Xh0Gd2e | review | 1,707,950,770,013 | rLVQmYYgJP | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission20/Reviewer_tM9p"
] | title: A nice study with impressive-looking results, but difficult to follow
review: This is an interesting demonstration of an application of foundation language models cost-effectively fine-tuned to a clinical task and performing impressively, especially compared to language models without fine-tuning.
However, it is hard to follow for someone like myself with machine learning but not medical expertise. For instance, I don't know what TNM or triple-negative mean, and my lack of familiarity with medical reports makes the problem and the description of data preparation, which are important for understanding the implications of the results, hard to follow. In the results, it is unclear what the difference is between the sample count indicated in the model column and the one in the samples column. If the former is the number of samples used for training, then the claim that a low sample count is sufficient for training seems unsupported, as the accuracy is substantially higher with a greater sample count. Also, if possible, it would be helpful if the results included a model trained only on the cancer data, so we can tell how much is gained by starting from a foundation model. Further, what are the practical implications of these results? What would be the impact of deploying this model in particular?
A small discussion of prior work in fine-tuning foundation models for clinical tasks and the gap that this work fills could help contextualize this work and understand its contribution. Much of the last page is speculative, which is, I think, less valuable than further supporting the experimental study - the main contribution of this paper - with more details as mentioned above.
rating: 5
confidence: 2 |
rLVQmYYgJP | TNM Tumor Classification from Unstructured Breast Cancer Pathology Reports using LoRA Finetuning of Mistral 7B | [
"Kyle McCleary",
"James Ghawaly",
"Lucio Miele"
] | Over the past year, large language models have seen an explosion in usage, with researchers and companies rushing to discover new applications. This explosion was kick-started by OpenAI, with their release of GPT 3.5 and GPT 4 to the general public. These foundation models have proven extraordinarily capable on a wide range of tasks, but their cost and reliability present problems for more sensitive and/or resource-limited applications. Over the same time-span, however, we have also seen a rush of development in smaller foundation models, such as Mistral's 7B model, as well as in fine-tuning those models for specific tasks.
In this paper, we explore the application of Low-Rank Adaptation (LoRA) fine-tuning of small language models for performing TNM staging on unstructured pathology reports for triple negative breast cancer cases. We also attempt to develop a more generalized approach, so that our work can be applied to other NLP tasks within the medical field.
We found that performing TNM staging with reliable accuracy is possible for a small foundational model through fine-tuning, allowing fast and reliable automation of critical language processing tasks within medicine. | [
"clinical foundation models",
"large language models",
"mistral",
"tumor classification",
"low rank adaptation",
"fine-tuning"
] | https://openreview.net/pdf?id=rLVQmYYgJP | XGExrsQ0a4 | review | 1,708,658,608,220 | rLVQmYYgJP | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission20/Reviewer_opkq"
] | title: Meaningful efforts to apply multiple language models to TNM Tumor classification from unstructured reports
review: The paper explored the application of LoRA fine-tuning of Mistral 7B Instruct to classify TNM staging from unstructured pathology reports for triple-negative breast cancers. Several baselines and tuning strategies are compared on a real-world data task to demonstrate that the proposed fine-tuning approach can achieve better accuracy using a small amount of training data.
- Quality: the paper is technically sound and of good quality. Most claims in the paper are well supported by experiment results.
- Clarity: the paper is well-structured and clear in its experimental process and results discussion, although it lacks clarity on the clinical background of the TNM categories (please refer to suggestion #1 below).
- Originality: the paper demonstrates an interesting application of well-established techniques to a specific clinical task. Though fine-tuning is not a novel idea, the application to TNM staging specifically seems novel from a practical perspective.
- Significance: the paper mentions the potential to generalize the proposed approach to other clinical tasks, given its low cost and high reliability in comparison to other large language models.
Cons, suggestions or questions to the authors:
1. It would be better to include a brief introduction to TNM staging (e.g., T stands for the size of the tumor, etc.) at the beginning of the paper for a non-clinical audience, and also to explain why TNM staging of triple-negative breast cancers is challenging and demands ML modeling. This would also improve the significance of the work from the clinical application perspective.
2. The paper mentions manual labeling by subject matter experts. Could you please provide more information on 1) how many experts were involved and 2) whether each document was labeled by multiple experts and how the final label was determined (e.g., in case of disagreement)? More importantly, could you please comment on how to ensure the reliability and robustness of the proposed fine-tuning approach against label noise in the training data?
3. In Table 1, the different fine-tuning results show increasing accuracy but decreasing confidence for the "UNKNOWN" class. Does this indicate an overfitting problem?
rating: 7
confidence: 3 |
rLVQmYYgJP | TNM Tumor Classification from Unstructured Breast Cancer Pathology Reports using LoRA Finetuning of Mistral 7B | [
"Kyle McCleary",
"James Ghawaly",
"Lucio Miele"
] | Over the past year, large language models have seen an explosion in usage, with researchers and companies rushing to discover new applications. This explosion was kick-started by OpenAI, with their release of GPT 3.5 and GPT 4 to the general public. These foundation models have proven extraordinarily capable on a wide range of tasks, but their cost and reliability present problems for more sensitive and/or resource-limited applications. Over the same time-span, however, we have also seen a rush of development in smaller foundation models, such as Mistral's 7B model, as well as in fine-tuning those models for specific tasks.
In this paper, we explore the application of Low-Rank Adaptation (LoRA) fine-tuning of small language models for performing TNM staging on unstructured pathology reports for triple negative breast cancer cases. We also attempt to develop a more generalized approach, so that our work can be applied to other NLP tasks within the medical field.
We found that performing TNM staging with reliable accuracy is possible for a small foundational model through fine-tuning, allowing fast and reliable automation of critical language processing tasks within medicine. | [
"clinical foundation models",
"large language models",
"mistral",
"tumor classification",
"low rank adaptation",
"fine-tuning"
] | https://openreview.net/pdf?id=rLVQmYYgJP | IAUrX5O4o8 | review | 1,708,638,011,022 | rLVQmYYgJP | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission20/Reviewer_JW1i"
] | title: TNM staging using LoRA finetuning
review: The authors performed LoRA fine-tuning on top of the Mistral 7B model to do TNM staging classification on breast cancer pathology reports. The authors carefully curated a dataset of anonymized reports with labels from subject matter experts. The results look very promising. Only one foundation model was used in the fine-tuning and evaluation, so it is unclear whether there would be significant differences in results among foundation models of different sizes. It is also unclear how well the model generalizes, e.g., whether the quality and the format of the original reports may affect the final results, though the authors mention this in the future work section.
rating: 8
confidence: 3 |
rLVQmYYgJP | TNM Tumor Classification from Unstructured Breast Cancer Pathology Reports using LoRA Finetuning of Mistral 7B | [
"Kyle McCleary",
"James Ghawaly",
"Lucio Miele"
] | Over the past year, large language models have seen an explosion in usage, with researchers and companies rushing to discover new applications. This explosion was kick-started by OpenAI, with their release of GPT 3.5 and GPT 4 to the general public. These foundation models have proven extraordinarily capable on a wide range of tasks, but their cost and reliability present problems for more sensitive and/or resource-limited applications. Over the same time-span, however, we have also seen a rush of development in smaller foundation models, such as Mistral's 7B model, as well as in fine-tuning those models for specific tasks.
In this paper, we explore the application of Low-Rank Adaptation (LoRA) fine-tuning of small language models for performing TNM staging on unstructured pathology reports for triple negative breast cancer cases. We also attempt to develop a more generalized approach, so that our work can be applied to other NLP tasks within the medical field.
We found that performing TNM staging with reliable accuracy is possible for a small foundational model through fine-tuning, allowing fast and reliable automation of critical language processing tasks within medicine. | [
"clinical foundation models",
"large language models",
"mistral",
"tumor classification",
"low rank adaptation",
"fine-tuning"
] | https://openreview.net/pdf?id=rLVQmYYgJP | 8C01mOSnnV | review | 1,708,644,502,248 | rLVQmYYgJP | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission20/Reviewer_c7Sw"
] | title: An interesting application of LoRa but with questionable data augmentation/quality and limited evaluation
review: Summary
-- the paper applies LoRA fine-tuning of a Mistral model to perform TNM phenotyping using pathology reports.
Pros
-- they perform data augmentation using 200 real world pathology reports. They synthesize new reports by stripping relevant sentences from existing reports and replacing them with example sentences that were mapped to certain label classes a priori
-- the inclusion of an UNKNOWN label class in cases where the relevant information was missing
-- the use of JSON-enforced output
-- reporting of training time as well as performance
-- the ability of the model to cite the relevant information
-- the use of LoRA here is interesting
Cons
-- While the data augmentation method is creative, the methodology is not described clearly enough and the quality of the resulting data is not examined. For example, "These sentences were then injected into the template reports at randomly selected marker locations" -- does this mean that the data is being pulled from a finite list of sentences in the JSON? If so, this is clearly not sufficiently representative of the diversity of natural clinical language.
-- unclear how the path reports were labeled -- what exactly was being labeled, and what were the qualifications of the labelers?
-- how did you strip the report of all info relevant for TNM using a script? How did you validate the accuracy of this process?
-- unclear how many data examples were generated in total. Also not clear whether or not the resulting dataset was high quality. It sounds like you replaced the parts relevant to TNM with random TNM ratings to augment the dataset. Were the resulting pathology reports realistic? It's not obvious that this procedure would result in realistic pathology reports.
-- Some irrelevant text, e.g., there is no Section 5
-- This is an interesting application of LoRA, but it is just one task. A more comprehensive evaluation across several tasks would be much more compelling.
-- "We also attempt to develop a more generalized approach, so that our work can be applied to other NLP tasks within the medical field" -- it's not clear where this was done
rating: 6
confidence: 5 |
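For readers unfamiliar with the parameter-efficient approach the reviews above discuss, below is a minimal sketch of a LoRA setup for Mistral-7B-Instruct using the `peft` library; the rank, alpha, dropout, and target modules are illustrative assumptions, not the configuration reported by the authors.

```python
# Minimal sketch of LoRA fine-tuning setup for Mistral-7B-Instruct with peft.
# All hyperparameters below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

lora_cfg = LoraConfig(
    r=16,                      # low-rank dimension
    lora_alpha=32,             # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable
```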
rAw5ANMNZ2 | Modelling Lexical Characteristics of the Healthy Aging Population with a Natural Speech Dataset | [
"Han Kunmei"
] | Modelling baseline language variation in normal aging is important for our understanding of healthy aging. Large-language databases and NLP tools enable us to conduct automated quantitative analysis of natural language data. In this study, we aim to demonstrate that using NLP tools and psycholinguistic metrics to process natural language datasets can help to set a normative benchmark of aging language. The benchmark can be applied to the assessment of cognitive aging. | [
"NLP tools",
"psycholinguistic metrics",
"natural language",
"cognitive aging"
] | https://openreview.net/pdf?id=rAw5ANMNZ2 | kQuqb9zI8x | review | 1,708,637,964,498 | rAw5ANMNZ2 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission11/Reviewer_aQeg"
] | title: Solid non-traditional track submission
review: This paper evaluates linguistic variation in speech data by age and gender using standard NLP tools: PoS tagging with the Stanford Parser and the Penn Treebank tags adjusted for Singaporean English. The author goes on to derive a variety of linguistic features from these data, on which the analysis is performed. The results largely corroborate existing findings on the effect of age on linguistic variation -- particularly as it pertains to "lexical concreteness." The methodology appears to be sound, and the use of the Stanford Parser, in my opinion, constitutes foundation model usage, thus making this a relevant contribution to the workshop.
rating: 7
confidence: 2 |
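As a rough illustration of the kind of PoS-based feature extraction described in the review above, the sketch below uses `stanza` as a stand-in for the Stanford tools and derives a couple of simple lexical measures; the feature set is an assumption for illustration, not the author's actual pipeline.

```python
# Sketch: Penn-Treebank-style POS tags plus simple lexical-diversity features
# from a speech transcript. stanza is used as a stand-in for the Stanford tools.
from collections import Counter
import stanza

stanza.download("en", verbose=False)          # one-time model download
nlp = stanza.Pipeline("en", processors="tokenize,pos", verbose=False)

def lexical_features(transcript: str) -> dict:
    doc = nlp(transcript)
    words = [w for sent in doc.sentences for w in sent.words]
    ptb_tags = Counter(w.xpos for w in words)  # xpos = Penn-Treebank-style tags
    tokens = [w.text.lower() for w in words]
    n = max(len(tokens), 1)
    return {
        "n_tokens": len(tokens),
        "type_token_ratio": len(set(tokens)) / n,
        "noun_share": sum(c for t, c in ptb_tags.items() if t and t.startswith("NN")) / n,
        "verb_share": sum(c for t, c in ptb_tags.items() if t and t.startswith("VB")) / n,
    }

print(lexical_features("I walked to the market and bought some fresh kaya toast."))
```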
rAw5ANMNZ2 | Modelling Lexical Characteristics of the Healthy Aging Population with a Natural Speech Dataset | [
"Han Kunmei"
] | Modelling baseline language variation in normal aging is important for our understanding of healthy aging. Large-language databases and NLP tools enable us to conduct automated quantitative analysis of natural language data. In this study, we aim to demonstrate that using NLP tools and psycholinguistic metrics to process natural language datasets can help to set a normative benchmark of aging language. The benchmark can be applied to the assessment of cognitive aging. | [
"NLP tools",
"psycholinguistic metrics",
"natural language",
"cognitive aging"
] | https://openreview.net/pdf?id=rAw5ANMNZ2 | cMomTZYNKL | review | 1,708,406,647,645 | rAw5ANMNZ2 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission11/Reviewer_Zp34"
] | title: Nice paper utilizes NLP tools and psycholinguistic metrics to analyze natural speech data
review: The paper investigates the baseline language variations in normal aging to understand cognitive changes. It utilizes NLP tools and psycholinguistic metrics to analyze natural speech data, aiming to establish a normative benchmark for aging language, which could assist in assessing cognitive aging.
Pros: The paper used NLP tools to objectively analyze natural speech data, overcoming the subjectivity in manual assessments of language abilities, besides, it provided a detailed year-by-year analysis of linguistic characteristics influenced by age and sex.
Cons: However, there are a few limitations to this study: it excludes individuals older than 80 years and those with less than 13 years of education, which could omit valuable insights from these groups. While the approach reduces subjectivity compared to manual assessments, the choice and interpretation of psycholinguistic metrics could still introduce bias.
Overall, the study presents a step forward in understanding language variation in aging.
rating: 6
confidence: 4 |
rAw5ANMNZ2 | Modelling Lexical Characteristics of the Healthy Aging Population with a Natural Speech Dataset | [
"Han Kunmei"
] | Modelling baseline language variation in normal aging is important for our understanding of healthy aging. Large-language databases and NLP tools enable us to conduct automated quantitative analysis of natural language data. In this study, we aim to demonstrate that using NLP tools and psycholinguistic metrics to process natural language datasets can help to set a normative benchmark of aging language. The benchmark can be applied to the assessment of cognitive aging. | [
"NLP tools",
"psycholinguistic metrics",
"natural language",
"cognitive aging"
] | https://openreview.net/pdf?id=rAw5ANMNZ2 | 74mAXjj5OK | review | 1,708,639,968,776 | rAw5ANMNZ2 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission11/Reviewer_xRGg"
] | title: Good study focusing on the lexical characteristics of the healthy aging population
review: This paper presents a significant study focusing on the lexical characteristics of the healthy aging population using natural speech datasets and psycholinguistic metrics. The results reveal that part-of-speech distributions vary with gender, while lexical concreteness correlates with age, contributing valuable information to the understanding of language variation in aging. The following points could be considered:
1. It would be beneficial to include a comparison with existing studies on younger populations or those with cognitive impairments.
2. Given that the study is based on Singaporean English speakers, how do you anticipate the findings to generalize to other English-speaking populations or languages?
3. How were the audio recordings standardized across participants to minimize environmental and technical variations?
rating: 7
confidence: 3 |
oulcuR8Aub | Med42 - Evaluating Fine-Tuning Strategies for Medical LLMs: Full-Parameter vs. Parameter-Efficient Approaches | [
"Clement Christophe",
"Praveenkumar Kanithi",
"Prateek Munjal",
"Tathagata Raha",
"Nasir Hayat",
"Ronnie Rajan",
"Ahmed Al Mahrooqi",
"Avani Gupta",
"Muhammad Umar Salman",
"Marco AF Pimentel",
"Shadab Khan",
"Boulbaba Ben Amor"
] | This study presents a comprehensive analysis and comparison of two predominant fine-tuning methodologies -- full-parameter fine-tuning and parameter-efficient tuning -- within the context of medical Large Language Models (LLMs). We developed and refined a series of LLMs, based on the Llama-2 architecture, specifically designed to enhance medical knowledge retrieval, reasoning, and question answering capabilities.
Our experiments systematically evaluate the effectiveness of these tuning strategies across various well-known medical benchmarks.
Notably, our medical LLM showed an accuracy level of 72% on the US Medical Licensing Examination (USMLE) datasets, setting a new standard in performance for openly available medical LLMs.
Through this comparative analysis, we aim to identify the most effective and efficient method for fine-tuning LLMs in the medical domain, thereby contributing significantly to the advancement of AI-driven healthcare applications. | [
"LLMs",
"Clinical",
"Fine-tuning",
"Evaluation"
] | https://openreview.net/pdf?id=oulcuR8Aub | PrK6XrDyZ8 | review | 1,708,640,221,971 | oulcuR8Aub | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission7/Reviewer_tRbZ"
] | title: Fine-tuning strategies evaluation
review: The authors evaluated the performance of different fine-tuning strategies on medical QA tasks. A variety of medical QA datasets were used in the evaluation. Full-parameter fine-tuning and LoRA fine-tuning were compared, and zero-shot performance was used to evaluate the models.
Only two sizes of Llama-2 models were used as base models. Some studies have shown that different base models may provide different levels of improvement after fine-tuning, so it would be great to see results from other commonly available open-source models such as Mistral.
The results were not new and were mostly expected. Similar studies have been conducted, and the results of this study were generally consistent with previous findings. The authors did include more QA datasets for the evaluation, which made the results more comprehensive.
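As a point of reference for what zero-shot evaluation on a medical QA benchmark typically looks like, the sketch below scores each answer option by its loss under a causal LM; this is a generic illustration, not the paper's harness, the model name is only an example, and prompt masking and length normalisation are glossed over.

```python
# Generic zero-shot multiple-choice scoring sketch (not the paper's harness):
# pick the answer option with the lowest mean per-token loss under a causal LM.
# A more careful harness would mask the prompt tokens and length-normalise.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"   # example only; any causal LM checkpoint works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

@torch.no_grad()
def pick_option(question, options):
    losses = []
    for opt in options:
        ids = tok(f"Question: {question}\nAnswer: {opt}", return_tensors="pt").input_ids.to(model.device)
        losses.append(model(ids, labels=ids).loss.item())   # mean cross-entropy over the sequence
    return min(range(len(options)), key=losses.__getitem__)

question = "Which electrolyte disturbance most commonly causes peaked T waves on ECG?"
options = ["Hyperkalemia", "Hyponatremia", "Hypocalcemia", "Hypomagnesemia"]
print(options[pick_option(question, options)])
```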
rating: 7
confidence: 4 |
oulcuR8Aub | Med42 - Evaluating Fine-Tuning Strategies for Medical LLMs: Full-Parameter vs. Parameter-Efficient Approaches | [
"Clement Christophe",
"Praveenkumar Kanithi",
"Prateek Munjal",
"Tathagata Raha",
"Nasir Hayat",
"Ronnie Rajan",
"Ahmed Al Mahrooqi",
"Avani Gupta",
"Muhammad Umar Salman",
"Marco AF Pimentel",
"Shadab Khan",
"Boulbaba Ben Amor"
] | This study presents a comprehensive analysis and comparison of two predominant fine-tuning methodologies -- full-parameter fine-tuning and parameter-efficient tuning -- within the context of medical Large Language Models (LLMs). We developed and refined a series of LLMs, based on the Llama-2 architecture, specifically designed to enhance medical knowledge retrieval, reasoning, and question answering capabilities.
Our experiments systematically evaluate the effectiveness of these tuning strategies across various well-known medical benchmarks.
Notably, our medical LLM showed an accuracy level of 72% on the US Medical Licensing Examination (USMLE) datasets, setting a new standard in performance for openly available medical LLMs.
Through this comparative analysis, we aim to identify the most effective and efficient method for fine-tuning LLMs in the medical domain, thereby contributing significantly to the advancement of AI-driven healthcare applications. | [
"LLMs",
"Clinical",
"Fine-tuning",
"Evaluation"
] | https://openreview.net/pdf?id=oulcuR8Aub | JgMQezCtwC | review | 1,708,326,299,953 | oulcuR8Aub | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission7/Reviewer_txoL"
] | title: Comparison of LoRA and Full-Parameter Fine-Tuning LLMs for Medical Q&A
review: ## Summary
This paper investigates the effectiveness of using LoRA fine-tuning compared to full-parameter fine-tuning LLMs for adaptation to the healthcare domain. Authors report results for both smaller 7B Llama-2 as well as larger 70B Llama-2 fine-tuned models. They also compare against close-source state-of-the-art models like GPT-4 and Med-PaLM-2. The findings and described methods are a useful reference for healthcare machine learning practitioners who want to fine-tune a general-domain LLM for health-care tasks.
## Pros
* Evaluation is comprehensive across multiple datasets
* Evaluation is carefully done with decontamination pipeline
* Compares one of the most popular parameter-efficient fine-tuning techniques, LoRA, against full fine-tuning and state-of-the-art models (GPT-4 and Med-PaLM) in the healthcare domain
* Models are publicly released and available
* Datasets are publicly released and available
* Manuscript is well written and easy to follow
## Cons
* The Methods section describes that LoRA may be applied to only the attention layers vs. all layers. The PE-FT results in Table 1 are for LoRA applied to all layers. It would be nice to also show the performance for LoRA applied to only the attention layers, since the authors mention this is a common approach. However, this is more of a "nice to have"; a sketch of the two configurations follows below.
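A minimal sketch of the two LoRA placements mentioned in this point, using the PEFT library; module names follow Llama-2's layer naming, and the rank, alpha, and dropout values are illustrative rather than the paper's.

```python
# Sketch of attention-only vs all-linear-layer LoRA placement (illustrative values).
from peft import LoraConfig

attention_only = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

all_linear_layers = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
# get_peft_model(base_model, all_linear_layers) would then wrap the base
# model before supervised fine-tuning.
```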
rating: 8
confidence: 3 |
oulcuR8Aub | Med42 - Evaluating Fine-Tuning Strategies for Medical LLMs: Full-Parameter vs. Parameter-Efficient Approaches | [
"Clement Christophe",
"Praveenkumar Kanithi",
"Prateek Munjal",
"Tathagata Raha",
"Nasir Hayat",
"Ronnie Rajan",
"Ahmed Al Mahrooqi",
"Avani Gupta",
"Muhammad Umar Salman",
"Marco AF Pimentel",
"Shadab Khan",
"Boulbaba Ben Amor"
] | This study presents a comprehensive analysis and comparison of two predominant fine-tuning methodologies -- full-parameter fine-tuning and parameter-efficient tuning -- within the context of medical Large Language Models (LLMs). We developed and refined a series of LLMs, based on the Llama-2 architecture, specifically designed to enhance medical knowledge retrieval, reasoning, and question answering capabilities.
Our experiments systematically evaluate the effectiveness of these tuning strategies across various well-known medical benchmarks.
Notably, our medical LLM showed an accuracy level of 72% on the US Medical Licensing Examination (USMLE) datasets, setting a new standard in performance for openly available medical LLMs.
Through this comparative analysis, we aim to identify the most effective and efficient method for fine-tuning LLMs in the medical domain, thereby contributing significantly to the advancement of AI-driven healthcare applications. | [
"LLMs",
"Clinical",
"Fine-tuning",
"Evaluation"
] | https://openreview.net/pdf?id=oulcuR8Aub | 7ZjPKN5eJ2 | review | 1,708,495,871,373 | oulcuR8Aub | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission7/Reviewer_WxPy"
] | title: Official Review for Evaluating Fine-Tuning Strategies for Medical LLMs
review: **Summary:**
The paper focuses on fine-tuning the 7B and 70B Llama-2 models on a dataset compiled from several open medical datasets, evaluating the performance gap between LoRA fine-tuning and full-parameter fine-tuning. Experiments conducted on several medical benchmarks led to good performance on some of the benchmarks, outperformed only by models trained at a larger scale (GPT-4) and models pre-trained on medical corpora (MedPaLM-2). To ensure a fair evaluation, the paper also introduces a decontamination pipeline to remove potential common samples between the training and the testing splits of the benchmarks.
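The decontamination pipeline mentioned above is not reproduced here, but a common approach is n-gram overlap filtering between training samples and benchmark test items; the sketch below assumes whitespace tokenisation and an 8-gram overlap criterion, and the paper's exact procedure may differ.

```python
# Sketch of a simple n-gram-overlap decontamination check (the paper's
# exact pipeline may differ): drop training samples that share any 8-gram
# with a benchmark test question.
def ngrams(text, n=8):
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def decontaminate(train_samples, test_questions, n=8):
    test_grams = set().union(*(ngrams(q, n) for q in test_questions))
    return [s for s in train_samples if not (ngrams(s, n) & test_grams)]
```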
**Strengths:**
1. The evaluation benchmark is thorough, encompassing a wide set of medical benchmarks, thus enabling a more in-depth analysis.
2. The dataset introduced by the authors seems fairly comprehensive and suitable for the clinical domain, and the performance obtained by the Llama models trained in the paper realistically substantiates this.
3. The focus on data decontamination for a fairer analysis by the authors is appreciated and makes their results more relevant.
4. The overall work presented in this paper is very relevant to the topic of the venue.
5. The elaborate (for a short paper) description of the hyperparameters to enable reproducibility is appreciated.
**Weaknesses:**
1. The theme of the paper revolves around parameter-efficient fine-tuning vs full-parameter fine-tuning. However, the claim that parameter-efficient fine-tuning achieves results close to full-parameter fine-tuning is already well established in the literature. The authors themselves note that the results are in line with prior work on LoRA vs full-parameter fine-tuning in other domains. I would recommend the authors adjust the paper to better describe their main contributions towards the compilation of the training dataset from open medical sources and the data decontamination pipeline, with a lesser focus on parameter-efficient fine-tuning vs full-parameter fine-tuning.
2. I recommend the authors describe the instruction tuning methodology in greater detail in the main paper, if space permits, else in the appendix.
**Other recommendations:**
There is a typo in the caption for Table 1, where GPT-3.5 is incorrectly mentioned as GPT-3.4.
rating: 6
confidence: 4 |
oulcuR8Aub | Med42 - Evaluating Fine-Tuning Strategies for Medical LLMs: Full-Parameter vs. Parameter-Efficient Approaches | [
"Clement Christophe",
"Praveenkumar Kanithi",
"Prateek Munjal",
"Tathagata Raha",
"Nasir Hayat",
"Ronnie Rajan",
"Ahmed Al Mahrooqi",
"Avani Gupta",
"Muhammad Umar Salman",
"Marco AF Pimentel",
"Shadab Khan",
"Boulbaba Ben Amor"
] | This study presents a comprehensive analysis and comparison of two predominant fine-tuning methodologies -- full-parameter fine-tuning and parameter-efficient tuning -- within the context of medical Large Language Models (LLMs). We developed and refined a series of LLMs, based on the Llama-2 architecture, specifically designed to enhance medical knowledge retrieval, reasoning, and question answering capabilities.
Our experiments systematically evaluate the effectiveness of these tuning strategies across various well-known medical benchmarks.
Notably, our medical LLM showed an accuracy level of 72% on the US Medical Licensing Examination (USMLE) datasets, setting a new standard in performance for openly available medical LLMs.
Through this comparative analysis, we aim to identify the most effective and efficient method for fine-tuning LLMs in the medical domain, thereby contributing significantly to the advancement of AI-driven healthcare applications. | [
"LLMs",
"Clinical",
"Fine-tuning",
"Evaluation"
] | https://openreview.net/pdf?id=oulcuR8Aub | 4lnYndKQhh | review | 1,708,642,079,810 | oulcuR8Aub | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission7/Reviewer_VxwM"
] | title: Review
review: Pros:
- The authors articulate a well-defined research question and address it with clarity and efficiency;
- There's a comprehensive evaluation conducted against various state-of-the-art large language models, both open-source and proprietary;
- The presentation of results is clear and straightforward.
Suggestion for improvement:
- It would be beneficial to explicitly indicate in Table 1 which backbone corresponds to PE-FT and FP-FT; I assume it's llama70b. Additionally, including a column for llama7b would give the complete picture.
- It would be insightful to detail the computational resources required, in terms of GPU hours and memory, for both PE and FP fine-tuning. Providing this comparison could offer a clearer understanding of the differences in resource intensity between the two methods.
rating: 7
confidence: 4 |
mKPbcJAb83 | SoftTiger: A Clinical Foundation Model for Healthcare Workflows | [
"Ye Chen",
"Igor Couto",
"Wei Cai",
"CONG FU",
"Bruno Dorneles"
] | We introduce SoftTiger, a clinical large language model (CLaM) designed as a foundation model for healthcare workflows. The narrative and unstructured nature of clinical notes is a major obstacle for healthcare intelligentization. We address a critical problem of structuring clinical notes into clinical data, according to international interoperability standards. We collect and annotate data for three subtasks, namely, international patient summary, clinical impression and medical encounter. We then supervised fine-tuned a state-of-the-art LLM using public and credentialed clinical data. The training is orchestrated in a way that the target model can first support basic clinical tasks such as abbreviation expansion and temporal information extraction, and then learn to perform more complex downstream clinical tasks. Moreover, we address several modeling challenges in the healthcare context, e.g., extra long context window. Our blind pairwise evaluation shows that SoftTiger outperforms other popular open-source models and GPT-3.5, comparable to Gemini-pro, with a mild gap from GPT-4. We believe that LLMs may become a step-stone towards healthcare digitalization and democratization. Therefore, we publicly release SoftTiger models at scales of 13 billion and 70 billion parameters, as well as datasets and code for our innovative scalable evaluation, hopefully, making a significant contribution to the healthcare industry. | [
"Large Language Model",
"Clinical Large Language Models",
"Clinical notes",
"International patient summary"
] | https://openreview.net/pdf?id=mKPbcJAb83 | biyMJhHprq | review | 1,708,653,960,430 | mKPbcJAb83 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission15/Reviewer_VttH"
] | title: This study presents high-quality research in the clinical domain, addressing challenges in building Clinical Large Language Models (CLaMs). It demonstrates originality by introducing novel models for structuring patient clinical data, with clear organization and thorough evaluation methods. The significance lies in its contribution to addressing the crucial problem of conforming clinical notes to international interoperability standards, showcasing superior performance compared to existing models.
review: ># Quality
The work is of high quality, as it demonstrates a thorough understanding of the clinical domain and the challenges of building a clinical large language model (CLaM). The authors justify their choice of foundation model clearly. They also address several modeling challenges, such as long context window, medical jargon, and abbreviation expansion. Their models are evaluated using both next-token prediction and blind pairwise comparison with other popular LLMs, showing superior performance in patient clinical data structuring.
># Clarity
The work is well-written and organized, with clear problem formulation, methods, results, and discussion. The authors provide sufficient details and explanations for their data collection, model training, and evaluation methods, including several appendices with examples of their models’ outputs, ethical considerations, and reproducibility statement.
># Originality
This work is indeed original as it introduces a novel family of CLaMs designed for patient clinical data structuring, a crucial yet intricate component of clinical workflows.
># Significance
This work is significant, as it addresses a critical problem of structuring clinical notes into clinical data, according to international interoperability standards.
># Pros
* This work tackles an important area of healthcare and has a lot of potential significance.
* The work evaluates the models using both next-token prediction and blind pairwise comparison with other LLMs
rating: 7
confidence: 5 |
mKPbcJAb83 | SoftTiger: A Clinical Foundation Model for Healthcare Workflows | [
"Ye Chen",
"Igor Couto",
"Wei Cai",
"CONG FU",
"Bruno Dorneles"
] | We introduce SoftTiger, a clinical large language model (CLaM) designed as a foundation model for healthcare workflows. The narrative and unstructured nature of clinical notes is a major obstacle for healthcare intelligentization. We address a critical problem of structuring clinical notes into clinical data, according to international interoperability standards. We collect and annotate data for three subtasks, namely, international patient summary, clinical impression and medical encounter. We then supervised fine-tuned a state-of-the-art LLM using public and credentialed clinical data. The training is orchestrated in a way that the target model can first support basic clinical tasks such as abbreviation expansion and temporal information extraction, and then learn to perform more complex downstream clinical tasks. Moreover, we address several modeling challenges in the healthcare context, e.g., extra long context window. Our blind pairwise evaluation shows that SoftTiger outperforms other popular open-source models and GPT-3.5, comparable to Gemini-pro, with a mild gap from GPT-4. We believe that LLMs may become a step-stone towards healthcare digitalization and democratization. Therefore, we publicly release SoftTiger models at scales of 13 billion and 70 billion parameters, as well as datasets and code for our innovative scalable evaluation, hopefully, making a significant contribution to the healthcare industry. | [
"Large Language Model",
"Clinical Large Language Models",
"Clinical notes",
"International patient summary"
] | https://openreview.net/pdf?id=mKPbcJAb83 | bdIYXPms97 | review | 1,708,655,194,151 | mKPbcJAb83 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission15/Reviewer_Acro"
] | title: While the innovative approach in addressing healthcare challenges is commendable, potential drawbacks, such as the risk of hallucination and dependency on training data, underscore the importance of recognizing limitations for a comprehensive evaluation.
review: The strengths of the paper include its emphasis on high quality and clarity, showcasing SoftTiger's superior performance compared to established models like GPT-3.5. The clear articulation of objectives and addressing challenges in healthcare workflows adds to the paper's credibility.
The originality and significance of SoftTiger's approach in tackling critical subtasks within healthcare are commendable, contributing to its innovative standing. The acknowledgment of both the advanced capabilities and scalability of SoftTiger, with two configurations catering to different research needs, adds to its appeal.
However, there are potential challenges associated with SoftTiger, such as the risk of hallucination due to the statistical nature of language models and the dependency on the volume and quality of training data. Recognizing these limitations is essential for a comprehensive evaluation.
rating: 7
confidence: 3 |
mKPbcJAb83 | SoftTiger: A Clinical Foundation Model for Healthcare Workflows | [
"Ye Chen",
"Igor Couto",
"Wei Cai",
"CONG FU",
"Bruno Dorneles"
] | We introduce SoftTiger, a clinical large language model (CLaM) designed as a foundation model for healthcare workflows. The narrative and unstructured nature of clinical notes is a major obstacle for healthcare intelligentization. We address a critical problem of structuring clinical notes into clinical data, according to international interoperability standards. We collect and annotate data for three subtasks, namely, international patient summary, clinical impression and medical encounter. We then supervised fine-tuned a state-of-the-art LLM using public and credentialed clinical data. The training is orchestrated in a way that the target model can first support basic clinical tasks such as abbreviation expansion and temporal information extraction, and then learn to perform more complex downstream clinical tasks. Moreover, we address several modeling challenges in the healthcare context, e.g., extra long context window. Our blind pairwise evaluation shows that SoftTiger outperforms other popular open-source models and GPT-3.5, comparable to Gemini-pro, with a mild gap from GPT-4. We believe that LLMs may become a step-stone towards healthcare digitalization and democratization. Therefore, we publicly release SoftTiger models at scales of 13 billion and 70 billion parameters, as well as datasets and code for our innovative scalable evaluation, hopefully, making a significant contribution to the healthcare industry. | [
"Large Language Model",
"Clinical Large Language Models",
"Clinical notes",
"International patient summary"
] | https://openreview.net/pdf?id=mKPbcJAb83 | MlmmvLRdWK | review | 1,708,359,130,138 | mKPbcJAb83 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission15/Reviewer_WXeH"
] | title: Outlines the finetuning of an open source LLM for clinical data structuring. However there are large methodolgical flaws and some base assumptions are unconvincing.
review: 1. Summary and contributions: Briefly summarize the paper and its contributions
Outlines the development of an LLM called SoftTiger. This is a fine-tuned version of an open-source LLM named TigerBot, produced by supervised fine-tuning on a dataset of general text, a previously released clinical dataset, and a novel clinical workflow dataset. The novel dataset is made up of instruction pairs for 3 tasks performed on the MIMIC-IV dataset, with the outputs generated by GPT-4 and validated by 5 physicians. The results show that the fine-tuned models gained accuracy on automated evaluation benchmarks.
2. Strengths: Describe the strengths of the work. Typical criteria include: soundness of the claims (theoretical grounding, empirical evaluation), significance and novelty of the contribution, and relevance to the community.
- It is an early example of finetuning large (70B) open-source LLMs across multiple GPUs.
- After carefully evaluating the trade-off between clinical complexity and helpfulness, 3 clinical data structuring tasks were chosen. This gives the work a clear potential for clinical impact.
- A very good section outlines the administrative burden on physicians.
- Making IPS or FHIR structure the output is optimal for potential future integration into current e-health systems
3. Weaknesses: Explain the limitations of this work along the same axes as above.
-MIMIC data user agreement prevents the sharing of the data or derivates with 3rd parties. Therefore, the SoftTiger and dataset should not be publicly released. They could be hosted on PhysioNet though.
- Similarly, MIMIC data should not be sent to 3rd party LLM providers as seems to have occurred in Table 5 unless via Azure or Amazon (see https://physionet.org/news/post/gpt-responsible-use). Please state clearly in text or “Ethical Considerations and Reproducibility Statement” if these services were used.
- Evaluation and training data only uses MIMIC-IV, which are discharge summary notes from the ICU department of a single health centre. This should be noted as a limitation.
- GPT-4 is used to produce the clinical training and evaluation set. This is then corrected by clinical review. No mention is made of the performance of GPT-4 on the task or of the inter-annotator agreement. Furthermore, no justification (most likely needed from a data governance perspective) is given for why GPT-4 cannot be used for this task directly if it can produce the labels for the task.
4. Correctness: Are the claims and method correct? Is the empirical methodology correct?
- In the introduction, it is claimed the 2 primary challenges for LLM clinical adaptation are finding a 'helpful clinical task' and the input length constraint. I do not believe this to be true. Numerous clinical tasks could be performed by LLMs, e.g. diagnosis, discharge summary writing, reporting of adverse drug events, etc. Barriers such as effective and safe evaluation, data privacy and governance, and integration into healthcare providers' electronic systems would seem equal to, if not greater than, this constraint.
The second constraint of input length is notable, but only for open-source models (closed-source models have context lengths >100k), a distinction that is not made. However, the trained model is only extended to 8k, and this is claimed as a source of novelty. Current open-source models such as Mistral have been trained with an 8k context window.
- Claimed that note length usually follows power law without proof or citation
- The approach is claimed to be “light-weight” but requires 64xA100s GPUs
- It is claimed that as TigerBot has a larger vocabulary size than Llama-2, it has a larger clinical vocabulary. However, as TigerBot is multilingual this claim only seems true if evaluating on multilingual data also. The claim that TigerBot has a greater English clinical vocabulary needs further explanation or proof.
- It is not obvious that the addition of the general-purpose or Asclepius datasets will improve performance on the clinical workflow tasks.
- A “dictionary of abbreviation expansion to standardize abbreviations” is known not to work well because many abbreviations are ambiguous. For example, “hr” could be expanded to hour or heart rate, depending on the context.
5. Clarity: Is the paper well written?
- “We then evaluate TigerBot and Llama-2 chat models using next-token prediction.” It is not clear to me how this evaluation works. Is this exact matching? Further explanation is required.
- Not clear how Llama-2 and TigerBot were extended from 4k to 8k inputs.
- It is claimed that “it is beneficial for worldwide adoption to build multilingual models”, which is true. But it is not clear if the fine-tuning dataset is also multilingual.
- Fig.1 shows some very helpful information, but it is not clear which task is related to which plot point due to the use of repeated colours. Furthermore, the Fibonacci scale is not explained.
- Not clear if, in normal practice, discharge summaries are the only source of information used to complete the 3 subtasks trained and evaluated in this work.
- Figure 2 is a direct screenshot from TensorBoard or similar. Removing the UI buttons and adding figure and axis titles would improve this figure. It is also not clear what the faint lines in the figure are.
- Table 3 should be moved higher up the training data section, and it would be more instructive to swap the size column for the number of examples in each dataset.
- Figure 3's final column is all 0% and so does not need to be included. Moreover, the information may be more helpfully presented as a table of min, median, and max for input, output, and total.
6. Relation to prior work: Is it clearly discussed how this work differs from previous contributions?
- This is the first open-source LLM finetuning to output on FHIR IPS, FHIR Clinical Impression and FHIR Encounter from medical discharge summaries.
7. Reproducibility: Are there enough details to reproduce the major results of this work?
- The number, speciality, nationality, and seniority of the clinicians surveyed to produce Fig 1 are not stated
- Would be useful to link or add in the appendices the exact FHIR structures of the 3 subtasks.
- The training framework section is limited. No training hyperparameters are given. The acronyms PP and DP are used without explanation.
- The settings, prompt, and model version used to generate the clinical workflow dataset using GPT-4 are not stated
rating: 3
confidence: 3 |
mKPbcJAb83 | SoftTiger: A Clinical Foundation Model for Healthcare Workflows | [
"Ye Chen",
"Igor Couto",
"Wei Cai",
"CONG FU",
"Bruno Dorneles"
] | We introduce SoftTiger, a clinical large language model (CLaM) designed as a foundation model for healthcare workflows. The narrative and unstructured nature of clinical notes is a major obstacle for healthcare intelligentization. We address a critical problem of structuring clinical notes into clinical data, according to international interoperability standards. We collect and annotate data for three subtasks, namely, international patient summary, clinical impression and medical encounter. We then supervised fine-tuned a state-of-the-art LLM using public and credentialed clinical data. The training is orchestrated in a way that the target model can first support basic clinical tasks such as abbreviation expansion and temporal information extraction, and then learn to perform more complex downstream clinical tasks. Moreover, we address several modeling challenges in the healthcare context, e.g., extra long context window. Our blind pairwise evaluation shows that SoftTiger outperforms other popular open-source models and GPT-3.5, comparable to Gemini-pro, with a mild gap from GPT-4. We believe that LLMs may become a step-stone towards healthcare digitalization and democratization. Therefore, we publicly release SoftTiger models at scales of 13 billion and 70 billion parameters, as well as datasets and code for our innovative scalable evaluation, hopefully, making a significant contribution to the healthcare industry. | [
"Large Language Model",
"Clinical Large Language Models",
"Clinical notes",
"International patient summary"
] | https://openreview.net/pdf?id=mKPbcJAb83 | 1hoM5YBilR | review | 1,708,688,275,493 | mKPbcJAb83 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission15/Reviewer_JAru"
] | title: SoftTiger review
review: This ambitious paper fine-tunes a large SOTA LLM for structuring clinical notes that is capable of handling long context windows, and rigorously compares it to other commercial/open models using a chatbot arena + LLM-as-judge setup.
The background is well set with both a survey of clinical LLM tasks from arXiv and huggingface and an empirical investigation of context windows required for clinical notes in MIMIC-IV.
They chose a TigerBot base model as it is multilingual and demonstrated superior performance to Llama 2-70B chat in next-token prediction accuracy on 8k contexts. Whilst next-token prediction isn't a particularly useful task, they justify it as suitable for rapid decision making and early exploration. The poor performance of Llama 2 70B on 8k vs 4k tokens is hypothesised to be due to under-representation of clinical vocabulary leading to worsened hallucination - this seems possible, but I can't believe that the justification that TigerBot was trained on arXiv, which has 1.2% biomedical content, explains the difference; I'm sure this was part of Llama training too!
The training data is constructed using clinical notes from MIMIC-IV processed using GPT-4. It’s good to see (trained!) expert evaluation of the training data. The ethical provisioning for this needs to be mentioned as PhysioNet does not permit use of OpenAI APIs! This is not mentioned despite the extensive ethics appendix.
Given the importance of training data, it would be useful to have some additional details of how GPT-4 was used to restructure data for the three extraction tasks. There are some additional training datasets, such as a 'previously unseen corpus' of general-purpose SFT data, which need clarifying. The use of Asclepius for basic clinical tasks like NER and abbreviation expansion seems sensible, but despite training for abbreviations they use a medical dictionary at both training and inference?
They structure training from general-purpose to basic tasks (NER/abbreviations) to hard tasks (summarisation), but it would be nice to see some experimental results or referenced justification for why they did this. Otherwise this really follows the general LLM -> domain-specific fine-tuning paradigm, so I'm not sure it constitutes a novel strategy in and of itself?
The technical details of training framework are clear and impressive.
Evaluation is rigorous, with comparison against a range of commercial and open-source models (GPT-3.5/4, Gemini Pro, Llama 2, Mixtral). The use of Chatbot Arena for blind pairwise evaluation includes control groups with intentionally wrong information and swap-position executions. They use GPT-4 as a judge, supplying the evaluation prompt, which is thorough; however, despite using 5 domain experts to review training data, I can't see any expert review of the final evaluation data or of the LLM-as-judge strategy. This appears to me to be the biggest weakness in an otherwise strong paper and is perhaps time related?
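For readers unfamiliar with swap-position pairwise judging, the sketch below shows the control logic in generic form; the `judge` callable stands in for a GPT-4 call and is not the paper's actual prompt or implementation.

```python
# Sketch of position-swapped pairwise judging: the judge is asked twice with
# answers A and B swapped, and a win is only credited when both orderings agree
# (otherwise the pair is recorded as a tie).
def pairwise_verdict(judge, question, answer_a, answer_b):
    first = judge(question, answer_a, answer_b)    # A shown first
    second = judge(question, answer_b, answer_a)   # B shown first
    if first == "first" and second == "second":
        return "A"
    if first == "second" and second == "first":
        return "B"
    return "tie"   # position-dependent or inconsistent judgements

def toy_judge(question, first_answer, second_answer):
    # stand-in for a GPT-4 call; prefers the longer answer, for illustration only
    return "first" if len(first_answer) >= len(second_answer) else "second"

print(pairwise_verdict(toy_judge, "Summarise the note.", "short", "a longer answer"))
```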
Overall this is a really ambitious and thorough piece of work which makes both an interesting contribution to the research literature as well as the open source community through release of trained models, datasets and evaluation code.
rating: 9
confidence: 4 |
lWsDWnre2l | Striding into Clarity: Wearable Sensor-Driven Estimation of Knee Adduction Moment, Unveiling the Black Box with Sequence-Based Neural Networks and Explainable Artificial Intelligence | [
"Jasmine Liang"
] | Knee adduction moment during walking has been reported as a sensitive biomechanical marker for predicting the risk of knee osteoarthritis. The traditional method of estimating the knee adduction moment relies on the inverse dynamics approach, which is primarily limited to laboratory settings because it relies on specialized equipment and technical expertise, preventing clinicians from accessing these crucial data. Our study employs wearable sensor technology integrated with advanced Artificial Intelligence and Machine Learning algorithms to predict knee moment outcomes with high accuracy. By analyzing attention weight trends, we establish a significant correlation with knee moment dynamics, validating the reliability of our predictive model. This alignment underscores the biomechanical relevance of our approach, offering promising implications for personalized patient care and clinical practice. | [
"knee adduction moment",
"knee osteoarthritis",
"gait",
"recurrent neural network",
"Long Short-Term Memory",
"wearable sensor",
"motion capture system",
"Explainable AI\u0000"
] | https://openreview.net/pdf?id=lWsDWnre2l | nxtpEUdFEU | review | 1,708,669,314,062 | lWsDWnre2l | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission46/Reviewer_h7Xr"
] | title: The paper is interesting but there are no results presented in the paper and furthermore the paper does not include a foundation model or the use of one
review: The paper doesn’t fit the scope of the symposium, as it presents a supervised model rather than a foundation model.
Clarity: The introduction of the paper is written in clear concise language. When explaining the methods of the paper, however, the language is unclear.
Significance: the work does not appear to be significant within the field of clinical foundation models as it has been trained in what appears to be a supervised manner.
Quality: the quality of the study is subpar, with poor figures and an inadequate explanation of the methods.
Major points
1. Figure 1 is basically unreadable; the figure should be bigger.
2. Figures 3 and 4 are also way too small especially the text. Without a better explanation of time steps they are not very informative.
3. The sampling frequency is specified, but the idea of a time step is not.
4. The method is not clearly described.
a. Explain training objective.
b. Explain network structure.
5. Present results
Minor points
6. In section 2.1, replace "12" with "twelve".
7. Move the average self-selected walking speed (Equation 1) to the explanation of the equation below. Use m (meters) for distance and s (seconds) for time.
Pros
• New data was acquired for this study.
Cons
• The study does not include the creation of a foundation model to solve any task. Instead, a supervised model has been developed for estimating the knee adduction moment.
• RNN-LSTM seems like a somewhat outdated deep learning model architecture to implement as state of the art.
• Only 24 subjects were used to train and evaluate a deep learning model, although that is fine for a preliminary study.
• There are no results presented regarding the performance of the model.
• It appears that there was no validation dataset, leading me to question how the model was deemed to have trained for long enough.
• Figures are very bad.
rating: 3
confidence: 3 |
lWsDWnre2l | Striding into Clarity: Wearable Sensor-Driven Estimation of Knee Adduction Moment, Unveiling the Black Box with Sequence-Based Neural Networks and Explainable Artificial Intelligence | [
"Jasmine Liang"
] | Knee adduction moment during walking has been reported as a sensitive biomechanical marker for predicting the risk of knee osteoarthritis. The traditional method of estimating the knee adduction moment relies on the inverse dynamics approach, which is primarily limited to laboratory settings because it relies on specialized equipment and technical expertise, preventing clinicians from accessing these crucial data. Our study employs wearable sensor technology integrated with advanced Artificial Intelligence and Machine Learning algorithms to predict knee moment outcomes with high accuracy. By analyzing attention weight trends, we establish a significant correlation with knee moment dynamics, validating the reliability of our predictive model. This alignment underscores the biomechanical relevance of our approach, offering promising implications for personalized patient care and clinical practice. | [
"knee adduction moment",
"knee osteoarthritis",
"gait",
"recurrent neural network",
"Long Short-Term Memory",
"wearable sensor",
"motion capture system",
"Explainable AI\u0000"
] | https://openreview.net/pdf?id=lWsDWnre2l | aM5Dkg70Mj | review | 1,708,620,257,011 | lWsDWnre2l | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission46/Reviewer_xo9v"
] | title: A Review of the Innovative Approach to Predicting Knee Moment Dynamics
review: This paper proposes an innovative approach to predicting the dynamic knee moment using wearable sensors, AI, and ML algorithms. The author provides a detailed description of the model structure, training process, and explanatory analysis facilitated by the XAI tool. Here are some review comments:
Methodology: It is strongly recommended that the author explains the physical meanings of each symbol and letter in the formulas on the right side of the second page, as well as the distinctions and connections between the second and third lines of the formulas. Additionally, is the sample size of only 24 participants a bit small?
Results and Discussion: The paper's explanatory analysis of model predictions, particularly using the XAI tool, is very interesting. It is suggested that the author delves deeper into analyzing the model's performance, limitations, and future directions. For instance, a discussion on the model's adaptability to different types of patients or varying environmental conditions would be valuable.
Figures: Figures 3 and 4, along with their respective explanations, are crucial for readers to understand the research results. However, it is advised that the author provides more detailed explanations in the captions and legends of the figures to ensure readers accurately comprehend the information presented in the charts.
Overall, this paper presents a promising study demonstrating how the integration of wearable technology and AI/ML algorithms can predict the dynamic knee moment. Through explanatory analysis and detailed discussions on model performance, the paper has the potential to further strengthen its contributions and practical applications.
rating: 7
confidence: 4 |
lWsDWnre2l | Striding into Clarity: Wearable Sensor-Driven Estimation of Knee Adduction Moment, Unveiling the Black Box with Sequence-Based Neural Networks and Explainable Artificial Intelligence | [
"Jasmine Liang"
] | Knee adduction moment during walking has been reported as a sensitive biomechanical marker for predicting the risk of knee osteoarthritis. The traditional method of estimating the knee adduction moment relies on the inverse dynamics approach, which is primarily limited to laboratory settings because it relies on specialized equipment and technical expertise, preventing clinicians from accessing these crucial data. Our study employs wearable sensor technology integrated with advanced Artificial Intelligence and Machine Learning algorithms to predict knee moment outcomes with high accuracy. By analyzing attention weight trends, we establish a significant correlation with knee moment dynamics, validating the reliability of our predictive model. This alignment underscores the biomechanical relevance of our approach, offering promising implications for personalized patient care and clinical practice. | [
"knee adduction moment",
"knee osteoarthritis",
"gait",
"recurrent neural network",
"Long Short-Term Memory",
"wearable sensor",
"motion capture system",
"Explainable AI\u0000"
] | https://openreview.net/pdf?id=lWsDWnre2l | 5BsYI2J1Rq | review | 1,708,713,501,463 | lWsDWnre2l | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission46/Reviewer_ozX1"
] | title: Promising results, but needs more clarity in the significance
review: In this work, the authors present a preliminary study for estimating knee adduction moments from data acquired from accelerometers. The authors employ an LSTM-based network that includes an attention layer architecture. Over a cohort of 24 participants, 12 male and 12 female, whole-body motion was captured along with data from two IMU sensors. In addition to evaluating the prediction accuracy provided by the model, the authors analyze the attention weight and an XAI technique, LIME, to gain insights into the prediction made by the model.
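The exact network is not specified in this review, but an LSTM regressor with soft-attention pooling of the kind described might look like the sketch below; the channel count, hidden size, and scalar target are assumptions for illustration, and the actual model may predict a full moment waveform rather than a single value.

```python
# Sketch (assumed architecture, not the authors' exact network): an LSTM over
# IMU time steps with soft-attention pooling, regressing a per-window knee
# adduction moment value and returning the attention weights for inspection.
import torch
import torch.nn as nn

class AttnLSTMRegressor(nn.Module):
    def __init__(self, n_channels=12, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, time, channels)
        h, _ = self.lstm(x)                     # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        pooled = (w * h).sum(dim=1)             # (batch, hidden)
        return self.head(pooled).squeeze(-1), w.squeeze(-1)

model = AttnLSTMRegressor()
y_hat, attn_weights = model(torch.randn(8, 200, 12))   # 8 trials, 200 time steps
print(y_hat.shape, attn_weights.shape)                  # torch.Size([8]) torch.Size([8, 200])
```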
This work demonstrates a solid preliminary study into the feasibility of predicting knee adduction moments using IMU data. However, there are several points that could be addressed:
1. There is no report of the overall performance of the model. It seems like the reported example is for a single subject, but there is no report of the evaluation over the entire cohort
2. There is no comparison to other simpler models that would demonstrate the performance improvement. Although this is understandable for a preliminary feasibility study, the overall statistics would be important.
3. The authors report an 80/20 split, but was this split over the participants? Details such as this should be included for a better assessment of the results.
4. Although the authors admit the limitations of the analysis of explainability, how the explainability relates to the trustworthiness of the model is less clear. Moreover, the approach does not seem particularly innovative, as it applies existing approaches, nor does it provide significant insights into the model and the broader field of biomechanics.
Overall, the results are promising but would recommend another iteration of the manuscript before resubmission.
Additional minor comments:
- ASSWS is not explained in the paper
- The manuscript is over the 2-page limit for the non-traditional track
rating: 4
confidence: 4 |
hfgdwxbNOW | Adapting a Generative Pretrained Transformer Achieves SOTA Performance in Assessing Diverse Physiological Functions Using Only Photoplethysmography Signals: A GPT-PPG Approach | [
"Zhaoliang Chen",
"Cheng Ding",
"Nirbhay Modhe",
"Jiaying Lu",
"Carl Yang",
"Xiao Hu"
] | This study introduces a novel application of a Generative Pre-trained Transformer (GPT) model tailored for photoplethysmography (PPG) signals, serving as a foundation model for various downstream tasks. Adapting the standard GPT architecture to suit the continuous characteristics of PPG signals, our approach demonstrates promising results. After pre-training on our extensive dataset that contains more than 200 million 30s PPG samples, the model shows performance comparable to or surpassing current state-of-the-art (SOTA) methods in tasks like heart rate estimation. A standout feature of our GPT model is its inherent capability to perform generative tasks such as signal denoising effectively, without the need for further finetuning. This success is attributed to the generative nature of the GPT framework. Looking ahead, we aim to further explore its generative abilities and investigate its implication on its other downstream tasks. | [
"Photoplethysmography",
"clinical foundation model",
"Generative Pretrained Transformer"
] | https://openreview.net/pdf?id=hfgdwxbNOW | q8549HIxOl | review | 1,708,541,868,043 | hfgdwxbNOW | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission34/Reviewer_fEni"
] | title: Interesting work
review: This paper applies a Generative Pre-trained Transformer (GPT) model to photoplethysmography (PPG) signals. This is interesting and valuable. The point of using a logit-Laplace loss instead of an MSE loss to train the model is also very insightful. My main concern about this paper is how we can be sure of convergence when only 5% of the data is used; we know transformers are data-hungry architectures. What is the limitation of such a model? As the paper mentions that the model will be trained on more data in the future, I assume we will get the answer later. In general, this is an interesting and valuable paper for PPG-related tasks.
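For context on the logit-Laplace point, a minimal sketch of the negative log-likelihood is given below; the exact parameterisation and normalisation used in the paper may differ.

```python
# Sketch of a logit-Laplace negative log-likelihood as used in place of MSE
# for bounded continuous signals. Targets are squeezed into (0, 1) so the
# logit stays finite.
import torch

def logit_laplace_nll(mu, log_b, target, eps=0.1):
    t = (1 - 2 * eps) * target + eps                  # map [0, 1] -> [eps, 1 - eps]
    logit_t = torch.log(t) - torch.log1p(-t)          # logit(t)
    b = torch.exp(log_b)
    nll = (torch.log(2 * b)
           + torch.log(t) + torch.log1p(-t)           # Jacobian term (constant w.r.t. mu, b)
           + torch.abs(logit_t - mu) / b)
    return nll.mean()

mu = torch.zeros(4, 1200)            # e.g. 30 s of PPG at 40 Hz
log_b = torch.zeros(4, 1200)
target = torch.rand(4, 1200)
print(logit_laplace_nll(mu, log_b, target))
```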
rating: 7
confidence: 3 |
hfgdwxbNOW | Adapting a Generative Pretrained Transformer Achieves SOTA Performance in Assessing Diverse Physiological Functions Using Only Photoplethysmography Signals: A GPT-PPG Approach | [
"Zhaoliang Chen",
"Cheng Ding",
"Nirbhay Modhe",
"Jiaying Lu",
"Carl Yang",
"Xiao Hu"
] | This study introduces a novel application of a Generative Pre-trained Transformer (GPT) model tailored for photoplethysmography (PPG) signals, serving as a foundation model for various downstream tasks. Adapting the standard GPT architecture to suit the continuous characteristics of PPG signals, our approach demonstrates promising results. After pre-training on our extensive dataset that contains more than 200 million 30s PPG samples, the model shows performance comparable to or surpassing current state-of-the-art (SOTA) methods in tasks like heart rate estimation. A standout feature of our GPT model is its inherent capability to perform generative tasks such as signal denoising effectively, without the need for further finetuning. This success is attributed to the generative nature of the GPT framework. Looking ahead, we aim to further explore its generative abilities and investigate its implication on its other downstream tasks. | [
"Photoplethysmography",
"clinical foundation model",
"Generative Pretrained Transformer"
] | https://openreview.net/pdf?id=hfgdwxbNOW | O6qHckRvB3 | review | 1,708,448,812,426 | hfgdwxbNOW | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission34/Reviewer_gmkD"
] | title: Applying foundation model to an important domain.
review: Strength:
- This work develops PPG foundation models using a decoder-only transformer
- loss function, embedding, and linear head are specifically designed to fit PPG applications.
- Results in Table 1 show promising results
Weakness:
- Table 1 is a bit hard to read. Different performance metrics (MAE, F1, false alarm rates) are included. Please consider separating them.
- BP-SBP is not introduced. Also, why is 9.56 highlighted?
- In the conclusion section, it is claimed that the foundation model can be used for downstream tasks without further fine-tuning. I am not sure which experiments can support this claim.
rating: 7
confidence: 2 |
hfgdwxbNOW | Adapting a Generative Pretrained Transformer Achieves SOTA Performance in Assessing Diverse Physiological Functions Using Only Photoplethysmography Signals: A GPT-PPG Approach | [
"Zhaoliang Chen",
"Cheng Ding",
"Nirbhay Modhe",
"Jiaying Lu",
"Carl Yang",
"Xiao Hu"
] | This study introduces a novel application of a Generative Pre-trained Transformer (GPT) model tailored for photoplethysmography (PPG) signals, serving as a foundation model for various downstream tasks. Adapting the standard GPT architecture to suit the continuous characteristics of PPG signals, our approach demonstrates promising results. After pre-training on our extensive dataset that contains more than 200 million 30s PPG samples, the model shows performance comparable to or surpassing current state-of-the-art (SOTA) methods in tasks like heart rate estimation. A standout feature of our GPT model is its inherent capability to perform generative tasks such as signal denoising effectively, without the need for further finetuning. This success is attributed to the generative nature of the GPT framework. Looking ahead, we aim to further explore its generative abilities and investigate its implication on its other downstream tasks. | [
"Photoplethysmography",
"clinical foundation model",
"Generative Pretrained Transformer"
] | https://openreview.net/pdf?id=hfgdwxbNOW | MOJMrdN4ix | review | 1,708,536,141,126 | hfgdwxbNOW | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission34/Reviewer_MUNY"
] | title: GPT-PPG foundation model
review: Summary of Contributions:
The work proposes a foundation model for (as the title suggests) “assessing diverse physiological functions, using only photoplethysmography signals”. The authors encode vast numbers of 30 s PPG samples at 40 Hz and train a GPT-like foundation model with a next-sample-prediction pre-training objective. They then fine-tune this model for downstream tasks such as heart rate estimation, atrial fibrillation detection, blood pressure estimation, and detecting false arrhythmia alarms.
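A minimal sketch of the next-sample-prediction objective on continuous PPG follows; the tiny backbone and the MSE loss are stand-ins for illustration only (the paper adapts a GPT decoder and, per the other reviews, trains with a logit-Laplace loss).

```python
# Sketch of next-sample pre-training on continuous PPG: each position predicts
# the value of the following sample under a causal mask. Architecture details
# are illustrative, not the paper's.
import torch
import torch.nn as nn

class TinyPPGDecoder(nn.Module):
    def __init__(self, d_model=64, nhead=4, nlayers=2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)             # continuous samples, no token vocab
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                              # x: (batch, time, 1)
        T = x.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf"), device=x.device), diagonal=1)
        h = self.backbone(self.embed(x), mask=causal)  # each step sees only the past
        return self.head(h)

model = TinyPPGDecoder()
ppg = torch.randn(2, 1200, 1)                            # two 30 s windows at 40 Hz
pred = model(ppg)
loss = nn.functional.mse_loss(pred[:, :-1], ppg[:, 1:])  # predict the next sample
print(loss.item())
```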
Strengths:
1. The paper is well written and easy to follow. The block diagram is representative of the method presented in the paper.
2. The proposed method is sound and it stands to reason that with the increase in the model parameters and the training set, more emergent behaviour should be witnessed.
3. The qualitative results in the appendix are quite impressive.
Weaknesses:
1. Details such as number of model parameters, training time, GPUs used, etc. are missing from the paper. They often provide some indication on how scaling up might improve the results, and also about the feasibility of using the model.
2. The fine-tuning time requirements when compared to the training time (from scratch) of SOTA specialist models are also missing in the paper.
3. Some ablation studies are missing. For instance, it is mentioned that (in the decoder) RMSNorm was preferred over LayerNorm, and RoPE was used instead of the positional encoding from the original Transformer paper. Some indication of how these choices affected the foundation model training would have been good. (Although the feasibility of these ablations will depend on the pre-training computational requirements, which again cannot be inferred unless disclosed in the paper.)
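For reference on the normalisation choice, a minimal side-by-side of RMSNorm and LayerNorm is sketched below; this is illustrative and not taken from the model's actual code.

```python
# Sketch: RMSNorm rescales by the root mean square only, dropping LayerNorm's
# mean-centring and bias term.
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x):
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return self.weight * (x * rms)

x = torch.randn(2, 5, 16)
print(RMSNorm(16)(x).shape, nn.LayerNorm(16)(x).shape)
```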
rating: 7
confidence: 4 |
hfgdwxbNOW | Adapting a Generative Pretrained Transformer Achieves SOTA Performance in Assessing Diverse Physiological Functions Using Only Photoplethysmography Signals: A GPT-PPG Approach | [
"Zhaoliang Chen",
"Cheng Ding",
"Nirbhay Modhe",
"Jiaying Lu",
"Carl Yang",
"Xiao Hu"
] | This study introduces a novel application of a Generative Pre-trained Transformer (GPT) model tailored for photoplethysmography (PPG) signals, serving as a foundation model for various downstream tasks. Adapting the standard GPT architecture to suit the continuous characteristics of PPG signals, our approach demonstrates promising results. After pre-training on our extensive dataset that contains more than 200 million 30s PPG samples, the model shows performance comparable to or surpassing current state-of-the-art (SOTA) methods in tasks like heart rate estimation. A standout feature of our GPT model is its inherent capability to perform generative tasks such as signal denoising effectively, without the need for further finetuning. This success is attributed to the generative nature of the GPT framework. Looking ahead, we aim to further explore its generative abilities and investigate its implication on its other downstream tasks. | [
"Photoplethysmography",
"clinical foundation model",
"Generative Pretrained Transformer"
] | https://openreview.net/pdf?id=hfgdwxbNOW | 0cyZEOH6yz | review | 1,708,124,763,306 | hfgdwxbNOW | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission34/Reviewer_uhh3"
] | title: Review
review: Summary
This paper proposes architectural variants of transformers, pretraining, and fine-tuning methods for PPG domain.
Strengths
- Experiments are extensively conducted in PPG domain.
Weakness
- Lack of comparison with other transformer architectures for time-series data
- Writing could be improved. For example, the difference between the linear prediction head and the attention-based prediction head is unclear, and it is hard to identify the SOTA algorithms in the experiments.
rating: 5
confidence: 4 |
hV8clUPzkn | Harnessing and Distilling ChatGPT's Ability to Bridge Semantic Variance for Precise Query-Document Alignment in Encephalitis Research: Surpassing Keyword-Based Search Engines | [
"Santosh Gupta"
] | Keyword-based search engines often fail in retrieving information that aligns with user query intent, due to variations in keywords and phrasing in scientific literature. This paper introduces an encephalitis query-document dataset, characterized by its high semantic variability. Our dataset comprises thousands of query-document pairs. To represent the diverse linguistic expressions found in encephalitis research, we leveraged the advanced language understanding capabilities of GPT-4 to generate queries that, while conceptually aligned with the information in the documents, significantly differ in phrasing and terminology. This approach addresses a critical need in scientific literature searches – retrieving pertinent information that might be overlooked due to conventional keyword-based search limitations.
To evaluate the efficacy of our dataset, we trained a specialized transformer model capable of converting these query-document pairs into embeddings. Our results demonstrate a significant improvement in retrieving relevant encephalitis research papers, especially those that are not surfaced by traditional search engines like PubMed. This enhanced retrieval performance not only underscores the potential of embedding-based retrieval in medical literature search, but also opens up new avenues for comprehensive literature exploration. The implications of our findings extend beyond encephalitis studies, suggesting broader applicability for similar methodologies in other specialized fields of research. | [
"Transformers",
"Information Retrieval",
"Embeddings",
"GPT",
"Encephalitis"
] | https://openreview.net/pdf?id=hV8clUPzkn | iXEQD8Jy0O | review | 1,708,489,233,630 | hV8clUPzkn | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission10/Reviewer_wybD"
] | title: Official Review
review: This paper introduces an encephalitis query-document dataset along with an embedding model used for retrieval. The experimental results demonstrate the superiority of the embedding-based model over traditional keyword-based search engines.
**Pros:**
(1) The dataset and the model are valuable to the community.
(2) The code is clear and easy to read.
**Cons:**
(1) The experimental results only present some cases, instead of the performance on the whole dataset. It would be better to list a series of numbers in a table. For example, for the baselines, choose keyword-based search engines as well as OpenAI’s text-embedding APIs; the metrics can be recall@k, etc. (a minimal recall@k sketch is given after this list). Also, the authors can explore whether a cheaper model developed by users (like the one proposed in this paper) can be on par with the OpenAI text-embedding API models.
(2) The layout of this paper can be further improved. For instance, move the “Links” parts to footnotes. For the case study paragraphs in the appendix, it would be better to wrap them in a blockquote. It would be easier to handle these details with a LaTeX template than in Word.
(3) The title is too long and can be shortened, such as “Enhancing Encephalitis Research Retrieval: Leveraging GPT-4 for Semantic Query-Document Alignment Beyond Keywords.”
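To make the suggested quantitative evaluation concrete, a minimal recall@k sketch; the function and variable names are illustrative, not taken from the paper:

```python
def recall_at_k(ranked_doc_ids, relevant_doc_ids, k):
    """Fraction of relevant documents that appear in the top-k of the ranking."""
    top_k = set(ranked_doc_ids[:k])
    hits = sum(1 for doc_id in relevant_doc_ids if doc_id in top_k)
    return hits / len(relevant_doc_ids)

# Example: one query whose single relevant document is ranked third.
print(recall_at_k(["d7", "d2", "d1"], {"d1"}, k=3))  # 1.0
```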
rating: 6
confidence: 4 |
hV8clUPzkn | Harnessing and Distilling ChatGPT's Ability to Bridge Semantic Variance for Precise Query-Document Alignment in Encephalitis Research: Surpassing Keyword-Based Search Engines | [
"Santosh Gupta"
] | Keyword-based search engines often fail in retrieving information that aligns with user query intent, due to variations in keywords and phrasing in scientific literature. This paper introduces an encephalitis query-document dataset, characterized by its high semantic variability. Our dataset comprises thousands of query-document pairs. To represent the diverse linguistic expressions found in encephalitis research, we leveraged the advanced language understanding capabilities of GPT-4 to generate queries that, while conceptually aligned with the information in the documents, significantly differ in phrasing and terminology. This approach addresses a critical need in scientific literature searches – retrieving pertinent information that might be overlooked due to conventional keyword-based search limitations.
To evaluate the efficacy of our dataset, we trained a specialized transformer model capable of converting these query-document pairs into embeddings. Our results demonstrate a significant improvement in retrieving relevant encephalitis research papers, especially those that are not surfaced by traditional search engines like PubMed. This enhanced retrieval performance not only underscores the potential of embedding-based retrieval in medical literature search, but also opens up new avenues for comprehensive literature exploration. The implications of our findings extend beyond encephalitis studies, suggesting broader applicability for similar methodologies in other specialized fields of research. | [
"Transformers",
"Information Retrieval",
"Embeddings",
"GPT",
"Encephalitis"
] | https://openreview.net/pdf?id=hV8clUPzkn | hV0F2R7VwK | review | 1,708,012,526,881 | hV8clUPzkn | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission10/Reviewer_Qsco"
] | title: Ineffectiveness of naive baselines on tricky datasets is widely known
review: Perhaps the more useful contribution of this paper is in leveraging GPT-4 to create synthetic data in niche domains that can be tailored for specific purposes (in this case, benchmarking retrieval models on query-document pairs with poor keyword overlap).
However, the points regarding the shortcomings of keyword-based models are already very well known, and in this case the result is quite expected, since the dataset is explicitly constructed to fool keyword matching. It would make more sense not to cast the retrieval model's performance as proof of a move towards embedding-similarity-based retrieval (as that has been the case for many years now) but as proof that the dataset is interesting. If that is the intention, it is more beneficial to provide the principles and details of the prompt used with the ChatGPT API that enabled the curation of this dataset, so that a practitioner can apply such principles to their own use case. That would be a simple two-page report detailing the challenges and innovations required and the iteration cycles that one may have to go through, along with simple strategies for checking the quality of the generated dataset.
rating: 5
confidence: 4 |
hV8clUPzkn | Harnessing and Distilling ChatGPT's Ability to Bridge Semantic Variance for Precise Query-Document Alignment in Encephalitis Research: Surpassing Keyword-Based Search Engines | [
"Santosh Gupta"
] | Keyword-based search engines often fail in retrieving information that aligns with user query intent, due to variations in keywords and phrasing in scientific literature. This paper introduces an encephalitis query-document dataset, characterized by its high semantic variability. Our dataset comprises thousands of query-document pairs. To represent the diverse linguistic expressions found in encephalitis research, we leveraged the advanced language understanding capabilities of GPT-4 to generate queries that, while conceptually aligned with the information in the documents, significantly differ in phrasing and terminology. This approach addresses a critical need in scientific literature searches – retrieving pertinent information that might be overlooked due to conventional keyword-based search limitations.
To evaluate the efficacy of our dataset, we trained a specialized transformer model capable of converting these query-document pairs into embeddings. Our results demonstrate a significant improvement in retrieving relevant encephalitis research papers, especially those that are not surfaced by traditional search engines like PubMed. This enhanced retrieval performance not only underscores the potential of embedding-based retrieval in medical literature search, but also opens up new avenues for comprehensive literature exploration. The implications of our findings extend beyond encephalitis studies, suggesting broader applicability for similar methodologies in other specialized fields of research. | [
"Transformers",
"Information Retrieval",
"Embeddings",
"GPT",
"Encephalitis"
] | https://openreview.net/pdf?id=hV8clUPzkn | XBgNo8DS0M | review | 1,708,767,333,208 | hV8clUPzkn | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission10/Reviewer_WE7Z"
] | title: Review
review: This paper addresses the issue that general-purpose search engines return inaccurate retrieval results for medical content, especially for encephalitis research. Thus, the authors introduce a query-document set based on PubMed data. They further use GPT-4 to generate query variants that are conceptually aligned with the documents but use different phrasing and terminology. A model is trained on the introduced data with a contrastive loss. Results show that retrieval capabilities are enhanced.
Pros:
* The motivation is strong and work on the task is urgently needed
* Dataset is sampled, the model is trained, and results show the model works. The entire lifecycle is demonstrated with sufficient information
Cons:
* Currently, information on the collected dataset is limited; it would be great to show the data distribution and provide some samples
* The evaluation results are brief; more interpretation and analysis would be appreciated
* How the contrastive loss is calculated, and how the data variants are used in the loss, need to be clarified (a sketch of a typical in-batch formulation is given below)
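For reference, a common way such a contrastive objective is computed for query-document pairs is the in-batch InfoNCE loss sketched below; this is an assumption about the usual technique, not the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb, doc_emb, temperature=0.05):
    """In-batch contrastive loss: each query's positive is its paired document;
    the other documents in the batch act as negatives."""
    q = F.normalize(query_emb, dim=-1)      # (batch, dim)
    d = F.normalize(doc_emb, dim=-1)        # (batch, dim)
    logits = q @ d.T / temperature          # (batch, batch) cosine-similarity matrix
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)
```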
rating: 6
confidence: 4 |
hV8clUPzkn | Harnessing and Distilling ChatGPT's Ability to Bridge Semantic Variance for Precise Query-Document Alignment in Encephalitis Research: Surpassing Keyword-Based Search Engines | [
"Santosh Gupta"
] | Keyword-based search engines often fail in retrieving information that aligns with user query intent, due to variations in keywords and phrasing in scientific literature. This paper introduces an encephalitis query-document dataset, characterized by its high semantic variability. Our dataset comprises thousands of query-document pairs. To represent the diverse linguistic expressions found in encephalitis research, we leveraged the advanced language understanding capabilities of GPT-4 to generate queries that, while conceptually aligned with the information in the documents, significantly differ in phrasing and terminology. This approach addresses a critical need in scientific literature searches – retrieving pertinent information that might be overlooked due to conventional keyword-based search limitations.
To evaluate the efficacy of our dataset, we trained a specialized transformer model capable of converting these query-document pairs into embeddings. Our results demonstrate a significant improvement in retrieving relevant encephalitis research papers, especially those that are not surfaced by traditional search engines like PubMed. This enhanced retrieval performance not only underscores the potential of embedding-based retrieval in medical literature search, but also opens up new avenues for comprehensive literature exploration. The implications of our findings extend beyond encephalitis studies, suggesting broader applicability for similar methodologies in other specialized fields of research. | [
"Transformers",
"Information Retrieval",
"Embeddings",
"GPT",
"Encephalitis"
] | https://openreview.net/pdf?id=hV8clUPzkn | Vj289joIXw | review | 1,708,332,583,572 | hV8clUPzkn | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission10/Reviewer_yZdn"
] | title: Review
review: Summary: The paper introduces an approach to enhance literature retrieval from PubMed for encephalitis-related research. The approach is based on a transformer-based embedding model that, given encephalitis-related queries, retrieves relevant PubMed literature. Unlike prior keyword-based searching systems, the proposed method is able to identify articles relevant to a query even when the exact keywords are absent from the query. To build the embedding model, the authors first created a dataset of thousands of query-document pairs. Specifically, they used ChatGPT to generate queries from the document as it has the ability to understand the variance of encephalitis representations and capture various semantics of the document.
Comments and suggestions:
1. Comparing the performance only to that of a keyword-based search system seems insufficient. Would it be possible to use a pre-trained, high-quality embedding model to perform this task (see the sketch after this list)? I believe recent advancements in retrieval-augmented generation (RAG) would provide useful techniques for this task.
2. How do we ensure that a general-purpose model like ChatGPT understands encephalitis literature, as I suppose it belongs to a long-tail distribution?
3. The link for [1] is missing.
4. It is stated that the "Biopython" library is used. However, the "Biopython" library I know of is a biological sequencing tool (https://biopython.org/), which is not related to this paper.
5. The paper would benefit from a large-scale quantitative assessment of the model's performance. This addition would provide a clearer understanding of its efficacy compared to existing methods.
6. I think adding a graphical diagram would be beneficial to better outline the pipeline.
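As an illustration of the off-the-shelf baseline suggested in point 1, a minimal embedding-retrieval sketch; the model name and corpus are placeholders, not choices made by the authors:

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder general-purpose encoder; a biomedical embedding model could be swapped in.
model = SentenceTransformer("all-MiniLM-L6-v2")

corpus = [
    "Anti-NMDA receptor encephalitis presenting with psychiatric symptoms.",
    "Herpes simplex virus encephalitis treated with intravenous acyclovir.",
]
corpus_emb = model.encode(corpus, convert_to_tensor=True)

query = "autoimmune brain inflammation with behavioural changes"
query_emb = model.encode(query, convert_to_tensor=True)

# Rank the corpus by cosine similarity to the query.
scores = util.cos_sim(query_emb, corpus_emb)[0]
for idx in scores.argsort(descending=True):
    print(round(float(scores[idx]), 3), corpus[int(idx)])
```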
rating: 6
confidence: 4 |
g8tF7gGzZb | Computational Pathology at Health System Scale – Self-Supervised Foundation Models from Billions of Images | [
"Gabriele Campanella",
"Chad Vanderbilt",
"Thomas Fuchs"
] | Recent breakthroughs in self-supervised learning have enabled the use of large unlabeled datasets to train visual foundation models that can generalize to a variety of downstream tasks. While this training paradigm is well suited for the medical domain where annotations are scarce, large-scale pre-training in healthcare, and in particular pathology, has not been extensively studied. Previous work in self-supervised learning in pathology has focused on relatively small datasets for both pre-training and performance evaluation of downstream tasks. The aim of this work is to explore foundation models at a scale that goes orders of magnitude beyond the state of the art and benchmark current self-supervised learning algorithms by pre-training and evaluating downstream performance on large clinically relevant pathology tasks.
We compiled the largest academic pathology dataset to date, consisting of over 3 billion images from 423 thousand digital microscopy slides. We compared the pre-training of visual transformer models with focus on masked autoencoders (MAE) and self-distillation models (DINO). Downstream performance is evaluated on six clinically relevant tasks from three anatomic sites and two institutions: breast cancer detection, inflammatory bowel disease detection, breast cancer estrogen receptor prediction, lung adenocarcinoma EGFR mutation prediction, and lung cancer immunotherapy response prediction.
The results demonstrate that pre-training on pathology data is beneficial for downstream performance compared to pre-training on natural images. Additionally, the DINO algorithm achieved better generalization performance across all tasks tested. The presented model performances signify a phase change in computational pathology research, paving the way into a new era of more performant models based on large-scale, parallel pre-training at the billion-image scale. | [
"computational pathology",
"MAE",
"DINO"
] | https://openreview.net/pdf?id=g8tF7gGzZb | rtTNNs0S3D | review | 1,707,958,443,875 | g8tF7gGzZb | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission13/Reviewer_Dzb1"
] | title: A good start to potentially impactful work
review: The authors train self-supervised vision foundation models on a very large set of pathology data and show that the embeddings they produce yield much better performance on downstream tasks compared to those trained on general image data, specifically ImageNet. The writing is clear, the presentation is thorough, and the results are strong. The work has potential for broad impact. I didn't read the supplementary sections in detail, but the thoroughness is appreciated. I am not familiar with pathology, so I will leave it to other reviewers to assess the selection of downstream tasks.
There are elements of the experiments and presentation that could be improved. While it should be straightforward to improve presentation, I understand that, since the experiments themselves are costly to run, additional experimental runs may not be able to be included in this submission, which is fine - it can be taken as feedback for further development of the work.
1. It is not necessary to show all the results across epochs. It makes the figures unnecessarily large and difficult to interpret, and it masks the effect of model selection, e.g. using a validation set to determine which checkpoint's model to actually use. In particular, the overlapping lines make Supplementary Figure 1 a bit hard to read. Perhaps just one figure to make the point about saturation and overfitting would be enough. The rest could be tables or bar charts or similar for the selected models.
2. It does not seem appropriate to draw conclusions about the effect of data quantity from the loss curves shown. I don't think you can reasonably disentangle the effects of training time and data quantity in these results. The best approach would be to train additional models on subsamples of the data set, but I understand this is costly.
3. It's unclear why not all models are shown in Figure 2.
4. Supplementary Figure 1 contains the main results of the study. These should be in the main paper. Just a table would be fine.
5. It's not clear what is tRes50 vs Res50.
6. The baseline model should have the same architecture (ViT) as the experimental models in order to isolate the effect of the pre-training data. It is even acknowledged by the authors that ResNet may be overfitting due to the architecture itself.
7. It is explained that DINO-ViT-large is excluded due to training cost, but why is there no MAE-ViT-small or MAE-ViT-base?
8. I would expect that the data cannot be released, but why can the pretrained models not be released? Regardless, the intention to set up an API to get embeddings is appreciated.
Minor nitpicks:
1. In the pre-training section, you mention that your data is an order of magnitude larger than any previous effort. It would be nice to cite the largest previous effort here.
2. Please define pseudo-epoch and explain why it is used instead of standard epochs (I am guessing it is to increase checkpoint frequency?).
3. Typo, first paragraph of discussion section: "SLL"
4. In the discussion section, you say you trained DINO only on ViT-small, but you report results also for ViT-base.
rating: 7
confidence: 4 |
g8tF7gGzZb | Computational Pathology at Health System Scale – Self-Supervised Foundation Models from Billions of Images | [
"Gabriele Campanella",
"Chad Vanderbilt",
"Thomas Fuchs"
] | Recent breakthroughs in self-supervised learning have enabled the use of large unlabeled datasets to train visual foundation models that can generalize to a variety of downstream tasks. While this training paradigm is well suited for the medical domain where annotations are scarce, large-scale pre-training in healthcare, and in particular pathology, has not been extensively studied. Previous work in self-supervised learning in pathology has focused on relatively small datasets for both pre-training and performance evaluation of downstream tasks. The aim of this work is to explore foundation models at a scale that goes orders of magnitude beyond the state of the art and benchmark current self-supervised learning algorithms by pre-training and evaluating downstream performance on large clinically relevant pathology tasks.
We compiled the largest academic pathology dataset to date, consisting of over 3 billion images from 423 thousand digital microscopy slides. We compared the pre-training of visual transformer models with focus on masked autoencoders (MAE) and self-distillation models (DINO). Downstream performance is evaluated on six clinically relevant tasks from three anatomic sites and two institutions: breast cancer detection, inflammatory bowel disease detection, breast cancer estrogen receptor prediction, lung adenocarcinoma EGFR mutation prediction, and lung cancer immunotherapy response prediction.
The results demonstrate that pre-training on pathology data is beneficial for downstream performance com-pared to pre-training on natural images. Additionally, the DINO algorithm achieved better generalization performance across all tasks tested. The presented model performances signify a phase change in computational pathology research, paving the way into a new era of more performant models based on large-scale, parallel pre-training at the billion-image scale. | [
"computational pathology",
"MAE",
"DINO"
] | https://openreview.net/pdf?id=g8tF7gGzZb | C7qOCcC2QY | review | 1,708,769,706,768 | g8tF7gGzZb | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission13/Reviewer_7k2t"
] | title: Interesting work but corrections are required
review: The work "Computational Pathology at Health System Scale – Self-Supervised Foundation Models from Billions of Images" is an interesting report from the conducted experiments. The reviewer would like to claim that the problem stated by the Authors is of high interest to broader public (benchmark of recent Foundation Models on pathology dataset). The important thing is that the Authors trained their models with an extensive dataset (as it was claimed in the paper there was around of 3 billion images from around 423000 digital microscopy slides). The goal of the work is clearly given. However, the reviewer would like to raise two suggestions that need to be addressed before publication:
1. The first one is related to the description of the samples. It was claimed that all of them belong to 76794 patients - but no sufficient details about the patients are given. I mean information about the sex, race, age... etc. All these details can allow reader to better understand the approach (of course, I am totally aware that they could not have any alignment with the dataset itself but may have - as some of the illnesses are more probable in later stages of life).
2. I do not understand why huge amount of information is given in the form of supplementary material. I assume that all these subchapters need to be provided directly into the paper - not in the form of supplementaty material. It will be then easier to understand the whole idea as well as to compare the outcomes with the latest results.
The reviewer would like to claim that after all these corrections, the work is ready for publication.
rating: 7
confidence: 5 |
g8tF7gGzZb | Computational Pathology at Health System Scale – Self-Supervised Foundation Models from Billions of Images | [
"Gabriele Campanella",
"Chad Vanderbilt",
"Thomas Fuchs"
] | Recent breakthroughs in self-supervised learning have enabled the use of large unlabeled datasets to train visual foundation models that can generalize to a variety of downstream tasks. While this training paradigm is well suited for the medical domain where annotations are scarce, large-scale pre-training in healthcare, and in particular pathology, has not been extensively studied. Previous work in self-supervised learning in pathology has focused on relatively small datasets for both pre-training and performance evaluation of downstream tasks. The aim of this work is to explore foundation models at a scale that goes orders of magnitude beyond the state of the art and benchmark current self-supervised learning algorithms by pre-training and evaluating downstream performance on large clinically relevant pathology tasks.
We compiled the largest academic pathology dataset to date, consisting of over 3 billion images from 423 thousand digital microscopy slides. We compared the pre-training of visual transformer models with focus on masked autoencoders (MAE) and self-distillation models (DINO). Downstream performance is evaluated on six clinically relevant tasks from three anatomic sites and two institutions: breast cancer detection, inflammatory bowel disease detection, breast cancer estrogen receptor prediction, lung adenocarcinoma EGFR mutation prediction, and lung cancer immunotherapy response prediction.
The results demonstrate that pre-training on pathology data is beneficial for downstream performance compared to pre-training on natural images. Additionally, the DINO algorithm achieved better generalization performance across all tasks tested. The presented model performances signify a phase change in computational pathology research, paving the way into a new era of more performant models based on large-scale, parallel pre-training at the billion-image scale. | [
"computational pathology",
"MAE",
"DINO"
] | https://openreview.net/pdf?id=g8tF7gGzZb | 2gLVRBVBZb | review | 1,708,148,774,481 | g8tF7gGzZb | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission13/Reviewer_G7XQ"
] | title: The paper presents a study on self-supervised learning for computational pathology, utilizing large-scale datasets and ViT models, demonstrating superior performance on clinically relevant tasks.
review: The paper presents a comprehensive study on the application of self-supervised learning (SSL) in computational pathology, focusing on the pre-training and downstream performance evaluation of visual foundation models on large-scale pathology tasks. The study compiled a massive academic pathology dataset, consisting of over 3 billion images from 423 thousand digital microscopy slides, and compared the pre-training of visual transformer models using masked autoencoders (MAE) and self-distillation models (DINO). The downstream performance was evaluated on six clinically relevant tasks from three anatomic sites and two institutions, demonstrating the benefits of pre-training on pathology data for downstream performance compared to pre-training on natural images. The DINO algorithm achieved better generalization performance across all tasks tested, signifying a significant advancement in computational pathology research.
Pros:
The study addresses a critical gap in the application of SSL algorithms and foundation models in the medical domain, particularly in computational pathology.
The compilation of the largest academic pathology dataset to date, consisting of over 3 billion images, demonstrates a significant contribution to the field.
The comparison of pre-training methods and evaluation of downstream performance on clinically relevant tasks provides valuable insights for the development of performant models in computational pathology.
The study's findings indicate a phase change in computational pathology research, paving the way for more performant models based on large-scale, parallel pre-training at the billion-image scale.
Cons:
The study could benefit from a more detailed discussion on the limitations and challenges of SSL algorithms and foundation models in the medical domain, particularly in clinical workflows.
While the downstream performance was evaluated on clinically relevant tasks, the study could further emphasize the potential impact of these findings on real-world clinical applications.
The document lacks a detailed discussion on the ethical considerations and potential biases associated with the use of large-scale pathology datasets and SSL algorithms in healthcare.
Overall, the work demonstrates high quality, clarity, originality, and significance in advancing the application of SSL algorithms and foundation models in computational pathology. The study's comprehensive approach, large-scale dataset compilation, and valuable insights into pre-training methods and downstream performance evaluation contribute significantly to the field of computational pathology. However, further discussion on ethical considerations and potential biases, as well as the translation of findings into real-world clinical applications, would enhance the overall impact of the work.
rating: 8
confidence: 4 |
g8tF7gGzZb | Computational Pathology at Health System Scale – Self-Supervised Foundation Models from Billions of Images | [
"Gabriele Campanella",
"Chad Vanderbilt",
"Thomas Fuchs"
] | Recent breakthroughs in self-supervised learning have enabled the use of large unlabeled datasets to train visual foundation models that can generalize to a variety of downstream tasks. While this training paradigm is well suited for the medical domain where annotations are scarce, large-scale pre-training in healthcare, and in particular pathology, has not been extensively studied. Previous work in self-supervised learning in pathology has focused on relatively small datasets for both pre-training and performance evaluation of downstream tasks. The aim of this work is to explore foundation models at a scale that goes orders of magnitude beyond the state of the art and benchmark current self-supervised learning algorithms by pre-training and evaluating downstream performance on large clinically relevant pathology tasks.
We compiled the largest academic pathology dataset to date, consisting of over 3 billion images from 423 thousand digital microscopy slides. We compared the pre-training of visual transformer models with focus on masked autoencoders (MAE) and self-distillation models (DINO). Downstream performance is evaluated on six clinically relevant tasks from three anatomic sites and two institutions: breast cancer detection, inflammatory bowel disease detection, breast cancer estrogen receptor prediction, lung adenocarcinoma EGFR mutation prediction, and lung cancer immunotherapy response prediction.
The results demonstrate that pre-training on pathology data is beneficial for downstream performance com-pared to pre-training on natural images. Additionally, the DINO algorithm achieved better generalization performance across all tasks tested. The presented model performances signify a phase change in computational pathology research, paving the way into a new era of more performant models based on large-scale, parallel pre-training at the billion-image scale. | [
"computational pathology",
"MAE",
"DINO"
] | https://openreview.net/pdf?id=g8tF7gGzZb | 1x3X1rGdqd | review | 1,708,636,145,726 | g8tF7gGzZb | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission13/Reviewer_cUiu"
] | title: A Large Scale Self Supervised FM for Computational Pathology
review: **Summary**:
The authors present comprehensive work towards building a Foundation Model for computational pathology. They motivate the need for an FM in pathology, providing a background of existing work in the field. They present three models based on the Visual Transformer Architecture combined with two SSL algorithms, DINO and MAE. They have compiled a significant dataset for the pretraining of their FM using SSL and perform benchmarking on multiple downstream tasks. Their approach shows higher AUC for most tasks over the baselines they have used.
**Pros**:
- The models proposed by the authors showcase a clear superiority in performance to the baselines.
- The authors have collected an impressive amount of data pre-training their FM, alluding to their collected dataset being an order of magnitude larger than any other data collected in the field.
- The authors have done a good job of investigating the training behavior of their models, and have indicated potential next steps for extending their work, all of which I agree with.
- I am glad the authors have discussed open-sourcing their model, as I view their data collection and FM as valuable contributions to the field of pathology.
**Considerations**:
- Can the authors provide some reasoning regarding the generally poor performance observed across the proposed models and baselines for Task 6: `Institution 2 lung cancer immunotherapy outcome prediction`? It appears that this dataset has the largest label imbalance across the different tasks the authors are testing for. Could that be the reason?
- Authors use validation AUC to indicate performance, but AUC as a metric captures an aggregate performance of the model across different operational thresholds. When operationalizing an FM for a clinical setting, one often faces the dilemma of considering the ideal operational threshold of classification (especially in the binary case which is the case with a lot of the downstream tasks the authors test their models on). This is a minor nitpick, and maybe something the authors can show in supplementary material. However, I would be interested in the tradeoffs their FM makes on sensitivity vs specificity at a given threshold for the various downstream tasks.
- It is interesting to note that ViT-large with MAE performs worse in most cases than the two ViT models trained with DINO (in three cases, worse than the baseline). Can the authors comment on why they think this is? Did they explore ViT-large with DINO? If not, could they comment on why?
- The authors have provided multiple examples of SSL models pre-trained on pathology data, could they comment on why they didn't use some of those methods as baselines along with ResNet50?
**Quality**:
The overall quality of the paper is good.
**Originality**:
The authors pre-train their models on a very large corpus of pathology data, which they indicate is larger than any corpus of pathology data collected before. While the authors have used some off-the-shelf methods like DINO for their SSL strategy, the scale of the data they have pre-trained their models on encourages me to believe in the novelty of their SSL approach. This is further reflected by excellent performance in downstream tasks with their proposed approach. However, I am a little concerned with the lack of variety in their chosen baselines, I would like the authors to add some more baselines that use SSL as a pre-training strategy to firmly indicate the superiority of their SSL approach.
**Significance**:
The authors' contributions to the field of pathology with their FM and their collected corpus of data could potentially be very significant for the field of pathology. The authors have correctly identified a list of follow-up questions based on their approach which could further help assert the significance of their FM if they are answered.
**Miscellaneous Comments**:
- Could the authors elaborate a little more about GMA, as this is not a method I am familiar with? My assumption was that the spatial distribution of the tiles of a single slide would be necessary for the downstream prediction of the slide as a whole, since you do not have tile-level annotations. Yet, the authors state in benchmark training that GMA does not consider the spatial distribution of the tiles in its prediction. I would appreciate it if the authors clarified why GMA's property of not considering the spatial distribution of tiles works here.
rating: 6
confidence: 3 |
g7rqyMIvQb | Minimizing Chronic Kidney Disease (CKD) Underdiagnosis Using Machine Learning | [
"Lawrence Huang",
"Sachin Shankar",
"Keyvon Rashidi",
"Dany Alkurdi",
"Felipe Giuste"
] | Chronic Kidney Disease (CKD) is a prevalent and devastating progressive disease affecting up to 14% (>35.5 million individuals) of the United States population and costing Medicare well over $64 billion annually. As many as 90% of individuals with CKD are undiagnosed, indicating the need for better tools to diagnose CKD and prevent unnoticed disease progression. However, current methods of assessing CKD have limitations regarding accessibility, practicality, and accuracy. This study seeks to address these limitations by developing a data-driven method to assess CKD risk from a large opensource database of electronic health records that has not previously been applied for CKD prediction. Machine Learning (ML) methods were used to develop a software tool to predict patient CKD status with patient-specific demographic data, vital signs, and past medical history. Of the ML models used in this study, a Random Forest Classifier had the best performance in predicting CKD diagnosis correctly with an accuracy of 0.875, an Area Under the Receiver Operating Characteristic Curve of 0.927, and an F1 score of 0.765. Our results indicate that ML-based approaches can help facilitate early screening and intervention for patients at risk of CKD. For progressive diseases like CKD that become more devastating and expensive to treat as they progress, high rates of missed diagnoses can be reduced by ML models leveraging electronic health record data. | [
"Machine Learning",
"Chronic Kidney Disease",
"CKD",
"Value-Based Care",
"MIMIC-IV",
"Diagnosis",
"Prediction",
"Screening",
"Data-driven"
] | https://openreview.net/pdf?id=g7rqyMIvQb | vkHWlGXS4j | review | 1,708,736,871,195 | g7rqyMIvQb | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission5/Reviewer_eh7c"
] | title: Minimizing Chronic Kidney Disease (CKD) Underdiagnosis Using Machine Learning
review: The study presents a compelling approach to enhancing the diagnosis of Chronic Kidney Disease (CKD) using a data-driven method. By utilizing MIMIC-IV, a large, open-source database of electronic health records (EHRs), the research employs ML techniques to determine the best approach to assessing the risk of CKD.
The methodology is comprehensive, incorporating patient-specific demographic data, vital signs, and past medical history to predict CKD status accurately. The authors meticulously detail their data sources, the pre-processing steps, and the strategies employed to handle missing data, showcasing a robust methodological framework. The training and test datasets are balanced across CKD disease stages, which is crucial for the reliability of the predictive models. The authors' statistical analysis is sound and supported by clear tables and figures.
They show that the Random Forest classifier had the best performance, achieving an accuracy of 0.875, an Area Under the Receiver Operating Characteristic Curve of 0.927, and an F1 score of 0.765. These results support the authors' conclusion that the random forest CKD classifier may be effective in identifying patients at risk of CKD, particularly those who may be under-diagnosed due to health disparities.
While the study is not about foundation models, the authors remind us that simple algorithms may be sufficient to solve pervasive healthcare problems.
rating: 9
confidence: 5 |
g7rqyMIvQb | Minimizing Chronic Kidney Disease (CKD) Underdiagnosis Using Machine Learning | [
"Lawrence Huang",
"Sachin Shankar",
"Keyvon Rashidi",
"Dany Alkurdi",
"Felipe Giuste"
] | Chronic Kidney Disease (CKD) is a prevalent and devastating progressive disease affecting up to 14% (>35.5 million individuals) of the United States population and costing Medicare well over $64 billion annually. As many as 90% of individuals with CKD are undiagnosed, indicating the need for better tools to diagnose CKD and prevent unnoticed disease progression. However, current methods of assessing CKD have limitations regarding accessibility, practicality, and accuracy. This study seeks to address these limitations by developing a data-driven method to assess CKD risk from a large opensource database of electronic health records that has not previously been applied for CKD prediction. Machine Learning (ML) methods were used to develop a software tool to predict patient CKD status with patient-specific demographic data, vital signs, and past medical history. Of the ML models used in this study, a Random Forest Classifier had the best performance in predicting CKD diagnosis correctly with an accuracy of 0.875, an Area Under the Receiver Operating Characteristic Curve of 0.927, and an F1 score of 0.765. Our results indicate that ML-based approaches can help facilitate early screening and intervention for patients at risk of CKD. For progressive diseases like CKD that become more devastating and expensive to treat as they progress, high rates of missed diagnoses can be reduced by ML models leveraging electronic health record data. | [
"Machine Learning",
"Chronic Kidney Disease",
"CKD",
"Value-Based Care",
"MIMIC-IV",
"Diagnosis",
"Prediction",
"Screening",
"Data-driven"
] | https://openreview.net/pdf?id=g7rqyMIvQb | eldjYNMT8W | review | 1,707,896,302,573 | g7rqyMIvQb | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission5/Reviewer_sYoX"
] | title: A valuable approach but not directly related to the conference main theme
review: The paper addresses a crucial healthcare challenge with interesting objectives. It leverages machine learning techniques to improve the identification and management of CKD, which is a significant contribution given the global burden of the disease. The methodology employed is robust, and the results presented indicate a high level of effectiveness in achieving the paper's goals.
However, despite the promising application and outcomes, this study primarily utilizes classical machine learning approaches without incorporating any pre-trained models, which diverges from the conference's emphasis on foundation models and their applications in clinical settings. This discrepancy could be seen as a deviation from the core subject matter expected at the conference.
Furthermore, while the paper effectively demonstrates the performance of ML techniques in reducing CKD underdiagnosis, it falls short in situating its findings within the broader context of current state-of-the-art methods. A comparative analysis, not limited to data-driven approaches, in terms of precision and cost-effectiveness, would have greatly enriched the paper. Such a comparison is essential for understanding the true value and innovation of the proposed method over existing strategies.
In conclusion, while the application of machine learning techniques to tackle CKD underdiagnosis is indeed valuable, the paper's methodological approach lacks the novelty and direct relevance to the "Clinical Foundational Model" conference theme. The absence of a comparative analysis with state-of-the-art methods further limits the paper's contribution to the field. Therefore, despite its potential impact in healthcare, the paper may not meet the innovation threshold required for acceptance into the conference.
Pros:
- Tackling a challenging problem in healthcare
- Great accuracy
Cons:
- Not related to the main conference theme
- Lack comparison with SoTA methods
- Lack a cost reduction analysis
rating: 3
confidence: 3 |
g7rqyMIvQb | Minimizing Chronic Kidney Disease (CKD) Underdiagnosis Using Machine Learning | [
"Lawrence Huang",
"Sachin Shankar",
"Keyvon Rashidi",
"Dany Alkurdi",
"Felipe Giuste"
] | Chronic Kidney Disease (CKD) is a prevalent and devastating progressive disease affecting up to 14% (>35.5 million individuals) of the United States population and costing Medicare well over $64 billion annually. As many as 90% of individuals with CKD are undiagnosed, indicating the need for better tools to diagnose CKD and prevent unnoticed disease progression. However, current methods of assessing CKD have limitations regarding accessibility, practicality, and accuracy. This study seeks to address these limitations by developing a data-driven method to assess CKD risk from a large opensource database of electronic health records that has not previously been applied for CKD prediction. Machine Learning (ML) methods were used to develop a software tool to predict patient CKD status with patient-specific demographic data, vital signs, and past medical history. Of the ML models used in this study, a Random Forest Classifier had the best performance in predicting CKD diagnosis correctly with an accuracy of 0.875, an Area Under the Receiver Operating Characteristic Curve of 0.927, and an F1 score of 0.765. Our results indicate that ML-based approaches can help facilitate early screening and intervention for patients at risk of CKD. For progressive diseases like CKD that become more devastating and expensive to treat as they progress, high rates of missed diagnoses can be reduced by ML models leveraging electronic health record data. | [
"Machine Learning",
"Chronic Kidney Disease",
"CKD",
"Value-Based Care",
"MIMIC-IV",
"Diagnosis",
"Prediction",
"Screening",
"Data-driven"
] | https://openreview.net/pdf?id=g7rqyMIvQb | VzpOLjyv7n | review | 1,708,393,700,983 | g7rqyMIvQb | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission5/Reviewer_MKZD"
] | title: Comparison of Random Forest, Logistic Regression, and KNN models for CKD prediction on MIMIC-IV Dataset
review: ## Summary
This manuscript explores the application of traditional machine learning models (Random Forest, Logistic Regression, and kNN) to the prediction of Chronic Kidney Disease (CKD) on the MIMIC-IV dataset, using CKD-related ICD codes as labels and non-CKD-related ICD codes and demographic variables as features. This is an important healthcare problem, and the work establishes a baseline on the MIMIC-IV dataset. In general, the methodology for data sampling, splits, and evaluation metrics is well done. This manuscript would be stronger with the inclusion of more advanced models such as gradient boosted trees, which often achieve stronger performance on many healthcare prediction tasks with tabular data.
The authors are somewhat vague about the real-world application of this model--is the goal to create a clinical decision support/forecasting model while the patient is still in the hospital? Is the goal to create a model that can detect whether we forgot to mark the presence of CKD in an encounter for billing after the ICU stay? This is not clearly stated, and these are different questions that require different modeling approaches. The design of the presented models is mainly useful for the second line of questioning. However, in an ICU population where creatinine and eGFR values are very frequently measured, why not compute a clinical baseline using traditional diagnostic definitions for comparison? This is currently absent from the manuscript and inclusion would make the research much stronger. More on this below.
## Pros
* Important healthcare problem. Research establishes baseline for CKD prediction task from ICD + demographic data on MIMIC-IV dataset.
* Well designed experiments, train/test split
* Large sample size & statistically robust
* Good choice of evaluation metrics (AUROC, AUPRC) and threshold selection rationale with MCC (a minimal threshold-selection sketch follows this list)
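For readers unfamiliar with MCC-based threshold selection, a minimal sketch; the threshold grid and variable names are illustrative, not taken from the paper:

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

def best_mcc_threshold(y_true, y_prob, thresholds=np.linspace(0.05, 0.95, 19)):
    """Pick the probability cutoff that maximizes the Matthews correlation coefficient."""
    scores = [matthews_corrcoef(y_true, (y_prob >= t).astype(int)) for t in thresholds]
    best = int(np.argmax(scores))
    return thresholds[best], scores[best]
```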
## Cons
* I consider Random Forest to be a simple model. The findings would be more interesting if a more advanced model such as gradient boosted trees were also compared, especially since gradient boosted trees often achieve state-of-the-art performance on many healthcare prediction tasks using tabular features (a minimal comparison sketch follows after this list).
* Poor word choice in second paragraph of "Predicting Undiagnosed CKD Patients" section. The authors write "The lower sensitivity is a benefit in this case...", but lower sensitivity is never a benefit because we always desire higher sensitivity and specificity. I think the authors mean that it is an optimal trade-off.
* This analysis pools all CKD classes together rather than predicting individual CKD I, II, etc. It would be more interesting/valuable if authors designed experiments to predict individual CKD classes in addition to predicting presence/absence of CKD. This is because management and healthcare cost of CKD I/II patient (often no dialysis, medication management) is very different than CKD IV/V (patients usually more ill, more comorbidities, and/or require dialysis). The utility of this prediction model would be significantly improved if models were able to predict CKD classes.
* MIMIC IV dataset is derived from real electronic health records of ICU patients where there is a temporal nature to the data. Some diagnoses & ICD codes may be present in certain days/encounters earlier than others. The authors' methods indicate that ICD codes extracted correspond to all ICD codes for a given stay--that is at time of discharge. Choosing this time point limits the utility of this predictive model as many diagnoses (ICD codes) will be accumulated during the ICU stay and may not be present on admission. The value of this model thus becomes post-encounter detection (e.g. CKD billing code is missing), not for prospective clinical decision support. A more clinically useful time point for prediction for diagnostic/clinical decision support would be earlier in the hospital stay (e.g. day 1 or day 2 of admission). The target use case of the proposed model should be more clearly stated; currently it is vague. The chosen use case for this model would then inform different choices for data selection and model development.
- Since the study population consists of ICU patients, I expect almost all of them to have multiple creatinine values or eGFR determinations in their laboratory studies. The creatinine and eGFR values are how CKD is diagnosed using diagnostic criteria such as KDIGO. The authors tangentially acknowledge these traditional diagnostic criteria in the "Previous Work and Study Scope" section, but do not actually compute these values as a clinical baseline. The proposed line of research would be much stronger and more clinically useful if the authors determined CKD from traditional diagnostic criteria (which are still widely used in clinical medicine) as a baseline for comparison, or used the computed values as ground truth instead of ICD codes. This should be possible for most patients in the MIMIC dataset because of the high availability of creatinine and eGFR data in ICU patients. Currently the authors compare against the presence of ICD codes related to CKD, but this ground truth may be inaccurate given that ICD codes are primarily used for billing purposes. In theory ICD codes should reflect the patient's clinical reality, but in practice they may not, since they require billing staff or healthcare staff to denote the presence of the diagnosis in the patient's EHR. Thus relying on ICD codes as ground truth may actually lead your model to under-diagnose CKD.
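To make the gradient-boosted-trees suggestion concrete, a minimal comparison sketch on tabular features; the synthetic data stands in for the MIMIC-IV extraction, which is not reproduced here:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Placeholder imbalanced tabular data, standing in for ICD/demographic features.
X, y = make_classification(n_samples=2000, n_features=30, weights=[0.8, 0.2], random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "gradient_boosting": HistGradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUROC = {auc:.3f}")
```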
rating: 6
confidence: 4 |
g7rqyMIvQb | Minimizing Chronic Kidney Disease (CKD) Underdiagnosis Using Machine Learning | [
"Lawrence Huang",
"Sachin Shankar",
"Keyvon Rashidi",
"Dany Alkurdi",
"Felipe Giuste"
] | Chronic Kidney Disease (CKD) is a prevalent and devastating progressive disease affecting up to 14% (>35.5 million individuals) of the United States population and costing Medicare well over $64 billion annually. As many as 90% of individuals with CKD are undiagnosed, indicating the need for better tools to diagnose CKD and prevent unnoticed disease progression. However, current methods of assessing CKD have limitations regarding accessibility, practicality, and accuracy. This study seeks to address these limitations by developing a data-driven method to assess CKD risk from a large opensource database of electronic health records that has not previously been applied for CKD prediction. Machine Learning (ML) methods were used to develop a software tool to predict patient CKD status with patient-specific demographic data, vital signs, and past medical history. Of the ML models used in this study, a Random Forest Classifier had the best performance in predicting CKD diagnosis correctly with an accuracy of 0.875, an Area Under the Receiver Operating Characteristic Curve of 0.927, and an F1 score of 0.765. Our results indicate that ML-based approaches can help facilitate early screening and intervention for patients at risk of CKD. For progressive diseases like CKD that become more devastating and expensive to treat as they progress, high rates of missed diagnoses can be reduced by ML models leveraging electronic health record data. | [
"Machine Learning",
"Chronic Kidney Disease",
"CKD",
"Value-Based Care",
"MIMIC-IV",
"Diagnosis",
"Prediction",
"Screening",
"Data-driven"
] | https://openreview.net/pdf?id=g7rqyMIvQb | 1MordrGaMy | review | 1,707,883,217,955 | g7rqyMIvQb | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission5/Reviewer_42Yx"
] | title: Official Review
review: The paper discusses machine learning for chronic kidney disease prediction. Multiple baselines are tested. The strengths of the paper include the detailed background introduction and the problem-driven study, with thorough data analysis and discussion of the experiments. The code is made available online. The weaknesses of the paper include too few baselines being tested, a lack of standardization, and no comparison to foundation models (which are not specifically built for chronic kidney disease but can easily be tailored to the task). Figure 1 is too small, especially its fonts. The Discussion section needs more tables/figures/statistics to back up its claims.
rating: 7
confidence: 3 |
fGVQgxvrzI | HEART: Heart Expert Assistant with ReTrieval-augmented | [
"Junhao Guo",
"XueFeng Shan",
"Guoming Wang",
"Dong Chen",
"Rongxing Lu",
"Siliang Tang"
] | As the incidence of cardiovascular diseases continues to rise, people are increasingly emphasizing the prevention and treatment of cardiovascular diseases. However, in economically disadvantaged areas, the scarcity of medical resources and lack of clinical experience make the early detection of cardiovascular diseases particularly challenging. For this challenge, the HEART (HEART Expert Assistant with Retrieval-augmented) model was proposed, which leverages the powerful logical reasoning capabilities of Large Language Models (LLMs) to assess whether patients have heart disease. Specifically, HEART operates on a dual-component structure, consisting of a Diagnostic Module and a Case Retrieval Module. For the Diagnostic Module, the LLM is pre-trained on a cardiac ultrasound assessment dataset to master the relevant evaluation techniques. As for the Case Retrieval Module, a text encoder transforms input cases into hidden features, which are then used to retrieve auxiliary cases. The input case and auxiliary case are merged through a Case Fusion Layer to obtain the fused case features, which are combined with prompts for inference. We have tested our model on a congenital disease dataset and achieved encouraging results. The proposed HEART model has shown tremendous potential in becoming the foundational model for predicting cardiovascular diseases. | [
"Clinical Foundation Models",
"Cardiovascular diseases prediction",
"Large language model",
"Retrieval-Augmented model"
] | https://openreview.net/pdf?id=fGVQgxvrzI | dRGWxzWw0h | review | 1,708,220,880,283 | fGVQgxvrzI | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission2/Reviewer_Ek3d"
] | title: Good proposed approach but need further polish
review: The manuscript proposes a retrieval-augmented LLM approach for text-based cardiovascular disease detection.
Overall, the proposed approach is reasonable. However, there are a number of unclear aspects that need to be addressed before the manuscript can be published.
**Strengths**
- Cardiovascular disease detection is one of the important risk prediction scenarios for clinical foundation models.
- The proposed retrieval augmentation is a useful technique to enhance foundation models.
**Weaknesses**
- Heart disease dataset
- Since it is an author-collected dataset, it would be better to mention the sizes of the training and testing sets.
- In a real clinical setting, there will also be healthy patients; however, the dataset does not contain a label for the healthy condition.
- Technical part
- How do the authors implement the re-ranker? Currently there is no explanation of it.
- How is ``<RAGHere>`` derived from ``A``? In the introduction of the Case Fusion Layer, the authors stop after introducing their cross-attention operator; there is still a gap between the cross-attention output and ``<RAGHere>``.
- Is there a particular reason to use the same $W_K$ for both $K=W_K A$ and $V=W_K A$? (A generic cross-attention formulation with distinct key and value projections is sketched after this list.)
- Experiment setup
- What is the RAG model? There is no reference for it. If it is a custom baseline, it would be better to introduce it.
- "During training, we randomly mask $m$ cases." $m$ is first introduced here; introducing a new variable without prior explanation makes the method harder to follow.
**Questions**
- Can the authors explain why the retrieved cases are still drawn from the training data? Assuming the foundation model is well trained, it should not need to retrieve cases it has already seen during training.
- What do ``standard values`` refer to (Results section)? Also, there is a typo, ``w/o stanard``, in Table 2. Is the ``standard value`` similar to the concept of ``standardization of units`` mentioned in the Heart disease dataset section? I can guess it may refer to the standard value range of a vital sign, but it is better to introduce it explicitly.
rating: 5
confidence: 4 |
fGVQgxvrzI | HEART: Heart Expert Assistant with ReTrieval-augmented | [
"Junhao Guo",
"XueFeng Shan",
"Guoming Wang",
"Dong Chen",
"Rongxing Lu",
"Siliang Tang"
] | As the incidence of cardiovascular diseases continues to rise, people are increasingly emphasizing the prevention and treatment of cardiovascular diseases. However, in economically disadvantaged areas, the scarcity of medical resources and lack of clinical experience make the early detection of cardiovascular diseases particularly challenging. For this challenge, the HEART (HEART Expert Assistant with Retrieval-augmented) model was proposed, which leverages the powerful logical reasoning capabilities of Large Language Models (LLMs) to assess whether patients have heart disease. Specifically, HEART operates on a dual-component structure, consisting of a Diagnostic Module and a Case Retrieval Module. For the Diagnostic Module, the LLM is pre-trained on a cardiac ultrasound assessment dataset to master the relevant evaluation techniques. As for the Case Retrieval Module, a text encoder transforms input cases into hidden features, which are then used to retrieve auxiliary cases. The input case and auxiliary case are merged through a Case Fusion Layer to obtain the fused case features, which are combined with prompts for inference. We have tested our model on a congenital disease dataset and achieved encouraging results. The proposed HEART model has shown tremendous potential in becoming the foundational model for predicting cardiovascular diseases. | [
"Clinical Foundation Models",
"Cardiovascular diseases prediction",
"Large language model",
"Retrieval-Augmented model"
] | https://openreview.net/pdf?id=fGVQgxvrzI | QUR0VDQFvW | review | 1,708,570,707,878 | fGVQgxvrzI | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission2/Reviewer_6pZo"
] | title: Strong Manuscript
review: In reviewing this manuscript, it is clear that the authors articulate the focus of their study well. They explain the rationale of their research as well as their results.
My constructive feedback would be as follows:
1. The abstract should be more concrete in explaining the “encouraging results.” At present, there is no concrete description of any results whatsoever in the abstract.
2. In defining the attention function, the authors should define all terms in the model; at present, they do not describe the \\(d_k\\) scaling term and the role it plays in the attention function (see the note after this list).
3. In the section about pretraining, the authors state, "Specifically, we detect the word “Figure” in sentences and remove the corresponding sentences." Without having looked at the curated pre-training dataset, it would be important to know whether all sentences analyzing images contained the word “Figure”, or whether words such as “Image” or other synonyms might have been used.
4. I would encourage the authors to explicitly define the "cls" subscript used in their arrays.
5. When describing the space of retrieved sets, I believe the authors may have a typesetting issue; namely, the authors state the set as \\(R^{1xh}\\), using an italic \\(x\\) as opposed to \\(\\times\\), indicating the array size of R being 1 by h. Hence, I believe the dimensionality of R should be represented as \\(R^{1 \\times h}\\).
6. Since the authors are using LaTeX markup, I encourage them to express the learning rate as \\(10^{-4}\\) as opposed to 1e-4.
7. The authors should define lora_alpha and lora_r in the context of LoRA and display them with the appropriate LaTeX markup (if applicable); a brief configuration sketch is given after this list.
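As a concrete reference for points 2 and 7 above (assuming the authors follow the standard Transformer formulation, which their notation suggests):

\\[ \\mathrm{Attention}(Q, K, V) = \\mathrm{softmax}\\left(\\frac{QK^{\\top}}{\\sqrt{d_k}}\\right)V \\]

where \\(d_k\\) is the key dimension; dividing the dot products by \\(\\sqrt{d_k}\\) keeps their magnitude from growing with dimension, which would otherwise push the softmax into a saturated, small-gradient regime.

For the LoRA terms, ``lora_r`` is the rank of the low-rank update matrices and ``lora_alpha`` is a scaling factor (the update is scaled by \\(\\alpha / r\\)). A hypothetical configuration using the HuggingFace PEFT library — the base model, rank, and target modules below are illustrative, not taken from the paper — could look like:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # illustrative base model
config = LoraConfig(
    task_type="CAUSAL_LM",
    r=8,                                  # lora_r: rank of the low-rank update matrices
    lora_alpha=16,                        # lora_alpha: scaling factor; update is scaled by alpha / r
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # which projections receive LoRA adapters
)
model = get_peft_model(base, config)
model.print_trainable_parameters()        # only the adapter weights are trainable
```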
Overall, the authors have an extremely strong paper.
rating: 9
confidence: 4 |
fGVQgxvrzI | HEART: Heart Expert Assistant with ReTrieval-augmented | [
"Junhao Guo",
"XueFeng Shan",
"Guoming Wang",
"Dong Chen",
"Rongxing Lu",
"Siliang Tang"
] | As the incidence of cardiovascular diseases continues to rise, people are increasingly emphasizing the prevention and treatment of cardiovascular diseases. However, in economically disadvantaged areas, the scarcity of medical resources and lack of clinical experience make the early detection of cardiovascular diseases particularly challenging. For this challenge, the HEART (HEART Expert Assistant with Retrieval-augmented) model was proposed, which leverages the powerful logical reasoning capabilities of Large Language Models (LLMs) to assess whether patients have heart disease. Specifically, HEART operates on a dual-component structure, consisting of a Diagnostic Module and a Case Retrieval Module. For the Diagnostic Module, the LLM is pre-trained on a cardiac ultrasound assessment dataset to master the relevant evaluation techniques. As for the Case Retrieval Module, a text encoder transforms input cases into hidden features, which are then used to retrieve auxiliary cases. The input case and auxiliary case are merged through a Case Fusion Layer to obtain the fused case features, which are combined with prompts for inference. We have tested our model on a congenital disease dataset and achieved encouraging results. The proposed HEART model has shown tremendous potential in becoming the foundational model for predicting cardiovascular diseases. | [
"Clinical Foundation Models",
"Cardiovascular diseases prediction",
"Large language model",
"Retrieval-Augmented model"
] | https://openreview.net/pdf?id=fGVQgxvrzI | CE3ZV1jYBX | review | 1,708,955,099,503 | fGVQgxvrzI | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission2/Reviewer_sCWy"
] | title: A nice end-to-end pipeline showing LLMs being used to assist in clinical predictive tasks
review: Well-written paper proposing a pipeline capable of predicting cardiovascular diseases. Custom models are combined with a RAG layer that retrieves similar cases to support the predictions.
The evaluation is well thought out. Overall, a good contribution.
rating: 7
confidence: 4 |
fGVQgxvrzI | HEART: Heart Expert Assistant with ReTrieval-augmented | [
"Junhao Guo",
"XueFeng Shan",
"Guoming Wang",
"Dong Chen",
"Rongxing Lu",
"Siliang Tang"
] | As the incidence of cardiovascular diseases continues to rise, people are increasingly emphasizing the prevention and treatment of cardiovascular diseases. However, in economically disadvantaged areas, the scarcity of medical resources and lack of clinical experience make the early detection of cardiovascular diseases particularly challenging. For this challenge, the HEART (HEART Expert Assistant with Retrieval-augmented) model was proposed, which leverages the powerful logical reasoning capabilities of Large Language Models (LLMs) to assess whether patients have heart disease. Specifically, HEART operates on a dual-component structure, consisting of a Diagnostic Module and a Case Retrieval Module. For the Diagnostic Module, the LLM is pre-trained on a cardiac ultrasound assessment dataset to master the relevant evaluation techniques. As for the Case Retrieval Module, a text encoder transforms input cases into hidden features, which are then used to retrieve auxiliary cases. The input case and auxiliary case are merged through a Case Fusion Layer to obtain the fused case features, which are combined with prompts for inference. We have tested our model on a congenital disease dataset and achieved encouraging results. The proposed HEART model has shown tremendous potential in becoming the foundational model for predicting cardiovascular diseases. | [
"Clinical Foundation Models",
"Cardiovascular diseases prediction",
"Large language model",
"Retrieval-Augmented model"
] | https://openreview.net/pdf?id=fGVQgxvrzI | 7kL28T1nx9 | review | 1,708,187,275,805 | fGVQgxvrzI | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission2/Reviewer_LEQu"
] | title: Using ECG examination records to predict heart defects
review: The authors propose to use ECG examination records (text) to classify heart defects. This is in contrast to the more "classical" approach of using the ECG signal data directly.
To my understanding, at inference time this method would rely on a doctor first describing the ECG to produce the ECG examination record which would then be used as input to the model. I wonder if this is something that limits its applicability. In the introduction the authors note that there is a lack of cardiologists, but this method would not alleviate that issue.
The authors first apply a classic few-shot strategy using a Llama 2 variant that was pre-trained on a curated version of a public dataset of ECG notes. They report low performance with this strategy.
They then fine-tune the model on their dataset (1006 cases) and observe a marked improvement. Notably, there is no description of whether they split the data into training and validation sets, nor of any cross-validation strategy (one possible setup is sketched below). Additionally, pre-training and fine-tuning are run for very few epochs (10 and 15, respectively).
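For example, a straightforward evaluation protocol on the 1006 cases could use stratified cross-validation along the lines of the sketch below (``case_texts`` and ``labels`` are placeholders standing in for the authors' data, not their actual variables):

```python
from sklearn.model_selection import StratifiedKFold

case_texts = ["..."] * 1006   # placeholder for the 1006 textual case reports
labels = [0, 1] * 503         # placeholder binary diagnosis labels

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(skf.split(case_texts, labels)):
    # fine-tune on the training folds only; report metrics on the held-out fold
    train_cases = [case_texts[i] for i in train_idx]
    val_cases = [case_texts[i] for i in val_idx]
```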
The authors then use RAG to include context from similar cases to aid the prediction. With a standard RAG strategy they can only include 1 or 2 retrieved samples before running out of tokens. The RAG strategies improve performance, but I have several issues here: (i) the knowledge base for RAG is the same training cohort, which seems a major issue; (ii) if the retrieved context includes a diagnosis, the model may simply copy the retrieved diagnosis. It may be beneficial to investigate this potential issue (a sketch of a leakage-safer retrieval step is given below).
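To make the second point concrete, a leakage-safer retrieval step could exclude the query case from its own neighbour set and withhold the stored diagnosis before building the prompt. The toy sketch below is my illustration only (cosine retrieval over precomputed embeddings and the ``report``/``diagnosis`` field names are assumptions, not the authors' implementation):

```python
import numpy as np

def retrieve_support(query_id, query_vec, case_vecs, cases, k=2):
    """Toy retrieval that (i) drops the query case itself and (ii) withholds the
    stored diagnosis, so the LLM cannot simply copy a retrieved label."""
    sims = case_vecs @ query_vec / (
        np.linalg.norm(case_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-8)
    keep = [i for i in np.argsort(-sims) if i != query_id][:k]
    return [{"report": cases[i]["report"], "diagnosis": "[withheld]"} for i in keep]
```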
To include more context, they propose a context fusion model based on cross-attention. If I understood correctly, the entire context coming from the RAG portion then boils down to a single vector. This seems to improve performance further.
In general, I would have liked to see performance baselines that use the ECG data directly, to assess whether the LLM strategies based on ECG reports offer a benefit over them.
I was confused about the "standard values": I could not find a description of them. They are mentioned only once in the text, yet they appear in the results table.
It would have been good to see the number of cases of each type: single, double, and triple.
rating: 4
confidence: 3 |
cYSthunEPN | Common Factors in Psychotherapy: Enhancing Provider-to-Patient Dynamics to Improve Patient Outcomes | [
"Alison Cerezo"
] | For this paper, we describe our approach to benchmarking Common Factors of Empathy and Collaboration on the HOPE dataset—a publicly available dataset comprising 12.8k utterances from 212 therapy sessions involving a therapist and client dyad. Malhotra et al. (2022) conducted thorough processing of the HOPE dataset to eliminate noise and transcription errors. Common Factors Theory encompasses factors from (1) the client, (2) provider and, (3) therapeutic context; we specifically focus on provider behaviors in this paper. Our central research question: Can we produce a scalable, consistent, and unbiased way to assess the occurrences of reflective listening, appreciation, and confrontation–markers of empathy and collaboration, the core features of Common Factors Theory–using natural language processing and AI methods to augment provider communications?
**Could not add my co-authors in the portal.
Our full team: Alison Cerezo, PhD,1,2, Vijaykumar Palat, MS1, Amber Jolley-Paige, PhD1, Sarah Peregrine Lord, PsyD1,3
(1 mpathic.ai; 2 University of California Santa Barbara, 3 University of Washington) | [
"clinical benchmarks",
"common factors",
"health equity",
"machine learning",
"artificial intelligence"
] | https://openreview.net/pdf?id=cYSthunEPN | uRqEgFoGRA | review | 1,709,020,990,251 | cYSthunEPN | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission25/Reviewer_QiAY"
] | title: Benchmarking Common Factors in Psychotherapy Using AI Systems to Enhance Provider-to-Patient Dynamics to Improve Patient Outcomes
review: The authors present the use of an LLM to improve psychotherapy sessions, integrating a Common Factors approach into the LLM to provide feedback.
The paper is certainly original. It would enhance the clarity and understanding of the paper if the authors presented more detail on how the system is built, how the metrics are interpreted, and how the system can improve the quality of visits. I would also appreciate a discussion of the ethical and social implications of using this type of technology in the medical setting.
rating: 6
confidence: 3 |
cYSthunEPN | Common Factors in Psychotherapy: Enhancing Provider-to-Patient Dynamics to Improve Patient Outcomes | [
"Alison Cerezo"
] | For this paper, we describe our approach to benchmarking Common Factors of Empathy and Collaboration on the HOPE dataset—a publicly available dataset comprising 12.8k utterances from 212 therapy sessions involving a therapist and client dyad. Malhotra et al. (2022) conducted thorough processing of the HOPE dataset to eliminate noise and transcription errors. Common Factors Theory encompasses factors from (1) the client, (2) provider and, (3) therapeutic context; we specifically focus on provider behaviors in this paper. Our central research question: Can we produce a scalable, consistent, and unbiased way to assess the occurrences of reflective listening, appreciation, and confrontation–markers of empathy and collaboration, the core features of Common Factors Theory–using natural language processing and AI methods to augment provider communications?
**Could not add my co-authors in the portal.
Our full team: Alison Cerezo, PhD,1,2, Vijaykumar Palat, MS1, Amber Jolley-Paige, PhD1, Sarah Peregrine Lord, PsyD1,3
(1 mpathic.ai; 2 University of California Santa Barbara, 3 University of Washington) | [
"clinical benchmarks",
"common factors",
"health equity",
"machine learning",
"artificial intelligence"
] | https://openreview.net/pdf?id=cYSthunEPN | qHZueMH6tF | review | 1,708,674,566,347 | cYSthunEPN | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission25/Reviewer_AfmP"
] | title: Review for Common Factors in Psychotherapy: Enhancing Provider-to-Patient Dynamics to Improve Patient Outcomes
review: Summary of the paper:
The abstract describes the use of NLP to detect healthcare provider behaviors aligned with common factors theory in psychotherapy. Common factors theory emphasizes building empathy, trust and positive relationships through provider skills like reflective listening and appreciation. While the clinical importance of this is paramount, I have concerns with the content that has been presented in the abstract.
Major Comments:
This is a great problem statement. However, I have a couple of major comments:
1. There is ambiguity in the description of the exact methods; vague terms such as "machine learning" and "natural language processing" are used.
* They mention the use of “synthetic and generative technologies to expand specific labeling strategies and data curation by generating and validating rare use cases” but do not describe this: what generative model was used, how was its realism for simulating rare use cases verified, and how much synthetic data was generated relative to non-synthetic data? More details need to be provided, since these choices can have a significant impact on the quality of their model.
* “Machine learning methods were used to create natural language processing models based on conversational training data”. What was the base NLP model? Did the authors fine-tune a model such as LLaMA? What specific machine learning method was used? The NLP fine-tuning strategy needs to be described (the sketch after these comments illustrates the level of detail that would help).
2. The authors mention that they will report results of benchmarking their model on the HOPE dataset, but they don’t do so within the paper.
The overall clarity of the abstract is low due to the above concerns.
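To illustrate the level of methodological detail that would address these concerns, a standard utterance-level fine-tune might be described along the lines of the sketch below. The base model, label set, column name, and hyperparameters here are hypothetical, chosen only for illustration, and ``train_ds``/``val_ds`` stand in for labeled utterance datasets:

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABELS = ["reflective_listening", "appreciation", "confrontation"]  # assumed label set
tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base",
                                                           num_labels=len(LABELS))

def encode(batch):
    # "utterance" is a hypothetical column holding the provider's turn text
    return tok(batch["utterance"], truncation=True, padding="max_length", max_length=128)

args = TrainingArguments(output_dir="cf_classifier", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds.map(encode, batched=True),
                  eval_dataset=val_ds.map(encode, batched=True))
trainer.train()
```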
Minor Comments:
1. They have not adhered to the AAAI submission format.
2. They use the phrase “using machine learning with natural language processing”. NLP is technically a subfield of ML, and this statement needs to be revised to reflect that.
rating: 4
confidence: 4 |