Update app.py
app.py
CHANGED
@@ -15,8 +15,34 @@ def respond(
     temperature,
     top_p,
 ):
-    system_message = "
-
+    system_message = f"""
+    This is my bio:
+    I am a Ph.D. student in Computational Communication and an M.S. student in Statistics at UCLA, supervised by Prof. Jungseock Joo, Prof. Rick Dale, and Prof. Hongjing Lu. My research interests lie in the realm of deep learning, cognitive science, and multimodal communication. I am proficient in developing deep neural networks that process and integrate various forms of data, such as language (BERT, GPT) and image-text (CLIP) embeddings, behavioral signals (OpenFace, OpenPose), auditory features (mel spectrogram, MFCC), and neuroimaging signals (fNIRS, fMRI).
+
+    I previously worked as a Deep Learning/Data Science Intern at Beyond Limits AI, where I automated knowledge graph creation using LLMs, and as an NLP research intern at Testin, where I implemented a Seq2Seq model for OCR misspelling correction. Several of my papers are published in Communications in Computer and Information Science, Culture and Computing, Review of Communication, and ICWSM and LREC workshop proceedings. I am also a finalist for the 2023-2024 Meta PhD Fellowship and the recipient of the UCLA Graduate Council Diversity Fellowship, which includes full tuition and stipend coverage.
+
+    I currently work as a PhD student researcher at the Computational Media Lab and the Communicative Mind (Co-Mind) Lab, where we are developing a standardized end-to-end pipeline for multimodal analysis. We also propose statistical approaches for visualizing the learning trajectories of neural networks, drawing on cognitive science theories to improve model explainability.
+
+    As a rigorous computational social scientist and data science researcher from an interdisciplinary program, I enjoy developing and validating deep learning tools, answering impact-driven societal questions using machine learning, and bridging knowledge between fields.
+
+    In my free time, I enjoy bouldering (I just hit V5!), hiking, half marathons, oil painting, and true crime!
+
+    """
+
+    research = f"""My ongoing research is situated at the intersection of AI and multimodal communication, focusing on two main areas: Human-AI Intelligence and Multimodal Representational Learning. Theoretically, my dissertation, titled "From Language Models to Multimodal Intelligence," explores these models within the framework of symbolic versus embodied cognition, leveraging various deep neural network (DNN) simulations and human-AI interaction studies.
+
+    In layman's terms:
+
+    Human-AI Interaction: This research primarily involves human-LLM and human-language-vision-model interaction design to examine how human preferences align with or deviate from those of generative AI models (e.g., image-caption alignment). It also explores how our communication with large generative models changes when users are paired with language-only versus language-vision conditions. I work closely with my committee members to refine my study design, with contributions from Dr. Elisa Kreiss's Computation and Language for Society lab and Dr. Hongjing Lu's Computational Vision and Learning Lab.
+
+    Additionally, I collaborate with complex systems scholars to use multi-agent LLMs to simulate how agent personalities impact problem-solving in negotiation games, and with social neuroscientists to examine individual differences in the biosignals associated with "feeling connected" in human-GPT-4o interactions.
+
+    Multimodal Representational Learning: This research can be broadly understood as applied deep learning/machine learning with a computational cognitive science focus. I use various combinations of deep learning models (CNN, RNN-LSTM, Transformer, Seq2Seq, etc.) for diverse goals: improving classification performance on multimodal datasets, facilitating downstream statistical analysis on de-spatialized/de-temporalized embeddings derived from raw signals, and conducting large-scale simulations of embedding signal patterns across models, modalities, and datasets. I work closely with my primary advisor, Dr. Rick Dale, and seek guidance from Dr. Hongjing Lu.
+
+    At Dr. Dale's Communicative Mind (Co-Mind) Lab, I actively collaborate with social neuroscientists as a DNN modeler to streamline the integration of neurosignals (fNIRS) with other behavioral signals (facial expressions, body movements), bridging the analysis of these raw signals with high-level social constructs (shared reality, connectedness, etc.)."""
+
+    messages = [{"role": "system", "content": system_message}]
+    messages.append({"role": "user", "content": research})
 
     for val in history:
         if val[0]:
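For context: the function being edited matches the stock chat template that Hugging Face Spaces generates for Gradio apps (the same temperature, top_p signature and the same for val in history loop). The sketch below shows how the edited block plausibly sits inside the full respond function. The InferenceClient setup, the zephyr-7b-beta model name, the slider wiring, and the streaming loop come from that template and are assumptions here, not part of this commit.

import gradio as gr
from huggingface_hub import InferenceClient

# Assumed default model from the stock Spaces template, not from this commit.
client = InferenceClient("HuggingFaceH4/zephyr-7b-beta")

def respond(message, history, max_tokens, temperature, top_p):
    system_message = "..."  # bio text from the diff above
    research = "..."        # research text from the diff above

    # Seed the conversation: bio as the system prompt, research as a first user turn.
    messages = [{"role": "system", "content": system_message}]
    messages.append({"role": "user", "content": research})

    # Replay earlier turns so the model sees the whole conversation.
    for val in history:
        if val[0]:
            messages.append({"role": "user", "content": val[0]})
        if val[1]:
            messages.append({"role": "assistant", "content": val[1]})

    messages.append({"role": "user", "content": message})

    # Stream the completion token by token back to the Gradio UI.
    response = ""
    for chunk in client.chat_completion(
        messages,
        max_tokens=max_tokens,
        stream=True,
        temperature=temperature,
        top_p=top_p,
    ):
        token = chunk.choices[0].delta.content
        if token:
            response += token
        yield response

demo = gr.ChatInterface(
    respond,
    additional_inputs=[
        gr.Slider(minimum=1, maximum=2048, value=512, step=1, label="Max new tokens"),
        gr.Slider(minimum=0.1, maximum=4.0, value=0.7, step=0.1, label="Temperature"),
        gr.Slider(minimum=0.1, maximum=1.0, value=0.95, step=0.05, label="Top-p (nucleus sampling)"),
    ],
)

if __name__ == "__main__":
    demo.launch()

In this template, history arrives as a list of (user, assistant) tuples, which is why the loop checks val[0] and val[1] separately before replaying each turn.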