diff --git a/data/xml/2022.coling.xml b/data/xml/2022.coling.xml
index fe0033be1e..f738f5e057 100644
--- a/data/xml/2022.coling.xml
+++ b/data/xml/2022.coling.xml
@@ -5168,7 +5168,7 @@
       <title>On the Complementarity between Pre-Training and Random-Initialization for Resource-Rich Machine Translation</title>
       <author><first>Changtong</first><last>Zan</last></author>
       <author><first>Liang</first><last>Ding</last></author>
-      <author><first>Li</first><last>Shen</last></author>
+      <author id="li-shen"><first>Li</first><last>Shen</last></author>
       <author><first>Yu</first><last>Cao</last></author>
       <author><first>Weifeng</first><last>Liu</last></author>
       <author><first>Dacheng</first><last>Tao</last></author>
diff --git a/data/xml/2022.findings.xml b/data/xml/2022.findings.xml
index c7c721e9e0..1e17d40ce1 100644
--- a/data/xml/2022.findings.xml
+++ b/data/xml/2022.findings.xml
@@ -12137,7 +12137,7 @@ Faster and Smaller Speech Translation without Quality Compromise
       <title>Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models</title>
       <author><first>Qihuang</first><last>Zhong</last><affiliation>Wuhan University</affiliation></author>
       <author><first>Liang</first><last>Ding</last><affiliation>JD Explore Academy, JD.com Inc. &amp; The University of Sydney</affiliation></author>
-      <author><first>Li</first><last>Shen</last><affiliation>JD Explore Academy</affiliation></author>
+      <author id="li-shen"><first>Li</first><last>Shen</last><affiliation>JD Explore Academy</affiliation></author>
       <author><first>Peng</first><last>Mi</last><affiliation>Xiamen University</affiliation></author>
       <author><first>Juhua</first><last>Liu</last><affiliation>Wuhan University</affiliation></author>
       <author><first>Bo</first><last>Du</last><affiliation>Wuhan University</affiliation></author>
diff --git a/data/xml/2023.emnlp.xml b/data/xml/2023.emnlp.xml
index a30757ed6f..94ce15ec35 100644
--- a/data/xml/2023.emnlp.xml
+++ b/data/xml/2023.emnlp.xml
@@ -9715,7 +9715,7 @@
       <title>Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models</title>
       <author><first>Miaoxi</first><last>Zhu</last></author>
       <author><first>Qihuang</first><last>Zhong</last></author>
-      <author><first>Li</first><last>Shen</last></author>
+      <author id="li-shen"><first>Li</first><last>Shen</last></author>
       <author><first>Liang</first><last>Ding</last></author>
       <author><first>Juhua</first><last>Liu</last></author>
       <author><first>Bo</first><last>Du</last></author>
@@ -12674,7 +12674,7 @@ The experiments were repeated and the tables and figures were updated. Changes a
       <author><first>Shwai</first><last>He</last></author>
       <author><first>Run-Ze</first><last>Fan</last></author>
       <author><first>Liang</first><last>Ding</last></author>
-      <author><first>Li</first><last>Shen</last></author>
+      <author id="li-shen"><first>Li</first><last>Shen</last></author>
       <author><first>Tianyi</first><last>Zhou</last></author>
       <author><first>Dacheng</first><last>Tao</last></author>
       <pages>14685-14691</pages>
diff --git a/data/xml/2023.findings.xml b/data/xml/2023.findings.xml
index 80ad9ca38b..d0da8c7344 100644
--- a/data/xml/2023.findings.xml
+++ b/data/xml/2023.findings.xml
@@ -19530,7 +19530,7 @@
       <author><first>Keqin</first><last>Peng</last></author>
       <author><first>Liang</first><last>Ding</last></author>
       <author><first>Qihuang</first><last>Zhong</last></author>
-      <author><first>Li</first><last>Shen</last></author>
+      <author id="li-shen"><first>Li</first><last>Shen</last></author>
       <author><first>Xuebo</first><last>Liu</last></author>
       <author><first>Min</first><last>Zhang</last></author>
       <author><first>Yuanxin</first><last>Ouyang</last></author>
diff --git a/data/xml/2024.acl.xml b/data/xml/2024.acl.xml
index 0c24f4b3ba..87b1aa5b6f 100644
--- a/data/xml/2024.acl.xml
+++ b/data/xml/2024.acl.xml
@@ -8162,7 +8162,7 @@
       <title>Revisiting Knowledge Distillation for Autoregressive Language Models</title>
       <author><first>Qihuang</first><last>Zhong</last></author>
       <author><first>Liang</first><last>Ding</last></author>
-      <author><first>Li</first><last>Shen</last><affiliation>Sun Yat-Sen University</affiliation></author>
+      <author id="li-shen"><first>Li</first><last>Shen</last><affiliation>Sun Yat-Sen University</affiliation></author>
       <author><first>Juhua</first><last>Liu</last><affiliation>Wuhan University</affiliation></author>
       <author><first>Bo</first><last>Du</last><affiliation>Wuhan University</affiliation></author>
       <author><first>Dacheng</first><last>Tao</last><affiliation>University of Sydney</affiliation></author>
diff --git a/data/xml/2024.emnlp.xml b/data/xml/2024.emnlp.xml
index cb09c5b5cd..631e4f7338 100644
--- a/data/xml/2024.emnlp.xml
+++ b/data/xml/2024.emnlp.xml
@@ -13939,7 +13939,7 @@
       <author><first>Cheng-Yu</first><last>Hsieh</last><affiliation>University of Washington</affiliation></author>
       <author><first>Ajay Kumar</first><last>Jaiswal</last><affiliation>Apple</affiliation></author>
       <author><first>Tianlong</first><last>Chen</last></author>
-      <author><first>Li</first><last>Shen</last></author>
+      <author id="li-shen"><first>Li</first><last>Shen</last></author>
       <author><first>Ranjay</first><last>Krishna</last><affiliation>Department of Computer Science</affiliation></author>
       <author><first>Shiwei</first><last>Liu</last></author>
       <pages>18089-18099</pages>
diff --git a/data/xml/2024.findings.xml b/data/xml/2024.findings.xml
index cbffc13d45..8aeb5fd27e 100644
--- a/data/xml/2024.findings.xml
+++ b/data/xml/2024.findings.xml
@@ -17100,7 +17100,7 @@
       <title>OOP: Object-Oriented Programming Evaluation Benchmark for Large Language Models</title>
       <author><first>Shuai</first><last>Wang</last></author>
       <author><first>Liang</first><last>Ding</last></author>
-      <author><first>Li</first><last>Shen</last><affiliation>Sun Yat-Sen University</affiliation></author>
+      <author id="li-shen"><first>Li</first><last>Shen</last><affiliation>Sun Yat-Sen University</affiliation></author>
       <author><first>Yong</first><last>Luo</last><affiliation>Wuhan University</affiliation></author>
       <author><first>Bo</first><last>Du</last><affiliation>Wuhan University</affiliation></author>
       <author><first>Dacheng</first><last>Tao</last><affiliation>University of Sydney</affiliation></author>
@@ -21058,7 +21058,7 @@
       <author><first>Duy</first><last>Duong-Tran</last><affiliation>United States Naval Academy and University of Pennsylvania, University of Pennsylvania</affiliation></author>
       <author><first>Ying</first><last>Ding</last><affiliation>University of Texas, Austin</affiliation></author>
       <author><first>Huan</first><last>Liu</last><affiliation>Arizona State University</affiliation></author>
-      <author><first>Li</first><last>Shen</last><affiliation>University of Pennsylvania</affiliation></author>
+      <author id="li-shen-dartmouth"><first>Li</first><last>Shen</last><affiliation>University of Pennsylvania</affiliation></author>
       <author><first>Tianlong</first><last>Chen</last></author>
       <pages>2187-2205</pages>
       <abstract>Recent advancements in large language models (LLMs) have achieved promising performances across various applications. Nonetheless, the ongoing challenge of integrating long-tail knowledge continues to impede the seamless adoption of LLMs in specialized domains. In this work, we introduce DALK, a.k.a. Dynamic Co-Augmentation of LLMs and KG, to address this limitation and demonstrate its ability on studying Alzheimer’s Disease (AD), a specialized sub-field in biomedicine and a global health priority. With a synergized framework of LLM and KG mutually enhancing each other, we first leverage LLM to construct an evolving AD-specific knowledge graph (KG) sourced from AD-related scientific literature, and then we utilize a coarse-to-fine sampling method with a novel self-aware knowledge retrieval approach to select appropriate knowledge from the KG to augment LLM inference capabilities. The experimental results, conducted on our constructed AD question answering (ADQA) benchmark, underscore the efficacy of DALK. Additionally, we perform a series of detailed analyses that can offer valuable insights and guidelines for the emerging topic of mutually enhancing KG and LLM.</abstract>
diff --git a/data/xml/2025.findings.xml b/data/xml/2025.findings.xml
index 307833c314..9ce6403ffa 100644
--- a/data/xml/2025.findings.xml
+++ b/data/xml/2025.findings.xml
@@ -23144,7 +23144,7 @@
       <title>Edit Once, Update Everywhere: A Simple Framework for Cross-Lingual Knowledge Synchronization in LLMs</title>
       <author><first>Yuchen</first><last>Wu</last><affiliation>Shanghai Jiao Tong University</affiliation></author>
       <author><first>Liang</first><last>Ding</last></author>
-      <author><first>Li</first><last>Shen</last><affiliation>Sun Yat-Sen University</affiliation></author>
+      <author id="li-shen"><first>Li</first><last>Shen</last><affiliation>Sun Yat-Sen University</affiliation></author>
       <author><first>Dacheng</first><last>Tao</last><affiliation>Nanyang Technological University</affiliation></author>
       <pages>23282-23302</pages>
       <abstract>Knowledge editing allows for efficient adaptation of large language models (LLMs) to new information or corrections without requiring full retraining. However, prior methods typically focus on either single-language editing or basic multilingual editing, failing to achieve true cross-linguistic knowledge synchronization. To address this, we present a simple and practical state-of-the-art (SOTA) recipe Cross-Lingual Knowledge Democracy Edit (X-KDE), designed to propagate knowledge from a dominant language to other languages effectively. Our X-KDE comprises two stages: (i) Cross-lingual Edition Instruction Tuning (XE-IT), which fine-tunes the model on a curated parallel dataset to modify in-scope knowledge while preserving unrelated information, and (ii) Target-language Preference Optimization (TL-PO), which applies advanced optimization techniques to ensure consistency across languages, fostering the transfer of updates. Additionally, we contribute a high-quality, cross-lingual dataset, specifically designed to enhance knowledge transfer across languages. Extensive experiments on the Bi-ZsRE and MzsRE benchmarks show that X-KDE significantly enhances cross-lingual performance, achieving an average improvement of +8.19%, while maintaining high accuracy in monolingual settings.</abstract>
diff --git a/data/yaml/name_variants.yaml b/data/yaml/name_variants.yaml
index 4d1b54d9e7..13441edc1a 100644
--- a/data/yaml/name_variants.yaml
+++ b/data/yaml/name_variants.yaml
@@ -1160,6 +1160,14 @@
 - canonical: {first: Kenneth S., last: Bøgh}
   variants:
   - {first: Kenneth, last: Bøgh}
+- canonical: {first: Li, last: Shen}
+  id: li-shen-dartmouth
+  orcid: 0000-0002-5443-0503
+  institution: Dartmouth College
+  comment: Dartmouth
+- canonical: {first: Li, last: Shen}
+  id: li-shen
+  comment: May refer to several people
 - canonical: {first: Alena, last: Bŏhmová}
   variants:
   - {first: Alena, last: Bohmova}
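
A note on how the two halves of this change fit together: each explicit id="..." attribute added to an <author> element in the release XML resolves to the name_variants.yaml entry with the matching id, which carries the canonical form of the name plus any disambiguating metadata (ORCID, institution, comment). The snippet below is a minimal Python sketch of that lookup, not part of the diff itself; the inline sample mirrors the two entries added above, and the variable names and standalone parsing are illustrative assumptions rather than Anthology build code.

import xml.etree.ElementTree as ET

import yaml  # PyYAML; pip install pyyaml

# Sample data mirroring the two entries this diff adds to name_variants.yaml.
VARIANTS = yaml.safe_load("""
- canonical: {first: Li, last: Shen}
  id: li-shen-dartmouth
  orcid: 0000-0002-5443-0503
  institution: Dartmouth College
  comment: Dartmouth
- canonical: {first: Li, last: Shen}
  id: li-shen
  comment: May refer to several people
""")

# Index the entries that carry an explicit id. (In the real Anthology build,
# entries without one are keyed by a slug derived from the canonical name.)
BY_ID = {entry["id"]: entry for entry in VARIANTS if "id" in entry}

# An author element as it appears in the release XML after this diff.
author = ET.fromstring(
    '<author id="li-shen"><first>Li</first><last>Shen</last></author>')
entry = BY_ID.get(author.get("id"))
if entry is not None:
    print(entry["canonical"], "->", entry.get("comment"))
    # {'first': 'Li', 'last': 'Shen'} -> May refer to several people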