diff --git a/data/xml/2022.coling.xml b/data/xml/2022.coling.xml
--- a/data/xml/2022.coling.xml
+++ b/data/xml/2022.coling.xml
@@ -5168,7 +5168,7 @@
 <title>On the Complementarity between Pre-Training and Random-Initialization for Resource-Rich Machine Translation</title>
 <author><first>Changtong</first><last>Zan</last></author>
 <author><first>Liang</first><last>Ding</last></author>
-<author><first>Li</first><last>Shen</last></author>
+<author id="li-shen"><first>Li</first><last>Shen</last></author>
 <author><first>Yu</first><last>Cao</last></author>
 <author><first>Weifeng</first><last>Liu</last></author>
 <author><first>Dacheng</first><last>Tao</last></author>
diff --git a/data/xml/2022.findings.xml b/data/xml/2022.findings.xml
--- a/data/xml/2022.findings.xml
+++ b/data/xml/2022.findings.xml
@@ -12137,7 +12137,7 @@ Faster and Smaller Speech Translation without Quality Compromise</title>
 <title>Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models</title>
 <author><first>Qihuang</first><last>Zhong</last><affiliation>Wuhan University</affiliation></author>
 <author><first>Liang</first><last>Ding</last><affiliation>JD Explore Academy, JD.com Inc. &amp; The University of Sydney</affiliation></author>
-<author><first>Li</first><last>Shen</last><affiliation>JD Explore Academy</affiliation></author>
+<author id="li-shen"><first>Li</first><last>Shen</last><affiliation>JD Explore Academy</affiliation></author>
 <author><first>Peng</first><last>Mi</last><affiliation>Xiamen University</affiliation></author>
 <author><first>Juhua</first><last>Liu</last><affiliation>Wuhan University</affiliation></author>
 <author><first>Bo</first><last>Du</last><affiliation>Wuhan University</affiliation></author>
diff --git a/data/xml/2023.emnlp.xml b/data/xml/2023.emnlp.xml
--- a/data/xml/2023.emnlp.xml
+++ b/data/xml/2023.emnlp.xml
@@ -9715,7 +9715,7 @@
 <title>Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models</title>
 <author><first>Miaoxi</first><last>Zhu</last></author>
 <author><first>Qihuang</first><last>Zhong</last></author>
-<author><first>Li</first><last>Shen</last></author>
+<author id="li-shen"><first>Li</first><last>Shen</last></author>
 <author><first>Liang</first><last>Ding</last></author>
 <author><first>Juhua</first><last>Liu</last></author>
 <author><first>Bo</first><last>Du</last></author>
@@ -12674,7 +12674,7 @@ The experiments were repeated and the tables and figures were updated. Changes a
 <author><first>Shwai</first><last>He</last></author>
 <author><first>Run-Ze</first><last>Fan</last></author>
 <author><first>Liang</first><last>Ding</last></author>
-<author><first>Li</first><last>Shen</last></author>
+<author id="li-shen"><first>Li</first><last>Shen</last></author>
 <author><first>Tianyi</first><last>Zhou</last></author>
 <author><first>Dacheng</first><last>Tao</last></author>
 <pages>14685-14691</pages>
diff --git a/data/xml/2023.findings.xml b/data/xml/2023.findings.xml
--- a/data/xml/2023.findings.xml
+++ b/data/xml/2023.findings.xml
@@ -19530,7 +19530,7 @@
 <author><first>Keqin</first><last>Peng</last></author>
 <author><first>Liang</first><last>Ding</last></author>
 <author><first>Qihuang</first><last>Zhong</last></author>
-<author><first>Li</first><last>Shen</last></author>
+<author id="li-shen"><first>Li</first><last>Shen</last></author>
 <author><first>Xuebo</first><last>Liu</last></author>
 <author><first>Min</first><last>Zhang</last></author>
 <author><first>Yuanxin</first><last>Ouyang</last></author>
diff --git a/data/xml/2024.acl.xml b/data/xml/2024.acl.xml
--- a/data/xml/2024.acl.xml
+++ b/data/xml/2024.acl.xml
@@ -8162,7 +8162,7 @@
 <title>Revisiting Knowledge Distillation for Autoregressive Language Models</title>
 <author><first>Qihuang</first><last>Zhong</last></author>
 <author><first>Liang</first><last>Ding</last></author>
-<author><first>Li</first><last>Shen</last><affiliation>Sun Yat-Sen University</affiliation></author>
+<author id="li-shen"><first>Li</first><last>Shen</last><affiliation>Sun Yat-Sen University</affiliation></author>
 <author><first>Juhua</first><last>Liu</last><affiliation>Wuhan University</affiliation></author>
 <author><first>Bo</first><last>Du</last><affiliation>Wuhan University</affiliation></author>
 <author><first>Dacheng</first><last>Tao</last><affiliation>University of Sydney</affiliation></author>
diff --git a/data/xml/2024.emnlp.xml b/data/xml/2024.emnlp.xml
--- a/data/xml/2024.emnlp.xml
+++ b/data/xml/2024.emnlp.xml
@@ -13939,7 +13939,7 @@
 <author><first>Cheng-Yu</first><last>Hsieh</last><affiliation>University of Washington</affiliation></author>
 <author><first>Ajay Kumar</first><last>Jaiswal</last><affiliation>Apple</affiliation></author>
 <author><first>Tianlong</first><last>Chen</last></author>
-<author><first>Li</first><last>Shen</last></author>
+<author id="li-shen"><first>Li</first><last>Shen</last></author>
 <author><first>Ranjay</first><last>Krishna</last><affiliation>Department of Computer Science</affiliation></author>
 <author><first>Shiwei</first><last>Liu</last></author>
 <pages>18089-18099</pages>
diff --git a/data/xml/2024.findings.xml b/data/xml/2024.findings.xml
--- a/data/xml/2024.findings.xml
+++ b/data/xml/2024.findings.xml
@@ -17100,7 +17100,7 @@
 <title><fixed-case>OOP</fixed-case>: Object-Oriented Programming Evaluation Benchmark for Large Language Models</title>
 <author><first>Shuai</first><last>Wang</last></author>
 <author><first>Liang</first><last>Ding</last></author>
-<author><first>Li</first><last>Shen</last><affiliation>Sun Yat-Sen University</affiliation></author>
+<author id="li-shen"><first>Li</first><last>Shen</last><affiliation>Sun Yat-Sen University</affiliation></author>
 <author><first>Yong</first><last>Luo</last><affiliation>Wuhan University</affiliation></author>
 <author><first>Bo</first><last>Du</last><affiliation>Wuhan University</affiliation></author>
 <author><first>Dacheng</first><last>Tao</last><affiliation>University of Sydney</affiliation></author>
@@ -21058,7 +21058,7 @@
 <author><first>Duy</first><last>Duong-Tran</last><affiliation>United States Naval Academy and University of Pennsylvania, University of Pennsylvania</affiliation></author>
 <author><first>Ying</first><last>Ding</last><affiliation>University of Texas, Austin</affiliation></author>
 <author><first>Huan</first><last>Liu</last><affiliation>Arizona State University</affiliation></author>
-<author><first>Li</first><last>Shen</last><affiliation>University of Pennsylvania</affiliation></author>
+<author id="li-shen-dartmouth"><first>Li</first><last>Shen</last><affiliation>University of Pennsylvania</affiliation></author>
 <author><first>Tianlong</first><last>Chen</last></author>
 <pages>2187-2205</pages>
 <abstract>Recent advancements in large language models (LLMs) have achieved promising performances across various applications. Nonetheless, the ongoing challenge of integrating long-tail knowledge continues to impede the seamless adoption of LLMs in specialized domains. In this work, we introduce DALK, a.k.a. Dynamic Co-Augmentation of LLMs and KG, to address this limitation and demonstrate its ability on studying Alzheimer’s Disease (AD), a specialized sub-field in biomedicine and a global health priority. With a synergized framework of LLM and KG mutually enhancing each other, we first leverage LLM to construct an evolving AD-specific knowledge graph (KG) sourced from AD-related scientific literature, and then we utilize a coarse-to-fine sampling method with a novel self-aware knowledge retrieval approach to select appropriate knowledge from the KG to augment LLM inference capabilities. The experimental results, conducted on our constructed AD question answering (ADQA) benchmark, underscore the efficacy of DALK. Additionally, we perform a series of detailed analyses that can offer valuable insights and guidelines for the emerging topic of mutually enhancing KG and LLM.</abstract>
diff --git a/data/xml/2025.findings.xml b/data/xml/2025.findings.xml
--- a/data/xml/2025.findings.xml
+++ b/data/xml/2025.findings.xml
@@ -23144,7 +23144,7 @@
 <title>Edit Once, Update Everywhere: A Simple Framework for Cross-Lingual Knowledge Synchronization in <fixed-case>LLM</fixed-case>s</title>
 <author><first>Yuchen</first><last>Wu</last><affiliation>Shanghai Jiao Tong University</affiliation></author>
 <author><first>Liang</first><last>Ding</last></author>
-<author><first>Li</first><last>Shen</last><affiliation>Sun Yat-Sen University</affiliation></author>
+<author id="li-shen"><first>Li</first><last>Shen</last><affiliation>Sun Yat-Sen University</affiliation></author>
 <author orcid="0000-0001-7225-5449"><first>Dacheng</first><last>Tao</last><affiliation>Nanyang Technological University</affiliation></author>
 <pages>23282-23302</pages>
 <abstract>Knowledge editing allows for efficient adaptation of large language models (LLMs) to new information or corrections without requiring full retraining. However, prior methods typically focus on either single-language editing or basic multilingual editing, failing to achieve true cross-linguistic knowledge synchronization. To address this, we present a simple and practical state-of-the-art (SOTA) recipe Cross-Lingual Knowledge Democracy Edit (X-KDE), designed to propagate knowledge from a dominant language to other languages effectively. Our X-KDE comprises two stages: (i) Cross-lingual Edition Instruction Tuning (XE-IT), which fine-tunes the model on a curated parallel dataset to modify in-scope knowledge while preserving unrelated information, and (ii) Target-language Preference Optimization (TL-PO), which applies advanced optimization techniques to ensure consistency across languages, fostering the transfer of updates. Additionally, we contribute a high-quality, cross-lingual dataset, specifically designed to enhance knowledge transfer across languages. Extensive experiments on the Bi-ZsRE and MzsRE benchmarks show that X-KDE significantly enhances cross-lingual performance, achieving an average improvement of +8.19%, while maintaining high accuracy in monolingual settings.</abstract>
diff --git a/data/yaml/name_variants.yaml b/data/yaml/name_variants.yaml
--- a/data/yaml/name_variants.yaml
+++ b/data/yaml/name_variants.yaml
@@ -1160,6 +1160,14 @@
 - canonical: {first: Kenneth S., last: Bøgh}
   variants:
   - {first: Kenneth, last: Bøgh}
+- canonical: {first: Li, last: Shen}
+  id: li-shen-dartmouth
+  orcid: 0000-0002-5443-0503
+  institution: Dartmouth College
+  comment: Dartmouth
+- canonical: {first: Li, last: Shen}
+  id: li-shen
+  comment: May refer to several people
 - canonical: {first: Alena, last: Bŏhmová}
   variants:
   - {first: Alena, last: Bohmova}
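
Note on the mechanics of this change: the id attribute added to each <author> element above resolves against the matching id key on a canonical entry in data/yaml/name_variants.yaml, which is how the Anthology tells the two researchers named Li Shen apart. Below is a minimal sketch of that lookup in Python; the YAML and XML data are copied from this diff, but resolve_author() and the surrounding scaffolding are illustrative assumptions, not the Anthology's actual build code.

    # Sketch: resolve an <author id="..."> element against name_variants.yaml.
    # Data is taken from this diff; resolve_author() is hypothetical.
    import xml.etree.ElementTree as ET
    import yaml  # requires PyYAML

    NAME_VARIANTS = yaml.safe_load("""
    - canonical: {first: Li, last: Shen}
      id: li-shen-dartmouth
      orcid: 0000-0002-5443-0503
      institution: Dartmouth College
      comment: Dartmouth
    - canonical: {first: Li, last: Shen}
      id: li-shen
      comment: May refer to several people
    """)

    AUTHOR = '<author id="li-shen-dartmouth"><first>Li</first><last>Shen</last></author>'

    def resolve_author(author_xml: str, variants: list) -> dict:
        """Return the name_variants entry whose id matches the author's id attribute."""
        author_id = ET.fromstring(author_xml).get("id")
        for entry in variants:
            if entry.get("id") == author_id:
                return entry
        raise KeyError(f"no name_variants entry with id {author_id!r}")

    print(resolve_author(AUTHOR, NAME_VARIANTS)["institution"])  # -> Dartmouth College

Authors without an id attribute (the common case) fall back to plain name matching, which is exactly the ambiguity the two entries above exist to avoid for papers by either Li Shen.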