Yingfa Chen

陈英发

Beijing, China

NLP Ph.D. student at Tsinghua University (THUNLP lab), advised by Prof. Zhiyuan Liu. I previously received my Bachelor's and Master's degrees at Tsinghua University. My research focuses on long-context language models and knowledge updating.


Featured Posts

2024

(EREN) Robust and Scalable Model Editing for Large Language Models

GitHub | Paper (upcoming)

TL;DR: We augment a reader with a growing notebook that caches all edits as natural-language texts; the reader retrieves relevant edits and makes inferences based on them. This achieves state-of-the-art model-editing performance on question answering and fact-checking.

525 words, 3 min

Paper

InfiniteBench: Extending Long Context Evaluation Beyond 100K Tokens

Code | Paper

The first benchmark for evaluating how effectively LLMs handle contexts longer than 100K tokens!

In the paper, we name it $\infty$-Bench, but I will sometimes use "InfiniteBench" in this blog post for better readability.

I finally got some time to write this blog post; I have been so busy lately! After a fairly long research hiatus, during which the field of NLP was revolutionized by an overwhelming number of new LLMs, I was finally able to produce some meaningful work in this new era of research, as a second author. In this post, I will introduce that work.

1.1k words, 7 min

Research

2023

CFDBench: A Comprehensive Benchmark for Machine Learning Methods in Fluid Dynamics

Code | Paper (on hold by arXiv) | Paper (preprints.org) | Zhihu (知乎)

I did this work with my girlfriend, whose research area is computational fluid dynamics (CFD). We observed that there are numerous research works applying deep learning (DL) to CFD problems. For example, Pangu-Weather has shown that DL methods can not only be more accurate than the best numerical methods, but also orders of magnitude faster.

312 words, 1 min

Research