Yingfa Chen
Category: Research

2024

InfiniteBench: Extending Long Context Evaluation Beyond 100K Tokens

Code | Paper

The first benchmark for evaluating how effectively LLMs handle contexts longer than 100K tokens!

In the paper, we name it $\infty$-Bench, but I will sometimes use "InfiniteBench" in this blog post for better readability.

I finally got some time to write this post; I have been so busy lately! I have been on a fairly long research hiatus, and in the meantime the field of NLP has been revolutionized by an overwhelming number of new LLMs. I was finally able to produce some productive and meaningful work in this new era of research, as a second author. In this blog post, I will introduce this recent work.

1.1k words, 7 min

Research

2023

CFDBench: A Comprehensive Benchmark for Machine Learning Methods in Fluid Dynamics

Code | Paper (on hold by arXiv) | Paper (preprints.org) | 知乎

I did this work with my girlfriend, whose research direction is computational fluid dynamics (CFD). We observed that there are numerous research works applying deep learning (DL) to solve CFD problems. For example, Pangu-Weather has shown that DL methods can not only be more accurate than the best numerical methods, but can also be several orders of magnitude faster.

312 words, 1 min

Research