Yingfa Chen

2024

InfiniteBench: Extending Long Context Evaluation Beyond 100K Tokens

Code | Paper

The first benchmark for evaluating the effectiveness of LLMs in handling more than 100k tokens!

In the paper, we call it $\infty$-Bench, but I will sometimes write "InfiniteBench" in this blog post for better readability.

Finally got some time to write this post; I've been so busy lately! I have been on a fairly long research hiatus, and in the meantime the field of NLP has been revolutionized by an overwhelming number of new LLMs. I was finally able to produce some productive and meaningful work in this new era of research, as a second author. In this blog post, I will introduce this recent work.

1.1k words, 7 min

Research

2023

Safety and Ethical Concerns of Large Language Models

I will be holding a seminar at ModelBest (面壁智能) on Sep 20, 2023 at 科技园, Haidian, Beijing. The seminar will be in Chinese and is titled "大模型安全与伦理问题" (translation: Safety and Ethical Concerns of Large Language Models). Below is a list of references.

635 words, 3 min

Thoughts

Updating My Personal Homepage

I had a personal homepage before, but I never set it up properly, let alone updated it. Recently I changed my GitHub username, which broke the old GitHub Pages site, so I took the opportunity to rebuild my homepage from scratch.

After going back and forth, I decided to use Hexo. I used Jekyll before and it was fine, but I really didn't want to deal with Ruby, and Hugo felt too cumbersome.

1.2k words, 4 min

Life

CFDBench: A Comprehensive Benchmark for Machine Learning Methods in Fluid Dynamics

Code | Paper (on hold by arXiv) | Paper (preprints.org) | 知乎

I did this work with my girlfriend, whose research area is computational fluid dynamics (CFD). We observed that there is a large body of research on applying deep learning (DL) to CFD problems. For example, Pangu-Weather has shown that DL methods can not only be more accurate than the best numerical methods, but also multiple orders of magnitude faster.

312 words, 1 min

Research

First post, just writing some random stuff

It is May 17, 2023. My first year as a master's student is about to end, and I have been at the Tsinghua campus (清华园) for almost five years now, which has had a truly huge impact on my life. This year I met the lovely 00, and I hope we can keep walking this road together.

The children of 00 and me:

239 words, 1 min

Life