<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Paper on Yingfa Chen 陈英发</title><link>https://chen-yingfa.github.io/categories/paper/</link><description>Recent content in Paper on Yingfa Chen 陈英发</description><generator>Hugo -- 0.146.6</generator><language>en-us</language><lastBuildDate>Thu, 14 Mar 2024 19:39:34 +0000</lastBuildDate><atom:link href="https://chen-yingfa.github.io/categories/paper/index.xml" rel="self" type="application/rss+xml"/><item><title>(EREN) Robust and Scalable Model Editing for Large Language Models</title><link>https://chen-yingfa.github.io/research_posts/2024-eren/</link><pubDate>Thu, 14 Mar 2024 19:39:34 +0000</pubDate><guid>https://chen-yingfa.github.io/research_posts/2024-eren/</guid><description>&lt;p>&lt;a href="https://www.github.com/chen-yingfa/eren">GitHub&lt;/a> | &lt;a href="...">Paper (upcoming)&lt;/a>&lt;/p>
&lt;p>&lt;strong>TL;DR&lt;/strong>: A reader is augmented with a growing notebook that caches all edits as natural text; the reader retrieves relevant edits and makes inferences based on them. This achieves state-of-the-art model editing performance on question answering and fact-checking.&lt;/p>
&lt;!-- more -->
&lt;blockquote>
&lt;p>NB: The 2024 COLING template was very ugly.&lt;/p>&lt;/blockquote>
&lt;hr>
&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>&lt;img alt="Illustration of our method, EREN." loading="lazy" src="https://chen-yingfa.github.io/research_posts/2024-eren/framework.png">&lt;/p>
&lt;p>This work introduces a model editing method that addresses two issues with existing model editors:&lt;/p></description></item></channel></rss>