(EREN) Robust and Scalable Model Editing for Large Language Models

GitHub | Paper (upcoming) TL;DR: A reader model is augmented with a growing notebook that caches all edits as natural text; the reader retrieves the edits relevant to each input and makes inferences based on them. This achieves SOTA model-editing performance on QA and fact-checking. NB: The 2024 COLING template was very ugly. Introduction This work introduces a model editing method that addresses two issues with existing model editors: ...
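To make the retrieve-then-read description concrete, here is a minimal sketch under assumed interfaces; it is not the paper's implementation. Edits are cached verbatim as natural-language notes, the notes most similar to a query are retrieved, and a reader LM answers conditioned on them. `embed` and `reader` are hypothetical stand-ins for an embedding model and the reader.

```python
# Minimal sketch of the retrieve-then-read idea (assumed design, not the paper's code).
from typing import Callable, List

import numpy as np


class EditNotebook:
    """A growing cache of edits stated in natural language."""

    def __init__(self, embed: Callable[[str], np.ndarray]):
        self.embed = embed          # hypothetical text-embedding function
        self.notes: List[str] = []
        self.vecs: List[np.ndarray] = []

    def add_edit(self, note: str) -> None:
        # New edits are simply appended; the model's weights are never touched.
        self.notes.append(note)
        self.vecs.append(self.embed(note))

    def retrieve(self, query: str, k: int = 2) -> List[str]:
        # Cosine similarity between the query and every cached edit.
        if not self.notes:
            return []
        q = self.embed(query)
        sims = [
            float(q @ v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9)
            for v in self.vecs
        ]
        top = np.argsort(sims)[::-1][:k]
        return [self.notes[i] for i in top]


def edited_answer(query: str, notebook: EditNotebook, reader: Callable[[str], str]) -> str:
    """Answer a query with the reader LM, conditioned on the retrieved edits."""
    edits = notebook.retrieve(query)
    prompt = "Edits:\n" + "\n".join(edits) + f"\n\nQuestion: {query}\nAnswer:"
    return reader(prompt)
```

In this sketch, applying a new edit is just appending a note, which is presumably what makes the approach scale to many edits without retraining.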

March 14, 2024 · 3 min · 陈英发 Yingfa Chen

Interpreting a Maze-Solving Network

The blog post I can’t believe I haven’t read this until now. It is thought-provoking, and the result is an important step towards understanding neural networks. The culmination of this blog post is the exciting Activation Addition work, which I believe is one of the important works that inspired the recent Representation Engineering work.

October 7, 2023 · 1 min · 陈英发 Yingfa Chen

Activation Addition (ActAdd)

Paper TL;DR: Proposes ActAdd, a method for controlling model behavior during inference by modifying activations with a bias term computed from a pair of prompts. Summary: ActAdd controls model behavior by modifying activations at inference time. Steering vectors are computed by taking the activation differences that result from pairs of prompts, and these vectors are added as a bias during inference. ActAdd provides control over high-level properties of the output, preserves off-target model performance, and requires little computational and implementation cost. The recently popular Representation Engineering (RepE) paper seems to be largely inspired by this work. ...
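As a concrete illustration of the mechanism, here is a minimal sketch (an assumed setup, not the authors' code): take the difference between a chosen layer's activations for two contrastive prompts, then add that vector, scaled by a coefficient, to the same layer's activations while generating. The model (gpt2), layer index, prompt pair, and coefficient below are all illustrative assumptions.

```python
# Minimal sketch of activation addition with a Hugging Face causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"   # illustrative choice of model
layer = 6             # which transformer block to steer (illustrative)
coeff = 5.0           # steering strength (illustrative)

tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def block_output(prompt: str) -> torch.Tensor:
    """Output of block `layer` for a prompt, shape (1, seq_len, hidden)."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so block k's output is index k + 1.
    return out.hidden_states[layer + 1]

# Steering vector: the activation difference of a contrastive prompt pair,
# taken at the last token position for simplicity.
steer = block_output("Love")[:, -1, :] - block_output("Hate")[:, -1, :]

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # add the scaled steering vector at every position.
    hidden = output[0] + coeff * steer.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer].register_forward_hook(add_steering)
ids = tok("I think dogs are", return_tensors="pt").input_ids
print(tok.decode(model.generate(ids, max_new_tokens=20)[0]))
handle.remove()  # detach the hook to restore the unmodified model
```

The actual ActAdd recipe differs in details (for example, how the prompt pair is aligned and where exactly the vector is injected), so treat this only as a demonstration of the activation-difference-as-bias idea.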

October 7, 2023 · 4 min · 陈英发 Yingfa Chen

Safety and Ethical Concerns of Large Language Models

I will be holding a seminar at ModelBest (面壁智能) on Sep 20, 2023, at 科技园 (Science Park), Haidian, Beijing. The seminar will be in Chinese and is titled “大模型安全与伦理问题” (translation: Safety and Ethical Concerns of Large Language Models). Below is a list of references.

Introduction
- Galactica: A Large Language Model for Science
- https://openai.com/research/gpt-4
- SafetyBench: Evaluating the Safety of Large Language Models with Multiple Choice Questions
- Bias and Fairness in Large Language Models: A Survey
- A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation

Evaluation Methods
- A General Language Assistant as a Laboratory for Alignment, Anthropic
- Safety Assessment of Chinese Large Language Models
- Semantics derived automatically from language corpora contain human-like biases
- StereoSet: Measuring stereotypical bias in pretrained language models

Instruction Attacks
- Toxicity in CHATGPT: Analyzing Persona-assigned Language Models ⭐️
- Large Language Models are Zero-Shot Reasoners ⭐️
- On Second Thought, Let’s Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning ⭐️
- Prompting GPT-3 To Be Reliable
- Universal and Transferable Adversarial Attacks on Aligned Language Models ⭐️
- Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment ⭐️⭐️

Exaggerated Safety
- XSTEST: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models ⭐️
- Safety-Tuned LLaMAs: Lessons From Improving the Safety of Large Language Models that Follow Instructions ⭐️

Alignment Methods
- Aligning language models to follow instructions ⭐️
- Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback ⭐️
- SELF-INSTRUCT: Aligning Language Models with Self-Generated Instructions ⭐️⭐️
- Pretraining Language Models with Human Preferences ⭐️
- LIMA: Less Is More for Alignment
- https://openai.com/blog/our-approach-to-alignment-research (Aug 2022)
- https://openai.com/blog/our-approach-to-alignment-research (Jul 2023) ⭐️

⭐️: important ...

September 19, 2023 · 3 min · 陈英发 Yingfa Chen

CFDBench: A Large-Scale Benchmark for Machine Learning Methods in Fluid Dynamics

Code | Paper | Paper (preprints.org) | 知乎 I did this work with my girlfriend, whose research direction is computational fluid dynamics (CFD). We observed that there is a large body of research on applying deep learning (DL) to CFD problems; for example, Pangu-Weather has shown that DL methods can not only be more accurate than the best numerical methods but also orders of magnitude faster. However, there is no standard benchmark for evaluating the performance of different DL methods. Therefore, we constructed CFDBench. ...

September 16, 2023 · 2 min · 陈英发 Yingfa Chen

Some Binary Search

A binary search with C++:

    #include <vector>
    using std::vector;

    // Return the index of `target` in the sorted vector `arr`, or -1 if it is not present.
    template <class T>
    int bin_search(const vector<T>& arr, const T& target) {
        int left = 0, right = static_cast<int>(arr.size()) - 1;
        while (left <= right) {
            int mid = left + (right - left) / 2;  // avoids overflow of left + right
            if (arr[mid] == target) {
                return mid;
            } else if (arr[mid] < target) {
                left = mid + 1;
            } else {
                right = mid - 1;
            }
        }
        return -1;  // not found
    }

The same thing with Rust:

    /// Return the index of `target` in the sorted slice `arr`, or `None` if it is not present.
    /// The half-open interval [left, right) avoids `usize` underflow on empty slices
    /// and when `mid == 0`.
    fn bin_search<T: Ord>(arr: &[T], target: &T) -> Option<usize> {
        let (mut left, mut right) = (0usize, arr.len());
        while left < right {
            let mid = left + (right - left) / 2;
            if arr[mid] == *target {
                return Some(mid);
            } else if arr[mid] < *target {
                left = mid + 1;
            } else {
                right = mid;
            }
        }
        None
    }

And with Python: ...