Locating and Editing Factual Associations in GPT
This is an overview of the paper 'ROME: Locating and Editing Factual Associations in GPT'. It includes an interview with two of the paper's authors as well as Yannic Kilcher's commentary and explanation of the paper. The paper analyzes where factual information is stored in a large Transformer-based language model like GPT, and whether one can reprogram such a model to modify specific facts it has learned. The results are quite fascinating and give new insight into the role of the MLP (multi-layer perceptron) layers of the Transformer model. So maybe 'attention' is not all you need after all? Here's a link to the paper. Here's a link to the follow-up paper on Mass-Editing Memory in a Transformer.
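To give a feel for the editing idea, here is a minimal sketch of the rank-one update at the heart of ROME (Rank-One Model Editing): treat an MLP projection matrix as a key-value store and insert a new (key, value) pair with a single rank-one change. The function and variable names below (`rank_one_edit`, `k_star`, `v_star`, the synthetic covariance `C`) are illustrative assumptions on toy data, not the authors' actual implementation, which locates the relevant layer via causal tracing and computes the key and value vectors from the real GPT weights.

```python
import numpy as np

def rank_one_edit(W, C, k_star, v_star):
    """Sketch of a ROME-style rank-one update to an MLP projection matrix.

    W      : (d_out, d_in) weight matrix, viewed as an associative key-value store
    C      : (d_in, d_in)  uncentered covariance of keys, E[k k^T], estimated offline
    k_star : (d_in,)       key vector representing the subject of the fact
    v_star : (d_out,)      value vector that makes the model output the new fact
    """
    # Solve C u = k_star rather than forming C^{-1} explicitly.
    u = np.linalg.solve(C, k_star)
    residual = v_star - W @ k_star                   # what W currently maps k_star to, vs. the target
    return W + np.outer(residual, u) / (u @ k_star)  # rank-one correction

# Tiny synthetic check: after the edit, the new matrix maps k_star to v_star.
rng = np.random.default_rng(0)
d_in, d_out = 8, 6
W = rng.normal(size=(d_out, d_in))
K = rng.normal(size=(d_in, 100))         # stand-in for previously stored keys
C = K @ K.T / K.shape[1]
k_star = rng.normal(size=d_in)
v_star = rng.normal(size=d_out)
W_new = rank_one_edit(W, C, k_star, v_star)
print(np.allclose(W_new @ k_star, v_star))  # True: the new association is stored
```

Because the correction is rank one and scaled by the key covariance, the edit targets the chosen key while leaving the matrix's responses to other, dissimilar keys largely unchanged, which is the intuition behind editing a single fact without retraining the model.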