Apr 29, 2024
5 min read
Authors:
Izzy Barrass, Long Phan

Representation Engineering: a New Way of Understanding Models

Reading the minds of LLMs

Interpreting and controlling models has long been a significant challenge. Our research, ‘Representation Engineering: A Top-Down Approach to AI Transparency’, explores a new way of understanding traits like honesty, power seeking, and morality in LLMs. We show that these traits can be read off a model’s internal activations as it generates output, and that they can also be controlled. This differs from mechanistic approaches, which build bottom-up interpretations of node-to-node connections; representation engineering instead works with larger chunks of representations and higher-level mechanisms to understand models. Overall, we believe this ‘top-down’ method makes exciting progress towards model transparency, paving the way for further research into understanding and controlling AI.

Why understanding and controlling AI is important

Transparency and honesty are important properties of models: as AI systems become more powerful, capable and autonomous, it is increasingly important that they are honest and predictable. If we cannot understand and control models, they would have the capacity to lie, seek power and ultimately subvert human goals, presenting a substantial risk to society as AI advances. To fully reap the benefits of AI, we need to balance these risks with a better understanding of LLMs.

The method of representation engineering: an overview

Representation engineering is an approach to enhancing our understanding and control of AI by observing and manipulating the internal representations - weights or activations - that a model uses to process information. The method involves identifying specific sets of activations within a model that correspond to a given behavior or concept; once identified, these same representations can be used to control the model’s behavior.
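To make "observing internal representations" concrete, here is a minimal sketch of reading activations from a language model with Hugging Face transformers. The model name and layer index are illustrative stand-ins, not the settings used in the paper.

```python
# Minimal sketch: reading a model's internal activations for a prompt.
# "gpt2" and the layer index are illustrative choices, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# outputs.hidden_states is a tuple with one tensor per layer (plus the
# embedding layer), each of shape (batch, sequence_length, hidden_size).
layer = 6
activation = outputs.hidden_states[layer][0, -1]  # last-token activation
print(activation.shape)  # e.g. torch.Size([768]) for GPT-2 small
```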

In the paper, we first explore the representation of honesty in the model. We ask the model to answer a question truthfully, then ask it to answer the same question with a lie. We observe the model’s internal state in each case, and the resulting difference in activations provides insight into when the model is being honest and when it is lying. We can even tweak the model’s internal representations so that it becomes more honest, or less honest. We show that the same principles and approach apply to other concepts such as power seeking and happiness, and across a number of other domains. This is an exciting new approach to model transparency, shedding light not only on honesty but on a variety of other desirable traits. A simplified sketch of this stimulus-pair idea is shown below.
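The sketch below illustrates the idea under simplifying assumptions: collect activations for "answer truthfully" versus "answer with a lie" prompts, take the difference of their means as a candidate honesty direction, and score new activations by projecting onto it. The prompts, layer choice, and the plain difference-of-means estimator are illustrative; the paper uses its own stimulus sets and more refined reading techniques.

```python
# Hedged sketch of extracting an "honesty" direction from paired prompts.
# Model, layer, prompts, and the difference-of-means reader are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()
LAYER = 6  # illustrative layer index

def last_token_activation(prompt: str) -> torch.Tensor:
    """Return the hidden state of the final token at the chosen layer."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[LAYER][0, -1]

questions = [
    "What is the capital of France?",
    "How many legs does a spider have?",
]
honest = [f"Answer the question truthfully. {q}" for q in questions]
dishonest = [f"Answer the question with a lie. {q}" for q in questions]

honest_acts = torch.stack([last_token_activation(p) for p in honest])
dishonest_acts = torch.stack([last_token_activation(p) for p in dishonest])

# Candidate "honesty" direction: difference of the mean activations.
direction = honest_acts.mean(dim=0) - dishonest_acts.mean(dim=0)
direction = direction / direction.norm()

# Reading: project a new activation onto the direction. Higher scores
# suggest a more "honest" internal state under this simple reader.
score = last_token_activation("Answer truthfully. Is the sky blue?") @ direction
print(float(score))
```

Control can be approached in a similar spirit, for example by adding a scaled copy of such a direction to the layer's hidden state during generation (e.g. via a forward hook); the paper explores several such intervention operators.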

Representation engineering: an analogy to neuroscience

Representation engineering is akin to observing human brain activity through functional MRI scans: just as these scans show which parts of the brain are active during various tasks, enabling a detailed analysis of patterns and functions, representation engineering uses a similar lens to understand an AI's decision-making. By adjusting the internal vectors that represent information within the model, we can directly influence its 'thought process', much as understanding brain activity can lead to targeted therapies in humans.

Future directions and conclusion

Our hope is that this work will spur new efforts towards understanding and controlling complex AI systems. While our approach materially improves performance on TruthfulQA, there is still progress to be made before full transparency and control are achieved. We welcome and encourage more research in the field of representation engineering.

You can read the full paper here: https://arxiv.org/abs/2310.01405

You can find the website here: https://www.ai-transparency.org/

The GitHub repository is here: https://tinyurl.com/RepEgithub

We’ve recorded a video on RepE here: https://tinyurl.com/RepEvid
