
Relevant Projects

Augmented Programmer Intelligence

The amount of code available on the web grows daily. Open-source hosting sites such as GitHub contain billions of lines of code, and community question-answering sites provide millions of code snippets with corresponding text and metadata. The amount of code available in executable binaries is greater still. We explore techniques for learning from such “big code” and leveraging the learned models for program analysis, program synthesis, and reverse engineering. Along the way, we explore a range of symbolic and neural program representations (e.g., symbolic automata, tracelets, and numerical abstractions), as well as different neural models.

Computational Models for Neural Architectures

What is the computational model behind a Transformer?
Whereas recurrent neural networks have a direct parallel in finite-state machines, which supports clear reasoning about architecture variants and trained models, Transformers have no such familiar counterpart. We explore symbolic representations for reasoning about Transformers, along with a programming language that can be used to “program” them.
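The idea of “programming” a Transformer can be illustrated with two attention-like primitives: a select operation that builds a boolean attention pattern from a predicate, and an aggregate operation that mean-pools values under that pattern. The sketch below is a simplified, hypothetical rendering of this style of primitive in plain Python; the names `select` and `aggregate` and their exact semantics are illustrative assumptions, not the actual language.

```python
def select(keys, queries, predicate):
    # Build a selection (attention) matrix: entry [q][k] is True when
    # predicate(keys[k], queries[q]) holds. (Illustrative sketch, not
    # the actual language semantics.)
    return [[predicate(k, q) for k in keys] for q in queries]

def aggregate(selection, values, default=0):
    # For each query position, average the values at the selected
    # positions -- mean-pooling, as hard attention would.
    out = []
    for row in selection:
        chosen = [v for sel, v in zip(row, values) if sel]
        out.append(sum(chosen) / len(chosen) if chosen else default)
    return out

# Example "program": at each position, compute the fraction of
# positions up to and including it that hold the token "a".
tokens = list("abaa")
indices = list(range(len(tokens)))

# Attend from each query position q to every key position k <= q.
prefix = select(indices, indices, lambda k, q: k <= q)
is_a = [1 if t == "a" else 0 for t in tokens]
frac_a = aggregate(prefix, is_a)
print(frac_a)  # -> [1.0, 0.5, 0.6666..., 0.75]
```

The appeal of this view is that a symbolic “program” like the one above pins down an information flow that an attention head could, in principle, realize, which makes it possible to reason about what a given architecture can and cannot compute.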