
Research

People

Associate Professor
Photo of Aaron Sprecher
+972-48294066
e-mail
My research and design work focus on the synergy between information technologies, computational languages, and digital fabrication systems, examining the way in which technology informs and generates innovative approaches to design processes.
Assistant Professor
Photo of Alon Grinberg Dana
+972-48292117
e-mail
Closed-loop software and hardware platforms driving chemical discovery through automated hypothesis generation, refinement, validation, and revision, resulting in predictive chemical kinetic models.
Associate Professor
Photo of Amir Degani
e-mail
My lab’s work focuses on robotics and autonomous systems, mostly for field applications in unstructured environments such as search and rescue and agriculture. We investigate dynamic legged locomotion, wheeled locomotion, and the collaboration of ground and aerial robots in field robotics.
Professor
Photo of Amir Yehudayoff
e-mail
Contributing to the theory of machine learning through connections to information theory and computational complexity theory.
Professor
Photo of Anat Levin
e-mail
Computational imaging
Professor
Photo of Anat Rafaeli
+972-48293532
e-mail
Human behavior in the context of organizational service interactions. Big data and automation tools are used to objectively analyze emotion and behavior of participants in online conversations.
Professor Emeritus
Photo of Arie Admon
+972-48293407
e-mail
Proteomics; Cancer Vaccines; Antigen Processing and Presentation; Big data of proteomics and peptidomics; Bioinformatics of proteomics and mass spectrometry.
Visiting Professor
Photo of Ariel Barel
e-mail
Swarm Intelligence, Distributed Control of Multi-Agent Systems, and Distributed Task Allocation. I also investigate Machine Learning implementations (Supervised & Reinforcement) to expedite traditional planning algorithms.
Professor
Photo of Ariel Orda
e-mail
Network routing, survivability, QoS provisioning, wireless networks, the application of game theory to computer and power networks, blockchains, the application of machine learning to network protocols.
Assistant Professor
Photo of Arielle Fischer
e-mail
Applied biomechanics and investigation of the interaction of mechanics, biology, and structure of musculoskeletal joint pathologies. Implementing smart wearable technologies that record large data sets of bio-signal motion data.
Assaf Marom
Lecturer
Photo of Assaf Marom
e-mail
Our lab focuses on the evolutionary processes that shaped the human cranium and pelvis in evolution. We study human morphology from a comparative point of view in order to understand these processes.
Professor
Photo of Assaf Schuster
+972-48294330
e-mail
Distributed and Scalable Deep Learning; Deep Learning for Personal Medicine; Randomness in Deep Learning; Analytics of Rapid Data Streams; Complex Event Processing (CEP); Internet of Things and Smart Systems; Privacy Preserving; Cyber Security; Cloud Management
Associate Professor
Photo of Avi Schroeder
e-mail
Nanotechnology, cancer, Parkinson's disease, AI
Professor
Photo of Avigdor Gal
+972-48294425
e-mail
Schema Matching; Entity Resolution; Semantic Integration of Data Resources; Business Process Management; Temporal Databases and Temporal Evolution of Databases.
Professor
Photo of Avishai Mandelbaum
+972-48294504
e-mail
Service-Engineering of large operations (e.g. hospitals / emergency-departments, call / contact-centers, courts, …); Operations research; Statistics; Queueing science & theory; Control theory; Data- and process-mining.
Associate Professor
Photo of Aviv Tamar
e-mail
My research focuses on AI and machine learning, with an emphasis on robotics applications. My long term goal is to bring robots into human-centered domains such as homes and hospitals. Towards this goal, some fundamental questions need to be solved, such as how can machines learn models of their environments that are useful for performing tasks, and how to learn behavior from interaction in an interpretable and safe manner. Most of my work falls under the framework of reinforcement learning, and its connections to representation learning and planning.
Associate Professor
Photo of Ayelet Lamm
+972-778871939
e-mail
We study fundamental epigenetics processes affecting gene expression and regulation by using bioinformatics, generating pipelines, and experimental biology.
Professor
Photo of Ayellet Tal
+972-48294651
e-mail
Computer Vision, Computer Graphics, Machine Learning, Visualization.
Associate Professor
Photo of Barak Fishbain
+972-48293177
e-mail
Enviromatics; machine learning and mathematical methods for complex natural environments; hydro-informatics; atmospheric-informatics; precision agriculture; structural health; smart infrastructure systems and connected transportation.
Lecturer
Photo of Batya Kenig
e-mail
Theoretical and systems aspects of data management, enumeration algorithms, probabilistic graphical models.
Associate Professor
Photo of Benny Kimelfeld
+972-48295528
e-mail
Enumeration of query results, probabilistic, incomplete and inconsistent databases, infrastructure for text analytics, databases for preferences and social choice, and database aspects of machine learning
Visiting Professor
Photo of Chaim Baskin
e-mail
Deep Neural Network representation learning, Machine Learning, Computer Vision, Geometric Deep learning, Algorithms for efficient training and inference of Deep Neural Networks.
Associate Professor
Photo of Dan Liberzon
e-mail
Environmental fluid dynamics; turbulent atmospheric flows; water waves and wind-wave interactions; acoustics.
Lecturer
Photo of Dana Drachsler Cohen
e-mail
Safety guarantees for deep learning by leveraging formal methods, such as analysis and synthesis.
Assistant Professor
Photo of Dani Broitman
+972-48294039
e-mail
Urban and regional economics, Ecological and environmental economics, Systems modeling, Land use change, Economic geography
Assistant Professor
Photo of Daniel Hexner
e-mail
Self-learning materials, disordered materials, exotic order
Assistant Professor
Photo of Daniel Soudry
e-mail
Deep Learning, i.e., neural networks: understanding them theoretically (e.g., their implicit bias) and improving them (e.g., their resource efficiency and speed during training and inference).
Professor
Photo of Danny Raz
+972-48294938
e-mail
The theory and applications of efficient network and system management, in particular, concentrating on cloud resource management, NFV, SDN, TE, and network aware services.
Associate Professor
Photo of Daphne Weihs
+972-48294134
e-mail
Mechanobiology of cancer and wounds. Early-prognosis of cancer metastasis. Wound healing and prevention. Experiments, Finite element modeling, machine learning.
Assistant Professor
Photo of David Azriel
+972-48294371
e-mail
Optimal designs for clinical trials, Regression, Semi-supervised learning, Statistical theory.
Associate Professor
Photo of David Broday
e-mail
David Broday's research focuses on the application of data science tools and methods to large atmospheric databases, mainly air pollution (both standard monitoring data and data streams of distributed air pollutant sensor arrays), aerosols (satellite remotely sensed products), meteorological observations, and climate change.
Associate Professor
Photo of Dori Derdikman
+972-48295349
e-mail
Recording and analysis of nerve cell populations in the brain during tasks of spatial learning and memory, using electrode probes, tetrodes, and calcium-imaging.
Professor
Photo of Dov Dori
+972-48294409
e-mail
Conceptual Modeling, Systems Eng. and Modeling, Systems Architecture, Enterprise Systems Modeling; Object-Process Methodology; Ontologies; Software Development Methodologies, Semantic Web; Systems Biology, Robotics.
Assistant Professor
Photo of Dvir Aran
e-mail
Computational biology; Clinical informatics
Associate Professor
Photo of Eitan Yaakobi
+972-48294952
e-mail
Information and coding theory with applications to non-volatile memories, associative memories, data storage and retrieval, distributed storage, privacy, and DNA storage.
Professor
Photo of Eran Yahav
+972-48294318
e-mail
Program Synthesis, Machine Learning for Programming Languages, Neuro-Symbolic Models, Program Analysis
Assistant Professor
Photo of Erez Karpas
+972-48292034
e-mail
Automated Planning, Robotics, Artificial Intelligence.
Professor
Photo of Gad Rennert
e-mail
Establishing and analyzing large-scale clinical and biological/genetic bio-banks and databases to study disease etiology and clinical behavior, especially with regards to malignancies.
Assistant Professor
Photo of Gala Yadgar
+972-778871321
e-mail
Caching, Content Distribution, Optimizations for Flash Based Storage, Erasure Coding, Deduplication, Workload Characterization and Improved Analysis Tools.
Assistant Professor
Photo of Galit Yom Tov
+972-48294510
e-mail
Service Engineering/Service Operations; Behavioral Operations; Queueing Networks and Approximations; Healthcare and Call-center Operational Design.
Lecturer
Photo of Guy Austern
e-mail
The CDML (Computational Design & Machine Learning) lab was founded by Dr. Guy Austern in 2022. The lab’s goal is to solve problems of architectural design and fabrication using state-of-the-art computational methods. Our current research focuses on applying machine learning models to architectural questions at different scales.
Associate Professor
Photo of Guy Gilboa
+972-48294653
e-mail
Image processing and computer vision, with strong focus on mathematical models related to calculus of variations and nonlinear spectral theory.
Assistant Professor
Photo of Haguy Wolfenson
+972-48295239
e-mail
Nano and micro-fabricated surfaces for cell biology research; Mechanobiology of Cells.
Professor
Photo of Hossam Haick
+972-48293087
e-mail
Nano-array devices for screening, diagnosis and monitoring of disease, nanomaterial-based chemical (flexible) sensors, electronic skin, breath analysis, volatile biomarkers, and cell-to-cell communication.
Professor
Photo of Idit Keidar
+972-48294649
e-mail
Distributed and network-based computing, Distributed Storage, Parallel Computing, Blockchains.
Assistant Professor
Photo of Ido Kaminer
+972-48292051
e-mail
Implications of quantum mechanics on future technology; Algorithms to automate research in fundamental science and in mathematics; Fundamentals of light-matter interactions; Probing materials with ultrafast electrons and photons.
Associate Professor
Photo of Ido Roll
+972-778873430
e-mail
Using learning analytics and artificial intelligence to design intelligent environments that help students become better learners. Assessing and supporting competencies such as creativity and scientific reasoning.
Assistant Professor
Photo of Inbal Talgam-Cohen
+972-48294935
e-mail
Algorithmic game theory; Theory of computation; Optimization; AI; Internet economics; Market design; Auctions.
Professor
Photo of Isaac Keslassy
+972-48295738
e-mail
Using machine learning in datacenter networks and high-performance routers, e.g., for congestion control, flow classification, and buffer management.
Professor
Photo of Israel Cohen
+972-48294731
e-mail
Array processing, signal processing, deep learning, analysis and modeling of acoustic signals, speech enhancement, noise estimation, source localization, blind source separation, system identification and adaptive filtering.
Assistant Professor
Photo of Izhak Kehat
+972-4829040
e-mail
Genomics and epigenetics and cardiac imaging.
Assistant Professor
Photo of Joachim Behar
+972-48294125
e-mail
The Artificial Intelligence in Medicine Laboratory (AIMLab.) researches innovative pattern recognition algorithms to exploit the information encrypted within large datasets of physiological time series.
Lecturer
Photo of John Kennedy
e-mail
Data Science Methods: Spectral Total Variation Techniques; Biomedical Engineering: PET/CT and SPECT/CT data fusion, characterization, quantitation, artifact removal, optimization of clinical imaging protocols.
Assistant Professor
Photo of Jonathan Natanian
+972-778871201
e-mail
Environmental quality driven urban design, Generative urban design, Zero energy buildings and districts, Performance-based design using digital tools, Environmental workspace design, Passive low carbon design strategies
Professor
Photo of Karel Martens
+972-48294060
e-mail
Accessibility, mobility, travel problems, disparities, social exclusion, equity.
Assistant Professor
Photo of Keren Yitzhak
+972-778875332
e-mail
Computational Cancer Genomics; Our lab addresses emerging challenges in computational cancer genomics by developing and applying computational methods for cancer data analysis.
Assistant Professor
Photo of Kfir Yehuda Levy
+972-48294749
e-mail
Machine Learning and Optimization.
Visiting Professor
Photo of Kira Radinsky
+972-778874939
e-mail
Machine Learning in Healthcare, Chemoinformatics, Data Mining, Natural Language Processing, Information Retrieval, Knowledge Discovery, Machine Learning, Artificial Intelligence, Digital Healthcare
Assistant Professor
Photo of Kiril Solovey
e-mail
Algorithmic aspects of multi-robot systems; smart mobility optimization; autonomous driving; robot control and decision making; societal aspects of autonomous transportation systems
Liat Levontin
Associate Professor
Photo of Liat Levontin
e-mail
The psychology of AI, Answers to human related questions with big data, Consumer behavior as reflected in big data.
Professor
Photo of Lihi Zelnik-Manor
+972-48295736
e-mail
Computer Vision, e.g., data generation, multi-label classification; Machine Learning with focus on designing and training efficient and effective Neural Networks; Tactile sensing and haptic feedback
Assistant Professor
Photo of Limor Freifeld
+972-778871202
e-mail
Using microscopy technologies, we capture the dynamic function of cell nuclei and neural circuits as well as their 3D nano-scale structure. Via machine learning, we extract the information of interest from these datasets, linking biological structure to function.
Associate Professor
Photo of Maytal Caspary Toroker
+972-48294298
e-mail
Machine learning for material design.
Professor
Photo of Michael Elad
+972-48294169
e-mail
Data models; Sparse Representations – Theory and Practice and their Connection to Deep-Learning; Inverse Problems in Signal and Image Processing; Image Denoising; Graph-Based Signal Processing; Patch-Based Image Processing; Super-Resolution
Professor
Photo of Michael Lindenbaum
+972-48294331
e-mail
Image analysis, statistical analysis of visual tasks, and the application of deep learning to vision tasks.
Associate Professor
Photo of Mirela Ben Chen
+972-48293378
e-mail
Modeling and understanding the geometry of shapes, differential geometry, architectural geometry, remeshing for fabrication, numerical optimization and harmonic analysis, animation, fluid simulation.
Professor
Photo of Miri Barak
+972-48293883
e-mail
Science and engineering education, Ethics in AI research and development, Technology-enhanced education, 21st century skills.
Professor
Photo of Miriam Zacksenhouse
+972-778872092
e-mail
Control policies that facilitate learning and sim2real transfer, with applications to robot assembly and legged locomotion; invasive and non-invasive brain-machine interfaces (BMIs) and error-related processing.
Assistant Professor
Photo of Moti Freiman
+972-778874147
e-mail
Improving patient care and outcome through better characterization of the underlying physiological and structural factors in human diseases by developing novel deep-learning-based methods for MRI acquisition and analysis.
Assistant Professor
Photo of Nadav Dym
e-mail
The research in our lab focuses on development and theoretical analysis of machine learning algorithms, using mathematical tools from fields such as approximation theory, optimization, or invariant theory
Assistant Professor
Photo of Nadav Hallak
e-mail
Mathematical optimization and algorithmic theory in continuous optimization. Mainly non-convex optimization models, methods and theory, for finance, machine learning applications, and large-scale problems.
Professor
Photo of Nahum Shimkin
+972-48294734
e-mail
Markov Decision Processes and Stochastic Control; Reinforcement Learning; Online Learning and Bandit Processes; Game Theoretic Analysis of Queueing Systems; Trajectory Optimization; Cooperative Mission Planning.
Professor
Photo of Nathan Karin
+972-48295232
e-mail
The Lab focuses on T cell immunology in the interface between cancer and autoimmunity, with a particular interest in chemokine-chemokine receptor interactions. The lab develops unique strategies for cancer immunotherapies and for restraining autoimmunity.
Associate Professor
Photo of Nir Ailon
+972-48294842
e-mail
Machine Learning and Statistics, Combinatorial Optimization and Approximation Algorithms, Algorithmic Dimension Reduction and Applications, Complexity.
Assistant Professor
Photo of Nir Rosenfeld
e-mail
Modeling and predicting human behavior and social dynamics; Learning to support human decision-making; Training for human objectives and with humans in the loop; Implications of deploying predictive models in social contexts.
Assistant Professor
Photo of Nir Weinberger
e-mail
Information theory, probability and statistics in high dimension, nonparametric regression, large deviations bounds, analysis of Boolean functions.
Assistant Professor
Photo of Noam Kaplan
+972-48295293
e-mail
3D genomics, nuclear organization, computational biology, epigenetics, classical and deep machine learning, probabilistic models.
Professor
Photo of Ofer Strichman
e-mail
Formal verification, computational logic, discrete optimization.
Assistant Professor
Photo of Ofra Amir
+972-48294412
e-mail
Artificial Intelligence; Human-Computer Interaction; Intelligent User Interfaces; Explainable AI.
Lecturer
Photo of Omer Ben-Porat
e-mail
My research interests lie at the intersection of machine learning and computational game theory. More specifically, I am interested in strategic, societal, and economic aspects of ML, developing both theory and practical tools.
Associate Professor
Photo of Omri Barak
+972-48294681
e-mail
Theoretical neuroscience. How do complex systems adapt to their environment? Such systems include brains, cancer cells, and artificial neural networks.
Assistant Professor
Photo of Or Aleksandrowicz
+972-48294041
e-mail
Building physics; Building performance simulation; Bioclimatic design; Urban microclimate; Building technology; History of architecture and architectural technology; Urban history; Digital Humanities.
Professor
Photo of Oren Kurland
e-mail
Information retrieval, with specific interest in ranking models for search engines.
Assistant Professor
Photo of Oren Salzman
e-mail
I seek to deeply understand and to rigorously address the computational challenges that arise when planning for robots. My research, lying at the intersection of Computer Science and Robotics, is motivated by the key insight that in order to address these challenges, traditional Computer Science algorithms, tools and paradigms need to be revisited. This requires (i) understanding and analyzing the unique domain-specific computational challenges in robotic planning and, subsequently (ii) developing algorithms to address these challenges to provide the robotics community foundational tools to solve real-world problems. For additional details, see my research statement.
Assistant Professor
Photo of Ori Plonsky
+972-48294436
e-mail
Predicting human decision making; Mining behavioral data; behavioral economics; behavioral decision making; human learning processes; behavioral mechanism design; behavioral public policy.
Professor
Photo of Orit Hazzan
+972-48293107
e-mail
Cognitive and social processes of computer science, software engineering, and data science education, at the individual, team, and organization levels, in all kinds of organizations.
Professor
Photo of Paul Feigin
e-mail
Biostatistics and clinical trial design and analysis, experimental design, inference for stochastic processes, classification and regression modelling.
Associate Professor
Photo of Rakefet Ackerman
+972-547558378
e-mail
When facing challenging computerized and everyday tasks, people must manage their mental effort. Professor Ackerman studies the factors that drive cognitive biases and wasted thinking time.
Professor
Photo of Rann Smorodinsky
+972-48294422
e-mail
Game theory, Economic theory, Privacy.
Assistant Professor
Photo of Renana Gershoni-Poranne
e-mail
Inverse molecular design, chemical space exploration, physical organic chemistry.
Associate Professor
Photo of Reshef Meir
+972-48294434
e-mail
I am interested in understanding and mitigating the negative effects of strategic behavior, mainly by people interacting via large systems, e.g., congestion in networks or biased group decisions.
Lecturer
Photo of Rinat Rosenberg-Kima
e-mail
Ethical issues involved in using AI and Social-AI in the context of education; Human-Robot Interaction; Computer Science Education; Instructional Design
Associate Professor
Photo of Roi Reichart
+972-48294503
e-mail
Natural Language Processing (NLP). Out-of-distribution generalization in NLP (e.g. cross-language and cross-domain learning); NLP for social, behavioral and health science; Causality and model interpretation.
Professor
Photo of Ron Kimmel
+972-48294616
e-mail
Computer vision, graphics, Geometric machine learning and big data, computational medicine and biometry, applied metric and differential geometries.
Assistant Professor
Photo of Ron Levie
e-mail
Mathematical foundations of deep learning, graph neural networks, geometric deep learning, explainable AI.
Professor
Photo of Ron Meir
+972-48294658
e-mail
Information processing, learning and control in natural and artificial systems, reinforcement learning, lifelong learning, multi agent learning, the perception action cycle.
Associate Professor
Photo of Ronen Talmon
+972-48294750
e-mail
Geometry-based Data Analysis & Modeling; Signal Processing; Applied Harmonic Analysis; Diffusion Geometry; Biomedical Signal Processing; Computational Neuroscience.
Professor
Photo of Roy Friedman
+972-48294264
e-mail
Caching, network monitoring, stream processing, reliable distributed systems, high-availability and fault-tolerance, blockchains, cloud computing, wireless mobile ad hoc network
Professor
Photo of Roy Kishony
+972-778871529
e-mail
Developing and applying advanced machine-learning and experimental tools at the frontiers of biomedicine with a specific focus on antimicrobial multi-drug therapy.
Assistant Professor
Photo of Sagi Dalyot
+972-48295991
e-mail
Geodata science, methods of interpretation, mining, and integration of crowdsourced content to augment and develop location-based services and smart mapping infrastructures. Spatial-cognition, navigation, and the built-environment.
Assistant Professor
Photo of Sarah Keren
e-mail
Multi-agent AI, multi-robot systems, collaborative AI, multi-agent environment design, integrated task and motion planning for robotics, and multi-agent reinforcement learning.
Associate Professor
Photo of Shahar Kvatinsky
+972-778871502
e-mail
Performing logic using memory cells to build the memristive memory processing unit (mMPU), mixed-signal circuits, RF circuits, neuromorphic computing, cytomorphic systems, deep learning accelerators, internet-of-things, and hardware security.
Professor
Photo of Shaul Markovitch
+972-48294346
e-mail
Artificial Intelligence, Machine Learning, Natural Language Semantics, Anytime Learning, Active and Selective Learning, Information retrieval, Multi-agent Systems, Adversary search, Opponent Modeling, Resource-bounded reasoning, Cost-sensitive Learning.
Assistant Professor
Photo of Shay Moran
e-mail
Theoretical Machine Learning, focusing on: learning in adversarial and stochastic environments; generalization theory; differentially private machine learning; new theoretical definitions for learnability (that are inspired by applied ML); data compression and information-theoretic methods; combinatorial and geometric problems that arise in ML.
Professor
Photo of Shie Mannor
+972-48293284
e-mail
AI and machine learning, reinforcement learning and planning; learning, optimization and control under uncertainty, Multi-agent systems, Optimization of large scale problems, application of machine learning to a variety of problems: power grids, communication networks, etc.
Assistant Professor
Photo of Shimrit Shtern
+972-48294437
e-mail
Robust and adaptive optimization; Data-driven optimization; Algorithms for nonconvex and mixed-integer optimization; Optimization applications in: energy, inventory systems, estimation and control, statistics and healthcare.
Assistant Professor
Photo of Shlomi Laufer
e-mail
Surgical Data Science, Computer Vision, Automatic Surgical Workflow Analysis, Automatic Assessment of Competency-Based Medical Education in Surgery and Anesthesiology.
Distinguished Professor
Photo of Shlomo Shamai Shitz
e-mail
Network and multi-user information theory, Modern Communication networks (Cloud and Fog Radio Networks), Information and Signal Processing (Information-Estimation), Information bottleneck problems in communications and learning.
Professor
Photo of Shmuel Onn
+972-48294416
e-mail
Algebraic and Geometric Methods for Discrete Optimization.
Associate Professor
Photo of Shoham Sabach
+972-48294442
e-mail
Continuous Optimization: Theory and Algorithms, development and analysis of Optimization Methods for large-scale optimization problems, Applications of Optimization Methods in Machine/Deep Learning.
Assistant Professor
Photo of Stefano Recanatesi
e-mail
Neuro-AI: the intersection between neuroscience, machine learning and theoretical methods.
Professor
Photo of Steven Frankel
e-mail
The CFDLAB focuses on the development, implementation and application of high-fidelity numerical methods for solving problems in aerodynamic, combustion, energy, propulsion, and multiphase turbulent flows. We use high-performance computing including multi-GPU platforms to parallelize our simulations and improve performance. We also study algorithms for quantum computers. In all cases, AI/machine learning techniques are considered to implement flow control strategies, better understand and improve predictive accuracy of turbulent flows, and finally to improve performance of quantum algorithms.
Assistant Professor
Photo of Tamir Hazan
+972-48294414
e-mail
Theoretical & practical aspects of machine learning. Mathematical solutions to real-life problems demonstrating non-traditional statistical behavior.
Associate Professor
Photo of Tomer Michaeli
+972-48294756
e-mail
Computer Vision; Machine Learning; Image Processing; Signal Processing.
Professor
Photo of Tomer Toledo
+972-48293080
e-mail
Transportation systems modeling and analysis, traffic simulation modeling, driving and travel behavior, intelligent transportation systems, traffic management and control.
Assistant Professor
Photo of Tzipi Horowitz-Kraus
+972-48292165
e-mail
Relating brain functional / structural patterns to children’s abilities (reading, language, memory, attention). Modeling physiological datasets (speech, heart rate, eye-movement, brain markers etc) for intervention and diagnostic tools.
Assistant Professor
Photo of Uri Shalit
+972-48294440
e-mail
Machine learning; causal inference; machine learning for healthcare; deep learning.
Professor Emeritus
Photo of Uri Weiser
+972-778871507
e-mail
Research in advanced computer architecture to achieve high hardware utilization. Leveraging the unique characteristics of machine learning to redesign energy-efficient deep neural networks.
Associate Professor
Photo of Vadim Indelman
+972-48293815
e-mail
The intersection of probabilistic perception and inference, learning, and planning under uncertainty, both for single and distributed multi-agent autonomous systems.
Associate Professor
Photo of Yael Allweil
+972-48294056
e-mail
HousingLab develops research methods that address the large scale and multifaceted nature of housing. HousingLab engages in developing archival methods for visual digital data as primary sources. Specifically: developing computer vision capacities for the built environment.
Associate Professor
Photo of Yael Yaniv
+972-48294124
e-mail
Automatic diseases classification, Cell Biophysics, Heart rate variability analysis, Mobile health devices, Prediction and detection of atrial and ventricular fibrillation, Sinoatrial node cell activity.
Professor
Photo of Yair Ein-Eli
e-mail
We are working on AI/ML in battery materials research.
Associate Professor
Photo of Yair Goldberg
e-mail
Survival analysis, Empirical processes, Machine learning, Semiparametric models.
Assistant Professor
Photo of Yaniv Romano
+972-48294959
e-mail
Research centers around the theory and practice of statistical inference and machine learning, focusing on the reliability, robustness, and interpretability of modern data-driven algorithms.
Associate Professor
Photo of Yasha J. Grobman
+972-48294001
e-mail
Design computation, computer aided design and fabrication.
Assistant Professor
Photo of Yevgeni Berzak
e-mail
Natural Language Processing, Computational Psycholinguistics, AI, Eye-tracking, Neuroimaging.
Assistant Professor
Photo of Yoav Shechtman
+972-48291422
e-mail
Computational imaging, fluorescence microscopy, cellular imaging, 3D imaging, super-resolution microscopy, wavefront shaping
Assistant Professor
Photo of Yoed Kenett
e-mail
Higher-level cognition; Cognitive complexity; Creativity; Network science in cognitive science; Network neuroscience; Clinical cognitive Networks; Cognitive search
Assistant Professor
Photo of Yonatan Belinkov
+972-48294958
e-mail
Natural language processing; machine learning for language understanding and generation; neural network representations; interpretability and robustness of machine learning models.
Associate Professor
Photo of Yossi Keshet
e-mail
Keshet's research concerns both machine learning and the computational study of human speech and language. His work on speech and language concentrates on speech processing, automatic speech recognition, speaker recognition, automating laboratory phonology, and pathological speech. His research on machine learning focuses on core machine learning and deep learning algorithms, specifically those that capture the structure of complex tasks such as automatic speech recognition, and on how to make them reliable and trustworthy.
Associate Professor
Photo of Yuval Cassuto
+972-48294642
e-mail
Storage devices, systems; Reliable data distribution in networks; Coding theory, Data compression.

Projects

Applied Deep-Learning methods for Expediting Path Selection in Real-Time MAPF

Multi-agent path finding (MAPF) is an NP-hard problem. We are given a large pre-calculated set of paths, and the aim is to select paths from this set such that they collide neither with static obstacles nor with each other. This selection should be calculated in near real time, i.e., much faster than classic MAPF algorithms.

We investigate how deep-learning methods may speed up the search process: we train a neural network to recognize patterns in the training examples and apply them to previously unseen settings of the problem.
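
As a concrete illustration of the selection step, the sketch below (our own hypothetical code, not the project's; all function and variable names are ours) greedily picks one mutually collision-free path per agent from a pre-calculated candidate set. A trained network could replace the simple `score` heuristic to rank candidates before selection.

```python
# Hypothetical sketch: greedy selection of mutually non-colliding paths
# from a pre-calculated candidate set (candidates are assumed obstacle-free).
from typing import Dict, List, Tuple

Path = List[Tuple[int, int]]  # sequence of grid cells, one per time step


def at(path: Path, t: int) -> Tuple[int, int]:
    return path[min(t, len(path) - 1)]  # agents wait at their final cell


def collides(p: Path, q: Path) -> bool:
    """Vertex or swap collision between two time-indexed paths."""
    for t in range(max(len(p), len(q))):
        if at(p, t) == at(q, t):
            return True                                   # same cell, same time
        if t and at(p, t) == at(q, t - 1) and at(q, t) == at(p, t - 1):
            return True                                   # agents swapping cells
    return False


def select_paths(candidates: Dict[str, List[Path]],
                 score=lambda path: -len(path)) -> Dict[str, Path]:
    """Choose one candidate per agent, skipping paths that clash with earlier picks."""
    chosen: Dict[str, Path] = {}
    for agent, paths in candidates.items():
        for path in sorted(paths, key=score, reverse=True):  # best-scored first
            if all(not collides(path, other) for other in chosen.values()):
                chosen[agent] = path
                break
    return chosen
```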

Integrating Deep Reinforcement and Supervised Learning to Expedite Indoor Mapping

We address the challenge of mapping indoor environments. The calculation time of frontier-based methods may increase substantially as more areas are exposed. To overcome this limitation, we apply deep reinforcement learning to train the motion planner, together with a pre-trained generative deep neural network acting as a map predictor. Hence, we improve decision making through the learned structural statistics of the environment and ensure a constant calculation time. We show that combining the two methods can shorten the duration of the mapping process substantially.

Multi-Agent Geometric Consensus

This report, written under the supervision of Professor Alfred M. Bruckstein as part of a doctoral dissertation, surveys results on distributed systems comprising mobile agents that are identical and anonymous, oblivious and interact solely by adjusting their motion according to the relative location of their neighbours. The agents are assumed capable of sensing the presence of other agents within a given sensing range and able to implement rules of motion based on partial information on the geometric constellation of their neighbours.
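
A minimal simulation sketch of such a local rule (our own illustration under simplifying assumptions, not the dissertation's exact model): each agent takes a small step toward the centroid of the neighbours it senses within a given range, using relative locations only.

```python
# Illustrative gathering/consensus step: anonymous agents react only to the
# relative positions of neighbours within their sensing radius.
import numpy as np


def consensus_step(positions: np.ndarray, radius: float, step: float = 0.05) -> np.ndarray:
    """One synchronous update; positions is an (n, 2) array of agent locations."""
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        offsets = positions - p                              # relative locations only
        neighbours = offsets[np.linalg.norm(offsets, axis=1) <= radius]
        if len(neighbours) > 1:                              # more than the agent itself
            new_positions[i] = p + step * neighbours.mean(axis=0)
    return new_positions


positions = np.random.rand(20, 2)
for _ in range(500):
    positions = consensus_step(positions, radius=0.4)
# With a connected sensing graph, the swarm contracts toward a common point.
```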

Tactile sensing of images

We aim to enable touching of digital media, such as images and virtual objects. Imagine being able to touch the sofa your avatar is going to sit on, or letting blind people “see” images through their fingers. We are developing a complete haptic system for tactile sensing and already have a device that simulates the sensation of surface geometry and texture. The next challenge is to figure out how to transform images into surfaces that people can touch and understand.

Data generation and augmentation

In many problems in computer vision, collecting data for training and testing is hard or even impossible. For example, it is notoriously hard to annotate videos, and as a result autonomous driving platforms use synthetically generated videos. In our research we explore multiple facets of handling the lack of data. We explore methods for differentiable data augmentation, completing missing annotations, generating synthetic images, and more.

Augmented Programmer Intelligence

The vast amount of code available on the web is increasing on a daily basis. Open-source hosting sites such as GitHub contain billions of lines of code. Community question-answering sites provide millions of code snippets with corresponding text and metadata. The amount of code available in executable binaries is even greater. We explore techniques for learning from such “big code” and leveraging the learned models for program analysis, program synthesis and reverse engineering. Along the way, we explore a range of symbolic and neural program representations (e.g., symbolic automata, tracelets, and numerical abstractions), as well as different neural models.

People:
Eran Yahav
Computational models for Neural Architectures

What is the computational model behind a Transformer?
Where recurrent neural networks have direct parallels in finite state machines, allowing clear discussion and thought around architecture variants or trained models, Transformers have no such familiar parallel. We explore different symbolic representations for reasoning about transformers and a programming language that can be used to “program” transformers.

People:
Eran Yahav
Prediction of pharmaceutical molecule shelf life

Understanding the chemical stability of active pharmaceutical molecules can affect the quality, safety, robustness, and efficacy of a drug product. The ultimate goal of this project is to develop a predictive tool for drug resistance to oxidation and degradation to facilitate early developmental efforts of potential pharmaceuticals, even before they are synthesized. We use detailed chemical kinetic models automatically generated with on-the-fly quantum chemical thermo-kinetic computations.

Application-Tailored Optimal Fuel Design

Design of an AI tool able to attain a fuel mixture composition that possesses specific desirable combustion characteristics. Given a constrained chemical search space and a target function (e.g., ignition delay time), this tool will predict which fuel composition is optimal for the task. This tool will intelligently design a wide range of fuel systems efficiently, attaining experimentally-validated results, while at the same time reducing the amount of required experiments and resources.

Intelligent Design of High-T Fuel Cell Membranes

This project focuses on attaining robust proton-exchange membranes (PEMs) for high-temperature fuel cells. The key challenge with today’s PEMs in high-temperature oxidative environments is that they degrade too quickly, resulting in an unacceptably low operation time of the overall device. We generate detailed chemical kinetic models for the degradation of these materials, and use neural networks to predict new polymer structures with low degradation rates at these extreme operating conditions.

Machine Learning for Urban Design
We developed a generative adversarial network (GAN), a machine-learning technique that can be used as an urban design tool capable of learning and reproducing complex patterns that express the unique spatial qualities of non-planned settlements.
Artificial Intelligence Architecture Chatbot
We created a design chatbot that is able to maintain meaningful conversations using natural language about design and create corresponding design briefs.
Generative Heritage
We develop a VR narrative generator that can extract meaningful histories from historic documents and produce an immersive educational experience.
Partial participation and proxy voting

The mathematical theory of voting goes back at least 240 years to the Condorcet Jury Theorem, and mainly deals with the question of finding a rule, or a “function”, that best aggregates the preferences of many people. Yet the implicit underlying assumption, that all (or even most) people actually vote, is rarely met in practice. We quantify the bias in the outcome as more people fail to vote, and study the effect of possible remedies such as voting by proxy.
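
A toy Monte Carlo sketch of the participation bias being quantified (our own illustration, not the paper's model; the 52% population lean and the turnout rates are arbitrary assumptions): it estimates how often the majority among a random subset of voters differs from the full-population majority.

```python
# Toy simulation: disagreement between sample majority and population majority.
import numpy as np

rng = np.random.default_rng(0)


def disagreement_rate(n_voters: int = 1001, turnout: float = 0.3, trials: int = 2000) -> float:
    disagreements = 0
    for _ in range(trials):
        prefs = rng.random(n_voters) < 0.52           # slight population-wide lean
        true_majority = prefs.mean() > 0.5
        voters = rng.random(n_voters) < turnout       # who actually shows up
        if voters.sum() == 0:
            continue
        observed_majority = prefs[voters].mean() > 0.5
        disagreements += int(observed_majority != true_majority)
    return disagreements / trials


# Lower turnout -> noisier, more biased outcomes.
print(disagreement_rate(turnout=0.1), disagreement_rate(turnout=0.9))
```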

General-Domain Truth Discovery via Average Proximity

Truth discovery is a general name for statistical methods that aim to extract the correct answers to questions from multiple answers coming from noisy sources, for example workers on a crowdsourcing platform. We suggest a simple heuristic for estimating workers’ competence using average proximity to other workers. We prove that this estimates the actual competence level well and enables separating high- and low-quality workers in a wide spectrum of domains and statistical models.
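
A hedged sketch of the average-proximity idea for continuous answers (illustrative only; the paper's estimator, weighting scheme, and guarantees may differ): workers whose answers are, on average, closer to everyone else's are assigned higher estimated competence, which then weights the aggregated answer.

```python
# Illustrative average-proximity competence estimate and weighted aggregation.
import numpy as np


def estimate_competence(answers: np.ndarray) -> np.ndarray:
    """answers: (workers, questions). Competence ~ average proximity to peers."""
    n = answers.shape[0]
    avg_distance = np.array([
        np.mean([np.mean(np.abs(answers[i] - answers[j]))
                 for j in range(n) if j != i])
        for i in range(n)
    ])
    return 1.0 / (avg_distance + 1e-9)   # closer to peers -> higher estimated competence


def weighted_truth(answers: np.ndarray) -> np.ndarray:
    """Aggregate answers, weighting each worker by estimated competence."""
    w = estimate_competence(answers)
    return (w[:, None] * answers).sum(axis=0) / w.sum()
```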

Contract Design for Energy Demand Response

Power companies such as Southern California Edison (SCE) use Demand Response (DR) contracts to incentivize consumers to reduce their power consumption during periods when forecast demand exceeds supply. We design mechanisms that take into consideration consumers’ heterogeneity in consumption profile and reliability, and increase participation at a lower cost.

Query performance prediction
Developed approaches to predicting search effectiveness in the absence of relevance judgments.
Game theory meets information retrieval
Developed a novel game theoretic framework for competitive retrieval settings. Studied various aspects of the competitive setting.
Mass Visual images of the Built Environment and Future _ ARChive

The project combines my two principal areas of specialization: (1) the architectural history of housing as a large-scale, multifaceted phenomenon; and (2) developing machine-vision archival capabilities for the built environment. Combining methods of computation in architecture with the historiography of the built environment, I attempt to develop an empirical approach to the study of vast urban landscapes using computational capacities. I aim to analyze vast volumes of building data in images, replacing textual keyword labelling with content-based semantic ‘reading’. I use the large corpus of building images from Google Street View to train a convolutional neural network (CNN) model that can identify architectural features in façade images.
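
A minimal fine-tuning sketch of that last step (our own illustration with a hypothetical `facades/` folder of labelled façade crops; the project's actual data, labels, and architecture may differ): a pretrained ResNet-18 is adapted to classify architectural features in façade images.

```python
# Illustrative fine-tuning of a pretrained CNN on labelled facade images.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("facades/", transform=transform)   # hypothetical path
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))  # facade feature classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:       # one pass over the data, for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```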

Effects of offering hospital patients information about the process of their treatment

We extracted information from hospital medical records and designed a personal platform through which patients could receive information about what they can expect regarding the type of tests and treatments and the expected duration of their hospital stay. We assess the effects of the offered information on patient satisfaction, duration of stay and duration of treatments.

Content and social dynamics of Slack communication
Analyses of all communication conducted on public Slack channels of a mid-size firm, using topic analysis, sentiment analysis and behavior analysis as well as network analysis to deconstruct the dynamics of the firm’s Slack communication.
Effects of customer emotions on employee response time
Analyses of archives of over 100K real-time interactions between human agents and customers in online service chats.
Energy performance in heterogeneous built environments

This research aims to explore the impacts of mixed-use and mixed-typology configurations on energy performance (i.e., supply, demand, and the balance between them) in the Israeli context. To do that, we adopt a cross-use, cross-scale (from a room to a district), and cross-climatic (different climate zones and future climate) analytical approach, applied here to several local test cases. The methodology includes an optimization module that offers a set of spatial and usage combinations which provide a favorable energy starting point in the heterogeneous design of buildings and districts.

A holistic generative cross-climatic method for solar-driven environmental design

This project aims to advance the existing scientific knowledge on solar design by harnessing novel computational optimization methods. We explore a generative approach in which a combination of solar-driven metrics drives the form-finding process based on a multi-objective optimization process. The workflow is applied to a real district case study in Tel Aviv and yields a large set of spatial solar-driven building masses, rather than one solar envelope volume, which corresponds to the different trade-offs between the environmental performance metrics applied.

Environmentally responsive by urban design

This project offers new insights into the nexus between urban form and environmental performance in both local and global contexts. We develop and explore a new set of harmonized workflows which, by capitalizing on the benefits of advanced computational intelligence, open new possibilities in the pursuit of a sustainable urban form, going beyond energy considerations towards environmental quality and urban livability. As part of the project, new simplified evaluation metrics are developed to be employed in multi-objective optimization studies of environmental performance at the urban scale.

Capturing the structural complexity of nuclear envelope invaginations

Using cutting-edge super-resolution microscopy technology, expansion microscopy, we discovered that tubular nuclear envelope invaginations are highly abundant in vertebrate embryonic cells. These structures are poised to extend the role of the nuclear envelope in regulating gene expression deep into the nucleus. Shedding light on this phenomenon requires segmenting the 3D structure of invaginations in a huge dataset of microscopy data. We are interested in utilizing learning algorithms and in particular deep neural networks for this 3D segmentation task.

Probabilistic models of neural activity underlying decisions

At key points within neural circuits, neurons integrate information from multiple sources to make a choice. We are interested in unraveling how such choices are implemented by the circuits, by developing generative probabilistic models of neural activity in multiple neural populations involved in making a decision and comparing these predictions to experimental measurements of neural activity. In particular, we will focus on the circuit mediating the choice of the response type a larval zebrafish would present in the face of an alarming stimulus.

Design For Collaboration (DFC)

The focus is on recognizing and analyzing the challenges that arise when autonomous agents with different capabilities need to interact and collaborate on unknown tasks, on providing methods for the automated design of these environments to promote collaboration, and on specifying guarantees regarding the quality of the design solutions produced by our suggested methods. This research combines data-driven approaches with symbolic AI techniques and involves both theoretical work and evaluations on multi-agent reinforcement learning settings and on multi robot systems.

Market of Information and Skills for Multi Agent AI and Multi Robot Teams

Promoting multi-agent collaboration via dynamic markets of information and skills in which AI agents and robots trade their physical capabilities and their ability to acquire new information. The value of these traded commodities is dynamically computed based on the agents’ objectives, sensors and actuation capabilities as well as their ability to communicate with each other and ask for assistance. This framework maximizes performance and team resilience, without relying on a centralized controller.

Task and Team Aware Motion Planning for Robotics (TATAM)

Most current approaches to robotic planning separate the low-level planning of basic behaviors and the high-level search for a sequence of behaviors that will accomplish a task. However, in complex settings such as packing, personal assistance, and cooking, this dichotomous view becomes inefficient, especially in environments shared by multiple autonomous agents. We therefore offer new ways for integrating task-level considerations when planning the robot’s movement, and for propagating motion-planning considerations into task planning.

Cognitive Text Personalization
In this project we explore how to use NLP and eye movements in reading to determine various properties of texts, such as their readability level and how engaging they are. We further investigate how to personalize readability and content in real time, with an emphasis on language learners and people with cognitive impairments.
Inferring Linguistic Knowledge from Eye Movements in Reading
We are developing computational frameworks for decoding linguistic knowledge and cognitive state from eye movements during reading. In particular, we are developing a new type of language assessment technology in which language proficiency is determined as an automatic byproduct of ordinary reading.
Human-Like Natural Language Processing
In this project we investigate how to bring NLP closer to human language processing abilities by providing NLP systems with inductive biases from human eye movements in reading and brain activity during language comprehension. Example tasks include machine reading comprehension and language modeling.
Long term prediction of housing price bubbles

The objective of this proposal is to create a tool able to predict housing price bubbles by analyzing the long-term behavior of housing prices using sequential forecasting ML algorithms. Unlike common practices that deal with macroeconomic variables, we suggest focusing on dynamic micro-factors, such as neighborhood’s architectural characteristics, urban planning and changing socio-economic factors.

Highly Power Efficient Cloud Central Processing Unit (CPU)

The research calls for a new Cloud Processor Architecture that leverages the cloud's high number of independent, parallel, general-purpose tasks and redesigns the main Central Processing Unit (CPU) to achieve high throughput at the cost of single-threaded performance.
The concept is based on an efficient new multitasking architecture, driven to the extreme via a Machine Learning (Reinforcement Learning) architecture, to achieve high utilization of CPU resources.

People:
Uri Weiser
Non-Blocking Simultaneous MultiThreading – NB-SMT

A new DNN architecture that leverages DNNs' inherent sparsity and resilience to achieve a Non-Blocking Simultaneous MultiThreading technique (NB-SMT).
The new technique enables DNN execution units to be shared among several computational flows to avoid idle computing element operations due to data sparsity. In the scenario of a structural hazard on a shared execution unit, we propose to temporarily and locally “squeeze in” the operations by reduced precision.

People:
Uri Weiser
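
As a rough illustration of the squeeze-in idea only (a toy Python simulation, not the NB-SMT hardware design), the sketch below models two computational flows sharing one multiply-accumulate unit: zero operands need no work, and when both flows are active in the same cycle the operands are temporarily quantized to reduced precision so both can issue without blocking.

```python
import numpy as np

def quantize(x, bits=4):
    """Crude uniform quantizer standing in for reduced-precision execution."""
    scale = (2 ** (bits - 1)) - 1
    return np.round(np.clip(x, -1, 1) * scale) / scale

def shared_mac(a_ops, b_ops):
    """Simulate two flows sharing one MAC unit (toy NB-SMT analogue)."""
    acc_a, acc_b, cycles = 0.0, 0.0, 0
    for (wa, xa), (wb, xb) in zip(a_ops, b_ops):
        a_active = wa != 0 and xa != 0      # sparsity: zero operands are skipped
        b_active = wb != 0 and xb != 0
        if a_active and b_active:
            # structural hazard: "squeeze in" both at reduced precision, one cycle
            acc_a += quantize(wa) * quantize(xa)
            acc_b += quantize(wb) * quantize(xb)
        elif a_active:
            acc_a += wa * xa
        elif b_active:
            acc_b += wb * xb
        cycles += 1
    return acc_a, acc_b, cycles

rng = np.random.default_rng(0)
ops = lambda: [(w if rng.random() < 0.5 else 0.0, x)      # ~50% sparse weights
               for w, x in zip(rng.uniform(-1, 1, 100), rng.uniform(-1, 1, 100))]
print(shared_mac(ops(), ops()))
```
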
Productive Failure in Data Science

This project seeks to build an intelligent environment that provides Data Science students with opportunities to construct their own data models. Though students often make mistakes in these tasks, we use an AI-driven combination of challenges, feedback, and instruction to help students become adaptive experts who are able to comprehend complex relationships in unfamiliar data.

People:
Ido Roll
Measuring Creative Thinking

The main objective of the study is to develop Creative Thinking measurement tools in the context of scientific and authentic problem solving. This project is focused on the synergy between the measurement of the process, using learning analytics techniques, and measurement of the products of Creative Thinking. It serves as a case-study for intelligent assessment of complex constructs.

People:
Ido Roll
Learning analytics to teach scientific reasoning

This project seeks to support the learning of scientific literacies with virtual labs using student-facing learning-analytics dashboards. The design of student-facing dashboards is challenging due to the ill-defined nature of scientific skills and attitudes. As students are free to explore an open-ended design space, and as there are no “correct” answers, we identify metrics that can be extracted and interpreted so that students can understand the processes of doing science.

People:
Ido Roll
Advanced AI methods to meet the needs of clinicians

Within this project we developed a set of deep-learning tools that enabled design of a robust, trustworthy, explainable, and transparent system, while retaining the superior level of performance expected of deep learning-based algorithms for classification of heart conditions from short ECG recordings collected using a two-lead device.

People:
Yael Yaniv
Advanced AI methods to identify heart conditions

Within this project we developed an app which integrates an AI method that can automatically distinguish between atrial fibrillation, other rhythm disturbances and noise when using a mobile one-lead ECG device. In parallel we developed an automated AI-based system to identify heart conditions from 12-lead digital or image ECG recordings with high accuracy. We also demonstrated that the images scanned using a smartphone provided the same accuracy as machine images.

People:
Yael Yaniv
Machine-learning for Crohn’s disease assessment

Non-invasive assessment of the terminal ileum’s mucosal healing plays a key role in managing Crohn’s disease (CD) patients. We develop machine-learning models to predict terminal-ileum mucosal healing from big-data databases of:
1) semi-quantitative clinical interpretation of Magnetic Resonance Imaging (MRI) data of CD patients, and
2) MRI images of CD patients. Our approach provides more accurate assessment of the terminal ileum’s mucosal healing compared to classical linear methods.

Non-parametric Bayesian deep-learning for medical imaging

Mechanisms that determine a deep neural network’s confidence in its predictions by estimating prediction uncertainty play a critical role in adopting deep-learning techniques for safety-critical clinical applications. We introduce a principled way to non-parametrically characterize the true posterior distribution of the neural-network predictions through stochastic gradient Langevin dynamics (SGLD). We demonstrated very high correlation between our measures of uncertainty and out-of-distribution data in MRI registration. Further, our approach improved registration accuracy and robustness.
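
For readers unfamiliar with SGLD, the following is a minimal, self-contained sketch on a toy Bayesian linear-regression problem (not the registration pipeline above): each update adds a stochastic gradient step plus Gaussian noise scaled by the step size, and the collected iterates serve as approximate posterior samples whose spread quantifies uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.3 * rng.normal(size=200)

def grad_log_post(w, Xb, yb, n, sigma2=0.09, prior_var=10.0):
    """Gradient of the log posterior; minibatch likelihood rescaled to the full data set."""
    resid = yb - Xb @ w
    grad_lik = (n / len(yb)) * (Xb.T @ resid) / sigma2
    grad_prior = -w / prior_var
    return grad_lik + grad_prior

w = np.zeros(3)
samples = []
eps = 1e-4                                   # step size
for t in range(5000):
    idx = rng.choice(len(X), size=32, replace=False)
    g = grad_log_post(w, X[idx], y[idx], n=len(X))
    w = w + 0.5 * eps * g + np.sqrt(eps) * rng.normal(size=3)   # SGLD update
    if t > 1000:                             # discard burn-in
        samples.append(w.copy())

samples = np.array(samples)
print("posterior mean:", samples.mean(axis=0))
print("posterior std :", samples.std(axis=0))   # per-parameter uncertainty
```
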

Deep-learning for Quantitative MRI analysis

In-vivo quantification of tissue biophysical properties plays a key role in personalized medicine. Motivated by classical model-fitting approaches, we introduce a new class of deep-neural-network architectures and training processes to enable accurate and reliable quantification of tissue biophysical properties from quantitative MRI data. We demonstrated the added value of our approach for Intra-Voxel Incoherent Motion (IVIM) analysis of Diffusion-Weighted MRI data, with clinical applications in oncology and gastroenterology.
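
For context, the classical model-fitting baseline referred to above can be illustrated with the standard IVIM bi-exponential signal model. The sketch below (synthetic data, not the group's learned estimator) fits the perfusion fraction f, pseudo-diffusion D*, and diffusion D to simulated signal decay with a non-linear least-squares fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """Standard IVIM bi-exponential signal model (normalized so S(b=0) = 1)."""
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

b_values = np.array([0, 10, 20, 50, 100, 200, 400, 800], dtype=float)  # s/mm^2
true_params = (0.1, 0.05, 0.001)          # perfusion fraction f, D*, D
rng = np.random.default_rng(1)
signal = ivim(b_values, *true_params) + 0.01 * rng.normal(size=b_values.size)

# Classical per-voxel fit: the baseline against which learned estimators are compared.
popt, _ = curve_fit(ivim, b_values, signal,
                    p0=(0.1, 0.01, 0.001),
                    bounds=([0, 0, 0], [1, 1, 0.01]))
print("estimated f, D*, D:", popt)
```
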

Decisions from experience

We study basic human decision making and learning processes when making repeated and/or sequential choice. Understanding the basic processes in these very common settings (e.g. driving, behavior in pandemics, using smartphone apps, health decisions) both improves our ability to predict behavior and to design mechanisms and policies that are robust to the likely behaviors of systems’ users.

Big data to understand and predict financial decisions
We use big data on transactions in financial markets to discover evidence supporting psychological theories of decision making and use these psychological insights within machine learning systems to improve predictions of financial markets.
Predicting human choice with machine learning & psychology

We integrate psychological theories and models of human decision making into machine learning systems to predict human decision making at state-of-the-art levels. Focusing on the most fundamental choice task from behavioral economics and using the largest datasets currently available, we study which theories and models, which types of machine learning algorithms and tools, and which methods of integration lead to the best out-of-sample predictions.

Associations of the BNT162b2 COVID-19 vaccine effectiveness with patient age and comorbidities

Vaccinations are considered the major tool to curb the current SARS-CoV-2 pandemic. A randomized placebo-controlled trial of the BNT162b2 vaccine has demonstrated a 95% efficacy in preventing COVID-19 disease. These results are now corroborated with statistical analyses of real-world vaccination rollouts, but resolving vaccine effectiveness across demographic groups is challenging. Here, applying a multivariable logistic regression analysis approach to a large patient-level dataset, including SARS-CoV-2 tests, vaccine inoculations and personalized demographics, we model vaccine effectiveness at daily resolution and its interaction with sex, age and comorbidities. Vaccine effectiveness gradually increased post day 12 of inoculation, then plateaued, around 35 days, reaching 91.2% [CI 88.8%-93.1%] for all infections and 99.3% [CI 95.3%-99.9%] for symptomatic infections. Effectiveness was uniform for men and women yet declined mildly but significantly with age and for patients with specific chronic comorbidities, most notably type 2 diabetes. Quantifying real-world vaccine effectiveness, including both biological and behavioral effects, our analysis provides initial measurement of vaccine effectiveness across demographic groups.
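
The core statistical machinery described above is a multivariable logistic regression with a vaccination indicator alongside demographic covariates. A minimal sketch of that kind of analysis on synthetic data (the coefficients and risk model below are invented for illustration, not the study's data) estimates effectiveness as one minus the odds ratio of the vaccination term.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50_000
age = rng.uniform(16, 90, n)
sex = rng.integers(0, 2, n)
diabetes = rng.binomial(1, 0.1, n)
vaccinated = rng.binomial(1, 0.5, n)

# Synthetic infection risk: a strong protective vaccine effect plus mild age/comorbidity effects.
logit = -3 + 0.01 * (age - 50) + 0.3 * diabetes - 2.2 * vaccinated
p = 1 / (1 + np.exp(-logit))
infected = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([age, sex, diabetes, vaccinated]))
model = sm.Logit(infected, X).fit(disp=0)
odds_ratio = np.exp(model.params[-1])        # OR for the vaccination indicator
print("estimated vaccine effectiveness: %.1f%%" % (100 * (1 - odds_ratio)))
```
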

Personal clinical history predicts antibiotic resistance of urinary tract infections

Antibiotic resistance is prevalent among the bacterial pathogens causing urinary tract infections. However, antimicrobial treatment is often prescribed ‘empirically’, in the absence of antibiotic susceptibility testing, risking mismatched and therefore ineffective treatment. Here, linking a 10-year longitudinal data set of over 700,000 community-acquired urinary tract infections with over 5,000,000 individually resolved records of antibiotic purchases, we identify strong associations of antibiotic resistance with the demographics, records of past urine cultures and history of drug purchases of the patients. When combined together, these associations allow for machine-learning-based personalized drug-specific predictions of antibiotic resistance, thereby enabling drug-prescribing algorithms that match an antibiotic treatment recommendation to the expected resistance of each sample. Applying these algorithms retrospectively, over a 1-year test period, we find that they greatly reduce the risk of mismatched treatment compared with the current standard of care. The clinical application of such algorithms may help improve the effectiveness of antimicrobial treatments.
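
The prescribing logic described above can be sketched as one resistance classifier per drug over demographic and history features, followed by recommending the drug with the lowest predicted resistance probability. The example below is a hypothetical stand-in on synthetic data; the drug names, features, and label model are illustrative only and do not reproduce the study's models.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n, drugs = 5000, ["ciprofloxacin", "nitrofurantoin", "amoxicillin-clavulanate"]

# Hypothetical features: age, sex, past resistant culture, recent purchases of each drug.
X = np.column_stack([rng.uniform(18, 90, n), rng.integers(0, 2, n),
                     rng.binomial(1, 0.2, n),
                     *[rng.poisson(0.3, n) for _ in drugs]])
# Synthetic labels: resistance to each drug, driven mainly by past exposure to that drug.
Y = np.column_stack([rng.binomial(1, 1 / (1 + np.exp(-(X[:, 3 + j] - 1))))
                     for j in range(len(drugs))])

models = [GradientBoostingClassifier().fit(X[:4000], Y[:4000, j])
          for j in range(len(drugs))]

def recommend(patient_features):
    """Return the drug with the lowest predicted probability of resistance."""
    probs = [m.predict_proba(patient_features.reshape(1, -1))[0, 1] for m in models]
    return drugs[int(np.argmin(probs))], probs

print(recommend(X[4500]))
```
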

Community-level evidence for SARS-CoV-2 vaccine protection of unvaccinated individuals

Mass vaccination has the potential to curb the current COVID-19 pandemic by protecting individuals who have been vaccinated against the disease and possibly lowering the likelihood of transmission to individuals who have not been vaccinated. The high effectiveness of the widely administered BNT162b2 vaccine from Pfizer–BioNTech in preventing not only the disease but also infection with SARS-CoV-2 suggests a potential for a population-level effect, which is critical for disease eradication. However, this putative effect is difficult to observe, especially in light of highly fluctuating spatiotemporal epidemic dynamics. Here, by analyzing vaccination records and test results collected during the rapid vaccine rollout in a large population from 177 geographically defined communities, we find that the rates of vaccination in each community are associated with a substantial later decline in infections among a cohort of individuals aged under 16 years, who are unvaccinated. On average, for each 20 percentage points of individuals who are vaccinated in a given population, the positive test fraction for the unvaccinated population decreased approximately twofold. These results provide observational evidence that vaccination not only protects individuals who have been vaccinated but also provides cross-protection to unvaccinated individuals in the community.

Machine Learning based MANET Traffic Performance Prediction Tool

A Mobile Ad-hoc NETwork (MANET) is a communication platform for wireless first-response units: a collection of wireless hosts that forms a temporary network without any centralized support, characterized by rapidly changing connectivity and bandwidth over its communication links. At the same time, the applications running on the units often require strict end-to-end bandwidth and delay guarantees. It is therefore essential to build an optimization tool able to predict traffic bandwidth or delay performance once the network topology changes or a new application starts running. Developing such a tool requires network modeling. Today, network models are based either on packet-level simulators or on analytical models (e.g., queueing theory). Packet-level simulators are computationally very costly, while analytical models are fast but inaccurate. Hence, Machine Learning (ML) arises as a promising way to build accurate network models that operate in real time and predict the resulting network performance according to the target policy, i.e., maximum bandwidth or minimum end-to-end delay. Recently, Graph Neural Networks (GNNs) have shown strong potential for integration into commercial products for network control and management. Early works using GNNs have demonstrated the capability to learn from network characteristics that are fundamentally represented as graphs, such as the topology, the routing configuration, or the traffic flowing along a series of nodes. In contrast to previous ML-based solutions, GNNs can produce accurate predictions even for networks unseen during training. The main project target is to adapt GNNs to MANETs and test their prediction accuracy for such networks.

People:
Danny Raz
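
To make the GNN idea concrete, the sketch below is a minimal, untrained message-passing layer in plain NumPy: a random MANET topology and per-node features are aggregated over neighbors and passed through two layers to produce a per-node performance readout. The feature semantics and weights here are placeholders; in practice such a model would be trained on simulator data with a GNN framework.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 6
# Random symmetric MANET topology and per-node features (e.g., offered load, link bandwidth).
adj = (rng.random((n_nodes, n_nodes)) < 0.4).astype(float)
adj = np.triu(adj, 1); adj = adj + adj.T
feats = rng.random((n_nodes, 3))

def mp_layer(h, adj, W_self, W_neigh):
    """One round of mean-aggregation message passing followed by a ReLU."""
    deg = np.maximum(adj.sum(1, keepdims=True), 1.0)
    neigh_mean = adj @ h / deg
    return np.maximum(h @ W_self + neigh_mean @ W_neigh, 0.0)

# Untrained weights, for illustration only; training would fit them to simulated traffic.
W1s, W1n = rng.normal(size=(3, 8)), rng.normal(size=(3, 8))
W2s, W2n = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
w_out = rng.normal(size=(8,))

h = mp_layer(feats, adj, W1s, W1n)
h = mp_layer(h, adj, W2s, W2n)
delay_pred = h @ w_out                     # per-node performance readout
print(delay_pred)
```
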
A neural control theory of high-level cognition in aging

High-level cognition, e.g., intelligence, draws on multiple processes, following sequential transitions through a series of neural states. The ease of these transitions depends on the connectome – the underlying network of white-matter connections. Yet the link between the connectome, brain-state transitions, and cognition is unclear, as is how this relation changes as people age across the lifespan. Here, I leverage state-of-the-art methodology from network control theory to link network properties, state transitions, and high-level cognition across the human lifespan.

Predicting an individual’s creative ability

Creativity is a complex, multidimensional, elusive concept that is vital to personal and societal needs. In this project, we leverage computational network science methods and machine learning, combined with psycholinguistics, to develop a computational model that predicts one’s creative ability level. We analyze a simple semantic fluency task (name all the animals you can think of) as a mental navigation process over a multiplex cognitive network. Features of this mental navigation process are then used to build creativity prediction and classification models.
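
A minimal sketch of the network side of this pipeline, assuming toy fluency lists and a simple adjacency heuristic (not the project's multiplex construction): build a semantic network from responses produced close together, then extract graph features that could feed a prediction model.

```python
import networkx as nx

# Toy fluency lists ("name all the animals you can think of") from two participants.
fluency_lists = [
    ["dog", "cat", "lion", "tiger", "shark", "dolphin", "whale"],
    ["dog", "wolf", "fox", "eagle", "owl", "bat", "whale", "dolphin"],
]

G = nx.Graph()
for resp in fluency_lists:
    # Connect consecutively produced words, a common proxy for semantic proximity.
    for a, b in zip(resp, resp[1:]):
        G.add_edge(a, b)

features = {
    "n_nodes": G.number_of_nodes(),
    "n_edges": G.number_of_edges(),
    "avg_clustering": nx.average_clustering(G),
    "avg_shortest_path": nx.average_shortest_path_length(G)
    if nx.is_connected(G) else float("nan"),
}
print(features)   # such features would feed the creativity prediction/classification model
```
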

Robustness and uncertainty in dynamic decision problems

Understanding how to deal with model uncertainty is key to building resilient agents that can cope with unforeseen environments. My research group has studied for years different approaches to building robust agents that can cope with different types of uncertainty. Robustness means that policies are immune to changes in the environment, leading to better real-time performance. In a sequence of papers we developed robust reinforcement learning and planning algorithms, including scaling up such algorithms, learning the uncertainty set online, adapting quickly to unknown uncertainties, and online adaptation. The main application areas here are energy and transport services.
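
A minimal sketch of the robustness notion at play, assuming a tiny tabular MDP and an uncertainty set given as a handful of candidate transition models: robust value iteration takes the worst case over the set in every Bellman backup, so the resulting policy is hedged against model error. This is an illustration of the general idea, not the group's algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 4, 2, 0.9
rewards = rng.uniform(0, 1, (n_states, n_actions))

def random_kernel():
    P = rng.random((n_states, n_actions, n_states))
    return P / P.sum(axis=2, keepdims=True)

# Uncertainty set: a handful of plausible transition models (e.g., perturbed estimates).
uncertainty_set = [random_kernel() for _ in range(5)]

V = np.zeros(n_states)
for _ in range(200):
    # Robust Bellman backup: worst-case expected return over the uncertainty set.
    Q = np.stack([rewards + gamma * P @ V for P in uncertainty_set])  # (models, S, A)
    V = Q.min(axis=0).max(axis=1)

robust_policy = Q.min(axis=0).argmax(axis=1)
print("robust values:", np.round(V, 3), "policy:", robust_policy)
```
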

Using Reinforcement Learning for bit-rate selection

We consider a reinforcement learning scheme for selecting how and what to transfer in 5G networks. The problem at hand is to decide which bit-rate to use and which channels would yield the best tradeoff in terms of power, performance, and cost. We employ multi-objective, multi-agent reinforcement learning to best decide how to transmit the data. In previous work, we proposed to use multi-armed bandit algorithms that ignore the current channel and agent state (see O. Avner and S. Mannor, Multi-User Communication Networks: A Coordinated Multi-Armed Bandit Approach, IEEE/ACM Transactions on Networking ( Volume: 27, Issue: 6, Dec. 2019), https://ieeexplore.ieee.org/document/8875003), but in this project we go further and consider the state of the transmission, the real time requirements, and the changing channel.
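
To illustrate the earlier, state-agnostic bandit formulation mentioned above (not the stateful multi-objective method the project develops), the sketch below runs UCB1 over (channel, bit-rate) arms with a scalarized reward trading delivered throughput against a power cost; the success-probability model is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
bitrates = [1, 2, 5, 10]            # Mbps options
channels = [0, 1, 2]
arms = [(c, r) for c in channels for r in bitrates]

def reward(channel, bitrate):
    """Scalarized objective: delivered throughput minus a power/cost penalty (toy model)."""
    success_p = np.clip(1.2 - 0.08 * bitrate - 0.1 * channel, 0.05, 1.0)
    throughput = bitrate * rng.binomial(1, success_p)
    power_cost = 0.05 * bitrate
    return throughput - power_cost

counts = np.zeros(len(arms))
means = np.zeros(len(arms))
for t in range(1, 5001):
    # UCB1: play each arm once, then pick the highest upper confidence bound.
    if t <= len(arms):
        a = t - 1
    else:
        ucb = means + np.sqrt(2 * np.log(t) / counts)
        a = int(np.argmax(ucb))
    r = reward(*arms[a])
    counts[a] += 1
    means[a] += (r - means[a]) / counts[a]

print("best (channel, bitrate):", arms[int(np.argmax(means))])
```
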

Language models in reinforcement learning

We consider the potential role of language as a regularizer in reinforcement learning. The objective is to create hierarchical reinforcement learning algorithms that are explainable by design: they use language to describe what they do. The language models can be learned, dictated, imitated, or created. In a paper that appeared in ICML 2019, we introduced Act2Vec, a general framework for learning context-based action representations for Reinforcement Learning. Representing actions in a vector space helps reinforcement learning algorithms achieve better performance by grouping similar actions and utilizing relations between different actions. We showed how prior knowledge of an environment can be extracted from demonstrations and injected into action vector representations that encode natural, compatible behavior. We then used these for augmenting state representations as well as improving function approximation of Q-values. We visualized and tested action embeddings in three domains, including a drawing task, a high-dimensional navigation task, and the large action space of StarCraft II.
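
As a rough stand-in for the context-based action representations described above (Act2Vec itself learns embeddings; this sketch only illustrates the "actions that occur in similar contexts get similar vectors" intuition), the example builds a windowed co-occurrence matrix over demonstration action sequences and factorizes it with SVD.

```python
import numpy as np

# Toy demonstration trajectories over a small discrete action vocabulary.
trajectories = [
    ["move_up", "move_up", "draw", "move_right", "draw"],
    ["move_right", "draw", "move_right", "draw", "move_up"],
    ["move_up", "move_left", "draw", "move_left", "draw"],
]
actions = sorted({a for t in trajectories for a in t})
idx = {a: i for i, a in enumerate(actions)}

# Context co-occurrence counts within a +/-2 window, analogous to word-embedding corpora.
C = np.zeros((len(actions), len(actions)))
for traj in trajectories:
    for i, a in enumerate(traj):
        for j in range(max(0, i - 2), min(len(traj), i + 3)):
            if j != i:
                C[idx[a], idx[traj[j]]] += 1

# Low-rank factorization of (log) co-occurrences yields context-based action vectors.
U, S, _ = np.linalg.svd(np.log1p(C), full_matrices=False)
embeddings = U[:, :2] * S[:2]
for a in actions:
    print(a, np.round(embeddings[idx[a]], 2))
```
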

Software Caching – W-TinyLFU

Caching is one of the most effective performance boosting techniques, in which hot data items are stored in a memory that is closer and faster to the application than the entire storage. In software managed caches, the cache is typically the local DRAM memory vs. SSDs, HDDs, or remote storage. The W-TinyLFU scheme for maintaining software caches is now dominating the Java and Go eco-systems. It is applied, either directly or through the Caffeine and Ristretto caching libraries, in Cassandra, Accumulo, HBase, Apache Solr, Infinispan, Open-Whisk, Corfu, Finagle, Spring, Akka, Neo4j, DGraph, Druid, and many others. We continue to expand it to new domains.
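
The admission idea at the heart of TinyLFU can be sketched in a few dozen lines: an approximate frequency sketch decides whether a newly arriving item should replace the LRU eviction victim, so one-hit wonders do not pollute the cache. The Python below is a simplified illustration only (no admission window and no periodic aging, both of which the real W-TinyLFU uses), not the Caffeine or Ristretto implementation.

```python
import hashlib
from collections import OrderedDict

class CountMinSketch:
    """Small approximate frequency counter used as the TinyLFU admission filter."""
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.tables = [[0] * width for _ in range(depth)]

    def _hashes(self, key):
        for d in range(self.depth):
            h = hashlib.blake2b(f"{d}:{key}".encode(), digest_size=8).digest()
            yield int.from_bytes(h, "little") % self.width

    def add(self, key):
        for d, h in enumerate(self._hashes(key)):
            self.tables[d][h] += 1

    def estimate(self, key):
        return min(self.tables[d][h] for d, h in enumerate(self._hashes(key)))

class TinyLfuCache:
    """Simplified TinyLFU-admission cache over an LRU main area."""
    def __init__(self, capacity=3):
        self.capacity, self.lru, self.sketch = capacity, OrderedDict(), CountMinSketch()

    def get(self, key):
        self.sketch.add(key)
        if key in self.lru:
            self.lru.move_to_end(key)
            return self.lru[key]
        return None

    def put(self, key, value):
        self.sketch.add(key)
        if key in self.lru or len(self.lru) < self.capacity:
            self.lru[key] = value
            self.lru.move_to_end(key)
            return
        victim = next(iter(self.lru))          # LRU eviction candidate
        # Admission: keep whichever of (candidate, victim) is estimated to be hotter.
        if self.sketch.estimate(key) > self.sketch.estimate(victim):
            self.lru.pop(victim)
            self.lru[key] = value

cache = TinyLfuCache()
for k in "aabbbcadbe":                         # skewed access stream
    if cache.get(k) is None:
        cache.put(k, k.upper())
print(list(cache.lru))                         # hot keys survive; one-hit wonders are filtered
```
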

Smart Sketching
Sketches enable maintaining statistics regarding large data streams in sub-linear space with only a single pass on the data. In this project we seek novel sketching solutions with an emphasis on the sliding window model, multi-dimensional data, and implementations inside SDNs and programmable data-planes.
Development of structure and function in trained networks

Learning a new skill requires assimilating into our brain the regularities of the external world and how our body interacts with them as we engage in this skill. Mechanistically, this entails a translation of inputs, rules, and outputs into changes to the structure of neural networks in our brain. How this translation occurs is still largely unknown. We will follow the process of this assimilation using Trained Recurrent Neural Networks (TRNNs), which are increasingly used as models of neural circuits of trained animals.

People:
Omri Barak
Cancer resistance and metastasis as a learning process

Cancer cells embedded in healthy tissue can revert to normal cells, and vice versa for healthy tissue in a tumor environment. This highlights two parallel learning processes, at the cell and tissue levels, in the development or suppression of disease. Cancer cells use their intrinsic dynamic plasticity to escape and explore novel states. Simultaneously, tissue homeostasis is a target of the collective of cells forming the tissue, which suppresses this exploration and keeps cell types stable. We use the language of machine learning to characterize these two learning processes.

People:
Omri Barak
Space of solutions in recurrent neural networks

Training machine learning algorithms often introduces the phenomenon of underspecification: a wide gap between the dataset used for training and the real task. A parallel phenomenon in neuroscience is the variety of strategies with which animals can approach a given task. These observations imply that for every task and training set there exists a space of solutions that is equivalent on that set. Both the structure of this space and the rules of motion within it are not understood. In this work, we study the space of solutions that emerges from those degrees of freedom in Recurrent Neural Networks (RNNs) trained on neuroscience-inspired tasks.

People:
Omri Barak
Navigation of Mobility Impaired Pedestrians

Due to constraints associated with the urban form and the lack of customized assistive technologies, the mobility and independence of impaired pedestrians are limited, confining them to their homes and causing them to be socially isolated. Using machine learning, the research objective is two-fold: to predict missing environmental data in digital maps, and to use the augmented maps to calculate optimized routes tailored for this community. The outcome will be implemented in practical customized navigation services.
Nonsmooth bilevel optimization using first-order methods

Bilevel optimization problems arise in many ML and signal processing applications, where the aim is to find the minimal-norm or sparsest optimal solution of an underdetermined optimization problem. Traditionally, these problems have been solved by regularization, which requires tuning of the regularization parameter. We focus on an alternative approach that utilizes first-order optimization methods to solve the problem directly, for which we provide rate-of-convergence guarantees.
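
A small worked example of the problem class (not the group's algorithm): for an underdetermined least-squares problem, plain gradient descent initialized at zero converges to the minimal-norm optimal solution, i.e., it solves the bilevel problem "minimize the norm over the set of least-squares minimizers" without any regularization parameter to tune.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 12))        # underdetermined system: many optimal solutions
b = rng.normal(size=5)

x = np.zeros(12)                    # initialize at zero
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(20000):
    x -= step * A.T @ (A @ x - b)   # gradient step on the inner problem ||Ax - b||^2

x_min_norm = np.linalg.pinv(A) @ b  # reference: the minimal-norm least-squares solution
print("residual:", np.linalg.norm(A @ x - b))
print("distance to min-norm solution:", np.linalg.norm(x - x_min_norm))
```
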

Adaptive robust radio therapy planning

Adaptive planning of radiotherapy treatment is based on inaccurate and evolving bio-marker information collected from imaging during the treatment. A radiotherapy plan specifies the amount and angle of radiation at each stage of the treatment, where the goal is to deliver the maximal dose to the tumor while protecting healthy organs. The challenge arises because the resulting problem is a large-scale mixed-integer problem, and because the optimal decisions depend on future bio-marker levels.

Data-driven multi-stage Stochastic Optimization

Multi-stage linear stochastic optimization problems are known to be challenging. An added difficulty arises when the distribution of the uncertainty is not known exactly and only historical sample paths of the problem are available. We explore solving this problem by using data-driven distributionally robust optimization, for which we provide convergence guarantees. Additionally, we explore solving the resulting optimization problem by approximation methods.

Recommendation: a dynamical-systems perspective

Modern recommendation platforms have become complex, dynamic eco-systems. Platforms often rely on machine learning models to successfully match users to content, but most methods neglect to account for how they affect user behavior, satisfaction, and well-being over time. Here we propose a novel dynamical-systems perspective on recommendation that allows us to reason about, and control, macro-temporal aspects of recommendation policies as they relate to user behavior.

 

Learning Representations by Humans, for Humans

The task of optimizing machines to support human decision-making is often conflated with that of optimizing machines for accuracy, even though they are materially different. Whereas typical learning systems prescribe actions through prediction, our framework learns to reframe problems in a way that directly supports human decisions. Using a novel human-in-the-loop training procedure, our framework learns problem representations that directly optimize human performance.

Strategic Classification Made Practical

Machine learning has become imperative for informing decisions that affect the lives of humans across a multitude of domains. But when people benefit from certain predictive outcomes, they are prone to act strategically to improve those outcomes. Our goal in this project is to develop a practical learning framework that accounts for how humans behaviourally respond to classification rules. Our framework provides robustness while also providing means to promote favourable social outcomes.

Machine learning tools for missing and censored data

When using machine learning algorithms, it is often assumed that the data is complete.  In real-life applications, however, this assumption is usually over-optimistic. “Missingness” can happen in many ways: some missing covariates, some missing responses, only a lower bound is given for the response (i.e., the response is right censored), observations are seen only if they crossed some level (i.e., left truncation), or a label is given only to a bag of observations. We develop machine learning tools that can handle missing data, using imputation, inverse probability weighting, and doubly-robust estimators.
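
One of the tools mentioned above, inverse probability weighting, can be sketched in a few lines: fit a model for the probability that a response is observed given the covariates, then reweight the observed responses by the inverse of that probability to correct the bias that naive averaging would incur. The example uses synthetic missing-at-random data and is illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)       # full responses (unknown in practice)

# Missing at random: responses are more likely to be observed when x is large.
p_obs = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))
observed = rng.binomial(1, p_obs).astype(bool)

naive = y[observed].mean()                   # biased: ignores the missingness mechanism

# IPW: model P(observed | x) and weight each observed response by its inverse.
prop_model = LogisticRegression().fit(x.reshape(-1, 1), observed)
p_hat = prop_model.predict_proba(x.reshape(-1, 1))[:, 1]
ipw = np.sum(y[observed] / p_hat[observed]) / np.sum(1 / p_hat[observed])

print(f"true mean {y.mean():.3f}  naive {naive:.3f}  IPW {ipw:.3f}")
```
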

 

Measuring uncertainty of machine learning predictions

Data scientists are interested in answering questions such as how confident one is in a prediction, and whether a certain feature has a significant influence on the response variable. Drawing statistical inference for machine learning algorithms is difficult. We study methods for performing statistical inference for two common machine learning techniques: kernel machines and deep learning. We utilize Bayesian methods to quantify uncertainty, select hyper-parameter values, and to bound the generalization error. We propose novel PAC-Bayes generalization bounds which can be data-dependent.

 

Fighting COVID-19 by learning from data

To help policymakers set policy based on scientific methods, we use mathematical modeling and advanced statistical tools to study different aspects of the COVID-19 pandemic. Our research includes learning the susceptibility and infectivity of children and adolescents; the protection of vaccination and previous SARS-CoV-2 infection in preventing subsequent SARS-CoV-2 infection and other COVID-19 outcomes; and the effect of COVID-19 on different aspects of public health, such as suicide rate and natural abortion.

Redundant Storage Service on the Edge

This project will enable unreliable edge computing nodes to jointly provide a reliable storage service for unpredictable user workloads. Edge systems consist of small-scale servers (nodes) at the edge of the network whose root is in the cloud-based datacenter. Their premise is to bring data and computing closer to time-critical applications running on, e.g., cellphones and autonomous vehicles. We combine storage redundancy schemes with scalable algorithms for object mapping and request scheduling.

Non-invasive Brain-Computer Interfaces

Non-invasive brain-computer interfaces (BCIs) provide a direct communication link from the brain to external devices. We develop non-invasive BCIs that are based on interpreting EEG measurements to identify the user’s desired selection, action, or movement. We focus on developing self-correction capabilities based on error-related potentials (ErrPs), which are evoked in the brain when errors are detected. We investigate ErrPs, develop classifiers for detecting them, and develop methods to integrate them to improve BCIs. This project is funded by the Dr. Maria Ascoli Rossi Research Grant.

Invasive Brain-Machine Interfaces

Invasive Brain-Machine Interfaces (BMIs) provide a direct communication link from the brain to external devices. Invasive BMIs are based on interpreting neural activity recorded with invasive electrodes, identifying desired movements, and controlling external devices accordingly. We develop algorithms to identify error-related processing in the neural activity and to correct the BMIs accordingly. This project is performed in collaboration with Chestek’s Lab at the University of Michigan and funded by the Betty and Dan Kahn Foundation.

Reinforcement learning of assembly policies

Our research focuses on developing control policies that are based on admittance control to facilitate learning and sim2real transfer. This is part of a large project on Assembly by Robotic Technology (ART) funded by the Israel Innovation Authority. We developed a Residual Admittance Policy (RAP) that generalizes well over space, size, and shape, and facilitates quick transfer learning. Most impressively, we demonstrate that the policy learned in simulation is highly successful in controlling an industrial robot (UR5e) to insert pegs of different shapes and sizes, without further training.

Development of agonistic compounds for therapy of inflammatory autoimmunity

The activity of autoimmune T cells is tightly regulated by two major types of regulatory T cells: those that primarily express the fork-head gene FOXP3 (FOXP3+ regulatory T cells, also named Tregs), and those that do not (T regulatory-1 cells, also named Tr1). We have developed agonists that potentiate each sub-type and could be used for therapy of different autoimmune diseases.

Novel stabilized CXCL9/CXCL10 compounds for cancer immunotherapy

Long ago we reported that the CXCR3 ligands CXCL10 and possibly CXCL9 potentiate effector T cells, and therefore their stabilized forms could be used for cancer immunotherapy. It appears that due to post-transcriptional modifications (PTM), these compounds are rapidly inactivated at the tumor site. We have developed unique compounds that are resistant to these PTM and can effectively be used for cancer immunotherapy.

Online Variance Reduction for Stochastic Optimization

Modern stochastic optimization methods often rely on uniform sampling, which is agnostic to the underlying characteristics of the data. This might degrade convergence by yielding estimates that suffer from high variance. A possible remedy is to employ non-uniform importance sampling techniques, which take the structure of the dataset into account. In this work, we investigate a recently proposed setting that poses variance reduction as an online optimization problem with bandit feedback. We devise a novel and efficient algorithm for this setting that finds a sequence of importance sampling distributions competitive with the best fixed distribution in hindsight, the first result of this kind. While we present our method for sampling data points, it naturally extends to selecting coordinates or even blocks thereof. Empirical validations underline the benefits of our method in several settings.
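
A small numerical illustration of why non-uniform importance sampling reduces variance (a static, oracle-weighted example; the project's contribution is learning such a distribution online from bandit feedback, which this sketch does not attempt): when a few examples dominate the gradient, sampling them more often and reweighting keeps the estimator unbiased while shrinking its variance.

```python
import numpy as np

rng = np.random.default_rng(0)
# Per-example "gradients" with very uneven magnitudes (a few dominant examples).
grads = rng.normal(size=1000) * np.concatenate([np.full(10, 50.0), np.ones(990)])
full_grad = grads.mean()

def estimate(probs, n_draws=1):
    """Unbiased importance-sampled estimate of the mean gradient."""
    i = rng.choice(len(grads), size=n_draws, p=probs)
    return np.mean(grads[i] / (len(grads) * probs[i]))

uniform = np.full(len(grads), 1 / len(grads))
importance = np.abs(grads) / np.abs(grads).sum()   # oracle: proportional to |gradient|

var_u = np.var([estimate(uniform) for _ in range(5000)])
var_i = np.var([estimate(importance) for _ in range(5000)])
print(f"full gradient {full_grad:.3f}, variance uniform {var_u:.2f} vs importance {var_i:.2f}")
```
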

Causal-inspired unsupervised domain adaptation
We are using ideas inspired by causal inference to address a difficult problem in machine learning: unsupervised domain adaptation. For example, we wish to train on data from one hospital and succeed on other, unseen hospitals; or train on images from one setting and test on images from many different settings.
People:
Uri Shalit
Fusing mechanistic and data-driven models

We are building theoretical and practical models that take as input both a mechanistic world model (for example, an ordinary differential equation describing the cardio-vascular system) and data (for example, ICU patient vital signs). The goal is to get the best of both worlds: the robustness, interpretability, and causal grounding of mechanistic models, together with the flexibility of black-box deep learning models.

People:
Uri Shalit
Individual-level causal inference for health outcomes
In collaboration with health providers such as Clalit Health Services and Rambam Health Campus we are developing individual-level causal inference tools that will give accurate and safe treatment recommendations to patients.
People:
Uri Shalit
Charge transport through heterojunctions
Charge transport formulation is developed for evaluating electronic conductivity across triple material junctions.
Identifying travel problems based on big data

Big data sources have been used extensively to analyze people’s travel patterns. This project breaks new ground by using big data on travel patterns to identify the incidence and severity of travel problems – defined here as any difficulty a person may experience in reaching desired destinations. Relying on a large-scale app-based mobility survey, data will be extracted on individuals’ trip rates, travel horizons, trip speeds, and more, with the aim of detecting individuals particularly likely to experience severe travel problems.

Studying response to immunotherapy via single-cell data

Immunotherapy has revolutionized cancer therapy, leading to the 2018 Nobel Prize in Physiology or Medicine. However, despite the dramatic response observed in several cancer types, many patients do not benefit from this treatment or relapse in a relatively short time. To improve our understanding of patient response, we utilize single-cell RNA-seq data to characterize the tumor’s microenvironment, identify biomarkers of response, and predict novel drug targets.

Studying tumor-immune metabolic interactions

The use of immunotherapy for solid tumors has expanded dramatically with the development of checkpoint blockade therapy. Despite the unprecedented responses observed in different tumor types, many patients are refractory to therapy or acquire resistance. Growing evidence shows that the metabolic requirements of immune cells in the tumor microenvironment greatly influence the success of therapy. Here we use genomic and metabolic modeling analysis to reveal the metabolic dependencies between tumor and immune cells and identify perturbations that can increase immune activity.

Studying resistance to PARPi in pancreatic cancer

Pancreatic cancer is among the most aggressive human malignancies, with only a 6% 5-year survival rate. Recently, it was found that a subgroup of patients carry mutations in the homologous recombination (HR) genes BRCA1 or BRCA2, and that these tumors are sensitive to PARP inhibitors. However, responses are infrequent and the subset of patients suitable for the treatment is limited. Here we use genomic data to computationally identify molecular signatures of response to be used as biomarkers, and aim to increase the number of patients who can benefit from the treatment.

Emotional Load

In this project we developed and validated a new sentiment analysis engine for conversational data, called CustSent, in collaboration with LivePerson Inc.
We then developed the novel concept of emotional load – the load that employees must bear due to the emotional strain inherent in the service interactions in which they engage. Using contact center and healthcare data we investigate the impact of Emotional Load on agents and the progression of the service interaction.

Information Transparency in Emergency Departments

We investigate how the transparency of the medical process and wait time information influence ED patients. In collaboration with Clalit Health Services, we developed a web-based app that delivers information to ED patients through their mobile phones. The development combines methods of process mining, queueing theory, and human-centered UX design. The system operates at Carmel Medical Center. Our research examines the impact of information transparency on ED efficiency and patient behavior.

Contact Center Operations

Contact centers (CS) are considered the future of service delivery, offering service via texting, social media, and apps. These provide companies with unique opportunities, such as providing service proactively only to the customers that need it the most, but are also prone to new operational challenges, such as concurrency management and information uncertainty. CS data allow us to investigate the dynamics of service production and the behaviors of customers and agents. In a series of projects, we create new service models for CS and control policies for those systems.

Energy consumption and visual comfort in buildings

The aim of the project is to develop a new methodology for deciphering the human factor in illuminance-related building operation by taking advantage of recent developments in commercial building automation systems and the increasing prevalence of digital control systems for shading operation. The project involves the analysis of a large-scale dataset of long-term roller blinds operation in a multi-story office building in Tel Aviv, reflecting user preferences on indoor lighting conditions.

Development of an integral microclimatic analysis tool

The aim of the project is to address an existing gap in the evaluation and modelling of urban microclimates, their effects on human thermal stress and perception, and the application of scientific data in urban planning processes. This is achieved through the creation of a single computational data collection and analysis platform that integrates biophysical comfort indices and urban-scale physical, climatic, and pedestrian mapping.

(led by Prof. David Pearlmutter, Ben Gurion University of the Negev)

Shade maps for climatic urban planning and design in Tel Aviv-Yafo

The aim of the project is to develop a new methodology for evaluating microclimatic summer conditions across an entire city, focusing on the provision of outdoor shade as a primary comfort indicator. Based on high-resolution 2.5D mapping of buildings, ground, and tree canopies, we employ detailed calculation of solar exposure at street level and propose the use of a summer Shade Index as a quantifiable factor for revealing a city’s hierarchy of microclimatic qualities.

Dynamic Database Embeddings with FoRWaRD

We study the problem of computing embeddings of the tuples of a relational database in a manner that is extensible to dynamic changes of the database. Importantly, the embedding of existing tuples should not change due to the embedding of newly inserted tuples (as database applications might rely on existing embeddings), while the embedding of all tuples, old and new, should retain high quality. Our preliminary solutions show promising results relative to the alternatives, consistently and often considerably.

Properties of Inconsistency Measures for Databases
How should we quantify the amount of inconsistency in the database?

Proper inconsistency measures are important for various tasks, such as progress indication and action prioritization in data cleaning, and reliability estimation for datasets. We investigate a collection of basic measures proposed in both the Knowledge Representation and Database communities, analyze their theoretical properties, and empirically observe their behavior in an experimental study. We demonstrate how the framework can lead to new inconsistency measures by introducing a new measure that satisfies all of the properties we consider and can be computed efficiently.
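
For illustration only, the sketch below computes two simple candidate measures for a relation violating a single functional dependency; both the measures and the toy relation are assumptions made here, not the specific measure introduced in this work.

```python
# Toy inconsistency measures for a single functional dependency (FD).
# Simplified stand-ins for illustration; not the measures defined in the paper.
from collections import Counter, defaultdict

# Hypothetical relation with FD: city -> zip
rows = [
    ("Haifa", "3200003"),
    ("Haifa", "3200003"),
    ("Haifa", "3498838"),    # conflicts with the two rows above
    ("Tel Aviv", "6100000"),
    ("Tel Aviv", "6100001"), # conflicts with the row above
]

groups = defaultdict(list)
for city, zipcode in rows:
    groups[city].append(zipcode)

# Measure 1: number of conflicting tuple pairs (agree on city, disagree on zip).
conflicting_pairs = 0
for zips in groups.values():
    counts = Counter(zips)
    total_pairs = len(zips) * (len(zips) - 1) // 2
    agreeing_pairs = sum(c * (c - 1) // 2 for c in counts.values())
    conflicting_pairs += total_pairs - agreeing_pairs

# Measure 2: minimal number of tuple deletions needed to satisfy the FD
# (keep, per city, only the most frequent zip value).
min_deletions = sum(len(z) - max(Counter(z).values()) for z in groups.values())

print("conflicting pairs:", conflicting_pairs)   # 3
print("minimal deletions:", min_deletions)       # 2
```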

The Importance of Literacy in Young Children
In this study, we examine language development in toddlers aged 2–3.5 years and brain synchronization between mother and child using EEG, while they perform various activities around story reading and listening.
The Role of Executive Functions in Hebrew-Speaking Children
We examine the role of executive functions in reading in children aged 8–12 using an adaptive intervention program developed in our lab. We use neuroimaging tools such as functional and structural MRI, as well as EEG, to define patterns that may predict greater gains from the intervention.
Cardiac Imaging
We are using echocardiography (ultrasound) to study the function of the heart in mice, rats, and patients.
Genomics and epigenetics
We map the chromatin in cells using high-throughput sequencing approaches such as ATAC-seq, ChIP-seq, and single-cell sequencing. We use CRISPR-based functional assays to understand and identify regulatory elements.
Intelligent systems for supporting student learning
We are interested in developing intelligent systems that support students’ learning. One project develops “invention activities” for students learning data science, supported by automatic feedback mechanisms. This approach aims to facilitate improved understanding of data science concepts by letting students invent and test quantitative measures. In a second project, we are developing an intelligent system for supporting student collaboration on joint projects. We are designing algorithms for analyzing students’ work and designing interfaces that will provide collaborators with actionable information regarding the group’s progress.

People:
Ofra Amir
Explainable Reinforcement Learning
Understanding the capabilities and limitations of agents is important for users, as they need to choose between different agents, adjust the level of autonomy of an agent, or work alongside an agent. While prior work in explainable AI has developed methods for explaining individual decisions of an agent to a person retrospectively, these approaches do not provide users with a global understanding of an agent’s expected behavior in a range of situations. We are developing explanation methods for reinforcement learning agents.

People:
Ofra Amir
Precision Agriculture
The precision agriculture (PA) concept is based on observing, measuring, and responding to inter- and intra-field variability in crops or livestock. The goal is to facilitate a decision support system (DSS) for whole-farm management that optimizes returns on inputs while preserving resources. Among the many possible approaches, we focus on three specific applications: precise irrigation, early crop disease detection, and early detection of pain in dairy cows.

Hydro-Informatics
This research consists of the development and validation of effective, reliable, and applicable algorithms for early detection (ED) of contamination in drinking water (DW) from one or more sources, using data from water-quality (WQ) sensors. Specifically, anomaly detection in UV-absorbance spectra is used as a means of contamination detection. An additional ED algorithm has also been developed, utilizing WQ measurements of standard physicochemical parameters. The algorithm’s high performance, together with its simplicity, adjustability, ease of implementation, and low computational complexity, makes it a valuable addition to water monitoring systems. Testing the performance of the two ED algorithms showed that processing physicochemical WQ measurements to detect anomalies can serve as an effective early-detection system for DW contamination.
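
A minimal sketch of the spectrum-based anomaly-detection idea, assuming a set of clean reference spectra is available; the fabricated data and the Mahalanobis-distance detector below are illustrative stand-ins rather than the project's ED algorithms.

```python
# Illustrative spectrum-based anomaly detector for drinking water.
# Assumes a reference set of "clean" UV-absorbance spectra; data are fabricated.
import numpy as np

rng = np.random.default_rng(1)
n_wavelengths = 30

# Fabricated clean baseline spectra and one contaminated-looking sample.
baseline = 0.5 + 0.05 * rng.standard_normal((200, n_wavelengths))
clean_sample = 0.5 + 0.05 * rng.standard_normal(n_wavelengths)
contaminated_sample = clean_sample + 0.4 * np.exp(
    -0.5 * ((np.arange(n_wavelengths) - 10) / 3.0) ** 2)  # absorbance bump

mu = baseline.mean(axis=0)
cov = np.cov(baseline, rowvar=False) + 1e-6 * np.eye(n_wavelengths)
cov_inv = np.linalg.inv(cov)

def mahalanobis(x):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Alarm threshold: 99th percentile of scores on the clean baseline.
threshold = np.quantile([mahalanobis(s) for s in baseline], 0.99)
for name, sample in [("clean", clean_sample), ("contaminated", contaminated_sample)]:
    score = mahalanobis(sample)
    print(f"{name}: score={score:.1f}, alarm={score > threshold}")
```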

Atmospheric Informatics
Recent developments in sensory and communication technologies have made low-cost micro-sensing units (MSUs) feasible. These MSUs can operate as individual nodes or be interconnected to form a Wireless Distributed Environmental Sensor Network (WDESN). MSUs’ low power consumption and small size enable many new applications, such as mobile sensing. Their main limitation is their relatively low accuracy with respect to laboratory equipment or an air-quality monitoring (AQM) station. In this project we examine algorithms for assessing these sensors in field operation, autonomous calibration and error concealment, optimal sensor placement and the utilization of mobile sensors, and advanced data-analysis algorithms, which together provide a comprehensive toolset for atmospheric data analysis.
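
One of the ingredients above, field calibration of an MSU against a co-located reference station, can be illustrated with a simple linear fit; the data and model below are fabricated assumptions, and the project's calibration methods are considerably more elaborate.

```python
# Illustrative calibration of a low-cost micro-sensing unit (MSU) against a
# co-located reference (AQM) station using ordinary least squares.
import numpy as np

rng = np.random.default_rng(2)

reference = rng.uniform(5, 80, size=500)                    # AQM readings
msu_raw = 0.6 * reference + 7.0 + rng.normal(0, 3, 500)     # biased, noisy MSU

# Least-squares fit: reference ~ a * msu_raw + b
A = np.column_stack([msu_raw, np.ones_like(msu_raw)])
(a, b), *_ = np.linalg.lstsq(A, reference, rcond=None)

calibrated = a * msu_raw + b
rmse_before = np.sqrt(np.mean((msu_raw - reference) ** 2))
rmse_after = np.sqrt(np.mean((calibrated - reference) ** 2))
print(f"gain a={a:.2f}, offset b={b:.2f}")
print(f"RMSE before calibration: {rmse_before:.1f}, after: {rmse_after:.1f}")
```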

Collaborative Top-K Queries with Uncertain Scores
The overarching goal of this research is to investigate recommendation tasks from a probabilistic perspective. We aim to confront data uncertainty directly as part of the recommendation process and to propose new probabilistic ranking techniques for various recommendation tasks. We look for new semantics, evaluation measures, and efficient processing methods suited to various recommendation tasks, toward designing a general framework for generating high-quality recommendations.

Goal Recognition Design
Goal recognition design is the problem in which we take a domain theory and a set of goals and ask:
1) to what extent do the actions performed by an agent within the model reveal its objective, and 2) what is the best way to modify a model so that any agent acting in it reveals its objective early on. As a first stage, goal recognition design finds the worst-case distinctiveness (wcd) of a model; as a second stage, after finding the wcd, we aim to minimize it.
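
A toy reading of worst-case distinctiveness is sketched below: given optimal plans for each goal, it takes the longest prefix of actions that is still consistent with more than one goal. The grid actions and this prefix-based simplification are assumptions for illustration, not the formal definition used in the project.

```python
# Toy wcd estimate: the largest number of steps an agent can take while its
# action sequence still matches optimal plans toward at least two goals.
from itertools import combinations

optimal_plans = {
    "goal_A": ["up", "up", "right", "right"],
    "goal_B": ["up", "up", "left", "left"],
    "goal_C": ["right", "right", "up", "up"],
}

def common_prefix_len(p, q):
    n = 0
    for a, b in zip(p, q):
        if a != b:
            break
        n += 1
    return n

wcd = max(common_prefix_len(p, q)
          for p, q in combinations(optimal_plans.values(), 2))
print("worst-case distinctiveness (toy):", wcd)  # 2: "up, up" fits goals A and B
```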

Deciphering hippocampal calcium imaging activity during behavior
The hippocampus is known to contain place cells, which code the animal’s position in the environment. We record data from hundreds of cells simultaneously using calcium imaging in freely foraging mice, which gives us an opportunity to analyze the network properties and dynamics of hippocampal place cells during foraging and other behavioral tasks.

Algorithms for Bi-Level optimization problems
This project focuses on methods that smartly exploit the special structure of the constraint set (as the solution set of another optimization problem) and involve explicit operations for solving bi-level optimization problems. Among the several theoretical results we have provided in recent papers, the convergence-rate result for the sequence of function values is special, since it is the first of its kind. This is a thriving area of research, in need of new algorithms for tackling various bi-level problems.

Methods for Wireless Sensor Network Localization
This project focuses on the design, analysis, development, and practical implementation of simple algorithms for solving the Wireless Sensor Network (WSN) localization problem. In a recent paper, we solve the original non-convex and non-smooth formulation using first-order methods. We proposed a parameter-free algorithmic framework that covers the whole spectrum from a fully centralized to a fully distributed implementation and can also achieve partial parallelization.
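
The flavor of a first-order approach can be illustrated on a smooth least-squares variant of the localization objective; the anchors, step size, and objective below are assumptions for the sketch and differ from the non-smooth, parameter-free framework developed in the paper.

```python
# Toy first-order sketch for sensor localization: place an unknown sensor by
# minimizing squared range residuals to anchors via gradient descent.
import numpy as np

rng = np.random.default_rng(3)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.05, 4)

x = np.array([5.0, 5.0])            # initial guess
step = 0.05
for _ in range(500):
    d = np.linalg.norm(anchors - x, axis=1) + 1e-12
    residuals = d - ranges
    # Gradient of 0.5 * sum_i (||x - a_i|| - r_i)^2
    grad = ((residuals / d)[:, None] * (x - anchors)).sum(axis=0)
    x = x - step * grad

print("estimated position:", np.round(x, 2), "true:", true_pos)
```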

Optimization methods for deep neural networks
In this project, we address structured deep learning optimization problems, which are given by the sum of non-convex and non-smooth functions. As an example, we study a particular structure where the non-smoothness is represented as the maximum of non-convex smooth functions. Recently, for this maximum structure, we have developed the Stochastic Proximal Linear Method (SPLM), which is guaranteed to reach a critical point of the learning objective, and analyzed its convergence rate.

Mechanisms of cancer cells’ anchorage-independence
A hallmark of cancer cells is their ‘anchorage-independence’, i.e., they are able to grow under conditions that do not support strong attachment of the cells. This trait was identified more than six decades ago but is still poorly understood from a mechano-biological point of view. Our lab studies the ways by which cancer cells lose their normal mechanosensing abilities and become independent of signals from their environment.

Mechanobiology of pancreatic cancer
Pancreatic ductal adenocarcinoma (PDAC) is an extremely deadly disease that is projected to become the second-most deadly cancer in the next decade. PDAC is characterized by an extremely dense and stiff extracellular matrix that surrounds the tumor cells, which is considered to play a major role in PDAC progression and metastasis. Our lab studies the interactions between PDAC cells and their environment with the goal of identifying potential mechanobiological therapeutic targets.

Cellular sensing of environmental mechanical signals
Cells in our bodies respond not only to biochemical signals (hormones, growth factors), but also to the mechanical features of their environment, including, e.g., topography and rigidity. This indicates that cells can actively probe their environment. Our lab studies the fundamental mechanisms by which this sensing is achieved. We combine the use of nano- and micro-fabricated surfaces with advanced imaging and machine learning for image analysis to study the subcellular machineries involved in this process.

Situated Temporal Planning
In domains where planning is slow compared to the evolution of the environment, it can be important to take into account the time taken by the planning process itself.  For one example, plans involving taking a certain bus are of no use if planning finishes after the bus departs.  We call this setting situated temporal planning and we define it as a variant of temporal planning with timed initial literals.

Coordinating Multiple Robots Using Social Laws
Robots operating in the real world must perform their task in an uncertain, partially observable environment, while interacting with other robots. This interaction makes the problem much more difficult to solve. The key insight motivating this project is that it is possible to make the robot’s online job much easier by modifying the problem setting offline, before the robot starts operating, by instituting a social law: a convention governing what constitutes allowed behavior.

Implementing a Precision Medicine Paradigm in Primary Care Clinics
A randomized controlled trial of 20 intervention clinics and 20 usual-care control clinics to establish the value (better health? better use of resources?) of implementing precision medicine tools in primary clinical practice. The intervention includes DNA testing on platforms of different levels (from NGS panels to GWAS, WES, and WGS), microbiome testing, and the use of wearable devices/sensors. The adult population of the study clinics includes some 140,000 people, and if sufficient resources are obtained, the study is expected to reach some 100,000 participants. Current resources have allowed us to break ground in one clinic, where 1,660 people have already signed consent forms. The study is National IRB approved.

Precision medicine - pharmacogenetics
A GWAS-based study of >10,000 Israelis of various ethnicities, serving among other purposes to establish an ethnicity-specific (Jews/Arabs, Ashkenazi/Sephardi) atlas of frequencies of pharmacogenetic variants. We identify new associations between medication use in this cohort and the identified SNPs. The GWAS was carried out using the Illumina 500K Onco SNP array. The study is National IRB approved and funded by MOST.

Gene-environment interactions in the etiology of common cancers
More than 40,000 participants in case-control studies of breast/colorectal/lung/gynecological/pancreato-hepato-biliary cancers. For each participant we have a long entry questionnaire (800 questions: health habits, health status, family history, and more), a blood sample (DNA), a tumor tissue sample (for many), and EMR follow-up. Every cancer case has a matched control without cancer. All studies are National IRB approved. Partially funded by various agencies, including BCRF and ICRF.

Adaptive LiDAR Sampling
As LiDAR sensors for depth acquisition advance to solid-state technologies, new capabilities raise new theoretical and technological challenges. In particular, we investigate the benefits afforded by controlling and changing the sampling scheme in real time (adaptive sampling). We use a neural network to predict the optimal sampling scheme per scene, given a fixed sampling budget. We found that, for a given RMSE, the sampling budget can be reduced by a factor of about 4 on average. Various strategies and algorithms are examined.

People:
Guy Gilboa
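
The budget-constrained selection step of adaptive sampling can be sketched as follows: given an importance map (in the project, predicted per scene by a neural network; here fabricated), keep only the top-B pixel locations as the sampling pattern. Names and shapes are assumptions made for the sketch.

```python
# Illustrative budgeted-sampling step for adaptive LiDAR: select the top-B
# locations of an importance map as the sampling pattern.
import numpy as np

rng = np.random.default_rng(4)
H, W = 64, 64
budget = int(0.05 * H * W)              # fixed sampling budget (5% of pixels)

importance = rng.random((H, W))         # stand-in for the network's prediction

flat_idx = np.argpartition(importance.ravel(), -budget)[-budget:]
mask = np.zeros(H * W, dtype=bool)
mask[flat_idx] = True
mask = mask.reshape(H, W)               # True where the LiDAR should sample

print("samples used:", int(mask.sum()), "out of", H * W)
```
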
Gradient flows
We investigate analytic and numerical solutions of nonlinear gradient flows. We examine the flows as nonlinear PDEs and use tools from nonlinear spectral theory. We have recently revealed relations between dynamic mode decomposition (DMD), a common tool in fluid dynamics, and nonlinear eigenfunctions related to homogeneous flows. Through this lens, we are investigating gradient descent algorithms for complex systems.

People:
Guy Gilboa
Nonlinear spectral theory
We examine how to define systems and signals through nonlinear eigenvalue analysis. For example – we developed an image representation based on the total-variation transform. We also examine neural-networks through eigen-analysis and design algorithms to reveal their (nonlinear) eigenfunctions.
People:
Guy Gilboa
A social-constructivist approach to online ethics education
The growing trend of shifting from classroom to distance learning in ethics education programs raises the need to examine ways for adapting best instructional practices to online modes. To address this need, the current study is set to apply a social constructivist approach to an online course in research ethics and to examine its effect on the learning outcomes of science and engineering graduate students.

People:
Miri Barak
AugmentedWorld: Promoting 21st century skills

AugmentedWorld is an open, collaborative, and interactive location-based platform, purposefully designed to provide science teachers and students an online tool for generating multimedia-rich questions. It is based on the notion that questions are the source of all knowledge and that students should be skilled in generating questions and not only in answering them. Our goal is to examine the cognitive and social impact of AugmentedWorld on science teachers and students.

People:
Miri Barak
Cultivating innovation among engineering students
At the brink of the fourth industrial revolution, a significant transition is taking place from simple digitization to innovation-based technology. Innovation, the process of generating new ideas and transforming them into practical solutions, is a catalyst for progress in our fast-changing world. The goal of our study is to assess the innovation level of engineering students’ team projects and to examine the relationships between project innovation and team heterogeneity in online and F2F environments.

People:
Miri Barak
Training surgical skills using sensors in simulators
We have several simulators for training medical doctors in cutting-edge surgical skills. We use insights regarding biases in mental effort regulation to improve self-training protocols.
Use user behavior to improve automatic database schema matching
Database schema matching is a challenging task that has called for improvement for several decades. Automatic algorithms fail to provide sufficiently reliable results. We use human matching to overcome algorithm failures, and vice versa. We treat human and algorithmic matchers as imperfect matchers with different strengths and weaknesses. We use insights from cognitive research to predict human matchers' behavior and to identify those who can do better than others. We then merge their responses with algorithmic outcomes and obtain better results.
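
A toy sketch of the merging idea is shown below: algorithmic similarity scores are combined with human judgments weighted by their predicted reliability. The scores, votes, and weights are fabricated, and the actual merging model used in this research is different.

```python
# Toy merge of algorithmic and human schema matchers; all numbers fabricated.
import numpy as np

attributes_src = ["cust_name", "cust_phone"]
attributes_tgt = ["client", "telephone"]

# Algorithmic matcher: similarity in [0, 1] for each (source, target) pair.
algo_sim = np.array([[0.55, 0.10],
                     [0.20, 0.40]])

# Two human matchers' binary judgments and their predicted reliability.
human_votes = [np.array([[1, 0], [0, 1]]), np.array([[1, 0], [1, 0]])]
reliability = [0.9, 0.4]

human_part = sum(w * v for w, v in zip(reliability, human_votes)) / sum(reliability)
merged = 0.5 * algo_sim + 0.5 * human_part   # equal-weight blend of the two sources

for i, src in enumerate(attributes_src):
    j = int(np.argmax(merged[i]))
    print(f"{src} -> {attributes_tgt[j]} (score {merged[i, j]:.2f})")
```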

Infer subjective confidence of users based on mouse tracking
Our team currently works on several projects involving challenging tasks, including riddle solving, database schema matching, and text design in a word processor. In all cases we aim to predict people’s confidence in their success in the task based on their mouse movements before choosing their response and while rating their confidence on a continuous scale.

Information design
Consider a setting where one agent holds private information and would like to use her information to motivate another agent to take some action. When the agents’ interests coincide, the answer is easy – disclose the full information. In this project we study the optimal information design when the agents’ incentives are misaligned.

Expert testing
A self-proclaimed expert provides probabilistic forecasts over a sequence of events. In this project we ask: how can we distinguish genuine experts from charlatans?
Stochastic Image Denoising by Sampling from the Posterior Distribution
Image denoising is a well-known and well studied problem, commonly targeting a minimization of the mean squared error (MSE) between the outcome and the original image. Unfortunately, especially for severe noise levels, such Minimum MSE (MMSE) solutions may lead to blurry output images. In this work we propose a novel stochastic denoising approach that produces viable and high perceptual quality results, while maintaining a small MSE. Our method employs Langevin dynamics that relies on a repeated application of any given MMSE denoiser, obtaining the reconstructed image by effectively sampling from the posterior distribution. Due to its stochasticity, the proposed algorithm can produce a variety of high-quality outputs for a given noisy input, all shown to be legitimate denoising results. In addition, we present an extension of our algorithm for handling the inpainting problem, recovering missing pixels while removing noise from partially given data.
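
The core mechanism can be sketched in one dimension, where the MMSE denoiser of a Gaussian prior is known in closed form and the score of the smoothed distribution follows from Tweedie's formula; the sketch below only demonstrates this sampling mechanism and is not the paper's image-denoising algorithm.

```python
# Toy 1-D Langevin dynamics whose drift comes from an MMSE denoiser.
# The "image" prior is a scalar standard Gaussian, so the result can be checked.
import numpy as np

rng = np.random.default_rng(5)

sigma = 0.3          # smoothing noise level used by the denoiser
step = 0.01          # Langevin step size
n_steps, n_chains = 2000, 5000

def mmse_denoiser(y, s):
    # E[x | y] for x ~ N(0, 1), y = x + N(0, s^2)
    return y / (1.0 + s * s)

x = rng.normal(0, 3, n_chains)           # start far from the target
for _ in range(n_steps):
    # Tweedie's formula: score of the sigma-smoothed density from the denoiser.
    score = (mmse_denoiser(x, sigma) - x) / sigma**2
    x = x + step * score + np.sqrt(2 * step) * rng.standard_normal(n_chains)

# Samples should follow N(0, 1 + sigma^2), the sigma-smoothed prior.
print("sample mean:", round(float(x.mean()), 3),
      "sample std:", round(float(x.std()), 3),
      "expected std:", round(np.sqrt(1 + sigma**2), 3))
```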

High Perceptual Quality Image Denoising with a Posterior Sampling CGAN
The vast work in Deep Learning (DL) has led to a leap in image denoising research. Most DL solutions for this task have chosen to put their efforts on the denoiser’s architecture while maximizing distortion performance. However, distortion driven solutions lead to blurry results with sub-optimal perceptual quality, especially in immoderate noise levels. In this paper we propose a different perspective, aiming to produce sharp and visually pleasing denoised images that are still faithful to their clean sources. Formally, our goal is to achieve high perceptual quality with acceptable distortion. This is attained by a stochastic denoiser that samples from the posterior distribution, trained as a generator in the framework of conditional generative adversarial networks (CGAN). Contrary to distortion-based regularization terms that conflict with perceptual quality, we introduce to the CGAN objective a theoretically founded penalty term that does not force a distortion requirement on individual samples, but rather on their mean. We showcase our proposed method with a novel denoiser architecture that achieves the reformed denoising goal and produces vivid and diverse outcomes in immoderate noise levels.

Patch Craft: Video Denoising by Deep Modeling and Patch Matching
The non-local self-similarity property of natural images has been exploited extensively for solving various image processing problems. When it comes to video sequences, harnessing this force is even more beneficial due to the temporal redundancy. In the context of image and video denoising, many classically-oriented algorithms employ self-similarity, splitting the data into overlapping patches, gathering groups of similar ones and processing these together somehow. With the emergence of convolutional neural networks (CNN), the patch-based framework has been abandoned. Most CNN denoisers operate on the whole image, leveraging non-local relations only implicitly by using a large receptive field. This work proposes a novel approach for leveraging self-similarity in the context of video denoising, while still relying on a regular convolutional architecture. We introduce a concept of patch-craft frames – artificial frames that are similar to the real ones, built by tiling matched patches. Our algorithm augments video sequences with patch-craft frames and feeds them to a CNN. We demonstrate the substantial boost in denoising performance obtained with the proposed approach.

Perturbation models
Statistically reasoning about complex systems involves a probability distribution over exponentially many configurations. For example, semantic labeling of an image requires inferring a discrete label for each image pixel, resulting in a number of possible segmentations that is exponential in the number of pixels. Standard approaches such as Gibbs sampling are slow in practice and cannot be applied to many real-life problems. Our goal is to integrate optimization and sampling through extreme value statistics and to define a new statistical framework in which sampling and parameter estimation in complex systems are efficient. This framework is based on measuring the stability of predictions to random changes in the potential interactions.
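
The simplest instance of replacing sampling with optimization under random perturbations is the Gumbel-max trick, sketched below on a small discrete distribution so the result can be checked against the softmax probabilities; the full perturbation models apply this idea to structured, high-dimensional configuration spaces.

```python
# Gumbel-max trick: adding i.i.d. Gumbel noise to the (log-)potentials and
# taking the argmax yields exact samples from the Gibbs distribution.
import numpy as np

rng = np.random.default_rng(6)

potentials = np.array([2.0, 1.0, 0.0, -1.0])             # unnormalized log-weights
target = np.exp(potentials) / np.exp(potentials).sum()   # Gibbs / softmax probs

n_samples = 200_000
gumbel = rng.gumbel(size=(n_samples, potentials.size))
samples = np.argmax(potentials + gumbel, axis=1)          # perturb, then maximize

empirical = np.bincount(samples, minlength=potentials.size) / n_samples
print("target:   ", np.round(target, 3))
print("empirical:", np.round(empirical, 3))
```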

Factor Graph Attention models
Deep learning has revolutionized AI, and machine learning techniques can now be used to achieve human-like behavior. To better address complex tasks such as visual dialog or visual navigation, we designed a general factor-graph-based attention mechanism that can combine the high-dimensional information governing such tasks. This framework allowed us to win the CVPR 2020 visual dialog challenge.

Online Constrained Optimization
Online convex optimization has been extensively studied in the recent learning literature. In this ongoing theoretical work, we extend the framework to consider similarly formulated online average cost constraints.
Deep-Learning Flow Control in Cellular Channels
Cellular channels are increasingly used for sensitive real-time applications. For example, real-time video can now be broadcast over parallel cellular channels, possibly from a moving vehicle. Such channels are characterized by high variability and require improved flow control algorithms to maintain a stable flow. This work addresses the application of deep learning algorithms to develop suitable flow control and scheduling algorithms under real-time delay constraints.

Markov Decision Processes with Burstiness Constraints
Burstiness Constraints characterize various dynamic processes, such as traffic demand in communication networks. We consider the optimal control of MDPs subject to such constraints, providing the theoretical framework and effective algorithms for this problem.
Shape reconstruction
Computational methods in stereoscopic imaging and other depth from X, recognition, and understanding.
People:
Ron Kimmel
Non-rigid shape analysis
Finding computational methods for matching and analysis of non-rigid shapes. From a computational point of view we cannot use convolution here, so we design and explore other deep learning venues.
People:
Ron Kimmel
Computational Pathology
Using H&E-stained histology slides to predict treatment outcomes for efficient cancer treatment.  We use all variations of deep learning (mainly CNNs).
People:
Ron Kimmel
Food-IoT - Generic Technologies for Advancing the Food Value Chain

The vision of the consortium, of which our lab is part, is to develop generic technologies for the analysis and use of information on raw materials, production processes, and the consumer, in order to enable connectivity throughout the value chain and bring about a paradigm shift in which food is produced with the highest efficiency and safety.

People:
Dov Dori
TRACOD – Tracking the Cod Fish Nutritional Value

TRACOD – short for TRAck the COD fish, is an EIT Food project aimed at improving the ability of producers and consumers to track freshness and nutritional values of fresh fish, including cod, salmon, and other white fish species. TRACOD uses models implemented in an app for interacting with stakeholders and includes an education component for endowing food engineers with a systems approach. TRACOD also engages future food engineers in conceptual modeling as part of model-based systems engineering of food production and supply systems.

People:
Dov Dori
OPCloud
OPCloud is a web-based collaborative software environment for creating conceptual models of systems and phenomena with the OPM standard ISO 19450:2015. It is used in dozens of universities and enterprises, and new features and capabilities are continuously being added.
People:
Dov Dori
Creating quantum states of light of many photons
The need to create quantum states of light, such as entangled photons, arises from their importance in the fields of quantum information and quantum optics. In recent years, quantum cluster states were used in quantum computation, entangled photons were used to demonstrate quantum teleportation, and quantum hyper-dense coding protocols enable breaking the classical limit for information transfer. All these applications require efficient methods for generation of quantum light.

Our project develops new approaches for creating many-photon quantum light, by using recent advances in quantum electrodynamics and quantum optics. These advances are especially promising for creating deterministic, heralded, entangled photon sources.

Irrationality Measures and Polynomial Continued Fractions
Linear recursions with integer coefficients, such as the recursion of the Fibonacci sequence, have been intensely studied over millennia, yet still hide interesting undiscovered mathematics. Such a recursion was used by Apéry in his proof of the irrationality of certain values of the Riemann zeta function. Similar recursions can prove the irrationality of other fundamental constants such as π and e. However, it is not generally known under what conditions a linear recursion can be used to prove irrationality.

Our project develops new hypotheses and proofs for linear recursions. Specifically, we generalize Apéry’s work, finding the conditions for which similar recursions can be used to prove irrationality.

Looking forward, we would like to search for a wider theory on sequences created by any linear recursion with integer coefficients. Such results can help develop systematic algorithms for finding formulas for fundamental constants and contribute to ongoing efforts to answer open questions, like proving the irrationality of values of the Riemann zeta function (e.g., ζ(5)).

The Ramanujan Machine: Auto-Generated Conjectures on Fundamental Constants
Fundamental mathematical constants like e and π are ubiquitous in diverse fields of science, from abstract mathematics to physics and biology. For centuries, new formulas relating fundamental constants have been scarce and usually discovered sporadically.

Our project develops systematic approaches that leverage algorithms to derive formulas for fundamental constants and help reveal their underlying structure.

This research reverses the conventional approach of sequential logic in formal proofs. Instead, our algorithms utilize numerical data to unveil mathematical structures, trying to play the role of intuition of great mathematicians of the past to find leads for future research.
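
The kind of numerical matching involved can be illustrated by evaluating a polynomial continued fraction to increasing depth and comparing it with a constant; the specific formula below (denominators 3, 4, 5, … with numerators −1, −2, −3, …) is only checked numerically against e here, as an example of the procedure rather than an asserted identity.

```python
# Numerically evaluate a polynomial continued fraction and compare it with e.
from math import e

def continued_fraction(a, b, depth):
    """Evaluate b(0) + a(1)/(b(1) + a(2)/(b(2) + ...)) truncated after `depth` levels."""
    value = b(depth)
    for n in range(depth - 1, 0, -1):
        value = b(n) + a(n + 1) / value
    return b(0) + a(1) / value

# Candidate formula: 3 - 1/(4 - 2/(5 - 3/(6 - ...)))
approx = continued_fraction(a=lambda n: -n, b=lambda n: n + 3, depth=50)
print("continued fraction value:", approx)
print("e:                       ", e)
print("difference:              ", abs(approx - e))
```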

Wearable tattoo for health monitoring
This project aims to develop a breakthrough smart health monitoring system, combining a sensor and an analytical module in the same solution. The sensor module will be implemented by constructing a new, tattoo-like wearable device. To process the big data constantly gathered by the wearable and integrate it in a global databank, an analytical module is also being developed, enabling the establishment of individualized health patterns.

Wearable for Advancing Care for High-Risk Elderly
This project aims to stratify patient populations according to advanced risk assessment, to enable personalized self-management of ageing multimorbidity patients. This is done by designing a personalized, patient-centred and holistic approach that accounts for the individual’s medical history and lifestyle conditions, mental and social state, etc., coupled with innovative non-invasive wearable sensing technology for continuous monitoring of health.

Sparsity Aware Normalization for GANs
Generative adversarial networks (GANs) are known to benefit from regularization or normalization of their discriminator network during training. In this work, we introduced sparsity aware normalization (SAN), a new method for stabilizing GAN training. Our method is particularly effective for image restoration and image-to-image translation, where it significantly improves upon existing methods, like spectral normalization, while allowing shorter training and smaller-capacity networks, at no computational overhead.

Explorable Image Restoration
Image restoration methods do not allow exploring the infinitely many plausible reconstructions that might have given rise to the measured image. In this work, we introduced the task of explorable image restoration, and illustrated it for the tasks of super resolution and JPEG decompression. We proposed a framework comprising a graphical user interface with a neural network backend, allowing editing the output to explore the abundance of plausible explanations to the input. We illustrated our approach in a variety of use cases, ranging from medical imaging and forensics to graphics (Oral presentations at CVPR`20, CVPR`21).

SinGAN: Learning a generative model from a single natural image
We introduced an unconditional generative model that can be learned from a single natural image. Our model, coined SinGAN, is trained to capture the internal distribution of patches within the image, and is then able to generate high quality, diverse samples of arbitrary size and aspect ratio, that carry the same visual content as the image. We illustrated the utility of SinGAN in a wide range of image manipulation tasks. This work won the Best Paper Award (Marr Prize) at ICCV`19.

Massive Parallelization of Deep Learning
Improvements in training speed are needed to develop the next generation of deep learning models. To perform such a massive amount of computation in a reasonable time, training is parallelized across multiple GPU cores. Perhaps the most popular parallelization method is to use a large batch of data in each iteration of SGD, so the gradient computation can be performed in parallel on multiple workers. We aim to enable massive parallelization without the performance degradation that is commonly observed.
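
The basic data-parallel identity behind large-batch SGD can be checked in a few lines: splitting a large batch across workers and averaging the per-worker gradients recovers the full large-batch gradient. The linear-regression stand-in below is an assumption made for the sketch.

```python
# Toy check of data-parallel large-batch gradients (linear regression stand-in).
import numpy as np

rng = np.random.default_rng(7)
n, d, workers = 1024, 10, 8
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
w = np.zeros(d)

def grad(w, Xb, yb):
    # Gradient of mean squared error on a batch.
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

# Gradient of the full large batch...
g_full = grad(w, X, y)
# ...equals the average of per-worker gradients on equal-sized shards.
shards = np.array_split(np.arange(n), workers)
g_avg = np.mean([grad(w, X[idx], y[idx]) for idx in shards], axis=0)

print("max difference between full-batch and averaged gradients:",
      float(np.abs(g_full - g_avg).max()))
```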

Resource efficient deep learning
We aim to improve the resource efficiency of deep learning (e.g., energy, bandwidth) for training and inference. Our focus is on decreasing the numerical precision of the neural network model, a simple and effective way to improve resource efficiency. Nearly all recent deep learning hardware relies heavily on lower-precision math. The benefits are a reduction in the memory required to store the neural network, a reduction in chip area, and a drastic improvement in energy efficiency.
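
A minimal example of the lower-precision idea is symmetric per-tensor int8 quantization of a weight matrix, shown below with its memory saving and induced error; the actual low-precision training and inference schemes studied here are more sophisticated.

```python
# Symmetric per-tensor int8 quantization of a weight matrix.
import numpy as np

rng = np.random.default_rng(8)
weights = rng.standard_normal((256, 256)).astype(np.float32)

scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale

print("memory float32 (KB):", weights.nbytes // 1024)
print("memory int8    (KB):", q.nbytes // 1024)
print("mean absolute quantization error:",
      float(np.abs(weights - dequantized).mean()))
```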

Understanding and controlling the implicit bias in deep learning
Significant research efforts are being invested in improving Deep Neural Networks (DNNs) via various modifications. However, such modifications often cause an unexplained degradation in the generalization performance of DNNs on unseen data. Recent findings suggest that this degradation is caused by changes to the hidden algorithmic bias of the training algorithm and model. This bias determines which solution is selected from all the solutions that fit the data. We aim to understand and control this algorithmic bias.

Information Storage in Models of Human Language
This project seeks to elucidate the mechanisms of information storage and processing in machine learning systems of human language, by (a) measuring localization and distributivity of information in complex models; (b) discovering causal relationships between model components and automatic (potentially biased) decisions; and (c) making language processing systems more interpretable and controllable. The research is expected to promote responsible and accountable adoption of language technology.

Interpretability and Robustness in NLP
Despite the empirical success of deep learning models in natural language processing (NLP), these models face two challenges: they are opaque and difficult to interpret, and they are fragile and not robust to shifts in the data distribution. This project studies the relationship between interpretability and robustness in NLP: are more robust models also more interpretable, and vice versa? This research is expected to facilitate the development of models that are more trustworthy, fair, and reliable.

Queue mining for delay prediction in multi-class service processes
Information recorded by service systems (e.g., in the telecommunication, finance, and health sectors) during their operation provides an angle for operational process analysis, commonly referred to as process mining. Here we establish a queueing perspective in process mining to address the online delay prediction problem, which refers to the time by which the execution of an activity for a running instance of a service process is delayed due to queueing effects. We develop predictors for waiting times from event logs recorded by an information system during process execution. Based on large datasets from the telecommunications and financial sectors, our evaluation demonstrates accurate online predictions, which drastically improve over predictors that neglect the queueing perspective.
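
A deliberately simplified, queueing-flavored predictor is sketched below: it estimates the mean service time from a toy event log and predicts a new arrival's delay from the number of cases ahead and the number of agents. The log format and this snapshot-style rule are assumptions for illustration, far simpler than the predictors developed in this work.

```python
# Simplified queueing-based delay prediction from a toy event log.
from statistics import mean

# (case_id, service_start_minute, service_end_minute) for completed cases
event_log = [
    (1, 0.0, 4.5), (2, 1.0, 6.0), (3, 4.5, 9.0),
    (4, 6.0, 12.5), (5, 9.0, 13.0),
]

mean_service = mean(end - start for _, start, end in event_log)
servers = 2
queue_length_ahead = 7          # cases waiting ahead of the new arrival

# Snapshot-style rule: delay ~ queue length ahead * mean service time / servers.
predicted_delay = queue_length_ahead * mean_service / servers
print(f"estimated mean service time: {mean_service:.1f} minutes")
print(f"predicted delay for the new arrival: {predicted_delay:.1f} minutes")
```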

Data-Driven Appointment-Scheduling Under Uncertainty
Service systems are often stochastic and preplanned by appointments, yet implementations of their appointment systems are prevalently deterministic. We address this gap between plan and reality by developing data-driven methods for appointment scheduling and sequencing; the results are tractable and scalable solutions that accommodate hundreds of jobs and servers. To test practical performance, we leverage a unique data set from a cancer center that combines real-time locations, electronic health records, and appointment logs. Focusing on one of the center’s infusion units, we consistently reduce cost (waiting plus overtime) on the order of 15%–40%.

Development of better tools for proteomics and peptidomics research
The main scope of this research is the analysis of the big data obtained by genomics, and its combination with mass spectrometry-based proteomics and peptidomics data. The limiting factor is the informatic analysis of the data.
People:
Arie Admon
Development of personal immunotherapy for cancer, and autoimmunity
The project focuses on molecular immunology, with a special interest in HLA peptidomics, aiming at characterizing the full repertoires of HLA peptides presented by human cells (the HLA peptidome) and implementing HLA peptidomics in the development of personalized immunotherapy.
The other aim is to block the specific immune reaction during autoimmune and inflammatory diseases. The main tools of peptidomics and proteomics are mass spectrometry and bioinformatics.

People:
Arie Admon
Real-Time Health Monitoring
Contemporary medicine suffers from impactful shortcomings in terms of successful disease diagnosis and treatment. Diagnostic delays and/or inaccuracies can cause harm to patients by preventing or delaying appropriate treatment, providing unnecessary or harmful treatment, or resulting in psychological burden or financial repercussions. Our objective is to develop an AI-based smart health monitoring system for non-intrusive, continuous, real-time, and personalized detection of physical and (bio)chemical markers that are linked with the overall health of the human body.

Super-Resolution of Dynamic Elevation Models
Our goal is to develop multi-modal neural network architectures for the task of guided super-resolution (SR) of dynamic elevation models (DEMs). Current DEMs for most of the Earth’s surface are still low resolution (sometimes 2 meters per pixel, but more often 10, 15, or 30 meters per pixel) and thus cannot accurately represent the morphology of the terrain. High-resolution DEMs, however, have many uses, including precision agriculture, urban mapping, high-definition maps for autonomous navigation, line-of-sight analysis, and more.
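As a rough illustration only, the following PyTorch-style sketch shows one plausible shape of a guided SR model that fuses an upsampled low-resolution DEM with a co-registered high-resolution guide image; the architecture and all names are hypothetical and not the project's actual network.

```python
# Illustrative sketch only: a minimal guided super-resolution network for DEMs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedDEMSR(nn.Module):
    def __init__(self, scale=4, width=32):
        super().__init__()
        self.scale = scale
        # 1 channel for the upsampled DEM + 3 channels for an RGB guide image
        self.body = nn.Sequential(
            nn.Conv2d(1 + 3, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, dem_lr, guide_hr):
        dem_up = F.interpolate(dem_lr, scale_factor=self.scale,
                               mode="bicubic", align_corners=False)
        residual = self.body(torch.cat([dem_up, guide_hr], dim=1))
        return dem_up + residual      # predict a residual over bicubic upsampling

model = GuidedDEMSR()
dem_lr = torch.randn(1, 1, 32, 32)    # e.g., a coarse 30 m/pixel tile
guide = torch.randn(1, 3, 128, 128)   # co-registered high-resolution imagery
print(model(dem_lr, guide).shape)     # torch.Size([1, 1, 128, 128])
```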

Residual Echo Suppression Using Deep Learning
We address the problem of residual echo suppression (RES) in real-life acoustic environments that often include low signal-to-noise ratios, reverberations, and degraded audio measurements. We propose a low-power, low-resource, on-device system that receives dual-channel streaming audio as waveforms and applies deep learning-based echo cancellation to it. This solution can benefit many practical speech-based hands-free communication platforms such as smartphones, conference-room speakerphones, and smart speakers like Amazon Alexa and Google Home.

PET/CT Analysis using Spectral Total Variation
Spectral total variation’s ability to provide metrics for the automatic detection of bone malignant lesions and to differentiate those lesions from non-cancerous findings will be assessed for a hybrid positron emission tomography/x-ray computed tomography (PET/CT) scanner.  By detecting tissue metabolism changes using fluorine-18-2-fluoro-2-deoxy-D-glucose PET and demonstrating bone structure changes using CT, PET/CT can identify cancer lesions and impact patient diagnosis and management.

Function-Correcting Codes
Motivated by applications in machine learning and archival storage, we introduce function-correcting codes (FCCs), a new class of codes to protect a function evaluation of the data against errors. We show that FCCs are equivalent to irregular-distance codes, i.e., codes that obey some given distance requirement between each pair of codewords. Using these connections, we study these codes and derive general upper and lower bounds on their optimal redundancy. Since these bounds depend on the specific function, we provide simplified, suboptimal bounds that are easier to evaluate.

Weakly Private Information Retrieval
Private information retrieval (PIR) protocols make it possible to retrieve a file from a database without disclosing any information about the identity of the file being retrieved. While existing protocols strictly impose that no information is leaked on the file’s identity, this project initiates the study of the tradeoffs that can be achieved by relaxing the requirement of perfect privacy. We propose to study this problem when the database is either replicated or is stored distributively over several servers, and when it is simply stored by a single server.
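For context, the sketch below shows the classic two-server PIR scheme with perfect privacy (each server individually sees a uniformly random query set); it serves only as the perfect-privacy baseline that this project relaxes.

```python
# Illustrative sketch only: the classic 2-server PIR scheme with perfect privacy.
import random

def server_answer(database, query_set):
    """Each (non-colluding) server XORs the requested bits."""
    ans = 0
    for j in query_set:
        ans ^= database[j]
    return ans

def retrieve(database, i):
    n = len(database)
    s1 = {j for j in range(n) if random.random() < 0.5}   # uniformly random subset
    s2 = s1 ^ {i}                                          # same set with index i flipped
    # Individually, s1 and s2 are each uniform, revealing nothing about i.
    return server_answer(database, s1) ^ server_answer(database, s2)

db = [1, 0, 1, 1, 0, 0, 1, 0]
print(all(retrieve(db, i) == db[i] for i in range(len(db))))  # True
```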

Reconstruction Algorithms for DNA-Storage Systems
In the trace reconstruction problem, a length-n string x yields a collection of noisy traces, where each is independently obtained from x by passing through a deletion channel, which deletes every symbol with some fixed probability. The main goal under this paradigm is to determine the minimum number of i.i.d. traces required in order to reconstruct x with high probability. The focus of this work is to extend this problem to the model where each trace is the result of x passing through a deletion-insertion-substitution channel.
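The sketch below only simulates the channel model (generating i.i.d. traces of x through a deletion-insertion-substitution channel); the reconstruction algorithms themselves are not shown, and all probabilities are hypothetical.

```python
# Illustrative sketch only: simulating a deletion-insertion-substitution channel.
import random

def dis_channel(x, p_del=0.1, p_ins=0.05, p_sub=0.05, alphabet="ACGT"):
    out = []
    for symbol in x:
        if random.random() < p_ins:                 # insert a random symbol
            out.append(random.choice(alphabet))
        if random.random() < p_del:                 # delete this symbol
            continue
        if random.random() < p_sub:                 # substitute this symbol
            out.append(random.choice([a for a in alphabet if a != symbol]))
        else:
            out.append(symbol)
    return "".join(out)

x = "ACGTTGCAACGT"
traces = [dis_channel(x) for _ in range(5)]         # i.i.d. noisy traces of x
print(traces)
```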

Domain adaptation
We currently put a special focus on the problem of domain adaptation. More specifically, we study the problem of domain adaptation on manifolds (learned and analytic) and develop methods based on geometric considerations.
Riemannian Geometry & Manifolds
We study the manifold of diffusion operators, on which we can define geometric, differential, and probabilistic structures. This research direction entails a fresh approach to multi-manifold learning, departing from the traditional use of spectral decomposition of diffusion operators for embedding.
Diffusion operators are positive (semi-)definite and have a particular Riemannian geometry. While each diffusion operator extracts the manifold of a single data set, transportation of diffusion operators on the associated Riemannian manifold enables us to merge and compare multiple data sets.
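As a toy illustration of operating on diffusion operators as points on a Riemannian manifold, the sketch below averages two symmetric diffusion operators under the log-Euclidean metric on SPD matrices; it is a simplified stand-in, not the project's actual construction.

```python
# Illustrative sketch only: a log-Euclidean (Riemannian) mean of two symmetric
# diffusion operators built from two toy data sets.
import numpy as np
from scipy.linalg import expm, logm

def diffusion_operator(X, eps=2.0):
    """Symmetric normalized diffusion operator D^{-1/2} W D^{-1/2} of a data set X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / eps)                      # Gaussian affinity (SPD for distinct points)
    d = W.sum(axis=1)
    return W / np.sqrt(np.outer(d, d))

def log_euclidean_mean(A, B):
    """Riemannian (log-Euclidean) mean of two SPD matrices."""
    return expm(0.5 * (logm(A) + logm(B)))

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(20, 3)), rng.normal(size=(20, 3))   # two toy data sets
A1, A2 = diffusion_operator(X1), diffusion_operator(X2)
merged = log_euclidean_mean(A1, A2)
print(merged.shape, np.allclose(merged, merged.T))            # (20, 20) True
```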

Multimodal Data Analysis & Fusion
One of the long-standing challenges in signal processing and data analysis is the fusion of information acquired by multiple, multimodal sensors.
Of particular interest in the context of our research are the massive data sets of medical recordings and healthcare-related information, acquired routinely in operation rooms, intensive care units, and clinics. Such distinct and complementary information calls for the development of new theories and methods, leveraging it toward achieving concrete objectives such as analysis, filtering, and prediction, in a broad range of fields.

Sparse Integer Programming
Integer Programming is a fundamental framework for discrete optimization with generic modeling power and numerous applications. We are developing an algebraic theory that makes it possible to solve large integer programming problems, with large numbers of variables, over sparse systems.
In particular, we have recently shown that integer programming is fixed-parameter tractable when parameterized by the numeric measure and the sparsity measure of the system at hand.

People:
Shmuel Onn
Empirical Bayes Approach to Truth Discovery
Consider a group of workers who answered questions that have correct yet unknown answers. The workers are heterogeneous: they could be ordinary people, trained volunteers, a panel of experts, different computer algorithms, or a mix of all the above. Our approach is based on empirical Bayes methods, and the aim is to construct an algorithm that aggregates all workers’ answers into a single output that is close to the unknown truth. (MSc student: Tsviel Ben-Shabat, co-advisor: Reshef Meir)
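For intuition, the sketch below shows a simple iterative truth-discovery baseline that reweights workers by their agreement with the current consensus; it is a toy stand-in, not the empirical Bayes method developed in this project.

```python
# Illustrative sketch only: iterative weighted-majority truth discovery.
import numpy as np

def truth_discovery(answers, n_iter=10):
    """answers: (n_workers, n_questions) matrix of binary answers in {0, 1}."""
    weights = np.ones(answers.shape[0])
    for _ in range(n_iter):
        scores = weights @ answers / weights.sum()       # weighted vote per question
        consensus = (scores > 0.5).astype(int)
        agreement = (answers == consensus).mean(axis=1)  # per-worker accuracy estimate
        weights = np.clip(agreement, 1e-3, None)
    return consensus, agreement

rng = np.random.default_rng(1)
truth = rng.integers(0, 2, size=50)
accuracy = np.array([0.9, 0.85, 0.6, 0.55])              # heterogeneous workers
answers = np.array([np.where(rng.random(50) < a, truth, 1 - truth) for a in accuracy])
estimate, reliability = truth_discovery(answers)
print((estimate == truth).mean(), reliability.round(2))
```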

Variance Estimation in High-Dimensional Data
We study a regression model in the context of high-dimensional and semi-supervised settings, making minimal assumptions on the distribution of the data. The goal is to estimate the fraction of variance explained by the best linear model without assuming linearity. (PhD student: Ilan Livne, co-advisor: Yair Goldberg).
Intelligent patient monitoring in the intensive care unit
Intensive care medicine is complex, resource intensive and expensive. It is a dynamic and highly technical field of medicine, taking care of the sickest patients. Decisions need to be made rapidly based on the evolving clinical state of the patient, which can fluctuate over seconds and minutes. We develop ML models to tackle major predictive challenges for critically ill patients. Specifically, models that predict an upcoming possible adverse event, giving the clinical team time to intervene and thus improve outcomes and save lives, and models that predict the future course and treatment response in a patient-specific manner.


Digital oximetry biomarkers for respiratory conditions
Pulse oximetry is routinely used for monitoring a patient’s oxygen saturation level non-invasively. A low oxygen level in the blood means low oxygen in the tissues, and ultimately this can lead to organ failure. The development of digital oximetry biomarkers (OBM) engineered from the oxygen saturation time series can support diagnosis, characterize subgroups of patients with varying disease severity (phenotyping) and enable continuous monitoring of a patient’s pulmonary function to predict eventual deterioration (prognosis). We create new OBM and ML models for the diagnosis of respiratory conditions such as obstructive sleep apnea, chronic obstructive pulmonary disease and pneumonia.
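The sketch below computes two simple, hypothetical examples of oximetry biomarkers from an SpO2 time series (time spent below a saturation threshold and a crude desaturation count); the actual OBMs engineered in this project may differ.

```python
# Illustrative sketch only: two toy digital oximetry biomarkers (OBM).
import numpy as np

def time_below(spo2, threshold=90.0):
    """Fraction of the recording spent below a saturation threshold."""
    spo2 = np.asarray(spo2, dtype=float)
    return float((spo2 < threshold).mean())

def desaturation_count(spo2, drop=3.0, window=60):
    """Samples whose SpO2 drops by >= `drop` points relative to the running
    maximum over the preceding `window` samples (a crude ODI-like count)."""
    spo2 = np.asarray(spo2, dtype=float)
    count = 0
    for i in range(1, len(spo2)):
        baseline = spo2[max(0, i - window):i].max()
        if baseline - spo2[i] >= drop:
            count += 1
    return count

spo2 = 97 + np.random.default_rng(0).normal(0, 0.5, size=3600)  # 1 Hz, 1 hour
spo2[1200:1260] -= 6                                             # simulated desaturation
print(time_below(spo2), desaturation_count(spo2))
```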

Deep representation learning for cardiovascular diseases
Major cardiovascular and cerebrovascular events occur in individuals without known pre-existing cardiovascular conditions. Preventing such events remains a serious public health challenge. For that purpose, clinical risk scores can be used to identify individuals with high cardiovascular risks. However, available scoring scales have shown moderate performance. Despite being part of the routine evaluation of many patients in both primary and specialized care, the role of electrocardiogram (ECG) analysis in cardiovascular disease prediction and, hence, prevention is not as clear. We research digital biomarkers and deep representation learning approaches to cardiovascular diseases risk prediction using the ECG.

Distributed Compression of DNA Information
DNA information is rapidly growing in importance and exploding in volume. Most data compressors for DNA have extreme encoding complexities, which is prohibitive for low-cost and portable sequencers. We develop a compression scheme with minimal encoding complexity, taking advantage of the availability of DNA references and computation resources in the cloud.
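As a toy illustration of reference-based, low-complexity encoding (not the project's actual scheme), the sketch below stores a read as an alignment position plus its mismatches against a reference assumed to be available to the decoder.

```python
# Illustrative sketch only: encoding a read against a shared reference.
def encode_read(read, reference):
    """Return (position, [(offset, base), ...]) for the best exact-length match."""
    best = None
    for pos in range(len(reference) - len(read) + 1):
        diffs = [(i, b) for i, b in enumerate(read) if reference[pos + i] != b]
        if best is None or len(diffs) < len(best[1]):
            best = (pos, diffs)
    return best

def decode_read(encoded, reference, length):
    pos, diffs = encoded
    read = list(reference[pos:pos + length])
    for offset, base in diffs:
        read[offset] = base
    return "".join(read)

ref = "ACGTACGTTGCATTGACCGTA"
read = "TGCATTGACGGTA"                     # one substitution vs. the reference
enc = encode_read(read, ref)
print(enc, decode_read(enc, ref, len(read)) == read)
```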

Distributed Storage and Computation through Coded Sharding
When a distributed storage system is used by decentralized applications (for example, blockchains) that access individual shards of large data units, new features are needed that are not offered by existing distributed storage systems. In particular, coding the data with standard erasure codes does not allow adequate access performance. We develop erasure codes specifically addressing efficient recovery and access in decentralized applications.

Reliability of Machine Learning in Distributed Systems
The common use of AI today is that data is provided to some central computing facility (in the cloud), where the learning tasks (training and inference) are performed. The main issues with this practice are high communication cost and compromised data privacy. Moving part of the learning tasks to the edges mitigates these issues. The key question is how to aggregate multiple unreliable outputs from the edge to one reliable learning output, where unreliability is manifested in: missing inputs (stragglers), wrong inputs, and malicious inputs.
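As one generic illustration of aggregating unreliable edge outputs (not necessarily the project's method), the sketch below uses a coordinate-wise median, which tolerates stragglers and a minority of wrong or malicious updates better than a plain mean.

```python
# Illustrative sketch only: robust aggregation of unreliable edge updates.
import numpy as np

def robust_aggregate(updates):
    """updates: list of gradient vectors; entries may be None for stragglers."""
    received = np.stack([u for u in updates if u is not None])
    return np.median(received, axis=0)

true_gradient = np.array([1.0, -2.0, 0.5])
updates = [true_gradient + np.random.default_rng(i).normal(0, 0.1, 3) for i in range(6)]
updates[2] = None                               # straggler: update never arrived
updates[5] = np.array([100.0, 100.0, 100.0])    # malicious update
print(robust_aggregate(updates).round(2))       # close to the true gradient
```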

Robust economic design
In many economic design settings, strong assumptions are made about the knowledge of the designer. A canonical example from auction design is assuming perfect knowledge of how bidders’ willingness to pay is distributed. In which settings can we achieve designs with similar guarantees as those under full knowledge, despite knowing only a sample or a first moment of the prior distribution?

Strategic Classification
The goal of this research is to design classifiers robust to strategic behavior of the agents being classified. Here strategic behavior means incurring some cost in order to improve personal features and thus classification. This improvement can be superficial – i.e., gaming the classifier – or substantial, thus leading to true self-improvement. In the latter case (and only in this case), the robust classifier should actually encourage strategic behavior.
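For intuition, the sketch below shows agents best-responding to a published linear classifier by moving their features just across the decision boundary when the (quadratic) cost is smaller than the value of a positive label; the cost model and parameters are hypothetical.

```python
# Illustrative sketch only: an agent's best response to a linear classifier.
import numpy as np

def best_response(x, w, b, cost=1.0, gain=1.0, eps=1e-6):
    """Return the features the agent reports to the classifier sign(w.x + b)."""
    margin = w @ x + b
    if margin >= 0:
        return x                                  # already classified positive
    distance = -margin / np.linalg.norm(w)        # shortest move to the boundary
    if cost * distance ** 2 > gain:
        return x                                  # gaming is not worth the cost
    return x + (distance + eps) * w / np.linalg.norm(w)

w, b = np.array([1.0, 1.0]), -2.0
honest = np.array([0.7, 0.9])                     # originally classified negative
gamed = best_response(honest, w, b)
print(gamed.round(3), w @ gamed + b > 0)          # moved just past the boundary
```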

Constrained Bayesian Persuasion
Consider two strategic players, one more informed about the state of the world and the other less informed. How should the more informed side select what data to communicate to the other side, in order to inspire actions that benefit goals like social welfare? Can this be done under constraints such as privacy, limited communication, limited attention span, fairness, etc.?

Reproducible and interpretable data-driven feature selection
Design learning and statistical methodologies to effectively identify explanatory features (e.g., genetic variations) truly linked to a phenomenon under study (e.g., disease risk) while rigorously controlling the number of false positives among the reported features.
Certified Robustness of Modern Machine Learning
Develop methodologies that provide provably robust predictions in a challenging setting where the train and test distribution differ, e.g., due to adversarial attacks.
Prediction with confidence
Develop statistical tools that can work in combination with any complex machine learning algorithm (e.g., a deep neural network) to provide a reliable assessment of prediction uncertainty. The tools we invent treat regression, classification, and out-of-distribution detection problems.
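As one standard example of such a tool (not necessarily the exact methods developed here), the sketch below wraps an arbitrary regression model with split conformal prediction intervals.

```python
# Illustrative sketch only: split conformal prediction around any regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=2000)

# Split the data: fit on one half, calibrate the interval width on the other.
X_fit, y_fit = X[:1000], y[:1000]
X_cal, y_cal = X[1000:], y[1000:]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_fit, y_fit)
residuals = np.abs(y_cal - model.predict(X_cal))
alpha = 0.1
q = np.quantile(residuals, np.ceil((1 - alpha) * (len(residuals) + 1)) / len(residuals))

x_new = np.array([[0.5]])
pred = model.predict(x_new)[0]
print(f"90% prediction interval: [{pred - q:.2f}, {pred + q:.2f}]")
```
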
Managing Capacity in Deduplicated Storage Systems
Data deduplication is one of the most effective ways to reduce data size in large-scale systems. In a nutshell, duplicate copies of data chunks in different files are replaced with pointers to a single copy of each unique chunk. Optimized deduplication mechanisms facilitated its adoption in online primary storage, introducing new complexities to which traditional solutions do not directly apply. Our objective is to optimize capacity planning, management and load balancing in such systems.
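The toy sketch below illustrates the basic deduplication mechanism (files stored as lists of chunk fingerprints pointing to one physical copy per unique chunk); the capacity-planning and load-balancing questions studied in this project sit on top of such a store.

```python
# Illustrative sketch only: a toy content-addressable deduplicating store.
import hashlib

class DedupStore:
    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.chunks = {}          # fingerprint -> unique chunk bytes
        self.files = {}           # file name  -> list of fingerprints

    def write(self, name, data):
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)     # stored once per unique chunk
            recipe.append(fp)
        self.files[name] = recipe

    def read(self, name):
        return b"".join(self.chunks[fp] for fp in self.files[name])

    def physical_size(self):
        return sum(len(c) for c in self.chunks.values())

store = DedupStore()
store.write("a", b"ABCDABCDABCDXYZ!")
store.write("b", b"ABCDXYZ!")
print(store.read("b"), store.physical_size())   # 24 logical bytes -> 8 physical bytes
```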

 

SSD Management with Predictions
The infrastructure for the “big data revolution” is built of systems that support storing, processing, and delivering large amounts of data efficiently. Flash-based solid-state drives (SSDs) are a key component in such systems, thanks to their ability to support parallel I/O at sub-millisecond latency and consistently high throughput. We develop theoretically-optimal algorithms for the SSD firmware which is responsible for the internal management of data and resources within the storage device.

 

Online POMDP and BSP Planning via Simplification
We develop a fundamentally novel paradigm that seeks to find a simplification of a given POMDP problem, which is computationally easier, while at the same time providing performance guarantees, and ideally, similar levels of performance as the original decision making problem.
Based on this conceptually novel paradigm, we develop approaches that simplify the decision making problem, for example, by resorting to belief simplification or reward function simplification.

Autonomous Semantic Perception under Uncertainty
We develop approaches for autonomous semantic perception addressing key challenges such as classification aliasing for certain relative viewpoints between object & camera, localization uncertainty, and epistemic uncertainty of the classifier. Specifically, we develop approaches for computationally efficient probabilistic inference and decision making in the context of semantic perception and SLAM. A key component here is a learned viewpoint-dependent classifier model.

Using the 3D genome to solve the 1D genome
Despite advances in DNA sequencing, full accurate measurement of complex genomes remains a huge challenge. We have discovered that certain 3D structural patterns can be used to solve a range of problems in the field of genome assembly, including the identification of disease mutations that are currently difficult to detect. Using machine learning models, we are developing new ways to utilize data from 3D genome measurements to better characterize its 1D sequence in healthy and disease genomes.

Learning 3D genome organization
The 3D organization of genomes is tightly linked to how the genetic information is accessed, regulated and propagated. Using machine learning, with a special emphasis on probabilistic models, we build computational models aimed at gaining mechanistic insight into how 3D genome structures are specified and how they change in disease.
Optimal dynamic tolls for managed lanes
A framework for modelling an optimal dynamic toll pricing strategy for a system of managed lanes is developed. A macroscopic traffic simulation model is used to estimate the traffic states subject to initial and boundary conditions while incorporating the tolling policy. Traffic states and optimal toll actions are derived for a set of scenarios. These are used with an artificial neural network to develop a tolling policy for toll actions at each step.

Automatic design of actuated traffic signal plans
The aim of this research is to develop a new tool to automatically design complex actuated traffic signal plans. The method uses an automatic programming approach, combined with a mesoscopic traffic simulation model, to design and evaluate optimal intersection traffic signal plans, thus reducing the need for human intervention in the design process. The tool takes into consideration not only the plan parameters but also the control logic.

Cross-sectorial collaborations in the new economy
In the new economy, new kinds of organizations emerge. Among others, cross-sectorial collaborations are created, based on the recognition that they benefit all sectors: governmental organizations and local authorities (1st sector), for-profit organizations (2nd sector), and non-governmental non-profit organizations (3rd sector). This research, conducted in collaboration with tech-organizations in the context of STEM education, explores benefits that each sector earns from the collaboration.

A new discipline is born: Data science education
Data science is a new interdisciplinary field of research that focuses on extracting knowledge and value from data. As data science is becoming relevant for many scientific, engineering and social research and applications, new data science education programs are being launched and adequate teaching methods are needed for different learning populations. This research, conducted by my doctoral student, Koby Mike, explores the essence of this new evolving discipline – data science education.

Human Behavior Prediction with Language-driven Models
Language is a window to a person’s mind and soul. Surprisingly, while few would disagree with this statement, most behavior prediction and analysis models do not consider language usage. We develop models that do exactly this, considering both economic setups (where game-theoretic predictions consider only the participants’ numerical incentives) and psychological and psychiatric challenges (e.g., predicting suicide risk in the general population based on social media postings). Our goal is to integrate linguistic signals along with other behavioral and medical signals, and to provide better prediction capabilities along with improved understanding of the underlying phenomena.

Causal Inference in Natural Language Processing: Model Design and Interpretation
A fundamental problem of machine and deep learning models in NLP is that of spurious correlations. Such heavily parametrized models often capture data-driven patterns that are correlated with their task variables, but these patterns have little connection to the actual task they are trying to perform.
This, in turn, substantially harms their generalization capacity. We hence develop methods that follow the causal inference methodology for improved model generalization, interpretation, and stability.

Domain Adaptation for Natural Language Processing
Domain adaptation is the problem of adapting an algorithm trained on one domain (training distribution) so that it can effectively process data from other domains (e.g. adapting a sentiment classification algorithm trained on book reviews so that it can perform well on reviews of patient experience in clinics).  We consider various very challenging setups of domain adaptation, focusing on setups where very limited resources and knowledge of the target domains are available when training the algorithm.

Mechanobiology-based prediction of metastatic risk

Metastases cause ~90% of cancer mortality, and prognosis is currently based on histopathology, disease statistics, or genetics. The Weihs lab developed a rapid (~2 hr) early prognostic of the clinical metastatic risk, augmented with predictive machine learning models, to support disease management.
Two-class and 5-class models successfully separated invasive/non-invasive or varying invasiveness-level samples with high sensitivity and specificity.

Chebyshev Nets from Commuting PolyVector Fields
In this project, we propose a method for computing global Chebyshev nets on triangular meshes. We formulate the corresponding global parameterization problem in terms of commuting PolyVector fields, and design an efficient optimization method to solve it. We compute, for the first time, Chebyshev nets with automatically-placed singularities, and demonstrate the realizability of our approach using real material.

Understanding the inheritance of RNA modifications
The first steps of embryogenesis lack transcription and rely on maternal mRNAs stored in oocytes. Thus, maternal mRNA stability is tightly regulated. A-to-I RNA editing is the most common RNA modification and is important for normal embryonic development and regulation of innate immunity. Using dozens of high-throughput sequencing databases, we are testing whether edited mRNAs are inherited to prevent activation of the immune system against self RNA in the next generations.

A comprehensive RNA editing site identification
A-to-I RNA editing is the most prevalent type of RNA editing in metazoans. As part of this project, we generated RESIC, an efficient pipeline that combines several approaches for the detection and classification of RNA editing sites. The pipeline can be used for all organisms and can use any number of RNA-sequencing datasets as input. Applying this tool to SARS-CoV-2 infection, our analysis suggests the involvement of RNA editing in shaping the unpredicted phenotype of COVID-19 disease.

Generating a platform for a reliable differential expression analysis
Differential Expression Analysis (DEA) of RNA-sequencing data is frequently performed to detect key genes that are affected across different conditions. Reliability testing of the input material beforehand is crucial for consistent and strong results, yet can be challenging. In this project, we generated a tool, Biological Sequence Expression Kit (BiSEK) – a UI-based platform for DEA dedicated to reliable inquiry.
BiSEK is based on a novel algorithm to track discrepancies between the data and the statistical model design.

memristive Memory Processing Unit (mMPU)
Modern computing systems are limited by the need to move data between the processing units and the memory (the “memory wall”). We developed a unit that combines data processing and storage in the same physical cells using memristive devices. This unit, called the mMPU, can execute numerous logical operations simultaneously, offering an energy-efficient, high-performance machine that is backward compatible with standard computer architectures. The mMPU is especially efficient for applications such as genomics, databases, image processing, DNNs and BNNs.

Smart Trainable Data Converters
Data converters (analog to digital and digital to analog) are ubiquitous in modern electronic devices and connect the real world with digital computing systems. These converters suffer from the speed-accuracy-power tradeoff. We use neuromorphic computing to build data converters that can be trained to adjust to different applications and environmental changes, thereby achieving a better figure of merit compared to standard data converters.

Accelerators for DNN Training
We use emerging memristive technologies to design circuits and systems that accelerate deep neural networks, including their training. Our recent work has shown how to accelerate vanilla gradient descent and gradient descent with momentum using memristors. Our proposed circuits rely on using memristors to both compute and store the weights.
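For reference, the update rules being accelerated are simple to state in software; the sketch below writes out plain gradient descent and the momentum variant (the memristive circuits themselves are analog hardware and are not represented here).

```python
# Illustrative sketch only: vanilla gradient descent and momentum update rules.
import numpy as np

def sgd_step(w, grad, lr=0.1):
    return w - lr * grad

def momentum_step(w, v, grad, lr=0.1, beta=0.9):
    v = beta * v - lr * grad        # velocity accumulates past gradients
    return w + v, v

# Minimizing f(w) = ||w||^2 / 2, whose gradient is simply w.
w, v = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(200):
    w, v = momentum_step(w, v, grad=w)
print(np.round(w, 4))               # converges toward the minimum at the origin
```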

Learning causal estimators from unlabeled data
Models of real-world phenomena, e.g., human physiology, offer significant utility in health and disease.
However, they often suffer from misspecification. To understand the implications of such misspecification, we develop some basic theory for the simple setting of linear models, aiming to understand the benefit of the ubiquitously available unlabelled offline data in enhancing misspecified causal models. We implement these ideas on non-linear models, focusing on the cardiovascular system, where an abundance of unlabelled data and (partial) physiological models are available.

People:
Ron Meir
Curriculum learning agents
The effectiveness of learning systems depends on both the attributes of the learner and the teacher. Indeed, an optimal setup for learning is when the student and teacher/environment operate collaboratively to enhance learning, where the teacher’s task is to develop an appropriate learning curriculum that facilitates learning by the student. We develop approaches to enhance agents’ learning within a curriculum setting, focusing on the model-based Reinforcement Learning agents and continuous control settings.

People:
Ron Meir
Lifelong learning agents
Effective learning from data requires prior assumptions, referred to as inductive bias. A fundamental question pertains to the source of a ‘good’ inductive bias. One natural way to form such a bias is through lifelong learning, where an agent continually interacts with the world through a sequence of tasks, aiming to improve its performance on future tasks based on the tasks it has encountered so far. We develop a theoretical framework for incremental inductive bias formation, and demonstrate its effectiveness in problems of sequential learning and decision making.

People:
Ron Meir
Randomness for deep neural networks
Preliminary results of this young and exciting project show that we can (a) improve the accuracy of deep neural networks; (b) reduce their size (required accelerator memory) by more than two orders of magnitude while keeping their accuracy intact; and (c) significantly reduce their depth.
ECG analysis using deep neural networks
We are developing a smartphone app for cardiologists to help analyze ECG charts. Our methods identify dozens of cardio-related conditions: “Automatic classification of healthy and disease conditions from images or digital standard 12-lead ECGs.” Vadim Gliner, Noam Keidar, Vladimir Makarov, Arutyun I. Avetisyan, Assaf Schuster and Yael Yaniv. Scientific Reports, September 2020. We develop tools to assist physicians in using AI tools: “Meeting the unmet needs of clinicians from AI systems in cardiology: A systematic formulation, and a suggested framework.” Yonatan Elul, Aviv Rosenberg, Assaf Schuster, Alex Bronstein, Yael Yaniv. Proceedings of the National Academy of Sciences of the United States of America (PNAS), April 2021. We are also working on predicting cardiovascular events.

Asynchronous Distributed Training of Deep Neural Networks
We developed asynchronous versions of data-parallel training and showed them to be faster than their synchronous counterparts: “Taming Momentum in a Distributed Asynchronous Environment.” Ido Hakimi, Saar Barkai, Moshe Gabel, Assaf Schuster. arXiv, Aug 2019. We also solved the issue associated with asynchrony, named “staleness”: “Gap-Aware Mitigation of Gradient Staleness.” Saar Barkai, Ido Hakimi, Assaf Schuster. ICLR 2020. We developed a model-parallel approach for fine-tuning giant deep models on commodity hardware (submitted for publication).

Dimensionality reduction
In this setting we study how to reduce the dimensionality of data for learning and for optimization, avoiding the “curse of dimensionality”.
People:
Nir Ailon
Ranking and preference learning
In this setting we study how to model people’s preferences over a set of choices, and how to optimize and learn given user preferences in a variety of applications.
People:
Nir Ailon
Online and bandit optimization
In this project we study how to make decisions in an unknown environment in an online setting.
People:
Nir Ailon
Large matrix approximation for acceleration of deep networks
In this work we apply matrix approximation theory to reduce the cost of training and deploying dense layers in deep networks.
People:
Nir Ailon
Predicting Neural Dynamics with AI Models
This project uses transformers and linear systems to forecast neural activity. It aims to create predictive embeddings that capture the brain’s dynamics. By integrating these models, the project seeks to capture complex temporal dependencies in neural data and generate accurate forecasts, exploring the intersection of machine learning and neuroscience.
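As a minimal illustration of the linear-systems side only (the transformer models are not shown), the sketch below fits a linear dynamical system x_{t+1} ≈ A x_t to a simulated neural trajectory by least squares and rolls it forward to forecast.

```python
# Illustrative sketch only: least-squares fit and forecast of a linear dynamical system.
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, -0.2], [0.1, 0.95]])
x = np.zeros((200, 2)); x[0] = [1.0, 0.0]
for t in range(199):
    x[t + 1] = A_true @ x[t] + rng.normal(0, 0.01, 2)   # noisy "neural" trajectory

# Fit A by regressing x_{t+1} on x_t, then roll the model forward to forecast.
W, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
A_hat = W.T
forecast = [x[-1]]
for _ in range(10):
    forecast.append(A_hat @ forecast[-1])
print(np.round(A_hat, 2), np.round(forecast[-1], 3))
```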