Research

Projects

Prediction of pharmaceutical molecule shelf life

The chemical stability of active pharmaceutical molecules affects the quality, safety, robustness, and efficacy of a drug product. The ultimate goal of this project is to develop a predictive tool for a drug's resistance to oxidation and degradation, to facilitate early development efforts for potential pharmaceuticals, even before they are synthesized. We use detailed chemical kinetic models automatically generated with on-the-fly quantum chemical thermo-kinetic computations.

General-Domain Truth Discovery via Average Proximity

Truth discovery is a general name for statistical methods aimed at extracting the correct answers to questions from multiple answers coming from noisy sources, such as workers on a crowdsourcing platform. We suggest a simple heuristic for estimating workers' competence using average proximity to other workers. We prove that this estimates the actual competence level well and enables separating high- and low-quality workers across a wide spectrum of domains and statistical models.
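The average-proximity heuristic can be sketched in a few lines. This is a simulation of our own, with hypothetical worker competence levels, not the project's data or proofs:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=200)                 # 200 binary questions
competence = np.array([0.9, 0.85, 0.8, 0.6, 0.55])   # hypothetical worker accuracies

# Simulate noisy answers: each worker answers correctly with her competence probability.
answers = np.array([np.where(rng.random(200) < c, truth, 1 - truth)
                    for c in competence])

# Average-proximity heuristic: score each worker by her mean agreement
# with all other workers (no access to the ground truth is needed).
n = len(answers)
proximity = np.array([
    np.mean([(answers[i] == answers[j]).mean() for j in range(n) if j != i])
    for i in range(n)
])

# Workers sorted from highest to lowest estimated competence.
ranking = np.argsort(-proximity)
```

In this simulation, higher average proximity tracks higher true competence, which is the property the project proves in general.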

Effects of offering hospital patients information about the process of their treatment

We extracted information from hospital medical records and designed a personal platform through which patients could receive information about what they can expect regarding the type of tests and treatments and the expected duration of their hospital stay. We assess the effects of the offered information on patient satisfaction, duration of stay and duration of treatments.

Content and social dynamics of Slack communication
Analyses of all communication conducted on public Slack channels of a mid-size firm, using topic analysis, sentiment analysis and behavior analysis as well as network analysis to deconstruct the dynamics of the firm’s Slack communication.
Effects of customer emotions on employee response time
Analyses of archives of over 100K real-time interactions between human agents and customers in online service chats.
Capturing the structural complexity of nuclear envelope invaginations

Using cutting-edge super-resolution microscopy technology, expansion microscopy, we discovered that tubular nuclear envelope invaginations are highly abundant in vertebrate embryonic cells. These structures are poised to extend the role of the nuclear envelope in regulating gene expression deep into the nucleus. Shedding light on this phenomenon requires segmenting the 3D structure of invaginations in a huge dataset of microscopy data. We are interested in utilizing learning algorithms and in particular deep neural networks for this 3D segmentation task.

Probabilistic models of neural activity underlying decisions

At key points within neural circuits, neurons integrate information from multiple sources to make a choice. We are interested in unraveling how such choices are implemented by the circuits, by developing generative probabilistic models of neural activity in multiple neural populations involved in making a decision and comparing these predictions to experimental measurements of neural activity. In particular, we will focus on the circuit mediating the choice of the response type a larval zebrafish would present in the face of an alarming stimulus.

Design For Collaboration (DFC)

The focus is on recognizing and analyzing the challenges that arise when autonomous agents with different capabilities need to interact and collaborate on unknown tasks, on providing methods for the automated design of these environments to promote collaboration, and on specifying guarantees regarding the quality of the design solutions produced by our suggested methods. This research combines data-driven approaches with symbolic AI techniques and involves both theoretical work and evaluations on multi-agent reinforcement learning settings and on multi-robot systems.

Market of Information and Skills for Multi Agent AI and Multi Robot Teams

Promoting multi-agent collaboration via dynamic markets of information and skills in which AI agents and robots trade their physical capabilities and their ability to acquire new information. The value of these traded commodities is dynamically computed based on the agents’ objectives, sensors and actuation capabilities as well as their ability to communicate with each other and ask for assistance. This framework maximizes performance and team resilience, without relying on a centralized controller.

Task and Team Aware Motion Planning for Robotics (TATAM)

Most current approaches to robotic planning separate the low-level planning of basic behaviors and the high-level search for a sequence of behaviors that will accomplish a task. However, in complex settings such as packing, personal assistance, and cooking, this dichotomous view becomes inefficient, especially in environments shared by multiple autonomous agents. We therefore offer new ways for integrating task-level considerations when planning the robot’s movement, and for propagating motion-planning considerations into task planning.

Advanced AI methods to meet the needs of clinicians

Within this project we developed a set of deep-learning tools that enabled the design of a robust, trustworthy, explainable, and transparent system, while retaining the superior level of performance expected of deep-learning-based algorithms for classification of heart conditions from short ECG recordings collected using a two-lead device.

People:
Yael Yaniv
Advanced AI methods to identify heart conditions

Within this project we developed an app which integrates an AI method that can automatically distinguish between atrial fibrillation, other rhythm disturbances, and noise when using a mobile one-lead ECG device. In parallel, we developed an automated AI-based system to identify heart conditions from 12-lead digital or image ECG recordings with high accuracy. We also demonstrated that ECG images scanned using a smartphone provided the same accuracy as machine images.

People:
Yael Yaniv
Machine-learning for Crohn’s disease assessment

Non-invasive assessment of the terminal ileum's mucosal healing plays a key role in managing Crohn's disease (CD) patients. We develop machine-learning models to predict the terminal ileum's mucosal healing from big-data databases of:
1) semi-quantitative clinical interpretations of Magnetic Resonance Imaging (MRI) data of CD patients, and
2) MRI images of CD patients. Our approach provides a more accurate assessment of the terminal ileum's mucosal healing than classical linear methods.

Non-parametric Bayesian deep-learning for medical imaging

Mechanisms that determine a deep neural network's confidence in its predictions by estimating predictive uncertainty play a critical role in adopting deep-learning techniques for safety-critical clinical applications. We introduce a principled way to non-parametrically characterize the true posterior distribution of neural-network predictions through stochastic gradient Langevin dynamics (SGLD). We demonstrated a very high correlation between our uncertainty measures and out-of-distribution data in MRI registration. Further, our approach improved registration accuracy and robustness.
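SGLD can be illustrated on a toy one-dimensional problem. This is our own minimal example, not the MRI-registration model: sampling the posterior over a Gaussian mean by adding properly scaled noise to gradient steps:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.0, size=100)   # hypothetical observations
prior_var = 10.0                     # Gaussian prior N(0, prior_var) on the mean

def grad_neg_log_post(mu):
    # Gradient of -log p(mu | y) for a Gaussian likelihood (unit variance)
    # and Gaussian prior.
    return -np.sum(y - mu) + mu / prior_var

lr = 1e-3
mu = 0.0
samples = []
for step in range(20000):
    # SGLD update: half a gradient step plus Gaussian noise of std sqrt(lr).
    mu = mu - 0.5 * lr * grad_neg_log_post(mu) + rng.normal(0.0, np.sqrt(lr))
    if step > 5000:                  # discard burn-in
        samples.append(mu)

posterior_mean = np.mean(samples)
posterior_std = np.std(samples)      # a non-parametric uncertainty estimate
```

For this conjugate toy problem the samples should match the analytic Gaussian posterior; in the project the same iteration runs over network weights, where no closed form exists.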

Deep-learning for Quantitative MRI analysis

In-vivo quantification of tissue biophysical properties plays a key role in personalized medicine. Motivated by classical model-fitting approaches, we introduce a new class of deep-neural-network architectures and training processes to enable accurate and reliable quantification of tissue biophysical properties from quantitative MRI data. We demonstrated the added value of our approach for Intra-Voxel Incoherent Motion (IVIM) analysis of Diffusion-Weighted MRI data, with clinical applications in oncology and gastroenterology.
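The classical model-fitting baseline that motivates the approach can be sketched as follows, using the standard IVIM bi-exponential signal model on a synthetic signal with hypothetical tissue parameters (not the project's clinical data):

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    # IVIM bi-exponential model: perfusion fraction f with pseudo-diffusion
    # coefficient d_star, plus tissue diffusion coefficient d.
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

b_values = np.array([0, 10, 20, 50, 100, 200, 400, 600, 800.0])  # s/mm^2
true_f, true_d_star, true_d = 0.15, 0.05, 0.0012   # hypothetical parameters

rng = np.random.default_rng(2)
signal = ivim(b_values, true_f, true_d_star, true_d)
signal_noisy = signal + rng.normal(0, 0.003, size=b_values.size)

# Classical voxel-wise least-squares fit (the baseline the deep-learning
# approach is designed to improve upon).
popt, _ = curve_fit(ivim, b_values, signal_noisy,
                    p0=[0.1, 0.01, 0.001],
                    bounds=([0, 0.003, 1e-4], [0.5, 0.5, 0.003]))
f_hat, d_star_hat, d_hat = popt
```

Such per-voxel fits are noise-sensitive, which is precisely the reliability gap that motivates the network-based estimators.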

Decisions from experience

We study basic human decision-making and learning processes when making repeated and/or sequential choices. Understanding the basic processes in these very common settings (e.g., driving, behavior in pandemics, using smartphone apps, health decisions) improves both our ability to predict behavior and our ability to design mechanisms and policies that are robust to the likely behaviors of systems' users.

Predicting human choice with machine learning & psychology

We integrate psychological theories and models of human decision making into machine learning systems to predict human decision making in state-of-the-art levels. Focusing on the most fundamental choice task from behavioral economics and using the largest datasets currently available, we study which theories and models, which types of machine learning algorithms and tools, and which methods of integration lead to the best out-of-sample predictions.

Associations of the BNT162b2 COVID-19 vaccine effectiveness with patient age and comorbidities

Vaccinations are considered the major tool to curb the current SARS-CoV-2 pandemic. A randomized placebo-controlled trial of the BNT162b2 vaccine demonstrated 95% efficacy in preventing COVID-19. These results are now corroborated by statistical analyses of real-world vaccination rollouts, but resolving vaccine effectiveness across demographic groups is challenging. Here, applying a multivariable logistic regression analysis to a large patient-level dataset, including SARS-CoV-2 tests, vaccine inoculations and personalized demographics, we model vaccine effectiveness at daily resolution and its interaction with sex, age and comorbidities. Vaccine effectiveness gradually increased after day 12 post-inoculation, then plateaued around day 35, reaching 91.2% [CI 88.8%-93.1%] for all infections and 99.3% [CI 95.3%-99.9%] for symptomatic infections. Effectiveness was uniform for men and women, yet declined mildly but significantly with age and for patients with specific chronic comorbidities, most notably type 2 diabetes. Quantifying real-world vaccine effectiveness, including both biological and behavioral effects, our analysis provides an initial measurement of vaccine effectiveness across demographic groups.
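The estimation idea can be illustrated with a toy simulation. The data and effect sizes below are entirely of our own choosing, not the study's records; effectiveness is read off as one minus the odds ratio for the vaccination covariate:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20000
vaccinated = rng.integers(0, 2, n)
age = rng.uniform(20, 80, n)

# Hypothetical ground truth: vaccination multiplies infection odds by 0.1,
# and infection risk rises mildly with age.
logit = -3.0 + np.log(0.1) * vaccinated + 0.01 * (age - 50)
infected = rng.random(n) < 1 / (1 + np.exp(-logit))

# Multivariable logistic regression: vaccination status adjusted for age.
X = np.column_stack([vaccinated, age - 50.0])
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, infected)

odds_ratio = np.exp(model.coef_[0][0])
effectiveness = 1 - odds_ratio        # ~0.9 under this simulation
```

Interactions with sex, age groups, and comorbidities would enter as additional covariates and product terms in the same regression.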

Personal clinical history predicts antibiotic resistance of urinary tract infections

Antibiotic resistance is prevalent among the bacterial pathogens causing urinary tract infections. However, antimicrobial treatment is often prescribed ‘empirically’, in the absence of antibiotic susceptibility testing, risking mismatched and therefore ineffective treatment. Here, linking a 10-year longitudinal data set of over 700,000 community-acquired urinary tract infections with over 5,000,000 individually resolved records of antibiotic purchases, we identify strong associations of antibiotic resistance with the demographics, records of past urine cultures and history of drug purchases of the patients. When combined together, these associations allow for machine-learning-based personalized drug-specific predictions of antibiotic resistance, thereby enabling drug-prescribing algorithms that match an antibiotic treatment recommendation to the expected resistance of each sample. Applying these algorithms retrospectively, over a 1-year test period, we find that they greatly reduce the risk of mismatched treatment compared with the current standard of care. The clinical application of such algorithms may help improve the effectiveness of antimicrobial treatments.
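The matching step at the end can be illustrated with a toy sketch. The drugs and probabilities below are hypothetical, standing in for the per-drug machine-learning predictions:

```python
import numpy as np

# Hypothetical predicted resistance probabilities for three samples,
# as would be produced by per-drug prediction models.
predicted_resistance = {
    "amoxicillin":    np.array([0.45, 0.10, 0.62]),
    "ciprofloxacin":  np.array([0.12, 0.35, 0.18]),
    "nitrofurantoin": np.array([0.08, 0.20, 0.25]),
}

drugs = list(predicted_resistance)
prob_matrix = np.stack([predicted_resistance[d] for d in drugs])  # drugs x samples

# Recommend, for each sample, the drug with the lowest expected resistance.
recommended = [drugs[i] for i in prob_matrix.argmin(axis=0)]
```

The real algorithms additionally respect clinical constraints (allergies, stewardship policy), but the core matching is this argmin over predicted resistance.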

Community-level evidence for SARS-CoV-2 vaccine protection of unvaccinated individuals

Mass vaccination has the potential to curb the current COVID-19 pandemic by protecting individuals who have been vaccinated against the disease and possibly lowering the likelihood of transmission to individuals who have not been vaccinated. The high effectiveness of the widely administered BNT162b2 vaccine from Pfizer–BioNTech in preventing not only the disease but also infection with SARS-CoV-2 suggests a potential for a population-level effect, which is critical for disease eradication. However, this putative effect is difficult to observe, especially in light of highly fluctuating spatiotemporal epidemic dynamics. Here, by analyzing vaccination records and test results collected during the rapid vaccine rollout in a large population from 177 geographically defined communities, we find that the rates of vaccination in each community are associated with a substantial later decline in infections among a cohort of individuals aged under 16 years, who are unvaccinated. On average, for each 20 percentage points of individuals who are vaccinated in a given population, the positive test fraction for the unvaccinated population decreased approximately twofold. These results provide observational evidence that vaccination not only protects individuals who have been vaccinated but also provides cross-protection to unvaccinated individuals in the community.

Machine Learning based MANET Traffic Performance Prediction Tool

A Mobile Ad-hoc NETwork (MANET) is a communication platform for wireless first-response units that forms a temporary network without any centralized support. MANETs are characterized by rapidly changing connectivity and bandwidth over the communication links. At the same time, the applications running on the units often require strict end-to-end bandwidth and delay guarantees. It is therefore essential to build an optimization tool that can predict the traffic bandwidth or delay performance once the network topology changes or a new application starts running. Developing such a tool requires network modeling. Nowadays, network models are either packet-level simulators or analytical models (e.g., queuing theory). Packet-level simulators are computationally very costly, while analytical models are fast but not accurate. Hence, Machine Learning (ML) arises as a promising way to build accurate network models that can operate in real time and predict the resulting network performance according to the target policy, i.e., maximum bandwidth or minimum end-to-end delay. Recently, Graph Neural Networks (GNNs) have shown strong potential to be integrated into commercial products for network control and management. Early works using GNNs have demonstrated the capability to learn from different network characteristics that are fundamentally represented as graphs, such as the topology, the routing configuration, or the traffic that flows along a series of nodes in the network. In contrast to previous ML-based solutions, GNNs can produce accurate predictions even for networks unseen during the training phase. The main project target is to adapt GNNs to MANETs and test their prediction accuracy for such networks.
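The building block GNNs rely on, a message-passing layer over the network graph, can be sketched as follows. The topology and weights here are a toy illustration, not the project's architecture:

```python
import numpy as np

# Toy 4-node topology as an adjacency matrix (hypothetical MANET snapshot).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

rng = np.random.default_rng(4)
H = rng.normal(size=(4, 8))         # per-node features (e.g. traffic statistics)
W = rng.normal(size=(8, 8)) * 0.1   # shared weight matrix (random, untrained)

def message_passing_layer(A, H, W):
    # Mean-aggregate each node's neighbour features, then apply a shared
    # linear transform and a ReLU nonlinearity.
    deg = A.sum(axis=1, keepdims=True)
    aggregated = (A @ H) / deg
    return np.maximum(0.0, aggregated @ W)

H1 = message_passing_layer(A, H, W)   # updated node embeddings
```

Stacking several such layers and reading out per-path embeddings is what lets a trained GNN predict delay or bandwidth for topologies unseen during training.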

People:
Danny Raz
A neural control theory of high-level cognition in aging

High-level cognition, e.g., intelligence, draws on multiple processes, following sequential transitions through a series of neural states. The ease of these transitions depends on the connectome, the underlying network of white-matter connections. Yet the link between the connectome, brain-state transitions, and cognition is unclear, as is how this relation changes as people age across the lifespan. Here, I leverage state-of-the-art methodology from network control theory to link network properties, state transitions, and high-level cognition across the human lifespan.
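One standard quantity from network control theory, a node's average controllability, can be computed directly from a connectome. The connectome below is a toy symmetric matrix of our own, and the normalization shown is one common convention:

```python
import numpy as np

# Toy structural connectome (symmetric white-matter weights, hypothetical).
A = np.array([[0.0, 0.5, 0.2],
              [0.5, 0.0, 0.4],
              [0.2, 0.4, 0.0]])

# Scale so the discrete-time dynamics x(t+1) = A_norm x(t) + B u(t) are stable.
A_norm = A / (1 + np.linalg.svd(A, compute_uv=False)[0])

def average_controllability(A_norm, node, horizon=1000):
    # Trace of the controllability Gramian when input enters at a single node:
    # larger values mean that node can push the brain into nearby states
    # with less control energy.
    n = A_norm.shape[0]
    B = np.zeros((n, 1)); B[node] = 1.0
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(horizon):
        W += Ak @ B @ B.T @ Ak.T
        Ak = Ak @ A_norm
    return np.trace(W)

ac = [average_controllability(A_norm, i) for i in range(3)]
```

Relating such node-level controllability profiles to measured state transitions and cognitive scores, across ages, is the kind of linkage the project pursues.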

Development of structure and function in trained networks

Learning a new skill requires assimilating into our brain the regularities of the external world and how our body interacts with them as we engage in this skill. Mechanistically, this entails a translation of inputs, rules, and outputs into changes to the structure of neural networks in our brain. How this translation occurs is still largely unknown. We will follow the process of this assimilation using Trained Recurrent Neural Networks (TRNNs), which are increasingly used as models of neural circuits of trained animals.

People:
Omri Barak
Cancer resistance and metastasis as a learning process

Cancer cells embedded in healthy tissue can revert to normal cells, and vice versa for healthy cells in a tumor environment. This highlights two parallel learning processes, of the cell and of the tissue, in the development or suppression of disease. Cancer cells use their intrinsic dynamic plasticity to escape and explore novel states. Simultaneously, tissue homeostasis is a target of the collective of cells forming the tissue, which suppresses this exploration and keeps cell types stable. We use the language of machine learning to characterize these two learning processes.

People:
Omri Barak
Space of solutions in recurrent neural networks

Training machine learning algorithms often introduces the phenomenon of underspecification: a wide gap between the dataset used for training and the real task. A parallel phenomenon in neuroscience is the variety of strategies with which animals can approach a given task. These observations imply that for every task and training set there exists a space of solutions that are equivalent on that set. Both the structure of this space and the rules of motion within it are not understood. In this work, we study the space of solutions that emerges from these degrees of freedom in Recurrent Neural Networks (RNNs) trained on neuroscience-inspired tasks.

People:
Omri Barak
Adaptive robust radio therapy planning

We adaptively plan radiotherapy treatment based on inaccurate and evolving bio-marker information collected from imaging during the treatment. A radiotherapy plan is composed of the amount and angle of radiation at each stage of the treatment, where the goal is to deliver the maximal dose to the tumor while protecting healthy organs. The challenge comes from the resulting problem being a large-scale mixed-integer problem, and from the dependence between optimal decisions and future bio-marker levels.
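A toy continuous relaxation illustrates the optimization at the core of planning. The dose-deposition values below are hypothetical; the real problem is a large-scale mixed-integer program solved across treatment stages:

```python
import numpy as np
from scipy.optimize import linprog

# Toy dose-deposition matrices: dose[voxel, beamlet] (hypothetical values).
D_tumor = np.array([[1.0, 0.4],
                    [0.6, 0.9]])     # two tumour voxels, two beamlets
D_organ = np.array([[0.3, 0.5]])     # one healthy-organ voxel

# Maximize total tumour dose subject to an organ dose limit and
# beamlet intensity bounds (linprog minimizes, so negate the objective).
c = -D_tumor.sum(axis=0)
res = linprog(c,
              A_ub=D_organ, b_ub=[1.0],      # organ voxel dose <= 1.0
              bounds=[(0, 2.0), (0, 2.0)])   # beamlet intensities

intensities = res.x
```

In the adaptive setting, this optimization is re-solved at each stage as updated bio-marker information arrives, with integer variables for discrete choices such as beam angles.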

Recommendation: a dynamical-systems perspective

Modern recommendation platforms have become complex, dynamic ecosystems. Platforms often rely on machine learning models to successfully match users to content, but most methods neglect to account for how they affect user behavior, satisfaction, and well-being over time. Here we propose a novel dynamical-systems perspective on recommendation that allows one to reason about, and control, macro-temporal aspects of recommendation policies as they relate to user behavior.
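A minimal dynamical-systems sketch of the feedback loop, with a toy model of our own (not the proposed framework): a user's interest state drifts toward whatever the policy keeps recommending, so the policy shapes the very preferences it observes.

```python
import numpy as np

def step(user_state, recommended_item, alpha=0.1):
    # Toy dynamics (assumed): interests move a fraction alpha toward
    # the recommended item's feature vector at each interaction.
    return (1 - alpha) * user_state + alpha * recommended_item

user = np.array([1.0, 0.0])   # initial interest profile over two topics
item = np.array([0.0, 1.0])   # the item a myopic policy keeps recommending

states = [user]
for _ in range(50):
    user = step(user, item)
    states.append(user)
```

Viewing `step` as the system's transition map is what lets one analyze fixed points, drift, and long-run welfare of a recommendation policy rather than only its per-step accuracy.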

 

Strategic Classification Made Practical

Machine learning has become imperative for informing decisions that affect the lives of humans across a multitude of domains. But when people benefit from certain predictive outcomes, they are prone to act strategically to improve those outcomes. Our goal in this project is to develop a practical learning framework that accounts for how humans behaviourally respond to classification rules. Our framework provides robustness while also providing means to promote favourable social outcomes.
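The strategic response to a linear classifier can be sketched as follows, using a common model from the strategic-classification literature; the classifier, costs, and points are illustrative, not the project's framework:

```python
import numpy as np

# Linear classifier: accept iff w @ x - b >= 0.
w, b = np.array([1.0, 1.0]), 2.0

def best_response(x, w, b, cost_per_unit=1.0, gain=1.0):
    # A strategic agent moves the minimal distance needed to be accepted,
    # but only if the movement cost does not exceed the gain from acceptance.
    score = w @ x - b
    if score >= 0:
        return x                        # already accepted: no move
    dist = -score / np.linalg.norm(w)   # distance to the decision boundary
    if dist * cost_per_unit > gain:
        return x                        # gaming is not worth the cost
    return x + dist * w / np.linalg.norm(w)

x_near = np.array([0.5, 1.0])    # close to the boundary: will game
x_far = np.array([-2.0, 0.0])    # too far from it: stays put
x_near_moved = best_response(x_near, w, b)
x_far_moved = best_response(x_far, w, b)
```

A learning framework that anticipates this best response trains the classifier against the *moved* points, which is what makes it robust to strategic behavior.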

Fighting COVID-19 by learning from data

To help policymakers set policy based on scientific methods, we use mathematical modeling and advanced statistical tools to study different aspects of the COVID-19 pandemic. Our research includes learning the susceptibility and infectivity of children and adolescents; the protection of vaccination and previous SARS-CoV-2 infection in preventing subsequent SARS-CoV-2 infection and other COVID-19 outcomes; and the effect of COVID-19 on different aspects of public health, such as suicide rate and natural abortion.

Non-invasive Brain-Computer Interfaces

Non-invasive brain-computer interfaces (BCIs) provide a direct communication link from the brain to external devices. We develop non-invasive BCIs that interpret EEG measurements to identify a user's desired selection, action, or movement. We focus on developing self-correction capabilities based on error-related potentials (ErrPs), which are evoked in the brain when errors are detected. We investigate ErrPs, develop classifiers for detecting them, and develop methods to integrate them to improve BCIs. This project is funded by the Dr. Maria Ascoli Rossi Research Grant.

Invasive Brain-Machine Interfaces

Invasive Brain-Machine Interfaces (BMIs) provide a direct communication link from the brain to external devices. Invasive BMIs interpret neural activity recorded with invasive electrodes, identifying desired movements and controlling external devices accordingly. We develop algorithms to identify error-related processing in the neural activity and to correct the BMIs accordingly. This project is performed in collaboration with the Chestek Lab at the University of Michigan and funded by the Betty and Dan Kahn Foundation.

Development of agonistic compounds for therapy of inflammatory autoimmunity

The activity of autoimmune T cells is tightly regulated by two major types of regulatory T cells: those that primarily express the forkhead gene FOXP3 (FOXP3+ regulatory T cells, also named Tregs), and those that do not (T regulatory-1 cells, also named Tr1). We have developed agonists that potentiate each subtype and could be used for therapy of different autoimmune diseases.

Novel stabilized CXCL9/CXCL10 compounds for cancer immunotherapy

We previously reported that the CXCR3 ligands CXCL10, and possibly CXCL9, potentiate effector T cells, and therefore their stabilized forms could be used for cancer immunotherapy. It appears that due to post-translational modifications (PTMs), these compounds are rapidly inactivated at the tumor site. We have developed unique compounds that are resistant to these PTMs and can be used effectively for cancer immunotherapy.

Causal-inspired unsupervised domain adaptation
We are using ideas inspired by causal inference to address a difficult problem in machine learning: unsupervised domain adaptation. For example, we wish to train on data from one hospital and succeed on other, unseen hospitals; or train on images from one setting and test on images from many different settings.
People:
Uri Shalit
Fusing mechanistic and data-driven models

We are building theoretical and practical models that take as input both a mechanistic world model (for example, an ordinary differential equation describing the cardiovascular system) and data (for example, ICU patient vital signs). The goal is to get the best of both worlds: the robustness, interpretability, and causal grounding of mechanistic models, together with the flexibility of black-box deep learning models.

People:
Uri Shalit
Individual-level causal inference for health outcomes
In collaboration with health providers such as Clalit Health Services and Rambam Health Campus we are developing individual-level causal inference tools that will give accurate and safe treatment recommendations to patients.
People:
Uri Shalit
Studying response to immunotherapy via single-cell data

Immunotherapy has revolutionized cancer therapy, leading to the 2018 Nobel Prize in Physiology and Medicine. However, despite the dramatic response observed in several cancer types, many patients do not benefit from this treatment or relapse in a relatively short time. To improve our understanding of patient response we utilize single-cell RNA-seq data to characterize the tumor’s microenvironment, identify biomarkers of response and predict novel drug targets.

Studying tumor-immune metabolic interactions

The use of immunotherapy for solid tumors has expanded dramatically with the development of checkpoint blockade therapy. Despite the unprecedented responses observed in different tumor types, many patients are refractory to therapy or acquire resistance. Growing evidence shows that the metabolic requirements of immune cells in the tumor microenvironment greatly influence the success of therapy. Here we use genomic and metabolic modeling analysis to reveal the metabolic dependencies between tumor and immune cells and identify perturbations that can increase immune activity.

Studying resistance to PARPi in pancreatic cancer

Pancreatic cancer is among the most aggressive human malignancies, with a 5-year survival rate of only 6%. Recently, it was found that a subgroup of patients carry mutations in the homologous recombination (HR) genes BRCA1 or BRCA2, and that these tumors are sensitive to PARP inhibitors (PARPi). However, responses are infrequent and the subset of patients suitable for the treatment is limited. Here we use genomic data to computationally identify molecular signatures of response to be used as biomarkers, aiming to increase the number of patients who can benefit from the treatment.

Emotional Load

In this project we developed and validated a new sentiment analysis engine for conversational data, called CustSent, in collaboration with LivePerson Inc.
We then developed the novel concept of emotional load – the load that employees must bear due to the emotional strain inherent in the service interactions in which they engage. Using contact center and healthcare data we investigate the impact of Emotional Load on agents and the progression of the service interaction.

Information Transparency in Emergency Departments

We investigate how the transparency of the medical process and wait time information influence ED patients. In collaboration with Clalit Health Services, we developed a web-based app that delivers information to ED patients through their mobile phones. The development combines methods of process mining, queueing theory, and human-centered UX design. The system operates at Carmel Medical Center. Our research examines the impact of information transparency on ED efficiency and patient behavior.

The Importance of Literacy in Young Children
In this study, we examine language development among toddlers aged 2 to 3.5 years and brain synchronization between mother and child, measured with EEG, during various activities centered on story reading and listening.
The Role of Executive Functions in Hebrew-Speaking Children
We examine the role of executive functions in reading in children aged 8-12 using an adaptive intervention program developed in our lab. We use neuroimaging tools such as functional and structural MRI as well as EEG to define patterns that may predict a better gain from intervention.
Cardiac Imaging
We are using echocardiography (ultrasound) to study the function of the heart in mice, rats, and patients.
Genomics and epigenetics
We map the chromatin in cells using high-throughput sequencing approaches such as ATAC-seq, ChIP-seq, and single-cell sequencing. We are using CRISPR-based functional assays to understand and identify regulatory elements.
Hydro-Informatics

This research consists of the development and validation of effective, reliable, and applicable algorithms for early detection (ED) of contamination in drinking water (DW) from one or more sources, using data from water quality (WQ) sensors. Specifically, anomaly detection in UV-absorbance spectra is presented as a means of contamination detection. An additional ED algorithm has also been developed, utilizing WQ measurements of standard physicochemical parameters. The algorithm's high performance, together with its simplicity, adjustability, ease of implementation, and low computational complexity, makes it a valuable addition to water monitoring systems. Testing the performance of the two ED algorithms showed that processing physicochemical WQ measurements to detect anomalies can serve as an effective early detection system for DW contamination.
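The project's actual detection algorithms are described in its publications; as a minimal, hedged illustration of the general idea — flagging a physicochemical WQ measurement that deviates sharply from its recent history — a trailing z-score detector might look as follows (the window, threshold, and simulated chlorine-like signal are all illustrative assumptions, not the project's method):

```python
import numpy as np

def detect_anomalies(series, window=20, threshold=4.0):
    """Flag samples that deviate from the trailing moving average by
    more than `threshold` trailing standard deviations."""
    series = np.asarray(series, dtype=float)
    flags = np.zeros(len(series), dtype=bool)
    for t in range(window, len(series)):
        ref = series[t - window:t]
        if ref.std() > 0 and abs(series[t] - ref.mean()) > threshold * ref.std():
            flags[t] = True
    return flags

# Hypothetical free-chlorine trace with an injected contamination event.
rng = np.random.default_rng(0)
signal = 0.5 + 0.01 * rng.standard_normal(200)
signal[150] += 0.3  # sudden deviation at t = 150
flags = detect_anomalies(signal)
print(flags[150])  # True
```

A real deployment would tune the window and threshold per parameter (pH, turbidity, chlorine, conductivity) and fuse flags across sensors.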

Atmospheric Informatics

Recent developments in sensory and communication technologies have made low-cost micro-sensing units (MSUs) feasible. These MSUs can operate as individual nodes or be interconnected to form a Wireless Distributed Environmental Sensor Network (WDESN). MSUs' low power consumption and small size enable many new applications, such as mobile sensing. MSUs' main limitation is their relatively low accuracy with respect to laboratory equipment or an air quality monitoring (AQM) station. In this project we examine algorithms for assessing these sensors in field operations, autonomous calibration and error concealment, optimal placement of the sensors (including the utilization of mobile sensors), and advanced algorithms for data analysis, together providing a comprehensive toolset for atmospheric data analysis.
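The project's calibration algorithms are not detailed here; a common baseline, sketched below under that assumption, is to fit a gain/offset correction for an MSU against a co-located reference AQM station by least squares (the 10% gain error, +2 ppb bias, and noise level are invented for the example):

```python
import numpy as np

def fit_calibration(raw, reference):
    """Least-squares gain/offset mapping raw MSU readings onto values
    from a co-located reference AQM station."""
    A = np.column_stack([raw, np.ones_like(raw)])
    (gain, offset), *_ = np.linalg.lstsq(A, reference, rcond=None)
    return gain, offset

# Hypothetical sensor reading ~10% high with a constant +2 ppb bias.
rng = np.random.default_rng(1)
truth = rng.uniform(10.0, 60.0, size=200)          # true concentrations
raw = 1.1 * truth + 2.0 + 0.5 * rng.standard_normal(200)
gain, offset = fit_calibration(raw, truth)
corrected = gain * raw + offset
print(float(np.abs(corrected - truth).mean()) < 1.0)  # True
```

Autonomous (in-network) calibration generalizes this idea to settings where no reference station is available, e.g., by cross-calibrating sensors against each other.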

Deciphering hippocampal calcium imaging activity during behavior

It is known that the hippocampus contains place cells, responsible for coding the position of the animal in the environment. We record data from hundreds of cells simultaneously using calcium imaging in freely foraging mice, giving us an opportunity to analyze the network properties and dynamics of hippocampal place cells during foraging and other behavioral tasks.

Mechanisms of cancer cells’ anchorage-independence

A hallmark of cancer cells is their ‘anchorage-independence’, i.e., they are able to grow under conditions that do not support strong attachment of the cells. This trait was identified more than six decades ago, but is still poorly understood from a mechanobiological point of view. Our lab studies the ways by which cancer cells lose their normal mechanosensing abilities and become independent of signals from their environment.

Mechanobiology of pancreatic cancer

Pancreatic ductal adenocarcinoma (PDAC) is an extremely deadly disease that is projected to become the second-most deadly cancer in the next decade. PDAC is characterized by an extremely dense and stiff extracellular matrix that surrounds the tumor cells, which is considered to play a major role in PDAC progression and metastasis. Our lab studies the interactions between PDAC cells and their environment with the goal of identifying potential mechanobiological therapeutic targets.

Cellular sensing of environmental mechanical signals

Cells in our bodies respond not only to biochemical signals (hormones, growth factors), but also to the mechanical features of their environment, including, e.g., topography, rigidity. This indicates that cells can actively test the environment. Our lab studies the fundamental mechanisms by which this sensing is achieved. We combine the use of nano- and micro-fabricated surfaces with advanced imaging and machine learning for image analysis to study the subcellular machineries involved in this process.

Situated Temporal Planning

In domains where planning is slow compared to the evolution of the environment, it can be important to take into account the time taken by the planning process itself. For example, plans involving taking a certain bus are of no use if planning finishes after the bus departs. We call this setting situated temporal planning, and we define it as a variant of temporal planning with timed initial literals.

Coordinating Multiple Robots Using Social Laws

Robots operating in the real world must perform their task in an uncertain, partially observable environment while interacting with other robots. This interaction makes the problem much more difficult to solve. The key insight motivating this project is that it is possible to make the robot's online job much easier by modifying the problem setting offline, before the robot starts operating, by instituting a social law — a convention governing what is allowed behavior.

Implementing a Precision Medicine Paradigm in Primary Care Clinics

A randomized controlled trial of 20 intervention clinics and 20 usual-care control clinics to establish the value (better health? better use of resources?) of implementing precision medicine tools in primary clinical practice. The intervention includes DNA testing on platforms of different resolution (from NGS panels to GWAS, WES, and WGS), microbiome testing, and the use of wearable devices/sensors. The adult population of the study clinics includes some 140,000 people, and if sufficient resources are obtained, the study is expected to reach some 100,000 participants. Current resources have allowed us to break ground in one clinic, where 1,660 people have already signed consent forms. The study is National IRB approved.

Precision medicine - pharmacogenetics

A GWAS-based study of >10,000 Israelis of various ethnicities serving, among other purposes, to establish an ethnicity-specific (Jews/Arabs, Ashkenazi/Sephardi) atlas of frequencies of pharmacogenetic variants. We identify new associations between medication use in this cohort and identified SNPs. The GWAS was carried out using the Illumina 500K Onco SNP array. The study is National IRB approved and funded by MOST.

Gene-environment interactions in the etiology of common cancers

More than 40,000 participants in case-control studies of breast/colorectal/lung/gynecological/pancreato-hepato-biliary cancers. For each participant we have a long entry questionnaire (800 questions: health habits, health status, family history, more…), a blood sample (DNA), a tumor tissue sample (for many), and EMR follow-up. Every cancer case has a matched control without cancer. All studies are National IRB approved. Partially funded by various agencies, BCRF, ICRF…

Train medical surgery skills by using sensors in simulators
We have several simulators for training medical doctors in cutting-edge surgical skills. We use insights regarding biases in mental effort regulation to improve self-training protocols.
Use user behavior to improve automatic database schema matching

Database schema matching is a challenging task that has called for improvement for several decades. Automatic algorithms fail to provide sufficiently reliable results. We use human matching to overcome algorithm failures, and vice versa. We treat human and algorithmic matchers as imperfect matchers with different strengths and weaknesses. We use insights from cognitive research to predict human matchers' behavior and to identify those who can do better than others. We then merge their responses with algorithmic outcomes and obtain better results.

Information design

Consider a setting where one agent holds private information and would like to use her information to motivate another agent to take some action. When the agents' interests coincide, the answer is easy: disclose the full information. In this project we study optimal information design when the agents' incentives are misaligned.

Expert testing
A self-proclaimed expert provides probabilistic forecasts over a sequence of events. In this project we ask: how can we distinguish genuine experts from charlatans?
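The expert-testing literature studies tests that cannot be gamed even without access to ground truth; as a much simpler baseline for intuition only, when forecasts can be scored against realized outcomes, a proper scoring rule such as the Brier score separates an informed forecaster from an uninformed one (the simulated probabilities below are invented for the example):

```python
import numpy as np

def brier_score(forecasts, outcomes):
    """Mean squared error of probabilistic forecasts against 0/1
    outcomes; a proper scoring rule, minimized in expectation by
    forecasting the true probabilities."""
    return float(np.mean((np.asarray(forecasts, dtype=float)
                          - np.asarray(outcomes, dtype=float)) ** 2))

rng = np.random.default_rng(0)
p_true = rng.uniform(size=5000)              # true event probabilities
outcomes = rng.uniform(size=5000) < p_true   # realized events
expert = p_true                              # genuine expert forecasts the truth
charlatan = np.full(5000, 0.5)               # uninformed constant forecast
print(brier_score(expert, outcomes) < brier_score(charlatan, outcomes))  # True
```

The harder question the project addresses is what can be concluded from a single realized sequence, where a charlatan may tailor forecasts to pass a known test.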
Stochastic Image Denoising by Sampling from the Posterior Distribution

Image denoising is a well-known and well studied problem, commonly targeting a minimization of the mean squared error (MSE) between the outcome and the original image. Unfortunately, especially for severe noise levels, such Minimum MSE (MMSE) solutions may lead to blurry output images. In this work we propose a novel stochastic denoising approach that produces viable and high perceptual quality results, while maintaining a small MSE. Our method employs Langevin dynamics that relies on a repeated application of any given MMSE denoiser, obtaining the reconstructed image by effectively sampling from the posterior distribution. Due to its stochasticity, the proposed algorithm can produce a variety of high-quality outputs for a given noisy input, all shown to be legitimate denoising results. In addition, we present an extension of our algorithm for handling the inpainting problem, recovering missing pixels while removing noise from partially given data.
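The core update can be sketched on a toy problem. The real method uses a learned MMSE denoiser and an annealed noise schedule; here, as an assumption for illustration, a closed-form denoiser for a unit-variance Gaussian prior stands in, the prior score is obtained from it via Tweedie's formula, and plain (un-annealed) Langevin dynamics samples the posterior:

```python
import numpy as np

def mmse_denoiser(x, sigma):
    # Closed-form MMSE denoiser for a zero-mean, unit-variance Gaussian
    # prior; it stands in for the learned denoiser used in the paper.
    return x / (1.0 + sigma ** 2)

def langevin_posterior_sample(y, sigma, step=1e-3, n_iters=3000, seed=0):
    """Draw one sample from p(x | y) for y = x + sigma * noise, using the
    plug-in prior score (Tweedie): (D(x, s) - x) / s**2 at a small s."""
    rng = np.random.default_rng(seed)
    x = y.copy()
    s = 0.01  # small noise level at which the denoiser is queried
    for _ in range(n_iters):
        prior_score = (mmse_denoiser(x, s) - x) / s ** 2
        likelihood_score = (y - x) / sigma ** 2
        x = (x + step * (prior_score + likelihood_score)
               + np.sqrt(2 * step) * rng.standard_normal(x.shape))
    return x

# Toy check: for this prior, p(x | y) is N(y / (1 + sigma^2),
# sigma^2 / (1 + sigma^2)), so the chain's moments are verifiable.
y = np.ones(10000)
sample = langevin_posterior_sample(y, sigma=0.5)
print(round(float(sample.mean()), 1))  # 0.8  (= 1 / 1.25)
```

Because the chain is stochastic, re-running with a different seed yields a different, equally legitimate sample, which is exactly the property the project exploits.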

High Perceptual Quality Image Denoising with a Posterior Sampling CGAN

The vast work in Deep Learning (DL) has led to a leap in image denoising research. Most DL solutions for this task have chosen to put their efforts on the denoiser’s architecture while maximizing distortion performance. However, distortion driven solutions lead to blurry results with sub-optimal perceptual quality, especially in immoderate noise levels. In this paper we propose a different perspective, aiming to produce sharp and visually pleasing denoised images that are still faithful to their clean sources. Formally, our goal is to achieve high perceptual quality with acceptable distortion. This is attained by a stochastic denoiser that samples from the posterior distribution, trained as a generator in the framework of conditional generative adversarial networks (CGAN). Contrary to distortion-based regularization terms that conflict with perceptual quality, we introduce to the CGAN objective a theoretically founded penalty term that does not force a distortion requirement on individual samples, but rather on their mean. We showcase our proposed method with a novel denoiser architecture that achieves the reformed denoising goal and produces vivid and diverse outcomes in immoderate noise levels.

Patch Craft: Video Denoising by Deep Modeling and Patch Matching

The non-local self-similarity property of natural images has been exploited extensively for solving various image processing problems. When it comes to video sequences, harnessing this force is even more beneficial due to the temporal redundancy. In the context of image and video denoising, many classically-oriented algorithms employ self-similarity, splitting the data into overlapping patches, gathering groups of similar ones and processing these together somehow. With the emergence of convolutional neural networks (CNN), the patch-based framework has been abandoned. Most CNN denoisers operate on the whole image, leveraging non-local relations only implicitly by using a large receptive field. This work proposes a novel approach for leveraging self-similarity in the context of video denoising, while still relying on a regular convolutional architecture. We introduce a concept of patch-craft frames – artificial frames that are similar to the real ones, built by tiling matched patches. Our algorithm augments video sequences with patch-craft frames and feeds them to a CNN. We demonstrate the substantial boost in denoising performance obtained with the proposed approach.
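The construction of a patch-craft frame can be sketched minimally. This is a hedged simplification: it tiles non-overlapping patches from a single reference frame via brute-force nearest-neighbor search, whereas the actual method uses overlapping patches, searches across neighboring frames, and feeds the augmented sequence to a CNN:

```python
import numpy as np

def patch_craft_frame(target, reference, p=4):
    """Tile `target` with the most similar p x p patches found anywhere
    in `reference` (L2 nearest neighbor, brute force, for illustration)."""
    H, W = target.shape  # assumed divisible by p for simplicity
    cands = np.array([reference[i:i + p, j:j + p]
                      for i in range(H - p + 1)
                      for j in range(W - p + 1)])
    flat = cands.reshape(len(cands), -1)
    out = np.empty_like(target)
    for i in range(0, H, p):
        for j in range(0, W, p):
            q = target[i:i + p, j:j + p].reshape(-1)
            out[i:i + p, j:j + p] = cands[np.argmin(((flat - q) ** 2).sum(1))]
    return out

# Sanity check: matched against itself, every patch finds an exact
# copy, so the crafted frame reproduces the original.
frame = np.random.default_rng(0).standard_normal((16, 16))
print(np.array_equal(patch_craft_frame(frame, frame), frame))  # True
```

In the denoising setting, the reference is a neighboring noisy frame, so the crafted frame carries an independent noise realization of similar content — useful extra signal for the CNN.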

Perturbation models

Statistical reasoning about complex systems involves a probability distribution over exponentially many configurations. For example, semantic labeling of an image requires inferring a discrete label for each image pixel, resulting in a number of possible segmentations that is exponential in the number of pixels. Standard approaches such as Gibbs sampling are slow in practice and cannot be applied to many real-life problems. Our goal is to integrate optimization and sampling through extreme value statistics and to define a new statistical framework in which sampling and parameter estimation in complex systems are efficient. This framework is based on measuring the stability of a prediction to random changes in the potential interactions.
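The simplest instance of this optimization-meets-sampling idea is the Gumbel-max trick: perturb each log-potential with i.i.d. Gumbel noise and take the MAP of the perturbed model, which yields an exact Gibbs sample in the unstructured case (structured models need low-dimensional perturbations and give approximate samples; the three-state example is illustrative):

```python
import numpy as np

def gumbel_max_sample(log_potentials, rng):
    """Sample a configuration by adding i.i.d. Gumbel noise to the log
    potentials and returning the MAP (argmax) of the perturbed model."""
    return int(np.argmax(log_potentials + rng.gumbel(size=log_potentials.shape)))

# Perturb-and-MAP draws follow softmax(log_potentials) exactly here.
rng = np.random.default_rng(0)
theta = np.log([0.7, 0.2, 0.1])
draws = [gumbel_max_sample(theta, rng) for _ in range(20000)]
freq = np.bincount(draws, minlength=3) / len(draws)
print(np.round(freq, 1))  # close to [0.7, 0.2, 0.1]
```

The appeal is that the argmax can be computed with fast MAP solvers even when exhaustive enumeration or Gibbs sampling is infeasible.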

Shape reconstruction
Computational methods for stereoscopic imaging and other depth-from-X problems, as well as shape recognition and understanding.
People:
Ron Kimmel
Non-rigid shape analysis
Finding computational methods for matching and analysis of non-rigid shapes. From a computational point of view we cannot use standard convolutions here, so we design and explore other deep learning avenues.
People:
Ron Kimmel
Computational Pathology
Using H&E-stained histology slides to predict treatment outcomes for efficient cancer treatment.  We use all variations of deep learning (mainly CNNs).
People:
Ron Kimmel
OPCloud
OPCloud is a web-based collaborative software environment for creating conceptual models of systems and phenomena with the OPM standard ISO 19450:2015. It is used in dozens of universities and enterprises, and its development continuously adds new features and capabilities.
People:
Dov Dori
Wearable tattoo for health monitoring

This project aims to develop a breakthrough smart health monitoring system combining, in the same solution, a sensor and an analytical module. The sensor module will be implemented as a new, tattoo-like wearable device. To process the big data constantly gathered by the wearable and integrate it in a global databank, an analytical module is also being developed, enabling the establishment of individualized health patterns.

Wearable for Advancing Care for High-Risk Elderly

This project aims to stratify patient populations according to advanced risk assessment, to enable personalized self-management of ageing multimorbidity patients. This is done by designing a personalized, patient-centred and holistic approach that accounts for the individual’s medical history and lifestyle conditions, mental and social state, etc., coupled with innovative non-invasive wearable sensing technology for continuous monitoring of health.

Sparsity Aware Normalization for GANs

Generative adversarial networks (GANs) are known to benefit from regularization or normalization of their discriminator network during training. In this work, we introduced sparsity-aware normalization (SAN), a new method for stabilizing GAN training. Our method is particularly effective for image restoration and image-to-image translation, where it significantly improves upon existing methods like spectral normalization, while allowing shorter training and smaller-capacity networks at no computational overhead.

Explorable Image Restoration

Image restoration methods do not allow exploring the infinitely many plausible reconstructions that might have given rise to the measured image. In this work, we introduced the task of explorable image restoration and illustrated it for super resolution and JPEG decompression. We proposed a framework comprising a graphical user interface with a neural network backend, allowing users to edit the output so as to explore the abundance of plausible explanations of the input. We illustrated our approach in a variety of use cases, ranging from medical imaging and forensics to graphics (oral presentations at CVPR`20 and CVPR`21).

SinGAN: Learning a generative model from a single natural image

We introduced an unconditional generative model that can be learned from a single natural image. Our model, coined SinGAN, is trained to capture the internal distribution of patches within the image, and is then able to generate high quality, diverse samples of arbitrary size and aspect ratio, that carry the same visual content as the image. We illustrated the utility of SinGAN in a wide range of image manipulation tasks. This work won the Best Paper Award (Marr Prize) at ICCV`19.

Massive Parallelization of Deep Learning

Improvements in training speed are needed to develop the next generation of deep learning models. To perform such a massive amount of computation in a reasonable time, it is parallelized across multiple GPU cores. Perhaps the most popular parallelization method is to use a large batch of data in each iteration of SGD, so the gradient computation can be performed in parallel on multiple workers. We aim to enable massive parallelization without performance degradation, as commonly observed.
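The large-batch data parallelism described above can be sketched in a few lines. This toy simulation (illustrative only, not the group's code) has each "worker" compute a gradient on its own data shard; averaging the per-worker gradients reproduces the full-batch gradient when shards are equal-sized:

```python
def grad(w, shard):
    # gradient of 0.5 * (w*x - y)^2 averaged over the shard
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def parallel_sgd_step(w, shards, lr=0.1):
    # each "worker" computes its local gradient; averaging them equals
    # the full-batch gradient over the union of the (equal-sized) shards
    g = sum(grad(w, s) for s in shards) / len(shards)
    return w - lr * g

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # y = 2x
shards = [data[:2], data[2:]]                             # two workers
w = 0.0
for _ in range(100):
    w = parallel_sgd_step(w, shards)
# w converges toward the true slope 2.0
```

In a real system the averaging step is an all-reduce across GPUs; the open question the project targets is why very large batches can degrade generalization.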

Resource efficient deep learning

We aim to improve the resource efficiency of deep learning (e.g., energy, bandwidth) for training and inference. Our focus is decreasing the numerical precision of the neural network model, a simple and effective way to improve resource efficiency. Nearly all recent deep learning hardware relies heavily on lower precision math. The benefits are a reduction in the memory required to store the neural network, a reduction in chip area, and a drastic improvement in energy efficiency.
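As a minimal illustration of the kind of precision reduction meant here (symmetric int8 post-training quantization; this is a textbook sketch, not the group's actual scheme):

```python
def quantize_int8(weights):
    # map floats to int8 with a single symmetric scale factor
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

w = [0.12, -0.5, 0.33, -0.07]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# each reconstructed weight is within half a quantization step of the original
```

Storing `q` takes one byte per weight instead of four, which is where the memory and energy savings come from.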

Understanding and controlling the implicit bias in deep learning

Significant research efforts are being invested in improving Deep Neural Networks (DNNs) via various modifications. However, such modifications often cause an unexplained degradation in the generalization performance of DNNs on unseen data. Recent findings suggest that this degradation is caused by changes to the hidden algorithmic bias of the training algorithm and model. This bias determines which solution is selected from all the solutions that fit the data. We aim to understand and control this algorithmic bias.

Queue mining for delay prediction in multi-class service processes

Information recorded by service systems (e.g., in the telecommunication, finance, and health sectors) during their operation provides an angle for operational process analysis, commonly referred to as process mining. Here we establish a queueing perspective in process mining to address the online delay prediction problem, which refers to the time that the execution of an activity for a running instance of a service process is delayed due to queueing effects. We develop predictors for waiting times from event logs recorded by an information system during process execution. Based on large datasets from the telecommunications and financial sectors, our evaluation demonstrates accurate online predictions, which drastically improve over predictors that neglect the queueing perspective.
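A minimal sketch of a queueing-aware delay predictor in the spirit of the paragraph (the formula and names are illustrative, not the paper's actual estimator): predict a new arrival's wait from the current queue length and the mean service time estimated from the event log.

```python
def mean_service_time(log):
    # log: list of (start, end) service intervals taken from the event log
    return sum(end - start for start, end in log) / len(log)

def predict_delay(queue_length, servers, log):
    # with k servers, roughly queue_length / k service completions must
    # finish before the new arrival enters service
    return queue_length * mean_service_time(log) / servers

log = [(0.0, 4.0), (1.0, 5.0), (2.0, 8.0)]   # mean service time = 14/3
print(predict_delay(6, 2, log))               # predicted wait for 6 in queue, 2 servers
```

A predictor that ignores the queueing perspective would regress delay on case attributes alone and miss exactly this congestion signal.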

Data-Driven Appointment-Scheduling Under Uncertainty

Service systems are often stochastic and preplanned by appointments, yet implementations of their appointment systems are prevalently deterministic. We address this gap between plan and reality by developing data-driven methods for appointment scheduling and sequencing – the results are tractable and scalable solutions that accommodate hundreds of jobs and servers. To test practical performance, we leverage a unique data set from a cancer center that combines real-time locations, electronic health records, and appointment logs. Focusing on one of the center’s infusion units, we consistently reduce cost (waiting plus overtime) by 15%–40%.

Development of better tools for proteomics and peptidomics research
The main scope of this research is analyzing the big data obtained by genomics and combining it with mass spectrometry-based proteomics and peptidomics data. The limiting factor is informatic analysis of the data.
People:
Arie Admon
Development of personal immunotherapy for cancer, and autoimmunity

The project focuses on molecular immunology, with a special interest in HLA peptidomics, aiming at characterization of the full repertoires of HLA peptides presented by human cells (the HLA peptidome) and implementation of HLA peptidomics in the development of personalized immunotherapy.
The other aim is to block specific immune reactions in autoimmune and inflammatory diseases. The main tools of peptidomics and proteomics are mass spectrometry and bioinformatics.

People:
Arie Admon
Real-Time Health Monitoring

Contemporary medicine suffers from impactful shortcomings in disease diagnosis and treatment. Diagnostic delays and/or inaccuracies can harm patients by preventing or delaying appropriate treatment, providing unnecessary or harmful treatment, or imposing psychological burden or financial repercussions. Our objective is to develop an AI-based smart health monitoring system for non-intrusive, continuous, real-time and personalized detection of physical and (bio)chemical markers linked with the overall health of the human body.

PET/CT Analysis using Spectral Total Variation

Spectral total variation’s ability to provide metrics for the automatic detection of malignant bone lesions, and to differentiate those lesions from non-cancerous findings, will be assessed for a hybrid positron emission tomography/x-ray computed tomography (PET/CT) scanner. By detecting tissue metabolism changes using fluorine-18-2-fluoro-2-deoxy-D-glucose PET and demonstrating bone structure changes using CT, PET/CT can identify cancer lesions and impact patient diagnosis and management.

Reconstruction Algorithms for DNA-Storage Systems

In the trace reconstruction problem, a length-n string x yields a collection of noisy traces, each independently obtained from x by passing through a deletion channel, which deletes every symbol with some fixed probability. The main goal under this paradigm is to determine the minimum number of i.i.d. traces required to reconstruct x with high probability. The focus of this work is to extend this problem to the model where each trace results from x passing through a deletion-insertion-substitution channel.
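The channel model above is easy to simulate; the following sketch (illustrative only) generates i.i.d. traces through a pure deletion channel, which is the input a reconstruction algorithm would consume (the deletion-insertion-substitution extension would add random insertions and flips in the same loop):

```python
import random

def deletion_channel(x, p, rng):
    # each symbol of x survives independently with probability 1 - p
    return ''.join(c for c in x if rng.random() >= p)

def traces(x, p, t, seed=0):
    # t i.i.d. traces of x through the deletion channel
    rng = random.Random(seed)
    return [deletion_channel(x, p, rng) for _ in range(t)]

ts = traces('ACGTACGT', p=0.2, t=5)
# every trace is a (possibly shorter) subsequence of the original string
```

Reconstruction then asks: how large must t be so that x is recoverable from ts with high probability?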

Multimodal Data Analysis & Fusion

One of the long-standing challenges in signal processing and data analysis is the fusion of information acquired by multiple, multimodal sensors.
Of particular interest in the context of our research are the massive data sets of medical recordings and healthcare-related information, acquired routinely in operation rooms, intensive care units, and clinics. Such distinct and complementary information calls for the development of new theories and methods, leveraging it toward achieving concrete objectives such as analysis, filtering, and prediction, in a broad range of fields.

Intelligent patient monitoring in the intensive care unit

Intensive care medicine is complex, resource intensive and expensive. It is a dynamic and highly technical field of medicine, taking care of the sickest patients. Decisions need to be made rapidly based on the evolving clinical state of the patient, which can fluctuate over seconds and minutes. We develop ML models to tackle major predictive challenges for critically-ill patients: models that predict an upcoming possible adverse event, giving the clinical team time to intervene and thus improve outcomes and save lives, and models that predict the future course and treatment response in a patient-specific manner.


Digital oximetry biomarkers for respiratory conditions

Pulse oximetry is routinely used for monitoring a patient’s oxygen saturation level non-invasively. A low oxygen level in the blood means low oxygen in the tissues, which can ultimately lead to organ failure. Digital oximetry biomarkers (OBMs) engineered from the oxygen saturation time series can support diagnosis, characterize subgroups of patients with various disease severities (phenotyping) and enable continuous monitoring of a patient’s pulmonary function to predict eventual deterioration (prognosis). We create new OBMs and ML models for the diagnosis of respiratory conditions such as obstructive sleep apnea, chronic obstructive pulmonary disease and pneumonia.
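Two simple biomarkers of the kind engineered from an SpO2 time series are the fraction of time spent below a saturation threshold and the number of desaturation events. This sketch is illustrative; the lab's actual OBM definitions may differ:

```python
def time_below(spo2, threshold=90):
    # fraction of samples with saturation strictly below the threshold
    return sum(1 for s in spo2 if s < threshold) / len(spo2)

def desaturation_events(spo2, threshold=90):
    # count contiguous runs of below-threshold samples
    events, below = 0, False
    for s in spo2:
        if s < threshold and not below:
            events += 1
        below = s < threshold
    return events

signal = [97, 96, 89, 88, 95, 96, 87, 94]  # toy SpO2 samples (%)
# time_below(signal) -> 0.375, desaturation_events(signal) -> 2
```

Features like these, computed over hours of monitoring, become inputs to the diagnostic ML models.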

Deep representation learning for cardiovascular diseases

Major cardiovascular and cerebrovascular events occur in individuals without known pre-existing cardiovascular conditions. Preventing such events remains a serious public health challenge. For that purpose, clinical risk scores can be used to identify individuals with high cardiovascular risks. However, available scoring scales have shown moderate performance. Despite being part of the routine evaluation of many patients in both primary and specialized care, the role of electrocardiogram (ECG) analysis in cardiovascular disease prediction and, hence, prevention is not as clear. We research digital biomarkers and deep representation learning approaches to cardiovascular diseases risk prediction using the ECG.

Distributed Compression of DNA Information

DNA information is rapidly growing in importance, and its volume is exploding. Most data compressors for DNA have extreme encoding complexity, which is prohibitive for low-cost and portable sequencers. We develop a compression scheme with minimal encoding complexity, taking advantage of the availability of DNA references and computation resources in the cloud.
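To make the idea of reference-based compression concrete, here is a minimal sketch (illustrative, not the group's scheme): a read is stored as its alignment position in the reference plus its list of mismatches, rather than as a full sequence.

```python
def encode(read, reference):
    # brute-force: pick the offset minimizing substitution mismatches
    best = None
    for pos in range(len(reference) - len(read) + 1):
        mism = [(i, read[i]) for i in range(len(read))
                if read[i] != reference[pos + i]]
        if best is None or len(mism) < len(best[1]):
            best = (pos, mism)
    return best  # (position, [(index, base), ...])

def decode(encoded, read_len, reference):
    pos, mism = encoded
    seq = list(reference[pos:pos + read_len])
    for i, base in mism:
        seq[i] = base
    return ''.join(seq)

ref = 'ACGTACGTACGT'
read = 'TACGAAC'
enc = encode(read, ref)          # (3, [(4, 'A')]): offset 3, one substitution
assert decode(enc, len(read), ref) == read
```

In the distributed setting the heavy part (alignment against the reference) moves to the cloud decoder, keeping the sequencer-side encoder nearly trivial.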

Strategic Classification

The goal of this research is to design classifiers robust to strategic behavior of the agents being classified. Here strategic behavior means incurring some cost in order to improve personal features and thus classification. This improvement can be superficial – i.e., gaming the classifier – or substantial, thus leading to true self-improvement. In the latter case (and only in this case), the robust classifier should actually encourage strategic behavior.
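A toy version of the strategic setting (all numbers and names are illustrative): an agent below a threshold classifier can pay a per-unit cost to move its feature, and games the classifier only when the gain from a positive label exceeds the cost of moving.

```python
def best_response(x, threshold, cost_per_unit, gain=1.0):
    # moving to the threshold costs (threshold - x) * cost_per_unit;
    # the agent moves only if the positive classification is worth it
    if x >= threshold:
        return x
    move_cost = (threshold - x) * cost_per_unit
    return threshold if gain >= move_cost else x

# with cost 0.4 per unit and gain 1.0, agents within 2.5 units game the classifier
print(best_response(4.0, threshold=5.0, cost_per_unit=0.4))  # -> 5.0
print(best_response(1.0, threshold=5.0, cost_per_unit=0.4))  # -> 1.0
```

A robust classifier anticipates this best response when placing its threshold; when feature movement reflects true self-improvement rather than gaming, the designer instead wants to make such movement attractive.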

Constrained Bayesian Persuasion

Consider two strategic players, one more informed about the state of the world and the other less informed. How should the more informed side select what data to communicate to the other side, in order to inspire actions that benefit goals like social welfare? Can this be done under constraints such as privacy, limited communication, limited attention span, fairness, etc.?

Reproducible and interpretable data-driven feature selection
We design learning and statistical methodologies that effectively identify explanatory features (e.g., genetic variations) truly linked to a phenomenon under study (e.g., disease risk), while rigorously controlling the number of false positives among the reported features.
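Rigorous control of false positives is commonly formalized as false discovery rate (FDR) control; the Benjamini-Hochberg procedure is the textbook example, shown here as an illustration (the project may well use more advanced tools, e.g. knockoff-based selection):

```python
def benjamini_hochberg(pvalues, alpha=0.1):
    # reject the k smallest p-values, where k is the largest rank with
    # p_(k) <= alpha * k / m; this controls FDR at level alpha
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= alpha * rank / m:
            k = rank
    return sorted(order[:k])  # indices of selected features

pvals = [0.001, 0.008, 0.039, 0.041, 0.2, 0.9]
print(benjamini_hochberg(pvals, alpha=0.1))  # -> [0, 1, 2, 3]
```

Note that 0.041 is selected even though it exceeds its own step threshold's earlier values, because BH rejects everything up to the largest qualifying rank.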
Using the 3D genome to solve the 1D genome

Despite advances in DNA sequencing, full accurate measurement of complex genomes remains a huge challenge. We have discovered that certain 3D structural patterns can be used to solve a range of problems in the field of genome assembly, including the identification of disease mutations that are currently difficult to detect. Using machine learning models, we are developing new ways to utilize data from 3D genome measurements to better characterize its 1D sequence in healthy and disease genomes.

Learning 3D genome organization
The 3D organization of genomes is tightly linked to how the genetic information is accessed, regulated and propagated. Using machine learning, with a special emphasis on probabilistic models, we build computational models aimed to gain mechanistic insights of how 3D genome structures are specified and how they change in disease.
Human Behavior Prediction with Language-driven Models

Language is a window to the person’s mind and soul. Surprisingly, while few would disagree with this statement, most behavior prediction and analysis models do not consider language usage. We develop models that do exactly this, considering both economics setups (where game theory predictions consider only the numerical incentive of the participants) as well as psychological and psychiatric challenges (e.g. predicting suicide risk in the general population based on social media postings). Our goal is to integrate linguistic signals along with other behavioral and medical signals, and provide better prediction capabilities along with improved understanding of the underlying phenomena.

Causal Inference in Natural Language Processing: Model Design and Interpretation

A fundamental problem of machine and deep learning models in NLP is that of spurious correlations. Such heavily parametrized models often capture data-driven patterns that are correlated with their task variables, but these patterns have little connection to the actual task they are trying to perform.
This, in turn, substantially harms their generalization capacity. We hence develop methods that follow the causal inference methodology for improved model generalization, interpretation, and stability.

Domain Adaptation for Natural Language Processing

Domain adaptation is the problem of adapting an algorithm trained on one domain (training distribution) so that it can effectively process data from other domains (e.g. adapting a sentiment classification algorithm trained on book reviews so that it can perform well on reviews of patient experience in clinics). We consider various challenging setups of domain adaptation, focusing on settings where very limited resources and knowledge of the target domains are available when training the algorithm.

Mechanobiology-based prediction of metastatic risk

Metastases cause ~90% of cancer mortality, and prognosis is currently based on histopathology, disease statistics, or genetics. The Weihs lab developed a rapid (~2 hr) early prognostic assay for clinical metastatic risk, augmented with predictive machine learning models, to support disease management.
Two-class and 5-class models successfully separated invasive from non-invasive samples, and samples of varying invasiveness levels, with high sensitivity and specificity.

Chebyshev Nets from Commuting PolyVector Fields

In this project, we propose a method for computing global Chebyshev nets on triangular meshes. We formulate the corresponding global parameterization problem in terms of commuting PolyVector fields, and design an efficient optimization method to solve it. We compute, for the first time, Chebyshev nets with automatically-placed singularities, and demonstrate the realizability of our approach using real material.

Understanding the inheritance of RNA modifications

The first steps of embryogenesis lack transcription and rely on maternal mRNAs stored in oocytes. Thus, maternal mRNA stability is tightly regulated. A-to-I RNA editing is the most common RNA modification, and it is important for normal embryonic development and regulation of innate immunity. Using dozens of high-throughput sequencing databases, we are testing whether edited mRNAs are inherited to prevent activation of the immune system against self RNA in subsequent generations.

A comprehensive RNA editing site identification

A-to-I RNA editing is the most prevalent type of RNA editing in metazoans. As part of this project, we generated RESIC, an efficient pipeline that combines several approaches for the detection and classification of RNA editing sites. The pipeline can be used for all organisms and can use any number of RNA-sequencing datasets as input. Testing this tool on SARS-CoV-2 infection, our analysis implies the involvement of RNA editing in the unpredictable phenotype of COVID-19.
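The core signal such pipelines look for is simple to state: A-to-I editing is read out by sequencers as A-to-G mismatches between aligned reads and the reference. A toy illustration (details invented for the sketch; RESIC's actual classification is far more involved):

```python
def candidate_editing_sites(reference, reads, min_reads=2):
    # count A->G mismatches per position across aligned reads
    sites = {}
    for read in reads:  # each read assumed aligned at offset 0 for the sketch
        for i, (ref_base, base) in enumerate(zip(reference, read)):
            if ref_base == 'A' and base == 'G':
                sites[i] = sites.get(i, 0) + 1
    # keep positions supported by enough independent reads
    return [i for i, n in sorted(sites.items()) if n >= min_reads]

ref   = 'GATTACA'
reads = ['GGTTACA', 'GGTTACA', 'GATTGCA']
print(candidate_editing_sites(ref, reads))  # -> [1]
```

Real pipelines must additionally separate editing from genomic SNPs and sequencing errors, which is where combining multiple detection approaches pays off.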

Generating a platform for a reliable differential expression analysis

Differential Expression Analysis (DEA) of RNA-sequencing data is frequently performed to detect key genes affected across different conditions. Preceding reliability testing of the input material is crucial for consistent and strong results, yet can be challenging. In this project, we generated a tool, the Biological Sequence Expression Kit (BiSEK) – a UI-based platform for DEA dedicated to reliable analysis.
BiSEK is based on a novel algorithm to track discrepancies between the data and the statistical model design.

Learning causal estimators from unlabeled data

Models of real-world phenomena, e.g., human physiology, offer significant utility in health and disease.
However, they often suffer from misspecification. To understand the implications of such misspecification, we develop basic theory for the simple setting of linear models, aiming to understand the benefit of ubiquitously available unlabeled offline data in enhancing misspecified causal models. We implement these ideas in non-linear models, focusing on the cardiovascular system, where an abundance of unlabeled data and (partial) physiological models is available.

People:
Ron Meir
Lifelong learning agents

Effective learning from data requires prior assumptions, referred to as inductive bias. A fundamental question pertains to the source of a ‘good’ inductive bias. One natural way to form such a bias is through lifelong learning, where an agent continually interacts with the world through a sequence of tasks, aiming to improve its performance on future tasks based on the tasks it has encountered so far. We develop a theoretical framework for incremental inductive bias formation, and demonstrate its effectiveness in problems of sequential learning and decision making.

People:
Ron Meir
ECG analysis using deep neural networks

We are developing a smartphone app for cardiologists to help analyze ECG charts. Our methods identify dozens of cardio-related conditions: “Automatic classification of healthy and disease conditions from images or digital standard 12-lead ECGs,” Vadim Gliner, Noam Keidar, Vladimir Makarov, Arutyun I. Avetisyan, Assaf Schuster and Yael Yaniv, Scientific Reports, September 2020. We develop tools to assist physicians in using AI systems: “Meeting the unmet needs of clinicians from AI systems in cardiology: A systematic formulation, and a suggested framework,” Yonatan Elul, Aviv Rosenberg, Assaf Schuster, Alex Bronstein and Yael Yaniv, Proceedings of the National Academy of Sciences (PNAS), April 2021. We are also working on predicting cardiovascular events.

Dimensionality reduction
In this setting we study how to reduce the dimensionality of data for learning and for optimization, avoiding the “curse of dimensionality”.
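One classical tool in this area is random projection in the Johnson-Lindenstrauss style, which reduces dimension while approximately preserving pairwise distances with high probability. A pure-Python sketch (illustrative; practical implementations use optimized linear algebra):

```python
import random

def random_projection(points, k, seed=0):
    # project d-dimensional points to k dimensions via a random
    # Gaussian matrix scaled by 1/sqrt(k)
    rng = random.Random(seed)
    d = len(points[0])
    R = [[rng.gauss(0, 1) / k ** 0.5 for _ in range(d)] for _ in range(k)]
    return [[sum(r[j] * p[j] for j in range(d)) for r in R] for p in points]

points = [[1.0] * 100, [0.0] * 100]
low = random_projection(points, k=20)
# the two points now live in 20 dimensions instead of 100
```

The guarantee (for suitable k, growing only logarithmically in the number of points) is that all pairwise distances are distorted by at most a small factor, which is what lets downstream learning and optimization sidestep the curse of dimensionality.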
People:
Nir Ailon
Ranking and preference learning
In this setting we study how to model people’s preferences over a set of choices, and how to optimize and learn given user preferences in a variety of applications.
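A minimal illustration of preference aggregation (the Borda count, one classical baseline, not necessarily the method studied here): each choice is scored by its position across users' ranked lists, and the totals induce an aggregate ranking.

```python
def borda(rankings):
    # rankings: list of rankings, each ordered from most to least preferred
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for pos, item in enumerate(ranking):
            # an item in position pos earns n - 1 - pos points
            scores[item] = scores.get(item, 0) + (n - 1 - pos)
    # aggregate ranking: highest total score first
    return sorted(scores, key=lambda it: -scores[it])

votes = [['a', 'b', 'c'], ['a', 'c', 'b'], ['b', 'a', 'c']]
print(borda(votes))  # -> ['a', 'b', 'c']
```

Learning-to-rank methods generalize this: instead of full rankings they fit a scoring model from noisy pairwise or partial preferences.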
People:
Nir Ailon