Projects

General-Domain Truth Discovery via Average Proximity

Truth discovery is a general name for statistical methods that aim to extract the correct answers to questions from multiple answers given by noisy sources, for example, workers on a crowdsourcing platform. We suggest a simple heuristic for estimating workers' competence using their average proximity to other workers. We prove that this estimates the actual competence level well and enables separating high- and low-quality workers across a wide spectrum of domains and statistical models.
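
As a rough illustration of the idea (not the paper's exact estimator), the sketch below scores each worker by average agreement with the others and uses the scores for weighted aggregation; all names and the toy data are illustrative:

```python
import numpy as np

def proximity_competence(answers):
    """Estimate each worker's competence as their average agreement
    (proximity) with all other workers. `answers` is a (workers x
    questions) array of categorical answers."""
    n = answers.shape[0]
    agree = (answers[:, None, :] == answers[None, :, :]).mean(axis=2)  # pairwise agreement
    np.fill_diagonal(agree, 0.0)
    return agree.sum(axis=1) / (n - 1)  # average proximity to the others

def weighted_majority(answers, competence):
    """Aggregate answers, weighting each worker by estimated competence."""
    estimates = []
    for q in range(answers.shape[1]):
        values, votes = np.unique(answers[:, q], return_inverse=True)
        estimates.append(values[np.argmax(np.bincount(votes, weights=competence))])
    return np.array(estimates)

# Toy run: 5 workers, 8 binary questions; the last worker answers at random.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=8)
answers = np.array([np.where(rng.random(8) < p, truth, 1 - truth)
                    for p in [0.9, 0.9, 0.8, 0.7, 0.5]])
comp = proximity_competence(answers)
print(comp)                                   # higher for the more reliable workers
print(weighted_majority(answers, comp), truth)
```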

Content and social dynamics of Slack communication
Analyses of all communication conducted on the public Slack channels of a mid-size firm, using topic analysis, sentiment analysis, and behavior analysis, as well as network analysis, to deconstruct the dynamics of the firm's Slack communication.
Effects of customer emotions on employee response time
Analyses of archives of over 100K real-time interactions between human agents and customers in online service chats.
Design For Collaboration (DFC)

The focus is on recognizing and analyzing the challenges that arise when autonomous agents with different capabilities need to interact and collaborate on unknown tasks, on providing methods for the automated design of these environments to promote collaboration, and on specifying guarantees regarding the quality of the design solutions produced by our suggested methods. This research combines data-driven approaches with symbolic AI techniques and involves both theoretical work and evaluations on multi-agent reinforcement learning settings and on multi-robot systems.

Market of Information and Skills for Multi-Agent AI and Multi-Robot Teams

Promoting multi-agent collaboration via dynamic markets of information and skills in which AI agents and robots trade their physical capabilities and their ability to acquire new information. The value of these traded commodities is dynamically computed based on the agents’ objectives, sensors and actuation capabilities as well as their ability to communicate with each other and ask for assistance. This framework maximizes performance and team resilience, without relying on a centralized controller.

Task and Team Aware Motion Planning for Robotics (TATAM)

Most current approaches to robotic planning separate the low-level planning of basic behaviors and the high-level search for a sequence of behaviors that will accomplish a task. However, in complex settings such as packing, personal assistance, and cooking, this dichotomous view becomes inefficient, especially in environments shared by multiple autonomous agents. We therefore offer new ways for integrating task-level considerations when planning the robot’s movement, and for propagating motion-planning considerations into task planning.

Robustness and uncertainty in dynamic decision problems

Understanding how to deal with model uncertainty is key to building resilient agents that can cope with unforeseen environments. For years, our research group has studied approaches for building robust agents that can handle different types of uncertainty. Robustness means that policies are immune to changes in the environment, leading to better real-time performance. In a sequence of papers we developed robust reinforcement learning and planning algorithms, including scaling such algorithms up, learning the uncertainty set online, and adapting quickly to unknown uncertainties. The main application areas are energy and transport services.
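
A minimal sketch of one such approach, robust value iteration over a finite uncertainty set of transition models; the set, names, and toy MDP are illustrative, not our algorithms:

```python
import numpy as np

def robust_value_iteration(P_set, R, gamma=0.95, iters=500):
    """Robust value iteration for a finite MDP.
    P_set: list of transition tensors, each of shape (A, S, S),
           forming a finite uncertainty set of models.
    R:     reward array of shape (A, S)."""
    A, S = R.shape
    V = np.zeros(S)
    for _ in range(iters):
        # For each action: worst-case expected next value over the set.
        Q = np.stack([
            R[a] + gamma * np.min([P[a] @ V for P in P_set], axis=0)
            for a in range(A)
        ])
        V = Q.max(axis=0)
    return V, Q.argmax(axis=0)  # robust value and robust policy

# Toy: 2 states, 2 actions, two candidate transition models.
R = np.array([[1.0, 0.0], [0.5, 0.5]])
P1 = np.tile(np.array([[0.9, 0.1], [0.2, 0.8]]), (2, 1, 1))
P2 = np.tile(np.array([[0.6, 0.4], [0.5, 0.5]]), (2, 1, 1))
V, pi = robust_value_iteration([P1, P2], R)
```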

Language models in reinforcement learning

We consider the potential role of language as a regularizer in reinforcement learning. The objective is to create hierarchical reinforcement learning algorithms that are explainable by design: they use language to describe what they do. The language models can be learned, dictated, imitated, or created. In a paper that appeared in ICML 2019, we introduced Act2Vec, a general framework for learning context-based action representations for reinforcement learning. Representing actions in a vector space helps reinforcement learning algorithms achieve better performance by grouping similar actions and exploiting relations between different actions. We showed how prior knowledge of an environment can be extracted from demonstrations and injected into action vector representations that encode natural, compatible behavior. We then used these representations to augment state representations and improve function approximation of Q-values. We visualized and tested action embeddings in three domains: a drawing task, a high-dimensional navigation task, and the large action space of StarCraft II.
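
As a simplified stand-in for the paper's skip-gram training, the sketch below builds context-based action embeddings from co-occurrence statistics (positive PMI followed by SVD); all names and the toy demonstrations are illustrative:

```python
import numpy as np

def action_embeddings(trajectories, n_actions, window=2, dim=8):
    """Embed actions by the contexts they appear in, word2vec-style:
    count co-occurrences within a sliding window over demonstration
    trajectories, weight by positive PMI, and factorize with SVD."""
    C = np.zeros((n_actions, n_actions))
    for traj in trajectories:
        for i, a in enumerate(traj):
            for j in range(max(0, i - window), min(len(traj), i + window + 1)):
                if j != i:
                    C[a, traj[j]] += 1.0
    total = C.sum()
    p_a = C.sum(axis=1, keepdims=True) / total
    p_b = C.sum(axis=0, keepdims=True) / total
    with np.errstate(divide='ignore', invalid='ignore'):
        pmi = np.log((C / total) / (p_a * p_b))
    ppmi = np.where(np.isfinite(pmi), np.maximum(pmi, 0.0), 0.0)
    U, s, _ = np.linalg.svd(ppmi)
    return U[:, :dim] * np.sqrt(s[:dim])  # one row per action

# Actions that play similar roles in demonstrations get nearby vectors.
demos = [[0, 1, 2, 1, 0], [0, 2, 1, 2, 0], [3, 3, 0, 3, 3]]
E = action_embeddings(demos, n_actions=4, dim=2)
```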

Recommendation: a dynamical-systems perspective

Modern recommendation platforms have become complex, dynamic ecosystems. Platforms often rely on machine learning models to successfully match users to content, but most methods neglect to account for how they affect user behavior, satisfaction, and well-being over time. Here we propose a novel dynamical-systems perspective on recommendation that allows reasoning about, and controlling, macro-temporal aspects of recommendation policies as they relate to user behavior.

Redundant Storage Service on the Edge

This project will enable unreliable edge computing nodes to jointly provide a reliable storage service for unpredictable user workloads. Edge systems consist of small-scale servers (nodes) at the edge of the network, rooted in a cloud-based datacenter. Their premise is to bring data and computing closer to time-critical applications running on, e.g., cellphones and autonomous vehicles. We combine storage redundancy schemes with scalable algorithms for object mapping and request scheduling.
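
A minimal sketch of the kind of redundancy scheme involved, here a single-parity erasure code that tolerates the loss of any one node (illustrative, not the project's actual scheme):

```python
import functools

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int):
    """Split `data` into k equal chunks plus one XOR parity chunk,
    so the object survives the loss of any single node."""
    if len(data) % k:
        data += b'\x00' * (k - len(data) % k)   # pad to a multiple of k
    size = len(data) // k
    chunks = [data[i * size:(i + 1) * size] for i in range(k)]
    return chunks + [functools.reduce(_xor, chunks)]

def recover(shards, missing: int):
    """Rebuild the shard at index `missing` by XOR-ing the survivors."""
    return functools.reduce(_xor, [c for i, c in enumerate(shards) if i != missing])

shards = encode(b"edge object payload", k=4)
assert recover(shards, missing=2) == shards[2]
```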

Non-invasive Brain-Computer Interfaces

Non-invasive brain-computer interfaces (BCIs) provide a direct communication link from the brain to external devices. We develop non-invasive BCIs that interpret EEG measurements to identify a user's desired selection, action, or movement. We focus on developing self-correction capabilities based on error-related potentials (ErrPs), which are evoked in the brain when errors are detected. We investigate ErrPs, develop classifiers for detecting them, and devise methods for integrating them to improve BCIs. This project is funded by the Dr. Maria Ascoli Rossi Research Grant.

Reinforcement learning of assembly policies

Our research focuses on developing control policies that are based on admittance control to facilitate learning and sim2real transfer. This is part of a large project on Assembly by Robotic Technology (ART) funded by the Israel Innovation Authority. We developed a Residual Admittance Policy (RAP) that generalizes well over space, size, and shape, and facilitates quick transfer learning. Most impressively, we demonstrate that the policy learned in simulation is highly successful in controlling an industrial robot (UR5e) to insert pegs of different shapes and sizes, without further training.

Precision Agriculture

Precision agriculture (PA) is based on observing, measuring, and responding to inter- and intra-field variability in crops or livestock. The goal is to provide a decision support system (DSS) for whole-farm management that optimizes returns on inputs while preserving resources. Among the many possible approaches, we focus on three specific applications: precise irrigation, early crop disease detection, and early detection of pain in dairy cows.

Atmospheric Informatics

Recent developments in sensory and communication technologies have made low-cost, micro-sensing units (MSUs) feasible. These MSUs can operate as a set of individual nodes, or may be interconnected to form a Wireless Distributed Environmental Sensor Network (WDESN). MSUs' lower power consumption and small size enable many new applications, such as mobile sensing. Their main limitation is relatively low accuracy with respect to laboratory equipment or an air quality monitoring (AQM) station. In this project we examine algorithms for assessing these sensors in field operations, autonomous calibration and error concealment, optimal sensor placement, and the utilization of mobile sensors; together with advanced algorithms for data analysis, these provide a comprehensive toolset for atmospheric data analysis.

Situated Temporal Planning

In domains where planning is slow compared to the evolution of the environment, it can be important to take into account the time taken by the planning process itself. For example, plans that involve taking a certain bus are useless if planning finishes after the bus departs. We call this setting situated temporal planning, and we define it as a variant of temporal planning with timed initial literals.

Coordinating Multiple Robots Using Social Laws

Robots operating in the real world must perform their task in an uncertain, partially observable environment, while interacting with other robots. This interaction makes the problem much more difficult to solve. The key insight motivating this project is that it is possible to make the robot's online job much easier by modifying the problem setting offline, before the robot starts operating, by instituting a social law: a convention governing what behavior is allowed.

Adaptive LiDAR Sampling

As LiDAR sensors for depth acquisition advance to solid-state technologies, new capabilities raise new theoretical and technological challenges. In particular, we investigate the benefits afforded by controlling and changing the sampling scheme in real time (adaptive sampling). We use a neural network to predict the optimal sampling scheme per scene, given a fixed sampling budget. We found that, for a given RMSE, the sampling budget can be reduced by a factor of about 4 on average. Various strategies and algorithms are examined.

People:
Guy Gilboa
Gradient flows

We investigate analytic and numerical solutions of nonlinear gradient flows. We examine the flows as nonlinear PDEs and use tools from nonlinear spectral theory. We have recently revealed relations between dynamic mode decomposition (DMD), a common tool in fluid dynamics, and nonlinear eigenfunctions related to homogeneous flows. Through this lens, we are investigating gradient descent algorithms for complex systems.
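
For reference, a minimal sketch of exact DMD on snapshot data (the standard algorithm, not our nonlinear spectral extensions):

```python
import numpy as np

def dmd(X, r=None):
    """Dynamic mode decomposition of a snapshot sequence.
    X: (n, m) array whose columns x_0..x_{m-1} are snapshots of the flow.
    Returns eigenvalues and modes of the best-fit linear operator A
    with x_{t+1} ~= A x_t."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if r is not None:                              # optional rank truncation
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W  # exact DMD modes
    return eigvals, modes
```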

People:
Guy Gilboa
Using user behavior to improve automatic database schema matching

Database schema matching is a challenging task that has called for improvement for several decades, as automatic algorithms fail to provide sufficiently reliable results. We use human matching to overcome algorithm failures, and vice versa. We treat human and algorithmic matchers as imperfect matchers with different strengths and weaknesses. We use insights from cognitive research to predict human matchers' behavior and to identify those who can do better than others. We then merge their responses with algorithmic outcomes to obtain better results.
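
A minimal sketch of the merging step, combining an algorithmic similarity matrix with reliability-weighted, possibly partial human judgments; the names, weights, and toy matrices are illustrative:

```python
import numpy as np

def merge_matchers(algo_sim, human_sims, human_weights, algo_weight=1.0):
    """Combine an algorithmic attribute-pair similarity matrix with
    several human matchers' (possibly partial) judgments into one
    consensus matrix. Entries a human did not judge are NaN and
    simply carry no weight."""
    num = algo_weight * algo_sim
    den = np.full_like(algo_sim, algo_weight)
    for H, w in zip(human_sims, human_weights):
        mask = ~np.isnan(H)
        num[mask] += w * H[mask]
        den[mask] += w
    return num / den

algo = np.array([[0.9, 0.2], [0.1, 0.6]])       # algorithm's pairwise scores
h1 = np.array([[1.0, np.nan], [np.nan, 0.0]])   # a confident but partial human
merged = merge_matchers(algo, [h1], human_weights=[2.0])
```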

Information design

Consider a setting where one agent holds private information and would like to use it to motivate another agent to take some action. When the agents' interests coincide, the answer is easy: disclose the full information. In this project we study optimal information design when the agents' incentives are misaligned.

Stochastic Image Denoising by Sampling from the Posterior Distribution

Image denoising is a well-known and well-studied problem, commonly targeting a minimization of the mean squared error (MSE) between the outcome and the original image. Unfortunately, especially for severe noise levels, such Minimum MSE (MMSE) solutions may lead to blurry output images. In this work we propose a novel stochastic denoising approach that produces viable and high perceptual quality results, while maintaining a small MSE. Our method employs Langevin dynamics that relies on a repeated application of any given MMSE denoiser, obtaining the reconstructed image by effectively sampling from the posterior distribution. Due to its stochasticity, the proposed algorithm can produce a variety of high-quality outputs for a given noisy input, all shown to be legitimate denoising results. In addition, we present an extension of our algorithm for handling the inpainting problem, recovering missing pixels while removing noise from partially given data.
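
A minimal sketch of the core iteration, using Tweedie's formula to turn a plug-in MMSE denoiser into a Langevin sampler; the actual algorithm anneals the noise level across iterations, `mmse_denoiser` is a hypothetical plug-in, and the step size is illustrative:

```python
import numpy as np

def langevin_denoise(y, mmse_denoiser, sigma, steps=200, alpha=1e-2, rng=None):
    """Draw an approximate sample from the posterior p(x | y) by
    repeatedly applying a given MMSE denoiser D.
    Tweedie: score of the smoothed prior ~= (D(x, s) - x) / s**2."""
    rng = rng or np.random.default_rng()
    x = y.copy()
    for _ in range(steps):
        prior_score = (mmse_denoiser(x, sigma) - x) / sigma**2
        likelihood_score = (y - x) / sigma**2      # Gaussian noise model
        score = prior_score + likelihood_score
        # Langevin update: gradient step plus injected noise.
        x += alpha * score + np.sqrt(2 * alpha) * rng.standard_normal(x.shape)
    return x
```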

High Perceptual Quality Image Denoising with a Posterior Sampling CGAN

The vast work in Deep Learning (DL) has led to a leap in image denoising research. Most DL solutions for this task have focused their efforts on the denoiser's architecture while maximizing distortion performance. However, distortion-driven solutions lead to blurry results with sub-optimal perceptual quality, especially at high noise levels. In this paper we propose a different perspective, aiming to produce sharp and visually pleasing denoised images that are still faithful to their clean sources. Formally, our goal is to achieve high perceptual quality with acceptable distortion. This is attained by a stochastic denoiser that samples from the posterior distribution, trained as a generator in the framework of conditional generative adversarial networks (CGAN). Contrary to distortion-based regularization terms that conflict with perceptual quality, we introduce to the CGAN objective a theoretically founded penalty term that does not force a distortion requirement on individual samples, but rather on their mean. We showcase our proposed method with a novel denoiser architecture that achieves the reformed denoising goal and produces vivid and diverse outcomes at high noise levels.
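
A minimal sketch of the penalty idea, constraining the mean of several generated samples rather than each individual sample; `generator` is a hypothetical stand-in for the trained CGAN generator:

```python
import numpy as np

def mean_distortion_penalty(generator, y, x_clean, n_samples=8, rng=None):
    """Penalty that pulls the average of posterior samples (an MMSE-like
    estimate) toward the clean image, without forcing any single sample
    toward it, so individual samples can stay sharp and diverse."""
    rng = rng or np.random.default_rng()
    samples = [generator(y, rng.standard_normal(y.shape))  # one sample per noise seed
               for _ in range(n_samples)]
    x_mean = np.mean(samples, axis=0)
    return np.mean((x_mean - x_clean) ** 2)   # MSE on the sample mean only
```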

Patch Craft: Video Denoising by Deep Modeling and Patch Matching

The non-local self-similarity property of natural images has been exploited extensively for solving various image processing problems. When it comes to video sequences, harnessing this force is even more beneficial due to the temporal redundancy. In the context of image and video denoising, many classically-oriented algorithms employ self-similarity by splitting the data into overlapping patches, gathering groups of similar patches, and processing them together. With the emergence of convolutional neural networks (CNNs), the patch-based framework has been abandoned: most CNN denoisers operate on the whole image, leveraging non-local relations only implicitly through a large receptive field. This work proposes a novel approach for leveraging self-similarity in the context of video denoising, while still relying on a regular convolutional architecture. We introduce the concept of patch-craft frames: artificial frames that are similar to the real ones, built by tiling matched patches. Our algorithm augments video sequences with patch-craft frames and feeds them to a CNN. We demonstrate the substantial boost in denoising performance obtained with the proposed approach.

Perturbation models

Statistical reasoning about complex systems involves a probability distribution over exponentially many configurations. For example, semantic labeling of an image requires inferring a discrete label for each pixel, resulting in a number of possible segmentations that is exponential in the number of pixels. Standard approaches such as Gibbs sampling are slow in practice and cannot be applied to many real-life problems. Our goal is to integrate optimization and sampling through extreme value statistics and to define a new statistical framework in which sampling and parameter estimation in complex systems are efficient. This framework is based on measuring the stability of prediction to random changes in the potential interactions.
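
In its simplest, exact low-dimensional form, this is the Gumbel-max trick, sketched below; high-dimensional perturbation models replace the explicit enumeration with an efficient MAP solver over low-dimensional perturbations:

```python
import numpy as np

def gumbel_max_sample(potentials, rng=None):
    """Sample configuration i with probability exp(theta_i) / Z by
    maximizing randomly perturbed potentials (the Gumbel-max trick)."""
    rng = rng or np.random.default_rng()
    g = rng.gumbel(size=len(potentials))       # i.i.d. Gumbel perturbations
    return int(np.argmax(potentials + g))      # optimization acts as sampling

theta = np.log(np.array([0.1, 0.2, 0.7]))
counts = np.bincount([gumbel_max_sample(theta) for _ in range(10000)], minlength=3)
print(counts / counts.sum())                   # ~= [0.1, 0.2, 0.7]
```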

Deep-Learning Flow Control in Cellular Channels

Cellular channels are increasingly used for sensitive real-time applications. For example, real-time video can now be broadcast over parallel cellular channels, possibly from a moving vehicle. Such channels are characterized by high variability, and require improved flow control algorithms to maintain a stable flow. This work addresses the application of deep learning algorithms to develop suitable flow control and scheduling algorithms under real-time delay constraints.

Markov Decision Processes with Burstiness Constraints
Burstiness constraints characterize various dynamic processes, such as traffic demand in communication networks. We consider the optimal control of MDPs subject to such constraints, providing a theoretical framework and effective algorithms for this problem.
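
A minimal sketch of checking a (sigma, rho) burstiness constraint directly from its definition; conventions for the window accounting vary, so this is purely illustrative:

```python
def satisfies_burstiness(traffic, sigma, rho):
    """Check the (sigma, rho) constraint: over every time window, the
    cumulative traffic may exceed the average rate rho by at most the
    burst allowance sigma, i.e. sum(traffic[s:t]) <= sigma + rho*(t-s)."""
    n = len(traffic)
    prefix = [0]
    for a in traffic:
        prefix.append(prefix[-1] + a)          # prefix sums of the traffic
    return all(prefix[t] - prefix[s] <= sigma + rho * (t - s)
               for s in range(n) for t in range(s + 1, n + 1))

assert satisfies_burstiness([3, 0, 1, 1], sigma=2, rho=1)
assert not satisfies_burstiness([3, 3, 0, 0], sigma=2, rho=1)
```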
Shape reconstruction
Computational methods for stereoscopic imaging and other depth-from-X problems, as well as shape recognition and understanding.
People:
Ron Kimmel
Non-rigid shape analysis
Finding computational methods for matching and analyzing non-rigid shapes. From a computational point of view, convolution cannot be used here, so we design and explore other deep learning avenues.
People:
Ron Kimmel
Food-IoT - Generic Technologies for Advancing the Food Value Chain

The vision of the consortium, of which our lab is part, is to develop generic technologies for the analysis and use of information about raw materials, production processes, and the consumer, in order to enable connectivity throughout the value chain and bring about a paradigm shift in which food is produced with the highest efficiency and safety.

People:
Dov Dori
TRACOD – Tracking the Cod Fish Nutritional Value

TRACOD (short for TRAck the COD fish) is an EIT Food project aimed at improving the ability of producers and consumers to track the freshness and nutritional values of fresh fish, including cod, salmon, and other white fish species. TRACOD uses models implemented in an app for interacting with stakeholders, and includes an education component for endowing food engineers with a systems approach. The project also engages future food engineers in conceptual modeling as part of model-based systems engineering of food production and supply systems.

People:
Dov Dori
OPCloud

OPCloud is a web-based collaborative software environment for creating conceptual models of systems and phenomena with the OPM standard ISO 19450:2015. It is used in dozens of universities and enterprises, and it is continuously being extended with new features and capabilities.
People:
Dov Dori
Sparsity Aware Normalization for GANs

Generative adversarial networks (GANs) are known to benefit from regularization or normalization of their discriminator network during training. In this work, we introduced sparsity aware normalization (SAN), a new method for stabilizing GAN training. Our method is particularly effective for image restoration and image-to-image translation, where it significantly improves upon existing methods, such as spectral normalization, while enabling shorter training and smaller-capacity networks, at no computational overhead.

Explorable Image Restoration

Existing image restoration methods do not allow exploring the infinitely many plausible reconstructions that might have given rise to the measured image. In this work, we introduced the task of explorable image restoration, and illustrated it for super-resolution and JPEG decompression. We proposed a framework comprising a graphical user interface with a neural network backend, allowing the user to edit the output and explore the abundance of plausible explanations of the input. We illustrated our approach in a variety of use cases, ranging from medical imaging and forensics to graphics (oral presentations at CVPR'20 and CVPR'21).

SinGAN: Learning a generative model from a single natural image

We introduced an unconditional generative model that can be learned from a single natural image. Our model, coined SinGAN, is trained to capture the internal distribution of patches within the image, and is then able to generate high-quality, diverse samples of arbitrary size and aspect ratio that carry the same visual content as the image. We illustrated the utility of SinGAN in a wide range of image manipulation tasks. This work won the Best Paper Award (Marr Prize) at ICCV'19.

Massive Parallelization of Deep Learning

Improvements in training speed are needed to develop the next generation of deep learning models. To perform such a massive amount of computation in a reasonable time, training is parallelized across multiple GPU cores. Perhaps the most popular parallelization method is to use a large batch of data in each SGD iteration, so that the gradient computation can be performed in parallel on multiple workers. We aim to enable massive parallelization without the performance degradation that is commonly observed with large batches.
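
A minimal, framework-free sketch of synchronous data parallelism with gradient averaging; the "workers" are simulated sequentially here, and the toy problem is illustrative:

```python
import numpy as np

def parallel_sgd_step(w, X, y, grad_fn, n_workers, lr):
    """One large-batch SGD step: split the batch across workers,
    compute per-worker gradients 'in parallel', average (the role an
    all-reduce plays in practice), then take one update."""
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    grads = [grad_fn(w, Xi, yi) for Xi, yi in shards]   # one per worker
    w -= lr * np.mean(grads, axis=0)
    return w

# Least-squares toy: gradient of 0.5 * ||Xw - y||^2 / n.
grad = lambda w, X, y: X.T @ (X @ w - y) / len(y)
rng = np.random.default_rng(0)
X, w_true = rng.standard_normal((512, 4)), np.array([1.0, -2.0, 3.0, 0.5])
y = X @ w_true
w = np.zeros(4)
for _ in range(200):
    w = parallel_sgd_step(w, X, y, grad, n_workers=8, lr=0.1)
```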

Resource efficient deep learning

We aim to improve the resource efficiency of deep learning (e.g., energy, bandwidth) for training and inference. Our focus is on decreasing the numerical precision of the neural network model, a simple and effective way to improve resource efficiency. Nearly all recent deep learning hardware relies heavily on lower-precision math. The benefits are a reduction in the memory required to store the neural network, a reduction in chip area, and a drastic improvement in energy efficiency.
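
A minimal sketch of symmetric uniform int8 quantization, the basic operation behind such low-precision schemes (real schemes vary in granularity, rounding, and calibration):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric uniform quantization: map float weights to int8 plus a
    single float scale, cutting storage by ~4x vs. float32."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal(1000).astype(np.float32)
q, s = quantize_int8(w)
print(np.abs(dequantize(q, s) - w).max())   # small quantization error
```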

Understanding and controlling the implicit bias in deep learning

Significant research efforts are being invested in improving Deep Neural Networks (DNNs) via various modifications. However, such modifications often cause an unexplained degradation in the generalization performance of DNNs on unseen data. Recent findings suggest that this degradation is caused by changes to the hidden algorithmic bias of the training algorithm and model. This bias determines which solution is selected from all the solutions that fit the data. We aim to understand and control this algorithmic bias.

Super-Resolution of Digital Elevation Models

Our goal is to develop multi-modal neural network architectures for guided super-resolution (SR) of digital elevation models (DEMs). Current DEMs for most of the earth's surface are still low-resolution (sometimes 2 meters per pixel, but more often 10, 15, or 30 meters per pixel) and thus cannot accurately represent the morphology of the terrain. High-resolution DEMs, however, have many uses, including precision agriculture, urban mapping, high-definition maps for autonomous navigation, line-of-sight analysis, and more.

Residual Echo Suppression Using Deep Learning

We address the problem of residual echo suppression (RES) in real-life acoustic environments that often include low signal-to-noise ratios, reverberations, and degraded audio measurements. We propose a low-power, low-resource, on-device system that receives dual-channel streaming audio as a waveform and applies deep learning-based echo cancellation to it. This solution can benefit many practical speech-based hands-free communication platforms, such as smartphones, conference room speakerphones, and smart speakers like Amazon Alexa and Google Home.

Online POMDP and BSP Planning via Simplification

We develop a fundamentally novel paradigm that seeks to find a simplification of a given POMDP problem, which is computationally easier, while at the same time providing performance guarantees, and ideally, similar levels of performance as the original decision making problem.
Based on this conceptually novel paradigm, we develop approaches that simplify the decision making problem, for example, by resorting to belief simplification or reward function simplification.

Autonomous Semantic Perception under Uncertainty

We develop approaches for autonomous semantic perception, addressing key challenges such as classification aliasing for certain relative viewpoints between object and camera, localization uncertainty, and the epistemic uncertainty of the classifier. Specifically, we develop approaches for computationally efficient probabilistic inference and decision making in the context of semantic perception and SLAM. A key component here is a learned viewpoint-dependent classifier model.

Chebyshev Nets from Commuting PolyVector Fields

In this project, we propose a method for computing global Chebyshev nets on triangular meshes. We formulate the corresponding global parameterization problem in terms of commuting PolyVector fields, and design an efficient optimization method to solve it. We compute, for the first time, Chebyshev nets with automatically-placed singularities, and demonstrate the realizability of our approach using real material.

Large matrix approximation for acceleration of deep networks
In this work we apply matrix approximation theory to reduce the cost of training and deploying dense layers in deep networks.
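
A minimal sketch of one standard instance of this idea: replacing a dense layer's (n, m) weight matrix with a rank-r truncated SVD, so a product Wx costs about r(n+m) multiplies instead of nm (illustrative, not the project's specific method):

```python
import numpy as np

def low_rank_factorize(W, r):
    """Approximate a dense-layer weight matrix W by two rank-r factors,
    so W @ x ~= A @ (B @ x) with far fewer multiplications."""
    U, s, Vh = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * s[:r]          # (n, r), singular values folded in
    B = Vh[:r]                    # (r, m)
    return A, B

W = np.random.default_rng(0).standard_normal((256, 512))
A, B = low_rank_factorize(W, r=32)
x = np.random.default_rng(1).standard_normal(512)
err = np.linalg.norm(W @ x - A @ (B @ x)) / np.linalg.norm(W @ x)
```
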
People:
Nir Ailon