Colloquium

Organizer(s): 
Usual Time: 
Where: 

Upcoming Lectures

Ilya Soloveychik
23/11/2017 - 12:00

Random matrices have become a very active area of research in recent years and have found enormous applications in modern mathematics, physics, engineering, biological modeling, and other fields. In this work, we focus on symmetric sign (+/-1) matrices (SSMs), which Wigner originally used to model the nuclei of heavy atoms in the mid-1950s. Assuming the entries of the upper triangular part to be independent +/-1 with equal probabilities, Wigner showed in his pioneering works that as the sizes of the matrices grow, their empirical spectra converge to a non-random measure with a semicircular shape. Later, this fundamental result was improved and substantially extended to more general families of matrices and finer spectral properties. In many physical phenomena, however, the entries of matrices exhibit significant correlations. At the same time, almost all available analytical tools rely heavily on the independence condition, making the study of matrices with structure (dependencies) very challenging. The few existing works in this direction consider very specific setups and are limited by particular techniques, lacking a unified framework and tight information-theoretic bounds that would quantify the exact amount of structure that matrices may possess without affecting the limiting semicircular form of their spectra.
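
A minimal numpy sketch (illustrative only, not code from the talk) makes the semicircular limit concrete: sample a symmetric sign matrix, scale its eigenvalues by 1/sqrt(n), and compare the empirical spectrum with the semicircle density.

import numpy as np

# Sample a symmetric sign matrix with i.i.d. +/-1 entries on and above the
# diagonal, and compare its scaled empirical spectrum with the semicircle
# density rho(x) = sqrt(4 - x^2) / (2*pi) supported on [-2, 2].
n = 2000
rng = np.random.default_rng(0)
signs = rng.choice([-1.0, 1.0], size=(n, n))
A = np.triu(signs)              # upper triangle, including the diagonal
A = A + np.triu(A, 1).T         # mirror it to obtain a symmetric matrix

eigs = np.linalg.eigvalsh(A) / np.sqrt(n)   # Wigner scaling

hist, edges = np.histogram(eigs, bins=50, range=(-2.2, 2.2), density=True)
centers = (edges[:-1] + edges[1:]) / 2
semicircle = np.where(np.abs(centers) <= 2,
                      np.sqrt(np.maximum(4 - centers ** 2, 0)) / (2 * np.pi),
                      0.0)
print("max deviation from the semicircle density:", np.abs(hist - semicircle).max())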

 

From a different perspective, in many applications one needs to simulate random objects. Due to the independence condition, the generation of large random matrices requires very powerful sources of randomness; the experiments are impossible to reproduce; and atypical or non-random-looking outcomes may appear with positive probability. Reliable deterministic construction of SSMs with random-looking spectra and low algorithmic and computational complexity is of particular interest due to the natural correspondence between SSMs and undirected graphs, since the latter are extensively used in combinatorial and CS applications, e.g., for the purposes of derandomization. Unfortunately, most existing constructions of pseudo-random graphs focus on the extreme eigenvalues and do not provide guarantees on the whole spectrum. In this work, using binary Golomb sequences, we propose a simple, completely deterministic construction of circulant SSMs whose spectra converge to the semicircular law at the same rate as in the original Wigner ensemble. We show that this construction has close to the lowest possible algorithmic complexity and is very explicit. Essentially, the algorithm requires at most 2log(n) bits, implying that the real amount of randomness conveyed by the semicircular property is quite small.
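
The sketch below shows the circulant construction pattern from a deterministic +/-1 sequence; the Legendre (quadratic-residue) sequence used here is only a stand-in for the binary Golomb sequences of the talk. Instructively, the stand-in fails: the eigenvalues of a Legendre circulant are character sums of magnitude about sqrt(p), so the scaled spectrum clusters near +/-1 instead of filling a semicircle, which is exactly why the choice of sequence is the crux of the construction.

import numpy as np

# Build a symmetric circulant sign matrix C[i, j] = s[(j - i) mod p] from a
# deterministic +/-1 sequence. A Legendre sequence with p = 1 (mod 4) is
# used as a stand-in here; it satisfies s[k] == s[p - k], so C is symmetric.
p = 1009                          # prime with p % 4 == 1
qr = {(x * x) % p for x in range(1, p)}
s = np.array([1.0] + [1.0 if k in qr else -1.0 for k in range(1, p)])

idx = (np.arange(p)[None, :] - np.arange(p)[:, None]) % p
C = s[idx]
assert np.allclose(C, C.T)        # a symmetric circulant SSM

# For this stand-in, scaled eigenvalues concentrate near +/-1 rather than
# forming a semicircle; the talk's Golomb-based sequences are chosen so
# that the semicircular shape does emerge.
eigs = np.linalg.eigvalsh(C) / np.sqrt(p)
print("scaled spectrum range:", eigs.min(), eigs.max())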

-----------------------------------------------------------------------------------------------------------------------

Previous Lectures

Martin Farach-Colton
16/11/2017 - 12:00

Bloom filters and other approximate membership query data structures (AMQs) are one of the great successes of theoretical computer science, with uses throughout databases, file systems, networks and beyond. An AMQ maintains a set under insertions (and sometimes deletions) and answers queries with one-sided error: if a queried element is in the set, the AMQ returns "present"; if it is not, the AMQ returns "present" with probability at most ε. An optimal AMQ uses n log(1/ε) bits of space when the represented set has n elements.
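
A minimal Bloom filter sketch (the standard textbook construction, included for illustration; it sits a constant factor above the optimal space bound quoted above):

import hashlib
import math

class BloomFilter:
    # For n items and false-positive rate eps, the standard parameters use
    # about 1.44 * n * log2(1/eps) bits, a constant factor more than the
    # n * log2(1/eps) information-theoretic optimum.
    def __init__(self, n, eps):
        self.m = max(1, round(-n * math.log(eps) / math.log(2) ** 2))  # bits
        self.k = max(1, round(self.m / n * math.log(2)))               # hashes
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def insert(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def query(self, item):
        # One-sided error: always "present" for inserted items; wrongly
        # "present" with probability about eps for items never inserted.
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(item))

bf = BloomFilter(n=1000, eps=0.01)
bf.insert("hello")
assert bf.query("hello")    # no false negatives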

 

Yet there is a gap between the theoretical bounds provided by AMQs and the requirements placed on them in the field.

 

In this talk, I will describe how AMQs are used in practice and how this changes (for the better) our theoretical understanding of these data structures.

 

 

Martin Farach-Colton is a Professor of Computer Science at Rutgers University, New Brunswick, New Jersey. His research focuses on both the theory and practice of external memory and storage systems. He was a pioneer in the theory of cache-oblivious analysis. His current research focuses on the use of write optimization to improve performance in both read- and write-intensive big data systems. He has also worked on the algorithmics of strings and metric spaces, with applications to bioinformatics. In addition to his academic work, Professor Farach-Colton has extensive industrial experience. He was CTO and co-founder of Tokutek, a database company that was founded to commercialize his research and was acquired in 2015. During 2000–2002, he was a Senior Research Scientist at Google.

Raphael Cohen
09/11/2017 - 12:00

Audio recordings are a data source of great value used for analyzing conversations and enabling digital assistants. An important aspect of analyzing single-channel audio conversations is identifying who said what, a task known as speaker diarization. The task is further complicated when the number of speakers is a priori unknown. In this talk we’ll review the motivation for speaker diarization and verification, outline previous approaches to diarization and their relation to “real” diarization needs (as in audio applications such as Chorus.ai’s Conversation Analytics platform), and present the pipeline required for integrating these solutions, as well as recent state-of-the-art end-to-end deep learning approaches to this problem.

Nachshon Cohen
02/11/2017 - 12:00

Non-volatile memory (NVM) technologies such as PCM, ReRAM and STT-RAM allow processors to write values directly to persistent storage at speeds significantly faster than previous durable media such as hard drives or SSDs. Many applications of NVM are built on a logging subsystem, which enables operations to appear to execute atomically and facilitates recovery from failures. Writes to NVM, however, pass through a processor's memory system, which can delay and reorder them, impairing the correctness and increasing the cost of logging algorithms.

Reordering arises because of out-of-order execution in a CPU and the inter-processor cache coherence protocol. By carefully considering the properties of these reorderings, this work develops a logging protocol that requires only one round trip to non-volatile memory while avoiding expensive computations. We show how to extend the logging protocol to build a persistent set (hash map) that also requires only a single round trip to non-volatile memory for insertion, update, or deletion.

 

GIL EINZIGER
26/10/2017 - 12:00

In this talk, I introduce the often overlooked concept of cache admission policies. Once the cache is full, admitting a new item requires evicting one of the stored items; an admission policy decides whether the new item should be admitted to the cache at all. I will present two admission policies that enhance performance in different fields. The TinyLFU cache admission policy uses past frequency to estimate the benefit of data items. This policy is implemented in the Caffeine high-performance Java library and is used by many open-source projects such as Apache Cassandra, Apache Accumulo, JBoss Infinispan, Druid, Neo4j, and Spring, to name a few. It is also used in industry by companies such as Google, Netflix, and LinkedIn. Alternatively, the RAP admission policy is a randomized streaming algorithm for finding the most frequent items. In RAP, we use a coin flip to decide whether a new item should be admitted to the cache. This approach improves the space-to-accuracy ratio of state-of-the-art algorithms by up to 32x on real traces and up to 128x on synthetic ones.
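
A hedged sketch of the admission idea (simplified; the real TinyLFU adds counter aging and a compact "doorkeeper" filter): estimate item frequencies with a small count-min sketch, and admit a candidate only if it appears more frequent than the eviction victim.

import hashlib

W, D = 1024, 4                                   # sketch width and depth
counters = [[0] * W for _ in range(D)]

def _cells(item):
    for d in range(D):
        h = hashlib.sha256(f"{d}:{item}".encode()).digest()
        yield d, int.from_bytes(h[:8], "big") % W

def record(item):                                # called on every access
    for d, w in _cells(item):
        counters[d][w] += 1

def estimate(item):                              # count-min frequency estimate
    return min(counters[d][w] for d, w in _cells(item))

def admit(candidate, victim):
    # TinyLFU-style rule: keep whichever item looks more frequent.
    return estimate(candidate) > estimate(victim)

for _ in range(100):
    record("popular")
record("rare")
print(admit("popular", "rare"))   # True: frequent item displaces the rare one
print(admit("rare", "popular"))   # False: infrequent newcomer is rejected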

Ami Paz - Technion
29/06/2017 - 12:00

A spanner of a given graph is a sparse subgraph that approximately preserves distances. Since their introduction in the late 1980s, spanners have found numerous applications in synchronization problems, information dissemination, routing schemes and more.

Many applications of spanners are in computer networks, where the network needs to find a spanner for its own communication graph. We present distributed algorithms for constructing additive spanners in networks of bounded message size, namely in the CONGEST model. In addition, we present an innovative technique for showing lower bounds for constructing spanners in this setting, a technique that can be useful for other distributed graph problems.

 

Based on joint works with Keren Censor-Hillel, Telikepalli Kavitha, Noam Ravid and Amir Yehudayoff.

 

Makoto Yokoo + Taiki Todo
28/06/2017 - 12:00

The theory of two-sided matching (e.g., assigning residents to hospitals or students to schools) has been extensively developed, and it has been applied to design clearinghouse mechanisms in various markets in practice. As the theory has been applied to increasingly diverse types of environments, however, researchers and practitioners have encountered various forms of distributional constraints. As these features have been precluded from consideration until recently, they pose new challenges for market designers. One example of such distributional constraints is a minimum quota: e.g., school districts may need at least a certain number of students in each school in order for the school to operate. In this talk, I present an overview of research on designing mechanisms that work under distributional constraints.

+

In this work we study a facility location problem from the perspective of mechanism design. We focus on the problem of locating a facility deterministically on discrete graphs with cycles, including grids and hypercubes. We further assume that mechanisms are defined for any number of participating agents, so that we can discuss consistency properties of mechanisms' behavior with respect to the number of agents. One such well-studied property is false-name-proofness, which requires that no agent can benefit by adding fake agents to the market. Our contribution is twofold. First, for cycle graphs, we show that a false-name-proof and Pareto efficient mechanism exists if and only if the underlying graph has at most five vertices. Second, for l*m-grid graphs (where l,m >= 2), we show that such a mechanism exists if and only if either l=2 or m=2 holds. We finally show that for any n(>2)-dimensional binary-cube graph, no such mechanism exists.

 

Amitai Armon
22/06/2017 - 12:00

Machine learning has attracted more interest in recent years than ever before, as it utilizes increasing amounts of data for many smart applications. This creates a flood of innovative ideas in the domain, constantly introducing new algorithmic and modeling techniques. The rapid pace of innovation poses a challenge in designing hardware that will best fit the needs of this domain several years from now. The goal of this work was to help cope with this challenge by constructing a benchmark that represents the variety of fundamental compute requirements relevant to the field. The talk describes the analysis method used for building this benchmark. While no one can predict the future of the domain, interesting insights were deduced from its past, and they helped hardware and software experts assess several possibilities for the future.

EITAN FARCHI
15/06/2017 - 12:00

Traditional software is built using information hiding and a decomposition process: the system is broken into sub-systems, which are then broken into components, and so on, until the lowest object in the hierarchy is defined. With the introduction of components that are implemented using machine learning and depend on data, a host of new challenges to system reliability is introduced. Challenges include properly matching business objectives, machine-learning optimization objectives and the relevant data; the stability of the quality of the solution under changes in data; artistic choices of machine-learning parameters; and the correctness of components in probability. In this talk I present an end-to-end view of the challenges and possible ways to address them, as well as some resulting interesting research directions.

 

NETHANEL GELERNTER
08/06/2017 - 12:00

Joint work with Senia Kalma, Bar Magnezi and Hen Porcilan (The College of Management Academic Studies)

This work was presented at IEEE Security & Privacy 2017.

 

We present the password reset MitM (PRMitM) attack and show how it can be used to take over user accounts. The PRMitM attack exploits the similarity of the registration and password-reset processes to launch a man-in-the-middle (MitM) attack at the application level. The attacker initiates a password-reset process with a website and forwards every challenge to the victim, who either wishes to register on the attacking site or to access a particular resource on it. The attack has several variants, including exploitation of a password-reset process that relies on the victim’s mobile phone, using either SMS or a phone call. We evaluated the PRMitM attacks on Google and Facebook users in several experiments, and found that their password-reset processes are vulnerable to the PRMitM attack. Other websites and some popular mobile applications are vulnerable as well. Although solutions seem trivial in some cases, our experiments show that the straightforward solutions are not as effective as expected. We designed two secure password-reset processes and evaluated them on users of Google and Facebook. Our results indicate a significant improvement in security. Since millions of accounts are currently vulnerable to the PRMitM attack, we also present a list of recommendations for implementing and auditing the password-reset process.

 

Affiliation: Cyberpion & The College of Management Academic Studies.

 

Prof. Yair Weiss
25/05/2017 - 12:00

This talk discusses some of the history of graphical models and neural networks and speculates on the future of both fields with examples from the particular problem of image restoration.

Pawel Gawrychowski
18/05/2017 - 12:00

Labeling schemes seek to assign a label to each node in a network, so that a function on two nodes can be computed by examining their labels alone. The goal is to minimize the maximum length of a label and, as a secondary goal, the time to evaluate the function. As a prime example, we might want to compute the distance between two nodes of a network using only their labels. We consider this question for two natural classes of networks: trees and planar graphs. For trees on n nodes, we design labels consisting of (1/4)log^2(n) bits (up to lower-order terms), thus matching a recent lower bound of Alstrup et al. [ICALP 2016]. For planar graphs, the situation is much more complex. A major open problem is to close the gap between an upper bound of O(n^{1/2} log(n)) bits and a lower bound of Omega(n^{1/3}) bits for unweighted planar graphs. We show that, for undirected unweighted planar graphs, there is no hope of achieving a higher lower bound using the known techniques. This is done by designing a centralized structure of size ~O(min(k^2, (kn)^{1/2})) that can calculate the distance between any pair of k designated terminal nodes. We show that this size is tight, up to polylogarithmic terms, for such centralized structures. This is complemented by an improved upper bound of O(n^{1/2}) for labeling nodes of an undirected unweighted planar graph for calculating distances.
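The flavor of a labeling scheme can be seen on the simplest possible network (a toy example, far weaker than the tree and planar schemes above): on an n-node cycle, the pair (n, index) is a label of about 2log(n) bits from which distances are computed without any access to the graph.

# Toy distance-labeling scheme on an n-node cycle: each node v gets the
# label (n, v), and the distance between two nodes is recovered from the
# two labels alone.

def label(n, v):
    return (n, v)                       # about 2 log2(n) bits per node

def distance_from_labels(label_u, label_v):
    n, u = label_u
    _, v = label_v
    d = abs(u - v)
    return min(d, n - d)                # the shorter of the two arcs

lu, lv = label(10, 2), label(10, 9)
print(distance_from_labels(lu, lv))     # 3 = min(7, 10 - 7)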

 

Roee Aharoni
11/05/2017 - 12:00

Sequence-to-sequence learning (Sutskever et al., 2014) equipped with the attention mechanism (Bahdanau et al., 2015) became the state-of-the-art approach to machine translation, automatic summarization and many other natural language processing tasks, while being much simpler than the previously dominant methods. In this talk we will overview sequence-to-sequence learning with neural networks, followed by two of our recent contributions: the Hard Attention mechanism for linear time inference, and the String-to-Tree model for syntax-aware neural machine translation.

TAMIR TASSA
20/04/2017 - 12:00

Consider a multi-layered graph, where the different layers correspond to different proprietary social networks on the same ground set of users. Suppose that the owners of the different networks (called hosts) are mutually non-trusting parties: how can they compute a centrality score for each of the users using all the layers, but without disclosing information about their private graphs? Under this setting we study a suite of three centrality measures whose algebraic structure allows performing that computation with provable security and efficiency. The first measure counts the nodes reachable from a node within a given radius. The second measure extends the first one by counting the number of paths between any two nodes. The final one is a generalization to the multi-layered graph case: not only is the number of paths counted, but the multiplicity of these paths across the different layers is also considered.
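
For reference, the first measure on a single plain (non-private) graph is just a bounded breadth-first search; the sketch below fixes that baseline functionality, while the talk's contribution is computing such scores over the union of private layers securely.

from collections import deque

def radius_centrality(adj, source, radius):
    # Count the nodes reachable from `source` within `radius` hops (BFS).
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == radius:
            continue
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return len(seen) - 1                # exclude the source itself

adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
print(radius_centrality(adj, 0, 2))     # 3: nodes 1, 2 and 3 are within 2 hops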

 

We devise a suite of multiparty protocols to compute those centrality measures, which are all provably secure in the information-theoretic sense. One typical challenge and limitation of secure multiparty computation protocols is their scalability. We tackle this problem and devise a protocol which is highly scalable and still provably secure.

 

Daniel Soudry - Columbia Univ.
30/03/2017 - 12:00

Background:
Multilayer neural networks, trained with simple variants of stochastic gradient descent (SGD), have achieved state-of-the-art performance in many areas of machine learning. It has long been a mystery why SGD works so well, rather than converging to sub-optimal local minima with high training error (and therefore high test error).

Results:
We examine a neural network with a single hidden layer, quadratic loss, and piecewise linear units, trained for a binary classification task on standard normal inputs. We prove that the volume of differentiable regions of the empirical loss containing sub-optimal differentiable local minima is exponentially vanishing in comparison with the same volume for global minima, given "mild" (polylogarithmic) over-parameterization. This suggests why SGD tends to converge to global minima in such networks.
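
A minimal numpy sketch of this setting (an illustration of the setup only; the random-hyperplane teacher that generates the binary labels is an assumption made for the demo, not taken from the paper):

import numpy as np

# One hidden ReLU layer, quadratic loss, plain SGD, standard normal inputs.
rng = np.random.default_rng(0)
d, h, lr = 10, 64, 0.01
W1 = rng.normal(scale=1 / np.sqrt(d), size=(h, d))
w2 = rng.normal(scale=1 / np.sqrt(h), size=h)
teacher = rng.normal(size=d)            # random hyperplane providing labels

for step in range(20000):
    x = rng.normal(size=d)              # standard normal input
    y = 1.0 if teacher @ x > 0 else -1.0
    a = np.maximum(W1 @ x, 0.0)         # piecewise linear (ReLU) units
    err = w2 @ a - y                    # quadratic loss: 0.5 * err**2
    grad_w2 = err * a
    grad_W1 = err * np.outer(w2 * (a > 0), x)
    w2 -= lr * grad_w2
    W1 -= lr * grad_W1

test = rng.normal(size=(1000, d))
preds = np.maximum(test @ W1.T, 0.0) @ w2
labels = np.where(test @ teacher > 0, 1.0, -1.0)
print("mean squared error:", np.mean((preds - labels) ** 2))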

Henny Admoni - CMU
23/03/2017 - 12:00

As robots become integrated into human environments, they increasingly interact directly with people. This is particularly true for assistive robots, which help people through social interactions (like tutoring) or physical interactions (like preparing a meal). Developing effective human-robot interactions in these cases requires a multidisciplinary approach involving both fundamental algorithms from robotics and insights from cognitive science. My research brings together these two areas to extend the science of human-robot interaction, with a particular focus on assistive robotics. In the process of developing cognitively-inspired algorithms for robot behavior, I seek to answer fundamental questions about human-robot interaction: what makes a robot appear intelligent? How can robots communicate their internal states to human partners to improve their ability to collaborate? Vice versa, how can robots "read" human behaviors that reveal people's goals, intentions, and difficulties, to identify where assistance is required? 

 

In this talk, I describe my vision for robots that collaborate with humans on complex tasks by leveraging natural, intuitive human behaviors. I explain how models of human attention, drawn from cognitive science, can help select robot behaviors that improve human performance on a collaborative task. I detail my work on algorithms that predict people's mental states based on their eye gaze and provide assistance in response to those predictions. And I show how breaking the seamlessness of an interaction can make robots appear smarter. Throughout the talk, I will describe how techniques and knowledge from cognitive science help us develop robot algorithms that lead to more effective interactions between people and their robot partners.

 

 

Sergei Nirenburg
19/03/2017 - 12:00

That context plays an important – possibly, crucial – role in language processing and decision making is a truism. But what counts as context is different for different tasks and different people. I will discuss several facets of context used as major sources of heuristic knowledge required of an artificial intelligent agent emulating human functioning. I will concentrate on tasks related to language understanding and generation as well as decision making in a goal-oriented task environment where the agent is a member of a team whose members can be other agents or humans.

 

I will be happy to talk to whoever would want to talk with me! 

But, of course, the main reason for my visit is being able to talk with you about potential collaborations.

 

Elad Haramaty - Harvard University
02/02/2017 - 12:00

Communication has played a growing role in our lives over the past decade, and it often becomes a bottleneck in the performance of our daily tasks. This motivates the pursuit of more efficient communication. However, efficiency is becoming more challenging from the computational aspect, due to several characteristics of modern communication.

One such characteristic is the interactivity of protocols in today's noisy communication environments. One natural approach to overcoming this challenge is called interactive coding: an efficient black-box mechanism that converts any interactive protocol performing well in a noiseless environment into one that is also resilient to errors, while preserving the efficiency of the protocol.

Another characteristic that challenges today's communications is the dynamic and interactive nature of modern protocols, which may lead to desynchronization between the communicating parties. Until now, however, almost all designed systems have assumed that both parties are perfectly synchronized and that all context is shared perfectly by the communicating agents; any violation of these assumptions leads to a breakdown in communication.

This talk will address both of the aforementioned challenges. The first part of the talk will be devoted to interactive coding, and in the second part we will focus on designing protocols under uncertainty about the shared context.

Merav Parter - MIT
23/01/2017 - 12:00

I will describe two branches of my work related to algorithms for distributed networks. 

The main focus will be devoted to Fault-Tolerant (FT) network structures.

The undisrupted operation of structures and services is a crucial requirement in modern-day communication networks. As the vertices and edges of the network may occasionally fail or malfunction, it is desirable to make those structures robust against failures. FT network structures are low-cost, highly resilient structures, constructed on top of a given network, that satisfy certain desirable performance requirements concerning, e.g., connectivity, distance or capacity. We will overview some results on fault-tolerant graph structures, with a special focus on FT Breadth-First Search.

 

The second part of the talk will discuss distributed models and algorithms for large-scale networks. Towards the end, we will see some connections between distributed computing and other areas such as EE and Biology.

 

 

EDITH ELKIND
22/01/2017 - 13:30

Arrow's famous impossibility theorem (1951) states that there is no perfect voting rule: for three or more candidates, no voting rule can satisfy a small set of very appealing axioms. However, this is no longer the case if we assume that voters' preferences satisfy certain restrictions, such as being single-peaked or single-crossing. In this talk, we discuss single-peaked and single-crossing elections, as well as some other closely related restricted preference domains, and provide an overview of recent algorithmic results for these domains.

Inbal TALGAM-COHEN
19/01/2017 - 12:00

Algorithms and the Internet are revolutionizing how society allocates its resources. Examples range from wireless spectrum and electricity to online advertising and carpooling opportunities. A fundamental question is how to allocate such resources efficiently by designing robust computational markets.

In this talk I will demonstrate recent progress on this question by considering a problem crucial for major industry players like Google: how to design revenue-maximizing allocation mechanisms. Most existing designs hinge on “getting the price right” – selling goods to buyers at prices low enough to encourage a sale, but high enough to garner non-trivial revenue. This approach is difficult to implement when the seller has little or no a priori information about buyers’ valuations, or when the setting is sufficiently complex, as in the case of markets with heterogeneous goods.

I will show a robust approach to designing auctions for revenue, which “lets the market do the work” by allowing prices to emerge from enhanced competition for scarce goods. This work provides guidelines for a seller in choosing among data acquisition and sophisticated pricing, and investment in drawing additional buyers.

 

Bio: Inbal Talgam-Cohen is a Marie Curie postdoctoral researcher at HUJI and a visiting postdoctoral researcher at TAU. She holds a PhD from Stanford supervised by Tim Roughgarden, an MSc from Weizmann and a BSc from TAU in computer science, as well as an LLB in law. Her research is in algorithmic game theory, including computational and data aspects of market design and applications to Internet economics. Her awards include the 2015 Best Doctoral Dissertation Award of ACM SIGecom, the Stanford Interdisciplinary Graduate Fellowship, and the Best Student Paper Award at EC’15.

Omer Lev - University of Toronto
05/01/2017 - 12:00

In the past several years, crowd activities have expanded dramatically in both their scope and their size: more diverse activities are involving online crowds, and the number of people (and the amount of money) involved in such activities is growing fast. I will focus in this talk on two topics, which share as a central feature the partitioning of the crowd into sub-groups.

 

In peer-selection, agents try to select the top k agents among themselves. We present several algorithms that try to make this process impartial, i.e., agents will not benefit from lying about their peers, while still choosing "good" agents. We will show algorithms that provably reach impartiality using the division of the agents into subgroups, and then show a set of simulations that give us further insight into which techniques work better in scenarios resembling real-world settings.

 

From there, we will move to district selection: decision making in settings where each partition (a company sub-unit, an electoral district, etc.) reaches a decision, and the decisions from all partitions are amalgamated into a final choice. We explore how this affects representability, and in the case of geographical manipulations ("gerrymandering"), we present both computational hardness results and experiments on real-world data.

Meirav Zehavi
29/12/2016 - 12:00

Parameterized Complexity was introduced in the 1990s by Downey and Fellows. Nowadays, various aspects of this field are ubiquitous, vibrant areas of research. In a nutshell, Parameterized Complexity aims to reduce the running times of algorithms for NP-hard problems by confining the combinatorial explosion to a parameter k. More precisely, a problem is fixed-parameter tractable (FPT) with respect to a parameter k if it can be solved by an algorithm, called a parameterized algorithm, whose time complexity is bounded by f(k)n^{O(1)} for some function f that depends only on k. By careful examination of real datasets, one can choose a parameter k that will often be significantly smaller than the input size n. Indeed, parameterized problems arise in a variety of areas outside of pure theoretical computer science and math.
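
As a standard illustration of this definition (the textbook bounded-search-tree algorithm, not a result from the talk), Vertex Cover can be decided in O(2^k * (n + m)) time, confining the exponential blow-up to the parameter k:

# Does the graph have a vertex cover of size at most k? Branch on an
# uncovered edge (u, v): any cover must contain u or v, so the search
# tree has at most 2^k leaves, each processed in linear time.

def has_vertex_cover(edges, k):
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    rest_u = [e for e in edges if u not in e]   # take u into the cover
    rest_v = [e for e in edges if v not in e]   # take v into the cover
    return has_vertex_cover(rest_u, k - 1) or has_vertex_cover(rest_v, k - 1)

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
print(has_vertex_cover(edges, 2))   # True: {1, 3} covers every edge
print(has_vertex_cover(edges, 1))   # False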

 

In this talk, I give a brief overview of some of my recent contributions in Parameterized Complexity. In the first part of my talk, I present results in core research areas such as Kernelization and Parameterized Algorithms. Here, problems are primarily concerned with graphs. However, in practice, inputs are not necessarily graphs. In the second part of my talk, I address this issue. Here, I describe my foray into input domains that consist of polygons, preferences and matrices.

 

Ori Lahav
22/12/2016 - 12:00

With the proliferation of multi-core processors, shared-memory concurrent programming has become increasingly important. Nevertheless, despite decades of research, there are still no adequate answers to the following fundamental questions:

 

1. What is the right semantics for concurrent programs in higher-level languages?

 

2. Which reasoning principles apply to realistic shared-memory concurrency?

 

Concerning the first question, the challenge lies in simultaneously allowing efficient implementation on modern hardware with compiler optimizations, while exposing a well-behaved programming model. Both the Java and C/C++ standards tried to address this question, but their solutions were discovered to be flawed: they either allow spurious "out-of-thin-air" program behaviors or incur a prohibitive runtime cost. In the main part of the talk, I will present a solution to this problem based on a novel idea of promises, which supports efficient implementations and useful programming guarantees.

 

As for the second question, I will outline my ongoing efforts to identify (i) logical laws for deductive reasoning about concurrent programs; and (ii) programming disciplines that enable the application of well-established verification techniques.

 

Short Bio:

 

Ori Lahav is a postdoctoral researcher at Max Planck Institute for Software Systems (MPI-SWS). His research interests include programming languages, formal methods and verification, semantics and logic. Previously, Ori was a postdoctoral researcher at the Programming Languages Group at Tel Aviv University. He obtained his Ph.D. from Tel Aviv University in 2014 under the supervision of Arnon Avron. Ori received a Dan David Prize scholarship for young researchers in 2014, the Kleene award for the best student paper at LICS 2013, and a Wolf Prize for excellent graduate students in 2012.

Karen Livescu
20/12/2016 - 12:00

This talk covers two closely related lines of work.  

For a number of speech tasks, it can be useful to represent speech segments of arbitrary length by fixed-dimensional vectors, or embeddings.  In particular, vectors representing word segments -- acoustic word embeddings -- can be used in query-by-example tasks, example-based speech recognition, or spoken term discovery.  *Textual* word embeddings have been common in natural language processing for a number of years now; the acoustic analogue is only recently starting to be explored.  This talk will present our work on acoustic word embeddings, including a variety of models in unsupervised, weakly supervised, and supervised settings.

 


 

Ofra Amir - Harvard University
15/12/2016 - 12:00

Teamwork is a core human activity, essential to progress in many areas. A vast body of research in the social sciences and in computer science has studied teamwork and developed tools to support it. Although the technologies resulting from this work have enabled teams to work together more effectively in many settings, they have proved inadequate for supporting the coordination of distributed teams that operate in a loosely-coupled manner. In this talk, I will present three integrated research efforts towards developing intelligent systems that reduce coordination overhead in such teams: an in-depth formative study of complex healthcare teams, the design of new computational methods for personalizing the information shared with team members about collaborators' activities, and an evaluation of those methods in a realistic teamwork setting, demonstrating that personalized information sharing resulted in higher productivity and lower perceived workload of team members.

Privacy-Preserving Computations in the Digital Era - Speaker: Prof. Yehuda Lindell
08/12/2016 - 12:00

We are honored by the presence of

Mr. Roger Grass,

donor of the Computer Science building.

Privacy-Preserving Computations in the Digital Era

Speaker: Prof. Yehuda Lindell

Natural Language Processing with Neural Networks

Doctoral student Roee Aharoni

Automated Agents that Interact Proficiently with People

Prof. Sarit Kraus

 


Amir Herzberg - BIU
01/12/2016 - 12:00

There is a lot of concern, among legislators, experts and the general public, about the loss of privacy. In the computer science research community, extensive efforts are dedicated to developing privacy-enhancing technologies, mostly based on cryptography; many of these efforts are designed to provide privacy even against (cloud) providers.

 

However, the reality is that users explicitly agree to share their private data with a few providers (Google, Facebook, ...), although these providers explicitly state that they may make commercial use of this private data, in spite of numerous incidents in which such data was exposed to unauthorized third parties, and in spite of papers showing design failures that allow such unauthorized disclosure of private data. This is known as the privacy paradox.

 

We present a simple economic analysis of privacy, explaining the privacy paradox and showing a potential market failure that harms social welfare. We discuss possible solutions, including regulatory and cryptographic controls.

 

Hemi Leibowitz - Bar-Ilan University
27/11/2016 - 18:00

 

 

Recently, many popular Instant-Messaging (IM) applications announced support for end-to-end encryption, claiming confidentiality even against a rogue operator. Is this, finally, a positive answer to the basic challenge of usable security presented in the seminal paper ‘Why Johnny Can’t Encrypt’?

 

Our work evaluates the implementation of end-to-end encryption in popular IM applications: WhatsApp, Viber, Telegram, and Signal, against established usable-security principles, and in quantitative and qualitative usability experiments. Unfortunately, although participants expressed interest in confidentiality, even against a rogue operator, our results show that current mechanisms are impractical to use, leaving users with only the illusion of security.

 

Hope is not lost. We conclude with directions which may allow usable End-to-End encryption for IM applications.

Veronika Loitzenbauer
24/11/2016 - 12:00

Connectivity-related concepts are of fundamental interest in graph theory. The area has received extensive attention over four decades, but many problems remain unsettled, especially for directed graphs. A strongly connected directed graph is 2-edge-connected (resp., 2-vertex-connected) if the removal of any edge (resp., vertex) leaves the graph strongly connected. In this talk we present improved algorithms for computing the maximal 2-edge- and 2-vertex-connected induced subgraphs of a given directed graph. These problems were first studied more than 35 years ago, with O(mn)-time algorithms for graphs with m edges and n vertices being known since the late 1980s. In contrast, the same problems for undirected graphs are known to be solvable in linear time. For the directed case, we showed 1) O(n^2)-time algorithms in joint work with Monika Henzinger and Sebastian Krinninger (ICALP'15) and 2) O(m^1.5)-time algorithms in joint work with Shiri Chechik, Thomas D. Hansen, Giuseppe F. Italiano, and Nikos Parotsidis (SODA'17).
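
For reference, the definition itself can be checked naively by deleting each edge in turn and re-testing strong connectivity, at a cost of O(m(n + m)); the sketch below (illustrative only) is the kind of brute-force baseline the algorithms above dramatically improve upon.

# Check 2-edge-connectivity of a strongly connected digraph by brute force:
# the graph must stay strongly connected after removing any single edge.

def strongly_connected(n, edges):
    def covers_all(adj):                 # DFS from node 0
        seen, stack = {0}, [0]
        while stack:
            for w in adj.get(stack.pop(), ()):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n
    fwd, bwd = {}, {}
    for u, v in edges:
        fwd.setdefault(u, []).append(v)
        bwd.setdefault(v, []).append(u)
    return covers_all(fwd) and covers_all(bwd)

def two_edge_connected(n, edges):
    return strongly_connected(n, edges) and all(
        strongly_connected(n, edges[:i] + edges[i + 1:])
        for i in range(len(edges)))

cycle = [(0, 1), (1, 2), (2, 0)]
print(two_edge_connected(3, cycle))                             # False
print(two_edge_connected(3, cycle + [(1, 0), (2, 1), (0, 2)]))  # True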

 

Shachar Itzhaky - MIT
17/11/2016 - 12:00

Everyone wants to program with “high-level concepts” rather than meddle with the fine details of the implementation, such as pointers, network packets, and asynchronous callbacks. This is usually achieved by introducing layers of abstraction, but every layer incurs some overhead, and when layers accumulate, this overhead becomes significant and sometimes prohibitive. Optimizing the program often requires breaking abstractions, which leads to suboptimal design, higher maintenance costs, and subtle hard-to-trace bugs.

I will present two recent projects that attempt to address this difficulty. TransCal is a rewrite-based interactive synthesis engine that helps a developer mechanically transform a high-level program into a tractable, semantically equivalent version. TransCal attempts to encode and reuse practices that programmers usually reason about informally. The programs targeted are general functional-style programs with operations on collections.

Bellmania is a specialized tool for accelerating dynamic-programming algorithms by generating recursive divide-and-conquer implementations of them. Recursive divide-and-conquer is an algorithmic technique that was developed to obtain better memory locality properties. Bellmania includes a high-level language for specifying dynamic programming algorithms and a calculus that facilitates gradual transformation of these specifications into efficient implementations. These transformations formalize the divide-and-conquer technique; a visualization interface helps users to interactively guide the process, while an SMT-based back-end verifies each step and takes care of low-level reasoning required for parallelism.

 

Dan Garber - Toyota Technological Institute at Chicago
10/11/2016 - 12:00

Projected gradient descent (PGD), and its close variants, are often considered the methods of choice for solving a large variety of machine learning optimization problems, including empirical risk minimization, statistical learning, and online convex optimization. This is not surprising, since PGD is often optimal in a very appealing information-theoretic sense. However, for many problems PGD is infeasible both in theory and practice, since each step requires computing an orthogonal projection onto the feasible set. In many important cases, such as when the feasible set is a non-trivial polytope or a convex surrogate for a low-rank structure, computing the projection is computationally inefficient in high-dimensional settings. An alternative is the conditional gradient (CG) method, a.k.a. the Frank-Wolfe algorithm, which replaces the expensive projection step with a linear optimization step over the feasible set. Indeed, in many problems of interest the linear optimization step admits much more efficient algorithms than the projection step, which is the reason for the substantial renewed interest in this method in the past decade. On the downside, the convergence rates of the CG method often fall behind those of PGD and its variants.

In this talk I will survey an ongoing effort to design CG variants that on one hand enjoy the cheap iteration complexity of the original method, and on the other hand converge provably faster, and are applicable to a wider variety of machine learning settings. In particular I will focus on the cases in which the feasible set is either a polytope or a convex surrogate for low-rank matrices. Results will be demonstrated on applications including: LASSO, video co-localization, optical character recognition, matrix completion, and multi-class classification.
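
A toy sketch contrasting the two oracles (the problem and step size below are standard textbook choices, not examples from the talk): minimizing a least-squares objective over the probability simplex, where the Frank-Wolfe linear step is a single argmin of the gradient, whereas PGD would need a full projection onto the simplex at every iteration.

import numpy as np

# Frank-Wolfe for min_x 0.5 * ||Ax - b||^2 over the probability simplex.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
b = rng.normal(size=30)

x = np.ones(10) / 10                      # start at the simplex center
for t in range(200):
    grad = A.T @ (A @ x - b)
    s = np.zeros(10)
    s[np.argmin(grad)] = 1.0              # linear oracle: best simplex vertex
    gamma = 2.0 / (t + 2)                 # standard step size, O(1/t) rate
    x = (1 - gamma) * x + gamma * s       # convex combination stays feasible

print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
print("feasible:", np.isclose(x.sum(), 1.0) and bool((x >= 0).all()))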

The colloquium lecture on Thursday, 23.6.16, is cancelled.
23/06/2016 - 12:00

The colloquium lecture on Thursday, 23.6.16, is cancelled.


Or Zuk - the Hebrew University
16/06/2016 - 12:00

 

We propose a new measurement scheme for low-rank matrix recovery, where each measurement is a linear combination of elements in one row or one column of the unknown matrix.

This setting arises naturally in applications, but current algorithms, developed for standard matrix recovery problems, do not perform well in this case; hence the need for new algorithms and theory.

We propose a simple algorithm for the problem based on the Singular Value Decomposition (SVD) and least squares (LS), which we term SVLS. We prove favourable theoretical guarantees for our algorithm in the noiseless and noisy cases, compared to standard matrix completion measurement schemes and algorithms. Simulations show improved speed and accuracy, including for the problem of unknown rank estimation.
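
A hedged illustration of why row-and-column observations can suffice (a CUR-style identity for the exact rank-r case; this is not the SVLS algorithm itself, which also handles noise and more general measurements): if X has rank r, C holds r full columns, R holds r full rows, and W is their intersection, then X = C pinv(W) R.

import numpy as np

# Recover a rank-r matrix exactly from r full rows and r full columns.
rng = np.random.default_rng(0)
n, r = 50, 3
X = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # rank-r ground truth

rows, cols = [0, 1, 2], [0, 1, 2]        # indices of observed rows/columns
C = X[:, cols]                           # r observed columns
R = X[rows, :]                           # r observed rows
W = X[np.ix_(rows, cols)]                # their r-by-r intersection

X_hat = C @ np.linalg.pinv(W) @ R        # CUR-style reconstruction
print("recovery error:", np.linalg.norm(X_hat - X))     # ~1e-12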

 

Our results suggest that the proposed row-and-column measurement scheme, together with our recovery algorithm, may provide a powerful framework for affine matrix recovery.

 

Time permitting, I will also describe progress on other data analysis projects for sparse and structured-sparse data, including a new group-sparse clustering algorithm.

Eyal Niv - Pitango Venture Capital
09/06/2016 - 12:00

TBA

Lior Wolf - Tel Aviv University
02/06/2016 - 12:00

Transfer learning plays a major role in the recent success of deep face recognition methods. Deep networks are trained to solve the multiclass classification problem using a cross entropy loss and the learned representations are then transferred to a different domain. Moreover, the task changes post-transfer to face verification (same/not-same). In the talk, I will point to a few research questions, including: What is the ideal source dataset? How to train in the target domain? How to estimate the certainty of the identification?

 

One of the observations we make is that the transferred representations support only a few modes of separation and much of their dimensionality is underutilized. To alleviate this, we suggest to learn, in the source domain, multiple orthogonal classifiers. We prove that this leads to a reduced rank representation, which, however, supports more discriminative directions. For example, we obtain the 2nd best result on LFW for a single network. This is achieved using a training set that is a few orders of magnitude smaller than that of the leading literature network, and using a very compact representation of only 51 dimensions.

 

Given time, I will describe recent achievements in other computer vision domains: optical flow, action recognition, image annotation, and more.

 

Tsvi Kopelowitz - University of Michigan
26/05/2016 - 12:00

Over the past 30 years numerous algorithms have been designed for symmetry breaking problems in the distributed LOCAL model, such as maximal matching, MIS, vertex coloring, and edge-coloring.  For most problems the best randomized algorithm is at least exponentially faster than the best deterministic algorithm.  In this talk we show that these exponential gaps are necessary and establish connections between the deterministic and randomized complexities in the LOCAL model.  Each of our results has a very compelling take-away message:

1. Fast Delta-coloring of trees requires random bits.

2. Randomized lower bounds imply higher deterministic lower bounds.

3. Deterministic lower bounds imply randomized lower bounds.

Joint work with Yi-Jun Chang and Seth Pettie.

 

Matthieu Mayran - Google
14/04/2016 - 12:00

We will give an overview of technologies built at Google for Google, and how we externalize some of them to the world, either as a service, through open source, or both. We will talk about several products, including:

  • Google Cloud Platform: A review of the products that are part of it and when it makes sense to use the Cloud
  • Big Data tools and services: Best practices and demo showing services that make big data manageable
  • Machine Learning: TensorFlow open-sourcing and CloudML, with a preview of the Vision API and other libraries.

The lecture will be one hour to an hour and a half and will be held in English.

The presenter will be Matthieu Mayran, Cloud Solutions Architect.

 

Oded Margalit - IBM
07/04/2016 - 12:00

The research division of IBM has published a monthly challenge, "Ponder This", since May 1998. In the talk we will walk through some of these challenges, give some examples of the solvers and their solutions, and share some insights from behind the scenes of authoring Ponder This.

Simina Brânzei - The Hebrew University
22/03/2016 - 14:00

I will talk about fair division for strategic agents, starting with the cake cutting problem, which models the division of a heterogeneous divisible good among players with different preferences. Here I will present an impossibility result (a dictatorship theorem), as well as an algorithmic framework for strategic agents, in which fair outcomes can be implemented in the equilibria of protocols.

If time permits, I will also discuss the problem of allocating multiple divisible goods under the fairness solution concept of Nash social welfare (NSW). When the preferences are known, this solution is achieved in the Fisher market equilibrium outcomes. With strategic agents, however, the strong fairness guarantees of market equilibria can be destroyed. We find that for additive valuations, the Fisher market approximates the NSW within a factor of 2, but for Leontief utilities the approximation degrades linearly with the number of players. Surprisingly, a mechanism known as Trading Post not only retains the constant 2-approximation of Fisher for additive utilities, but approximately implements the NSW objective for Leontief valuations: its price of anarchy is 1 + epsilon for every epsilon > 0.
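
For reference: the Nash social welfare of an allocation x among n players is the geometric mean of the players' utilities, NSW(x) = (u_1(x_1) * u_2(x_2) * ... * u_n(x_n))^{1/n}, and the price of anarchy quoted above is the worst-case ratio between the optimal NSW and the NSW attained in an equilibrium of the mechanism.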

This is joint work with Peter Bro Miltersen, Ioannis Caragiannis, David Kurokawa, Ariel Procaccia, Vasilis Gkatzelis, and Ruta Mehta.

 

Yariv Aizenbud - Tel-Aviv University
17/03/2016 - 12:00

Matrix decompositions, and especially the SVD, are very important tools in data analysis. When big data is processed, the computation of matrix decompositions becomes expensive and impractical. In recent years, several algorithms that approximate matrix decompositions have been developed. These algorithms are based on the metric-preservation properties of random projections onto linear subspaces. We present a randomized method, based on a sparse matrix distribution, that achieves a fast approximation with bounded error for low-rank matrix decomposition.
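
For context, here is a generic randomized low-rank approximation in the Halko-Martinsson-Tropp style (a dense-Gaussian illustration of the random-projection idea; the speaker's sparse-projection method is different and is not reproduced here):

import numpy as np

# Randomized SVD sketch: compress A with a random test matrix, compute an
# orthonormal basis for its range, and recover an approximate rank-k SVD.
rng = np.random.default_rng(0)
m, n, k = 2000, 500, 10
A = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))   # low-rank test input

Omega = rng.normal(size=(n, k + 5))      # random projection with oversampling
Q, _ = np.linalg.qr(A @ Omega)           # orthonormal basis for range(A @ Omega)
B = Q.T @ A                              # small (k + 5) x n matrix
Ub, S, Vt = np.linalg.svd(B, full_matrices=False)
U = Q @ Ub                               # lift back: A ~ U @ diag(S) @ Vt

approx = (U[:, :k] * S[:k]) @ Vt[:k]
print("relative error:", np.linalg.norm(approx - A) / np.linalg.norm(A))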

Meir Kalech - Ben-Gurion University
10/03/2016 - 12:00

In this talk I will present our recent successful integration of various techniques from the Artificial Intelligence (AI) literature into the software debugging and testing process. First, we show how data that is already stored by industry-standard software engineering tools can be used to learn a fault prediction model that accurately predicts which software components are likely to contain bugs. This allows focusing testing efforts on such error-prone components. Then, we show how this learned fault prediction model can be used to augment existing software diagnosis algorithms, providing a better understanding of which software components need to be replaced to correct an observed bug. Moreover, for the case where further tests are needed to identify the faulty software component, we present a test-planning algorithm based on Markov Decision Processes (MDPs).

Importantly, the presented approach for considering both a fault prediction model, learned from past failures, and a diagnosis algorithm that is model-based, is general, and can be applied to other fields, beyond software troubleshooting.

Amos Fiat
03/03/2016 - 12:00

I will describe how dynamic pricing can be used to solve parking-related problems. I will also discuss dynamic pricing for k-server problems, task systems, unit-demand bidders, and other problems.

David Zarrouk - Ben-Gurion University
21/01/2016 - 12:00

Miniature crawling robots, on the order of centimeter scale and below, have a variety of applications, including medical procedures, search and rescue operations, maintenance, reconnaissance, environmental monitoring, assisted agriculture, cave exploration, mapping, studying biological organisms, security, defense, and the exploration of hazardous environments. Their small size allows them to navigate in remote areas otherwise inaccessible to wheeled or larger robots, such as collapsed buildings, caves and biological vessels, while their low cost makes them disposable and allows their use in large quantities and swarm formations. As miniature crawling robots at this scale are under-actuated, the main challenges are achieving stability at high velocities over varying terrain, reducing the cost of transport, overcoming obstacles, controlling jumping and landing, adhering quickly to different surfaces, transitioning between horizontal and vertical motion, and climbing.

This talk will address some of the modeling and actuation challenges of crawling robots outdoors and inside biological vessels, while taking into account contact compliance and sliding.

 

Biography: David Zarrouk is currently a Senior Lecturer in the ME department of BGU. He received his M.S. (2007) and Ph.D. (2011) degrees from the faculty of mechanical engineering at the Technion. Between Aug. 2011 and Oct. 2013, he was a postdoctoral scholar at the Biomimetics and Millisystems Lab in the EECS department of U.C. Berkeley, working on miniature crawling robots. His research interests are in the fields of biomimetics, millisystems, miniature robotics, flexible and slippery interactions, underactuated mechanisms and theoretical kinematics. He received many prizes for excellence in research and teaching, including a Fulbright postdoctoral fellowship, a Fulbright-Ilan Ramon postdoctoral fellowship, the Hershel Rich Innovation award, the Aharon and Ovadia Barazani prize for an excellent Ph.D. thesis, and the Alfred and Yehuda Weisman prize for consistent excellence in teaching.

Ofra Amir - Harvard University
14/01/2016 - 12:00

People collaborate in carrying out such complex activities as treating patients, co-authoring documents and developing software. Technologies such as Google Drive, Dropbox and GitHub enable teams to share work artifacts remotely and asynchronously. The coordination of team members’ activities remains a challenge, however, because these technologies do not have capabilities for focusing people’s attention on the actions taken by others that matter most to their own activities. In this talk, I will present our work towards developing intelligent systems for supporting information sharing in distributed teams. Based on a study of complex health care teams, we formalize the problem of information sharing in loosely-coupled extended-duration teamwork. We develop a new representation, Mutual Influence Potential Networks, that implicitly learns collaboration patterns and dependencies among activities from team members’ interactions, and an algorithm that uses this representation to determine the information that is most relevant to each team member. Analysis of Wikipedia revision-history data and an evaluation in a simulation environment demonstrate the ability of the proposed approach to identify relevant information to share with team members.

 

Joint work with Barbara Grosz and Krzysztof Gajos.

Dana Fisman - University of Pennsylvania
07/01/2016 - 12:00

System synthesis refers to the task of automatically generating an executable component of a system (e.g. a software or hardware component) from a specification of the component's behavior. The traditional formalization of the problem assumes the specification is given by a logical formalism. Recent trends to synthesis relax the problem definition and consider a variety of inputs including logical requirements, incomplete programs, and examples behaviors. In this talk I will describe some of the challenges on the road to usable synthesis, a variety of current approaches for coping with them, and some success stories.

 

Short bio:

 

Dana Fisman is a research scientist at the University of Pennsylvania, the Associate Director of the NSF expedition ExCAPE on system synthesis, and a visiting fellow at Yale University. She did her PhD at the Weizmann Institute under the supervision of Amir Pnueli, and worked for many years in industry, at IBM Haifa Research Labs and at Synopsys Inc. Dana’s research interests are in the area of formal methods in system design. She is mostly known for her work on PSL, the IEEE standard for property specification language, for which she received numerous awards from IEEE, IBM and Synopsys.

 

Ofer Meshi - Toyota Technological Institute at Chicago
05/01/2016 - 11:00

In recent years, machine learning has emerged as an important and influential discipline in computer science and engineering. Modern applications of machine learning involve reasoning about complex objects like images, videos, and large documents. Treatment of such high-dimensional data requires the development of new tools, since traditional methods in machine learning no longer apply. In this talk I will present two recent works in this direction. The first work introduces a family of novel and efficient methods for inference and learning in structured output spaces. This framework is based on applying principles from convex optimization while exploiting the special structure of these problems to obtain efficient algorithms. The second work studies the success of a certain type of approximate inference methods based on linear programming relaxations. In particular, it has been observed that such relaxations are often tight in real applications, and I will present a theoretical explanation for this interesting phenomenon.

 

 

Bio:

 

Ofer Meshi is a Research Assistant Professor at the Toyota Technological Institute at Chicago. Prior to that he obtained his Ph.D. and M.Sc. in Computer Science from the Hebrew University of Jerusalem. His B.Sc. in Computer Science and Philosophy is from Tel Aviv University. Ofer’s research focuses on machine learning, with an emphasis on efficient optimization methods for inference and learning with high-dimensional structured outputs. During his doctoral studies Ofer was a recipient of Google's European Fellowship in Machine Learning.

 

Oren Tsur - Harvard's School for Engineering and Applied Science (SEAS), Network Science Institute at Northeastern University and Harvard's Institute for Quantitative Social Science (IQSS).
31/12/2015 - 12:00

Information diffusion is the process in which nuggets of information spread in a network (typically a social network). This is a complex process that depends on the network topology, the social structures and the information itself (content/language). In this talk I will discuss information diffusion from these different yet complementary perspectives. In the first part of the talk I will focus on the features of the diffusing information. I will present a gradient boosted trees algorithm modified for learning user preferences for Twitter hashtags. In the second part of my talk I will focus on the network structure. I will use exponential random graph models (ERGMs) to learn which latent factors contribute to network formation, and I will show how network structure and social roles contribute to information spread. Specifically, I will present promising results obtained on political networks in the American political system and an analysis of the partisan use of political hashtags.

In both parts I put emphasis on interpretable models that go beyond accurate prediction and facilitate a better understanding of complex social processes.

Bio sketch:

Oren is a postdoctoral fellow at Harvard's School for Engineering and Applied Science (SEAS), jointly with the Network Science Institute at Northeastern University, and is also affiliated with Harvard's Institute for Quantitative Social Science (IQSS). He received his Ph.D. in Computer Science from the Hebrew University in 2013. In his work he combines natural language processing and network analysis in order to model and predict complex social dynamics. His work has been published in both the NLP and the web/data-science communities. Oren co-organized the workshop on NLP and Social Dynamics at ACL 2014, is co-organizing the EMNLP 2016 workshop on NLP and Computational Social Science, and will give the tutorial on Understanding Offline Political Systems by Mining Online Political Data at WSDM 2016. His research on sarcasm detection (with Dmitry Davidov and Ari Rappoport) was listed in a Time Magazine special issue as one of the 50 best inventions of the year (2010).

    Homepage: http://people.seas.harvard.edu/~orentsur

 

Natalia Silberstein - Technion
29/12/2015 - 11:00

 

In distributed storage systems (DSS) data is stored over a large number of storage nodes in such a way that a user can always retrieve the stored data, even if some storage nodes fail. To achieve such resilience against node failures, DSS introduce data redundancy based on different coding techniques. When a single node fails, the system performs node repair, i.e., reconstructs the data stored in the failed node in order to maintain the required level of redundancy.
In this talk we introduce a new approach to constructing different families of optimal codes for DSS via rank-metric codes. In the first part of the talk we deal with two important goals that guide the design of codes for DSS: reducing the repair bandwidth, i.e., the amount of data downloaded from the nodes during node repair, and achieving locality, i.e., reducing the number of nodes participating in a node repair process. First, we present a construction of a new family of optimal locally repairable codes, which enable repair of a failed node using only a few other nodes. Second, we introduce a new class of codes which minimize the repair bandwidth for given locality parameters.
In the second part of the talk we consider the problem of making DSS resilient to adversarial attacks. In such attacks, an adversary may gain read-only access to information stored in DSS (passive eavesdropper) or may modify it (active adversary). We present a novel construction of codes for DSS which provide perfect secrecy, i.e., codes which allow to store data without revealing any information to an eavesdropper. This construction is also based on rank-metric codes.
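
A toy illustration of node repair (simple XOR parity, far simpler than the rank-metric constructions of the talk, with repair locality equal to the full number of surviving nodes, which is exactly the cost the codes above reduce):

import numpy as np

# k data nodes plus one parity node storing their XOR: any single failed
# node can be rebuilt by XOR-ing the k surviving nodes.
rng = np.random.default_rng(0)
k = 4
data_nodes = [rng.integers(0, 256, size=8, dtype=np.uint8) for _ in range(k)]
parity = data_nodes[0].copy()
for node in data_nodes[1:]:
    parity ^= node                        # parity = XOR of all data nodes

failed = 2                                # suppose node 2 fails
survivors = [data_nodes[i] for i in range(k) if i != failed] + [parity]
rebuilt = survivors[0].copy()
for node in survivors[1:]:
    rebuilt ^= node                       # XOR of survivors = the lost node

assert np.array_equal(rebuilt, data_nodes[failed])
print("node", failed, "repaired from", k, "surviving nodes")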

Bio:
Natalia Silberstein received her B.A. degree in Computer Science (cum laude), B.A. degree in  Mathematics (cum laude), M.Sc. degree in Applied Mathematics (summa cum laude), and Ph.D. degree in Computer Science from the Technion, in 2004, 2007, and 2011, respectively. She then spent two years as a postdoctoral fellow in the Wireless Networking and Communications Group, at the Department of Electrical and Computer Engineering at the University of Texas at Austin, USA. She is currently a postdoctoral fellow at the Computer Science Department at the Technion.
Her research focuses on coding for distributed storage systems, network coding, algebraic error-correcting codes, applications of combinatorial designs and graphs to coding theory.