STATISTICAL LEARNING THEORY AND GAME THEORY FOR CROWDSOURCING
Since the second half of my PhD, I have been working in the areas of machine learning and game theory, with a particular focus on applications to crowdsourcing. Here is a brief description of some of my work.
Statistical inference from crowdsourced data
Data in the form of pairwise comparisons arises in many applications. Pairwise comparisons are free from several biases and are much faster and easier to make than numeric scores. Given noisy comparisons between various pairs of items, the question is how to draw meaningful inferences from this data. While the prior literature mostly focuses on quite restrictive "parametric" models, in my work I instead consider models that go beyond these parametric notions and are much more flexible. I develop estimation algorithms and fundamental theoretical guarantees for these models.
References:
 "Feeling the Bern: Adaptive Estimators for Bernoulli Probabilities of Pairwise Comparisons," Nihar B. Shah, Sivaraman Balakrishnan, and Martin J. Wainwright.
 "Simple, Robust and Optimal Ranking from Pairwise Comparisons," Nihar B. Shah and Martin J. Wainwright.
 "Stochastically Transitive Models for Pairwise Comparisons: Statistical and Computational Issues," Nihar B. Shah, Sivaraman Balakrishnan, Adityanand Guntuboyina, and Martin J. Wainwright.
 "Estimation from Pairwise Comparisons: Sharp Minimax Bounds with Topology Dependence," Nihar B. Shah, Sivaraman Balakrishnan, Joseph Bradley, Abhay Parekh, Kannan Ramchandran, and Martin J. Wainwright, May 2015.
 "On the Impossibility of Convex Inference in Human Computation," Nihar B. Shah and Dengyong Zhou, Jan. 2015.
 "Regularized Minimax Conditional Entropy for Crowdsourcing," Dengyong Zhou, Qiang Liu, John Platt, Christopher Meek, and Nihar B. Shah, Dec. 2014.
 "A Case for Ordinal Peer-evaluation in MOOCs," Nihar B. Shah, Joseph Bradley, Abhay Parekh, Martin J. Wainwright, and Kannan Ramchandran, Dec. 2013.
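One simple non-parametric idea studied in this line of work (in the spirit of the counting algorithm analyzed in the "Simple, Robust and Optimal Ranking" paper) is to rank items by the total number of comparisons they win, with no parametric assumption on how the comparison probabilities arise. A minimal sketch, with illustrative data:

```python
import numpy as np

def count_rank(wins):
    """Rank items by total pairwise-comparison wins.
    wins[i, j] = number of times item i beat item j."""
    scores = wins.sum(axis=1)        # total wins per item
    return list(np.argsort(-scores)) # best item first

# Toy data for 3 items: item 0 wins most often overall.
wins = np.array([[0, 8, 9],
                 [2, 0, 6],
                 [1, 4, 0]])
order = count_rank(wins)  # -> [0, 1, 2]
```

This counting rule is appealing precisely because it requires no model of the underlying comparison probabilities; the papers above establish when such simple estimators are statistically optimal.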


"Unique" mechanisms for obtaining high-quality data from crowdsourcing
A major problem with data obtained from crowdsourcing is that it is extremely noisy. My work designs novel data collection mechanisms that lead to the collection of higher-quality data. The proposed mechanisms are rooted in fundamental theory: we show that our mechanisms are unique in that they are the only mechanisms that can satisfy a natural 'no-free-lunch' axiom. Our mechanisms have a simple and interesting "multiplicative" form.
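As a toy illustration of what a "multiplicative" payment rule can look like, here is a minimal sketch (the function name, constants, and the 'skip' option are illustrative assumptions for this example, not the exact mechanism from my papers): a worker may skip questions, and the payment is determined from a few gold-standard questions.

```python
def payment(answers, gold, base=0.1, factor=2.0):
    """Toy multiplicative payment rule (illustrative only).
    The payment multiplies by `factor` for every confidently answered
    gold-standard question that is correct, and drops to zero if any
    confident answer on a gold-standard question is wrong."""
    pay = base
    for q, truth in gold.items():
        a = answers.get(q, "skip")
        if a == "skip":
            continue        # skipping neither rewards nor punishes
        if a == truth:
            pay *= factor   # reward each correct confident answer
        else:
            return 0.0      # one wrong confident answer forfeits everything
    return pay
```

A single wrong confident answer wipes out the entire payment, which is the kind of strong incentive property that a 'no-free-lunch' axiom demands.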


Fun and games with rank aggregation
And finally, just for fun, you may like playing this game that I've made. Here are the instructions. Here is the algorithm underlying the game.


PAST RESEARCH: ALGORITHMS FOR DISTRIBUTED DATA STORAGE AND RETRIEVAL
I have previously worked on problems in the area of coding and information theory. More specifically, my prior research focused on constructing (error-correcting) codes for distributed storage systems, which need to be highly reliable, keep the data widely available, handle node failures efficiently, and use resources such as storage space and bandwidth optimally.
The video on the left gives a gentle introduction to an aspect of our work along with a toy example of one of our code constructions. More details are provided below.



Today's large-scale distributed storage systems comprise thousands of nodes storing hundreds of petabytes of data. In these systems, component failures are common, which makes it essential to store the data redundantly to ensure reliability. The most common way of adding redundancy is replication. However, replication is highly inefficient in terms of storage utilization, and hence many distributed storage systems are now turning to Reed-Solomon (erasure) codes. While these codes are optimal in terms of storage utilization, they perform poorly in terms of network bandwidth: during repair of a failed node, they require downloading the entire data to recover the small fraction that was stored on the failed node. Recovering the data stored on a particular node is of interest not only during repair but also in many other applications.
Regenerating codes are a new class of error-correcting codes for distributed storage systems that are optimal with respect to both storage space and network bandwidth. Regenerating codes come in two flavors: (i) Minimum Storage Regenerating (MSR) codes, which minimize the repair bandwidth while storing the minimum possible amount of data per node, and (ii) Minimum Bandwidth Regenerating (MBR) codes, which further minimize the repair bandwidth by allowing a slightly higher storage space per node.
Here is a description of some of our codes.
Product-Matrix Codes
Product-Matrix codes are the most general class of regenerating-code constructions available in the literature, spanning both the MBR and MSR points and providing a complete solution for the high-redundancy regime. These codes are the first, and so far the only, explicit constructions of regenerating codes that allow the number of nodes in the system to scale irrespective of the other system parameters. This attribute allows one to vary the number of nodes in the system on the fly, which is very useful, for example, in provisioning resources based on demand. Further, these codes can be implemented using existing Reed-Solomon encoders and decoders as building blocks.
Under these codes, each node is associated with an encoding vector. The source data is assembled in the form of a matrix, and each node stores the projection of this matrix along its encoding vector. To recover the data stored in a failed node, each helper node passes the projection of its stored data along the encoding vector of the failed node.
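The repair procedure just described can be sketched numerically. The toy example below works over the reals with illustrative parameters (the actual constructions operate over finite fields, and the encoding vectors and message matrix carry additional structure); it repairs a failed node by solving a small linear system, using the symmetry of the message matrix:

```python
import numpy as np

d, n = 4, 6                              # helpers per repair, total nodes
rng = np.random.default_rng(0)

# Message matrix holding the source symbols; made symmetric so that
# repair data can be turned back into the failed node's stored vector.
M = rng.standard_normal((d, d))
M = (M + M.T) / 2

# Encoding vectors: Vandermonde rows, so any d of them are independent.
psi = np.vander(np.arange(1.0, n + 1), d, increasing=True)  # shape (n, d)
stored = psi @ M                          # node i stores psi[i] @ M

f = 2                                     # index of the failed node
helpers = [0, 1, 3, 4]                    # any d surviving nodes

# Each helper sends ONE symbol: its stored vector projected on psi[f].
sym = np.array([stored[j] @ psi[f] for j in helpers])  # = psi[j] @ M @ psi[f]

# Solve psi[helpers] @ (M @ psi[f]) = sym, then use symmetry:
# M @ psi[f] equals psi[f] @ M, the failed node's stored data.
recovered = np.linalg.solve(psi[helpers], sym)
assert np.allclose(recovered, stored[f])
```

Each helper transmits just one scalar projection rather than its entire stored vector, which is the source of the bandwidth savings during repair.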
We have also optimized these codes with respect to other aspects of distributed storage systems such as security, error correction, scaling via ratelessness, etc.


References:
 "Optimal Exact-Regenerating Codes for Distributed Storage at the MSR and MBR Points via a Product-Matrix Construction", K. V. Rashmi, N. B. Shah, and P. V. Kumar, IEEE Transactions on Information Theory, August 2011.
 "Regenerating Codes for Errors and Erasures in Distributed Storage", K. V. Rashmi, N. B. Shah, K. Ramchandran, and P. V. Kumar, IEEE International Symposium on Information Theory (ISIT), July 2012.

 "Information-theoretically Secure Regenerating Codes for Distributed Storage", Nihar B. Shah, K. V. Rashmi, and P. Vijay Kumar, IEEE Globecom, 2011.

Twin Codes
This is a simple, yet powerful, framework which allows one to use any erasure code and still remain bandwidth-efficient. Under this framework, nodes are partitioned into two types, and the data is encoded using two (possibly different) codes. The data is encoded in a manner such that the problem of repairing nodes of one type is reduced to that of erasure decoding of the code employed by the other type. Here, one can choose the constituent codes based on the properties required; for instance, employing LDPC codes allows for low-complexity decoding algorithms.

Repair-by-Transfer Codes
These are MBR codes which perform repair-by-transfer: the data stored in any node can be recovered by mere transfer of data from other nodes, without requiring any computation. This minimizes the disk I/O, since each node reads only the data that it transfers, and also permits the use of "dumb" nodes. The animated video above presents the working of these codes.


References:
 "Explicit Construction of Optimal Exact Regenerating Codes for Distributed Storage", K. V. Rashmi, N. B. Shah, P. V. Kumar and K. Ramchandran, Proc. Allerton Conf., Sep. 2009.
 "Distributed Storage Codes with Repair-by-Transfer and Non-achievability of Interior Points on the Storage-Bandwidth Tradeoff", N. B. Shah, K. V. Rashmi, P. V. Kumar and K. Ramchandran, IEEE Transactions on Information Theory, March 2012.

Interference-Alignment-Based MISER Codes
Interference alignment is a concept that was proposed in the field of wireless communication to efficiently handle multiple interfering transmissions. We show that this concept necessarily arises during repair in MSR codes. Using these insights, we construct codes operating at the MSR point.

References:
 "Explicit Codes Minimizing Repair Bandwidth for Distributed Storage", N. B. Shah, K. V. Rashmi, P. V. Kumar and K. Ramchandran, Proc. IEEE Information Theory Workshop (ITW), Jan. 2010.
 "Interference Alignment in Regenerating Codes for Distributed Storage: Necessity and Code Constructions", N. B. Shah, K. V. Rashmi, P. V. Kumar and K. Ramchandran, IEEE Transactions on Information Theory, April 2012.
Non-achievability Results
It has been shown that there is a fundamental tradeoff between the two resources, storage space and network bandwidth, with MSR and MBR forming the two extreme points of this tradeoff. While the codes described above operate at these extreme points, we have also shown that no (exact-repair) codes can achieve the interior points of this tradeoff.
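Concretely, the tradeoff in question is the cut-set bound of Dimakis et al.: for a file of $B$ symbols stored across nodes of capacity $\alpha$, where any $k$ nodes suffice to recover the file and a failed node is repaired by downloading $\beta$ symbols from each of $d$ helpers, any achievable point must satisfy

```latex
B \;\le\; \sum_{i=0}^{k-1} \min\bigl\{\alpha,\; (d-i)\beta\bigr\},
```

with the MSR and MBR points sitting at the two extremes of this bound:

```latex
\text{MSR:}\quad (\alpha, \beta) = \left(\frac{B}{k},\; \frac{B}{k(d-k+1)}\right),
\qquad
\text{MBR:}\quad (\alpha, \beta) = \left(\frac{2Bd}{k(2d-k+1)},\; \frac{2B}{k(2d-k+1)}\right).
```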

