
Algorithms and Complexity Results for Learning and Big Data



Files in this item

File: LELKES-DISSERTATION-2017.pdf (PDF, 1 MB; restricted to UIC; no description provided)
Title: Algorithms and Complexity Results for Learning and Big Data
Author(s): Lelkes, Ádám D.
Advisor(s): Turán, György; Reyzin, Lev
Contributor(s): Friedland, Shmuel; Hellerstein, Lisa; Sloan, Robert; Turán, György
Department / Program: Mathematics, Statistics, and Computer Science
Degree Granting Institution: University of Illinois at Chicago
Degree: PhD, Doctor of Philosophy
Genre: Doctoral
Subject(s): theoretical computer science; learning theory; machine learning; big data; complexity theory; MapReduce; clustering
Abstract: This thesis focuses on problems in the theory and practice of machine learning and big data. We explore the complexity-theoretic properties of MapReduce, one of the most widely used distributed computing frameworks for big data; give new algorithms and prove computational hardness results for a model of clustering; and study fairness in machine learning applications.

In our study of MapReduce, we address some of the central questions that computational complexity theory asks about models of computation. After giving a detailed and precise formalization of MapReduce as a model of computation, based on the work of Karloff et al., we compare it to classical Turing machines and show that languages decidable by a Turing machine using sublogarithmic space can also be decided by a constant-round MapReduce computation. We then ask whether an increased number of rounds, or an increased amount of computation time per processor, yields strictly more computational power. We answer this question in the affirmative, proving a hierarchy theorem for MapReduce computations, conditioned on the Exponential Time Hypothesis.

We also study an interactive model of clustering introduced by Balcan and Blum. In this framework, we give algorithms for clustering linear functionals and hyperplanes, and prove computational hardness results showing that other concept classes, including deterministic finite automata, constant-depth threshold circuits, and Boolean formulas, cannot be clustered efficiently under standard cryptographic assumptions.

Finally, we address the issue of fairness in machine learning. We propose a novel approach for modifying three popular machine learning algorithms, AdaBoost, logistic regression, and support vector machines, to eliminate bias against a protected group. We compare our method empirically to previous approaches from the literature, as well as to several baseline algorithms, on real-world datasets, and give theoretical justification for its performance. We also propose a new measure of fairness for machine learning classifiers and demonstrate that it can help distinguish naive from more sophisticated approaches even when measuring error and bias alone is not sufficient.
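The MapReduce model referenced in the abstract organizes each round into a map phase, a shuffle that groups intermediate key-value pairs by key, and a reduce phase; constant-round computations chain a fixed number of such rounds. The following is a minimal, illustrative Python sketch of a single round with a toy word count. The function and example are hypothetical and are not the thesis's formalization, which additionally bounds the memory per machine and the number of machines.

```python
from collections import defaultdict
from typing import Callable, Iterable, List, Tuple

KV = Tuple[str, str]

def mapreduce_round(records: Iterable[KV],
                    mapper: Callable[[KV], Iterable[KV]],
                    reducer: Callable[[str, List[str]], Iterable[KV]]) -> List[KV]:
    """Simulate one MapReduce round: map, shuffle by key, reduce."""
    # Map phase: each mapper sees one input pair and emits intermediate pairs.
    intermediate = [pair for record in records for pair in mapper(record)]
    # Shuffle phase: group intermediate values by key.
    groups = defaultdict(list)
    for key, value in intermediate:
        groups[key].append(value)
    # Reduce phase: each reducer sees one key together with all of its values.
    return [pair for key, values in groups.items() for pair in reducer(key, values)]

# Toy word count: a single round suffices.
docs = [("doc1", "big data big ideas"), ("doc2", "data and complexity")]
out = mapreduce_round(
    docs,
    mapper=lambda kv: [(word, "1") for word in kv[1].split()],
    reducer=lambda word, ones: [(word, str(len(ones)))],
)
print(sorted(out))
# [('and', '1'), ('big', '2'), ('complexity', '1'), ('data', '2'), ('ideas', '1')]
```

A multi-round computation would simply feed the output pairs of one such round into the next; the hierarchy question studied in the thesis asks whether allowing more rounds (or more time per processor) strictly increases the class of decidable languages.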
Issue Date: 2017-03-14
Type: Thesis
URI: http://hdl.handle.net/10027/21729
Date Available in INDIGO: 2017-10-27
Date Deposited: May 2017
 


