Dissertation Defense - University of Houston

Dissertation Defense

In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

Yiqun Zhang

will defend his dissertation

Multidimensional Aggregations in Parallel Database Systems


Abstract

Aggregations compute summaries of a data set and are ubiquitous in diverse big data analytics problems. In this dissertation, we present two major technical contributions that significantly extend the capabilities of aggregations in parallel database systems, studying two complementary multidimensional mathematical structures: cubes and matrices. Cubes present a combinatorial problem over a set of discrete dimensions and have been widely studied in database systems. Matrices, on the other hand, are widely used in machine learning models, which require iterative numerical methods taking multidimensional vectors as input. Both problems are difficult to solve on large data sets residing on secondary storage, and their algorithms are difficult to optimize on a parallel cluster.

First, we extend cubes to intuitively show the relationship between measures aggregated at different grouping levels by introducing the percentage cube, a generalized database cube that takes percentages as its basic measure instead of simple sums. We show that the percentage cube is significantly harder to compute than the standard cube due to a higher exponential complexity. We propose SQL syntax and introduce novel query optimizations to materialize the percentage cube without any memory limitations. We compare our optimized queries with existing SQL functions, evaluating time, speed-up, and the effectiveness of lattice pruning methods. In addition, we show that columnar storage provides significant acceleration over row storage, the standard storage mechanism.

Second, we study parallel aggregation on large matrices stored as tables, showing how to compute a comprehensive data summarization matrix called Gamma. Gamma generally fits in main memory and enables the efficient derivation of many machine learning models, including PCA, linear regression, classification, and variable selection. We show analytically that Gamma captures essential statistical properties of the data set, and experimentally that it allows iterative algorithms to run faster in main memory. In addition, we show that Gamma is further accelerated with array and columnar storage. We show experimentally that our parallel aggregations compute models faster than existing machine learning libraries in R and Spark, two popular analytic platforms.
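
To make the percentage cube concrete, here is a minimal pandas sketch of a single cell of such a cube; the helper name percentage_aggregation and the toy data are illustrative assumptions, not the dissertation's proposed SQL syntax or algorithms. The helper totals a measure at a coarse grouping level and expresses each finer group as a percentage of that total; the full percentage cube materializes this for every pair of grouping levels in the dimension lattice, which is the source of its higher exponential cost.

    import pandas as pd

    # Toy fact table: two discrete dimensions and one measure.
    df = pd.DataFrame({
        "state":   ["CA", "CA", "TX", "TX"],
        "product": ["A",  "B",  "A",  "B"],
        "sales":   [100,  300,  200,  400],
    })

    def percentage_aggregation(df, total_by, break_by, measure):
        """Percentage of `measure` for each (total_by + break_by) group
        relative to its `total_by` total: one cell of a percentage cube."""
        detail = df.groupby(total_by + break_by, as_index=False)[measure].sum()
        totals = df.groupby(total_by)[measure].sum().rename("total")
        detail = detail.join(totals, on=total_by)
        detail["pct"] = 100.0 * detail[measure] / detail["total"]
        return detail

    # Share of each product's sales within each state:
    # CA: A 25%, B 75%; TX: A 33.3%, B 66.7%.
    print(percentage_aggregation(df, ["state"], ["product"], "sales"))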

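The Gamma summarization idea can likewise be sketched in a few lines of numpy, with the caveat that the matrix layout and variable names below are assumptions for illustration rather than the dissertation's exact definition. A single pass builds Gamma over an augmented matrix Z; the resulting counts, linear sums, and quadratic sums are then enough to derive least-squares coefficients and the covariance matrix needed for PCA, without further passes over the data.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 3, 1000                       # d features, n points
    X = rng.normal(size=(d, n))          # data set, one point per column
    y = 2.0 * X[0] - 1.0 * X[2] + rng.normal(scale=0.1, size=n)

    # Augmented matrix Z: a row of 1s, the d features, and the target y.
    Z = np.vstack([np.ones((1, n)), X, y])

    # Gamma is a small (d+2) x (d+2) summary computed in one pass;
    # it holds the count n, the linear sums L, and the quadratic sums Q.
    Gamma = Z @ Z.T

    # Linear regression via the normal equations, from Gamma alone.
    A = Gamma[:d + 1, :d + 1]            # [1; X] [1; X]^T
    b = Gamma[:d + 1, d + 1]             # [1; X] y^T
    beta = np.linalg.solve(A, b)
    print(beta)                          # approx [0, 2, 0, -1]

    # Covariance matrix for PCA, also recovered from Gamma's blocks.
    L = Gamma[1:d + 1, 0]                # per-feature sums
    Q = Gamma[1:d + 1, 1:d + 1]          # X X^T
    cov = Q / n - np.outer(L / n, L / n)

Because Gamma is tiny relative to the data set, an iterative algorithm can loop over it in main memory instead of rescanning the table on secondary storage, which is the source of the speed-ups described above.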

Date: Tuesday, October 31, 2017
Time: 10:30 AM - 12:30 PM
Place: PGH 575
Advisor: Dr. Carlos Ordonez

Faculty, students, and the general public are invited.