Computer Science Focus on Research - University of Houston

Computer Science Focus on Research

When: Monday, September 16, 2019
Where: PGH 563
Time: 11:00 AM


Fault-Tolerant Regularity-Based Real-Time Virtual Resources

Pavan Kumar Paluri, PhD Student

Many safety-critical applications employ embedded real-time systems in which both timing and fault-tolerance requirements must be continually satisfied. The Regularity-based Resource Partition Model (RRP), known for its code-level independence between the resource level and the task level, is used to schedule resource partitions in virtualized real-time systems. This paper presents a fault-tolerance model for Regularity-based Real-Time Virtual Resources that recovers from transient hardware faults without modifying user applications. The proposed framework, Fault-Tolerant RRP, combines a checkpointing mechanism and a checkpointing partition with a redundancy partition reserved for re-execution, so that task deadlines are met despite the occurrence of faults. The frequency of checkpoints and the number of time slices in the redundancy partition are parameterized by the fault rate of the hardware resource and the sum of the availability factors of the original partition sets. Extensive theoretical analysis and simulation-based experiments show that the proposed framework is effective while incurring minimal overhead.
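To make the parameterization concrete, the sketch below uses Young's classic first-order checkpoint-interval approximation and a simple expected-lost-work rule to size the redundancy partition. All numbers, function names, and the sizing rule are illustrative assumptions; the talk's FT-RRP framework derives its actual parameters from the partitions' availability factors, which this generic sketch does not model.

```python
import math

def checkpoint_interval(fault_rate, ckpt_overhead):
    """Young's first-order approximation of the optimal checkpoint
    interval, sqrt(2 * C / lambda). Generic textbook formula used here
    for illustration, not the formula from the talk."""
    return math.sqrt(2 * ckpt_overhead / fault_rate)

def redundancy_slices(fault_rate, hyperperiod, slice_len, interval):
    """Hypothetical sizing rule: reserve enough re-execution slices per
    hyperperiod to cover the expected lost work, i.e. (expected number
    of faults) * (one checkpoint interval of work), rounded up to
    whole time slices."""
    expected_faults = fault_rate * hyperperiod
    lost_work = expected_faults * interval
    return math.ceil(lost_work / slice_len)

# Illustrative numbers: 1e-4 transient faults per ms, 2 ms checkpoint cost,
# a 10,000 ms hyperperiod, and 10 ms time slices.
T = checkpoint_interval(1e-4, 2.0)
print(round(T, 1))                              # 200.0 ms between checkpoints
print(redundancy_slices(1e-4, 10_000.0, 10.0, T))  # 20 redundancy slices
```

The qualitative behavior matches the abstract: a higher fault rate shortens the checkpoint interval and enlarges the redundancy partition.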

Pavan Kumar Paluri received his Bachelor's degree in Computer Science and Engineering from Vellore Institute of Technology, India in 2016.

He is currently a third-year Ph.D. student in Computer Science working under Dr. Albert M. K. Cheng in the Real-Time Systems Lab. His research interests include real-time operating systems, virtualization, safety-critical systems, and fault-tolerant computing.


Scalable Distributed Kernel Support Vector Machine Training

Ruchi Shah, PhD Student

The kernel support vector machine (SVM) is a popular machine learning model well suited to many classification problems; a survey conducted by Kaggle in 2017 found that 26% of data science practitioners use SVMs to solve their problems. The kernel trick is well known to augment a linear SVM into a non-linear classifier, but at the cost of a much larger, non-separable optimization problem. A significant challenge in large-scale kernel SVM training is the size of the Gram matrix (n × n), which cannot be stored or processed efficiently when the training dataset is large (e.g., n in the millions). We focus on optimizing three components of the SVM training system: random-projection-based low-rank matrix approximation; a primal-dual interior-point method for solving the approximated kernel SVM problem; and parallelization of the low-rank components. Together these yield a training system that is fast, scalable, and accurate on large-scale datasets and across many computing nodes.
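The low-rank idea can be illustrated with a randomized range finder: project the Gram matrix onto a few random directions, orthonormalize, and keep only that subspace. This is a minimal NumPy sketch on a toy RBF kernel with made-up sizes (n, d, r below are assumptions), not the speaker's actual training system, which avoids ever materializing the full n × n matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: n points in d dimensions; r is the target rank.
# (Illustrative sizes -- real workloads have n in the millions.)
n, d, r = 500, 2, 50
X = rng.standard_normal((n, d))

# RBF Gram matrix (n x n): the object that becomes infeasible at scale.
sq = np.sum(X**2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

# Randomized range finder: sketch K with r random directions, then
# orthonormalize to obtain an approximate basis Q for K's column space.
Omega = rng.standard_normal((n, r))
Q, _ = np.linalg.qr(K @ Omega)

# Low-rank approximation K ~= Q (Q^T K); downstream solvers only need
# the n x r factors instead of the full n x n matrix.
K_approx = Q @ (Q.T @ K)

rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
print(round(rel_err, 4))
```

Because the RBF kernel's spectrum decays quickly on low-dimensional data, the rank-r factors capture the matrix to small relative error while cutting storage from O(n²) to O(nr), which is what makes the interior-point solve and its parallelization tractable.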

Ruchi Shah received her Bachelor's degree in Information Technology from the University of Mumbai, India in 2009 and her Master's degree in Computer Application from V.J.T.I. (Veermata Jijabai Technological Institute), India in 2012. She is currently a Ph.D. student in Computer Science working under Dr. Panruo Wu in the HPC Lab. Her research interests include high-performance computing, distributed computing, machine learning, natural language processing, deep learning, and big data analytics.