Recommended Citation
Postprint version. Published in 2014 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), November 2, 2014.
The definitive version is available at https://doi.org/10.1109/BIBM.2014.6999186.
Abstract
This paper presents a heterogeneous computing solution for an optimized genetic selection analysis tool, GenSel. GenSel can be used to efficiently infer the effects of genetic markers on a desired trait or to determine the genomic estimated breeding values (GEBV) of genotyped individuals. To predict which genetic markers are informative, GenSel performs Bayesian inference using Gibbs sampling, a Markov Chain Monte Carlo (MCMC) algorithm. Parallelizing this algorithm is technically challenging because there is a loop-carried dependence between successive iterations of the Markov chain. The approach presented in this paper exploits both the task-level parallelism (TLP) and the data-level parallelism (DLP) that exist within each iteration of the Markov chain. More specifically, a combination of CPU threads using OpenMP and GPU threads using NVIDIA's CUDA paradigm is used to speed up the sampling of each genetic marker used in creating the model. This performance speedup will allow the algorithm to accommodate the expected increase in the number of observations on animals and in the number of genetic markers per observation. The current implementation executes 1.84 times faster than the optimized CPU implementation.
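To make the parallelization pattern described in the abstract concrete, the following is a minimal sketch of a Gibbs-sampling marker loop with the per-marker work parallelized across CPU threads using OpenMP. It assumes a simplified BayesC-style update with fixed variance components, and all names (num_iterations, num_markers, residual, etc.) are hypothetical; it is not the authors' GenSel implementation, and the CUDA GPU path described in the paper is omitted here.

```cpp
// Sketch: sequential MCMC and marker loops, with data-level parallelism
// inside each marker's update (the reductions over all records).
#include <cmath>
#include <random>
#include <vector>
#include <omp.h>

int main() {
    const int num_iterations = 1000;   // MCMC chain length (hypothetical)
    const int num_markers    = 5000;   // genetic markers (hypothetical)
    const int num_records    = 10000;  // genotyped animals (hypothetical)

    // Genotypes stored marker-major; residuals are updated in place.
    std::vector<std::vector<float>> X(num_markers,
                                      std::vector<float>(num_records, 0.0f));
    std::vector<float> residual(num_records, 0.0f);
    std::vector<float> effect(num_markers, 0.0f);

    std::mt19937 rng(42);
    std::normal_distribution<float> gauss(0.0f, 1.0f);
    const float sigma_e = 1.0f;   // residual variance (assumed fixed here)
    const float sigma_a = 0.01f;  // marker-effect variance (assumed fixed here)

    // Outer loop: sequential, because each iteration conditions on the
    // previous state of the chain (the loop-carried dependence).
    for (int iter = 0; iter < num_iterations; ++iter) {
        // Marker loop: also sequential in Gibbs sampling, since each marker's
        // full conditional depends on the current effects of all other markers.
        for (int j = 0; j < num_markers; ++j) {
            const std::vector<float>& xj = X[j];
            float xpx = 0.0f;  // x_j' x_j
            float xpy = 0.0f;  // x_j' (residual + x_j * current effect)

            // Data-level parallelism: the per-record terms of these reductions
            // are independent, so they can be split across CPU threads
            // (or offloaded to GPU threads, as in the paper's CUDA path).
            #pragma omp parallel for reduction(+:xpx, xpy)
            for (int i = 0; i < num_records; ++i) {
                xpx += xj[i] * xj[i];
                xpy += xj[i] * (residual[i] + xj[i] * effect[j]);
            }

            // Sample the new effect from its full conditional (simplified).
            const float lhs  = xpx + sigma_e / sigma_a;
            const float mean = xpy / lhs;
            const float new_effect =
                mean + gauss(rng) * std::sqrt(sigma_e / lhs);

            // Update residuals with the change in this marker's effect.
            const float delta = new_effect - effect[j];
            #pragma omp parallel for
            for (int i = 0; i < num_records; ++i) {
                residual[i] -= xj[i] * delta;
            }
            effect[j] = new_effect;
        }
    }
    return 0;
}
```

In the paper's heterogeneous design, this inner per-marker work is what the combination of OpenMP CPU threads and CUDA GPU threads accelerates; only the CPU-threaded portion is illustrated above.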
Disciplines
Computer Sciences
Copyright
Copyright © 2014 IEEE.
Number of Pages
8
Publisher statement
Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
URL: https://digitalcommons.calpoly.edu/csse_fac/259