Curriculum vitae

About me

I’m Daniel Bourgeois, a postdoctoral researcher in the Department of Computer Science at Rice University in Houston, Texas, USA. My research focuses on building machine learning systems; in particular, I designed and built Einsummable, a prototype distributed ML system. My broader professional interests include functional programming and data science.

I also play the violin and am currently working through Suzuki book number two.

Skills

C++, Haskell, Python, R My primary research work is developing a custom ML system in C++ and CUDA. For coursework, data science, and scripting, I use R and Python. For side projects, including my personal note-taking application, I program in Haskell.

Deep Learning For ML programming, I’m familiar with PyTorch, TensorFlow, and JAX, particularly their support for distributed computing. As part of my research, I program with several NVIDIA tools, including CUDA, CUDA Graphs, cuTENSOR, and NCCL. For distributed computing, I’ve used MPI and UCX, and I wrote my own InfiniBand communicator.

Education

2017-2024: PhD in Statistics, Rice University, Thesis: “Declarative Machine Learning with Einsummable”

2012-2016: BSc Mathematics, Louisiana State University

Work Experience

2024-: Postdoctoral Researcher, Computer Science, Rice University

2022: Software Engineering Intern, FlightAware

2017: Research Intern, Computer Science Research Initiative, Sandia National Laboratories

2015: Undergraduate Research Assistant, Ste||ar Group

Awards

Rice University’s “Data to Knowledge Lab” Fellow, Spring 2020

Rice University’s “Data to Knowledge Lab” Fellow, Fall 2019

2017 recipient of National Science Foundation Graduate Research Fellowship

Publications

D. Bourgeois, Z. Ding, D. Jankov, J. Li, M. Sleem, Y. Tang, J. Yao, X. Yao, and C. Jermaine, “EinDecomp: Decomposition of declaratively-specified machine learning and numerical computations for parallel execution.” 2024.

Z. Ding, J. Yao, B. Barrow, T. L. Botran, C. Jermaine, Y. Tang, J. Li, X. Yao, S. M. Abdelghafar, and D. Bourgeois, “TURNIP: A ‘nondeterministic’ GPU runtime with CPU RAM offload.” 2024.

Y. Tang, Z. Ding, D. Jankov, B. Yuan, D. Bourgeois, and C. Jermaine, “Auto-differentiation of relational computations for very large scale machine learning,” in International conference on machine learning, 2023, pp. 33581–33598.

B. Yuan, D. Jankov, J. Zou, Y. Tang, D. Bourgeois, and C. Jermaine, “Tensor relational algebra for distributed machine learning system design,” Proceedings of the VLDB Endowment, vol. 14, no. 8, 2021.

D. R. Kowal and D. C. Bourgeois, “Bayesian function-on-scalars regression for high-dimensional data,” Journal of Computational and Graphical Statistics, vol. 29, no. 3, pp. 629–638, 2020.

H. Kaiser, T. Heller, D. Bourgeois, and D. Fey, “Higher-level parallelization for local and distributed asynchronous task-based programming,” in Proceedings of the first international workshop on extreme scale programming models and middleware, 2015, pp. 29–37.