
Research Engineer Resume Template

Where academic depth meets product engineering


Dr. Kenji Nakamura

Research Engineer | Applied AI & Systems Research

📧 kenji.nakamura@email.com | 📱 (555) 654-3210 | 🔗 linkedin.com/in/kenjinakamura | 💻 github.com/kenjinakamura | 📚 scholar.google.com/kenjinakamura

San Francisco, CA


Professional Summary

Research Engineer with a Ph.D. in Computer Science and 6+ years bridging fundamental research and production engineering. Expertise in large-scale ML systems, applied deep learning, and high-performance computing. Published 14 peer-reviewed papers with 900+ citations. Designed and deployed ML infrastructure serving 100M+ users at Meta AI Research. Passionate about turning research insights into reliable, large-scale production systems.


Technical Skills

Machine Learning: Deep learning, reinforcement learning, large language models (LLMs), computer vision, self-supervised learning, model distillation

Frameworks: PyTorch, JAX, TensorFlow, Triton, ONNX, TensorRT, vLLM, HuggingFace

Systems & Infrastructure: Distributed training (FSDP, DeepSpeed, Megatron-LM), CUDA/GPU optimization, model serving, high-performance C++, Rust

Data & Compute: Apache Spark, Ray, HDFS, Slurm, Kubernetes, custom data pipelines at petabyte scale

Languages: Python, C++, CUDA, Rust, Bash

Research Tools: LaTeX, Weights & Biases, Hydra, DVC, Jupyter, Matplotlib, seaborn


Professional Experience

Research Engineer | Meta AI Research (FAIR) | Menlo Park, CA

August 2020 - Present

  • Co-designed training infrastructure for the OPT-175B language model, enabling distributed training across 992 A100 GPUs with 70% hardware utilization; published as a technical report
  • Built efficient inference engine for large-scale vision-language models, reducing serving cost by 3x and latency by 60% vs. baseline
  • Developed novel data augmentation pipeline for self-supervised vision model that improved ImageNet top-1 accuracy by 1.8pp
  • Led implementation of FSDP training recipe adopted by 12+ internal research teams as standard practice
  • Collaborated on 6 published papers spanning LLMs, vision, and multimodal representation learning
  • Mentored 4 PhD interns from Stanford, MIT, CMU, and Berkeley; 2 converted to full-time research engineers
  • Open-sourced 3 tools (data pipeline, profiling library, evaluation harness) with 2,000+ combined GitHub stars

Research Scientist Intern | Google Brain | Mountain View, CA

May 2019 - August 2019

  • Implemented novel attention mechanism for sparse transformer architecture, achieving 2x speedup on long-sequence tasks
  • Ran large-scale ablation studies across 50+ model configurations using TPU pods
  • Contributed an implementation that was included in the TensorFlow Research Models repository

Research Engineer | MIT Computer Science and Artificial Intelligence Lab (CSAIL) | Cambridge, MA

September 2015 - July 2020

  • Designed and built robotic manipulation system integrating perception, planning, and control achieving 94% grasp success rate on unseen objects
  • Developed reinforcement learning environment used by 15+ research groups worldwide (3,500+ GitHub stars)
  • Implemented simulation-to-real transfer pipeline reducing real-world fine-tuning data requirements by 90%
  • Managed compute cluster of 40 GPU nodes for 20-person research group

Education

Ph.D. in Computer Science (Robotics & Machine Learning)

Massachusetts Institute of Technology (MIT) | Cambridge, MA | Graduated: August 2020

Dissertation: "Sample-Efficient Sim-to-Real Transfer for Robotic Manipulation via Structured Representations"

Advisor: Prof. Leslie Kaelbling

Master of Engineering in EECS

Massachusetts Institute of Technology (MIT) | Cambridge, MA | Graduated: June 2015 | GPA: 4.8/5.0

Bachelor of Science in Computer Science

University of Tokyo | Tokyo, Japan | Graduated: March 2013 | Summa Cum Laude, GPA: 3.96/4.0


Selected Publications

  1. Nakamura, K., et al. (2023). "Efficient Training of Large Language Models via Selective Gradient Checkpointing." NeurIPS. [Citations: 210]

  2. Nakamura, K., Zhang, Y., & LeCun, Y. (2022). "Self-Supervised Pretraining for Vision-Language Alignment at Scale." CVPR (Oral). [Citations: 185]

  3. Li, W., Nakamura, K., et al. (2022). "Sparse Mixture-of-Experts for Efficient Vision Transformers." ICLR. [Citations: 143]

  4. Nakamura, K. & Kaelbling, L. (2020). "Sim-to-Real via Structured Latent Space Representations." ICRA (Best Paper Finalist). [Citations: 134]

  5. Nakamura, K., et al. (2019). "Sample-Efficient Robot Learning through Meta-Adaptation." RSS. [Citations: 98]

Total Publications: 14 | Total Citations: 900+ | h-index: 9


Patents

  • US11876543 – "Adaptive Gradient Checkpointing for Memory-Efficient Large Model Training" (Meta AI, 2023)
  • US11543210 – "System and Method for Sim-to-Real Policy Transfer in Robotic Manipulation" (MIT, 2021)

Open Source Contributions

  • torchprofile – PyTorch model profiling library (1,200 stars, 80+ contributors)
  • simrobot-env – Robotic simulation RL environment (2,300 stars, used by 15+ universities)
  • llm-eval-harness – Unified evaluation framework for LLMs (700 stars)

Invited Talks & Service

Invited Talks:

  • "Scaling ML Systems: From Research to Production" – ICML Industry Track (2023)
  • "Efficient LLM Inference" – Stanford ML Symposium (2022)
  • "Sim-to-Real Transfer for Robotics" – CMU Robotics Seminar (2021)

Reviewing:

  • Program Committee: NeurIPS (2021-2024), ICML (2021-2024), ICLR (2022-2024), CVPR (2022-2024)
  • Outstanding Reviewer Award – NeurIPS (2022)

Awards & Honors

  • NeurIPS Outstanding Paper Award (2023)
  • ICRA Best Paper Finalist (2020)
  • NSF Graduate Research Fellowship (2013-2016)
  • MIT Presidential Fellowship (2013)

Additional Information

Languages: English (Fluent), Japanese (Native)

Interests: Robotics, AI safety, competitive programming, Go (board game)

