Marina Neseem

Research Scientist @ Nvidia | PhD @ Brown University

About Me

👩‍💻 I am a Research Scientist at Nvidia.

🎓 I got my PhD from Brown University.

📚 I am interested in Efficient AI, Deep Learning Optimizations, and Edge Intelligence.

Work & Internship Experience

Research Scientist - Nvidia, Santa Clara, CA. (July 2024 - Present)

  • My research focuses on improving the efficiency of deep learning models.

Research Intern - Google, Mountain View, CA. (June 2023 - August 2023)

  • I joined YouTube as a Research Intern.
  • The goal of my internship was to develop an efficient quantized general-purpose backbone for vision tasks and co-design it with custom hardware to efficiently support high-throughput workloads at YouTube.
  • The resulting model, along with the co-designed hardware, can be widely used for various downstream computer-vision tasks where efficiency is critical.

Research Intern - Microsoft Research, Redmond, WA. (June 2022 - August 2022)

  • I re-joined the AI Compiler group at Microsoft Research.
  • During my internship, I developed a tool named SupersONNX.
  • SupersONNX achieves high-performance end-to-end inference for deep neural networks with minimal runtime overhead.
  • SupersONNX applies multiple graph-optimization techniques as well as optimized memory and buffer allocation.
  • SupersONNX achieves a 3-4X reduction in memory footprint on both Transformer models and residual convolutional models.

Research Intern - Microsoft Research, Redmond, WA. (June 2021 - August 2021)

  • I joined the AI Compiler group at Microsoft Research.
  • During my internship, I developed a heuristics-based grid-search algorithm to explore the compiler's design space for efficient configurations.
  • I applied the heuristics-based grid search to matrix-multiplication and convolution implementations, achieving state-of-the-art performance.
  • I also created case studies to showcase the AI compiler.

Research Assistant - SCALE lab, Brown University (Sept 2019 - May 2024)

My research interests include Edge Intelligence, Adaptive/Dynamic Neural Networks, and Machine Learning optimization for resource-constrained devices.

Digital Design Teaching Assistant - Brown University (Sept 2020 - Dec 2020)

My responsibility was to assist students in completing their labs, which included using digital design simulation tools such as LTspice and Electric, as well as other commercial tools for logic synthesis, placement, and routing.

Research Intern - SCALE lab, Brown University (Jan 2019 - June 2019)

As part of the OpenROAD project, I was responsible for implementing a physical synthesis tool in C/C++ on top of the open-source logic synthesis tools Yosys and ABC.

Software Development Engineer - Mentor Graphics, Egypt (Oct 2017 - Dec 2018)

  • Worked on Physical Verification tooling in C++
  • Communicated with Marketing and Quality Assurance team members to analyze problems and customer needs
  • Developed timeline plans for development tasks

Software Development Intern - Mentor Graphics, Egypt (April 2017 - Oct 2017)

  • Worked on Physical Verification tooling in C++
  • Implemented new features for a graphical-user-interface design tool in Qt and C++
  • Developed and maintained testing suites as a part of the tool’s regression

Network Security Teaching Assistant - Ain Shams University, Egypt (Sept 2017 - Jan 2018)

  • Led sections and labs to explain the class material in detail and with examples
  • Helped develop exams
  • Helped develop assignments and programming exercises to ensure all students understood the class material

Publications

M. Neseem, C. McCullough, R. Hsin, C. Leichner, S. Li, I. Chong, A. Howard, L. Lew, S. Reda, V. Rautio, D. Moro, “PikeLPN: Mitigating Overlooked Inefficiencies of Low-Precision Neural Networks", Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition 2024. [pdf]

A. Agiza, M. Neseem, S. Reda, “MTLoRA: Low-Rank Adaptation Approach for Efficient Multi-Task Learning", Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition 2024. [pdf]

M. Neseem, A. Agiza, S. Reda, “AdaMTL: Adaptive Input-dependent Inference for Efficient Multi-Task Learning", Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops 2023. [pdf]

M. Neseem, A. Hosny, S. Reda, “Exploiting Activations Sparsity for Efficient Training and Inference of DNNs on the Edge", Preprint.

M. Neseem, A. Hosny, S. Reda, “Sparse Bitmap Compression for Memory-Efficient Training on the Edge", Proceedings of the ACM/IEEE Symposium on Edge Computing 2021. [pdf]

M. Neseem, S. Reda, “AdaCon: Adaptive Context-Aware Object Detection for Resource-Constrained Embedded Devices", Proceedings of the International Conference on Computer-Aided Design 2021. [pdf] [Video]

M. Neseem, J. Nelson, S. Reda, “AdaSense: Adaptive Low-Power Sensing and Activity Recognition for Wearable Devices", Proceedings of ACM/IEEE Design Automation Conference 2020. [pdf] [Video]

T. Ajayi, V. A. Chhabria, M. Fogaça, S. Hashemi, A. Hosny, A. B. Kahng, M. Kim, J. Lee, U. Mallappa, M. Neseem, G. Pradipta, S. Reda, M. Saligane, S. S. Sapatnekar, C. Sechen, M. Shalan, W. Swartz, L. Wang, Z. Wang, M. Woo and B. Xu, “Toward an Open-Source Digital Flow: First Learnings from the OpenROAD Project", Proceedings of ACM/IEEE Design Automation Conference 2019. [pdf]

Awards

🎖 Richard Newton Young Fellow Award at the Design Automation Conference, Las Vegas, Nevada, June 2019.

🎖 Top Students Financial Award, Ain Shams University, Cairo, Egypt (2013, 2014, 2015, 2016).