18.337/6.7320: Parallel Computing, Scientific Machine Learning, and Modern Agentic Modelling (Spring 2026)

Professor Alan Edelman (and Philip the Corgi), affiliate Professor Chris Rackauckas

MW 2:30 to 4:00 @ 45-230 (before spring break), then switch to 32-123 (after spring break)

TA and Office hours: (To be confirmed)

Canvas will only be used for homework and project (+proposal) submission

Note for 2026: Enrollment is much higher than it has been in the past, so the plan below (based on the 2023 class) may have to be modified if enrollment remains this high. We may not be able to get as many computing resources for everyone, and since Math does not usually have the same resources as Course 6, we may not have as much TA support as we would hope.

Julia:

  • Really nice Julia tutorial for the fall 2022 class Tutorial

  • Julia cheatsheets

  • Optional Julia tutorial by Steven Johnson: Fri Feb 6, 2026, 5-7pm, Room 34-101

  • Also available virtually via Zoom; a recording will be posted.

A basic overview of the Julia programming environment for numerical computations that we will use in this course for simple computational exploration. This (Zoom-based) tutorial will cover what Julia is and the basics of interaction, scalar/vector/matrix arithmetic, and plotting; we'll be using it as just a "fancy calculator", and no "real programming" will be required.

If possible, try to install Julia on your laptop beforehand using the instructions at the above link. Failing that, you can run Julia in the cloud (see instructions above).
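For a taste of what the tutorial covers, here is a minimal sketch of Julia used as a "fancy calculator" (an illustrative snippet, not taken from the tutorial itself):

```julia
# Scalar/vector/matrix arithmetic in Julia, close to math notation.
A = [1 2; 3 4]     # a 2x2 matrix
b = [5.0, 6.0]     # a vector
x = A \ b          # solve the linear system A*x = b
A * x ≈ b          # true: multiplying back recovers b
sum(abs2, b)       # sum of squares of the entries: 61.0
```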

Modelling (New for 2026)

Scientists and engineers build physics-based models of the world. As manufacturing resurges in America, engineers are increasingly turning to model-based design, and Physical AI is reinventing product design. This includes Scientific AI methods that combine models with data, model autocompletion that provides the missing physics, Agentic AI systems that build physics-based models, and compilers and solvers that not only leverage automatic differentiation but are also redesigned to provide introspection to agents.

Parallel Computing

Take a look at MIT Engaging and see what is there.

Tentative Schedule

| # | Date | Day | Comments | Lecturer | Topic |
|---|------|-----|----------|----------|-------|
| 1 | 2/2/2026 | Monday | | Chris | Intro to SciML, PINNs |
| 2 | 2/4/2026 | Wednesday | | Chris | Forward mode automatic differentiation |
| 3 | 2/9/2026 | Monday | | Alan | Writing fast single core code |
| 4 | 2/11/2026 | Wednesday | | Alan | Single core code, continued |
| | 2/16/2026 | Monday | Presidents' Day | | |
| 5 | 2/17/2026 | Tuesday | Monday schedule | Chris | Reverse mode automatic differentiation |
| 6 | 2/18/2026 | Wednesday | | Chris | Adjoint methods |
| 7 | 2/23/2026 | Monday | | Chris | Neural/Universal differential equations |
| 8 | 2/25/2026 | Wednesday | | Chris | Differential Algebraic Equations |
| 9 | 3/2/2026 | Monday | | Chris | Machine Learning with Conservation Laws |
| 10 | 3/4/2026 | Wednesday | | Chris | Make-up Day |
| 11 | 3/9/2026 | Monday | | Alan | |
| 12 | 3/11/2026 | Wednesday | | Alan | |
| 13 | 3/16/2026 | Monday | | Alan | |
| 14 | 3/18/2026 | Wednesday | | Alan | |
| | 3/23/2026 | Monday | Spring Break | | |
| | 3/25/2026 | Wednesday | Spring Break | | |
| 15 | 3/30/2026 | Monday | | Alan | |
| 16 | 4/1/2026 | Wednesday | | Alan | |
| 17 | 4/6/2026 | Monday | | Alan | |
| 18 | 4/8/2026 | Wednesday | | Alan | |
| 19 | 4/13/2026 | Monday | | Alan | |
| 20 | 4/15/2026 | Wednesday | | Alan | |
| | 4/20/2026 | Monday | Patriots' Day | | |
| 21 | 4/22/2026 | Wednesday | | Alan | |
| 22 | 4/27/2026 | Monday | | Alan | |
| 23 | 4/29/2026 | Wednesday | | Alan | |
| 24 | 5/4/2026 | Monday | | Alan | |
| 25 | 5/6/2026 | Wednesday | | Alan | |
| 26 | 5/11/2026 | Monday | | Alan | |

Announcement:

There will be homeworks, followed by the final project. Everyone needs to present their work and submit a project report.

  • 1-page final project proposal due: just before Spring Break

  • Final project presentations: some may be done by video and some in class

  • Final project reports due: May 11

Grading:

25% problem sets, 25% quizzes, 10% for the final project proposal, and 40% for the final project. Problem sets and final projects will be submitted electronically.

HW

# Notebook

Lecture Schedule (Old: from 2023)

| # | Day | Date | Topic | SciML lecture | Materials |
|---|-----|------|-------|---------------|-----------|
| 1 | M | 2/6 | Intro to Julia. My Two Favorite Notebooks. | | [Julia is fast], [AutoDiff], [autodiff video] |
| 2 | W | 2/8 | Matrix Calculus I and The Parallel Dream | | See [IAP 2023 Class on Matrix Calculus], [handwritten notes], [The Parallel Dream] |
| 3 | M | 2/13 | Matrix Calculus II | | [handwritten notes], [Corgi in the Washing Machine], [2x2 Matrix Jacobians] |
| 4 | W | 2/15 | Serial Performance | 2 | [handwritten notes], [Serial Performance .jl file], [Loop Fusion Blog] |
| 5 | T | 2/21 | Intro to PINNs and Automatic Differentiation I: Forward mode AD | 3 and 8 | ode and PINNs, intro to PINN handwritten notes, autodiff handwritten notes |
| 6 | W | 2/22 | Automatic Differentiation II: Reverse mode AD | 10 | pinn.jl, reverse mode AD demo, handwritten notes |
| 7 | M | 2/27 | Dynamical Systems & Serial Performance on Iterations | 4 | Lorenz many ways, Dynamical Systems, handwritten notes |
| 8 | W | 3/1 | HPC & Threading | 5 and 6 | pi.jl, threads.jl, HPC Slides |
| 9 | M | 3/6 | Parallelism | | Parallelism in Julia Slides, reduce/prefix notebook |
| 10 | W | 3/8 | Prefix (and more) | | ppt slides, reduce/prefix notebook, ThreadedScans.jl, cuda blog |
| 11 | M | 3/13 | Adjoint Method Example | 10 | Handwritten Notes |
| 12 | W | 3/15 | Guest Lecture - Chris Rackauckas | | |
| 13 | M | 3/21 | Vectors, Operators and Adjoints | | Handwritten Notes |
| 14 | W | 3/23 | Adjoints of Linear, Nonlinear, ODE | 11 | Handwritten Notes, 18.335 adjoint notes (Johnson) |
| | | | Spring Break | | |
| 15 | M | 4/3 | Guest Lecture, Billy Moses | | Enzyme AD |
| 16 | W | 4/5 | Guest Lecture, Keaton Burns | | Dedalus PDE Solver |
| 17 | M | 4/10 | Adjoints of ODE | | Handwritten Notes |
| 18 | W | 4/12 | Partitioning | | |
| | M | 4/17 | Patriots' Day | | |
| 19 | W | 4/19 | Fast Multipole and Parallel Prefix | | Unfinished Draft |
| 20 | M | 4/24 | | | |
| 21 | W | 4/26 | Project Presentation I | | |
| 22 | M | 5/1 | Project Presentation II | | |
| 23 | W | 5/3 | Project Presentation III | | |
| 24 | M | 5/8 | Project Presentation IV | | |
| 25 | W | 5/10 | Project Presentation V | | |
| | M | 5/15 | Class Cancelled | | |

| # | Day | Date | Topic | SciML lecture | Materials |
|---|-----|------|-------|---------------|-----------|
| 8 | W | 3/1 | GPU Parallelism I | 7 | [video 1], [video2] |
| 9 | M | 3/6 | GPU Parallelism II | | [video], [Eig&SVD derivatives notebooks], [2022 IAP Class Matrix Calculus] |
| 10 | W | 3/8 | MPI | | Slides, [video, Lauren Milichen], [Performance Metrics] see p317, 15.6 |
| 11 | M | 3/13 | Differential Equations I | 9 | |
| 12 | W | 3/15 | Differential Equations II | 10 | |
| 13 | M | 3/20 | Neural ODE | 11 | |
| 14 | W | 3/22 | | 13 | |
| | | | Spring Break | | |
| 15 | M | 4/3 | | | GPU Slides, Prefix Materials |
| 16 | W | 4/5 | Convolutions and PDEs | 14 | |
| 17 | M | 4/10 | Chris R on ODE adjoints, PRAM Model | 11 | [video] |
| 18 | W | 4/12 | Linear and Nonlinear System Adjoints | 11 | [video] |
| | M | 4/17 | Patriots' Day | | |
| 19 | W | 4/19 | Lagrange Multipliers, Spectral Partitioning | | Partitioning Slides |
| 20 | M | 4/24 | | 15 | [video], notes on adjoint |
| 21 | W | 4/26 | Project Presentation I | | |
| 22 | M | 5/1 | Project Presentation II | | Materials |
| 23 | W | 5/3 | Project Presentation III | 16 | [video] |
| 24 | M | 5/8 | Project Presentation IV | | |
| 25 | W | 5/10 | Project Presentation V | | |
| 26 | M | 5/15 | Project Presentation VI | | |

Lecture Summaries and Handouts

Class Videos

Lecture 1: Syllabus, Introduction to Performance, Introduction to Automatic Differentiation

Setting the stage for this course, which will involve high-performance computing, mathematics, and scientific machine learning, we looked at two introductory notebooks. The first, [Julia is fast](https://github.com/mitmath/18337/blob/master/lecture1/Julia%20is%20fast.ipynb), primarily reveals just how much performance languages like Python can leave on the table; many people never compare languages, so they are unlikely to be aware of the gap. The second, [AutoDiff](https://github.com/mitmath/18337/blob/master/lecture1/AutoDiff.ipynb), reveals the "magic" of forward-mode automatic differentiation, showing how a compiler can "rewrite" a program through operator overloading and still maintain performance. This is a whole new way to see calculus: not the way you learned it in a first-year class, and not finite differences either.
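To make the overloading idea concrete, here is a minimal sketch of forward-mode AD with a hand-rolled dual-number type; the `Dual` struct and its methods are illustrative names, not the notebook's actual code:

```julia
# Forward-mode AD by operator overloading: carry (value, derivative)
# pairs through ordinary arithmetic, applying the chain rule per operation.
struct Dual
    val::Float64   # function value
    der::Float64   # derivative with respect to the input
end

Base.:+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
Base.:*(a::Dual, b::Dual) = Dual(a.val * b.val, a.der * b.val + a.val * b.der)
Base.:+(a::Dual, b::Number) = Dual(a.val + b, a.der)
Base.:*(a::Number, b::Dual) = Dual(a * b.val, a * b.der)

f(x) = x * x + 3x + 1    # an ordinary Julia function, unaware of Dual
y = f(Dual(2.0, 1.0))    # seed dx/dx = 1 at x = 2
# y.val == 11.0 = f(2), and y.der == 7.0 = f'(2) since f'(x) = 2x + 3
```

Because the overloads compile down to plain floating-point operations, the derivative comes at essentially the cost of evaluating `f` itself.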

Lecture 2: The Parallel Dream and Intro to Matrix Calculus

We gave an example, [The Parallel Dream](https://github.com/mitmath/18337/blob/master/lecture1/the_dream.ipynb).

Lecture and Notes

Homeworks

HW1 will be due Thursday, Feb 16. This is really just a getting-started homework.

Hw1

Final Project

For the second half of the class, students will work on the final project. A one-page final project proposal must be submitted by Friday, March 24, through Canvas.

Last three weeks (tentative) will be student presentations.

Possible Project Topics

Here's a list of current projects of interest to the julialab

One possibility is to review an interesting algorithm not covered in the course and develop a high performance implementation. Some examples include:

  • High performance PDE solvers for specific PDEs like Navier-Stokes
  • Common high performance algorithms (Ex: Jacobian-Free Newton Krylov for PDEs)
  • Recreation of a parameter sensitivity study in a field like biology, pharmacology, or climate science
  • Augmented Neural Ordinary Differential Equations
  • Neural Jump Stochastic Differential Equations
  • Parallelized stencil calculations
  • Distributed linear algebra kernels
  • Parallel implementations of statistical libraries, such as survival statistics or linear models for big data. Here's one example parallel library, and a second example.
  • Parallelization of data analysis methods
  • Type-generic implementations of sparse linear algebra methods
  • A fast regex library
  • Math library primitives (exp, log, etc.)
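To give a flavor of the "parallelized stencil calculations" topic above, here is a minimal threaded 1-D stencil sketch (hypothetical starter code, not a project-grade implementation; a real project would handle boundaries, higher dimensions, cache behavior, and benchmarking):

```julia
using Base.Threads

# Apply the three-point stencil (discrete 1-D Laplacian) to the interior
# points of u, splitting the loop across available threads.
# Start Julia with multiple threads, e.g. `julia -t 4`, to see parallelism.
function laplacian!(out, u)
    @threads for i in 2:length(u)-1
        @inbounds out[i] = u[i-1] - 2u[i] + u[i+1]
    end
    return out
end

u = [x^2 for x in 0.0:0.1:1.0]   # sample data: u(x) = x^2 on a grid with h = 0.1
out = zeros(length(u))
laplacian!(out, u)
# for u = x^2 the second difference is exactly 2h^2 = 0.02 at every interior point
```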

Another possibility is to work on state-of-the-art performance engineering. This would be implementing a new auto-parallelization or performance enhancement. For these types of projects, implementing an application for benchmarking is not required, and one can instead benchmark the effects on already existing code to find cases where it is beneficial (or leads to performance regressions). Possible examples are:

Additionally, Scientific Machine Learning is a wide-open field with lots of low-hanging fruit. Instead of a review, a suitable research project can be chosen for the final project. Possibilities include:

  • Acceleration methods for adjoints of differential equations
  • Improved methods for Physics-Informed Neural Networks
  • New applications of neural differential equations
  • Parallelized implicit ODE solvers for large ODE systems
  • GPU-parallelized ODE/SDE solvers for small systems
