26. Execution Statistics

This table contains the latest execution statistics.

| Document | Modified | Method | Run Time (s) | Status |
|---|---|---|---|---|
| aiyagari_jax | 2025-06-23 03:38 | cache | 71.15 | |
| arellano | 2025-06-23 03:38 | cache | 21.98 | |
| autodiff | 2025-06-23 03:38 | cache | 13.4 | |
| hopenhayn | 2025-06-23 03:39 | cache | 24.45 | |
| ifp_egm | 2025-06-23 03:41 | cache | 143.64 | |
| intro | 2025-06-23 03:41 | cache | 1.0 | |
| inventory_dynamics | 2025-06-23 03:41 | cache | 9.74 | |
| inventory_ssd | 2025-06-23 04:17 | cache | 2152.07 | |
| jax_intro | 2025-06-23 04:18 | cache | 36.13 | |
| jax_nn | 2025-06-23 04:19 | cache | 89.14 | |
| job_search | 2025-06-23 04:19 | cache | 9.85 | |
| keras | 2025-06-23 04:20 | cache | 27.13 | |
| kesten_processes | 2025-06-25 01:18 | cache | 18.07 | |
| lucas_model | 2025-06-23 04:20 | cache | 19.54 | |
| markov_asset | 2025-06-23 04:21 | cache | 11.31 | |
| mle | 2025-06-23 04:21 | cache | 15.05 | |
| newtons_method | 2025-06-23 04:24 | cache | 182.87 | |
| opt_invest | 2025-06-23 04:24 | cache | 22.32 | |
| opt_savings_1 | 2025-06-23 04:25 | cache | 46.08 | |
| opt_savings_2 | 2025-06-23 04:25 | cache | 20.2 | |
| overborrowing | 2025-06-23 04:26 | cache | 22.8 | |
| short_path | 2025-06-23 04:26 | cache | 3.97 | |
| status | 2025-06-23 04:26 | cache | 2.4 | |
| troubleshooting | 2025-06-23 03:41 | cache | 1.0 | |
| wealth_dynamics | 2025-06-23 04:29 | cache | 155.46 | |
| zreferences | 2025-06-23 03:41 | cache | 1.0 | |
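Run times like those above can be measured by timing a computation and calling `block_until_ready()`, since JAX dispatches work asynchronously. This is a minimal sketch with a hypothetical matrix-product workload; the timings in the table come from the lecture build system, not this snippet:

```python
import time
import jax.numpy as jnp

# Hypothetical workload for illustration: a large matrix product
x = jnp.ones((2000, 2000))

start = time.perf_counter()
# block_until_ready() waits for the asynchronous computation to finish,
# so the elapsed time reflects actual execution rather than dispatch
y = (x @ x).block_until_ready()
elapsed = time.perf_counter() - start
print(f"Run time: {elapsed:.2f} s")
```

Without `block_until_ready()`, the timer would stop as soon as the operation was dispatched, understating the true run time.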

These lectures are built on Linux instances through GitHub Actions with access to a GPU; they run on an NVIDIA T4 card.

You can check the backend used by JAX using:

```python
import jax
# Check if JAX is using the GPU
print(f"JAX backend: {jax.devices()[0].platform}")
```

```
JAX backend: gpu
```

and the hardware we are running on:

```python
!nvidia-smi
```
```
Mon Jun 23 04:26:25 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 575.51.03              Driver Version: 575.51.03      CUDA Version: 12.9     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Tesla T4                       Off |   00000000:00:1E.0 Off |                    0 |
| N/A   36C    P0             32W /   70W |     109MiB /  15360MiB |      2%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            6926      C   ...da3/envs/quantecon/bin/python        106MiB |
+-----------------------------------------------------------------------------------------+
```
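Device details can also be queried from within JAX itself, without shelling out to `nvidia-smi`. A minimal sketch (on the build machines `device_kind` reports "Tesla T4"; on a CPU-only machine the platform is reported as `cpu`):

```python
import jax

# List every device JAX can see, with its platform and hardware name
for d in jax.devices():
    print(d.platform, d.device_kind)
```

This is useful in notebooks for confirming which accelerator a computation will be placed on by default.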