#!/usr/bin/env python
# coding: utf-8

# # Tracing memory consumption

# When setting up complex workflows, it might make sense to take a look at memory consumption. In interactive environments, the user can open the Windows Task Manager to see how busy GPU memory is. That might be cumbersome for scripting. When using an NVIDIA GPU, the following procedure can be used for debugging the memory consumption of a workflow.

# In[1]:


import numpy as np
import pyclesperanto_prototype as cle

cle.select_device("RTX")


# For monitoring memory consumption, one can use [nvidia-smi](https://nvidia.custhelp.com/app/answers/detail/a_id/3751/~/useful-nvidia-smi-queries), a command line tool that can print out how much memory is currently allocated on a given GPU, by any application:

# In[2]:


get_ipython().system('nvidia-smi --query-gpu=memory.used --format=csv')


# If we then run an operation on the GPU and check memory consumption again, we should see an increase.

# In[3]:


image = np.random.random((1024, 1024, 100))
blurred = cle.gaussian_blur(image)


# In[4]:


get_ipython().system('nvidia-smi --query-gpu=memory.used --format=csv')


# The `del` command allows freeing memory. Note: the memory behind the variable may not be freed immediately, depending on how busy the system is at the moment.

# In[5]:


del blurred


# In[6]:


get_ipython().system('nvidia-smi --query-gpu=memory.used --format=csv')


# In[ ]:
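

# For scripted workflows, the same nvidia-smi query can be wrapped in a small helper so that memory usage can be logged before and after individual processing steps. The following is a minimal sketch, not part of the original procedure: the helper name `used_gpu_memory_mb` is made up for illustration, and it assumes `nvidia-smi` is available on the PATH and reports a single value per GPU in MiB.

# In[ ]:


import subprocess

def used_gpu_memory_mb():
    """Return the currently used memory of the first GPU in MiB, as reported by nvidia-smi.

    Hypothetical helper for illustration; wraps the same query used above.
    """
    output = subprocess.check_output(
        ['nvidia-smi', '--query-gpu=memory.used', '--format=csv,noheader,nounits'],
        text=True,
    )
    # nvidia-smi prints one line per GPU; take the first device here.
    return int(output.strip().splitlines()[0])


# In[ ]:


# Example usage: compare used memory before and after a GPU operation.
before = used_gpu_memory_mb()
result = cle.gaussian_blur(image)
print('Approximate GPU memory allocated by the blur:', used_gpu_memory_mb() - before, 'MiB')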