$\newcommand{\re}{\mathbb{R}}$ Numpy provides many useful types and methods for numerical computation, including vectors, matrices and linear algebra.
First import the numpy module.
import numpy
You only need to do this once, usually at the beginning of your code.
To access methods, constants, etc. inside this module, use the dot operator.
print(numpy.pi)
print(numpy.sin(numpy.pi/2))
3.141592653589793 1.0
It is common practice to import numpy like this.
import numpy as np
Note that np acts as an alias or short-hand for numpy.
print(np.pi)
print(np.sin(np.pi/2))
3.141592653589793 1.0
Create an array of zeros
x = np.zeros(5)
print(x)
[0. 0. 0. 0. 0.]
These one-dimensional arrays are of type ndarray
type(x)
numpy.ndarray
Create an array of ones
x = np.ones(5)
print(x)
[1. 1. 1. 1. 1.]
Create and add two arrays
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
z = x + y
print(z)
[5. 7. 9.]
Get the size of an array
print(len(x))
print(x.size)
3 3
Get the shape of an array
print(x.shape)
(3,)
We see that these are arrays of reals by default. We can specify the type
a = np.zeros(5, dtype=int)
print(a)
[0 0 0 0 0]
This behaves the same way as Matlab's linspace function.
Generate 10 uniformly spaced numbers in [1,10]
x = np.linspace(1,10,10)
print(x)
[ 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.]
Note that this includes the end points 1 and 10. The output of linspace is an ndarray
of floats.
type(x)
numpy.ndarray
x = linspace(a,b,n) is such that x is an array of n elements
x[i] = a + i*h, i=0,1,2,...,n-1, h = (b-a)/(n-1)
so that
x[0] = a, x[n-1] = x[-1] = b
Note that we get the last element of the array using x[-1]; similarly, the last-but-one element is given by x[-2], etc.
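As a quick sanity check of this formula, here is a small sketch; the values of a, b and n below are arbitrary.
a, b, n = 1.0, 10.0, 10
h = (b - a)/(n - 1)                         # uniform spacing
x = np.linspace(a, b, n)
print(np.allclose(x, a + np.arange(n)*h))   # True: x[i] = a + i*h
print(x[0], x[-1])                          # end points: 1.0 10.0
print(x[-2])                                # last-but-one element: 9.0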
x = np.arange(1,10)
print(x)
print(type(x))
[1 2 3 4 5 6 7 8 9] <class 'numpy.ndarray'>
For $m < n$, arange(m,n) returns the integers m, m+1, ..., n-1. Note that the last value $n$ is not included.
We can specify a step size
x = np.arange(1,10,2)
print(x)
[1 3 5 7 9]
If the arguments are floats, the returned array also contains floats.
x = np.arange(1.0,10.0)
print(x)
[1. 2. 3. 4. 5. 6. 7. 8. 9.]
x = np.arange(0,1,0.1)
print(x)
[0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9]
Create an array of ones.
x = np.ones(10)
print(x)
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
Maybe we want to set all elements to zero, so we might try this
x = 0.0
print(x)
0.0
x has changed from an array to a scalar! In Python, the type of a variable is determined by what we assign to it. The correct way to set all elements to zero is this.
x = np.ones(10)
print(x)
x[:] = 0.0
print(x)
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
x = np.ones(5)
y = x
x[:] = 0.0
print('x = ', x)
print('y = ', y)
x = [0. 0. 0. 0. 0.] y = [0. 0. 0. 0. 0.]
Why did y change? This happened because when we do
y = x
then y is just a pointer to x, so that changing x changes y also. If we want y to be an independent copy of x, then do this
x = np.ones(5)
y = x.copy() # or y = np.copy(x)
x[:] = 0.0
print('x = ', x)
print('y = ', y)
x = [0. 0. 0. 0. 0.] y = [1. 1. 1. 1. 1.]
2-d arrays can be considered as matrices, though Numpy has a separate matrix class.
Create an array of zeros
A = np.zeros((5,5))
print(A)
[[0. 0. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 0.]]
The size must be specified as a tuple or list.
We can access the elements using two indices, A[i,j]: here i is the row index and j is the column index.
-----------j-------->
| [[0. 0. 0. 0. 0.]
| [0. 0. 0. 0. 0.]
i [0. 0. 0. 0. 0.]
| [0. 0. 0. 0. 0.]
v [0. 0. 0. 0. 0.]]
The top-left element is A[0,0].
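For example, using a separate illustrative array B filled with the values 0 to 24, the corner elements can be accessed like this (a small sketch):
B = np.arange(25).reshape(5,5)   # 0,1,...,24 arranged row by row
print(B[0,0])                    # top-left element     -> 0
print(B[0,-1])                   # top-right element    -> 4
print(B[-1,-1])                  # bottom-right element -> 24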
We can modify the elements of an array.
A[1,1] = 1.0
print(A)
[[0. 0. 0. 0. 0.] [0. 1. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 0.]]
Create an array of ones
A = np.ones((2,3))
print(A)
[[1. 1. 1.] [1. 1. 1.]]
Create identity matrix
A = np.eye(5)
print(A)
[[1. 0. 0. 0. 0.] [0. 1. 0. 0. 0.] [0. 0. 1. 0. 0.] [0. 0. 0. 1. 0.] [0. 0. 0. 0. 1.]]
Create an array by specifying its elements
A = np.array([[1.0, 2.0],
[3.0, 4.0]])
print(A)
[[1. 2.] [3. 4.]]
Create a random array and inspect its shape
m, n = 2, 3
A = np.random.rand(m,n)
print(A)
print('A.size =',A.size)
print('A.shape =',A.shape)
print('A.shape[0] =',A.shape[0],' (number of rows)')
print('A.shape[1] =',A.shape[1],' (number of columns)')
[[0.32734654 0.03457166 0.47404187] [0.94912781 0.98493632 0.34524815]] A.size = 6 A.shape = (2, 3) A.shape[0] = 2 (number of rows) A.shape[1] = 3 (number of columns)
To print the elements of an array, we need two for loops, one over rows and another over the columns.
for i in range(m): # loop over rows
for j in range(n): # loop over columns
print(i,j,A[i,j])
0 0 0.3273465439188714 0 1 0.034571657752678786 0 2 0.47404187477678794 1 0 0.9491278092639442 1 1 0.9849363169144089 1 2 0.345248146920101
To transpose a 2-d array
A = np.array([[1,2],[3,4]])
print("A ="); print(A)
B = A.T
print("B ="); print(B)
A = [[1 2] [3 4]] B = [[1 3] [2 4]]
Create
$$ A = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 4 \end{bmatrix} $$
a = np.array([1,2,3,4]) # diagonal elements
A = np.diag(a)
print(A)
[[1 0 0 0] [0 2 0 0] [0 0 3 0] [0 0 0 4]]
Create an $n \times n$ tri-diagonal matrix
a = np.array([1,2,3]) # sub-diagonal : length = n-1
b = np.array([4,5,6,7]) # main diagonal : length = n
c = np.array([-1,-2,-3]) # super-diagonal: length = n-1
A = np.diag(a,-1) + np.diag(b,0) + np.diag(c,+1)
print(A)
[[ 4 -1 0 0] [ 1 5 -2 0] [ 0 2 6 -3] [ 0 0 3 7]]
Sometimes we have an existing array and we want to create a new array of the same shape and type, but without initializing its elements.
A = np.random.rand(2,3)
B = np.empty_like(A)
print(A.shape,B.shape)
(2, 3) (2, 3)
C = np.zeros_like(A)
print(C)
[[0. 0. 0.] [0. 0. 0.]]
D = np.ones_like(A)
print(D)
[[1. 1. 1.] [1. 1. 1.]]
x = np.array([1,2,3])
print(x.shape, x)
y = x.T
print(y.shape, y)
(3,) [1 2 3] (3,) [1 2 3]
Note that transposing a 1-d array has no effect. To get a true row vector, create a 2-d array with a single row.
x = np.array([[1,2,3]]) # row vector
print('x.shape =',x.shape)
print(x)
y = x.T
print('y.shape =',y.shape)
print(y)
x.shape = (1, 3) [[1 2 3]] y.shape = (3, 1) [[1] [2] [3]]
Create a column vector
x = np.array([[1],
[2],
[3]]) # column vector
print('x.shape =',x.shape)
print(x)
y = x.T
print('y.shape =',y.shape)
print(y)
x.shape = (3, 1) [[1] [2] [3]] y.shape = (1, 3) [[1 2 3]]
Functions like zeros and ones can return a row/column vector rather than a 1-d array.
x = np.ones((3,1)) # column vector
print('x ='); print(x)
y = np.ones((1,3)) # row vector
print('y ='); print(y)
x = [[1.] [1.] [1.]] y = [[1. 1. 1.]]
Row/column vectors are like row/column matrices. We have to use two indices to access the elements of such row/column vectors.
Here we access elements of a column vector.
print(x[0][0]) # or x[0,0]
print(x[1][0])
print(x[2][0])
1.0 1.0 1.0
x[0] gives an array for the first row.
print('First row of x =',x[0], type(x[0]))
print('Element in first row =',x[0][0], type(x[0][0]))
First row of x = [1.] <class 'numpy.ndarray'> Element in first row = 1.0 <class 'numpy.float64'>
Array of 10 elements
x[0] | x[1] | x[2] | x[3] | x[4] | x[5] | x[6] | x[7] | x[8] | x[9] |
---|---|---|---|---|---|---|---|---|---|
0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
x[-10] | x[-9] | x[-8] | x[-7] | x[-6] | x[-5] | x[-4] | x[-3] | x[-2] | x[-1] |
x = np.linspace(0,9,10)
print(x)
[0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
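From the table above, x[i] and x[i-10] refer to the same element; a quick check:
print(x[3], x[-7])     # same element of the array
print(x[3] == x[-7])   # True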
Get elements x[2],...,x[5]
print(x[2:6])
[2. 3. 4. 5.]
Hence x[m:n] gives the elements x[m],x[m+1],...,x[n-1].
Get elements from x[5] up to the last element
print(x[5:])
[5. 6. 7. 8. 9.]
Get the last element
print(x[-1])
9.0
Get elements from x[5] up to the last-but-one element
print(x[5:-1])
[5. 6. 7. 8.]
Access every alternate element of the array, starting from the beginning x[0]
print(x[0::2])
[0. 2. 4. 6. 8.]
and starting from x[1]
print(x[1::2])
[1. 3. 5. 7. 9.]
These operations also work on multi-dimensional arrays.
A = np.random.rand(3,4)
print(A)
[[0.6215893 0.29638408 0.27486992 0.27735668] [0.92988958 0.21306569 0.67360278 0.45209752] [0.37957801 0.17223057 0.63348467 0.57251602]]
print(A[0,:]) # 0'th row
[0.6215893 0.29638408 0.27486992 0.27735668]
print(A[:,0]) # 0'th column
[0.6215893 0.92988958 0.37957801]
print(A[0:2,0:3]) # print submatrix
[[0.6215893 0.29638408 0.27486992] [0.92988958 0.21306569 0.67360278]]
A[0,:] = 0.0 # zero out zeroth row
print(A)
[[0. 0. 0. 0. ] [0.92988958 0.21306569 0.67360278 0.45209752] [0.37957801 0.17223057 0.63348467 0.57251602]]
Some arithmetic operations act element-wise
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
print(x*y) # multiply
[ 4. 10. 18.]
print(x/y) # divide
[0.25 0.4 0.5 ]
print(y**x) # exponentiation
[ 4. 25. 216.]
A = np.ones((3,3))
print(A*x)
[[1. 2. 3.] [1. 2. 3.] [1. 2. 3.]]
If A and x are arrays, then A*x does not give the matrix-vector product; as seen above, it multiplies element-wise, broadcasting x across the rows of A. For the matrix-vector product, use dot
print(A.dot(x))
[6. 6. 6.]
or equivalently
print(np.dot(A,x))
[6. 6. 6.]
In Python 3, we can use the @ operator for matrix operations
print(A@x)
[6. 6. 6.]
We can of course do matrix-matrix products using dot or @
A = np.ones((3,3))
B = 2*A
print('A =\n',A)
print('B =\n',B)
print('A*B =\n',A@B)
A = [[1. 1. 1.] [1. 1. 1.] [1. 1. 1.]] B = [[2. 2. 2.] [2. 2. 2.] [2. 2. 2.]] A*B = [[6. 6. 6.] [6. 6. 6.] [6. 6. 6.]]
On vectors, @ performs the dot product, i.e., x@y = dot(x,y)
x = np.array([1,1,1])
y = np.array([2,3,4])
print(x@y)
9
x = np.array([-3,-2,-1,0,1,2,3])
print('min = ',np.min(x))
print('max = ',np.max(x))
print('abs min = ',np.abs(x).min()) # np.min(np.abs(x))
print('abs max = ',np.abs(x).max()) # np.max(np.abs(x))
print('sum = ',np.sum(x))
print('dot = ',np.dot(x,x))
print('dot = ',np.sum(x*x))
min = -3 max = 3 abs min = 0 abs max = 3 sum = 0 dot = 28 dot = 28
Note that np.sum(x*x) gives the same value as np.dot(x,x), i.e., the sum of squares of the elements of x.
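As a quick check, the two expressions agree, and their square root is the L2 norm computed below:
print(np.sum(x*x) == np.dot(x,x))   # True
print(np.sqrt(np.sum(x*x)))         # this equals the L2 norm of x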
We can compute vector norms using numpy.linalg.norm (also see scipy.linalg.norm)
from numpy.linalg import norm
print(norm(x)) # L2 norm
print(norm(x,2)) # L2 norm
print(norm(x,1)) # L1 norm
print(norm(x,np.inf)) # Linf norm
5.291502622129181 5.291502622129181 12.0 3.0
or import the linalg module and use the methods inside it.
import numpy.linalg as la
print(la.norm(x)) # L2 norm
print(la.norm(x,2)) # L2 norm
print(la.norm(x,1)) # L1 norm
print(la.norm(x,np.inf)) # Linf norm
5.291502622129181 5.291502622129181 12.0 3.0
These methods work on 2-d arrays treated as matrices.
A = 2*np.random.rand(3,3) - 1
print(A)
[[-0.07208806 -0.62470188 0.10199786] [ 0.00363436 0.02069819 0.28161047] [ 0.11432236 -0.21794329 0.68529942]]
print('min = ',np.min(A))
print('max = ',np.max(A))
print('abs min = ',np.abs(A).min()) # np.min(np.abs(x))
print('abs max = ',np.abs(A).max()) # np.max(np.abs(x))
print('sum = ',np.sum(A))
print('Frob norm = ',la.norm(A)) # Frobenius norm
print('L1 norm = ',la.norm(A,1)) # L1 norm
print('L2 norm = ',la.norm(A,2)) # L2 norm
print('Linf norm = ',la.norm(A,np.inf)) # Linf norm
min = -0.6247018811193785 max = 0.6852994164180519 abs min = 0.003634363750697478 abs max = 0.6852994164180519 sum = 0.2928294310434101 Frob norm = 1.007870497260461 L1 norm = 1.0689077424953188 L2 norm = 0.8471583418152143 Linf norm = 1.0175650633947781
Let
$$ x \in \re^n, \qquad A \in \re^{m \times n}, \qquad y = A x \in \re^m $$
We can compute this element-wise
$$ y_i = \sum_{j=0}^{n-1} A_{ij} x_j, \qquad 0 \le i \le m-1 $$
m, n = 5, 10
x = np.random.rand(n)
A = np.random.rand(m,n)
y = np.zeros(m) # Important to set to zero
assert A.shape[1] == len(x) and len(y) == A.shape[0]
for i in range(m):
for j in range(n):
y[i] += A[i,j]*x[j]
We can verify that our result is correct by computing $\| y - A x\|_\infty$
print(np.linalg.norm(y-A@x, np.inf))
4.440892098500626e-16
We can also compute the product column-wise. Let
$$ A_{:,j} = \textrm{j'th column of } A \in \re^m $$
Then the matrix-vector product can also be written as
$$ y = \sum_{j=0}^{n-1} A_{:,j} x_j $$
Warning: accessing columns like this may be inefficient in memory, since by default numpy arrays use row-major (C) ordering, see below.
y[:] = 0.0
for j in range(n):
y += A[:,j]*x[j]
# Now check the result
print(np.linalg.norm(y-A@x, np.inf))
4.440892098500626e-16
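To see the effect of memory layout on such column-wise access, here is a rough timing sketch; the matrix size N and the variable names are arbitrary, and the actual numbers will vary from machine to machine.
import time

N = 2000
M = np.random.rand(N, N)      # row-major (C) ordered by default

t0 = time.perf_counter()
s = 0.0
for i in range(N):
    s += M[i,:].sum()         # row access: contiguous in memory
t1 = time.perf_counter()
for j in range(N):
    s += M[:,j].sum()         # column access: strided in memory
t2 = time.perf_counter()
print('row-wise time    =', t1 - t0)
print('column-wise time =', t2 - t1)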
If $A \in \re^{m\times n}$ and $B \in \re^{n \times p}$ then $C = AB \in \re^{m \times p}$ is given by
$$ C_{ij} = \sum_{k=0}^{n-1} A_{ik} B_{kj}, \qquad 0 \le i \le m-1, \quad 0 \le j \le p-1 $$
m,n,p = 10,8,6
A = np.random.rand(m,n)
B = np.random.rand(n,p)
C = np.zeros((m,p))
assert A.shape[1] == B.shape[0]
for i in range(m):
for j in range(p):
for k in range(n):
C[i,j] += A[i,k]*B[k,j]
Let us verify the result is correct by computing
$$ |C-AB|_\infty = \max_{i,j} |C_{ij} - (AB)_{ij}| $$
using the @ operator
print(np.abs(C - A@B).max())
4.440892098500626e-16
Another view-point is the following
$$ C_{ij} = (\textrm{i'th row of } A) \cdot (\textrm{j'th column of } B) $$
for i in range(m):
for j in range(p):
C[i,j] = A[i,:] @ B[:,j]
# Now check the result
print(np.abs(C - A@B).max())
0.0
Given $a \in \re^m$ and $b \in \re^n$, both considered as column vectors, their outer product
$$ A = a b^\top = \begin{bmatrix} | \\ a \\ | \end{bmatrix} \begin{bmatrix} - b^\top - \end{bmatrix} = \begin{bmatrix} | & | & & | \\ a b_0 & a b_1 & \ldots & a b_{n-1} \\ | & | & & | \end{bmatrix} \in \re^{m \times n} $$
is an $m \times n$ matrix with elements
$$ A_{ij} = a_i b_j, \qquad 0 \le i \le m-1, \quad 0 \le j \le n-1 $$
a = np.array([1,2,3])
b = np.array([1,2])
A = np.outer(a,b)
print('A.shape = ', A.shape)
print('A = \n', A)
A.shape = (3, 2) A = [[1 2] [2 4] [3 6]]
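The same result can be obtained with broadcasting, reshaping a to a column and b to a row; a small sketch using the arrays defined above (A2 is just an illustrative name):
A2 = a[:,None] * b[None,:]                 # column times row
print(np.array_equal(A2, np.outer(a,b)))   # True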
Numpy provides standard functions like sin, cos, log, etc., which can act on arrays in an element-by-element manner. This is not the case for functions in the math module, which can only take scalar arguments.
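For example, passing an array to math.sin raises an error (a small sketch; the exact error message may differ between Python versions):
import math
try:
    math.sin(np.array([0.0, 1.0, 2.0]))
except TypeError as e:
    print('TypeError:', e)    # math.sin accepts only scalars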
x = np.linspace(0.0, 2.0*np.pi, 5)
y = np.sin(x)
print('x =',x)
print('y =',y)
x = [0. 1.57079633 3.14159265 4.71238898 6.28318531] y = [ 0.0000000e+00 1.0000000e+00 1.2246468e-16 -1.0000000e+00 -2.4492936e-16]
A = np.random.rand(5,5) # 5x5 matrix
Y = np.sin(A)
print('A =\n',A)
print('Y =\n',Y)
A = [[2.01834302e-01 8.21990204e-01 4.00140159e-01 3.03116076e-01 4.89941443e-01] [2.82851504e-02 4.51802138e-01 8.28266760e-04 9.53618108e-02 9.03854344e-01] [3.99784659e-01 4.49333900e-01 6.13616422e-01 5.72754136e-01 9.78728646e-01] [3.67672750e-01 4.89654761e-01 7.41054904e-01 9.38894701e-01 7.21882457e-01] [3.08184583e-01 8.60532390e-01 2.45521117e-01 7.73480119e-01 8.71164928e-01]] Y = [[2.00466734e-01 7.32502140e-01 3.89547434e-01 2.98495668e-01 4.70574221e-01] [2.82813790e-02 4.36587556e-01 8.28266665e-04 9.52173418e-02 7.85716984e-01] [3.89219991e-01 4.34365650e-01 5.75827901e-01 5.41948709e-01 8.29788526e-01] [3.59444696e-01 4.70321244e-01 6.75066550e-01 8.06905715e-01 6.60798745e-01] [3.03329252e-01 7.58189806e-01 2.43061847e-01 6.98629433e-01 7.65079595e-01]]
By default, the ordering is the same as in C/C++: the inner-most (last) index is the fastest-running one. For example, if we have an array of size (2,3), the elements are stored in memory in this order
a[0,0], a[0,1], a[0,2], a[1,0], a[1,1], a[1,2]
i.e.,
a[0,0] --> a[0,1] --> a[0,2]
|
|----------<----------|
|
a[1,0] --> a[1,1] --> a[1,2]
a = np.array([[1,2,3], [4,5,6]])
print('Row contiguous =', a[0,:].data.contiguous)
print('Col contiguous =', a[:,0].data.contiguous)
a.flags
Row contiguous = True Col contiguous = False
C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False
To get Fortran-style ordering, where the outer-most (first) index is the fastest-running one, which corresponds to the following layout in memory
a[0,0], a[1,0], a[0,1], a[1,1], a[0,2], a[1,2]
i.e.,
a[0,0] -- a[0,1] -- a[0,2]
| | | | |
v ^ v ^ v
| | | | |
a[1,0] -- a[1,1] -- a[1,2]
create the array like this
b = np.array([[1,2,3], [4,5,6]], order='F')
print('Row contiguous =', b[0,:].data.contiguous)
print('Col contiguous =', b[:,0].data.contiguous)
b.flags
Row contiguous = False Col contiguous = True
C_CONTIGUOUS : False F_CONTIGUOUS : True OWNDATA : True WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False
x = np.linspace(0,3,4)
y = np.linspace(0,2,3)
X, Y = np.meshgrid(x,y)
print('len(x) = ',len(x))
print('len(y) = ',len(y))
print('shape X= ',X.shape)
print('shape Y= ',Y.shape)
print('x = ', x)
print('y = ', y)
print('X = '); print(X)
print('Y = '); print(Y)
len(x) = 4 len(y) = 3 shape X= (3, 4) shape Y= (3, 4) x = [0. 1. 2. 3.] y = [0. 1. 2.] X = [[0. 1. 2. 3.] [0. 1. 2. 3.] [0. 1. 2. 3.]] Y = [[0. 0. 0. 0.] [1. 1. 1. 1.] [2. 2. 2. 2.]]
If $x \in \re^m$ and $y \in \re^n$, then the output of meshgrid(x,y) is arranged like this
$$ X[i,j] = x[j], \qquad Y[i,j] = y[i], \qquad 0 \le i \le n-1, \quad 0 \le j \le m-1 $$
If we want the following arrangement
$$ X[i,j] = x[i], \qquad Y[i,j] = y[j], \qquad 0 \le i \le m-1, \quad 0 \le j \le n-1 $$
we have to do the following
X, Y = np.meshgrid(x,y,indexing='ij')
print('len(x) = ',len(x))
print('len(y) = ',len(y))
print('shape X= ',X.shape)
print('shape Y= ',Y.shape)
len(x) = 4 len(y) = 3 shape X= (4, 3) shape Y= (4, 3)
The second form is useful when working with finite difference schemes on Cartesian grids, where we want the i index running along the x-axis and the j index running along the y-axis.
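For example, with indexing='ij' a grid function satisfies F[i,j] = f(x[i], y[j]); a small sketch using the X, Y created above and an arbitrary function chosen for illustration:
F = np.sin(X) * np.cos(Y)                      # F[i,j] = sin(x[i]) * cos(y[j])
print('F.shape =', F.shape)                    # (4, 3)
print(F[2,1] == np.sin(x[2]) * np.cos(y[1]))   # True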
A = np.array([[1,2,3],
[4,5,6]])
print(A)
[[1 2 3] [4 5 6]]
B = np.reshape(A,2*3,order='C')
print(B)
[1 2 3 4 5 6]
A1 = np.reshape(B,(2,3),order='C')
print(A1)
[[1 2 3] [4 5 6]]
C = np.reshape(A,2*3,order='F')
print(C)
[1 4 2 5 3 6]
A2 = np.reshape(C,(2,3),order='F')
print(A2)
[[1 2 3] [4 5 6]]
Write an array to file
A = np.random.rand(3,3)
print(A)
np.savetxt('A.txt',A)
[[0.3626125 0.77641936 0.4039891 ] [0.93730607 0.13083836 0.73988766] [0.08362296 0.00978123 0.43667311]]
Check the contents of the file in your terminal
cat A.txt
We can also print it from within the notebook
!cat A.txt
3.626125032358201716e-01 7.764193611842552523e-01 4.039890993129562347e-01 9.373060676274622693e-01 1.308383594397504179e-01 7.398876649050106780e-01 8.362295977886535780e-02 9.781232310500209692e-03 4.366731147097449028e-01
Write two 1-d arrays as columns into a file
x = np.array([1.0,2.0,3.0,4.0])
y = np.array([2.0,4.0,6.0,8.0])
np.savetxt('data.txt',np.column_stack([x,y]))
!cat data.txt
1.000000000000000000e+00 2.000000000000000000e+00 2.000000000000000000e+00 4.000000000000000000e+00 3.000000000000000000e+00 6.000000000000000000e+00 4.000000000000000000e+00 8.000000000000000000e+00
We can control the number of decimals saved, and use scientific notation
np.savetxt('data.txt',np.column_stack([x,y]),fmt='%8.4e')
!cat data.txt
1.0000e+00 2.0000e+00 2.0000e+00 4.0000e+00 3.0000e+00 6.0000e+00 4.0000e+00 8.0000e+00
We can read an existing file like this
d = np.loadtxt('data.txt')
print('Shape d =', d.shape)
x1 = d[:,0]
y1 = d[:,1]
print('x =',x1)
print('y =',y1)
Shape d = (4, 2) x = [1. 2. 3. 4.] y = [2. 4. 6. 8.]
Loops are slow in Python and it is good to avoid them. The following example shows how to time code execution.
Create some random matrices
m = 1000
A = np.random.random((m,m))
B = np.random.random((m,m))
First we add two matrices with an explicit for loop.
%%timeit -r10 -n10
C = np.empty_like(A)
for i in range(m):
for j in range(m):
C[i,j] = A[i,j] + B[i,j]
253 ms ± 1.27 ms per loop (mean ± std. dev. of 10 runs, 10 loops each)
Now we use the + operator.
%%timeit -r10 -n10
C = A + B
904 μs ± 68.6 μs per loop (mean ± std. dev. of 10 runs, 10 loops each)
The loop version is slower by a factor of a few hundred.