Sparse Matrix Types
Block Sparse Row matrix
class scipy.sparse.bsr_matrix(arg1, shape=None, dtype=None, copy=False, blocksize=None)
The Block Compressed Row (BSR) format is very similar to the Compressed Sparse Row (CSR) format. BSR is appropriate for sparse matrices with dense submatrices, like the example below. Block matrices often arise in vector-valued finite element discretizations. In such cases, BSR is considerably more efficient than CSR and CSC for many sparse arithmetic operations.
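For instance, a small sketch of building a BSR matrix from 2x2 dense blocks (the block values and sparsity pattern below are arbitrary, chosen only to make the layout visible):
>>> import numpy as np
>>> from scipy.sparse import bsr_matrix
>>> indptr = np.array([0, 2, 3, 6])                    # block row pointers
>>> indices = np.array([0, 2, 2, 0, 1, 2])             # block column indices
>>> data = np.array([1, 2, 3, 4, 5, 6]).repeat(4).reshape(6, 2, 2)   # six 2x2 blocks
>>> A = bsr_matrix((data, indices, indptr), shape=(6, 6))
>>> A.toarray()
array([[1, 1, 0, 0, 2, 2],
       [1, 1, 0, 0, 2, 2],
       [0, 0, 0, 0, 3, 3],
       [0, 0, 0, 0, 3, 3],
       [4, 4, 5, 5, 6, 6],
       [4, 4, 5, 5, 6, 6]])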
A sparse matrix in COOrdinate format
class scipy.sparse.coo_matrix(arg1, shape=None, dtype=None, copy=False)
Also known as the ‘ijv’ or ‘triplet’ format.
Advantages of the COO format
•facilitates fast conversion among sparse formats
•permits duplicate entries (see example)
•very fast conversion to and from CSR/CSC formats
Disadvantages of the COO format
•does not directly support:
–arithmetic operations
–slicing
Intended Usage
•COO is a fast format for constructing sparse matrices
•Once a matrix has been constructed, convert to CSR or CSC format for fast arithmetic and matrix vector operations
•By default, when converting to CSR or CSC format, duplicate (i,j) entries are summed together. This facilitates efficient construction of finite element matrices and the like (see the example below).
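As an illustration (the indices and values below are arbitrary), a sketch of constructing a matrix in COO format and letting duplicate (i,j) entries be summed on conversion to CSR:
>>> import numpy as np
>>> from scipy.sparse import coo_matrix
>>> row = np.array([0, 0, 1, 3, 1, 0, 0])
>>> col = np.array([0, 2, 1, 3, 1, 0, 0])
>>> data = np.array([1, 1, 1, 1, 1, 1, 1])
>>> A = coo_matrix((data, (row, col)), shape=(4, 4)).tocsr()
>>> A.toarray()                  # the three (0,0) entries and two (1,1) entries were summed
array([[3, 0, 1, 0],
       [0, 2, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 1]])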
Compressed Sparse Column matrix
class scipy.sparse.csc_matrix(arg1, shape=None, dtype=None, copy=False)
Advantages of the CSC format
•efficient arithmetic operations CSC + CSC, CSC * CSC, etc.
•efficient column slicing
•fast matrix vector products (CSR, BSR may be faster)
Disadvantages of the CSC format
•slow row slicing operations (consider CSR)
•changes to the sparsity structure are expensive (consider LIL or DOK)
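A minimal sketch (arbitrary data) of the CSC constructor and its cheap column slicing:
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> indptr = np.array([0, 2, 3, 6])         # column pointers
>>> indices = np.array([0, 2, 2, 0, 1, 2])  # row indices
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> A = csc_matrix((data, indices, indptr), shape=(3, 3))
>>> A.toarray()
array([[1, 0, 4],
       [0, 0, 5],
       [2, 3, 6]])
>>> A[:, 1].toarray()                       # column slicing is efficient in CSC
array([[0],
       [0],
       [3]])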
Compressed Sparse Row matrix
class scipy.sparse.csr_matrix(arg1, shape=None, dtype=None, copy=False)
Advantages of the CSR format
•efficient arithmetic operations CSR + CSR, CSR * CSR, etc.
•efficient row slicing
•fast matrix vector products
Disadvantages of the CSR format
•slow column slicing operations (consider CSC)
•changes to the sparsity structure are expensive (consider LIL or DOK)
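A minimal sketch (arbitrary data) of the CSR constructor, row slicing, and a matrix vector product:
>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> indptr = np.array([0, 2, 3, 6])         # row pointers
>>> indices = np.array([0, 2, 2, 0, 1, 2])  # column indices
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> A = csr_matrix((data, indices, indptr), shape=(3, 3))
>>> A.toarray()
array([[1, 0, 2],
       [0, 0, 3],
       [4, 5, 6]])
>>> A[1].toarray()                          # row slicing is efficient in CSR
array([[0, 0, 3]])
>>> A.dot(np.array([1, 0, -1]))             # fast matrix vector product
array([-1, -3, -2])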
Sparse matrix with DIAgonal storage
class scipy.sparse.dia_matrix(arg1, shape=None, dtype=None, copy=False)
Dictionary Of Keys based sparse matrix
class scipy.sparse.dok_matrix(arg1, shape=None, dtype=None, copy=False)
This is an efficient structure for constructing sparse matrices incrementally.
Allows for efficient O(1) access of individual elements. Duplicates are not allowed. Can be efficiently converted to a coo_matrix once constructed.
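A short sketch (arbitrary values) of incremental construction with DOK followed by conversion for arithmetic:
>>> import numpy as np
>>> from scipy.sparse import dok_matrix
>>> S = dok_matrix((5, 5), dtype=np.float64)
>>> for i in range(5):
...     for j in range(5):
...         S[i, j] = i + j                 # O(1) per-element assignment
>>> A = S.tocsr()                           # convert once construction is finished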
Row-based linked list sparse matrix
class scipy.sparse.lil_matrix(arg1, shape=None, dtype=None, copy=False)
This is an efficient structure for constructing sparse matrices incrementally.
Advantages of the LIL format
•supports flexible slicing
•changes to the matrix sparsity structure are efficient
Disadvantages of the LIL format
•arithmetic operations LIL + LIL are slow (consider CSR or CSC)
•slow column slicing (consider CSC)
•slow matrix vector products (consider CSR or CSC)
Intended Usage
•LIL is a convenient format for constructing sparse matrices
•once a matrix has been constructed, convert to CSR or CSC format for fast arithmetic and matrix vector operations
•consider using the COO format when constructing large matrices
Data Structure
•An array (self.rows) of rows, each of which is a sorted list of column indices of nonzero elements.
•The corresponding nonzero values are stored in similar fashion in self.data.
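A short sketch (arbitrary values) of the intended LIL workflow: build incrementally, then convert:
>>> import numpy as np
>>> from scipy.sparse import lil_matrix
>>> A = lil_matrix((1000, 1000))
>>> A[0, :100] = np.arange(100)             # flexible slice assignment
>>> A[1, 100:200] = A[0, :100]
>>> A.setdiag(np.ones(1000))
>>> A = A.tocsr()                           # convert before arithmetic / matrix vector products
>>> y = A.dot(np.ones(1000))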
Building sparse matrices
eye(m, n[, k, dtype, format])
eye(m, n) returns a sparse (m x n) matrix where the k-th diagonal is all ones and everything else is zero
identity(n[, dtype, format])
Identity matrix in sparse format
kron(A, B[, format])
Kronecker product of sparse matrices A and B
kronsum(A, B[, format])
Kronecker sum of sparse matrices A and B
diags(diagonals, offsets[, shape, format, dtype])
Construct a sparse matrix from diagonals.
spdiags(data, diags, m, n[, format])
Return a sparse matrix from diagonals.
block_diag(mats[, format, dtype])
Build a block diagonal sparse matrix from provided matrices.
tril(A[, k, format])
Return the lower triangular portion of a matrix in sparse format
triu(A[, k, format])
Return the upper triangular portion of a matrix in sparse format
bmat(blocks[, format, dtype])
Build a sparse matrix from sparse sub-blocks
hstack(blocks[, format, dtype])
Stack sparse matrices horizontally (column wise)
vstack(blocks[, format, dtype])
Stack sparse matrices vertically (row wise)
rand(m, n[, density, format, dtype])
Generate a sparse matrix of the given shape and density with uniformly distributed values.
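For example, a few of these construction helpers combined (the values are illustrative):
>>> import numpy as np
>>> from scipy import sparse
>>> D = sparse.diags([1, -2, 1], [-1, 0, 1], shape=(4, 4))   # tridiagonal matrix
>>> D.toarray()
array([[-2.,  1.,  0.,  0.],
       [ 1., -2.,  1.,  0.],
       [ 0.,  1., -2.,  1.],
       [ 0.,  0.,  1., -2.]])
>>> I = sparse.identity(4)
>>> B = sparse.bmat([[D, I], [None, D]])                     # assemble from sparse sub-blocks
>>> B.shape
(8, 8)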
Compressed Sparse Graph Routines (scipy.sparse.csgraph)
connected_components(csgraph[, directed, ...])
Analyze the connected components of a sparse graph
laplacian(csgraph[, normed, return_diag])
Return the Laplacian matrix of a directed graph. For non-symmetric graphs the out-degree is used in the computation.
shortest_path(csgraph[, method, directed, ...])
Perform a shortest-path graph search on a positive directed or undirected graph.
dijkstra(csgraph[, directed, indices, ...])
Dijkstra algorithm using Fibonacci Heaps
floyd_warshall(csgraph[, directed, ...])
Compute the shortest path lengths using the Floyd-Warshall algorithm
bellman_ford(csgraph[, directed, indices, ...])
Compute the shortest path lengths using the Bellman-Ford algorithm.
johnson(csgraph[, directed, indices, ...])
Compute the shortest path lengths using Johnson’s algorithm.
breadth_first_order(csgraph, i_start[, ...])
Return a breadth-first ordering starting with specified node.
depth_first_order(csgraph, i_start[, ...])
Return a depth-first ordering starting with specified node.
breadth_first_tree(csgraph, i_start[, directed])
Return the tree generated by a breadth-first search
depth_first_tree(csgraph, i_start[, directed])
Return a tree generated by a depth-first search.
minimum_spanning_tree(csgraph[, overwrite])
Return a minimum spanning tree of an undirected graph
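A small sketch (an arbitrary 4-node weighted graph) of applying these routines to a CSR adjacency matrix:
>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> from scipy.sparse.csgraph import shortest_path, connected_components
>>> G = csr_matrix([[0, 1, 2, 0],
...                 [0, 0, 0, 1],
...                 [0, 0, 0, 3],
...                 [0, 0, 0, 0]])
>>> dist = shortest_path(G, method='D', directed=True)       # Dijkstra
>>> dist[0]                                                  # distances from node 0
array([ 0.,  1.,  2.,  2.])
>>> n_components, labels = connected_components(G, directed=False)
>>> n_components
1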
Sparse linear algebra (scipy.sparse.linalg)
class scipy.sparse.linalg.LinearOperator(shape, matvec, rmatvec=None, matmat=None, dtype=None)
Common interface for performing matrix vector products
Many iterative methods (e.g. cg, gmres) do not need to know the individual entries of a matrix to solve a linear system A*x=b. Such solvers only require the computation of matrix vector products A*v, where v is a dense vector. This class serves as an abstract interface between iterative solvers and matrix-like objects.
scipy.sparse.linalg.aslinearoperator(A)
Return A as a LinearOperator.
‘A’ may be any of the following types:
•ndarray
•matrix
•sparse matrix (e.g. csr_matrix, lil_matrix, etc.)
•LinearOperator
•An object with .shape and .matvec attributes
matmat(X)
Matrix-matrix multiplication
matvec(x)
Matrix-vector multiplication
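A minimal sketch of a matrix-free operator, the diagonal matrix diag(2, 3) represented only by its matvec, which can then be passed to the iterative solvers below:
>>> import numpy as np
>>> from scipy.sparse.linalg import LinearOperator
>>> def mv(v):
...     return np.array([2 * v[0], 3 * v[1]])    # action of diag(2, 3) on a vector
>>> A = LinearOperator((2, 2), matvec=mv)
>>> A.matvec(np.ones(2))
array([ 2.,  3.])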
Solving linear problems
spsolve(A, b[, permc_spec, use_umfpack])
Solve the sparse linear system Ax=b
factorized(A)
Return a function for solving a sparse linear system, with A pre-factorized.
bicg(A, b[, x0, tol, maxiter, xtype, M, ...])
Use BIConjugate Gradient iteration to solve A x = b
bicgstab(A, b[, x0, tol, maxiter, xtype, M, ...])
Use BIConjugate Gradient STABilized iteration to solve A x = b
cg(A, b[, x0, tol, maxiter, xtype, M, callback])
Use Conjugate Gradient iteration to solve A x = b
cgs(A, b[, x0, tol, maxiter, xtype, M, callback])
Use Conjugate Gradient Squared iteration to solve A x = b
gmres(A, b[, x0, tol, restart, maxiter, ...])
Use Generalized Minimal RESidual iteration to solve A x = b.
lgmres(A, b[, x0, tol, maxiter, M, ...])
Solve a matrix equation using the LGMRES algorithm.
minres(A, b[, x0, shift, tol, maxiter, ...])
Use MINimum RESidual iteration to solve Ax=b
qmr(A, b[, x0, tol, maxiter, xtype, M1, M2, ...])
Use Quasi-Minimal Residual iteration to solve A x = b
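As a sketch (arbitrary small systems), a direct solve with spsolve and an iterative solve with cg:
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import spsolve, cg
>>> A = csc_matrix([[3., 2., 0.], [1., -1., 0.], [0., 5., 1.]])
>>> b = np.array([2., 4., -1.])
>>> x = spsolve(A, b)                        # direct sparse solve
>>> np.allclose(A.dot(x), b)
True
>>> M = csc_matrix([[4., 1.], [1., 3.]])     # symmetric positive definite
>>> y, info = cg(M, np.array([1., 2.]))      # info == 0 indicates convergence
>>> info
0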
Iterative methods for least-squares problems:
lsqr(A, b[, damp, atol, btol, conlim, ...])
Find the least-squares solution to a large, sparse, linear system of equations.
lsmr(A, b[, damp, atol, btol, conlim, ...])
Iterative solver for least-squares problems.
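A sketch of lsqr on a small overdetermined system (chosen so the least-squares solution is approximately [1, -1]):
>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> from scipy.sparse.linalg import lsqr
>>> A = csr_matrix([[1., 0.], [1., 1.], [0., 1.]])   # 3 equations, 2 unknowns
>>> b = np.array([1., 0., -1.])
>>> x, istop = lsqr(A, b)[:2]        # x is the least-squares solution, istop the stop reason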
Eigenvalue problems:
eigs(A[, k, M, sigma, which, v0, ncv, ...])
Find k eigenvalues and eigenvectors of the square matrix A.
eigsh(A[, k, M, sigma, which, v0, ncv, ...])
Find k eigenvalues and eigenvectors of the real symmetric square matrix or complex Hermitian matrix A.
lobpcg(A, X[, B, M, Y, tol, maxiter, ...])
Solve symmetric partial eigenproblems with optional preconditioning
>>> id = np.identity(13)
>>> vals, vecs = sp.sparse.linalg.eigs(id, k=6)
>>> vals
array([ 1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j])
>>> vecs.shape
(13, 6)
Singular value problems:
svds(A[, k, ncv, tol])
Compute the largest k singular values/vectors for a sparse matrix.
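A sketch (a small diagonal test matrix) of computing a partial SVD:
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import svds
>>> A = csc_matrix(np.diag([1., 2., 3., 4., 5.]))
>>> u, s, vt = svds(A, k=2)          # the two largest singular values (here close to 4 and 5)
>>> s.shape
(2,)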
Complete or incomplete LU factorizations:
splu(A[, permc_spec, diag_pivot_thresh, ...])
Compute the LU decomposition of a sparse, square matrix.
spilu(A[, drop_tol, fill_factor, drop_rule, ...])
Compute an incomplete LU decomposition for a sparse, square matrix A.
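A sketch of factorizing once with splu and reusing the factorization to solve (arbitrary small matrix):
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import splu
>>> A = csc_matrix([[1., 0., 0.], [5., 0., 2.], [0., -1., 3.]])
>>> lu = splu(A)                     # factorize once
>>> x = lu.solve(np.array([1., 2., 3.]))
>>> np.allclose(A.dot(x), [1., 2., 3.])
True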