matrix structure and algorithm complexity solving linear ...

Transcription of: matrix structure and algorithm complexity, solving linear ...

Convex Optimization (Boyd & Vandenberghe)
9. Numerical linear algebra background

• matrix structure and algorithm complexity
• solving linear equations with factored matrices
• LU, Cholesky, LDL^T factorization
• block elimination and the matrix inversion lemma
• solving underdetermined equations

Matrix structure and algorithm complexity

cost (execution time) of solving Ax = b with A ∈ R^{n×n}
• for general methods, grows as n^3
• less if A is structured (banded, sparse, Toeplitz, ...)

flop counts
• flop (floating-point operation): one addition, subtraction, multiplication, or division of two floating-point numbers
• to estimate the complexity of an algorithm: express the number of flops as a (polynomial) function of the problem dimensions, and simplify by keeping only the leading terms
• not an accurate predictor of computation time on modern computers
• useful as a rough estimate of complexity

vector-vector operations (x, y ∈ R^n)
• inner product x^T y: 2n − 1 flops (or 2n if n is large)
• sum x + y, scalar multiplication αx: n flops
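As a quick sanity check on these counts, here is a minimal sketch in pure Python (the function name and the explicit flop counter are illustrative additions, not part of the slides). It evaluates the inner product x^T y while tallying flops: n multiplications and n − 1 additions, i.e. 2n − 1 flops.

```python
def inner_product_with_flops(x, y):
    """Inner product x^T y, counting one flop per multiplication or addition."""
    assert len(x) == len(y) and x
    acc = x[0] * y[0]            # 1 multiplication
    flops = 1
    for i in range(1, len(x)):
        acc += x[i] * y[i]       # 1 multiplication + 1 addition
        flops += 2
    return acc, flops

value, flops = inner_product_with_flops([1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0])
# for n = 4 the count is 2*4 - 1 = 7 flops
```

Counting this way matches the slide's 2n − 1; the "2n if n is large" simplification just drops the lone constant.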

matrix-vector product y = Ax with A ∈ R^{m×n}
• m(2n − 1) flops (or 2mn if n large)
• 2N if A is sparse with N nonzero elements
• 2p(n + m) if A is given as A = UV^T, U ∈ R^{m×p}, V ∈ R^{n×p}

matrix-matrix product C = AB with A ∈ R^{m×n}, B ∈ R^{n×p}
• mp(2n − 1) flops (or 2mnp if n large)
• less if A and/or B are sparse
• (1/2)m(m + 1)(2n − 1) ≈ m^2 n if m = p and C is symmetric

Linear equations that are easy to solve

diagonal matrices (a_ij = 0 if i ≠ j): n flops
    x = A^{-1} b = (b_1/a_{11}, ..., b_n/a_{nn})

lower triangular matrices (a_ij = 0 if j > i): n^2 flops
    x_1 := b_1/a_{11}
    x_2 := (b_2 − a_{21} x_1)/a_{22}
    x_3 := (b_3 − a_{31} x_1 − a_{32} x_2)/a_{33}
        ⋮
    x_n := (b_n − a_{n1} x_1 − a_{n2} x_2 − ⋯ − a_{n,n−1} x_{n−1})/a_{nn}
called forward substitution

upper triangular matrices (a_ij = 0 if j < i): n^2 flops, via backward substitution

orthogonal matrices (A^{-1} = A^T)
• 2n^2 flops to compute x = A^T b for general A
• less with structure, e.g., if A = I − 2uu^T with ‖u‖_2 = 1, we can compute x = A^T b = b − 2(u^T b)u in 4n flops

permutation matrices:
    a_ij = 1 if j = π_i, and a_ij = 0 otherwise
where π = (π_1, π_2, ..., π_n) is a permutation of (1, 2, ..., n)
• interpretation: Ax = (x_{π_1}, ..., x_{π_n})
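The forward substitution recursion above maps directly to code. A minimal sketch in pure Python with lists of lists (the helper name is ours); it does roughly n^2 flops, one division per row plus the accumulated products:

```python
def forward_substitution(L, b):
    """Solve L x = b for lower triangular L (nonzero diagonal) in ~n^2 flops."""
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        # subtract the already-computed terms a_i1 x_1 + ... + a_{i,i-1} x_{i-1}
        s = sum(L[i][j] * x[j] for j in range(i))
        x[i] = (b[i] - s) / L[i][i]
    return x

x = forward_substitution([[2.0, 0.0, 0.0],
                          [1.0, 4.0, 0.0],
                          [1.0, 1.0, 1.0]],
                         [2.0, 6.0, 4.0])
# x == [1.0, 1.25, 1.75]
```

Backward substitution for an upper triangular system is the same loop run from i = n − 1 down to 0.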

• a permutation matrix satisfies A^{-1} = A^T, hence the cost of solving Ax = b is 0 flops
• example:

    A = [ 0 1 0 ]        A^{-1} = A^T = [ 0 0 1 ]
        [ 0 0 1 ]                       [ 1 0 0 ]
        [ 1 0 0 ]                       [ 0 1 0 ]

The factor-solve method for solving Ax = b

• factor A as a product of simple matrices (usually 2 or 3):
    A = A_1 A_2 ⋯ A_k
  (A_i diagonal, upper or lower triangular, etc.)
• compute x = A^{-1} b = A_k^{-1} ⋯ A_2^{-1} A_1^{-1} b by solving k 'easy' equations
    A_1 x_1 = b,   A_2 x_2 = x_1,   ...,   A_k x = x_{k−1}
• cost of the factorization step usually dominates the cost of the solve step

equations with multiple righthand sides
    A x_1 = b_1,   A x_2 = b_2,   ...,   A x_m = b_m
cost: one factorization plus m solves

LU factorization

every nonsingular matrix A can be factored as
    A = PLU
with P a permutation matrix, L lower triangular, U upper triangular
cost: (2/3)n^3 flops

Solving linear equations by LU factorization.
given a set of linear equations Ax = b, with A nonsingular.
1. LU factorization. Factor A as A = PLU ((2/3)n^3 flops).
2. Permutation. Solve P z_1 = b (0 flops).
3. Forward substitution. Solve L z_2 = z_1 (n^2 flops).
4. Backward substitution. Solve U x = z_2 (n^2 flops).
cost: (2/3)n^3 + 2n^2 ≈ (2/3)n^3 for large n

sparse LU factorization
    A = P_1 L U P_2
• adding the permutation matrix P_2 offers the possibility of sparser L, U (hence, cheaper factor and solve steps)
• P_1 and P_2 chosen (heuristically) to yield sparse L, U
• choice of P_1 and P_2 depends on the sparsity pattern and values of A
• cost is usually much less than (2/3)n^3; the exact value depends in a complicated way on n, the number of zeros in A, and the sparsity pattern

Cholesky factorization

every positive definite A can be factored as
    A = LL^T
with L lower triangular
cost: (1/3)n^3 flops

Solving linear equations by Cholesky factorization.
given a set of linear equations Ax = b, with A ∈ S^n_{++}.

1. Cholesky factorization. Factor A as A = LL^T ((1/3)n^3 flops).
2. Forward substitution. Solve L z_1 = b (n^2 flops).
3. Backward substitution. Solve L^T x = z_1 (n^2 flops).
cost: (1/3)n^3 + 2n^2 ≈ (1/3)n^3 for large n

sparse Cholesky factorization
    A = P L L^T P^T
• adding the permutation matrix P offers the possibility of sparser L
• P chosen (heuristically) to yield sparse L
• choice of P only depends on the sparsity pattern of A (unlike sparse LU)
• cost is usually much less than (1/3)n^3; the exact value depends in a complicated way on n, the number of zeros in A, and the sparsity pattern

LDL^T factorization

every nonsingular symmetric matrix A can be factored as
    A = P L D L^T P^T
with P a permutation matrix, L lower triangular, D block diagonal with 1×1 or 2×2 diagonal blocks
cost: (1/3)n^3
• cost of solving symmetric sets of linear equations by LDL^T factorization: (1/3)n^3 + 2n^2 ≈ (1/3)n^3 for large n
• for sparse A, can choose P to yield sparse L; cost ≪ (1/3)n^3

Equations with structured sub-blocks

    [ A_11  A_12 ] [ x_1 ]   [ b_1 ]
    [ A_21  A_22 ] [ x_2 ] = [ b_2 ]        (1)

• variables x_1 ∈ R^{n_1}, x_2 ∈ R^{n_2}

• blocks A_ij ∈ R^{n_i × n_j}
• if A_11 is nonsingular, can eliminate x_1:
    x_1 = A_11^{-1}(b_1 − A_12 x_2);
  to compute x_2, solve
    (A_22 − A_21 A_11^{-1} A_12) x_2 = b_2 − A_21 A_11^{-1} b_1

Solving linear equations by block elimination.
given a nonsingular set of linear equations (1), with A_11 nonsingular.
1. Form A_11^{-1} A_12 and A_11^{-1} b_1.
2. Form S = A_22 − A_21 A_11^{-1} A_12 and b̃ = b_2 − A_21 A_11^{-1} b_1.
3. Determine x_2 by solving S x_2 = b̃.
4. Determine x_1 by solving A_11 x_1 = b_1 − A_12 x_2.

dominant terms in flop count
• step 1: f + n_2 s (f is the cost of factoring A_11; s is the cost of a solve step)
• step 2: 2 n_2^2 n_1 (cost dominated by the product of A_21 and A_11^{-1} A_12)
• step 3: (2/3) n_2^3
total: f + n_2 s + 2 n_2^2 n_1 + (2/3) n_2^3

examples
• general A_11 (f = (2/3) n_1^3, s = 2 n_1^2): no gain over the standard method
    #flops = (2/3) n_1^3 + 2 n_1^2 n_2 + 2 n_2^2 n_1 + (2/3) n_2^3 = (2/3)(n_1 + n_2)^3
• block elimination is useful for structured A_11 (f ≪ n_1^3); for example, A_11 diagonal (f = 0, s = n_1): #flops ≈ 2 n_2^2 n_1 + (2/3) n_2^3

Structured matrix plus low rank term

    (A + BC) x = b
• A ∈ R^{n×n}, B ∈ R^{n×p}, C ∈ R^{p×n}
• assume A has structure (Ax = b easy to solve)

first write as
    [ A   B ] [ x ]   [ b ]
    [ C  −I ] [ y ] = [ 0 ]
now apply block elimination: solve
    (I + C A^{-1} B) y = C A^{-1} b,
then solve
    A x = b − B y
this proves the matrix inversion lemma: if A and A + BC are nonsingular,
    (A + BC)^{-1} = A^{-1} − A^{-1} B (I + C A^{-1} B)^{-1} C A^{-1}

example: A diagonal; B, C dense
• method 1: form D = A + BC, then solve Dx = b
    cost: (2/3)n^3 + 2pn^2
• method 2 (via matrix inversion lemma):

solve
    (I + C A^{-1} B) y = C A^{-1} b,        (2)
then compute
    x = A^{-1} b − A^{-1} B y
total cost is dominated by (2): 2p^2 n + (2/3)p^3 flops (i.e., linear in n)

Underdetermined linear equations

if A ∈ R^{p×n} with p < n and rank A = p,
    {x | Ax = b} = {F z + x̂ | z ∈ R^{n−p}}
• x̂ is (any) particular solution
• the columns of F ∈ R^{n×(n−p)} span the nullspace of A
• there exist several numerical methods for computing F (QR factorization, rectangular LU factorization, ...)
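The parametrization {Fz + x̂} can be illustrated with a small hand-built example (the specific A, b, x̂, and F below are ours: partition A = [B N] with B = I invertible, so x̂ = [B^{-1}b; 0] and the columns of F = [−B^{-1}N; I] span the nullspace of A):

```python
def matvec(A, x):
    """Plain matrix-vector product."""
    return [sum(Ai[j] * x[j] for j in range(len(x))) for Ai in A]

A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]        # p = 2, n = 3, rank A = p
b = [3.0, 5.0]

x_hat = [3.0, 5.0, 0.0]      # particular solution: [B^{-1} b; 0] with B = I
F = [[-1.0],
     [-1.0],
     [ 1.0]]                 # one column (n - p = 1) spanning the nullspace: A F = 0

for z in (-2.0, 0.0, 7.5):   # every x_hat + F z solves A x = b
    x = [x_hat[i] + F[i][0] * z for i in range(3)]
    assert matvec(A, x) == b
```

In practice F and x̂ would come from a QR or rectangular LU factorization of A, as noted above, rather than by hand.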

