# Feed aggregator

### Fun With The Pascal Triangle

- Blaise Pascal
- Binomial Coefficients
- Pascal Matrices
- Pascal Triangle
- Square Root of Identity
- Cube Root of Identity
- Sierpinski
- Fibonacci
- pi
- Matrix Exponential
- Thanks

#### Blaise Pascal

Blaise Pascal (1623-1662) was a 17th century French mathematician, physicist, inventor and theologian. His *Traité du triangle arithmétique* (*Treatise on Arithmetical Triangle*) was published posthumously in 1665. But this was not the first publication about the triangle. Various versions appear in Indian, Chinese, Persian, Italian and other manuscripts centuries before Pascal.

#### Binomial Coefficients

The *binomial coefficient*, usually denoted by ${n \choose k}$, is the number of ways of picking $k$ unordered outcomes from $n$ possibilities. These coefficients appear in the expansion of the binomial $(x+1)^n$. For example, when $n = 7$

```matlab
syms x
n = 7;
x7 = expand((x+1)^n)
```

```
x7 =
x^7 + 7*x^6 + 21*x^5 + 35*x^4 + 35*x^3 + 21*x^2 + 7*x + 1
```

Formally, the binomial coefficients are given by

$${{n} \choose {k}} = \frac {n!} {k! (n-k)!}$$

But premature floating point overflow of the factorials makes this an unsatisfactory basis for computation. A better way employs the recursion

$${{n} \choose {k}} = {{n-1} \choose {k}} + {{n-1} \choose {k-1}}$$

This is used by the MATLAB function `nchoosek(n,k)`.

#### Pascal Matrices

MATLAB offers two Pascal matrices. One is symmetric, positive definite and has the binomial coefficients on the antidiagonals.

```matlab
P = pascal(7)
```

```
P =
     1     1     1     1     1     1     1
     1     2     3     4     5     6     7
     1     3     6    10    15    21    28
     1     4    10    20    35    56    84
     1     5    15    35    70   126   210
     1     6    21    56   126   252   462
     1     7    28    84   210   462   924
```

The other is lower triangular, with the binomial coefficients in the rows. (We will see why the even numbered columns have minus signs in a moment.)

```matlab
L = pascal(7,1)
```

```
L =
     1     0     0     0     0     0     0
     1    -1     0     0     0     0     0
     1    -2     1     0     0     0     0
     1    -3     3    -1     0     0     0
     1    -4     6    -4     1     0     0
     1    -5    10   -10     5    -1     0
     1    -6    15   -20    15    -6     1
```

The individual elements are

```matlab
P(i,j) = P(j,i) = nchoosek(i+j-2,j-1)
```

And (temporarily ignoring the minus signs) for `i` $\ge$ `j`

```matlab
L(i,j) = nchoosek(i-1,j-1)
```

The first fun fact is that `L` is the (lower) Cholesky factor of `P`.

```matlab
L = chol(P)'
```

```
L =
     1     0     0     0     0     0     0
     1     1     0     0     0     0     0
     1     2     1     0     0     0     0
     1     3     3     1     0     0     0
     1     4     6     4     1     0     0
     1     5    10    10     5     1     0
     1     6    15    20    15     6     1
```

So we can reconstruct `P` from `L`.

```matlab
P = L*L'
```

```
P =
     1     1     1     1     1     1     1
     1     2     3     4     5     6     7
     1     3     6    10    15    21    28
     1     4    10    20    35    56    84
     1     5    15    35    70   126   210
     1     6    21    56   126   252   462
     1     7    28    84   210   462   924
```

#### Pascal Triangle

The traditional Pascal triangle is obtained by rotating P clockwise 45 degrees, or by sliding the rows of L to the right in half increments.
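As an aside, the Cholesky fun fact above is easy to cross-check outside MATLAB. Here is a sketch in Python with NumPy; the construction via `math.comb` and the variable names are mine, not part of the original post.

```python
import math
import numpy as np

n = 7

# Symmetric Pascal matrix: P(i,j) = C(i+j-2, j-1), here with 0-based indices.
P = np.array([[math.comb(i + j, j) for j in range(n)] for i in range(n)],
             dtype=float)

# Unsigned lower triangular Pascal matrix: L(i,j) = C(i-1, j-1) for i >= j.
L = np.array([[math.comb(i, j) if i >= j else 0 for j in range(n)]
              for i in range(n)], dtype=float)

# L is the lower Cholesky factor of P, so P is reconstructed by L @ L.T.
print(np.allclose(np.linalg.cholesky(P), L))   # True
print(np.allclose(L @ L.T, P))                 # True
```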
Each element of the resulting triangle is the sum of the two above it.

```matlab
triprint(L)
```

```
                  1
                1   1
              1   2   1
            1   3   3   1
          1   4   6   4   1
        1   5  10  10   5   1
      1   6  15  20  15   6   1
```

#### Square Root of Identity

When the even numbered columns of `L` are given minus signs the matrix becomes a square root of the identity.

```matlab
L = pascal(n,1)
L_squared = L^2
```

```
L =
     1     0     0     0     0     0     0
     1    -1     0     0     0     0     0
     1    -2     1     0     0     0     0
     1    -3     3    -1     0     0     0
     1    -4     6    -4     1     0     0
     1    -5    10   -10     5    -1     0
     1    -6    15   -20    15    -6     1

L_squared =
     1     0     0     0     0     0     0
     0     1     0     0     0     0     0
     0     0     1     0     0     0     0
     0     0     0     1     0     0     0
     0     0     0     0     1     0     0
     0     0     0     0     0     1     0
     0     0     0     0     0     0     1
```

Here is an exercise for you. What is `sqrt(eye(n))`? Why isn't it `L`?

#### Cube Root of Identity

When I first saw this, I was amazed. Rotate L counterclockwise. The result is a cube root of the identity.

```matlab
X = rot90(L,-1)
X_cubed = X^3
```

```
X =
     1     1     1     1     1     1     1
    -6    -5    -4    -3    -2    -1     0
    15    10     6     3     1     0     0
   -20   -10    -4    -1     0     0     0
    15     5     1     0     0     0     0
    -6    -1     0     0     0     0     0
     1     0     0     0     0     0     0

X_cubed =
     1     0     0     0     0     0     0
     0     1     0     0     0     0     0
     0     0     1     0     0     0     0
     0     0     0     1     0     0     0
     0     0     0     0     1     0     0
     0     0     0     0     0     1     0
     0     0     0     0     0     0     1
```

#### Sierpinski

Which binomial coefficients are odd? It's a fledgling fractal.

```matlab
odd = @(x) mod(x,2)==1;
n = 56;
L = abs(pascal(n,1));
spy(odd(L))
title('odd(L)')
```

#### Fibonacci

The sums of the antidiagonals of `L` are the Fibonacci numbers.

```matlab
n = 12;
A = fliplr(abs(pascal(n,1)))
for k = 1:n
    F(k) = sum(diag(A,n-k));
end
F
```

```
A =
     0     0     0     0     0     0     0     0     0     0     0     1
     0     0     0     0     0     0     0     0     0     0     1     1
     0     0     0     0     0     0     0     0     0     1     2     1
     0     0     0     0     0     0     0     0     1     3     3     1
     0     0     0     0     0     0     0     1     4     6     4     1
     0     0     0     0     0     0     1     5    10    10     5     1
     0     0     0     0     0     1     6    15    20    15     6     1
     0     0     0     0     1     7    21    35    35    21     7     1
     0     0     0     1     8    28    56    70    56    28     8     1
     0     0     1     9    36    84   126   126    84    36     9     1
     0     1    10    45   120   210   252   210   120    45    10     1
     1    11    55   165   330   462   462   330   165    55    11     1

F =
     1     1     2     3     5     8    13    21    34    55    89   144
```

#### pi

The elements in the third column of the lower triangular Pascal matrix are the *triangle numbers*. The n-th triangle number is the number of bowling pins in the n-th row of an array of bowling pins.

$$t_n = {{n+1} \choose {2}}$$

```matlab
L = pascal(12,1);
t = L(3:end,3)'
```

```
t =
     1     3     6    10    15    21    28    36    45    55
```

Here's an unusual series relating the triangle numbers to $\pi$. The signs go + + - - + + - - .

```
pi - 2 = 1 + 1/3 - 1/6 - 1/10 + 1/15 + 1/21 - 1/28 - 1/36 + 1/45 + 1/55 - ...
```

```matlab
type pi_pascal
```

```matlab
function pie = pi_pascal(n)
    tk = 1;
    s = 1;
    for k = 2:n
        tk = tk + k;
        if mod(k+1,4) > 1
            s = s + 1/tk;
        else
            s = s - 1/tk;
        end
    end
    pie = 2 + s;
```

Ten million terms gives $\pi$ to 14 decimal places.

```matlab
format long
pie = pi_pascal(10000000)
err = pi - pie
```

```
pie =
   3.141592653589817

err =
    -2.398081733190338e-14
```

#### Matrix Exponential

Finally, I love this one. The solution to the (potentially infinite) set of ordinary differential equations

$$\dot{x}_1 = x_1$$

$$\dot{x}_j = x_j + (j-1) x_{j-1}$$

is

$$x_j = e^t (t + 1)^{j-1}$$

This means that the matrix exponential of the simple diagonal matrix

```matlab
D = diag(1:7,-1)
```

```
D =
     0     0     0     0     0     0     0     0
     1     0     0     0     0     0     0     0
     0     2     0     0     0     0     0     0
     0     0     3     0     0     0     0     0
     0     0     0     4     0     0     0     0
     0     0     0     0     5     0     0     0
     0     0     0     0     0     6     0     0
     0     0     0     0     0     0     7     0
```

is

```matlab
expm_D = round(expm(D))
```

```
expm_D =
     1     0     0     0     0     0     0     0
     1     1     0     0     0     0     0     0
     1     2     1     0     0     0     0     0
     1     3     3     1     0     0     0     0
     1     4     6     4     1     0     0     0
     1     5    10    10     5     1     0     0
     1     6    15    20    15     6     1     0
     1     7    21    35    35    21     7     1
```

#### Thanks

Thanks to Nick Higham for `pascal.m`, `gallery.m` and section 28.4 of

N. J. Higham, *Accuracy and Stability of Numerical Algorithms*, Second edition, SIAM, 2002. http://epubs.siam.org/doi/book/10.1137/1.9780898718027
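For readers following along without MATLAB, here is a line-for-line Python transcription of `pi_pascal` (my translation; only the loop logic comes from the post):

```python
import math

def pi_pascal(n):
    """Sum n terms of the series pi - 2 = 1 + 1/3 - 1/6 - 1/10 + 1/15 + ...
    over reciprocals of the triangle numbers, with sign pattern + + - -."""
    tk = 1      # running triangle number t_k
    s = 1.0     # running sum, starting with the first term
    for k in range(2, n + 1):
        tk += k
        if (k + 1) % 4 > 1:
            s += 1.0 / tk
        else:
            s -= 1.0 / tk
    return 2 + s

print(abs(math.pi - pi_pascal(100000)))   # the error shrinks roughly like 1/n^2
```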


Published with MATLAB® R2018a

### Happy Valentine’s Day

Edward Scheinerman, *The Mathematics Lover's Companion*, Yale University Press, 2017 https://yalebooks.yale.edu/book/9780300223002/mathematics-lovers-companion


### The Historic MATLAB Users’ Guide

In the 1970s and early 1980s, while I was working on the LINPACK and EISPACK projects that I discussed in two previous posts, I was a Professor of Mathematics and then of Computer Science at the University of New Mexico in Albuquerque. I was teaching courses in Linear Algebra and Numerical Analysis. I wanted my students to have easy access to LINPACK and EISPACK without writing Fortran programs. By "easy access" I meant not going through the remote batch processing and the repeated edit-compile-link-load-execute process that was ordinarily required on the campus central mainframe computer.

So, I studied Niklaus Wirth's book *Algorithms + Data Structures = Programs* and learned how to parse programming languages. Wirth calls his example language PL/0. (A *zinger* aimed at PL/I, the monolithic, committee-designed language being promoted by IBM and the U.S. Defense Department at the time.) PL/0 was a pedagogical subset of Wirth's Pascal, which, in turn, was his response to Algol. Quoting Wirth, "In the realm of data types, however, PL/0 adheres to the demand of simplicity without compromise: integers are its only data type."

Following Wirth's approach, I wrote the first MATLAB -- an acronym for Matrix Laboratory -- in Fortran, as a dialect of PL/0 with matrix as the only data type. The project was a kind of hobby, a new aspect of programming for me to learn and something for my students to use. There was never any formal outside support and certainly no business plan. MathWorks was a few years in the future.

This first MATLAB was not a programming language; it was just a simple interactive matrix calculator. There were no m-files, no toolboxes, no graphics. And no ODEs or FFTs. This snapshot of the start-up screen shows all the reserved words and functions. There are only 71 of them. If you wanted to add another function, you would get the source code from me, write a Fortran subroutine, add your new name to the parse table, and recompile MATLAB.

Here is the first Users' Guide, in its entirety, from 1981.

Contents

- Users' Guide
- 1. Elementary operations
- 2. MATLAB functions
- 3. Rows, columns and submatrices
- 4. FOR, WHILE and IF
- 5. Commands, text, files and macros.
- 6. Census example
- 7. Partial differential equation example
- 8. Eigenvalue sensitivity example
- 9. Syntax diagrams
- 10. The parser-interpreter
- 11. The numerical algorithms
- 12. FLOP and CHOP
- 13. Communicating with other programs
- Appendix. The HELP document


% HESS <P,H> = HESS(A)
% produces a unitary matrix P and a Hessenberg matrix H so
% that A = P*H*P'. By itself, HESS(A) returns H.
%
% HILB Inverse Hilbert matrix. HILB(N) is the inverse of the N
% by N matrix with elements 1/(i+j-1), which is a famous
% example of a badly conditioned matrix. The result is exact
% for N less than about 15, depending upon the computer.
%
% IF Conditionally execute statements. Simple form...
% IF expression rop expression, statements
% where rop is =, <, >, <=, >=, or <> (not equal) . The
% statements are executed once if the indicated comparison
% between the real parts of the first components of the two
% expressions is true, otherwise the statements are skipped.
% Example.
% IF ABS(I-J) = 1, A(I,J) = -1;
% More complicated forms use END in the same way it is used
% with FOR and WHILE and use ELSE as an abbreviation for END,
% IF expression not rop expression . Example
% FOR I = 1:N, FOR J = 1:N, ...
% IF I = J, A(I,J) = 2; ELSE IF ABS(I-J) = 1, A(I,J) = -1; ...
% ELSE A(I,J) = 0;
% An easier way to accomplish the same thing is
% A = 2*EYE(N);
% FOR I = 1:N-1, A(I,I+1) = -1; A(I+1,I) = -1;
%
% IMAG IMAG(X) is the imaginary part of X .
%
% INV INV(X) is the inverse of the square matrix X . A warning
% message is printed if X is badly scaled or nearly
% singular.
%
% KRON KRON(X,Y) is the Kronecker tensor product of X and Y . It
% is also denoted by X .*. Y . The result is a large matrix
% formed by taking all possible products between the elements
% of X and those of Y . For example, if X is 2 by 3, then
% X .*. Y is
%
% < x(1,1)*Y x(1,2)*Y x(1,3)*Y
% x(2,1)*Y x(2,2)*Y x(2,3)*Y >
%
% The five-point discrete Laplacian for an n-by-n grid can be
% generated by
%
% T = diag(ones(n-1,1),1); T = T + T'; I = EYE(T);
% A = T.*.I + I.*.T - 4*EYE;
%
% Just in case they might be useful, MATLAB includes
% constructions called Kronecker tensor quotients, denoted by
% X ./. Y and X .\. Y . They are obtained by replacing the
% elementwise multiplications in X .*. Y with divisions.
%
% LINES An internal count is kept of the number of lines of output
% since the last input. Whenever this count approaches a
% limit, the user is asked whether or not to suppress
% printing until the next input. Initially the limit is 25.
% LINES(N) resets the limit to N .
%
% LOAD LOAD('file') retrieves all the variables from the file .
% See FILE and SAVE for more details. To prepare your own
% file for LOADing, change the READs to WRITEs in the code
% given under SAVE.
%
% LOG LOG(X) is the natural logarithm of X . See FUN .
% Complex results are produced if X is not positive, or has
% nonpositive eigenvalues.
%
% LONG Determine output format. All computations are done in
% complex arithmetic and double precision if it is available.
% SHORT and LONG merely switch between different output
% formats.
% SHORT Scaled fixed point format with about 5 digits.
% LONG Scaled fixed point format with about 15 digits.
% SHORT E Floating point format with about 5 digits.
% LONG E Floating point format with about 15 digits.
% LONG Z System dependent format, often hexadecimal.
%
% LU Factors from Gaussian elimination. <L,U> = LU(X) stores an
% upper triangular matrix in U and a 'psychologically lower
% triangular matrix', i.e. a product of lower triangular and
% permutation matrices, in L , so that X = L*U . By itself,
% LU(X) returns the output from CGEFA .
%
% MACRO The macro facility involves text and inward pointing angle
% brackets. If STRING is the source text for any MATLAB
% expression or statement, then
% t = 'STRING';
% encodes the text as a vector of integers and stores that
% vector in t . DISP(t) will print the text and
% >t<
% causes the text to be interpreted, either as a statement or
% as a factor in an expression. For example
% t = '1/(i+j-1)';
% disp(t)
% for i = 1:n, for j = 1:n, a(i,j) = >t<;
% generates the Hilbert matrix of order n.
% Another example showing indexed text,
% S = <'x = 3 '
% 'y = 4 '
% 'z = sqrt(x*x+y*y)'>
% for k = 1:3, >S(k,:)<
% It is necessary that the strings making up the "rows" of
% the "matrix" S have the same lengths.
%
% MAGIC Magic square. MAGIC(N) is an N by N matrix constructed
% from the integers 1 through N**2 with equal row and column
% sums.
%
% NORM For matrices..
% NORM(X) is the largest singular value of X .
% NORM(X,1) is the 1-norm of X .
% NORM(X,2) is the same as NORM(X) .
% NORM(X,'INF') is the infinity norm of X .
% NORM(X,'FRO') is the F-norm, i.e. SQRT(SUM(DIAG(X'*X))) .
% For vectors..
% NORM(V,P) = (SUM(V(I)**P))**(1/P) .
% NORM(V) = NORM(V,2) .
% NORM(V,'INF') = MAX(ABS(V(I))) .
%
% ONES All ones. ONES(N) is an N by N matrix of ones. ONES(M,N)
% is an M by N matrix of ones . ONES(A) is the same size as
% A and all ones .
%
% ORTH Orthogonalization. Q = ORTH(X) is a matrix with
% orthonormal columns, i.e. Q'*Q = EYE, which span the same
% space as the columns of X .
%
% PINV Pseudoinverse. X = PINV(A) produces a matrix X of the
% same dimensions as A' so that A*X*A = A , X*A*X = X and
% AX and XA are Hermitian . The computation is based on
% SVD(A) and any singular values less than a tolerance are
% treated as zero. The default tolerance is
% NORM(SIZE(A),'inf')*NORM(A)*EPS. This tolerance may be
% overridden with X = PINV(A,tol). See RANK.
%
% PLOT PLOT(X,Y) produces a plot of the elements of Y against
% those of X . PLOT(Y) is the same as PLOT(1:n,Y) where n is
% the number of elements in Y . PLOT(X,Y,P) or
% PLOT(X,Y,p1,...,pk) passes the optional parameter vector P
% or scalars p1 through pk to the plot routine. The default
% plot routine is a crude printer-plot. It is hoped that an
% interface to local graphics equipment can be provided.
% An interesting example is
% t = 0:50;
% PLOT( t.*cos(t), t.*sin(t) )
%
% POLY Characteristic polynomial.
% If A is an N by N matrix, POLY(A) is a column vector with
% N+1 elements which are the coefficients of the
% characteristic polynomial, DET(lambda*EYE - A) .
% If V is a vector, POLY(V) is a vector whose elements are
% the coefficients of the polynomial whose roots are the
% elements of V . For vectors, ROOTS and POLY are inverse
% functions of each other, up to ordering, scaling, and
% roundoff error.
% ROOTS(POLY(1:20)) generates Wilkinson's famous example.
%
% PRINT PRINT('file',X) prints X on the file using the current
% format determined by SHORT, LONG Z, etc. See FILE.
%
% PROD PROD(X) is the product of all the elements of X .
%
% QR Orthogonal-triangular decomposition.
% <Q,R> = QR(X) produces an upper triangular matrix R of
% the same dimension as X and a unitary matrix Q so that
% X = Q*R .
% <Q,R,E> = QR(X) produces a permutation matrix E , an
% upper triangular R with decreasing diagonal elements and
% a unitary Q so that X*E = Q*R .
% By itself, QR(X) returns the output of CQRDC . TRIU(QR(X))
% is R .
%
% RAND Random numbers and matrices. RAND(N) is an N by N matrix
% with random entries. RAND(M,N) is an M by N matrix with
% random entries. RAND(A) is the same size as A . RAND
% with no arguments is a scalar whose value changes each time
% it is referenced.
% Ordinarily, random numbers are uniformly distributed in
% the interval (0.0,1.0) . RAND('NORMAL') switches to a
% normal distribution with mean 0.0 and variance 1.0 .
% RAND('UNIFORM') switches back to the uniform distribution.
% RAND('SEED') returns the current value of the seed for the
% generator. RAND('SEED',n) sets the seed to n .
% RAND('SEED',0) resets the seed to 0, its value when MATLAB
% is first entered.
%
% RANK Rank. K = RANK(X) is the number of singular values of X
% that are larger than NORM(SIZE(X),'inf')*NORM(X)*EPS.
% K = RANK(X,tol) is the number of singular values of X that
% are larger than tol .
%
% RCOND RCOND(X) is an estimate for the reciprocal of the
% condition of X in the 1-norm obtained by the LINPACK
% condition estimator. If X is well conditioned, RCOND(X)
% is near 1.0 . If X is badly conditioned, RCOND(X) is
% near 0.0 .
% <R,Z> = RCOND(A) sets R to RCOND(A) and also produces a
% vector Z so that
% NORM(A*Z,1) = R*NORM(A,1)*NORM(Z,1)
% So, if RCOND(A) is small, then Z is an approximate null
% vector.
%
% RAT An experimental function which attempts to remove the
% roundoff error from results that should be "simple"
% rational numbers.
% RAT(X) approximates each element of X by a continued
% fraction of the form
%
% a/b = d1 + 1/(d2 + 1/(d3 + ... + 1/dk))
%
% with k <= len, integer di and abs(di) <= max . The default
% values of the parameters are len = 5 and max = 100.
% RAT(len,max) changes the default values. Increasing either
% len or max increases the number of possible fractions.
% <A,B> = RAT(X) produces integer matrices A and B so that
%
% A ./ B = RAT(X)
%
% Some examples:
%
% long
% T = hilb(6), X = inv(T)
% <A,B> = rat(X)
% H = A ./ B, S = inv(H)
%
% short e
% d = 1:8, e = ones(d), A = abs(d'*e - e'*d)
% X = inv(A)
% rat(X)
% display(ans)
%
%
% REAL REAL(X) is the real part of X .
%
% RETURN From the terminal, causes return to the operating system
% or other program which invoked MATLAB. From inside an
% EXEC, causes return to the invoking EXEC, or to the
% terminal.
%
% RREF RREF(A) is the reduced row echelon form of the rectangular
% matrix. RREF(A,B) is the same as RREF(<A,B>) .
%
% ROOTS Find polynomial roots. ROOTS(C) computes the roots of the
% polynomial whose coefficients are the elements of the
% vector C . If C has N+1 components, the polynomial is
% C(1)*X**N + ... + C(N)*X + C(N+1) . See POLY.
%
% ROUND ROUND(X) rounds the elements of X to the nearest
% integers.
%
% SAVE SAVE('file') stores all the current variables in a file.
% SAVE('file',X) saves only X . See FILE .
% The variables may be retrieved later by LOAD('file') or by
% your own program using the following code for each matrix.
% The lines involving XIMAG may be eliminated if everything
% is known to be real.
%
% attach lunit to 'file'
% REAL or DOUBLE PRECISION XREAL(MMAX,NMAX)
% REAL or DOUBLE PRECISION XIMAG(MMAX,NMAX)
% READ(lunit,101) ID,M,N,IMG
% DO 10 J = 1, N
% READ(lunit,102) (XREAL(I,J), I=1,M)
% IF (IMG .NE. 0) READ(lunit,102) (XIMAG(I,J),I=1,M)
% 10 CONTINUE
%
% The formats used are system dependent. The following are
% typical. See SUBROUTINE SAVLOD in your local
% implementation of MATLAB.
%
% 101 FORMAT(4A1,3I4)
% 102 FORMAT(4Z18)
% 102 FORMAT(4O20)
% 102 FORMAT(4D25.18)
%
% SCHUR Schur decomposition. <U,T> = SCHUR(X) produces an upper
% triangular matrix T , with the eigenvalues of X on the
% diagonal, and a unitary matrix U so that X = U*T*U' and
% U'*U = EYE . By itself, SCHUR(X) returns T .
%
% SHORT See LONG .
%
% SEMI Semicolons at the end of lines will cause, rather than
% suppress, printing. A second SEMI restores the initial
% interpretation.
%
% SIN SIN(X) is the sine of X . See FUN .
% SIZE If X is an M by N matrix, then SIZE(X) is <M, N> .
% Can also be used with a multiple assignment,
% <M, N> = SIZE(X) .
%
% SQRT SQRT(X) is the square root of X . See FUN . Complex
% results are produced if X is not positive, or has
% nonpositive eigenvalues.
%
% STOP Use EXIT instead.
%
% SUM SUM(X) is the sum of all the elements of X .
% SUM(DIAG(X)) is the trace of X .
%
% SVD Singular value decomposition. <U,S,V> = SVD(X) produces a
% diagonal matrix S , of the same dimension as X and with
% nonnegative diagonal elements in decreasing order, and
% unitary matrices U and V so that X = U*S*V' .
% By itself, SVD(X) returns a vector containing the singular
% values.
% <U,S,V> = SVD(X,0) produces the "economy size"
% decomposition. If X is m by n with m > n, then only the
% first n columns of U are computed and S is n by n .
%
% TRIL Lower triangle. TRIL(X) is the lower triangular part of X.
% TRIL(X,K) is the elements on and below the K-th diagonal of
% X. K = 0 is the main diagonal, K > 0 is above the main
% diagonal and K < 0 is below the main diagonal.
%
% TRIU Upper triangle. TRIU(X) is the upper triangular part of X.
% TRIU(X,K) is the elements on and above the K-th diagonal of
% X. K = 0 is the main diagonal, K > 0 is above the main
% diagonal and K < 0 is below the main diagonal.
%
% USER Allows personal Fortran subroutines to be linked into
% MATLAB . The subroutine should have the heading
%
% SUBROUTINE USER(A,M,N,S,T)
% REAL or DOUBLE PRECISION A(M,N),S,T
%
% The MATLAB statement Y = USER(X,s,t) results in a call to
% the subroutine with a copy of the matrix X stored in the
% argument A , its column and row dimensions in M and N ,
% and the scalar parameters s and t stored in S and T
% . If s and t are omitted, they are set to 0.0 . After
% the return, A is stored in Y . The dimensions M and
% N may be reset within the subroutine. The statement Y =
% USER(K) results in a call with M = 1, N = 1 and A(1,1) =
% FLOAT(K) . After the subroutine has been written, it must
% be compiled and linked to the MATLAB object code within the
% local operating system.
%
% WHAT Lists commands and functions currently available.
%
% WHILE Repeat statements an indefinite number of times.
% WHILE expr rop expr, statement, ..., statement, END
% where rop is =, <, >, <=, >=, or <> (not equal) . The END
% at the end of a line may be omitted. The comma before the
% END may also be omitted. The commas may be replaced by
% semicolons to avoid printing. The statements are
% repeatedly executed as long as the indicated comparison
% between the real parts of the first components of the two
% expressions is true. Example (assume a matrix A is
% already defined).
% E = 0*A; F = E + EYE; N = 1;
% WHILE NORM(E+F-E,1) > 0, E = E + F; F = A*F/N; N = N + 1;
% E
%
% WHO Lists current variables.
%
% WHY Provides succinct answers to any questions.
%
% //
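The WHILE entry above ends with a classic example: summing the Taylor series for the matrix exponential until adding the next term no longer changes the sum. Here is a Python/NumPy sketch of that 1981 loop; the function name and the test matrix are mine, not from the guide.

```python
import numpy as np

def expm_taylor(A):
    """E = I + A + A^2/2! + ..., stopping when the next term F is
    negligible: NORM(E+F-E,1) > 0 is the 1981 WHILE example's test."""
    E = np.zeros_like(A, dtype=float)
    F = np.eye(A.shape[0])      # current term, A^(n-1)/(n-1)!
    n = 1
    while np.linalg.norm(E + F - E, 1) > 0:
        E = E + F
        F = A @ F / n
        n += 1
    return E

# diag(1:3,-1): its exponential is a lower triangular Pascal matrix.
D = np.diag([1.0, 2.0, 3.0], -1)
print(np.round(expm_taylor(D)))
```

In floating point, `E + F - E` becomes exactly zero once every entry of `F` is below roundoff relative to `E`, so the loop terminates without any explicit tolerance.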

### LINPACK, Linear Equation Package

The ACM Special Interest Group on Programming Languages, SIGPLAN, expects to hold the fourth in a series of conferences on the History of Programming Languages in 2020, see HOPL-IV. The first drafts of papers are to be submitted by August, 2018. That long lead time gives me the opportunity to write a detailed history of MATLAB. I plan to write the paper in sections, which I'll post in this blog as they are available. This is the third such installment.

LINPACK and EISPACK are Fortran subroutine packages for matrix computation developed in the 1970s. They are a follow-up to the Algol 60 procedures that I described in my blog post about the Wilkinson and Reinsch Handbook. They led to the first version of MATLAB. This post is about LINPACK. My previous post was about EISPACK.

Contents

- Cart Before the Horse
- LINPACK Project
- Contents
- Programming Style
- BLAS
- Portability
- Publisher
- LINPACK Benchmark
- In Retrospect
- References

#### Cart Before the Horse

Logically a package of subroutines for solving linear equations should come before a package for computing matrix eigenvalues. The tasks addressed and the algorithms involved are mathematically simpler. But chronologically EISPACK came before LINPACK. Actually, the first section of the *Handbook* is about linear equations, but we pretty much skipped over that section when we did EISPACK. The eigenvalue algorithms were new and we felt it was important to make Fortran translations widely available.

#### LINPACK Project

In 1975, as EISPACK was nearing completion, we proposed to the NSF another research project investigating methods for the development of mathematical software. A byproduct would be the software itself.

The four principal investigators, and eventual authors of the Users' Guide, were

- Jack Dongarra, Argonne National Laboratory
- Cleve Moler, University of New Mexico
- James Bunch, University of California, San Diego
- G. W. (Pete) Stewart, University of Maryland

The project was centered at Argonne. The three of us at universities worked at our home institutions during the academic year and we all got together at Argonne in the summer. Dongarra and I had previously worked on EISPACK.

#### Contents

Fortran subroutine names at the time were limited to six characters. We adopted a naming convention with five letter names of the form TXXYY. The first letter, T, indicates the matrix data type. Standard Fortran provides three such types and, although not standard, many systems provide a fourth type, double precision complex, COMPLEX*16. Unlike EISPACK, which follows Algol and has separate arrays for the real and imaginary parts of a complex matrix, LINPACK takes full advantage of Fortran complex. The first letter can be

```
S    Real
D    Double precision
C    Complex
Z    Complex*16
```

The next two letters, XX, indicate the form of the matrix or its decomposition. (A packed array is the upper half of a symmetric matrix stored with one linear subscript.)

```
GE   General
GB   General band
PO   Positive definite
PP   Positive definite packed
PB   Positive definite band
SI   Symmetric indefinite
SP   Symmetric indefinite packed
HI   Hermitian indefinite
HP   Hermitian indefinite packed
TR   Triangular
GT   General tridiagonal
PT   Positive definite tridiagonal
CH   Cholesky decomposition
QR   Orthogonal triangular decomposition
SV   Singular value decomposition
```

The final two letters, YY, indicate the computation to be done.

```
FA   Factor
SL   Solve
CO   Factor and estimate condition
DI   Determinant, inverse, inertia
DC   Decompose
UD   Update
DD   Downdate
EX   Exchange
```

This chart shows all 44 of the LINPACK subroutines. An initial S may be replaced by any of D, C, or Z. An initial C may be replaced by Z. For example, DGEFA factors a double precision matrix. This is probably the most important routine in the package.

```
       CO   FA   SL   DI
SGE     *    *    *    *
SGB     *    *    *    *
SPO     *    *    *    *
SPP     *    *    *    *
SPB     *    *    *    *
SSI     *    *    *    *
SSP     *    *    *    *
CHI     *    *    *    *
CHP     *    *    *    *
STR     *         *    *
SGT               *
SPT               *

       DC   SL   UD   DD   EX
SCH     *         *    *    *
SQR     *    *
SSV     *
```

Following this naming convention the subroutine for computing the SVD, the singular value decomposition, fortuitously becomes SSVDC.
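The TXXYY scheme is regular enough to capture in a few lines. This toy Python decoder is entirely my own, built from the tables above, just to illustrate the convention:

```python
# Decode a five-letter LINPACK name of the form TXXYY.
TYPES = {'S': 'Real', 'D': 'Double precision',
         'C': 'Complex', 'Z': 'Complex*16'}
FORMS = {'GE': 'General', 'GB': 'General band',
         'PO': 'Positive definite', 'PP': 'Positive definite packed',
         'PB': 'Positive definite band', 'SI': 'Symmetric indefinite',
         'SP': 'Symmetric indefinite packed', 'HI': 'Hermitian indefinite',
         'HP': 'Hermitian indefinite packed', 'TR': 'Triangular',
         'GT': 'General tridiagonal', 'PT': 'Positive definite tridiagonal',
         'CH': 'Cholesky decomposition',
         'QR': 'Orthogonal triangular decomposition',
         'SV': 'Singular value decomposition'}
TASKS = {'FA': 'Factor', 'SL': 'Solve',
         'CO': 'Factor and estimate condition',
         'DI': 'Determinant, inverse, inertia',
         'DC': 'Decompose', 'UD': 'Update',
         'DD': 'Downdate', 'EX': 'Exchange'}

def decode(name):
    """Split TXXYY into (data type, matrix form, computation)."""
    return TYPES[name[0]], FORMS[name[1:3]], TASKS[name[3:5]]

print(decode('DGEFA'))   # ('Double precision', 'General', 'Factor')
print(decode('SSVDC'))   # ('Real', 'Singular value decomposition', 'Decompose')
```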

#### Programming Style

Fortran at the time was notorious for unstructured, unreadable, "spaghetti" code. We adopted a disciplined programming style and expected people as well as machines to read the codes. The scope of loops and if-then-else constructions was carefully shown by indenting. Go-to's and statement numbers were used only to exit blocks and properly handle possibly empty loops.

In fact, we, the authors, did not actually write the final programs; TAMPR did. TAMPR was a powerful software analysis tool developed by Jim Boyle and Ken Dritz at Argonne. It manipulates and formats Fortran programs to clarify their structure. It also generates the four data type variants of the programs. So our source code was the Z versions; the S, D, and C versions, as well as "clean" Z versions, were produced by TAMPR.

This wasn't just a text processing task. For example, in ZPOFA, the Cholesky factorization of a positive definite complex Hermitian matrix, the test for a real, positive diagonal is

if (s .le. 0.0d0 .or. dimag(a(j,j)) .ne. 0.0d0) go to 40

In the real version DPOFA there is no imaginary part, so this becomes simply

if (s .le. 0.0d0) go to 40

BLAS

All of the inner loops are done by calls to the BLAS, the Basic Linear Algebra Subprograms, developed concurrently by Chuck Lawson and colleagues. On systems that did not have optimizing Fortran compilers, the BLAS could be implemented efficiently in machine language. On vector machines, like the CRAY-1, the loop is a single vector instruction.

The two most important BLAS are the inner product of two vectors, DDOT, and vector plus scalar times vector, DAXPY. All of the algorithms are column oriented to conform to Fortran storage of two-dimensional arrays and thereby provide locality of memory access.
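In modern notation the two kernels are simple. Here is a Python sketch of the operations (my illustration, not the Fortran BLAS themselves):

```python
def ddot(x, y):
    """Inner product: the sum of x(i)*y(i)."""
    return sum(xi * yi for xi, yi in zip(x, y))

def daxpy(a, x, y):
    """Vector plus scalar times vector: y + a*x, elementwise."""
    return [yi + a * xi for xi, yi in zip(x, y)]

# Column-oriented elimination updates an entire column with one daxpy,
# so successive memory references are to consecutive array elements.
col_j = daxpy(-2.0, [1.0, 2.0], [5.0, 9.0])
```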

Portability

The programs avoid all machine dependent quantities. There is no need for anything like the EISPACK parameter MACHEP. Only one of the algorithms, the SVD, is iterative and involves a convergence test. Convergence is detected as a special case of a negligible subdiagonal element. Here is the code for checking if the subdiagonal element e(l) is negligible compared to the two nearby diagonal elements s(l) and s(l+1). I have included enough of the code to show how TAMPR displays structure by indentation and loop exits by comments with an initial c.

         do 390 ll = 1, m
            l = m - ll
c        ...exit
            if (l .eq. 0) go to 400
            test = dabs(s(l)) + dabs(s(l+1))
            ztest = test + dabs(e(l))
            if (ztest .ne. test) go to 380
            e(l) = 0.0d0
c        ......exit
               go to 400
  380       continue
  390    continue
  400    continue

Publisher

No commercial book publisher would publish the *LINPACK Users' Guide*. It did not fit into their established categories; it was neither a textbook nor a research monograph and was neither mathematics nor computer science.

Only one nonprofit publisher, SIAM, was interested. I vividly remember sitting on a dock on Lake Mendota at the University of Wisconsin-Madison during a SIAM annual meeting, talking to SIAM's Executive Director Ed Block, and deciding that SIAM would take a chance. The *Guide* has turned out to be one of SIAM's all-time best sellers.

A few years later, in 1981, Beresford Parlett reviewed the publication in SIAM Review. He wrote

It may seem strange that SIAM should publish and review a users' guide for a set of Fortran programs. Yet history will show that SIAM is thereby helping to foster a new aspect of technology which currently rejoices in a name mathematical software. There is as yet no satisfying characterization of this activity, but it certainly concerns the strong effect that a computer system has on the efficiency of a program.

LINPACK Benchmark

Today, LINPACK is better known as a benchmark than as a subroutine library. I wrote a blog post about the LINPACK Benchmark in 2013.

In Retrospect

In a sense, the LINPACK and EISPACK projects were failures. We had proposed research projects to the NSF to "explore the methodology, costs, and resources required to produce, test, and disseminate high-quality mathematical software." We never wrote a report or paper addressing those objectives. We only produced software.

References

J. Dongarra and G. W. Stewart, "LINPACK, A Package for Solving Linear Systems", chapter 2 in W. Cowell, editor, *Sources and Development of Mathematical Software*, Prentice Hall, 1984, 29 pages, PDF available.

J. J. Dongarra, J. R. Bunch, C. B. Moler and G. W. Stewart, *LINPACK User's Guide*, SIAM, 1979, 344 pages, LINPACK User's Guide.

C. Lawson, R. Hanson, D. Kincaid and F. Krogh, "Basic Linear Algebra Subprograms for Fortran Usage," *ACM Trans. Math. Software* vol. 5, 308-371, 1979. Reprint available.

J. Boyle and K. Dritz, "An automated programming system to aid the development of quality mathematical software," *IFIP Proceedings*, North Holland, 1974.


Published with MATLAB® R2018a

### EISPACK, Matrix Eigensystem Routines

The ACM Special Interest Group on Programming Languages, SIGPLAN, expects to hold the fourth in a series of conferences on the History of Programming Languages in 2020, see HOPL-IV. The first drafts of papers are to be submitted by August, 2018. That long lead time gives me the opportunity to write a detailed history of MATLAB. I plan to write the paper in sections, which I'll post in this blog as they are available. This is the second such installment.

LINPACK and EISPACK are Fortran subroutine packages for matrix computation developed in the 1970's. They are a follow up to the Algol 60 procedures that I described in my previous blog post about the Wilkinson and Reinsch Handbook. They led to the first version of MATLAB. This post is about EISPACK. A post about LINPACK will follow.

Argonne

In 1970, even before the *Handbook* was published, a group at Argonne National Laboratory proposed to the U.S. National Science Foundation (NSF) to "explore the methodology, costs, and resources required to produce, test, and disseminate high-quality mathematical software and to test, certify, disseminate, and support packages of mathematical software in certain problem areas."

Argonne is a national research laboratory west of Chicago that has origins in the University of Chicago's work on the Manhattan project in the 1940s. In 1970 it was under the auspices of what was then called the Atomic Energy Commission (now the Department of Energy). As far as I know, this was the first time anybody at Argonne proposed a research project to the NSF.

At the time I was a summer consultant at Argonne and was spending a sabbatical year at Stanford. Yoshiko Ikebe, from the University of Texas, was also a participant. So initially we called the project NATS, for NSF, Argonne, Texas and Stanford, but that was soon changed to National Activity to Test Software.

NATS took on two projects. EISPACK, Matrix Eigensystem Package, translated the Algol procedures for eigenvalues from the second part of the *Handbook* to Fortran and worked extensively on testing and portability. FUNPACK, led by Jim Cody, developed rational approximations and software to compute special mathematical functions, including Bessel functions, gamma functions and the error function.

The Algol 60 procedures in the Wilkinson and Reinsch *Handbook* provide an excellent reference for the algorithms of matrix computation, but it was difficult to actually run the code because Algol compilers were not readily available.

In the 1970's the clear choice of a language for technical computation was Fortran. The EISPACK team rewrote the Algol in Fortran, being careful to retain the structure and the numerical properties. Wilkinson visited Argonne for a few weeks every summer. He didn't do any Fortran programming, but he provided advice about numerical behavior and testing procedures.

In 1971 the first release of the collection was ready to be sent to about two dozen universities and national laboratories where individuals had shown an interest in the project and agreed to test the software.

The Fortran subroutines in the first release were all derived from the Algol procedures in the second section of the *Handbook*, so the contents of the package were essentially the same as the contents I listed in my previous blog post. Most of the subroutine names describe the underlying algorithm, not the problem being addressed. It is hard to tell from names like TRED1 and HQR2 what the subroutine actually does. There were 34 subroutines in the first release.

With seven authors of the users' guide for the first release, it took almost three years, until 1974, to complete the documentation. Since the project was considered to be language translation, the *EISPACK Guide* was published by Springer-Verlag, the publisher of the *Handbook*.

By 1976 a second release was ready for public distribution. The number of subroutines nearly doubled, to 70. The additional capabilities included

- The SVD algorithm of Golub, Kahan, and Reinsch for factoring rectangular matrices and solving overdetermined linear systems. The Algol procedures are in the first section of the
*Handbook*.

- The QZ algorithm of Moler and Stewart for the generalized eigenvalue problem, $Ax = \lambda Bx$. We developed this in Fortran at the same time as the rest of EISPACK, so there was never any Algol version.

- A set of "driver routines" with names like RS, for real symmetric, and RSG, for real symmetric generalized, that reflect the problem being solved. These drivers collect the recommended subroutines from the underlying collection into one call.

A second Springer book, called the *EISPACK Guide Extension*, was published in 1977 to document the additional subroutines.

The hardware landscape that EISPACK faced initially was a diverse collection of mainframes from different manufacturers. The early 1970s was before the age of minicomputers. (The DEC VAX-11/780 was introduced in October, 1977.) It was also long before the IEEE 754 standard for floating point arithmetic. Each manufacturer had a different word length and floating point format.

The most prevalent machines were from the IBM 360-370 family. These offered two floating point options. The 32-bit word, with four bytes, was designated REAL*4 and the 64-bit word was REAL*8. In contrast to today's floating point with the same word lengths, the 360 employed base 16, so all numbers between powers of 16, not powers of 2, were equally spaced. Jim Cody referred to this as "wobbling precision".
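A back-of-the-envelope illustration of the wobble, assuming REAL*4 carried six hexadecimal fraction digits:

```python
# In base 16, every number in a "hexade" [16**e, 16**(e+1)) shares one
# absolute spacing, so the *relative* precision wobbles by a factor of
# 16 across the interval.  (Six hex digits for REAL*4 is my assumption.)
beta, t = 16, 6
ulp = float(beta) ** (1 - t)      # absolute spacing throughout [1, 16)
rel_at_1  = ulp / 1.0             # relative spacing just above 1
rel_at_16 = ulp / 16.0            # relative spacing just below 16
assert rel_at_1 / rel_at_16 == 16.0
```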

The Fortran distribution was available in several versions that differed in the setting of the machine accuracy parameter, MACHEP. Here are the different machines and corresponding values of MACHEP.

MACHINE                    MACHEP
Burroughs 6700             2.**(-37)
CDC 6000-7000              2.**(-47)
DEC PDP-10                 2.**(-26)
Honeywell 6070             2.**(-26)
Univac                     2.**(-26)
IBM 360-370, REAL*4        16.**(-5)
IBM 360-370, REAL*8        16.D0**(-13)
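MACHEP is essentially the machine epsilon, the spacing of floating point numbers just above 1. The classic way to discover it at run time is repeated halving; here is the idea in Python, where the underlying arithmetic happens to be IEEE double precision:

```python
# Halve a candidate until adding it to 1.0 no longer changes the sum.
macheps = 1.0
while 1.0 + macheps / 2.0 > 1.0:
    macheps /= 2.0

# On an IEEE double precision machine this yields 2**(-52).
assert macheps == 2.0 ** -52
```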

Complex Arithmetic

Algol does not have a complex data type, so the derived subroutines do not use Fortran complex arithmetic. Real and imaginary parts of complex quantities are stored in separate arrays.
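As a sketch of the convention (my Python illustration, not EISPACK code), an elementwise complex multiply on split arrays looks like:

```python
# Multiply two complex vectors elementwise, EISPACK-style: the real and
# imaginary parts live in separate arrays rather than a complex type.
def cmul(ar, ai, br, bi):
    cr = [x * u - y * v for x, y, u, v in zip(ar, ai, br, bi)]
    ci = [x * v + y * u for x, y, u, v in zip(ar, ai, br, bi)]
    return cr, ci

cr, ci = cmul([1.0], [2.0], [3.0], [4.0])   # (1+2i)(3+4i) = -5+10i
```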

Certification

The distributed software does not have a copyright notice or involve any kind of license. The term "open source" was not yet widely used. The closest we came to a license agreement was a statement about "certification". Both guides listed the machines, operating systems, and compilers of the 15 test sites. About half of the sites were universities and half were government labs. One was from Canada and one from Sweden. Various models from six different manufacturers were involved.

This list of test sites was followed by this statement about support.

Certification implies the full support of the NATS project in the sense that reports of poor or incorrect performance on at least the computer systems listed above will be examined and any necessary corrections made. This support holds only when the software is obtained through the channels indicated below and has not been modified; it will continue to hold throughout the useful life of EISPACK or until, in the estimation of the developers, the package is superseded or incorporated into other supported, widely-available program libraries. The following individual serves as a contact about EISPACK, accepting reports from users concerning performance:

This was followed by Burt Garbow's mailing address and phone number at Argonne. As far as I know, Burt never received any serious bug reports or complaints.

References

J. Dongarra and C. Moler, "EISPACK, A Package for Solving Matrix Eigenvalue Problems", chapter 4 in W. Cowell, editor, *Sources and Development of Mathematical Software*, Prentice Hall, 1984, 10 pages, PDF available.

B. T. Smith, J. M. Boyle, J. J. Dongarra, B. S. Garbow, Y. Ikebe, V. C. Klema, and C. B. Moler, *Matrix Eigensystem Routines - EISPACK Guide, 2nd Edition*, vol. 6 in Lecture Notes in Computer Science, Springer, 1974, 551 pages, EISPACK Guide.

B. S. Garbow, J. M. Boyle, J. J. Dongarra and C. B. Moler, *Matrix Eigensystem Routines - EISPACK Guide Extension*, vol. 51 in Lecture Notes in Computer Science, Springer, 1977, 343 pages, EISPACK Guide Extension.


Published with MATLAB® R2017a

### Bug in Half-Precision Floating Point Object

My post on May 8 was about "half-precision" and "quarter-precision" arithmetic. I also added code for objects fp16 and fp8 to Cleve's Laboratory. A few days ago I heard from Pierre Blanchard and my good friend Nick Higham at the University of Manchester about a serious bug in the constructors for those objects.

The Bug

Let

format long
e = eps(fp16(1))

e =
     9.765625000000000e-04

The value of e is

1/1024

ans =
     9.765625000000000e-04

This is the relative precision of half-precision floating point numbers, which is the spacing of half-precision numbers in the interval between 1 and 2. So in binary the next number after 1 is

disp(binary(1+e))

   0 01111 0000000001

And the last number before 2 is

disp(binary(2-e))

   0 01111 1111111111

The three fields displayed are the sign, which is one bit; the exponent, which has five bits; and the fraction, which has ten bits.
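The same fields can be inspected with standard Python, since the struct module understands the IEEE half-precision format (code 'e'). The helper below is my illustration, not the blog's binary function:

```python
import struct

def binary16(x):
    """Show the sign, exponent, and fraction fields of x in half precision."""
    u = struct.unpack('<H', struct.pack('<e', x))[0]
    b = format(u, '016b')
    return b[0] + ' ' + b[1:6] + ' ' + b[6:]

print(binary16(1 + 2.0 ** -10))   # 0 01111 0000000001
print(binary16(2 - 2.0 ** -10))   # 0 01111 1111111111
```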

So far, so good. The bug shows up when I try to convert any number between 2-e and 2 to half-precision. There aren't any more half-precision numbers between those limits. The values in the lower half of the interval should round down to 2-e and the values in the upper half should round up to 2. The round-to-even convention says that the midpoint, 2-e/2, should round to 2.
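This behavior can be checked against Python's built-in half-precision conversion, which follows IEEE round-to-nearest-even (my illustration, independent of the fp16 object):

```python
import struct

def to_fp16(x):
    # Round-trip through IEEE binary16, which rounds ties to even.
    return struct.unpack('<e', struct.pack('<e', x))[0]

e = 2.0 ** -10                               # spacing between 1 and 2
assert to_fp16(2.0 - e / 4) == 2.0           # upper half rounds up
assert to_fp16(2.0 - e / 2) == 2.0           # midpoint: ties go to even
assert to_fp16(2.0 - 3 * e / 4) == 2.0 - e   # lower half rounds down
```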

But I wasn't careful about how I did the rounding. I just used the MATLAB round function, which doesn't follow the round-to-even convention. Worse yet, I didn't check to see if the fraction rounded up into the exponent field. I tried to do everything in one statement.

dbtype oldfp16 48:49

48        u = bitxor(uint16(round(1024*f)), ...
49            bitshift(uint16(e+15),10));

For values between 2-e/2 and 2, the round(1024*f) is 1024, which requires eleven bits. The bitxor then clobbers the exponent field. I won't show the result here. If you have the May half-precision object on your machine, give it a try.

This doesn't just happen for values a little bit less than 2, it happens close to any power of 2.

The Fix

We need a proper round-to-nearest-even function.

dbtype fp16 31

31        rndevn = @(s) round(s-(rem(s,2)==0.5));

Then don't try to do it all at once.

dbtype fp16 50:56

50        % normal
51        t = uint16(rndevn(1024*f));
52        if t==1024
53           t = uint16(0);
54           e = e+1;
55        end
56        u = bitxor(t, bitshift(uint16(e+15),10));

It turns out that the branch for denormals is OK, once round is replaced by rndevn. The exponent for denormals is all zeros, so when the fraction encroaches it produces the correct result.
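The corrected normal branch can be sketched in Python, where the built-in round already rounds ties to even and so plays the role of rndevn. The function below is my paraphrase of the logic, not the shipped constructor:

```python
def pack_normal(f, e):
    """Pack x = (1+f)*2**e, with 0 <= f < 1, into a binary16 word."""
    t = round(1024 * f)            # ten fraction bits; ties round to even
    if t == 1024:                  # the fraction rounded up a full unit,
        t = 0                      # so carry into the exponent instead
        e = e + 1                  # of clobbering it
    return t | ((e + 15) << 10)    # biased exponent occupies bits 10-14

assert pack_normal(0.0, 0) == 0x3C00            # 1.0
assert pack_normal(1 - 2.0 ** -11, 0) == 0x4000  # midpoint 2-e/2 -> 2.0
```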

A similar fix is required for the quarter-precision constructor, fp8.

Cleve's Laboratory

I am updating the code on the MATLAB Central File Exchange. Only @fp16/fp16 and @fp8/fp8 are affected. (Give me a few days to complete the update process.)

Thanks

Thanks to Pierre and Nick.



Published with MATLAB® R2017a


### Microsoft Word Training

Microsoft Word is a word processing system suited to both business and personal use. While already feature rich and critical for productivity, Microsoft continues to improve and enhance Word with each new release.

Microsoft Word training at New Horizons will help you develop or improve your Microsoft Word skills so that you are able to make the most of this industry standard application. All of our Microsoft Word classes are taught by Microsoft Certified Trainers.

MICROSOFT WORD COURSES WILL COVER:

- Newest features of Word
- Document creation, editing, and saving
- Formatting text and paragraphs
- Working with tables, columns, and other formatting features
- Graphics, WordArt, charts, and text flow
- Document templates
- Advanced features including mail merge, macros, document versioning, and proofing tools

MICROSOFT WORD CERTIFICATION

Your Microsoft Word training will prepare you for a Microsoft Office Specialist certification. Check out more details here:

Microsoft Office Certification proves that you have core to advanced skills in Microsoft Office applications. Certification is helpful for those new to the workforce or transitioning to a more analytical role, since it proves you can perform tasks at a higher level. This gives you a leg up against competing candidates.

### Publisher Training

Microsoft Publisher is a desktop publishing application that places emphasis on page layout and design. It lets you create visually rich, professionally-looking publications, including flyers, posters, product catalogs, and email newsletters.

When you or your team effectively utilize Microsoft Publisher, then you save on graphic design costs and have more control over the presentation of your personal message.

New Horizons offers Publishing training to help get you started on using Microsoft Publisher's intuitive, easy-to-use software. Whether you are new to Microsoft Publisher or looking to learn new tips and tricks, we have you covered.

All of our Microsoft Publisher courses are taught by Microsoft Certified trainers.

In New Horizons Publisher courses you will learn about:

- Working with basic publications
- Editing and formatting publications
- Working with pictures and graphics
- Preparing a publication for distribution

### Microsoft Project Training

Microsoft Project is a project management program designed to help managers achieve a successful outcome, and the benefits of the software grow with each new edition. Microsoft Project training from New Horizons will help you stay abreast of the latest changes, whether for personal use or to train your employees for an upgrade in your organization. This training can help project managers assign resources to tasks, track progress, manage a budget, and analyze workloads.

With Microsoft Project, managers can easily analyze resources, check budgets, evaluate timelines, measure progress, and anticipate resource needs. In addition to assisting project managers, Microsoft Project also enables team members to manage tasks, collaborate, submit time sheets, and flag issues and risks. Moreover, it helps executives define business drivers, measure the strategic impact of competing ideas, make funding decisions, and view project and resource status.

PROJECT COURSE TOPICS

- Creating a project plan
- Managing and configuring tasks in Project
- Understanding and managing resources
- Integrating data with other Microsoft applications
- Tracking costs
- Viewing project information visually

### PowerPoint Training

Microsoft PowerPoint is powerful software that allows you to create captivating slide presentations that can easily be shared on the web. If you want to present any information creatively and professionally, then Microsoft PowerPoint is the perfect tool.

New Horizons has courses to help you learn how to use Microsoft PowerPoint and get started on creating memorable PowerPoint presentations. All of New Horizons' Microsoft PowerPoint classes are taught by Microsoft Certified Trainers.

MICROSOFT POWERPOINT COURSE TOPICS

- New Features of PowerPoint
- Creating presentations with PowerPoint
- Formatting and organizing PowerPoint slides
- Working with graphics, tables and charts
- Adding multimedia and SmartArt presentations
- Integrating with Microsoft Office files

MICROSOFT POWERPOINT CERTIFICATION

Your Microsoft PowerPoint training will prepare you for a Microsoft Office Specialist certification. Check out more details here:

Microsoft Office Certification proves that you have core to advanced skills in Microsoft Office applications. Certification is helpful for those new to the workforce or transitioning to a more analytical role, since it proves you can perform tasks at a higher level. This gives you a leg up against competing candidates.

### Outlook Training

New Horizons offers world-class training for several versions of Microsoft Office including 2016, 2013, 2010 and earlier. All our courses are delivered by one of our Microsoft Certified Trainers ready to help you or your employees get the most out of Outlook.

Microsoft Outlook courses at New Horizons will teach you how to:

- Use features of Outlook
- Compose and organize your email
- Work with contacts
- Use calendar features
- Manage Outlook tasks

OUTLOOK CERTIFICATION

Your Outlook training will help prepare you for Microsoft Office Specialist certification. This certification is helpful if you are new to the job market, or looking to get ahead in your current position. Check out more details here:

Microsoft Office Certification proves that you have core to advanced skills in Microsoft Office applications. Certification is helpful for those new to the workforce or transitioning to a more analytical role, since it proves you can perform tasks at a higher level. This gives you a leg up against competing candidates.

### OneNote Training

Microsoft OneNote lets you create and store notes in a convenient location, enabling you to find and use them simply. The organizational power of OneNote is essential to maintaining productivity.

New Horizons offers world-class OneNote training and all courses are delivered by one of our Microsoft Certified Trainers. Whether you're new to OneNote or an advanced user, our training will cover all you need to know to become a OneNote power user.

MICROSOFT ONENOTE COURSES WILL COVER:

- The Microsoft OneNote interface
- How to create a simple notebook
- How to create notes
- How to organize content and search for information in a OneNote notebook
- Integration of OneNote with other applications
- Using OneNote to share notes with other people

### Visio Training

Microsoft Visio is an intelligent diagramming and vector graphics application. Visio helps simplify communication with data-driven visuals, including, but not limited to:

- Organization charts
- Network diagrams
- Business processes

Visio makes it easy to share this information on the Web and in real time. Its professional templates also integrate easily with other Microsoft Office products, such as Word and Excel.

BENEFITS OF VISIO TRAINING FROM NEW HORIZONS

Whether you are new to Visio or an advanced user looking for tips and tricks, New Horizons has the courses you need. All of our Microsoft Visio trainers are Microsoft Certified and are ready to help teach you all that you need to know to become a Microsoft Visio pro.

New Horizons Visio courses cover:

- Working with basic diagrams and shapes
- Creating an organization chart
- Creating shapes, design styles, templates, and stencils
- Working with layers
- Sharing and collaboration features

### Excel Training

Microsoft Excel is the most commonly used spreadsheet application. Learning how to use Excel is an investment in both your personal and professional life. Excel makes it easy to monitor financial performance, such as business profit or loss, calculate payments on large purchases, plan a budget, or stay organized with checklists.

As an employee, learning how to use Excel efficiently provides value, since most jobs utilize this application. This opens up more opportunities for employment and career advancement.

When employees know how to use Excel, it improves their efficiency in the workplace. Employees who know how to create detailed worksheets, invoices, charts, and complex formulas achieve professional results in a fraction of the time.

Excel training at New Horizons includes basic to advanced courses. Whether you're brand new to Excel or seeking advanced knowledge, we've got it covered. Get Started Today!

MICROSOFT OFFICE SPECIALIST CERTIFICATION

Excel training at New Horizons will help prepare you for a Microsoft Office Specialist Certification (MOS) Excel Certification. Check out more details here:

Holding a MOS certification can earn an entry-level business employee as much as $16,000 more in annual salary than uncertified peers.*

### Access Training

Take control of your data with Microsoft Access relational database software training at New Horizons. Our courses will teach you basic to advanced features of Access.

Whether you are new to Access or an advanced user, this training will cover what you need to know as efficiently as possible. New Horizons offers world-class training for several versions, including 2016, 2013, 2010, 2007 and earlier. And, of course, all classes are delivered by one of our Microsoft Certified Trainers.

Microsoft Access courses at New Horizons will teach you how to:

- Create and design Access databases
- Work with Access tables, relationships, keys and constraints
- Query data
- Manage and design interfaces with Access Forms
- Create basic to advanced reports
- Automate tasks with Macros and VBA

Microsoft Office Specialist Certification

Access training at New Horizons will help prepare you for a Microsoft Office Specialist (MOS) Access Certification. Check out more details here:

### Wilkinson and Reinsch Handbook on Linear Algebra

The ACM Special Interest Group on Programming Languages, SIGPLAN, expects to hold the fourth in a series of conferences on the History of Programming Languages in 2020, see HOPL-IV. The first drafts of papers are to be submitted by August, 2018. That long lead time gives me the opportunity to write a detailed history of MATLAB. I plan to write the paper in sections, which I'll post in this blog as they are available.

The mathematical basis for the first version of MATLAB begins with a volume edited by J. H. Wilkinson and C. Reinsch, *Handbook for Automatic Computation, Volume II, Linear Algebra*, published by the eminent German publisher Springer-Verlag, in 1971.

Contents

- Eigenvalues
- Handbook for Automatic Computation
- Contents
- Symmetric matrices
- Nonsymmetric matrices
- Complex matrices
- Preferred paths
- QR vs. QL
- Historical note

Eigenvalues

The mathematical term "eigenvalue" is a linguistic hybrid. In German the term is "Eigenwert". One Web dictionary says the English translation is "intrinsic value". But after a few decades using terms like "characteristic value" and "latent root", mathematicians have given up trying to translate the entire word and generally translate only the second half.

The *eigenvalue problem* for a given matrix $A$ is the task of finding scalars $\lambda$ and, possibly, the corresponding vectors $x$ so that

$$Ax = \lambda x$$

It is important to distinguish the case where $A$ is symmetric. After the *linear equation problem*,

$$Ax = b$$

the eigenvalue problem is the most important computational task in numerical linear algebra.
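As a small concrete check of the definition (using NumPy here, not the Algol procedures this post discusses), an eigensolver returns pairs $\lambda$, $x$ that satisfy $Ax = \lambda x$:

```python
import numpy as np

# For this symmetric A the eigenvalues are 3 and 1, with eigenvectors
# along (1, 1) and (1, -1).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, X = np.linalg.eig(A)        # columns of X are the eigenvectors
for k in range(2):
    # verify A x = lambda x for each computed eigenpair
    assert np.allclose(A @ X[:, k], lam[k] * X[:, k])
```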

The state of the art in numerical linear algebra 50 years ago, in the 1960s, did not yet provide reliable, efficient methods for solving matrix eigenvalue problems. Software libraries had a hodgepodge of methods, many of them based upon polynomial root finders.

Jim Wilkinson told a story about having two such subroutines, one using something called Bairstow's method and one using something called Laguerre's method. He couldn't decide which one was best, so he put them together in a program that first tried Bairstow's method, then printed a message and switched to Laguerre if Bairstow failed. After several months of frequent use at the National Physical Laboratory, he never saw a case where Bairstow failed. He wrote a paper with his recommendation for Bairstow's method and soon got a letter from a reader who claimed to have an example which Bairstow could not handle. Wilkinson tried the example with his program and found a bug. The program didn't print the message that it was switching to Laguerre.

Handbook for Automatic Computation

Over the period 1965-1970, Wilkinson and 18 of his colleagues published a series of papers in the Springer journal *Numerische Mathematik*. The papers offered procedures, written in Algol 60, for solving different versions of the linear equation problem and the eigenvalue problem. These were *research* papers presenting results about numerical stability, subtle details of implementation, and, in some cases, new methods.

In 1971, these papers were collected, sometimes with modifications, in the volume edited by Wilkinson and Reinsch. This book marks the first appearance, as an organized library, of the algorithms for the eigenvalue problem for dense, stored matrices that we still use in MATLAB today. The importance of using orthogonal transformations wherever possible was emphasized by Wilkinson and the other authors. The effectiveness of the newly discovered QR algorithms of J. Francis and the related LR algorithm of H. Rutishauser had only recently been appreciated.

Contents

The Algol 60 procedures are the focus of each chapter. These codes remain a clear, readable reference for the important ideas of modern numerical linear algebra. Part I of the volume is about the linear equation problem; part II is about the eigenvalue problem. There are 40 procedures in part I and 43 in part II.

Here is a list of the procedures in Part II. A suffix 2 in the procedure name indicates that it computes both eigenvalues and eigenvectors. The "bak" procedures apply reduction transformations to eigenvectors.

Many of the procedures work with a reduced form, which is tridiagonal for symmetric or Hermitian matrices and Hessenberg (upper triangular plus one subdiagonal) for nonsymmetric matrices. Since Algol does not have a complex number data type, the complex arrays are represented by pairs of real arrays.

Symmetric matrices

**Reduction to tridiagonal**
*tred1, tred2, tred3, trbak1, trbak3* Orthogonal tridiagonalization.
**Band**
*bandrd* Tridiagonalization.
*symray* Eigenvectors.
*bqr* One eigenvalue.
**Tridiagonal**
*imtql1, imtql2* All eigenvalues and vectors, implicit QR.
*tql1, tql2* All eigenvalues and vectors, explicit QR.
*ratqr* Few eigenvalues, rational QR.
*bisect* Few eigenvalues, bisection.
*tristurm* Few eigenvalues and vectors.
**Few eigenvalues**
*ritzit* Simultaneous iteration.
**Jacobi method**
*jacobi* Jacobi method.
**Generalized problem, $Ax = \lambda Bx$**
*reduc1, reduc2, rebaka, rebakb* Symmetric A, positive definite B.

Nonsymmetric matrices

**Reduction to Hessenberg**
*balance, balbak* Balance (scaling).
*dirhes, dirbak, dirtrans* Elementary, accumulated innerprod.
*elmhes, elmbak, elmtrans* Elementary.
*orthes, ortbak, ortrans* Orthogonal.
**Band**
*unsray* Eigenvectors.
**Hessenberg**
*hqr, hqr2* All eigenvalues and vectors, implicit QR.
*invit* Few eigenvectors, inverse iteration.
**Norm reducing**
*eigen* Eigenvalues.

Complex matrices

*comeig* Norm reducing Jacobi.
*comhes, combak* Reduce to Hessenberg form.
*comlr, comlr2* Complex LR algorithm.
*cxinvit* Inverse iteration.

Preferred paths

The preferred path for finding all the eigenvalues of a real, symmetric matrix is *tred1* followed by *imtql1*. The preferred path for finding all the eigenvalues and eigenvectors of a real, symmetric matrix is *tred2* followed by *imtql2*.

The preferred path for finding all the eigenvalues of a real, nonsymmetric matrix is *balanc*, *elmhes*, and finally *hqr*. The preferred path for finding all the eigenvalues and eigenvectors of a real, nonsymmetric matrix is *balanc*, *elmhes*, *elmtrans*, *hqr2*, and finally *balbak*.
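The first stage of the symmetric paths is the orthogonal tridiagonalization performed by *tred1* and *tred2*. As a hedged sketch of the idea in NumPy (an illustration, not a translation of the Algol procedures), Householder reflections zero out each column below the first subdiagonal while preserving the eigenvalues:

```python
import numpy as np

def tridiagonalize(A):
    """Householder reduction of a symmetric matrix to tridiagonal form,
    T = Q' A Q -- a rough sketch of what tred2 does, not a translation."""
    T = np.array(A, dtype=float)
    n = T.shape[0]
    Q = np.eye(n)
    for k in range(n - 2):
        x = T[k+1:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])  # sign chosen to avoid cancellation
        nv = np.linalg.norm(v)
        if nv == 0.0:
            continue              # column is already in tridiagonal form
        v /= nv
        H = np.eye(n)
        H[k+1:, k+1:] -= 2.0 * np.outer(v, v)
        T = H @ T @ H             # H is symmetric and orthogonal
        Q = Q @ H
    return T, Q
```

Each reflection is a similarity transformation, so the resulting tridiagonal T has the same eigenvalues as A, and Q accumulates the product of the reflections.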

QR vs. QL

"QR" and "QL" are right- and left-handed, or forward and backward, versions of the same algorithm. The algorithm is usually described in terms of factoring a matrix into an orthogonal factor, Q, and an upper or right triangular factor, R. This leads to QR algorithms. But for reasons having to do with graded matrices and terminating a loop at 1 rather than n-1, the authors of the Handbook decided to use left triangular and QL algorithms.
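To see the skeleton of the idea (a bare NumPy sketch; the Handbook codes add shifts and implicit transformations, which this deliberately omits), the explicit unshifted QR iteration repeatedly factors T = QR and reforms T = RQ. Each step is a similarity transformation, so the eigenvalues are preserved while the off-diagonal entries decay:

```python
import numpy as np

# A symmetric tridiagonal example with distinct eigenvalues (3 +/- sqrt(3), 3).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
T = A.copy()
for _ in range(200):
    Q, R = np.linalg.qr(T)   # factor T = Q R
    T = R @ Q                # R Q = Q' T Q is similar to T
# the diagonal of T now approximates the eigenvalues of A
```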

Historical note

When the papers from *Numerische Mathematik* were collected to form the 1971 *Handbook for Automatic Computation, Volume II, Linear Algebra*, almost all of them were reprinted without change. But, despite what its footnote says, Contribution II/4, *The Implicit QL Algorithm*, never appeared in the journal. The paper is the merger of a half-page paper by Austin Dubrulle and earlier contributions by R.S. Martin and Wilkinson. Dubrulle was able to reduce the operation count in the inner loop.

The authors of Contribution II/4 of the Handbook are listed as A. Dubrulle, R.S.Martin, and J.H.Wilkinson, although the three of them never actually worked together. Proper credit is given, but I'm afraid that an interesting little bit of history has been lost.

Dubrulle's version of the implicit tridiagonal QR algorithm continues to be important today. In future posts I will describe how the Handbook led to EISPACK, LINPACK and LAPACK. In LAPACK, the *imtql1* and *imtql2* functions are combined into one subroutine, DSTEQR. The code for the inner loop is very similar to Austin's note.

Years after making this contribution, Austin returned to grad school and was one of my Ph.D. students at the University of New Mexico. His thesis was about the distribution of the singular values of matrices derived from photographs.





*
*### Wilkinson and Reinsch Handbook on Linear Algebra

The ACM Special Interest Group on Programming Languages, SIGPLAN, expects to hold the fourth in a series of conferences on the History of Programming Languages in 2020, see HOPL-IV. The first drafts of papers are to be submitted by August, 2018. That long lead time gives me the opportunity to write a detailed history of MATLAB. I plan to write the paper in sections, which I'll post in this blog as they are available.

The mathematical basis for the first version of MATLAB begins with a volume edited by J. H. Wilkinson and C. Reinsch, *Handbook for Automatic Computation, Volume II, Linear Algebra*, published by the eminent German publisher Springer-Verlag, in 1971.

Contents

- Eigenvalues
- Handbook for Automatic Computation
- Contents
- Symmetric matrices
- Nonsymmetric matrices
- Complex matrices
- Preferred paths
- QR vs. QL
- Historical note

The mathematical term "eigenvalue" is a linguistic hybrid. In German the term is "eigenwert". One Web dictionary says the English translation is "intrinsic value". But after a few decades using terms like "characteristic value" and "latent root", mathematicians have given up trying to translate the entire word and generally translate only the second half.

The *eigenvalue problem* for a given matrix $A$ is the task of finding scalars $\lambda$ and, possibly, the corresponding vectors $x$ so that

$$Ax = \lambda x$$

It is important to distinguish the case where $A$ is symmetric. After the *linear equation problem*,

$$Ax = b$$

the eigenvalue problem is the most important computational task in numerical linear algebra.

The state of the art in numerical linear algebra 50 years ago, in the 1960's, did not yet provide reliable, efficient methods for solving matrix eigenvalue problems. Software libraries had a hodgepodge of methods, many of them based upon polynomial root finders.

Jim Wilkinson told a story about having two such subroutines, one using something called Bairstow's method and one using something called Laguerre's method. He couldn't decide which one was best, so he put them together in a program that first tried Bairstow's method, then printed a message and switched to Laguerre if Bairstow failed. After several months of frequent use at the National Physical Laboratory, he never saw a case where Bairstow failed. He wrote a paper with his recommendation for Bairstow's method and soon got a letter from a reader who claimed to have an example which Bairstow could not handle. Wilkinson tried the example with his program and found a bug. The program didn't print the message that it was switching to Laguerre.

Handbook for Automatic ComputationOver the period 1965-1970, Wilkinson and 18 of his colleagues published a series of papers in the Springer journal *Numerische Mathematik*. The papers offered procedures, written in Algol 60, for solving different versions of the linear equation problem and the eigenvalue problem. These were *research* papers presenting results about numerical stability, subtle details of implementation, and, in some cases, new methods.

In 1971, these papers were collected, sometimes with modifications, in the volume edited by Wilkinson and Reinsch. This book marks the first appearance is an organized library of the algorithms for the eigenvalue problem for dense, stored matrices that we still use in MATLAB today. The importance of using orthogonal transformations wherever possible was exposed by Wilkinson and the other authors. The effectiveness of the newly discovered QR algorithms of J. Francis and the related LR algorithm of H. Rutishauser had only recently been appreciated.

ContentsThe Algol 60 procedures are the focus of each chapter. These codes remain a clear, readable reference for the important ideas of modern numerical linear algebra. Part I of the volume is about the linear equation problem; part II is about the eigenvalue problem. There are 40 procedures in part I and 43 in part II.

Here is a list of the procedures in Part II. A suffix 2 in the procedure name indicates that it computes both eigenvalues and eigenvectors. The "bak" procedures apply reduction transformations to eigenvectors.

Many of the procedures work with a reduced form, which is tridiagonal for symmetric or Hermitian matrices and Hessenberg (upper triangular plus one subdiagonal) for nonsymmetric matrices., Since Algol does not have a complex number data type, the complex arrays are represented by pairs of real arrays.

Symmetric matrices
**Reduction to tridiagonal**
*tred1, tred2, tred3, trbak1, trbak3* Orthogonal tridiagonalization.
**Band**
*bandrd* Tridiagonalization.
*symray* Eigenvectors.
*bqr* One eigenvalue.
**Tridiagonal**
*imtql1,imtql2* All eigenvalues and vectors, implicit QR.
*tql1, tql2* All eigenvalues and vectors, explicit QR.
*ratqr Few eigenvalues, rational QR.
bisect Few eigenvalues, bisection.
tristurm Few eigenvalues and vectors.
Few eigenvalues
ritzit Simultaneous iteration.
Jacobi method
jacobi Jacobi method.
Generalized problem, Ax = \lambda Bx
reduc1, reduc2, rebaka, rebakb Symmetric A, positive definite B.
*

*Nonsymmetric matrices*

**Reduction to Hessenberg**
*balance, balbak* Balance (scaling)
*dirhes, dirbak, dirtrans* Elementary, accumulated innerprod.
*elmhes, elmbak, elmtrans* Elementary.
*orthes, ortbak, ortrans* Orthogonal.
**Band**
*unsray* Eigenvectors.
**Hessenberg**
*hqr, hqr2* All eigenvalues and vectors, implicit QR.
*invit* Few eigenvectors, inverse iteration.
**Norm reducing**
*eigen* Eigenvalues.

*comeig* Norm reducing Jacobi.
*comhes, combak* Reduce to Hessenberg form.
*comlr, comlr2* Complex LR algorithm.
*cxinvit* Inverse iteration.

The preferred path for finding all the eigenvalues of a real, symmetric matrix is *tred1* followed by *imtql1*. The preferred path for finding all the eigenvalues and eigenvectors of a real, symmetric matrix is *tred2* followed by *imtql2*.

The preferred path for finding all the eigenvalues of a real, nonsymmetric matrix is *balanc*, *elmhes*, and finally *hqr*. The preferred path for finding all the eigenvalues and eigenvectors of a real, nonsymmetric matrix is *balanc*, *elmhes*, *elmtrans*, *hqr2*, and finally *balbak*.

"QR" and "QL" are right- and left-handed, or forward and backward, versions of the same algorithm. The algorithm is usually described in terms of factoring a matrix into an orthogonal factor, Q, and an upper or right triangular factor, R. This leads to QR algorithms. But for reasons having to do with graded matrices and terminating a loop at 1 rather than n-1, the authors of the Handbook decided to use left triangular and QL algorithms.

Historical noteWhen the papers from *Numerische Mathematik* were collected to form the 1971 *Handbook for Automatic Computation, Volume II, Linear Algebra*, almost all of them where reprinted without change. But, despite what its footnote says, Contribution II/4, *The Implicit QL Algorithm*, never appeared in the journal. The paper is the merger of a half-page paper by Austin Dubrulle and earlier contributions by R.S. Martin and Wilkinson. Dubrulle was able to reduce the operation count in the inner loop.

The authors of Contribution II/4 of the Handbook are listed as A. Dubrulle, R.S.Martin, and J.H.Wilkinson, although the three of them never actually worked together. Proper credit is given, but I'm afraid that an interesting little bit of history has been lost.

Dubrulle's version of the implicit tridiagonal QR algorithm continues to be important today. In future posts I will describe how the Handbook led to EISPACK, LINPACK and LAPACK. In LAPACK the *imtql1* and *imtql2* functions are combined into one subroutine named DSTEQR.F The code for the inner loop is very similar to Austin's note.

Years after making this contribution, Austin returned to grad school and was one of my Ph.D. students at the University of New Mexico. His thesis was about the distribution of the singular value of the matrices derived from photographs.

\n'); d.write(code_string); // Add copyright line at the bottom if specified. if (copyright.length > 0) { d.writeln(''); d.writeln('%%'); if (copyright.length > 0) { d.writeln('% _' + copyright + '_'); } } d.write('\n'); d.title = title + ' (MATLAB code)'; d.close(); } -->

Get
the MATLAB code (requires JavaScript)

Published with MATLAB® R2017a

**Reduction to tridiagonal**%

*tred1, tred2, tred3, trbak1, trbak3*Orthogonal tridiagonalization. %

**Band**%

*bandrd*Tridiagonalization. %

*symray*Eigenvectors. %

*bqr*One eigenvalue. %

**Tridiagonal**%

*imtql1,imtql2*All eigenvalues and vectors, implicit QR. %

*tql1, tql2*All eigenvalues and vectors, explicit QR. %

*ratqr Few eigenvalues, rational QR. %*

*bisect*Few eigenvalues, bisection. %*tristurm*Few eigenvalues and vectors. %**Few eigenvalues**%*ritzit*Simultaneous iteration. %**Jacobi method**%*jacobi*Jacobi method. %**Generalized problem, Ax = \lambda Bx**%*reduc1, reduc2, rebaka, rebakb*Symmetric A, positive definite B. % %% Nonsymmetric matrices % % %**Reduction to Hessenberg**%*balance, balbak*Balance (scaling) %*dirhes, dirbak, dirtrans*Elementary, accumulated innerprod. %*elmhes, elmbak, elmtrans*Elementary. %*orthes, ortbak, ortrans*Orthogonal. %**Band**%*unsray*Eigenvectors. %**Hessenberg**%*hqr, hqr2*All eigenvalues and vectors, implicit QR. %*invit*Few eigenvectors, inverse iteration. %**Norm reducing**%*eigen*Eigenvalues. % %% Complex matrices % % %*comeig*Norm reducing Jacobi. %*comhes, combak*Reduce to Hessenberg form. %*comlr, comlr2*Complex LR algorithm. %*cxinvit*Inverse iteration. % %% Preferred paths % The preferred path for finding all the eigenvalues of a real, symmetric % matrix is _tred1_ followed by _imtql1_. % The preferred path for finding all the eigenvalues and eigenvectors % of a real, symmetric matrix is _tred2_ followed by _imtql2_. % % The preferred path for finding all the eigenvalues of a real, nonsymmetric % matrix is _balanc_, _elmhes_, and finally _hqr_. % The preferred path for finding all the eigenvalues and eigenvectors % of a real, nonsymmetric matrix is _balanc_, _elmhes_, _elmtrans_, _hqr2_, % and finally _balbak_. %% QR vs. QL % "QR" and "QL" are right- and left-handed, or forward and backward, % versions of the same algorithm. The algorithm is usually described in % terms of factoring a matrix into an orthogonal factor, Q, and an upper % or right triangular factor, R. This leads to QR algorithms. But for % reasons having to do with graded matrices and terminating a loop at |1| % rather than |n-1|, the authors of the Handbook decided to use left % triangular and QL algorithms. 
#### Historical note

When the papers from *Numerische Mathematik* were collected to form the 1971 *Handbook for Automatic Computation, Volume II, Linear Algebra*, almost all of them were reprinted without change. But, despite what its footnote says, Contribution II/4, *The Implicit QL Algorithm*, never appeared in the journal. The paper is the merger of a half-page paper by Austin Dubrulle and earlier contributions by R.S. Martin and Wilkinson. Dubrulle was able to reduce the operation count in the inner loop.

The authors of Contribution II/4 of the Handbook are listed as A. Dubrulle, R.S. Martin, and J.H. Wilkinson, although the three of them never actually worked together. Proper credit is given, but I'm afraid that an interesting little bit of history has been lost.

Dubrulle's version of the implicit tridiagonal QR algorithm continues to be important today. In future posts I will describe how the Handbook led to EISPACK, LINPACK and LAPACK. In LAPACK the *imtql1* and *imtql2* functions are combined into one subroutine. The code for the inner loop is very similar to Austin's note.

Years after making this contribution, Austin returned to grad school and was one of my Ph.D. students at the University of New Mexico. His thesis was about the distribution of the singular values of matrices derived from photographs.

### Leslie Fox

Leslie Fox (1918-1992) was a British numerical analyst and a contemporary of three men who played an important role in my education and early professional life: John Todd, George Forsythe, and Jim Wilkinson. Although I saw less of Fox than I did of the others, he was still an important influence in my life.

#### NPL

The National Physical Laboratory, in Teddington, England, just west of London, is the British equivalent of the U.S. agency that was originally called the National Bureau of Standards and is now known as the National Institute of Standards and Technology. Fox worked at NPL from 1945 until 1956. Wilkinson worked there his entire professional life. Although Fox was from Oxford and Wilkinson was from Cambridge, they became close friends.

In the first few years after World War II, both NPL and NBS sponsored the development of first generation stored program electronic computers. An initial design for the NPL machine, the Automatic Computing Engine, was proposed by Alan Turing. But the design was based in part on secret work that Turing had done during the War for breaking German military codes. The management of NPL at the time was not allowed access to Turing's secret work and judged the machine to be impractical with current technology. Disillusioned, Turing left NPL and went to the University of Manchester, where they were also building a computer.

It fell to one of Turing's colleagues, Jim Wilkinson, to take over the project and build a simplified version of the ACE, the Pilot ACE. Wilkinson went on to become the world's leading authority on matrix computation. He was my inspiration and friend. His work provided the mathematical basis for the first MATLAB. Fox was less involved in the hardware development, but he was certainly an early user of the Pilot ACE.

#### NBS

Meanwhile, in the U.S., NBS was sponsoring the development of two early machines: one at its headquarters in Maryland, the Standards Eastern Automatic Computer, or SEAC, and one in southern California, at UCLA, the Standards Western Automatic Computer, or SWAC.

NBS also established a research group at UCLA, the Institute for Numerical Analysis, or INA, to contribute to the work on the SWAC. Two members of the INA were John Todd, who a few years later as a faculty member at Caltech, would introduce me to numerical analysis and computing, and George Forsythe, who would go on to establish the Computer Science Department at Stanford and also be my Ph.D. thesis advisor.

The groups at NPL in England and INA in California were in close contact with each other and often exchanged visits. As a result, the four men -- Fox, Wilkinson, Todd, and Forsythe -- became friends and colleagues and leading figures in the budding field that today would be called computational science.

#### Modern Computing Methods

When I took my first numerical analysis course from Todd at Caltech in 1959, we didn't have a regular textbook. There weren't many to choose from. Todd was in the process of writing what would become his *Survey of Numerical Analysis*. He passed out purple dittoed copies of an early draft of the book. (Today, Amazon has 19 used copies available for $8.23 with free shipping.)

In lieu of a formal textbook, Todd also referred to this 130-page paper-bound pamphlet. Its title says "Modern", but it was first published 60 years ago, in 1957. Here is a link to the hardcover second edition, *Modern Computing Methods*. (Amazon has 3 used copies available for $7.21, plus $3.99 shipping.)

The publisher of the first edition is listed as the picturesque *Her Majesty's Stationery Office*. But there are no authors listed. The source is just the National Physical Laboratory. For many years I believed the authors were Fox and Wilkinson. Now, while writing this post, I have come across this review by Charles Saltzer in *SIAM Review*. The authors are revealed to be "E. T. Goodwin *et al*". Goodwin was another member of the NPL group. So, I suspect that Fox and Wilkinson were that " *et al* ".

Fox resigned from NPL in 1956 and, after a year in the U.S. visiting U. C. Berkeley, joined the faculty at his old school, Oxford. He remained there the rest of his life. He established the Oxford Computing Laboratory. In 1963 he was appointed Professor of Numerical Analysis and Fellow of Balliol College. (My good friend Nick Trefethen now holds both of those appointments.)

Fox was a prolific author. The bibliography in the Leslie Fox booklet lists eight books. He was also a popular professor. The same booklet lists 18 D. Phil. students whom he supervised and 71 D. Phil. students for whom he was a member of the committee.

#### NAG

The Numerical Algorithms Group is a British company, founded in 1970 and headquartered in Oxford, which markets mathematical software, consulting, and related services. Fox helped the company get started and served on its Board of Directors for many years.

#### Householder VIII

Fox hosted the eighth Householder Symposium at Oxford in 1981. Here is the group photo. Fox is in the center, in the brown suit. (I am in the second row, second from left, in the blue shirt.)

#### Fox Prize

The Leslie Fox Prize for Numerical Analysis is awarded every two years by the IMA, the British Institute of Mathematics and its Applications. The prize honors "young numerical analysts worldwide", that is, anyone less than 31 years old. One or two first prize winners and usually several second prize winners are announced at symposia in the UK in June of odd-numbered years. Eighteen meetings and prize announcements have been held since the award was established in 1985. The 2017 call for papers is IMA Leslie Fox Prize.

The prize winners for 2017 were announced at a meeting in Glasgow on June 26. The first prize was awarded to Nicole Spillane of École Polytechnique. Her winning paper was *An Adaptive Multipreconditioned Conjugate Gradient Algorithm*.

#### Blue Plaque

Wikipedia explains that "a blue plaque is a permanent sign installed in a public place in the United Kingdom and elsewhere to commemorate a link between that location and a famous person or event, serving as a historical marker." In June, two blue plaques were unveiled in the Dewsbury Railway Station, in West Yorkshire, about 40 miles east of Manchester. (Google Maps tells me that today trains to Manchester leave every half hour and usually make the trip in 40 minutes.)

Photo credit: Huddersfield Daily Examiner

The plaques honor two computer scientists, Leslie Fox and Tom Kilburn. Here is a local newspaper story from the Huddersfield Daily Examiner covering the unveiling. It turns out the two were schoolmates at the local grammar school.

Tom Kilburn worked with Freddie Williams at the University of Manchester to develop the Williams-Kilburn Tube, an early random access memory employing the static charge left by a grid of dots on a cathode ray tube. The scheme provided fast access time, but required constant tuning.

Kilburn and Williams needed a stored-program digital computer to test their storage device. So they built a relatively simple machine that came to be known as the "Manchester Baby". One day in June 1948, on the train from Dewsbury to Manchester, Kilburn wrote a program to find the prime factors of an integer. The code was probably only a few dozen machine instructions. When the team got the machine in shape to actually run the program, it worked correctly the first time. This was the world's first working computer program, and the first instance of mathematical software.
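The scale of that first program is easy to appreciate today. As an illustration only, not Kilburn's actual code, a trial-division factorization takes about as many lines of modern Python as the Baby had machine instructions:

```python
# Illustration only, not Kilburn's program: trial division
# finds the prime factors of an integer in roughly a dozen lines.
def prime_factors(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each small factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(60))    # prints [2, 2, 3, 5]
```

Of course the Baby had no division instruction, so any factoring it did had to be built from repeated subtraction, which only underlines how remarkable a first run it was.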

Kilburn became the Professor of Computer Science at Manchester about the same time as Fox became the Professor of Numerical Analysis at Oxford.

#### Further Reading

Two articles about Leslie Fox are available on the Web at Wikipedia and at the MacTutor History of Mathematics Archive.

The booklet prepared by Oxford for his memorial service is available at Leslie Fox booklet.



Published with MATLAB® R2017a

## Pages