# Feed aggregator

### Responsive Single-Page Design with Sass Training Classes

websitedeveloper - Wed, 11/15/2017 - 19:36

A single-page application (SPA) is a web application or web site that interacts with the user by dynamically rewriting the current page rather than loading entire new pages from a server. This approach avoids interruption of the user experience between successive pages, making the application behave more like a desktop application.

In an SPA, either all necessary code – HTML, JavaScript, and CSS – is retrieved with a single page load, or the appropriate resources are dynamically loaded and added to the page as necessary, usually in response to user actions. The page does not reload at any point in the process, nor does control transfer to another page, although the location hash or the HTML5 History API can be used to provide the perception and navigability of separate logical pages in the application. Interaction with a single-page application often involves dynamic communication with the web server behind the scenes.

Sass (syntactically awesome stylesheets) is a style sheet language initially designed by Hampton Catlin and developed by Natalie Weizenbaum. After its initial versions, Weizenbaum and Chris Eppstein continued to extend Sass with SassScript, a simple scripting language used in Sass files.

Sass is a scripting language that is interpreted or compiled into Cascading Style Sheets (CSS). SassScript is the scripting language itself. Sass consists of two syntaxes. The original syntax, called "the indented syntax", uses a syntax similar to Haml. It uses indentation to separate code blocks and newline characters to separate rules. The newer syntax, "SCSS", uses block formatting like that of CSS. It uses braces to denote code blocks and semicolons to separate lines within a block. The indented syntax and SCSS files are traditionally given the extensions .sass and .scss, respectively.

CSS3 consists of a series of selectors and pseudo-selectors that group rules that apply to them. Sass extends CSS with mechanisms familiar from traditional programming languages – such as variables, nesting, and mixins – that are not available in CSS3 itself.

Topics included:

Learn how to build dynamic, responsive single-page designs with HTML, JavaScript, and CSS. The website featured in this course combines docking navigation, columns that adjust without cluttering your site layout or HTML markup, and animated scrolling effects that respond to user direction. Author Ray Villalobos shows you how to build it. He starts with a lean, easy-to-read template, and then explains how to add the features that make single-page designs so great, with these four frameworks:

• Compass, whose Sass mixins help you leverage CSS3 features like Flexbox
• Susy 2, the framework that "subtracts" the math from responsive grid design
• ScrollMagic, for adding "magical" scroll effects
• Breakpoint, which makes writing media queries in Sass a snap

But this course isn't just about the tools. It's a realistic project that epitomizes many of the design challenges website developers face in the real world. Start watching now and learn how to use HTML, jQuery, and CSS to build your own dynamic, deeply responsive designs.

Topics include:

• Analyzing the project before you begin
• Creating basic styles
• Building your own Sass mixins
• Coding the navigation
• Making the navigation responsive, with grids
• Using a split layout
• Creating tween animations
• Controlling scenes with scrolling

### Responsive Web Design Training Classes

websitedeveloper - Wed, 11/15/2017 - 19:34

Responsive web design (RWD) is an approach to web design which makes web pages render well on a variety of devices and window or screen sizes. Recent work also considers viewer proximity as part of the viewing context, an extension of RWD. Consistent content, design, and performance across all devices are necessary to ensure usability and satisfaction.

A site designed with RWD adapts the layout to the viewing environment by using fluid, proportion-based grids, flexible images, and CSS3 media queries (an extension of the @media rule) in the following ways:

• The fluid grid concept calls for page element sizing to be in relative units like percentages, rather than absolute units like pixels or points.
• Flexible images are also sized in relative units, to prevent them from displaying outside their containing element.
• Media queries allow the page to use different CSS style rules based on characteristics of the device the site is being displayed on, most commonly the width of the browser.

Responsive web design has become more important as mobile traffic now accounts for more than half of total internet traffic. Accordingly, Google announced Mobilegeddon in 2015 and began boosting the rankings of mobile-friendly sites when a search is made from a mobile device. Responsive web design is an example of user interface plasticity.

Topics included:

Discover how to use responsive web design to make your site more readable and efficient—on any device. Chris Converse shares his own specialized techniques for combining HTML and CSS into a web layout that can adapt to different screen sizes and orientations. The course takes the site from start to finish, from setting up the HTML page and containers to styling established elements for small, medium, and large screens. Along the way, Chris shows how to reposition the nav bar for better viewing on mobile devices, create animated transitions, and turn bulleted lists into interactive menus with full touch support. Plus, learn how to use CSS pseudo-elements and adapt layouts for print to save ink and paper.

The exercise files for this course are free to all members. Download them and start creating your own responsive websites today.

Topics include:

• Planning your layout
• Creating HTML containers, content, and links
• Creating and styling the layout with CSS
• Creating a menu system
• Styling headings, body text, and footers
• Swapping high-resolution graphics for Retina displays
• Making sure content is printable

### Dreamweaver CC Training Classes

websitedeveloper - Wed, 11/15/2017 - 19:31

Adobe Dreamweaver is a proprietary web development tool developed by Adobe Systems. Dreamweaver was created by Macromedia in 1997 and maintained by that company until Macromedia was acquired by Adobe Systems in 2005.

Adobe Dreamweaver is available for macOS and for Windows.

Following Adobe's acquisition of the Macromedia product suite, releases of Dreamweaver subsequent to version 8.0 have been more compliant with W3C standards. Recent versions have improved support for Web technologies such as CSS, JavaScript, and various server-side scripting languages and frameworks including ASP (ASP JavaScript, ASP VBScript, ASP.NET C#, ASP.NET VB), ColdFusion, Scriptlet, and PHP.

Topics included:

Discover how to use Adobe Dreamweaver CC 2017—the popular web design and development application—to create and publish websites using standards-based technology and a simplified user interface. In this course, David Powers shows how to get started with Dreamweaver, covering the user interface and providing guidance on creating new files, setting up a logical structure for content, and generating clear and accessible code. David explains how to format basic HTML and CSS using the visual and code-based tools in Dreamweaver. He also explores site management techniques, including checking for broken links, connecting to a remote server, and managing multiple sites.

Topics include:

• The Dreamweaver interface
• Toolbars
• Managing workspaces
• Changing preferences
• Managing projects
• Defining a new site
• Creating new documents
• Editing in Live view
• Reusing code snippets
• Structuring documents
• Managing CSS
• Working with images, videos, and tables
• Site management

### Three-Term Recurrence Relations and Bessel Functions

latest Blogs - Mon, 11/06/2017 - 22:30

Three-term recurrence relations are the basis for computing Bessel functions.

A Familiar Three-Term Recurrence

Start with two large integers, say 514229 and 317811. For reasons that will soon become clear, I'll label them $f_{30}$ and $f_{29}$.

```matlab
clear
n = 30
f(30) = 514229;
f(29) = 317811
```

```
n =
    30

f =
  Columns 1 through 6
         0         0         0         0         0         0
  Columns 7 through 12
         0         0         0         0         0         0
  Columns 13 through 18
         0         0         0         0         0         0
  Columns 19 through 24
         0         0         0         0         0         0
  Columns 25 through 30
         0         0         0         0    317811    514229
```

Now take repeated differences, and count backwards.

$$f_{n-1} = f_{n+1} - f_n, \ n = 29, 28, ..., 2, 1$$

```matlab
for n = 29:-1:2
    f(n-1) = f(n+1) - f(n);
end
n
f
```

```
n =
     2

f =
  Columns 1 through 6
         0         1         1         2         3         5
  Columns 7 through 12
         8        13        21        34        55        89
  Columns 13 through 18
       144       233       377       610       987      1597
  Columns 19 through 24
      2584      4181      6765     10946     17711     28657
  Columns 25 through 30
     46368     75025    121393    196418    317811    514229
```

Recognize those integers? They're the venerable Fibonacci numbers. I've just reversed the usual procedure for computing them. And, I didn't pick the first two values at random. In fact, their choice is very delicate. Try picking any other six-digit integers, and see if you can take 30 differences before you hit a negative value. Of course, we usually start with $f_0 = 0$, $f_1 = 1$ and go forward with additions.
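The same backward-differencing experiment can be sketched outside MATLAB. Here is a small Python version (my own translation of the steps above, using a plain list with a dummy entry so the indexing matches MATLAB's 1-based `f(1)..f(30)`):

```python
# Run the Fibonacci three-term recurrence backwards by repeated differencing,
# starting from the two carefully chosen seeds f(30) = 514229, f(29) = 317811.
f = [0] * 31          # f[0] is a dummy; f[1]..f[30] mirror MATLAB's f(1)..f(30)
f[30] = 514229
f[29] = 317811
for n in range(29, 1, -1):        # n = 29, 28, ..., 2
    f[n - 1] = f[n + 1] - f[n]

print(f[1:7])   # the familiar start of the Fibonacci sequence
```

As in the MATLAB run, the differences walk back down to 0, 1, 1, 2, 3, 5, ...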

The relation

$$f_n = f_{n+1} - f_{n-1}$$

is an example of a three-term recurrence. I've written it in such a way that I can go either forward or backward.

Friedrich Bessel

Friedrich Bessel was a German astronomer, mathematician and physicist who lived from 1784 until 1846. He was a contemporary of Gauss, but the two had some sort of falling out. Although he did not have a university education, he made major contributions to precise measurements of planetary orbits and distances to stars. He has a crater on the Moon and an asteroid named after him.

In mathematics, the special functions that bear his name were actually discovered by someone else, Daniel Bernoulli, and christened after Bessel's death. They play the same role in polar coordinates that the trig functions play in Cartesian coordinates. They are applicable in a wide range of fields, from physics to finance.

Bessel Functions

Bessel functions are defined by an ordinary differential equation that generalizes the harmonic oscillator to polar coordinates. The equation contains a parameter $n$, the order.

$$x^2 y'' + x y' + (x^2 - n^2) y = 0$$

Solutions to this equation are also sometimes called cylindrical harmonics.

For my purposes here, their most important property is the three-term recurrence relation:

$$\frac{2n}{x} J_n(x) = J_{n-1}(x) + J_{n+1}(x)$$

This has a variable coefficient that depends upon the two parameters, the order $n$ and the argument $x$.

To complete the specification, it would be necessary to supply two initial conditions that depend upon $x$. But, as we shall see, using the recurrence in the forward direction is disastrous numerically. Alternatively, if we could somehow supply two end conditions, we could use the recurrence in the reverse direction. This is the idea behind a classical method known as Miller's algorithm.

Miller's Algorithm

Miller's algorithm also employs this interesting identity, which I'll call the normalizer.

$$J_0(x) + 2 J_2(x) + 2 J_4(x) + 2 J_6(x) + \cdots = 1$$

It's not obvious where this identity comes from and I won't try to derive it here.

We don't know the end conditions, but we can pick arbitrary values, run the recurrence backwards, and then use the normalizer to rescale.

Step 1. Pick a large $N$ and temporarily let

$$\tilde{J}_N = 0$$

$$\tilde{J}_{N-1} = 1$$

Then for $n = N-1, ..., 1$ compute

$$\tilde{J}_{n-1} = \frac{2n}{x} \tilde{J}_n - \tilde{J}_{n+1}$$

Step 2. Normalize:

$$\sigma = \tilde{J}_0 + 2 \tilde{J}_2 + 2 \tilde{J}_4 + ...$$

$$J_n = \tilde{J}_n/\sigma, \ n = 0, ..., N$$
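Before moving on, the two steps can be sanity-checked with a short Python sketch (my own translation, not the post's MATLAB; the function name `miller_bessel_j` is mine):

```python
def miller_bessel_j(x, nmax, N=50):
    """Miller's algorithm (a sketch): backward recurrence, then normalize.

    Returns approximations to J_0(x), ..., J_nmax(x).  N is the large
    starting order; it only needs to be comfortably bigger than nmax.
    """
    Jt = [0.0] * (N + 1)
    Jt[N - 1] = 1.0                       # arbitrary trial end conditions
    for n in range(N - 1, 0, -1):         # backward recurrence
        Jt[n - 1] = (2.0 * n / x) * Jt[n] - Jt[n + 1]
    # Normalizer: J_0 + 2*J_2 + 2*J_4 + ... = 1
    sigma = Jt[0] + 2.0 * sum(Jt[2::2])
    return [t / sigma for t in Jt[: nmax + 1]]

J = miller_bessel_j(1.0, 8)
print(J[0], J[1])   # close to 0.765197686557966 and 0.440050585744933
```

Even though the trial end values are nowhere near the true Bessel values, the normalization step rescales the whole sequence at once.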

I'll show some results in a moment, after I introduce an alternative formulation.

Matrix Formulation

You know that I like to describe algorithms in matrix terms. The three-term recurrence generates a tridiagonal matrix. But which three diagonals? I can put the three below the diagonal, above the diagonal, or centered on the diagonal. This is accomplished with the second argument of the sparse matrix generator, spdiags. Here is the code.

```matlab
type blt
```

```matlab
function B = blt(n,x,loc)
% BesseL Three-term recurrence.
% B = blt(n,x,loc) is an n-by-n sparse tridiagonal coefficient matrix
% that generates the Bessel functions besselj((0:n-1)',x).
% loc specifies the location of the diagonals.
%   loc = 'center', the default, centers the three diagonals.
%   loc = 'lower' puts the three diagonals in the lower triangle.
%   loc = 'upper' puts the three diagonals in the upper triangle.
    if nargin == 2
        loc = 'center';
    end
    switch loc
        case 'center', locd = -1:1;
        case 'lower',  locd = -2:0;
        case 'upper',  locd = 0:2;
    end
    % Three-term recurrence: 2*n/x * J_n(x) = J_(n-1)(x) + J_(n+1)(x).
    d = -2*(0:n-1)'/x;
    e = ones(n,1);
    B = spdiags([e d e],locd,n,n);
end
```

Lower

If there were no roundoff error, and if we knew the values of the first two Bessel functions, $J_0(x)$ and $J_1(x)$, we could use backslash, the linear equation solver, with the diagonals in the lower triangle to compute $J_n(x)$ for $n > 1$. Here, for example, is $x = 1$.

```matlab
format long
J = @(n,x) besselj(n,x);
n = 8
x = 1.0
B = blt(n,x,'lower');
rhs = zeros(n,1);
rhs(1:2) = J((0:1)',x);
v = B\rhs
```

```
n =
     8

x =
     1

v =
   0.765197686557967
   0.440050585744934
   0.114903484931901
   0.019563353982668
   0.002476638964110
   0.000249757730213
   0.000020938338022
   0.000001502326053
```

These values are accurate to almost all the digits shown. Except for the last one. It's beginning to show the effects of severe numerical instability. Let's try a larger value of n.

```matlab
n = 18
B = blt(n,x,'lower');
rhs = zeros(n,1);
rhs(1:2) = J((0:1)',x);
v = B\rhs;
semilogy(0:n-1,v,'o-')
title('J_n(1)')
xlabel('n')
```

```
n =
    18
```

The values beyond about n = 9 are suspect and, in fact, totally wrong. The forward elimination that backslash uses to solve this lower triangular system is just the three-term recurrence for increasing n. And that recurrence has two solutions -- the one we're trying to compute and a second one corresponding to $Y_n(x)$, the Bessel function of the second kind. $Y_n(x)$ is stimulated by roundoff error and completely takes over for $n > 9$.

So the lower triangular option -- the forward recurrence -- is unsatisfactory for two reasons: it requires two Bessel values to get started and it is unstable.
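The instability is easy to reproduce without any matrix machinery. Here is a Python sketch of the forward recurrence, seeded with the accurate values of $J_0(1)$ and $J_1(1)$ shown above:

```python
# Forward recurrence J_{n+1} = (2n/x) J_n - J_{n-1}, seeded with accurate
# values of J_0(1) and J_1(1).  The Y_n component introduced by rounding
# error grows rapidly and swamps the tiny true values.
x = 1.0
J = [0.7651976865579666, 0.4400505857449335]   # J_0(1), J_1(1)
for n in range(1, 18):
    J.append((2.0 * n / x) * J[n] - J[n - 1])

# The true J_17(1) is about 2e-20; the forward result is off by
# many orders of magnitude by this point.
print(J[5], J[17])
```

The early terms come out fine; by n = 17 the computed value bears no resemblance to the true one, exactly as in the semilogy plot above.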

Upper

We need to make one modification to the upper triangular coefficient matrix.

```matlab
n = 30;
B = blt(n,x,'upper');
B(n-1,n) = 0;
```

Now the last two rows and columns are a 2-by-2 identity.

```matlab
full(B(n-1:n,n-1:n))
```

```
ans =
     1     0
     0     1
```

The back substitution process just runs the three-term recursion backwards. We need to start with $J_{n-1}(x)$ and $J_{n-2}(x)$.

```matlab
rhs = zeros(n,1);
rhs(n-1:n) = J((n-2:n-1)',x);
format long e
v = B\rhs
```

```
v =
   7.651976865579656e-01
   4.400505857449330e-01
   1.149034849319004e-01
   1.956335398266838e-02
   2.476638964109952e-03
   2.497577302112342e-04
   2.093833800238925e-05
   1.502325817436807e-06
   9.422344172604491e-08
   5.249250179911870e-09
   2.630615123687451e-10
   1.198006746303136e-11
   4.999718179448401e-13
   1.925616764480172e-14
   6.885408200044221e-16
   2.297531532210343e-17
   7.186396586807488e-19
   2.115375568053260e-20
   5.880344573595754e-22
   1.548478441211652e-23
   3.873503008524655e-25
   9.227621982096665e-27
   2.098223955943776e-28
   4.563424055950103e-30
   9.511097932712488e-32
   1.902951751891381e-33
   3.660826744416801e-35
   6.781552053554108e-37
   1.211364502417112e-38
   2.089159981718163e-40
```

These values are accurate to almost all the significant digits shown. The backwards recursion suppresses the unwanted $Y_n(x)$ and the process is stable.

Center

Can we avoid having to know those two starting values? Yes, by using a matrix formulation of the Miller algorithm. Insert the normalization condition in the first row. Now $B$ is tridiagonal, except for the first row, which is

[1 0 2 0 2 0 2 ... ]

Other rows have $1$s on the super- and subdiagonal and $-2n/x$ on the diagonal of the $(n+1)$-st row.

```matlab
n = 30;
B = blt(n,x);
B(1,1:2) = [1 0];
B(1,3:2:n) = 2;
spy(B)
title('B')
```

The beauty of this approach is that it does not require any values of the Bessel function a priori. The right hand side is simply a unit vector.

```matlab
rhs = eye(n,1);
format long e
v = B\rhs
semilogy(0:n-1,v,'o-')
xlabel('n')
title('J_n(1)')
```

```
v =
   7.651976865579665e-01
   4.400505857449335e-01
   1.149034849319005e-01
   1.956335398266840e-02
   2.476638964109954e-03
   2.497577302112343e-04
   2.093833800238926e-05
   1.502325817436807e-06
   9.422344172604494e-08
   5.249250179911872e-09
   2.630615123687451e-10
   1.198006746303136e-11
   4.999718179448401e-13
   1.925616764480171e-14
   6.885408200044220e-16
   2.297531532210343e-17
   7.186396586807487e-19
   2.115375568053260e-20
   5.880344573595753e-22
   1.548478441211652e-23
   3.873503008524655e-25
   9.227621982096663e-27
   2.098223955943775e-28
   4.563424055950102e-30
   9.511097932712487e-32
   1.902951751891381e-33
   3.660826744416762e-35
   6.781552053355331e-37
   1.211364395117367e-38
   2.088559301926495e-40
```

Relative error

```matlab
w = J((0:n-1)',x);
relerr = (v - w)./w;
semilogy(0:n-1,abs(relerr) + eps*(v==w),'o-')
xlabel('n')
title('relative error')
```

The relative error deteriorates for the last few values of n. This is not unexpected because we have effectively truncated a semi-infinite matrix.
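A dense NumPy translation of the same bordered matrix gives matching results. This is a sketch of the idea only; the post builds the matrix sparsely with spdiags, but at n = 30 a dense solve is equivalent:

```python
import numpy as np

# Tridiagonal recurrence rows, with the normalization row [1 0 2 0 2 ...]
# replacing the first row, as in the "center" formulation above.
n, x = 30, 1.0
B = np.zeros((n, n))
for k in range(n):
    B[k, k] = -2.0 * k / x              # diagonal entry -2n/x
    if k > 0:
        B[k, k - 1] = 1.0               # subdiagonal
    if k < n - 1:
        B[k, k + 1] = 1.0               # superdiagonal
B[0, :] = 0.0
B[0, 0] = 1.0
B[0, 2:n:2] = 2.0                       # J_0 + 2 J_2 + 2 J_4 + ... = 1

rhs = np.zeros(n)
rhs[0] = 1.0                            # unit-vector right-hand side
v = np.linalg.solve(B, rhs)
print(v[:3])   # close to J_0(1), J_1(1), J_2(1)
```

As in the MATLAB run, no Bessel values are supplied a priori; the unit right-hand side and the normalization row do all the work.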

Triangular Factors

The LU decomposition of $B$ shows no pivoting for this value of $x$, but there is fill-in. The normalization condition is mixing in with the recurrence.

```matlab
[L,U] = lu(B);
spy(L)
title('L')
snapnow
spy(U)
title('U')
```

References

J. C. P. Miller, "On the choice of standard solutions for a homogeneous linear equation of the second order", Quart. J. Mech. Appl. Math. 3, 1950, 225-235.

Walter Gautschi, "Computational Aspects of Three-Term Recurrence Relations", SIAM Review 9, 1967, 24-82.



Published with MATLAB® R2017a


### Three-Term Recurrence Relations and Bessel Functions

latest Blogs - Mon, 11/06/2017 - 22:30
d

Three-term recurrence relations are the basis for computing Bessel functions.

ContentsA Familiar Three-Term Recurrence

Start with two large integers, say 514229 and 317811. For reasons that will soon become clear, I'll label them $f_{30}$ and $f_{29}$.

clear n = 30 f(30) = 514229; f(29) = 317811 n = 30 f = Columns 1 through 6 0 0 0 0 0 0 Columns 7 through 12 0 0 0 0 0 0 Columns 13 through 18 0 0 0 0 0 0 Columns 19 through 24 0 0 0 0 0 0 Columns 25 through 30 0 0 0 0 317811 514229

Now take repeated differences, and count backwards.

$$f_{n-1} = f_{n+1} - f_n, \ n = 29, 28, ..., 2, 1$$

for n = 29:-1:2 f(n-1) = f(n+1) - f(n); end n f n = 2 f = Columns 1 through 6 0 1 1 2 3 5 Columns 7 through 12 8 13 21 34 55 89 Columns 13 through 18 144 233 377 610 987 1597 Columns 19 through 24 2584 4181 6765 10946 17711 28657 Columns 25 through 30 46368 75025 121393 196418 317811 514229

Recognize those integers? They're the venerable Fibonacci numbers. I've just reversed the usual procedure for computing them. And, I didn't pick the first two values at random. In fact, their choice is very delicate. Try picking any other six-digit integers, and see if you can take 30 differences before you hit a negative value. Of course, we usually start with $f_0 = 0$, $f_1 = 1$ and go forward with additions.

The relation

$$f_n = f_{n+1} - f_{n-1}$$

is an example of a three-term recurrence. I've written it in such a way that I can go either forward or backward.

Friedrich Bessel

Friedrich Bessel was a German astronomer, mathematician and physicist who lived from 1784 until 1846. He was a contemporary of Gauss, but the two had some sort of falling out. Although he did not have a university education, he made major contributions to precise measurements of planetary orbits and distances to stars. He has a crater on the Moon and an asteroid named after him.

In mathematics, the special functions that bear his name were actually discovered by someone else, Daniel Bernoulli, and christened after Bessel's death. They play the same role in polar coordinates that the trig functions play in Cartesian coordinates. They are applicable in a wide range of fields, from physics to finance.

Bessel Functions

Bessel functions are defined by an ordinary differential equation that generalizes the harmonic oscillator to polar coordinates. The equation contains a parameter $n$, the order.

$$x^2 y'' + x y' + (x^2 - n^2) y = 0$$

Solutions to this equation are also sometimes called cylindrical harmonics.

For my purposes here, their most important property is the three-term recurrence relation:

$$\frac{2n}{x} J_n(x) = J_{n-1}(x) + J_{n+1}(x)$$

This has a variable coefficient that depends upon the two parameters, the order $n$ and the argument $x$.

To complete the specification, it would be necessary to supply two initial conditions that depend upon $x$. But, as we shall see, using the recurrence in the forward direction is disastrous numerically. Alternatively, if we could somehow supply two end conditions, we could use the recurrence in the reverse direction. This the idea behind a classical method known as Miller's algorithm.

Miller's Algorithm

Miller's algorithm also employs this interesting identity, which I'll call the normalizer.

$$J_0(x) + 2 J_2(x) + 2 J_4(x) + 2 J_6(x) + \ ... = 1$$

It's not obvious where this identity comes from and I won't try to derive it here.

We don't know the end conditions, but we can pick arbitrary values, run the recurrence backwards, and then use the normalizer to rescale.

Step 1. Pick a large $N$ and temporarily let

$$\tilde{J}_N = 0$$

$$\tilde{J}_{N-1} = 1$$

Then for $n = N-1, ..., 1$ compute

$$\tilde{J}_{n-1} = 2n/x \tilde{J}_n \ - \tilde{J}_{n+1}$$

Step 2. Normalize:

$$\sigma = \tilde{J}_0 + 2 \tilde{J}_2 + 2 \tilde{J}_4 + ...$$

$$J_n = \tilde{J}_n/\sigma, \ n = 0, ..., N$$

I'll show some results in a moment, after I introduce an alternative formulation.

Matrix Formulation

You know that I like to describe algorithms in matrix terms. The three-term recurrence generates a tridiagonal matrix. But which three diagonals? I can put the three below the diagonal, above the diagonal, or centered on the diagonal. This is accomplished with the second argument of the sparse matrix generator, spdiags. Here is the code.

type blt function B = blt(n,x,loc) % BesseL Three term recurence. % B = blt(n,x,loc) is an n-by-n sparse tridiagonal coefficent matrix % that generates the Bessel functions besselj((0:n-1)',x). % loc specifies the location of the diagonals. % loc = 'center', the default, centers the three diagonals. % loc = 'lower' puts the three diagonals in the lower triangle. % loc = 'upper' puts the three diagonals in the upper triangle. if nargin == 2 loc = 'center'; end switch loc case 'center', locd = -1:1; case 'lower', locd = -2:0; case 'upper', locd = 0:2; end % Three term recurrence: 2*n/x * J_n(x) = J_(n-1)(x) + J_n+1(x). d = -2*(0:n-1)'/x; e = ones(n,1); B = spdiags([e d e],locd,n,n); end Lower

If there were no roundoff error, and if we knew the values of the first two Bessel functions, $J_0(x)$ and $J_1(x)$, we could use backslash, the linear equation solver, with the diagonals in the lower triangle to compute $J_n(x)$ for $n > 1$. Here, for example, is $x = 1$.

format long J = @(n,x) besselj(n,x); n = 8 x = 1.0 B = blt(n,x,'lower'); rhs = zeros(n,1); rhs(1:2) = J((0:1)',x); v = B\rhs n = 8 x = 1 v = 0.765197686557967 0.440050585744934 0.114903484931901 0.019563353982668 0.002476638964110 0.000249757730213 0.000020938338022 0.000001502326053

These values are accurate to almost all the digits shown. Except for the last one. It's beginning to show the effects of severe numerical instability. Let's try a larger value of n.

n = 18 B = blt(n,x,'lower'); rhs = zeros(n,1); rhs(1:2) = J((0:1)',x); v = B\rhs; semilogy(0:n-1,v,'o-') title('J_n(1)') xlabel('n') n = 18

The values beyond about n = 9 are suspect and, in fact, totally wrong. The forward elimination that backslash uses to solve this lower triangular system is just the three-term recurrence for increasing n. And that recurrence has two solutions -- the one we're trying to compute and a second one corresponding to $Y_n(x)$, the Bessel function of second kind. $Y_n(x)$ is stimulated by roundoff error and completely takes over for $n > 9$.

So the lower triangular option -- the forward recurrence -- is unsatisfactory for two reasons: it requires two Bessel values to get started and it is unstable.

Upper

We need to make one modification to the upper triangular coefficient matrix.

n = 30; B = blt(n,x,'upper'); B(n-1,n) = 0;

Now the last two rows and columns are a 2-by-2 identity.

full(B(n-1:n,n-1:n)) ans = 1 0 0 1

The back substitution process just runs the three-term recursion backwards. We need to start with $J_{n-1}(x)$ and $J_{n-2}(x)$.

```matlab
rhs = zeros(n,1);
rhs(n-1:n) = J((n-2:n-1)',x);
format long e
v = B\rhs
```

```
v =
     7.651976865579656e-01
     4.400505857449330e-01
     1.149034849319004e-01
     1.956335398266838e-02
     2.476638964109952e-03
     2.497577302112342e-04
     2.093833800238925e-05
     1.502325817436807e-06
     9.422344172604491e-08
     5.249250179911870e-09
     2.630615123687451e-10
     1.198006746303136e-11
     4.999718179448401e-13
     1.925616764480172e-14
     6.885408200044221e-16
     2.297531532210343e-17
     7.186396586807488e-19
     2.115375568053260e-20
     5.880344573595754e-22
     1.548478441211652e-23
     3.873503008524655e-25
     9.227621982096665e-27
     2.098223955943776e-28
     4.563424055950103e-30
     9.511097932712488e-32
     1.902951751891381e-33
     3.660826744416801e-35
     6.781552053554108e-37
     1.211364502417112e-38
     2.089159981718163e-40
```

These values are accurate to almost all the significant digits shown. The backwards recursion suppresses the unwanted $Y_n(x)$ and the process is stable.
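The back substitution is literally this loop run in reverse. Here is a minimal Python sketch (an illustration, not the post's MATLAB), seeded with the two end values $J_{28}(1)$ and $J_{29}(1)$ taken from the output above:

```python
# Backward three-term recurrence: J_{n-1}(x) = (2n/x)*J_n(x) - J_{n+1}(x),
# seeded with J_28(1) and J_29(1) from the printed output above.
x = 1.0
N = 30
J = [0.0]*N
J[29] = 2.089159981718163e-40   # J_29(1)
J[28] = 1.211364502417112e-38   # J_28(1)
for n in range(28, 0, -1):
    J[n-1] = (2*n/x)*J[n] - J[n+1]

# Running backwards, the J_n solution dominates and Y_n contamination
# decays, so the computed J_0(1) is accurate to nearly all digits.
print(J[0])
```

The computed $J_0(1)$ agrees with `besselj(0,1)` to close to full double precision.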

Center

Can we avoid having to know those two starting values? Yes, by using a matrix formulation of the Miller algorithm. Insert the normalization condition in the first row. Now $B$ is tridiagonal, except for the first row, which is

[1 0 2 0 2 0 2 ... ]

The other rows have $1$s on the super- and subdiagonals and $-2n/x$ on the diagonal of the $(n+1)$-st row.

```matlab
n = 30;
B = blt(n,x);
B(1,1:2) = [1 0];
B(1,3:2:n) = 2;
spy(B)
title('B')
```

The beauty of this approach is that it does not require any values of the Bessel function a priori. The right hand side is simply a unit vector.

```matlab
rhs = eye(n,1);
format long e
v = B\rhs
semilogy(0:n-1,v,'o-')
xlabel('n')
title('J_n(1)')
```

```
v =
     7.651976865579665e-01
     4.400505857449335e-01
     1.149034849319005e-01
     1.956335398266840e-02
     2.476638964109954e-03
     2.497577302112343e-04
     2.093833800238926e-05
     1.502325817436807e-06
     9.422344172604494e-08
     5.249250179911872e-09
     2.630615123687451e-10
     1.198006746303136e-11
     4.999718179448401e-13
     1.925616764480171e-14
     6.885408200044220e-16
     2.297531532210343e-17
     7.186396586807487e-19
     2.115375568053260e-20
     5.880344573595753e-22
     1.548478441211652e-23
     3.873503008524655e-25
     9.227621982096663e-27
     2.098223955943775e-28
     4.563424055950102e-30
     9.511097932712487e-32
     1.902951751891381e-33
     3.660826744416762e-35
     6.781552053355331e-37
     1.211364395117367e-38
     2.088559301926495e-40
```

Relative error

```matlab
w = J((0:n-1)',x);
relerr = (v - w)./w;
semilogy(0:n-1,abs(relerr) + eps*(v==w),'o-')
xlabel('n')
title('relative error')
```

The relative error deteriorates for the last few values of n. This is not unexpected because we have effectively truncated a semi-infinite matrix.
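What the first row of $B$ encodes is the classical Miller algorithm: run the recurrence backward from arbitrary trial values, then rescale using the normalizing identity $J_0(x) + 2J_2(x) + 2J_4(x) + \cdots = 1$. A minimal Python sketch (an illustration, not the post's MATLAB):

```python
# Miller's algorithm: backward recurrence from arbitrary end values,
# then rescale with the normalizer J_0 + 2*J_2 + 2*J_4 + ... = 1.
x = 1.0
N = 30
Jt = [0.0]*(N+1)
Jt[N]   = 0.0    # arbitrary trial end values
Jt[N-1] = 1.0
for n in range(N-1, 0, -1):
    Jt[n-1] = (2*n/x)*Jt[n] - Jt[n+1]

# Normalize: sigma = Jt_0 + 2*(Jt_2 + Jt_4 + ...)
sigma = Jt[0] + 2*sum(Jt[2:N:2])
J = [t/sigma for t in Jt[:N]]

print(J[0])
```

No Bessel values are needed in advance, and the normalized results agree with `besselj((0:29)',1)` to nearly full precision, just as the matrix solve does.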

Triangular Factors

The LU decomposition of $B$ shows no pivoting for this value of $x$, but there is fill-in. The normalization condition is mixing in with the recurrence.

```matlab
[L,U] = lu(B);
spy(L)
title('L')
snapnow
spy(U)
title('U')
```

References

J. C. P. Miller, "On the choice of standard solutions for a homogeneous linear equation of the second order", Quart. J. Mech. Appl. Math. 3, 1950, 225-235.

Walter Gautschi, "Computational Aspects of Three-Term Recurrence Relations", SIAM Review 9, 1967, 24-82.


Published with MATLAB® R2017a


### Two Other MATLABs, in Bangladesh and in Hindi

latest Blogs - Mon, 10/23/2017 - 22:30

This post is about the words "Matlab" and "matlab", in upper and lower case, and without a trademark symbol. Matlab with a capital "M" is a district in Bangladesh. "matlab" with a lower case "m" is a common word in the Hindi language.

Matlab is the name of both a rural district in central Bangladesh and its largest city. The population of the district is only about a quarter of a million people, compared to over 160 million in all of Bangladesh.

For over 50 years, since the early 1960s, Matlab has served as an important laboratory for the study of health issues in impoverished rural areas. The studies began with a hospital aboard a barge floating around the cholera-prone rivers of the district. The Health and Demographic Surveillance System (HDSS) was established in 1966.

Almost everyone in Matlab has a personal heath identification number and records are kept of births, deaths, marriages, and various illnesses. The relative isolation of the area makes it a nearly self-contained population. Dozens of studies and hundreds of reports and academic papers detail all aspects of public health, especially fertility, reproductive health, maternal and child health, cause of death and child morbidity, health equity, and effects of climate change.

Matlab played a central role in the development of ORS, oral rehydration salts, a home-made mixture of water, salt and molasses that prevents dehydration in children sick with cholera. The British medical journal The Lancet described ORS as "potentially the most important medical advance of this century."

matlab, in Hindi

In the Hindi language, the word "matlab" means "means", as in "What do you mean?". With English characters the word is spelled matlab, but it is pronounced a little like mutlub. Let's try to get your browser to display Hindi characters.

This YouTube video, by Anil Mihato, has an excellent explanation.

And here is an introduction to MATLAB in Hindi.


### My First Computer, the Burroughs 205 Datatron

latest Blogs - Mon, 10/09/2017 - 22:30

The first computer that I seriously used was the Burroughs 205 Datatron. I actually used two different machines, one at Caltech for two years in 1959-61 and one at the University of Utah in the summer of 1960. Both the technology and the style of usage were wildly different from today's computers.

The blinking lights on the console made the B205 a featured player in many movies and TV shows during the late '50s and early '60s, including Batman and Lost in Space. Here is an animated gif of a reproduction of the console.

image credit: http://www.angelfire.com/scifi/B205

Burroughs Corporation

Wikipedia tells us that the American Arithmometer Company was founded in 1886 to manufacture and sell adding machines. The company name was changed to Burroughs Adding Machine in 1904 to honor the founder, William Seward Burroughs, who, coincidentally, was the grandfather of beat generation poet William S. Burroughs. At one time it was the largest adding machine company in the country. The company was merged with Sperry Corporation to form Unisys in 1986.

In 1956 Burroughs got into the new computer business by acquiring Electro Data and began marketing their Datatron, which had been introduced two years earlier. Throughout the '50s, '60s, and '70s, Burroughs was a force in computing, particularly in the academic and banking markets. In the 1960's, IBM completely dominated the industry, but there were enough smaller companies that all together they were known as "IBM and the Seven Dwarfs."

Room Full

In 1959 the Burroughs 205 Datatron cost a few hundred thousand dollars and occupied an entire room. (Here is a 1957 price list). In the foreground of this photo are the console and an electric typewriter. In the background are a magnetic tape drive and several cabinets containing a total of about 1600 vacuum tubes.

I like to joke that this was a "personal computer". Only one person could use it at a time. We would sign up for machine time, usually in half-hour chunks, at all hours of the day or night.

Vacuum Tubes

This was before transistors and semiconductors. The primary logic unit was the electronic vacuum tube. Here is one plugin module with eight tubes. The computer worked in base 10 and this module held one decimal digit. Four of the tubes held one digit using the binary combinations 0 through 9. The binary values 10 through 15 were never used; if they showed up, it was an error condition and caused an immediate halt.

Drum

Before the magnetic core, a huge problem with early computers was memory. People knew how to produce the electronics for the arithmetic, logic and control units, but where do you store programs and data?

The 205 had drum memory. Think of the rotating drum in a small clothes dryer. Here is a photo of a machine that has not been used for years and that has not been stored carefully. The drum is at the lower right. Four registers, each containing ten of the single decimal digit modules, are at the upper left.

photo credit: http://www.angelfire.com/scifi/B205/ecd.html

The drum held 4000 ten-digit words. If you regard each digit as half a byte on a modern binary machine, that is 20K bytes. The words were stored in 20 bands of 200 words each. The drum's speed was 3570 rpm. So, on average, it would take about 1/30-th of a second to access the start of a band.

In addition to the primary drum storage, there were four "high speed" bands where a block of 20 words is duplicated ten times around the drum. This was sort of like cache memory on today's computers. One machine instruction would duplicate 20 words, usually a piece of a program, ten times around a high speed band where it could be accessed ten times as fast.

Caltech and Utah

I learned a little about the 205 during my sophomore year at Caltech in the spring of 1959, and more when I was a junior and took numerical analysis from Prof. John Todd in 1959-60. We used Caltech's 205 for some of the homework in that class. Then I went back home to Salt Lake City for the summer of 1960. It turned out the University of Utah also had a 205 and they were happy to hire me for a summer job.

There were just four people working in Utah's computer center at the beginning of that summer -- a woman who served as director, me, and two part timers, a secretary and a grad student from EE who maintained the machine. Then, a few weeks into the summer, the director resigned. So I found myself -- two months before I turned 20 -- Acting Director of the University of Utah Computer Center.

I returned to Caltech in the fall, and had a chance to do individual study under Todd and more work on that 205 until I graduated in 1961. I did a project involving Hilbert matrices. It was my introduction to matrix computation.

Paper Tape

At first, at both Caltech and Utah, we did not have a compiler or an assembler. We wrote our programs in absolute numeric machine language and punched them on paper tape, like this.

The machine was designed so that op code 00 meant "read one word from paper tape into the current location and go to the next location". There was a button on the console that would erase the drum, set the location counter to 0000 and the op code to 00, and then start the tape reader. That served as the "operating system". It would read the entire contents of the paper tape onto the drum. When the tape ran off the end, you would turn off the reader, set the location counter back to 0000, and press start. That would begin executing the program you had just read from tape onto the drum.

Registers

There were five registers. The A register -- the accumulator -- is where arithmetic happens. It holds ten decimal digits, plus a sign bit. For example, one instruction is "Add the contents of memory location XXXX to the A register". The R register -- the remainder -- is an extension of the A register. Multiplication of two ten digit (fixed point) numbers produces a twenty digit result across A and R. Division produces a quotient in A and a remainder in R.

The C register -- the command register -- holds the instructions as they are being executed. The D register holds a single word as it is being transferred to or from the drum or external I/O equipment.

The B register -- the index register -- is only four digits wide. If the sign bit of an instruction is a 1, the contents of the B register is added to the address portion of the instruction, thereby providing an array index. The value of the B register is automatically increased or decreased by one after each indexing instruction.

Floating Point

A floating point arithmetic option was available for an additional $12,500. (Floating point was also an option with the Intel chips used in the first PCs. The 8087 chip cost about $150.) The floating point word has a two-digit exponent and an eight-digit fraction. So floating point numbers range from about 10^(-50) to 10^50 and have an accuracy of about 10^(-8).

Program

Here is an example program from the appendix of the machine manual. This writeup is much fancier than the programs we would actually write. It's on a coding sheet; we would just use lined yellow notebooks. We would never need to write out 6980, 6981, etc. And the Remarks are much more detailed. This example is intended to be carefully explanatory.

Skip on Minus

One of the programs that I wrote during the summer at the University of Utah involved a loop that was exactly 20 instructions long. That meant the entire loop could be loaded into the high speed memory. It didn't even need the jump instruction from the end of the loop back to the beginning. It would just continue execution from the duplicate of the code in the next block on the drum. The trouble was there was no way to exit the loop.

I needed a "Skip on Minus" instruction. There was a "Skip on Plus", but no "Skip on Minus". The EE grad student who maintained the machine was a guy named Joe. I wish I could remember his last name. Anyway, I told him I needed a "Skip on Minus" instruction. He said, "I'll one in". He disappeared behind the cabinets with circuit diagrams and a soldering iron. He called out, "the op code will be 51".

I wrote my program with a "51" instruction. It worked perfectly. And so the University of Utah had the only Burroughs 205 Datatron in the world with a "Skip on Minus" instruction.
