(2019/08/20) Convergents are a sequence of rational numbers that approximate a given number increasingly well, obtained by truncating the continued fraction representation of that number. The number can be rational, in which case the sequence ends with the given value; or a real number like an irrational root or transcendental constant, in which case the sequence continues infinitely.

In short, this lets you approximate a complicated ratio using multiplication and division by relatively small integers.
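As a sketch of the arithmetic involved (the function name here is illustrative, not part of this page's code): given the continued fraction terms, the convergents follow from the standard recurrence h_n = a_n*h_{n-1} + h_{n-2}, and likewise for the denominators.

```javascript
// Turn a list of continued-fraction terms [a0; a1, a2, ...] into its
// convergents, using the recurrence h_n = a_n*h_{n-1} + h_{n-2}
// (same recurrence for the denominators k_n).
function convergents(terms) {
  let hPrev = 1, hPrevPrev = 0;  // numerator seeds: h_{-1} = 1, h_{-2} = 0
  let kPrev = 0, kPrevPrev = 1;  // denominator seeds: k_{-1} = 0, k_{-2} = 1
  const out = [];
  for (const a of terms) {
    const h = a * hPrev + hPrevPrev;
    const k = a * kPrev + kPrevPrev;
    out.push([h, k]);
    hPrevPrev = hPrev; hPrev = h;
    kPrevPrev = kPrev; kPrev = k;
  }
  return out;
}

// pi begins [3; 7, 15, 1, ...], giving 3/1, 22/7, 333/106, 355/113
convergents([3, 7, 15, 1]);
```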

Enter a number (in decimal format "123.456" or floating point "1.234e6"), or a ratio (two numbers, separated by a "/").

Examples where this might arise:

- Matching transformers: the turns ratio is the square root of the impedance ratio, so a matching transformer usually needs a nasty irrational turns ratio. When you need the widest possible bandwidth, it's important to pick a "close enough" ratio, to minimize the number of turns required. Choose the convergent with the most turns below your limit.
- PWM DAC: precision can be improved (on average) by updating not just the fraction of on-time (multiplier), but also the total timer count period (frequency divider value)—assuming you can tolerate some frequency modulation in the process. Choose the largest convergent less than the register size (typically a power of 2); if the numerator and denominator are less than half the maximum register values, multiply them by 2 until they are; this will keep the frequency modulation within an octave. (You could even apply convergents again, to find the largest multiple less than the register size, further improving the average FM deviation; probably for most purposes, the simple shift-left operation will suffice.)
- Fixed ratio PLLs: Where an exact frequency ratio is not required, the same principle can be used to set the multiplier and divider ratio, without needing obnoxiously large multiplier and divider registers.
- Constructing a ratio from fixed references or ratios: for example, a collection of fixed resistors, or capacitors, or crystals and desired frequency bands, etc. Apply this function to all pairs of references to exhaustively search for an optimal combination. (Note that impedance networks have rational values: impedances add in series, and while the parallel combination is not a pure division, a division is involved. A ladder network is almost a graphical equivalent of the continued fraction.)
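The register-limited selection described in the PWM and PLL examples above can be sketched as follows: walk the convergents of a target ratio and keep the last one whose numerator and denominator both fit the register width. (A hypothetical helper assuming integer inputs; not this page's actual code.)

```javascript
// Find the best convergent of num/den whose numerator and denominator
// both fit in a register of `bits` bits.
function bestConvergent(num, den, bits) {
  const limit = (1 << bits) - 1;   // e.g. 255 for an 8-bit register
  let hPrev = 1, hPrevPrev = 0;
  let kPrev = 0, kPrevPrev = 1;
  let best = null;
  let x = num, y = den;
  while (y !== 0) {
    const a = Math.floor(x / y);           // next continued-fraction term
    const h = a * hPrev + hPrevPrev;       // convergent numerator
    const k = a * kPrev + kPrevPrev;       // convergent denominator
    if (h > limit || k > limit) break;     // would overflow the register
    best = [h, k];
    hPrevPrev = hPrev; hPrev = h;
    kPrevPrev = kPrev; kPrev = k;
    const r = x - a * y;                   // Euclidean step for the next term
    x = y; y = r;
  }
  return best;
}

// Approximate 314159/100000 (roughly pi) in 8-bit registers:
bestConvergent(314159, 100000, 8);   // the familiar 22/7
```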

On the other hand, where an incremental, repetitive, or real-time ratio is required, it is often better to employ Bresenham's line algorithm to generate a rounded or dithered sequence. The most familiar application of this algorithm is drawing graphical lines (a ratio in spatial dimensions); however, the same algorithm, applied over time (a ratio of value over time, i.e. a rate) rather than space, is simply an NCO or DDS. Delta-sigma modulation is also a related topic.
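The Bresenham/NCO connection can be illustrated with a phase accumulator that spreads `num` ticks as evenly as possible over every `den` steps (a minimal sketch, not tied to any particular hardware):

```javascript
// Bresenham-style rate generator: accumulate the numerator each step,
// and emit a tick whenever the accumulator rolls over the denominator.
// This is exactly the phase-accumulator core of an NCO/DDS.
function* rateTicks(num, den) {
  let acc = 0;
  while (true) {
    acc += num;
    if (acc >= den) {
      acc -= den;   // roll over, keeping the error term
      yield 1;      // tick
    } else {
      yield 0;      // no tick
    }
  }
}

// 3 ticks spread over every 8 steps:
const gen = rateTicks(3, 8);
const steps = Array.from({ length: 8 }, () => gen.next().value);
// steps is [0, 0, 1, 0, 0, 1, 0, 1]
```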

This is a wonderful bit of arithmetic. It's a shame it's not taught earlier in school; it relates to lots of beautiful number-theoretic and analytic mathematics. By stacking fractions infinitely, one can construct any real number. By building a tree of every possible combination of fractions, one can construct not just the entire real number line, but a superset called the surreals. Obviously, infinities aren't very practical, but since the convergents get *arbitrarily close* to real numbers (and zero and infinity), and progressively closer in a sequence, they are very useful indeed.

This probably sounds complicated and overwhelming...

Relax: just evaluate it starting from the bottom-right. Do exactly what the formula says: Take a calculator and punch in the highest (nth) term. Take the reciprocal (1/x). Add the next term (a_{n-1}). Take the reciprocal. And so on, until you add the zeroth term. Voila, you're done! (Alternately, for those of you accustomed to RPN calculators, the whole stack can be entered in one go.)
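In code, that bottom-right-first evaluation looks like this (a minimal sketch; the function name is illustrative):

```javascript
// Evaluate a continued fraction exactly as described: start from the
// last (nth) term, take the reciprocal, add the next term, and repeat
// back down to the zeroth term.
function evalContinuedFraction(terms) {
  let x = terms[terms.length - 1];          // punch in the highest term
  for (let i = terms.length - 2; i >= 0; i--) {
    x = terms[i] + 1 / x;                   // reciprocal, then add a_{i}
  }
  return x;
}

// [3; 7, 15, 1] evaluates to 355/113
evalContinuedFraction([3, 7, 15, 1]);
```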

Conversely, to calculate the plain continued fraction of a number, simply: subtract the whole (integer) part of the number (writing it down); take the reciprocal of the remainder; and repeat ad nauseam. If the remainder eventually becomes zero, you're done: the number was rational, and you've taken it completely apart. If the sequence repeats, it's probably a quadratic irrational (like a square root); if it continues erratically, it's probably something else (or you're just pulling up rounding errors). Which, speaking of rounding errors: you may sometimes get a crazy answer for a final step (for example, the 14th step for "1.618"). This is a side effect of the floating point arithmetic used; just ignore the last step. You can't, of course, enter arbitrary real numbers here; precision is limited by the double-precision floating point used by JavaScript.
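The reverse procedure can be sketched the same way, with a cap on the number of terms since, as noted, double precision runs out of digits quickly (names and tolerance here are illustrative):

```javascript
// Peel off the integer part, take the reciprocal of the remainder,
// and repeat, collecting the continued-fraction terms.
function toContinuedFraction(x, maxTerms = 10) {
  const terms = [];
  for (let i = 0; i < maxTerms; i++) {
    const a = Math.floor(x);
    terms.push(a);                 // write down the whole part
    const rem = x - a;
    if (rem < 1e-9) break;         // remainder ~0: rational (at this precision)
    x = 1 / rem;                   // take the reciprocal and repeat
  }
  return terms;
}

toContinuedFraction(3.25);         // a rational: terminates at [3, 4]
toContinuedFraction(Math.PI, 5);   // [3, 7, 15, 1, 292]
```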

What's the simplest continued fraction? Alternately, have you ever wondered if there exists a number whose fractional part looks the same whether "right side up" or flipped over (1/x)? If you take a calculator and repeatedly type in: (1/x), +, 1, ..., you get something like 1.618... Indeed, if you use that description as a word problem, you get the relation: x − 1 = 1 / x, which has the solutions x = (1 ± √5) / 2; the positive root is better known as φ, the golden ratio.
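Iterating that relation numerically shows the convergence (a minimal sketch):

```javascript
// Repeatedly applying x -> 1 + 1/x (the calculator keystrokes above)
// converges to the golden ratio, the fixed point of the relation.
let x = 1;
for (let i = 0; i < 40; i++) {
  x = 1 + 1 / x;
}
const phi = (1 + Math.sqrt(5)) / 2;  // the exact value, for comparison
// x is now within floating-point noise of phi = 1.6180339887...
```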

Which brings up another interesting property of continued fractions. You know you've reached a good stopping point when one of the terms is unusually large. φ is the *worst possible number* to approximate with a rational number: it has the smallest possible terms (all ones), so the convergence is as slow as possible. In contrast, the 5th term of π's continued fraction is very large (292), a sign that the preceding terms approximate π extremely well. Indeed, stopping the sequence at this point gives the famous 355/113, good to 6 decimal places.
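As a quick check of that large-term rule: compare the error of 355/113 (stopping just before the 292) against that of the previous convergent, 333/106.

```javascript
// Stopping just before a large continued-fraction term gives an
// unusually accurate convergent: 355/113 beats 333/106 by over
// two orders of magnitude.
const err355 = Math.abs(Math.PI - 355 / 113);  // roughly 2.7e-7
const err333 = Math.abs(Math.PI - 333 / 106);  // roughly 8.3e-5
```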