Thursday, January 29, 2015

Higher-Order Functions and Accumulation


The notion of a higher-order function – that is, a function that operates on other functions – is a fundamental idea in mathematics.

As an easy example, take the concept of differentiation from calculus. Whereas a "normal" function like sin(x) operates in terms of numbers (i.e. pass it a number, and it'll return a number as a result), the act of taking a derivative is a little different. When we take the derivative of a function, we get back another function.

This is actually an incredible idea. Now not only can we talk about transformations (of numbers), but we can talk about transformations of transformations!

So in yet another of my blog posts about things I've investigated while working my way through the Structure and Interpretation of Computer Programs, we're going to look at this idea of higher-order functions by considering the process of accumulation.

We'll write some procedures for calculating sums and products, and then show how we can abstract away the common process of successively combining new elements with a current total. We'll then go a step further and write a more general procedure for filtered-accumulation, where we only take those elements that satisfy a particular condition.

[The relevant SICP exercises for this one are 1.30, 1.31, 1.32, and 1.33.]

We're gonna be using good old Java today. In fact, this topic gives us the perfect opportunity to explore the new functional interfaces and lambda expressions of the latest Java 8 release!
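To give a flavour of where we'll end up (the names here are my own placeholders, not necessarily the ones the post settles on), a general accumulator written with Java 8's functional interfaces might look something like this:

```java
import java.util.function.IntBinaryOperator;
import java.util.function.IntUnaryOperator;

public class Accumulate {
    // General accumulation: combine f(a), f(a+1), ..., f(b) with `combiner`,
    // starting from the identity value `nullValue`.
    static int accumulate(IntBinaryOperator combiner, int nullValue,
                          IntUnaryOperator f, int a, int b) {
        int total = nullValue;
        for (int i = a; i <= b; i++) {
            total = combiner.applyAsInt(total, f.applyAsInt(i));
        }
        return total;
    }

    // Sum and product then fall out as special cases of the same procedure.
    static int sum(int a, int b) {
        return accumulate((x, y) -> x + y, 0, i -> i, a, b);
    }

    static int product(int a, int b) {
        return accumulate((x, y) -> x * y, 1, i -> i, a, b);
    }

    public static void main(String[] args) {
        System.out.println(sum(1, 10));     // 55
        System.out.println(product(1, 5));  // 120, i.e. 5!
    }
}
```

Capturing the "successively combine elements with a running total" pattern once, and passing in the combiner as a lambda, is exactly the abstraction the post builds towards.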

Saturday, January 24, 2015

Newton's Method


As a follow-up from the previous post dealing with fixed-point iteration, another particularly useful family of numerical techniques deals with the iterative approximation of zeros (or roots) of a given function – that is, those points x where the value of a function f is zero, or f(x)=0. One such technique is Newton's Method (or the Newton-Raphson Method, if you like.)

The idea behind Newton's Method is as follows. If we suppose that we can use Taylor's Theorem to get a polynomial approximation to a function f in the neighbourhood of points that we care about, then we can actually use our fixed-point iteration to converge upon an approximation for a root of f.

So we'll do just that: we'll derive an expression from Taylor's theorem that we can use for the fixed-point iteration, and then code it up. We can then use our code to find the roots of some polynomials, as well as to implement another square-root finding procedure. As for the JVM language we'll use? We haven't seen JRuby make an appearance yet on this blog, so it's about time for its debut! 
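As a rough preview of the iteration itself (sketched here in plain Java rather than JRuby, with hypothetical names, and a simple numerical derivative standing in for the Taylor-based derivation in the post):

```java
import java.util.function.DoubleUnaryOperator;

public class Newton {
    static final double DX = 1e-8;         // step for the numerical derivative
    static final double TOLERANCE = 1e-10; // when successive guesses agree

    // Numerical approximation of the derivative of f.
    static DoubleUnaryOperator derivative(DoubleUnaryOperator f) {
        return x -> (f.applyAsDouble(x + DX) - f.applyAsDouble(x)) / DX;
    }

    // Newton's Method: iterate x -> x - f(x)/f'(x) until it settles down.
    static double newtonsMethod(DoubleUnaryOperator f, double guess) {
        DoubleUnaryOperator df = derivative(f);
        double x = guess;
        for (int i = 0; i < 1000; i++) {
            double next = x - f.applyAsDouble(x) / df.applyAsDouble(x);
            if (Math.abs(next - x) < TOLERANCE) return next;
            x = next;
        }
        return x;
    }

    public static void main(String[] args) {
        // The square root of 2 is a root of f(x) = x^2 - 2.
        System.out.println(newtonsMethod(x -> x * x - 2, 1.0));
    }
}
```

Note how the square-root procedure comes for free: finding sqrt(n) is just finding a root of x² − n.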

Thursday, January 22, 2015

Finding Fixed-Points of Functions


Quite often in scientific computing we're interested in finding the fixed-points of a function or transformation: those points where the value of the function f at a point x is equal to x itself, i.e. f(x) = x. You can see how these might be useful – if the function f models any kind of real-world phenomenon, then a fixed-point suggests a sort of equilibrium or steady-state.
 
In fact, fixed-point iteration can be used in many numerical methods. Newton's method for finding the zeros of a differentiable function (which we'll look at in a later post) can be written in terms of a fixed-point computation, and many methods for solving ordinary differential equations apply these same fixed-point ideas.

In this post we'll code up a Scala function to (hopefully!) converge upon the fixed-points of a given transformation, and look at some examples, including a method for finding square roots of a number. We'll also look at the general technique of average-damping that we can apply to ensure/quicken convergence in particular cases.

[Inspired by the Structure and Interpretation of Computer Programs. (noticing a pattern yet?) The book is here, and the relevant Scheme solutions are for Exercises 1.35 and 1.36.]
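To sketch the shape of the thing (in plain Java here for brevity, with names of my own choosing rather than the post's Scala):

```java
import java.util.function.DoubleUnaryOperator;

public class FixedPoint {
    static final double TOLERANCE = 1e-10;

    // Repeatedly apply f until successive values are close enough together.
    static double fixedPoint(DoubleUnaryOperator f, double guess) {
        double x = guess;
        for (int i = 0; i < 10000; i++) {
            double next = f.applyAsDouble(x);
            if (Math.abs(next - x) < TOLERANCE) return next;
            x = next;
        }
        return x;
    }

    // Average-damping: replace f with x -> (x + f(x)) / 2, which has the
    // same fixed-points but tames oscillation between successive guesses.
    static DoubleUnaryOperator averageDamp(DoubleUnaryOperator f) {
        return x -> (x + f.applyAsDouble(x)) / 2;
    }

    public static void main(String[] args) {
        // sqrt(n) is a fixed-point of x -> n/x, but that map just oscillates
        // between two values; average-damping makes the iteration converge.
        double n = 9;
        System.out.println(fixedPoint(averageDamp(x -> n / x), 1.0)); // ~3.0
    }
}
```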

Saturday, January 17, 2015

Recursive Sine Approximation


For a quick smaller post today, we'll look at an approximation for the sine of an angle. (As always, SICP is the inspiration, and the Scheme solution for the relevant Exercise 1.15 can be found here.)

We saw in the last post that we could approximate the tangent function using Lambert's continued fraction representation. We can derive a similar approximation for the sine function by using the following trigonometric identity:

    sin(x) = 3 sin(x/3) − 4 sin³(x/3)

So the plan is to implement that idea with some Scala code. We'll quickly look at the efficiency of the implementation, and then use it to compute a few examples. Easy.
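A minimal sketch of that plan (in Java rather than the post's Scala, with names of my own choosing) might look like this:

```java
public class Sine {
    // For small angles, sin(x) ~ x; below this threshold we stop recursing.
    static final double EPSILON = 1e-8;

    static double cube(double x) { return x * x * x; }

    // Apply the identity sin(x) = 3 sin(x/3) - 4 sin^3(x/3) recursively,
    // until the angle is small enough that sin(x) ~ x is good enough.
    static double sine(double x) {
        if (Math.abs(x) < EPSILON) return x;
        double s = sine(x / 3);
        return 3 * s - 4 * cube(s);
    }

    public static void main(String[] args) {
        System.out.println(sine(Math.PI / 6)); // ~0.5
    }
}
```

Since the angle shrinks by a factor of 3 on every call, the recursion depth grows only logarithmically in x – which is the efficiency observation at the heart of Exercise 1.15.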

Wednesday, January 14, 2015

K-term Continued Fractions


Another interesting set of examples in SICP are the exercises dealing with infinite continued fractions, and their k-term truncations. [That is, Exercises 1.37, 1.38, and 1.39.]

As given in the book, an infinite continued fraction looks like this:

    f = N1/(D1 + N2/(D2 + N3/(D3 + ...)))

Of course that expression doesn't ever stop, so we approximate it with a k-term continued fraction, where we cut it off after k terms. So it looks like this:

    f_k = N1/(D1 + N2/(D2 + ... + Nk/Dk))

Since we have a finite representation now, we can easily write code to work with such structures! So we'll do just that, using Scala as our JVM language.
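As a sketch of how such a k-term evaluation might go (in plain Java rather than the post's Scala, names my own – working backwards from the innermost term):

```java
import java.util.function.IntToDoubleFunction;

public class ContFrac {
    // Evaluate the k-term continued fraction
    //   N(1) / (D(1) + N(2) / (D(2) + ... + N(k)/D(k)))
    // by starting at the innermost term and folding outwards.
    static double contFrac(IntToDoubleFunction n, IntToDoubleFunction d, int k) {
        double result = 0;
        for (int i = k; i >= 1; i--) {
            result = n.applyAsDouble(i) / (d.applyAsDouble(i) + result);
        }
        return result;
    }

    public static void main(String[] args) {
        // With all N(i) = D(i) = 1 this converges on 1/phi ~ 0.6180...,
        // the reciprocal of the Golden Ratio (this is SICP's Exercise 1.37).
        System.out.println(contFrac(i -> 1.0, i -> 1.0, 50));
    }
}
```

Passing the N and D terms in as functions of the index i is what lets the one procedure handle all of the book's examples (the Golden Ratio, e, and the tangent function).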

Friday, January 9, 2015

Testing for Prime Numbers


At first glance, prime numbers seem pretty trivial; and yet, most of their more interesting properties still elude us. These numbers have been bugging mathematicians for as long as we've cared about them. All sorts of questions are still completely open:
  • Are there infinitely many twin primes (pairs of primes that differ by 2)?
  • Can every even number greater than 2 be written as the sum of two primes (Goldbach's conjecture)?
  • Is the Riemann Hypothesis, which governs how the primes are distributed, actually true?
(That last one is literally a million-dollar question, by the way.)

From a computer science standpoint though, the fact that we know so little about prime numbers is actually kind of useful. If I don't even know how the primes are distributed, I can't easily factorise a massive composite number into its prime constituents – not even with a computer. However the reverse problem – taking the primes and multiplying them together to get the composite number – is much easier. Such 'one-way functions' make for awesome cryptographic algorithms, which are absolutely vital for digital security these days.

We're not going to worry too much about that end of things though, and instead consider the problem of determining whether a given integer is prime. (Inspired by the SICP exercises as always: the relevant Scheme solutions are 1.22, 1.24, 1.27 and 1.28.)

We'll be using good old Java for our programming language today as we explore three different ways to determine the primality of a number:
  • an exhaustive search of all the possible candidates,
  • the Fermat test, which uses Fermat's Little Theorem in a probabilistic primality test,
  • the Miller-Rabin test, a variant of the Fermat test that fixes one particular deficiency.
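To give a taste of the second approach, here's a minimal sketch of a Fermat test (plain Java, method names my own; BigInteger.modPow does the modular exponentiation):

```java
import java.math.BigInteger;
import java.util.Random;

public class FermatTest {
    // Fermat's Little Theorem: if p is prime and 0 < a < p, then
    // a^(p-1) = 1 (mod p). We check random bases a; a composite number
    // can fool a single round, but rarely survives many of them.
    static boolean probablyPrime(long n, int rounds) {
        if (n < 2) return false;
        if (n < 4) return true; // 2 and 3 are prime
        BigInteger bn = BigInteger.valueOf(n);
        Random rng = new Random();
        for (int i = 0; i < rounds; i++) {
            long a = 2 + (long) (rng.nextDouble() * (n - 3)); // base in [2, n-2]
            BigInteger ba = BigInteger.valueOf(a);
            if (!ba.modPow(bn.subtract(BigInteger.ONE), bn).equals(BigInteger.ONE)) {
                return false; // definitely composite
            }
        }
        return true; // only "probably" prime
    }

    public static void main(String[] args) {
        System.out.println(probablyPrime(1000003, 20)); // true: prime
        System.out.println(probablyPrime(1000001, 20)); // false: 101 * 9901
    }
}
```

The Carmichael numbers (composites that pass this test for every base coprime to them) are the "particular deficiency" that the Miller-Rabin variant fixes.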

Monday, January 5, 2015

4 Ways to Generate the Fibonacci Numbers


In keeping with our recent theme of generating famous number structures, we'll consider the Fibonacci sequence today. This post was also inspired by some of the examples and exercises in the Structure and Interpretation of Computer Programs (namely Exercise 1.13 and Exercise 1.19 for those that came for the Scheme solutions.)

The story goes that the Fibonacci numbers are named after the Italian mathematician Leonardo Fibonacci, who described the sequence in the 1200s while forming a mathematical model for rabbit populations. However, evidence suggests that the sequence was known to others before then (it seems to appear in earlier Indian mathematics, for instance).

The Fibonacci numbers also have an interesting relationship with the Golden Ratio, namely that as we take larger and larger numbers in the sequence, the ratio between one number and the next approaches the Golden Ratio.

Or mathematically:

    lim (n→∞) F(n+1)/F(n) = φ = (1 + √5)/2 ≈ 1.6180...

And these numbers pop up in all sorts of places. Remember Pascal's triangle? Turns out if you draw some “shallow diagonal” lines across the triangle and sum up the numbers on each line, you get the Fibonacci sequence too!
(There's a nice picture of this here.)

Though that history is all well and good, computation is always much more interesting! We'll look at four different ways to generate the Fibonacci numbers, with Clojure taking centre stage as our JVM language today, plus some guest appearances from Python (or Jython, if you like, to keep with the JVM theme).
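As a taste of what's to come (sketched in plain Java rather than the post's Clojure, names my own), here's one natural way – the iterative approach:

```java
public class Fib {
    // Iterative generation: keep only the last two values and slide forward,
    // so fib(n) takes O(n) steps rather than the exponential tree recursion.
    static long fib(int n) {
        long a = 0, b = 1; // fib(0) and fib(1)
        for (int i = 0; i < n; i++) {
            long next = a + b;
            a = b;
            b = next;
        }
        return a;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            System.out.print(fib(i) + " "); // 0 1 1 2 3 5 8 13 21 34
        }
        System.out.println();
    }
}
```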

Saturday, January 3, 2015

Computing Pascal's Triangle


One of the coolest things about the Structure and Interpretation of Computer Programs has to be the exercises, in my view. Some of them are surprisingly deep explorations of mathematical or computational ideas, and even the ones that aren't still end up being quite interesting.

As the first real post for this blog, I figured I'd start off with one of the exercises in that latter category: not as deep, but still pretty cool. This exercise involves computing the numbers of Pascal's Triangle.

[For those that are interested, in the book we're dealing with Exercise 1.12 here. My worked solution in Scheme is available in my GitHub repository. However, in this post I'll be going wild with this exercise and looking at the problem from all sorts of weird angles (and all of the code will be in JVM languages, to keep with the blog's theme). So the Scheme solutions are still worth a look if you've read this post, and vice versa.]
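For a taste of the computation itself (a minimal sketch in plain Java, names my own), the defining recurrence – each entry is the sum of the two entries above it, with 1s down both edges – translates directly into code:

```java
public class Pascal {
    // pascal(row, col) gives the entry in the given row and column
    // (both zero-indexed) via the defining recurrence.
    static long pascal(int row, int col) {
        if (col == 0 || col == row) return 1; // the edges are all 1s
        return pascal(row - 1, col - 1) + pascal(row - 1, col);
    }

    public static void main(String[] args) {
        // Print the first five rows of the triangle.
        for (int row = 0; row < 5; row++) {
            for (int col = 0; col <= row; col++) {
                System.out.print(pascal(row, col) + " ");
            }
            System.out.println();
        }
    }
}
```

This naive tree recursion recomputes entries many times over – one of the "weird angles" worth exploring is just how much faster the alternatives can be.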


Thursday, January 1, 2015

Blog Overview and About Me


Welcome to The Java Mathematician!
  
I started this blog as a place where I could waffle on about whatever little thing has caught my fancy lately. Those things will usually be mathematics or programming-related, so I guess the blog name is fitting.