A triangular pyramid is constructed using spherical balls so that each ball rests on exactly three balls of the next lower level.

Then, we calculate the number of paths leading from the apex to each position: A path starts at the apex and progresses downwards to any of the three spheres directly below the current position. Consequently, the number of paths to reach a certain position is the sum of the numbers immediately above it (depending on the position, there are up to three numbers above it).

The result is Pascal’s pyramid and the numbers at each level n are the coefficients of the trinomial expansion $(x + y + z)^n$. How many coefficients in the expansion of $(x + y + z)^{200000}$ are multiples of $10^{12}$?
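Before turning to the multinomial machinery, the path-counting recurrence is easy to check numerically. Here is a minimal Python 3 sketch (the helper names are mine): it builds a level of the pyramid by feeding each entry to the three positions directly below it, then compares the result with the trinomial coefficients $n!/(a!\,b!\,c!)$.

```python
from math import factorial

def pyramid_level(n):
    # Build level n of Pascal's pyramid by path counting: each entry at a
    # level contributes to the three entries directly below it.
    level = {(0, 0): 1}  # level 0 is the apex
    for _ in range(n):
        nxt = {}
        for (a, b), paths in level.items():
            for pos in ((a, b), (a + 1, b), (a, b + 1)):
                nxt[pos] = nxt.get(pos, 0) + paths
        level = nxt
    return level

def trinomial(n, a, b):
    # Coefficient of x^a * y^b * z^(n-a-b) in (x + y + z)^n.
    return factorial(n) // (factorial(a) * factorial(b) * factorial(n - a - b))

lvl = pyramid_level(8)
assert all(v == trinomial(8, a, b) for (a, b), v in lvl.items())
```

The assertion passing for level 8 confirms that the path counts and the trinomial coefficients agree.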

## Solution Using the Multinomial Theorem

The generalization of the binomial theorem is the multinomial theorem. It states that a power of a sum of $m$ terms can be expanded using the formula
$(x_1+x_2+\cdots+x_m)^n=\sum_{{k_1+k_2+\cdots+k_m=n}\atop{0\le k_i\le n}}\left({n}\atop{k_1,k_2,\ldots,k_m}\right)\prod_{1\le t\le m}x_t^{k_t}$
where
$\left({n}\atop{k_1,k_2,\ldots,k_m}\right)=\frac{n!}{k_1!k_2!\cdots k_m!}.$
Of course, when m=2 this gives the binomial theorem. The sum is taken over all sequences of nonnegative integers $k_i$ with $k_1+k_2+\cdots+k_m=n$. If n=200000 and m=3, then the terms in the expansion are given by
$\left({200000}\atop{k_1,k_2,k_3}\right)x_1^{k_1}x_2^{k_2}x_3^{k_3}=\frac{200000!}{k_1!k_2!k_3!}x_1^{k_1}x_2^{k_2}x_3^{k_3}$
where $k_1+k_2+k_3=200000$. It’s worth pointing out that each of the coefficients is an integer, and thus has a unique factorization into products of prime integers. Of course, there’s no way that we’re going to calculate these coefficients. We only need to know when they’re divisible by $10^{12}$. Thus, we only need to consider how many factors of 2 and 5 are involved.

First, we’ll create a function $p(n,d)$ that outputs how many factors of $d$ are included in $n!$. We have that
$p(n,d)=\left\lfloor\frac{n}{d}\right\rfloor+\left\lfloor\frac{n}{d^2}\right\rfloor+\left\lfloor\frac{n}{d^3}\right\rfloor+ \cdots+\left\lfloor\frac{n}{d^r}\right\rfloor,$
where $d^r$ is the largest power of $d$ not exceeding $n$. For instance, there are 199994 factors of 2 in 200000!. Since we’re wondering when our coefficients are divisible by $10^{12}=2^{12}5^{12}$, we’ll be using the values provided by $p(n,d)$ quite a bit for $d=2$ and $d=5$. We’ll store two lists:
$p2=[p(i,2)\text{ for }1\le i\le 200000]\quad\text{and}\quad p5=[p(i,5)\text{ for }1\le i\le 200000].$
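As a sanity check, here is a small Python 3 version of $p(n,d)$ (the naming follows the text); it confirms the count of 199994 factors of 2 quoted above, and gives the corresponding count for 5.

```python
def p(n, d):
    # Legendre-style count: number of factors of the prime d in n!
    count, power = 0, d
    while power <= n:
        count += n // power
        power *= d
    return count

print(p(200000, 2))  # 199994
print(p(200000, 5))  # 49998
```

Note that 199994 - 12 = 199982 and 49998 - 12 = 49986 are exactly the thresholds used in the divisibility condition below.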
For a given $k_1,k_2,k_3$, the corresponding coefficient is divisible by $10^{12}$ precisely when
$p2[k_1]+p2[k_2]+p2[k_3]<199983\ \text{and}\ p5[k_1]+p5[k_2]+p5[k_3]<49987.$
That is, this condition ensures that the numerator of the fraction defining the coefficient contains at least 12 more factors of 2, and at least 12 more factors of 5, than the denominator does.

Now, we know that $k_1+k_2+k_3=200000$, and we can exploit symmetry and avoid redundant computations if we assume $k_1\le k_2\le k_3$. Under this assumption, we always have
$k_1\le\left\lfloor\frac{200000}{3}\right\rfloor=66666.$
Since 200000 isn't divisible by 3, the case $k_1=k_2=k_3$ is impossible. It follows that we can only have (case 1) $k_1=k_2 < k_3$, or (case 2) $k_1 < k_2=k_3$, or (case 3) $k_1 < k_2 < k_3$.

In case 1, we iterate $0\le k_1\le 66666$, setting $k_2=k_1$ and $k_3=200000-k_1-k_2$. We check the condition, and when it is satisfied we record 3 new instances of coefficients (since we may permute the $k_i$ in 3 ways).

In case 2, we iterate $0\le k_1\le 66666$, and when $k_1$ is even (so that $200000-k_1$ is too) we set $k_2=k_3=\frac{200000-k_1}{2}$. When the condition holds, we again record 3 new instances.

In case 3, we iterate $0\le k_1\le 66666$, and we iterate over $k_2=k_1+a$ where $1\le a < \left\lfloor\frac{200000-3k_1}{2}\right\rfloor$. Then $k_3=200000-k_1-k_2$. When the condition holds, we record 6 instances (since there are 6 permutations of 3 objects).
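To convince ourselves that this three-case bookkeeping is right, we can compare it against a brute-force count over all ordered triples on a small exponent (here $n=20$ and divisibility by $10^1$; this is a Python 3 sketch and all the names are mine).

```python
from math import factorial

def p(n, d):
    # number of factors of the prime d in n!
    count, power = 0, d
    while power <= n:
        count += n // power
        power *= d
    return count

def counts(n, e):
    # A coefficient is divisible by 10^e iff each k-sum stays at or
    # below p(n, d) - e, for d = 2 and d = 5.
    t2, t5 = p(n, 2) - e, p(n, 5) - e
    p2 = [p(i, 2) for i in range(n + 1)]
    p5 = [p(i, 5) for i in range(n + 1)]

    def ok(k1, k2, k3):
        return p2[k1] + p2[k2] + p2[k3] <= t2 and p5[k1] + p5[k2] + p5[k3] <= t5

    # brute force over all ordered triples, verifying the criterion as we go
    brute = 0
    for k1 in range(n + 1):
        for k2 in range(n - k1 + 1):
            k3 = n - k1 - k2
            c = factorial(n) // (factorial(k1) * factorial(k2) * factorial(k3))
            assert (c % 10**e == 0) == ok(k1, k2, k3)
            if ok(k1, k2, k3):
                brute += 1

    # symmetric enumeration with k1 <= k2 <= k3 (assumes 3 does not divide n)
    sym = 0
    for k1 in range(n // 3 + 1):
        if ok(k1, k1, n - 2 * k1):            # case 1: k1 = k2 < k3
            sym += 3
        if (n - k1) % 2 == 0:                 # case 2: k1 < k2 = k3
            k2 = (n - k1) // 2
            if ok(k1, k2, k2):
                sym += 3
        a = 1
        while 2 * a < n - 3 * k1:             # case 3: k1 < k2 < k3
            k2 = k1 + a
            if ok(k1, k2, n - k1 - k2):
                sym += 6
            a += 1
    return brute, sym

b, s = counts(20, 1)
assert b == s
```

The inner assertion also double-checks that the $p_2/p_5$ threshold criterion matches literal divisibility of each coefficient.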

## Cython Solution

I’ll provide two implementations, the first written in Cython inside Sage. Then, I’ll write a parallel solution in C.

%cython

import time
from libc.stdlib cimport malloc, free

cdef unsigned long p(unsigned long k, unsigned long d):
    cdef unsigned long power = d
    cdef unsigned long exp = 0
    while power <= k:
        exp += k / power
        power *= d
    return exp

cdef unsigned long * p_list(unsigned long n, unsigned long d):
    cdef unsigned long i = 0
    cdef unsigned long * powers = <unsigned long *>malloc((n+1)*sizeof(unsigned long))
    while i <= n:
        powers[i] = p(i,d)
        i += 1
    return powers

head_time = time.time()

# form a list of the number of times each n! is divisible by 2
cdef unsigned long * p2 = p_list(200000,2)

# form a list of the number of times each n! is divisible by 5
cdef unsigned long * p5 = p_list(200000,5)

run_time = time.time()

cdef unsigned long k1, k2, k3, a
cdef unsigned long long result = 0

k1 = 0
while k1 <= 66666:
    # case 1: k1 = k2 < k3
    k2 = k1
    k3 = 200000 - k1 - k2
    if 199982 >= (p2[k1]+p2[k2]+p2[k3]) and 49986 >= (p5[k1]+p5[k2]+p5[k3]):
        result += 3
    # case 2: k1 < k2 = k3
    if k1 % 2 == 0:
        k2 = (200000 - k1)/2
        k3 = k2
        if 199982 >= (p2[k1]+p2[k2]+p2[k3]) and 49986 >= (p5[k1]+p5[k2]+p5[k3]):
            result += 3
    # case 3: k1 < k2 < k3
    a = 1
    while 2*a < (200000 - 3*k1):
        k2 = k1 + a
        k3 = 200000 - k1 - k2
        if 199982 >= (p2[k1]+p2[k2]+p2[k3]) and 49986 >= (p5[k1]+p5[k2]+p5[k3]):
            result += 6
        a += 1
    k1 += 1

free(p2)
free(p5)

elapsed_run = round(time.time() - run_time, 5)
elapsed_head = round(time.time() - head_time, 5)

print "Result: %s" % result
print "Runtime: %s seconds (total time: %s seconds)" % (elapsed_run, elapsed_head)

When executed, we find the correct result relatively quickly.

Result: 479742450
Runtime: 14.62538 seconds (total time: 14.62543 seconds)

## C with OpenMP Solution

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

/*****************************************************************************/
/* function to determine how many factors of 'd' are in 'k!'                 */
/*****************************************************************************/
unsigned long p(unsigned long k, unsigned long d) {
    unsigned long power = d;
    unsigned long exp = 0;
    while (power <= k) {
        exp += k/power;
        power *= d;
    }
    return exp;
}

/*****************************************************************************/
/* create a list [p(0,d),p(1,d),p(2,d), ... ,p(n,d)] and return pointer      */
/*****************************************************************************/
unsigned long * p_list(unsigned long n, unsigned long d) {
    unsigned long i;
    unsigned long * powers = malloc((n+1)*sizeof(unsigned long));
    for (i=0;i<=n;i++) powers[i] = p(i,d);
    return powers;
}

/*****************************************************************************/
/* main                                                                      */
/*****************************************************************************/
int main(int argc, char **argv) {
    unsigned long k1, k2, k3, a;
    unsigned long long result = 0;

    unsigned long * p2 = p_list(200000, 2);
    unsigned long * p5 = p_list(200000, 5);

    #pragma omp parallel for private(k1,k2,k3,a) reduction(+ : result)
    for (k1=0;k1<66667;k1++) {
        // case 1: k1 = k2 < k3
        k2 = k1;
        k3 = 200000 - k1 - k2;
        if (p2[k1]+p2[k2]+p2[k3]<199983 && p5[k1]+p5[k2]+p5[k3]<49987) {
            result += 3;
        }
        // case 2: k1 < k2 = k3
        if (k1 % 2 == 0) {
            k2 = (200000 - k1)/2;
            k3 = k2;
            if (p2[k1]+p2[k2]+p2[k3]<199983 && p5[k1]+p5[k2]+p5[k3]<49987) {
                result += 3;
            }
        }
        // case 3: k1 < k2 < k3
        for (a=1;2*a<(200000-3*k1);a++) {
            k2 = k1 + a;
            k3 = 200000 - k1 - k2;
            if (p2[k1]+p2[k2]+p2[k3]<199983 && p5[k1]+p5[k2]+p5[k3]<49987) {
                result += 6;
            }
        }
    }

    free(p2);
    free(p5);

    printf("result: %llu\n", result);

    return 0;
}

This can be compiled and optimized using GCC as follows.

$ gcc -O3 -fopenmp -o problem-154-omp problem-154-omp.c

When executed on a 16-core machine, we get the following result.

$ time ./problem-154-omp
result: 479742450

real    0m1.487s

This appears to be the fastest solution currently known, according to the forum of solutions on Project Euler. The CPUs on the 16-core machine are pretty weak by modern standards. When running on a single core on a new Intel Core i7, the result is returned in about 4.7 seconds.

### Problem

Euler published the remarkable quadratic formula:

$n^2+n+41$

It turns out that the formula will produce 40 primes for the consecutive values $n=0$ to $39$. However, when $n=40$, $40^2+40+41=40(40+1)+41$ is divisible by 41, and when $n=41$, $41^2+41+41$ is clearly divisible by 41.

Using computers, the incredible formula $n^2-79n+1601$ was discovered, which produces 80 primes for the consecutive values $n=0$ to $79$. The product of the coefficients, $-79$ and $1601$, is $-126479$.

Problem: You are given the following information, but you may prefer to do some research for yourself.

- 1 Jan 1900 was a Monday.
- Thirty days has September, April, June and November. All the rest have thirty-one, saving February alone, which has twenty-eight, rain or shine, and on leap years, twenty-nine.
- A leap year occurs on any year evenly divisible by 4, but not on a century unless it is divisible by 400.

How many Sundays fell on the first of the month during the twentieth century (1 Jan 1901 to 31 Dec 2000)?

### Approach

There are several different ways to approach this. The easiest, I think, is to use the Gaussian formula for the day of the week. It is a purely mathematical formula that I have encoded in the following Python code.

### Python Solution

import time
from math import floor

"""
Gaussian algorithm to determine day of week
"""
def day_of_week(year, month, day):
    """
    w = (d + floor(2.6*m - 0.2) + y + floor(y/4) + floor(c/4) - 2*c) mod 7
    Y = year - 1 for January or February
    Y = year for other months
    d = day (1 to 31)
    m = shifted month (March = 1, February = 12)
    y = last two digits of Y
    c = first two digits of Y
    w = day of week (Sunday = 0, Saturday = 6)
    """
    d = day
    m = (month - 3) % 12 + 1
    if m > 10:
        Y = year - 1
    else:
        Y = year
    y = Y % 100
    c = (Y - (Y % 100)) / 100
    w = (d + floor(2.6 * m - 0.2) + y + floor(y/4) + floor(c/4) - 2*c) % 7
    return int(w)

"""
Compute the number of months starting on a given day of the week in a range of years
"""
def months_start_range(day, year_start, year_end):
    total = 0
    for year in range(year_start, year_end + 1):
        for month in range(1, 13):
            if day_of_week(year, month, 1) == day:
                total += 1
    return total

start = time.time()
total = months_start_range(0, 1901, 2000)
elapsed = time.time() - start

print "%s found in %s seconds" % (total, elapsed)

This returns the correct result.

171 found in 0.0681998729706 seconds

That will run faster if executed directly in Python.
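As an independent sanity check on the Gaussian formula, Python's standard `datetime` module (Python 3 here) reproduces the same count without any hand-rolled calendar arithmetic.

```python
from datetime import date

# Count first-of-month Sundays in 1901-2000 with the standard library.
# Note: date.weekday() uses Monday = 0, so Sunday = 6.
total = sum(
    1
    for year in range(1901, 2001)
    for month in range(1, 13)
    if date(year, month, 1).weekday() == 6
)
print(total)  # 171
```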
Remember, I’m using the Sage notebook environment to execute most Python here. I’m also writing Cython in the same environment.

### Cython Solution

There is a nearly trivial rewriting to Cython of the above Python code.

%cython

import time
from math import floor

"""
Gaussian algorithm to determine day of week
"""
cdef day_of_week(int year, int month, int day):
    """
    w = (d + floor(2.6*m - 0.2) + y + floor(y/4) + floor(c/4) - 2*c) mod 7
    Y = year - 1 for January or February
    Y = year for other months
    d = day (1 to 31)
    m = shifted month (March = 1, February = 12)
    y = last two digits of Y
    c = first two digits of Y
    w = day of week (Sunday = 0, Saturday = 6)
    """
    cdef int d = day
    cdef int m = (month - 3) % 12 + 1
    cdef int Y
    if m > 10:
        Y = year - 1
    else:
        Y = year
    cdef int y = Y % 100
    cdef int c = (Y - (Y % 100)) / 100
    cdef double w
    w = (d + floor(2.6 * m - 0.2) + y + floor(y/4) + floor(c/4) - 2*c) % 7
    return int(w)

"""
Compute the number of months starting on a given day of the week in a range of years
"""
cdef months_start_range(int day, int year_start, int year_end):
    cdef unsigned int total = 0
    cdef int year, month
    for year in range(year_start, year_end + 1):
        for month in range(1, 13):
            if day_of_week(year, month, 1) == day:
                total += 1
    return total

start = time.time()
total = months_start_range(0, 1901, 2000)
elapsed = time.time() - start

print "%s found in %s seconds" % (total, elapsed)

The code is a bit longer, but it executes much faster.

171 found in 0.00387215614319 seconds

The Cython code runs roughly 18 times faster.
### C Solution

The Cython code was used as a model to create more efficient C code. The only issue here is in maintaining the correct datatypes (not too hard, but compared to Cython it is a pain).

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int day_of_week(int year, int month, int day) {
    // Using the Gaussian algorithm
    int d = day;
    // the +12 keeps the C remainder nonnegative for January and February
    double m = (double) ((month - 3 + 12) % 12 + 1);
    int Y;
    if (m > 10) Y = year - 1;
    else Y = year;
    int y = Y % 100;
    int c = (Y - (Y % 100)) / 100;
    // add 7 before the final mod, since C's % can return a negative value
    int w = ((d + (int)floor(2.6*m - 0.2) + y + y/4 + c/4 - 2*c) % 7 + 7) % 7;
    return w;
}

long months_start_range(int day, int year_start, int year_end) {
    unsigned long total = 0;
    int year, month;
    for (year = year_start; year <= year_end; year++) {
        for (month = 1; month <= 12; month++) {
            if (day_of_week(year, month, 1) == day) total++;
        }
    }
    return total;
}

int main(int argc, char **argv) {
    int iter = 0;
    long total;
    while (iter < 100000) {
        total = months_start_range(0, 1901, 2000);
        iter++;
    }
    printf("Solution: %ld\n", total);
    return 0;
}

Notice that this executes the loop 100,000 times, as I’m trying to get a good idea of what the average runtime is. We compile with optimization and the -lm math option. We get the following result.

$ gcc -O3 -o problem-19 problem-19.c -lm
$ time ./problem-19
Solution: 171

real	0m6.240s

The C code runs roughly 62 times as fast as the Cython and roughly 1100 times as fast as the Python. Each iteration executes in about 6.24e-5 seconds.

Problem: $2^{15}=32768$ and the sum of its digits is $3+2+7+6+8=26$.
What is the sum of the digits of the number $2^{1000}$?

### Sage Solution

Sage’s built-in Python and its integer methods make this easy.

import time

start = time.time()
a = 2^1000
s = sum(a.digits())
elapsed = time.time() - start

print "%s found in %s seconds" % (s,elapsed)

It executes pretty quickly too.

1366 found in 0.000343084335327 seconds

### An Easy Python Solution

Python itself also makes this problem too easy, due to string functions.

import time

def pow2sum(exp):
    pow = list(str(2**exp))
    return sum([int(i) for i in pow])

start = time.time()
n = pow2sum(1000)
elapsed = (time.time() - start)
print "%s found in %s seconds" % (n,elapsed)

And it is fast:

1366 found in 0.000911951065063 seconds

### Python Without Strings Attached

Let’s do our arithmetic without string functionality. Then, I’d note that $2^{1000}<10^{1000}$ and so we know that, at most, we're dealing with 1000 digits. So, we can create a routine to multiply 2 by itself 1000 times, maintaining the result at each step in a list instead of a single integer. (This is how you'd be forced to do things in C, where arbitrary length integers are pure fiction.) Since we're only multiplying by 2 at each iteration, we know that we'll either carry a zero or a one to the next stage... which does make this routine a bit simpler than your typical multiply-by-list routine.

import time

def pow2sum(exp):
    L = [0] * exp      # enough digits, since 2**exp < 10**exp
    L[0] = 1
    for power in range(exp):
        carry = 0
        for index in range(exp):
            prod = L[index] * 2 + carry
            if prod > 9:
                carry = 1
                prod = prod % 10
            else: carry = 0
            L[index] = prod
    return sum(L)

start = time.time()
n = pow2sum(1000)
elapsed = (time.time() - start)
print "%s found in %s seconds" % (n,elapsed)

It runs relatively quickly, although not as quickly as the string version runs.

1366 found in 0.361020803452 seconds

### Cython Solutions

We can first trivially rewrite the string Python version.

%cython

import time

cdef pow2sum(unsigned int exp):
    cdef list pow = list(str(2**exp))
    return sum([int(i) for i in pow])

start = time.time()
n = pow2sum(1000)
elapsed = (time.time() - start)
print "%s found in %s seconds" % (n,elapsed)

When executed, we find that the string Cython code runs about 1.14 times as fast as the string Python code.

1366 found in 0.000799894332886 seconds

Of course, I’m more interested in seeing how much faster the arithmetic version runs.

%cython

import time
from libc.stdlib cimport malloc, free

cdef pow2sum(unsigned int exp):
    cdef int *L = <int *>malloc(exp * sizeof(int))
    cdef unsigned int power = 0, index = 0
    cdef unsigned int prod, carry
    while index < exp:
        L[index] = 0
        index += 1
    L[0] = 1
    while power < exp:
        carry, index = 0, 0
        while index < exp:
            prod = L[index] * 2 + carry
            if prod > 9:
                carry = 1
                prod = prod % 10
            else: carry = 0
            L[index] = prod
            index += 1
        power += 1
    cdef int sum = 0
    index = 0
    while index < exp:
        sum += L[index]
        index += 1
    free(L)
    return sum

start = time.time()
n = pow2sum(1000)
elapsed = (time.time() - start)
print "%s found in %s seconds" % (n,elapsed)

When executed, we get the following result.

1366 found in 0.00858902931213 seconds

Problem: Starting in the top left corner of a $2\times 2$ grid and moving only down and right, there are 6 routes to the bottom right corner.

How many routes are there through a $20\times 20$ grid?

### A Great Interview Problem

This is precisely the sort of problem I expect to see in a technical coding interview, and knowing the various ways of solving a problem such as this will help you get far in those situations. But, even if you’re an experienced coder, the solutions may not be obvious at first. I want to walk through a few potential solutions and see how they perform.

### Recursive Python Solution

I think the easiest solution, and one you should know (but possibly don’t) is the recursive approach. The main idea here is to develop a function that will call itself, inching along to the right and down in all possible combinations, returning a value of 1 whenever it reaches the bottom-right and summing all of those 1s along the way. You should convince yourself that the procedure actually terminates (at what we call “the base case”) and returns the correct solution. Try doing it on paper on the 2×2 grid, or a 3×3 grid, and see what happens. Here’s what the Python code looks like.

#!/usr/bin/python

import time

gridSize = [20,20]

def recPath(gridSize):
    """
    Recursive solution to grid problem. Input is a list of x,y moves remaining.
    """
    # base case, no moves left
    if gridSize == [0,0]: return 1
    # recursive calls
    paths = 0
    # move right when possible
    if gridSize[0] > 0:
        paths += recPath([gridSize[0]-1,gridSize[1]])
    # move down when possible
    if gridSize[1] > 0:
        paths += recPath([gridSize[0],gridSize[1]-1])

    return paths

start = time.time()
result = recPath(gridSize)
elapsed = time.time() - start

print "result %s found in %s seconds" % (result, elapsed)

That’s great, and it will actually work, but it may take some time. Actually it takes a lot of time. By that, I mean it really, really takes a lot of time. When we run it on the 2×2 input, we get the following.

result 6 found in 9.05990600586e-06 seconds

When we run it on the 20×20 input, as the problem requires, it runs for about 4 hours before I kill it. Python is OK at recursive function calls, and it can handle/collapse the memory required moderately well, but what we’re doing here is manually constructing ALL possible paths to a solution, which isn’t incredibly efficient. Still, during a technical interview, this is definitely the first idea for a solution that should pop into your head. It’s quick and easy to write, and many problems can be solved like this.
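For what it’s worth, the recursive idea itself isn’t hopeless: the blow-up comes from recomputing the same subproblems over and over. A hedged sketch of the usual fix, memoization via `functools.lru_cache` (Python 3 here; the function name is mine), turns essentially the same recursion into something instant.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def rec_path(right, down):
    # Same recursion as before, but each (right, down) subproblem is
    # computed only once and then served from the cache.
    if right == 0 and down == 0:
        return 1
    paths = 0
    if right > 0:
        paths += rec_path(right - 1, down)
    if down > 0:
        paths += rec_path(right, down - 1)
    return paths

print(rec_path(20, 20))  # 137846528820
```

With the cache there are only 21 * 21 distinct subproblems, so the 20×20 case finishes in well under a millisecond.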

### Dynamic Python Solution

Our recursive approach suffers from the problem that we’re doing a lot of similar operations over and over. Can we learn anything from the smaller cases and build up from there? In this case, the answer is yes, but it requires us to build up some mathematics a bit more. This is actually quite easy if approached correctly. The idea is to construct the solution recursively, but differently this time.

Claim: Let $n$ be any natural number and consider the 2-dimensional sequence $S_{i,j}$ defined by

$S_{i,j}=\begin{cases}1 &\text{ if }j=0\\ S_{i,j-1}+S_{i-1,j} &\text{ if }0<j<i\\ 2S_{i,j-1} &\text{ if }i=j\end{cases}$

where $0\le i\le n$ and $0\le j\le i$. Then the number of non-backtracking paths from top-left to bottom-right through an $n\times n$ grid is $S_{n,n}$.

Proof: Consider a grid of $m$ rows and $n$ columns (we do not need to assume that the grid is square). Counting from the upper-left and starting at zero, denote the intersection/node in the $i$-th row and $j$-th column by $N_{i,j}$. Thus, the upper-left node is $N_{0,0}$, the bottom-left is $N_{m,0}$ and the bottom-right is $N_{m,n}$. Clearly, the number of paths from $N_{0,0}$ to any node along the far left or far top of the grid is only 1 (since we may only proceed down or right). Now, consider how many paths there are to $N_{1,1}$. We must first travel through $N_{0,1}$ or $N_{1,0}$. This yields only two paths to $N_{1,1}$. We can continue this process. In order to determine the total number of paths to any node $N_{i,j}$, we only need to sum together the total number of paths to $N_{i,j-1}$ and $N_{i-1,j}$, since every path arrives through one of those two neighbors. Writing these path counts at each node, level by level, produces the triangular array of values displayed below.

Thus, in a $4\times 4$ grid, there are 70 non-backtracking paths. How does this relate to the sequence $S_{i,j}$? Simply put, $S_{i,j}$ is the number of paths to node $N_{i,j}$. If we write out the sequence $S_{i,j}$ for $0\le i\le 4$ and $0\le j\le i$, we obtain the following array of path counts.

$\begin{array}{ccccc} S_{0,0} = 1 & & & & \cr S_{1,0} = 1 & S_{1,1}=2 & & & \cr S_{2,0} = 1 & S_{2,1}=3 & S_{2,2}=6 & & \cr S_{3,0} = 1 & S_{3,1}=4 & S_{3,2}=10 & S_{3,3}=20 & \cr S_{4,0} = 1 & S_{4,1}=5 & S_{4,2}=15 & S_{4,3}=35 & S_{4,4}=70\end{array}$

That completes the proof.
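Incidentally, $S_{n,n}$ is just the central binomial coefficient $\binom{2n}{n}$, since a path through an $n\times n$ grid is a choice of which $n$ of the $2n$ steps go right. A quick Python 3 sketch of the recurrence from the claim (the names are mine) confirms this.

```python
from math import comb

def S(n):
    # Build successive rows of S_{i,j} using the recurrence in the claim.
    row = [1]                                 # row i = 0
    for i in range(1, n + 1):
        new = [1]                             # S_{i,0} = 1
        for j in range(1, i):
            new.append(new[j - 1] + row[j])   # S_{i,j} = S_{i,j-1} + S_{i-1,j}
        new.append(2 * new[i - 1])            # S_{i,i} = 2 S_{i,i-1}
        row = new
    return row[n]

assert S(4) == 70
assert S(20) == comb(40, 20) == 137846528820
```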

Let’s code that into a Python solution and see how fast it runs. My bet is that this will be considerably faster than our initial recursive solution. We can record the sequence $S_{i,j}$ as a single dimensional list that is simply rewritten at each iteration.

#!/usr/bin/python

import time

def route_num(cube_size):
    L = [1] * cube_size
    for i in range(cube_size):
        for j in range(i):
            # when j = 0 this adds L[-1], the final entry, which
            # remains 1 until the very last pass
            L[j] = L[j]+L[j-1]
        L[i] = 2 * L[i - 1]
    return L[cube_size - 1]

start = time.time()
n = route_num(20)
elapsed = (time.time() - start)
print "%s found in %s seconds" % (n,elapsed)

When executed, we get the following.

137846528820 found in 0.000205039978027 seconds

### Cython Solution

We’ll recode things in Cython and see how much faster we can get the result returned.

%cython

import time
from libc.stdlib cimport malloc, free

cdef route_num(short cube_size):
    cdef unsigned long *L = <unsigned long *>malloc((cube_size + 1) * sizeof(unsigned long))
    cdef short j,i = 0
    while i <= cube_size:
        L[i] = 1
        i += 1
    i = 1
    while i <= cube_size:
        j = 1
        while j < i:
            L[j] = L[j]+L[j-1]
            j += 1
        L[i] = 2 * L[i - 1]
        i += 1
    cdef unsigned long c = L[cube_size]
    free(L)
    return c

start = time.time()
cdef unsigned long n = route_num(20)
elapsed = (time.time() - start)
print "%s found in %s seconds" % (n,elapsed)

We now get the result a bit more quickly.

137846528820 found in 2.21729278564e-05 seconds

The Cython code executes roughly 9 times as fast as the Python.

Problem: The following iterative sequence is defined for the set of positive integers:

$n\rightarrow\begin{cases}n/2 & n \text{ even}\\ 3n+1 & n \text{ odd}\end{cases}$

Using the rule above and starting with 13, we generate the following sequence:

$13\rightarrow 40\rightarrow 20\rightarrow 10\rightarrow 5\rightarrow 16\rightarrow 8\rightarrow 4\rightarrow 2\rightarrow 1$

It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1. Which starting number, under one million, produces the longest chain? Note: Once the chain starts the terms are allowed to go above one million.

### Idea Behind a Solution

I’ll refer to the “Collatz length of $n$” as the length of the chain from an integer $n$ to 1 using the above described sequence. If we were to calculate the Collatz length of each integer separately, that would be incredibly inefficient. In Python, that would look something like this.

### First Python Solution

import time

start = time.time()

def collatz(n, count=1):
    while n > 1:
        count += 1
        if n % 2 == 0:
            n = n/2
        else:
            n = 3*n + 1
    return count

max = [0,0]
for i in range(1000000):
    c = collatz(i)
    if c > max[0]:
        max[0] = c
        max[1] = i

elapsed = (time.time() - start)
print "found %s at length %s in %s seconds" % (max[1],max[0],elapsed)

Now, this will actually determine the solution, but it is going to take a while, as shown when we run the code.

found 837799 at length 525 in 46.6846499443 seconds.

### A Better Python Solution

What I’m going to do is cache the Collatz lengths for integers below one million. The idea is that we can use the cached values to make calculations of new Collatz lengths more efficient. But, we don’t want to record every single number in the Collatz sequences that we’ll be using, because some of the sequences actually reach up into the billions. We’ll make a list called TO_ADD, and we’ll only populate that with numbers for which Collatz lengths are unknown. Once known, the Collatz lengths will be stored for repeated use.

import time

start = time.time()

limit = 1000000
collatz_length = [0] * limit
collatz_length[1] = 1
max_length = [1,1]

for i in range(1,1000000):
    n,s = i,0
    TO_ADD = [] # numbers whose collatz_length is not yet known
    while n > limit - 1 or collatz_length[n] < 1:
        TO_ADD.append(n)
        if n % 2 == 0: n = n/2
        else: n = 3*n + 1
        s += 1
    # collatz_length now known from previous calculations
    p = collatz_length[n]
    for j in range(s):
        m = TO_ADD[j]
        if m < limit:
            new_length = p + s - j
            collatz_length[m] = new_length
            if new_length > max_length[1]: max_length = [m,new_length]

elapsed = (time.time() - start)
print "found %s at length %s in %s seconds" % (max_length[0],max_length[1],elapsed)

This should return the same result, but in significantly less time.

found 837799 at length 525 in 5.96128201485 seconds
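The reported record is easy to double-check in isolation with a direct chain walk (Python 3 sketch; the helper name is mine).

```python
def collatz_length(n):
    # Length of the chain from n down to 1, counting both endpoints.
    count = 1
    while n > 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        count += 1
    return count

print(collatz_length(13))      # 10, matching the example chain in the problem
print(collatz_length(837799))  # 525
```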

### A First Cython Solution

If we take our original approach of computing each Collatz length from scratch, this might actually work slightly better in Cython.

%cython

import time

cdef collatz(unsigned long n):
    # n must be a 64-bit type: chains starting below one million can
    # climb into the billions mid-chain
    cdef unsigned int count = 1
    while n > 1:
        count += 1
        if n % 2 == 0:
            n = n/2
        else:
            n = 3*n + 1
    return count

cdef find_max_collatz(unsigned int min, unsigned int max):
    cdef unsigned int m = 1
    cdef unsigned long num = 1
    cdef unsigned int count = 1
    cdef unsigned long iter = min
    while iter < max:
        count = collatz(iter)
        if count > m:
            m = count
            num = iter
        iter += 1
    return num

start = time.time()
max_found = find_max_collatz(1,1000000)
elapsed = (time.time() - start)
print "found %s in %s seconds" % (max_found,elapsed)

In fact, when executed, we find that it is significantly better than our efficient Python code.

found 837799 in 0.604798078537 seconds

This just goes to show that even low efficiency machine/compiled code can drastically outperform efficient Python. But, how far can we take this Cython refinement? What if we were to recode our more efficient algorithm in Cython? It may look something like this.

### A Better Cython Solution

%cython

import time
from libc.stdlib cimport malloc, free

cdef find_max_collatz(unsigned long int max):
    cdef int *collatz_length = <int *>malloc(max * sizeof(int))
    cdef list TO_ADD # holds numbers of unknown collatz length
    cdef unsigned long iter, j, m, n, s, p, ind, new_length, max_length = 0

    # set initial collatz lengths
    iter = 0
    while iter < max:
        collatz_length[iter] = 0
        iter += 1
    collatz_length[1] = 1

    # iterate to max and find collatz lengths
    iter = 1
    while iter < max:
        n,s = iter,0
        TO_ADD = []
        while n > max - 1 or collatz_length[n] < 1:
            TO_ADD.append(n)
            if n % 2 == 0: n = n/2
            else: n = 3*n + 1
            s += 1
        # collatz length now known from previous calculations
        p = collatz_length[n]
        j = 0
        while j < s:
            m = TO_ADD[j]
            if m < max:
                new_length = p + s - j
                collatz_length[m] = new_length
                if new_length > max_length:
                    max_length = new_length
                    ind = m
            j += 1
        iter += 1

    free(collatz_length)
    return ind

start = time.time()
max_collatz = find_max_collatz(1000000)
elapsed = (time.time() - start)
print "found %s in %s seconds" % (max_collatz,elapsed)

This gives us some relatively good results:

found 837799 in 0.46523308754 seconds

Still, it isn’t a great improvement over the naive Cython code. What’s going on? I bet that the TO_ADD data structure could be changed from a Python list (notice the “cdef list” definition) to a malloc’d C array. That will be a bit more work, but my gut instincts tell me that this is probably the bottleneck in our current Cython code. Let’s rewrite it a bit.

%cython

import time
from libc.stdlib cimport malloc, free

cdef find_max_collatz(unsigned long int max):
    cdef int *collatz_length = <int *>malloc(max * sizeof(int))
    # chain values can exceed 32 bits, so TO_ADD holds unsigned longs;
    # no chain here has more than 600 steps
    cdef unsigned long *TO_ADD = <unsigned long *>malloc(600 * sizeof(unsigned long))
    cdef unsigned long iter, j, m, n, s, p, ind, new_length, max_length = 0

    # set initial collatz lengths and TO_ADD numbers
    iter = 0
    while iter < max:
        collatz_length[iter] = 0
        iter += 1
    collatz_length[1] = 1
    iter = 0
    while iter < 600:
        TO_ADD[iter] = 0
        iter += 1

    # iterate to max and find collatz lengths
    iter = 1
    while iter < max:
        n,s = iter,0
        while n > max - 1 or collatz_length[n] < 1:
            TO_ADD[s] = n
            if n % 2 == 0: n = n/2
            else: n = 3*n + 1
            s += 1
        # collatz length now known from previous calculations
        p = collatz_length[n]
        j = 0
        while j < s:
            m = TO_ADD[j]
            if m < max:
                new_length = p + s - j
                collatz_length[m] = new_length
                if new_length > max_length:
                    max_length = new_length
                    ind = m
            j += 1
        iter += 1

    free(collatz_length)
    free(TO_ADD)
    return ind

start = time.time()
max_collatz = find_max_collatz(1000000)
elapsed = (time.time() - start)
print "found %s in %s seconds" % (max_collatz,elapsed)

Now, when we execute this code we get the following.

found 837799 in 0.0465848445892 seconds

That’s much better. So, by using Cython and writing things a bit more efficiently, the code executes 1119 times as fast.

### C Solution

If I structure the algorithm in the same way, I don’t expect to gain much by rewriting things in C, but I’ll see what happens.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int find_max_collatz(unsigned long max) {
    // heap allocation: a million-entry array would risk overflowing the stack
    unsigned int *collatz_length = malloc(max * sizeof(unsigned int));
    unsigned long TO_ADD[600]; /* no chain here has more than 600 steps */
    unsigned long iter, j, m, n, s, p, ind = 1, new_length, max_length = 0;

    // set initial collatz lengths and TO_ADD numbers
    iter = 0;
    while(iter < max) {
        collatz_length[iter] = 0;
        iter++;
    }
    collatz_length[1] = 1;
    iter = 0;
    while(iter < 600) {
        TO_ADD[iter] = 0;
        iter++;
    }
    // iterate to max and find collatz lengths
    iter = 1;
    while(iter < max) {
        n = iter;
        s = 0;
        while(n > max - 1 || collatz_length[n] < 1) {
            TO_ADD[s] = n;
            if(n % 2 == 0) n = n/2;
            else n = 3*n + 1;
            s++;
        }
        // collatz length now known from previous calculations
        p = collatz_length[n];
        j = 0;
        while(j < s) {
            m = TO_ADD[j];
            if(m < max) {
                new_length = p + s - j;
                collatz_length[m] = new_length;
                if(new_length > max_length) {
                    max_length = new_length;
                    ind = m;
                }
            }
            j++;
        }
        iter++;
    }
    free(collatz_length);
    return ind;
}

int main(int argc, char **argv) {
    unsigned int max, i;
    time_t start, end;
    double total_time;

    start = time(NULL);

    for(i=0;i<1000;i++) max = find_max_collatz(1000000);

    end = time(NULL);
    total_time = difftime(end,start);

    printf("%d found in %lf seconds.\n",max,total_time);

    return 0;
}

We’re using 1,000 iterations to try and get a good idea of how this runs.

Problem: A Pythagorean triplet is a set of three natural numbers, $a<b<c$, for which $a^2+b^2=c^2$. There exists exactly one Pythagorean triplet for which $a+b+c=1000$. Find the product $abc$.

### Python Solution

import time

def prod_triplet_w_sum(n):
    for i in range(1,n,1):
        for j in range(1,n-i,1):
            k = n-i-j
            if i**2+j**2==k**2:
                return i*j*k
    return 0

start = time.time()
product = prod_triplet_w_sum(1000)
elapsed = (time.time() - start)

print "found %s in %s seconds" % (product,elapsed)

When executed, this code gives the following result.

found 31875000 in 0.101438999176 seconds
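The product corresponds to the triplet $(200, 375, 425)$, which is easy to verify directly:

```python
# The unique triplet with a + b + c = 1000 is (200, 375, 425).
a, b, c = 200, 375, 425
assert a + b + c == 1000
assert a * a + b * b == c * c    # 40000 + 140625 == 180625
assert a * b * c == 31875000
print(a * b * c)
```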

### Cython Solution

We take the same approach in Cython, but remove the Python iterators in an attempt to improve performance.

%cython

import time

cdef prod_triplet_w_sum(unsigned int n):
    cdef unsigned int i,j,k
    i = 1
    while i < n:
        j = 1
        while j < n - i:
            k = n - i - j
            if i**2 + j**2 == k**2:
                return i*j*k
            j += 1
        i += 1
    return 0

start = time.time()
product = prod_triplet_w_sum(1000)
elapsed = (time.time() - start)

print "found %s in %s seconds" % (product,elapsed)

This gives us the following.

found 31875000 in 0.00186085700989 seconds

Thus, the Cython version is roughly 55 times as fast as the Python code.