Did I just discover a new mathematical formula?
What a weird way of starting a blog... anyways, please join me in my journey of spending time reasoning about complex stuff I know nothing about for no particular reason.
Let me start with a disclaimer: I’m not sure whether this is really unknown, I don’t know if it’s useful for anything, and I could never give a proper proof of it. I’m just a simple developer, far from being a mathematician, so be nice. Even if this is not new, I’m glad that I managed to figure it out all by myself.
The history
It all happened in three phases. The first one started back in high school, one day when I was on a city bus coming back home. My brain usually never leaves me alone, so on that day I was playing with the numbers on car plates as I saw them through the window. I can’t remember exactly how, but I realized that any integer squared equals the one below it times the one above it, plus one.
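In symbols, for any integer $x$:

$$x^2 = (x - 1)(x + 1) + 1$$

For instance, $7^2 = 6 \cdot 8 + 1 = 49$.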
Then a few years later (2016, according to a document I found on my drive in which I was exploring this in more detail) I set out to find whether it could be applied to powers other than two, and to describe it as a formula. I ended up with two different equations, one for even and another for odd exponents. For some reason I didn’t see at the time that the one for even exponents could be used for both cases. That is the one we’ll be considering in this article.
Before we jump right in, let’s walk through some relations that helped me to complete this task.
Note that if we take a list of consecutive integers raised to the same exponent and recursively subtract each value from the next one, we end up with the factorial of that exponent. Let’s see an example.
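With consecutive cubes (exponent 3), the differences settle at $3! = 6$:

cubes: 1, 8, 27, 64, 125
1st differences: 7, 19, 37, 61
2nd differences: 12, 18, 24
3rd differences: 6, 6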
After playing around for a little while, I got this (again, this was originally the even-power version):
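For a non-negative integer exponent $n$ it reads as follows (this is also what the code at the end of the post boils down to when $n$ is a whole number):

$$x^n = 1 + (x - 1)\sum_{i=0}^{n-1} x^i$$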
Let’s see it in action.
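For example, with $x = 3$ and $n = 4$:

$$3^4 = 1 + 2\,(3^0 + 3^1 + 3^2 + 3^3) = 1 + 2 \cdot 40 = 81$$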
However, since we still have exponentiation inside the summation, we can apply the formula to those terms recursively, just to make it look bigger.
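One level of that unfolding, with every $x^i$ inside the sum replaced by the formula itself, looks like this:

$$x^n = 1 + (x - 1)\sum_{i=0}^{n-1}\left(1 + (x - 1)\sum_{j=0}^{i-1} x^j\right)$$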
The final unfolding
As you can see, the formula has a limitation; it would be nicer if it could also handle fractional exponents. That is what I’ve been addressing after all these years, because I only remembered this whole thing recently.
The trick lies in the leading $1$; we can express it as $x^0$. That zero exponent is the same zero as the lower bound of the sum, and both are actually the fractional part of $n$, which until now has only been an integer.
Applying it, we finally have this beauty:
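$$x^n = x^{\,n \bmod 1} + (x - 1)\sum_{i\,=\,n \bmod 1}^{n - 1} x^i$$

The sum index still goes up in steps of one, but it now starts at the fractional part of $n$ instead of at zero; this is exactly what the Pow function further down computes.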
Or, if you are not a fan of using non-integer values in sum bounds (I don’t even know if that is strictly correct, to be honest), this one is for you:
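For instance, shifting the index by the fractional part gives an equivalent form with integer bounds:

$$x^n = x^{\,n \bmod 1} + (x - 1)\sum_{i=0}^{\lfloor n \rfloor - 1} x^{\,i + (n \bmod 1)}$$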
But the latter is kinda ugly, so I’ll stick with the former. Judge me.
Let’s put it to the test.
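Take, say, $x = 4$ and $n = 2.5$, so that $n \bmod 1 = 0.5$ and the sum index runs over $0.5$ and $1.5$:

$$4^{2.5} = 4^{0.5} + 3\,(4^{0.5} + 4^{1.5}) = 2 + 3\,(2 + 8) = 32$$

which matches $4^{2.5} = \sqrt{4^5} = \sqrt{1024} = 32$.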
It also works when the individual calculations (e.g. the terms with fractional exponents) come out as fractions; I simply picked an example where they don’t, to keep things easier to read.
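For instance, with $x = 2$ and $n = 1.5$ the only term in the sum, $2^{0.5}$, is no longer a whole number, but everything still adds up:

$$2^{1.5} = 2^{0.5} + 1 \cdot 2^{0.5} = 2\sqrt{2} \approx 2.828$$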
Some code
Here you can check out the equation in Go:
package main

import (
    "fmt"
    "math"
    "math/cmplx"
)

var memo = make(map[float64]map[float64]complex128)

func Pow(x, n float64) complex128 {
    if r1 := memo[x]; r1 != nil {
        if xpown := r1[n]; xpown != 0 {
            return xpown
        }
    } else {
        memo[x] = make(map[float64]complex128)
    }
    // n mod 1
    nmod1 := math.Mod(n, 1)
    // x^(n mod 1)
    xpownmod1 := cmplx.Pow(complex(x, 0), complex(nmod1, 0))
    // (x - 1)
    xminus1 := complex(x-1, 0)
    // n - 1
    nminus1 := n - 1
    // sum
    var sum complex128
    for i := nmod1; i <= nminus1; i++ {
        sum += Pow(x, i)
    }
    // x^n
    xpown := xpownmod1 + xminus1*sum
    memo[x][n] = xpown
    return xpown
}

func main() {
    x := float64(-4)
    n := float64(3.5)
    builtin := cmplx.Pow(complex(x, 0), complex(n, 0))
    custom := Pow(x, n)
    fmt.Printf(" (%v^%v)\n", x, n)
    fmt.Printf("built-in = %v\n", builtin)
    fmt.Printf(" custom = %v\n", custom)
}
// prints:
// (-4^3.5)
// built-in = (-5.4864176601801346e-14-128i)
// custom = (-7.83773951454305e-15-128i)
And the exact same thing but in C++ (written for confirmation purposes):
#include <iostream>
#include <cmath>
#include <complex>
#include <unordered_map>

using namespace std;

unordered_map<double, unordered_map<double, complex<double>>> memo;

complex<double> Pow(double x, double n) {
    if (auto r1 = memo.find(x); r1 != memo.end()) {
        if (auto xpown = r1->second.find(n); xpown != r1->second.end()) {
            return xpown->second;
        }
    }
    // n mod 1
    double nmod1 = fmod(n, 1);
    // x^(n mod 1)
    complex<double> xpownmod1 = pow(complex<double>(x, 0), complex<double>(nmod1, 0));
    // (x - 1)
    complex<double> xminus1 = complex<double>(x-1, 0);
    // n - 1
    double nminus1 = n - 1;
    // sum
    complex<double> sum;
    for (double i = nmod1; i <= nminus1; i++) {
        sum += Pow(x, i);
    }
    // x^n
    complex<double> xpown = xpownmod1 + xminus1*sum;
    memo[x][n] = xpown;
    return xpown;
}

int main() {
    double x = -4;
    double n = 3.5;
    complex<double> builtin = pow(complex<double>(x, 0), complex<double>(n, 0));
    complex<double> custom = Pow(x, n);
    cout << " (" << x << "^" << n << ")\n";
    cout << "built-in = " << builtin << "\n";
    cout << " custom = " << custom << "\n";
}
// prints:
// (-4^3.5)
// built-in = (-5.48642e-14,-128)
// custom = (-7.83774e-15,-128)
Side note: I chose this example on purpose to show an interesting computational behavior: when the result involves complex numbers, the imaginary part comes out right, but the real part shows a super small value where it should be exactly zero. Also, since the built-in and custom methods are calculated in different ways, their results end up slightly different from each other; at least they are consistent across these two languages.
Final thoughts
Again, it’s been fun to spend some time getting my head around this, no matter what it is. Please let me know what you think about it. I mean it.
See you in the next one!