Talk:Exponential factorial

Notes for expansion

Was posted on the main page:

Highly recommended suggestion: Need to consider analogs and relations of this class of functions to the Knuth up-arrow notation, the corresponding logarithmically-based down-arrow notation, and the power towers on MathWorld. If nothing else, the use of these conventions can simplify the notation used to express the exponential factorial sequences defined here. Aren't there also cases where appropriate normalization (I'm thinking by something like the analog to the Barnes G-function) yields special constants? All of this needs to be added to make this a cohesive article.

Multipotentialmike (talk) 08:41, 2 May 2018 (UTC)

I would like to believe that Jddowney789 removed a '1' digit from the sum of reciprocals because he or she honestly believed that the calculation was incorrect. In that case, the calculation methodology has to be explained. I don't know how Anton Mravcek did the calculation, but here is what I did to double-check it. Using Mathematica, I defined the following:

a[0] := 1                        (* base case used for this check *)
a[n_] := a[n] = n^(a[n - 1])     (* memoized recurrence a(n) = n^a(n-1) *)

These should look a lot like the formula given in the article. Then

Table[a[n], {n, 4}]

lets me know that the definitions are correct. Then,

1/%

(the percent symbol in Mathematica refers to the previous calculation) gives me a list of fractions, then

N[Plus@@%,23]

gives 1.6111149258083767361111. Then I put in

"

then paste the number from Wikipedia, followed by

" == "

and copy and paste the number from the calculation above. I hit enter, and Mathematica says True.
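For convenience, the whole check collapses into one self-contained evaluation (same definitions as above; the commented value is the 23-digit sum being verified):

a[0] := 1
a[n_] := a[n] = n^(a[n - 1])
reciprocals = 1/Table[a[n], {n, 4}];   (* {1, 1/2, 1/9, 1/262144}; later terms are smaller than 10^-180000 *)
N[Plus @@ reciprocals, 23]             (* 1.6111149258083767361111 *)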

But to be extra sure, I also compare the number against that given in the OEIS sequence A080219 (to strip the commas, click on the blue "cons" link). PrimeFan 21:54, 17 February 2006 (UTC)

Sum of reciprocals

I think it should be said on the page that the sum of reciprocals is a transcendental number (as stated on MathWorld) and a Liouville number. — Preceding unsigned comment added by 79.184.102.31 (talk) 17:09, 6 February 2012 (UTC)

I changed it to say transcendental. Joule36e5 (talk) 10:07, 29 August 2013 (UTC)

The number looks like it's absolutely abnormal, like Greg Martin's number in Normal number. phma (talk) 05:50, 26 November 2023 (UTC)

merge with Factorials

This should be part of the article about Factorials. — Preceding unsigned comment added by 89.201.190.75 (talk) 00:23, 22 April 2015 (UTC)

Connection to tetration/hyperoperators

In response to the suggestion about notation: it's a good thought, and we really do need a better notation for functions iterated over a non-constant sequence. Unfortunately, I really don't think Knuth's up arrows would be of much use here. This is because the notation encodes three variables: (1) a base number, (2) an exponent, or a "height" as it's called for power towers / tetration, or the analog for higher hyperoperators, and (3) a hyperoperator level, i.e. 1 Knuth up-arrow means exponentiation, 2 arrows mean tetration, etc.

Conway chained arrows get around this with a recursive system wherein the integers in an arrow chain specify how many of Knuth's up arrows should be drawn. While they are still a very powerful notation, (1) they run into the same basic issue as Knuth's arrows, where someone will start asking what happens when we want a function that says something about the (likely huge) number of (Conway) arrows, and we will keep running along this treadmill of meta-ness until the end of time; and (2) while we're busy chasing arrows up, we haven't stopped to look around. The power towers in the denominators of the terms of these sequences are the same type of power tower used to define tetration. Maybe there has been some recent research in this area?
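For reference (using the article's recursion a(n) = n^a(n-1)), the two kinds of tower being compared are, written in LaTeX:

\mathrm{EF}(n) = n^{(n-1)^{(n-2)^{\cdot^{\cdot^{\cdot^{2^{1}}}}}}}, \qquad {}^{k}a = \underbrace{a^{a^{\cdot^{\cdot^{\cdot^{a}}}}}}_{k\ \text{copies of}\ a}

The tower on the left has decreasing entries, while tetration on the right repeats a fixed base; that is the resemblance being pointed out above.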

I know that very recently, there has been much interesting work done on extending tetration to real and complex heights, and on finding conditions under which such an extension is both unique and "nice". I think there are indeed some ideas that might carry over to here, such as the approach of extending to complex "heights" by integrating along a known contour, and of thinking about the asymptotic behavior at +/- imaginary infinity. — Preceding unsigned comment added by Indnwkybrd (talk · contribs) 02:06, 20 January 2020 (UTC)

About the value of 0$

I made an account just because I wanted to say that 0$ is not equal to 1, or at least it should not be equal to 1. I'm basing this on my own approximation of the exponential factorial, which I will list below.


Here's an approximation for x$ between 0 and 1, where f(0) is equal to 1. Below is just something you can paste into a graphing calculator like Desmos.

f(x) ≈ 1 - 5x + 7x² - 2x⁴, for 0 ≤ x ≤ 1

f\left(x\right)=1-5x+7x^{2}-2x^{4}\left\{0\le x\le1\right\}

Between 0 and 1, the derivative of this function is not strictly increasing. As such, f(x) will oscillate between values much like sine.

Here's my approximation of x$ between 0 and 1, in the same format as above. Notice that f(0) is not equal to 1.

f(x) ≈ 0.575571099101947640519485 + 0.351862783669537063091531x - 0.006009963746864688261002x² + 0.078576080975379984649986x³, for 0 ≤ x ≤ 1

f\left(x\right)=0.575571099101947640519485+0.351862783669537063091531x-0.006009963746864688261002x^{2}+0.078576080975379984649986x^{3}\left\{0\le x\le1\right\}

I would consider this approximation more ideal because, between 0 and 1, the derivative is strictly increasing. If you used the recurrence relation, you would find that f(x) is strictly increasing for all x greater than or equal to -1.
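As an illustration of feeding this approximation through the recurrence (a sketch only; the name g and the cut-off at x = 1 are ad hoc, and it only extends upward):

(* extend the cubic approximation above past x = 1 via f(x) = x^f(x-1) *)
g[x_ /; 0 <= x <= 1] := 0.575571099101947640519485 + 0.351862783669537063091531 x -
    0.006009963746864688261002 x^2 + 0.078576080975379984649986 x^3
g[x_ /; x > 1] := x^g[x - 1]
Table[g[n], {n, 1, 4}]   (* approximately {1., 2., 9., 262144.}, matching 1$, 2$, 3$, 4$ *)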


0$ also has an interesting property. If we define f(x) as being equal to x$, we can show that f(0) = f'(1).

f(x) = x^f(x-1)

f(x+1) = (x+1)^f(x)

log_(x+1)(f(x+1)) = f(x) (This is another recurrence relation!)

f(x) = ln(f(x+1)) / ln(x+1)

f(0) = ln(f(1)) / ln(0+1) = ln(1) / ln(1) = 0/0

Our answer is indeterminate, but luckily for us we can use L'Hôpital's rule here.

f(0) = lim_(x→0) ln(f(x+1)) / ln(x+1)

f(0) = lim_(x→0) (f'(x+1) / f(x+1)) / (1 / (x+1))

f(0) = (f'(0+1) / f(0+1)) / (1 / (0+1))

f(0) = (f'(1) / f(1)) / (1 / 1)

f(0) = (f'(1) / 1) / 1

f(0) = f'(1)
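As a quick consistency check, the cubic approximation quoted earlier already satisfies this identity: its constant term equals a1 + 2·a2 + 3·a3, i.e. f(0) = f'(1) for that cubic. In Mathematica (a0 … a3 are just labels for the quoted coefficients):

a0 = 0.575571099101947640519485`24;
a1 = 0.351862783669537063091531`24;
a2 = -0.006009963746864688261002`24;
a3 = 0.078576080975379984649986`24;
{a0, a1 + 2 a2 + 3 a3}   (* f(0) and f'(1) of the cubic; both come out as 0.57557109910194764051948... *)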

That's all I have to say. Unfortunately I'm not some math guru and I'm a new Wikipedia user, so I have a feeling I've missed something, but hopefully it's alright for the most part. — Preceding unsigned comment added by Expfac user (talk · contribs) 17:43, 12 January 2021 (UTC)

I agree that the logical value for f(0) is not 1 (nor 0), and the recursion should bottom out at f(1) = 1. I, too, found a limit value near 0.57 (somewhat disappointed that it didn't turn out to be gamma). Unfortunately, Wikipedia is based on reliable sources, and the primary source for both this entry and the one in OEIS is Exponential Factorial in MathWorld. Jonathan Sondow (1943-2020), the author of the MathWorld entry, also agreed (personal correspondence, 2014) but said that he'd been unable to get the editors to make his recommended changes. I'd been hoping to get MathWorld fixed first, and then the various wikis that reference it. Joule36e5 (talk) 23:45, 17 May 2022 (UTC)
Same idea. EF(0) isn't 1; it must satisfy EF(0) = EF'(1), which appears to be about 0.57 (see https://math.eretrandre.org/tetrationforum/showthread.php?tid=162&pid=2151#pid2151). I wonder if this could be improved further @Expfac user. Kwékwlos (talk) 11:51, 22 March 2023 (UTC)
I'm pretty sure that I've seen at least one definition that bottomed out at EF(0)=0 instead of 1, and at least one that bottomed out "properly" at EF(1)=1. Can we rewrite it to start at 1, and just say that there's some disagreement in the value of EF(0)? (I see that the sum-of-reciprocals constant does use 1 as the lower bound; good. I've seen that done both ways, too.) Joule36e5 (talk) 02:06, 23 March 2023 (UTC)
Well, like Kneser's tetration method, the one with EF(0) ~ 0.57 should be the canonical one. I don't know if anyone has managed to make a better approximation of this. Kwékwlos (talk) 12:02, 23 March 2023 (UTC)
I've decided to boldly edit. I went with "...which suggests a value strictly between 0 and 1"; I hope that's not considered OR. Joule36e5 (talk) 21:28, 23 March 2023 (UTC)
How did @Expfac user and others at Eretrandre manage to calculate the value as ~ 0.57? Kwékwlos (talk) 09:57, 24 March 2023 (UTC)
As with tetration, there are methods to extend to fractional values, and hence EF(0) as a limit. Exponential factorial and tetration (with any base > e^(1/e)) are equivalent problems in a sense, so we only need a solution to one of them. I came up with what I thought was the "right answer" several years ago, but I later realized that it depended on an arbitrary decision that I can't justify -- but it does result in EF(0) = 0.5757. Now, to make this relevant to the talk page, is anybody aware of published work that covers this? Joule36e5 (talk) 22:11, 24 March 2023 (UTC)
I can't think of any. The quartic approximation of EF(0) according to the Eretrandre post yields EF(0) ~ 0.570807. I suppose one could give a higher-order polynomial approximation to see if it converges. Or you could ask the people there. Kwékwlos (talk) 09:34, 25 March 2023 (UTC)

Dollar sign notation

I haven't been able to find any sources that use the dollar sign notation (n$) for the exponential factorial. I believe it was added in error and I will be removing it. If anyone has a good source (that didn't originally come from Wikipedia), please add it.

The notation was added here on May 18, 2018 without a citation. It was then added in good faith to the Factorial article on December 17, 2021.

However, the Factorial article used to use the dollar sign notation for something called Pickover's superfactorial, which is distinct from the exponential factorial (as well as the regular superfactorial). It is n$ = (n!)^((n!)^(⋯^(n!))), a power tower of n! copies of n! (that is, n! tetrated to n!).

The Factorial article credited the Pickover superfactorial to the book Keys to Infinity by Clifford A. Pickover. I found a copy and confirmed that it does use the notation. Pickover credits the definition to Berezin, A. (1987). "Super Super Large Numbers". Journal of Recreational Mathematics. 19 (2): 142–143. I couldn't find a copy of Berezin's article. Pickover's book A Passion for Mathematics also mentions this definition/notation.

Of course math symbols can have multiple meanings, but since I haven't found any evidence for that in this case, I believe it's more likely that the exponential factorial was confused with Pickover's superfactorial. Jak86 (talk · contribs) 15:33, 6 September 2022 (UTC)

Thanks for fixing that; it's been on my radar for a while. If it does need its own notation, then I think n^! is ideal -- punning on interpreting "!" to mean "continue with the smaller positive integers", so if (4)! means (4)(3)(2)(1), then 4+! must mean 4+3+2+1 and 4^! must mean 4^3^2^1. Joule36e5 (talk) 20:22, 6 September 2022 (UTC)
I just made up the n$. I didn't know about the Pickover superfactorial. n^! seems vastly superior to n$. It makes a lot of sense honestly. 67.140.97.192 (talk) 17:11, 1 September 2023 (UTC)

optimized exponential factorial (n!1)

Big exponents are more significant, so the larger numbers should sit higher up the tower.

Because 1^x is always 1, the bottom base is 2.

Then n!1 = 2^(3^(4^(⋯^n))): 2 raised to the power of the tower 3^(4^(⋯^n)), evaluated from the top down.

The first few optimized exponential factorials are 2, 8, 2417851639229258349412352, ... (OEIS A124075)

5!1 is already bigger than a googolplex.
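For anyone who wants to reproduce the listed values, a short Mathematica sketch (rf is just an ad-hoc name; Fold builds the tower from the top down, which is why the third term is 2^81 rather than (2^3)^4):

rf[n_] := Fold[#2^#1 &, n, Range[n - 1, 2, -1]]   (* builds 2^(3^(...^n)) from the top down *)
Table[rf[n], {n, 2, 4}]                           (* {2, 8, 2417851639229258349412352} *)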

Why was the original exponential factorial not defined this way? 84.154.71.41 (talk) 19:07, 18 April 2023 (UTC)

Let's call the original EF(n), and the reversed one RF(n). I presume that by "optimized" you mean "larger" -- but in terms of generating large numbers, EF(n), RF(n), and tetration 2^^n are all equivalent modulo the relation f(n) = g(n+O(1)). And EF(n) has a recursive definition, which RF(n) does not; I think that's enough to make it the more interesting one to consider. Probably worth a mention (and an OEIS link) in the "Related Functions" section, though. Joule36e5 (talk) 22:04, 18 April 2023 (UTC)
Is it possible to extend this series to the real numbers? Kwékwlos (talk) 10:35, 28 April 2023 (UTC)
I don't see a way. If I have a real extension of tetration, then I could give you a tight estimate for what RF(googolplex + 0.5) should be. But with no recursion to unwind, I can't use that to compute RF(3.5). Joule36e5 (talk) 11:39, 28 April 2023 (UTC)
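In symbols, the contrast being described is roughly this (a sketch of the point, not a quotation from any source), in LaTeX:

\mathrm{EF}(n+1) = (n+1)^{\mathrm{EF}(n)}, \qquad \mathrm{RF}(n+1) = 2^{3^{\cdot^{\cdot^{\cdot^{n^{(n+1)}}}}}}

The new entry n+1 sits at the top of the RF tower and changes every level below it, so RF(n+1) is not a simple function of RF(n) alone.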
We could try to compute, for example, Kneser slog_2(RF(x)) and interpolate. Kwékwlos (talk) 12:52, 28 April 2023 (UTC)