Talk:Fibonacci heap


Mergeable priority queue ADT[edit]

Has anyone else ever heard it mentioned that Fibonacci heaps can be used to efficiently implement mergeable priority queues? If you're familiar with CS then you can see from the definitions that it's clearly true (and that merging is far more efficient than if a traditional heap had been used for the priority queues) though I have yet to find an application where merging priority queues needs to be done frequently enough to make it useful :) Anyway, I'd like some opinions on adding that tidbit to the article. DaveWF 07:48, 14 February 2007 (UTC)[reply]

Edmonds' algorithm depends on the mergeable property. In that algorithm you have collapsed node sets and are interested in the optimal incoming edge over all nodes in the set. Whenever such a collapsing happens, the individual queues are merged. 2.207.170.18 (talk) 23:15, 25 November 2014 (UTC) Joerg Sonnenberger[reply]
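As a side note for readers, the O(1) merge is easy to see from the structure: the union of two heaps just concatenates their root lists and keeps the smaller of the two minimum pointers. A minimal sketch, assuming a simplified FibHeap/Node layout with a plain Python list as the root list (real implementations splice circular doubly linked lists, which is what makes the concatenation constant time):

<syntaxhighlight lang="python">
class Node:
    def __init__(self, key):
        self.key = key            # ordering key of the stored item

class FibHeap:
    def __init__(self):
        self.roots = []           # top-level trees of the heap
        self.min = None           # node holding the smallest key
        self.n = 0                # number of stored items

def merge(h1, h2):
    # Mergeable-heap union: concatenate the root lists and keep the smaller minimum.
    h = FibHeap()
    h.roots = h1.roots + h2.roots     # O(1) when splicing linked root lists
    h.n = h1.n + h2.n
    if h1.min is None or (h2.min is not None and h2.min.key < h1.min.key):
        h.min = h2.min
    else:
        h.min = h1.min
    return h
</syntaxhighlight>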

Why "Fibonacci"[edit]

Although the article mentions that Fibonacci numbers "are used" in the running time analysis, and that Fibonacci numbers give a constraint for subtree size given the order of a node, the "why" of this is not described. I will look into it and add an explanation if I can, but if I don't, someone else could. --CyHawk (talk) 08:42, 15 August 2008 (UTC)[reply]

This article is very confusing, technically rambling and does not provide a good intro to the topic. In fact, the basic data structure is NEVER DEFINED clearly or explicitly. The rest of the comments are more theory of computer science. Frankly, we need curated sources, or better yet, to send people to read a book rather than waste their time. 2601:647:C001:97C0:CC7:1833:5E7B:4D00 (talk) 17:18, 12 November 2023 (UTC)[reply]


I think the analysis relies on the fact that the number of ancestors of any node is exponential in its degree, n. This is proven by showing that the number of ancestors grows at least as fast as the Fibonacci numbers do, since F_n ~ φ^n. In particular, you can show that A(i) ≥ A(i−2) + A(i−3) + ⋯ + A(0) + 2, and so, since A(0) = 1, A(1) = 2, and A(2) = 3, we get A(i) ≥ F_{i+2}. A(i) denotes the number of ancestors of a node of degree i. Something like that. WuTheFWasThat (talk) 17:28, 13 September 2009 (UTC)[reply]

Descendants. The number of descendants of a node of degree i (including the node itself) is at least the Fibonacci number F_{i+2}. A node of degree 0 has at least 1 descendant. A node of degree 1 has at least 2 descendants. A node of degree 2 has at least 3 descendants. A node of degree 3 has at least 5 descendants. A node of degree 4 has at least 8 descendants. Etc. —David Eppstein (talk) 19:48, 12 November 2023 (UTC)[reply]
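For anyone wanting to check the numbers, the bound follows from the structural fact that the i-th child (in order of attachment) of any node has degree at least i − 2. A small sketch, under that assumption, computing the minimum possible subtree size per degree and comparing it with the Fibonacci numbers:

<syntaxhighlight lang="python">
def min_subtree_size(d):
    # Minimum descendants (including the node itself) of a degree-d node,
    # assuming its i-th child has degree at least i - 2: the node, its first
    # child, and subtrees of degrees 0 .. d-2 for the remaining children.
    if d == 0:
        return 1
    return 2 + sum(min_subtree_size(i) for i in range(d - 1))

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for d in range(10):
    assert min_subtree_size(d) == fib(d + 2)   # 1, 2, 3, 5, 8, 13, ...
    print(d, min_subtree_size(d))
</syntaxhighlight>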

Summary of running times[edit]

The linked list deletemin and delete operations assume a search is required. If one is given a pointer to the item that must be deleted, then deletion can be done in Theta(1) worst-case time. (In fact, all the running times should be made more precise - I would list them as Theta with an explanation that this is the worst-case running time.) —Preceding unsigned comment added by Trachten (talkcontribs) 15:14, 7 October 2009 (UTC)[reply]
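A minimal sketch of that point (hypothetical Node class, not tied to any particular article version): with a doubly linked list and a pointer to the node, deletion is just relinking the two neighbours, so no search is needed and the cost is Theta(1) in the worst case.

<syntaxhighlight lang="python">
class Node:
    def __init__(self, key):
        self.key = key
        self.prev = None
        self.next = None

def delete(node):
    # Splice the node out by relinking its neighbours; no traversal is needed.
    if node.prev is not None:
        node.prev.next = node.next
    if node.next is not None:
        node.next.prev = node.prev
    node.prev = node.next = None   # detach the removed node
</syntaxhighlight>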

This list seems to be a little inconsistent. Are accessmin and deletemin really possible in O(1) for binary trees? Also, a statement for linked lists appears to be confusing as it depends on what case (insertion or removal) is preferred by an implementation.--85.178.10.62 (talk) 20:01, 31 August 2008 (UTC)[reply]

I think it is misleading that this table states that the insert operation is O(log(n)) on a binary heap, and doesn't mention anything about average time. After each of the log(n) swaps that *could possibly* occur during insertion into a binary heap, the likelihood that the swapping must continue decreases (on average by a factor of about 2). Thus, insertion into a binary heap averages about O(1). (1/2 + 1/4 + 1/8 + ... = 1.) --128.187.80.2 (talk) 18:26, 30 July 2009 (UTC)[reply]
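That claim is easy to test empirically. A small sketch (an illustration only, for uniformly random keys; it is not a proof, and the constant average does not hold for adversarial ascending inserts):

<syntaxhighlight lang="python">
import random

def insert(heap, key):
    # Standard binary min-heap sift-up, counting the swaps performed.
    heap.append(key)
    i, swaps = len(heap) - 1, 0
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] <= heap[i]:
            break
        heap[i], heap[parent] = heap[parent], heap[i]
        i, swaps = parent, swaps + 1
    return swaps

heap, total, n = [], 0, 100_000
for _ in range(n):
    total += insert(heap, random.random())
print("average swaps per insert:", total / n)   # stays a small constant
</syntaxhighlight>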

What about adding sorted linked list and/or sorted (dynamic) array to this summary? -- akerbos 17:55 Oct 23 2010 —Preceding unsigned comment added by 95.80.12.63 (talk) 15:55, 23 October 2010 (UTC)[reply]

This section does not seem to belong here at all. I shall remove it if nobody opposes. --Yecril (talk) 19:58, 17 January 2014 (UTC)[reply]

Introduction does less to overview how the Fibonacci heap works[edit]

The introduction appears to do less to provide an overview of how the Fibonacci heap works, and more to give a mind-numbing discussion of Big-O notation garbage. I believe that this is a fatal error. 130.56.92.62 (talk) 17:03, 19 May 2011 (UTC)[reply]

decrease key should update the minimum pointer, shouldn’t it?[edit]

It seems necessary when the key is decreased below the minimum. I understand it requires constant time, but still… --Yecril (talk) 19:46, 17 January 2014 (UTC)[reply]
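For what it's worth, the usual formulation does exactly that at the end of the operation. A minimal sketch with hypothetical field names, with the cut/cascading-cut machinery omitted since it is irrelevant to this point:

<syntaxhighlight lang="python">
class Node:
    def __init__(self, key):
        self.key = key

class FibHeapStub:
    def __init__(self):
        self.min = None   # pointer to the node with the smallest key

    def decrease_key(self, node, new_key):
        assert new_key <= node.key
        node.key = new_key
        # ... cut the node from its parent and do cascading cuts here (omitted) ...
        if self.min is None or node.key < self.min.key:
            self.min = node   # constant-time update of the minimum pointer
</syntaxhighlight>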

Complexity in entry section[edit]

Describing find-minimum as amortized O(1) is misleading. Given the implementation described afterwards, it is a constant-time operation. This is a stricter bound. The same holds for insert and union. The comparison with binomial heaps is also a bit confusing, as three sentences are used but only two groups are discussed. 2.207.170.18 (talk) 23:25, 25 November 2014 (UTC) Joerg Sonnenberger[reply]

@David Eppstein: Thanks for the clarifying edits and your explanations in the summaries. However, I still don't understand how the sum of two amortized complexities can be a worst-case complexity. Or, maybe, some knowledge about e.g. insert's and delete's worst-case time is tacitly used to infer the sequence's worst-case time? - Jochen Burghardt (talk) 09:58, 5 February 2016 (UTC)[reply]

The sum of amortized times for any sequence of operations in any data structure, starting from when it is initialized and including all the operations that are performed, is a valid bound on the worst-case time for the same operation sequence. That's what amortized time bounds mean. If you're using the potential function method of amortization, the changes to the potential telescope out, leaving you only the sum of actual times. —David Eppstein (talk) 16:46, 5 February 2016 (UTC)[reply]
Oops, I confused amortized time and some notion of average time; I should have read the (somewhat hidden) formal part of amortized analysis first. Thanks for your explanation again. - Jochen Burghardt (talk) 11:10, 7 February 2016 (UTC)[reply]
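In standard potential-method notation (the textbook formulation, not tied to this page's wording), the telescoping mentioned above looks like:

<math display=block>
\hat c_i = c_i + \Phi_i - \Phi_{i-1}
\quad\Longrightarrow\quad
\sum_{i=1}^{m} \hat c_i \;=\; \sum_{i=1}^{m} c_i + \Phi_m - \Phi_0 \;\ge\; \sum_{i=1}^{m} c_i ,
</math>

provided <math>\Phi_m \ge \Phi_0</math> (usually one takes <math>\Phi_0 = 0</math> and <math>\Phi \ge 0</math>), so the sum of amortized costs is an upper bound on the total actual cost of the whole operation sequence.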

In the infobox, there is information about average complexity, whereas in the article the complexity is described with amortized complexity. They are not the same, and amortized complexity is stronger than average complexity. The difference is that average complexity makes assumptions about what is an "average input", meaning that under certain circumstances, if the input wasn't random for example, the actual complexity might be a lot worse. On the other hand, amortized makes no assumptions, meaning that you can't forge a specific input for which it is going to be slow, in the long run. To be clear: if I were to run a lot of operations, each with O(1) average complexity but O(n) worst time (say, I run m of them), most of the time I'm getting O(m) complexity, but nothing prevents me from getting O(nm) complexity, especially if I carefully crafted the input. Now, I run m operations, each with O(1) amortized complexity, and O(n) worst time. I'm going to get O(m) complexity, no matter what. — Preceding unsigned comment added by TheBlackBeans (talkcontribs) 07:41, 11 February 2021 (UTC)[reply]
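A concrete and deliberately simple illustration of that "no matter what" guarantee, using a doubling dynamic array rather than a Fibonacci heap: counting the element copies performed over m appends shows the total work stays below 3m regardless of which values are appended, so no crafted input can break the amortized O(1) bound.

<syntaxhighlight lang="python">
def append_cost(m):
    # Count element writes/copies for m appends to a capacity-doubling array.
    capacity, size, copies = 1, 0, 0
    for _ in range(m):
        if size == capacity:       # array full: double and copy every element
            copies += size
            capacity *= 2
        size += 1
        copies += 1                # write the newly appended element
    return copies

for m in (10, 1_000, 100_000):
    print(m, append_cost(m), append_cost(m) / m)   # ratio stays below 3
</syntaxhighlight>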

Time complexity of extract minimum[edit]

The article says: "Therefore, the amortized running time of this phase is O(d) = O(log n)." This seems wrong to me. I think it probably should be: "O(d) ⊇ O(log n)" McGucket (talk) 16:10, 27 August 2017 (UTC)[reply]

If you're using subset-O notation, you have it backwards, but as stated it is a correct usage of the more common =-O notation. —David Eppstein (talk) 22:41, 27 August 2017 (UTC)[reply]
You're right. I messed the subset symbol up ('p' instead of 'b'). The "=-O" notation is just stupid. It doesn't make any sense. It talks about an equality when there is none. In this case, it's especially bad because both sides of the "equation" are sets of functions, so you have to read it a specific way, even though this is never the case with an equals sign in any other field. I have even seen lecture slides in which the "=-O" notation is written the other way around when talking about 2 functions: f = O() = g McGucket (talk) 22:50, 28 August 2017 (UTC)[reply]
See WP:SOAPBOX. We should be following field-standard conventions, not pushing minority conventions merely because we think they're better. And wake me up when you have an alternative subset-based semantics that also works in contexts like or . —David Eppstein (talk) 00:14, 29 August 2017 (UTC)[reply]