Talk:Particle filter


RADAR example[edit]

Seems to have no mention at all of particle filters... I'm going to remove it $ynoptik_m4yh3m (talk) 02:58, 17 May 2016 (UTC)[reply]

Corrected meaning of SIR[edit]

SIR stands for "sequential importance resampling", not "sampling importance resampling". See any of Doucet's work: http://people.cs.ubc.ca/~arnaud/doucet_johansen_tutorialPF.pdf http://people.cs.ubc.ca/~arnaud/samsi_course.html

129.31.206.59 (talk) 04:26, 11 August 2010 (UTC)[reply]

SIR is "sampling importance resampling" Here is the original paper: ftp://swfscftp.noaa.gov/users/jbarlow/SIO-279/Reading%20Assigments/Rubin%201988%20SIRalgorithm.pdf — Preceding unsigned comment added by 131.96.49.166 (talk) 20:52, 20 February 2013 (UTC)[reply]

They are actually the same thing. Cf. S. Särkkä, Bayesian Filtering and Smoothing, CUP, 2013, p. 123, footnote. — Preceding unsigned comment added by Bairnxy (talkcontribs) 14:26, 1 November 2020 (UTC)[reply]

Nice work but...[edit]

Quoting: "They are something like an Extended Kalman filter (EKF)"

They are NOTHING like an EKF!

  • The EKF:
  1. uses a 1st order linearisation around the current estimate.
  2. assumes that the process and measurement noise of the system are Gaussian.
  • The particle filter:
  1. uses the actual nonlinear dynamics for propagating the system.
  2. can deal with strongly non-Gaussian and multimodal noise distributions.
  3. since it is a Monte Carlo-based technique, it can easily and accurately incorporate in its structure any non-standard information (like hard/soft constraints or a-priori knowledge), thus improving its performance (a sketch contrasting the two filters follows below).

and many more...

(if I have time I might add some new things to the article)
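
To make the contrast concrete, here is a minimal sketch of how the two filters propagate uncertainty through a nonlinear model. It is illustrative only: the scalar model f and the noise scale are made-up placeholders, not taken from the article.

  import numpy as np

  # Hypothetical scalar model: x' = f(x) + process noise (made up for illustration).
  f = lambda x: x + 0.1 * np.sin(x)   # nonlinear dynamics
  q_std = 0.1                         # process noise standard deviation

  # EKF prediction: carry a single mean and variance through a 1st-order
  # linearisation of f around the current estimate.
  def ekf_predict(mean, var):
      F = 1.0 + 0.1 * np.cos(mean)    # Jacobian of f at the current estimate
      return f(mean), F * var * F + q_std ** 2

  # PF prediction: push every particle through the actual nonlinear dynamics,
  # so non-Gaussian and multimodal distributions survive intact.
  def pf_predict(particles, rng):
      return f(particles) + rng.normal(0.0, q_std, size=particles.shape)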

Have changed the offending phrase, hope it's an improvement.
As to why it's a valid comparison in the first place,
* It's in an encyclopedia article. That means it must be useful to non-specialists.
* The EKF does solve a related problem and it's probably the best known filtering algorithm after the Kalman filter itself, and the best known one for nonlinear state-space models (if there's a better known one, put that in instead).
* It's in an introductory paragraph, an appropriate place for an informal introduction.
Of course there are major differences compared with the EKF, but it's still a worthwhile comparison for anyone new to particle filters.
I'm with the original author on this one. It's pedantic at best to say that particle filtering is "nothing" like EKF. Both exist to solve nonlinear estimation problems. Particle filtering, of course, goes about this in a very different way in trying to approximate important samples of the density rather than forcing a Gaussian estimate via linearization. But the general purpose and scope of the two approaches is quite similar. Mateoee 20:42, 4 November 2006 (UTC)[reply]

- I would have to agree that particle filters and any version of the Kalman filter are not similar. SMC methods are based on applying the Bayesian recursion equation directly and then using a Monte Carlo-based approach to solve the integrals involved in Bayesian inference. A comparison with an EKF should not be made.

- Another important modification would be to move the SIS section before the SIR section, since the resampling step was proposed by Gordon et al. in 1993 to address the issue of weight collapse faced by the SIS. — Preceding unsigned comment added by Sharkir hussain (talkcontribs) 22:27, 22 August 2013 (UTC)[reply]

missing probability symbol[edit]

The definition could be clearer. For example, what is p(xk | y0,…,yk) in the definition? It means the conditional probability of xk given y0,…,yk, so why not say so explicitly?

Not necessarily. If you're talking about p(xk | y0,…,yk), then that means p is a distribution (a density), not a probability. Cburnett July 8, 2005 16:05 (UTC)


So this notation convention should be explained, since it is non-standard enough for some of us not to understand it, no? --Powo 10:51, 15 January 2007 (UTC)[reply]
Indeed the notation can use some improvement and a few words of explanation should be added here and there. p(xk | xk−1) wants to say "if xk−1 is the state at the time k−1, then the probability density of xk, the state at the time k, is p_{xk|xk−1}(xk)." Here p_{xk|xk−1} is a function and xk its argument. Putting the conditional symbol | also in the argument makes little sense. Later on the article drops the subscript of the density function completely. It should stick to one or the other and define the notation properly or link to an article that does. Unfortunately such terse expression and notation (perhaps misuse of notation?) are common in the PF literature and make understanding hard for the novice. Jmath666 02:36, 11 March 2007 (UTC)[reply]
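
For what it is worth, the fully explicit notation described above would read something like the following. This is a sketch of the convention, not a quote from the article:

  % Shorthand used in the article:
  p(x_k \mid x_{k-1})
  % The explicit convention described above: the random variables (and the
  % conditioning) are named in the subscript, and the dummy argument is x_k:
  p_{x_k \mid x_{k-1}}(x_k)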

Choice of P[edit]

How is the number of particles (P) normally chosen? Is it a necessarily large number and does each state have the same number of particles?

According to Crisan et al., the root-mean-square error of the estimates is inversely proportional to the square root of the number of particles. Theoretically, an infinitely large particle population would provide exact estimates; however, that is computationally infeasible.

This number is picked based on the problem being solved, most importantly on the number of dimensions the state X has. The bigger the possible range of X, the more samples you need.
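
For reference, the convergence result alluded to above is usually stated as the root-mean-square error of a particle estimate decaying like the inverse square root of the particle count N; for a bounded test function \varphi, something like:

  \left( \mathbb{E}\left[ \left( \frac{1}{N} \sum_{i=1}^{N} \varphi\big(x_k^{(i)}\big)
      - \mathbb{E}\big[\varphi(x_k) \mid y_{0:k}\big] \right)^2 \right] \right)^{1/2}
  \le \frac{c_k \, \|\varphi\|_{\infty}}{\sqrt{N}}

In practice this means halving the error costs roughly four times as many particles.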

Eh?[edit]

I find this article too hard to understand right now. Examples could help. Thanks, --Abdull 14:52, 28 February 2006 (UTC)[reply]

Indeed. Perhaps something like my explanation above would be useful, too. Jmath666 03:09, 11 March 2007 (UTC)[reply]

I agree. This looks like a cheat sheet for people who already largely understand particle filters but can't remember the technicalities. Since this is supposed to be an encyclopedia article, I would expect to see the following sorts of things, written in plain English:

  • Who invented the particle filter?
  • When?
  • In what fields is it used?
  • What is an intuitive explanation of the main idea, for non-experts? If you need to use technical terms like "model estimation" that make no sense to someone outside the field, then you need to say what you mean or provide an explanatory link. (The "estimation" link is useless, just like a "model" link would be.)
  • What sort of "models" (described in plain English) is it applicable to?
  • What are its advantages and disadvantages compared to other methods?
  • Can you show a very simple example?

-Matt 130.60.5.218 09:00, 29 September 2007 (UTC)[reply]

I too agree with Matt, and it is somewhat disappointing that in almost 7 years, no-one has been able to amend the article to deal with his points. His list of questions is a very good point to start improving this article, but the 4th point is probably the most important for Wikipedia: what's the main idea, for non-experts? Why 'particle'? What do the particles represent? What are the inputs and outputs of the filter, typically? Sangwine (talk) 15:43, 19 February 2014 (UTC)[reply]

I also agree. This page is awful for someone who doesn't already know what the particle filter is... its detail gives the impression that there is a lot to grasp before knowing what they are or how they work, which is in fact not the case. They can be described really quite succinctly: [1] and a simple algorithm for their implementation is barely 10 lines: [2] Everybody knows this is nowhere (talk) 05:12, 14 May 2016 (UTC)[reply]
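
For the record, a bootstrap filter really is about that short. Here is a minimal sketch; the model functions f and h and the noise scales are generic placeholders, not anything from the article:

  import numpy as np

  def bootstrap_filter(y_seq, f, h, q_std, r_std, n=1000, seed=0):
      """Minimal bootstrap particle filter; returns the filter mean per step."""
      rng = np.random.default_rng(seed)
      x = rng.normal(0.0, 1.0, n)                       # initial particle cloud
      means = []
      for y in y_seq:
          x = f(x) + rng.normal(0.0, q_std, n)          # propagate the dynamics
          w = np.exp(-0.5 * ((y - h(x)) / r_std) ** 2)  # weight by likelihood of y
          w /= w.sum()
          x = x[rng.choice(n, size=n, p=w)]             # resample in proportion to weight
          means.append(x.mean())
      return means

For example, bootstrap_filter(y, f=lambda x: 0.9 * x, h=lambda x: x, q_std=0.3, r_std=0.5) tracks a simple autoregressive state.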

I completely agree. I actually work with particle filters, and I don't really understand most of the stuff after section three. This is a cheat sheet for experts that need to look up a formula, not an introductory article of how particle filters work in principle. I would edit it, but then again I am not enough of an expert to be sure to not mess up some of the theoretical background. Engineeru (talk) 08:15, 27 June 2017 (UTC)[reply]

One can watch YouTube videos and get the aha experience in less than 3 minutes, compared to reading this article, getting confused, and giving up. For an encyclopedia, this is just sad. — Preceding unsigned comment added by 193.141.219.36 (talk) 13:27, 5 December 2018 (UTC)[reply]

References

Simple example please (Eh? no. 2)[edit]

I agree with Matt (see the section "Eh?" above): it would be great to have a simple example. The first two paragraphs read well, but then it gets technical without a leading example or illustration. I read about particle filtering in computer vision books and wanted to broaden my horizon on this topic, but this article helped only to a limited extent. I'm just not the right person to help out here - I understand too little. Hope someone else will find the time. Regards from Bucuresti, Romania. Rasche (talk) 17:01, 22 August 2013 (UTC) — Preceding unsigned comment added by Rasche (talkcontribs) 16:31, 22 August 2013 (UTC)[reply]

Computer vision category[edit]

I removed this article from the computer vision category. The P-filter is probably useful in some parts of CV, but

  1. It is not a concept developed within CV or specific to CV.
  2. There is no material in this article which relates it to CV.

--KYN 15:09, 28 July 2007 (UTC)[reply]

Direct version: missing notation[edit]

In the following line:

5) Generate another uniform u from

Maybe I missed something, but the range has not been specified.

Uliba 11:20, 31 October 2007 (UTC)[reply]

Kitagawa (1996) Cite Needed[edit]

Although the article mentions an article by Kitagawa, it gives no actual citation. Either supply the citation or remove the comment. Preferably the former. Bill Jefferys 02:26, 15 November 2007 (UTC)[reply]

Would the correct citation here be this one? I obtained it by a Google search on "kitagawa statistics stratified resampling".

"Monte Carlo Filter and Smoother for Non-Gaussian Nonlinear State Space Models", Genshiro Kitagawa Journal of Computational and Graphical Statistics, Vol. 5, No. 1 (Mar., 1996), pp. 1-25

Would the person who added the comment about Kitagawa in the main article please state if this is the right citation? Bill Jefferys 22:10, 15 November 2007 (UTC)[reply]

Notation?[edit]

I've never seen the superscript-in-parentheses before. What does it mean? I'm guessing it's not exponentiation... Leptogenesis (talk) 06:22, 16 February 2009 (UTC)[reply]

It's to show a set of particles I believe -- <w^{(K)}, x^{(K)}> for K = 1, 2, ..., n PirateAngel (talk) 13:27, 23 April 2009 (UTC)[reply]

The parentheses in the superscript are used to distinguish an index from a power. That way, if I have a collection of particles { x^{(i)} : i = 1, 2, ..., N }, there is no confusion as to what x^{(2)} means. Without the parentheses, it might refer to the square of some quantity x. Bradweir (talk) 20:06, 12 July 2011 (UTC)[reply]

Uninformative and Misleading Figure[edit]

There is no explanation of how the plot was generated or even what variable is being estimated. Maybe it's the beta coefficient of a stock? Whatever it is needs to be clearly stated, as do the observation and propagation models (pdfs). However, even with this additional information the plot is misleading, since it shows the mean of the estimated variable, which obfuscates one of the main advantages of the particle filter: that it is non-parametric. It would be an improvement to plot the ML estimate of the variable instead of the mean, but it would probably be even more informative to plot the particle ancestry of the particles alive at the final time step. This would help illustrate the multi-hypothesis nature of the particle filter. Mark 20:53, 15 June 2009 (UTC) —Preceding unsigned comment added by 209.211.131.111 (talk)
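
For anyone attempting the suggested ancestry plot: the bookkeeping only requires storing the resampling indices at each step and walking them backwards. A sketch, assuming the resampler records, at each step, the array of parent indices it drew:

  import numpy as np

  def trace_ancestry(ancestor_indices):
      # ancestor_indices[t][i] is the parent index, at step t, of particle i
      # at step t+1. Returns paths, where paths[t][i] is the step-t ancestor
      # of the particle alive at the final step with index i.
      lineage = np.arange(len(ancestor_indices[-1]))   # final-step particles
      paths = [lineage]
      for idx in reversed(ancestor_indices):           # walk the family tree backwards
          lineage = idx[lineage]
          paths.append(lineage)
      return paths[::-1]

Plotting paths against time would show the lineages collapsing onto a few ancestors, which is exactly the multi-hypothesis behaviour the figure currently hides.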

Conditional/Filter vs. Posterior[edit]

In the paragraph ...

All Bayesian estimates of xk follow from the posterior distribution p(xk | y0,y1,…,yk). In contrast, the MCMC or importance sampling approach would model the full posterior p(x0,x1,…,xk | y0,y1,…,yk).

I think the correct term for p(xk | y0,y1,…,yk) is just the conditional distribution, as what's called the "full posterior" is usually just called the posterior.

Also, since this is the "nowcast" distribution, it's also called the filter distribution, which I believe is used in other parts of the article.

Anyone reading this? I'm going to change the article to reflect reality if not.

Bradweir (talk) 04:28, 12 July 2011 (UTC)[reply]
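
In symbols, the distinction being drawn is between the filtering (marginal) distribution and the joint posterior over the whole trajectory:

  p(x_k \mid y_0, \dots, y_k)
  \qquad \text{vs.} \qquad
  p(x_0, \dots, x_k \mid y_0, \dots, y_k)

The first is a marginal of the second, which is why calling both "the posterior" invites confusion.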

Proposal Distribution[edit]

...is never explained. —Preceding unsigned comment added by IskaralPust (talkcontribs) 08:20, 12 May 2011 (UTC)[reply]

The terms "proposal distribution" and "importance function" have the same meaning in this context. I think we should use only one of the terms or explicitly state that they are the same. If nobody diagrees I would fix this.--Sagzehn (talk) 10:04, 28 January 2015 (UTC)[reply]

Is the "Model" Section Incorrect?[edit]

The current version of the "Model" section claims that particle filters assume the system state is first-order Markov and that the observations depend only on the current state. I realize that most introductory descriptions of particle filters assume this for simplicity, but I didn't think that particle filters necessarily make that assumption. — Preceding unsigned comment added by 99.113.169.222 (talk) 07:07, 4 June 2012 (UTC)[reply]

In general (not just for PFs), derivations and theoretical justifications often invoke a Markov assumption, just as they might say that the Kalman filter has a Gaussian assumption. In the real world these assumptions are almost always violated at some level. The question is what theory can tell us about the usability of the technique under those conditions. With the PF, if the problem is simply that there is some time correlation in the input, perhaps due to a preceding low-pass filter of some sort, then just reducing the sampling rate to where the correlation is minimal can avoid major problems. Something that the theory doesn't emphasize is that every time you iterate a PF you're destroying information. If the current input contains no useful information, then this is a net loss. At least for PF localization applications, I've found it's beneficial to reduce resampling. Practical Kalman filter application also often makes use of theoretically inelegant tricks such as limiting the innovation to 6 sigma, which would never happen if Gaussian statistics actually applied. With the KF, we can say "it may work, but it isn't provably optimal". Proofs related to PFs are much weaker, more along the lines of "it probably works eventually if you make unrealistic assumptions about sample size." So IMO PF theory doesn't get you very far, but it does point out possible problem areas.

Going a bit more philosophical, the "model" is a mathematical theory of what the algorithm does. Yet computability theory tells us that there is in general no more compact way of describing what an algorithm does than the algorithm itself. Looking at the literature, it is clear that PFs are robust against all kinds of theories, both mathematically sound and intuitive/empirical. I'd say that one of the strengths of PFs is that you can understand and tune up a PF without having any deep mathematical understanding. The huge diversity of PF algorithms, especially in the area of resampling and sample size management, suggests that there should be a No free lunch theorem for PFs. When you adapt the algorithm to a particular application, you are putting domain-specific knowledge into the algorithm. What is a good PF for X is often not a good PF for Y.

Robertmacl (talk) 14:41, 26 November 2014 (UTC)[reply]
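
On the "reduce resampling" point above: the usual mechanism is to resample only when the effective sample size of the weights drops below a threshold, rather than at every step. A sketch of that test, generic rather than tied to any particular PF variant:

  import numpy as np

  def maybe_resample(particles, weights, rng, threshold=0.5):
      # Resample only when the effective sample size falls below threshold * N,
      # so information is not thrown away while the weights are still well spread.
      n = len(weights)
      ess = 1.0 / np.sum(weights ** 2)   # effective sample size; weights sum to 1
      if ess < threshold * n:
          idx = rng.choice(n, size=n, p=weights)
          return particles[idx], np.full(n, 1.0 / n)
      return particles, weights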

SMC methods use a grid-based approach?[edit]

As I understand this statement, it is not true at all. The samples drawn by an SMC algorithm are not restricted to be discrete (i.e. defined on a grid division of a continuous space). In other words, in principle the samples (or particles, in a particle filtering context) can be anywhere within the support of the target distribution. Comment added by Iglesiasg 9:58, 1 April 2014 (UTC)

I've deleted this puzzling claim. Maybe it made sense to someone, but is IMO highly inappropriate in the introduction. The intro is still really terrible, lots of sentences saying the same thing, obscured by heavy jargon and verbosity. Robertmacl (talk) 14:54, 26 November 2014 (UTC)[reply]

External links modified[edit]

Hello fellow Wikipedians,

I have just modified one external link on Particle filter. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 11:32, 26 December 2017 (UTC)[reply]

Typo[edit]

The first sentence of the third paragraph in the first section has a typo. I'm not an expert in this subject, so I'm not sure how to update it properly.

Particle filters implement the prediction-updating updates in an approximate manner.

50.207.97.110 (talk) 21:27, 12 November 2019 (UTC)[reply]

I think I solved this.--Mvqr (talk) 11:38, 16 February 2020 (UTC)[reply]

Superfluous jargon and obfuscation[edit]

As Jeremy Kun has pointed out here, there is a style of writing in statistics that uses superfluous jargon, is verbose, and is altogether far more complicated than it needs to be. This article in its current state can serve as a prime example. --Svennik (talk) 11:45, 29 May 2020 (UTC)[reply]

You are correct; however, 60-70% of journal articles are chock-full of jargon. It is sometimes hard to escape.--Mvqr (talk) 10:03, 30 May 2020 (UTC)[reply]