Talk:Galerkin method

As I understand it, the Galerkin method is not specific to differential formulations but applies in general to operator equations. For example, given Lu = f, you expand u in a basis v_i and impose (v_i, L v_j) = (v_i, f). For differential formulations, you can use identities to make the left-hand side more convenient; if you have an integral equation, however, then you have two sets of integrals. If no one objects, I will modify the article in this generality.


  • This is correct. The page has many errors on it; "Galerkin method" is a pretty vague term and should probably be "Galerkin methods". Then one could talk about Ritz-Galerkin, Babuska-Galerkin, discontinuous Galerkin, generalized Galerkin, and so on. If I had time I would fix it now, but basically I would throw away everything on this page. The notation isn't even consistent.
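For concreteness, here is a minimal numerical sketch of the recipe described in the first comment of this thread (expand u in a basis and impose the Galerkin conditions (v_i, L v_j) against the right-hand side). It is written in Python/NumPy; the operator L = -d²/dx² on (0, π) with zero boundary values, the sine basis, and the right-hand side are illustrative choices of mine, not anything taken from the article:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Galerkin method for the operator equation L u = f, with L = -d^2/dx^2
# on (0, pi), u(0) = u(pi) = 0, using the basis v_k(x) = sin(k x).
# (Operator, domain, basis, and right-hand side are illustrative choices.)

n = 5                                  # number of basis functions
x, w = leggauss(200)                   # Gauss-Legendre nodes/weights on [-1, 1]
x = 0.5 * np.pi * (x + 1.0)            # map nodes to [0, pi]
w = 0.5 * np.pi * w                    # rescale weights accordingly

def v(k, x):
    return np.sin(k * x)               # basis function v_k

def Lv(k, x):
    return k**2 * np.sin(k * x)        # L v_k = -v_k'' = k^2 sin(k x)

def f(x):
    return np.sin(3 * x)               # right-hand side; exact solution is sin(3x)/9

# Galerkin system: A[i, j] = (v_i, L v_j),  b[i] = (v_i, f)
A = np.array([[np.sum(w * v(i, x) * Lv(j, x)) for j in range(1, n + 1)]
              for i in range(1, n + 1)])
b = np.array([np.sum(w * v(i, x) * f(x)) for i in range(1, n + 1)])

c = np.linalg.solve(A, b)              # coefficients of u_n = sum_k c_k v_k
u_n = sum(c[k - 1] * v(k, x) for k in range(1, n + 1))
print("max error on quadrature nodes:", np.max(np.abs(u_n - np.sin(3 * x) / 9)))
```

With this particular basis the Galerkin matrix is diagonal, so the projection picks out the exact mode sin(3x)/9; a less special basis (e.g. polynomials) would give a full matrix, but the orthogonality condition being imposed is the same.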

Remove example?

For me the example is very unclear. I'm not able to follow the reasoning within the example, and, more importantly, I do not see its goal. I propose one of two things:

1. Just remove the example for now.

2. Describe it better and move it to a later part in the text!

Clean up math

The "proofs" on this page are really not necessary. This page should probably only give a brief explanation, possibly outline the most popular Galerkin methods and point to more specialized topics. (Art187 13:46, 9 June 2007 (UTC))[reply]


This article doesn't mention that the Galerkin method is part of the family of spectral methods. In addition, the main algorithms for implementing this method aren't given or shown clearly. I am currently studying this set of techniques; once I have it down pat, I'll try to clean up this article. (Starscreaming Crackerjack 21:11, 17 August 2007 (UTC))

As PDE analysis method

I think one should mention that the Galerkin method started as, and continues to be, one of the important methods for showing existence of solutions to PDEs, even though nowadays the most popular applications seem to be in numerical analysis. Temur (talk) 04:05, 8 March 2008 (UTC)


I totally agree with this. As a helicopter aerodynamicist, I use the Galerkin method as a first-order approximation of the flap frequency of the rotor blades (basically, solving the PDE that describes a cantilevered beam with aerodynamic forcing). The page on the Galerkin method is way too complicated for anyone but a highly educated mathematician to understand. Even as someone who uses complex mathematical formulations every day, I have a hard time wrapping my head around what is being said on this page. "Keep it simple, stupid" is my philosophy. Aharrin1 (talk) 19:40, 5 March 2010 (UTC)

Perhaps a set of (several) examples would help with understanding. It's helpful to have a reminder of the meaning of "bilinear form" on the same page, and it's important to demonstrate how the Galerkin method uses integration by parts to reduce the order of the problem AND introduce boundary conditions once the differential equation is converted to a weak formulation. Finally, it'd be nice to see the end result of applying this method/these methods (Ku = f). The high-level math-ese is important, but that can't be all there is, or no one who needs help will get it. Also, the Galerkin method is but one example of the methods of weighted residuals (least squares, collocation, etc.), but I didn't see any mention of this family of methods. Oconno39 (talk) 20:10, 13 April 2014 (UTC)
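For what it's worth, here is the derivation being asked for above, written out for a simple model problem. The choice of -u'' = f on (0, 1) with zero boundary values is mine, purely for illustration:

```latex
Model problem (illustrative choice): find $u$ with $-u'' = f$ on $(0,1)$, $u(0)=u(1)=0$.
Multiplying by a test function $v$ with $v(0)=v(1)=0$ and integrating by parts once,
\[
\int_0^1 -u''\,v\,dx \;=\; \bigl[-u'v\bigr]_0^1 + \int_0^1 u'v'\,dx \;=\; \int_0^1 u'v'\,dx,
\]
so the weak form involves only first derivatives (the order is reduced), and the boundary
term $\bigl[-u'v\bigr]_0^1$ is exactly where boundary conditions enter. Writing
$u_n = \sum_j c_j \varphi_j$ and testing with $v = \varphi_i$ (same trial and test
functions, which is the Galerkin choice) gives the linear system $Kc = F$ with
\[
K_{ij} = \int_0^1 \varphi_i'\,\varphi_j'\,dx, \qquad F_i = \int_0^1 f\,\varphi_i\,dx.
\]
Other weighted-residual methods differ only in the test functions: least squares takes
$v = L\varphi_i$, collocation takes point evaluations, and so on.
```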

Krylov subspace methods

Are Krylov subspace methods truly an example of Galerkin methods? In my understanding, Galerkin methods are for discretizing infinite-dimensional problems, while Krylov subspace methods accelerate the solution of already discretized problems. Eriatarka (talk) 19:06, 15 July 2008 (UTC)

Are you asking a question or answering it? Your understanding is the same as mine: Galerkin methods discretize, Krylov subspace methods help you solve those finite-dimensional problems. —Ben FrantzDale (talk) 00:09, 16 July 2008 (UTC)
I don't know, but the statement has a pretty reliable reference attached to it so I'm not dismissing it out of hand. I didn't check Saad's book but I guess the idea is that the conjugate gradient method for solving Ax = b (with A an m-by-m matrix) is a Galerkin method with V = \R^m, a(u,v) = u^T A v, f(v) = b^T v, and V_n = n-th Krylov subspace (notation as in the section "Introduction with an abstract problem"). There is no requirement in the abstract formulation that the original problem be infinite-dimensional. -- Jitse Niesen (talk) 11:14, 16 July 2008 (UTC)
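If it helps, that interpretation is easy to check numerically: project Ax = b onto the n-th Krylov subspace in the Galerkin sense and compare with n steps of plain conjugate gradient. A minimal Python/NumPy sketch (the random SPD test matrix, sizes, and the explicit Krylov-basis-plus-QR construction are my own illustrative choices; real CG never forms the basis explicitly):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 5
M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)                 # symmetric positive definite test matrix
b = rng.standard_normal(m)

# Galerkin on the Krylov subspace: orthonormal basis Q of span{b, Ab, ..., A^(n-1) b}
K = np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
Q, _ = np.linalg.qr(K)
y = np.linalg.solve(Q.T @ A @ Q, Q.T @ b)   # small projected (Galerkin) system
x_galerkin = Q @ y
print("residual orthogonal to the Krylov subspace:",
      np.allclose(Q.T @ (b - A @ x_galerkin), 0.0))

# Plain conjugate gradient started from zero, run for n steps
x, r = np.zeros(m), b.copy()
p = r.copy()
for _ in range(n):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
print("n-th CG iterate equals the Galerkin solution:", np.allclose(x, x_galerkin))
```

In exact arithmetic both checks hold, which is exactly the Galerkin characterization of CG described above; numerically they agree up to rounding.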

I wouldn't quite say that "Galerkin methods discretize" and "Krylov methods solve". Galerkin methods reduce higher-dimensional problems to lower-dimensional ones; discretization is only one example of that. Krylov methods try to solve problems by constructing a particular low-dimensional subspace that contains a good approximation to the solution, and then, in that subspace, they often formulate and solve a low-dimensional problem by a Galerkin approach. So, in solving a PDE you often have two Galerkin steps: first to reduce the problem approximately to a finite-dimensional one, and then to further approximate the solution restricted to the Krylov space.

If you have a linear equation Lu=f, where L is a linear operator, and you want to find an approximate solution in a subspace S, Galerkin methods define a particular approximate solution: the u in S such that the residual Lu-f is orthogonal to S. Galerkin has the nice property of preserving any self-adjointness and definiteness of the original L in the new problem.

In PDEs, the original problem lives in an infinite-dimensional space, and you are trying to approximate it by a solution in a finite-dimensional space (e.g. a finite-element mesh). However, the method also applies to finite-dimensional problems that you are trying to approximate by a smaller finite-dimensional problem.

A Krylov space is a particular finite-dimensional subspace that many iterative methods build up in solving large but (usually) finite-dimensional linear-algebra problems like Ax=b or Ax=λx. Given a Krylov space, one then often solves the original problem approximately by a Galerkin approach. For example, Arnoldi iteration builds up an orthonormal basis for a Krylov space and then finds approximate solutions of Ax=λx in this Krylov space by a Galerkin approach; in that context, these approximate solutions are called "Ritz vectors" (a sketch of this step follows below). As another example, conjugate gradient also (implicitly) constructs a Krylov space and implicitly solves Ax=b in the Galerkin sense with the A-weighted inner product.

— Steven G. Johnson (talk) 22:25, 5 December 2011 (UTC)
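To illustrate the Arnoldi/Ritz part of the description above, here is a short Python/NumPy sketch: build an orthonormal Krylov basis with Arnoldi, then apply the Galerkin (Rayleigh-Ritz) step to the small projected matrix. The random symmetric test matrix, subspace size, and starting vector are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 20
A = rng.standard_normal((m, m))
A = (A + A.T) / 2                        # symmetric test matrix, so eigenvalues are real

Q = np.zeros((m, n + 1))                 # orthonormal basis of the Krylov space
H = np.zeros((n + 1, n))                 # projected (Hessenberg) matrix
Q[:, 0] = rng.standard_normal(m)
Q[:, 0] /= np.linalg.norm(Q[:, 0])
for j in range(n):                       # Arnoldi iteration
    w = A @ Q[:, j]
    for i in range(j + 1):               # orthogonalize against previous basis vectors
        H[i, j] = Q[:, i] @ w
        w -= H[i, j] * Q[:, i]
    H[j + 1, j] = np.linalg.norm(w)      # (assumes no breakdown; fine for a sketch)
    Q[:, j + 1] = w / H[j + 1, j]

# Galerkin / Rayleigh-Ritz step: eigenvalues of the small projected matrix Q_n^T A Q_n
ritz = np.sort(np.linalg.eigvals(H[:n, :n]).real)
exact = np.sort(np.linalg.eigvalsh(A))
print("largest Ritz value:", ritz[-1], " largest eigenvalue:", exact[-1])
```

After a modest number of Arnoldi steps the extreme Ritz values already approximate the extreme eigenvalues of A, which is what makes the Galerkin step on a Krylov subspace useful.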

Collision of symbols

In this article the solution u is denoted by the same symbol as its i-th coordinate u_i in the basis e_i. I suggest a replacement to solve the problem. Delimata (talk) 23:36, 26 July 2008 (UTC)

Evolution equations

Can someone who is more of an expert than me on this add a section about Galerkin methods for evolution equations (e.g. hyperbolic and parabolic equations)? Holmansf (talk) 18:57, 11 June 2013 (UTC)