User:KYN/InProgress

Small European countries or territories not members of the European Union

Maps of the European Union often have too low a resolution to show correctly that some smaller countries or territories in Europe are not members of, or part of, the European Union. Most of these territories or states are closely related to a member of the European Union, which may, for example, manage their defence or foreign policy, and most also have close relations with the European Union itself; for example, some use the euro currency. However, they are formally not members and are therefore neither governed by, nor benefiting from, all treaties and agreements valid within the European Union. They are:

In addition to these small non-EU states or territories, there are also others which formally are part of the European Union but for which special arrangements have been agreed upon. See the article on Special member state territories and their relations with the European Union.

Light field

Lead

The light field is a concept used in optics, photogrammetry, computer graphics, and computer vision to denote the amount of light which travels in all directions through all points in 3D space. In the literature there are at least two closely related but slightly different definitions of the light field, and a similar concept has been described by several authors who have referred to it as, for example, the plenoptic function or the photic field; see the history section. The two formal definitions of the light field are as follows:

  • The light field is a 5-dimensional function which describes the amount of light that travels in all possible directions (2 variables) through all possible points (3 variables) in 3D space. This is sometimes referred to as the 5D light field.
  • Under the assumption that light rays are not occluded, all points along a line receive the same amount of light along the direction of that line. This assumption, valid in various computer graphics applications, allows a simpler 4D light field to be defined.

Apart from position and direction in space, the light field can also include additional variables to describe variations in time, the distribution of energy over different wavelength bands, polarization, and even the phase of the light.
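For concreteness, the two definitions can be written with explicit parameters. The spherical direction angles and the two-plane parameterization below are common choices in the literature, but only one possibility among several, so the notation should be read as an illustrative sketch:

  L_5(x, y, z, \theta, \phi)   (radiance at the point (x, y, z) in the direction (\theta, \phi))

Under the non-occlusion assumption the radiance is constant along each ray, so a ray can instead be parameterized by, for example, its intersection points (u, v) and (s, t) with two parallel reference planes, giving the 4D light field

  L_4(u, v, s, t).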

History

In an 1846 lecture entitled "Thoughts on Ray Vibrations", Michael Faraday was the first to propose that light should be interpreted as a field, much like the magnetic fields he had been working on for several years. The name light field was coined by Alexander Gershun in a classic paper on the radiometric properties of light in three-dimensional space (1936, translated into English in 1939), but a similar idea had already been presented in 1932 by Yamauti (see the comment in Ashdown's paper). The same concept was later called the photic field by Moon and Spencer (1981), and it was introduced into the computer vision community by Adelson and Bergen in 1991 under the name plenoptic function. The 4D light field was introduced to the computer graphics community by Levoy and Hanrahan in 1996.

CV Category

65 entries in Category:Computer vision on 2007-07-28 18:27

Article Status Editors Last update
Active Appearance Model Stub User:KetchTressle User:Spacebear 2007-07-06
Active shape model Stub 81.6.222.24 User:Salix alba User:Vectraproject 2007-07-27
Affine shape adaptation Article User:Tpl 2007-03-27
Artificial neural network Removed · ·
Belief propagation Removed · ·
Bhattacharyya distance Removed · ·
Blob detection Article User:Tpl 2007-06-28
Blob extraction Stub, combine with Blob detection? User:Kostmo User:Oaf2 2007-06-27
Boosting Removed · ·
Canny Article User:Asc99c User:Seabhcan User:Iknowyourider User:Tpl 2007-07-04
Color histogram Article User:Kostmo User:Valentein 2007-05-11
Complex wavelet transform ??? · ·
Computer vision systems Stub, (Delete?) · ·
Condensation algorithm Stub User:Orderud User:KellyCoinGuy 2007-06-13
Connected Component Labeling Article User:Defireman User:Oaf2 2007-07-15
Corner detection Article User:Retardo User:Tpl User:Serviscope Minor 2007-07-21
Dense motion field Stub User:Ophirgv 2007-02-13
Digital image processing Article User:Yaroslavvb User:Dicklyon 2007-07-25
Edge detection Article 203.200.24.199 User:Seabhcan User:Tpl 2007-07-26
Feature (Computer vision) Article User:KYN User:Tpl User:Serviscope Minor 2007-06-18
Feature detection Article 128.2.156.14 User:Tpl 2007-06-18
Feature extraction Stub User:Sarcas User:Tpl User:Oaf2 2007-06-27
GLOH Stub User:Redgecko 2007-07-21
GemIdent Article User:Way4thesub 2007-07-15
Generalized procrustes analysis Stub, remove? User:Vectraproject 2007-07-27
Gesture recognition Moved to ACV · ·
Graph cuts in computer vision Article User:Bruceporteous User:Iknowyourider 2007-07-25
Haar-like features Stub User:Grokmenow 2007-07-20
Hidden Markov model Removed · ·
Horn Schunck method Stub User:Dawoodmajoka 2007-02-17
Hough transform Article User:IMSoP User:Nguyener User:Tpl 2007-07-25
IAPR Article User:Cpapadopoulos 2007-01-17
Image analysis Article? Merge with CV? User:Hobbes User:Jorge Stolfi User:Seabhcan User:Radagast83 2007-07-21
Image fusion Article User:Matrix13 User:Amyfay 2007-07-10
Image moments Article User:Orderud User:Baccyak4H 2007-07-17
Image processing Article 200.1.31.44 User:Seabhcan User:Sumbaug 2007-07-28
Image rectification Article User:Lgrove 2007-05-25
Image registration Article User:Bensmith User:Oleg Alexandrov User:Carloscastro User:Dglane001 2007-07-10
Interest point detection Article? User:Tpl User:KYN 2006-09-25
Landmark point Article? Stub? User:Indon 2007-04-10
List of computer vision conferences Auxiliary User:Orderud 2007-07-27
List of computer vision topics Auxiliary User:Mpoloton User:Orderud 2007-07-18
Lucas Kanade method Stub User:Dawoodmajoka 2007-07-26
Machine Learning Article · ·
Machine vision Article User:Rethunk User:Seabhcan User:Wedesoft 2007-07-18
Marr-Hildreth algorithm Stub 212.41.71.191 User:Tpl 2007-07-12
Mean-shift Stub User:Bilal.alsallakh 2007-06-18
Motion field Stub User:Rich8rd 2007-04-24
Multi-scale approaches Article User:Tpl User:Dicklyon 2006-09-07
N-jet Stub User:Tpl User:Dicklyon 2006-09-19
Neighborhood operation Article User:KYN User:Tpl 2007-06-05
Optical flow Stub User:Georgec User:Orderud User:Seabhcan 2007-07-21
Particle filter Removed · ·
Phase correlation Article? User:Orderud 2007-07-27
Physical computing Moved to ACV · ·
Pose (computer vision) Stub User:KYN 2007-07-28
Relaxation labelling Stub User:Bhrushkäntlagrite 2007-04-20
Ridge detection Article User:JamesCrowley User:Tpl User:Millerj870 2007-07-04
SURF Stub User:Redgecko 2007-07-21
Scale space Article User:Male1979 User:Tpl User:Dicklyon 2006-02-22
Scale space implementation Article User:Tpl User:Dicklyon 2007-07-28
Scale-invariant feature transform Article User:NeuronExMachina User:Male1979 User:Tpl 2007-07-24
Scale-space axioms Article User:Tpl User:Dicklyon 2006-09-08
Scale-space segmentation Article User:Tpl User:Dicklyon 2007-05-23
Smart camera Article (move?) 80.138.140.136 User:Zava 2007-07-24
Statistical shape analysis Stub User:Indon 2007-05-23
Stereo cameras Stub User:Comixboy 2007-07-23
Structure from motion Stub User:Waldtelefon 2007-06-29
Structured light Stub User:Webmoof 2007-07-18
Video tracking Article? User:Orderud 2007-07-13
View synthesis Moved to ACV · ·
Visual perception Removed · ·

Scale-space

The reason for my concern about this article (not just the lead) is that a) scale-space is an important concept within (e.g.) computer vision, and b) there are currently a number of related articles in WP (thanks to Tpl and others). For these reasons it makes sense to have at least one introductory article to this field which is relatively simple to understand, perhaps not for any reader who happens to find it, but at least for an undergraduate EE student. That article appears to be this one.

  • Since there was no objection, I take it that it is OK to move the article to the new title "Scale-space" (with a hyphen).
  • For the reasons given above I suggest bringing the article to a state which better follows, for example, WP:BETTER, even though it is not obvious how to accomplish that. My general conclusion is that a) the lead needs to be shorter and simpler, and b) some of the material now in the lead must be worked into separate sections. I dare not do this myself, given that the hard work has already been done by others who are far more experienced in the field, but I can contribute by pointing out some of the difficulties a non-expert has in reading the text. I focus on the lead and hope that some of the other issues may solve themselves in that process.
  • According to WP:BETTER the first sentence of the lead should "give the shortest possible relevant characterization of the subject". It does not say explicitly that it also must make sense to someone not already familiar with the subject, but this is perhaps the very idea of an encyclopedia, and this is where I am still concerned. For example, "multi-scale signal representation" means that the reader must already have an idea of what both "multi-scale" and "signal representation" could mean. Here is a suggestion for a new lead:

Start

Scale-space is a theory for describing signals at different levels of resolution, developed by the computer vision, image processing and signal processing communities, and to some extent based also on physics and biological vision. It represents the signal, typically an image, as a set of (scale) levels, where each level is characterized by having a specific resolution relative to the original signal. The resolution, often referred to simply as scale, describes how fine details can be expected at that level. The finest scale corresponds to the original image, and moving successively to coarser scales removes more and more of the details. A central idea is that there is a continuum of scale levels, even though in practice the scale levels are only computed for a finite set of scale values. For a particular signal, its scale-space refers to the set of scale levels produced by some specific scale-space theory.

In general, scale-spaces can be formulated for signals of arbitrary dimension, using linear or non-linear computations. The most common applications, however, are for 2D images, and by far the most common computational approach for computing the scale levels is based on convolution of the original image f by a Gaussian function g:

  L(x, y; t) = g(x, y; t) * f(x, y)

where

  g(x, y; t) = \frac{1}{2 \pi t} \exp\left( -\frac{x^2 + y^2}{2t} \right)

and t = \sigma^2 is the variance of the Gaussian and also the index of the corresponding scale level. There are two formal issues related to the convolution expression which need to be sorted out: 1) what does it mean to convolve a 3-variable function with a 2-variable function, and 2) in this particular formulation it is in fact possible to be strict and not put the variables inside the convolution operation. For 1), is it possible to define both L and g as 2D functions with an additional index, e.g., L_t(x, y) = (g_t * f)(x, y)? This would also help in solving issue 2).

This type of scale-space is interesting since it is the only one to satisfy a set of useful properties commonly referred to as the scale space axioms. Furthermore, the set of 2D functions L(\cdot, \cdot; t), indexed by the scale parameter t, can be shown to be the solution of the heat equation.

Stop
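As an aside, here is a minimal sketch of the computational approach described in the proposed lead, assuming SciPy's gaussian_filter as the smoothing primitive; the function name, the set of scale values, and the random test image are my own choices for illustration, not anything prescribed by the article:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(image, t_values):
    """Return the scale levels of `image` for the given scale (variance)
    values t, computed by Gaussian smoothing with sigma = sqrt(t)."""
    return [gaussian_filter(image.astype(float), sigma=np.sqrt(t))
            for t in t_values]

# Example: a finite sampling of the scale continuum for a random test image.
f = np.random.rand(128, 128)
levels = gaussian_scale_space(f, t_values=[1.0, 4.0, 16.0, 64.0])
```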


The rest of the article can have sections with headings such as

  • Motivation/background (in relation to biological vision / physics)
  • Applications in computer vision and image processing (move subheadings 2-4 in Further reading to here)
  • The scale space axioms (short presentation with link to the main article)
  • Implementation issues (perhaps expand subheading 5 in Further reading and link it to the main article)

In my view, most of the material is already there; we only need to reshuffle a bit and keep the lead section focused, and then this could be a great article.

Projective space

In mathematics, a projective space is a set of elements constructed from a vector space such that each distinct element of the projective space consists of all non-zero vectors which are equal up to multiplication by a non-zero scalar. A formal definition of a projective space can be formulated in several ways and can also be made more abstract; see below. The projective space generated from a particular vector space V is often denoted P(V). The cases V = R2 and V = R3 give the projective line and the projective plane, respectively.

The idea of a projective space relates to perspective, more precisely to the way an eye or a camera projects a 3D scene onto a 2D image. All points which lie on a projection line through the focal point of the camera are projected onto a common image point. In this case the vector space is R3, with the camera focal point at the origin, and the projective space corresponds to the image points.
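For instance, placing the image plane at z = 1 (an assumption made here only for illustration), every non-zero scalar multiple of a scene point is projected onto the same image point:

  (\lambda x, \lambda y, \lambda z) \mapsto \left( \frac{\lambda x}{\lambda z}, \frac{\lambda y}{\lambda z} \right) = \left( \frac{x}{z}, \frac{y}{z} \right), \quad \lambda \neq 0,

which is why the image points can be identified with elements of the projective space P(R3).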

Projective spaces can be studied as a separate field in mathematics, but are also used in various applied fields, geometry in particular. Geometric objects, such as points, lines, or planes, can be given a representation as elements in projective spaces based on homogeneous coordinates. As a result, various relations between these objects can be described in a simpler way than is possible without homogeneous coordinates. Furthermore, various statements in geometry can be made consistent, without exceptions. For example, in the standard geometry of the plane, two lines always intersect at a point except when they are parallel. In a projective representation of lines and points, however, such an intersection point exists even for parallel lines, and it can be computed in the same way as other intersection points.
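As a concrete illustration of the parallel-line example, here is a small sketch in Python/NumPy; the particular lines y = x and y = x + 2 are an assumed example, while the representation of a line ax + by + c = 0 by the vector (a, b, c) and the cross-product rule for intersections are the standard homogeneous-coordinate conventions:

```python
import numpy as np

# Two parallel lines a*x + b*y + c = 0, represented by homogeneous
# coordinate vectors (a, b, c):
l1 = np.array([1.0, -1.0, 0.0])   # the line y = x
l2 = np.array([1.0, -1.0, 2.0])   # the line y = x + 2

# In homogeneous coordinates the intersection of two lines is given by
# their cross product.  For parallel lines the third coordinate becomes
# zero, i.e. they meet at a point at infinity.
p = np.cross(l1, l2)
print(p)   # [-2. -2.  0.]  -> the point at infinity in direction (1, 1)
```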

Other mathematical fields where projective spaces play a significant role are topology, the theory of Lie groups and algebraic groups, and their representation theory.

Representation of 2D points and lines in terms of Plücker coordinates

The vector form defined above for the homogeneous representations of either points or lines has a counterpart in representations based on anti-symmetric matrices, referred to as Plücker coordinates. It is straightforward to move between these two representations, and using Plücker coordinates it is easy to find the line which intersects two points and, vice versa, the point given by the intersection of two lines.

From points to lines

Let two distinct points have homogeneous coordinates given by the vectors x_1 and x_2. Form the anti-symmetric matrix T as

  T = x_1 x_2^T - x_2 x_1^T

Let the elements of the two vectors be given by

  x_1 = (a_1, a_2, a_3)^T, \quad x_2 = (b_1, b_2, b_3)^T

which gives the elements in matrix T according to

  T = \begin{pmatrix} 0 & a_1 b_2 - a_2 b_1 & a_1 b_3 - a_3 b_1 \\ a_2 b_1 - a_1 b_2 & 0 & a_2 b_3 - a_3 b_2 \\ a_3 b_1 - a_1 b_3 & a_3 b_2 - a_2 b_3 & 0 \end{pmatrix}

Notice that the anti-symmetric matrix T is "three-dimensional" in the sense that it contains only 3 linearly independent elements. We can construct a vector l as

  l = (T_{23}, T_{31}, T_{12})^T = (a_2 b_3 - a_3 b_2, \; a_3 b_1 - a_1 b_3, \; a_1 b_2 - a_2 b_1)^T = x_1 \times x_2

Let us now interpret l as the dual homogeneous representation of a 2D line, as described above. Which line is it? To see this, we can investigate its relation to the two points represented by x_1 and x_2:

  l \cdot x_1 = (x_1 \times x_2) \cdot x_1 = 0 \quad and \quad l \cdot x_2 = (x_1 \times x_2) \cdot x_2 = 0

This implies that the line which l represents intersects both points. Consequently, we have found a computational approach for determining the dual homogeneous representation of the line which intersects two specific points. In this approach two different representations of the line are used: the dual homogeneous coordinates given by the vector l and the so-called Plücker coordinates given by the anti-symmetric matrix T. The mapping from T to l described above is one-to-one, so we can easily go from one representation to the other and back again as we choose.

It was said above that x_1 and x_2 represent two distinct points. If the points coincide, then T = 0. This is an indication that there is no unique line which intersects both points if they coincide.

From this discussion it follows immediately that if the two points are at infinity, with a_3 = b_3 = 0, then the line l = (0, 0, a_1 b_2 - a_2 b_1)^T is also at infinity.
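A minimal sketch of this computation in Python/NumPy, following the notation reconstructed in this section; the helper name line_through and the example points are my own choices for illustration:

```python
import numpy as np

def line_through(x1, x2):
    """Dual homogeneous coordinates of the line through two points,
    each given as a 3-vector of homogeneous coordinates."""
    T = np.outer(x1, x2) - np.outer(x2, x1)       # anti-symmetric Pluecker matrix
    return np.array([T[1, 2], T[2, 0], T[0, 1]])  # equals np.cross(x1, x2)

x1 = np.array([0.0, 0.0, 1.0])    # the point (0, 0)
x2 = np.array([1.0, 2.0, 1.0])    # the point (1, 2)
l = line_through(x1, x2)          # the line y = 2x, as (-2, 1, 0)

# The line intersects both points: l . x1 = l . x2 = 0
assert np.isclose(l @ x1, 0.0) and np.isclose(l @ x2, 0.0)
```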

From lines to points