Talk:Deinterlacing


what is it?[edit]

Actually, what is deinterlacing? I still cannot understand. Can someone rewrite this article using simple English and terminology, because not everyone can understand the way it is currently described? Help is much appreciated. Thanks. from Senyuman

—The preceding unsigned comment was added by 60.50.137.61 (talk) 12:22, 13 January 2007 (UTC).[reply]

100fps.com??[edit]

Would anyone be opposed to linking 100fps.com in this article? I know some of the information is outdated, but it provides a huge number of screenshots, video clips, etc.

Linear Interlacing?[edit]

It would be nice to have an explanation of what the interlacing type 'linear' means.


I deleted this link as it crashes Firefox hard!

A few suggestions[edit]

A pedantic take on the terminology:

- the output of the de-interlacing process is progressive frames;

- frames are considered progressive; fields denote interlaced content; the evenness/oddness of a field refers to the set of lines from the original progressive frame that comprise the field: lines 0, 2, ..., 2k form the top field; respectively, the odd lines compose the bottom field.

- it is not necessary that interlaced video content be presented as pairs of fields; this type of content is made of single-field video samples.

- Line doubling is a term used mainly in home-theatre discussions. Interpolation would be the more appropriate term, since the missing lines are not necessarily doubled.

- the two fundamental approaches to deinterlacing could be better termed as spatial and temporal; of course, combinations of the two also exist;

- the deinterlacing techniques are categorized as follows:

 * BOB line replication: the missing lines are simply copied from one of the existing lines
 * BOB vertical stretch: the missing lines are interpolated from the existing lines of the current field; the standard algorithms use either 2 lines with weights {1/2, 1/2} or 4 lines with weights {-1/16, 9/16, 9/16, -1/16};
 * median filtering: a median selection gives the value of the calculated pixels
 * edge filtering: this technique eliminates the combing effect by attempting to detect an edge in the current field; the interpolated pixels are calculated by a filter that follows the edge;
 * field adaptive: depending on the amount of motion detected, the interpolation of pixels is performed by either a spatial filter (within the current field) or a temporal one (using future fields or backward reference frames);
 * pixel adaptive: similar to field adaptive, but at a pixel level (thus much costlier);
 * motion vector steered: requires multiple fields/reference frames, attempts to predict the trajectory of moving objects within the picture.
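The two BOB variants above can be sketched in a few lines. This is a minimal illustration, not production code: it assumes a field is a 2-D numpy array of scan lines, and the function names are my own.

```python
import numpy as np

def bob_line_replication(field):
    """Upscale one field to a full frame by repeating every line."""
    return np.repeat(field, 2, axis=0)

def bob_vertical_stretch(field):
    """Upscale one field using the 2-tap {1/2, 1/2} interpolation filter."""
    h, w = field.shape
    frame = np.empty((2 * h, w), dtype=float)
    frame[0::2] = field                          # keep the existing lines
    below = np.vstack([field[1:], field[-1:]])   # next line, clamped at the bottom edge
    frame[1::2] = 0.5 * field + 0.5 * below      # interpolate the missing lines
    return frame
```

The 4-tap {-1/16, 9/16, 9/16, -1/16} variant works the same way, just drawing on two existing lines above and two below each missing line.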

—The above unsigned remarks were made by 71.112.10.95 on 12 April 2006

I basically like the above remarks a lot. Except perhaps for the statement that "frames are considered progressive". It is certainly common to refer to interlaced video frames (SMPTE timecode has a frame counter, etc.), so I think that not all frames should be considered progressive. To me, a frame is a pairing of a top field and a bottom field that are temporally consecutive or simultaneous, regardless of whether these represent an interlaced-scan (consecutive) or progressive-scan (simultaneous) sampling. Also, I have some trouble with the idea of "the original progressive frame". In some interlaced-scan systems there is no original progressive frame - the material is produced using an interlaced scan from the very beginning. Also, in my experience, many people use the term "line doubling" loosely (and in my opinion not so accurately) to refer to any form of deinterlacing process. —SudoMonas 05:59, 1 July 2006 (UTC)[reply]

Basic question[edit]

If a video has to be deinterlaced for showing on a TFT display, why isn't it just "deinterlaced" as it would be on a CRT display (by the afterglowing)? This means: if a video is 50 interlaced fields per second, why don't we just show fields 1 and 2, then fields 3 and 2, then fields 3 and 4, and so on, each pair for 1/50 second? --Victor--H 16:30, 3 April 2006 (UTC)[reply]

I'm not sure what you meant by "afterglowing". A deinterlacing method similar to the one in your description is weaving, which only produces satisfactory results with static video content. Any sort of moving content would display visible artifacts (combing). 71.112.132.83 07:38, 12 April 2006 (UTC)[reply]
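For illustration, weaving is just an interleave of two consecutive fields into one frame. A minimal sketch, assuming fields are 2-D numpy arrays and the top field holds the even lines:

```python
import numpy as np

def weave(top_field, bottom_field):
    """Interleave two consecutive fields into one full frame.

    Fine for static content; anything that moved between the two
    field times (1/50 s apart in 50i video) shows combing artifacts.
    """
    h, w = top_field.shape
    frame = np.empty((2 * h, w), dtype=top_field.dtype)
    frame[0::2] = top_field      # lines 0, 2, 4, ...
    frame[1::2] = bottom_field   # lines 1, 3, 5, ...
    return frame
```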

  • There is no significant afterglow on a CRT TV display. I once took a photograph at high shutter speed to test this, and found that the picture faded over about a quarter of the picture height, in other words in 1/200th of a second. Nor is 'persistence of vision' the simple thing it seems. I believe, from experiments, that it is processed image content in the brain that persists, not the image on the retina. Interlacing works because of this. The brain does not combine subsequent frames; if it did we would see mice-teeth on moving verticals, as we do on computer images or stills, which in fact we don't see on a CRT display. The brain also cannot register fine detail on moving images, hence I think little is lost on a proper CRT interlaced display, while motion judder is reduced as well as flicker. Modern LCD and plasma displays seem to me to be fundamentally unsuited to interlaced video since they necessitate de-interlacing in the TV with inevitable loss. In theory, it is not impossible to make an interlaced plasma or LCD display, in which the two fields were lit up alternately, but in practice this would halve brightness, even if the response was fast enough. In view of this, I think it is a great pity that the 1080i HD standard was created, since it is unlikely ever to be viewed except as a de-interlaced compromise on modern displays. If 1080p/25 (UK) were encouraged worldwide, then de-interlacing would not be needed. 1080p/25fps requires no more bandwidth than 1080i of course, but it has the advantage of being close enough to 24fps to avoid 'pull down' on telecine from movies (in the UK we just run movies 4% fast and get smoother motion). It also fits well with the commonly used 75Hz refresh rate of many computer monitors, which would just repeat each frame three times for smooth motion.
In high-end TVs, processing using motion detection could be used to generate intermediate frames at 50 or 75Hz, as was done fairly successfully in some last-generation 100Hz TVs. Reducing motion judder in this way as an option is a better use of motion detection than de-interlacing, because it can be turned off or improved as technology progresses. I note that the EBU has advised against the use of interlace, especially in production, where it recommends that 1080p/50fps be used as a future standard. --Lindosland 15:26, 23 June 2006 (UTC)[reply]

When should "Odd interpolate" and "Even interpolate" be used? 213.40.111.40 (talk) 17:23, 29 May 2009 (UTC)[reply]

Correcting common misunderstandings over interlace[edit]

I have taken out the statement that the interlaced image contains only half the information. In terms of information theory it contains the same information, assuming we are de-interlacing to the same frame rate and not attempting to generate twice as many frames. Arguably interlaced video contains MORE visible information, since by sampling the image twice as often it provides more temporal information. Though this is of course at the expense of detail, the point about true interlacing (on a CRT) is that we do not miss the detail, even on motion, as we do not perceive detail so well in motion (see my above comments regarding the real nature of 'visual persistence') - we have to concentrate to see it. I think a lot of misunderstanding has arisen from the fact that interlaced video is now being judged on LCD and plasma displays, via an unknown deinterlacer which introduces actual visible blur onto anything that moves. --Lindosland 16:00, 23 June 2006 (UTC)[reply]

Then you were wrong to do so. Interlaced video really does contain half the information that progressive video does. This is betrayed by its raison d'être which is to use half the bandwidth of progressive video for the same vertical scan rate.
Since this seems to be a difficult concept for some people, permit me to elaborate. A 50fps progressive video stream in the 1080/50p format transmits 50 frames of 1920x1080 pixels per second - that's data for 103,680,000 pixels every second (there is some extra housekeeping data, but let's ignore that for this argument).
In the interlaced format (1080/50i), only every other line of pixels is transmitted with each field, the 'missing' lines are transmitted on the next field. This means that although half the pixels are transmitted 50 times a second, the entire 1920x1080 pixel image is only transmitted 25 times per second - that's data for 51,840,000 pixels per second. The important issue here is that for each 1/50th second vertical scan, 1080 lines of image are transmitted in the progressive system, but only 540 lines of data in the interlaced system (every other line). 86.177.31.209 (talk) 18:09, 8 January 2011 (UTC)[reply]
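The pixel-rate arithmetic in the comment above checks out; as a quick worked example:

```python
width, height, rate = 1920, 1080, 50

# 1080/50p: a full 1920x1080 frame on every 1/50 s vertical scan
progressive = width * height * rate        # 103,680,000 pixels/s

# 1080/50i: only every other line (540 of the 1080) per 1/50 s field
interlaced = width * (height // 2) * rate  # 51,840,000 pixels/s

assert interlaced * 2 == progressive       # exactly half the pixel rate
```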
I reckon we get confusion in this area because frames per second get mixed up with fields per second. As I understand it, the traditional PAL system (as used in England) was 25 frames per second. Each frame was split into 2 fields, one containing the odd lines and one the even lines. This gives 50 fields per second (50 Hz, matching the electrical supply?). Some people mistake this for 50 frames per second.
Now (unless I've got confused), this is written 25i, and would contain as much information as 25 frames per second progressive. A 50 frames per second progressive video would contain more information than 50 fields per second (25 frames interlaced). 1080/50p would be the entire 1920x1080 pixel image transmitted 50 times per second, and be 50 fps (frames per second). An entire 1920x1080 pixel image transmitted 25 times per second would be 25 fps (frames per second). That would be 1080/25i.
It all depends on whether you're comparing 25 fps with 25 fps, or 25 fps (in 25 or 50 fields) with 50 fps.
Of course, I could have got it wrong. In which case it's all a much bigger mess than I actually thought it was. Dannman (talk) 17:56, 19 December 2013 (UTC)[reply]
There might be some confusion for consumers, but the preferred notation used by both the EBU and SMPTE (i.e. the technical folks) has always been in frames, and never in fields - for example, 1080i/25 (25 interlaced frames), 1080p/29.97 (30*1000/1001 progressive frames), 1080p/50 (50 progressive frames), etc.
The real question was whether 25 interlaced frames have better resolution and/or motion perception than 25 progressive frames, and whether they can compare to 50 progressive frames when viewed on a natively interlaced display. But since CRT TVs never got to the point of displaying true 1080 lines before becoming obsolete and being replaced by high-resolution fixed-pixel displays, this question has largely become irrelevant. The industry now embraces progressive scanning formats with very high frame rates, such as 8K 4320p120 in the Ultra HDTV standard. --Dmitry (talkcontibs) 12:39, 4 January 2014 (UTC)[reply]
I think that's clear and I understand.Dannman (talk) 14:55, 4 January 2014 (UTC)[reply]

More 3:2 pulldown info needed[edit]

The article absolutely needs to explain its relationship to 3:2 pulldown and reverse telecine, and clearly explain the difference, because it is a huge subject in the audiovisual industry. There also needs to be more information about the close relationship between deinterlacing and reverse telecine (see the reverse telecine section). Sometimes the two are even confused. However, 3:2 pulldown removal applies to movies broadcast as video, while deinterlacing applies to video broadcast as video. Because both movies and video are broadcast, many modern devices (line doublers, upconversion in HDTV sets, some progressive scan DVD players, etc.) automatically switch between reverse telecine (3:2 pulldown removal) and deinterlacing, based on algorithms that automatically analyze the video for the presence of a pulldown sequence. Chips such as DCDi perform this task. As proof, there are over 70,000 search results that cover both "3:2 pulldown" and "deinterlacing" on the same page: Search ... A lot of confusion exists because these two go hand-in-hand in modern consumer devices (HDTVs, line doublers, progressive-scan DVD players, etc.). Therefore more consistency needs to exist between the deinterlacing article and the reverse telecine section in the telecine article. I've added a few sentences referring to each other as a result, because of the close relationship that exists here (especially in the explosion of modern end-user video equipment, such as HDTV sets which, when upconverting NTSC 480i material to high-def, automatically do either deinterlacing or reverse telecine, depending on the video material being displayed). --Mdrejhon 22:56, 7 August 2006 (UTC)[reply]
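To make the pulldown relationship concrete, here is a toy sketch of the 3:2 cadence that a pulldown detector looks for. It deliberately ignores top/bottom field parity and NTSC's 1000/1001 rate adjustment, and the function name is my own:

```python
def pulldown_32(film_frames):
    """Map 24 fps film frames onto 60 fields/s with the 3:2 cadence.

    Frames alternately contribute 3 and 2 fields, so every 4 film
    frames become 10 fields (24 fps x 10/4 = 60 fields/s).
    """
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    return fields

# Four film frames A, B, C, D become ten fields:
print(pulldown_32(["A", "B", "C", "D"]))
# ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
```

Reverse telecine spots the repeated fields in this cadence and drops them, recovering the original 24 progressive frames exactly, which is why it can outperform generic deinterlacing on film-sourced material.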

question -where deinterlacing is done[edit]

Hi guys, I'm trying to learn more about deinterlacing, and as usual I'm turning to Wikipedia.

My basic question is: Am I better off

  1. playing a regular DVD on a regular player, and having the HDTV de-interlace and upscale the DVD? or
  2. playing a regular DVD on a player which de-interlaces and upscales the DVD, then sends to HDTV?

Not sure if this article is the place to answer such a question - but your heading "where deinterlacing is done" seemed ideal :) Greg 12:58, 22 March 2007 (UTC)[reply]

Answer:

You have two deinterlacer/upscaler combinations (one in the DVD player and one in the HDTV). Use whichever choice gives better quality; there is a lot of variation in quality amongst deinterlacers and scalers. You probably also have the option of selecting progressive (but not upscaled) output from the player, thus using the deinterlacer in the DVD player and the upscaler in the HDTV.

216.191.144.135 (talk) 13:28, 30 May 2008 (UTC)[reply]

matched the properties of CRT screens[edit]

The second paragraph of the article says: "analog television employed this technique because it allowed for less transmission bandwidth and matched the properties of CRT screens."

The part about CRT properties is wrong. There is no property of CRT screens that mandates interlace. Millions of CRT computer monitors displaying progressive images prove it. --Xerces8 (talk) 12:32, 6 November 2010 (UTC)[reply]

In fact, CRTs do have specific properties which allow them to display interlaced video directly. Unlike current displays, CRTs have no fixed pixels, just lots of subpixels (RGB triads) which are scanned continuously by the electron beam. This allows CRTs to directly support various resolutions with no perceived quality loss or artifacts, because the electron beam and the shadow mask perform a kind of analog "filtering" and "scaling" of the video signal. Also, the phosphors in the RGB triads excite very fast and fall off very fast as well, allowing the two fields to be perceived by the eye as two separate half-resolution frames at twice the frame rate.
On the other hand, fixed-pixel displays like LCD, plasma, DLP, FED/SED, etc. have a fixed resolution and much worse pixel response times, so you cannot feed an analog interlaced video signal directly to these displays; it first needs to be deinterlaced and scaled to match the native resolution, and frame rate conversion needs to be performed as well. --Dmitry (talkcontibs) 18:51, 13 December 2010 (UTC)[reply]
Whilst you are correct, the phrase, "... and matched the properties of CRT screens." does imply that there is some characteristic of a CRT whereby it will operate better with an interlaced signal. This is, of course, not the case. They operate satisfactorily with either interlaced or progressive video (or even vector scan video). 86.163.86.51 (talk) 08:38, 5 June 2011 (UTC)[reply]
The "complete analog nature" phrase in "CRT-based displays were able to display interlaced video correctly due to their complete analog nature" is not an explanation at all. The reason early (emphasis) television CRTs didn't reveal interlacing is due to their rather poor resolution (due to spot size), and phosphor persistence (mostly the latter). While those properties are a result of their analog nature, "analog nature" falls short of an explanation.
The now gone "matched the properties of CRT screens" was correct in a way, but seemed to indicate that interlacing was used because of the CRT properties, which is backwards -- the purpose of interlacing was to save bandwidth, as indicated; the CRT properties (of the time) readily facilitated interlacing by simply not being able to reveal it. (By the way, it wasn't just transmission bandwidth that needed saving; receiver bandwidth was also a concern.)
Dmitry's mention that CRTs have "lots of [RGB] subpixels" and shadow masks applies only to color CRTs -- remember that interlacing was devised at a time when television was monochrome. Those CRTs had no pixels whatsoever; the phosphor was continuous across the entire screen. BMJ-pdx (talk) 01:08, 9 April 2022 (UTC)[reply]

Handbrake, VisualHub, Wondershare[edit]

These video encoding programs seem to be able to deinterlace, detelecine, etc., almost any kind of video source. How? And should it be mentioned? (With proper citation, etc.) Apple8800 (talk) 09:24, 15 February 2011 (UTC)[reply]

Provided you don't make it read like an advert for those products, then they probably should be. I look forward to your edit. DieSwartzPunkt (talk) 14:49, 3 May 2011 (UTC)[reply]

Repeated vandalism by a pair of sockpuppets[edit]

The article is repeatedly being vandalised by a pair of IP addresses. The vandalism persistently removes the table of comparison of various deinterlacing methods.

The IP addresses that are vandalising are:

User:188.123.231.4 and

User:82.179.218.11

The edit history of the two users also suggests that they have near-identical interests.

The vandalism is identical from the two users, and neither user leaves an edit summary (always a reliable sign). I would have submitted a sockpuppetry report, but the system seems to have changed and I can't figure out how to do it. Can someone else oblige, or at least tell me where I am going wrong? 86.184.24.140 (talk) 16:29, 16 June 2011 (UTC)[reply]

I don't think your case would be a particularly good one. I grant that there is a good reason for suspicion, but I believe that is about as far as it goes. There does not appear to be any evidence of sockpuppetry elsewhere, so it just might be a complete coincidence that both users (if indeed they are separate users) reintroduced the same deletion of that table from the article, and both users failed to provide an edit summary justifying the deletion. DieSwartzPunkt (talk) 18:16, 16 June 2011 (UTC)[reply]
You may be right, but it is interesting to note that since the allegation there has been no further incident - from either of the sockpuppets. 86.166.68.106 (talk) 17:10, 28 June 2011 (UTC)[reply]

Doublers section[edit]

Can the person who created the table of VLC deinterlacers in the Doublers section provide explanations/expansions of its abbreviations? I understand that in the 2nd and 3rd columns H means half or full frame and FR means frame rate. But what is C, and what do the numbers 1), 2) up to 7) in the notes and in C (2nd and 3rd columns) mean? Please explain. Thank you. — Preceding unsigned comment added by Metaleonid (talkcontribs) 19:42, 5 January 2012‎

The entire Doublers section is written in a non-encyclopedic style with very cryptic language, which is very hard to understand for anyone who is not a hacker, and contains no citations, which might suggest original research. Unless this section is rewritten in a more appropriate style and given proper references, I suggest removing it altogether, as it only duplicates the content of another section with faux-technical implementation details of some unknown deinterlacing algorithm. --Dmitry (talkcontibs) 08:13, 28 February 2012 (UTC)[reply]
The entire section has been copied word-for-word from http://wiki.videolan.org/Deinterlace#Appendix:_Technical_summary - while this does not constitute a copyright violation due to the LGPL license used on the VideoLAN Wiki, the section is too technical as it covers implementation details of VLC Media Player. I have removed this section and added an external link instead. --82.179.218.138 (talk) 11:42, 17 May 2012 (UTC)[reply]

Blacklisted Links Found on Deinterlacing[edit]

Cyberbot II has detected links on Deinterlacing which have been added to the blacklist, either globally or locally. Links tend to be blacklisted because they have a history of being spammed or are highly inappropriate for Wikipedia. The addition will be logged at one of these locations: local or global. If you believe the specific link should be exempt from the blacklist, you may request that it be white-listed. Alternatively, you may request that the link is removed from or altered on the blacklist locally or globally. When requesting whitelisting, be sure to supply the link to be whitelisted and wrap the link in nowiki tags. Please do not remove the tag until the issue is resolved. You may set the invisible parameter to "true" whilst requests to white-list are being processed. Should you require any help with this process, please ask at the help desk.

Below is a list of links that were found on the main page:

  • http://guru.multimedia.cx/deinterlacing-filters/
    Triggered by \bguru\b on the local blacklist

If you would like me to provide more information on the talk page, contact User:Cyberpower678 and ask him to program me with more info.

From your friendly hard working bot.—cyberbot IITalk to my owner:Online 00:21, 14 August 2015 (UTC)[reply]

External links modified[edit]

Hello fellow Wikipedians,

I have just modified 4 external links on Deinterlacing. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 03:42, 8 September 2017 (UTC)[reply]