Wikipedia:Reference desk/Archives/Computing/2012 June 19

Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


June 19

Load runner

Hi,

I am recording a script for one of our applications. A unique ID is created after filling in a form, and now I need to search using the ID that was generated. Can anyone suggest how to do this? Is it possible with correlation?

Regards, Swamy — Preceding unsigned comment added by 125.22.193.145 (talk) 10:33, 19 June 2012 (UTC)

I don't really understand the question. Apparently some application creates something (a "script"?) with a unique ID associated with it in some way. Is this "script" a file, and is this unique ID the file name? If so, you could just sort the files by file name and easily find the one you want. The more rigorous solution is to put them into a relational database, indexed by the unique ID. StuRat (talk) 17:54, 19 June 2012 (UTC)
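A minimal sketch of that relational-database idea (my illustration, not part of the original exchange; the table and column names are hypothetical), using Python's built-in sqlite3 module:

```python
# Store each record under its generated unique ID, then look it up.
import sqlite3

conn = sqlite3.connect("records.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS records ("
    "unique_id TEXT PRIMARY KEY, "   # the ID generated after the form
    "payload TEXT)"                  # whatever data goes with it
)
conn.execute("INSERT OR REPLACE INTO records VALUES (?, ?)",
             ("ID-12345", "form contents here"))
conn.commit()

# Searching by the generated ID is then a single indexed lookup:
row = conn.execute("SELECT payload FROM records WHERE unique_id = ?",
                   ("ID-12345",)).fetchone()
print(row)
```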
If English isn't your native language, you might want to post your question in your own language, then we will translate it to English. StuRat (talk) 00:59, 20 June 2012 (UTC)

Digital camera resolution

I have a cheap little point-and-shoot digital camera (Nikon Coolpix L26). The specs say it has 16.44 million total pixels in its image sensor (a 1/2.3-in type CCD). However, when I set it to a resolution with about that many pixels (4608×3456), it looks fuzzy. I don't seem to get any more resolution than if I set it to about 1600×1200. It rather looks like the 4608×3456 pic was upconverted from the 1600×1200 pic. So, what's going on? Do the 16.44 million pixels include different pixels for different colors, so effectively the camera provides a lower number of full-color pixels? StuRat (talk) 19:28, 19 June 2012 (UTC)

Welcome to the world of Pixel count. Precise optical parts are expensive to make, while megapixels are cheap and impressive in the adverts, so the manufacturers supply more pixels than the lens can properly focus. I almost always crank down my modestly more expensive Nikon P-6000 from its nominal 13 megapixels to 8, for smaller files and higher sensitivity. The change in fuzziness, if any, is undetectable. Expensive cameras tend to put the extra money mostly into optics, so they can make better use of those megapixels. Jim.henderson (talk) 19:45, 19 June 2012 (UTC)
I see. But what specifically is wrong with the optics? Is the image always slightly out of focus on the CCD? Is this due to lens aberrations? And is there any way to know what the real resolution of a camera is, before you buy it? StuRat (talk) 20:01, 19 June 2012 (UTC)
(off-topic?) My camera has a 15.1-effective-megapixel CMOS sensor. Yours uses a CCD sensor. Did they ever sort out which was better?--Canoe1967 (talk) 19:59, 19 June 2012 (UTC)
It's very similar to when you see amateur telescopes advertised as 800× magnification. It might technically be capable of that, but it will look rubbish! I imagine that the pixels are just "crappy" (technical term); pushing them to their resolution limit exposes that fact. As to how you can tell what the "real" resolution is, the problem is there is no such thing as "real resolution"; it's all relative. You can try to work out a relative resolution by reading reviews from reputable sites, like dpreview.com. But be warned, camera reviews can be almost a black hole: there is no objective standard, so you can literally spend eternity chasing after the "best camera". My advice is to pick a brand you like and stick with it, pick the kind of camera you want, and just buy the middle-of-the-road model (unless you're a pro, but then you wouldn't be asking for advice). These days you can't go wrong with that advice; most extra features are just gimmicks, and most cameras these days will take a photo that is more than good enough. Gone are the days where you invest in ONE camera that will last you a lifetime; I think it's far more economical and practical to upgrade your camera regularly (I do it about every 3-4 years, usually when I go on an overseas trip). You will ALWAYS find a camera that has slightly better features or is slightly cheaper; make a decision and don't look back. Vespine (talk) 22:59, 19 June 2012 (UTC)
I should try a test with 2 lenses I got from eBay for mine: a Canon 80-200mm and a Vivitar 100-300mm. Set the camera to the same settings for each at 100, 150, and 200mm, then take 6 images and compare them for lens quality. What would be the best subject, a phonebook page in bright indirect light?--Canoe1967 (talk) 23:15, 19 June 2012 (UTC)
I don't think it's the lens, unless the blurring you're talking about is chromatic aberration. There are two things that reduce the effective resolution of all digital cameras that have nothing to do with the lens:
  1. By convention, a single pixel in a digital camera is a single sensor behind a colored filter, usually arranged in a Bayer pattern, whereas a single pixel on a computer monitor has red, green, and blue subpixels, so 16.44 million digital camera pixels provide only as many samples as 5.48 million monitor pixels (see the sketch after this reply). That doesn't degrade the resolution by a factor of 3, but it does degrade it.
  2. The tinier the sensor, the less light it collects and so the noisier the output. All P&S cameras denoise the image as part of the postprocessing, which also degrades the resolution, because high-frequency detail can't be distinguished from high-frequency noise.
-- BenRG (talk) 23:51, 19 June 2012 (UTC)
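A minimal sketch of point 1 (my illustration; it assumes an RGGB layout and uses naive 2×2 binning, whereas real demosaicing algorithms interpolate back up to full resolution):

```python
# Count how many full-colour pixels a Bayer mosaic yields under the
# simplest possible demosaic: one RGB pixel per 2x2 RGGB cell.
import numpy as np

h, w = 3456, 4608                    # the camera's full-size setting
mosaic = np.random.rand(h, w)        # stand-in for raw sensor data

r = mosaic[0::2, 0::2]                              # R at even rows/cols
g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2   # average the two Gs
b = mosaic[1::2, 1::2]
rgb = np.stack([r, g, b], axis=-1)   # quarter-resolution RGB image

print(mosaic.size, "photosites ->", rgb.shape[0] * rgb.shape[1], "RGB pixels")
# 15925248 photosites -> 3981312 RGB pixels
```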
1) Why doesn't it degrade it by a factor of 3? (In my camera, it seems to be degraded by approximately a factor of 8.)
2) Wouldn't the total light gathering ability of the camera depend on the diameter of the lens, not the sensor? StuRat (talk) 00:51, 20 June 2012 (UTC)
1) You still have the full spatial resolution of 16 million separate pixels, and variations in the three color types are highly correlated in practice. Just as an example, you could use the 8 million G pixels as the luminance and then colorize it using the R and B pixels. That would lose only half the resolution, not two thirds. I think actual postprocessing algorithms do better than that. 2) I meant that for a given amount of light hitting the whole sensor, as the pixel count increases, the amount of light hitting each pixel decreases. I was calling each individual pixel a "sensor", which probably isn't the right terminology.
I crossed out the claim that the lens quality doesn't matter much because I don't really know. -- BenRG (talk) 22:07, 20 June 2012 (UTC)[reply]
1) That wouldn't work, as that approach would mean any spot devoid of green would come out as black, when it's really bright red, blue, or purple. StuRat (talk) 22:23, 20 June 2012 (UTC)
The "red", "green" and "blue" channels in a Bayer array all cover a large range of the visual spectrum. They aren't like the RGB subpixels on a display, which really are those perceptual colors. Deriving luminance only from the "green" channel probably isn't a great idea, but it would work better than you suggest. The human visual system actually uses only the L and M ("red" and "green") cones to calculate luminance. -- BenRG (talk) 23:59, 20 June 2012 (UTC)[reply]

I would think a system of testing cameras to find their actual resolution could be devised. Here are my thoughts:

A) Photograph a series of grids, with finer and finer resolutions, until you get down to the resolution where each line and gap in the grid will be one pixel wide in the digital image. This series of grids could be generated on a computer monitor (but not a type with a fixed pixel count, like LCD; perhaps an old CRT would be best).

B) Take the digital output and feed it into a program that counts the number of lines in the grid (a sketch of such a program follows this post). If it's able to correctly count them, then the camera can handle that resolution.

C) Repeat this process with different cameras, settings, and grid sizes, until you have a chart listing the maximum effective resolution of every camera.

Should I contact Consumer Reports to convince them to do such a test? :-) StuRat (talk) 00:48, 20 June 2012 (UTC)
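A rough sketch of step B (my own illustration; the function name is hypothetical): threshold one row of the captured image and count dark-to-light transitions. If the count matches the number of lines in the test grid, the camera resolved that spacing; when the lines blur together, the count drops.

```python
import numpy as np

def count_grid_lines(gray_image, row=None):
    """Count vertical grid lines crossing one row of a 2D grayscale array."""
    profile = gray_image[row if row is not None else gray_image.shape[0] // 2]
    dark = profile < profile.mean()              # True on grid lines
    # Each line is a run of dark pixels; count the rising edges of runs.
    return int(np.count_nonzero(dark[1:] & ~dark[:-1]))
```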

It's a combination of sensor and lens. One obvious thing to notice is that a lens is round but a sensor is square, so the lens has to "overshoot" the sensor to some degree. In cheap cameras (as a rule of thumb), the overshoot will be as small as possible; in more expensive cameras, there will be a bit more overshoot. Since the edges of a lens require tighter tolerances, the corners of an image typically suffer most from this effect. At least one good review site includes an image in its reviews which shows fine lines to determine the "effective resolution". Vespine (talk) 02:26, 20 June 2012 (UTC)
The messed up corners can be fixed by a bit of cropping. StuRat (talk) 02:32, 20 June 2012 (UTC)
  • In a perfect world there would be a site that has images from all cameras taken of the same test pattern. Perhaps someone should make an .svg one and upload it? I have also seen a line pattern that is angled away from the camera to test the accuracy of the auto-focus: take a picture of the center of the lines and then see which numbered line is actually in focus (they are numbered by distance). With many high-end cameras you can input that number to get perfect focus each time.--Canoe1967 (talk) 15:40, 20 June 2012 (UTC)
I don't believe that you can quantify optical quality in any simple way, because image quality changes based on aperture, ISO, focus, shutter speed, and many other things. If you give a camera enough light to work with, it doesn't even need a lens (pinhole camera). It might be tempting, therefore, to just try lower and lower light levels and measure how grainy things get, but lots of good photography requires opening up the aperture and using shallow depth-of-field, and for that you suddenly become interested in what aperture settings it has and the nature of its bokeh, and you don't care at all about high-ISO performance. Paul (Stansifer) 18:52, 20 June 2012 (UTC)
I realize that it's complicated, but the "maximum resolution achievable by a camera under ideal conditions" is something I would certainly be interested in knowing. I believe that's what many consumers think the megapixel count is giving them, but clearly, it is not. StuRat (talk) 18:58, 20 June 2012 (UTC)
Do you mean something like an 'acceleration rating' for cars? I used to have a 1975 Chev that cruised at 105 mph and peaked at 125+. The 0-60 was crap because it was so heavy. Raw horsepower can't be used to judge, but horsepower-to-weight ratio can.--Canoe1967 (talk) 21:46, 20 June 2012 (UTC)
Something like that, yes. StuRat (talk) 22:19, 20 June 2012 (UTC)
Everything above is wrong. Jim.henderson came closest, but it's a case of a fundamental physical limit, rather than Nikon cutting costs on the optics. The problem is actually quite simple: you're hitting the diffraction limit. For any given aperture, there's a minimum size of spot that a lens can focus light to, the Airy disk, and if the sensor elements are smaller than the disk, light will spill over into adjacent sensor elements and give a fuzzy appearance. In your case, assuming an aperture of f/8 (common for outdoor photographs), the Airy disk is three pixels wide. --Carnildo (talk) 01:30, 21 June 2012 (UTC)
That would be about right, because 3×3 pixels would be 9, and I seem to see about an 8-fold degradation of the resolution relative to the total number of pixels (if you consider that a circle with a diameter of 3 has an area of 7, you also get close to that). Can you tell me how you determined the 3-pixel width? This camera has the f/8 aperture and also an f/3.2 aperture. How many pixels wide would the Airy disk be for that setting? StuRat (talk) 02:09, 21 June 2012 (UTC)
I ran the numbers through the advanced diffraction limit calculator at [1]. Since the calculator doesn't have a setting for a 1/2.3" sensor, I used the 1/2" sensor setting instead. An aperture setting of f/3.2 will still be diffraction-limited, but less visibly so: the loss of detail from a 1.5-pixel Airy disk is on the same scale as the loss of detail from interpolating the Bayer filter and loss of detail from noise reduction. --Carnildo (talk) 22:26, 22 June 2012 (UTC)
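A back-of-envelope version of that calculation (my sketch; the 6.17 mm sensor width and 550 nm wavelength are assumptions) reproduces both of Carnildo's figures if "disk width" is read as the Airy pattern's bright core (FWHM ≈ 1.03 λN) rather than the full first-null diameter (2.44 λN):

```python
# Airy-disk size in pixels for StuRat's camera at two apertures.
sensor_width_mm = 6.17                 # typical width of a 1/2.3" sensor
pixels_across = 4608
pitch_um = sensor_width_mm * 1000 / pixels_across   # ~1.34 um per pixel

wavelength_um = 0.55                   # green light
for f_number in (3.2, 8.0):
    fwhm_um = 1.03 * wavelength_um * f_number        # bright central core
    first_null_um = 2.44 * wavelength_um * f_number  # full Airy disk
    print(f"f/{f_number}: core ~{fwhm_um / pitch_um:.1f} px, "
          f"full disk ~{first_null_um / pitch_um:.1f} px")
# f/3.2: core ~1.4 px, full disk ~3.2 px
# f/8.0: core ~3.4 px, full disk ~8.0 px
```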
Thanks, so it sounds like I could get away with half or a third of the total megapixel count without visible graininess, then, at f/3.2? If so, that's a lot better than 1/8th, which was all I got before. StuRat (talk) 00:29, 23 June 2012 (UTC)
Just did another test with the pill bottle. Using the brightest light I have, and no zoom, I was able to get a 1/500th second exposure with f/3.4. The maximum resolution seems to occur around 8 megapixels (2448×3264 pixels), so right about what we expected. I can see the printing dots and make out misalignments in the colors (which I assume are actually on the bottle), that I can't see with the naked eye, so I'm happy with that. So, looks like this camera is only good for close-up pics of inanimate objects. StuRat (talk) 03:42, 23 June 2012 (UTC)
The usual solution to an image that's too blurry at full size is to scale it down. If you take one of the full-size images from that camera and scale it down to 4 megapixels (so, a scale factor of 2), it should look reasonably sharp. There are other post-processing steps you can try to make the image look sharper as well, such as unsharp masking. --Carnildo (talk) 20:49, 24 June 2012 (UTC)
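A minimal sketch of both steps with the Pillow library (the file names here are hypothetical):

```python
from PIL import Image, ImageFilter

img = Image.open("full_size.jpg")            # e.g. the 4608x3456 original
small = img.resize((img.width // 2, img.height // 2),
                   Image.LANCZOS)            # scale down by a factor of 2
sharp = small.filter(ImageFilter.UnsharpMask(radius=2, percent=150,
                                             threshold=3))
sharp.save("downscaled_sharpened.jpg")
```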

Bad pixels?

I was wondering, do some of the pixel sensors just send out garbage? If so, the software might apply some type of averaging algorithm to disguise these bad pixels, which might also account for the blurriness. I believe our eyes do something similar. StuRat (talk) 22:19, 20 June 2012 (UTC)

Bad pixels should show up as just a crappy pixel. Some cameras can do that if you register the 'dust data', though. They mark dust spots on the sensor and average to the pixels around them, I think. If there isn't a lab standard to test the horsepower/weight ratio of cameras, someone should create a standard. I usually trust the camera store: I tell them my budget and they recommend a camera. Future Shop, I have found, does know a lot about that; they may get advice from their own head office and not even sell crap cameras with lots of pixels but lenses made from flour and water.--Canoe1967 (talk) 23:08, 20 June 2012 (UTC)
What does "register the 'dust data'" mean? StuRat (talk) 23:15, 20 June 2012 (UTC)
I have a parameter on my camera for it. I think it just sets up a 'balanced' output from the sensor. I am trying to find a link to info on it. If there is dust on my sensor, I think it compensates somehow. I can add dust data and then remove it after a sensor cleaning. This I assume takes pictures without the effects of the dust spots showing as much.--Canoe1967 (talk) 23:26, 20 June 2012 (UTC)
I'm still not quite getting it. Do you tell it which specific pixels are bad, or does it somehow figure it out (from them producing output that doesn't match the surrounding pixels)? StuRat (talk) 23:30, 20 June 2012 (UTC)
  • Well, that looks unnecessarily complex. You have to take a pic of a white background, zoom in on any dust spots on the image, then ask it to delete those spots. I'd want it to detect any pixels which don't match the background automatically, tell me, and ask if I want to use the average of the surrounding pixels instead. Of course, a piece of dust might blot out several pixels, while the type of bad pixel I'm talking about should be alone. StuRat (talk) 23:38, 20 June 2012 (UTC)
You're not encountering bad pixels. Bad pixels are very common in camera sensors (yours probably has a few hundred to a few thousand). They're handled by the camera's software identifying pixels that are inappropriately black or inappropriately full-on, and replacing them with the average of the pixel's neighbors. Technically speaking, this blurs the image, but since it's in such small, isolated areas, you'll never be able to spot it. --Carnildo (talk) 01:37, 21 June 2012 (UTC)
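The correction Carnildo describes might look roughly like this (my sketch, not actual camera firmware; the bad-pixel map would come from a factory or user calibration step):

```python
import numpy as np

def fix_bad_pixels(raw, bad):
    """Replace flagged photosites with the median of their neighbours.

    raw: 2D sensor array; bad: boolean mask of stuck/dead pixels.
    """
    fixed = raw.copy()
    h, w = raw.shape
    for y, x in zip(*np.nonzero(bad)):
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        patch = raw[y0:y1, x0:x1]
        good = patch[~bad[y0:y1, x0:x1]]     # skip neighbouring bad pixels
        if good.size:
            fixed[y, x] = np.median(good)
    return fixed
```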

Light level effect on digipics?

UPDATE: I found I get much sharper images with more light. The ambient light was normal room lighting before, which I thought would be sufficient. However, when I shined a 500 watt halogen light directly on the subject (a pill bottle, in this test), it came out much better (and not any brighter). I can think of several possible reasons:

1) Increased signal-to-noise ratio.

2) The auto-focus may have had insufficient light to work before, leaving the image slightly out of focus.

3) The shorter exposure time needed under such bright light may have eliminated blurring from camera vibration (either from the electronics or me having the DTs).

So, which of these is the most likely explanation? StuRat (talk) 05:41, 21 June 2012 (UTC)

I would say 1) increased signal-to-noise ratio, if I had to choose from that list. I don't know if you can adjust shutter speed or 'film speed' with your camera. If they are set automatically, then it may be a higher film speed that increases the 'grain'. My camera can take pictures in very low light, but they are very 'grainy'. See Image sensor and Film_speed#Digital. WP seems so full of information split into so many articles, maybe we should stop adding to it? The auto-focus shouldn't be an issue, and handheld shutter speeds are usually okay at 1/100 or faster. --Canoe1967 (talk) 12:03, 21 June 2012 (UTC)
The fastest I've gotten is 1/60th of a second (while it doesn't let me manually set the speed, it does report it). It supposedly can do a 1/2000 second exposure, but enough light for it to choose that would likely set the subject on fire. :-) StuRat (talk) 04:20, 22 June 2012 (UTC)
If you have the same subject as the original low-quality image, could you try another shot with brighter light? Longer lenses need faster shutter speeds as well. They do the math differently now, but it used to be 1/focal length: 50mm at 1/50, 200mm at 1/200 sec, that type of thing. I think they still use the same math and then multiply/divide by the crop factor. I think your camera has image stabilization, which should help as well.--Canoe1967 (talk) 14:30, 22 June 2012 (UTC)
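That rule of thumb as a tiny calculation (my sketch; the ~5.6× crop factor for a 1/2.3-inch sensor is an approximation):

```python
# Minimum handheld shutter speed ~ 1 / (focal length x crop factor).
def min_handheld_shutter(focal_length_mm, crop_factor=1.0):
    return 1.0 / (focal_length_mm * crop_factor)

print(min_handheld_shutter(50))         # 50mm on full frame -> 1/50 s
print(min_handheld_shutter(200))        # 200mm -> 1/200 s
print(min_handheld_shutter(10, 5.6))    # 10mm on a ~5.6x crop -> ~1/56 s
```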
Unfortunately the original subjects were my family gathered for the last holiday. Even when they are gathered together, shining blinding lights in their eyes to get a less grainy pic probably wouldn't be much appreciated. So, it seems like this camera may only be good for shooting inanimate objects under arc lamps. I may need to get myself a welding mask for this. :-) StuRat (talk) 00:23, 23 June 2012 (UTC)

Will it take an external flash? Most built-in ones are crap even on high-end cameras and only reach 3-6 feet or so.--Canoe1967 (talk) 18:11, 23 June 2012 (UTC)

I don't believe it does take an external flash. And, even if it did, it looks like it would have to be so bright as to cause retina damage. (Maybe I can have everyone wear dark sunglasses?) StuRat (talk) 18:24, 23 June 2012 (UTC)
With this camera's need for extreme light levels, I wonder if it would take decent pictures of the Sun (too bad I just missed the transit of Venus). Or would some component be damaged (like the light meter)? StuRat (talk) 18:31, 23 June 2012 (UTC)
A point-and-shoot camera like you've got only has one sensor: it uses the main imaging sensor for metering, focus, live preview, and taking the image. It's unlikely that you'll damage it by taking pictures of the Sun, but it's unlikely that you'll get anything but a white circle, either: the Sun's just too bright. For the recent eclipse, I used a shutter speed of 1/4000 of a second, an aperture of f/32, a stack of filters equivalent to a five-stop neutral density filter, a 1.4x teleconverter (reducing the light by one stop), and a thin overcast to cut the light down to something my camera could handle. --Carnildo (talk) 20:49, 24 June 2012 (UTC)
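Adding up that light reduction in photographic stops (my arithmetic, using the sunny-16 rule at ISO 100 as a rough baseline; each stop halves the light):

```python
import math

stops_aperture = 2 * math.log2(32 / 16)   # f/32 vs f/16      -> 2 stops
stops_shutter  = math.log2(4000 / 100)    # 1/4000 vs 1/100   -> ~5.3 stops
stops_nd       = 5                        # neutral-density filter stack
stops_tc       = 1                        # 1.4x teleconverter
total = stops_aperture + stops_shutter + stops_nd + stops_tc
print(f"~{total:.1f} stops below a sunny-16 exposure "
      f"(light cut by a factor of ~{2 ** total:,.0f})")
# ~13.3 stops, a factor of roughly 10,000
```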