Talk:Half-precision floating-point format

From Wikipedia, the free encyclopedia

"Citations needed"[edit]

The tables of examples are math problems, not research, so they do not need citations; it would make no more sense to cite a source for them than for 2 + 2 = 4. I propose they be removed.

I agree. I cannot find any WP policy regarding examples, only the policy that only information that is likely to be challenged must have a source. See WP:NOR. I am taking the liberty to remove the "Unreferenced" notice. Agnerf (talk) 14:12, 20 October 2023 (UTC)[reply]

Untitled[edit]

This page confuses increased and decreased precision. Surely increased dynamic range is double precision: what does 'precise' mean? Half precision reduces storage requirements instead!

i.e. half of what? Hmmm... Maybe they should be one page? — Preceding unsigned comment added by 195.137.93.171 (talk) 05:56, 4 June 2007 (UTC)[reply]

=> To me it was perfectly clear: the precision is the number of bits/digits past the floating point. An integer is very imprecise, because between every two integers there is a big range of other numbers. E.g., what if you want to drink 3.5 glasses of milk instead of 3 or 4? An integer is not precise enough to represent this. A single-precision floating point has no problem with 3.5, or even 3.000005 (I don't know the exact number of zeros allowed, but my point is clear). Single precision takes up 4 bytes/32 bits though, so it's quite memory hungry. To combine the best of both worlds (the small memory footprint of ints and the good precision of singles/floats), a half-precision float — a float with a 16-bit/2-byte memory representation — is ideal. You trade a loss of precision for a smaller memory footprint. Indeed, with those 2 bytes, 3.000005 is not representable; 3.05 is representable, though. — Preceding unsigned comment added by 134.58.253.57 (talk) 07:05, 20 July 2009 (UTC)[reply]
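For what it's worth, the trade-off described above can be checked directly in Python, whose standard `struct` module supports the IEEE 754 binary16 format via the `'e'` format code (a quick sketch):

```python
import struct

def to_half_and_back(x):
    """Round-trip a Python float through IEEE 754 half precision."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(to_half_and_back(3.5))       # 3.5 is exactly representable
print(to_half_and_back(3.000005))  # rounds to 3.0; spacing near 3 is 1/512
print(to_half_and_back(3.05))      # 3.05078125, the nearest representable value
```

Note that 3.05 itself is not exact either — it rounds to the nearest representable value, 3.05078125.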

=> It's nomenclature. It's a name. It's not "half" of anything other than another format which we call "full" precision.

Hardware Support for HP?[edit]

Is anyone aware of actual hardware providing native support for half-precision floating-point datatypes? This would be helpful information to add.

129.27.140.172 (talk) 15:35, 5 November 2009 (UTC)[reply]

  • At least Nvidia hardware supports storing halves in registers as well as extending them to floats while calculating. I believe it is common with GPUs. MX44 (talk) 10:58, 20 July 2012 (UTC)[reply]

precision with large numbers[edit]

Quite clever, pulling the equivalent of a 40-bit integer out of a 16-bit space, but accuracy must surely suffer at the high end? If the largest figure is ~65500 with 10 (11?) bits of mantissa, are we moving in steps of 64 (32?) up at that end? Are we not then sacrificing accuracy of representation for wide range? 193.63.174.10 (talk) 09:46, 18 October 2010 (UTC)[reply]

(I mean: a 16-bit unsigned integer at that point would move in single steps, being 32-64× more precise; every number beyond ~1024/2048 is in fact less precise than the integer form, and we're not able to represent "halves" until 512/1024 or less, and full single-place decimals until about 100/200.) 193.63.174.10 (talk) 09:52, 18 October 2010 (UTC)[reply]
Also, what of 20- or 24-bit precision? Are there no standards for using those in either integer or float form? The number packs into 2.5 or 3 bytes; we get somewhat better precision and the same or a wider range, without using quite so much space. Or even 21-bit (RGB + very simple transparency in 64 bits; 18/19/20 to give RGB + 4/7/10-bit transparency with HDR... or even 17/13 (4× half would be 16/16 of course)... I'd probably err towards 19/10 or even 19/20/18/10 RGBA.) 193.63.174.10 (talk) 09:57, 18 October 2010 (UTC)[reply]
The entire purpose of floating point is to skew the range to get greater precision in the smaller values than the larger ones. The range is similar to a 17-bit signed integer, but obviously anything important needs to go in the smaller values. One important consideration for using any kind of floating point is to scale your values appropriately; some HDR image formats go so far as to define all values as between 0 and 1 or -1 and 1, with anything outside of that being the HDR area. It wouldn't make much sense to confine it so tightly for 16-bit, but you still have a ton of flexibility despite the weak precision at higher values. Regarding the other point: some image formats combine 16 bits of decimal places for each channel with a shared 16-bit exponent for all three, which is smaller and much easier to work with than three 20- or 24-bit channels. Foxyshadis(talk) 04:01, 15 September 2012 (UTC)[reply]
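The step sizes asked about above can be checked with Python's `struct` module, which packs IEEE 754 binary16 via the `'e'` format code (a quick sketch):

```python
import struct

def as_half(x):
    """Round x to the nearest IEEE 754 binary16 value."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Near the top of the range, representable values are indeed 32 apart:
print(as_half(65504.0))  # 65504.0, the largest finite value
print(as_half(65503.0))  # also 65504.0
print(as_half(65472.0))  # 65472.0, the next representable value down
```

So at that end the format moves in steps of 32, exactly as the question suspects (10 explicit mantissa bits, and 65504 = (2 − 2^-10) × 2^15).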

Confusingly written[edit]

The range is the difference between the minimum non-zero value and the maximum value. The precision is the number of significant figures available in the format. "Single" format (from memory) has a range of 10^-37 to 10^37 and about 7 significant figures. Double format has a range of about 10^-307 to 10^307 and about 14 significant figures.

From the entry, it looks like 16-bit floating point has a range from 5×10^-8 to 65504 (10^-8 to 10^4), so about 12 orders of magnitude, and a precision of about 4 significant figures.

As for the section "Precision limitations on integer values": it would probably be more useful to have a chart of epsilon versus number magnitude. 195.59.43.240 (talk) 13:25, 25 April 2012 (UTC)[reply]
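A rough version of the suggested epsilon-versus-magnitude chart can be generated straight from the format's definition (10 explicit mantissa bits, normal exponents −14 to 15); a sketch:

```python
def half_ulp(exp):
    """Spacing of binary16 values in the binade [2**exp, 2**(exp+1))."""
    return 2.0 ** (exp - 10)

# Step size (epsilon) versus magnitude, one row per binade:
for exp in range(-14, 16):
    print(f"[{2.0 ** exp:>12g}, {2.0 ** (exp + 1):>12g})  step = {half_ulp(exp):g}")
```

The step grows from about 6×10^-8 at the bottom of the normal range to 32 at the top.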

History is confusing[edit]

What's up with the history section? The page originally said that ILM created it for EXR (circa 1999), and that matches what I've seen elsewhere (e.g. the OpenEXR history page just says "ILM created the "half" format"). Later someone edited the wikipedia page to say that "NVidia and ILM created it concurrently" — and otherwise seems to imply (but never says so explicitly) that NVidia invented it. But all the links seem to be very vague about exactly what happened...

So what's the actual sequence of events? It's unlikely they (ILM and NVidia) both developed the same exact standard independently, so presumably either they cooperated on its development, or one of them did it first, and the other based their work on that.

--Snogglethorpe (talk) 10:20, 20 June 2012 (UTC)[reply]

How is it unlikely? If you compare with the previously existing standards, the only possible difference is in choosing the lengths of the exponent and the mantissa, and the lengths were chosen to be the closest to the ratios of both single and double precision. In all other respects it behaves in exactly the same way as the standard. It doesn't take a genius, just someone who has a need for it, with a basic passing familiarity with the standard and why that ratio was chosen. (Which is because it most closely lines up normal and subnormal without overlap.) Foxyshadis(talk) 03:06, 15 September 2012 (UTC)[reply]
OK, never mind the subjective argument about whether it's likely or not; really, the point is that the article should be clear about the history. If they both came up with it independently, it would be good to say so explicitly; if one influenced the other, the article should say that instead. I, unfortunately, do not know the answer to this (I've spent some time searching for it, but haven't had any luck finding a clear answer). --Snogglethorpe (talk) 05:01, 17 October 2012 (UTC)[reply]

Exact Integer Range?[edit]

The exact integer range has been specified as 0..2048, but shouldn't it be from -2^11 to +2^11, meaning -2048...+2048? I find this confusing. Maxiantor (talk) 12:02, 17 October 2015 (UTC)
Positive and negative values both have the exact same ranges and precision in a float. 217.99.252.226 (talk) 17:24, 21 October 2015 (UTC)[reply]
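The behaviour at the edge of the exact-integer range is easy to check with Python's `struct` module, which supports binary16 via the `'e'` format code (a quick sketch):

```python
import struct

def as_half(x):
    """Round x to the nearest IEEE 754 binary16 value."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Integers up to ±2048 survive the round trip exactly; beyond that they may not:
print(as_half(2048.0))   # 2048.0
print(as_half(2049.0))   # 2048.0 -- spacing is 2 in [2048, 4096)
print(as_half(-2049.0))  # -2048.0 -- negatives behave symmetrically
```

So the stated 0..2048 range does extend symmetrically to -2048, since the sign bit is independent of magnitude.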

Lower than half common? 11- and 10-bit unsigned[edit]

I noticed at https://www.khronos.org/registry/vulkan/specs/1.0/xhtml/vkspec.html#fundamentals-fp16 not just 16-bit (half) but also lower-precision formats. Should they be mentioned, maybe on another page? comp.arch (talk) 08:39, 19 December 2016 (UTC)[reply]
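The Vulkan spec linked above describes unsigned 11- and 10-bit floats that keep half precision's 5-bit exponent but drop the sign bit, leaving 6 and 5 mantissa bits respectively. A sketch of a decoder under that assumption:

```python
def decode_unsigned_float(bits, mant_bits):
    """Decode an unsigned small float: 5-bit exponent, no sign bit.
    mant_bits=6 gives the 11-bit format, mant_bits=5 the 10-bit one."""
    exp = bits >> mant_bits
    mant = bits & ((1 << mant_bits) - 1)
    if exp == 0:                      # zero or subnormal
        return mant * 2.0 ** (-14 - mant_bits)
    if exp == 31:                     # infinity or NaN
        return float('inf') if mant == 0 else float('nan')
    return (1 + mant / (1 << mant_bits)) * 2.0 ** (exp - 15)

print(decode_unsigned_float(0b01111_000000, 6))  # 1.0
print(decode_unsigned_float(0b11110_111111, 6))  # 65024.0, the 11-bit maximum
```

Since there is no sign bit, these formats can only represent non-negative values, which is fine for packed RGB render targets.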

Performance[edit]

Article says:

However, if there is no hardware support, math must be done by emulation, or by conversion to single or double precision and then back, and is therefore slower.

Each individual operation may well be slower (latency), but conversion to single precision and back may still result in more operations per unit of time (throughput) since only half as much data needs to be stored in cache or fetched from RAM (as pointed out earlier in this section). So this claim is probably not true; if it is, it needs a WP:RS. --Macrakis (talk) 15:22, 28 February 2023 (UTC)[reply]

Please kindly add the number 1.000976563 to the intro; it is equivalent to the min step of 1/1024 + 1[edit]

The smallest increment above 1.0 in this format would, by my calculations, be 1.000976563. Tomachi (talk) 14:09, 26 April 2024 (UTC)[reply]
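The exact value is 1 + 2^-10 = 1.0009765625 (which rounds to the 1.000976563 given above); it can be checked with Python's `struct` module, which packs binary16 via the `'e'` format code:

```python
import struct

# The representable value just above 1.0 in binary16 is 1 + 2**-10:
bits_of_one = struct.unpack('<H', struct.pack('<e', 1.0))[0]
next_up = struct.unpack('<e', struct.pack('<H', bits_of_one + 1))[0]
print(next_up)  # 1.0009765625
```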

That is in the examples table. Spitzak (talk) 08:57, 27 April 2024 (UTC)[reply]