Talk:Intel 4040


Addressing?!

OK, there's some weirdness going on here, in terms of both missing and contradictory information.

1/ The chip is variously listed as having "12-bit", "13-bit" and "8-kilobyte" address ranges. For a chip with a 4-bit word length / data bus width, that last figure corresponds to 14 bits, not 13, as 8192 bytes = 16384 nibbles. So, which is it? If it's 12-bit like the 4004, that's only 2 KB of address space (4k words); 13-bit would be 4 KB / 8k nibbles. 14-bit seems like a realistic possibility, because it would then match the 8008 and maybe explain that chip's rather odd choice of address width.
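For anyone who wants to check the arithmetic, here is a quick Python sketch. The figures are recomputed from the claims in this thread (4-bit words, 2 nibbles per byte), not taken from any datasheet:

```python
# Back-of-envelope check of the address-width claims above.
import math

def bits_needed(locations: int) -> int:
    """Number of address bits required to span `locations` locations."""
    return math.ceil(math.log2(locations))

KB = 1024
NIBBLES_PER_BYTE = 2

# 8 KB expressed in 4-bit words (nibbles):
nibbles = 8 * KB * NIBBLES_PER_BYTE          # 16384 nibbles
print(bits_needed(nibbles))                  # 14 bits, not 13

# And the other direction: what each candidate width spans.
for bits in (12, 13, 14):
    words = 2 ** bits
    print(bits, "bits ->", words, "nibbles =", words // 2 // KB, "KB")
```

So 12-bit gives 2 KB, 13-bit gives 4 KB, and only 14-bit reaches the quoted 8 KB.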

2/ Is the bus multiplexed or separate? The much greater number of pins vs the 4004 suggests that, much like the 8080 vs the 8008, the data and address buses were partially or wholly demultiplexed in the 4040. That is, instead of issuing a string of three nibbles for the address and then reading or writing the actual data as a fourth, the address would be asserted as a single 12- to 14-bit value (or even two 6- or 7-bit ones), with the data then either read/written sequentially via four of those pins, or transferred more or less simultaneously over 4 separate pins. That would produce a significant speed boost when reading/writing memory or I/O devices, as well as avoiding additional delays if the address space really is greater than 12 bits. But there's no explicit mention of it, and the extra pins could well be spoken for by other functions not clearly defined in the article.

Even the functional diagram doesn't make it plain: no dedicated address lines appear anywhere in the image, but neither are the data lines marked as "address/data", as would be the case for the 4004, 8008, 8085, 8086, 8088 etc. 146.199.0.170 (talk) 13:46, 19 September 2018 (UTC)

4040 slower than 4004

Only 60k instructions per second. That means the 4040 took 15 μs to execute a single machine instruction, compared to the 10 μs taken by the 4004. Am I missing something? Anwar (talk) 19:17, 7 May 2008 (UTC)

Bear in mind the "500 to 740 kHz" operating speed range. The figures given on the 4004 page clearly use the maximum speed (also 740 kHz), and "10 µs" is a bit of a simplification of its maximum instruction processing speed: it takes 8 clocks per machine cycle, and each instruction needs either 1 or 2 MCs, thus 8 to 16 clocks. This works out to a minimum of 10.8 µs and a maximum of 21.6 µs per instruction, or an average of around 16.2 µs assuming an even mix (a little under 62,000 IPS).
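Those timings are easy to reproduce. A short Python sketch, using only the assumptions stated above (8 clocks per machine cycle, 1 or 2 machine cycles per instruction, 740 kHz maximum clock):

```python
# Recomputing the 4004 instruction timings quoted above.
CLOCK_HZ = 740_000
CLOCKS_PER_MC = 8

t_1mc = CLOCKS_PER_MC / CLOCK_HZ * 1e6      # one-machine-cycle instruction, in µs
t_2mc = 2 * CLOCKS_PER_MC / CLOCK_HZ * 1e6  # two-machine-cycle instruction, in µs
t_avg = (t_1mc + t_2mc) / 2                 # assuming an even 1-MC/2-MC mix

print(f"{t_1mc:.1f} us min, {t_2mc:.1f} us max, {t_avg:.1f} us average")
print(f"~{1e6 / t_avg:,.0f} instructions per second")
```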
Now, 740 kHz is very nearly 1.5× 500 kHz; if we slow the 4004 down to that speed, our "10 µs" becomes nearly "15 µs" straight off the bat (more precisely, 10.8 µs becomes 16.0 µs). As there's a good bit of rounding going on (and we don't have anywhere near as detailed a datasheet for the 4040 as for the 4004), it's not unreasonable to assume that both processors have equal instruction cycle lengths of 8 clock pulses, and that what we're seeing is *either* the 4040 having the same best-case per-clock execution speed as the 4004, merely specified here at its *minimum* clock speed rather than its maximum, *or* a rather rough approximation of its average execution speed at the maximum clock rate (i.e. it's actually ~62 kIPS, not 60).
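The clock-scaling argument can be checked the same way; again this only reuses the 8-clocks-per-machine-cycle assumption from this thread:

```python
# How "10 us" at the maximum clock becomes "15-16 us" at the minimum clock.
CLOCKS_PER_MC = 8

for hz in (740_000, 500_000):
    t_us = CLOCKS_PER_MC / hz * 1e6
    print(f"{hz} Hz -> {t_us:.1f} us per machine cycle")

print(f"ratio: {740_000 / 500_000:.2f}x")   # very nearly 1.5x
```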
Bear in mind however that the 4040 would have been far more efficient with its instruction cycles, as it had a great many more instructions (almost as many as the 8080) and so could do a lot more with one or two of them. A 32-bit (8-digit BCD) addition is mentioned on the 4004 page as taking 79 instruction times (632 clocks), or almost 10 instructions (more precisely, 79 clocks) per digit, which is probably partly due to the more complicated code required to carry out the addition, as well as possibly the multiplexed buses (and lack of index registers) slowing down data transfer. Not that it would have mattered much for a chip intended for use in a desk calculator, where the difference between doing that operation in 1 ms and 100 ms would hardly be noticed, but still, I bet you any money the 4040 would have done the job faster at the same clock speed simply through requiring fewer instructions and possibly fewer bus accesses. And if you were trying to build a more complex device from it, possibly including a rudimentary computer, that difference would actually have been a meaningful one.
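The clock budget for that BCD-addition figure checks out too. A small sketch, assuming every instruction takes the best-case single machine cycle (8 clocks), as the quoted numbers imply:

```python
# Sanity-checking the 8-digit BCD addition figure quoted from the 4004 page.
INSTRUCTIONS = 79
CLOCKS_PER_INSTRUCTION = 8   # best case: one machine cycle per instruction
DIGITS = 8

clocks = INSTRUCTIONS * CLOCKS_PER_INSTRUCTION
print(clocks)                      # 632 clocks total
print(clocks / DIGITS)             # 79 clocks per digit
print(INSTRUCTIONS / DIGITS)       # ~9.9 instructions per digit
```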
It would also have offered attractive advantages in terms of needing far less support hardware, and thus a simpler, cheaper, more easily designed/understood/tested system layout, much like the 8080 did over the 8008. And although it still only had an internal stack, it was at least twice as long as the 4004's, so more complex programs could be run (and/or it would have to do less long-winded dumping and reloading of the stack registers into/out of memory to service deeply nested subroutines). 146.199.0.170 (talk) 13:24, 19 September 2018 (UTC)

Pricing

The big thing lacking from most of these microprocessor articles is the pricing. Initial pricing at the time of release is an absolute must. Usually these prices varied according to the quantity ordered. Also, if the info is available, it would be good to see how the price changed through the years. JettaMann (talk) 14:25, 10 September 2010 (UTC)

Might be difficult. The chip probably didn't sell in great quantities and wouldn't have been so much of an open-market thing - more B2B, i.e. Intel negotiating directly with interested companies. The details of such deals were generally... well, not so much secret as simply not discussed outside the companies involved, because it wasn't anyone else's business and they probably wouldn't have been interested anyway. There was no particular home computer scene at the time, so no-one was reading computer and electronics magazines looking for retailer adverts to get the best deal on a 4040 to upgrade their 4004-based system with, or whatever.
Plus it's known from the history of the 8080 and the other chips that sprang up to compete with it (6800, 6502, Z80 etc), as well as its use in the first micros to be built in any quantity, that chip makers simply didn't know how to price the product at the time - the unit cost could fluctuate so wildly depending on the deal you cut with them that trying to pin it down to a simple "in year X, it cost Y dollars" figure is pretty much meaningless. E.g., Intel originally pegged the per-unit price of the 8080 at $360, just because it matched nicely with IBM's System/360 mainframe, and it seemed like a good certain-fraction-of-a-mainframe-CPU-board price for a chip that offered a good fraction of the performance of a mainframe CPU board. Other manufacturers of early chips followed suit and listed their first open-market CPUs for about the same. But when the maker of the Altair cut a deal with Intel to buy a thousand or so of the processors to build into the first run of 8800s, the apparent unit price was a mere $75 (which enabled the kit to sell for only $439). Soon after, the 6800 started selling for about that much, and was then massively undercut by the 6502, which listed at just $25.
However much the 4004 and 4040 went for... who knows. The best method of estimating it is to look at how much the calculators and other early devices that incorporated them retailed for, divide that in half, and then consider what fraction of the total BOM the central processor might have represented in such a machine... Could be Intel didn't rate them as worth much more than their individual DRAMs at first, then jacked up the RRP of the 8080 when they realised what kind of numbers the new product might sell in and how desperate people would be to get hold of one at almost any price... 146.199.0.170 (talk) 13:35, 19 September 2018 (UTC)

Uploading new photograph?

Hi,

I am currently in the possession of a (now EXTREMELY rare) C4040 CPU and I would like to contribute by uploading a home-made picture of it. Unfortunately the legal pages look like a minefield. Can anyone help me out here? — Preceding unsigned comment added by DiederikH (talkcontribs) 19:21, 14 April 2012 (UTC)

There's nothing to get too excited about. If you don't care whether other people make money off your picture or take credit for it, just release it into the public domain. If you do care but still want others to be able to use it, there are things like the GNU public license (other people can make money but have to credit you and provide the source) and the Creative Commons licenses (you can choose whether people can make money or derive works, i.e. edit your stuff and reissue it). Personally I couldn't care less, so I just PD most of my stuff. Not really a minefield; rather a field full of different fruits to suit your taste. Paul Moir (talk) 05:37, 17 April 2012 (UTC)