Wikipedia:Reference desk/Archives/Computing/2016 December 29

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


December 29

Why no virtual 8086 mode in x86-64 long mode?

Why is it that microprocessors using the x86-64 architecture have virtual 8086 mode turned off when the processor is in long mode, even though these same processors are perfectly capable of running it (they run virtual 8086 mode just fine in legacy mode)? Whoop whoop pull up Bitching Betty | Averted crashes 02:09, 29 December 2016 (UTC)

I don't see any reason to think it's any more complicated than what our article suggests (x86-64#Architectural features, "Removal of older features"), namely the removal of features that are no longer needed from the new architecture. Such cleaning up of stuff (architectures, software, etc.) by removing unneeded legacy features isn't uncommon and is generally considered good practice. Nil Einne (talk) 06:06, 29 December 2016 (UTC)
...except that virtual 8086 mode isn't an "unneeded" feature, not even close, given that its removal breaks compatibility with all 16-bit applications, including some widely used ones.
Is there some way to override or remove the long-mode disabling of virtual 8086 mode? Whoop whoop pull up Bitching Betty | Averted crashes 09:24, 29 December 2016 (UTC)

Why do you believe that 16-bit application compatibility was considered significantly important to the designers of the x86-64 architecture? I use a lot of crap, but although I've been using Windows x64 since the XP and single-core Venice days, I mostly only ever encountered the issue with crappy programs which still used 16-bit installers. That and DOS programs, but while the latter was mildly annoying in 2006, by now DOSBox is a far better alternative. (Well, maybe it was then too, but I'm not so sure, since IIRC I did sometimes have performance problems, albeit maybe this was because of upscaling modes.)

Note also that for the people who matter most, i.e. business and enterprise customers, even if they did have 16-bit applications they needed to keep around, it's not clear they'd want them running on a processor in long mode (and an x86-64 OS). Such code would likely be something sufficiently important to keep around, but which for whatever reason they couldn't upgrade. The more complicated you make things, the more likely something is to break, which would generally defeat the purpose of not upgrading the code. In other words, while I can't find a source which explicitly says so (I did look), there's good reason to think the designers felt similarly. It's also worth considering that there's a fair chance this decision was made twice. I don't know enough to be sure, but I wouldn't be surprised if Intel could have added virtual 8086 mode to their version of long mode without breaking compatibility with AMD64, but they too decided it wasn't worth it. Of course, they'd also have to figure it might not be used if a major player (at the time) didn't have it, but they weren't generally shy about adding their own features.

As for your later question, are you actually reading the articles linked to, or just linking to them for fun? As Virtual 8086 mode#64-bit and VMX support notes, there are ways to use virtual 8086 mode while running in long mode, relating to virtualisation. Unless you know what you're doing, however, it doesn't sound like any of them are that well supported, other than the basic option of running the code in a virtual machine under an operating system which does support 16-bit code. (And if you're talking about Windows, I'm pretty sure that's basically the only option besides pure emulation.) Which technically could be done before the VT modes were added anyway. (And the fact that this doesn't seem to be that well supported also sort of suggests many felt it wasn't that worthwhile.)

Nil Einne (talk) 10:03, 29 December 2016 (UTC)

The answer is that probably nobody thought about all the consequences of removing virtual 8086 mode. The same thing happened with the removal of segmentation, which made virtualization impossible on the first 64-bit processors (in long mode). To fix these problems, hardware virtualization support was added later. Ruslik_Zero 20:45, 29 December 2016 (UTC)
They didn't really remove anything. To allow switching between V86 and long mode (in particular, handling V86-mode faults in 64-bit OS code) they would have had to define a new protocol for that, implement it in silicon and/or microcode, and support it forever. It wasn't sensible to add yet more cruft to the architecture when emulation works fine. It's (far more than) fast enough, it's more flexible, and it's more secure because it doesn't need any special kernel support or privileges. -- BenRG (talk) 02:18, 30 December 2016 (UTC)
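To make the point about emulation concrete: an 8086 emulator is, at its core, just a fetch-decode-execute loop over a byte array, which any ordinary user-mode program can run with no kernel help at all. The following is only a toy sketch in C (three opcodes, no segmentation, no devices; real emulators such as DOSBox implement vastly more), not anyone's actual implementation:

#include <stdint.h>
#include <stdio.h>

/* Toy "8086" state: just the AL register and an instruction pointer. */
struct cpu {
    uint8_t  al;
    uint16_t ip;
    int      halted;
};

/* One megabyte of guest memory, as in the real-mode address space. */
static uint8_t mem[1 << 20];

/* Execute a single instruction. Only three opcodes are handled here:
   0x90 (NOP), 0xB0 imm8 (MOV AL, imm8) and 0xF4 (HLT). */
static void step(struct cpu *c)
{
    uint8_t op = mem[c->ip++];
    switch (op) {
    case 0x90:                      /* NOP */
        break;
    case 0xB0:                      /* MOV AL, imm8 */
        c->al = mem[c->ip++];
        break;
    case 0xF4:                      /* HLT */
        c->halted = 1;
        break;
    default:
        fprintf(stderr, "unimplemented opcode 0x%02X\n", op);
        c->halted = 1;
        break;
    }
}

int main(void)
{
    /* A three-instruction "program": NOP; MOV AL, 0x2A; HLT. */
    static const uint8_t prog[] = { 0x90, 0xB0, 0x2A, 0xF4 };
    struct cpu c = { 0 };

    for (unsigned i = 0; i < sizeof prog; i++)
        mem[i] = prog[i];

    while (!c.halted)
        step(&c);

    printf("AL = 0x%02X\n", c.al);  /* prints AL = 0x2A */
    return 0;
}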
About the segmentation issue, there's some discussion here [1]. It's not from the designers and I don't understand it, so I can't comment on whether it's well informed, but it does probably illustrate the point that such matters are fairly complicated. It could be true that the importance to virtualisation wasn't quite realised at the time of design, but it could also be that the designers felt it wasn't worth it or that there were better options. The fact that these extensions only came later, and that there was a partial reversal on the AMD side but not the Intel side, may be due to differing priorities coming into play. BenRG also has a good point that when you're designing a new architecture it's not so much "should I remove this?" as "should I add this legacy feature?". In any case, regardless of the good or bad of removing segmentation, I haven't seen any evidence there was really any similar feeling about virtual 8086 mode. Nil Einne (talk) 04:07, 30 December 2016 (UTC)
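For anyone who wants to check what their own processor reports, here is a small sketch, assuming a GCC or Clang toolchain on x86, that queries CPUID for the long mode (LM) bit and for the hardware virtualisation extensions (Intel VT-x / AMD-V) discussed above; the leaves and bit positions are the standard documented ones:

#include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Standard leaf 1: ECX bit 5 reports Intel VT-x (VMX). */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("VMX (Intel VT-x): %s\n", (ecx & (1u << 5)) ? "yes" : "no");

    /* Extended leaf 0x80000001: EDX bit 29 is the long mode (LM) bit,
       ECX bit 2 reports AMD-V (SVM). */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
        printf("Long mode (x86-64): %s\n", (edx & (1u << 29)) ? "yes" : "no");
        printf("SVM (AMD-V): %s\n", (ecx & (1u << 2)) ? "yes" : "no");
    }
    return 0;
}

Note that these bits only say the silicon supports the feature; firmware can still disable VT-x through a locked model-specific register, which is a separate check.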

swproxy.wmflabs.org gets indexed in Google

Probably not the correct place to report this problem, but someone here will know what to do/who to contact.

http://imgur.com/a/EMw8c

The proxy needs to be excluded via robots.txt or noindex-ed (but using robots.txt is probably better).

(((The Quixotic Potato))) (talk) 09:27, 29 December 2016 (UTC)
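For reference, the blanket exclusion would just be a robots.txt like the following at the root of swproxy.wmflabs.org (assuming the aim is to keep all well-behaved crawlers away from the whole proxy):

 User-agent: *
 Disallow: /

The noindex alternative mentioned above would mean serving either a <meta name="robots" content="noindex"> tag in the pages or an X-Robots-Tag: noindex HTTP header; that route requires the pages to stay crawlable so the crawler can actually see the directive.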

@Guy Macon: You have the knack, you'll know what to do. (((The Quixotic Potato))) (talk) 09:30, 29 December 2016 (UTC)

The Knack (Dilbert episode)... I am on it. --Guy Macon (talk) 13:59, 29 December 2016 (UTC)
See Wikipedia:Help desk#Trying to understand some odd swproxy.wmflabs.org behavior
QP, I am curious; what did you search on to get the result in the image above? I can't find a search term that finds that exact page. I found plenty of others, though, so I can show there is a problem anyway, but I am curious about what you searched on. --Guy Macon (talk) 14:47, 29 December 2016 (UTC)
@Guy Macon: Thank you. I have updated the imgur album. My search query was "optimist guide to wikipedia" (without quotes). I live in Amsterdam, so I am probably using a Google datacenter in NL. Try searching for "site:swproxy.wmflabs.org" (without quotes). To find the page I found you can use the following:
site:swproxy.wmflabs.org "optimist guide to"
Happy New Year! (((The Quixotic Potato))) (talk) 17:24, 29 December 2016 (UTC)
Update: I spoke to an ops engineer for the Labs team; everyone is on holiday right now, but they will poke someone on the team after New Year. (((The Quixotic Potato))) (talk) 18:29, 29 December 2016 (UTC)