Wikipedia:Reference desk/Archives/Computing/2007 November 24

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 24

Restructuring GOTO programs

I found an old BASIC program lying around on my computer, and I was thinking of rewriting it in Perl. But as with so many BASIC programs, it makes extensive use of the GOTO statement and is incredibly difficult to follow. Is there any way I could have the computer restructure it, or just somehow analyze it, for me?  ›mysid () 10:07, 24 November 2007 (UTC)[reply]

Draw some flowcharts? Or if you post it here I'm sure someone would gladly help you. --antilivedT | C | G 10:26, 24 November 2007 (UTC)[reply]
There are algorithms for automatically turning spaghetti code into structured code. Cristina Cifuentes's thesis has a nice overview (Chapter 6). But I suspect you're looking not for algorithms but for a ready-made implementation targeted at BASIC code, and in that case I have no idea. -- BenRG (talk) 13:23, 24 November 2007 (UTC)[reply]
If the program is well-structured -- that is, if the GOTO's all end up implementing familiar if/then/else and looping constructs -- it should be straightforward to translate into a structured language (once you can locate the familiar if/then/else and looping constructs in the wall-to-wall mass of code). But if the GOTO's are arbitrarily spaghettiesque -- as, in large unstructured programs, they all too often are -- it can be effectively impossible. Moreover, the tangled GOTO's in such a program usually end up implementing logic that is (a) ill-conceived (i.e. not what the user probably really wants, after all) and (b) buggy. So you may be better off figuring out what you want the program to do, and rewriting that part of it from scratch, rather than attempting any kind of translation (whether automated or manual). —Steve Summit (talk) 17:10, 24 November 2007 (UTC)[reply]
I would say that if you intend to simply convert the program for the sake of using it 'as is' then your best bet is to leave the GOTOs alone. Perl appears to support gotos too - and this will at least give you the best chance of ending up with a working program at the end. However, if you plan to expand the program - add more features - whatever - then you're going to have to really get to understand the thing - and in that case manually replacing the gotos is a good idea. As others have pointed out, most of the time gotos are being used because BASIC is such a primitive language, and you can generally see that they are being used to stand in for more modern programming constructs - which makes it fairly easy to fix them as you are reading, translating and generally trying to understand the code. The other alternative (which I'm not seriously recommending) is an option I took many years ago when I was presented with a really AWFUL example of spaghetti programming in machine code for an obsolete microcontroller that I had to get working in under a week. In the end, I found it easier to write an emulator for the microcontroller in C++ than it was to translate the code! I hope you don't have to go that far! SteveBaker (talk) 20:04, 24 November 2007 (UTC)[reply]
Thanks for all the answers. Drawing a flowchart (of the beginning of the program) indeed helped, as I was able to grasp the idea behind the very obscure usage of GOTOs. Just a snippet of the code so you get the picture of what I'm dealing with :) :
 ...
 260 IF r - 1 = 0 THEN 530
 ...
 530 IF s - 1 = 0 THEN 670
 ...
 670 IF r = h THEN 740
 ...
 740 IF s <> v THEN 760
 750 IF z = 1 THEN 780
 755 q = 1: GOTO 770
 760 IF w(r, s + 1) <> 0 THEN 780
 770 GOTO 910
 780 GOTO 1000
 ...
 1000 GOTO 210
 ...
mysid () 17:29, 25 November 2007 (UTC)[reply]
My condolences. That's what I was afraid of. "Arbitrarily spaghettiesque", with a vengeance!
(It still baffles me that people write code like this -- but they do, they do. Moreover, they think -- at a deep and unquestioned level -- that it has to be like this. They just don't know any other way. The code they learned from -- in the poorly-written textbooks they read, in the poorly-taught classes they took, in the poorly-written programs of their peers they looked at -- was all like this. Whenever you have a bug, you add a bit more special-cased code to patch around it, and if that involves another GOTO or three, well, there are already 2,643 of the damn things, so a few more won't hurt.
Worse still, some programmers come to believe that not only are computer programs necessarily tangled and complicated and hard to understand, but that they are uniquely smart enough to "understand" and maintain them. They feel sorry for the rest of us who throw up our hands and run away screaming when confronted with their gawdawful monstrosities -- we're obviously pitiful simpletons who'll never be Real Programmers. The notion that bugs might be fixed, or programs cleaned up, or features added, by simplifying and clarifying things, by removing duplicated code, by omitting needless gotos -- that notion is either heretical, or laughable, or unthinkable, or Just Plain Wrong.
Me, sometimes all I can say in this situation is, It doesn't have to be that way. But I'm never sure what to do about the programmers who don't get it. I keep hoping to find a way of explaining it to them, but perhaps their brains are just different.) —Steve Summit (talk) 18:02, 25 November 2007 (UTC)[reply]
It certainly doesn't. When I was doing my degree (back in 1975!), our coursework would get a score of ZERO if there was even one goto anywhere in the code (except of course in languages where it's unavoidable). Since then, I have never used a single GOTO (except of course in languages where it can't be avoided - but even then, GOTO's are only to be used in ways that mimic non-goto code paths such as in C). This hasn't given me even one moment of trouble. It's certainly as easy and efficient to avoid the little buggers - so one should.
As for that BASIC snippet, we don't really have enough code to go on. But in C, I suspect this would probably be something like this:
/*210*/  for ( ;; )
         {
/*260*/    if ( r - 1 != 0 ) { ... }
/*530*/    if ( s - 1 != 0 ) { ... }
/*670*/    if (   r   != h ) { ... }
/*740*/    if ( s == v )
           {
/*750*/      if ( z == 1 ) continue ;   /* 780 -> 1000 -> back to 210 */
/*755*/      q = 1 ;
             break ;                    /* 770 -> 910 */
           }
/*760*/    if ( w(r, s + 1) == 0 ) break ;   /* fall through 770 -> 910 */
         }
But it's hard to know without seeing it all. Even with the GOTO's removed, the 'break' and 'continue' stuff speaks of some algorithmic ugliness - but it's certainly possible to write this code without GOTO's...and it's much clearer without them! SteveBaker (talk) 19:16, 25 November 2007 (UTC)[reply]
There is no need to criticise use of gotos. In early versions of BASIC you could only do one thing on your IF condition, so anything complicated needed a GOTO. Also, even in machine code today, the equivalent of jump is used. Sure, you could use a macro in assembly language to overcome some of the goto mess. However, goto is just a tool for making spaghetti code, not the reason for a mess in itself. Programmers today are quite capable of making a mess of code without using gotos. And the code snippet looks somewhat like a nested if; you need to make sure whether or not control flows from one test block into the next, or if indeed that matters. Graeme Bartlett (talk) 20:47, 25 November 2007 (UTC)[reply]
I think you're slightly missing the point. In languages (like assembly code and early versions of BASIC, SNOBOL & Fortran) where you have no choice but to use GOTO-like jumps, you can still choose to use those jumps in ways that mimic the things missing from those languages, like if/then/else, while, do/while, for, switch/case and subroutines - possibly with judicious use of early return, break and continue. The trouble with GOTO is that it allows people (and positively ENCOURAGES others) to write truly spaghetti-like code - if people were disciplined then GOTOs wouldn't be anything like as big a problem as they sadly tend to be. Restricting yourself to those uses of GOTO will get you code that people will find much easier to understand...doubly so if you comment them appropriately. Fortunately, languages without decent control constructs are finally being obsoleted by languages like C/C++, Java and Python that have decent control structures. What's more, with modern CPUs (which have been aggressively optimised for running C), gotos are really inefficient. They make optimisation much harder for the compiler - and are disastrous at runtime too. SteveBaker (talk) 06:51, 26 November 2007 (UTC)[reply]
I disagree with something that I think SteveBaker said above. He speaks of "gotos" being disastrous at runtime but, in fact, most computer architectures still depend upon "goto" instructions at the level of the machine language. That is, all those fancy if-then-else structures that you write up at the level of C code eventually become goto-laden spaghetti code when you get down to the machine level. Oh, it's often very nicely optimised spaghetti code, but spaghetti code it still is. About the only sop to structured programming that has made any inroads at the level of machine language is the ability to conditionally execute instructions. That is, on some architectures, rather than branch around an instruction (and thus disrupt the pipeline because of the execution of a branch), you can simply quash the execution of a single instruction based on some condition, thus turning it into a temporary no-op (which happily processes through the pipeline with no disruption).
It's because gotos still exist at the machine level that branch prediction gets so much attention from computer architects.
Atlant (talk) 17:02, 27 November 2007 (UTC)[reply]

Time loop

My computer's clock is perpetually stuck between 6:04:21 PM and 7:04:20 PM and keeps resetting to the former when it reaches the latter. The date is also consistently stuck at November 19, 2007. The problem arose when I had to mess with my computer's motherboard jumpers. Thoughts on fixing? —Preceding unsigned comment added by SeizureDog (talkcontribs) 11:09, 24 November 2007 (UTC)[reply]

Hmm... any temporal anomalies in your area recently? (*snicker*, "recently") --ffroth 23:37, 24 November 2007 (UTC)[reply]
Check your shunt settings, obviously. Also, maybe completely clearing your CMOS and then letting your OS set the time? 68.39.174.238 (talk) 00:04, 25 November 2007 (UTC)[reply]
There are two possibilities, I think:
  1. For some very strange reason(s), your computer is (a) two weeks, one day, 16 hours, 4 minutes, and 20 seconds late for this year's North American DST changeover and (b) repeating the "fall back" thing over and over.
  2. Your name is Phil Connors, you're on assignment in Punxsutawney, Pennsylvania, and you're almost 10 months late for Groundhog Day.
Steve Summit (talk) 01:45, 25 November 2007 (UTC)[reply]
The CMOS battery could have failed (or accidentally been dislodged). Try downloading a new flash BIOS from the motherboard manufacturer's website, that may fix it. GaryReggae (talk) 13:00, 26 November 2007 (UTC)[reply]

collaboration

I do some collaborative work for my classes. I would like to find some way to easily compare revisions, in an easy-to-access format. I tried Google Docs, but that was a pain. Would a wiki work? How would I set one up? —Preceding unsigned comment added by --Omnipotence407 (talk) 16:12, 24 November 2007 (UTC)[reply]

A wiki is a great way to do collaborative work and to compare revisions. As for setting one up, it depends on what type of wiki software you want to use. Wikipedia uses MediaWiki, which is probably the most advanced and worked-upon wiki software out there at the moment. You probably need to purchase server space if you don't already have some, and then install MediaWiki on it. --24.147.86.187 (talk) 16:30, 24 November 2007 (UTC)[reply]

Would I be able to use a site like geocities or googlepages, or something like that?--Omnipotence407 (talk) 16:42, 24 November 2007 (UTC)[reply]

No, MediaWiki requires a server with MySQL and PHP support, which you'd either have to set up on a machine you controlled or rent at a nontrivial cost. Take a look at Comparison of wiki farms for some hosting options. But MediaWiki is written primarily to host Wikipedia and its siblings, and is probably overkill for a small project - so take a look at Comparison of wiki software for some simpler alternatives. -- Finlay McWalter | Talk 16:49, 24 November 2007 (UTC)[reply]
It's true that MediaWiki is extremely powerful, but it's also ridiculously easy to set up. (My hat is off to whoever wrote its installation procedure.) As long as you have MySQL and PHP set up, installing MediaWiki is a no-brainer. I think it took me about 5 minutes on my laptop. (Happily, Mac OS X comes with PHP and MySQL out of the box.) —Steve Summit (talk) 17:19, 24 November 2007 (UTC)[reply]
Why would you expect it to be difficult to set up? There's nothing in the installation procedure that's unusual for a PHP app- you just run the install script to build databases, and go on with the config. Granted it doesn't make you configure it, but you want to anyway --ffroth 02:52, 25 November 2007 (UTC)[reply]
And note that "non-trivial" can be less than $10 a month (paid in advance). Server space is relatively cheap as far as computer things go. --24.147.86.187 (talk) 19:08, 24 November 2007 (UTC)[reply]
The expense (if any) is only in getting access to a computer on the internet (which might be a computer on your internal network if you only need access within the school/college). All of the software you need is OpenSourced. Hence you have to install MySQL and make sure that the web server software you are using (Apache say) is configured to support PHP. Then MediaWiki itself is extremely easy to install and works like a dream. Yes, it's fully-featured - but that doesn't make it hard to set up - so you might as well go with the best, even if you don't need all of the bells and whistles. One huge benefit of using MediaWiki instead of one of the others is that the steamroller success of Wikipedia means that there is vastly more expertise out there for MediaWiki than for the other Wiki platforms. If you are used to MediaWiki, the others (such as TWiki) seem amazingly primitive by comparison. SteveBaker (talk) 19:46, 24 November 2007 (UTC)[reply]
I'll say. We've got a WikkiTikkiTavi installation at work -- nobody uses it. One of these days I have to figure out how to pour its database into MediaWiki's. —Steve Summit (talk) 20:10, 24 November 2007 (UTC)[reply]

So for server space, do I need to buy hardware? Or am I able to just get free or cheap web space? For some reason, I'm easily confused on this subject.--Omnipotence407 (talk) 22:22, 26 November 2007 (UTC)[reply]

You need to have access to a computer that has an always-on connection to the Internet and which can run this software 24/7. That's unlikely to be your desktop PC. You could certainly purchase server space from someone like http://www.dreamhost.com/ - who will supply the hardware, software, network connection, free web space, email, and Wiki for about $100 per year. However, if you are at a school, university or business - then your organisation probably already has server computers on the Internet 24/7 that you could use for free. Presumably there is some kind of IT department you could go to with that request. If that's not possible - and you don't want to spend $100 a year - then you could possibly set up a machine of your own (an older PC that's no longer needed, perhaps - you wouldn't need anything fast or fancy) on your local network and install the software on that. However, it sounds like your degree of computer expertise may not be that high (forgive me if this is not true!) - so getting some help from your organisation to get it set up is likely to be preferable. SteveBaker (talk) 17:35, 27 November 2007 (UTC)[reply]

I need a bigger screen

Hi, I'm still using an old 15" CRT screen, so I pondered these:

  • A 22" 1680x1050 widescreen TFT
  • A 24" 1920x1200 widescreen TFT
  • Dual 19" 1280x1024 TFT screens

I guess a DVI connection would be better. Not being a gamer, I'm using an Athlon 2000 system.

The problems are:

  • My board only supports AGP 4x
  • I'd like to use Linux with open source graphics card drivers

Which solution would work, and which would you recommend? Which graphics card should I buy (or will my old ATI Xpert 2000 be sufficient)? TIA, --Hochwohlgeboren (talk) 20:16, 24 November 2007 (UTC)[reply]

I would go for the 1920×1200 TFT, but it really depends on what you're going to do with it. Having dual screens is not as good as you would think; I would much rather have one huge screen than many small screens. Are you going to watch HD (1080p) things on there? Compiz? If you're going to do either of these then you will need a reasonably good card with quite a bit of VRAM; something like the GeForce 7600 will do. --antilivedT | C | G 04:19, 25 November 2007 (UTC)[reply]
I'm actually happier with two medium-resolution screens than with one high-res one. You see more pixels that way, and the side-by-side arrangement gives you a super-wide desktop that you can't get with a single screen, which happens to suit the kinds of things I do. Admittedly, for playing games and watching videos I have to restrict myself to just one of the two screens - but that's just not my main activity, and it's rather nice to have the movie playing on one screen and to have Wikipedia up on the other without overlapping windows and such. I agree that a GeForce 7000 or 8000 series graphics card is the best here - but I doubt that either of those is going to work with AGP 4x - so you might be better off looking for a GeForce 6800 on eBay (make sure it's one of the ones that has two video outputs). However, your requirement to have OpenSourced drivers means you are stuck with ATI cards - and IMHO, that's a REALLY BAD choice. Whilst OSS, their Linux drivers are slow, buggy and poorly supported. nVidia's drivers are the same versions they ship for Windows users; they are fast and very reliable. nVidia's support for Linux is exemplary, and I think Linux users should reward them for treating us as a first-class platform rather than beating them with a stick for not opening their source code (which it turns out they cannot do for reasons of how they licensed some of the technology). All AGP cards will work with AGP 4x - just be sure you don't buy a PCI Express card by mistake. SteveBaker (talk) 07:40, 25 November 2007 (UTC)[reply]

Thanks for your help! I was wondering if AGP 4x was fast enough to support a big screen. My calculation for dual 19" is:

32 bit x 1280x1024 pixels x 60 Hz x 2 screens = 5033164800 bit/s = 600 MiB/s

Which is still below the roughly 1 GiB/s that AGP 4x can do. I don't know, though, if my calculation is too naive, or if the system is bottlenecked by other components. I'm not planning to run HD things or Compiz; my CPU would probably be too slow anyway.

I felt my need for a bigger screen when I was running Eclipse, which looks like a stamp collection at 800x600. Maybe I should buy a 22" widescreen and add a 19" regular one when I need even more space?

After what SteveBaker said, I think I'll give the NVidia-drivers a try. Will all GeForce 6800 (or newer) cards support the widescreen resolutions? Do I need a certain amount of RAM on the card? TIA, --Hochwohlgeboren (talk) 15:58, 25 November 2007 (UTC)[reply]

There is plenty of RAM on even the lowliest of nVidia 6000 series cards - but the more screen memory you consume, the less there is for texture maps and such. However, if you aren't interested in interactive 3D applications like games then you probably don't care. Of course I encourage you to check rather than taking my word for it - it's been a while since I configured a system with a 6800 card and eventually all of those generations of hardware start to blur together! Also, graphics cards with nVidia chipsets are made by a variety of manufacturers - and you can get them with different amounts of RAM on them. Since they are two to three year old technology - you should be able to find good ones for low $$$ on eBay. SteveBaker (talk) 18:53, 25 November 2007 (UTC)[reply]
Thank you very much! I'm going to look out for a bargain. --Hochwohlgeboren (talk) 16:51, 26 November 2007 (UTC)[reply]

Using an AOL proxy to hide bittorrent activity? Possible?

Is there any way to make BitTorrent work from behind an AOL proxy? I seem to recall doing it way back when, and AOL hasn't been my ISP for many, many years... but I keep the software around mainly because it allows me to edit Wikipedia anonymously. It seems like the same thing should work for BitTorrent, but it hasn't lately, yet I distinctly remember doing it in the distant past. Any hints?--172.164.141.138 22:09, 24 November 2007 (UTC)[reply]

No, AOL proxies run on the HTTP protocol, while BitTorrent runs on the BitTorrent protocol. You can use them to access trackers, but any traffic to and from your peers cannot go through the proxy, just as someone who speaks only Chinese cannot relay messages in English. Also, it's not entirely anonymous behind AOL proxies: they've enabled the "X-Forwarded-For" header, so MediaWiki still knows who the editor is. --antilivedT | C | G 22:31, 24 November 2007 (UTC)[reply]
True, but the "X-Forwarded-For" headers only work if you voluntarily upgrade your AOL software to the newest version, which I haven't done in years, so I'm quite anonymous. Thanks to the utter crappiness that is AOL, they had no mechanism in place to forcibly upgrade their customers to the newest version of the software.--172.130.223.53 22:35, 24 November 2007 (UTC)[reply]
...he says, editing from a non-proxy AOL IP. —Ilmari Karonen (talk) 23:18, 24 November 2007 (UTC)[reply]
I wouldn't go that far; true, I'm not editing from behind a shared proxy, but it is still nonetheless an AOL proxy. The distinction between the shared and non-shared proxies is an artificial one created by Wikipedia, due to the disruptive nature of the shared proxies (which are currently soft range-blocked, preventing the few remaining holdouts with antiquated versions of AOL from editing from them except when logged in). This doesn't change the fact that there is no way to associate this IP with any real person, place or thing. And back on the original topic, I finally got BitTorrent to respond from behind AOL's dynamic proxy server.. bet you want to know how (: --172.134.213.44 00:06, 25 November 2007 (UTC)[reply]
Since I registered a free AOL account that I use when I log into AOL via the main AOL infrastructure, completely separate from my old "pay" account with AOL, and I only ever log into AOL as "guest", there's really zero connection between my real-life identity and my IP address. They wouldn't have the slightest idea how to connect my IP with my real name, account, etc. I don't even use AOL as an ISP anymore; I was automatically switched to RoadRunner back when AOL/Time Warner went the way of the dodo. Just as anonymous as an open proxy or a Tor-based proxy, with the added benefit of still being able to edit Wikipedia. -- 172.135.180.198 00:44, 25 November 2007 (UTC)[reply]