Wikipedia:Reference desk/Archives/Computing/2016 February 17



February 17

soft line break

It's like a soft hyphen but invisible; you might put it after a slash. OpenOffice has it. WordPerfect had it twenty years ago. I can't find it in MS Word. —Tamfang (talk) 03:29, 17 February 2016 (UTC)

I believe I've heard that term, meaning "here's a good place to put a line break, if you need one". Whether you need one would depend on the length of the line relative to the column or page margins. This went along with earlier markup language software that didn't render the text in the final format until later. However, modern software is moving more towards WYSIWYG, so it doesn't apply as much anymore. (Editing here is a notable example of where you don't see the final version until you save or do a preview. LaTeX is another example.) StuRat (talk) 03:33, 17 February 2016 (UTC)
It's also a notable example of where markup like soft line breaks could be useful, since the "final" version you see may not be the same one that other people (with a different size window) see. This is why, as well as the ordinary space character to produce an inter-word space that might become a line break, we also have &nbsp; to produce an inter-word space that won't become a line break. Soft line breaks are less often wanted, but it's the same sort of idea of an appropriate level of control. --69.159.9.222 (talk) 05:00, 17 February 2016 (UTC)
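For web output, the same kind of control exists at the character level. A short reference (general Unicode/HTML facts, not specific to any word processor mentioned above):

    U+00A0  &nbsp;    no-break space: an inter-word space that will not become a line break
    U+00AD  &shy;     soft hyphen: an invisible break opportunity that displays a hyphen only if the break is taken
    U+200B  &#8203;   zero-width space: an invisible break opportunity, e.g. after a slash
    <wbr>             HTML element marking an optional break point, with no character of its own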
I have an old (2003) version of Word, but it supports what it calls a "no-width optional break", which can be inserted from the menu Insert > Symbol ... Special Characters ... no-width optional break (near bottom of list). I would assume later versions have the same, although the menu path might be different. See "no-width optional break". -- Tom N talk/contrib 03:52, 17 February 2016 (UTC)
In 2010 (Mac), it's not on the list! Grr. —Tamfang (talk) 05:45, 17 February 2016 (UTC)

In the end, I pasted it from a Unicode character table ("zero width non-joiner", U+200C). —Tamfang (talk) 05:31, 18 February 2016 (UTC)

Buying a refurbished computer w/o Windows but with a license key

If I buy a refurbished computer that comes without Windows installed but has a genuine Windows license key, and I have the CD for Windows 7 home premium (the corner of the box says "upgrade designed for Windows Vista"), can I install Windows from the CD and use the license key with the computer? Bubba73 You talkin' to me? 07:18, 17 February 2016 (UTC)

Not necessarily. The key on the sticker attached to the computer will only work with the exact version of Windows originally installed, which might well be a custom version produced for that manufacturer, even if you don't see any obvious differences. You would be able to install generic Windows, but it would fail to activate. Try looking on eBay for a recovery disc. Dell ones are easy to find. 94.12.66.248 (talk) 08:33, 17 February 2016 (UTC)
Two other potential problems are that your version of Windows 7 appears to be an upgrade, not the full product, and that I'd assume it is already installed on another computer: running a single copy of the OS on two computers is not permitted in the licensing rules.--Phil Holmes (talk) 09:52, 17 February 2016 (UTC)
Without commenting on the legalities (both because this is the RD and because it depends precisely on how the refurbishment is carried out), for Windows 7 if it's OEM it's fairly likely to be using SLIC 2.1. This means what you want is installation media for the full version of Windows 7 matching what the computer was licenced for. By version, I mean Ultimate, Home Premium or whatever. A different version may work, but it's not definite and definitely not likely to comply with any licence. If you have the right version, all you need are the certificate and OEM key, which for any major OEM are trivial to find. Whether you add these manually or use any OEM installation media isn't going to make a difference to whether the OS works and is properly activated. Nil Einne (talk) 12:49, 17 February 2016 (UTC)
Also, just having the key numbers doesn't mean you legally own the key. Somebody else (the previous owner, for example) may still be using that key on another PC, so if you try to use it, you will be blocked. StuRat (talk) 20:29, 17 February 2016 (UTC)

Thank you. Bubba73 You talkin' to me? 00:20, 18 February 2016 (UTC)

Resolved

GIF

The GIF article states: "The 89a specification also supports incorporating text labels as text (not embedding them in the graphical data), but as there is little control over display fonts, this feature is not widely used."

Can you please show me an example of a GIF image with a text label that is not part of the graphical data? I've never seen one before. Thanks. 109.207.58.2 (talk) 10:33, 17 February 2016 (UTC)

Likely to be difficult. This page says "The GIF89 specification allows you to specify text captions to be overlayed on the following image. This feature never took off; browsers and image-processing applications such as Photoshop ignore it" so unless we can find an application that actually shows the text, finding an image won't help.--Phil Holmes (talk) 11:29, 17 February 2016 (UTC)
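For the curious, the feature in question is the GIF89a Plain Text Extension. A sketch of its layout, as I recall it from the 89a specification (byte values in hex; treat as a guide rather than gospel):

    21    Extension Introducer
    01    Plain Text Label
    0C    Block Size (12 bytes of header follow)
    ..    Text Grid Left Position, Top Position (2 bytes each)
    ..    Text Grid Width, Height (2 bytes each)
    ..    Character Cell Width, Character Cell Height (1 byte each)
    ..    Text Foreground Color Index, Background Color Index (1 byte each)
    ..    Plain Text Data (one or more sub-blocks of ASCII text)
    00    Block Terminator

So the label travels in the file as ordinary ASCII, separate from the compressed image data, and a decoder that ignores the extension (i.e. nearly all of them) simply skips these blocks.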

Apple encryption controversy - do they have a back door or not?

I am mystified by this news story. If Apple has a way to decrypt the phone messages, then that's what I in plain(ish) English would call a backdoor already present, and their claim not to have one would be a lie. If Apple doesn't have a way to decrypt the messages, then what can they do? Can someone explain? Wnt (talk) 12:47, 17 February 2016 (UTC)

I'm tempted to say that you assume facts not in evidence, namely that the court knows what it's doing. Assuming Apple's security design is sound, they could just as well order Mark Hamill to telepathically extract the keyphrase from the owner, or, at that, to build a perpetuum mobile. But I suspect this is mostly theatre trying to help pass laws against strong encryption - which are pointless against hardened criminals (who will just use their own software), but might help catching small fry and thus inflating "success" rates for law enforcement, at the cost of privacy and security of the population at large. --Stephan Schulz (talk) 13:00, 17 February 2016 (UTC)
In this public letter, CEO Tim Cook says "Specifically, the FBI wants us to make a new version of the iPhone operating system, circumventing several important security features, and install it on an iPhone recovered during the investigation. In the wrong hands, this software — which does not exist today — would have the potential to unlock any iPhone in someone’s physical possession."
Many key questions were never addressed: could anyone create that software, which does not exist today? By extension, ... could Apple create that software, if ordered by a court? Could Mark Hamill create that software, if ordered by a court? May a Federal court in the United States legally order somebody to do an impossible task? Who decides whether a task is impossible?
I am not certain these hypothetical questions could be answered without delving into some very fascinating semantics about the definitions of "yes" and "no" in law, philosophy, and engineering. Nimur (talk) 15:53, 17 February 2016 (UTC)
Presumably, Apple can create the software because they have the signing key for OS updates, while Mark Hamill does not. -- BenRG (talk) 04:56, 18 February 2016 (UTC)
Fascinating choice of words! If you are very certain about such facts as you presume, you could volunteer your services as an expert witness to the United States attorneys, whose telephone contact information is published in the court order linked below... But I wonder if that context would put your mind into a state where you might cast a little bit of doubt on such assertions. You wouldn't feel any doubt in the certainty of your presumptions, even if you were sworn under oath and threatened with perjury? There isn't even the tiniest reasonable doubt in your mind because you don't know all the facts? Perhaps some things are impossible? Perhaps the veracity of important facts aren't decided solely on the basis of presumption? Nimur (talk) 18:21, 18 February 2016 (UTC)
It was an educated guess, based on my knowledge of computer security, and was correct. Compare that to your two posts in this subthread. But thank you for posting a link to the original court order. -- BenRG (talk) 22:14, 18 February 2016 (UTC)
According to this BBC article, Apple are being asked to disable the feature which deletes the data stored on the phone if more than ten incorrect passwords are entered. This is to let the FBI brute-force the password (which is only four digits in this case). Tevildo (talk) 13:03, 17 February 2016 (UTC)
(EC) That article seems to be a really poor explanation of the actual court order. This [1] has a better explanation, including highlighting the actual court order at the end. Basically the FBI wants to be able to brute force the password quickly and without having to worry about the device erasing itself. As to whether Apple can actually help or not I can't say. Even if the device has OTA updates enabled, I would be surprised if Apple can update the phone unless it's unlocked, but I don't know that much about Apple's security model. (If it can be updated, it will definitely help to have Apple's help. Not just because they have the source code but also because it will be difficult to update without Apple's help given the update security.) And even if it can be updated, I'm a bit surprised they didn't implement any timing limitations and wiping into the security chip itself (i.e. the chip will slow things down and count tries to wipe the key) such that it's not dependent on the firmware. (I think the 10 tries thing is optional so it obviously can be disabled, but that doesn't mean it can be disabled when it's enabled and the device hasn't been unlocked.) But again I don't know that much about Apple's security model. Nil Einne (talk) 13:08, 17 February 2016 (UTC)
The iOS Security Guide whitepaper, published by Apple, presents a detailed look at the security model for iOS devices. Nimur (talk) 15:38, 17 February 2016 (UTC)
Disclaimer: I am a potato. Clue level 0. I am probably wrong. Ignore me please.
Comment: I think it is possible to have cryptographic weaknesses built-in. This way you do not have to create a backdoor. The Quixotic Potato (talk) 15:09, 17 February 2016 (UTC)

Hmmm, it's looking to me like the "strong encryption" isn't really. If people are using a 4- or 6-digit passcode, obviously brute force is a possibility. The defense against that, according to the iOS 9 manual linked above, is "The passcode is entangled with the device’s UID, so brute-force attempts must be performed on the device under attack. A large iteration count is used to make each attempt slower. The iteration count is calibrated so that one attempt takes approximately 80 milliseconds. This means it would take more than 5½ years to try all combinations of a six-character alphanumeric passcode with lowercase letters and numbers". And then there are also escalating penalties for wrong answers. But the thing is, an iteration count, having the software loop through NOPs or something, is a meaningless protection when the software can be updated to get rid of it. Apple is protesting so loudly because indeed they are being forced to admit it's a meaningless protection and the "strong crypto" can be trivially cracked, if some value for iteration_count is altered in a signed upgrade. And presumably the federal government will keep hold of that signed upgrade for future use...

The proper approach of course is to use something like bcrypt that genuinely requires the time it takes to process a brute force hack, rather than simply telling the processor to spin its hamster wheel a while.

Please correct me if I'm wrong... Wnt (talk) 16:15, 17 February 2016 (UTC)
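The quoted figures are easy to sanity-check. A four-digit passcode allows 10^4 = 10,000 combinations; at 80 ms per attempt, that is 800 seconds, or about 13 minutes, to try them all, which is why the erase-after-ten-failures penalty matters so much for this phone. A six-character passcode over lowercase letters and digits allows 36^6 ≈ 2.18 billion combinations; at 80 ms each, that is about 174 million seconds, or roughly 5.5 years, matching the "more than 5½ years" in the manual.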

We have no real evidence the software can be updated in that way. As I pointed out above, the best way to do this would be for it all to be done in the security chip and for there to be no way to change values in the security chip until it enters itself into the unlocked state. And for it to only be in the unlocked state when it has received the proper passphrase and generated the correct decryption key. It may be that Apple doesn't do it this way, I'm not sure. (I only skimmed through the document; it sort of suggested that this functionality is supposed to be part of the security chip or Secure Enclave or whatever, which was what I thought before this controversy came about, but it may be I've misunderstood.) By the same token, it shouldn't really be possible to update the OS when the device remains locked. (And again, I see something in the manual which suggests this is the case.)

Also I don't think you really understand how Apple's iOS updates work. With modern devices and iOS versions, the signatures for updates are device specific. [2] [3] You can't just take a signed update for your friend's device, even if it's the exact same model etc. This is fairly well known because it means you generally can't update to an older version of iOS even if it's newer than the version you have, as Apple only signs the updates for an old version for a few weeks after they release a new version. At least with older versions, you could record a signature for your device while Apple was still signing that version and then use it to upgrade/downgrade to that version, but I believe they've stopped this on the newest versions. This limitation is fairly significant for those who want to jailbreak, since the bugs jailbreaks rely on are often fixed fairly fast and so you only have a short timeframe where you can update to a jailbreakable version while the jailbreak is public. (You obviously can do so before, but won't have a jailbroken device until a public jailbreak is released.)

Of course this part of the signing is automatic, i.e. it doesn't involve several people entering parts of a key that only they know and/or is stored in some super secure safe in Apple HQ. So it may be that some governments have already broken it. However anyone who hasn't broken this part of the update mechanism will need either Apple's cooperation or to find some other way to force an update regardless of what Apple has signed before. (I presume the basic signing of the updates does involve people entering parts of a key only they know.) In any case, the rapid release and adoption of new devices means that software is not going to be that useful after 2 years or so unless it tells whoever is looking at it how to hack the same thing into future versions (but in that case signatures become moot). In other words, I think you're overestimating how useful a signed update probably is.

BTW, I don't really understand your bcrypt point. Apple has designed their encryption such that it takes 80 milliseconds on their devices. I don't see any indication they are using NOPs or anything of that sort to achieve this 80 millisecond figure; that seems fairly unlikely. Instead they are performing multiple iterations of the encryption function to slow things down. Obviously different devices will be able to perform the decryption at different speeds, but they've attempted to design the system so that you can't perform the decryption on a different system. (Or rather you can, but you'd be bruteforcing the whole encryption key rather than just the passphrase used to recover the encryption key, which is generated based on a unique ID in combination with the passphrase.) And in any case that's always the way such things work. At best you can design your function so it's more difficult to speed up with certain classes of devices. (Which is partly what bcrypt tries to do.)

They could obviously increase the iteration count or otherwise modify the encryption to reduce speed, but I presume they chose the current speed based on what they felt would give an acceptable speed vs security trade-off. (I'm not sure if the iteration count is stored on the device or built into the chip. But even if it's not built in, modifying it after your device is encrypted, even if this is possible, is simply going to mean you're not actually likely to succeed in getting the right key since you're no longer using the same encryption function.)

For additional protection, they attempt to slow down repeated attempts. This will be implemented via NOPs or some other timing function, but there's no way you can make multiple attempts slower other than by slowing down the whole thing. And as already said, they surely chose the value they felt gave the best security-speed trade-off.

To put it a different way, I actually agree Apple probably doesn't want people to know what they can do, but mostly for different reasons from you.

P.S. Let's not forget even with bcrypt many or perhaps even most implementations are going to use multiple iterations, the number of which will depend again on what sort of speed-security trade-off whoever is implementing it feels is best.

P.P.S. Obviously when the device is in the unlocked state, anyone with sufficient control & access could silently turn off any security features that can be turned off. They could likely steal all important parts of the encryption function too. But I think it's fairly well accepted that if you've given someone sufficient control over an unlocked device, you might as well assume they can do whatever they want no matter how hard the vendor tries to stop this. The big question is what is "sufficient control".

P.P.P.S. Ultimately whatever protections you build in, even if it's all internal to the security chip, it may still be possible to decapsulate the chip and find some way to modify any values that way. So you have to assume anyone with sufficient resources and technical skill will be able to find a way around stuff like timing limitations and automatic deletion after sufficient failures.

I probably should also mention that while you can try to prevent updates, obviously whatever part is running is not encrypted. If it is partly in a different chip, you must be able to change this. Worst case you can change the chip. However this gets back to the earlier point. If everything important is in the security chip, whatever you change in the other unencrypted chip isn't going to achieve anything. You still need to get the security chip to quickly test your multiple attempts and not enforce any internally set limits. You could try to speed up the encryption chip, at the risk of destroying something if it can't handle this.

Nil Einne (talk) 17:37, 17 February 2016 (UTC)

@Nil Einne: Your version definitely seems more plausible than mine... the thing is, your version leads to the conclusion that Apple literally cannot crack the data on the phone, at least not unless they're in the business of decapsulating and cracking chips for the FBI. So even though you know several orders of magnitude more about this stuff than I do, if we see the data being handed over to the FBI in the next few weeks or months we'll know I was right anyway. Or is that also wrong? Wnt (talk) 18:06, 17 February 2016 (UTC)
The San Jose Mercury News has published the text of the court order from the United States District Court for the Central District of California; it can be found on several of their articles today, including this editorial Apple is right; opening cell phone encryption would be disastrous, or via this direct link to a 3rd-party provider, Order compelling Apple, Inc., to assist agents in search. Nimur (talk) 00:57, 18 February 2016 (UTC)
The response from Apple is strange. The police going to a judge and presenting evidence to justify a search is how the system is supposed to work. It has nothing to do with warrantless mass surveillance. Big corporations including Apple comply with courts' demands for user data all the time (emails, call logs, etc.). The letter argues that there's a slippery slope from enabling this particular search to enabling mass surveillance, but that's exactly as true here as anywhere else: a landlord unlocking someone's apartment so the FBI can search it with a valid warrant, minus the warrant, times N, equals a police state and the death of freedom. If Apple was technically incapable of complying with the order they could have just told the court that, and surely would have, so it seems very likely that they can do it. Maybe they think it's better PR to fight the system than to admit that they have root on iOS devices, and since they're Apple maybe it is. If Microsoft did this, reporters would probably concentrate on the it's-a-lawful-court-order-not-mass-surveillance-big-corporations-now-have-veto-power-over-the-government angle. -- BenRG (talk) 04:56, 18 February 2016 (UTC)
The landlord example isn't quite right - it's more like a homeowner destroying the key to their house, and the FBI asking for a master key to all houses to get in. Yes, it's the only way to get into that house, but it also gives the FBI the potential to go into any other house at a later time. An alternative court order could force Apple themselves to recover the data (with suitable confidentiality measures) and turn it over, rather than providing the FBI with the brute force attack, which would be more akin to ordering a locksmith to open the house (the house still gets opened, but the FBI don't hold a master key). MChesterMC (talk) 09:35, 18 February 2016 (UTC)
Even if Apple had to supply a signed OS build to the FBI, they could make it expire in a week so that it couldn't be used for future hacking, if what Nil Einne wrote above is correct. They could also blacklist it in a future update of everyone's phones.
Also, if it's really gotten to the point that our best hope for protecting ourselves from government surveillance is trying to keep them from getting ahold of lock picks, then I think we're doomed. I would rather focus on having a judiciary that doesn't rubber-stamp warrants and a legislature that doesn't pass patriotically named bills and an executive that doesn't classify every damn thing. That would also safeguard rights that can't be safeguarded by strong encryption, which is most of them. -- BenRG (talk) 17:34, 18 February 2016 (UTC)
Sorry, the court order actually specifically states that the software "will be coded by Apple with a unique identifier of the phone so that [it] would only load and execute on the SUBJECT DEVICE", and of course that is possible regardless of Apple's code signing system. So this is exactly analogous to giving the FBI a key to one specific apartment. The statement in Apple's open letter that "there is no way to guarantee" "that its use would be limited to this case" is false. I think the FBI and judge have done everything right here. -- BenRG (talk) 22:14, 18 February 2016 (UTC)
Yeah I read that before (it was mentioned in the source I linked above) but forgot about it when commenting above. I'm not sure if this was mentioned elsewhere in this thread but I was reminded of it when reading the Register article: the court order doesn't even require that Apple hand over the modified firmware. It explicitly allows Apple to do it at their own facilities. So Apple should be able to stop the government from getting hold of this code, even if it was useful to them for other phones. "or alternatively, at an Apple facility; if the latter, Apple shall provide the government with remote access to the SUBJECT DEVICE through a computer allowing the government to conduct passcode recovery analysis". Nil Einne (talk) 17:30, 19 February 2016 (UTC)
I found this article which references this blog post. I am tempted to postulate some Laws of Commercial Crypto: 1) Companies always say your data is secure; 2) companies always leave themselves a way in; 3) governments always make them use it; 4) they always end up out in public arguing about whether they get paid for it, but both sides talk as if it were a matter of principle. Wnt (talk) 12:49, 18 February 2016 (UTC)
Yes, everyone please read that article ("Apple Unlocked iPhones for the Feds 70 Times Before", by Shane Harris). -- BenRG (talk) 17:34, 18 February 2016 (UTC)
According to a statement Apple published on its website, this assertion is not true. Apple has not unlocked iPhones for law enforcement, including Federal law enforcement, at any time in the past. This statement hinges entirely on the definition of "unlocking" the device.
Has Apple unlocked iPhones for law enforcement in the past? - No. That is the first word in Apple's answer. It proceeds to explain what has happened in the past: "We regularly receive law enforcement requests for information about our customers and their Apple devices. In fact, we have a dedicated team that responds to these requests 24/7. We also provide guidelines on our website for law enforcement agencies so they know exactly what we are able to access and what legal authority we need to see before we can help them. For devices running the iPhone operating systems prior to iOS 8 and under a lawful court order, we have extracted data from an iPhone."
This distinction matters, because the issue in this case is to create and use a new technical capability that does not presently exist to circumvent the iPhone's protections. The details in this case are not identical to the details in previous cases.
Author Shane Harris, who is neither affiliated with Apple nor an official spokesperson for Apple, wrote "... according to prosecutors in that case, Apple has unlocked phones for authorities at least 70 times since 2008. (Apple doesn’t dispute this figure.)"
Meanwhile, in an official statement from Apple, this figure is exactly disputed. Independently of your evaluation of the truth of this figure, I would say that this casts doubt on Mr. Harris' ability to accurately report fact, or at least on his ability to use language precisely. There is a very significant difference between previous lawful orders to render technical assistance and this specific order to render technical assistance. The significant difference lies in the exact technical nature of the request the court has made on behalf of the FBI.
Apple's statement further states: "As the government has confirmed, we’ve handed over all the data we have, including a backup of the iPhone in question. But now they have asked us for information we simply do not have."
Nimur (talk) 16:24, 22 February 2016 (UTC)


Here's an excellent article in The Register (copied from another ref desk thread). -- BenRG (talk) 22:14, 18 February 2016 (UTC)
And one from Wired: Apple’s FBI Battle Is Complicated. Here’s What’s Really Going On. --Guy Macon (talk) 07:26, 19 February 2016 (UTC)[reply]
And from Reason: The Encryption Fight We Knew Was Coming Is Here—and Apple Appears Ready --Guy Macon (talk) 07:39, 19 February 2016 (UTC)[reply]
An interesting thing, albeit more on the legal-political than the technical side, are these related stories of a different case [4] [5] [6]. I found the first stories (or similar stories) when researching this story; it's only a few days before the current one. [7] suggests the meth case may have partly been the catalyst for Apple's new stance and it's possible it will be resolved before this one. (The other catalysts may be the introduction of full storage encryption, and Apple's reluctance to let people know of any weaknesses, along with their desire to distance themselves from the government after the Snowden-inspired controversies, even if that includes cases where that help is requested by warrants.) Nil Einne (talk) 17:53, 19 February 2016 (UTC)
Thanks (or I guess thanks to the thread below since I read it there first). I did wonder whether it was an older less secure iPhone and that article confirms it. Being a 5c is perhaps a key point here since it doesn't have the Secure Enclave which enforces the rate limiting internally. It does suggest it may be possible for Apple to disable the limiting even in the Secure Enclave. I still think barring bugs (and to be fair, as jailbreaking and experience in so many other places have shown, they are often there), Apple would want to prevent this in locked devices but I'm far from an expert. As a case in point, it sounds like being able to update the bootloader even when the device is locked is indeed a feature not a bug, whereas as I said above it seems to me it should be disabled. Nil Einne (talk) 17:30, 19 February 2016 (UTC)

Javascript graphics tools

What are good Javascript tools for creating plots from text data for inclusion on web pages? Ideally, I'm looking for something that can offer options such as line graphs, bar graphs, scatter plots, and error bars. A high level of control over presentation and formatting would be preferred. Dragons flight (talk) 13:05, 17 February 2016 (UTC)

@Dragons flight: Lazy potatoes like myself use this. The Quixotic Potato (talk) 14:47, 17 February 2016 (UTC)
There's also d3.js ---- LongHairedFop (talk) 19:48, 17 February 2016 (UTC)
I have used Chart.js, but I needed to modify it extensively to meet my specific requirements. Nimur (talk) 22:59, 17 February 2016 (UTC)
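To give a flavour of what using one of these libraries looks like, here is a minimal sketch in the style of the Chart.js 2.x API (the element id and data are invented; check the library's current documentation before relying on the details):

    var ctx = document.getElementById('myChart'); // a <canvas> element on the page
    new Chart(ctx, {
        type: 'line',                             // 'bar' and other chart types work the same way
        data: {
            labels: ['Jan', 'Feb', 'Mar'],
            datasets: [{ label: 'Sample series', data: [3, 1, 4] }]
        },
        options: {
            scales: { yAxes: [{ ticks: { beginAtZero: true } }] }
        }
    });

d3.js sits at the other end of the control spectrum: it gives low-level primitives for binding data to SVG elements, so things like error bars and custom layouts are possible, but you build the chart yourself.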
Thanks all for the pointers. Dragons flight (talk) 18:33, 19 February 2016 (UTC)

SQL and RD (MySQL/SQLite) difference in simple words

Can someone please explain, in the most simple words possible:

1. What is actually the difference between SQL and relational databases such as MySQL or SQLite?

2. Why do we need an RD if we have SQL and phpMyAdmin on our server... ?

Thx Ben-Yeudith (talk) 17:30, 17 February 2016 (UTC)

First we'll need to clarify what you are asking. Technically, SQL stands for "Structured Query Language", a programming language which is used to access and manipulate data stored in a relational database. It can be used with many types of RDBMS, such as IBM DB2, Oracle, MySQL, and others. Perhaps, though, you are using "SQL" as a term for something different? I have heard it used as a name for Microsoft SQL Server. phpMyAdmin is a tool specifically for administration of MySQL; there would be no reason to have phpMyAdmin without also having MySQL installed. --LarryMac | Talk 19:24, 17 February 2016 (UTC)
I get the impression that SQL is a programming language used to write DBs from scratch (and of course, manipulate existing ones written with it). Surely I didn't mean an RDBMS-manipulation GUI like phpMyAdmin... I think I do need to sharpen my understanding of the difference between SQL and RDBMS
SQL is a language for querying (and modifying) relational databases, not for writing a Database Management System. When you "buy database software", you typically get the code and a license for a DBMS (which nowadays most often is a Relational DBMS, or RDBMS). The DBMS is most usually written in a system programming language, like C or C++, or a combination of both. MySQL or MariaDB or Oracle are DBMSs. You then use SQL to put the actual database (i.e. the structured data) into the system. --Stephan Schulz (talk) 20:00, 17 February 2016 (UTC)
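To make the distinction concrete: the statements below are SQL, the language; MySQL and SQLite are two different programs that can each execute them (the table is made up, as a sketch):

    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO users (name) VALUES ('Ada');
    SELECT name FROM users WHERE id = 1;

The same three statements work essentially unchanged whether the program storing the data is MySQL, SQLite, or another RDBMS; what differs between those systems is how they store the data, handle concurrent users, and so on.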

Database transactions

At work, I have to develop a .NET C# program that reads in files from a directory, extracts data from them and inserts it to a database.

Each file contains its own independent set of data and the files don't affect each other. Each file's data must be atomic; in other words, if the processing of a file somehow fails, I can't have the database ending up with only part of the file's data. Therefore I have to use transactions.

Now, the program is called from an external framework that automatically opens a database connection for it at the beginning and closes it at the end. I can do database commands on that connection as I please. I figured I have to begin a new transaction every time I read in a new file.

Are the transactions independent from each other? In other words, if I commit or rollback a transaction, it won't affect anything done to the database before the last commit or rollback?

And is there any way of doing immediate operations on the same connection at the same time, or is everything I do in the database on this connection now part of a transaction?

When I read data from the database, the read operation should see the state of the database since I last committed a transaction, right? JIP | Talk 18:44, 17 February 2016 (UTC)

You might want to take a look at our ACID article, but this is a topic that can easily get very involved. I'll try to give you some general answers below.
  • When transactions are run sequentially (begin...commit, begin...rollback, begin...commit) they are independent of one another, and each will see the results of any previous committed transactions. If a transaction is rolled-back, the results will be as if no changes were performed during that transaction, but previously committed transactions are not affected. (This is the "Atomicity" of ACID.)
  • If you perform any other operations on the same connection, those operations will have full access to pending changes (even though they are not yet committed), and any new changes will be part of that same transaction. You can open a second connection to perform operations independent of the transaction created in the first connection. For example, you might use a second connection to log activity so that it will persist even if the transaction in the first connection is rolled-back.
  • However, when working with multiple connections, locks may come into play (part of the "Isolation" aspect of ACID), and any attempt by the second connection to access uncommitted data from the first transaction may be blocked until the first transaction is either committed or rolled-back. If the blocked process is the same as the one that opened the original transaction, you will create a deadlock. (There are many advanced aspects of locking that are too involved to go into here.)
  • When you read back from the database, you will see all committed data, plus any pending changes created by the same connection. As mentioned before, any attempts to access uncommitted data from a different connection may be blocked. There are ways around this though - see "Read uncommitted", "Transaction Isolation Level" or "NOLOCK", but again, that is a bit much to try to cover here.
I hope this helps. It sounds like you will want to use transactions to handle your "real" data, but may perhaps want to use a separate independent connection to handle progress and error logging or other "metadata" activities. -- Tom N talk/contrib 05:36, 18 February 2016 (UTC)
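Since the question mentions .NET C#, here is a minimal sketch of the transaction-per-file pattern using ADO.NET (SqlConnection/SqlTransaction shown for SQL Server; the table name, column, and line-by-line parsing are invented placeholders, not the framework's actual schema):

    using System.Data.SqlClient;

    static void ImportFile(SqlConnection conn, string path)
    {
        using (SqlTransaction tx = conn.BeginTransaction())
        {
            try
            {
                foreach (string line in System.IO.File.ReadLines(path))
                {
                    using (SqlCommand cmd = conn.CreateCommand())
                    {
                        cmd.Transaction = tx; // each command must be enlisted in the open transaction
                        cmd.CommandText = "INSERT INTO Items (Data) VALUES (@data)";
                        cmd.Parameters.AddWithValue("@data", line);
                        cmd.ExecuteNonQuery();
                    }
                }
                tx.Commit();   // the whole file's rows become visible at once
            }
            catch
            {
                tx.Rollback(); // none of the file's rows are kept
                throw;
            }
        }
    }

Calling ImportFile once per input file gives exactly the sequential begin...commit / begin...rollback pattern described in the first bullet above.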

How can I remove Mozilla from my appdata if windows crashes or fails to delete it?

I installed the Avira antivirus after earlier suggestions on this desk, when I had had to uninstall (over a week ago) Firefox, which kept crashing, and AVG, which would not complete installing.

Running Avira I got the message that there are 5547 hidden files such as:

c:\users\user\appdata\local\mozilla\firefox\profiles\m5g74v4v.default\cache2\entries\3ba89d828ff397d4761d35bd90f9749ce144927d [NOTE] The file is not visible.

Online instructions say to type %appdata% into the search field at the start button (I am using Windows 7) and then delete the Mozilla folder. When I do this, except for the deleting-files notification with the spinning blue wheel of death, the rest of the computer is frozen. After about 10 minutes I get an error message saying files could not be deleted. (The same thing happens if I try to delete the subfiles piecemeal.)

Are there any ways to address this? A program that will delete intransigent files? A command prompt? (If there is a command prompt I will need verbatim instructions, as I am not certain when working with that.) I am about to try booting in safe mode. Thanks. μηδείς (talk) 18:44, 17 February 2016 (UTC)

Are you in an unsafe mood right now? If so, I don't feel comfortable answering your question. The Quixotic Potato (talk) 19:36, 17 February 2016 (UTC)
Hidden files aren't necessarily bad. They're hidden because they are boring, not because they are malicious. Most Firefox related problems can be solved by "refreshing" it. This will create a new profile folder. Extensions and themes, website permissions, modified preferences, added search engines, download history, DOM storage, security certificate and device settings, download actions, plugin settings, toolbar customizations, user styles and social features will be removed. The Quixotic Potato (talk) 19:50, 17 February 2016 (UTC)
I am not in safe mode now, nor was I when I made the post above. As soon as I made the post above I rebooted in safe mode, but still got the circle of death when I tried deleting those files.
If it helps, the timeline is:
1) My computer was crashing and freezing on Jan 15, and I got two blue screen errors
2) I reinstalled from the recovery discs
3) late on Jan 16 I re-installed Firefox, my favorite browser for most uses
4) I was unable to reinstall AVG 2016 or 2015, it stopped at 92% installed
5) Firefox was causing me to power off so many times I uninstalled it
6) I uninstalled AVG, which said it was running, but kept getting errors, and could not run Windows Defender
7) I made an offline disk of Windows Defender, it found spurious Mozilla files on my C drive.
8) I removed those Mozilla files, but still had no working antivirus, since Windows Defender kept deactivating itself.
9) I installed Avira last night, and ran it overnight. It found 5547 hidden items, all to do with a Mozilla/Firefox directory address.
10) Since I do not now have Firefox installed, I cannot "refresh" it, although I assume you mean either update it or go back to factory settings.
At this point I am rebooting with Windows Defender Offline to see if it will help, but I am still curious what I should be looking for in a way to get those invisible files removed. Then I can concentrate on whatever other files are left. Thanks. μηδείς (talk) 21:43, 17 February 2016 (UTC)
There is no need to remove the files. The pathname you provided is that of a file belonging to Firefox's cache. This is perfectly benign; Web browsers save things locally in a cache to speed up your browsing experience. (If you really want to learn more, plug "Firefox profile" into a search engine to learn all about the contents of Firefox's profile directory.) With that said, it is perfectly acceptable to delete Firefox's cache; caches are temporary and the content is intended to get deleted eventually. You shouldn't be having any issues deleting the files if your system is functioning normally. This plus your other issues indicates there is a problem with your system. My hunch is it's a hardware issue, such as your hard drive being on its last legs, but it's possible it could be an operating system problem. A good way to do a differential diagnosis is to boot to a Live CD (or "live DVD", "live USB drive", whatever) and use it for a while. Try deleting things from Firefox's cache from the live CD environment. If everything goes smoothly, it's an operating system issue, and the best solution is probably to just do a factory reset of the computer. If you still have issues in the Live CD environment, it's a hardware problem. Also a good thing to do is to back up any files on your system you care about, if you haven't already done so. --71.119.131.184 (talk) 23:32, 17 February 2016 (UTC)
If you want to try command mode, then you get to it from Start -> Run ... then type "cmd" (without the quotes) to get the black screen.
Then type "cd c:\users\user\appdata\local\mozilla\firefox\profiles\m5g74v4v.default\cache2\entries" (without the quotes)
Then type "dir /a" to see all the files (the "/a" switch includes hidden ones that a plain "dir" won't show)
Then type "del /f /q /a *.*" (the "/a" selects files regardless of attributes, including hidden ones, and "/f" forces deletion of read-only ones)
Then type "dir /a" again to see if the files have all gone.
Then type "exit" to get back to windows.
This probably won't work if something still has the files locked, because the command prompt in Windows 7 is not real DOS but just a command shell running under Windows. Dbfirs 23:59, 17 February 2016 (UTC)
(edit conflict) The fact that there are hidden files in Mozilla's profile folder is normal (they are hidden because they are boring, not because they are malicious). It's weird that both Windows Defender and Avira detect them; maybe your browser cache contains something bad (but that doesn't necessarily mean that you were infected). If you are afraid that you may have a virus then I recommend installing MalwareBytes Anti-Malware and ESET Smart Security. If you want to delete certain files, but can't (you should be able to delete them after restarting your computer) then I recommend "FileASSASSIN". It's made by the MalwareBytes people. Malware rarely causes a Blue Screen of Death; it is far more likely that that is an indication that a driver is misconfigured or that hardware is about to fail. The Quixotic Potato (talk) 00:00, 18 February 2016 (UTC)
Yes, I agree with TQP above that those files are harmless. I have lots of them, but I'm able to see them and delete them from Windows. If you can't, then something like File Assassin (above) should be able to do the removal. They should have been deleted when you uninstalled Firefox, so it looks as though something went wrong with that, or possibly you have a glitch in the File Allocation Table, so try running "chkdsk". Dbfirs 00:20, 18 February 2016 (UTC)
I'm not the best person to be giving ideas here, but I'm thinking (a) if there's a disk issue it should show up if you go to "view event logs" (check the warnings... for some reason every time I've had something really bad, it comes up as a warning). Knowing exactly when you tried to delete the files gives you an advantage finding the problem in that pile of stuff. Also, I'd say just open the Task Manager and see if there's a Firefox/Mozilla process of some type running before you try to delete, just in case they're merely "in use". Wnt (talk) 12:19, 18 February 2016 (UTC)
Instead of dir, use attrib to check for hidden files. Note that m5g74v4v.default is a random string; see what your computer profile's random string is. --Hans Haase (有问题吗) 20:01, 18 February 2016 (UTC)
  • Yay! chkdsk found and fixed three damaged sectors (or whatever they are called), and these sectors had the Mozilla (and AVG) files that were not erased when those programs were uninstalled. After chkdsk was run, I was able to delete those files, and the computer is behaving a lot better. I am going to do a clean install of AVG and Firefox this weekend, after some important projects are completed, just in case there is still some other issue. I'll post the results then if this thread is still alive. Thanks to everyone who helped above! μηδείς (talk) 20:48, 18 February 2016 (UTC)
That's good for now, but bad sectors are generally a sign that your drive is at death's door. I would just go ahead and replace it. Also if you aren't backing up your data regularly now's a great time to start! --71.119.131.184 (talk) 21:02, 18 February 2016 (UTC)
Yes, definitely use regular backups, and if chkdsk gives more errors next week, then replace the drive. You might like to make a recovery image now, if you don't already have one, just in case. I have known drives to last for years after the odd chkdsk bad sector error, but the risk might or might not be worth it. Dbfirs 16:04, 20 February 2016 (UTC)

Capturing HTML output of Atom XML formatter

Dear Wikipedians:

How do I capture the HTML output of an Atom CSS-formatted XML RSS feed, as shown in the screenshot below:

Screenshot I did of an Atom CSS formatted RSS feed XML file

into an HTML file?

Thanks,

L33th4x0r (talk) 20:23, 17 February 2016 (UTC)

I would use XSLT. The Quixotic Potato (talk) 21:07, 17 February 2016 (UTC)
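A minimal sketch of such a stylesheet, assuming a standard Atom feed (the HTML produced is deliberately bare; adjust the templates to taste):

    <?xml version="1.0"?>
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:atom="http://www.w3.org/2005/Atom">
      <xsl:output method="html"/>
      <xsl:template match="/atom:feed">
        <html>
          <body>
            <h1><xsl:value-of select="atom:title"/></h1>
            <ul>
              <xsl:for-each select="atom:entry">
                <li><xsl:value-of select="atom:title"/></li>
              </xsl:for-each>
            </ul>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>

Run it with any XSLT 1.0 processor, e.g. "xsltproc feed.xsl feed.xml > feed.html", to capture the rendered result as a static HTML file.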

Steps to encrypt your emails?

I have generated a pair of RSA cryptographic keys using ssh-keygen and installed enigmail. I tried to import the former into the latter, but the add-on asks me for a PKCS-12. How should I transform the keys into this format, if at all?

And after that, how do people normally share their public keys? Do they put them on a site? Do they send them through email? Exchange them in person? --Llaanngg (talk) 21:50, 17 February 2016 (UTC)

My understanding is enigmail normally uses gpg (GnuPG). I don't know if there's a way to have it use ssh keys. If you install gpg and generate your public/private keys using that tool, then enigmail should just work. As to how to exchange public keys, any of your three options will work. Remember that there is no security risk in sharing or exposing your public key. If I need to send private email to someone whose public key I don't have, I just email them and ask them to email their public key to me. It doesn't matter if the email is intercepted. Conversely if someone else needs your public key, you can get it from gpg via "gpg --armor --export my.email.address" and send them the output. Mnudelman (talk) 22:36, 17 February 2016 (UTC)
What if your outbound email gets intercepted and the attacker sends you his public key impersonating your contact? That's kind of paranoid, but how can you know that a public key belongs to a concrete person? --Llaanngg (talk) 22:41, 17 February 2016 (UTC)
Ah well yes, if you're concerned about someone impersonating your contact you do need to take further steps. Gpg supports a "web of trust" system, so that if someone you trust sends you someone else's public key, you can let gpg know that you now trust that key. See [8] and [9]. Mnudelman (talk) 22:49, 17 February 2016 (UTC)
You can also verify the key fingerprint over a different channel, such as a phone call. -- BenRG (talk) 17:57, 18 February 2016 (UTC)
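For reference, the usual GnuPG workflow looks something like this (the addresses and filenames are made up):

    gpg --gen-key                                        # create your keypair (interactive)
    gpg --armor --export alice@example.com > alice.asc   # export your public key to share
    gpg --import bob.asc                                 # import a correspondent's public key
    gpg --fingerprint bob@example.com                    # print the fingerprint, to verify over another channel
    gpg --armor --encrypt --recipient bob@example.com message.txt   # writes message.txt.asc

Enigmail drives the same gpg keyring, so a key imported on the command line is available in the add-on as well.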
I'll give my thoughts on one of the OP's questions, "how do people normally share their public keys?" Due to the minuscule uptake of email encryption by the general public, exchanging encrypted email with a random person on very short notice is just not feasible. Realistically, you can only exchange encrypted emails with someone you already know; you would have to have a discussion about what method you both have access to. Factors to consider include the many people who use web email with no encryption capability, the quite a few people who can't afford a Microsoft Office license, the many who can't afford an Adobe Acrobat license, and the many who work in a corporate environment where they are not allowed to install software on the computer assigned to them. So an encryption method acceptable to all correspondents must be found before you can even begin to talk about exchanging keys. Jc3s5h (talk) 18:29, 18 February 2016 (UTC)
Indeed, convincing others to send and accept encrypted mails seems to be the biggest problem here. Unless one of the big email services forces users to use end-to-end encryption, I don't see how this situation could change any time soon. Llaanngg (talk) 18:50, 18 February 2016 (UTC)