Talk:Reification (computer science)

From Wikipedia, the free encyclopedia

Citations[edit]

Hello. There is a reference in the article which is good. I added one "citations missing" tag because I couldn't see the source. I have no knowledge of reification really. You are most welcome from my point of view to remove or change the tag. Hope this helps. -Susanlesch 21:05, 11 November 2007 (UTC)[reply]

References to linguistics[edit]

The link to Reification (linguistics) in the section on VDM is fine, and is appropriate to the content on that page. However, the reference in the lead (defining nominalization as an instance of reification), if it belongs anywhere, belongs only on that page, not here. Currently, this simpler, less technical meaning for reification is not described on the linguistics page. --Airsplit (talk) 08:47, 12 December 2009 (UTC)[reply]

I agree with you. The reference to linguistics in the beginning is inappropriate here. A link could be added to the Reification (linguistics) page and this example be copied to there. Sae1962 (talk) 07:44, 6 October 2010 (UTC)[reply]
I agree too, so I removed it. (Restorers should argue here instead of just reverting). Rursus dixit. (mbork3!) 09:24, 8 October 2012 (UTC)[reply]

Lack of cohesion (and of clear meaning)[edit]

This unparagraphed article is extraordinarily difficult to read (for me).

Look at snippets such as: "When objects in Smalltalk are viewed as pointers to structs carrying pointers to functions the analogy is clear. There has been renewed interest in the explicit use of C for the implementation of Smalltalk primitives. C is also the language used to implement 3rd-generation Smalltalk in the language Io."

Even after reading what comes before, I find it impossible to understand what "analogy" the author is talking about. In any case, I see little coherence in what is written.

I agree it's very hard to read. It's interesting, but it could have less opinion and be rewritten. LegendLength (talk) 04:19, 22 February 2008 (UTC)[reply]

Forth[edit]

Why isn’t Forth mentioned? It’s the apotheosis of reification, giving the programmer access to every internal structure, including its return stack, dictionary and more. Roman V. Odaisky (talk) 12:11, 31 December 2007 (UTC)[reply]

Muddled[edit]

I can follow the definition given, that "Reification is the act of making an abstract concept or low-level implementation detail of a programming language accessible to the programmer". However, I fail to see how the examples that follow explain this to anyone but an expert already familiar with the term.

- How does C make memory addresses accessible to the programmer? Is it some sort of one-to-one address space mapping? If so, how is this bounded or constrained into C syntax?
- How does Scheme make continuations accessible to the programmer? Same kinds of questions, but a poorer example, because continuations are less universal than memory.
- I can't even grasp what "types that are completely available at run time" means, despite the "clarification". Are you talking about some sort of "reflection framework"?
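As one possible answer to the third question (an illustrative sketch by way of Python, not taken from the article): "types completely available at run time" means something like reflection, where types exist as ordinary values a program can inspect, pass around, and even construct.

```python
# Hedged illustration: in Python, types are reified -- they exist as
# ordinary objects at run time, inspectable and constructible like any
# other value.

x = [1, 2, 3]

# The type of a value is itself a first-class object.
t = type(x)
print(t)            # <class 'list'>
print(t is list)    # True

# Types can be stored in data structures and used to create new values.
containers = [list, set, tuple]
built = [c((1, 2)) for c in containers]
print(built)        # [[1, 2], {1, 2}, (1, 2)]

# New types can even be created at run time from their parts.
Point = type("Point", (object,), {"x": 0, "y": 0})
p = Point()
print(type(p).__name__)  # Point
```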

Agree with other poster about lack of cohesion. I can't get what I'm looking for out of this article. In particular, what does reification do, as a process, that distinguishes it from other processes like refactoring? From what I've gathered, perhaps it exposes functionality in a useful way?

In addition, the article launches into discussions of impacts on OOP without giving the reader a strong sense of why OOP is more (or less) impacted than other methodologies (declarative, structured, functional, aspect-oriented, etc.)

99.170.78.44 (talk) 19:42, 16 July 2008 (UTC)[reply]

Reification, reflective programming, and merging it[edit]

There isn't much added value that this article brings to the general concept of reification. It would perhaps be a good idea to consolidate the several 'reification' articles into one. However, in the meantime, I am going to edit the 'reification in computer science' article by linking it to Reflection (computer science).

Regarding the use of reification in the sense of refinement - this originates from the Vienna Development Method, in particular its data reification. According to the Formal Methods Europe FAQ: "Data reification is the VDM terminology for what most other people would call data refinement - that is, the taking of a step towards an implementation by replacing a data representation without a counterpart in the intended implementation language (such as sets) by one that does have a counterpart (such as maps with fixed domains, which can be implemented by arrays), or at least one which is closer to having a counterpart, such as sequences. The word 'reification' is preferred over 'refinement' because the process has more to do with making an idea concrete than with making it more refined." Arguably, the role of Wikipedia is to clarify this confusion, not to perpetuate it by including a confusing phrase right in the article.
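The refinement step described in the FAQ quote can be sketched concretely (an illustrative Python sketch, with a domain of 0..9 assumed for the example; the names are mine, not from VDM literature): an abstract set is replaced by a fixed-domain map of membership flags, which an array can implement directly.

```python
# Hedged sketch of VDM-style data reification: replace an abstract set
# by a map with a fixed domain, implementable as an array of booleans.

DOMAIN = range(10)          # the fixed domain 0..9 (assumed for this sketch)

# Abstract representation: a mathematical set of elements.
abstract = {2, 3, 5, 7}

# Reified representation: index -> membership flag, i.e. an array.
reified = [n in abstract for n in DOMAIN]

# A retrieve function maps the concrete representation back to the
# abstraction it implements, as VDM refinement proofs require.
def retrieve(bits):
    return {n for n, present in zip(DOMAIN, bits) if present}

print(reified)                        # [False, False, True, True, False, True, False, True, False, False]
print(retrieve(reified) == abstract)  # True
```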

I agree with you that the articles should be merged into one 'Reification (computer science)' article. Sae1962 (talk) 07:48, 6 October 2010 (UTC)[reply]

- Equilibrioception (talk) 19:54, 25 February 2009 (UTC)[reply]

I also agree that the contents of this article are part of Reification (computer science). Sae1962 (talk) 12:44, 27 January 2011 (UTC)[reply]

Reification (translation of)[edit]

Translation of Reification to Portuguese: Substantivar, Concretizar. (This is a suggestion) Carnide (talk) 11:44, 3 March 2009 (UTC)carnide[reply]

Clarification?[edit]

How is this different from the general concept of abstraction? —Preceding unsigned comment added by 193.164.118.24 (talk) 14:50, 25 January 2010 (UTC)[reply]

In a sense it's the exact opposite of abstraction. Reification is making the abstract concrete. The muddled presentation in the article reflects the muddled thinking currently out there, as a result of "reification" being a relatively new word in CS borrowed from other fields and many people plugging in what they think it means -- it'll be a while before there's consensus, if ever. So allow me to do some plugging of my own :-) Caveat: I'll only focus on reification as part of language design and implementation, not the other overloads.
The first example given (that C "reifies" addresses) is not a typical application of the concept -- I'd go so far as to call it wrong. Addresses are not abstract entities, they are concrete, as evidenced by the processor's address lines (unless you want to get really philosophical and claim that those are just bits and do not form a number, but this suggests an unhelpful reduction to denying the concreteness of anything but electrical impulses, or electrons even). That C exposes addresses is not an example of reification, that other languages do not expose addresses is abstraction. This article instead treats "reification" as if it meant "making something that is part of the entire execution environment somehow concrete in a language" or indeed just "first-class-citizenisation" of any kind, which considerably cheapens the notion. It's not just that.
The main difficulty with "reification" in computer science is that it's necessarily relative. Software isn't concrete, so all concepts treated as "real" by a language that don't have immediate hardware counterparts could be called the result of reification. What we mean by reification is something vague like "making something that we already talk about as if it were concrete into a singular concept in the language". For example, giving a language ways of talking about expression trees of that same language (as C# has recently acquired) is an example of reification. But in a language where this is common or foundational, like Lisp, it would be weird to call this reification. Conversely, if a lower level already treats a concept as if it were real (like addresses, which are already "real" in assembly language at least) then a higher-level language couldn't "reify" such a concept, it could only choose not to abstract from it.
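The expression-tree point can be made concrete (a hedged Python illustration of the same idea as the C# expression trees mentioned above, not an example from the article): Python reifies its own expression trees through the standard-library `ast` module, so a program can hold, transform, and evaluate a tree of its own syntax as an ordinary value.

```python
import ast

# Hedged illustration: the expression "x + 1" reified as a concrete,
# inspectable object graph.
tree = ast.parse("x + 1", mode="eval")
print(ast.dump(tree.body))  # roughly: BinOp(left=Name(id='x', ...), op=Add(), right=Constant(value=1))

# Because the tree is an ordinary value, a program can transform it...
class DoubleConstants(ast.NodeTransformer):
    def visit_Constant(self, node):
        return ast.copy_location(ast.Constant(value=node.value * 2), node)

new_tree = ast.fix_missing_locations(DoubleConstants().visit(tree))

# ...and compile and evaluate the transformed tree.
code = compile(new_tree, "<reified>", "eval")
print(eval(code, {"x": 10}))  # 12  (x + 2 after doubling the constant)
```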
The confusion is well demonstrated by the first sentence of our own definition: "Reification is a process through which a computable/addressable object—a resource—is created in a system, as a proxy for a non computable/addressable object." If we take this literally, reification doesn't exist, because there's no such thing as a computing system capturing the notion of a noncomputable object, barring approximation, which is clearly something different. What this definition is trying to get at is something more complicated like "reification is creating a proper superset of a computable system that turns an abstract notion over the original system into a concrete feature of the new system" (I'm not proposing this monstrosity as an alternate introduction, by the way). Thus Lisp doesn't reify expression trees because there's no smaller language to reify them from (debatable, admittedly), and C doesn't reify addresses because addresses are not an abstract notion. Reification strongly implies that what's being reified is already "close" to a language on a higher level, but just outside the horizon of concrete features.
Reification is in a sense the dual of abstraction: abstraction is surveying the concrete and producing a more general notion, reification is surveying the abstract and producing a concrete feature. Both increase productivity, but in different ways. You can combine them, too: let's say you have a C program that uses setjmp/longjmp invocations all the time, and you realize that what it's doing is effectively implementing coroutines (abstraction). You could then add coroutines to the language and replace this particular pattern of setjmp/longjmp with the coroutine feature (reification). Having coroutines explicitly may suggest extensions of the concept that weren't possible with the original setjmp/longjmp implementation, moving further away from simple abstraction. You haven't removed setjmp/longjmp from the language and replaced them with an abstraction, instead you've identified an abstraction and made it singularly concrete. That's reification. 82.95.254.249 (talk) 16:12, 15 August 2010 (UTC)[reply]
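The abstraction-then-reification step in the setjmp/longjmp story can be sketched in Python rather than C (a hedged, illustrative sketch; the class and function names are mine): first the coroutine pattern is implicit, hand-coded as a resumable state machine; then the language's own coroutine feature (generators) makes the concept singularly concrete.

```python
# 1. Implicit: a hand-rolled resumable counter, the analogue of the
#    setjmp/longjmp pattern -- the suspension point is simulated with
#    explicitly saved state.
class CounterMachine:
    def __init__(self):
        self.n = 0          # saved "where we were"
    def resume(self):
        self.n += 1
        return self.n

m = CounterMachine()
print(m.resume(), m.resume(), m.resume())  # 1 2 3

# 2. Reified: the same behaviour expressed with the language's concrete
#    coroutine feature -- suspension is now a first-class construct.
def counter():
    n = 0
    while True:
        n += 1
        yield n             # the suspension point, made explicit

g = counter()
print(next(g), next(g), next(g))  # 1 2 3
```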
The article in general mixes up the concepts of explication and reification. The C language, for instance, certainly explicates addresses as pointers. Reification should imply a concrete representation of a concept as a memory-resident data structure, in the context of programming. A translation of a recursive procedure into its iterative counterpart would reify the implicit data living inside run-time stack frames during the recursion, as some kind of a nested list, explicating them as an additional accumulator argument perhaps. WillNess (talk) 21:02, 28 February 2012 (UTC)[reply]
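The recursion-to-iteration point above can be illustrated with a short Python sketch (illustrative only; the function names and tree encoding are my own): the pending work that lives implicitly in run-time stack frames during the recursion becomes an explicit, memory-resident list in the iterative version.

```python
# A tree is either an int leaf or a pair of subtrees.

def sum_tree_recursive(tree):
    # The pending work lives implicitly in run-time stack frames.
    if isinstance(tree, int):
        return tree
    left, right = tree
    return sum_tree_recursive(left) + sum_tree_recursive(right)

def sum_tree_iterative(tree):
    # The same pending work, reified as an explicit stack of nodes.
    stack, total = [tree], 0
    while stack:
        node = stack.pop()
        if isinstance(node, int):
            total += node
        else:
            stack.extend(node)
    return total

t = ((1, 2), (3, (4, 5)))
print(sum_tree_recursive(t))  # 15
print(sum_tree_iterative(t))  # 15
```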

First image seems to be wrong[edit]

I think the first image is wrong. The correct values on the top right membership R1 should be

a P1
b P1 (not P2)
c P2 (not P1)

as the person P1 has two memberships. Sae1962 (talk) 14:17, 27 January 2011 (UTC)[reply]

I agree, the image looks wrong[edit]

Pure functional languages reify time?[edit]

What pure functional languages reify the whole execution history as a list? I haven't seen that done in Haskell, which is the purest functional language. Moreover, pure languages represent changes in computation state as rewrites of a single program in the form of a tree structure. This sentence seems like ungrounded original research, and it should be clarified and referenced so that we can understand what it means. Diego (talk) 21:42, 28 February 2012 (UTC)[reply]

The state of computation is implicit in impure languages, which change their state by altering the values of variables. Pure languages churn out new states, explicitly keeping the old ones, usually as part of a list (or any lazily-constructed data structure will do). That's why persistence is a given in pure languages. Updating a leaf in a binary tree, e.g., produces a new tree, and the old one is (can be) still available. A function has to produce the same outputs for the same inputs, so how is the changing computation to be represented? By a list of the values that the impure counterpart would have over time. Thus the time that is implicit in the impure setting becomes explicit in the pure one. That's what all the books seem to be saying. I think I've seen this analogy in some book, though I don't remember exactly where.
Python generators yield new values on demand; in Haskell that is represented by a lazily constructed list whose new elements are computed on demand, mediated by access to that list. Conceptually, lazy lists are (memoized) generators. The computation of primes, for example, is timeful: computing new primes relies on previously computed ones. In Haskell the timing is automatically managed by lazy evaluation through thunks, one dependent on another, with all values existing as elements of the whole sequence. In Python a top-level loop would pull new primes from the primes generator one by one and print out each one, but in pure Haskell code the whole sequence is reified as a (e.g.) list. If you can improve the wording, that would be great. WillNess (talk) 07:45, 29 February 2012 (UTC)[reply]
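The generator analogy above can be sketched concretely (a hedged illustration; the trial-division generator is my own minimal example, not from the article): the generator gives the timeful, one-value-at-a-time view, while collecting a prefix into a list gives the "whole sequence as a value" view that lazy Haskell lists provide by default.

```python
from itertools import count, islice

def primes():
    found = []                      # previously computed primes
    for n in count(2):
        if all(n % p for p in found):
            found.append(n)
            yield n                 # new primes rely on the old ones

# Timeful view: pull primes one by one, as a top-level loop would.
g = primes()
print(next(g), next(g), next(g))    # 2 3 5

# Reified-as-a-sequence view: a prefix of "all the primes" as a value.
first_ten = list(islice(primes(), 10))
print(first_ten)                    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```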
Ok, that's a clearer explanation, but this is not what is expressed in the article's paragraph. In particular I doubt that the whole collection of previous states can be accessed as a list through the language; in any case, the assertion that "Pure functional languages reify the concept of time" is entirely original research: it's an "exceptional claim that requires exceptional sources". It should not be included in the article in that form. I'm taking it out unless you find some source that explicitly calls "reification of time" this process of representing all the computation steps. If you find some sources for the idea of reified states maybe it can be included back into the article in a descriptive way, without making that bold claim. Diego (talk) 09:43, 29 February 2012 (UTC)[reply]
I hoped maybe you'd remembered that you saw it too, in some book, as I can't remember where I did. Of course the phrasing is entirely mine, and I see how it can be construed as OR, but here on the programming pages the attitude seems to be more lenient in that respect. :) I would much rather prefer you'd come up with a clearer phrasing and let it stay with "sources needed" tag or something. Oh well. :) WillNess (talk) 00:02, 2 March 2012 (UTC)[reply]
We could insert something about explicit rewrite steps, but that's different than "reifying time". Diego (talk) 10:53, 2 March 2012 (UTC)[reply]
This would be an operational-oriented sentence. :) Rewriting is something that a run-time system implementation of a pure functional language would do. A functional definition defines primes - all of them - even if sequential access by "a user" is implied by the run-time system, which is completely external to the language itself. Of course the internal states of the RTS aren't reified - not even explicated - by a pure functional language; but it does explicate and reify its own time - all of it. I've seen many explanations of FP talking about having the whole function at once, as opposed to calculating its values point by point in the imperative approach. Of course the run-time is itself imperative; but the language is functional. WillNess (talk) 09:46, 3 March 2012 (UTC)[reply]

Here's a talk on the issues of time and pure functional languages, by a distinguished speaker, the author of Clojure, Rich Hickey. WillNess (talk) 14:16, 7 March 2012 (UTC)[reply]