
Talk:Hindley–Milner type system

From Wikipedia, the free encyclopedia

Suggestions for improvement

Missing
  • Reword the paragraph starting with "Perhaps a bit irritating, type variables are monotypes" to mention higher-ranked types and why they are not supported in HM, leading to the stratification of types into monotypes and polytypes.
  • Fix/recursion
  • A list of implementations would be helpful — Preceding unsigned comment added by 2601:645:8103:3CC8:F12C:66F9:7787:ADE6 (talk) 19:25, 24 June 2017 (UTC)[reply]
Syntax
  • Type functions are annotated with their arity. Perhaps kinds (see kind (type theory)) should be discussed here as well?
  • In type judgements, leaving an empty context completely blank makes the notation slightly confusing. In the second example under Typing rules, the judgement with an empty context could maybe be written with an ∅ or ε to the left of the turnstile (the former often being used in the literature, the latter being how the empty context is defined in the syntax box). — Preceding unsigned comment added by 2001:6B0:2:2801:8C67:A355:7F2:B217 (talk) 12:51, 24 April 2023 (UTC)[reply]
Principal type
  • Consider moving this under Polymorphic type order.
Deductive system
  • Drop the figure "The Syntax of Rules". This should mostly be self-explanatory from the previous sections.
Algorithm W
  • The run-time of HM can be exponential in the size of the expression, so this paragraph is somewhat strongly worded. There are some performance results in the literature (exponential in the depth of the nesting of let-bindings, so linear in practice, if I remember correctly).
  • There's no explanation of the `unify` function in the algorithm. — Preceding unsigned comment added by 38.88.172.69 (talk) 16:28, 22 November 2013 (UTC)[reply]
Ok. I thought this would be present in the Unification (computer science) article referred to, but it isn't. -- Cobalt pen (talk) 21:09, 26 March 2015 (UTC)[reply]
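Since the `unify` function is asked about above and, as noted, not explained in the article referred to either, here is a minimal, hedged sketch of first-order syntactic unification on monotypes in Haskell. The data type and all names are my own illustration, not the article's or the papers' notation.

```haskell
import           Data.Map (Map)
import qualified Data.Map as Map

-- Minimal monotype syntax: type variables and type constructors applied
-- to argument types (e.g. "->" applied to two monotypes).
data Mono = TVar String | TApp String [Mono]
  deriving (Eq, Show)

type Subst = Map String Mono

-- Apply a substitution to a monotype.
apply :: Subst -> Mono -> Mono
apply s t@(TVar a)  = Map.findWithDefault t a s
apply s (TApp c ts) = TApp c (map (apply s) ts)

-- Compose substitutions so that apply (compose s2 s1) == apply s2 . apply s1.
compose :: Subst -> Subst -> Subst
compose s2 s1 = Map.map (apply s2) s1 `Map.union` s2

-- Most general unifier of two monotypes, if one exists.
unify :: Mono -> Mono -> Maybe Subst
unify (TVar a) t = bind a t
unify t (TVar a) = bind a t
unify (TApp c ts) (TApp d us)
  | c == d && length ts == length us = unifyAll ts us
  | otherwise                        = Nothing

-- Unify argument lists pairwise, threading the substitution through.
unifyAll :: [Mono] -> [Mono] -> Maybe Subst
unifyAll []     []     = Just Map.empty
unifyAll (t:ts) (u:us) = do
  s1 <- unify t u
  s2 <- unifyAll (map (apply s1) ts) (map (apply s1) us)
  return (compose s2 s1)
unifyAll _      _      = Nothing

-- Bind a variable to a monotype, with the occurs check.
bind :: String -> Mono -> Maybe Subst
bind a t
  | t == TVar a         = Just Map.empty
  | a `elem` freeVars t = Nothing            -- occurs check fails
  | otherwise           = Just (Map.singleton a t)

freeVars :: Mono -> [String]
freeVars (TVar a)    = [a]
freeVars (TApp _ ts) = concatMap freeVars ts
```

For example, unifying α → int with bool → β yields the substitution {α ↦ bool, β ↦ int}, while unifying α with α → int fails the occurs check.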
Further topics

Ruud 12:06, 12 September 2011 (UTC)[reply]

Ruud, thanks for your comments. I'll work through them. -- Cobalt pen (talk) 15:47, 14 September 2011 (UTC)[reply]
Style
Yes, please do. I appreciate the meaning might have been to mark it as different, but it certainly reads rather non-neutral — 152.78.176.27 (talk) 10:30, 2 December 2013 (UTC)[reply]
While the offending wording has been removed, I would like to add that the intention was not to praise HM. The thought was rather that many attempts to extend HM, e.g. towards subtyping, have failed. There is quite some frustration about this expressed in the literature. "Outstanding" was meant more as relatively "isolated" with respect to extensions or extensibility. This was actually a very neutral remark, and I believe one worth mentioning. I'll reinsert it later, at the end, in the context of limitations. -- Cobalt pen (talk) 21:28, 23 March 2015 (UTC)[reply]

Something that didn't parse for me


The sentence: "Perhaps a bit irritating, type variables are monotypes, either."

It's the 'either' that's throwing me. Kashikom (talk) 23:09, 18 June 2012 (UTC)[reply]

The wording was corrected a day later. It's my bad English. I meant "also" or "likewise", referring to the ground terms of types in the preceding paragraph. -- Cobalt pen (talk) 06:28, 28 June 2012 (UTC)[reply]

Syntax discussion fails to parse


"Readers unfamiliar with the lambda calculus might not only be puzzled by the syntax, which can quickly be straightened out translating, that the application e_1 e_2 represents the function application, often written e_1(e_2) and that the abstraction means anonymous function or function literal, common in most contemporary programming languages, there perhaps spelled only more verbosely \texttt{function}\,(x)\ \texttt{return}\ e\ \texttt{end}."

Readers unfamiliar with what this entire section is trying to say might be puzzled by its syntax which could sorely use some translation and straightening out. — Preceding unsigned comment added by Gwideman (talkcontribs) 06:42, 27 December 2012 (UTC)[reply]

Sure, but any concrete suggestions? -- Cobalt pen (talk) 13:13, 23 March 2015 (UTC)[reply]
Readers unfamiliar with lambda calculus should read the lambda calculus article first!
Lambda calculus is a prerequisite to understand the HM type system.
I did not erase that part; I only made a little fix in the notation for lambda expressions, because the unparenthesized form is confusing, and adding parentheses eliminates the ambiguity of the scope of the x variable.
Application should also be parenthesized, but I did not make the change because, as I said before, the reader should be directed to the lambda calculus article first; explaining here that application is left associative, how substitution is done, etc., would extend this article with information that should be in the lambda calculus article. — Preceding unsigned comment added by 2806:107E:4:ADEE:218:DEFF:FE2B:1215 (talk) 10:10, 2 April 2018 (UTC)[reply]

Original research?


I tagged the last 2 paragraphs in the "note on expressiveness" section as 'original research?'.

Maybe someone could clarify the following points:

  • What is meant by "too expressive to be meaningfully typed at all"? Is there some kind of limit in expressiveness that type systems can meaningfully handle?
  • Which "decision problem" is undecidable for "anything as expressive as the lambda calculus"? The obvious one would be the type inference problem, but AFAIK there are type systems with recursive types for which type inference is decidable and which are able to type every term of the untyped lambda calculus.
  • The "universal function" mentioned seems to be just an example of a function returning values of different types which in all static type systems can only be represented be an uppermost type (often called "top" or "object") or some wrapper-type. The type system thus does not "collapse", but correctly does its job by preventing these values to be used in any type-specific operation and this way preventing type errors.

Björn Engelmann (talk • contribs) 11:43, 3 January 2013 (UTC)[reply]

I removed the whole section, since it does not add to the topic.

-- Cobalt pen (talk) 18:58, 23 March 2015 (UTC)[reply]

Qualifier --> Quantifier?


I'm irritated by the name of "qualifier" for a "∀" prefix, which is commonly named "quantifier", at least in logic.

Looking in the original papers cited by the article, I found no occurrence of the search string "quali", but

"operators (such as infixed for functions and postfixed list for lists); a type-scheme is a type with (possibly) quantification of type variables at the outermost."

on p.2 of Damas, Milner (1982),

"Polytypes thereby also stand for subsets of V, and these are also directed complete. The reader may like to think of each type variable in a polytype as universally quantified at the outermost; for example,"

on p.360 of Milner, (1978).

So, if there is no particular reason I'm not aware of, the name should be changed to "quantifier". Jochen Burghardt (talk) 17:56, 14 June 2013 (UTC)[reply]

Yes, this is wrong and should be changed, especially as "qualifier" does have a very specific and different meaning in some type systems (e.g., Mark Jones' qualified types). —Ruud 22:17, 14 June 2013 (UTC)[reply]
Ok, I changed the names. Jochen Burghardt (talk) 22:52, 14 June 2013 (UTC)[reply]

Def. of monotype


While browsing the original papers, I also noted that they define "monotype" differently from this article, viz.

"A monotype µ is a type containing no type variables."

on p.3 of Damas, Milner (1982), and

"A monotype is a type containing no type variables."

on p.359 of Milner, (1978). In particular, I couldn't find a source for the Hindley–Milner#Monotypes section's last paragraph

"Type variables are monotypes. Standing alone, a type variable α is meant to be as concrete as int or β, and clearly different from both. Type variables occurring as monotypes behave as if they were type constants whose identity is unknown. Correspondingly, a function typed α→α only maps values of the particular type α on itself. Such a function can only be applied to values having type α and to no others."

If that paragraph isn't revised, I suggest that a note be added, explaining the deviating notation in the original papers. Moreover, an example should then be given for a function of some type α, but not of type ∀α.α. This could also help to make clearer what "type constant whose identity is unknown" means.

Moreover, in the Hindley–Milner#Monotypes section's 2nd sentence, the part

"it is equal only to itself and different from all others"

seems to be tautological, as I consider "different" to mean the same as "other". At least, the part is unclear. Maybe it was intended to mean that different monotype terms always denote different types, i.e. the mapping from terms to types is injective? Or maybe it is better to explain the difference between mono- and polytypes, rather than trying to define what a monotype is and what a polytype is? E.g. "A monotype always designates a particular type, while a polytype ...(whatsoever)..." ? Jochen Burghardt (talk) 18:37, 14 June 2013 (UTC)[reply]

"Type variables are monotypes" refers to the fact that plain HM doens't allow for higher-ranked polymorphism, i.e. if you would substitute a polytype/type scheme for some type variable in a type, that newly formed type might no longer be in prenex form and thus not a syntactically valid polytype.
But, yes, overall this section is awkwardly phrased. —Ruud 22:32, 14 June 2013 (UTC)[reply]
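To make the prenex-form point concrete, a hedged Haskell illustration (my own example, not the article's): giving a lambda-bound parameter the polytype ∀a. a → a yields a rank-2 type, which plain HM cannot express and which GHC accepts only with an extension.

```haskell
{-# LANGUAGE RankNTypes #-}

-- The argument type is itself quantified, so the whole type is no longer in
-- prenex form; plain HM has no such type.
applyToBoth :: (forall a. a -> a) -> (Int, Bool)
applyToBoth f = (f 3, f True)

main :: IO ()
main = print (applyToBoth id)   -- prints (3,True)
```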
If I understood you right, "Type variables may be instantiated only by monotypes" would be more accurate - ?
Concerning the overall style, I guess the questions in #Original research? are also caused by the poor understandability of the respective section. Jochen Burghardt (talk) 22:44, 14 June 2013 (UTC)[reply]
Yes, that would be a better way to phrase this. And more parts of this article could use some improvement in clarity of writing. This may have something to do with the original author not being a native speaker, although I'm not quite sure whether the German version is much better in this respect. —Ruud 10:15, 15 June 2013 (UTC)[reply]
I do indeed tend toward awkward expressions, as in the tautology above. I meant the tautology as an emphasis. (And yes, my German is not any better in this respect. ;^)
To the variables in monotypes: I think it is actually as I wrote, and there is in fact a subtlety. More precisely, a free type variable can only be interpreted as an indeterminate, to give the "type constant whose identity is unknown" its proper technical name and meaning. An indeterminate cannot be instantiated in HM and thus remains indeterminate. A type variable can only be instantiated (to any monotype) if quantified. But then we do have a polytype and not a monotype.
The example is: with a context in which x has the type variable α as its type and f has type α → int, one can infer a type for f x, but not for f 3 nor for f z (with z of another unknown type β) in HM, since α (may) differ from any other type, and so from int and (or) β. In particular, neither α = int nor α = β can be derived in HM. See the Specialization Rule (and the text to its left).
Algorithm aside, HM's behavior makes sense. You have a pointer to something having this type in register x and a function f that deals with this type, returning an int. Fine, just call it and receive the number. Thus the first case passes. But what would happen if you called f with an int? One doesn't know, and HM soundly refuses to place a bet. Likewise if you have another pointer to that type in z.
Rereading the portion, I could perhaps have better stated "occurring unquantified or free" instead of "standing alone", but I am about to introduce monotypes at that point, and both quantifiers and a definition of "free" were not yet at hand.
-- Cobalt pen (talk) 12:06, 23 March 2015 (UTC)[reply]
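A hedged Haskell sketch of the behaviour described above (my own example, not from the article): a type variable occurring as a monotype in the context behaves like an unknown but fixed type, so a function expecting it accepts the matching argument but nothing else.

```haskell
-- x has the unknown type a, and f maps exactly that type to Int.
applyIt :: a -> (a -> Int) -> Int
applyIt x f = f x              -- accepted: the unknown types match

-- applyBad :: a -> (a -> Int) -> Int
-- applyBad x f = f (3 :: Int)  -- rejected: the unknown type a need not be Int
```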
Actually, I now believe there are only two points in HM which are difficult to understand. One is the relation between the rule systems and the algorithm. Though not perfectly, this point should already be made clear in the article.
The second is monotype variables, or rather, whether a type variable is quantified or not. See Example 1 in 'Free type variables' for another related pitfall, where the variable is bound by another type. The variable is bound, which prevents generalization and with it, instantiation. Thus one can stare at the type syntax as long as one likes; the relevant freeness there is only w.r.t. instantiation. It is the binding in the context which prevents generalization. And this context is restricted to a monotype in the abstraction rule, preventing generalization, and released to a polytype in the let rule, allowing it. I had failed to properly present this point in the article.
Focusing on this point should help to make the article clearer, I hope -- Cobalt pen (talk) 19:45, 23 March 2015 (UTC)[reply]

Dubious claim that the algorithm W is "hard to read"


I just read the algorithm in the original paper and to be honest I found it easier to understand than the one presented here.

It definitely reads like an algorithm: if case 1 then do this, if case 2, then do that, etc, as compared to the "logical" presentation of the algorithm which is found here.

That you find my presentation harder to understand is very unfortunate. I consciously did not present the original form, for the readers' sake. I fail to see the criticism when you write that it reads like an algorithm: ..., since I even explicitly explained how to read the rule system as a procedure, saying The rules now specify a procedure with parameters yielding in the conclusion where the execution of the premises proceeds from left to right. This form of presentation is not so uncommon, and bringing it to procedural form is a simple transformation.
Should I explicitly bring it to procedural form as an extra step?
-- Cobalt pen (talk) 22:39, 22 March 2015 (UTC)[reply]

So I think it's unfair to call it "hard to read", it is perhaps "hard to directly relate to the inference rules", but not hard to read. -- Humble reader (talk) 16:30, 10 October 2013 (UTC)[reply]

Right, this sounds like a judgment and should be re-worded.
But my point is actually only that all the substitutions can be carried out cheaply by a side effect, eliminating all their various compositions, too. Making the substitutions explicit does of course add extra variables and computations, making things "harder" (actually only more laborious) to follow, but it has the advantage of being far more logical (i.e. side-effect free) than my "logical" presentation.
I do state all this in the article at that point, but what was meant merely as a note on the advantages and disadvantages of different forms comes across as a negative personal judgment.
To fix this, I should perhaps reverse the order and first emphasize the advantage of the form used in the original article, namely being side-effect free and explicit, and reformulate its disadvantage positively: the substitutions and their compositions needed in a fully explicit form can all be eliminated, as shown, when using procedural means, in particular side effects.
-- Cobalt pen (talk) 22:39, 22 March 2015 (UTC)[reply]

I do like the version in the article, it is very clean, and as noted, alludes to a comparison with the syntactic rules. For this reason I have seen this presentation being used on several occasions (e.g. in class). This is, however, a matter of taste; why not also provide the pseudo-code and let the reader decide? -- Corwin.amber (talk) 17:06, 12 July 2016 (UTC)[reply]

Paging through the literature, I found the algorithm as presented in Milner 1978, p. 370 ff., as a simplified Algorithm J. He notes there:
As it stands, W is hardly an efficient algorithm; substitutions are applied too often. It was formulated to aid the proof of soundness.
Reworking the article, I'll give a proper attribution of the algorithm. Since implementing side effects might not be possible or wanted in a functional language, Algorithm W of course has its own merits. But as Milner emphasises, Algorithm W was formulated to aid the proof. While writing the article, I was very eager to demonstrate that an implementation that is both simple and efficient is possible, while at this point appealing mostly to the reader's metalogical intuition. I'll add Algorithm W in calculus form and hint at its use in a proof. -- Cobalt pen (talk) 05:04, 28 May 2018 (UTC)[reply]
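For readers comparing the two formulations, here is the application case of Algorithm W in its usual substitution-based form (a hedged reminder in the standard notation, not a quotation from the article or from Milner's paper); it is exactly these compositions that Algorithm J replaces by a global side effect on the type variables:

```latex
\[
\begin{aligned}
W(\Gamma,\ e_1\ e_2) \;=\;
  & \textbf{let } (S_1,\tau_1) = W(\Gamma,\ e_1) \\
  & \textbf{let } (S_2,\tau_2) = W(S_1\Gamma,\ e_2) \\
  & \textbf{let } S_3 = \mathit{unify}(S_2\tau_1,\ \tau_2 \to \beta)
      \qquad (\beta\ \text{fresh}) \\
  & \textbf{in } (S_3 S_2 S_1,\ S_3\beta)
\end{aligned}
\]
```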

Incomprehensible esp in context


I don't even know where to start - this is incomprehensible. What's worse is that if you look up type inference, it says "the solution to type inference has been known since 1958" and points you here to five pages of formulas. Type inference in static languages is new for most people but increasingly common - C++ is a good example: it had no type inference before C++11 but now can infer types in some situations with auto or decltype. Start with an example of the problem and show how the algorithm solves the problem, like you would with an article on binary search or btree insertion or something similar. 98.169.58.20 (talk) 19:25, 29 June 2014 (UTC) [The Arcadian][reply]

Please discuss or fix the reference in question at its origin.
I do not agree that C++11 is a good example for the algorithm described in this article. In particular, HM cannot be extended to cope with subtyping. Conversely, C++11 inference is unable to go beyond a single C++ expression, e.g. it cannot infer the type of a function parameter from the function's body, which HM can.
I agree that the article is technical, long, and has various other readability issues. But I cannot help that the topic needs a few "formulas", mostly definitions, to present it properly. Being vague would be wrong. Your reference to graph algorithms does not help, as this is not an algorithm on graphs, so there are no pictures or anything else appealing much to "common sense", though I have tried. I do provide simple examples as early as possible.
-- Cobalt pen (talk) 13:24, 23 March 2015 (UTC)[reply]
You may be more interested in the article Type inference, instead. —Ruud 14:49, 23 March 2015 (UTC)[reply]
While the suggestions on how to "fix" the article are not helpful, I hold the complaint to be justified and take it seriously. As it currently stands, the article has an unclear focus regarding its audience. This is a Wikipedia article, and a programmer or CS student interested in type inference should be able to read at least parts of the article with benefit.
Because quite a few techniques and notations uncommon in "normal" programming and CS are used here, virtually everything in the article fails to "parse" and the article becomes incomprehensible. Example current prerequisites are: understanding a grammar and its terms as a data type (i.e. a discriminated union), syntax-directed rule systems, familiarity with the notations used, lambda calculus, currying, pattern matching, unification, ... and at least half a dozen more. Most of them are only at hand for a reader with experience in functional programming or lambda calculus.
While the article has links to most of these topics, they do not improve readability, since the prerequisite is unknown but used; a link only sends the reader away. Not knowing the prerequisites is not the interested reader's fault, but a challenge for the author. I did not consciously check these prerequisites but interspersed them very irregularly, only making the article unbalanced for every reader. Oh my! This means the article needs not only to be reviewed but to be fully remolded. I'll try to sketch a new organization of the article, including the other issues, but cannot promise anything. -- Cobalt pen (talk) 01:12, 24 March 2015 (UTC)[reply]

Thus clearly


This sentence from the second paragraph in the introduction:

Thus clearly, decisions not present in the logic might have been made constructing the algorithm, which demand a closer look and justifications but would perhaps remain non-obvious without the above differentiation.

...seems not at all clear to me. I would try to rephrase it myself if I thought I knew what it meant. — Preceding unsigned comment added by 188.126.200.132 (talk) 18:33, 6 August 2014 (UTC)[reply]

Yep.
The first question is how the algorithm makes use of the rules. The decision is when to apply which rule.
A wrong decision can have two effects: either it fails to find a proof, i.e. it claims a type error where there is none, or it does not produce the principal type (technically, by choosing not to generalize or by instantiating wrongly), which could later lead to the first effect, a purported type error.
The second question is whether the algorithm makes use only of rules in the system and of no other rules. Otherwise, it could be inconsistent w.r.t. the rule system, or the rule system incomplete w.r.t. the algorithm. This is not really a 'decision' but rather the implementation of the rules, its consequences and short-cuts. I meant this second point here as well.
It may not be just the introduction where I do not express clearly how things play together. Actually, while reviewing the article and the comments, I think that, while almost everything else is there, the common thread linking all parts might not be drawn as clearly as I had hoped. I'll try to readjust the focus.
-- Cobalt pen (talk) 21:00, 23 March 2015 (UTC)[reply]

System F


How about a back link to System F. I read:

"Wells' result implies that type inference for System F is impossible. A restriction of System F known as "Hindley–Milner", or simply "HM", does have an easy type inference algorithm and is used for many statically typed functional programming languages such as Haskell 98 and ML."

This could put HM into a proper context. Jan Burse (talk) 01:06, 5 January 2015 (UTC)[reply]

While I hesitate to add content to the article while it is still so "incomprehensible", as rightfully stated in other postings, I absolutely like your idea. One can easily put a section at the end noting this relation, known limitations of HM, etc. -- Cobalt pen (talk) 21:11, 23 March 2015 (UTC)[reply]
Apropos proper context: HM predates System F. -- Cobalt pen (talk) 08:35, 3 December 2018 (UTC)[reply]

I've added a second reference to System F. It is now referred to both in "Meta Types" and in the introductory section about "let-polymorphism". -- 14:45, 19 December 2018 (UTC)

Let-polymorphism


Perhaps it would be a good idea to emphasize that if polymorphic types were allowed for lambda parameters, type inference would be undecidable. 37.175.216.246 (talk) 11:22, 22 January 2016 (UTC)[reply]

Actually, I'm not sure under which conditions, i.e. with which rules precisely, the inference becomes "undecidable". Could you please add a citation? I guess one could drop the let-clause and allow a more liberal version of generalization. Then one could attempt to construct a variant of the Post correspondence problem from such rules. But whether this would succeed is not obvious to me. I've added your claim to the text anyway. --Cobalt pen (talk) 09:37, 25 November 2018 (UTC)[reply]
System F has undecidable type inference, which might be what is referred to. On the other hand, for things like the (λ id . ... (id 3) ... (id "text") ... ) (λ x . x) example, it seems that you could apply a purely syntactic substitution to do your "abbreviation" below first. --Ørjan (talk) 11:50, 25 November 2018 (UTC)[reply]
Agreed that this could be what 37.175.216.246 meant. Then the reference to undecidability might be misplaced. --Cobalt pen (talk) 17:12, 25 November 2018 (UTC)[reply]

Perhaps it would be a good idea to make this section comprehensible to somebody who doesn't already know what it is trying to say. 97.68.160.106 (talk) 21:52, 8 November 2018 (UTC)[reply]

Usually, the let-clause can be expressed in the lambda calculus as an abbreviation: let x = e₁ in e₂  ≡  (λx. e₂) e₁. This links both examples. In HM, the types of variables introduced in the let-clause can be instantiated as explained in the preceding section; parameters cannot. Thus the abbreviation is still valid with respect to the computation, but not for the typing, which behaves differently on the two sides. I agree that the section can be improved. --Cobalt pen (talk) 13:39, 24 November 2018 (UTC).[reply]
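A hedged Haskell illustration of that difference (my own example, close to the one discussed above): the let-bound identity is generalized and may be used at two different types, while the lambda-bound one only receives a monotype.

```haskell
-- Accepted: idf is generalized at the let and instantiated at Int and String.
okay :: (Int, String)
okay = let idf x = x in (idf 3, idf "text")

-- Rejected by HM (and by GHC without extensions): the parameter idf only
-- gets a monotype, so it cannot be used at two different types.
-- bad = (\idf -> (idf 3, idf "text")) (\x -> x)
```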

Sometimes too academic in tone


It is legitimized here, for allowing a direct comparison with ⊢_S while expressing an efficient implementation at the same time.

This is a stylized form of academic writing which does not best serve the purpose here. Legitimized by whom? Direct comparison with side-tack against what? And where the heck is the main verb in the long string of words following the comma? — MaxEnt 05:10, 6 March 2016 (UTC)[reply]

Errors


This page is completely messed up. The biggest error is the definition of monotypes. In the original paper, the definition of a type is:

τ = α | ι | τ → τ

while the definition of a monotype is (literally) "a type containing no type variables". The whole Wikipedia page assumes monotypes contain type variables.

Another mistake is the way side conditions are placed. For example, in the "Declarative Rule System" box, the [Var] rule (whose name is not "Var", by the way; it should be "Taut" because it's a "tautology") shows "x : σ ∈ Γ" on top of the line, but that is a side condition and it should be on the side. Same for the [Inst] rule and possibly others.

There are varying styles of writing rules in the literature, and the syntax I used is explained in the box right above and carefully discriminates between side conditions (predicates) and judgements. -- Cobalt pen (talk) 06:10, 15 April 2018 (UTC)[reply]

2620:0:1040:1D:EC47:412:B787:E3F (talk) —Preceding undated comment added 18:34, 3 May 2016 (UTC)[reply]

The definitions of the original author do not always stick. Damas & Milner (1982) stratify types into three classes: monotypes (without type variables), types (without quantifiers and with variables; the types of things a lambda can abstract over), and type-schemes (with quantifiers). Their monotypes are only used in the definition of the semantics. In many contemporary papers types are stratified into two classes: polytypes (Damas & Milner's type-schemes) and monotypes (Damas & Milner's types). Milner (1978) only uses two classes but leaves all quantifiers implicit and doesn't have a type annotation on his lambda (or even mention exactly over what it can abstract, I believe). It might be useful to note all this in the article, though.
I've seen the side condition x : σ ∈ Γ placed above, below and beside the bar; the rule is more often called "var" than "taut"; these are just matters of presentation.
Ruud 10:43, 4 May 2016 (UTC)[reply]
What contemporary papers are you talking about? Are they more authoritative than the original Milner&Damas paper?
"You have seen" side conditions going to wild places doesn't have a place in an encyclopedia, as others might have seen very different things.
2620:0:1040:1D:7824:DAED:F681:D77F (talk) —Preceding undated comment added 13:36, 17 May 2016 (UTC)[reply]
This is the definition of mono- and polytypes used in TaPL (p. 359) and PFfPL (1st ed., p. 202). Some papers that allow type variables in monotypes: [1], [2], [3], [4]. Do a Google Scholar search on "monotypes polytypes" to find many more examples. I didn't see any paper other than Milner's that does not allow this.
Side conditions placed beside ([5]), above ([6], [7]), and below ([8] in rule TyVar). Placing it above the bar is the most common variant in my experience.
Ruud 09:33, 18 May 2016 (UTC)[reply]
Are monotypes constants, or can they be variables? That is what should be explained in plain English; that is the kind of change that can make this article clearer. Aren't polytypes universally quantified type variables? That should be the starting point: the description in plain English, then the formal notation to express it. This is just to make the article less wordy, more concise, and clearer. A simple example of type inference should clarify both concepts! — Preceding unsigned comment added by 2806:107E:4:ADEE:218:DEFF:FE2B:1215 (talk) 10:51, 2 April 2018 (UTC)[reply]
I actually did confuse monotypes and the syntax at the time of writing. In fact, what the article calls monotypes is named 'types' in the original article. I did not notice it, perhaps because variables bound in a non-empty initial context are proper constants. As mentioned, monotypes in the original article are a purely semantical feature and syntactically ground terms there. Thus the meaning of variables in 'types' depends on the initial context. If bound, they are constants; if unbound, they are implicitly all-quantified by the generalization rule. Mentioning this would help to improve the paragraph starting with Type variables are monotypes.
-- Cobalt pen (talk) 06:10, 15 April 2018 (UTC)[reply]
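To pin down the terminology being contrasted in this thread, a hedged Haskell sketch (the constructor names are mine, not the article's or the papers'):

```haskell
-- The two-level stratification used by most contemporary presentations:
-- monotypes may contain type variables, and polytypes (type schemes) put
-- all quantifiers at the outermost, prenex position.
data Mono = TVar String | TArrow Mono Mono | TCon String
  deriving Show

data Poly = Forall [String] Mono
  deriving Show

-- Damas & Milner (1982) additionally single out the variable-free monotypes,
-- used only for their semantics; in the sketch above these are simply the
-- Mono values containing no TVar.
isGroundMonotype :: Mono -> Bool
isGroundMonotype (TVar _)     = False
isGroundMonotype (TCon _)     = True
isGroundMonotype (TArrow a b) = isGroundMonotype a && isGroundMonotype b
```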

Notation is not rendered correctly in Chrome:


In the latest Chrome browser [Oct. 21, 2016, build 54.0.2840.59] the notation is not rendered properly, making the article unreadable. I tried to upload a screenshot of the rendering, but it was blocked as "abuse". Categorizing any old thing as "abuse" seems to be the latest advance in abuse. — Preceding unsigned comment added by Stuart.clayton.22 (talkcontribs) 06:16, 21 October 2016 (UTC)[reply]

I don't use Chrome, but I suspect that you need to install some plugin to render math formulas. Two years on from this comment, that should be solved; if someone has the same problem, search on Stack Exchange or a similar forum for which plugin should be installed or what else the problem could be. — Preceding unsigned comment added by 2806:107E:4:ADEE:218:DEFF:FE2B:1215 (talk) 10:55, 2 April 2018 (UTC)[reply]

This subject is technical. It should only be written in a more readable style, not less technical!


This subject will hardly be of interest to a reader who is unfamiliar with lambda calculus and logic. Inference rules are very common in logic and formal computer science, for example in semantic rules for programming languages.

The problem is the style of the presentation: it is not clear. The intent to explain here what lambda calculus is, is absurd; it only makes the article more difficult to read.

Very simple explanations, even in natural language, can be given of what a type system is and what type systems exist, with examples in the more popular programming languages to contrast with the Damas–Hindley–Milner system.

That should be enough for a casual reader, for example a programmer who wants to know what it means that Haskell has Hindley–Milner types. That intuitive introduction should be enough to satisfy those readers.

For a student or programmer who wishes to implement or understand it, because it is taught in programming-languages courses or in some course project to implement an interpreter, a detailed and "technical" exposition is needed.

Just improve the style, please don't try to make it less "technical". — Preceding unsigned comment added by 2806:107E:4:ADEE:218:DEFF:FE2B:1215 (talk) 10:37, 2 April 2018 (UTC)[reply]

Agree, and thanks for your suggestions. While writing, I had a veering image of the potential readers. As a consequence the article came out very unbalanced and, as you put it, in parts absurd. Looking closer, I apparently decided at the time of writing to intersperse basics, trying to keep the article going as long as possible for a casual or less prepared reader, to the effect of only irritating everyone. Yes, I think this is the point. I could rewrite it, starting with a low bar and quickly raising it without adding more basics after some point, but silently assuming them, indicating the end of the "less technical" part for the various groups of readers, consciously letting them go, while providing the starting point for those looking for their particular topic. As you say, this would also help to make the article denser towards the end. Thus each group would have one continuous section that could be read with some benefit.
-- Cobalt pen (talk) 12:15, 15 April 2018 (UTC)[reply]

Type inference of fixed point combinator is not trivial!


In untyped lambda calculus, fixed point combinators are easy to define, for example the Y combinator. It can be defined easily, but it is also known as the paradoxical combinator because it is related to Russell's paradox.

The typed fixed point combinator is tricky; that section should be more detailed. If you define the Y combinator in Haskell, the system signals an error, saying that it cannot infer the type of the expression. However, there are a couple of typed fixed point combinators; the simpler one relies on the graph reduction semantics. The other is more complicated to explain here now. — Preceding unsigned comment added by 2806:107E:4:ADEE:218:DEFF:FE2B:1215 (talk) 11:13, 2 April 2018 (UTC)[reply]

Right, I was not aware of the Curry-Howard correspondence at the time of writing. But so may be the designers/users of the inference algorithm. IMO, the article could be improved by first explaining the difference between type checking and type inference. Having the Curry-Howard correspondence at hand, one even has a third option: can we find an expression (i.e. a proof) that has a particular type? In this light, introducing a function like fix, of type (α → α) → α, means to introduce an axiom which is not true even in classical logic. On the other hand, the algorithm was designed to practically infer types of recursive programs. This conflicts with the otherwise very useful property of strong normalization imposed by typing. Thus, IMO, when dealing with recursion, one either has to reduce type inference of recursive functions to type checking (breaking inference but preserving the logic) or add some illogical rules to make the inference algorithm applicable, which would work well, I think, if the function typed is in fact terminating. --Cobalt pen (talk) 02:53, 15 April 2018 (UTC)[reply]
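To make the remark above about the Y combinator in Haskell concrete, a hedged sketch (my own code, not from the article): the direct self-application is rejected by HM, but an explicitly recursive fix type-checks, and Y itself can be recovered by hiding the self-application behind a recursive newtype.

```haskell
-- Accepted: recursion at the definition level, no self-application needed.
fix :: (a -> a) -> a
fix f = f (fix f)

-- The self-application (x x) is hidden behind a recursive newtype, so each
-- occurrence type-checks as an application of unroll.
newtype Rec a = Roll { unroll :: Rec a -> a }

y :: (a -> a) -> a
y f = (\x -> f (unroll x x)) (Roll (\x -> f (unroll x x)))

-- The untyped Y combinator itself is rejected by the type checker:
-- y' f = (\x -> f (x x)) (\x -> f (x x))
```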

Reworking the article


Finally, I found a bit of time to rework the article, hopefully addressing the criticism and suggestions.

  • First, the introduction will now contain the most important features, without too much details. (Done)
  • As a second step, these features will be removed from the body of the article, where they now disturb the flow. (Done)
  • I'll add the substitution version of Algorithm W in deductive style, to have the four variants of calculus in a unified form. (Done)
  • Some special topics now mentioned in the body will perhaps be better discussed at the end in a separate section. (Done)

-- Cobalt pen (talk) 14:10, 26 May 2018 (UTC)[reply]

Because some criticism was raised that the article is not accessible enough for an encyclopedia, I've now added a longer introduction. Adding the introduction has some disadvantages:

  • First of all, it makes an already long article even longer. WP:TECHNICAL suggests considering splitting an introduction off into a separate article, which I feel would be wrong in this case.
  • Additionally, I'm not sure whether the level of the introduction came out too low for an article with a highly technical body. But perhaps not...

Hmm, at this point, I feel the templates related to the technicality of the article have been addressed.

-- Cobalt pen (talk) 10:03, 29 May 2018 (UTC)[reply]

Finally, the overhaul of the article is done. It should be more accessible now in several aspects:

  • A casual non-programmer would hopefully get an idea of what the article is about.
  • A programmer with no experience in parametric types or functional programming might find the historical and pragmatic part of the introduction worth reading.
  • People interested in learning about HM by reimplementing it will find easy code examples and test cases to start with.
  • Those working through the proofs in the literature seeking hints or readily laid-out formulas might be served, too.
  • Perhaps one or two ideas for a presentation of HM might come up in those who are preparing one.
  • Last but not least, and perhaps most important for a public encyclopedia, the article is now better prepared for additions and changes.

There are still quite a few gaps and omissions, but I hope they can be closed or corrected more easily than before:

  • The most obvious omission is IMO substitution, substitution order, composition of substitutions, the mgu, etc. But much other material is missing, too. Whether the articles referred to will be enough remains to be seen.
  • The "low" start in the introduction, digressing towards the use of typing in natural language processing and the like, may or may not be received well and may turn out to be debatable, with perhaps interesting results.

Since I did not revise the article for quite a while and left it with its main thread buried deeply under too many distracting side notes, making it hard to read and improve, I thank all the other authors, proof-readers and those who wrote on the talk page for their contributions and patience during this time. I hope I have managed to work most of the valuable suggestions and critiques into this revision.

-- Cobalt pen (talk) 15:15, 30 May 2018 (UTC)[reply]

I've revised the English in the first half of the article. The second half is not too bad. It still needs another — perhaps more aggressive — style pass. DLeonard (talk) 14:19, 18 August 2018 (UTC)[reply]

And, if I may add Cobalt, I appreciated the low-tech introduction. It is not distracting. DLeonard (talk) 13:44, 19 August 2018 (UTC)[reply]
Thanks, Dleonard. Please don't hesitate to rewrite the text more aggressively. I'm not a native speaker, and my clumsiness in expression is unintended. --Cobalt pen (talk) 20:55, 19 August 2018 (UTC)[reply]

Sudden appearance of an indexed turnstile


The article switches from the plain turnstile to an indexed one with no explanation or reference to what the latter means. Could we either stick to one form or define what the index means? --Doradus (talk) 02:08, 23 June 2019 (UTC)[reply]

Hmm, there are in total four different sets of rules in the article:
  1. declarative system
  2. syntactical system
  3. algorithm J
  4. algorithm W
The index is omitted in one place. There is no intention behind leaving out the index, only that none of the rule systems has been introduced at that point. And yes, the indexing is nowhere introduced. -- Cobalt pen (talk) 12:51, 12 August 2019 (UTC)[reply]

Question about "all-quantified"


We say this:

⊢_S is incomplete, as one cannot show λx.x : ∀α. α→α in ⊢_S, for instance, but only λx.x : α→α

But earlier, we said this:

Finally, a type variable may legally occur unbound in a typing, in which case they are implicitly all-quantified

Doesn't that mean that these two are the same? The only difference between them is the universal quantification. --Doradus (talk) 02:26, 23 June 2019 (UTC)[reply]

No, not the same. That type variables are unbound in a type expression does not imply that they are universally quantified. They also have to be unbound in the initial environment. See the related note in the article. Your second quote says the same, but it needs the sentences before it as a precondition (which are intended to be included by the leading word finally). This point is really somewhat subtle. -- Cobalt pen (talk) 13:18, 12 August 2019 (UTC)[reply]

bilber - a typo?


bilber seems to be a typo but it is not obvious what it should be - can anyone figure out what it should be? (from Wikipedia:Correct typos in one click) — Preceding unsigned comment added by Bellowhead678 (talkcontribs) 23:34, 20 October 2019 (UTC)[reply]

It is not a typo. That section uses a nonsense word for demonstration purposes. --Ørjan (talk) 10:54, 21 October 2019 (UTC)[reply]

wp:tone


Hi Ed6767,

you have added a flag to Hindley–Milner type system. Not that I disagree, but I would appreciate it if you'd add a word to the talk page about what in particular caught your eye.

Thanks and kind regards

-- Cobalt pen (talk)

Hi, Cobalt pen. While I can't remember the specifics of why I tagged it (as it was a good few months ago now), at the moment I feel the predominant use of lists and the lack of inline citations in many places, along with other concerns, would warrant this tag. Hope this helps, and feel free to copy this message over to the appropriate talk page. Ed talk! 09:19, 2 October 2020 (UTC)[reply]
copied over -- Cobalt pen (talk) 14:24, 6 October 2020 (UTC)[reply]
Since this is (probably) already in the WP:GOCE's backlog from {{Tone}}, I'll point out some examples of what sounds unencyclopedic:
  • One and the same thing can be used for many purposes.
  • When no chair is at hand [...] Emphasis added.
  • [...] and much like letters in a book [...]
  • The type inference method designed by Hindley and Milner does just this for programming languages.
Some phrases could benefit from (as counterintuitive as it sounds) the passive voice instead of the active voice, but primarily most of the trouble phrases could be "drier" and should try to be as concise as possible. —Tenryuu 🐲 ( 💬 • 📝 ) 15:55, 6 October 2020 (UTC)[reply]
The examples are all from the introduction. I see some conflict of objectives, as the seemingly "wet" style was intended to give a proper introduction to non-technical readers, too. Please see WP:EXPLAINLEAD and, more explicitly, WP:TECHNICAL. And yes, stylistically it can become less ornamented in parts, but the introduction's content might essentially be rather misplaced, leading to an impression of wrong tone. Most particularly, no referable lemma or section for "type" and other prerequisites is present in WP. Thus, the issue might better be fixed by distributing the content of the introduction to other lemmata and then drawing from them.
-- Cobalt pen (talk) 01:57, 9 October 2020 (UTC)[reply]

I've rewritten the offending section and moved it to type inference. Additionally, I removed some remarks with questionable tone. As a consequence, this article now lacks an introduction, which is a different issue, to be filled in later. I am closing the template. -- Cobalt pen (talk) 11:26, 12 October 2020 (UTC)[reply]

wp:length


The article is too long. Besides rebalancing most of the introduction with other lemmata, the body of the article could be reorganized into two parts:

  • The first part could be shaped towards some Type inference (simply typed lambda calculus) lemma and later chopped off, leaving more room to present the let-polymorphism.
  • This second part could then draw more material from the System F lemma. In particular, the conceptually static binding of type variables is currently left implicit, making the rules appear too syntactical.

-- Cobalt pen (talk) 01:57, 9 October 2020 (UTC)[reply]

Discussion of polymorphism is misleading


It's true that Pascal and C didn't have type polymorphism, but other languages from the same era as ML did, notably CLU. 128.84.125.220 (talk) 16:14, 27 April 2022 (UTC)[reply]

Hmm, not sure what to do with your note. You refer to the second paragraph? -- Cobalt pen (talk) 10:30, 23 May 2022 (UTC)[reply]

Article reads like a textbook


There is a lot of wonderful information in this article, and I respect the amount of effort that has clearly been put into it. But it is decidedly not encyclopedic. Phrases like “The remainder of this article proceeds as follows” are not a great sign, and the bulk of the content completely lacks inline citations. It would be nice to preserve the content in this article, perhaps as a blog post or a book on Wikibooks, which could then be provided in this article as an external link for further reading. However, the article itself really ought to be trimmed down to a more focused overview. — Alexis King (talk) 20:25, 30 September 2022 (UTC)[reply]

Yes, Alexis, there is room for improvement, but I disagree that the article violates WP guidelines, namely WP:NOTTEXTBOOK or WP:ORIGINAL.
For the first one, you should state precisely which information should be removed for brevity. At this point, please be aware that many guidelines, e.g. WP:ACCESSIBILITY, also have to be balanced to make the lemma useful for different groups of readers. As a consequence, each group might find sections to simply page over. To facilitate this, the article is ordered from introductory to technical. I agree that the "Overview" section can be removed or rewritten.
I also agree that the article should be as focused as possible. As a minimum, it should present the declarative rule system as well as Algorithms W and J, but without just dumping the rules, as e.g. in calculus of constructions.
Now brevity and focus are also a matter of the state of other lemmata or material one can draw from. At this point you wish that the article should get out of the way, i.e. be moved to a blog or something, while also remaining accessible, which is too unspecific for me. Please see the "wp:length" note on this talk page above. What I think is that, instead, the article should to some degree be distributed into existing or new lemmata.
For instance, a new lemma Type inference (simply typed lambda calculus) could contain quite a bit of the material herein and could then be just summarized and referred to, allowing this article to focus on let-polymorphism and the generalization and instantiation rules. Additionally, and that is a question of content as well as presentation, the article has to bridge between the rule system and the algorithms in some way. Unfortunately, the rule system is both elegant and obscure, and I have not seen this bridging properly resolved in the literature such that it could just be summarized and referred to, as you suggest.
This may be related to your second point. If you have a suitable article at hand, please drop a line. IMO, the point here is the static binding of the type variables left implicit in the rule system, which obscures both the rule system and the generalization process. I think this might be clarified, for the sake of accessibility, by looking at the type system from the point of view of System F, replacing the current bridging section, which is too syntactical for my taste. If you have a different suggestion, please drop a line. Otherwise, I disagree with your assertion that the text provides insufficient evidence or references.
Alexis, you or anyone else could help by being more specific about what should be removed or kept, or about how the lemma should be structured better. I leave the WP:SCs you added to the head of the article to draw some attention to these points, but I find them too unspecific as they are and will remove them in a while if no more concrete suggestions are made.
-- Cobalt pen (talk) 10:44, 21 December 2022 (UTC)[reply]
Since there were no further posts on this topic, I am removing the above WP flags. Beforehand, I considered your points again, but they would equally match almost any CS article in WP, say AVL tree or Turing machine, to pick random instances. I do agree with your last sentence. -- Cobalt pen (talk) 08:35, 8 February 2023 (UTC)[reply]

Second proof under "Typing Rules" is broken


The second proof under "Typing Rules" (i.e., "To demonstrate generalization") is malformed. It concludes that the type of the let expression is forall α . α -> α, but this is a polytype (sigma σ) not a monotype (tau τ), and the [Let] rule may only conclude that a let expression has a monotype.

I would suggest instead including the proof from the Damas-Milner paper, which showed the type for let i = λx.x in i i, also demonstrating generalization. Stephen70edwards (talk) 02:54, 30 March 2023 (UTC)[reply]

You are right, Stephen70edwards. Thank you. Indeed, proofs that produce the principal type often have a generalization step at the end. I'll look for the proof in the Damas-Milner paper, as you suggest. -- Cobalt pen (talk) 14:16, 30 September 2023 (UTC)[reply]
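For reference, a hedged sketch of how that example can be derived so that [Let] indeed ends on a monotype (my own layout with the article's rule names, not a quotation from the Damas-Milner paper):

```latex
\begin{align*}
 &\Gamma \vdash \lambda x.\,x : \alpha\to\alpha
   && \text{[Var], [Abs]}\\
 &\Gamma \vdash \lambda x.\,x : \forall\alpha.\,\alpha\to\alpha
   && \text{[Gen], since } \alpha\notin\mathrm{free}(\Gamma)\\
 &\Gamma,\, i:\forall\alpha.\,\alpha\to\alpha \vdash i : (\beta\to\beta)\to(\beta\to\beta)
   && \text{[Var], [Inst]}\\
 &\Gamma,\, i:\forall\alpha.\,\alpha\to\alpha \vdash i : \beta\to\beta
   && \text{[Var], [Inst]}\\
 &\Gamma,\, i:\forall\alpha.\,\alpha\to\alpha \vdash i\,i : \beta\to\beta
   && \text{[App]}\\
 &\Gamma \vdash \mathtt{let}\ i = \lambda x.\,x\ \mathtt{in}\ i\,i \;:\; \beta\to\beta
   && \text{[Let] (a monotype, as required)}
\end{align*}
```

The key point is that [Gen] is applied to the binding λx.x before [Let], and the let expression itself ends at the monotype β→β.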