Sank's Glossary of Linguistics 
Feature-Feec

FEATURE BLINDNESS
(Acquisition) A deficit in marking a specific class of linguistic features. | M. Gopnik, 1990

FEATURE CHECKING

  1. (Syntax) Notion in checking theory. Feature checking is a relation between two elements such that one or more designated features they share are eliminated. Example:

    1. who did you see

     The +wh feature of who is checked in the specifier position of CP (spec,CP) against the +wh feature of C. If who or C does not check its +wh feature, the derivation crashes:

    1. *you saw who

     | Utrecht Lexicon of Linguistics, 2001
  2. (Examples)
     ○ I argue that both versions of the Person Case Constraint arise when two objects enter a feature checking relation with one and the same functional head, namely transitive v. | Elena Anagnostopoulou, 2005
     ○ Structural relations are no longer absolute in the minimalist framework. Nevertheless, there is a relationship that can be absolutely (and unambiguously) determined in the minimalist theory; namely, the relationship that is created by formal feature-checking. In the theory of Chomsky (1995), it is assumed that formal features such as Case-features or categorial features are syntactic primitives and that they play a role in entering into checking relations. Therefore, it is natural to hypothesize under this theory that grammatical relations / functions are related to checking relations. | Hiroyuki Ura, 1995
     ○ Head chains (and subcategorization relations) can be formed in exactly the same environments that allow head-movement: a head X can subcategorize for features on a head Y iff Y can move to X. This makes subcategorization very similar to feature-checking as conceived in Chomsky 1993: a head X checks features on a head Y when Y moves to X. If subcategorization is seen as an LF condition, then it could be recast as feature-checking satisfied under abstract head movement. | Peter Svenonius, 1994
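
The checking relation in sense 1 can be illustrated with a toy computation. The following Python sketch treats lexical items as feature sets and checking as elimination of a shared designated feature; the feature names and data structures are illustrative only, not part of any cited proposal.

```python
# Toy sketch of feature checking: when a moved item lands in a checking
# configuration with a head, a designated feature they share is eliminated.
# All feature labels here are illustrative.

def check(mover: set, head: set, feature: str) -> tuple[set, set]:
    """Eliminate `feature` from both bundles if both bear it."""
    if feature in mover and feature in head:
        return mover - {feature}, head - {feature}
    return mover, head  # no checking: the unchecked feature survives

# "who did you see": +wh on `who` is checked in spec,CP against +wh on C
who, c = check({"+wh", "D"}, {"+wh", "C"}, "+wh")
converges = "+wh" not in who and "+wh" not in c   # True: derivation converges

# "*you saw who": `who` never reaches spec,CP, so +wh goes unchecked
who2, c2 = {"+wh", "D"}, {"+wh", "C"}
crashes = "+wh" in who2   # True: the unchecked +wh crashes the derivation
```

The same schema extends to any designated feature pair; the point is only that a surviving unchecked feature is what "crashes" a derivation.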

FEATURE GEOMETRY

  1. (Phonology) This theory, in its simplest and most general form, characterizes segment-internal feature structure in terms of a feature tree whose terminal nodes are features, whose intermediate nodes are feature classes, and whose root node groups all features defining the segment. The principal objective of this approach is to provide a formal characterization of the class of possible phonological processes. | G.N. Clements, 2006
  2. (Phonology) A fundamental problem in phonological theory is the fact that processes often operate on consistent subsets of the distinctive features within a segment, like the features that characterize place of articulation. Recent research has responded to this problem by proposing a hierarchical organization of the features into functionally related classes, grouped under nodes of a tree structure. | John J. McCarthy, 1988
  3. (Phonology) Since the publication of Clements's (1985) pioneering paper on the geometry of phonological features a consensus has emerged among many investigators that the complexes of features that make up the phonemes of a language do not form a simple list, but possess a hierarchical structure represented geometrically in a tree.
     A major argument in support of this proposal was the observation that only a small fraction of the logically possible pairs, triplets, ..., n-tuples of features have been shown to figure in actual phonological rules. For example, there are no phonological rules that involve groups of phonemes defined by such feature pairs as [−back, −continuant], [+strident, −round], or [−low, +stiff vocal folds]. The feature tree takes formal account of this observation by splitting the universal list of features into mutually exclusive subsets of features and grouping the subsets into higher-order sets. If it is further assumed that only these feature sets can be referred to by the rules and principles of the phonology, then other feature sets—for example, the feature pair [−back, −continuant] and the others just cited— are excluded from figuring in the phonology. | Morris Halle, 1995
  4. (Example)
     ○ The broader hierarchy determining affixal ordering in Nunggubuyu (Australian; Australia)
    person features > number features > gender features > class features
    is represented via degrees of embedding. These nodes, analogous to the organizing nodes of phonology, are in a dominance relation with one another, although the features they themselves dominate are not. This gives the following proposed feature geometry:
    
                AGR
                 |
               PERSON
                 /\
                /  \
               /    \
             prt    NUMBER
              |       /\
              |      /  \
              |     /    \
            spkr   pl    GENDER
              |            /\
              |           /  \
              |          /    \
             inc        f   ...class features...
    
    
     This feature geometry is of the type proposed for phonology by Rice and Avery (1991/1993), one in which only marked features can have dependents. The person features could be represented as follows:
    
                    PERSON
                      /\
                     /  \
                    /    \
                 +prt    -prt
                  /\
                 /  \
                /    \
             +spkr  -spkr
               /\
              /  \
             /    \
          +inc   -inc
    
    
     | Heidi Harley, 1994
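
A geometry of the Rice and Avery type, in which only marked features project dependents, maps naturally onto a nested tree. In the Python sketch below, the nested-dict encoding and function name are illustrative assumptions; only the node labels come from the person-feature tree above.

```python
# Toy encoding of the person-feature geometry above: each node maps to its
# dependents, and only marked features have any. The structure itself
# enforces Harley's dominance relations.
PERSON = {
    "+prt": {               # marked: participant projects dependents
        "+spkr": {          # marked: speaker projects dependents
            "+inc": {},     # inclusive (no further dependents)
            "-inc": {},
        },
        "-spkr": {},        # unmarked: no dependents allowed
    },
    "-prt": {},             # unmarked: no dependents allowed
}

def dominates(tree: dict, feature: str) -> bool:
    """True if `feature` appears anywhere under this node."""
    return feature in tree or any(dominates(sub, feature) for sub in tree.values())

# +inc is reachable only through +spkr, which sits under +prt:
assert dominates(PERSON["+prt"], "+inc")
assert not dominates(PERSON["-prt"], "+inc")
```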

FEATURE INHERITANCE

  1. (Syntax) Chomsky (2008) proposes a reinterpretation of the relation between the functional heads C and T: the Agree (φ-) and Tense features associated with the inflectional system are not an inherent property of T; instead, they belong to the phase head C. Traditional subject agreement and EPP (Extended Projection Principle) effects associated with T (A-movement of the formal subject to Spec,T, expletives, etc.) then arise via a mechanism of feature inheritance, whereby uninterpretable features are passed down from the phase head to its complement. It follows that T lacks uninterpretable features unless it is selected by C. That is, T is no longer a probe in its own right; it cannot initiate operations directly or independently of C.
     Clearly, in this way, feature inheritance captures the long-standing observation that raising/ECM (exceptional Case marking)-infinitival T, which lacks C, also lacks φ-features (failing to value Case on DP) and independent tense (see Chomsky 2000, 2004, 2005). However, where the previous system had to stipulate this connection by means of a selectional restriction (C selects φ-complete T; V selects φ-defective T), the feature inheritance model offers an arguably more explanatory account of T's featural dependence on C: the features are simply C's, not T's. This, in turn, allows a uniform characterization of phase heads (C, v*) as the locus of uninterpretable features, as is desirable on computational grounds. | Marc D. Richards, 2007
  2. (Syntax) Recent minimalist reinterpretation of the C-T dependency: T does not enter the derivation with its own set of inflectional features; rather, T inherits its feature content (φ- and Tense-features) from the phase head C before agreement with the subject is established (Chomsky 2004, 2008 and subsequent work). | Eric Fuß, 2012
  3. (Example)
     ○ Feature Inheritance (FI) of Chomsky (2008) is clearly an instance of feature splitting, as illustrated in (1).

    1.  [CP C[φ] [TP T[φ] [vP ... DP ... ]]]
             |________↑ |__________↑
                FI          Agree

     | Brian Agbayani and Masao Ochi, 2020
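
The inheritance mechanism can be sketched as a transfer of uninterpretable features from the phase head C to its complement T. In the toy Python below, the u- prefix marking uninterpretable features and the set encoding are illustrative assumptions, not any author's formalism.

```python
# Toy sketch of feature inheritance: uninterpretable (u-prefixed) features
# originate on the phase head C and are passed down to T, so T probes only
# when selected by C. Feature labels are illustrative.

def inherit(c_feats: set, t_feats: set) -> tuple[set, set]:
    """Pass C's uninterpretable features down to T."""
    passed = {f for f in c_feats if f.startswith("u")}
    return c_feats - passed, t_feats | passed

c, t = inherit({"C", "uPhi", "uTense"}, {"T"})
finite_t_probes = "uPhi" in t    # True: T selected by C can probe

ecm_t = {"T"}                    # raising/ECM infinitival T, with no C above
ecm_t_probes = "uPhi" in ecm_t   # False: nothing to inherit, so no probing
```

This captures the point in the Richards quotation: T's featural dependence on C follows because the features are simply C's, not T's.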

FEATURE MATRIX

  1. (General; Semantics) A set of features that characterizes a given set of linguistic units with respect to a finite set of properties. In lexical semantics, feature matrices can be used to determine the meaning of specific word fields.
             [male]  [adult]
     man       +       +
     woman     −       +
     boy       +       −
     girl      −       −
     | Glottopedia, 2009
  2. (Morphosyntax) The alternative proposal presented in this paper is to represent complex grammatical categories as feature matrices. This solution is inspired by "distinctive features" in phonology that are used for classifying sounds in terms of binary values such as [voiced +] for /d/ and [voiced −] for /t/. We can easily extrapolate this idea to grammar and treat grammatical paradigms in terms of relevant distinctions.
     How can we capture relevant distinctions for German case? Assume that case is not a feature with a single value, but an array of the case paradigm of that language. Each case is explicitly represented as a feature whose value can be "+" or "−", or left unspecified through a variable (indicated by a question mark).
     The feature matrix for German case

     Case    S-M        S-F        S-N        PL
     ?NOM    ?nom-s-m   ?nom-s-f   ?nom-s-n   ?nom-pl
     ?ACC    ?acc-s-m   ?acc-s-f   ?acc-s-n   ?acc-pl
     ?DAT    ?dat-s-m   ?dat-s-f   ?dat-s-n   ?dat-pl
     ?GEN    ?gen-s-m   ?gen-s-f   ?gen-s-n   ?gen-pl
     Each cell in this matrix represents a specific feature bundle that combines the features case, number, and person. For example, the variable ?nom-s-m stands for 'nominative singular masculine'. Since plural forms do not mark differences in gender, only one plural cell is included for each case. Note that also the cases themselves have their own variable (?nom, ?acc, ?dat and ?gen). This column allows us to single out a specific dimension of the matrix for constructions that only care about case distinctions but abstract away from gender or number. Moreover, this additional column of variables captures crucial correlations between the various alternatives of case-gender-number assignment. | Remi van Trijp, 2011
  3. (Phonology) The complete set of specified features of a sound segment. | Utrecht Lexicon of Linguistics, 2001
  4. (Examples)
     ○ The table below shows features for a subset of Shilluk (Western Nilotic; Sudan, South Sudan) consonants.
    Shilluk
    PanPhon Feature Matrix
    (Mortensen et al. 2016)
    [PLACE] [SON] [CONT] [NAS] [DENT] [VOI]
    /p/ LAB
    /m/ LAB + + +
    /t/ COR
    /d/ COR +
    /n/ COR + + + +
    /d/ COR +
    /t̪/ COR +
    /d̪/ COR + +
    /n̪/ COR + + + +
    /l/ COR + + +
     | Lydia Quevedo and Kate Mooney, 2025
     ○ The paper focuses on the different functions of the Hijazi Arabic (HA) maa and contributes to the HA literature by describing these different functions and claiming that they are not instances of homonymy, but of multifunctionality. Those different functions are governed by the different syntactic environments that maa occurs in. Its occurrence in multiple syntactic environments suggests that maa has a feature matrix that includes its morphosyntactic features and their specifications that express the appropriate use and interpretation of a given structure. | Mohammad Ali Al Zahrani, 2020
     ○ Consider the normal interpretation of a two-dimensional feature matrix such as the following:
                    p    i    n
     syllabic       −    +    −
     sonorant       −    +    +
     continuant     −    +    −
     high           −    +    −
     back           −    −    −
     voiced         −    +    +
      ⋮
     Each phoneme in this matrix is defined by the set of feature values occurring in its column. More exactly, in the conception of Chomsky and Halle (1968), a feature column is a function assigning a certain entity, a phoneme, to a set of phonetic categories which determine its physical properties. | G.N. Clements, 1985
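
A feature matrix in this sense, where each column is a function from features to values, maps naturally onto a dictionary encoding. In the Python sketch below, the feature values follow standard assumptions for /p/, /i/, /n/, and the function name is invented for illustration; it shows how such a matrix picks out natural classes.

```python
# Toy feature matrix in the Clements/Chomsky-Halle style: each phoneme's
# column is a function assigning a +/- value to each feature. Values follow
# standard phonetic assumptions; the encoding is a sketch.
MATRIX = {
    "p": {"syllabic": "-", "sonorant": "-", "continuant": "-", "voiced": "-"},
    "i": {"syllabic": "+", "sonorant": "+", "continuant": "+", "voiced": "+"},
    "n": {"syllabic": "-", "sonorant": "+", "continuant": "-", "voiced": "+"},
}

def natural_class(matrix: dict, spec: dict) -> set:
    """Phonemes whose columns match every feature-value pair in `spec`."""
    return {seg for seg, feats in matrix.items()
            if all(feats.get(f) == v for f, v in spec.items())}

natural_class(MATRIX, {"sonorant": "+"})                   # {'i', 'n'}
natural_class(MATRIX, {"sonorant": "+", "syllabic": "-"})  # {'n'}
```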

FEATURE MOVEMENT
(Examples)
 ○ According to Roberts and Roussou (2003), the process of grammaticalization is technically based on head movement. Since functional heads are bundles of features or maybe a single feature, I suggest that grammaticalization can arise via feature movement as well. | Ivona Kučerová, 2023
 ○ Inflectional features must be licensed on a V by Feature Transmission, i.e. feature movement. | Karlos Arregi and Peter Klecha, 2014
 ○ Another option could be that the [DEF] feature moves to right-adjoin directly to the AP. However, this is not a valid movement for features under the classical formulation of feature movement in Chomsky 1995. | Ruth Kramer, 2010
 ○ Under Cheng's (2000) analysis, the wh-feature first moves into the embedded CP, and this triggers pied-piping of the category. The wh-feature then moves on to the matrix CP where it is spelled out. This subsequent feature movement leaves the wh-phrase behind in the embedded CP. Given such an analysis, the existence of "partial" wh-movement, like the behavior of English wh-subjects, suggests that feature movement may indeed apply and that it may apply independently of category pied-piping in overt syntax. | Brian Agbayani, 2006
 ○ There are arguments that binding relations cannot be established via feature movement in LF (see Lasnik and Uriagereka 2005). | Dorian Roehrs, 2006
 ○ When the subject is a common-noun phrase, on the other hand, checking is postponed until LF and the finite verb will be perfectly capable of having its formal features checked against the wh phrase in SpecCP, after feature movement to C. | Marcel den Dikken, 2000
 ○ The account of long-distance binding in Russian in this work is not logophoric but syntactic. It is based on the head movement framework, but I modify this framework and implement it in the Minimalist framework. I consider reflexive movement as [+R] feature movement: the [+R] feature of the reflexive moves to the T whose specifier is the reflexive's antecedent. | Elena Leonidonna Rudnitskaya, 2000
 ○ Chomsky (1995) proposes that all movement is in essence feature movement. | Man-Ki Lee, 1996

FEATURE RETRIEVAL COST

  1. (Psycholinguistics) Cost function (at x, given m_x items to be retrieved from memory):
    FRC(x) = ∏_{i=1}^{m_x} (1 + nF_i)^{m_i} / (1 + dF_i)

     | Cristiano Chesi, 2023
  2. (Psycholinguistics) To predict processing difficulties at retrieval, we associate a cost to the memory buffer access: this cost grows exponentially with respect to the number of items stored (m), linearly with respect to the number of new features to be retrieved from memory (nF), and it is mitigated (linearly, again) by the number of distinct cued features (dF) by x (the region where retrieval is requested, e.g. the verbal predicate). This is the core of the Feature Retrieval Cost (FRC) function:
    Feature Retrieval Cost (FRC)
    FRC(x) = ∏_{i=1}^{n} (1 + nF_i)^{m_i} / (1 + dF_i)
     | Cristiano Chesi and Paolo Canal, 2019
  3. (Example)
     ○ In Chesi (2016) the author proposes a complexity metric called Feature Retrieval Cost that he associates to the Minimalist theory. | Rodolfo Delmonte, 2016
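
Read literally, the FRC function multiplies one term per item retrieved at x. The Python sketch below is one reading of the published formula; the per-item grouping of nF, dF, and the exponent m is an interpretive assumption, not the authors' implementation.

```python
# Sketch of the Feature Retrieval Cost function as printed above:
# FRC(x) = product over items i of (1 + nF_i)^{m_i} / (1 + dF_i).
# Per the prose definition, cost grows exponentially in items stored (m),
# linearly in new features to retrieve (nF), and is mitigated linearly
# by distinct cued features (dF). The (nF, dF, m) grouping is assumed.

def frc(items: list[tuple[int, int, int]]) -> float:
    """items: one (nF, dF, m) triple per element to be retrieved at x."""
    cost = 1.0
    for nF, dF, m in items:
        cost *= (1 + nF) ** m / (1 + dF)
    return cost

# Retrieval is cheaper when the cue shares more distinct features (dF):
frc([(2, 0, 2)])  # 9.0
frc([(2, 2, 2)])  # 3.0
```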

FEATURE SPLITTING

  1. (Syntax) I propose allowing for feature-splitting, in the spirit of Saito 2001/2003, whereby only the formal features attracted by a particular head move (or are retained, under a copy theory), the others remain behind (or are deleted, under a copy theory). | John Frederick Bailyn, 2003
  2. (Examples)
     ○ Feature Splitting under External Merge makes interesting predictions for Parasitic Gap constructions. | Brian Agbayani and Masao Ochi, 2022
     ○ The growth of causative structures in child language suggests that an operation of Feature-splitting must exist. | Thomas Roeper, 1999

FEATURE-SPLITTING INTERNAL MERGE

  1. (Syntax) One may wonder how to derive sentences with object wh-movement like What did you buy t?. In this case, vP is of the form
    [γ v [β t′_IA [α R[uPhi] t_IA]]]
     where what, involving [vPhi], has escaped from Spec-R, and hence the uPhi on R cannot participate in feature-sharing with vPhi on the IA. This problem may be solved by the feature-splitting Internal Merge proposed by Obata and Epstein (2008), although it remains unclear whether it is compatible with the present framework by Chomsky (2013, 2015, 2020), where feature-driven IM is dispensed with. According to Obata and Epstein, a copy of a wh-phrase in an A-position involves uCase and vPhi but lacks an interrogative feature Q, whereas the one in an A′-position has Q but lacks uCase and vPhi. Given this, What did you buy t? is structured like
    [what[Q] C ... [γ what v [β what[vPhi][uCase] [α R[uPhi] what]]]]
    where the lower copy of what in Spec-R involves vPhi and uCase. Given this, the lower copy of what participates in feature-sharing with R, thereby identifying β as TD. | Takanori Nakashima, 2020
  2. (Syntax) I propose Feature-Splitting Internal Merge, where features on a single element are split between two landing sites, which enables valued [uCase] not to appear at a phase edge but rather to be split off to a non-edge landing site inside the phasal domain. One of the direct consequences of the proposed mechanism is an explanation of improper movement phenomena, which have been a longstanding problem since Chomsky (1973) discovered them. Under the feature-splitting system, improper movement is ruled out by causing featural crash. In addition, this system implies a new way to define two types of syntactic positions—A/A′-positions. | Miki Obata, 2010
  3. (Syntax) Chomsky (2007, 2008) proposes a feature inheritance mechanism, by which T and V do not inherently bear φ-features but rather inherit those features from C and v, respectively. Under this system, C and T (similarly, v and V) serve as probes simultaneously, which enables him to explain suppression of the subject condition. Feature-splitting Internal Merge, which is proposed in Obata and Epstein (2008, 2011), is a new mechanism for structure building evoked in such a context whereby a bundle of features on a single goal/element is split into two landing sites as a consequence of simultaneous application of Internal Merge triggered by C and T (and also by v and V). Given this mechanism, when T and C simultaneously attract a subject DP, T attracts only the features which it agreed with and C attracts the rest. That is, a copy moved to the edge of CP does not bear features which are involved in Agree by T, namely φ-features and Case-features. This system rules out so-called improper movement phenomena as featural crash. According to the proposed mechanism, improper movement (i.e. long-distance A-movement via an A′-edge, first discussed in Chomsky (1973)) induces crash because whenever a moving element reaches an A′-position it must necessarily (and simultaneously) φ-agree with the local T; therefore any element which has moved into an A′-position will be unable to value φ-features on a higher, probing T.
     One of the consequences of the proposed feature-splitting system is that the A/A′-distinction may be defined solely based on features, and not positions in a phrase structure tree. | Miki Obata, 2012
  4. (Example)
     ○ We claim that improper movement is excluded by virtue of Agree failure between a moving element and a finite T as a consequence of "feature-splitting" Internal Merge. We propose feature splitting as the most (or at least a very) natural implementation of Chomsky's φ-feature-inheritance system and Richards's (2007) value-transfer simultaneity. | Miki Obata and Samuel David Epstein, 2011
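
The splitting operation described by Obata and Epstein can be sketched as a partition of a goal's feature bundle by the attracting heads. In the toy Python below, the feature names and set encoding are illustrative, not the authors' notation.

```python
# Toy sketch of feature-splitting Internal Merge: when C and T attract a
# subject simultaneously, the features T agreed with land in Spec,T and
# the remainder lands in Spec,C. Feature labels are illustrative.

def split_merge(goal: set, t_attracts: set) -> tuple[set, set]:
    """Split a goal's feature bundle between the Spec,T and Spec,C copies."""
    to_t = goal & t_attracts     # features T agreed with stay in the A-position
    to_c = goal - t_attracts     # the rest (e.g. Q) moves to the A'-position
    return to_t, to_c

spec_t, spec_c = split_merge({"phi", "uCase", "Q"}, {"phi", "uCase"})
# The A'-copy in Spec,C lacks phi/Case, so a higher probing T cannot value
# its phi-features against it -- deriving the ban on improper movement:
"phi" in spec_c   # False
```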

 

Page Created By Split April 11, 2026

 