Sank's Glossary of Linguistics 
Fop-Fz

FORCED ALIGNMENT

  1. (Phonetics) Forced phonetic alignment (FPA) is the task of aligning a speech recording with its phonetic transcription, which is useful across a myriad of linguistic tasks such as prosody analysis. Annotating phonetic boundaries over several hours of speech by hand, however, is very time-consuming, even for experienced phoneticians. Several approaches, some of them brought from the automatic speech recognition domain, have been applied to automate this process; the combination of hidden Markov models and Gaussian mixture models has long been the most widely explored for FPA. | Cassio Batista and Nelson Neto, 2022
  2. (Phonetics) Given only digital audio, we can study the distribution of speech and silence segments, or of purely acoustic-phonetic features such as fundamental frequency. But for most kinds of speech science, we need to know which words were said when, and how they were pronounced—and this entails the availability of phonetic segmentation and transcription. Relatively few speech corpora come with such annotations, because manual phonetic segmentation is time-consuming, expensive and inconsistent, with much less than perfect inter-annotator agreement (Godfrey et al. 1992, Leung and Zue 1984, Cucchiarini 1993).
     Automatic phonetic segmentation is, therefore, necessary for corpus-based phonetics research. Luckily, automatic phonetic segmentation is the essential result of forced alignment, a technique developed for training automatic speech recognition systems (Jelinek 1976) and for extracting acoustic units for speech synthesis systems (Wightman and Talkin 1997).
     This task normally requires two inputs: recorded audio and a conventional (orthographic) transcription. The transcribed words are mapped into a phone sequence or a lattice of possible phone sequences, by using a pronouncing dictionary and/or grapheme-to-phoneme rules. Phone boundaries are determined by comparing the observed speech signal against pre-trained, hidden Markov model (HMM) based acoustic models. Typically, every phone in the acoustic models is represented as an HMM that consists of three left-to-right non-skipping states: the beginning (s1), middle (s2), and ending (s3) parts of the phone, plus empty start (s0) and end (s4) states for entering and exiting the phone. From the training data, an acoustic model (e.g., a Gaussian mixture model) is built for each state (except s0 and s4), as well as the transition probabilities between pairs of states. The speech signal is analyzed as a succession of frames (e.g., every 10 ms). The alignment of frames with phones is determined by finding the most likely sequence of hidden states (constrained by the known sequence of phones derived from the transcription) given the observed data and the acoustic models represented by the HMMs. The reported performances of state-of-the-art HMM-based forced alignment systems range from 80% to 93% agreement (of all boundaries) within 20 ms compared to manual segmentation (Hosom 2009, Yuan et al. 2013) on the TIMIT corpus (Garofolo et al. 1993). Human labelers have an average agreement of 93% within 20 ms, with a maximum of 96% within 20 ms for highly trained specialists (Hosom 2000). | Jiahong Yuan, Wei Lai, Chris Cieri, and Mark Liberman, 2023
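     The constrained Viterbi search described above can be sketched in a few lines. This is a minimal toy illustration, not a production aligner: the emission matrix stands in for real GMM log-likelihoods, state-transition probabilities are omitted, and all names are illustrative.

```python
import numpy as np

def forced_align(log_emit, phones, states_per_phone=3):
    """Toy forced alignment by Viterbi search over a left-to-right,
    non-skipping state chain (the known phone sequence, flattened).

    log_emit: (n_frames, n_states) frame log-likelihoods per state.
    phones:   phone labels; each contributes `states_per_phone` states.
    Returns (phone, start_frame, end_frame) spans.
    """
    n_frames, n_states = log_emit.shape
    NEG = float("-inf")
    delta = np.full((n_frames, n_states), NEG)   # best path scores
    back = np.zeros((n_frames, n_states), dtype=int)
    delta[0, 0] = log_emit[0, 0]                 # must start in state 0
    for t in range(1, n_frames):
        for s in range(n_states):
            stay = delta[t - 1, s]                          # self-loop
            adv = delta[t - 1, s - 1] if s > 0 else NEG     # advance
            if stay >= adv:
                delta[t, s], back[t, s] = stay + log_emit[t, s], s
            else:
                delta[t, s], back[t, s] = adv + log_emit[t, s], s - 1
    path = [n_states - 1]                        # must end in the last state
    for t in range(n_frames - 1, 0, -1):
        path.append(back[t, path[-1]])
    path.reverse()
    spans, start = [], 0                         # collapse states to phone spans
    for t in range(1, n_frames + 1):
        if t == n_frames or path[t] // states_per_phone != path[t - 1] // states_per_phone:
            spans.append((phones[path[t - 1] // states_per_phone], start, t - 1))
            start = t
    return spans
```

     Because the state sequence is fixed by the transcription, the search only decides when to advance from one state to the next; this constraint is what makes forced alignment much easier than open-vocabulary recognition.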

FORMAL VALUATION OF GRAMMARS
(Grammar) The obvious means for selecting among grammars is in terms of the degree of significant generalization that they achieve. In the conventional sense of the term, a generalization is a single rule about many elements. Generalizing this notion, we might measure the degree of generalization attained by a grammar in terms of the formal similarity among its generative rules, the extent to which they say similar things about elements of various sorts. ... [G]rammars with a greater degree of similarity among rules become, literally, shorter than others which express the same mapping. ... This system of representation defines a "notational transformation" that assigns to each grammar a number, its length when rules are amalgamated. The system for amalgamating rules expresses a hypothesis as to the relations among rules that constitute linguistically significant generalizations. (Chomsky 1975)
 The more the set of primitives can be reduced without becoming inadequate, the more comprehensively will the system exhibit the network of interrelationships that comprise its subject-matter. (Goodman 1943)

 A linguist (and a child) will converge on the most highly valued grammar—that which expresses the greatest depth of generalization, given available data. | Robert May, 2023

See Also FRAZIER SCORING.

FORMANT DYNAMICS

  1. (Phonetics) While research in speaker characteristics has traditionally focussed on "static" properties of the speech signal (e.g. measurement of formant frequencies at a vowel's midpoint), more recent work has shown that dynamic (time-varying) features of speech carry important information about a speaker. An increasing number of studies have demonstrated speaker-distinguishing properties of formant frequency dynamics. | Kirsty McDougall and Francis Nolan, 2007
  2. (Phonetics) Changes in formant frequencies over time. These time-varying spectral features of vowels are also called vowel inherent spectral changes (Nearey and Assmann 1986). Different factors contribute to these spectral changes, for example, vowel-specific formant trajectories, consonantal contexts and prosodic effects (e.g. emphatic stress). Dialects of the same language can also differ in formant dynamics (Fox and Jacewicz 2009). Although the primary research focus of these dynamic formant changes is their relevance in vowel perception (e.g. see Morrison and Assmann 2013), these vowel inherent features, especially vowel-specific trajectories like those in diphthongs, allow much freedom for speaker-specific behaviors in production, e.g. variability in the amount of spectral change and the rate of change. Even if speakers produce the same acoustic targets, they can still differ in the transitions between targets. Therefore, formant dynamics are good candidates to compare between-speaker differences in speech production of identical twins.
     Formant dynamics have stronger discriminatory power than vowel center frequencies in both machine speaker recognition and machine speaker identification using statistical procedures. Greisbach, Esser, and Weinstock (1995) compared single-point and multiple-point measurements in machine speaker recognition. They examined the formant trajectories of six German vowels, measuring the F1 and F2 of each token at every 25% point. It turned out that for all six vowels, the identification rate was much higher when multiple points were used compared to using the center frequency values only. Ingram, Prandolini, and Ong (1996) also measured different points along the formant trajectories and integrated the information in machine recognition, and the results showed a high recognition rate. Therefore, formant dynamics exhibit more between-speaker differences than vowel center formant frequencies do.
     Several studies have demonstrated the success of using formant dynamics in machine speaker identification. Rose, Kinoshita, and Alderman (2006) compared 25 male speakers using the diphthong /aɪ/ in Australian English. Formant frequencies were measured at two sampling points—the steady states of /a/ and /ɪ/ identified from the spectrogram. The statistical model yielded a very high discrimination rate. Zhang, Su, Cao, and Zhao (2010) and Zhang, Morrison, and Thiruvaran (2011) did similar studies with a set of Mandarin Chinese syllables containing either the diphthong /aɪ/ or the triphthong /ɪao/ produced by 20 female speakers. The formant frequencies at the starting points, the midpoints and the end points were used to perform a speaker identification task. When the values of the three points were combined, the identification rate was over 95%. | Donghui Zuo and Peggy Pik Ki Mok, 2015
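     The multiple-point measurements described above can be sketched simply: given a formant track for one vowel token (formant values at equally spaced analysis frames), sample it at relative time points and concatenate the samples into a feature vector. The function names and the five-point grid are illustrative, not taken from the cited studies.

```python
import numpy as np

def sample_trajectory(track, points=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Sample a formant track (Hz per analysis frame) at relative
    time points over the vowel, by linear interpolation."""
    track = np.asarray(track, dtype=float)
    t = np.linspace(0.0, 1.0, len(track))        # relative time axis 0..1
    return np.interp(points, t, track)

def dynamic_features(f1_track, f2_track):
    """Concatenate multi-point F1 and F2 samples into one vector,
    the kind of input used for between-speaker comparison."""
    return np.concatenate([sample_trajectory(f1_track),
                           sample_trajectory(f2_track)])
```

     A vector like this captures the shape of the trajectory rather than a single midpoint value, which is what gives multiple-point measurements their extra speaker-discriminating power.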

FORTIS

  1. (Phonetics) A fortis consonant is a "strong" consonant produced by increased tension in the vocal apparatus. These strong consonants tend to be long, voiceless, aspirated, and high. | SIL Glossary of Linguistic Terms, 2003
  2. (Phonetics) It is claimed that in some languages (including English) there are pairs of consonants whose members can be distinguished from each other in terms of whether they are strong (fortis) or weak (lenis). These terms refer to the amount of energy used in their production, and are similar to the terms tense and lax more usually used in relation to vowels. | Peter Roach, 2011
  3. (Phonetics) Fortis and lenis (Latin for "strong" and "weak"), sometimes identified with tense and lax, are pronunciations of consonants with relatively greater and lesser energy. English has fortis consonants, such as the [p] in pat, with a corresponding lenis consonant, such as the [b] in bat. Fortis and lenis consonants may be distinguished by tenseness or other characteristics, such as voicing, aspiration, glottalization, velarization, length, and length of nearby vowels. Fortis and lenis were coined for languages where the contrast between sounds such as [p] and [b] does not involve voicing (vibration of the vocal cords) (Ladefoged 1996, Halle, Hughes, and Radley 1957). | Wikipedia, 2024

FORTIS ARTICULATION

  1. (Phonetics) Relatively strong or forceful overall articulation. Probably a matter of greater subglottal pressure accompanied by higher airflow and stronger and more definite supraglottal articulatory gestures. Sometimes described as tense articulation. | ?
  2. (Phonetics) The past six decades have seen a number of attempts to define the notion fortis. That its status is not firmly established is evident from a comment by Sihler (2000), who defines fortis as "literally 'strong', a phonetic term of imprecise and arbitrary use, contrasting with lenis 'weak'."
     Hock (1991), dealing only with tenseness in vowels, uses tense with the reservation that "... since 'tense' very frequently corresponds to 'long', these correlations often can instead be expressed as ones between length and relative vowel height or peripherality."
     Pike (1943), in one of the earlier attempts to specify the phonetic correlates of articulatory force, associated fortis articulation with "strong, tense movements ... relative to a norm assumed for all sounds." | Herbert F.W. Stahlke, 2003

FORTITION
(Phonetics) Or, strengthening. Antonym, lenition. A consonantal change that increases the degree of stricture. For example, a fricative or an approximant may become a stop (e.g. [v] becomes [b] or [r] becomes [d]). Although not as typical of sound change as lenition, fortition may occur in prominent positions, such as at the beginning of a word or stressed syllable; as an effect of reducing markedness; or due to morphological leveling. | Wikipedia, 2023

FRAME SEMANTICS

  1. (Semantics) A research program in empirical semantics which emphasizes the continuities between language and experience, and provides a framework for presenting the results of that research. A frame is any system of concepts related in such a way that to understand any one concept it is necessary to understand the entire system; introducing any one concept results in all of them becoming available. In Frame Semantics, a word represents a category of experience; part of the research endeavor is the uncovering of reasons a speech community has for creating the category represented by the word and including that reason in the description of the meaning of the word.
     Similar or comparable notions have developed and are employed in other fields, particularly artificial intelligence and cognitive psychology. | Miriam R.L. Petruck, 1996
  2. (Semantics) With the term frame semantics (FS) I have in mind a research program in empirical semantics and a descriptive framework for presenting the results of such research. FS offers a particular way of looking at word meanings, as well as a way of characterizing principles for creating new words and phrases, for adding new meanings to words, and for assembling the meanings of elements in a text into the total meaning of the text.
     By the term frame I have in mind any system of concepts related in such a way that to understand any one of them you have to understand the whole structure in which it fits; when one of the things in such a structure is introduced into a text, or into a conversation, all of the others are automatically made available. I intend the word frame as used here to be a general cover term for the set of concepts variously known, in the literature on natural language understanding, as schema, script, scenario, ideational scaffolding, cognitive model, or folk theory.
     FS comes out of traditions of empirical semantics rather than formal semantics. It is most akin to ethnographic semantics, the work of the anthropologist who moves into an alien culture and asks such questions as, "What categories of experience are encoded by the members of this speech community through the linguistic choices that they make when they talk?" A frame semantics outlook is not (or is not necessarily) incompatible with work and results in formal semantics; but it differs importantly from formal semantics in emphasizing the continuities, rather than the discontinuities, between language and experience. | Charles J. Fillmore, 1982

FRAZIER SCORING

  1. (Syntax) The Frazier score essentially counts how many intermediate nodes exist in the tree between the word and its lowest ancestor that is either the root or has a left sibling. | Emily T. Prud'hommeaux, Brian Roark, Lois M. Black, and Jan van Santen, 2011
  2. (Syntax) Different syntactic complexity measures rely on varying levels of detail from the parse tree. Some syntactic complexity measures, such as that of Yngve (1960), make use of unlabeled tree structures to derive their scores; others, such as that of Frazier (1985), rely on labels within the tree, in addition to the tree structure, to provide the scores. This is an approach that relies upon the right-branching nature of English syntactic trees (Yngve 1960, Frazier 1985), under the assumption that deviations from that correspond to more complexity in the language.
     Frazier (1985) proposed an approach to scoring syntactic complexity that traces a path from a word up the tree until reaching either the root of the tree or the lowest node which is not the leftmost child of its parent. | Brian Roark, Margaret Mitchell and Kristy Hollingshead, 2007
  3. (Syntax) See below, a simple tree diagram illustrating the computation of Yngve (left) and Frazier (right) syntactic complexity measures. "x" indicates path termination for the Frazier method.
               / \
             /     \
          /           \   
       NP               VP
      2|1.5             1|x
       |               /  \
       |            /        \
       |         /              \   
      PRP    VBD                 NP
      0|1    1|1                 0|x
       |      |                  / \
       |      |               /       \
       |      |            /              \
       |      |        NP                    PP
       |      |        1|1                  0|x
       |      |        / \                  / \
       |      |       /   \                /   \
       |      |      /     \             /       \
       |      |    DT       NN        IN           NP
       |      |    1|1      0|0       1|1         0|x
       |      |     |        |         |         / | \
       |      |     |        |         |        /  |  \
       |      |     |        |         |       /   |   \
       |      |     |        |         |    DT    JJ    NN
       |      |     |        |         |    2|1   1|0   0|0
       |      |     |        |         |     |     |     |
      She   found   a       cat      with    a    red   tail
       |      |     |        |         |     |     |     |
      2|2.5  2|1   3|2      2|0       2|1   3|1   2|0   1|0  
    
     [The author's original diagram includes a sentence-final
    period that is scored "0|0". Also in the original, all
    of the lines are straight and solid, not dashed. ~jps]
     | Serguei Pakhomov, Dustin Chacón, Mark Wicklund, and Jeanette Gundel, 2010
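     Under the description above (and the node weights shown in the diagram, where nodes attached under an S count 1.5), per-word Frazier scores can be sketched as follows. The tree encoding is illustrative, and this omits refinements of the full Frazier (1985) metric.

```python
# A tree is (label, children) for non-terminals and (label, word) for
# preterminals, e.g. ("NP", [("PRP", "She")]).

def frazier_scores(tree):
    """Per-word Frazier scores: climb from each preterminal while the
    current node is the leftmost child of its parent, stopping at the
    root or at the first node that has a left sibling; count 1 per
    node climbed, 1.5 when its parent is a sentence (S) node."""
    scores = []

    def walk(node, ancestors):
        label, kids = node
        if isinstance(kids, str):                 # preterminal over a word
            score = 0.0
            for parent, idx in ancestors:         # nearest ancestor first
                if idx != 0:                      # current node has a left sibling
                    break
                score += 1.5 if parent[0] == "S" else 1.0
            scores.append((kids, score))
        else:
            for i, child in enumerate(kids):
                walk(child, [(node, i)] + ancestors)

    walk(tree, [])
    return scores
```

     On the tree of the diagram above ("She found a cat with a red tail"), this reproduces the Frazier scores on the word line: 2.5, 1, 2, 0, 1, 1, 0, 0.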
See Also FORMAL VALUATION OF GRAMMARS; YNGVE SCORE.

FREE ENRICHMENT
(Pragmatics) Other enrichments are linguistically mandated, as they are enacted by specific elements (Carston 2000, Jary 2016). The procedures, or processing instructions, which some of those elements encode even constrain their output in precise manners (Blakemore 1987, 1992, 2002; Wilson and Sperber 1993, 2002, 2004). Also termed saturation (Recanati 1993, 2002, 2004), these enrichments include:

 In contrast, other enrichments are non-linguistically mandated because they are automatically performed as a prerequisite to turn the logical form into a fully propositional form. Known as free enrichment, they include:  | Manuel Padilla Cruz, 2022

FREE RELATIVE

  1. (Grammar) A non-interrogative wh-clause like the bracketed one here:
     1. Luca tasted [ what Adam cooked ].
     Three kinds of free relatives are attested cross-linguistically:
    1. Definite free relatives: with the distribution and interpretation of definite descriptions like (1).
    2. Existential free relatives: occurring in the complement position of existential constructions.
    3. -Ever free relatives: occurring as:
      1. Arguments, like
         I'll do  [ whatever you say ].
      2. Clausal adjuncts, like
         [ Whatever you say ], I won't change my mind.
     | I. Caponigro, H. Torrence, and C. Cisneros, 2013
  2. (Syntax) The central question for the analysis of headless relative clauses, also known as "free relatives" (henceforth FRs), concerns the position of the wh-phrase (what in (1)), and in particular whether it raises to a clause-internal or clause-external position (see Van Riemsdijk 2006 for a survey and references).
    1. (I eat) [FR whati you cook ti ].
     The crucial property of (1) is its nominal character: it occurs in a position otherwise restricted to a DP argument. This is the way in which what you cook in (1) differs from what you cook in (2), where the same string is interpreted as an indirect question.
     2. (I wonder) [Q whati you cook ti ].
     In languages like German, where nominal elements can occur in the middle field but (finite) clausal categories can do so only very marginally, the asymmetry between FRs and embedded questions comes out clearly.
     1. Ich werde [FR was ich gefunden habe ] niemandem (t) zeigen.
        I   will     what I   found    have   nobody        show
        'I won't show to anybody what I found.'
     2. ?? Mir hat sie [Q wer es gesagt hat ] ja  nicht (t) gesagt.
           me  has she    who it said   has   PRT not       said
        'She didn't tell me who said it.'
     | Dennis Ott, 2011

FREE VARIATION
(Phonology) A phenomenon where two different sounds can be used interchangeably in speech. Linguists define this phenomenon using the test of perceived authenticity by native speakers. In other words, if the two different sounds can both be used by native speakers, and are considered correct pronunciation, their dual use qualifies as free variation.
 The sounds used in free variation can be either vowels or consonants. One common example in English is the word data. Here, the short "a" sound, as in apple, can be used in the first vowel position, or, the speaker can instead use the long "a" sound as in the word day. These are commonly accepted pronunciations in American English and most other regional forms of the language.
 Other examples include the use of consonant sounds. Some of these can be extremely technical and nuanced. For example, in American English, words like rope can be pronounced either with a glottal stop, where the listener doesn't really hear the "p" sound, or with a full plosive, where the "p" at the end is prominent. | A. Leverkuhn, 2023

FREEZING PRINCIPLE

  1. (Syntax) Wexler and Culicover (1980) proposed the "Freezing Principle", based on considerations of language learnability. The basic idea was that a transformationally created structure that is not compatible with the base phrase structure rules of a language is frozen. Such a derivation is non-structure-preserving, in the sense of Emonds (1970, 1976).
     We give a simple illustration. In cases such as (1), the heavy NP a picture of who has arguably moved from the position adjacent to the verb to the end of the VP. Since the configuration [VP V PP NP] is not a base configuration in English, it is frozen. Hence it should not be possible to extract from any constituent of the VP, according to the Wexler-Culicover Freezing Principle.
    1. a. * Whoi did you give __ to Robin [ a picture of ti ]?
      b. * Whoi did you give __ to ti [ a picture of Sandy ]?
      ('__' indicates the gap corresponding to the canonical position of the direct object.)
     A point to note here is that the task of finding the gaps in such constructions is a complex one. There is one gap immediately after the verb, and another in the VP-final DP. Especially when the sentence is read silently and there is no context given, there is nothing to tell the processor to look for the trace of who in the VP-final DP. | Peter W. Culicover and Susanne Winkler, 2010
  2. (Syntax) Müller (2010) proposes a contemporary version of the Wexler-Culicover Freezing Principle to explain the fact that extraction is not possible in German from a specifier, if it is last-merged in its projection (e.g. subjects). However, it is possible when some other phrase scrambles over the last merged specifier and becomes itself the last-merged specifier within the same phrase, which he refers to as melting.
     For instance, he observes that (1b) is ungrammatical, but that (1a), where the freezing configuration has been removed, is grammatical.
     1. a. Was1 haben [DP2 den Fritz ] [DP3 t1 für Bücher ] beeindruckt?
           what have       the Fritz.ACC        for books.NOM impressed
        b. * Was1 haben [DP3 t1 für Bücher ] [DP2 den Fritz ] beeindruckt?
             what have        for books.NOM       the Fritz.ACC impressed
           'What kind of books impressed Fritz?'
     On Müller's account, was für Bücher in (1b) is frozen, because it is last-merged in the specifier-position of vP. However, it is not frozen in (1a), because the movement of den Fritz over it by scrambling removes the offending configuration that froze it—this is melting. | Peter W. Culicover and Susanne Winkler, 2010

FRUSTRATIVE
(Grammar) A functional element found in a number of languages which expresses, in its typical use, that an action did not have its intended consequences, as in the following examples:

  1. Mapudungun (Salas 1992 apud Soto and Hasler 2015)
     Katrü-fu-n     ñi    wili
     cut-FRUSTR-1SG 1POSS fingernail
     'I cut my fingernails (but they’re still long).'
  2. Chorote
     A-lej-a-ta              ki  i-ʼyuʼ,          ¡tʼọjliʼ!
     1SG.ACT-wash-MOM-FRUSTR DET 1SG.POSS-clothes 3.dirty
     'I washed my shirt (but) it's still dirty!'
 | Andrés Pablo Salanova and Javier Carol, 2020

FUCKIN' INSERTION
(Syntax) Or, Expletive Infixation. A process in English by which a restricted class of infixes (fuckin', bloody, bloomin') is inserted between two metrical feet of which the latter one contains the syllable that carries the main stress.

  1. a. mònonga-fuckin-héla
    b. fàn-bloody-tástic
    c. àbso-bloomin'-lútely
  2. a. * monòng-fuckin-ahéla
    b. * cháco-fuckin-pèe
    c. * chi-fuckin-cá
 (Aronoff 1976, McCarthy 1982) | Utrecht Lexicon of Linguistics, 2001

FULL-REVERSIBLE PASSIVE
(Syntax) Includes an agent by-phrase and two animate arguments. | Susannah Kirby, 2010

FUNCTIONAL DISCOURSE GRAMMAR
(Grammar) Like functional grammar (FG), a grammar model and theory motivated by functional theories of grammar, which explain how linguistic utterances are shaped, based on the goals and knowledge of natural language users. In doing so, it contrasts with Chomskyan transformational grammar. FDG has been developed as a successor to FG, attempting to be more psychologically and pragmatically adequate than FG. (Hengeveld and MacKenzie 2008, Mackenzie and Gómez-González 2005)
 The top-level unit of analysis in FDG is the discourse move, not the sentence or the clause. This is a principle that sets FDG apart from many other linguistic theories, including its predecessor FG. | Wikipedia, 2021

FUNCTIONAL HEAD

  1. (Syntax) Functional-head theory is the claim that grammatical meanings are represented as heads in syntax, in much the same way as contentive heads (verbs and nouns). | Martin Haspelmath, 2022
  2. (Syntax) D (determiner) is a "functional head" because it has abstract grammatical properties (rather than lexical properties), dominating features such as number, gender and case. | English Language and Linguistics Online

FUNCTIONAL PROJECTION
(Syntax) In X-bar theory, the projection of a functional head, such as COMP (CP), INFL (IP), or Det (DP). The underlying assumption is that, like the lexical heads N, A, V and P, functional heads have a syntactic projection as dictated by X-bar theory.
 A fundamental question is whether there is a limit to the number of functional categories. It seems reasonable to assume that their projection is in some sense parasitic on, or an extension of, a lexical projection. (Abney 1987, Chomsky 1986, Ouhalla 1990) | Utrecht Lexicon of Linguistics, 2001

FUNNY R, FUNNY W
(Phonology) The Comparative Siouan Dictionary (Rankin, Carter and Jones 1998, Rankin et al. 2015) points out that proto-Siouan seems to have had five sonorants, which they reconstruct as *r, *y, *w, *R and *W. Informally among themselves, the dictionary editors refer to *R and *W as "funny r" and "funny w". The difference between *r and *R, or between *w and *W, is murky and confusing, since their reflexes are pretty much the same segments (e.g. [l, r, n, d, y, ð], maintaining sonority and coronality for both *r and *R), but distributed differently among the daughter languages. In particular, in Lakota, *r descends as /y/, and *R descends as /l/. | David Rood, 2016

FUSION
See COALESCENCE.

FUSIONAL LANGUAGE

  1. (Typology) A language in which one form of a morpheme can simultaneously encode several meanings.
     Fusional languages may have a large number of morphemes in each word, but morpheme boundaries are difficult to identify because the morphemes are fused together.
     Most European languages are somewhat fusional. In Spanish the suffix -ó in habló 'he spoke' simultaneously codes indicative mode, third person, singular, past tense, and perfective aspect. If any one of these meaning components changes, the form of the verbal suffix must change (Payne 1997).
     The opposite of a highly fusional language is a highly agglutinative language. | SIL Glossary of Linguistic Terms, 2008
  2. (Typology) Or, inflected language. A type of synthetic language, distinguished from agglutinative languages by their tendency to use a single inflectional morpheme to denote multiple grammatical, syntactic, or semantic features.
     For example, the Spanish verb comer ('to eat') has the first-person singular preterite tense form comí ('I ate'); the single suffix -í represents both the features of first-person singular agreement and preterite tense, instead of having a separate affix for each feature.
     Another illustration of fusionality is the Latin word bonus ('good'). The ending -us denotes masculine gender, nominative case, and singular number. Changing any one of these features requires replacing the suffix -us with a different one. In the form bonum, the ending -um denotes masculine accusative singular, neuter accusative singular, or neuter nominative singular. | Wikipedia, 2022

FUTURATE

  1. (Grammar) A sentence that has future reference in the absence of future-oriented morphology, as in (1). Futurates have a "planned" or "settled" flavor, as shown by the fact that the sentences in (2) are infelicitous:
    1. a. I make the coffee tomorrow.
      b. The Red Sox play the Yankees tomorrow.
    2. a. # I get sick tomorrow.
      b. # The Red Sox beat the Yankees tomorrow.
    Natural or clockwork futurates also exist; they can describe settled eventualities but not unsettled ones:
     3. a. The sun rises tomorrow at 5:48.
      b. # It rains tomorrow at 5:48.
     Futurates have complex meaning with seemingly no morphemes to express it. | Bridget Copley, 2023
  2. (Grammar) A usage in which the future is referred to without using a traditional future construction. The usual way to do this is with a multi-word form of the present tense.
     The two sentences I am being married in the fall and I am getting married in the fall are examples of the present progressive futurate. | Patricia T. O'Conner and Stewart Kellerman, 2014


Page Last Modified October 7, 2024
