Tag Archives: Skills
Corpus-based Linguistic Research: From Phonetics to Pragmatics

Mark Liberman – University of Pennsylvania
Course time: Monday/Wednesday 1:30-3:20 pm
Aud C

Course website: http://languagelog.ldc.upenn.edu/myl/lsa2013/

Big, fast, cheap, computers; ubiquitous digital networks; huge and
growing archives of text and speech; good and improving algorithms for
automatic analysis of text and speech: all of this creates a
cornucopia of research opportunities, at every level of linguistic
analysis from phonetics to pragmatics. This course will survey the
history and prospects of corpus-based research on speech, language,
and communication, in the context of class participation in a series
of representative projects. Programming ability, though helpful, is
not required.

This course will cover:

* How to find or create resources for empirical research in linguistics
* How to turn abstract issues in linguistic theory into concrete
questions about linguistic data
* Problems of task definition and inter-annotator agreement
* Exploratory data analysis versus hypothesis testing
* Programs and programming: practical methods for searching,
classifying, counting, and measuring
* A survey of relevant machine-learning algorithms and applications

We will explore these topics through a series of empirical research
exercises, some planned in advance and some developed in response to
the interests of participants.
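
As a minimal illustration of the kind of "searching, classifying, counting, and measuring" listed above, the following Python sketch counts word frequencies over a directory of plain-text files (the directory name and the tokenization are assumptions for illustration, not course material):

    # Hypothetical example: count word frequencies across a small plain-text corpus.
    import collections
    import glob
    import re

    counts = collections.Counter()
    for path in glob.glob("corpus/*.txt"):   # assumed location of the corpus files
        with open(path, encoding="utf-8") as f:
            counts.update(re.findall(r"[a-z']+", f.read().lower()))

    for word, n in counts.most_common(20):   # print the 20 most frequent words
        print(f"{n:6d}\t{word}")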

There will be some connections to the ICPSR Summer Program in
Quantitative Methods of Social Research:
http://www.icpsr.umich.edu/icpsrweb/sumprog/


Linguistics as a Forensic Science

Carole E. Chaski – Institute for Linguistic Evidence
Course time: Tuesday/Thursday 3:30-5:20 pm
2336 Mason Hall

Linguistics as a Forensic Science introduces students to the current state of the art in forensic linguistics. Students learn the legal standards that linguistic evidence must meet, how linguistic research has produced methods that meet those standards, and examples of methodological failure. Cases and rulings are discussed in the context of methodological issues for linguistics and to demonstrate the seriousness of legal standards. Linguistic methods for author identification, text classification, intertextuality, and linguistic profiling are examined in detail. Most forensic linguistic methods attempt to identify, individuate, or classify texts, so texts are automatically treated as instances of either individual or group variation (i.e., the method must be able to attribute texts to different individuals, to classify texts as belonging to a particular type of text, to identify texts as coming from a person with a certain level of education or a certain dialect, and so forth).

The paradigm which students learn in this course is one in which (1) universal principles provide methodological grounding for the analysis of variation, (2) texts are analyzed for the instantiation of syntactic and semantic properties, (3) the instantiations are quantified, (4) the quantifications are subjected to statistical analysis, and (5) the statistical analysis is subjected to validation testing for error rates. This paradigm, known as computational forensic linguistics, poses several challenges to linguistics as a science, such as the choice of levels and units for linguistic analysis of forensic texts for specific tasks, the predictability of linguistic behavior, tools for the analysis of variable linguistic behavior, and a model of language that is circumscribed or determined by universal principles yet at the same time instantiated in group and individual behaviors. Thus, computational forensic linguistics provides a proving ground for how universal principles ground analysis and method so that individual and group variability can be accurately captured and then used for prediction, the core of scientific endeavor.
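
As a toy illustration only (not the method taught in the course), steps (2) through (5) of this paradigm might look roughly like the following Python sketch, in which invented texts are quantified with a few crude surface features and a classifier's error rate is estimated by cross-validation:

    # Toy sketch of quantification, statistical modeling, and validation testing;
    # the features, texts, and author labels below are invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def quantify(text):
        words = text.split()
        return [len(words),                            # text length
                sum(w.endswith("ly") for w in words),  # crude adverb count
                text.count(",")]                       # punctuation habits

    texts = ["Frankly, I never said that, ever.", "Send the money now.",
             "Honestly, that report was, arguably, fine.", "Meet me at noon."]
    authors = ["A", "B", "A", "B"]                     # invented labels

    X = np.array([quantify(t) for t in texts])
    print(cross_val_score(LogisticRegression(), X, authors, cv=2))  # per-fold accuracy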

Current forensic linguistic methods exemplify the tension between universality and variability. The ways in which different methods embrace universality or variability have either enabled or prevented linguistic methods from reaching error rates low enough for legal use. Admissible methods that have successfully met the standards of scientific rigor required for legal evidence combine analysis based on universal principles of linguistic structure with statistical analysis of linguistic variability. On the other hand, methods which have focused on variability to the exclusion of universal principles have failed to produce repeatable results or low error rates; they have thus not met legal standards and are generally ruled inadmissible. The computational forensic linguistic paradigm embraces variability as the core of most forensic linguistic problems, with universal structural principles as the primary analytical approach for solving these problems. Only this synergistic approach, a structural-behaviorist approach, actually works to produce feasible forensic linguistic methods that are theoretically grounded, replicable, and reliable.

Students in this course should have already taken an introductory linguistics course. Students may also find the Institute courses on R and Python useful to take at the same time, but they are not required.


Machine Learning

Steve Abney – University of Michigan
Course time: Monday/Wednesday 11:00 am – 12:50 pm
1401 Mason Hall

This course provides a general introduction to machine learning. Unlike results in learnability, which are very abstract and have limited practical consequences, machine learning methods are eminently practical and provide a detailed understanding of the space of possibilities for human language learning.

Machine learning has come to dominate the field of computational linguistics: virtually every problem of language processing is treated as a learning problem.  Machine learning is also making inroads into mainstream linguistics, particularly in the area of phonology. Stochastic Optimality Theory and the use of maximum entropy models for phonotactics may be cited as two examples.

The course will focus on giving a general understanding of how machine learning methods work, in a way that is accessible to linguistics students. There will be some discussion of software, but the focus will be on understanding what the software is doing, not on the details of using a particular package.

The topics to be touched on include classification methods (Naive Bayes, the perceptron, support vector machines, boosting, decision trees, maximum entropy classifiers); clustering (hierarchical clustering, k-means clustering, the EM algorithm, latent semantic indexing); sequential models (Hidden Markov Models, conditional random fields); grammatical inference (probabilistic context-free grammars, distributional learning); semi-supervised learning (self-training, co-training, spectral methods); and reinforcement learning.
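
For instance, here is a minimal sketch (not course code) of one of the classification methods listed above, a Naive Bayes text classifier built with scikit-learn on a tiny invented data set:

    # Toy Naive Bayes classifier; the texts and labels are invented for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = ["colourless green ideas", "green eggs and ham",
             "ideas sleep furiously", "ham and eggs again"]
    labels = ["poetry", "recipe", "poetry", "recipe"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())  # bag of words -> NB
    model.fit(texts, labels)
    print(model.predict(["furiously green ideas"]))            # -> ['poetry']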


Mixed Effect Models

T. Florian Jaeger – University of Rochester
Course time: Tuesday/Thursday 3:30-6:00 pm, and all day Friday, July 12; last two weeks of Institute only (July 9, 11, 16, 18)
MLB

With increasing use of quantitative behavioral data, statistical data analysis has rapidly become a crucial part of linguistic training. Linguistic data analysis is often particularly challenging because (i) the relevant data are often sparse, (ii) the data sets are often unbalanced with regard to the variables of interest, and (iii) data points are typically not sampled independently of each other, making it necessary to account for (possibly hierarchical) grouping structures (clusters) in the data. This course provides an introduction to several advanced data analysis techniques that help us address these challenges. We will focus on the Generalized Linear Model (GLM) and the Generalized Linear Mixed Model (GLMM): what they are, how to fit them, what common 'traps' to be aware of, how to interpret them, and how to report and visualize results obtained from these models. GLMs and GLMMs are powerful tools for understanding complex data, revealing not only whether effects are significant but also what direction and shape they have. GLMs have been used in corpus and sociolinguistics since at least the 1960s. GLMMs have recently been introduced to language research through corpus- and psycholinguistics. They are rapidly becoming popular data analysis techniques in these and other fields (e.g., sociolinguistics).
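
As a minimal sketch of what fitting such a model can look like in practice (an illustration only, not course material; the data file and column names are hypothetical, and the GLMMs discussed in the course are often fitted with R's lme4 instead), here is a linear mixed model with by-subject random intercepts in Python using statsmodels:

    # Hypothetical example: reaction times by condition, with subjects as a grouping factor.
    import pandas as pd
    import statsmodels.formula.api as smf

    data = pd.read_csv("reaction_times.csv")     # assumed columns: rt, condition, subject
    model = smf.mixedlm("rt ~ condition", data, groups=data["subject"])
    result = model.fit()
    print(result.summary())                      # fixed effects and random-effect variance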

In this course, I will assume a basic statistical background and a conceptual understanding of at least linear regression.


Praat Scripting

Kevin McGowan – Rice University
Course time:
Tuesday/Thursday 11:00 am – 12:50 pm, MLB OR
Monday/Wednesday 1:30 pm – 3:20 pm, 2353 Mason Hall

This course introduces basic automation and scripting skills for linguists using Praat. The course will expand upon a basic familiarity with Praat and explore how scripting can help you automate mundane tasks, ensure consistency in your analyses, and provide implicit (and richly detailed) methodological documentation of your research. Our main goals will be:

    1.  To expand upon a basic familiarity with Praat by exploring the software’s capabilities and learning the details of its scripting language.

    2.  To learn a set of scripting best practices that will help you not only write and maintain your own scripts but also evaluate scripts written by others.

The course assumes participants have read and practiced with the Intro from Praat’s help manual. Topics to be covered include:

    o Working with the Objects, Editor, and Picture windows

    o Finding available commands

    o Creating new commands

    o Working with TextGrids

    o Conditionals, flow control, and error handling

    o Using strings, numbers, formulas, arrays, and tables

    o Automating phonetic analysis (see the sketch after this list)

    o Testing, adapting, and using scripts from the internet
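
The course itself works in Praat's own scripting language; purely as an illustration of the kind of measurement automation listed above, the following Python sketch uses the third-party parselmouth library (an assumption, not part of the course materials) to extract a mean pitch value from each file in a hypothetical directory of recordings:

    # Hypothetical example: batch mean-F0 measurement via parselmouth (Praat from Python).
    import glob

    import parselmouth
    from parselmouth.praat import call

    for wav_path in glob.glob("recordings/*.wav"):   # assumed directory of WAV files
        sound = parselmouth.Sound(wav_path)
        pitch = sound.to_pitch()                     # default analysis settings
        mean_f0 = call(pitch, "Get mean", 0, 0, "Hertz")
        print(f"{wav_path}\t{mean_f0:.1f} Hz")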


Python 3 for Linguists

Damir Cavar – Eastern Michigan University
Malgorzata E. Cavar – Eastern Michigan University
Course time: Monday/Wednesday 9:00-10:50 am, MLB OR
Tuesday/Thursday 11:00 am – 12:50 pm, 2347 Mason Hall

This course introduces basic programming and scripting skills to linguists using the Python 3 programming language and common development environments. Our main goals are:

- to offer an entry point to programming and computation for humanities students and anyone else who is interested;

- to do so without requiring any prior programming or IT knowledge beyond basic computer experience and common lay-person computer skills.

Over eight sessions, the course covers interaction with the Python programming environment, an introduction to programming, and an introduction to linguistically relevant text and data processing algorithms, including quantitative and statistical analyses as well as qualitative and symbolic methods.

Existing Python code libraries and components will be discussed, and practical usage examples given. The emphasis in this course is on being creative with a programming language and on content geared towards specific tasks that linguists are confronted with, where large amounts of data must be processed or time-consuming annotation and data manipulation tasks are necessary. Among the tasks we consider essential are:

- reading text and language data from, and writing it to, files in various encodings, using different orthographic systems and standards as well as corpus encoding formats and technologies (e.g. XML);

- generating and processing word lists, linguistic annotation models, N-gram models, and frequency profiles to study quantitative and qualitative aspects of language, for example variation in language, computational dialectology, and similarity or dissimilarity at different linguistic levels (see the sketch after this list);

- symbolic processing: using regular grammar rules in finite-state automata for processing phonotactic or morphological information, as well as context-free grammars and parsers for syntactic analysis, higher-level grammar formalisms, and the language processing algorithms that use them.
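
As one small example of the word-list and N-gram task above (the file name and tokenization are assumptions for illustration, not course material), a bigram frequency profile can be built in a few lines of Python 3:

    # Hypothetical example: a bigram frequency profile from a UTF-8 text file.
    import collections
    import re

    with open("corpus.txt", encoding="utf-8") as f:   # assumed input file
        tokens = re.findall(r"\w+", f.read().lower())

    bigrams = collections.Counter(zip(tokens, tokens[1:]))  # adjacent word pairs
    for (w1, w2), n in bigrams.most_common(10):
        print(f"{n:5d}\t{w1} {w2}")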


Statistical Reasoning for Linguistics

Stefan Gries – University of California, Santa Barbara
Course time: Monday/Wednesday 3:30-5:20 pm
2407 Mason Hall

This course is aimed at beginners in statistics and will cover (1) the theoretical foundations of statistical reasoning as well as (2) selected practical applications. As for (1), we will discuss notions such as (different types of) variables, operationalization, (null and alternative) hypotheses, additive and interactive effects, significance testing and p-values, model(ing) and model selection, etc. As for (2), we will be concerned with how to annotate and prepare data for statistical analysis using spreadsheet software, and how to use the open-source language and environment R (www.r-project.org) to

- explore data visually using a multitude of graphs (an important precursor to any kind of statistical analysis) and exploratory statistical tools (e.g., cluster analysis);

- conduct some basic statistical tests (see the sketch after this list);

- explore briefly more advanced statistical regression modeling techniques.
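
The course itself works in R; purely to illustrate the kind of basic test meant above (the counts below are invented), a chi-squared test of association between speaker group and variant choice looks like this in Python:

    # Invented counts: use of two variants by two speaker groups.
    from scipy.stats import chi2_contingency

    table = [[60, 40],   # group 1: variant A vs. variant B
             [30, 70]]   # group 2
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-squared = {chi2:.2f}, p = {p:.4f}")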

The course will draw on the second edition of my textbook on statistics for linguists (to be published in 2013 by Mouton de Gruyter). Examples will include observational and experimental data from a variety of linguistic sub-disciplines.


Tools for Language Documentation

Claire Bowern – Yale University
Course time: Tuesday/Thursday 9:00-10:50 am
2330 Mason Hall

This four-week course will cover a selection of the software, hardware, and stimulus kits/surveys which are most useful in documenting languages. The course will begin with an overview of software tools for organizing language data, including Toolbox and Elan, and hardware (e.g. audio and video recorders) for making recordings. Week 2 will focus on tools related to grammatical documentation (e.g. in the writing of reference grammars) and will include the use of structured stimulus kits, questionnaires, and tools for organizing transcripts and analytical data. Week 3 will focus on corpus planning and the collection of narratives and conversational data. Week 4 will concentrate on software and techniques for lexical elicitation, along with collection archiving. Each class will have a practical component and class participants are encouraged to bring their own data sets; however, data samples will also be available for those who need them. Some familiarity with general linguistics is presumed.
