Plenary Speeches
2019 International AALA Plenary Speakers
(listed in alphabetical order by last name)
Professor Sara Cushing
Associate Professor Lorena Llosa
Dr Alistair Van Moere
Professor Sara Cushing
Sara Cushing (also known as Sara Cushing Weigle) is Professor of Applied Linguistics at Georgia State University and Senior Faculty Associate for the Assessment of Student Learning in the Office of Institutional Effectiveness. She received her Ph.D. in Applied Linguistics from UCLA. She has published research in the areas of assessment, second language writing, and teacher education, and is the author of Assessing Writing (2002, Cambridge University Press). She has been invited to speak and conduct workshops on second language writing assessment throughout the world, most recently in Vietnam, Colombia, Thailand, and Norway. Her current research focuses on assessing integrated skills and the use of automated scoring for second language writing.
Title: Assessment literacy for writing teachers
Abstract: Assessment literacy refers to the knowledge and skills that teachers need to design, implement and evaluate valid and fair assessments in their classrooms. Assessment literacy is also necessary for understanding the role that policy-driven large-scale tests play in the lives of students and teachers. In this presentation, I outline recent scholarly conceptualizations of assessment literacy and discuss the fundamental concepts of reliability, validity and practicality as they relate to writing assessment. I then discuss the notion of “assessment for learning” (Assessment Reform Group, 2002), in contrast to “assessment of learning,” and provide some strategies to help writing teachers use assessment effectively and efficiently to support language learning.
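To make the reliability concept concrete: in writing assessment, reliability is often estimated from agreement between raters, and the uncertainty around a score can be summarized with the standard error of measurement. The sketch below is purely illustrative (the scores are invented and the method is one common approach, not material from the talk).

```python
import math
import statistics

# Hypothetical scores from two raters on the same ten essays (invented data).
rater_a = [3, 4, 2, 5, 4, 3, 4, 2, 5, 3]
rater_b = [3, 4, 3, 5, 4, 2, 4, 3, 5, 4]

# Inter-rater reliability estimated as the Pearson correlation between raters.
r = statistics.correlation(rater_a, rater_b)

# Standard error of measurement: how far an observed score may sit from the
# learner's "true" score, given the estimated reliability.
sd = statistics.stdev(rater_a)
sem = sd * math.sqrt(1 - r)

print(f"inter-rater correlation: {r:.2f}")
print(f"standard error of measurement: {sem:.2f} score points")
```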
Associate Professor Lorena Llosa
Lorena Llosa is an Associate Professor of Education in the Steinhardt School of Culture, Education, and Human Development at New York University. Her work addresses second and foreign language teaching, learning, and assessment. Her studies have focused on standards-based classroom assessment of language proficiency, validity issues in the assessment of academic writing, and the integration of language and content in instruction and assessment. She is currently Co-Principal Investigator on two projects funded by the National Science Foundation to develop science curricula and assessments that support English learners’ science learning, computational thinking, and language development. Her research has appeared in such journals as Language Testing, Language Assessment Quarterly, Educational Measurement: Issues and Practice, Educational Assessment, Assessing Writing, Language Teaching Research, Language Learning, Reading and Writing Quarterly, and the American Educational Research Journal. Dr. Llosa was awarded the National Academy of Education/Spencer Postdoctoral Fellowship in 2009 and the AERA Second Language Research SIG Mid-Career Award in 2019. Dr. Llosa received her Ph.D. in Applied Linguistics with a specialization in language assessment from the University of California, Los Angeles.
Title: Assessing learners at the intersection of content and language
Abstract: An important development in the field of language education has been the shift towards approaches that integrate content and language. Many of these approaches are not new. Bilingual education, for example, has a long history as a way to address the educational needs of students throughout the world who, due to globalization and immigration, are learning content in schools through a second or additional language. The field of language for specific purposes also has a long history of addressing teaching and learning at the intersection of language and a specific content area, often a professional field. In recent decades, approaches that integrate content and language have expanded further. Examples include the CLIL movement in Europe and Asia and the rapid increase in the number of English-medium universities in places where English is a second or foreign language. Although the integration of content and language in instruction is now fairly common, little is known about the role of assessment in these contexts. What should be assessed? Are language and content separate and distinct constructs or are they inextricably linked? And what are the implications for language assessment?
In this talk, I will argue that, despite increasing efforts to integrate content and language, current definitions of language proficiency may be of limited use when the goal is to support learners’ content learning as well as their language learning. Using science standards and instruction in the U.S. as an example, I will propose an alternative conceptualization of English language proficiency that embraces and leverages the overlap between content and language by focusing attention on the aspects of language that are most critical to communicating disciplinary meanings. I will illustrate how this approach affords unique opportunities for rich formative assessment practices in the classroom.
Dr Alistair Van Moere
Dr Alistair Van Moere is Chief Product Officer at MetaMetrics, Inc., where he drives innovation and helps organizations make sense of test measurement. Previously, Alistair was President of Pearson’s Knowledge Technologies group, where he managed artificial intelligence scoring of speaking and writing for tens of millions of learners. He has worked as a teacher, examiner, director of studies, university lecturer, and test developer in the US, UK, Japan, and Thailand. His PhD dissertation won the Jacqueline Ross TOEFL Award for the best dissertation in language testing; he also holds an MBA and has authored over 20 research publications on assessment and educational technology.
Title: How should we interpret score fluctuations in repeated test-taking?
Abstract: In language assessment there is a tendency to interpret scores from single-administration tests as accurate indicators of student ability. In other words, when students take a test, we trust that their scores can be taken at face value. In reality, test scores carry uncertainty: if a student were to take the same test again, or a different form of the same test, just one or two weeks later, their score would likely be different. Their new score could be lower or higher than the original, even though their English proficiency has not changed.
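A small simulation makes the point concrete. The sketch below (invented numbers; an illustration, not data from the talk) holds a student's proficiency fixed and re-administers a test whose scores carry a typical amount of measurement error: every administration yields a different score even though nothing about the student has changed.

```python
import random

random.seed(1)

TRUE_ABILITY = 70   # the student's (unobservable) proficiency on a 0-100 scale
SEM = 4.0           # assumed standard error of measurement for the test

# Five administrations, one or two weeks apart. Proficiency never changes,
# yet each observed score differs from the last.
scores = [round(random.gauss(TRUE_ABILITY, SEM)) for _ in range(5)]
print(scores)  # five different scores around 70, none more "true" than another
```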
There are various reasons for score fluctuations, such as measurement error in the test, the student’s motivation, or the test conditions. This poses a problem for the validity of test scores in many different contexts.
For example, in high-stakes settings such as university entrance or immigration testing, students with financial resources can (unfairly) take expensive international exams in test centers every month until they obtain a high enough score. Similarly, in formative testing contexts where we would like to track a student’s progress or score gains every few months, it can be difficult for a teacher to explain why a student’s standardized test score dropped even though their English proficiency should have increased.
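The retake problem follows directly from measurement error: if a candidate keeps testing until a score clears the cut-off, chance alone will eventually push them over it. A minimal simulation of this selection effect (assumed numbers, not data from the talk) is sketched below.

```python
import random
import statistics

random.seed(0)

TRUE_ABILITY = 72   # candidate's proficiency sits below the cut score
CUTOFF = 75         # score required, e.g., for admission or a visa
SEM = 4.0           # assumed standard error of measurement

def retake_until_pass(max_attempts=12):
    """Retest until a score clears the cut-off (or attempts run out)."""
    for _ in range(max_attempts):
        score = random.gauss(TRUE_ABILITY, SEM)
        if score >= CUTOFF:
            break
    return score

final_scores = [retake_until_pass() for _ in range(10_000)]
# The mean reported score sits well above the candidate's true ability,
# purely because retaking selects for lucky fluctuations.
print(round(statistics.mean(final_scores), 1))
```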
In this presentation I will draw on data from numerous contexts: university speaking and writing tests, large-scale automatically scored tests, and PISA exams. I will outline the causes of test score fluctuation and how researchers quantify it, and discuss the consequences and social impact of score fluctuations. Finally, I will propose how researchers can mitigate these effects in the reporting of test scores and in statistical techniques for interpreting longitudinal data across many test administrations.
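As one illustration of the kind of statistical smoothing this last point refers to, a simple trend fit across many administrations downweights any single fluctuating score. The sketch below (invented scores; an assumed approach, not the talk's own method) estimates a student's underlying growth from noisy repeated measurements.

```python
import statistics

# Invented standardized-test scores for one student over eight
# administrations: steady underlying growth plus per-administration noise.
months = [0, 2, 4, 6, 8, 10, 12, 14]
scores = [61, 66, 63, 70, 67, 73, 71, 76]

# A linear fit over all administrations smooths single-test fluctuation
# and estimates the underlying rate of growth.
slope, intercept = statistics.linear_regression(months, scores)
print(f"estimated gain: {slope:.2f} score points per month")
```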