
Pragmatics 语用学

Research, Application & Development Trend

 
 
 

Posts

 
 
About Me

Sometimes I like quiet; sometimes I like liveliness. Sometimes I like to join the crowd; sometimes I like to keep away from it and be alone. In winter I long for sunshine; in summer I hope for snow. Spring goes and autumn comes, but what never changes are my academic faith, aspirations, and interests. I keep asking: What is pragmatics? Why study it? How should it be studied? What qualities does pragmatics research require? Who is doing it? Where is it heading? What problems remain? I firmly believe that only in tranquility can one go far, and I strive toward that state. On the road of learning and seeking truth, I have come to understand deeply that the process determines the outcome, and that the process matters more than the outcome, far more.


TOP Cited Articles (SSCI) · Language Assessment Quarterly

2016-11-15 09:51:18 | Category: Philosophy of Education (edu phil)

TOP Cited Articles (SSCI) · Language Assessment Quarterly (with full-text downloads)

2016-10-17, 高教社外语 (Higher Education Press Foreign Languages)

高教社外语 has been running a "TOP 10 Hottest Articles" series covering SSCI-indexed journals in foreign language studies, which has become an important window on the field's current concerns and has been warmly received. To give foreign language teachers and researchers a more focused set of research resources, the editors have compiled a companion "TOP Cited Articles" series from databases such as Wiley Online Library, Oxford University Press, and Taylor & Francis Group. Each installment covers one journal and lists its five or ten most cited papers (based on CrossRef/HighWire-hosted article counts), with titles and abstracts. Foreign language teachers, researchers, and learners are welcome to subscribe to the 高教社外语 WeChat public account. This installment presents the TOP Cited Articles (SSCI) for Language Assessment Quarterly as of October 2016. Please click "阅读原文" (Read the original) at the end of the post to access the full texts.

About Language Assessment Quarterly (indexed in SSCI and A&HCI)

2015 Impact Factor: 0.976

Language Assessment Quarterly: An International Journal (LAQ) is dedicated to the advancement of theory, research, and practice in first, second, and foreign language assessment for school, college, and university students; for employment; and for immigration and citizenship. LAQ publishes original articles addressing theoretical issues, empirical research, and professional standards and ethics related to language assessment, as well as interdisciplinary articles on related topics and reports of language test development and testing practice. All articles are peer-reviewed. The journal is directed to an international audience.

TOP Cited Articles (SSCI) · Language Assessment Quarterly · Taylor & Francis Group

Building and Supporting a Case for Test Use
Lyle F. Bachman

ABSTRACT: The fields of language testing and educational and psychological measurement have not, as yet, developed a set of principles and procedures for linking test scores and score-based inferences to test use and the consequences of test use. Although Messick (1989) discusses test use and consequences, his framework provides virtually no guidance on how to go about investigating these in the course of practical test development. Argument-based formulations of validity (e.g., Kane, 1992, 2000; Kane, Crooks, & Cohen, 1999; Mislevy, in press; Mislevy, Steinberg, & Almond, 2003) provide a logic and set of procedures for investigating and supporting claims about score-based inferences but do not address issues of test use and the consequences of test use. Recent formulations in language testing (e.g., Bachman & Palmer, 1996; Kunnan, 2003; Lynch, 2001) are essentially lists of more or less independent qualities and questions, with no clear mechanism for integrating these into a set of procedures for test developers and users to follow. What has been called "critical language testing" (e.g., Shohamy, 1999, 2001) has alerted us to the political uses and abuses of language tests and to the need for test developers and test users alike to be self-critical of the ways in which tests are used. However, this perspective treats consequences as essentially unrelated to the validity of inferences and provides little guidance about how to go about either anticipating and avoiding, or redressing, the problems with test use that it discusses. In this article I describe how an argument for test use might be structured so as to provide a clear linkage from test performance to interpretations and from interpretations to uses. An assessment use argument is an overall logical framework for linking assessment performance to use (decisions). This assessment use argument includes two parts: an assessment utilization argument, linking an interpretation to a decision, and an assessment validity argument, which links assessment performance to an interpretation. I then discuss ways in which issues and questions that have been raised by language testers regarding uses, abuses, consequences, validity, and fairness in language testing can provide a basis for articulating claims and counterclaims in an assessment use argument. In my view, an assessment use argument can guide the design and development of assessments and can also lead to a focused, efficient program for collecting the most critical evidence in support of the interpretations and uses for which the assessment is intended.

Changing Communicative Needs, Revised Assessment Objectives: Testing English as an International Language
Suresh Canagarajah

ABSTRACT: Postmodern globalization requires that students strive for competence in a repertoire of English varieties as they shuttle between multilingual communities. From this perspective, the current debate becomes irrelevant regarding whether local varieties or dominant varieties (British/American) be used in international proficiency tests. Because it is unwise to define proficiency based on a single variety and because it is impossible to teach or measure proficiency in many varieties simultaneously, we have to consider revising the dominant paradigms of assessment. The changing pedagogical priorities suggest that we have to move away from a reliance on discrete-item tests on formal grammatical competence and develop instruments that are sensitive to performance and pragmatics. In effect, assessment would focus on strategies of negotiation, situated performance, communicative repertoire, and language awareness.

Deluded by Artifices? The Common European Framework and Harmonization
Glenn Fulcher

ABSTRACT: This commentary provides a critical and historical review of the Common European Framework of Reference: Learning, Teaching, Assessment (CEF). It is presented within the context of political and policy issues in Europe, and considers the role that the CEF is likely to play in that context, which is beyond the control of language testers. The dangers of institutionalization through political mandate are explored for test providers, test takers, and score users. It is argued that the CEF should be treated as just one of a range of tools for reporting test scores.

Language Proficiency in Native and Nonnative Speakers: An Agenda for Research and Suggestions for Second-Language Assessment
Jan H. Hulstijn

ABSTRACT: This article addresses the question of what language proficiency (LP) is, both theoretically and empirically. It does so by making a distinction, on one hand, between basic and higher language cognition and, on the other hand, between core and peripheral components of LP. The article furthermore critically examines the notion of level in most second language (L2) assessment scales, showing that it is confounded with people's intellectual functioning because higher levels of LP cannot be attained by people with lower intellectual, educational, occupational, or leisure-time profiles. It is probably for this reason that the Common European Framework of Reference for Languages (Council of Europe. 2001. Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Cambridge, UK: Cambridge University Press/Council of Europe) fails to consistently distinguish between L2 development and L2 proficiency. The LP construct presented in this article can account for the fact that L2 learners with higher intellectual, educational, occupational, or leisure-time profiles may perform, at a given point in time, both better (i.e., in the domain of higher language cognition) and more poorly (i.e., in the domain of basic language cognition) than native speakers with lower profiles. While offering a research agenda for investigating individual differences in first language and L2 acquisition, the article also presents several implications for L2 assessment.

Examining Rater Effects in TestDaF Writing and Speaking Performance Assessments: A Many-Facet Rasch Analysis
Thomas Eckes

ABSTRACT: I studied rater effects in the writing and speaking sections of the Test of German as a Foreign Language (TestDaF). Building on the many-facet Rasch measurement methodology, the focus was on rater main effects as well as 2- and 3-way interactions between raters and the other facets involved, that is, examinees, rating criteria (in the writing section), and tasks (in the speaking section). Another goal was to investigate differential rater functioning related to examinee gender. Results showed that raters (a) differed strongly in the severity with which they rated examinees; (b) were fairly consistent in their overall ratings; (c) were substantially less consistent in relation to rating criteria (or speaking tasks, respectively) than in relation to examinees; and (d) as a group, were not subject to gender bias. These findings have implications for controlling and assuring the psychometric quality of the TestDaF rater-mediated assessment system. (A generic formulation of the many-facet Rasch model is sketched after the abstracts.)

Developing Formative Teacher Assessment: Knowledge, Practice, and Change
Constant Leung

ABSTRACT: Classroom-based formative assessment by teachers has received a good deal of renewed scholarly and policy interest. The overall aim of this article is to foreground some of the key constitutive issues in this approach to teacher assessment and to suggest possible ways of conceptualizing key epistemological and empirical questions. This discussion raises a number of research and development issues in respect of (a) the conceptual basis of investigating this kind of student-oriented contingent assessment, (b) some methodological questions concerned with classroom-based research, and (c) the need to understand teacher development and teacher change with reference to teacher assessment practice. The concept of construct-referenced assessment (Wiliam, 2001) will be used as a point of departure for the discussion on reconceptualizing and framing the investigation of teacher assessment. A discourse-based approach will be presented in relation to classroom research. The relevance of the work in teacher development and teacher change will be discussed; some of the complexities of teacher development in relation to teacher assessment will be illustrated through data from a teacher professional development program. This discussion has a second language orientation, but many of the arguments will be relevant to language assessment in general.

Three Generations of DIF Analyses: Considering Where It Has Been, Where It Is Now, and Where It Is Going
Bruno D. Zumbo

ABSTRACT: The purpose of this article is to reflect on the state of the theorizing and praxis of differential item functioning (DIF) in general: where it has been; where it is now; and where I think it is, and should, be going. Along the way the major trends in the DIF literature are summarized and integrated, providing some organizing principles that allow one to catalog and then contrast the various DIF detection methods and to shine a light on the future of DIF analyses. The three generations of DIF are introduced and described with an eye toward issues on the horizon. Methods for detecting DIF and item bias typically are used in the process of item analysis when developing new measures, adapting existing measures for use in new settings or with populations not initially intended when the measure was developed, adapting existing measures to new languages and/or cultures, or more generally validating test score inferences. DIF methods allow one to judge whether items (and ultimately the test they constitute) are functioning in the same manner in various groups of examinees. In broad terms, this is a matter of measurement invariance; that is, is the test performing in the same manner for each group of examinees? Those involved in this work have come to address a number of critical, and recurring, issues that face the future of DIF; these critical issues are threaded throughout. In proposing three generations of DIF praxis and theorizing, I am not suggesting distinct historical periods and a natural linear stepwise progression toward our current thinking; in using the expression “generations of DIF” I want to suggest quite the contrary. Note also that, given the general purpose of this article, throughout I use the terms test and measure interchangeably. (A sketch of the logistic-regression approach to DIF detection appears after the abstracts.)

Peer–Peer Interaction in a Speaking Test: The Case of the First Certificate in English Examination
Evelina D. Galaczi

ABSTRACT: This discourse-based study turns its attention to paired test-taker discourse in the First Certificate in English speaking test. Its primary aim is to focus on fundamental conversation management concepts, such as overall structural organisation, turn-taking, sequencing, and topic organisation, found in the dyadic test-taker interaction in 30 pairs of test takers. The analysis highlights global patterns of interaction, termed “Collaborative,” “Parallel,” and “Asymmetric,” and salient features of interaction characteristic of each pattern. A second goal of the study is to investigate a possible relationship between the patterns of peer-peer interaction and test scores for “Interactive Communication.” The results show a close relationship between type of talk and test scores, thus providing validity evidence for the test scores. The study discusses implications for a more accurate understanding of the construct of conversation management underlying the FCE examination, and for the empirically based construction of assessment criteria for speaking tests.

Analysing Tests of Reading and Listening in Relation to the Common European Framework of Reference: The Experience of The Dutch CEFR Construct Project
J. Charles Alderson, Neus Figueras, Henk Kuijper, Guenter Nold, Sauli Takala & Claire Tardieu

ABSTRACT: The Common European Framework of Reference (CEFR) is intended as a reference document for language education including assessment. This article describes a project that investigated whether the CEFR can help test developers construct reading and listening tests based on CEFR levels. If the CEFR scales together with the detailed description of language use contained in the CEFR are not sufficient to guide test development at these various levels, then what is needed to develop such an instrument? The project methodology involved gathering expert judgments on the usability of the CEFR for test construction, identifying what might be missing from the CEFR, developing a frame for analysis of tests and specifications, and examining a range of existing test specifications, guidelines to item writers, and sample test tasks for different languages at the 6 levels of the CEFR. Outcomes included a critical review of the CEFR, a set of compilations of CEFR scales and of test specifications at the different CEFR levels, and a series of frameworks or classification systems, which led to a Web-mounted instrument known as the Dutch CEFR Grid. Interanalyst agreement in using the Grid for analyzing test tasks was quite promising, but the Grids need to be improved by training and discussion before decisions on test task levels are made. The article concludes, however, that identifying separate CEFR levels is at least as much an empirical matter as it is a question of test content, whether determined by test specifications or identified by any content classification system or grid. (A minimal agreement computation is sketched after the abstracts.)

Testing the Claims of Asylum Seekers: The Role of Language Analysis
Diana Eades

ABSTRACT: Language tests in immigration contexts typically perform a gate-keeping role in decisions about whether an applicant should be granted residence or citizenship in a new country. In refugee contexts, so-called language tests or language analyses also play a gate-keeping role, but a more ambitious one; namely, that of providing answers to questions concerning the genuineness or honesty of asylum seekers' claims about their origins, whether national, regional, or ethnic. That is, the way that an asylum seeker speaks in an interview with immigration officials is analysed or assessed to help in the determination of whether to accept this person's claims about their origins. It is this assessment of language that is the subject of this article, in which I will explain the methods used and then highlight some problems that have been addressed by linguists. The acronym LADO is used to refer to such “language analysis” used for the determination of origin, but it should be understood that much of the “language analysis” in this area appears quite superficial.
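For readers less familiar with the many-facet Rasch measurement used in the Eckes study, the model is commonly written in the rating-scale form below. This is the generic textbook formulation with three facets (examinees, rating criteria or tasks, and raters); it is a sketch of the approach, not necessarily the exact parameterization of the TestDaF analysis.

% Many-facet Rasch model, rating-scale form (generic formulation):
%   \theta_n  ability of examinee n
%   \beta_i   difficulty of criterion/task i
%   \alpha_j  severity of rater j
%   \tau_k    threshold for rating category k
\[
  \ln\!\left( \frac{P_{nijk}}{P_{nij(k-1)}} \right)
  = \theta_n - \beta_i - \alpha_j - \tau_k
\]
% P_{nijk} is the probability that examinee n is rated in category k
% (rather than k-1) by rater j on criterion/task i. Rater main effects
% appear as shifts in \alpha_j; differential rater functioning (e.g.,
% rater-by-gender bias) is modeled by adding interaction terms.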
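Zumbo's overview of DIF detection can be made concrete with the widely used logistic-regression approach, which tests whether group membership predicts item responses after conditioning on overall ability. The sketch below uses invented synthetic data and the statsmodels and scipy libraries; it illustrates the general technique, not any specific procedure from the article.

# Logistic-regression DIF check for one dichotomous item: compare a
# model with only the matching variable (total score) against models
# that add a group term and a group-by-total term. A significant group
# term suggests uniform DIF; a significant interaction suggests
# non-uniform DIF. All data below are synthetic, for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "total": rng.normal(0, 1, n),    # matching criterion (stand-in for total score)
    "group": rng.integers(0, 2, n),  # 0 = reference group, 1 = focal group
})
# Simulate an item with mild uniform DIF against the focal group.
true_logit = 1.2 * df["total"] - 0.4 * df["group"]
df["item"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

m0 = smf.logit("item ~ total", data=df).fit(disp=0)
m1 = smf.logit("item ~ total + group", data=df).fit(disp=0)
m2 = smf.logit("item ~ total + group + total:group", data=df).fit(disp=0)

# Likelihood-ratio tests between nested models (1 df each).
lr_uniform = 2 * (m1.llf - m0.llf)
lr_nonuniform = 2 * (m2.llf - m1.llf)
print("uniform DIF:     LR =", round(lr_uniform, 2),
      "p =", round(stats.chi2.sf(lr_uniform, 1), 4))
print("non-uniform DIF: LR =", round(lr_nonuniform, 2),
      "p =", round(stats.chi2.sf(lr_nonuniform, 1), 4))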
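The interanalyst agreement reported for the Dutch CEFR Grid is the kind of quantity typically summarized with a chance-corrected index; the abstract does not say which statistic the project used, so Cohen's kappa here is an assumption. A minimal sketch with invented level judgments from two hypothetical analysts:

# Chance-corrected agreement between two analysts assigning CEFR levels
# to the same test tasks. The data are invented for illustration.
from sklearn.metrics import cohen_kappa_score

analyst_a = ["B1", "B2", "A2", "B1", "C1", "B2", "A2", "B1"]
analyst_b = ["B1", "B2", "A2", "B2", "C1", "B1", "A2", "B1"]

kappa = cohen_kappa_score(analyst_a, analyst_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level

Because CEFR levels are ordered, a weighted kappa (weights="linear") would credit near-misses such as B1 versus B2 more than distant disagreements.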
Previous installments in the TOP series:

TOP 10 Hottest Articles (SSCI) · Applied Linguistics
TOP 10 Hottest Articles (SSCI) · Language Teaching
TOP 10 Hottest Articles (SSCI) · Journal of English for Academic Purposes
TOP 10 Hottest Articles (SSCI) · Journal of Neurolinguistics
TOP 10 Hottest Articles (SSCI) · TESOL Quarterly
TOP 10 Hottest Articles (SSCI) · Language Testing
TOP 10 Hottest Articles (SSCI) · Brain and Language
TOP 10 Hottest Articles (SSCI) · Lingua
TOP 10 Hottest Articles (SSCI) · Teaching and Teacher Education
TOP Cited Articles (SSCI) · ELT Journal
TOP Cited Articles (SSCI) · Language Learning

Training courses:

(1) Foreign Language Classroom Assessment under a Blended Teaching Model, November 19-20, 2016, Guangzhou, China
(2) Action Research in Foreign Language Teaching, December 2-3, 2016, Beijing, China

You are welcome to register for the courses above; see the 研修速递 (course bulletin) notices for further details.