Jane Dunphy, a longtime collaborator with the Teaching & Learning Lab and the author of today’s post, directs MIT’s English Language Studies Program and has taught a variety of subjects in professional and cross-cultural communication. She works with colleagues across MIT, developing effective pedagogy for the multicultural classroom and providing support to international TAs and faculty members.
Melissa Barnett, one of TLL’s three Associate Directors for Assessment & Evaluation, clearly relishes the opportunity to discuss all things assessment. During a two-hour workshop tailored specifically for language instructors in Global Studies & Languages (GSL), we explored what we knew, what we didn’t know that we knew, and what we most definitely didn’t know about assessing our teaching and our students’ learning.
With Barnett’s expert facilitation, the workshop session flew by as we engaged in a lively examination of our practices in teaching and assessing the language coursework that GSL offers: Chinese, ESL, French, German, Japanese, Korean, Portuguese, Russian, and Spanish.
It was encouraging to realize that we already use many of the practices examined in the workshop. For example, formative, outcome-driven assessment is the norm in language pedagogy; we now know to call this approach backward design. All of us keep up with the literature on language teaching and report on our research and practices in various ways: e.g., publications, conferences, and training workshops.
We are now more deliberate about choosing the appropriate design and analysis tools for our purposes and about asking the right questions. In collecting data, do we want breadth or depth . . . or both? And even the most tech-savvy among us enjoyed seeing how we could use online resources like Qualtrics Survey Service @MIT as well as PollEverywhere, Socrative, Knewton, and PlayPosit.
The workshop format was particularly helpful in examining two complex topics: the five key questions to ask about assessment and the surrounding legal and ethical framework. The five key questions seem straightforward —
- Why am I assessing?
- What, generally, am I assessing?
- What, specifically, do I want to know?
- What is a feasible way to measure the learning outcome?
- What is the best way to collect the relevant information?
— but as Barnett helped us focus each question on its essential components, we realized how much exegesis is actually required to design a rigorous assessment tool in response to these questions.
The legal and ethical framework for research that involves human subjects and will be shared outside MIT is equally complicated. The anecdotes that Barnett provided to help us navigate the regulations of MIT’s Committee on the Use of Humans as Experimental Subjects (COUHES) were invaluable, and her tips helped make an intimidating process (slightly) more transparent. Those of us embarking on research requiring COUHES approval will be sure to refine our assessment tools before submitting our COUHES request, and we will begin collecting data only after we receive approval.
As a German speaker and humanist by inclination, Barnett crafted an extremely enjoyable, informative, and lively workshop for language instructors to explore and examine what assessment design and research in GSL might look like and how we could use it to our professional advantage.
(If you’re an MIT instructor or administrator interested in learning more about best practices for assessing student learning, please contact the assessment staff at the MIT Teaching & Learning Lab for information about individual consultations or group workshops.)
(“Archery, Goodyear blimp in background. Washington, D.C.” [July 1936] by Harris & Ewing, at the Library of Congress Prints and Photographs Division)