Dagstuhl Seminar 16072
Assessing Learning In Introductory Computer Science
(February 14 – 19, 2016)
Organizers
- Michael E. Caspersen (Aarhus University, DK)
- Kathi Fisler (Worcester Polytechnic Institute, US)
- Jan Vahrenhold (Universität Münster, DE)
Contact
- Annette Beyer (for administrative matters)
Impacts
- A Pedagogical Analysis of Online Coding Tutorials : article in SIGCSE '17: Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science Education - Kim, Ada S.; Ko, Andrew J. - New York : ACM, 2017. - pp. 321-326.
- The Role of Self-Regulation in Programming Problem Solving Process and Success : article in ICER '16: Proceedings of the 2016 ACM Conference on International Computing Education Research - Loksa, Dastyni; Ko, Andrew J. - New York : ACM, 2016. - pp. 83-91.
Computing education is in an exciting period of experimentation. Rapid evolution of the field, diverse uses of computing across disciplines, and an exploding and broadening population of students interested in our courses challenge us to rethink how we approach computing education. At the same time, computing education research shows how much students still struggle to learn computing.
As a discipline, Computer Science has not yet converged on common learning outcomes for introductory computing. The computing community lacks commonly accepted objectives and assessment frameworks to make good comparative assessments of our educational experiments. Research often focuses on students' learning of basic control constructs, which is only a small corner of introductory computing. Programming environments collect all sorts of student data, but developers are often uninformed as to what data is relevant. What is needed are shared objectives and assessment methods that enable more useful computing education research while providing guidance to those outside the area.
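As a purely hypothetical illustration of the point about student data, the Python sketch below shows what raw event data collected by a programming environment might look like, together with one simple aggregate a researcher might compute over it. The EditEvent schema and the error_rate measure are assumptions made for illustration; they are not taken from any particular environment or from the seminar itself.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class EditEvent:
    """One logged attempt in a student's edit-run cycle.

    Hypothetical schema: field names are illustrative only.
    """
    student_id: str           # pseudonymized identifier
    timestamp: datetime       # when the attempt was made
    source_snapshot: str      # program text at the time of the attempt
    ran_ok: bool              # did the program run without error?
    error_message: str = ""   # first error reported, if any

def error_rate(events: list) -> float:
    """Fraction of attempts that failed: one simple aggregate that may
    (or may not) be relevant to learning -- deciding which such measures
    matter is exactly the kind of question the seminar targets."""
    if not events:
        return 0.0
    return sum(1 for e in events if not e.ran_ok) / len(events)

# Example: two attempts by one student; the first fails, the second succeeds.
log = [
    EditEvent("s01", datetime.now(timezone.utc), "print(x)", False,
              "NameError: name 'x' is not defined"),
    EditEvent("s01", datetime.now(timezone.utc), "x = 1\nprint(x)", True),
]
print(error_rate(log))  # 0.5
```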
This seminar aims to bring together computing and education researchers who think deeply about CS learning objectives and how to assess them. The goals of the seminar are to articulate flexible yet measurable learning objectives for the first year of university CS education, to brainstorm assessment questions that are worth asking about how and what students are learning about CS, and to identify concrete techniques for answering those questions. We focus on the first year, rather than simply CS1, as the longer time-span accommodates more variation in approach. We focus on university-level education both to scope our discussions and to align with the professional settings of the participants.
Topics discussed in this seminar include:
- What are concrete (measurable) concepts and competencies that we expect students to have after the first year of university-level computer science? What measurable outcomes for the first year could we assess beyond programming?
- What kinds of assessment techniques should we develop for these outcomes? Are there viable and valuable alternatives to, e.g., concept inventories that are more cost-effective?
- What outcomes would we like to see in non-major courses that are not merely preparing students to write scripts needed on the job?
Our goal is not to finish articulating objectives and assessments. Rather, we hope to spur development and sharing of instruments for future research, while reflecting the burgeoning demands on computing education across universities.
The goal of the seminar was to focus on several broadly applicable learning outcomes for first-year university computer science courses, looking at what it would take to understand and assess them in multiple pedagogic contexts.
In preparation for the seminar, we surveyed participants to get an understanding of what could be a common denominator of CS1/2 learning outcomes, using the outcomes from the ACM CC 2013 curriculum as a starting point. We asked participants (a) to identify outcomes that are covered in their institution's CS1/2 courses, and (b) to identify outcomes that they have either experience with or an interest in investigating further. Participants also suggested objectives that were not included in CC 2013.
Of these candidate outcomes, we studied a subset during the seminar, chosen by participant vote. We used breakout sessions in which small groups of participants focused on individual outcomes, reporting on what is known about each outcome, its underlying challenges and/or relevant underlying theory, how best to assess it, and what research questions should be asked to advance educational research on that outcome. We held three separate rounds of breakout sessions, so each participant had the chance to work on three outcomes in detail during the week. Some discussions were continued in a follow-up session.
Rather than have most individual participants give talks, we ran three speed-dating poster sessions on the first afternoon: each person put up a poster on an outcome they had studied, so attendees could quickly get an overview of one another's research.
In addition, we had three invited presentations focusing on workload and determinants of study success (Schulmeister), types of prior knowledge and their relation to study success (Theyssen), and concept inventories (Kaczmarczyk and Wolfman). The abstracts of these presentations are included in this report.
Participants
- Michael E. Caspersen (Aarhus University, DK)
- Holger Danielsiek (Universität Münster, DE)
- Brian Dorn (University of Nebraska, US)
- Katrina Falkner (University of Adelaide, AU)
- Sally Fincher (University of Kent, GB)
- Kathi Fisler (Worcester Polytechnic Institute, US)
- Mark J. Guzdial (Georgia Institute of Technology - Atlanta, US)
- Geoffrey L. Herman (University of Illinois - Urbana Champaign, US)
- Lisa C. Kaczmarczyk (San Diego, US)
- A. J. Ko (University of Washington - Seattle, US)
- Michael Kölling (University of Kent, GB)
- Shriram Krishnamurthi (Brown University - Providence, US)
- Raymond Lister (University of Technology - Sydney, AU)
- Briana B. Morrison (Georgia Institute of Technology - Atlanta, US)
- Jan Erik Moström (University of Umeå, SE)
- Andreas Mühling (TU München, DE)
- Anthony Robins (University of Otago, NZ)
- Rolf Schulmeister (Universität Hamburg, DE)
- Carsten Schulte (FU Berlin, DE)
- R. Benjamin Shapiro (University of Colorado - Boulder, US)
- Beth Simon (University of California - San Diego, US)
- Juha Sorva (Aalto University, FI)
- Martijn Stegeman (University of Amsterdam, NL)
- Heike Theyssen (Universität Duisburg-Essen, DE)
- Jan Vahrenhold (Universität Münster, DE)
- Mirko Westermeier (Universität Münster, DE)
- Steven A. Wolfman (University of British Columbia - Vancouver, CA)
Classification
- data structures / algorithms / complexity
- programming languages / compiler
- software engineering
Keywords
- Computer Science Education
- Educational Assessment
- Learning Objectives