Authentic Assessment and ICT
By Associate Professor Paul NEWHOUSE
Director, Centre for Schooling and Learning Technologies, Edith Cowan University, Western Australia
The past three decades have seen the emergence in education of support for theories of teaching and learning based on constructivist principles and for the use of Information and Communications Technology (ICT), that is, computer-related technology. In both cases, however, the impact on classroom practice has not been as extensive as many had hoped. While many factors may explain this situation, the connection to assessment practice is critical in both cases.
The central idea behind constructivism is that people construct new knowledge and understandings based on what they already know and believe. Further, it is commonly accepted that learning occurs within a physical and psycho-social learning environment that determines the roles of the teacher and students and comprises a complex web of relationships between learners, instructors, curriculum, and resources. This has given rise to the notion of a constructivist learning environment. Interactive technologies such as ICT, when involved in the delivery of the curriculum, have a place within the psycho-social structures of a learning environment, not just the physical ones. Thus there is a logical connection between the use of ICT to support learning and the constructivist nature of the learning environments within which it occurs.
The intention of schooling is to provide students with the skills and knowledge needed to successfully live in society and increasingly this requires students to deal with complex ill-structured problems where practical skills are linked with theoretical knowledge. This is best achieved within a constructivist learning environment where students are supported in the development of complex performances that require cross-discipline knowledge, higher-order thinking and learning process skills along with practical skills. However, there is little doubt that such environments are still the exception rather than the rule for students in countries such as Australia.
It is obvious that what is taught should be assessed and what is taught should be determined by the needs of individuals and the society within which they exist. However, most often the reality is that what is taught is what is assessed and what is assessed bears little resemblance to what is needed. Rather, what is assessed is determined by what can easily be represented on paper using a pen in a short amount of time. Most often this is in contrast to the stated requirements of the curriculum and preferred pedagogy and does not match the rapidly changing requirements of future study, work or life activities associated with that curriculum. That is, assessment lacks alignment and authenticity but unfortunately drives education systems. Thus the provision of constructivist learning environments is being constrained by serious shortcomings in common methods of summative assessment (Note: Summative assessment is principally designed to determine the achievement of a student at the end of a learning sequence).
It is still the case in most nations, Australia among them, that high-stakes summative assessment for school students is conducted with little if any use of ICT. There is no doubt that the vast majority of teachers limit their use of ICT the closer students get to such assessment points, and most often this is supported by parents and students. The rationale is that if the use of ICT is not permitted for the assessment task, then students need to practise that task without using the technology. For example, students are required to handwrite because word processing is not permitted in the ‘exams’. Thus the use of ICT to support learning in schools is being constrained by serious shortcomings in common methods of summative assessment.
Not only have the implementation of constructivist learning environments and the use of ICT to support learning been, and continue to be, seriously constrained by summative assessment practices, but the limited implementation of each has also impacted on the other. That is, there is evidence that ICT is more likely to be used in constructivist learning environments and that such environments are more readily provided when ICT support is available. Learning environments are constructed by the participants and are thus dependent on their beliefs and actions, particularly the underlying pedagogical philosophy of the teacher. Therefore there is considerable variation in the ways ICT may be effectively incorporated within a learning environment, and no suggestion that a particular option is preferable. However, there has now been sufficient reported research to identify principles that may guide the inclusion of ICT support in effective learning environments. These principles build critically on theories of constructivism, including the concept of the zone of proximal development derived from the work of Vygotsky, which has led to the use of the term computer-supported learning. The hope is that with the support of ICT a wider range of effective learning environments will be employed than has traditionally been the case. However, both the use of ICT and the implementation of effective constructivist learning environments are constrained by a lack of authentic performance-based summative assessment.
Developing a means of reliable authentic assessment consistent with the pedagogical practices associated with constructivist learning environments is regarded as critical for courses with a significant performance dimension: while students tend to focus on, and be motivated by, practical performance, teachers will inevitably tend to ‘teach to the test’. Appeal from a student perspective and the authenticity of assessment will be compromised if core performance components of courses are excluded from ‘examinations’. Furthermore, contemporary societies increasingly expect that students will be able to demonstrate practical skills and the application of knowledge through performance in industry, professional and social settings. In part, the difficulties that practical performance presents from an ‘examination’ perspective have contributed to an historical tendency to separate the so-called ‘theory’ and ‘practice’ components of many courses, often with the elimination of the practical component altogether. Finally, students are more likely to experience deep learning through experiences that involve complex performance, there is a social justice imperative to improve approaches to summative assessment, and the use of computer technology may assist in doing so.
The authentic assessment problem concerns more than assessing practical skills; it concerns assessing higher-order thinking and learning process skills, demonstrated through complex performance, and this is something traditional paper-based assessment does very poorly. Concern centres on the validity of such assessment in terms of the intended learning outcomes, where there is a need to improve the criterion-related, construct and consequential validity of high-stakes assessment. Tasks in the ‘real’ world require cross-discipline knowledge, relate to complex ill-structured problems, and are completed collaboratively using a wide range of technological tools to meet needs and standards. These characteristics are at odds with standardised pen-and-paper approaches to assessment, and thus there is a need to consider alternative approaches with alternative technologies, in particular for the representation of student performance on complex tasks.
In general, alternatives to paper-based exams for high-stakes assessment have been considered too expensive and too unreliable to ‘mark’. Recent advances in psychometric methods associated with Rasch modelling, together with improvements in digital technologies, provide tools to approach the assessment of a variety of performances relatively cost-effectively. Firstly, digital technologies may provide tools to collect evidence of performance, including video, audio, graphic and text representations. Secondly, digital technologies may support marking processes through online repositories, marking interfaces and databases. Finally, digital technologies may support the analysis of the data generated by marking to provide reliable, accurate scores. The paired comparisons method of marking, which uses Rasch modelling, is known to be highly reliable when human judgements are required but until recently was considered impractical for ‘front-line’ marking. With digital representations of student performance accessed by markers online, and the use of Rasch modelling software, the method becomes manageable.
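The scoring step of the paired comparisons method can be illustrated with a small sketch. The Python code below (a hypothetical illustration, not the software used in the study; the function name and data are invented) fits a Bradley-Terry model, the paired-comparisons form of the Rasch model, to judge decisions of the form "portfolio X was judged better than portfolio Y", producing a logit-scale score for each piece of student work.

```python
import math
from collections import defaultdict

def bradley_terry(comparisons, n_iter=200):
    """Estimate Bradley-Terry scores from pairwise judgements.

    comparisons: list of (winner, loser) pairs, one per judgement.
    Returns a dict mapping each item to a logit-scale score,
    normalised so the scores sum to zero.
    """
    items = set()
    wins = defaultdict(int)        # total wins per item
    pair_count = defaultdict(int)  # judgements per unordered pair
    for winner, loser in comparisons:
        items.update((winner, loser))
        wins[winner] += 1
        pair_count[frozenset((winner, loser))] += 1

    p = {i: 1.0 for i in items}    # strength parameters, all start equal
    for _ in range(n_iter):        # Zermelo's iterative (MM) updates
        new_p = {}
        for i in items:
            denom = sum(pair_count[frozenset((i, j))] / (p[i] + p[j])
                        for j in items
                        if j != i and frozenset((i, j)) in pair_count)
            # small floor keeps items that never won finite
            new_p[i] = max(wins[i], 1e-6) / denom if denom > 0 else p[i]
        # normalise so the geometric mean is 1 (fixes the arbitrary scale)
        g = math.exp(sum(math.log(v) for v in new_p.values()) / len(new_p))
        p = {i: v / g for i, v in new_p.items()}

    return {i: math.log(v) for i, v in p.items()}
```

In use, each judgement records which of two digital portfolios a marker preferred; the fitted scores place every portfolio on a single interval scale, which is what allows a defensible rank and score to be generated from many simple better/worse decisions rather than absolute marks.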
For decades students in schools in Western Australia wishing to enter universities have been required to undertake tertiary entrance examinations in a limited number of recognised courses considered to be ‘academic’. Almost all of these examinations were conducted using paper and pen over a three-hour time period. From 2007 a large number of courses were implemented for tertiary entrance, with many of these including a major element of practical performance work that raised a range of issues concerning ‘examinations’. The most notable challenge was that of authentic summative assessment – how to validly assess performance at a reasonable cost and in a manner that allows for reliable marking to generate a defensible score.
In 2008 a large, collaboratively funded three-year study commenced that addressed the performance assessment problem in four WA senior secondary courses. The aim was to develop and implement a performance assessment task for each course that could generate a single scale score. The study was carried out by researchers from the School of Education at Edith Cowan University (ECU) and managed by the Centre for Schooling and Learning Technologies (CSaLT) in collaboration with the Curriculum Council of WA. It built on concerns that the assessment of student achievement should, in many areas of the curriculum, include practical performance, and that this will only occur in high-stakes contexts if the assessment can be shown to measure the performance validly and reliably and to be manageable in terms of cost and the school environment. Thus the assessment is summative in nature, with reliability referring to the extent to which results are repeatable and validity referring to the extent to which the results measure the targeted learning outcomes. There is a critical need for research into the use of digital forms of representation of student performance on complex tasks for the purposes of summative assessment that are feasible within the constraints of school contexts. This study investigated authentic digital forms of assessment with high levels of reliability and manageability that were capable of being scaled up for state-wide implementation in a cost-effective manner.