Many of us waited with a degree of eagerness – even excitement – for the release of the Parliamentary Inquiry Report into NAPLAN (Effectiveness of the National Assessment Program – Literacy and Numeracy Report). But what a disappointment!
While it makes a passable fist of identifying many, though by no means all, of the significant issues associated with how NAPLAN is currently administered and reported, it misses some important details. This could be forgiven if the recommendations showed any evidence of careful thinking, vision, or courage. But in my assessment they are trivial and essentially meaningless.
We know a lot about the problems with our current approach to standardised testing and reporting. This Report, with the help of over 95 submissions from a wide range of sources, manages to acknowledge many of them. The key problems include:
- it is not valid and reliable at the school level
- it is not diagnostic
- the test results take 5 months to be reported
- it is totally unsuitable for remote Indigenous students – our most disadvantaged students – because it is not multilevel, not in the language they speak, and not culturally accessible (Freeman)
- now that it has become a high stakes test it is having perverse impacts on teaching and learning
- some of our most important teaching and learning goals are not reducible to multiple choice tests
- there is a very real danger that it will be used to assess teacher performance – a task it is not at all suited to
- some students are being harmed by this exercise
- a few schools are using it to weed out ‘unsuitable enrolments’
- school comparisons exacerbate the neoliberal choice narrative that has been so destructive to fair funding, desegregated schools and classrooms and equitable education outcomes
- there will always be a risk of league tables
- the tests have an unequal impact on high-needs schools
- they do not feed into base funding formulas. In spite of the rhetoric about equity and funding prioritisation being a key driver for NAPLAN, it is not clear that any state uses NAPLAN results to inform its base funding allocations to schools[1]
However, the ‘solutions’ put forward by the report are limited to the following recommendations:
- develop online testing to improve the turnaround of test results – something that is happening anyway
- take into account the needs of students with a disability and of English language learners. This recommendation is so vague as to be meaningless
- have ACARA closely monitor events to ensure league tables are not developed and that the results feed into funding considerations. This is another vague, do-nothing recommendation, and I am certain ACARA will say that it is already doing this.
Taken together, this is a recommendation to do nothing: nothing that is not already being done, and nothing of meaningful substance.
As an example of the paucity of its analysis, I offer the following. The Report notes the lack of diagnostic power of the NAPLAN tests and then says that, even if they were diagnostic, the results come too late to be useful. It then argues, as its first and only firm recommendation, that there needs to be a quicker timeframe for making the results available. Did the writers not realise that this would still not make the tests useful as a diagnostic tool?
This Report, while noting the many problems, assumes that they can be addressed through minor re-emphasis and adjustment – a steady-as-she-goes refresh. But the problems identified in the Report suggest that tiny adjustments won’t fix them. A paradigm change is required.
We are now so accustomed to national standardised testing, based on multiple-choice questions in a narrow band of subjects, being ‘the way we do things’ that our deliberations seem simply incapable of imagining that there might be a better way.
To illustrate what I mean, I would like to take you back to the 1990s in Australia – to the days when national standardised testing was first foisted on a very wary education community.
How many of us can remember the pre-national-testing days? Just in case, I will try to refresh your memory on some key elements and also provide a little of the ‘insider debate’ from before we adopted the NAPLAN tests.
1989 was the historic year when all Education Ministers signed up to a shared set of goals under the now-defunct Hobart Declaration. Australia was also in the process of finalising its first-ever national curriculum – a set of Statements and Profiles about what all Australian children should learn. This was an extensive process, driven by an interstate committee headed by the then Director-General of School Education in NSW, Dr Ken Boston.
During this time, I worked in the mega-agency created by John Dawkins, the Department of Employment, Education, Training and Youth Affairs – initially in the Secretariat for the Education Ministerial Council (then called the Australian Education Council, or AEC) and a few years later heading up the Curriculum and Gender Equity Policy Unit.
The Education Division at that time was heavily engaged in discussions with ACER and the OECD about the development of global tests – the outcomes of which are PISA and a whole swag of other tests.
This was also the period when standardised testing was being talked about for Australian schools. Professor Cummings reminds us of this early period in her submission to the Parliamentary Inquiry when she says that:
…through the Hobart Declaration in 1989 … ‘Ministers of Education agreed to a plan to map appropriate knowledge and skills for English literacy. These literacy goals included listening, speaking, reading and writing….
National literacy goals and sub-goals were also developed in the National Literacy (and Numeracy) Plan during the 1990s, including: …comprehensive assessment of all students by teachers as early as possible in the first years of schooling…to ensure that…literacy needs of all students are adequately addressed and to intervene as early as possible to address the needs of those students identified as at risk of not making adequate progress towards the national…literacy goals….use [of] rigorous State-based assessment procedures to assess students against the Year 3 benchmark for…reading, writing and spelling for 1998 onward.
It is interesting to note that the commitment to on-entry assessment of children by teachers, referred to by Cummings, did result in some work in each state, but it never received the policy focus, funding or attention it deserved, which I regard as a pity. The rigorous Year 3 assessments, however, grew in importance and momentum, and state-based Year 3 assessment became the key consequence of this commitment. Professor Cummings goes on to say that in order to achieve this goal – a laudable goal – the NAP was born: state-based at first, with very strict provisions against providing school-based data, and then eventually what we have today.
Professor Cummings may not have known that many of us working in education at the time did not believe that national multiple-choice standardised tests were the best or only answer, and that considerable work was undertaken to convince the then Education Minister, Kim Beazley, that there was a better way.
During this same period, while national standardised literacy tests were being discussed in the media and behind closed doors at the Education Ministers’ Council, the US-based Coalition of Essential Schools was developing authentic classroom teaching and learning activities that were also powerful diagnostic assessment exercises. Its stated goal was to build a bank of these authentic assessment activities and to benchmark student progress against them across the US. Its long-term goal was to make available to schools across the US a database of benchmarked (that is, standardised) assessments, with support materials on how to use them as classroom lessons and how to use the results to a) diagnose a student’s learning, b) plan future learning experiences, and c) compare that student’s development against a US-wide standard of literacy development.
As the manager of the curriculum policy area, I followed these developments with great interest, as did a number of my colleagues inside and outside the Department. We saw the potential of these assessments to provide a much less controversial and less damaging way of meeting the Ministers’ need to show leadership in this area.
Our initiatives resulted in DEETYA agreeing to fund a trial to develop similar diagnostic, classroom-friendly literacy assessment units as the first part of this process. We planned to use these to demonstrate to decision-makers that there was a better solution than standardised multiple-choice tests.
As a consequence, I commenced working with Geoff Masters (then an assessment expert at ACER) and Sharan Burrow (who headed the Australian Education Union at the time) to explore the potential power of well-designed formative assessments, based on authentic classroom teaching formats, to identify students at risk of not successfully developing literacy skills.
Unfortunately, we failed to head off the decision to opt for standardised tests. We failed for a number of reasons:
- the issue moved too quickly,
- the OECD testing process had created a degree of enthusiasm amongst the data crunchers, who had louder voices,
- our proposal was more difficult to translate into three-word slogans or easy jargon,
- multiple choice tests were cheaper.
At the time I thought these were the most important reasons. But looking back now, I can also see that our alternative proposal never had a chance because it relied on trusting teachers. Teachers had to teach the units and assess the students’ work. What was to stop them cheating and altering the results? Solutions could have been developed, but without the ICT developments we have access to today, they would have been cumbersome.
I often wonder what would have happened if we had initiated this project earlier and been more convincing. Could we have been ‘the Finland’ of education, proving that we can monitor children’s learning progress, identify students at risk early in their school lives, prioritise funding based on need – all without the distorting effects of NAPLAN and MySchool?
We can’t go back in time, but we can advocate for a bigger, bolder approach to addressing the significant problems associated with our current NAPLAN architecture. The parliamentary Report failed us here, but this should not stop us.
I have written this piece because I wanted us to imagine, for a moment, that it is possible to have more courageous, bold and educationally justifiable policy solutions around assessment than what we have now. The pedestrian ‘rearrange the deck-chairs’ approach of this Report is just not good enough.
So here is my recommendation, and I put it out as a challenge to the many professional education bodies and teacher education institutions out there.
Set up a project as follows:
Identify a group of our most inspiring education leaders through a collaborative peer-nomination process. Ensure the group includes young and old teachers and principals, and teachers with significant experience in our most challenging schools, especially our remote Indigenous schools. Provide them with a group of expert critical friends – policy experts, testing and data experts, assessment and literacy experts – and ask them to:
- Imagine there is no current assessment infrastructure
- Devise an educationally defensible assessment architecture – taking a greenfields approach
I can almost guarantee that this working group would not invent NAPLAN and MySchool or anything like it, and we would be significantly better off.
We have dug ourselves into an educationally indefensible policy hole because we have allowed politicians and the media to drive key decisions. To my knowledge, we have never established an expert group of educational practitioners, with access to specialist expertise, to develop better policy solutions in education. Why don’t we give it a try?
Any takers?
[1] I understand that NSW does use NAPLAN results to channel some additional funds to low-performing schools, but these are above-the-line payments.