The NAPLAN Parliamentary Review’s ‘do nothing’ recommendations: We can do better

Many of us waited with a degree of eagerness – even excitement – for the release of the Parliamentary Inquiry Report into NAPLAN (Effectiveness of the National Assessment Program – Literacy and Numeracy Report). But what a disappointment!

While it makes a passable fist of identifying many, but by no means all, of the significant issues associated with how NAPLAN is currently administered and reported, it misses some important details. This could be forgiven if the recommendations showed any evidence of careful thinking, vision or courage. But in my assessment they are trivial and essentially meaningless.

We know a lot about the problems with our current approach to standardised testing and reporting. This Report, with the help of over 95 submissions from a wide range of sources, manages to acknowledge many of them. The key problems include:

  • it is not valid and reliable at the school level
  • it is not diagnostic
  • the test results take 5 months to be reported
  • it is totally unsuitable for remote Indigenous students – our most disadvantaged students – because it is not multilevel, in the language that they speak or culturally accessible (Freeman)
  • now that it has become a high stakes test it is having perverse impacts on teaching and learning
  • some of our most important teaching and learning goals are not reducible to multiple choice tests
  • there is a very real danger that it will be used to assess teacher performance – a task it is not at all suited to
  • some students are being harmed by this exercise
  • a few schools are using it to weed out ‘unsuitable enrolments’
  • school comparisons exacerbate the neoliberal choice narrative that has been so destructive to fair funding, desegregated schools and classrooms and equitable education outcomes
  • there will always be a risk of league tables
  • it has an unequal impact on high needs schools
  • the results do not feed into base funding formulas. In spite of the rhetoric about equity and funding prioritisation being a key driver for NAPLAN, it is not clear that any state uses NAPLAN to inform its base funding allocations to schools[1]

However, the ‘solutions’ put forward by the report are limited to the following recommendations:

  1. develop on-line testing to improve test results turnaround – something that is happening anyway
  2. take into account the needs of students with a disability and English language learners. This recommendation is so vague as to be meaningless.
  3. have ACARA closely monitor events to ensure league tables are not developed and that the results feed into funding considerations. This is another vague do-nothing recommendation, and I am certain ACARA will say that it is already doing this.

These are recommendations to do nothing – nothing that is not already being done, and nothing of meaningful substance.

As an example of the paucity of its analysis I offer the following. The report writes about the lack of diagnostic power of the NAPLAN tests and then says that, even if they were diagnostic, the results come too late to be useful. The report then argues, as its first and only strong recommendation, that there needs to be a quicker timeframe for making the results available. Did the writer even realise that this would still not make the tests useful as a diagnostic tool?

This Report, while noting the many problems, assumes that these can be addressed through minor re-emphasis and adjustments – a steady-as-she-goes refresh. However, the problems identified in the Report suggest that tiny adjustments won’t address the issues. A paradigm change is required here.

We are so accustomed now to national standardised testing based on multiple choice questions in a narrow band of subjects being ‘the way we do things’, that it seems our deliberations are simply incapable of imagining that there might be a better way.

To illustrate what I mean I would like to take you back to the 1990s in Australia – to the days when NAPLAN was first foisted on a very wary education community.

How many of us can remember the pre national testing days? Just in case, I will try to refresh your memory on some key elements and also provide a little of the ‘insider debates’ from before we adopted the NAPLAN tests.

1989 was the historic year when all Education Ministers signed up to a shared set of goals under the now defunct 1989 Hobart Declaration. Australia was also in the process of finalising its first ever national curriculum – a set of Profiles and Statements about what all Australian children should learn. This was an extensive process driven by an interstate committee headed by the then Director of School Education in NSW, Dr Ken Boston.

During this time, I worked in the mega agency created by John Dawkins, the Department of Employment, Education, Training and Youth Affairs, initially in the Secretariat for the Education Ministerial Council (then called the AEC) and a few years later heading up the Curriculum and Gender Equity Policy Unit.

The Education Division at that time was heavily engaged in discussion with ACER and the OECD about the development of global tests – the outcomes of which are PISA and a whole swag of other tests.

This was also when standardised testing was also being talked about for Australian schools. Professor Cummings reminds us of this early period in her submission to the Parliamentary Inquiry when she says that

…through the Hobart Declaration in 1989 … ‘Ministers of Education agreed to a plan to map appropriate knowledge and skills for English literacy. These literacy goals included listening, speaking, reading and writing….

National literacy goals and sub-goals were also developed in the National Literacy (and Numeracy) Plan during the 1990s, including: …comprehensive assessment of all students by teachers as early as possible in the first years of schooling…to ensure that…literacy needs of all students are adequately addressed and to intervene as early as possible to address the needs of those students identified as at risk of not making adequate progress towards the national…literacy goals….use [of] rigorous State-based assessment procedures to assess students against the Year 3 benchmark for…reading, writing and spelling for 1998 onward.

It is interesting to note that the commitment to on-entry assessment of children by teachers, referred to by Cummings, did result in some work in each state. But it never received the policy focus, funding or attention it deserved, which I regard as a pity. The rigorous Year 3 assessments, however, grew in importance and momentum. Professor Cummings goes on to say that in order to achieve this goal – a laudable goal – the NAP was born: state based at first, with very strict provisions against providing school-based data, and then eventually what we have today.

Professor Cummings may not have known that many of us working in education at the time did not believe that national multiple-choice standardised tests were the best or only answer, and that considerable work was undertaken to convince the then Education Minister, Kim Beazley, that there was a better way.

During this period, national standardised literacy tests were being discussed in the media and behind closed doors at the education Ministers’ Council.

Over this same period the US-based Coalition of Essential Schools was developing authentic classroom teaching and learning activities that were also powerful diagnostic assessment exercises. Its stated goal was to build a data bank of these authentic assessment activities and to benchmark student progress against them across the US. Its long-term goal was to make available to schools across the US a database of benchmarked (that is, standardised) assessments, with support materials about how to use them as classroom lessons and how to use the results to a) diagnose a student’s learning, b) plan future learning experiences and c) compare development against a US-wide standard of literacy development.

As the manager of the curriculum policy area, I followed these developments with great interest, as did a number of my work colleagues inside and outside the Department. We saw the potential of these assessments to provide a much less controversial, and less damaging way of meeting the Ministers’ need to show leadership in this area.

Our initiatives resulted in DEETYA agreeing to fund a trial to develop similar diagnostic classroom friendly literacy assessment units as the first part of this process. We planned to use these to demonstrate to decision makers that there was a better solution than standardized multiple-choice tests.

As a consequence I commenced working with Geoff Masters (then at ACER as an assessment expert) and Sharon Burrows (who headed up the Australian Education Union at the time) exploring the potential power of well designed formative assessments, based on authentic classroom teaching formats, to identify those at risk of not being successful at developing literacy skills.

Unfortunately we failed to head off a decision to opt for standardised tests. We failed for a number of reasons:

  • the issue moved too quickly,
  • the OECD testing process had created a degree of enthusiasm amongst the data crunchers, who had louder voices,
  • our proposal was more difficult to translate to three word slogans or easy jargon,
  • multiple choice tests were cheaper.

At the time I thought these were the most important reasons. But looking back now, I can also see that our alternative proposal never had a chance because it relied on trusting teachers. Teachers had to teach the units and assess the students’ work. What was to stop them cheating and altering the results? Solutions could have been developed, but without the ICT developments we have access to today, they would have been cumbersome.

I often wonder what would have happened if we had initiated this project earlier and been more convincing. Could we have been ‘the Finland’ of education, proving that we can monitor children’s learning progress, identify students at risk early in their school lives, prioritise funding based on need – all without the distorting effects of NAPLAN and MySchool?

We can’t go back in time but we can advocate for a bigger, bolder approach to addressing the significant problems associated with our current NAPLAN architecture. The parliamentary report failed us here but this should not stop us.

I have written this piece because I wanted us to imagine, for a moment, that it is possible to have more courageous, bold and educationally justifiable policy solutions around assessment than we have now. The pedestrian “rearrange the deck-chairs” approach of this Report is just not good enough.

So here is my recommendation, and I put it out as a challenge to the many professional education bodies and teacher education institutions out there.

Set up a project as follows:

Identify a group of our most inspiring education leaders through a collaborative peer nomination process. Ensure the group includes young and old teachers and principals, and teachers with significant experience in our most challenging schools, especially our remote Indigenous schools. Provide them with a group of expert critical friends – policy experts, testing and data experts, assessment and literacy experts – and ask them to:

  • Imagine there is no current assessment infrastructure
  • Devise an educationally defensible assessment architecture – taking a green fields approach

I can almost guarantee that this working group would not invent NAPLAN and MySchool or anything like it, and we would be significantly better off.

We have dug ourselves into an educationally indefensible policy hole because we have allowed politicians and the media to drive key decisions. To my knowledge we have never established an expert group of educational practitioners, with access to specialist expertise, to develop better policy solutions in education. Why don’t we give it a try?

Any takers?

[1] I understand that NSW does use the NAPLAN results to channel some additional funds to low performing schools but these are above the line payments.


Is opting out of testing just selfish individualism?

In a recent article about American culture and the opt-out society, Alan Greenblatt described the growing and successful movement to encourage parents to refuse to allow their child to participate in national standardised testing as selfish individualism. It might be driven by a parent’s individual interest, he argues, but it is selfish and against collective interests:

“It’s probably true that the time spent on testing isn’t going to be particularly beneficial to the kids, but it’s very beneficial to the system,” says Michael Petrilli, executive vice president of the Fordham Institute, an education think tank. “If you have enough people opt out of these tests, then you have removed some important information that could make our schools better.”

I find this amusing, because the whole corporate reform movement, for which testing is the centrepiece, is built on the neoliberal belief that the best solution to everything – prisons, health, education and so on – is to turn it into a market and allow competition and individual choice to drive better value.

In fact this was the prime motivation described by Kevin Rudd when he first announced the ‘school transparency agenda’ on 21 August 2008 at the National Press Club. The speech has mysteriously disappeared, but I am quite clear that Kevin Rudd said something along the following lines:

“If parents are unhappy with their local school because of the information in MySchool, and decide to transfer their child to another better performing school, then that is exactly what should happen.  This is how schools will improve, through parents voting with their feet.”

Now nobody who works in a struggling school thinks this is the way schools improve. Australia has run an aggressive market-choice model of school funding for nearly two decades now, and all we have to show for it is a highly class-segregated schooling system and high levels of inequality.

So let me reassure parents who are concerned about our high stakes NAPLAN testing regime.  Opting out of having your child participate in these tests is much more of a community act than deciding to send your child to an elite school.

A Tale of ACARA and the see-no-evil monkeys – Subtitle: There is no excuse for willful ignorance

The Australian Senate’s Education, Employment and Workplace Relations Committee is currently holding an Inquiry into the effectiveness of the National Assessment Program – Literacy and Numeracy (NAPLAN).

Over 70 submissions of varying quality have been received. In this article, I focus on the submission from the Australian Curriculum, Assessment and Reporting Authority (ACARA). ACARA is the custodian of NAPLAN and of its use for school transparency and accountability purposes on the MySchool website.

One of the focus questions in the Inquiry’s Terms of Reference refers to the unintended consequences of NAPLAN’s introduction.  This is an important question given widespread but mainly anecdotal reports in Australia of: test cheating; schools using test results to screen out ‘undesirable enrolments’; the narrowing of the curriculum; NAPLAN test cramming taking up valuable learning time; Omega 3 supplements being marketed as able to help students perform better at NAPLAN time; and NAPLAN test booklets hitting the best seller lists.

Here is what ACARA’s submission to the Senate inquiry  has to say about this issue:

To date there has been no research published that can demonstrate endemic negative impacts in Australian schools due to NAPLAN.  While allegations have been made that NAPLAN has had unintended consequences on the quality of education in Australian schools there is little evidence that this is the case. 

The submission goes on to refer to two independent studies that investigated the unintended consequences of NAPLAN.

ACARA dismisses a Murdoch University research project[1] led by Dr Greg Thompson as flawed because its focus is on changes to pedagogical practices as a result of the existence and uses made of NAPLAN.  The basis of the dismissal is that if teaching practices change then that is all about teachers and nothing to do with NAPLAN.  Yet this report makes clear that teachers feel under pressure to make these changes, changes they don’t agree with, because of the pressures created by the use of NAPLAN as a school accountability measure.  In other words, in one clever turn of phrase, ACARA rules out of court any unintended consequences of NAPLAN that relate to changes to teachers’ practice.

ACARA also dismisses a survey undertaken by the Whitlam Institute because it “suffers from technical and methodological limitations, especially in relation to representativeness of the samples used”, and rejects its conclusions without detailing the findings. Now this survey was completed by nearly 8,500 teachers throughout Australia, and it was representative in every way (year level taught, sector, gender, years of experience) except for the oversampling in Queensland and Tasmania. In relation to this sampling concern, the writers even reported that they weighted the responses to compensate for the differential. This report documents many unintended consequences that ACARA is now saying are not substantiated, on the basis of a spurious sampling critique. This is intellectually dishonest at best.

ACARA’s dismissal of all of the findings of these two research projects on spurious grounds, while refraining from stating what those findings were, is in stark contrast to its treatment of unsupported statements about enrolment selection concerns made by stakeholders who are anything but impartial.

In response to the claims that some schools are using NAPLAN results to screen out ‘undesirable students’ ACARA states that it is aware of these claims but appears willing to take at face value comments from stakeholders who represent the very schools accused of unethical enrolment screening.

It is ACARA’s understanding that these results are generally requested as one of a number of reports, including student reports written by teachers, as a means to inform the enrolling school on the strengths and weaknesses of the student. The purpose for doing so is to ensure sufficient support for the student upon enrolment, rather than for use as a selection tool. This understanding is supported by media reporting of comments made by peak bodies on the subject (my emphasis).

ACARA’s approach to this whole matter comes across as most unprofessional. But, unfortunately for ACARA, this is not the whole story. There is a history to this issue that began in 2008, almost as soon as the decision to set up MySchool was announced by the then PM Kevin Rudd as part of his school transparency agenda.

Three years ago this month I wrote an article[2] about the importance of evaluating the impact of the MySchool Website and the emergence, under FOI, of an agreement in September 2008 by State and Commonwealth Ministers of Education to:

…commence work on a comprehensive evaluation strategy for implementation, at the outset, that will allow early identification and management of any unintended and adverse consequences that result from the introduction of new national reporting arrangements for schools(my emphasis).

It was clear from the outset that this evaluation should have been managed by ACARA as the organisation established to manage the school transparency agenda. In 2010, in response to my inquiry to ACARA on this Ministerial agreement, the CEO of ACARA stated that it was not being implemented at that point in time because early reactions to ‘hot button issues’ are not useful, and because the website did not yet include the full range of data planned.

This was a poor response for three reasons.

Firstly, well-designed evaluations are built in, not as afterthoughts, but as part of the development process. One of the vital elements of any useful evaluation is the collection of baseline data that would enable valid comparisons of any changes over time. For example, information could have been collected prior to the MySchool rollout on matters such as:

  • Has time allocated to non-test based subjects reduced over time?
  • Has teaching become more fact based?
  • Has the parent demographic for different schools changed as a result of NAPLAN data or student demographic data?
  • Are more resources allocated to remedial support for students who fail to reach benchmarks?
  • Are the impacts different for different types of schools?

Secondly, the commitment to evaluate was driven by concerns about the possibility of schools being shamed through media responses to NAPLAN results, the narrowing of curriculum and teaching, further residualisation of public schools, test cheating and possible negative effects on students and teachers. Identifying these concerns early would allow for revising the design elements of MySchool to mitigate the impacts in a timely fashion.  There is no real value in waiting years before deciding corrections are needed.

Thirdly, anyone who seriously believed that the data elements agreed as possibly in scope for MySchool were a complete list, able to be developed quickly, was dreaming. Waiting for the full range of data meant, in reality, an indefinite delay. There are still data items in development today.

So now, five years on from the Ministerial directive that there was a need to actively investigate any unintended consequences, there is still no comprehensive evaluation in sight. One suspects that ACARA finds this quite convenient and hopes that its failure to act on this directive stays buried.

However, Ministers of Education still had concerns.  In the MCEECDYA Communiqué of April 2012 the following is reported:

Ministers discussed concerns over practices such as excessive test preparation and the potential narrowing of the curriculum as a result of the publication of NAPLAN data on My School.  Ministers requested that ACARA provide the Standing Council with an assessment of these matters.

On the basis of this statement, I wrote to ACARA on 27 April 2013 requesting information on action in response to this directive – by then over 12 months old. To date I have received no reply.

So what sense can be made of this?

If one takes at face value the statements by ACARA that it knows of no information regarding the extent of unintended consequences, one can only conclude that ACARA has twice failed to act on a Ministerial directive.

Here we have a Government body: aware of Ministers’ concerns about unintended negative consequences of a program it manages; aware of widespread anecdotal concerns, some of them quite serious; dismissing without any proper argument the few pieces of evidence that do exist; and refusing to undertake any investigation into this matter despite two Ministerial directives to do so.

Willful ignorance about the potential unintended and harmful impacts of a program an agency is responsible for, whilst all the while professing a strong interest in the matter, is highly irresponsible and unprofessional.

It is also quite astonishing given the Government’s commitment to the principle of transparency and the fact that ACARA was established specifically to bring that transparency and reporting to Australia’s schools.

But to then write a submission that almost boasts about the lack of information on this issue, while dismissing with poor arguments the evidence that is growing, is outrageous. It also gives new meaning to a throwaway line in its submission about the negative findings from the Whitlam Institute survey: “Further research with parents, students and other stakeholders is needed to confirm the survey results on well-being”.

Further research is indeed needed, and this further research should have been initiated by ACARA quite some time ago – five years, to be precise. It is convenient for ACARA that such research is not available. It is intellectually dishonest and misleading for ACARA to now state that it “takes seriously reports of unintended adverse consequences in schools. It actively monitors reports and research for indications of new issues or trends.”

Of course, there is another, more alarming possibility: that this work has been undertaken but is not being made public, and that ACARA is misleading the Parliamentary inquiry and the public by denying that any such information exists.

In either case, I am forced to conclude that ACARA does not want any unintended consequences of a program for which it is responsible to be known, in spite of its ‘interest in this issue’ and is persisting in its position of willful ignorance.

In an effort to restore public confidence in its work, ACARA should commit to undertaking this research at arm’s length, using independent researchers, and to reporting the findings to Parliament without delay. Perhaps this Inquiry could recommend exactly that.

Evaluating MySchool – We are still waiting, ACARA

Note: Three years ago this month, in June 2010, I wrote about the Education Ministers’ agreement to evaluate MySchool in order to identify and mitigate unintended consequences. There is still no such evaluation, nor any commitment by ACARA to undertake one. This is being republished as a backgrounder to my next post.

At last it is official.  Well before the launch of the MySchool website, state and federal education ministers agreed to task an expert working group to ‘commence work on a comprehensive evaluation strategy for implementation at the outset, that will allow early identification and management of any unintended and adverse consequences that result from the introduction of new national reporting arrangements for schools’, according to minutes of the ministers’ meeting held in Melbourne on 12 September 2008.

It appears, however, that the decision was never implemented and responsibility for it has transferred to the Australian Curriculum, Assessment and Reporting Authority (ACARA).

Now that the initial hype surrounding the launch of the MySchool website is over and the education community is settling down for a more considered debate on the opportunities and challenges of MySchool, I thought it might be useful to put forward some thoughts about what should be in scope for this MySchool evaluation.


My understanding of the ACARA position is that it is too early to undertake an evaluation because early reactions to a ‘hot button issue’ are not an accurate reflection of the longer-term impacts, and because the MySchool website does not yet have the full range of information that is planned.

I have to agree that at this stage in the rollout of the full MySchool concept, we have an unbalanced set of data.  As data on parent perceptions, school level funding inputs and hopefully data on the quality of school service inputs (like the average teaching experience of the staff and their yearly uptake of professional learning) are added to MySchool, the focus of schools and systems might well shift.  I, for one, hope so.  If extra information like this doesn’t shine a bright light on the comparative level of the quality of the schooling experience for high-needs students relative to advantaged students, this will be a lost opportunity.

So yes, the effects of MySchool might change over time.  But this doesn’t mean we should wait to evaluate what is happening.  In fact, I think we have already missed the best starting point.  A good evaluation would have included a baseline assessment report, something that would have told us what sort of accountability pressures schools were already experiencing for good or ill prior to the introduction of MySchool.

For several years, education systems already had access to data from the National Assessment Program – Literacy and Numeracy (NAPLAN) down to the school, class and even student level.  In most government systems, NAPLAN data was but one small component of a rich set of data used by schools to review their strategic plans, identify improvement priorities, set improvement targets and develop whole-school improvement plans.  They were already using parent and student surveys, student data on learning progress, attendance, turnover, post-school destinations, school expulsion and discipline, and teacher data on absenteeism, turnover, professional learning and performance reviews.  At the same time, most education systems had access to this same information and were using it to prioritise schools in need of external reviews or other forms of intervention as part of system-level school-improvement strategies.

Contrary to popular belief, the introduction of MySchool did not commence a new regime of school accountability but it did both broaden it out and narrow it down.  It broadened it out to include parents and the community as arbiters of a school’s performance and applied this public accountability to independent as well as systemic schools. It narrowed it down by using just a tiny amount of data, and that has had the effect of privileging the NAPLAN test results.


Even before the launch of MySchool, systemic schools such as government and some Catholic schools were already feeling some degree of pressure from their systems about student performance in NAPLAN.  For example, in the Northern Territory, this had already given rise to a strong push from the education department to turn around the high level of non-participation of students in NAPLAN testing – an initiative that was highly successful.

A baseline study could have taken a reading on the extent to which schools and systems were responding in educationally positive ways under the previously established accountability regimes, and whether there were already unintended impacts.

Without a baseline assessment we are left with only anecdotal evidence of the effect of and reaction to MySchool, and responses vary widely.  I recall hearing a more outspoken principal in a very remote school saying something along these lines:

If I have to go to one more workshop about how to drill down into the NAPLAN data to identify areas of weakness and how to find which students need withdrawal based special support, I will scream.  All areas require focused intervention at my school and all our students require specialised withdrawal support.  We don’t need NAPLAN to tell us that.  What we need is …

I suspect that for schools where the NAPLAN outcomes were poorest, there was already a degree of being ‘over it’, even before the advent of MySchool.  The challenge for the most struggling schools is not about ‘not knowing’ that student progress is comparatively poor.  It is about knowing what can be done.  For years, struggling schools have been bombarded with a steady stream of great new ideas, magic bullets and poorly funded quick fixes that have all been tried before.  Their challenge is about getting a consistent level of the right sort of support over a sustained period of time to ‘crash through’.  What has MySchool done for schools with this profile?  And more significantly, what has it done for their school communities?

On the other hand, I have heard anecdotally from teachers in the Australian Capital Territory that the launch of the site has drawn some comments to the effect that ‘we probably needed a bit of a shake up’ or ‘perhaps we have been a bit too complacent’.  Will the effect of MySchool be different depending on school demography and schools’ previously experienced accountability pressures?

There is evidence from the United States that this is likely to be the case.  Linda Darling-Hammond’s new book The Flat World and Education makes the point that the negative impacts of accountability driven by multiple-choice tests are greater when the stakes are higher but are also greater in schools serving high-needs communities.  For schools in high-needs communities, the stakes are higher than for comparatively advantaged schools precisely because their results are lower and their options fewer.

Darling-Hammond’s research suggests that this unequal impact results in more negative consequences for schools serving high-needs communities on three fronts.  The first relates to the impact on student school engagement and participation.  The more pressure on a school to perform well in national testing, the more likely it will be that subtle and not so subtle pressures flow to the highest-needs students not to participate in schooling or testing.  She documents unequal patterns with high-needs schools experiencing very marked increases in student suspensions, drop-outs, grade repeating and non-test attendance as a result of accountability measures driven largely by tests.

The second relates to the stability, quality and consistency of teaching.  Darling-Hammond notes that the higher the stakes, the more the negative impact on the stability and quality of teaching.  Her book documents decreases in the average experience level of the teachers in high-needs schools after the introduction of the US No Child Left Behind program of accountability.  Ironically, some of the systemic responses to failing schools exacerbate this, as systems frequently respond by funding additional program support for at-risk students, English language learners and special-needs students.  These programs almost always engage non-teaching support staff.  The practical impact of this is that the students with the highest needs get even less access to a qualified teacher during their classroom learning time.

The third relates to the quality of the teaching and learning itself.  This is the topic that has most occupied the Australian debate on MySchool.  There has been no shortage of articles in the media predicting or reporting that MySchool will result, or has resulted, in a narrowing of the curriculum, an increase in the teaching of disconnected facts or rote learning, and classroom time being devoted to test preparation at the expense of rich learning.  In addition, there are websites surveying changes at the school level in terms of what gets taught and how it gets taught.  Less commonly acknowledged is the overseas experience that suggests the quality of teaching and learning is most negatively affected in schools serving high-needs communities.

These differentiated impacts need to be identified and addressed because, if they are not, differences in educational outcomes between high-needs schools and advantaged schools are likely to increase.  The impact of parents making choices to change schools, sometimes on the basis of quite spurious information, will also be felt more in lower socioeconomic communities.  Increased residualisation of government schools leads to higher concentrations of students of lower socioeconomic status.  This has a negative effect on all the students of those residualised schools.  Of course, schools that struggle the most may ironically be exempt from this kind of adverse pressure because many families in these communities do not really have the choice of sending children anywhere other than the local government school.

Are there other unintended consequences worth investigating? The potential impact of the Index of Community Socio-Educational Advantage (ICSEA) values being used to set up a notion of like schools, for example, deserves further attention.

The ICSEA assigns a value to each school based on the socioeconomic characteristics of the areas where students live, as well as whether a school is in a regional or remote area and the proportion of Indigenous students enrolled at the school.  The development and use of ICSEA is a complex matter that deserves a separate article.  Here I am just looking at what an evaluation should focus on around ICSEA being used as part of the MySchool website.  For all its faults, in very broad terms it tells us how advantaged a school is (a high score) or how disadvantaged it is (a low score).

I must admit that when the concept of reporting by school demographic type was first mooted, I got a little excited.  I took for granted that this would lead to a focus on the fact that, in Australia, a student’s chance of success at school is still fundamentally influenced by the student’s level of disadvantage and the relative disadvantage of the school she or he attends. I thought, at last, we had a chance to expose the fact that Australia has not managed to break the ‘demography as destiny’ cycle.  I took for granted that this would lead to a focus on this outrage and lead to a renewed commitment to addressing it.  I also thought that the ICSEA had the potential to become a framework for a more focused needs-based funding approach than the current state approaches to this.

I was wrong.  Since the launch of MySchool in January,[1] I have googled and googled and found a lot of reporting about the way in which the tool, in its first iteration, has led to some very strange school groupings, but with the exception of an excellent article by Justine Ferrari in The Australian, almost nothing about the link between poverty, disadvantage and NAPLAN results.

This initially surprised me – but then I was reminded that John Hattie, in his recent book, Visible Learning, makes the point, almost in passing, that differences between student learning outcomes within schools are greater than the differences in learning outcomes between schools in developed countries.  He refers to interschool differences as trivial at best.  Maybe we in Australia assume that this is so for us.

The MySchool data tells a very different story and anybody can verify this for themselves.  It shows that schools in the lowest ICSEA grouping (those with ICSEA values in the low 500s), which are assessed as doing better than their like-school peers (a dark green rating), have NAPLAN scores for their Year 9 students that are below the NAPLAN scores for Year 3 students in schools serving advantaged communities (those with ICSEA values around 1,100).  This means that schools where Year 9 students are achieving average Year 3 results are rated as doing well for schools of their type.

I had assumed the ICSEA would be a tool to shine a light on the issue of systemic educational disadvantage, but I have come to understand that it may well be inadvertently taking attention away from this issue. The structure of the website precludes easy comparisons (for good reasons) but also draws web users’ attention to comparisons between like schools, not unlike schools.

If the combination of the limited searchability of the MySchool website and the complexity of the ICSEA concept has unintentionally led to an unspoken legitimising of ‘demography as destiny’, this is a serious concern that an evaluation would pick up.  Will schools start to say, ’we are doing pretty well considering the community we serve’? Have we inadvertently naturalised different outcomes for different groups of students?

In our efforts to minimise the shaming of high-needs schools and their communities, we need to ensure that we have not taken away the shame that all Australians ought to feel about our failure to break the vicious cycle of family poverty and the failure of the institution of education to systemically and sustainably disrupt this link.

This is not a failure that can be laid at the feet of the individual classroom teacher or school principal.  It is a systemic failure.  If we insist on putting pressure solely on individual schools to do the ‘educational heavy lifting’, our revolution will fail.  If we believe that encouraging parents to vote with their feet for the best school will work without first guaranteeing that every choice can be a high-quality choice, with equal opportunity to learn, we will also fail.  Ideally, the evaluation should include, in scope, not just unintended impacts but the explicitly intended effects too.

While there is no space to embark on this discussion here, I would like to end with a quote from Linda Darling-Hammond’s The Flat World and Education that sums up most eloquently the kinds of changes that education transparency and accountability frameworks ought to drive:

Creating schools that enable all children to learn requires the creation of systems that enable all educators and schools to learn.  At heart, this is a capacity-building enterprise leveraged by clear, meaningful learning goals and intelligent, reciprocal accountability systems that guarantee skilful teaching in well-designed schools for all learners.


Publication details for the original article: Margaret Clark, Evaluating MySchool, Professional Educator, Volume 9, Number 2 June 2010 pp 7 – 11

Darling-Hammond, L. (2010). The Flat World and Education: How America’s commitment to equity will determine our future. New York: Teachers College Press.

Ferrari, J. (2010, May 1). On the honour roll: The nation’s top schools. The Australian Inquirer, p. 5.

Hattie, J. (2009). Visible Learning: A synthesis of over 800 meta-analyses relating to achievement. London & New York: Routledge.

Patty, A. (2010, March 30). Evaluation of MySchool pushed aside, say critics. Sydney Morning Herald, p. 2.

[1] Of course the research undertaken by Richard Teese, and also by Chris Bonnor and Bernie Shepherd, since this date has achieved this.  We no longer hear the line that the biggest differences are between classes in the same schools in Australia.  Thanks to the work of Teese, Bonnor and Shepherd this myth has been busted.

What have schools got to do with neo-liberalism?

Neoliberalism is not a term that everyone is happy to use.  Some see it as ideological jargon; for others it might describe what is happening, but its use by education academics seems to get in the way of teachers and practitioners hearing its central message.

My own view is that the basic assumptions, frameworks and processes of neoliberalism have been so well incorporated into our economic frameworks, social policies and thinking that unless we name it and unpack it, we can’t talk sensibly about what is happening or view things through any other lens.

In this blog I want to point out just how deeply school education has become infected with neoliberal ideas.

So what is neoliberalism?  In a recent post by Chris Thinnes[1] the following definition is used:

[Neoliberalism is] …an ensemble of economic and social policies, forms of governance, and discourses and ideologies that promote individual self-interest, unrestricted flows of capital, deep reductions in the cost of labor, and sharp retrenchment of the public sphere. Neoliberals champion privatization of social goods and withdrawal of government from provision for social welfare on the premise that competitive markets are more effective and efficient.

Now it’s not hard to see the relevance of this to the school reform policies of the US, the UK and, increasingly, Australia:

  • School choice and competition – highly entrenched in Australia
  • MySchool – providing the information to support parents voting with their feet and forcing schools to worry more about student test performance than about the school’s learning and wellbeing environment
  • High-stakes testing – creating commodities out of smart kids and relegating others to a ‘take a sick day on testing’ status
  • Performance pay for teachers – introducing competition where there needs to be collaboration and team building
  • Competing for a place in the PISA top 5 – turning school quality into an international productivity competition

Thinnes’s post, The Abuse and Internalization of the ‘Free Market’ Model in Education, shows how school policies and practices promote individual self-interest over the common good and the market as the arbiter of values.  In this he is not unique. But Thinnes also reminds us that these fundamental ideas exist at a much deeper level – that this way of thinking has become the air we breathe in school policy and practice, even within the field of education.

His very first example emerges from comments made by both teachers and students about the challenges and opportunities of collaborative or group work in classrooms:

The problem with group projects is that somebody might end up doing all the work, but somebody else will get the credit

It’s too hard to grade each student when you’re not sure how they contributed

Collaboration is great, but somebody ends up not carrying their weight

When you try to help each other, the teachers sometimes treat you like you cheated

The message coming through from these comments  is that although student collaboration might be important to learning in theory, “the assessment and affirmation of individual contributions, achievements, and accomplishments is what matters most in our schools”.

Thinnes observes that

The persistence of such beliefs should come as no surprise to any of us, who find ourselves in a society with an education system that has embraced prevailing myths about competition, meritocracy, and economic and social mobility in its education policy. It should strike us with a great sadness, however, for those of us who question and resist those myths in our classroom practice and learning communities.

This internalization of neoliberal commitments to the individual achievements of our students and teachers, and the market competition of our schools, is naturalized even in our most informal, everyday conversations about education. It is enforced by many of our classroom practices. It is celebrated in many of our school-wide rituals. But I find it perhaps most disturbing when it frames our thoughts, subconsciously or purposefully, about how to improve our schools.

Unfortunately we see evidence of this in the Australian context wherever we look.

The only two items mentioned in the 2013 budget speech in relation to Indigenous education and closing the gap were scholarships for individual Indigenous students to attend elite schools and the Clontarf Football Academy.  Neither of these offers any systemic strategy for improving Indigenous education.  It seems we have decided to give up on structural, systemic improvements in Indigenous education, in spite of appalling and systemic failure – particularly in remote contexts.  The vast majority of Indigenous students and their families are left untouched by these two strategies.  In fact it is possible they will be worse off as the more aspirational students – those who can contribute to the quality of learning in a classroom – are plucked out and removed.  And of course the fact that both these strategies result in the funding of non-government bodies to deliver the programs has not even been seen as odd or of concern.

Today in the Canberra Times Tony Shepherd argues that wealthy parents who choose to suck on the public teat by sending their children to public schools should be charged a levy.  This only makes sense if schools are considered a commodity – a product – and students its customers.  This is a total repudiation of the fundamental democratic purpose of schools, but such is the saturation of neoliberal thinking that it can make these seem like logical and sensible ideas.

Thinnes ends his article with the following message:

The end-run of the logic of the ‘free market model’ and its application to schools is simple: the repudiation of schools as we have come to know them; the abandonment of democratic principles on which they are based; and the service of a technocratic vision of education as matrix of individual relationships with private providers….


We should take note before it is too late.

Kevin Donnelly thinks that Fabianism is a dirty word.


We’ve put up with absolute rubbish from Kevin Donnelly for too long.  It’s time to look at his claims without the emotion and invective.

In his latest rant in The Australian, “Education saviour is pulling too many levers”[1], Donnelly makes the following claims.

1.        Julia Gillard “in a desperate attempt” is going to use education as her lever to stay in power

Sadly, and a little reluctantly, I share concerns about the growing centrality of education in the future election debate.  Although chances are slim, I am pinning my hopes on progress on implementing the key components of the Gonski reforms prior to the election to the extent that they cannot easily be rolled back. 

The temptation to use the Gonski implementation plan as an election carrot will not save the ALP but it will cost public schools dearly.

2.        Billions have been wasted on the Building the Education Revolution program that forced off-the-shelf, centrally mandated infrastructure on schools with little, if any, educational benefit;

Donnelly clearly has not read the ANAO audit report into the BER[2], because it concludes that where there were poor decisions and centralised rollouts the culprits were state governments, not the Commonwealth, and that to some extent this was inevitable given the justifiable time constraints.  May I also remind him that this was a GFC response first and foremost, not an education initiative? The audit report makes this clear:

The Government decided on school based infrastructure spending because it had a number of elements that supported stimulus objectives

It also notes that:

The objectives of the BER program are, first, to provide economic stimulus through the rapid construction and refurbishment of school infrastructure and, second, to build learning environments to help children, families and communities participate in activities that will support achievement, develop learning potential and bring communities together[3]

For many schools the capital works were a godsend because the new hall or learning space gave them the capacity to do the thing that Donnelly most encourages – use new space to increase local innovative solutions to education challenges.  Indeed the audit report noted that over 95% of principals that responded to the ANAO survey indicated that the program provided something of ongoing value to their school and school community.[4]

3.        The computers in schools program delivered thousands and thousands of now out-of-date computers that schools can ill-afford to maintain or update.

I am not one to argue that ICT is the magic bullet answer to everything about teaching and learning in our schools.  However, I am convinced that with well-informed, computer-literate teachers, who are also good teachers in the broader sense, students can only benefit.  I also acknowledge that a high level of computer literacy is now a core area of learning.  To achieve this, even “out of currency” computer hardware will be better than no computers.

Any ICT hardware rollout will result in out-of-date computers and a maintenance/update impost.  But the state of ICT infrastructure in our schools desperately needed to be addressed.  Is Donnelly really arguing that schools that do not have enough in their budgets to manage the whole-of-life costs of having computers should go without?  I wonder which schools these might be?


4.        Julia Gillard’s data fetish is forcing a centralised and inflexible accountability regime on schools, government and non-government, that is imposing a command and control regime on classrooms across the nation.

There is no doubt that we could benefit from a better accountability and reporting regime – for all schools. So this is one of the few areas where Donnelly and I have aligned concerns, but possibly for different reasons. I continue to believe that the changes to the original intention of NAPLAN testing have been disastrous for some Australian schools – but possibly not the ones dear to Donnelly’s heart.

The reporting of NAPLAN results at the school level has, almost certainly, distorted what is taught in schools[5].  This is especially the case in schools where students struggle – our highly concentrated low-SES schools.  It has also contributed to the residualisation of the public school system.  And we now have evidence that when middle-class students are leached out of public schools, public school students lose out in lots of ways.  For example, they lose out because of the loss of articulate and ‘entitled’ parent advocates for the needs of the schools.  But they also lose out because each middle-class child is actually a resource.  That is, their presence in the class enhances the learning of all students in that class.[6]

Donnelly, on the other hand, appears to be more concerned that non-government schools are now under the same reporting obligations as government schools.  I know of no other area of Commonwealth funding where recipients are not expected to provide a defined level of accountability and reporting.  Removing this anomaly was long overdue.

5.        The Gillard-inspired national curriculum, instead of embracing rigorous, academic standards, is awash with progressive fads such as child-centred, inquiry-based learning, all taught through a politically correct prism involving indigenous, Asian and environmental perspectives.

Donnelly appears to have a short memory on this matter.  The national curriculum effort was kicked off by the previous Howard Government – and that is why History was singled out above other social science disciplines. 

Perhaps Donnelly has not read the national curriculum? If he had, he would know that it is just a scoping and sequencing exercise and does not address pedagogy at all.  Donnelly has had a bee in his bonnet for years about so-called ‘progressive fads’, based on nothing more than sheer ignorance.  And as for the cross-curriculum perspectives – these came out of extensive consultation and negotiation and were not imposed by the Gillard Government.  While there are unfortunately many examples of Commonwealth overreach, the cross-curriculum perspectives are not among them.

6.        Even though the Commonwealth Government neither manages any schools nor employs any teachers, Gillard is making it a condition of funding that every school across Australia must implement Canberra’s (sic) National Plan for School Improvement.

This is another area where, to some extent, I do agree with Donnelly but for very different reasons. 

My position is that the National Plan for School Improvement is Commonwealth overreach that was unnecessary and risky because it could have put the Gonski implementation at risk.

The National Plan for School Improvement was unnecessary because, all education systems throughout the country already had some form of school improvement planning and annual reporting, and had begun to share good practice through the National Partnership process.  It was also unnecessary because it foolishly cut across the more informed and consultative process being undertaken by AITSL to grow the teacher performance feedback and improvement process in collaboration with the various teaching institutes around Australia.  This process had a strong emphasis on supporting teacher development and self-reflection based on well-supported peer, supervisor and student feedback.  The Commonwealth initiative has recast the whole process into a high stakes, external reporting context that will be much less useful and teacher friendly.  This is a pity.  AITSL’s work should not have been distorted in this manner.

It was, and is, risky as some states seized on the obligations of the Plan as the rationale to push back on the Gonski reforms.  Tying the two together  was poor strategy, in light of the importance of implementing Gonski between now and September 2013.

Donnelly’s objection to the Plan appears to be that it is imposed on the non-government sectors, which should, according to Donnelly, be able to receive significant levels of Commonwealth funding with no accountability.  It is the imposts he objects to, not their design elements.

7.        Research here and overseas proves that the most effective way to strengthen schools, raise standards and assist teachers is to embrace diversity, autonomy and choice in education. The solution lies in less government interference and micro-management, not more.

I am afraid that Donnelly’s claim that autonomy and choice are the best way to strengthen schools does not have a shred of evidence behind it.  I, and others, have written about the autonomy claims[7] and there is now solid international evidence confirming that market models of education choice are disastrous for education equity and therefore for education overall[8].

8.        Autonomy in education helps to explain why Catholic and independent schools, on the whole, outperform government schools.

There is now enduring evidence that the differences in school outcomes are overwhelmingly connected with student demography and not schooling system.  When SES is taken into account, the non-government systems do not perform any better at all.  The very detailed research undertaken by Richard Teese[9] in the context of the Gonski Review process concluded that:

Using NAPLAN data, the paper shows that public schools work as well as or better than private schools (including Catholic schools).  This finding echoes the results of PISA 2009 that, after adjustment for intakes, public schools are as successful as private schools.

9.        Gillard’s plan for increased government regulation and control and a one size fits all, lowest common denominator approach is fabianism and based on the socialist ideal of equality of outcomes.

Now this is the strangest claim of all.  Here Donnelly uses fabianism as a slur and it is not the first time he has taken this tack.  However it is a term so quaint, so rarely used, that this tactic may well pass unnoticed.  In fact in order to find a useful definition I had to go back to 1932 to an essay by GDH Cole[10].  Cole’s explanation is interesting given the implied nastiness of fabianism:

Whereas Marxism looked to the creation of socialism by revolution based on the increasing misery of the working class and the breakdown of capitalism through its inability to solve the problem of distribution, Webb argued that the economic position of the workers had improved in the nineteenth century, was still improving and might be expected to continue to improve. He regarded the social reforms of the nineteenth century (e.g. factory acts, mines acts, housing acts, education acts) as the beginnings of socialism within the framework of capitalist society. He saw legislation about wages, hours and conditions of labor, and progressive taxation of capitalist incomes as means for the more equitable distribution of wealth; …


The Fabians are essentially rationalists, seeking to convince men by logical argument that socialism is desirable and offering their arguments to all men without regard to the classes to which they belong. They seem to believe that if only they can demonstrate that socialism will make for greater efficiency and a greater sum of human happiness the demonstration is bound to prevail. 

So our progressive tax system, our Fair Work Australia, our transfer payments to those in poverty, our national health system, our public education system, our welfare safety net, our superannuation minimums – these are all examples of fabianism at work, not because fabianism is a secret sect with malicious intent, as implied by Donnelly, but because we have come to see the benefits of a strong, cohesive society where the wealth of the country is not enjoyed by the few while the majority slave in misery.

What’s so bad about our proud achievements, Donnelly?  I for one want to keep moving in this direction, and for me implementing the Gonski reforms is the essential next step in schooling policy.

10.     Tony Abbott’s view of education is based on diversity and choice where schools are empowered to manage their own affairs free from over-regulation and constraint.

It is interesting that Donnelly thinks he knows what Tony Abbott’s view of education is, because I suspect most of us remain unclear on this matter.  Abbott has said on one occasion that more funding should go to Independent schools – an astonishing claim given our funding profile relative to all other countries.  His shadow Minister has said a bit more, but his statement that we should go back to didactic teaching (like when he was a boy) does not, to me, imply a commitment to allowing schools to manage their own affairs.  But maybe he only means that this is what government schools should do.  That would probably be OK according to Kevin Donnelly’s view of the world.

[3] Ibid., p. 8.

[4] Ibid., p. 26.

[5] A useful research article about this is the submission prepared by Dr Greg Thompson in response to the Parliamentary Inquiry into the Australian Education Bill 2012 – Submission no. 16, available at this URL

[6] The best explanation of the importance of the ‘other student effect’ on student learning is from an unpublished paper by Chris Bonnor, where he notes that “the way this resource of students is distributed between schools really matters. Regardless of their own socio-economic background, students attending schools in which the average socio-economic background is high tend to perform better than if they are enrolled in a school with a below-average socio-economic intake”.



A vision for a new unified and credible approach to school assessment in Australia


I was only partly surprised to read in the Adelaide Advertiser[1] that Geoff Masters, CEO of the Australian Council for Educational Research (ACER), has called for the scrapping of the A-E grading system and its replacement with NAPLAN growth information.

To be blunt, I regard the A-E system as a nonsense cooked up by the previous Coalition Government and imposed on all states as a condition of funding.  It has never meant much and the different approaches to curriculum taken by the different state systems made its reporting even more confusing.

With the introduction of the Australian National Curriculum, the A-E grading system may now be applied more consistently across states, but its meaning is often confusing and unhelpful.  As Masters notes:

If a student gets a D one year and a D the next, then they might think they’re not making any progress at all when they are but the current reporting process doesn’t help them see it… [T]his could contribute to some students becoming disillusioned with the school system.

Abandoning this approach makes sense.  But the Advertiser article also implied that Masters is arguing that we should replace the A-E reporting with a NAPLAN gains process.  This to me was a complete surprise.

This is because I believe that would be a disaster and, more importantly, I am pretty sure that Masters would also see the limitations of such an approach.

At the 2010 Australian Parliamentary Inquiry into the Administration and Reporting of NAPLAN, Geoff Masters spoke at length about the limitations of NAPLAN covering the following:

  • Its limitation for students at the extremes because it is not multilevel
  • Its original purpose as a population measure and the potential reliability and validity problems with using it at school, classroom and individual student level
  • Its limited diagnostic power – because of the narrow range of testing and the multiple choice format

He also acknowledged the potential dangers of teachers teaching to the test and the narrowing of the curriculum.  (Unfortunately there appears to be a problem with the APH website and I was unable to reference this, but I have located a summary of the ACER position[2].)

Now these are not minor problems.

I was also surprised because it seemed unlikely that the CEO of ACER would pass up an opportunity to talk about the benefits of diagnostic and formative assessments. After all, these tests are important for ACER’s revenue stream.

So what is going on here?

To investigate, I decided to look beyond the Advertiser article and track down the publication that Masters was speaking to at the conference. It is a new publication, launched yesterday, called Reforming Educational Assessment: Imperatives, principles and challenges.[3]

And lo and behold, the editor Sheradyn Holderhead got it wrong.  What Masters is arguing for is anything but the swapping out of one poorly informed reporting system (A-E reporting) for a flawed one (NAPLAN).  He is mapping out a whole new approach to assessment – one that can be built on our best understandings of assessment and learning but also meet the “performativity”[4] needs of politicians and administrators.

Now some will object to the compromise taken here because they see “performativity” as a problem in and of itself.  At one level I agree, but because I also look for solutions that are politically doable, I tend to take a more pragmatic position.

This is because I see the reporting of NAPLAN through MySchool as a kind of one-way reform – a bit like the privatization of public utilities.  Once such a system has been developed, it is almost impossible to reverse the process.  The genie cannot be put back into the bottle.  So to me, the only solution is to build a more credible system – one that is less stressful for students, less negative for lagging students, more helpful for teachers, less likely to lead to a narrowing of the curriculum through teaching to the test, and less prone to be used as a basis for school league tables.

And my take on Masters’ article is that, if taken seriously, his map for developing a new assessment system has the potential to provide the design features for a whole new approach to assessment – one that doesn’t require the complete overthrow of the school transparency agenda to be effective.

Here are some of the most significant points made by Masters on student assessment:

Assessment is at the core of effective teaching

Assessment plays an essential role in clarifying starting points for action. This is a feature of professional work in all fields. Professionals such as architects, engineers, psychologists and medical practitioners do not commence action without first gathering evidence about the situation confronting them. This data-gathering process often entails detailed investigation and testing. Solutions, interventions and treatments are then tailored to the presenting situation or problem, with a view to achieving a desired outcome. This feature of professional work distinguishes it from other kinds of work that require only the routine implementation of pre-prepared, one-size-fits-all solutions.

Similarly, effective teachers undertake assessments of where learners are in their learning before they start teaching. But for teachers, there are obvious practical challenges in identifying where each individual is in his or her learning, and in continually monitoring that student’s progress over time. Nevertheless, this is exactly what effective teaching requires.

Understandings derived from developments in the science of learning challenge long-held views about learning, and thus approaches to assessing and reporting learning.

These insights suggest that assessment systems need to:

  • Emphasise understanding where students are at, rather than judging performance
  • Provide information about where individuals are in their learning, what experiences and activities are likely to result in further learning, and what learning progress is being made over time
  • Give priority to the assessment of conceptual understandings, mental models and the ability to apply learning to real world situations
  • Provide timely feedback in a form that a) guides student action and builds confidence that further learning is possible and b) allows learners to understand where they are in their learning and so provide guidance on next steps
  • Focus the attention of schools and school systems on the development of broader life skills and attributes – not just subject specific content knowledge
  • Take account of the important role of attitudes and self-belief in successful learners

On this last point Masters goes on to say that:

Successful learners have strong beliefs in their own capacity to learn and a deep belief in the relationship between success and effort. They take a level of responsibility for their own learning (for example, identifying gaps in their knowledge and taking steps to address them) and monitor their own learning progress over time. The implications of these findings are that assessment processes must be designed to build and strengthen metacognitive skills. One of the most effective strategies for building learners’ self-confidence is to assist them to see the progress they are making.

…..  current approaches to assessment and reporting often do not do this. When students receive the same letter grade (for example, a grade of ‘B’) year after year, they are provided with little sense of the progress they are actually making. Worse, this practice can reinforce some students’ negative views of their learning capacity (for example, that they are a ‘D’ student).

Assessment is also vital for gauging how a system is progressing – whether a class, school, system, state or nation.

Assessment, in this sense, is used to guide policy decision-making, to measure the impact of interventions or treatments, or to identify problems or issues.

In educational debate, these classroom-based and system-driven assessments are often seen as being in conflict, and their respective proponents as members of opposing ideological and educational camps.

But the most important argument in the paper is that we have the potential to overcome the polarised approach to assessment that is typical of current discussion about education – but only if we start with the premise that the CORE purpose of assessment is to understand where students are in their learning. Other assessment goals should be built on this core.

Once information is available about where a student is in his or her learning, that information can be interpreted in a variety of ways, including in terms of the kinds of knowledge, skills and understandings that the student now demonstrates (criterion- or standards-referencing); by reference to the performances of other students of the same age or year level (norm-referencing); by reference to the same student’s performance on some previous occasion; or by reference to a performance target or expectation that may have been set (for example, the standard expected of students by the end of Year 5). Once it is recognised that the fundamental purpose of assessment is to establish where students are in their learning (that is, what they know, understand and can do), many traditional assessment distinctions become unnecessary and unhelpful.

To this end, Masters proposes the adoption and implementation of a coherent assessment ‘system’ based on a set of five assessment design principles, as follows:

Principle 1: Assessments should be guided by, and address, an empirically based understanding of the relevant learning domain.

Principle 2: Assessment methods should be selected for their ability to provide useful information about where students are in their learning within the domain.

Principle 3: Responses to, or performances on, assessment tasks should be recorded using one or more task ‘rubrics’.

Principle 4: Available assessment evidence should be used to draw a conclusion about where learners are in their progress within the learning domain.

Principle 5: Feedback and reports of assessments should show where learners are in their learning at the time of assessment and, ideally, what progress they have made over time.

So, to return to the premise of the Advertiser article, Masters is not arguing for expanding the use of the current model of NAPLAN.  In fact, he is arguing for a reconceptualisation of assessment that:

  • starts with the goal of establishing where learners are in their learning within a learning domain; and
  • develops, on the basis of this, a new Learning Assessment System that is equally relevant in all educational assessment contexts, including classroom diagnostic assessments, international surveys, senior secondary assessments, national literacy and numeracy assessments, and higher education admissions testing.

As the Advertiser article demonstrates, this kind of argument is not amenable to easy headlines and quick sound bites.  Building support for moving in this direction will not be easy.

But the first step is to challenge the popular understanding that system-based assessment and ‘classroom useful’ assessment are, and must necessarily be, at cross purposes, and to start to articulate how a common approach could be possible.  Masters refers to this as the unifying principle:

….. it has become popular to refer to the ‘multiple purposes’ of assessment and to assume that these multiple purposes require quite different approaches and methods of assessment. …

This review paper has argued …. that assessments should be seen as having a single general purpose: to establish where learners are in their long-term progress within a domain of learning at the time of assessment. The purpose is not so much to judge as to understand. This unifying principle, which has potential benefits for learners, teachers and other educational decision-makers, can be applied to assessments at all levels of decision-making, from classrooms to cabinet rooms.

So if you are still not convinced that Masters is NOT arguing for replacing A-E reporting with NAPLAN growth scores, this quote may help:

As long as assessment and reporting processes retain their focus on the mastery of traditional school subjects, this focus will continue to drive classroom teaching and learning. There is also growing recognition that traditional assessment methods, developed to judge student success on defined bodies of curriculum content, are inadequate for assessing and monitoring attributes and dispositions that develop incrementally over extended periods of time.

[4] This is a widely used term, usually associated with the work of Stephen J. Ball. In simple terms, it refers to our testing mania in schools and the culture and conceptual frameworks that support reform built around testing data.  To read more, this might be a useful starting point