The Greens take a position on NAPLAN

This was posted today by Penny Wright, Greens Senator: NAPLAN? New plan please. I must admit to being a bit – well, more than a bit – disappointed.

It gives a bit of a summary of the things that are problematic about the current arrangements, but it doesn't do much else. I expected more – a considered and comprehensive policy, perhaps?

What would you include in this?




NAPLAN DAY – What did your child do today: go to the zoo or sit a test?

Today is the start of NAPLAN day for every Australian parent with a child in years 3, 5, 7 or 9. The vast majority of parents will send their children off to school as per usual, perhaps with an extra hug and an exhortation to "just do your best and don't get stressed".

But for a small, but growing, number of parents, this is a day to do something quite different – to go to the movies, the zoo or a picnic, or just stay home and have a pyjama day. They have taken the decision to withdraw their child from the testing.

Now there are no rights or wrongs about this. It is a personal decision. But you may be wondering why people are making this decision.

I have been reading the many testimonials from US parents about why they have come to this decision and the few statements I have come across about withdrawal decisions from Australian parents. In this piece I bring together the key reasons.

Here is one US parent speaking:

 As a nation we have been convinced that our public schools are failing, that the “status quo” is unacceptable, that schools need standards and testing in order to succeed, and that market based reforms such as privatization, charter schools, vouchers and “dumping the losers” are the way to get it done.  The only problem is that none of this is true. None of it…..

It is the test that binds all of this insanity together.  Without the tests, the reformers have nothing to threaten schools with.  Without the tests, the federal government loses power over states.  Without the tests, schools would be able to stop assigning multiple choice tests to kindergarteners.  Without the tests, there would be no way for education reformers to convince you that your schools are much worse than they really are.  Without the tests, there wouldn’t be a target on our teachers.

But tests aren’t really the problem, the real problem is how the tests are used. Tests are an important form of data that can help educators determine how students are doing and how they need to improve.  When used for that purpose, tests are great.  Still limited, but great.  However, when used as a tool for propaganda, profit and pressure, tests are more punitive than positive.  As long as high stakes standardized tests – despite their limitations – are used as the primary means for evaluating schools, they will continue to be far more valuable for punishing states, schools and teachers than for evaluating student achievement.

There isn’t much I can do about this as an educator and an academic other than write and speak when I’m allowed.  But as a parent I have the power to take control over the education of my child, and that’s exactly what my wife and I have decided to do.


This opt out movement in the US started as a mere trickle, but this year it has reached critical mass. On Long Island alone, more than 20,000 schoolchildren did not take the first round of state tests that began April 1[1].

Here is another parent – this one not a teacher – explaining her decision to opt out:

Lawmakers and education reformers are pushing policies that subtract joy from the classroom, and as a parent of two public school students I am looking to push back. That's why I joined the opt-out movement…

…this year their father and I refused to send our kids to school for …testing. Instead they slept in, watched TV, played outside and read for pleasure. Their grandma also took them to the museum….

I’ve come to believe standardized tests are to learning as an exhibit of butterflies is to nature. In the attempt to pin down what is measurable, we render something wild and beautiful, dead and on display.

While our public school leaders pay lip service to creativity and innovation, they are mandating more class time be devoted to standardized testing in the name of holding teachers accountable for student progress. Next year, Colorado charges headlong into a pay-for-performance system tying 50 percent of our public school teachers’ evaluation to student progress.

Ravitch, … believes parents can halt this parasitic process by refusing to allow students to take the tests that feed it. “Deny them the data,” is the slogan inspiring me and thousands of parents around the country.


But my personal favourite is this letter from Will and Wendy Richardson from Delaware

To the Editor:

After much thought, we have decided to keep our son home during …standardized assessments …. we are basing this decision on our serious concerns about what the test itself is doing to our son’s opportunity to receive a well-rounded, relevant education, and because of the intention of state policy makers to use the test in ways it was never intended to be used. These concerns should be shared by every parent and community member who wants our children to be fully prepared for the much more complex and connected world in which they will live, and by those who care about our ability to flourish as a country moving forward.

Our current school systems and assessments were created for a learning world that is quickly disappearing. In his working life, my son will be expected to solve real world problems, create and share meaningful work with the world, make sense of reams of unedited digital information, and regularly work with others a half a world away using computers and mobile devices. The NJ ASK tells us nothing about his ability or preparedness to do that. The paper and pencil tasks given on the test provide little useful information on what he has learned that goes beyond what we can see for ourselves on a daily basis and what his teachers relay to us through their own assessments in class. We implicitly trust the caring professionals in our son’s classroom to provide this important, timely feedback as opposed to a single data point from one test, data that is reported out six months later without any context for areas where he may need help or remediation. In short, these tests don’t help our son learn, nor do they help his teachers teach him. 

In addition, the test itself poses a number of problems:

  • Over the years, the "high stakes" nature of school evaluation has narrowed instruction to focus on only those areas that are tested. This has led to reductions in the arts, languages, physical education and more.

  • Research has shown that high scores can be achieved without any real critical thinking or problem solving ability.

  • The huge amount of tax dollars being spent on creating, delivering and scoring the tests – dollars that are going to businesses with, no surprise, powerful lobbyists in the state capitol and in Washington, DC – is hugely problematic.

  • Proposals to use these test scores for up to 50% of a teacher's evaluation are equally problematic. The tests were not created for such a use, and to create even higher stakes for the NJ ASK will only create more test prep in our classrooms at the expense of the relevant, authentic, real world learning that our students desperately need.

  • These tests create unnecessary anxiety and stress in many students who feel immense pressure to do well.

In no way are we taking this step because of our dissatisfaction with our son’s public school, the teachers and administrators there, or our school board. We have simply had enough of national and state policies that we feel are hurting the educational opportunities for all children. At the end of the day, we don’t care what our son scores on a test that doesn’t measure the things we hold most important in his education: the development of his interest in learning, his ability to use the many resources he has at his disposal to direct his own learning, and his ability to work with others to create real world solutions to the problems we face. And we feel our tax dollars are better spent supporting our schools and our teachers who will help him reach those goals as well as the goals detailed by the state standards in ways that are more relevant, engaging and important than four days of testing could ever accomplish.

There are many, many parent testimonials about opting out, and many impassioned arguments about why parents feel it necessary to take this step. But for me the following themes stand out:

  1. The problem isn't testing per se, but how tests are used – the lack of validity and reliability in their unintended uses. This testing culture punishes and diminishes teachers.

In the US this is particularly problematic because of federal government mandates that require states to use standardised tests as one of the measures to assess teachers. This was mooted by Ben Jensen from the Grattan Institute at one point, and also by Julia Gillard. But thanks to excellent intervention by AITSL this disastrous situation has been avoided – at least for now.

But we do use NAPLAN scores as the basis for student outcomes reporting on the MySchool website. This turns NAPLAN from a low stakes test into a high stakes event, uses the data in ways that are psychometrically questionable and fosters an unhealthy market choice model of education.

  2. The testing culture has impoverished what happens in classrooms. Parents want education to be a joyful experience and to prepare students for active participation as adults in social, economic and political life. The kind of learning that can be tested will not equip students for this.

It is interesting to note that almost none of the testimonies I located were from parents whose children were stressed or made sick by testing days. This is not to suggest that this situation does not exist, but that it is not what is driving the opt out movement. These are parents who want education to be the best it can be for all students and who see the testing culture as undermining that, not just for their child but for all students.

  3. We don't want to be part of the problem, so we are pushing back, refusing to provide our data to a bad process. In this way we hope to be part of building a movement that will break the corporate stranglehold on our nation's education.

Many, many parents were at pains to state that they don't believe there is a crisis in public education in the US, and that they trust teachers as professionals more than they trust a multiple choice test to assess their children's progress.

"How will you know what your child is capable of if you don't have test scores?"  The answer to that is pretty simple.  We trust our son's teachers.  The privileging of standardized test score data above all other forms of information regarding a student's progress is a relatively recent phenomenon.  There was a time when we trusted teachers to teach, assess, and evaluate the progress of our students.  We believe this should still be the case.  We don't need standardized tests to tell us what our kids are capable of.  Our son's teachers are more than capable of evaluating and communicating our son's capabilities in the class using the data they collect through classwork, teacher created assessments and other formative data points that aren't mandated by the federal government.  Did you know that the new assessments for CCSS will be graded completely by a computer?  Even students' writing will be scored by a computer.  They'll tell you that algorithms can be constructed to evaluate a human's writing capacity.  As an expert in how kids think and learn, I'll tell you that's ridiculous.  Testing is one of the least authentic ways to determine what any child is capable of. Nowhere else in life do we try to determine what somebody is capable of by putting them in front of a test and asking them to fill in bubbles.  Yet in American public education, that's quickly becoming the ONLY way we determine what students are capable of.

In Australia one person who has gone public about his decision to withdraw his eldest child from NAPLAN testing is Glen Fowler, ACT branch secretary of the Australian Education Union.

He has withdrawn his year 3 child, because NAPLAN data is published to show how individual schools are performing.

The use of this data to compare and rank schools is a disingenuous practice, and from my point of view, if the data is being misused, there will be no data provided by my family….

I’ve got no issue with standardised tests which are low stakes – I’ve got no issue with sample testing which is done by PISA [Program for International Student Assessment] every year … there’s no capacity for that to damage the reputation of a school or a teacher or a student.

If I had kids of NAPLAN age I would definitely withdraw them, not because of concerns about the effects on my child but as a political act. If enough parents acted in this way, the results would become even more unreliable, and eventually there might need to be an acknowledgement that this is not our best policy. NAPLAN is NOT diagnostic; it narrows the curriculum, encourages low-level thinking, and is harming some children[2].

Maybe all this could be seen as acceptable if there was a more important upside to the enterprise. When the decision to publish NAPLAN results at the school level on MySchool was first announced, there were many noble speeches about using NAPLAN to assess which children and which schools need extra help, so that resources could be appropriated for this purpose. But NAPLAN is NOT being used to identify those schools needing extra funding. And with tonight's budget decision, I very much fear that school funding in Australia will continue to ignore the needs of our most disadvantaged students. In this context NAPLAN is nothing but a cruel joke.


[2] If you want to think through your position on NAPLAN, the 'Say no to NAPLAN' site established by literacy educators at Sydney University provides an excellent set of papers about why NAPLAN is problematic.


The NAPLAN Parliamentary Review’s ‘do nothing’ recommendations: We can do better

Many of us waited with a degree of eagerness – even excitement – for the release of the Parliamentary Inquiry Report into NAPLAN (Effectiveness of the National Assessment Program – Literacy and Numeracy Report). But what a disappointment!

While it makes a passable fist of identifying many – but by no means all – of the significant issues associated with how NAPLAN is currently administered and reported, it misses some of the important details. This could be forgiven if the recommendations showed any evidence of careful thinking, vision or courage. But in my assessment they are trivial and essentially meaningless.

We know a lot about the problems with our current approach to standardised testing and reporting. This Report, with the help of over 95 submissions from a wide range of sources, manages to acknowledge many of them. The key problems include:

  • it is not valid and reliable at the school level
  • it is not diagnostic
  • the test results take 5 months to be reported
  • it is totally unsuitable for remote Indigenous students – our most disadvantaged students – because it is not multilevel, not in the language they speak, and not culturally accessible (Freeman)
  • now that it has become a high stakes test it is having perverse impacts on teaching and learning
  • some of our most important teaching and learning goals are not reducible to multiple choice tests
  • there is a very real danger that it will be used to assess teacher performance – a task it is not at all suited to
  • some students are being harmed by this exercise
  • a few schools are using it to weed out ‘unsuitable enrolments’
  • school comparisons exacerbate the neoliberal choice narrative that has been so destructive to fair funding, desegregated schools and classrooms and equitable education outcomes
  • there will always be a risk of league tables
  • the tests have an unequal impact on high needs schools
  • they do not feed into base funding formulas. In spite of the rhetoric about equity and funding prioritisation being a key driver for NAPLAN, it is not clear that any state uses NAPLAN to inform its base funding allocations to schools[1]

However, the ‘solutions’ put forward by the report are limited to the following recommendations:

  1. develop online testing to improve test results turnaround – something that is happening anyway
  2. take into account the needs of students with a disability and English language learners. This recommendation is so vague as to be meaningless.
  3. have ACARA closely monitor events to ensure league tables are not developed and that the results feed into funding considerations. This is another vague, do-nothing recommendation, and I am certain ACARA will say that they are already doing this.

Together these amount to a recommendation to do nothing – nothing that is not already being done, and nothing of meaningful substance.

As an example of the paucity of its analysis, I offer the following. The report notes the lack of diagnostic power of the NAPLAN tests and then says that, even if they were diagnostic, the results come too late to be useful. It then argues, as its first and only strong recommendation, that there needs to be a quicker timeframe for making the results available. Did the writers even realize that this would still not make the tests useful as a diagnostic tool?

This Report, while noting the many problems, assumes that they can be addressed through minor re-emphasis and adjustments – a steady-as-she-goes refresh. However, the problems identified in the Report suggest that tiny adjustments won't address the issues. A paradigm change is required here.

We are so accustomed now to national standardised testing based on multiple choice questions in a narrow band of subjects being ‘the way we do things’, that it seems our deliberations are simply incapable of imagining that there might be a better way.

To illustrate what I mean I would like to take you back to the 1990s in Australia – to the days when NAPLAN was first foisted on a very wary education community.

How many of us can remember the pre-national-testing days? Just in case, I will try to refresh your memory on some key elements, and also provide a little of the 'insider debate' from before we adopted the NAPLAN tests.

1989 was the historic year when all Education Ministers signed up to a shared set of goals under the now defunct 1989 Hobart Declaration. Australia was also in the process of finalising its first ever national curriculum – a set of Profiles and Statements about what all Australian children should learn. This was an extensive process driven by an interstate committee headed by the then Director of School Education in NSW, Dr Ken Boston.

During this time, I worked in the mega agency created by John Dawkins, the Department of Employment, Education, Training and Youth Affairs, initially in the Secretariat for the Education Ministerial Council (then called the AEC) and a few years later heading up the Curriculum and Gender Equity Policy Unit.

The Education Division at that time was heavily engaged in discussion with ACER and the OECD about the development of global tests – the outcomes of which are PISA and a whole swag of other tests.

This was also the period when standardised testing was being talked about for Australian schools. Professor Cummings reminds us of this early period in her submission to the Parliamentary Inquiry when she says that

…through the Hobart Declaration in 1989 … ‘Ministers of Education agreed to a plan to map appropriate knowledge and skills for English literacy. These literacy goals included listening, speaking, reading and writing….

National literacy goals and sub-goals were also developed in the National Literacy (and Numeracy) Plan during the 1990s, including: …comprehensive assessment of all students by teachers as early as possible in the first years of schooling…to ensure that…literacy needs of all students are adequately addressed and to intervene as early as possible to address the needs of those students identified as at risk of not making adequate progress towards the national…literacy goals….use [of] rigorous State-based assessment procedures to assess students against the Year 3 benchmark for…reading, writing and spelling for 1998 onward.

It is interesting to note that the commitment to on-entry assessment of children by teachers, referred to by Cummings, did result in some work in each state. But it never received the policy focus, funding or attention it deserved, which I regard as a pity. The rigorous assessments at Year 3, however, grew in importance and momentum, and became the key consequence of this commitment. Professor Cummings goes on to say that in order to achieve this goal – a laudable goal – the NAP was born: state based at first, with very strict provisions about not providing school based data, and then eventually what we have today.

Professor Cummings may not have known that many of us working in education at the time did not believe that national multiple choice standardised tests were the best and only answer and that considerable work was undertaken to convince the then Education Minister, Kim Beazley, that there was a better way.

During this period, while national standardised literacy tests were being discussed in the media and behind closed doors at the education Ministers' Council, the US-based Coalition of Essential Schools was developing authentic classroom teaching and learning activities that were also powerful diagnostic assessment exercises. Its stated goal was to build a bank of these authentic assessment activities and to benchmark student progress against them across the US. Its long term goal was to make available to schools across the US a database of benchmarked (that is, standardised) assessments, with support materials about how to use them as classroom lessons and how to use the results to a) diagnose a student's learning, b) plan future learning experiences and c) compare a student's development to a US-wide standard of literacy development.

As the manager of the curriculum policy area, I followed these developments with great interest, as did a number of my work colleagues inside and outside the Department. We saw the potential of these assessments to provide a much less controversial, and less damaging way of meeting the Ministers’ need to show leadership in this area.

Our initiatives resulted in DEETYA agreeing to fund a trial to develop similar diagnostic classroom friendly literacy assessment units as the first part of this process. We planned to use these to demonstrate to decision makers that there was a better solution than standardized multiple-choice tests.

As a consequence, I commenced working with Geoff Masters (then an assessment expert at ACER) and Sharon Burrows (who headed up the Australian Education Union at the time), exploring the potential power of well designed formative assessments, based on authentic classroom teaching formats, to identify students at risk of not successfully developing literacy skills.

Unfortunately we failed to head off a decision to opt for standardised tests. We failed for a number of reasons:

  • the issue moved too quickly,
  • the OECD testing process had created a degree of enthusiasm amongst the data crunchers, who had louder voices,
  • our proposal was more difficult to translate to three word slogans or easy jargon,
  • multiple choice tests were cheaper.

At the time I thought these were the most important reasons. But looking back now, I can also see that our alternative proposal never had a chance because it relied on trusting teachers. Teachers had to teach the units and assess the students’ work. What was to stop them cheating and altering the results? Solutions could have been developed, but without the ICT developments we have access to today, they would have been cumbersome.

I often wonder what would have happened if we had initiated this project earlier and been more convincing. Could we have been ‘the Finland’ of education, proving that we can monitor children’s learning progress, identify students at risk early in their school lives, prioritise funding based on need  – all without the distorting effects of NAPLAN and MySchool?

We can’t go back in time but we can advocate for a bigger, bolder approach to addressing the significant problems associated with our current NAPLAN architecture. The parliamentary report failed us here but this should not stop us.

I have written this piece because I wanted us to imagine, for a moment, that it is possible to have more courageous, bold and educationally justifiable policy solutions around assessment than what we have now. The pedestrian 'rearrange the deck-chairs' approach of this Report is just not good enough.

So here is my recommendation, and I put it out as a challenge to the many professional education bodies and teacher education institutions out there.

Set up a project as follows:

Identify a group of our most inspiring education leaders through a collaborative peer nomination process. Ensure the group includes young and old teachers and principals, and teachers with significant experience in our most challenging schools, especially our remote Indigenous schools. Provide them with a group of expert critical friends – policy experts, testing and data experts, assessment and literacy experts – and ask them to:

  • Imagine there is no current assessment infrastructure
  • Devise an educationally defensible assessment architecture – taking a green fields approach

I can almost guarantee that this working group would not invent NAPLAN and MySchool or anything like it, and we would be significantly better off.

We have dug ourselves into an educationally indefensible policy hole because we have allowed politicians and the media to drive key decisions. To my knowledge, we have never established an expert group of educational practitioners, with access to specialist expertise, to develop better policy solutions in education. Why don't we give it a try?

Any takers?

[1] I understand that NSW does use the NAPLAN results to channel some additional funds to low performing schools but these are above the line payments.

Evaluating MySchool – We are still waiting, ACARA

Note: Three years ago this month, in June 2010, I wrote about the Education Ministers' agreement to evaluate MySchool in order to identify and mitigate unintended consequences. There is still no such evaluation, nor any commitment by ACARA to undertake one. This piece is being republished as a backgrounder to my next post.

At last it is official.  Well before the launch of the MySchool website, state and federal education ministers agreed to task an expert working group to ‘commence work on a comprehensive evaluation strategy for implementation at the outset, that will allow early identification and management of any unintended and adverse consequences that result from the introduction of new national reporting arrangements for schools’, according to minutes of the ministers’ meeting held in Melbourne on 12 September 2008.

It appears, however, that the decision was never implemented and responsibility for it has transferred to the Australian Curriculum, Assessment and Reporting Authority (ACARA).

Now that the initial hype surrounding the launch of the MySchool website is over and the education community is settling down for a more considered debate on the opportunities and challenges of MySchool, I thought it might be useful to put forward some thoughts about what should be in scope for this MySchool evaluation.


My understanding of the ACARA position is that it is too early to make an evaluation because early reactions to a 'hot button issue' are not an accurate reflection of the longer term impacts, and because the MySchool website does not yet have the full range of information that is planned.

I have to agree that at this stage in the rollout of the full MySchool concept, we have an unbalanced set of data.  As data on parent perceptions, school level funding inputs and hopefully data on the quality of school service inputs (like the average teaching experience of the staff and their yearly uptake of professional learning) are added to MySchool, the focus of schools and systems might well shift.  I, for one, hope so.  If extra information like this doesn’t shine a bright light on the comparative level of the quality of the schooling experience for high-needs students relative to advantaged students, this will be a lost opportunity.

So yes, the effects of MySchool might change over time.  But this doesn’t mean we should wait to evaluate what is happening.  In fact, I think we have already missed the best starting point.  A good evaluation would have included a baseline assessment report, something that would have told us what sort of accountability pressures schools were already experiencing for good or ill prior to the introduction of MySchool.

For several years, education systems had already had access to data from the National Assessment Program – Literacy and Numeracy (NAPLAN) down to the school, class and even student level.  In most government systems, NAPLAN data was but one small component of a rich set of data used by schools to review their strategic plans, identify improvement priorities, set improvement targets and develop whole-school improvement plans.  They were already using parent and student surveys; student data on learning progress, attendance, post-school destinations, expulsions and discipline; and teacher data on absenteeism, turnover, professional learning and performance reviews.  At the same time, most education systems had access to this same information and were using it to prioritise schools in need of external reviews or other forms of intervention as part of system-level school-improvement strategies.

Contrary to popular belief, the introduction of MySchool did not commence a new regime of school accountability but it did both broaden it out and narrow it down.  It broadened it out to include parents and the community as arbiters of a school’s performance and applied this public accountability to independent as well as systemic schools. It narrowed it down by using just a tiny amount of data, and that has had the effect of privileging the NAPLAN test results.


Even before the launch of MySchool, systemic schools such as government and some Catholic schools were already feeling some degree of pressure from their systems about student performance in NAPLAN.  For example, in the Northern Territory, this had already given rise to a strong push from the education department to turn around the high level of non-participation of students in NAPLAN testing – an initiative that was highly successful.

A baseline study could have taken a reading on the extent to which schools and systems were responding in educationally positive ways under the previously established accountability regimes, and whether there were already unintended impacts.

Without a baseline assessment we are left with only anecdotal evidence of the effect of and reaction to MySchool, and responses vary widely.  I recall hearing a more outspoken principal in a very remote school saying something along these lines:

If I have to go to one more workshop about how to drill down into the NAPLAN data to identify areas of weakness and how to find which students need withdrawal based special support, I will scream.  All areas require focused intervention at my school and all our students require specialised withdrawal support.  We don’t need NAPLAN to tell us that.  What we need is …

I suspect that for schools where the NAPLAN outcomes were poorest, there was already a degree of being ‘over it’, even before the advent of MySchool.  The challenge for the most struggling schools is not about ‘not knowing’ that student progress is comparatively poor.  It is about knowing what can be done.  For years, struggling schools have been bombarded with a steady stream of great new ideas, magic bullets and poorly funded quick fixes that have all been tried before.  Their challenge is about getting a consistent level of the right sort of support over a sustained period of time to ‘crash through’. What has MySchool done for schools with this profile?  And more significantly, what has it done for their school communities?

On the other hand, I have heard anecdotally from teachers in the Australian Capital Territory that the launch of the site has drawn some comments to the effect that ‘we probably needed a bit of a shake up’ or ‘perhaps we have been a bit too complacent’.  Will the effect of MySchool be different depending on school demography and schools’ previously experienced accountability pressures?

There is evidence from the United States that this is likely to be the case.  Linda Darling-Hammond’s new book The Flat World and Education makes the point that the negative impacts of accountability driven by multiple-choice tests are greater when the stakes are higher but are also greater in schools serving high-needs communities.  For schools in high-needs communities, the stakes are higher than for comparatively advantaged schools precisely because their results are lower and their options fewer.

Darling-Hammond’s research suggests that this unequal impact results in more negative consequences for schools serving high-needs communities on three fronts.  The first relates to the impact on student engagement and participation.  The more pressure on a school to perform well in national testing, the more likely it is that subtle and not-so-subtle pressures flow to the highest-needs students not to participate in schooling or testing.  She documents unequal patterns, with high-needs schools experiencing very marked increases in student suspensions, drop-outs, grade repetition and non-attendance on test days as a result of accountability measures driven largely by tests.

The second relates to the stability, quality and consistency of teaching.  Darling-Hammond notes that the higher the stakes, the greater the negative impact on the stability and quality of teaching.  Her book documents decreases in the average experience level of teachers in high-needs schools after the introduction of the US No Child Left Behind accountability program.  Ironically, some systemic responses to failing schools exacerbate this: systems frequently fund additional program support for at-risk students, English language learners and special-needs students, and these programs almost always engage non-teaching support staff.  The practical effect is that the students with the highest needs get even less access to a qualified teacher during their classroom learning time.

The third relates to the quality of the teaching and learning itself.  This is the topic that has most occupied the Australian debate on MySchool.  There has been no shortage of articles in the media predicting or reporting that MySchool will result, or has resulted, in a narrowing of the curriculum, an increase in the teaching of disconnected facts or rote learning, and classroom time being devoted to test preparation at the expense of rich learning.  In addition, there are websites surveying changes at the school level in terms of what gets taught and how it gets taught.  Less commonly acknowledged is the overseas experience suggesting that the quality of teaching and learning is most negatively affected in schools serving high-needs communities.

These differentiated impacts need to be identified and addressed because, if they are not, differences in educational outcomes between high-needs schools and advantaged schools are likely to increase.  The impact of parents choosing to change schools, sometimes on the basis of quite spurious information, will also be felt more in lower socioeconomic communities.  Increased residualisation of government schools leads to higher concentrations of students of lower socioeconomic status, which has a negative effect on all the students of those residualised schools.  Of course, the schools that struggle the most may ironically be exempt from this kind of adverse pressure, because many families in these communities do not really have the choice of sending their children anywhere other than the local government school.

Are there other unintended consequences worth investigating? The potential impact of the Index of Community Socio-Educational Advantage (ICSEA) values being used to set up a notion of like schools, for example, deserves further attention.

The ICSEA assigns a value to each school based on the socioeconomic characteristics of the areas where its students live, as well as whether the school is in a regional or remote area and the proportion of Indigenous students enrolled.  The development and use of ICSEA is a complex matter that deserves a separate article; here I am just looking at what an evaluation should focus on in relation to ICSEA’s use on the MySchool website.  For all its faults, in very broad terms it tells us how advantaged (a high score) or disadvantaged (a low score) a school is.

I must admit that when the concept of reporting by school demographic type was first mooted, I got a little excited.  I took for granted that this would lead to a focus on the fact that, in Australia, a student’s chance of success at school is still fundamentally influenced by the student’s level of disadvantage and the relative disadvantage of the school she or he attends. I thought, at last, we had a chance to expose the fact that Australia has not managed to break the ‘demography as destiny’ cycle.  I took for granted that this would lead to a focus on this outrage and lead to a renewed commitment to addressing it.  I also thought that the ICSEA had the potential to become a framework for a more focused needs-based funding approach than the current state approaches to this.

I was wrong.  Since the launch of MySchool in January,[1] I have googled and googled, and found a lot of reporting about the way the tool, in its first iteration, has led to some very strange school groupings, but, with the exception of an excellent article by Justine Ferrari in The Australian, almost nothing about the link between poverty, disadvantage and NAPLAN results.

This initially surprised me – but then I was reminded that John Hattie, in his recent book, Visible Learning, makes the point, almost in passing, that differences between student learning outcomes within schools are greater than the difference in learning outcomes between schools in developed countries.  He refers to interschool differences as trivial at best.  Maybe we in Australia assume that this is so for us.

The MySchool data tells a very different story, and anybody can verify this for themselves.  It shows that schools in the lowest ICSEA grouping (those with ICSEA values in the low 500s) that are assessed as doing better than their like-school peers (a dark green rating) have NAPLAN scores for their Year 9 students that are below the NAPLAN scores for Year 3 students in schools serving advantaged communities (those with ICSEA values around 1,100).  In other words, schools where Year 9 students are achieving average Year 3 results are rated as doing well for schools like them.

I had assumed the ICSEA would be a tool to shine a light on the issue of systemic educational disadvantage, but I have come to understand that it may well have inadvertently taken attention away from this issue.  The structure of the website precludes easy comparisons (for good reasons), but it also draws web users’ attention to comparisons between like schools, not unlike schools.

If the combination of the limited searchability of the MySchool website and the complexity of the ICSEA concept has unintentionally led to an unspoken legitimising of ‘demography as destiny’, this is a serious concern that an evaluation would pick up.  Will schools start to say, ‘we are doing pretty well considering the community we serve’? Have we inadvertently naturalised different outcomes for different groups of students?

In our efforts to minimise the shaming of high-needs schools and their communities, we need to ensure that we have not taken away the shame that all Australians ought to feel about our failure to break the vicious cycle of family poverty, and about the failure of the institution of education to systemically and sustainably disrupt this link.

This is not a failure that can be laid at the feet of the individual classroom teacher or school principal.  It is a systemic failure.  If we insist on putting pressure solely on individual schools to do the ‘educational heavy lifting’, our revolution will fail.  If we believe that encouraging parents to vote with their feet for the best school will succeed without first guaranteeing that every choice is a high-quality choice, with equal opportunity to learn, we will also fail.  Ideally, the evaluation should include, in scope, not just unintended impacts but the explicitly intended effects too.

While there is no space to embark on this discussion here, I would like to end with a quote from Linda Darling-Hammond’s The Flat World and Education that sums up most eloquently the kinds of changes that education transparency and accountability frameworks ought to drive:

Creating schools that enable all children to learn requires the creation of systems that enable all educators and schools to learn.  At heart, this is a capacity-building enterprise leveraged by clear, meaningful learning goals and intelligent, reciprocal accountability systems that guarantee skilful teaching in well-designed schools for all learners.


Publication details for the original article: Margaret Clark, ‘Evaluating MySchool’, Professional Educator, Volume 9, Number 2, June 2010, pp. 7–11.

Darling-Hammond, L. (2010). The Flat World and Education: How America’s commitment to equity will determine our future.  New York: Teachers College Press.

Ferrari, J. (2010, May 1). On the honour roll: the nation’s top schools.  The Australian, Inquirer, p. 5.

Hattie, J. (2009). Visible Learning: A synthesis of over 800 meta-analyses relating to achievement.  London and New York: Routledge.

Patty, A. (2010, March 30). Evaluation of MySchool pushed aside, say critics.  Sydney Morning Herald, p. 2.

[1] Of course, the research undertaken by Richard Teese, and by Chris Bonnor and Bernie Shepherd, since this date has achieved this.  We no longer hear the line that the biggest differences are between classes in the same schools in Australia.  Thanks to the work of Teese, Bonnor and Shepherd, this myth has been busted.

What have schools got to do with neo-liberalism?

Neoliberalism is not a term that everyone is happy to use.  Some see it as ideological jargon; for others it might describe what is happening, but its use by education academics seems to get in the way of teachers and practitioners hearing its central message.

My own view is that the basic assumptions, frameworks and processes of neoliberalism have been so well incorporated into our economic frameworks, social policies and thinking that unless we name it and unpack it, we can’t talk sensibly about what is happening or view things through any other lens.

In this blog I want to point out just how deeply school education has become infected with neoliberal ideas.

So what is neoliberalism?  A recent post by Chris Thinnes[1] uses the following definition:

[Neoliberalism is] …an ensemble of economic and social policies, forms of governance, and discourses and ideologies that promote individual self-interest, unrestricted flows of capital, deep reductions in the cost of labor, and sharp retrenchment of the public sphere. Neoliberals champion privatization of social goods and withdrawal of government from provision for social welfare on the premise that competitive markets are more effective and efficient.

Now it’s not hard to see the relevance of this to the school reform policies of the US, the UK and, increasingly, Australia:

  • school choice and competition – highly entrenched in Australia
  • MySchool – providing the information to support parents voting with their feet and forcing schools to worry more about student test performance than about the school’s learning and wellbeing environment
  • high-stakes testing – creating commodities out of smart kids and relegating others to ‘take a sick day on testing’ status
  • performance pay for teachers – introducing competition where there needs to be collaboration and team building
  • competing for a place in the PISA top 5 – turning school quality into an international productivity competition

Thinnes’s post, The Abuse and Internalization of the ‘Free Market’ Model in Education, shows how school policies and practices promote individual self-interest over the common good and install the market as the arbiter of values.  In this he is not unique. But Thinnes also reminds us that neoliberalism’s fundamental ideas operate at a much deeper level – this way of thinking has become the air we breathe in school policy and practice, even within the field of education.

His very first example emerges from comments made by both teachers and students about the challenges and opportunities of collaborative or group work in classrooms:

The problem with group projects is that somebody might end up doing all the work, but somebody else will get the credit

It’s too hard to grade each student when you’re not sure how they contributed

Collaboration is great, but somebody ends up not carrying their weight

When you try to help each other, the teachers sometimes treat you like you cheated

The message coming through from these comments  is that although student collaboration might be important to learning in theory, “the assessment and affirmation of individual contributions, achievements, and accomplishments is what matters most in our schools”.

Thinnes observes that

The persistence of such beliefs should come as no surprise to any of us, who find ourselves in a society with an education system that has embraced prevailing myths about competition, meritocracy, and economic and social mobility in its education policy. It should strike us with a great sadness, however, for those of us who question and resist those myths in our classroom practice and learning communities.

This internalization of neoliberal commitments to the individual achievements of our students and teachers, and the market competition of our schools, is naturalized even in our most informal, everyday conversations about education. It is enforced by many of our classroom practices. It is celebrated in many of our school-wide rituals. But I find it perhaps most disturbing when it frames our thoughts, subconsciously or purposefully, about how to improve our schools.

Unfortunately we see evidence of this in the Australian context wherever we look.

The only two items mentioned in the 2013 budget speech in relation to Indigenous education and closing the gap were scholarships for individual Indigenous students to attend elite schools and the Clontarf Football Academy.  Neither offers any systemic strategy for improving Indigenous education.  It seems we have decided to give up on structural, systemic improvement in Indigenous education, in spite of appalling and systemic failure – particularly in remote contexts.  The vast majority of Indigenous students and their families are left untouched by these two strategies.  In fact, it is possible they will be worse off as the more aspirational students – those who can contribute to the quality of learning in a classroom – are plucked out and removed.  And of course the fact that both these strategies funnel funding to non-government bodies to deliver the programs has not even been seen as odd or of concern.

Today in the Canberra Times, Tony Shepherd argues that wealthy parents who choose to suck on the public teat by sending their children to public schools should be charged a levy.  This only makes sense if schools are considered a commodity – a product – and students its customers. It is a total repudiation of the fundamental democratic purpose of schools, but such is the saturation of neoliberal thinking that it can seem like a logical and sensible idea.

Thinnes ends his article with the following message:

The end-run of the logic of the ‘free market model’ and its application to schools is simple: the repudiation of schools as we have come to know them; the abandonment of democratic principles on which they are based; and the service of a technocratic vision of education as matrix of individual relationships with private providers….


We should take note before it is too late.

EYES WIDE OPEN: What to make of Gonski Lite?


I have now read over 150 articles on the Commonwealth’s new funding model – most of them little more than repeats of press releases or snide remarks about its destined failure.

There are a few that stand out, but unfortunately only a tiny minority have bothered to go beyond the media briefings to analyse the figures and investigate the issues in any depth.  This is particularly shocking given how important this proposed new policy is for all Australians.

So what to make of what is on the table? Here is my take on the good, the bad and my on-balance assessment.  But firstly I would like to be clear that I approach this issue from a social justice value base.  And, unlike many, I acknowledge that this does not make me an impartial observer – just a well-informed, committed and passionate one.

I will deal with the bad first

The funding falls far short of the Gonski Recommendations

The oft-quoted Gonski figure of $5 billion per year in 2009 terms has gone forever.  Others have assessed that over six years this would have grown to about $39 billion in real terms. What is on the table is $14.5 billion over six years – less than 50 per cent of what was assessed as necessary to achieve a quality needs-based education funding regime.

The $14.5 billion includes $2.34 billion ($390 million per annum) that is already out in schools through the National Partnership Programs.  Yes, this program was lapsing in 2014, but as far as schools are concerned, that money is already out there funding extra teaching resources.  Counting it simply means schools won’t experience the taking-away of these much-needed resources.
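The arithmetic behind these claims is easy to check. Here is a minimal sketch in Python, taking the dollar figures quoted above as given (the variable names are mine, for illustration only):

```python
# Figures quoted above, in billions of dollars.
gonski_assessed = 39.0        # assessed six-year cost of the full Gonski recommendations
offer = 14.5                  # the Commonwealth's six-year offer
national_partnerships = 2.34  # already flowing through National Partnership Programs

# The offer as a share of what Gonski assessed as necessary
share = offer / gonski_assessed
print(f"Offer as a share of Gonski: {share:.0%}")  # about 37%, i.e. under 50 per cent

# The National Partnership component spread over six years
print(f"NP component per annum: ${national_partnerships / 6 * 1000:.0f} million")  # $390 million

# Money that is genuinely new
print(f"New money over six years: ${offer - national_partnerships:.2f} billion")
```

At roughly 37 per cent, the offer is well short of even half the assessed Gonski requirement, which is the point being made above.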

The targeting approach recommended by Gonski has been diluted in significant ways to the detriment of our most needy schools

Gonski’s key message was that if Australia is ever to lift its educational outcomes it has to do it through targeting those most disadvantaged.

The current funding offer provides the vast bulk of the funds (83 per cent) as base funding set by the Student Resource Standard, with only around 15 per cent for needs-based loadings.

The loadings or targeting measures are a key element of the Gonski reforms because, as Colebatch notes, “our funding system gives too little to the students who need it most, and the growth in funding should be used to redirect money to the most disadvantaged 25 per cent”.

However, while there are still targeted measures or loadings in the Government’s plan, they are not well targeted – and this difference is crucial.

Where Gonski proposed targeting the bottom 25 per cent by socioeconomic status, the Government’s offer targets the bottom 50 per cent.  This makes a very big difference for schools at the low end of the ICSEA scale, because the money is spread as thin as Vegemite across the vast majority of schools.  A brief search on MySchool – difficult as that is – confirmed my sense that almost every school can find a student or two in the bottom 50 per cent.

Gonski advocated a needs-based loading for Indigenous students that would apply only once the proportion of Indigenous students in a school reached 5 per cent.  This would have covered over 95 per cent of NT schools but only a minority of other schools.  There is no doubt that the decision to apply the loading to every Indigenous student instead has cost the NT dearly.

It is clear that the non-Government sector influenced this part of the deal making, as this dilution represents a clear win for them at the expense of the needs of the most disadvantaged schools.  The greatest need by far is in the public system and few schools serving the poorest communities in Australia are non-government.  Richard Teese notes that

About 80 per cent of all disadvantaged children attend government schools. Yet despite this, state and federal governments are set to give all non-government schools real increases in funds over the next three, and possibly six years. This includes the 1000 schools currently overfunded – schools that are “funding maintained”.

This, more than anything, cements our divided and highly unequal system into the future – a savage irony, as also noted by Teese:

We risk emerging from the most thorough review of national school funding with an architecture of advantage and disadvantage that is even stronger than when we began.

This is also a fantastic political win for the independent education sector because it opens the door to a voucher-type approach in which wily non-government schools can cherry-pick the highest-performing students who meet any of the loadings criteria but who do not require the extra ‘heavy lifting’ demanded of state schools, which must take all comers.

By putting a price on each child’s head we are assuming that all Indigenous children are alike and that all children in the bottom 50 per cent are alike.  The NAPLAN results for Indigenous students in the NT are very, very different from those for Indigenous students elsewhere, suggesting this is not the case.  The schools and families of the children with the highest needs have indeed been sold out.  And of course the NT has been sold out too.

On the other hand, the implicit endorsement of the proposal by Bill Daniels (Executive Director of the Independent Schools Council of Australia) suggests that the independent sector is indeed highly satisfied[1].

It brings with it all the inherent risks associated with federal overreach

Bernard Keane[2] makes the point that the real benefit of these funding reforms may not lie in the additional funding.  The funding, he claims, is just a means to an end: “… the real benefits may well lie not in the extra dollars but in the changes to performance information and allocation of decision-making within large systems.” He goes on to say that “in effect, for that extra $9.4 billion, Gillard wants the state[s] to sign up to more rigorous entry and assessment standards for teachers, more power for school principals and greater performance information for parents”.

Keane might view these as benefits but I take a different view.  All states already have in place comprehensive and well-researched school improvement processes and were already sharing ideas based on what they had learnt from their programs.

And there is no strong evidence that giving principals more hire-and-fire power over staffing and more budget autonomy enhances education equity.  There is, however, strong evidence that competition between schools over their ‘market share’ of ‘desirable student enrolments’ increases inter-school differences and further disadvantages the schools that have the hardest job.

It does not address schools that are overfunded under the current SES model

This is disappointing, especially as I can recall articles making it clear that even Coalition MPs acknowledged that the grandfathering of the overpayments needed a use-by date.

It does not play fair between the states

This has been the focus of the WA Premier, and he does have a point.  The logic outlined by Garrett is that they have drawn a Student Resource Standard line and applied a simple, state-blind gap-filling model to the funding allocations.  That is, allocations are based on what it costs to bring all schools up to this standard.

WA currently funds schools at a higher per-student rate than either Victoria or NSW, so its funding gap is smaller.  This sounds fine from a distance, but it is worth remembering that Victoria, NSW and Qld have all taken funding away from schools and now appear to reap a windfall gain from this cynical action.

It is also worth comparing this funding carve-up to other similar negotiations between states over education funding.  For example, when the Early Childhood Education National Partnership funding shares were being negotiated, Qld and the NT had much lower proportions of four-year-olds in preschool and argued that they ought to receive a larger share.  That ‘state-blind gap resourcing’ approach was not followed on that occasion, although there were some minor adjustments in recognition of the gap.

I think the key thing to take from this is that not all states have equal bargaining power – just as the education sectors do not have equal lobbying power.

It does not address the fact that Australia has one of the most class segregated and unequal schooling models in the world

I nearly didn’t include this negative because, even if ‘Gonski original’ had been adopted, this problem would have remained: the terms of reference for the Gonski review placed it out of bounds.

It has been our obstinate commitment to the god of parental choice that has led to this outcome.

So after all this – what are the positives?

The proposal offers new money to the public education system

The government school share of the funding is $12.1 billion.  Some $2 billion of this is already out in schools (under the National Partnership Programs), but around $10 billion is clearly additional to current expenditure.
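Again, the split is simple to verify. A small Python sketch using the figures quoted above (the $2 billion is the approximate National Partnership component; variable names are mine):

```python
# Figures quoted above, in billions of dollars.
total_offer = 14.5        # the Commonwealth's total six-year offer
government_share = 12.1   # the share flowing to government schools
already_committed = 2.0   # roughly, via the National Partnership Programs

# Genuinely additional money for public schools
additional = government_share - already_committed
print(f"Additional funding: about ${additional:.1f} billion")  # roughly $10 billion

# Government schools' share of the total offer
print(f"Share of total offer: {government_share / total_offer:.0%}")
```

So public schools receive the large majority of the offer, and most of their share is genuinely new money.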

This is important, and we shouldn’t waste this opportunity just because it is not on ideal terms.  It was never ever going to be. Moreover, opposition education spokesman Christopher Pyne says an Abbott government would keep the old system, implying that it would offer nothing new for public schools.

If we don’t embrace this offer during this Government’s term, we may end up with something far, far worse.  Tony Abbott has already gone on record saying that equity should mean all schools get the same level of government funding.  This would be an absolute outrage.

It tosses out, once and for all, the AGSRC – and this is critical

The AGSRC, or Average Government School Recurrent Cost, was the basis of the old funding model.  It was just that: a costing figure derived by calculating the average cost of educating a child at a government school.  This has been a sore point for decades because the average cost for government school students is based on a student population that is very different from the non-government school population, and is becoming more different over time.  The effect of using the AGSRC in the funding formula was that non-government schools were financially rewarded when public school residualisation drove up the cost of educating the increasingly poor and needy students remaining in government schools.

This new offer ushers in a Student Resource Standard based on calculations that are much more defensible.

The need for a better deal for public schools is urgent – it cannot wait

As David Zyngier notes[3], currently only 71 per cent of Australian government spending on schools goes to public schools. Only Belgium and Chile spend a lower proportion of government funding in the public sector.

The debate we have around school funding and school choice in Australia is absolutely unique.  We take it as normal and natural that governments fork out large amounts of money to pay for the education of children whose parents choose not to use the government-provided system.  In most countries this choice would not be subsidised.

We are paying the price for this choice in our international test results.  I must say I don’t particularly care about that, but I do care that we are paying the price in terms of large numbers of children who fail to reach their potential because our schooling arrangements have disadvantaged them.  We need to acknowledge this and put this right.  This is a start.

[1] “The success of this funding model depends heavily on the response from state and territory governments,” responded Independent Schools Council of Australia Executive Director Bill Daniels.

A vision for a new unified and credible approach to school assessment in Australia


I was only partly surprised to read in the Adelaide Advertiser[1] that Geoff Masters, CEO of the Australian Council for Educational Research (ACER) has called for the scrapping of the A-E grading system and replacing it with NAPLAN growth information.

To be blunt, I regard the A-E system as a nonsense cooked up by the previous Coalition Government and imposed on all states as a condition of funding.  It has never meant much and the different approaches to curriculum taken by the different state systems made its reporting even more confusing.

With the introduction of the Australian National Curriculum, the A-E grading system may be applied more consistently across states, but its meaning is often confusing and unhelpful.  As Masters notes:

If a student gets a D one year and a D the next, then they might think they’re not making any progress at all when they are but the current reporting process doesn’t help them see it… [T]his could contribute to some students becoming disillusioned with the school system.

Abandoning this approach makes sense.  But the Advertiser article also implied that Masters is arguing that we should replace the A-E reporting with a NAPLAN gains process.  This to me was a complete surprise.

This is because I believe that would be a disaster and, more importantly, I am pretty sure that Masters would also see the limitations of such an approach.

At the 2010 Australian Parliamentary Inquiry into the Administration and Reporting of NAPLAN, Geoff Masters spoke at length about the limitations of NAPLAN, covering the following:

  • Its limitation for students at the extremes because it is not multilevel
  • Its original purpose as a population measure and the potential reliability and validity problems with using it at school, classroom and individual student level
  • Its limited diagnostic power – because of the narrow range of testing and the multiple choice format

He also acknowledged the potential dangers of teachers teaching to the test and of a narrowing of the curriculum.  (Unfortunately there appears to be a problem with the APH website and I was unable to reference this, but I have located a summary of the ACER position.[2])

Now these are not minor problems.

I was also surprised because it seemed unlikely that the CEO of ACER would pass up an opportunity to talk about the benefits of diagnostic and formative assessments. After all, such tests are important for ACER’s revenue stream.

So what is going on here?

To investigate, I decided to look beyond the Advertiser article and track down the publication Masters was speaking to at the conference: a new publication, launched yesterday, called Reforming Educational Assessment: Imperatives, principles and challenges.[3]

And lo and behold, the editor Sheradyn Holderhead got it wrong.  What Masters is arguing for is anything but the swapping of one poorly informed reporting system (A to E reporting) for a flawed one (NAPLAN).  He is mapping out a whole new approach to assessment, one that can be built on our best understandings of assessment and learning while also meeting the “performativity”[4] needs of politicians and administrators.

Now some will object to the compromise taken here because they see “performativity” as a problem in and of itself.  At one level I agree but because I also look for solutions that are politically doable I tend to take a more pragmatic position.

This is because I see the reporting of NAPLAN through My School as a kind of one-way reform – a bit like the privatization of public utilities.  Once such a system has been developed it is almost impossible to reverse the process.  The genie cannot be put back into the bottle.  So to me, the only solution is to build a more credible system – one that is less stressful for students, less negative for lagging students, more helpful for teachers, less likely to lead to a narrowing of the curriculum through teaching to the test, and less prone to be used as a basis for school league tables.

And my take on Masters’ article is that, if taken seriously, his map for a new assessment system has the potential to provide the design features for a whole new approach to assessment – one that doesn’t require the complete overthrow of the school transparency agenda to be effective.

Here are some of the most significant points made by Masters on student assessment:

Assessment is at the core of effective teaching

Assessment plays an essential role in clarifying starting points for action. This is a feature of professional work in all fields. Professionals such as architects, engineers, psychologists and medical practitioners do not commence action without first gathering evidence about the situation confronting them. This data-gathering process often entails detailed investigation and testing. Solutions, interventions and treatments are then tailored to the presenting situation or problem, with a view to achieving a desired outcome. This feature of professional work distinguishes it from other kinds of work that require only the routine implementation of pre-prepared, one-size-fits-all solutions.

Similarly, effective teachers undertake assessments of where learners are in their learning before they start teaching. But for teachers, there are obvious practical challenges in identifying where each individual is in his or her learning, and in continually monitoring that student’s progress over time. Nevertheless, this is exactly what effective teaching requires.

Understandings derived from developments in the science of learning challenge long-held views about learning, and thus approaches to assessing and reporting learning.

These insights suggest that assessment systems need to:

  • Emphasise understanding where students are at, rather than judging performance
  • Provide information about where individuals are in their learning, what experiences and activities are likely to result in further learning, and what learning progress is being made over time
  • Give priority to the assessment of conceptual understandings, mental models and the ability to apply learning to real world situations
  • Provide timely feedback in a form that a) guides student action and builds confidence that further learning is possible and b) allows learners to understand where they are in their learning and so provide guidance on next steps
  • Focus the attention of schools and school systems on the development of broader life skills and attributes – not just subject specific content knowledge
  • Take account of the important role of attitudes and self-belief in successful learners

On this last point Masters goes on to say that:

Successful learners have strong beliefs in their own capacity to learn and a deep belief in the relationship between success and effort. They take a level of responsibility for their own learning (for example, identifying gaps in their knowledge and taking steps to address them) and monitor their own learning progress over time. The implications of these findings are that assessment processes must be designed to build and strengthen metacognitive skills. One of the most effective strategies for building learners’ self-confidence is to assist them to see the progress they are making.

…..  current approaches to assessment and reporting often do not do this. When students receive the same letter grade (for example, a grade of ‘B’) year after year, they are provided with little sense of the progress they are actually making. Worse, this practice can reinforce some students’ negative views of their learning capacity (for example, that they are a ‘D’ student).

Assessment is also vital for gauging how a system is progressing – whether a class, school, system, state or nation.

Assessment, in this sense, is used to guide policy decision-making, to measure the impact of interventions or treatments, or to identify problems or issues.

In educational debate these classroom-based and system-driven assessments are often seen as in conflict, and their respective proponents as members of opposing ideological and educational camps.

But the most important argument in the paper is that we have the potential to overcome the polarised approach to assessments that is typical of current discussion about education; but only if we start with the premise that the CORE purpose of assessment is to understand where students are in their learning. Other assessment goals should be built on this core.

Once information is available about where a student is in his or her learning, that information can be interpreted in a variety of ways, including in terms of the kinds of knowledge, skills and understandings that the student now demonstrates (criterion- or standards-referencing); by reference to the performances of other students of the same age or year level (norm-referencing); by reference to the same student’s performance on some previous occasion; or by reference to a performance target or expectation that may have been set (for example, the standard expected of students by the end of Year 5). Once it is recognised that the fundamental purpose of assessment is to establish where students are in their learning (that is, what they know, understand and can do), many traditional assessment distinctions become unnecessary and unhelpful.

To this end, Masters proposes the adoption and implementation of a coherent assessment ‘system’ based on a set of five assessment design principles, as follows:

Principle 1: Assessments should be guided by, and address, an empirically based understanding of the relevant learning domain.

Principle 2: Assessment methods should be selected for their ability to provide useful information about where students are in their learning within the domain.

Principle 3: Responses to, or performances on, assessment tasks should be recorded using one or more task ‘rubrics’.

Principle 4: Available assessment evidence should be used to draw a conclusion about where learners are in their progress within the learning domain.

Principle 5: Feedback and reports of assessments should show where learners are in their learning at the time of assessment and, ideally, what progress they have made over time.

So, to return to the premise of the Advertiser article, Masters is not arguing for expanding the use of the current model of NAPLAN.  In fact, he is arguing for a reconceptualisation of assessment that:

  • starts with the goal of establishing where learners are in their learning within a learning domain; and
  • develops, on this basis, a new Learning Assessment System that is equally relevant in all educational assessment contexts, including classroom diagnostic assessments, international surveys, senior secondary assessments, national literacy and numeracy assessments, and higher education admissions testing.

As the Advertiser article demonstrates, this kind of argument is not amenable to easy headlines and quick sound bites.  Building support for moving in this direction will not be easy.

But the first step is to challenge the popular understanding that system-based assessment and ‘classroom useful’ assessment are, and must necessarily be, at cross purposes, and to start to articulate how a common approach could be possible.  Masters refers to this as the unifying principle:

….. it has become popular to refer to the ‘multiple purposes’ of assessment and to assume that these multiple purposes require quite different approaches and methods of assessment. …

This review paper has argued …. that assessments should be seen as having a single general purpose: to establish where learners are in their long-term progress within a domain of learning at the time of assessment. The purpose is not so much to judge as to understand. This unifying principle, which has potential benefits for learners, teachers and other educational decision-makers, can be applied to assessments at all levels of decision-making, from classrooms to cabinet rooms.

So if you are still not convinced that Masters is NOT arguing for replacing the A-E reporting with NAPLAN growth scores, this quote may help:

As long as assessment and reporting processes retain their focus on the mastery of traditional school subjects, this focus will continue to drive classroom teaching and learning. There is also growing recognition that traditional assessment methods, developed to judge student success on defined bodies of curriculum content, are inadequate for assessing and monitoring attributes and dispositions that develop incrementally over extended periods of time.

[4] This is a widely used term, usually associated with the work of Stephen J. Ball. In simple terms it refers to our testing mania in schools and the culture and conceptual frameworks that support reform built around testing data.  To read more, this might be a useful starting point.

The Scourge of School League Tables


I was stunned to see that the Canberra Times published its own league table of ACT schools.  I mean, I was sure that a respectable rag like the CT would know better than to engage in a cheap stunt like this.


Sadly I was wrong.


Thankfully Trevor Cobbold of Save our Schools fame has stepped in and provided a telling commentary here: Save Our Schools Canberra: The Whackiness of School League Tables.


The table shows the following:


  1. The data from NAPLAN at the school level is completely meaningless and unreliable, at least when it comes to drawing any conclusions about school or teacher quality.
  2. Across year groups, across the disciplines, and across the independent, Catholic and government sectors, schools are jumping around all over the place!


The simple fact is that student cohorts change every year. And the smaller the school, the greater the chance of wild fluctuations.
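The statistical point here can be sketched with a small simulation (all numbers are invented for illustration): cohorts of different sizes are drawn from the same underlying population, so any year-to-year movement in a school's mean is pure sampling noise – and it is far larger for the small school.

```python
import random
import statistics

random.seed(1)

def cohort_mean(n):
    """Mean score of a cohort of n students drawn from the SAME population
    (mean 500, sd 70 -- a roughly NAPLAN-like scale, purely illustrative)."""
    return statistics.mean(random.gauss(500, 70) for _ in range(n))

# Year-to-year spread of the school mean for a small vs a large school,
# with no change at all in underlying school quality.
small = [cohort_mean(20) for _ in range(200)]   # ~20 students per year level
large = [cohort_mean(200) for _ in range(200)]  # ~200 students per year level

print(f"small school: sd of yearly mean = {statistics.stdev(small):.1f}")
print(f"large school: sd of yearly mean = {statistics.stdev(large):.1f}")
```

With nothing changing except which students happen to be enrolled, the small school's yearly mean swings several times more widely than the large school's – exactly the kind of movement a league table then reports as schools "improving" or "declining".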


You see, NAPLAN was never designed to be reliable and valid at the individual school level – never.  It was designed as a population measure, and at that level, and that level only, it is quite reliable and useful.  At the school level – not so much.


The decision to provide NAPLAN results at the school level is a political decision and there is no evidence that the results are valid at this level – they were not intended to be used in this way.


According to Cobbold, in the ACT in Year 3 writing one school went from 1st last year to 66th this year, whilst in Year 3 grammar another school went from 81st to 4th!


These are not aberrations, as similarly spectacular rises and falls appear throughout the tables.


So what, if anything, do these league table results tell us?  Nothing.  As Cobbold argues, they simply distract attention from


“what really matters – attracting and retaining the best in teaching, giving schools and systems the support they need to become hubs of collaborative professional learning, and improving equity by targeting resources to students who need extra assistance, as recommended by the Gonski report into school funding”.




It really is as simple as that.


Is NAPLAN a high-stakes test? No, says Barry, but I say Yes

According to Professor Barry McGaw, Chair of ACARA, NAPLAN is not a high stakes test.[1]

He made this comment in response to a study[2] released by the Whitlam Institute claiming that NAPLAN testing is being treated as a high-stakes program, and that this has led to unacceptable levels of stress for students and a narrowing and distortion of what is taught in classrooms across Australia.

McGaw’s attempt to ‘set the record straight’ about this relied on the following facts:

  • Testing students’ competence in basic skills in Australia has been going on for many years – in NSW since 1989
  • The tests are not onerous or intrusive – they occur four times in the life of a student, spread over a few days, each lasting only a few hours
  • They just don’t compare to high-stakes tests such as Year 12 exams or the long-eliminated years of primary exams – student futures do not rest on the outcomes
  • While there have been irresponsible attempts to create league tables, strong steps have been taken to counter this.  My School only compares schools with similar demographic intakes.

I don’t disagree with any of these points, and I could add that, as currently organised, NAPLAN results do not appear to directly impact teachers’ performance review processes or the future of any particular school.  In this sense we are different from most US states, where Race To The Top has forced education reform in this direction.

Now I use the word ‘appear’ because there have been hints that this may not be the case now, and may not always be the case in the future.

In relation to school closures, the closing of the Steiner stream at the Footscray school in Victoria was in part justified in terms of concerns about NAPLAN results. Similarly, in Queensland the decision to defund the school for travelling children was also justified on this basis.  This does not yet equate to a strong relationship between NAPLAN results and school closure decisions.

When it comes to teacher performance reviews the details are still a little unclear.

The DEEWR fact sheet[3] on this matter states that “Under the new performance and development framework all teachers will participate in an annual appraisal process … The framework will set out the aspects of a teacher’s performance that will be assessed and will include such aspects as lesson observations, student results, parental feedback, and contribution to the school community.” (my emphasis)

AITSL, the organisation tasked with developing the framework has released a performance and development Framework document which was endorsed by Ministers of Education in August 2012. [4] In this document it states under “A focus on student outcomes” that this is not about simplistic approaches “that tie evaluation of teaching directly to single outcome measures” and that this Framework “defines student outcomes broadly to include student learning, engagement in learning and wellbeing, and acknowledges that these can be measured in a variety of ways”.

So it appears that the worst elements of the Value Added Measures approach are not going to be an explicit part of the teacher performance and review process, at least not yet.  Of course, if there is a change of Government, Mr Pyne has already flagged that this is the path he will take us down[5].

So what does all this mean?  The arguments presented so far appear to suggest that NAPLAN is indeed not a high-stakes test, and that perhaps McGaw is correct when he argues that, if teaching has been affected and students made to feel stress, it is entirely on the heads of teachers, who are test cramming for no apparent reason.[6]

However, there is another factor that McGaw has not considered.  Even if the publication of NAPLAN results does not become tied to teacher evaluations, does not result in school closures, and is never again presented in league table format on the front page of the Sydney Morning Herald, it is a high-stakes test because of our unique and regressive school funding and hyper school choice policies and practices, which pit schools against one another for ‘favourable enrolments’.

Indeed, this was an explicit intent behind the decision to go down the school transparency reform route.  When former PM Kevin Rudd announced his new transparency agenda at the National Press Club in August 2008, it is reported that he said to journalists after his speech that, if after seeing their school’s performance data “… some [parents] walk with their feet that’s exactly what the system is designed to do”.[7]

Now if our school set-up was like that of Finland, where the vast majority of students go to their local school and there is a high level of buy-in and confidence in schools, this new transparency might not have had a big impact.  But our school set-up is very different.  And it is different in a way that makes our schools very different from each other.

Not only is school resourcing failing to deliver equal quality of educational services, but schools serve very different communities, and these combined factors contribute to wide disparities in school outcomes.

For parents of students attending the most concentrated of high-need schools – the most socially and economically marginalised school parent bodies – the logic of parent power and school choice, as a response to NAPLAN comparative information, does not apply.  The 75 schools with ICSEA values below 800[8] (mostly small remote schools for Indigenous students) are not likely to experience much in the way of ‘white middle class flight’.  There are almost none to fly, and no school alternative apart from distance education.  These parents don’t have a choice and are unlikely to lead the charge about unacceptable student performance.  This is not an effective lever for school improvement for these schools.

But schools with ICSEA scores between 800 and 1000 serve low to middle-low SES communities where the parent demography can be more diverse.  These schools must worry about losing the parents and students with the highest economic and social capital.  They need active, articulate, high-expectation parents, but may well lose them as those parents choose moving over improving – and lose their children too.  This serves to further concentrate the social mix of the student body, with well known and predictable effects on student performance outcomes.

This is why Australia is a global leader in the extent to which our test results show the influence of what is known as the school effect.

The effect of the decision to publish individual school test results has been to imply to parents that the responsibility for ensuring high school quality for all children – actually the responsibility of Government – has, in a sense, been transferred to individual parents.  It is now their responsibility to choose the best option in terms of their child’s individual benefit.  To fail to do so is to be a somewhat neglectful parent.

What particularly saddens me about this is that the role of parents in schools has been an important civil society tradition.  The local school in a local community used to be seen as ‘our school’, educating ‘our kids’.  This was rich local social capital.  It was a tradition based on enlightened self-interest – on seeing the benefit of working not just for our own children’s education, but to ensure that education builds the kind of world we want all children to inherit.

The publication of NAPLAN results has taken us further into the market model of schooling.  The school autonomy agenda will intensify this.  And this is the reason why NAPLAN is experienced as a high stakes test with all of the negative consequences.

[6] “If NAPLAN is being made high-stakes for students, with some reported to be anxious and even ill when the tests approach, this is due to teachers transferring stress to their students.”  The Conversation 11057

[8] Barry McGaw, “The Expectations Have it” in Phillip Hughes (Ed) Achieving Quality Education for All, Perspectives from the Asia-Pacific Region and Beyond, Springer 2013 p. 107

Don’t be fooled: Pyne’s NAPLAN proposal is worse – much worse

The response of the press to the media release My School the source of NAPLAN angst – Liberal Party of Australia, in which Christopher Pyne, Shadow Minister for Education, announces that the Liberal Party has heard all the concerns of educators and parents about the danger of publishing NAPLAN results and will therefore stop it, shows us everything that is wrong with the press in Australia.

The press passed the contents of this press release on with absolutely no analysis whatsoever.  A supposedly good-news story: the Liberal Party is listening and responding.  Wrong.  Their tears of concern are but crocodile tears.

What Pyne actually announced is that the Liberal Party will stop publishing NAPLAN raw scores and start publishing school improvement measures instead.  This will create the same pressure, just drawing on different rubbish data.

Those of you who keep up with the education reform policy debates in the US might know improvement measures by other names:

– AYP – Adequate Yearly Progress or

– VAM – Value Added Measures

Do these terms start to ring any bells?  They should.  AYP measures were used by Joel Klein in NYC schools to target schools for closure and to set up in their place – often on the same site – charter schools.  This has been incredibly disruptive for the families suddenly left with no local public school (charters often selected students by parent application and lotteries), and there is no peer-reviewed research showing sustainable learning improvements.

VAM measures are now being used across the US to assess teacher quality even though no reputable psychometrician will confirm that the national testing results – at the classroom level  – have any validity or reliability.  Teachers’ futures are made or broken on the basis of this sort of dodgy data dealing.

It doesn’t take a conspiracy-minded person to join the dots.  Of course VAM and/or AYP would be good for the LNP.  They can use them to adopt the disastrous ed reform policies of the US – to destroy teacher conditions and the power of unions, to close public schools and set up private schools using public school funds, and so on.

I have another concern about Value Added Measures.  With Value Added Measures, the actual results of a school are not relevant; all that is measured is the growth in student learning from one year to the next.  Now some argue that this is good for struggling schools because it will end the ‘shame job’.  Arguably a high-performing school that is resting on its laurels could end up exposed, and a low-performing school that still has well-below-average results but is improving can look good.  This is good, surely?
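The arithmetic behind that worry can be shown with a toy example (all figures invented): under a growth-only measure, a school with flat but high raw scores looks worse than a school with rising but still well-below-average scores, and the absolute gap between the two disappears from view.

```python
# Two hypothetical schools on a NAPLAN-like scale (numbers invented):
# School A: affluent intake, high raw scores, flat from year to year.
# School B: disadvantaged intake, well below average, but improving.
school_a = {"last_year": 560, "this_year": 560}
school_b = {"last_year": 420, "this_year": 445}

def growth(school):
    """A Value Added-style measure: change in mean score year on year."""
    return school["this_year"] - school["last_year"]

# Raw-score comparison: A is far ahead of B.
print("raw scores:", school_a["this_year"], "vs", school_b["this_year"])

# Growth comparison: B is far ahead of A -- and the 115-point
# absolute gap between the schools is no longer visible at all.
print("growth:", growth(school_a), "vs", growth(school_b))
```

The same data thus supports two opposite headlines depending on which number is published, which is why swapping raw scores for growth measures changes what the public can see, not just how schools feel about it.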

How is a piece of data that says, “This [low-performing] school [in a low SES area] is doing a very good job.  The student learning outcomes are excellent [for these students]”, ever justifiable?

One of the few good things to come out of this whole NAPLAN debacle is that it gave teeth and exposure to the work of equity researchers and activists like Professor Richard Teese, Chris Bonner and Bernie Shepherd.  Their work saw the light of day through the Gonski review process and, importantly, could no longer be disputed.  Their research changed the Gonski debate – there is no doubt about this.

Now don’t for a moment think that I am justifying the publication of NAPLAN results at school level.  In fact, strangely enough, the equity research referred to above was impeded as much as it was aided by the My School data, because of the format of, and level at which, it is presented.  My School won’t allow the data to be manipulated or rolled up.  But their analysis relied on rolling up the data so that groups of schools (e.g. low SES schools, rural schools, etc.) could be compared to other groups of schools.  This is not possible using My School, and researchers had to go to great lengths to get around this problem.

If I were a defender of continuing high levels of Government funding to the schools that need it least, it would be in my interest to make the raw scores that fed the work of equity campaigners simply disappear.  Without them we could return to the he-said-she-said debates about equity in Australia.

The raw NAPLAN scores have proven once and for all that demography still is destiny in today’s Australia.  They also prove that two children of equal social background going to different schools will have different student learning outcomes because of our highly socially segregated schooling system.  This is known as ‘ the school effect’ and Australia leads the way in this area, to our shame.

I will continue to oppose the way in which we use NAPLAN scores at school level, but I will also continue to fight for data about children’s learning – by student demography, by school type and so on – to be available for equity research.  NAPLAN is not the best data, but that is a whole other debate that we won’t get to have if we reduce NAPLAN raw scores to Value Added Measures.

I oppose the idea of schools being held wholly accountable for the progress of their students without any support that recognises their unequal challenges, but I will continue to fight for the notion that Governments should be accountable to the public – as citizens, not just as parents – for providing a high-quality education with equal opportunity for all.  Pyne’s proposal will kill the data available to support this, but it won’t stop the negative NAPLAN effects.