Evaluating MySchool – We are still waiting, ACARA

Note:  Three years ago this month, in June 2010, I wrote about the Education Ministers’ agreement to evaluate MySchool in order to identify and mitigate unintended consequences.  There is still no such evaluation, nor any commitment by ACARA to undertake one.  The article is republished here as a backgrounder to my next post.

At last it is official.  Well before the launch of the MySchool website, state and federal education ministers agreed to task an expert working group to ‘commence work on a comprehensive evaluation strategy for implementation at the outset, that will allow early identification and management of any unintended and adverse consequences that result from the introduction of new national reporting arrangements for schools’, according to minutes of the ministers’ meeting held in Melbourne on 12 September 2008.

It appears, however, that the decision was never implemented and responsibility for it has transferred to the Australian Curriculum, Assessment and Reporting Authority (ACARA).

Now that the initial hype surrounding the launch of the MySchool website is over and the education community is settling down for a more considered debate on the opportunities and challenges of MySchool, I thought it might be useful to put forward some thoughts about what should be in scope for this MySchool evaluation.

EVALUATING ACCOUNTABILITY

My understanding of the ACARA position is that it is too early to undertake an evaluation, because early reactions to a ‘hot button issue’ are not an accurate reflection of the longer-term impacts, and because the MySchool website does not yet have the full range of information that is planned.

I have to agree that at this stage in the rollout of the full MySchool concept, we have an unbalanced set of data.  As data on parent perceptions, school-level funding inputs and, hopefully, the quality of school service inputs (like the average teaching experience of the staff and their yearly uptake of professional learning) are added to MySchool, the focus of schools and systems might well shift.  I, for one, hope so.  If extra information like this doesn’t shine a bright light on the comparative quality of the schooling experience for high-needs students relative to advantaged students, this will be a lost opportunity.

So yes, the effects of MySchool might change over time.  But this doesn’t mean we should wait to evaluate what is happening.  In fact, I think we have already missed the best starting point.  A good evaluation would have included a baseline assessment report, something that would have told us what sort of accountability pressures schools were already experiencing for good or ill prior to the introduction of MySchool.

For several years, education systems had already had access to data from the National Assessment Program – Literacy and Numeracy (NAPLAN) down to the school, class and even student level.  In most government systems, NAPLAN data was but one small component of a rich set of data used by schools to review their strategic plans, identify improvement priorities, set improvement targets and develop whole-school improvement plans.  They were already using parent and student surveys; student data on learning progress, attendance, turnover, post-school destinations, and expulsion and discipline; and teacher data on absenteeism, turnover, professional learning and performance reviews.  At the same time, most education systems had access to this same information and were using it to prioritise schools in need of external reviews or other forms of intervention as part of system-level school-improvement strategies.

Contrary to popular belief, the introduction of MySchool did not commence a new regime of school accountability but it did both broaden it out and narrow it down.  It broadened it out to include parents and the community as arbiters of a school’s performance and applied this public accountability to independent as well as systemic schools. It narrowed it down by using just a tiny amount of data, and that has had the effect of privileging the NAPLAN test results.

ANECDOTAL EVIDENCE

Even before the launch of MySchool, systemic schools such as government and some Catholic schools were already feeling some degree of pressure from their systems about student performance in NAPLAN.  For example, in the Northern Territory, this had already given rise to a strong push from the education department to turn around the high level of non-participation of students in NAPLAN testing – an initiative that was highly successful.

A baseline study could have taken a reading on the extent to which schools and systems were responding in educationally positive ways to the previously established accountability regimes, and whether there were already unintended impacts.

Without a baseline assessment we are left with only anecdotal evidence of the effect of and reaction to MySchool, and responses vary widely.  I recall hearing a more outspoken principal in a very remote school saying something along these lines:

If I have to go to one more workshop about how to drill down into the NAPLAN data to identify areas of weakness and how to find which students need withdrawal based special support, I will scream.  All areas require focused intervention at my school and all our students require specialised withdrawal support.  We don’t need NAPLAN to tell us that.  What we need is …

I suspect that for schools where the NAPLAN outcomes were poorest, there was already a degree of being ‘over it’, even before the advent of MySchool.  The challenge for the most struggling schools is not about ‘not knowing’ that student progress is comparatively poor.  It is about knowing what can be done.  For years, struggling schools have been bombarded with a steady stream of great new ideas, magic bullets and poorly funded quick fixes that have all been tried before.  Their challenge is about getting a consistent level of the right sort of support over a sustained period of time to ‘crash through’. What has MySchool done for schools with this profile?  And more significantly, what has it done for their school communities?

On the other hand, I have heard anecdotally from teachers in the Australian Capital Territory that the launch of the site has drawn some comments to the effect that ‘we probably needed a bit of a shake up’ or ‘perhaps we have been a bit too complacent’.  Will the effect of MySchool be different depending on school demography and schools’ previously experienced accountability pressures?

There is evidence from the United States that this is likely to be the case.  Linda Darling-Hammond’s new book The Flat World and Education makes the point that the negative impacts of accountability driven by multiple-choice tests are greater when the stakes are higher but are also greater in schools serving high-needs communities.  For schools in high-needs communities, the stakes are higher than for comparatively advantaged schools precisely because their results are lower and their options fewer.

Darling-Hammond’s research suggests that this unequal impact results in more negative consequences for schools serving high-needs communities on three fronts.  The first relates to the impact on student school engagement and participation.  The more pressure on a school to perform well in national testing, the more likely it is that subtle and not-so-subtle pressures flow to the highest-needs students not to participate in schooling or testing.  She documents unequal patterns, with high-needs schools experiencing very marked increases in student suspensions, drop-outs, grade repetition and non-attendance at tests as a result of accountability measures driven largely by tests.

The second relates to the stability, quality and consistency of teaching.  Darling-Hammond notes that the higher the stakes, the more the negative impact on the stability and quality of teaching.  Her book documents decreases in the average experience level of the teachers in high-needs schools after the introduction of the US No Child Left Behind program of accountability.  Ironically, some of the systemic responses to failing schools exacerbate this, as systems frequently respond by funding additional program support for at-risk students, English language learners and special-needs students.  These programs almost always engage non-teaching support staff.  The practical impact of this is that the students with the highest needs get even less access to a qualified teacher during their classroom learning time.

The third relates to the quality of the teaching and learning itself.  This is the topic that has most occupied the Australian debate on MySchool.  There has been no shortage of articles in the media predicting or reporting that MySchool will result, or has resulted, in a narrowing of the curriculum, an increase in the teaching of disconnected facts or rote learning, and classroom time being devoted to test preparation at the expense of rich learning.  In addition, there are websites surveying changes at the school level in terms of what gets taught and how it gets taught.  Less commonly acknowledged is the overseas experience that suggests the quality of teaching and learning is most negatively affected in schools serving high-needs communities.

These differentiated impacts need to be identified and addressed because, if they are not, differences in educational outcomes between high-needs schools and advantaged schools are likely to increase.  The impact of parents choosing to change schools, sometimes on the basis of quite spurious information, will also be felt more in lower socioeconomic communities.  Increased residualisation of government schools leads to higher concentrations of students of lower socioeconomic status, and this has a negative effect on all the students of those residualised schools.  Of course, the schools that struggle the most may ironically be exempt from this kind of adverse pressure, because many families in these communities do not really have the choice of sending children anywhere other than the local government school.

Are there other unintended consequences worth investigating? The potential impact of the Index of Community Socio-Educational Advantage (ICSEA) values being used to set up a notion of ‘like schools’, for example, deserves further attention.

The ICSEA assigns a value to each school based on the socioeconomic characteristics of the areas where students live, as well as whether the school is in a regional or remote area and the proportion of Indigenous students enrolled at the school.  The development and use of ICSEA is a complex matter that deserves a separate article.  Here I am just looking at what an evaluation should focus on in relation to ICSEA’s use on the MySchool website.  For all its faults, in very broad terms it tells us how advantaged a school is (a high score) or how disadvantaged it is (a low score).
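To make the idea of a composite index like this concrete, here is a minimal sketch of how such a school-advantage score could be calculated.  The weights, the scaling around 1,000 and the function name are my own illustrative assumptions, not ACARA’s actual methodology, which is considerably more sophisticated.

# A toy sketch of an ICSEA-style composite index. The weights and scaling
# below are illustrative assumptions only, not ACARA's actual methodology.
def toy_advantage_index(ses_score, is_remote, indigenous_proportion):
    """ses_score: standardised socioeconomic measure of students' home
    areas (mean 0, SD 1); is_remote: regional/remote location flag;
    indigenous_proportion: share of Indigenous enrolments (0.0 to 1.0).
    Returns an index scaled so that roughly 1,000 is average."""
    index = 1000.0
    index += 100.0 * ses_score               # community advantage raises the score
    index -= 50.0 if is_remote else 0.0      # remoteness is associated with lower values
    index -= 200.0 * indigenous_proportion   # as is a high Indigenous enrolment share
    return index

# A hypothetical very remote, highly disadvantaged school:
print(toy_advantage_index(ses_score=-2.0, is_remote=True, indigenous_proportion=0.9))   # 570.0
# A hypothetical advantaged metropolitan school:
print(toy_advantage_index(ses_score=1.2, is_remote=False, indigenous_proportion=0.01))  # 1118.0

On a scale like this, the ‘like schools’ on MySchool are simply schools whose index values fall close together, which is why a very remote school in the low 500s is only ever compared with schools facing similarly concentrated disadvantage.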

I must admit that when the concept of reporting by school demographic type was first mooted, I got a little excited.  I took for granted that this would lead to a focus on the fact that, in Australia, a student’s chance of success at school is still fundamentally influenced by the student’s level of disadvantage and the relative disadvantage of the school she or he attends. I thought, at last, we had a chance to expose the fact that Australia has not managed to break the ‘demography as destiny’ cycle, and that exposing this outrage would lead to a renewed commitment to addressing it.  I also thought that the ICSEA had the potential to become a framework for a more focused needs-based funding approach than the current state-based approaches.

I was wrong.  Since the launch of MySchool in January,[1] I have googled and googled and found a lot of reporting about the way in which the tool, in its first iteration, has led to some very strange school groupings, but, with the exception of an excellent article by Justine Ferrari in The Australian, almost nothing about the link between poverty, disadvantage and NAPLAN results.

This initially surprised me, but then I was reminded that John Hattie, in his recent book, Visible Learning, makes the point, almost in passing, that in developed countries the differences in student learning outcomes within schools are greater than the differences in learning outcomes between schools.  He refers to interschool differences as trivial at best.  Maybe we in Australia assume that this is so for us.

The MySchool data tells a very different story, and anybody can verify this for themselves.  It shows that schools in the lowest ICSEA grouping (those with ICSEA values in the low 500s) that are assessed as doing better than their like-school peers (a dark green rating) have NAPLAN scores for their Year 9 students below the NAPLAN scores of Year 3 students in schools serving advantaged communities (those with ICSEA values around 1,100).  In other words, schools where Year 9 students are achieving average Year 3 results are rated as performing well for schools like them.

I had assumed the ICSEA would be a tool to shine a light on the issue of systemic educational disadvantage, but I have come to understand that it may well be inadvertently taking attention away from this issue. The structure of the website precludes easy comparisons (for good reasons), but it also draws web users’ attention to comparisons between like schools, not unlike schools.

If the combination of the limited searchability of the MySchool website and the complexity of the ICSEA concept has unintentionally led to an unspoken legitimising of ‘demography as destiny’, this is a serious concern that an evaluation would pick up.   Will schools start to say, ’we are doing pretty well considering the community we serve’? Have we inadvertently naturalised different outcomes for different groups of students?

In our efforts to minimise the shaming of high-needs schools and their communities, we need to ensure that we have not taken away the shame that all Australians ought to feel about the vicious cycle of family poverty and the failure of the institution of education to systemically and sustainably disrupt this link.

This is not a failure that can be laid at the feet of the individual classroom teacher or school principal.  It is a systemic failure.  If we insist on putting pressure solely on individual schools to do the ‘educational heavy lifting’, our revolution will fail.  If we believe we can simply encourage parents to vote with their feet for the best school, without first guaranteeing that every choice is a high-quality choice with an equal opportunity to learn, we will also fail.  Ideally, the evaluation should include in scope not just unintended impacts but the explicitly intended effects too.

While there is no space to embark on this discussion here, I would like to end with a quote from Linda Darling-Hammond’s The Flat World and Education that sums up most eloquently the kinds of changes that education transparency and accountability frameworks ought to drive:

Creating schools that enable all children to learn requires the creation of systems that enable all educators and schools to learn.  At heart, this is a capacity-building enterprise leveraged by clear, meaningful learning goals and intelligent, reciprocal accountability systems that guarantee skilful teaching in well-designed schools for all learners.

REFERENCES:

Publication details for the original article: Clark, M. (2010, June). Evaluating MySchool. Professional Educator, 9(2), pp. 7–11.

Darling-Hammond, L. (2010). The Flat World and Education: How America’s commitment to equity will determine our future. New York: Teachers College Press.

Ferrari, J. (2010, May 1). On the honour roll: The nation’s top schools. The Australian (Inquirer), p. 5.

Hattie, J. (2009). Visible Learning: A synthesis of over 800 meta-analyses relating to achievement. London and New York: Routledge.

Patty, A. (2010, March 30). Evaluation of MySchool pushed aside, say critics. Sydney Morning Herald, p. 2.


[1] Of course, the research undertaken since this date by Richard Teese, and also by Chris Bonnor and Bernie Shepherd, has achieved this.  We no longer hear the line that the biggest differences are between classes in the same schools in Australia.  Thanks to the work of Teese, Bonnor and Shepherd, this myth has been busted.
