A Tale of ACARA and the see-no-evil monkeys – Subtitle: There is no excuse for willful ignorance

The Australian Senate’s Education, Employment and Workplace Relations Committee is currently holding an Inquiry into the effectiveness of the National Assessment Program – Literacy and Numeracy (NAPLAN).

Over 70 submissions of varying quality have been received. In this article, I focus on the submission from the Australian Curriculum, Assessment and Reporting Authority (ACARA). ACARA is the custodian of NAPLAN and of its use for school transparency and accountability purposes on the MySchool website.

One of the focus questions in the Inquiry's Terms of Reference concerns the unintended consequences of NAPLAN's introduction.  This is an important question given widespread but mainly anecdotal reports in Australia of: test cheating; schools using test results to screen out 'undesirable enrolments'; the narrowing of the curriculum; NAPLAN test cramming taking up valuable learning time; Omega-3 supplements being marketed as able to help students perform better at NAPLAN time; and NAPLAN test booklets hitting the best-seller lists.

Here is what ACARA's submission to the Senate inquiry has to say about this issue:

To date there has been no research published that can demonstrate endemic negative impacts in Australian schools due to NAPLAN.  While allegations have been made that NAPLAN has had unintended consequences on the quality of education in Australian schools there is little evidence that this is the case. 

The submission goes on to refer to two independent studies that investigated the unintended consequences of NAPLAN.

ACARA dismisses a Murdoch University research project[1] led by Dr Greg Thompson as flawed because its focus is on changes to pedagogical practices resulting from the existence and uses made of NAPLAN.  The basis of the dismissal is that if teaching practices change, then that is all about teachers and nothing to do with NAPLAN.  Yet this report makes clear that teachers feel under pressure to make these changes, changes they don't agree with, because of the pressures created by the use of NAPLAN as a school accountability measure.  In other words, in one clever turn of phrase, ACARA rules out of court any unintended consequences of NAPLAN that relate to changes to teachers' practice.

ACARA also dismisses a survey undertaken by the Whitlam Institute because it “suffers from technical and methodological limitations, especially in relation to representativeness of the samples used”, and rejects its conclusions without detailing the findings. Yet this survey was completed by nearly 8,500 teachers throughout Australia, and it was representative in every way (year level taught, sector, gender, years of experience) except for oversampling in Queensland and Tasmania. On that very sampling concern, the authors even reported that they weighted the responses to compensate for the differential.  This report documents many unintended consequences that ACARA is now saying are unsubstantiated because of a spurious sampling critique. This is intellectually dishonest at best.
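
For readers unfamiliar with what such weighting involves, here is a minimal sketch of the kind of post-stratification adjustment the report describes. All state shares and responses below are invented placeholders purely for illustration; none of these figures come from the Whitlam Institute study.

```python
# Post-stratification weighting: responses from oversampled states are
# down-weighted so each state contributes in proportion to its share of
# the national teaching workforce. All figures are illustrative
# placeholders, not the survey's actual data.
import pandas as pd

# Hypothetical respondent data: one row per teacher.
responses = pd.DataFrame({
    "state": ["QLD", "QLD", "TAS", "NSW", "VIC", "QLD", "TAS", "NSW"],
    "reports_curriculum_narrowing": [1, 1, 0, 1, 0, 1, 1, 0],
})

# Assumed population shares of teachers by state (placeholders).
population_share = {"NSW": 0.40, "VIC": 0.30, "QLD": 0.25, "TAS": 0.05}

# Observed shares in the sample.
sample_share = responses["state"].value_counts(normalize=True)

# Weight = population share / sample share: oversampled states get
# weights below 1, undersampled states weights above 1.
responses["weight"] = responses["state"].map(
    lambda s: population_share[s] / sample_share[s]
)

# Weighted estimate of the proportion reporting curriculum narrowing.
weighted_rate = (
    (responses["reports_curriculum_narrowing"] * responses["weight"]).sum()
    / responses["weight"].sum()
)
print(f"Weighted proportion: {weighted_rate:.2f}")
```

The point is simply that oversampling in two states is a routine, correctable feature of survey work, not grounds for discarding the findings wholesale.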

ACARA's dismissal of these two research projects on spurious grounds, while refraining from stating what they found, is in stark contrast to its treatment of unsupported statements by partisan stakeholders about enrolment selection concerns.

In response to the claims that some schools are using NAPLAN results to screen out 'undesirable students', ACARA states that it is aware of these claims but appears willing to take at face value comments from stakeholders who represent the very schools accused of unethical enrolment screening:

It is ACARA’s understanding that these results are generally requested as one of a number of reports, including student reports written by teachers, as a means to inform the enrolling school on the strengths and weaknesses of the student. The purpose for doing so is to ensure sufficient support for the student upon enrolment, rather than for use as a selection tool. This understanding is supported by media reporting of comments made by peak bodies on the subject (my emphasis).

ACARA's approach to this whole matter comes across as most unprofessional.  But, unfortunately for ACARA, this is not the whole story.  There is a history to this issue that began in 2008, almost as soon as the decision to set up MySchool was announced by the then PM Kevin Rudd as part of his school transparency agenda.

Three years ago this month I wrote an article[2] about the importance of evaluating the impact of the MySchool website and the emergence, under FOI, of an agreement in September 2008 by State and Commonwealth Ministers of Education to:

…commence work on a comprehensive evaluation strategy for implementation, at the outset, that will allow early identification and management of any unintended and adverse consequences that result from the introduction of new national reporting arrangements for schools (my emphasis).

It was clear from the outset that this evaluation should have been managed by ACARA, as the organisation established to manage the school transparency agenda.  In 2010, in response to my inquiry to ACARA about this Ministerial agreement, the CEO of ACARA stated that it was not being implemented at that point because early reactions to hot-button issues are not useful, and because the website did not yet include the full range of data planned.

This was a poor response for three reasons.

Firstly, useful evaluations are built not as afterthoughts but as part of the development process. One of the vital elements of any useful evaluation is the collection of baseline data that enables valid comparisons of changes over time.  For example, information could have been collected prior to the MySchool rollout on matters such as:

  • Has time allocated to non-test based subjects reduced over time?
  • Has teaching become more fact based?
  • Has the parent demographic for different schools changed as a result of NAPLAN data or student demographic data?
  • Are more resources allocated to remedial support for students who fail to reach benchmarks?
  • Are the impacts different for different types of schools?

Secondly, the commitment to evaluate was driven by concerns about the possibility of schools being shamed through media responses to NAPLAN results, the narrowing of curriculum and teaching, further residualisation of public schools, test cheating and possible negative effects on students and teachers. Identifying these concerns early would allow for revising the design elements of MySchool to mitigate the impacts in a timely fashion.  There is no real value in waiting years before deciding corrections are needed.

Thirdly, anyone who seriously believed that the data elements agreed as possibly in scope for MySchool were a complete list, able to be developed quickly, was dreaming.  Waiting for the full range of data meant, in reality, an indefinite delay. There are still data items in development today.

So now, five years on from the Ministerial directive that there was a need to actively investigate any unintended consequences, there is still no comprehensive evaluation in sight. One suspects that ACARA finds this quite convenient and hopes that its failure to act on this directive stays buried.

However, Ministers of Education still had concerns.  In the MCEECDYA Communiqué of April 2012 the following is reported:

Ministers discussed concerns over practices such as excessive test preparation and the potential narrowing of the curriculum as a result of the publication of NAPLAN data on My School.  Ministers requested that ACARA provide the Standing Council with an assessment of these matters.

On the basis of this statement, I wrote to ACARA on 27 April 2013 requesting information on action in response to this directive – by then over 12 months old.  To date I have received no reply.

So what sense can be made of this?

If one takes at face value ACARA's statements that it knows of no information regarding the extent of unintended consequences, one can only conclude that ACARA has twice failed to act on a Ministerial directive.

Here we have a Government body: aware of Ministers' concerns about unintended negative consequences of a program it manages; aware of widespread anecdotal concerns, some of them quite serious; dismissing without any proper argument the few pieces of evidence that do exist; and refusing to undertake any investigation into this matter despite two Ministerial directives to do so.

Willful ignorance about the potential unintended and harmful impacts of a program an agency is responsible for, while all the while professing a strong interest in the matter, is highly irresponsible and unprofessional.

It is also quite astonishing given the Government’s commitment to the principle of transparency and the fact that ACARA was established specifically to bring that transparency and reporting to Australia’s schools.

But to then write a submission that almost boasts about the lack of information on this issue, while dismissing with poor arguments the evidence that is growing, is outrageous. It also gives new meaning to a throwaway line in its submission about the negative findings from the Whitlam Institute survey: “Further research with parents, students and other stakeholders is needed to confirm the survey results on well-being.”

Further research is indeed needed, and this further research should have been initiated by ACARA quite some time ago – five years ago, to be precise.  It is convenient for ACARA that such research is not available.  It is intellectually dishonest and misleading for ACARA to now state that it “takes seriously reports of unintended adverse consequences in schools. It actively monitors reports and research for indications of new issues or trends.”

Of course, there is another, more alarming possibility: that this work has been undertaken but is not being made public, and that ACARA is misleading the Parliamentary inquiry and the public by denying that any such information exists.

In either case, I am forced to conclude that ACARA does not want any unintended consequences of a program for which it is responsible to be known, in spite of its 'interest in this issue', and is persisting in its position of willful ignorance.

In an effort to restore public confidence in its work, ACARA should commit to undertaking this research at arm's length, using independent researchers, and to reporting the findings to Parliament without delay.  Perhaps this Inquiry could recommend this.

Evaluating MySchool – We are still waiting, ACARA

Note:  Three years ago this month, in June 2010, I wrote about the Ministers of Education's agreement to evaluate MySchool in order to identify and mitigate unintended consequences.  There is still no such evaluation, nor any commitment by ACARA to undertake one.  The article is republished below as a backgrounder to my next post.

At last it is official.  Well before the launch of the MySchool website, state and federal education ministers agreed to task an expert working group to ‘commence work on a comprehensive evaluation strategy for implementation at the outset, that will allow early identification and management of any unintended and adverse consequences that result from the introduction of new national reporting arrangements for schools’, according to minutes of the ministers’ meeting held in Melbourne on 12 September 2008.

It appears, however, that the decision was never implemented and responsibility for it has transferred to the Australian Curriculum, Assessment and Reporting Authority (ACARA).

Now that the initial hype surrounding the launch of the MySchool website is over and the education community is settling down for a more considered debate on the opportunities and challenges of MySchool, I thought it might be useful to put forward some thoughts about what should be in scope for this MySchool evaluation.

EVALUATING ACCOUNTABILITY

My understanding of the ACARA position is that it is too early to make an evaluation because early reactions to a 'hot button issue' are not an accurate reflection of the longer-term impacts, and because the MySchool website does not yet have the full range of information that is planned.

I have to agree that at this stage in the rollout of the full MySchool concept, we have an unbalanced set of data.  As data on parent perceptions, school level funding inputs and hopefully data on the quality of school service inputs (like the average teaching experience of the staff and their yearly uptake of professional learning) are added to MySchool, the focus of schools and systems might well shift.  I, for one, hope so.  If extra information like this doesn’t shine a bright light on the comparative level of the quality of the schooling experience for high-needs students relative to advantaged students, this will be a lost opportunity.

So yes, the effects of MySchool might change over time.  But this doesn’t mean we should wait to evaluate what is happening.  In fact, I think we have already missed the best starting point.  A good evaluation would have included a baseline assessment report, something that would have told us what sort of accountability pressures schools were already experiencing for good or ill prior to the introduction of MySchool.

For several years, education systems already had access to data from the National Assessment Program – Literacy and Numeracy (NAPLAN) down to the school, class and even student level.  In most government systems, NAPLAN data was but one small component of a rich set of data used by schools to review their strategic plans, identify improvement priorities, set improvement targets and develop whole-school improvement plans.  They were already using parent and student surveys; student data on learning progress, attendance, turnover, post-school destinations, expulsion and discipline; and teacher data on absenteeism, turnover, professional learning and performance reviews.  At the same time, most education systems had access to this same information and were using it to prioritise schools in need of external reviews or other forms of intervention as part of system-level school-improvement strategies.

Contrary to popular belief, the introduction of MySchool did not commence a new regime of school accountability but it did both broaden it out and narrow it down.  It broadened it out to include parents and the community as arbiters of a school’s performance and applied this public accountability to independent as well as systemic schools. It narrowed it down by using just a tiny amount of data, and that has had the effect of privileging the NAPLAN test results.

ANECDOTAL EVIDENCE

Even before the launch of MySchool, systemic schools such as government and some Catholic schools were already feeling some degree of pressure from their systems about student performance in NAPLAN.  For example, in the Northern Territory, this had already given rise to a strong push from the education department to turn around the high level of non-participation of students in NAPLAN testing – an initiative that was highly successful.

A baseline study could have taken a reading on the extent to which schools and systems were responding in educationally positive ways under the previously established accountability regimes, and whether there were already unintended impacts.

Without a baseline assessment we are left with only anecdotal evidence of the effect of and reaction to MySchool, and responses vary widely.  I recall hearing a more outspoken principal in a very remote school saying something along these lines:

If I have to go to one more workshop about how to drill down into the NAPLAN data to identify areas of weakness and how to find which students need withdrawal based special support, I will scream.  All areas require focused intervention at my school and all our students require specialised withdrawal support.  We don’t need NAPLAN to tell us that.  What we need is …

I suspect that for schools where the NAPLAN outcomes were poorest, there was already a degree of being 'over it', even before the advent of MySchool.  The challenge for the most struggling schools is not about 'not knowing' that student progress is comparatively poor.  It is about knowing what can be done.  For years, struggling schools have been bombarded with a steady stream of great new ideas, magic bullets and poorly funded quick fixes that have all been tried before.  Their challenge is about getting a consistent level of the right sort of support over a sustained period of time to 'crash through'. What has MySchool done for schools with this profile?  And more significantly, what has it done for their school communities?

On the other hand, I have heard anecdotally from teachers in the Australian Capital Territory that the launch of the site has drawn some comments to the effect that ‘we probably needed a bit of a shake up’ or ‘perhaps we have been a bit too complacent’.  Will the effect of MySchool be different depending on school demography and schools’ previously experienced accountability pressures?

There is evidence from the United States that this is likely to be the case.  Linda Darling-Hammond’s new book The Flat World and Education makes the point that the negative impacts of accountability driven by multiple-choice tests are greater when the stakes are higher but are also greater in schools serving high-needs communities.  For schools in high-needs communities, the stakes are higher than for comparatively advantaged schools precisely because their results are lower and their options fewer.

Darling-Hammond's research suggests that this unequal impact results in more negative consequences for schools serving high-needs communities on three fronts.  The first relates to the impact on student school engagement and participation.  The more pressure on a school to perform well in national testing, the more likely it is that subtle and not-so-subtle pressures flow to the highest-needs students not to participate in schooling or testing.  She documents unequal patterns, with high-needs schools experiencing very marked increases in student suspensions, drop-outs, grade repeating and non-attendance at tests as a result of accountability measures driven largely by tests.

The second relates to the stability, quality and consistency of teaching.  Darling-Hammond notes that the higher the stakes, the more negative the impact on the stability and quality of teaching.  Her book documents decreases in the average experience level of teachers in high-needs schools after the introduction of the US No Child Left Behind program of accountability.  Ironically, some of the systemic responses to failing schools exacerbate this, as systems frequently respond by funding additional program support for at-risk students, English language learners and special-needs students.  These programs almost always engage non-teaching support staff.  The practical impact of this is that the students with the highest needs get even less access to a qualified teacher during their classroom learning time.

The third relates to the quality of the teaching and learning itself.  This is the topic that has most occupied the Australian debate on MySchool.  There has been no shortage of articles in the media predicting or reporting that MySchool will result, or has resulted, in a narrowing of the curriculum, an increase in the teaching of disconnected facts or rote learning, and classroom time being devoted to test preparation at the expense of rich learning.  In addition, there are websites surveying changes at the school level in terms of what gets taught and how it gets taught.  Less commonly acknowledged is the overseas experience that suggests the quality of teaching and learning is most negatively affected in schools serving high-needs communities.

These differentiated impacts need to be identified and addressed because, if they are not, differences in educational outcomes between high-needs schools and advantaged schools are likely to increase.  The impact of parents choosing to change schools, sometimes on the basis of quite spurious information, will also be felt more in lower socioeconomic communities.  Increased residualisation of government schools leads to higher concentrations of students of lower socioeconomic status.  This has a negative effect on all the students of those residualised schools.  Of course, the schools that struggle the most may ironically be exempt from this kind of adverse pressure, because many families in these communities do not really have the choice of sending children anywhere other than the local government school.

Are there other unintended consequences worth investigating? The potential impact of the Index of Community Socio-Educational Advantage (ICSEA) values being used to set up a notion of like schools, for example, deserves further attention.

The ICSEA assigns a value to each school based on the socioeconomic characteristics of the areas where students live, as well as whether a school is in a regional or remote area, and the proportion of Indigenous students enrolled at the school.  The development and use of ICSEA is a complex matter that deserves a separate article.  Here I am just looking at what an evaluation should focus on around ICSEA's use as part of the MySchool website.  For all its faults, in very broad terms it tells us how advantaged (a high score) or disadvantaged (a low score) a school is.

I must admit that when the concept of reporting by school demographic type was first mooted, I got a little excited.  I took for granted that this would lead to a focus on the fact that, in Australia, a student’s chance of success at school is still fundamentally influenced by the student’s level of disadvantage and the relative disadvantage of the school she or he attends. I thought, at last, we had a chance to expose the fact that Australia has not managed to break the ‘demography as destiny’ cycle.  I took for granted that this would lead to a focus on this outrage and lead to a renewed commitment to addressing it.  I also thought that the ICSEA had the potential to become a framework for a more focused needs-based funding approach than the current state approaches to this.

I was wrong.  Since the launch of MySchool in January,[1] I have googled and googled and found a lot of reporting about the way in which the tool, in its first iteration, has led to some very strange school groupings but, with the exception of an excellent article by Justine Ferrari in The Australian, almost nothing about the link between poverty, disadvantage and NAPLAN results.

This initially surprised me – but then I was reminded that John Hattie, in his recent book Visible Learning, makes the point, almost in passing, that in developed countries the differences in student learning outcomes within schools are greater than the differences between schools.  He refers to interschool differences as trivial at best.  Maybe we in Australia assume that this is so for us.

The MySchool data tells a very different story, and anybody can verify this for themselves.  It shows that schools in the lowest ICSEA grouping (those with ICSEA values in the low 500s) that are assessed as doing better than their like-school peers (a dark green rating) have NAPLAN scores for their Year 9 students that are below the NAPLAN scores of Year 3 students in schools serving advantaged communities (those with ICSEA values around 1,100).  In other words, schools where Year 9 students are achieving average Year 3 results are rated as doing well for schools like them.

I had assumed the ICSEA would be a tool to shine a light on the issue of systemic educational disadvantage, but I have come to understand that it may well be inadvertently taking attention away from this issue. The structure of the website precludes easy comparisons (for good reasons), but in doing so it draws web users' attention to comparisons between like schools, not unlike schools.

If the combination of the limited searchability of the MySchool website and the complexity of the ICSEA concept has unintentionally led to an unspoken legitimising of 'demography as destiny', this is a serious concern that an evaluation would pick up.   Will schools start to say, 'we are doing pretty well considering the community we serve'? Have we inadvertently naturalised different outcomes for different groups of students?

In our efforts to minimise the shaming of high-need schools and their communities, we need to ensure that we have not taken away the shame that all Australians ought to feel about the vicious cycle of family poverty and the failure of the institution of education to systemically and sustainably disrupt this link.

This is not a failure that can be laid at the feet of the individual classroom teacher or school principal.  It is a systemic failure.  If we insist on putting pressure solely on individual schools to do the 'educational heavy lifting', our revolution will fail.  If we believe that encouraging parents to vote with their feet for the best school is enough, without first guaranteeing that every choice can be a high-quality choice with equal opportunity to learn, we will also fail.  Ideally, the evaluation should include in scope not just unintended impacts but the explicitly intended effects too.

While there is no space to embark on this discussion here, I would like to end with a quote from Linda Darling-Hammond’s The Flat World and Education that sums up most eloquently the kinds of changes that education transparency and accountability frameworks ought to drive:

Creating schools that enable all children to learn requires the creation of systems that enable all educators and schools to learn.  At heart, this is a capacity-building enterprise leveraged by clear, meaningful learning goals and intelligent, reciprocal accountability systems that guarantee skilful teaching in well-designed schools for all learners.

REFERENCES:

Publication details for the original article: Clark, M. (2010, June). Evaluating MySchool. Professional Educator, 9(2), 7–11.

Darling-Hammond, L. (2010). The Flat World and Education: How America's commitment to equity will determine our future. New York: Teachers College Press.

Ferrari, J. (2010, May 1). On the honour roll: The nation's top schools. The Australian Inquirer, p. 5.

Hattie, J. (2009). Visible Learning: A synthesis of over 800 meta-analyses relating to achievement. London and New York: Routledge.

Patty, A. (2010, March 30). Evaluation of MySchool pushed aside say critics. Sydney Morning Herald, p. 2.


[1] Of course, the research undertaken by Richard Teese, and also by Chris Bonnor and Bernie Shepherd, since this date has achieved this.  We no longer hear the line that the biggest differences in Australia are between classes in the same schools.  Thanks to the work of Teese, Bonnor and Shepherd, this myth has been busted.

A Small Step for Government But a Giant Step for Remote Indigenous Children: NAPLAN and Indigenous Learners

The current review of NAPLAN is the second Parliamentary Inquiry into the use of NAPLAN data in Australia.  If it goes the way of the first inquiry then little change can be expected.

When the previous Inquiry was initiated, I was working for the Australian College of Educators.  We put a huge amount of effort into our submission.  It went through a member consultation process, was submitted, and then sank like a stone.  Indeed, even the website that hosted all the submissions appears to have disappeared. Nothing much came of it, as can be expected when an issue has become politicised.

I am much less optimistic about what can be achieved this time round. My ideal outcome is unrealistic: the Inquiry will not lead to a change in emphasis from testing and measuring to supporting and building capability, no matter how much the evidence supports such a change.

However we can and should advocate to address the most egregious problems and unintended consequences associated with NAPLAN.  This is our chance to highlight them.

For this reason I was very excited to see that Submission No. 71 to the Inquiry comes from Leonard Freeman, the Principal of Yirrkala School in the remote northeast of the Northern Territory.

Yirrkala School is a K-12 very remote school quite near the mining township of Nhulunbuy on the Gove Peninsula.  It serves a discrete, remote Indigenous community on Aboriginal land, and 100% of the students who attend are Indigenous.  According to MySchool, 97% of the students at the school have a Language Background Other Than English (known as LBOTE).

Now, it is easy to underestimate the significance of this language background issue in Indigenous contexts.  Children who grow up in a remote Indigenous community where their mother tongue is still alive and thriving are, in every sense of the word, still residing in a non-English-speaking country.  They arrive at school with almost no experience of hearing English spoken.  They don't hear it at home, around town, on their radio station, in local stores, at the health centre, or at social and cultural events.

LBOTE is a broad category and very unhelpful for understanding language challenges and issues.  Children and their families can be quite fluent in English, but if they speak a language other than English in their home they are still classified as LBOTE.  Most LBOTE children who have very little or no English are recent arrivals from a non-English-speaking country.  They might reside in suburbs where English is not the dominant home language and, for their first school year, attend an intensive English language unit, but English is still heard around them – in playgrounds, health centres, playgroups, libraries, on radio and TV, and in the school playground and classrooms.  They are, at some level, immersed in a culture where English is heard.

Children at Yirrkala can grow up hearing almost no English spoken.  When they get to school, their classes are conducted in language for the first few years – in fact right up to Year 3, when teaching in English is gradually introduced (in spite of the NT Government's poor decision to change this, Yirrkala maintained this policy).

So what does Leonard Freeman have to say about NAPLAN?

He argues that while there is a perception that NAPLAN is a fair test, it is anything but.

[NAPLAN] is a testing scheme that seems as fair as it could possibly be – all students sit the same tests and the marking is conducted by an independent central body. However, this perception of fairness is a thin veil that covers a system that disadvantages students who speak English as a Second Language.

There are a number of issues wrapped up in this notion of unfairness.

Firstly, the NAPLAN exemption criteria do not give adequate consideration to the English language development of Indigenous children living in non-English-speaking discrete Indigenous communities.

Most Australian educators assume that students who speak little or no English can be identified by the category 'newly arrived from a non-English-speaking country'.  In fact, when I worked in education in the NT, I found that I had to constantly remind education administrators at national meetings that using newly arrived non-English-speaking migrants as a proxy leaves out Indigenous children with identical or even greater challenges.

Nationally two per cent of Australian children are exempt from sitting the NAPLAN test. Students can be exempted from one or more NAPLAN tests if they have significant or complex disability, or if they are from a non-English-speaking background and arrived in Australia less than one year before the tests. 

So, in fact, almost all other children with as little English language competence as Year 3 and even Year 5 remote Indigenous children from communities like Yirrkala are exempt from NAPLAN.  Yet no children at Yirrkala were identified as exempt from NAPLAN testing.
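
To make the gap in the criteria concrete, here is a small sketch that encodes the exemption rule as described above. The field names are my own, not an official schema; the point is that the language exemption turns on recency of arrival from overseas rather than on actual English competence, which is exactly why a Yirrkala child falls through.

```python
# A sketch of the NAPLAN exemption criteria as described above. Field
# names are hypothetical, not an official schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Student:
    has_significant_disability: bool
    non_english_speaking_background: bool
    # None means the child was born in Australia.
    months_since_arrival: Optional[int]

def exempt_from_naplan(s: Student) -> bool:
    if s.has_significant_disability:
        return True
    # The language exemption hinges on recent arrival from overseas,
    # not on the student's actual command of English.
    return (
        s.non_english_speaking_background
        and s.months_since_arrival is not None
        and s.months_since_arrival < 12
    )

# A newly arrived migrant child with little English: exempt.
migrant = Student(False, True, months_since_arrival=6)
# A Yirrkala child with comparable (or less) English exposure: not
# exempt, because they were born in Australia.
yirrkala_child = Student(False, True, months_since_arrival=None)

print(exempt_from_naplan(migrant))         # True
print(exempt_from_naplan(yirrkala_child))  # False
```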

This leads to the ridiculous situation where remote Indigenous children with almost no exposure to the English language, especially in written form, “must take the test under the same conditions as students who speak English as their first language and have their results evaluated in terms of the ‘typical achievement’ standards of mother tongue English speakers.”

Now, one of the reasons why education institutions and administrators resort to the category 'recently arrived migrant from a non-English-speaking country' as a proxy for children who do not yet have a sufficient grasp of Standard Australian English is that we don't have sensible data on this matter.  We have data on children who have a language background other than English, but this tells us nothing about their level of competence with written English.

This Inquiry could secure bipartisan support to fix this matter – it is not a politicised, hot issue.  It is about applying definitions of technically relevant matters in an inclusive and fair manner. Children in Years 3 and 5 who reside in communities where Standard Australian English is not spoken could be exempted from NAPLAN until their English language learning enables them to read English to a defined level.

Secondly, NAPLAN is not a culturally fair test and this further discriminates against remote Indigenous children.

Back again to Leonard Freeman:

… NAPLAN reading tests assess students’ reading ability by judging their answers to multiple choice questions which ask often complex questions about the reading material.

He provides the following example of a multiple-choice item in a Year 3 reading test:

‘But I feel funny about saying I own him’. What reason does Tim give for feeling this way?

a) Elvis is really Malcolm’s dog.

b) Tim thinks dogs cannot be owned.

c) All the family helps to look after Elvis.

d) Elvis is much older than Tim in dog years. 

It is pretty obvious that a great deal of inaccessible cultural knowledge is required to eliminate the supposedly irrelevant answers for this item.

These sorts of questions do not simply assess whether the student can read the material and basically comprehend the story; they go well beyond that. A Year 3 student from a remote Indigenous community who is still trying to master spoken English and western cultural norms would find a question like this very difficult to answer. The assumed knowledge – that dogs' ages are measured in 'dog years', that the word 'funny' can mean uncomfortable rather than humorous, and the very concept of questioning the definition of ownership – is all unfamiliar to a child growing up in a remote Indigenous setting.

The NAPLAN reading test actually tests analytical skills which are coated heavily in western cultural norms. 

Another example provided by Freeman of an item around the delivery of newspapers provides further insights into cultural inaccessibility.

The story begins with the householder complaining to the newspaper boy ‘you left the paper jutting out of the back of my box’ and we also learn the owner had previously complained the paper needs to be left ‘in line with the fence’. This question was designed to test whether students could infer the meaning of new words and constructions. Yet to do so the students need to be familiar with the cultural context, in this case the students need to know that houses have a box on their fence line where mail and newspaper deliveries are left.  If the student has grown up in a remote community or refugee camp where there are no letter boxes and few houses have fences they will not be able to access the meaning of the text. 

Thirdly, the lack of fit between the NAPLAN tests and the kinds of assessments needed to effectively support teachers in these challenging contexts leaves teachers unsupported and undermined.

Now, it would be reasonable to expect the NT Department of Education to be fully cognisant of these circumstances and to make it its business to ensure that the unintended consequences of this situation are addressed, or at least mitigated.  Sadly, when I worked in the NT I found that this was not the case.  And scrolling through its website today, I found that nothing much has changed.  There is now an acknowledgement that Indigenous children are English language learners, but what this means in terms of resourcing is minimal, and what it means for teachers across remote schools appears to be completely ignored.

The absurdity of this is best illustrated through the following personal experience of a professional development event.

This event took place at a beautiful new resort in the remote mining community on Groote Eylandt.  The attendees at this session were principals and a group of their nominated teachers from schools in remote Indigenous communities.

The aim of the three-day session was to 'teach' the attendees – all from remote Indigenous schools – how to drill down into their school's test results and develop not just a strategic but a tactical response to what they found.  It was a highly structured event. First of all, the groups were given a spreadsheet showing their NAPLAN results for all year levels and all tested areas, and a detailed worksheet to work through.

I sat next to a school team from a large school in Eastern Arnhem Land, similar in key features to Yirrkala.  It was also a school that ran a bilingual program, which meant that all students in Year 3, and almost all in Year 5, could not yet read in English – even at a basic level.

This school had NAPLAN results that were marginally worse than those of the other schools represented.  At this school, in almost every subject and at almost every year level, 80–100% of the students scored zero – that is, they did not get one answer right.  Not one.  Some classes in some schools had a small minority of students who did receive a higher score – a few even approaching the relevant band for their year – but they were a tiny, tiny minority.

The professional development session required the teachers to group their students by what they did not know.  For example: how many students did not understand the convention of the full stop?  Put a ring around these students.  The teachers next to me sighed and ringed the whole class.  And it went on like this for three whole days.  It was idiotic and devastating.

These teachers went back to their schools not just demoralised but with decontextualised lesson plans on full stops, the sound 'ch', prime numbers and so on.

I tell this story because it is an extreme example of just how stupid it is for people to invent prescriptive solutions that must be rolled out across all schools, with no exception.

There is no doubt that this is damaging for teachers in remote schools.  It was political exposure over poor NAPLAN results that forced Marion Scrymgour to pre-emptively abolish the struggling, underfunded bilingual program – something she later came to regret, for good reason.

Leonard Freeman sees the NT Department's priorities and the experiences and struggles of remote teachers as being on a collision course:

The NT government made a commitment to having 75% of NT students meet the Year 3 NAPLAN benchmarks and teaching programs are aimed at achieving this. The amount of English instruction is being increased under the misguided belief that elevating the focus and importance of English will yield better English results. 

The inclusion of ESL students in NAPLAN testing places ESL researchers, specialist ESL teachers and classroom teachers in a conflict between the principles of effective ESL teaching and assessment practices and the requirements of governments and education departments. Instead of working together to attain the best educational outcomes for students, researchers, policy makers, teachers and governments are locked in a fundamental disagreement between meeting the needs of ESL students and the administrative and political advantages of a uniform testing regime.

 One of the perverse consequences of this is that programs which claim to accelerate English literacy or which are aimed at native English speakers are now favoured ahead of academically sound ESL programs which demonstrate the genuine progression of ESL students.

It has also led to effective and evidence-based programs, such as the Step Model bilingual program, being shut down, to the detriment of Indigenous ESL students.

Now, some readers may be thinking that I am arguing for lower expectations for remote Indigenous children.  That is not my message.  These children are exposed to English at school for the first time, and it is often their third or fourth language.

We exempt newly arrived LBOTE children 'down south' not because we expect less of them but because we recognise that their learning journey has to include an additional learning pathway.  But we do not expect less of them in the long run.

Back to Freeman again:

… an ESL approach is not a lesser approach. It is aimed at getting students who are speakers of other languages to a point where they can access mainstream education. A program may be deemed ineffective if ESL students never reach age-grade performance, but ESL programs that successfully move students along the continuum at a rate that is acceptable based on the research should be regarded as valid and ideal for ESL learners.

It is important to recognise the research which shows that it takes a minimum of 4 years, and more like 7 years, to achieve proficiency in a second language to a level that is sufficient for academic pursuits. The efficacy of ESL programs should be judged against the backdrop of this research.

So what are the small steps governments could take in order to stop getting in the way of effective education for remote Indigenous children?

  1. Stop NAPLAN testing for remote Indigenous children until Year 7.
  2. In the meantime, agree on an alternative form of testing[1] that is more appropriate for ESL students in terms of cultural content and recognition of ESL learning stages.
  3. Address cultural bias in NAPLAN testing so that when remote Indigenous students are linguistically ready to sit the tests they can understand what is being asked of them.
  4. Develop a nationally agreed English Language Learner Scale (ELLS) to replace LBOTE as a student category, so there is a fair and consistent way to measure disadvantage based on English language learning needs.

[1] The ACER Longitudinal Literacy and Numeracy Study for Indigenous Students (LLANS) test has been trialled in all states and territories with both Indigenous and non-Indigenous students. Researchers have now aligned the LLANS test results to the NAPLAN score scale, so it would be possible for ESL students in the primary years to be given an appropriate test that gives a much clearer indication of their actual literacy and numeracy skills.
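
As a footnote to the footnote: one common way of placing scores from one assessment onto another's scale is mean-sigma linear equating over a shared calibration sample. The sketch below illustrates the idea with invented placeholder statistics; it is not a description of ACER's actual alignment method.

```python
# Mean-sigma linear equating: map a LLANS score onto the NAPLAN scale by
# matching the mean and standard deviation observed for a common
# calibration population. All statistics below are invented placeholders.

def mean_sigma_equate(llans_score: float,
                      llans_mean: float, llans_sd: float,
                      naplan_mean: float, naplan_sd: float) -> float:
    """Convert a LLANS score to a NAPLAN-scale equivalent."""
    z = (llans_score - llans_mean) / llans_sd  # standardise on LLANS scale
    return naplan_mean + z * naplan_sd         # re-express on NAPLAN scale

# Example with placeholder anchor statistics.
print(mean_sigma_equate(62.0,
                        llans_mean=50.0, llans_sd=10.0,
                        naplan_mean=400.0, naplan_sd=70.0))  # 484.0
```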

Poverty is important but inequality matters more

Richard Wilkinson and Kate Pickett's book, The Spirit Level: Why More Equal Societies Almost Always Do Better (London: Allen Lane, 2009), was for a brief moment in time a hot topic – at least in the US and UK.  In Australia it passed without much of a ripple.  This is a pity, because its message on education is stark and simple.

The research on which this book is based draws on mainstream longitudinal data from around 200 different data sets, using reputable sources such as the United Nations, the World Bank, the World Health Organisation and the US Census.  The authors correlate economic growth and levels of equity with a wide range of social data.

These data, they argue, tell a powerful, convincing and important story: inequality is bad, not just for the poor, but for everyone.

Wilkinson and Pickett's evidence shows that for the developed world, the pursuit of economic growth may once have been an important goal that contributed to our wealth and national wellbeing, but this is no longer the case.

Historically the pursuit of economic growth has benefited humanity by providing better education, health, increased longevity, well-being and happiness.  They also argue that for poor countries today, life expectancy increases rapidly during the early stages of economic development.

However, at a certain stage of economic development (in middle-income countries), this rate of improvement slows down.

Finally, when countries become wealthy economies, the benefits of narrowly pursuing a growth agenda disappear and getting richer adds nothing further to life expectancy.  At this point, there are ever diminishing social returns to investing in the neoliberal agenda and developed societies have very little to gain in the continued sole pursuit of economic growth.

As countries move along the development continuum, infectious diseases common in the poorest countries gradually cease to be the most important cause of death but they are replaced with the diseases of affluence (cardiovascular disease and cancers). As affluent societies grow richer, there are also long-term increases in significant social problems across the board.

It is important to note that this has nothing to do with total wealth – usually expressed as average per-capita income. The US is still among the world’s wealthiest nations in terms of average income per person, but it has the lowest longevity of the developed nations, and a level of violence that is off the scale.

This is because it is not about wealth, nor is it just about poverty. It is about the levels of inequality that have been created in many economies as a direct result of intense wealth creation and the policies that have supported this path.  The authors contrast the US and the UK with Japan and the Scandinavian countries – all wealthy economies, but with starkly different gaps between the incomes of the top 10% and the bottom 10%.

Note: The data in the book on Australia suggest that we are closer to the US high-inequality profile. ACT Federal Member Dr Andrew Leigh will be launching his latest book, Battlers and Billionaires: The Story of Inequality in Australia, on 1 July 2013 at ANU.

Countries that have lower economic standing but are relatively more equitable do better on almost everything.  And even though rich people tend, on average, to be healthier and happier than poor people in the same society, it is important to note that both richer and poorer do better in more equitable societies. This is demonstrated through a detailed comparison of nations by levels of inequity and rates of social and economic problems, and then by a comparison of the 50 US states along the same two dimensions. Almost all problems that are more common at the bottom of the social ladder are more common in more unequal societies.  Or, to put it another way, there is a very strong tendency for ill health and social problems to occur less frequently in more equal societies.
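
For anyone wanting to see the shape of this analysis, the core move is just a bivariate correlation across countries (or US states). A toy sketch follows, with invented values standing in for the book's data.

```python
# Correlating an inequality measure with an index of health and social
# problems across countries. The numbers are invented placeholders to
# make the example runnable; they are not Wilkinson and Pickett's data.
from statistics import correlation  # Python 3.10+

# Hypothetical (inequality, problem-index) pairs, one per country.
inequality = [3.9, 4.3, 5.2, 6.8, 7.1, 8.5]   # e.g. top-to-bottom income ratio
problems   = [0.20, 0.25, 0.40, 0.65, 0.60, 0.90]

r = correlation(inequality, problems)
print(f"Pearson r = {r:.2f}")  # strongly positive in this toy data
```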

Inequality is life diminishing, not just for those at the bottom of the heap but right through society.  It increases the rate of teenage pregnancy, violence, obesity, imprisonment and addiction; and it functions as a driver of consumption that depletes the planet’s resources.

Of all crimes, those involving violence are most closely related to high levels of inequality. This makes sense intuitively.  Many baby-boomer-aged educators will recall the groundbreaking research undertaken by Paul Willis, recounted in his book Learning to Labour, published in the late seventies. In rich and disturbing detail, this book provided an up-close and personal account of the ways in which young men with no economic or educational route to achieving high status and earnings embraced a different form of status – being 'bad boys' at school, in gangs, and through a hypermasculinity that embraced violence and petty crime. Reading this book was a light-bulb moment for me, because it made sense of my growing awareness of the complexity of the challenges faced by teachers in high-need schools, where these dynamics play out every day.

Mental health is the standout example. There have been substantial increases in actual rates of anxiety and depression and, as all teachers know, this has been accompanied by increases in behaviour problems. There is a strong relationship between mental illness and inequity. The authors also show that levels of trust between members of the public are lower in countries where income differences are largest, and argue that this is because of the kind of stratification that takes place in association with inequity: it entails placing a high value on acquiring money and possessions.

Obesity, which is rapidly increasing throughout the developed world, is a major health crisis. In the past the rich were fat and the poor were thin, but in developed countries these patterns are now reversed. Fat is now a class issue. The figures show that levels of obesity tend to be lower in countries where income differences are smaller.

More unequal countries have worse educational attainment. This suggests that there is more to educational equity than overcoming the unequal school readiness starting point of disadvantaged students and a relentless focus on student learning progress.  Differentiated levels of educational attainment are strongly influenced by the kind of communities we create and the sense of possibility and aspiration that exists.  Communities with high levels of trust and social capital and societies with high social mobility are more likely to flourish in more equal economies where governments invest in high quality public services, housing is not highly stratified and schools are not highly segregated by income.

Most policy measures directed to addressing the social determinants of health and educational inequality would have to be rather different if they were to take the thesis of this book into account.

This book provides a convincing critique of any narrow 'close the educational achievement gap' agenda. We may be able to marginally reduce gaps in levels of reading at a point in time through a relentless focus, but this book suggests that the belief that education measures alone can bring about greater equality needs to be flipped.  If we want better educational outcomes, we also have to work for a more inclusive, participatory civil society and greater economic and social equity – through wage fairness, job security, good working conditions, career pathways for people in dead-end jobs, retraining support, housing policies that reduce income-based residential and educational segregation, food security and so on.

It also affirms the central importance of implementing school funding reforms currently on the table but suggests that this should be a start, and not an end.

It's important that the new funding arrangements will give greater scope for high-need schools to provide much-needed wrap-around services, remedial support, greater subject choice, enrichment and early intervention to address the immediate learning and social-emotional needs of their students.  However, if the funding reforms do not change the government funding share across the different systems, then public schools, no matter how hard they work and how good they are, will not attract back any of the parents who have opted out.

This means that, just as the US might have a level of violence that is off the scale in international terms, Australia will continue to have a level of educational segregation that is also off the scale.  If the Gonski reforms are agreed to, and I desperately hope they are, we will have made a start on addressing school funding poverty; but the relative levels of school inequality and the high levels of educational segregation will remain until we have a government that is willing to stand up to the power of the non-government school lobby, for whom 'market share' of students is key.