Stringent regulation of ECEC may mean individual classroom quality is overlooked, study finds
A recent study of Head Start early childhood education and care (ECEC) services in the United States has found that the “high-stakes accountability policies” used to monitor centre quality may miss important variation in classroom quality within centres. This could lead to inaccurate representations of centre quality, and to flawed decisions about which programs must re-compete for their funding.
The study, titled Are All Head Start Classrooms Created Equal? Variation in Classroom Quality Within Head Start Centers and Implications for Accountability Systems and published yesterday in the American Educational Research Journal, found that the current evaluation process used to assess Head Start centres is not a consistently reliable measure of overall centre quality.
Head Start is the United States’ largest publicly funded preschool program for low-income children, serving more than one million children each year with a federal annual budget of more than $9 billion. It is administered by the US Department of Health and Human Services, which awards grants to individual agencies that operate local centre-based programs. Nationwide, there are currently about 1,700 Head Start agencies, which provide program services to about 15,000 Head Start centres with more than 41,000 classrooms.
Under the current method, employed at both federal and state inspection levels, a portion of classrooms within a given centre is randomly selected and assessed to determine the centre’s quality rating, on the assumption that quality is consistent across a centre’s classrooms. However, across Head Start programs, the authors found that one third to one half of the variation in quality is attributable to differences between classrooms within a centre, rather than to quality differences from centre to centre.
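The variance split described above can be sketched with a toy simulation. Everything below is hypothetical: the centre count, classroom count, and the two standard deviations are illustrative assumptions, not the study’s estimates.

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: 200 centres with 6 classrooms each. Each classroom's
# quality is a centre-level component plus a classroom-level component.
N_CENTRES, N_CLASSROOMS = 200, 6
BETWEEN_SD, WITHIN_SD = 1.0, 0.8  # assumed spreads, chosen for illustration

centres = []
for _ in range(N_CENTRES):
    centre_mean = random.gauss(0, BETWEEN_SD)
    centres.append([centre_mean + random.gauss(0, WITHIN_SD)
                    for _ in range(N_CLASSROOMS)])

# Decompose variance into a between-centre part (spread of centre averages)
# and a within-centre part (average spread of classrooms inside a centre).
centre_means = [statistics.mean(c) for c in centres]
between_var = statistics.pvariance(centre_means)
within_var = statistics.mean(statistics.pvariance(c) for c in centres)
share_within = within_var / (between_var + within_var)
print(f"share of quality variance within centres: {share_within:.2f}")
```

With spreads like these, a substantial fraction of the total variance sits inside centres, which is the pattern the authors report for Head Start.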
The findings are significant because the Head Start program relies on state and federal funding to remain viable. The authors’ analysis suggests that 37 per cent of centres in their sample would have received different funding decisions under the major accountability system for Head Start, the Head Start Designation Renewal System (DRS), depending on which half of a centre’s classrooms were randomly included in the quality assessment.
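A minimal sketch of how that sampling sensitivity can arise. The pass/fail cutoff, effect sizes, and centre counts below are purely hypothetical assumptions for illustration, not the DRS’s actual rules or the study’s figures.

```python
import random
import statistics

random.seed(1)

# Hypothetical illustration: 6 classrooms per centre, quality mixing a
# centre effect and a classroom effect; a centre "fails" if the average
# of its sampled classrooms falls below an assumed cutoff.
N_CENTRES, N_CLASSROOMS = 200, 6
CUTOFF = 0.0  # assumed threshold, not a real DRS parameter

flips = 0
for _ in range(N_CENTRES):
    centre_mean = random.gauss(0, 1.0)
    rooms = [centre_mean + random.gauss(0, 0.8) for _ in range(N_CLASSROOMS)]
    random.shuffle(rooms)
    half_a, half_b = rooms[:3], rooms[3:]  # two random halves of the centre
    fails_a = statistics.mean(half_a) < CUTOFF
    fails_b = statistics.mean(half_b) < CUTOFF
    flips += fails_a != fails_b  # decision depends on which half was drawn

print(f"centres whose decision flips with the sampled half: "
      f"{flips / N_CENTRES:.0%}")
```

Whenever classrooms within a centre differ meaningfully, some centres near the cutoff will pass with one sampled half and fail with the other, which is the fragility the authors quantify.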
In addition, average centre-level quality, as determined by current accountability policies, was not found to be related to measures of children’s development, calling into question the common approach of averaging classroom quality within centres to represent children’s experiences.
Instead, differences in classroom instructional quality within a centre were significantly related to differences in children’s academic and social skills. In other words, the authors found that educators, and individual classrooms, were a stronger predictor of children’s outcomes than the quality rating of the centre as a whole.
“Our findings suggest that there are still key issues to address in how we measure quality and use measurements to hold programs accountable and to encourage quality improvements,” study co-author Associate Professor Terri J. Sabol said.
Co-author Emily C. Ross, a policy fellow for the American Association for the Advancement of Science and the Society for Research in Child Development, agreed, saying: “This study speaks specifically to concerns about how accountability systems monitor quality. It suggests the importance of ensuring that quality is measured and represented accurately so that, ultimately, all children can have a high-quality experience.”
For their study, the authors examined variation in common indicators of classroom quality in current accountability systems — including class size, child-adult ratio, teacher education, the global environment, and instructional support — using a large, nationally representative sample of Head Start centres.
The authors assessed data on 196 centres, 596 classrooms, and 4,130 students from the Department of Health and Human Services’ 2006 and 2009 cohorts of the Head Start Family and Child Experiences Survey (FACES), to determine the extent to which classroom quality varies within centres, whether current accountability practices provide an accurate or fair representation of centre quality, and how classroom- and centre-level quality relates to children’s outcomes.
“Our results indicate that a number of current choices in how Head Start centres are evaluated may interfere with fairness and accuracy in accountability systems,” Associate Professor Sabol said.
“Educators and classrooms matter,” she said. “Although it is important to select a high-quality ECEC setting, our results suggest that it’s also important — if not more important — to find high-quality educators. These findings apply to program administrators, as well, in terms of offering support services to ensure that all education within a setting is of high quality.”
The full study is available in the American Educational Research Journal.