Our Blog
CCE staff and partner reflections on our collaborative work to create schools where learning is engaging and rewarding, and every student is set up for success.
As I approach my 15th anniversary as an educator and my 10th as an evaluator of educational programs, I am still amazed at how little the two fields actually interact in practice. I find that educators are exceptional at discussing student-level achievement and growth – yet when the conversation shifts to the program level, things become vaguer and, therefore, less useful. Importantly, this is no fault of the educators themselves; their training often focuses on student-level interactions and curriculum rather than program effectiveness. Yet the demands of education law, particularly the current reauthorization of the Elementary and Secondary Education Act (i.e., the Every Student Succeeds Act), require that managers of Title I programs evaluate their programs on a regular basis. To fulfill these requirements, many program managers must go beyond their training and enter the world of program evaluation.
In early August of 2017, I was invited by the New Hampshire Department of Education to present at its annual New Hampshire Educators Summer Summit and to give interested districts an opportunity to learn more about program evaluation and explore the needs of their own Title I programs. Over the three days of the conference, my conversations with educators across the state reminded me how hard New Hampshire educators are working to improve the lives of all students. However, these conversations also highlighted how little training educators have in the collection, analysis, and reporting of program-level data to assess the interventions they have designed.
In my presentation, I reminded the group that both Schoolwide and Targeted Title I programs are expected to be evaluated on a regular basis, but I also reminded them that they have a great deal of control in creating those programs – and thus control over what they need to evaluate. I made it clear that I recognize many schools and districts have concerns about their time or capacity to conduct high-quality program-level evaluations. However, I reiterated that without strong evaluation, Title I program managers will lack the data they need to improve their programs – or to see whether the changes they made were effective. The Every Student Succeeds Act (ESSA) puts even more emphasis on state and local assessment of programs, making the local evaluation of these programs even more important. Though the participants ranged in evaluation expertise, by the conclusion of the presentation they all had some idea of what their next steps toward better understanding their programs might be.
The needs of districts, schools, and programs vary enough that giving one general piece of advice would be misguided. However, a few key takeaways from the presentation may be informative for most schools:
Rather than fearing data about a program – use it to your advantage. Many districts have no data on how their Title I programs are doing; they simply continue with past programming models, and some even fear what the results would be if they looked at them. Remember that if you do not find evidence of improvement, that may not be an indication that the program is failing. It could be that you need another source of data (e.g., teacher or student interviews) to find what is working. Mixing surveys (quantitative methods) with interviews (qualitative methods) can also greatly enhance a school’s understanding of which parts of a program were effective and which were not. If it seems that the program genuinely is not working, do not be afraid to tweak it – maybe it should focus on a different grade level, or maybe fewer students should receive a more in-depth intervention. Title I programs are supposed to be an effective short-term intervention for a select group of students (or schools) – if it is not working, do something else!
If you lack program evaluation experience, ask simple questions and see what works (e.g., How many times a month are Title I students actually receiving their intervention?). If you are experienced, go more in-depth with your evaluation questions (e.g., Do students whose families attend our Title I night show greater improvement?). Whether your evaluation is basic or more complex, both process questions (i.e., about program functioning) and outcome questions (i.e., about participant outcomes) need to be explored to get the whole picture.
I believe that educators have, or can develop, the skill sets to be excellent program evaluators. The skill is already present in most; they are simply limiting their evaluation skills to the student level. With some basic assistance, educators can translate what they already know how to do with students to program-level assessment. In New Hampshire, regional Title I consultants are more than happy to provide advice, and I have met personnel from other states who provide similar services free of charge. If, however, you would like more help, the Research, Evaluation, and Policy service area at the Center for Collaborative Education has the capacity to provide a variety of professional development and evaluation services to schools and districts of any size.
If you have questions about program evaluation or professional development opportunities, you can reach me at rfeistman@ccebos.org.