Friday, September 11, 2009

Assignment 1

The model used in the evaluation “Report on a program evaluation of a telephone assisted parenting support service for families living in isolated rural areas” (Cann, Rogers & Worley, 2003) most closely resembles Ralph Tyler’s model of objectives and outcomes. The objectives of this program were to influence various factors (e.g., reduce depression, increase parental satisfaction). The researchers measured these factors pre- and post-program and evaluated the outcomes (changes in levels of depression and parental satisfaction). Their results were clearly in the predicted direction, with parents showing significantly reduced levels of depression, anxiety, stress and inter-parent conflict. There were also significant improvements in parents’ reports of their children’s behavior and parenting styles, as well as in parent satisfaction and efficacy. From this the authors concluded that the program had influenced the behavior of both the parents and the children, as well as the overall well-being of the parents.

This model was a very simple means of measuring changes in these factors before and after participation in the program. However, do the results mean that the program directly influenced both the behavior and the feelings of the parents? Not necessarily. They only tell us that self-reported levels of these variables changed after participation; they say little about the effectiveness of the program itself, only that the changes occurred. For example, parents reported lower levels of stress, anxiety and depression after participating, but it is hard to say which part of the program, if any, led to these changes.

If the researchers had employed a CIPP model, more attention would have been given to the actual process of the program, providing more feedback about potential direct relationships between participation in the program and the variables measured. A qualitative approach provides richer data and would be a more dependable means of determining whether participation had a direct impact on these variables. Interviews could have been conducted during the course of the program, which could easily have been done during the weekly conversations participants had with counselors. The researchers could then have evaluated how feelings and behaviors evolved over the course of the program, rather than merely measuring changes from point A to point B.

As noted by the authors, future research ought to consider using a control group. This would increase confidence that participation in the program led to the changes in the variables measured. In addition to a control group, further experimental groups could be included, with various components manipulated (e.g., more or fewer self-help materials, the amount or type of contact with counselors), which would allow one to evaluate which parts of the program are effective and which are expendable.

There are some strengths in the simplicity of the evaluation design. It was a straightforward approach that clearly showed the reader that a large number of variables changed significantly after participation in the program. The researchers also conducted a satisfaction survey, which revealed that the participating families were very satisfied with the content and results of the program. It is a very good starting point, and the researchers provided a number of limitations and directions for future research that can be used to further evaluate this program.

Here is a link to the online article:

http://auseinet.flinders.edu.au/journal/vol2iss3/index.php

1 comment:

  1. Lynsay

    Excellent analysis of the article. You correctly point out the advantages and disadvantages of a simplistic evaluation. It does not sound as though you have much faith in the results of this evaluation despite the positive outcomes. I agree that the CIPP model would do a much better job of determining which factors had an impact.

    Jay