The Wilder Research Group used the mixed-methods data collected to evaluate whether the MVNA met its goals. To conduct the analysis, the group examined the range and spread of the data to obtain results for birth outcomes, school enrollment, delay of subsequent pregnancy, maternal-infant bonding, connection to resources, child growth and development, and overall teen satisfaction. It then compared the rates in the sample population to those of the overall Minnesota metropolitan area. The group found that the program had immediate effects in several areas. For example, the teenage mothers served by the program had better birth outcomes than the general population: those enrolled who had at least six visits delivered babies with healthier birth weights, more often reached full-term gestation, and received more adequate prenatal care. The program also had a significant impact on the number of teenagers who continued and graduated high school; teenage parents in the program completed school at a rate 20% higher than the general population. The research group also found that teenagers in the program used or knew about community resources such as the Women, Infants, and Children Supplemental Food Program (WIC).
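As a rough sketch of the kind of rate comparison described above (this is not the Wilder Research Group's actual analysis, and all counts and the assumed population rate below are hypothetical), a program's sample rate can be compared against a known population rate with a one-sample proportion z-test:

```python
from math import sqrt

def proportion_z_test(successes, n, pop_rate):
    """One-sample z-test: does the sample proportion differ from a
    known population rate? Returns (sample rate, z statistic)."""
    p_hat = successes / n
    se = sqrt(pop_rate * (1 - pop_rate) / n)  # standard error under the null
    return p_hat, (p_hat - pop_rate) / se

# Hypothetical figures: 90 of 120 program teens completed school,
# compared against an assumed population completion rate of 55%.
rate, z = proportion_z_test(90, 120, 0.55)
print(f"program rate {rate:.2f}, z = {z:.2f}")
```

A large positive z statistic would suggest the program sample's rate exceeds the population rate by more than sampling variation alone would explain.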
The group identified the goals set at the beginning of the program and designed a methodology, using a set of mixed research methods, to determine whether those goals were met. The strengths of the study included the mixed research methods, adherence to ethical measures, and the implementation of one of the two evaluation studies for summative evaluation. The evaluation design also had weaknesses, which may have resulted from funding constraints. Weaknesses of the design included not using an impact assessment, not using statistical analysis, and a lack of follow-up.
Chapter 1 defined the research problem and the purpose of the study, provided definitions, assumptions, and limitations, and presented four research questions for this feasibility study:
Program planning is a process for achieving a particular goal or mission: an organized process through which a set of coordinated activities or interventions is developed to address and facilitate change in some or all of the identified problems. Program evaluation provides useful information for improving programs and service delivery systems; its purpose is to improve program planning, effectiveness, design, and efficiency. The two are different processes, but ideally they share the same goals and mission. The evaluation process takes place after the planning of a program.
Currently, the Community Prevention Partnership of Berks County delivers the Nurse-Family Partnership (NFP) home visitation program, which provides services to first-time, low-income expectant mothers. The organization has been delivering the program for many years and currently serves 250 families. Since it began, the Berks County NFP program has served 1,600 first-time low-income mothers and 1,250 children. Most NFP clients are 18 years of age at the time of referral. Thirty-one percent of these first-time mothers receive the Supplemental Nutrition Assistance Program (SNAP), and about fifty-five percent receive Medicaid assistance. The average household income is $16,000, and fifty-two percent of the mothers have not obtained a high school diploma or GED. NFP outcomes involve maternal and child development education, referrals, and follow-ups. The program also encourages breastfeeding, immunization updates, and developmental screenings. It has reduced smoking during pregnancy by 16.9% and the prematurity rate by 4.5% (Michalopoulos, Lee, Duggan, Lundquist, Tso, Crowne, Burrell, Somers, Filene, & Knox, 2015).
A program evaluation offers a way to determine whether adjustments are needed to keep the project successful. The project evaluation team will analyze and measure each outcome, input, and process component in order to clarify the program's objectives and goals. Creating a framework of evaluation methods and questions, along with a timeline for the evaluation activities, will assist in the evaluation (CDC, 2011; HRSA, n.d.; McGonigle & Mastrian, 2015). The goal of outcome measures is to describe the overall performance of the process; outcome measurement will therefore determine the program's cost-effectiveness, attribution, and efficiency (CDC, 2012; HRSA, n.d.; McGonigle & Mastrian, 2015). Additional evaluation will concern the input measures, which are the resources that were put into the process. Lastly, the appraisal of process measures will provide data regarding the performance of each course of action involved in the implementation of the project (HRSA, n.d.). After a thorough evaluation of the project, recommendations will be prepared and the results disseminated.
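One of the outcome measures named above, cost-effectiveness, can be sketched as a simple cost-per-outcome ratio. This is a hypothetical illustration (the dollar figure, outcome count, and function name are invented, not drawn from the project):

```python
def cost_effectiveness(total_cost, outcomes_achieved):
    """Cost per unit of outcome: a basic efficiency measure."""
    if outcomes_achieved == 0:
        raise ValueError("no outcomes achieved; ratio undefined")
    return total_cost / outcomes_achieved

# Illustrative only: a $50,000 program cost with 125 families
# reaching the target outcome gives the cost per successful outcome.
print(cost_effectiveness(50_000, 125))  # 400.0
```

Comparing this ratio across programs (or across years of the same program) is one common way an evaluation team frames efficiency.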
In order to evaluate research projects and understand their strengths and limitations, it is necessary to first understand which type of research method is being used. First, type your name in the space indicated above. Then classify each of the following research studies as experimental, correlational, or observational/qualitative. Simply type an E, C, or O in the space below each study to indicate the type of study. This assignment will be graded on a 100-point scale: ten points will be deducted for each wrong categorization of a study, and the assignment will be worth 3% of your overall grade in the course.
This essay discusses the philosophies, concepts, and methodologies of research investigations. Research designs are contrasted and compared to assess benefits, limitations, and applications. Approaches to quantitative and qualitative studies are illustrated and explained. The operations and purposes of program evaluations and action research studies are elucidated.
The author was very thorough in discussing his study, using a statistical table to show each area covered in the survey. The author also covered in detail the procedures used, how the study subjects were found, and the programs used to effect a positive outcome.
This study will implement a mixed methods design to include both quantitative and qualitative methods.
It should be noted that this assessment was completed for research purposes and does not cover all areas of development. Results of some of the measures administered are reported below.
Evaluation is a “decisive assessment of defined objectives, based on a set of criteria to solve a given problem.” Evaluation mainly serves three purposes: to compare results with the goals and expected effects of the system; to direct work towards the expected result, with the help of formative evaluation, during the development and introduction of the system; and to use the findings and outcomes of the evaluation process as an experience base for future projects (Ammenwerth, Gräber, Herrmann, Bürkle, & König, 2003). Others define evaluation as the systematic collection of information about program activities and features in order to judge and enhance a program's effectiveness. Evaluation leads to a settled opinion that something about the program is the case, which may lead to a decision to act in a certain way (Institute, 2007). Process evaluation describes the implementation of an information resource and judges its merit and worth, which includes
Two reasons for evaluation are to assess and improve the quality of a program and to determine its effectiveness. For this program, summative and impact evaluations are deemed best. An impact evaluation examines the immediate observable effects of a program, measuring knowledge, skills, awareness, attitudes, and behaviors (McKenzie et al., 2013, p. 376). Because the program intends to spark a behavior modification, this evaluation style works best: the impact evaluation can show the strengths of the program and what needs to be improved if the desired behavior is not achieved.
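An impact evaluation of the kind described above often summarizes pre- and post-program measures of knowledge or attitudes. A minimal sketch, using entirely hypothetical scores and a helper name of my own choosing:

```python
def mean_change(pre, post):
    """Average pre-to-post change per participant (paired scores)."""
    if len(pre) != len(post):
        raise ValueError("pre and post scores must be paired")
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

# Hypothetical knowledge-test scores before and after the program.
pre = [52, 60, 48, 70, 65]
post = [68, 72, 59, 80, 77]
print(mean_change(pre, post))  # 12.2
```

A positive mean change would be one piece of evidence that the program produced the intended shift in knowledge or behavior, though it would not by itself establish attribution.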
The rationale for this evaluation is to confirm that the goals of the program are being met: that the children are being educated in making healthy choices and are participating in the program. For the evaluation to be accurate, research needs to determine whether the program will be effective.
I also learned the difference between basic and applied research where basic research focuses mostly on things such as emotion, cognition, social behavior, personality development, learning, and neuropsychology. Basic research will be more helpful for answering fundamental questions about the nature of behavior. Applied research however, addresses more of a specific problem with a possible solution. Program evaluation covers a large area of applied research. Program evaluation assesses the social reforms and innovations that occur in government, education, the criminal justice system, industry, health care, and mental health institutions. When discussing behavioral research it is imperative to note the precedents that it has in many fields and the significance of the applications to public policy.