Front Public Health. 2014; 2: 111. The aims of this article are to provide a rationale for the importance of evaluation in public health initiatives; justify Public Health Education and Promotion’s decision to create an Evaluation Article Type; and outline the evaluation criteria against which submitted articles will be assessed for publication. Evaluation is a process used by researchers, practitioners, and educators to assess the value of a given program, project, or policy (1). The primary purposes of evaluation in public health education and promotion are to: (1) determine the effectiveness of a given intervention and/or (2) assess
and improve the quality of the intervention. Through evaluation, we can identify our level of success in evoking desired outcomes and accomplishing desired objectives. This is accomplished by carefully formulating specific, measurable objective statements that enable evaluators to assess whether the intervention influenced intended indicators and/or whether the correct measures were used to gauge effectiveness. Determining the impact of our efforts has vast implications for the future of the intervention.
For example, through evaluation we are able to identify the essential elements of a given intervention (e.g., activities, content, resources, and structure), refine content and implementation strategies, and decide whether or not to invest more resources for scalability. High-quality evaluation is contingent upon the appropriateness of the design and selected measures for the questions being posed and the population being studied. Measurement is especially critical to evaluation
because it enables the evaluator to know if changes or improvements occur as a result of the intervention, and it provides testable evidence for participant progress and program success. Evaluation is a critical factor for demonstrating accountability to all stakeholders included in the intervention. More specifically, conducting an appropriate and rigorous evaluation shows that the evaluator is accountable to the audiences and communities they serve, the organization for which they work, the
funding agency supporting the project, and the greater field of public health. Evaluation serves many varied purposes in addition to providing accountability for the stakeholders. At the very core, evaluative efforts help determine if predetermined objectives related to behavior change or health improvement were achieved in the proposed health education or promotion initiative. Evaluation is also useful to improve elements surrounding program implementation (e.g., partnership
development, fidelity, effectiveness, and efficiency) and can increase the level of community support for a given intervention or initiative. Further, evaluation contributes to our knowledge about the determinants of health issues as well as the best and most appropriate public health interventions to address them. This knowledge is extremely valuable to guide future research and practice. Evaluation also informs policy decisions at the organizational, local, state, national, and international
level. The role of evaluation has evolved over time. There are many types of evaluation, which are primarily defined by their design and purpose (2). The selection of an evaluation design is dependent upon the initiative’s focus, health issue being targeted, audience, setting, and timeline. Efficacy research includes evaluation
performed under strict and regulated conditions, often in the form of randomized controlled trials (RCTs). This type of evaluation is beneficial to determine what types of interventions work, while controlling for confounders and external influences. Effectiveness research includes evaluation performed in less controlled situations. This type of evaluation is beneficial to determine whether the effects from RCTs can be replicated in ‘real-world’ settings and conditions, often on a larger scale.
Dissemination and implementation research typically includes evaluations performed in ‘real-world’ settings. This type of evaluation is beneficial to determine how to get interventions known to be effective into the hands of the people, organizations, and communities that need them most. Much of this translational and pragmatic research includes evaluation of participant recruitment and retention, organizational adoption, fidelity, partnership formation and collaboration, data collection processes,
scalability, and sustainability. There are many phases of evaluation, which are primarily defined by their purpose and timing in the initiative’s delivery (3–5). Formative evaluation typically occurs in the early stages of an initiative to ‘pilot test’ for the purposes of obtaining
feedback from involved parties, adjusting and enhancing the intervention components and content, and guiding the future directions of the initiative. Formative evaluation is most often concerned with feasibility and the appropriateness of materials and procedures. Formative evaluation permits preliminary testing and refinement of study hypotheses, data collection instruments, and statistical/analytical procedures. Generally, this form of evaluation occurs on a small scale to ensure unanticipated
problems (e.g., glitches, breakdowns, lengthy delays, and departures from the design) are identified and the intervention quality is improved before ‘going to scale’ (i.e., prior to allocating larger investments of time, effort, and resources). Process evaluation is a type of formative evaluation that focuses on the intervention itself (as opposed to the outcomes) and should occur throughout the ‘life’ of an initiative. This type of evaluation uses data to assess the delivery of
services and examine the nature and quality of processes and procedures. Process evaluation helps the evaluator to define the content, activities, and parameters of the initiative. It also addresses whether or not the intervention reached the intended audience, was appropriate for the audience, and was delivered as intended (including elements of fidelity and receipt of adequate intervention dose). Summative evaluation encompasses the overall merit of the intervention in terms of
immediate impact as well as intermediate- and long-term outcomes. In addition to the intervention’s effectiveness, this type of evaluation also encompasses process evaluation, considering that predicted outcomes and objectives can only be achieved if the intervention is delivered with fidelity, as intended. Recognizing the importance of evaluation, the Public Health Education
and Promotion section has created an Article Type dedicated to evaluation. Evaluation is a special niche of public health education and promotion that assesses interventions’ ability to change health-related knowledge, perceptions, behavior, and service/resource utilization. While many public health education and promotion evaluations examine program efficacy and effectiveness, the emergent emphasis on translational issues of program dissemination and implementation (e.g., participant and
delivery site recruitment and retention, fidelity, and maintenance/sustainability) requires the application of pragmatic research principles and methodologies (6, 7). Such translational evaluations address different research questions than traditional efficacy and effectiveness evaluations and are often
conducted under pragmatic research designs. Pragmatic designs also attempt to promote the translation between research and practice (8). Thus, articles written using these methodological techniques require tailored review criteria to determine their appropriateness for publication. Further, in public health, there are many types of innovations (e.g., trainings, courses, curricula,
health promotion programs, and environmental or policy change), and there are many ways to report the participants, procedures, and findings of these initiatives based on the data collection methodology and research design (e.g., CONSORT for randomized controlled trials, TREND for non-randomized evaluations, and STROBE for observational studies)
(9–11). Although these guidelines are useful for documenting the quality of an evaluation in terms of the appropriateness, sophistication, and replicability of the research design, they are not all-encompassing for innovations in public health education and promotion. As such, a general, expansive, and all-encompassing set of criteria is needed to assess evaluation-related manuscripts submitted to the Public Health Education and Promotion section and to ensure published manuscripts are rigorous, timely, relevant, and responsive to public health needs. Public Health Education and Promotion will accept a broad spectrum of articles that evaluate programs, courses, curricula, teaching
methods, and other pedagogical elements as well as public health innovations at the organizational, environmental, or policy levels relevant to our mission. Such translational research articles will require a sufficient description of the program logistics, procedures, and participants/sample. Additionally, submissions will require a Discussion section that shares practical implications, lessons learned for future applications of the program, and acknowledgment of any methodological constraints.
Articles should not exceed 6,000 words and include a maximum of five tables/graphs. Details about the Evaluation Article Type can be found online (http://www.frontiersin.org/Public_Health_Education_and_Promotion/articletype). A detailed description of the criteria used by Review Editors during the peer-review process is available as a separate file. While this information is obviously
beneficial for Review Editors, we hope it will be consulted by authors prior to submitting evaluation-related manuscripts to the Public Health Education and Promotion section.

Conflict of interest statement: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Indicate what this article evaluates:
__ an educational or training program or intervention
__ a course
__ a curriculum
__ a teaching method
__ multiple pedagogical facets
__ a health promotion program
__ environmental, technological, or policy change (natural or planned)
__ none of the above (i.e., inappropriately categorized for submission as an Evaluation article)
__ other. Please specify: ________________________________________________

Indicate the target audience
Note: In the following questions, “intervention/program” is used to encompass the innovation (subject of the targeted effort, whether it is a course, curriculum, pilot project, program, etc.) being evaluated. Ratings:
Mandatory Sections:
Abstract:
Introduction:
Background and Rationale:
Methods:
Results:
Discussion:
Conclusion:
References:
Article Length:
Language and Grammar:
Other Comments:
References
1. Springett J. Issues in participatory evaluation. In: Minkler M, Wallerstein N, editors. Community Based Participatory Research for Health. New York: Jossey-Bass; 2003. p. 263–86.
2. Flay BR, Biglan A, Boruch RF, Castro FG, Gottfredson D, Kellam S, et al. Standards of evidence: criteria for efficacy, effectiveness and dissemination. Prev Sci. 2005;6(3):151–75. doi:10.1007/s11121-005-5553-y
3. McKenzie JF, Neiger BL, Thackeray R. Planning, Implementing, & Evaluating Health Promotion Programs: A Primer. 5th ed. San Francisco: Benjamin Cummings; 2009. 464 p.
4. Royse D, Thyer B, Padgett D. Program Evaluation: An Introduction. Belmont, CA: Cengage Learning; 2009. 416 p.
5. Windsor RA, Baranowski T, Clark N, Cutter G. Evaluation of health promotion and education programs. J Sch Health. 1984;54(8):318. doi:10.1111/j.1746-1561.1984.tb08946.x
6. Glasgow RE. What does it mean to be pragmatic? Pragmatic methods, measures, and models to facilitate research translation. Health Educ Behav. 2013;40(3):257–65. doi:10.1177/1090198113486805
7. Glasgow RE, Chambers D. Developing robust, sustainable, implementation systems using rigorous, rapid and relevant science. Clin Transl Sci. 2012;5(1):48–55. doi:10.1111/j.1752-8062.2011.00383.x
8. Glasgow RE, Lichtenstein E, Marcus AC. Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. Am J Public Health. 2003;93(8):1261–7. doi:10.2105/AJPH.93.8.1261
9. Des Jarlais DC, Lyles C, Crepaz N. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. Am J Public Health. 2004;94(3):361–6. doi:10.2105/AJPH.94.3.361
10. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357(9263):1191–4. doi:10.1016/S0140-6736(00)04337-3
11. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP. The strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. Prev Med. 2007;45(4):247–51. doi:10.1016/j.ypmed.2007.08.012