Viadro, Earp, and Altpeter (1997) evaluated the NC-BCSP, a breast cancer screening program for African American women spanning 2,500 square miles across five rural North Carolina counties, over an eight-year period in the 1990s. The goals of the NC-BCSP were to “build on community networks, enhance the delivery of local screening services, and reduce health system barriers in widely dispersed and resource-poor communities” (p. 237, para. 1).
The program and its evaluation exhibit areas where improved cultural competency, safeguarding of data, and effective benchmarks for success are possible; some improvements were achieved while the intervention was still being evaluated. Beyond those achieved improvements and the process evaluation itself, the program illustrates areas where a better understanding of the risk factors linked to breast cancer would benefit the priority population and researchers alike.
Because healthcare, community awareness, and social accountability standards have changed in the ensuing decade and a half, current approaches are here applied retroactively: the development and evaluation of health intervention and community awareness programs are considered against this scenario, alongside the most current protocols for community health needs assessment. Concurrent with the health program analysis, project management and documentation design considerations are evaluated.
Analysis of Viadro, Earp & Altpeter’s
Designing a Process Evaluation for a Comprehensive Breast Cancer Screening Intervention: Challenges & Opportunities
Viadro et al. (1997) indicate the programmatic capacity, goals, and evaluated outcomes of the NC-BCSP for the period of the study. The program sought to reduce African American women’s barriers to care while increasing local screening, by building relationships and networks across a five-county (2,500 square mile) region of rural North Carolina. Because the process evaluation conducted by Viadro et al. predates current national mandates for community health needs assessment and program evaluation, a post-mortem assessment is considered here.
Program planning, implementation, and evaluation in the healthcare field converge with the technical writing principles of project management and documentation design. Current IRS regulations require county health departments and not-for-profit hospitals to conduct community health needs assessments every five and three years, respectively (§501(r), Internal Revenue Code).
Further, the programs developed from such assessments, and the outcomes associated with them, must be disseminated widely to the public. At this point the Internal Revenue Service uses these data for informational purposes only. However, coupled with the regulations of SOX (Gertner, 2006), the new section leads field experts such as Preston Quesenberry to speculate that these data will ultimately be used to drive future regulatory patterns for determining facilities’ legitimacy as not-for-profit entities (CHAUSA, 2012). Key to this element is not so much the IRS’s intent (or imagined intent) as the need-based program structure now mandated: not-for-profits and health-oversight entities must determine community health needs via assessment, determine which of those needs can be met (and are not already being met), develop programs matching needs with resources, establish measures of success, and report outcomes directly linked to those programs.
Considerations of Program Planning & Evaluation
The American Hospital Association (2013) provides the following items for program planners to consider when developing both programs AND the evaluations of those programs, because effective program development includes concurrent evaluation development (Posavac, 2011; McKenzie, Neiger, & Thackeray, 2009):
1. We need to understand the community or situation better.
2. We don’t understand the problem or goal.
3. We don’t know what to do to solve the problem.
4. There is no clear direction or communication within the group.
5. There is not enough community participation.
6. There is not enough leadership.
7. We are facing opposition or conflict.
8. There is not enough action to promote change.
9. There is not enough change in the community or system.
10. We don’t know how to evaluate our program or initiative.
11. There is not enough improvement in outcomes.
12. There are unintended or unwanted outcomes.
13. Not enough money to sustain the program or initiative.
14. We need to assure better conditions for implementation.
Figure 1. University of Kansas Workgroup via American Hospital Association (2013), The Community Toolbox: Trouble Shooting Guide for Solving Problems: Common Problems, Reflection Questions, and Links to Support Tools.
Breast Cancer Risk Factors – General
Since Viadro et al. (1997) completed their study, the risk factors associated with breast cancer have been more clearly defined. On average, women in the US have a 12% chance of developing breast cancer (Fletcher, 2009 & Mahoney et al, 2008 as referenced in Saria, 2011, p. 361). While gender and age are the primary risk factors identified with breast cancer (Saria, 2011), providing women with knowledge of the other risk factors becomes essential to early detection. Researchers have identified major and minor risk factors, the differentiating criterion being that a major risk factor at least doubles the likelihood of developing cancer, whereas a minor risk factor causes less than a doubling of that likelihood.
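The major/minor distinction described above reduces to a simple relative-risk threshold. The following sketch illustrates the classification rule only; the factor names and relative-risk values are hypothetical placeholders, not figures from the cited studies:

```python
# Classify risk factors as "major" or "minor" using the criterion
# described above: a major risk factor at least doubles the baseline
# likelihood of developing cancer (relative risk >= 2.0).
# All factor names and relative-risk values below are illustrative
# placeholders, NOT data from the cited studies.

def classify_risk_factor(relative_risk: float) -> str:
    """Return 'major' if the factor at least doubles risk, else 'minor'."""
    return "major" if relative_risk >= 2.0 else "minor"

hypothetical_factors = {
    "factor A": 2.5,   # more than doubles baseline risk -> major
    "factor B": 1.4,   # less than doubles baseline risk -> minor
    "factor C": 2.0,   # exactly doubles -> major by the stated criterion
}

for name, rr in hypothetical_factors.items():
    print(f"{name}: RR = {rr} -> {classify_risk_factor(rr)}")
```

The threshold comparison, not the placeholder values, is the point: any factor’s published relative risk can be slotted into the same rule.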
Figure 2. The relationship between base risk and major/minor risk factors as defined by Joy, Penoet, & Petitti, 2005, and Schwartz et al., 2008, as referenced in Snyder & Crihfield, 2011.
This relationship between risk factors is especially alarming on a larger scale as researchers indicate that the lifetime risk of a woman in the US developing cancer is in excess of 33% (American Cancer Society, 2011 as referenced in Kessler, 2012) and “a woman’s risk (of breast cancer) is closely linked to a variety of modifiable and nonmodifiable factors such as age, race or ethnicity, family history, postmenopausal obesity, physical inactivity, and alcohol consumption” (Kessler, 2012).
Beyond the doubling-of-risk criterion illustrated above, the major and minor risk factors associated with breast cancer are listed below; note, however, that despite risk factor analysis, “the majority of breast cancers are sporadic” (Katapodi & Aouizerat, 2005, as referenced in Snyder & Crihfield, 2011).
Figure 3. Major and minor risk factors as identified by (major) Boyd et al., 2007; Edwards et al., 2009; Joy et al., 2005; Mahoney et al., 2008; McKian et al., 2009; Palomares, Machia, Lehman, Daling, & McTiernan, 2006; Schwartz et al., 2008; Travis et al., 2005, as referenced by Snyder & Crihfield, 2011; and (minor) Gail et al., 1989; Joy et al., 2005; Schwartz et al., 2008.
(Breast cancer awareness stamp image from US Postal Service)
Other Concurrent Efforts
The Journal of Women’s Health (Benard et al., 2011) reports that “in the United States, low-income women have poorer breast and cervical cancer survival and mortality outcomes compared with women with higher incomes” (p. 1479, para. 1). It further reports that in the 20 years preceding the study, nearly 45,000 cases of breast cancer were diagnosed through 3.7 million screenings funded by the National Breast and Cervical Cancer Early Detection Program (NBCCEDP). The dates of this program and the NC-BCSP as evaluated by Viadro et al. (1997) coincide. It is curious that the NBCCEDP was not the source of funding or data for the NC-BCSP, particularly since NBCCEDP participating physicians are “significantly more likely to practice … in a rural location” and “significantly more likely to be female … obstetrician/gynecologists” (Benard et al., 2011, p. 1481, para. 5 & 4).
Benard et al. (2011) go on to state: “Our finding that more program physicians practice in rural settings … reflects successful outreach to providers who serve women living in rural areas, a demographic group with high rates of both breast … cancer and poor access to care” (p. 1481, para. 11). Because the NC-BCSP was purposely geared toward breast cancer screenings, not breast and cervical cancer, it is challenging to compare the two data sets empirically. On the surface, they seem to indicate that together the programs reduced barriers to care and improved clinical outcomes for rural women experiencing breast cancer. However, the overall likelihood of breast cancer in the priority population identified by the NC-BCSP may not have differed significantly from baseline.
Since the Viadro et al. (1997) study was completed, the National Cancer Institute has made publicly available the Breast Cancer Risk Assessment Tool (cancer.gov/bcrisktool), which enables a woman at or over age 35 (or her health care provider) to assess risk based on several of the factors above, as follows:
Figure 4. BCRAT Input Screen (NCI, 2013) with the analyst’s data input.
These inputs generate the following results:
Figure 5. BCRAT Output based on Figure 4 Inputs from the Analyst.
As these figures show, the analyst has a slightly below-average risk of developing invasive breast cancer before age 90 compared with other women of the same age and ethnicity. To better gauge the efficacy of the Viadro et al. (1997) study, the analyst changed the age to 65 and the ethnicity to African American, leaving all other criteria as “unknown”; the result was less than a 5% chance of developing breast cancer in the woman’s lifetime, and less than 2% in the next 5 years, as illustrated below:
Figure 6. BCRAT Output based on Inputs from the Viadro et al (1997) study.
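The gap between the two BCRAT runs can be reduced to simple arithmetic. A minimal sketch, using only the rounded figures quoted in the text (the ~12% average US lifetime risk and the sub-5% lifetime risk from the Figure 6 profile), not exact tool outputs:

```python
# Compare two lifetime-risk estimates as quoted in the text above.
# These are rounded values from the narrative, NOT exact BCRAT outputs.

average_lifetime_risk = 0.12   # ~12% average US lifetime risk
profile_lifetime_risk = 0.05   # "less than a 5% chance" (Figure 6 profile)

ratio = profile_lifetime_risk / average_lifetime_risk  # ~0.42
reduction = 1 - ratio                                  # ~0.58

print(f"Risk ratio vs. average: {ratio:.2f}")
print(f"Relative reduction: {reduction:.0%}")
```

Because 5% is an upper bound in the text, the actual reduction is at least ~58%, consistent with the “nearly two-thirds less likely” comparison drawn later in this analysis.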
How is North Carolina Doing?
The current State Cancer Profiles table (2013) prepared by the National Cancer Institute shows that 77.2% of women of all races/ethnic groups over age 40 in North Carolina have had a mammogram in the past 2 years. This places North Carolina’s screening rate almost 2% above the national average and 18th best nationwide. These data suggest that large-scale education and intervention programming is working. However, women’s health education programs targeted to African American women continue to be a need (Ochoa et al., 2012).
Considering the risk calculated above for a 65-year-old African American woman (particularly when compared to a 35-year-old Caucasian woman) and North Carolina’s successful mammography utilization and access rates, the question becomes: what needs were met through the efforts Viadro et al. (1997) describe in improving outreach to older African American women (p. 238) by the “Lay Health Advisors” (p. 238) who discussed breast cancer risk factors and the availability of detection/screening programs?
Yes, the program met its stated purposes to “build on community networks, enhance the delivery of local screening services, and reduce health system barriers in widely dispersed and resource-poor communities” (p. 237, para. 1). But here are the issues the program would encounter 15 years later under current Community Benefit / Community Health Needs Assessment program and evaluation protocols:
- Is the program duplicated elsewhere in the community/priority population (e.g., by the NBCCEDP)?
- Given the relatively low likelihood of a member of the priority population being diagnosed with breast cancer in the next 5 years or in her lifetime (see Figure 6), was breast cancer screening truly a “need” for the priority population?
- Were those perceived to be “in need” included in the program development and its evaluation?
- It’s both fair and logical to involve those who are most directly affected by adverse conditions. They know best what effects those conditions have on their lives, and including them in the planning process is more likely to produce a plan that actually speaks to their needs. (University of Kansas, 2013)
- Did the stated goals (and their evaluation) match with the real goals of the project?
Health Literacy – Cultural Competency – The Key to Reducing Morbidity
Despite the program’s shortcomings relative to contemporary evaluation and regulatory benchmarks, it does highlight a concern that is not quite verbalized in the Viadro et al. (1997) study. The authors refer to “lay health advisors” (LHAs) throughout the text: individuals who meet the cultural competency goal of being gatekeepers (McKenzie, Neiger, & Thackeray, 2009) and possibly fill the role of early adopters (McKenzie, Neiger, & Thackeray, 2009) if they are members of the priority population. Ochoa et al. (2012) looked for a correlation between the cervical and breast cancer screening rates of African American women and their health beliefs, and then sought to evaluate whether culturally competent education could improve screening rates.
Ochoa et al (2012) demonstrate that there is little to no correlation between an African American woman’s concern about cancer and her adherence to the medical community’s guidelines about monthly self-exams and screening mammograms at appropriate ages. Further, the study demonstrates no correlation between adherence to screening guidelines and religious beliefs – even if the woman held religious beliefs contrary to the guidelines.
By considering the impacts of the Witness Project of Harlem (WPH) education sessions, which provided “a culturally sensitive, faith-based breast and cervical cancer screening program targeting African American women in medically underserved New York City communities” (Ochoa et al., 2012, p. 447, para. 1), researchers came to understand that making room for discussion of cancer screening, and of its importance to each individual woman as part of who she is and how she lives her life, is essential. Such discussion counteracts the potential for members of the priority population to “eschew taking control of their medical care in favor of faith in the ability of God to handle medical problems” (Holt, Clark, Kreuter & Rubio, 2003; Matthews et al., 2002, as referenced in Ochoa et al., 2012, p. 448, para. 6).
Most significantly, however, Ochoa et al. (2012) bring to light a point that the other researchers do not: “both (cervical & breast) cancers have disproportionately higher mortality rates in African American women than in White (sic) women. During 2000 to 2003, African American women had a 36% higher breast cancer mortality rate than White women” (p. 447, para. 2). Considering the comparative BCRAT data in Figures 4 and 6 above, this disproportionate mortality rate is magnified because the African American woman should be nearly two-thirds less likely than the Caucasian woman to be diagnosed with invasive breast cancer in her lifetime.
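The magnification described above can be made concrete with back-of-the-envelope arithmetic. A hedged sketch, combining the 36% mortality disparity quoted from Ochoa et al. (2012) with the rounded lifetime-risk figures quoted earlier in this analysis (not exact BCRAT outputs):

```python
# Quantify how far the mortality disparity outpaces the incidence
# disparity described in the text. All inputs are rounded figures
# quoted in the narrative, NOT exact study or BCRAT outputs.

mortality_ratio = 1.36          # 36% higher mortality (Ochoa et al., 2012)
incidence_ratio = 0.05 / 0.12   # sub-5% vs. ~12% lifetime risk (approx.)

# If mortality were proportional to incidence, this ratio would be ~1.
# A value well above 1 signals that a diagnosis is proportionally far
# deadlier for the lower-incidence group.
disproportion = mortality_ratio / incidence_ratio

print(f"Mortality disparity outpaces incidence by ~{disproportion:.1f}x")
```

Under these rough inputs, the disparity is on the order of threefold, which is the sense in which the text calls the mortality gap “magnified.”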
What Ochoa et al. (2012) and Viadro et al. (1997) show, then, is not the impact of community-based programs on the likelihood of women HAVING cancer, but that by opening up the conversation, whether through WPH or through the LHAs, reductions in morbidity and mortality become possible. When prevention and early detection become part of an ongoing conversation, individuals are empowered to take control of preventative medical care and diagnostic testing. Through dialogue facilitated by culturally competent program leaders, the gap between access to care and utilization of resources can be bridged, and prognoses may improve markedly. As a result of these programmatic evaluations and understandings, researchers and professionals who evaluate community health needs and craft programs to meet those needs have both the opportunity and the obligation to apply the lessons of cultural competency, program efficacy, and optimum evaluation.
By learning from Viadro et al. (1997) and Ochoa et al. (2012), all healthcare professionals can find the key to unlocking conversations about taboo or culturally controversial topics such as women’s healthcare. The key is finding the right person (or group) to bring the message, and crafting the reporting of program outcomes and evaluations to reflect the right elements of success. The goal of a women’s cancer outreach program to medically disadvantaged or geographically disenfranchised African American women would then not be “to reduce the incidence of cancer by X% in Y years” but rather “to reach X% of the priority population in Y geographic area through culturally appropriate education programs.” The evaluated outcome would then be the education program presenters reporting the number of first-time and repeat attendees at each instance of the program; measuring the number of attendees could be as simple as having a spotter report the number of empty chairs in the room.
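The reach-based success measure proposed above could be tracked with a very small tally. A minimal sketch of that bookkeeping, in which every name and number is hypothetical:

```python
# Track the proposed outcome measure: the percentage of a priority
# population reached through education sessions, tallied from
# first-time vs. repeat attendees. All figures are hypothetical.

priority_population = 1200        # estimated women in the target area
sessions = [
    {"first_time": 40, "repeat": 5},
    {"first_time": 25, "repeat": 18},
    {"first_time": 30, "repeat": 22},
]

# First-time attendees approximate unique women reached; repeat
# attendance signals the ongoing conversation the text calls for.
unique_reached = sum(s["first_time"] for s in sessions)
total_attendance = sum(s["first_time"] + s["repeat"] for s in sessions)
reach = unique_reached / priority_population

print(f"Unique women reached: {unique_reached} ({reach:.0%} of population)")
print(f"Total attendance across sessions: {total_attendance}")
```

Even a spotter’s empty-chair counts, subtracted from room capacity, could feed the same tally, which is exactly the low-overhead evaluation the text proposes.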
Because times, the social discourse around women’s cancers, and healthcare program reporting have all changed dramatically since Viadro et al. (1997) studied the NC-BCSP, it would be unfair to judge their study by today’s standards. However, having access to the study and its contents provides a tremendous opportunity for current professionals in healthcare discourse and in project management/documentation to fully understand the why and how of reporting the real outcomes of health intervention programs (and their evaluations). The audience needs to understand the true “key” of the discussion, even if that audience is a student 15 years later.
Barnum, C.M. (2002). Usability Testing & Research. Longman. San Francisco.
Benard, V.B., Saraiya, M.S., Soman, A., Roland, K.B., Yabroff, K.R., & Miller, J. (2011).
Cancer Screening Practices Among Physicians in the National Breast & Cervical Cancer
Early Detection Program. Journal of Women’s Health, 20(10).
Catholic Health Association of the United States (CHAUSA) (2012). CHA/VHA, Inc. Webinar:
Notice of Proposed Rule Making on Additional Requirements for Charitable Hospitals.
September 20, 2012. PDF available at:
1209120_ProposedRulemakingPresentation
Gertner, R. (2006). “Non-profit hospitals take action to comply with Sarbanes-Oxley.” Daily
Record and the Kansas City Daily News Press. As referenced by the Healthcare Financial Management Association in the 2013 Healthcare Finance Core Curriculum.
Kessler, T.A. (2012). Increasing Mammography & Cervical Cancer Knowledge & Screening
Behaviors with an Educational Program. Oncology Nursing Forum. 39(1). January 2012.
Oncology Nursing Society.
McKenzie, J.F., Neiger, B.L., & Thackeray, R. (2009). Planning, Implementing & Evaluating
Health Promotion Programs: A Primer. 5e. Pearson: Benjamin Cummings. San Francisco.
National Cancer Institute (2013). Breast Cancer Risk Assessment Tool. Cancer.gov/bcrisktool.
National Cancer Institute (2013). State Cancer Profiles: Dynamic views of cancer statistics for
prioritizing cancer control efforts in the nation, states, and counties.
Ochoa-Frongia, L., Thompson, H.S., Lewis-Kelly, Y., Deans-McFarlane, T., & Jandorf, L.
(2012). Breast & Cervical Cancer Screening and Health Beliefs Among African
American Women Attending Educational Programs. Health Promotion Practice. 13(4).
Posavac, E.J. (2011). Program Evaluation: Methods & Case Studies. 8e. Prentice Hall. San
Shows, J. & Wu, D. (2011). Inferences for the Lead Time in Breast Cancer Screening Trials
Under a Stable Disease Model. Cancers. 2011(3). Pp. 2131-2140.
Snyder, C. & Crihfield, P.E. (2011). Evidence Based Practice: Performing Breast Cancer Risk
Assessments in a Community Setting. Clinical Journal of Oncology Nursing, 15(4), pp. 361-364.
University of Kansas (2013). The Community Toolbox: Trouble Shooting Guide for Solving
Problems: Common Problems, Reflection Questions, and Links to Support Tools.
Accessed through the American Hospital Association’s Association for Community
Health Improvement members’ portal (public link http://ctb.ku.edu/en/solveproblem/index.aspx)
Viadro, C., Earp, J., & Altpeter, M. (1997). Designing a Process Evaluation for a Comprehensive
Breast Cancer Screening Intervention: Challenges & Opportunities. Evaluation and
Program Planning, 20(3), pp. 237-249.