Prosper Data Sharing for PSU Investigators - Appendix C

Appendix C

Guidelines on Common Content and “Checklist”
for Authors of PROSPER Papers and Presentations

This list addresses commonly occurring content in PROSPER papers and presentations. Before submitting your proposal, paper, or presentation for EC review, please ensure that you have addressed each of the relevant issues discussed below, and reference your review of this appendix in your cover note. Proposals, papers, and presentations submitted for review with content inconsistent with the guidelines below will be returned to the author for revision.

  1. FUNDER CREDITS and Acknowledgements
    • First and foremost, as referenced earlier in these guidelines, please remember to acknowledge the PROSPER “parent” grants (NIDA and NIAAA) in all papers/presentations, along with additional funding information, as indicated (e.g., grants supporting additional data collection and/or analyses). A list of relevant grants is provided in Appendix D.
      • The primary data source and the original grant should be included in this acknowledgement, with suggested wording similar to the following examples.
      • An example of the credit statement for a paper supported only by the original parent or core project grants (including the competing continuations) is as follows:
        • Work on this paper was supported by the National Institute on Drug Abuse (grant R01 DA013709) and co-funding from the National Institute on Alcohol Abuse and Alcoholism.
      • An example of a paper supported by both the original PROSPER grants and other grants (including the PROSPER-affiliated grants noted above):
        • Grants from the W. T. Grant Foundation (8316) and National Institute on Drug Abuse (R01 DA018225) supported this research. The analyses used data from PROSPER, funded by grant R01 DA013709 from the National Institute on Drug Abuse, and co-funded by the National Institute on Alcohol Abuse and Alcoholism.
  2. Features of the PROSPER MODEL—It is important that readers see consistent descriptions of the PROSPER model across publications/presentations. Please consider the following:
    • PROSPER is a delivery system (or model) for evidence-based interventions, not a school-based or other type of program.
      • Sample text: A three-component community-university partnership model guided the implementation of evidence-based interventions (EBIs). The three components of the PROSPER model are local community teams, state-level university researchers, and a Prevention Coordinator team in the land grant university Extension System. Prevention Coordinators served as liaisons between the community-based teams and university researchers, providing continuous, proactive technical assistance to the community teams. Community teams were comprised of an Extension staff team leader, a public school representative co-leader, and representatives of local human service agencies, along with other local community stakeholders (e.g., youth and parents). Community teams selected and oversaw the implementation of one family and one school evidence-based intervention in their communities.
    • The PROSPER delivery system supported the implementation of both a family-focused and a school-based preventive intervention.
      • Sample text: Teams selected a sequence of two interventions from a menu of evidence-based programs to be delivered via the PROSPER model, beginning with a family-focused intervention for 6th graders and their parents, and a school-based intervention implemented for the same cohort of youth in 7th grade. As part of the trial, communities administered these programs for two successive cohorts of 6th graders.
      • Additional description: For their family-focused intervention, all 14 community teams selected the Strengthening Families Program: For Parents and Youth 10-14 (SFP 10-14). For their school-based intervention, four community teams selected Life Skills Training, four selected Project Alert, and six selected the All Stars curriculum. Brief information on each program follows (for more detail, see Spoth et al., 2007).
  3. MEASURES—Use consistent labels for the same measures across publications and presentations, and cross-reference previously published measures appropriately. Please consider the following:
    • When possible, use measures that have been described in the PROSPER Tech Report or in other PROSPER publications.
      • If you need to create a new measure for your paper/presentation, describe it in your proposal (or amend your proposal if the need arises later), along with its rationale.
      • If you wish to use a measure that is different from, but similar to, a Tech Report measure, explain in your proposal why the Tech Report measure is unsuitable.
      • If you use a measure from the Tech Report, but wish to use a different label, reference the original label in your paper/presentation.
  4. DESIGN and METHODS—Readers should see consistent descriptions of the PROSPER study design and research method detail across publications and presentations. Please consider the following:
    • Check reported Ns versus CONSORT figures (in-school assessment–Figure 1, and in-home assessment–Figure 2).
      • If the relevant sample size for the paper/presentation differs from the CONSORT figure (see Figures 1 and 2), make sure your paper explains which subgroup of participants was included in your analyses and why (e.g., “Only those who completed certain activities, or were in the sample at Wave X, etc., are included in the analysis”).
      • Please note that the N for the 10th grade follow-up assessment reported in Figure 1 has been updated from that reported in earlier CONSORT figures, after a survey scanning error was uncovered and corrected. The correction had no implications for the analyses or the accuracy of the findings reported previously.
    • The PROSPER project began with two cohorts of 6th grade students.
      • Sample text: The project recruited 28 school districts from Iowa and Pennsylvania and used a cohort sequential design involving two cohorts (designated Cohort 1 and Cohort 2). Initial eligibility criteria for communities in the study were (1) school district enrollment between 1300 and 5200 students, and (2) at least 15% of students eligible for free or reduced-cost school lunches. Communities were blocked on school district size (enrollment) and geographic location, and then they were randomly assigned to the partnership intervention and “normal programming” comparison conditions.
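The blocking and random-assignment step described above can be sketched roughly as follows. This is an illustrative sketch only, not the actual assignment procedure or data from the trial; the field names (`size_stratum`, `location`, `id`) are hypothetical.

```python
import random
from collections import defaultdict

def assign_conditions(communities, seed=0):
    """Block communities on size stratum and geographic location, then
    randomly assign members of each block to intervention vs. comparison.
    Field names here are hypothetical, not PROSPER variable names."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    blocks = defaultdict(list)
    for c in communities:
        blocks[(c["size_stratum"], c["location"])].append(c)
    assignment = {}
    for members in blocks.values():
        rng.shuffle(members)      # random order within the block
        half = len(members) // 2  # split the block roughly in half
        for i, c in enumerate(members):
            assignment[c["id"]] = "intervention" if i < half else "comparison"
    return assignment
```

Blocking before randomization, as described above, helps balance the two conditions on district size and geography.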
    • Participation in the in-school assessments
      • All students in the targeted grade-level were eligible to participate in the in-class assessments (including students new to the school/district; the in-school sample was not precisely a panel sample).
      • Sample text: In-class pretest self-report assessments were conducted with 6th graders during the fall semesters of 2002 (for Cohort 1) and 2003 (for Cohort 2). Baseline assessments were completed with 10,849 6th graders. Follow-up assessments through high school were conducted annually in the spring of 6th to 12th grades (i.e., from 0.5 years to 6.5 years past baseline). On average, 88% of all eligible students completed surveys across the eight data points, with slightly higher rates of participation at earlier data collection points.
    • Participation in the in-home assessments
      • In-home family assessments were conducted with randomly selected and recruited families of 6th graders from the second cohort of students (participation in the in-class student assessments was not a pre-requisite for participation in the in-home assessment).
      • Of the 2,267 families recruited for in-home family assessments, 980 (43%) completed at least one in-home assessment. Only 977 completed Wave 1, however; 3 additional families first completed an assessment at Wave 2.
      • The in-home sample constituted a panel—pretested families were followed even if they left their original school district.
    • Young adult follow-up assessments
      • A total of 1,985 emerging adults participated in the age 19 assessment. At age 19, participants could complete assessments either through a computer-assisted telephone interview or a web-based survey (16% and 84%, respectively).
      • The young adult sample was randomly selected and recruited from those students completing the in-class 6th grade pretest and still enrolled in the same school district in the 9th grade.
      • Higher-risk students (based on 6th grade data) were over-sampled for the young adult follow-ups.
      • A total of 1,628 emerging adults participated in the age 22 assessment, from among the 1,985 participating at age 19.
    • Defining the high-risk subsample for the young adult data collection
      • Sample text: Risk status was based on how many of the following five factors were reported by participants at baseline: lifetime gateway substance use (alcohol, cigarettes, or marijuana); conduct problem behaviors (at least 2 of 12); eligibility for the free/reduced cost school lunch program; lower family cohesion (dichotomized); and living with only one or no biological parents. Students were classified as higher-risk if they reported (a) any three or more of the five risk factors or (b) two risk factors, if gateway substance use and/or conduct problems were among the two. In order to have sufficient statistical power to investigate risk-related moderation effects when applying multilevel analytic models, higher-risk participants were oversampled.
    • Additional detail: Higher-risk participants were oversampled for the age 19 survey in order to improve statistical power to investigate risk-related moderation effects when applying multilevel analytic models. They comprised 37.4% of the age 19 participating subsample, versus 29.2% of the eligible sample.
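The higher-risk classification rule in the sample text above can be expressed as a short sketch. The factor names below are hypothetical labels for the five baseline risk factors, not actual PROSPER variable names.

```python
# Hypothetical labels for the five baseline risk factors described above.
FACTORS = ["gateway_use", "conduct_problems", "free_lunch",
           "low_cohesion", "non_intact_family"]

def is_higher_risk(factors):
    """Classify a participant as higher-risk if (a) three or more of the
    five factors are present, or (b) exactly two are present and gateway
    substance use and/or conduct problems is among them."""
    count = sum(bool(factors[k]) for k in FACTORS)
    if count >= 3:
        return True
    return count == 2 and bool(factors["gateway_use"] or factors["conduct_problems"])
```

For example, a participant reporting only free/reduced-cost lunch eligibility and lower family cohesion (two factors, neither a substance-use or conduct factor) would not be classified as higher-risk, while one reporting gateway substance use plus any one other factor would be.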
    • Controlling for intervention effects
      • Considering PROSPER’s experimental design, in cases in which the research question does not entail tests of intervention effects, clarify how analytic procedures will address the possible influence of intervention effects.
    • PROSPER Study Design
      • There were four design factors at the time the study began: Condition, State, Cohort, and Block.
      • As noted above, Risk was incorporated into the design for the young adult follow-up assessments.
  5. PRIOR PROSPER PUBLICATIONS—Finally, it also is important to consider prior PROSPER publications (now well over 100) that address relevant substantive topics and methodological strategies and issues. A current PROSPER publication list will be available upon request from Sherry Robison.

In-School Survey Total Participation by Wave, Grades 6 to 12 Plus Two Follow-Ups

* Reported participation rates include all students in both study cohorts completing the in-school survey at the indicated wave through 12th grade. All students enrolled in the project school districts and in the targeted grades were eligible for participation, regardless of their participation in earlier survey waves; as a result, participation at later waves may exceed participation at pretest.

1 Emerging adult assessments were conducted with randomly selected participants from the in-school assessment sample who completed the 6th grade pretest assessment and were still enrolled in their baseline school district in the 9th grade.

In-Home Survey Total Participation by Wave, Grades 6 to 9

*Only families of students who participated in the 6th grade pretest were considered eligible for follow-up data collections. However, 2 students/families began participation at the posttest and 1 first participated at 9th grade; all 3 were in the control condition (randomized prior to the 6th grade assessment). For this figure, 3 students/families first assessed at baseline were eliminated due to non-identifiable treatment condition (e.g., moved from a school district in one study condition to a district in the other condition).

 
