NCATE Institutional Report

The Director of Assessment and the Data Management Coordinator are responsible for data analysis at the unit level.  They also analyze and report unit data at the program level.  However, users of the Initial and Advanced Programs Assessment Systems are not limited to official assessment staff; faculty are key users of assessment system data as well.  Assessment staff work with faculty, as needed, to assist them in understanding and analyzing programmatic data, focusing on the following questions:

  • What do you observe?
  • What patterns do you notice (e.g., areas of strength or areas of need)?
  • What does this tell you?
  • What are the questions this raises?
  • What are some possible implications or areas for further investigation?
  • What next steps do we need to take?  (Smith & Miller, 2003, p. 43)

While the overall picture is quite positive, it should be noted that the transition from paper to various modes of electronic data collection, together with the ongoing revision of the Initial and Advanced Programs Assessment Systems, poses some temporary challenges to efficient data collection and analysis. As the unit has shifted from paper to electronic data collection, the Data Management Coordinator and Director of Assessment have struggled to merge data files in multiple formats and from diverse sources (e.g., PeopleSoft, Chalk & Wire, SurveyGizmo).  While this task is not inordinately difficult, it is a time-consuming and painstaking process.
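A minimal sketch of how such heterogeneous exports might be combined is shown below, assuming the files have already been exported to CSV.  The file names, column labels, and candidate identifier are hypothetical and would need to be adapted to the actual exports from PeopleSoft, Chalk & Wire, and SurveyGizmo.

```python
import pandas as pd

# Hypothetical CSV exports; actual file names and layouts differ by source system.
peoplesoft = pd.read_csv("peoplesoft_export.csv")       # e.g., enrollment and demographic data
chalk_wire = pd.read_csv("chalk_and_wire_scores.csv")   # e.g., rubric scores from portfolios
survey = pd.read_csv("surveygizmo_responses.csv")       # e.g., exit survey responses

# Standardize the candidate identifier so the files can be joined
# (the source column names shown here are assumptions).
peoplesoft = peoplesoft.rename(columns={"EMPLID": "candidate_id"})
chalk_wire = chalk_wire.rename(columns={"Student ID": "candidate_id"})
survey = survey.rename(columns={"id_number": "candidate_id"})

# Merge on the shared identifier, keeping every candidate that appears in any source.
merged = (
    peoplesoft
    .merge(chalk_wire, on="candidate_id", how="outer")
    .merge(survey, on="candidate_id", how="outer")
)

merged.to_csv("merged_assessment_data.csv", index=False)
```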

Additionally, as early adopters have piloted FSEHD's revised assessments, other faculty members have continued to use the “old” assessments (until a specified cutoff point).  This has necessitated “double” data collection, analysis, and reporting for a number of semesters as FSEHD reports on results from both old and new assessments.  Once all users are administering the same assessments via a single mode, these challenges will be eliminated.

Data Evaluation

Day-to-Day Monitoring

The Data Management Coordinator is responsible for identifying day-to-day problems with data collection (e.g., missing data or evident problems with rubrics) that might influence data quality or suggest possible improvements to the assessment and data collection systems.  Furthermore, all new electronic assessments are tested by multiple users before they are launched.  In the event that users notice a glitch in a launched electronic assessment, the issue is addressed within one day of being reported.
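For illustration only, a routine missing-data check of this kind might look like the following sketch in Python; the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical merged assessment file; real column names will differ.
scores = pd.read_csv("merged_assessment_data.csv")

# Count missing scores for each rubric indicator (columns assumed to be named "indicator_*").
rubric_columns = [c for c in scores.columns if c.startswith("indicator_")]
missing_counts = scores[rubric_columns].isna().sum().sort_values(ascending=False)
print("Missing scores per rubric indicator:")
print(missing_counts[missing_counts > 0])

# Flag candidates with no recorded rubric scores at all, for follow-up with instructors.
no_scores = scores[scores[rubric_columns].isna().all(axis=1)]
print(f"{len(no_scores)} candidates have no recorded rubric scores.")
```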

Ensuring the Technical Adequacy of Assessment Measures

Assessment experts advocate that assessment systems be regularly evaluated to ensure the technical adequacy of their measures. To this end, FSEHD has instituted procedures for verifying that the following aspects of validity and reliability are attended to (an illustrative sketch of the reliability checks appears after the outline):

  • Validity
    • Content-related validity
      • Alignment
      • Balance of representation
    • Construct validity
      • Internal consistency of scales
      • Factor analysis and theoretical frameworks
      • Expected developmental changes
    • Prediction
    • Fairness
      • Freedom from bias
      • Transparency of expectations
      • Opportunity to learn
      • Accommodations
      • Multiple opportunities
    • Consequences
      • Positive and negative
      • Intended and unintended
    • Utility
  • Reliability
    • Across raters (inter-rater reliability)
    • Over time
    • Across different tasks or items (internal consistency)
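The sketch below illustrates how two of the reliability checks in the outline, inter-rater agreement and internal consistency, are commonly computed.  It is not the unit's actual procedure; the data files and column names are hypothetical, and it assumes pandas and scikit-learn are available.

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score  # assumes scikit-learn is installed

# Hypothetical data: two raters' scores on the same work samples, and
# candidates' scores on the items of a single rubric scale.
ratings = pd.read_csv("double_scored_samples.csv")    # columns: rater_1, rater_2
scale_items = pd.read_csv("rubric_scale_items.csv")   # one column per rubric item

# Inter-rater reliability: Cohen's kappa between the two raters.
kappa = cohen_kappa_score(ratings["rater_1"], ratings["rater_2"])
print(f"Cohen's kappa (inter-rater agreement): {kappa:.2f}")

# Internal consistency: Cronbach's alpha across the items of one scale.
k = scale_items.shape[1]
item_variances = scale_items.var(axis=0, ddof=1)
total_variance = scale_items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha (internal consistency): {alpha:.2f}")
```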

Other documents on this web site, including STANDARD 2:  MOVING TO TARGET LEVEL and PROCEDURES FOR ENSURING THAT KEY ASSESSMENTS OF CANDIDATE PERFORMANCE AND EVALUATIONS OF UNIT OPERATIONS ARE FAIR, ACCURATE, CONSISTENT, AND FREE OF BIAS describe in detail unit procedures to evaluate and document the consistency of scoring within the assessment system, as well as the validity of the uses and interpretations of Initial and Advanced Programs Assessment System results.  These documents also describe findings from several reliability and validity studies conducted by FSEHD. 

A final technical criterion for a high-quality assessment system is standard setting.  In other words, programs and the unit must identify the amount and quality of evidence necessary to demonstrate proficiency on assessments; these expectations are performance standards (Measured Measures, 2000).  The standard-setting process cannot begin until criteria for levels of student performance (i.e., rubrics) are well articulated (Smith & Miller, 2003).  This is why reliability must be established first.

FSEHD plans to train selected faculty in two approaches to standard setting by the end of 2011:  the Angoff method and the Examination of Student Work method.  The Angoff method is an assessment-based approach in which faculty work collaboratively to determine a passing grade or acceptable performance on an assessment in a course.  It can be used with traditional assessment types (such as selected response) that are frequently used in advanced coursework.  The Examination of Student Work method involves the review of student work (yielded through performance assessment) and results in the establishment of data-based cutoff scores and anchor papers/benchmark performances.  Skill in using these standard-setting methods and implementation of these procedures within programs will yield more consistent scoring of student work samples at the formative and exit transition points, as well as within courses in programs, resulting in higher reliability in candidates' final course grades.  A subsequent training will be offered to faculty in 2012.  Additionally, faculty who are trained in standard-setting methods will be encouraged to share their knowledge with their peers in their departments.  Standard-setting procedures will be applied to all unit performance assessments.
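By way of illustration, in a classic Angoff procedure each panelist estimates, for every item, the probability that a minimally proficient candidate would answer it correctly, and the cut score is derived from those estimates.  The following sketch uses hypothetical panelist ratings and is not drawn from FSEHD data.

```python
import numpy as np

# Hypothetical Angoff ratings: each row is a panelist, each column an assessment item.
# Values are the estimated probability that a minimally proficient candidate
# answers the item correctly.
angoff_ratings = np.array([
    [0.70, 0.55, 0.80, 0.60, 0.65],   # panelist 1
    [0.75, 0.50, 0.85, 0.55, 0.70],   # panelist 2
    [0.65, 0.60, 0.75, 0.65, 0.60],   # panelist 3
])

# Each panelist's recommended raw cut score is the sum of his or her item estimates;
# the panel's cut score is typically the mean of those recommendations.
panelist_cut_scores = angoff_ratings.sum(axis=1)
cut_score = panelist_cut_scores.mean()

print(f"Panelist cut scores: {np.round(panelist_cut_scores, 2)}")
print(f"Recommended cut score: {cut_score:.2f} out of {angoff_ratings.shape[1]} items")
```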

An additional, useful benefit of the standard-setting process is that it often exposes flaws in scoring rubrics or the design of assessments.  As such, it is part of an iterative process of ongoing revision and improvement.

Data Reporting

There are four primary audiences for each unit assessment, and each is taken into account for reporting purposes.  Assessment data can be used to address questions and concerns relevant to candidates, faculty, program coordinators, and unit staff.  Examples of questions and concerns pertaining to these audiences include (but are not limited to):

    • Candidates:  Am I improving over time?  Am I succeeding at the level that I should be?  What help do I need?
    • Faculty:  Does this candidate meet the admissions or exit criteria for our program?  Which candidates need help?  What grades should candidates receive?  Are my instructional strategies working?
    • Program:  Is our program effective?  How can it be improved?  Which candidates are making adequate progress? Are our candidates ready for the workplace or the next step in learning?
    • Unit:  Who is applying to our programs?  Are programs producing the intended results?  How should we strategize to achieve success? Which programs need/deserve more resources?  (Stiggins, 2001, pp. 11-12)

Candidates complete most unit and program assessments in their courses and therefore receive feedback on their performance within their courses through the scoring rubrics associated with each assessment. Scoring rubrics provide concrete information about which specific standards were and were not adequately met. Candidates who do not adequately meet the standards on a unit assessment meet with their faculty member to discuss the standards/indicators that have not been met and what they need to do to meet them. Candidates then have the opportunity to revise and resubmit the assessment.

FSEHD provides regular and comprehensive data on program quality, unit operations, and candidate performance at each stage of its programs, extending into the first years of completers' practice. Assessment data from candidates, graduates, faculty, and other members of the professional community are based on multiple assessments from both internal sources (faculty) and external sources (cooperating teachers, internship mentors, employers, and other field contacts) that are systematically collected and reported as candidates progress through programs. These data are compiled, aggregated, summarized, analyzed, and reported publicly each semester for the purpose of improving candidate performance, program quality, and unit operations.  Data are generally reported in terms of descriptive statistics (measures of central tendency, standard deviation, range, frequencies), cross tabulations, correlations, and comparisons of means.  Results are presented in table, chart, and graph form.
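As an illustration of the kind of summary described above (the data file, column names, performance-level labels, and rubric scale are hypothetical), a semester report of descriptive statistics and cross tabulations can be generated along these lines:

```python
import pandas as pd

# Hypothetical exit-assessment scores with program and standard columns.
data = pd.read_csv("exit_assessment_scores.csv")   # columns: program, standard, score

# Descriptive statistics (central tendency, spread, range) by program.
summary = data.groupby("program")["score"].agg(["count", "mean", "median", "std", "min", "max"])
print(summary.round(2))

# Cross tabulation of program by performance level, assuming a 1-4 rubric scale
# and hypothetical level labels.
levels = pd.cut(data["score"], bins=[0, 1.5, 2.5, 3.5, 4.0],
                labels=["Unacceptable", "Developing", "Proficient", "Exemplary"])
print(pd.crosstab(data["program"], levels))
```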

Initial and advanced programs assessment reports are shared with faculty and published on the FSEHD web site each semester as they are completed by the Data Management Coordinator.  They are available at ricassessment.org, where they are easily accessible to faculty and open to the public.  In addition, the Data Management Coordinator and Director of Assessment regularly respond to faculty and administrator requests for specialized data sets that they wish to analyze themselves.  Faculty are regularly provided with raw data in a format that they request so that they can conduct their own investigations.  Access to raw data is crucial to fostering consistent data exploration and use, and research has demonstrated that educators who have ready access to data tend to use data more frequently and more effectively.  In addition, educators who explore their own data “invariably want more detailed data, or want data presented in different ways, than paper reports typically provide…  Preformatted data reports, while useful, cannot be cross-analyzed or connected with other data” (McLeod, 2005, p. 2).  This underscores the continued need for data that are accessible to FSEHD program faculty and staff, data that they can “get their hands on.”

Use of Data for Program Improvement and Unit Evaluation

The Initial and Advanced Programs Assessment Systems aim to provide stakeholders with meaningful assessment data quickly and to assist in making appropriate data-driven improvements. This plan includes creating a sustainable, data-driven system of continuous program improvement, which represents a major culture change. To these ends, staffing in assessment functions has been substantially increased since the last NCATE visit.  In 2004, the unit created and staffed Data Management Coordinator and Director of Assessment positions.  In 2008 and 2010, the unit also supported part-time Assistant Director of Assessment positions to assist in key assessment functions.  Finally, the unit is currently supporting a half-time position to provide support and train users in Chalk & Wire, as well as release time for a faculty member to assist with faculty training on Chalk & Wire.  These additional human resources greatly increase the likelihood that the unit will use data for program improvement and unit evaluation.

Since 2008, FSEHD has been routinely evaluating the technical adequacy of its assessment system and making changes consistent with the evaluation results.  Since that time, the entire initial programs assessment system has been revised, as have significant portions of the advanced programs assessment system (see summary of changes).  The rationale for evaluating the capacity and effectiveness of the unit assessment system arose from multiple sources.  First, analyses of unit assessment data revealed little variability in candidate scores across time points or across conceptual framework competencies and professional/state standards, making it difficult to identify areas for program or unit improvement, instances of candidate growth, or the value added by initial teacher preparation or advanced professional training. For example, mean ratings of both initial and advanced candidate performance at admissions, mid-point, exit, and post-graduation exhibited little variability and generally averaged 3.5 or higher on a scale of 1 to 4.  While these results appeared very positive, faculty questioned their validity and were hard pressed to use the data for program improvement.  Validity and reliability studies called into question the validity of some of the constructs that were purportedly being measured, as well as the clarity and consistency of performance expectations.  Finally, qualitative feedback from faculty members overwhelmingly supported the notion that the unit assessment system be evaluated and refined.  Issues repeatedly raised by initial and advanced faculty included the following:  their feedback had not been adequately sought or incorporated in the existing design of the assessment system; rubrics were vague and interpreted in different ways across and within programs; performance expectations were implicitly understood and differed within and across programs; important candidate performances and data were not included in the assessment system; and the system did not capture candidates' effects on student learning.


What evolved from these observations and subsequent discussion was an assessment system revision process that generally proceeded as follows:


[Graphic: assessment system revision process cycle]

Following analysis of quantitative and qualitative data regarding an existing unit assessment, the assessment committee gathered and asked, “What evidence do we need about candidates at various time points?”  From there, committee members reviewed the strengths and weaknesses of existing assessments and of assessment models used nationwide.  Using this information, a new or revised assessment was developed.  The assessment was presented to faculty at a retreat, feedback was gathered, and the assessment was revised.  Next, faculty volunteered to pilot the new assessment.  Based on pilot feedback, continued solicitation of feedback from other faculty, and analysis of data collected in pilots, the assessment was revised further.  The cycle then began again, with the revised and improved assessment presented to faculty at a subsequent retreat.  Pilot and revision activities were repeated as needed until faculty and the committee recommended that the new assessment undergo full-scale implementation. Details describing the evaluations of various components of FSEHD's unit assessment system and the process implemented to improve the system are available in the document on this web site entitled ASSESSMENT REVISION PROCESS.

Specific examples of assessment data use are described in the document on the web site entitled EXAMPLES OF CHANGES MADE TO COURSES, PROGRAMS AND UNIT IN RESPONSE TO DATA GATHERED FROM THE ASSESSMENT SYSTEM.

While the direct assessment of candidates through the Initial and Advanced Programs Assessment System is important, other mechanisms can and should also be used to gauge the quality of FSEHD and its programs. Other sources include evidence on faculty searches and applicant diversity, as well as the amount of re-assigned time granted to FSEHD faculty applicants to engage in analysis of assessment data and other scholarly pursuits. Unit data are also gathered through faculty productivity and course enrollment reports and candidate retention data provided by the Rhode Island College Office of Institutional Research, and through regular Admissions Office reports.  The FSEHD Department Annual Report format was revised for 2005-2006 to focus on reporting quantitative data.  The Dean analyzes these reports and uses the data to make decisions about faculty professional development, ways to better support scholarship, faculty hiring, and program effectiveness with respect to faculty and candidate quality. Unit evaluation is also sought through external review, as evidenced by the submission of unit data reports to the College's Special Assistant to the Vice President for Outcomes Assessment.  FSEHD assessment and data systems have been recognized as models for College-wide NEASC assessment requirements.

Unit improvements based on data from aggregated candidate assessments, faculty evaluations, and statistics from the Office of Institutional Research include: more pro-active search processes to enhance faculty diversity; focused attention to improve candidate preparation for work with English language learners from a variety of sociocultural backgrounds; needs-based faculty professional development events (e.g. on new state standards and proficiency-based graduation requirements, on case-based instruction, and on using technology to enhance instruction); professional development for faculty about the assessment system and how to use it; redesign of the Annual Department report format; grant-seeking to support faculty knowledge and skill development (successes include a $500,000 federal Teacher Quality Enhancement grant); reconsideration of FSEHD graduate candidate recruitment efforts and a new focus on growing programs; and expansion of Continuing Education opportunities for practicing educators in Early Spring (a new 4-week January semester only for FSEHD) and summer.

As the Initial and Advanced Programs Assessment Systems mature and further evidence of their reliability and validity is amassed, they will be used systematically to provide evidence for needed improvements in programs and the unit.  As data-based programmatic and unit changes are implemented, the validity and reliability of the modified system will continue to be assessed, in order to ensure that subsequent inferences and decisions made based on assessment data are appropriate and defensible.

References

  1. McLeod, S. (2005).  Data-driven teachers.  Minneapolis, MN:  School Technology Leadership Initiative, University of Minnesota.  Available at:  www.scottmcleod.net/storage/2005_CASTLE_Data_Driven_Teachers.pdf
  2. Smith, D., & Miller, L.  (2003).  Comprehensive local assessment systems (CLASS) primer:  A guide to assessment system design and use.  Gorham, ME:  Southern Maine Partnership, University of Southern Maine.
  3. Stiggins, R. J.  (2001).  Leadership for excellence in assessment:  A powerful new school district planning guide.  Portland, OR:  Assessment Training Institute.
   