33 Interpreting and Reporting Results
Integrating results
When reviewing direct testing results, it is important to go beyond the standardized test scores to find meaningful insights. Consider using record reviews, interviews, observations, rating scales, and checklists as valuable data points for assessing cognitive and processing skills. Analyze the student’s performance on standardized tests and dynamic testing activities to identify strategies and accommodations that can support learning and daily functioning. Focus on what fosters problem-solving, understanding, and functional skills. The following questions may guide your analysis of the results.
- What information about the student’s learning style can be determined from the testing performance?
- What developmental growth over time does the student’s performance indicate compared to previous evaluation results?
- What data from the test reflect the student’s abilities rather than challenges stemming from visual impairment?
- How has the student’s conceptual knowledge, as shaped by visual impairment, affected performance on specific test items?
- How does time influence the student’s ability to perform the given task?
- When testing the limits, what accommodations or modifications allowed the student to demonstrate a skill?
- How do data from standardized testing, criterion-referenced testing, interviews, observations, and work samples compare?
For rating scales, if a scale includes items directly impacted by vision (e.g., related to eye gaze, pointing, or play imitation), consider whether it is appropriate to report scores for those scales. Some scales might be reported qualitatively, while scales without vision-dependent items might be reported quantitatively.
Reporting results
While it is permissible to use tests normed on sighted individuals, remember to include statements that clarify validity and limitations. The following topics are points to consider including in your validity statement.
- Whether standardized procedures were followed
- What specific test accommodations or modifications were utilized to reduce the impact of visual impairments on test performance
- How testing of the limits was conducted
Consider that difficulty with vision or the use of accommodations/modifications may have changed the nature of the task or made the task more challenging or more manageable. In most cases, it is appropriate to interpret the results of subtests with visual stimuli in a clinical/qualitative rather than quantitative manner. The student’s performance on tests with visual stimuli may be more suitable for guiding recommendations on classroom accommodations or modifications than drawing conclusions on ability.
If deemed appropriate to report scores, keep in mind that students who are cognitively intact may score lower than their sighted peers since vision facilitates cognitive and language development. Scores may be conservative reflections of the student’s abilities. It may be more meaningful to report testing results qualitatively rather than quantitatively. Use confidence intervals of 90% or 95% when reporting scores.
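If scores are reported with confidence intervals, the interval is conventionally built from the standard error of measurement (SEM), which can be derived from the scale’s standard deviation and the subtest’s reliability coefficient. The sketch below illustrates the arithmetic; the function name and the example values (standard score 85, SD 15, reliability .90) are hypothetical, and actual SEM and reliability figures should be taken from the test manual.

```python
import math

def confidence_interval(score, sd, reliability, level=0.95):
    """Confidence interval around an observed standard score.

    SEM = SD * sqrt(1 - reliability); CI = score +/- z * SEM.
    Only the two conventional levels (90% and 95%) are supported,
    using their standard normal z-values.
    Note: some manuals instead center the interval on the estimated
    true score; consult the test manual for the publisher's method.
    """
    z = {0.90: 1.645, 0.95: 1.96}[level]
    sem = sd * math.sqrt(1 - reliability)
    half_width = z * sem
    return (round(score - half_width, 1), round(score + half_width, 1))

# Illustrative only: observed standard score of 85 on a scale with
# SD 15 and a hypothetical reliability of .90
low, high = confidence_interval(85, sd=15, reliability=0.90)
# The 95% interval is wider than the 90% interval for the same score
```

A wider (95%) interval trades precision for confidence, which suits the conservative, qualitative framing recommended above for students with visual impairments.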
Sample validity statements
It is always important in your reports to address any specific concerns you have about the validity of the assessment. Avoid blanket or generalized statements; tailor them to the particular student.
Cross-battery assessment example
Instruments designed for fully sighted individuals but used with blind and low vision students can be useful when interpreted by persons knowledgeable of the effects of visual impairment on cognitive development and test performance. The results of many norm-referenced tests standardized on sighted individuals are conservative indicators of the characteristics being measured. The results presented in this report were compiled from tests that do not share a common norm group; however, test results have been interpreted following the cross-battery approach and integrated with data from other sources including educational records, parent/teacher interviews, behavioral observations, work samples, and other test findings to ensure ecological validity. Standardization was followed for all test administrations, except for [document test accommodations and modifications]; performance was interpreted qualitatively instead of quantitatively where standardized procedures were broken and validity was compromised. The results provide a snapshot of current skills and knowledge and are deemed to be a meaningful characterization of [the student’s] current performance. No single test or procedure was used as the sole criterion for classification, eligibility, or educational planning.
Traumatic Brain Injury (TBI) example
[The student] had a traumatic brain injury at [age], had to relearn all language following the injury, lacks [eye and/or field vision impacts], and cannot use [specific body part or side of the body]. No test has been validated on a population of students with such a constellation of circumstances and injuries. The tests used for this assessment “assume” facility with skills that [the student] does not have and therefore become a measure of these skills rather than what was intended. For example, if a test requires speed and fine motor manipulation of blocks to assess perceptual reasoning, it will measure speed and fine motor skills as much as perceptual reasoning when [the student] attempts it. These factors were taken into careful consideration during assessment and interpretation. This assessment aimed to measure [the student’s] strengths and weaknesses with minimal interference from factors that are not under [the student’s] control. Speed, fine motor skills, and vision requirements were minimized as much as possible. Observation, experience, and examples were given significant weight in addition to test score results.