Antecedent

Definition: The setting(s) and the events within them that occur immediately before a specific behavior.

Why it matters: Understanding when and where a behavior occurs is important for determining its function as well as for improving behavior change programming.

Example of use: Careful observation of antecedent events allows a school psychologist to determine that a student makes funny remarks in Math class in order to escape the class (the teacher quickly sends him out), whereas in Social Studies class the same behavior results in attention from the teacher without escape from classwork.

References:

Gresham, F. M., Watson, T. S., & Skinner, C. H. (2001). Functional Behavioral Assessment: Principles, Procedures, and Future Directions. School Psychology Review, 30(2), 156.

Smith, R. G., & Iwata, B. A. (1997). Antecedent influences on behavior disorders. Journal of Applied Behavior Analysis, 30(2), 343-375. doi:10.1901/jaba.1997.30-343

CBA

Definition: CBAs use test stimuli taken from the curriculum, are intended to be administered regularly, and are used to inform instruction. CBM is one type of CBA; it measures general outcomes that incorporate many skills. CBA can also be thought of in terms of mastery measurement, in which progress is measured by mastery of the sub-skills needed to reach a global skill. Most CBA models fall under mastery measurement.

Why it matters: Mastery measurement CBAs are not standardized, so their validity and reliability are unknown. They are most often teacher-made assessments created for individual students.

Example of use: A teacher creates several math assessments to measure a student's progress toward the sub-skills needed to add and subtract mixed fractions.

References:

Hosp, M. K., & Hosp, J. L. (2003). Curriculum-Based Measurement for Reading, Spelling, and Math: How to Do It and Why. Preventing School Failure, 48(1), 10-17.

Hosp, J. L., Hosp, M. K., & Howell, K. W. (2014). The ABCs of curriculum-based evaluation: A practical guide to effective decision making. New York: The Guilford Press.

CBM

Definition: CBMs are standardized assessments, focused on long-term goals, that measure progress across many skill domains. CBM was created to be administered regularly throughout the school year. CBMs are easy to administer and score, and they provide ongoing data that is used to make instructional decisions.

Why it matters: The increased accountability and data-based decision-making requirements within IDEA are addressed by the use of CBM. CBM has good reliability and validity, as well as treatment validity, meaning that the assessment also evaluates the effectiveness of the instruction or treatment being used.

Example of use: A teacher gives CBM-R to her 3rd-grade class at the beginning of the year. She identifies one student who scores well below everyone else. She administers 2 additional CBM-R tasks to this student to create a baseline. Using national benchmarks, she creates a goal line for the student and administers CBM-R once a week to measure the student's progress. After about 6 weeks, the teacher decides the student is not making enough progress to meet his benchmark and selects an evidence-based reading intervention. The teacher continues to monitor the student weekly during the intervention.
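
A minimal sketch of the arithmetic behind this kind of progress monitoring, assuming an illustrative baseline, a hypothetical end-of-year benchmark of 110 words correct per minute, and a simple "consecutive points below the goal line" decision rule; none of these numbers come from the source:

```python
# Hypothetical CBM-R progress monitoring: baseline, goal line, and a simple
# decision rule. All numbers and the 4-point rule are illustrative only.

baseline_scores = [42, 38, 45]         # three CBM-R probes (words correct/min)
baseline = sorted(baseline_scores)[1]  # median of the three probes
benchmark = 110                        # hypothetical end-of-year benchmark
weeks_to_benchmark = 30                # weeks of monitoring remaining

weekly_growth_needed = (benchmark - baseline) / weeks_to_benchmark

def goal_line(week):
    """Expected score on the goal line at a given week of monitoring."""
    return baseline + weekly_growth_needed * week

observed = [43, 44, 42, 46, 45, 47]    # six weeks of weekly scores (invented)

# One common informal rule: several consecutive points below the goal line
# suggest the current instruction is not working.
below = [score < goal_line(week) for week, score in enumerate(observed, start=1)]
if all(below[-4:]):
    print("Last 4 points are below the goal line -> change instruction.")
```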

References:

Fuchs, L. S., & Fuchs, D. (1999). Monitoring student progress toward the development of reading competence: A review of three forms of classroom-based assessment. School Psychology Review, 28(4), 659-671.

Hosp, M. K., & Hosp, J. L. (2003). Curriculum-Based Measurement for Reading, Spelling, and Math: How to Do It and Why. Preventing School Failure, 48(1), 10-17.

Momentary Time Sampling

Definition: Time is broken into intervals (e.g., 1-minute intervals across a 50-minute period), and data is taken at the end of each interval: the observer records whether the behavior is occurring at that moment. Data is reported as the percentage of intervals in which the behavior was occurring, and the behavior is assumed to have occurred for the entire interval. It is best suited to continuous activity. There are three types: fixed, variable, and planned activity check (PLACHECK). Fixed: the observation period is divided into equal intervals. Variable: the intervals are of varying lengths. PLACHECK: used for group behavior; at the end of each interval, the data collector records the number of participants engaging in the target behavior, and data is reported as the percentage of the group engaging in the target behavior for each interval.

Why it matters: This data collection method does not require continuous observation, making it more practical for teachers and other practitioners.

Example of use: A teacher is measuring on-task behavior for a student. She sets her phone to buzz every 2 minutes. When her phone buzzes, she records whether or not the student is on-task at that moment. She assumes the behavior occurred throughout the interval when calculating the student's time on-task.
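
A minimal sketch of the percentage calculation behind this example, using invented observations (True = the student is on-task at the moment the phone buzzes):

```python
# Momentary time sampling: score whether the behavior is occurring at the
# instant each interval ends, then report the percentage of intervals scored.
# The observations below are invented for illustration.

interval_minutes = 2
samples = [True, True, False, True, False, True, True, True, False, True]

percent_on_task = 100 * sum(samples) / len(samples)
print(f"On-task in {percent_on_task:.0f}% of intervals")            # 70%

# Assuming the behavior spanned each scored interval, estimated time on-task:
estimated_minutes = sum(samples) * interval_minutes
print(f"~{estimated_minutes} of {len(samples) * interval_minutes} minutes on-task")
```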

References:

Gast, D. L., & Ledford, J. (2014). Single case research methodology: Applications in special education and behavioral sciences. New York, NY: Routledge.

Whole Interval

Definition: Data is taken on behavior in intervals of time (e.g., every 10 seconds or every 1 minute). A behavior is recorded as occurring only if it occurs for the entire interval. If the behavior is recorded as occurring for two 10-second intervals in a minute, this can be calculated as the behavior occurring for 20 seconds of every minute.

Why it matters: It provides an estimate of time/duration, but if the intervals are not small enough it can result in an underestimate of duration.

Example of use: A teacher measures on-task behavior for a student in her class using whole-interval data collection with 10-second intervals.
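
A minimal sketch of the calculation described in the definition above, using invented 10-second interval scores (True = the behavior occurred for the entire interval):

```python
# Whole-interval recording: an interval is scored only if the behavior lasted
# the entire interval, so the estimate is a lower bound on true duration.
# The interval scores below are invented for illustration.

interval_seconds = 10
# One minute of observation, six 10-second intervals.
intervals = [True, False, True, False, False, False]

estimated_seconds = sum(intervals) * interval_seconds
print(f"Estimated on-task time: at least {estimated_seconds} s per minute")  # 20 s
```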

References:

Gast, D. L., & Ledford, J. (2014). Single case research methodology: Applications in special education and behavioral sciences. New York, NY: Routledge.

Gresham, F. M., Watson, T. S., & Skinner, C. H. (2001). Functional Behavioral Assessment: Principles, Procedures, and Future Directions. School Psychology Review, 30(2), 156.

Partial Interval

Definition: Data is taken on behavior in intervals of time (e.g., every 10 seconds or every 1 minute). A behavior is recorded as occurring if it is observed during any portion of the interval. If the behavior is recorded as occurring for two 10-second intervals in a minute, this can be calculated as the behavior occurring for 20 seconds of every minute.

Why it matters: It is effective for measuring behaviors that have no discrete beginning or end while still allowing duration to be estimated. If the intervals are not small enough, it can result in an overestimate of duration.

Example of use: A teacher is measuring out-of-seat behavior, but event recording does not fully capture the information wanted because the student occasionally stays out of his seat for several seconds. The teacher uses partial-interval data collection with 10-second intervals to measure out-of-seat behavior. The data allows the teacher to estimate the amount of time spent out of seat (duration).
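
A minimal sketch of the duration estimate in this example, using invented 10-second interval scores (True = the student was out of his seat at some point during the interval):

```python
# Partial-interval recording: an interval is scored if the behavior occurred
# at any point during it, so the estimate is an upper bound on true duration.
# The interval scores below are invented for illustration.

interval_seconds = 10
# Three minutes of observation, eighteen 10-second intervals.
intervals = [False, True, True, False, False, False,
             True, True, True, False, False, False,
             False, False, True, False, False, False]

estimated_seconds = sum(intervals) * interval_seconds
print(f"Out of seat for at most ~{estimated_seconds} s across 3 minutes")  # 60 s
```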

References:

Gast, D. L., & Ledford, J. (2014). Single case research methodology: Applications in special education and behavioral sciences. New York, NY: Routledge.

Gresham, F. M., Watson, T. S., & Skinner, C. H. (2001). Functional Behavioral Assessment: Principles, Procedures, and Future Directions. School Psychology Review, 30(2), 156.

Event Recording

Definition: Each occurrence of the behavior is recorded over a set amount of time. It is best used to measure the frequency or rate of a behavior.

Why it matters: It is easy for most teachers to use for measuring discrete behaviors (those with a clear beginning and end), and a baseline rate can be used to measure progress in terms of an increasing or decreasing rate.

Example of use: A teacher decides to use event recording for the number of times a student speaks out in Math class without raising his hand first. The teacher collects data for 3 consecutive classes and divides the total number of behaviors by the total minutes observed to get a rate of behaviors per minute. The teacher then uses several strategies and continues to take event recording data to see whether the rate of behaviors per minute changes.
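
A minimal sketch of the rate calculation in this example, with invented tallies and class lengths:

```python
# Event recording: tally each occurrence, then divide the total count by the
# total observation time to get a rate. Tallies and minutes are invented.

call_outs_per_class = [9, 12, 7]     # tallies from 3 consecutive Math classes
minutes_per_class = [50, 50, 45]     # length of each observation

rate_per_minute = sum(call_outs_per_class) / sum(minutes_per_class)
print(f"Baseline rate: {rate_per_minute:.2f} call-outs per minute")  # ~0.19
```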

References:

Gast, D. L., & Ledford, J. (2014). Single case research methodology: Applications in special education and behavioral sciences. New York, NY: Routledge.

Gresham, F. M., Watson, T. S., & Skinner, C. H. (2001). Functional Behavioral Assessment: Principles, Procedures, and Future Directions. School Psychology Review, 30(2), 156.

Sensitivity to Change

Definition: The level or amount of learning needed to result in a measurable change in the assessment outcome. Assessments that are more sensitive to change are able to measure smaller increments of learning.

Why it matters: Sensitive assessments allow you to measure learning and rates of learning in order to make data-based instructional decisions before the student fails. Highly sensitive instruments can also give both students and teachers positive feedback on a student's success sooner than less sensitive instruments can.

Example of use: A teacher uses CBM-R to create a baseline and track the rate of learning for a student who is struggling to read. Using CBM-R, the teacher is able to verify that the student is learning at a slower rate than his peers and decides to use mini-lessons to build vocabulary. The teacher can then verify whether or not the student's rate of learning improves before any summative assessments are given.
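
A minimal sketch of comparing a student's rate of learning to a typical peer rate, using the slope of weekly CBM-R scores; the scores and the peer growth rate are invented, and the use of statistics.linear_regression (Python 3.10+) is just one convenient way to get a slope:

```python
# Rate of learning as the slope of weekly CBM-R scores, compared to a typical
# peer growth rate. Scores and the peer rate are invented for illustration.
from statistics import linear_regression  # available in Python 3.10+

weeks = list(range(1, 7))
scores = [40, 41, 43, 42, 44, 45]          # student's weekly words correct/min
typical_weekly_growth = 1.2                # hypothetical peer growth rate

slope, _intercept = linear_regression(weeks, scores)
print(f"Student growth: {slope:.2f} words/week "
      f"(typical peers: {typical_weekly_growth} words/week)")
```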

References:

Fuchs, L. S., & Fuchs, D. (1999). Monitoring student progress toward the development of reading competence: A review of three forms of classroom-based assessment. School Psychology Review, 28(4), 659-671.

Fuchs, L. S., Fuchs, D., & Courey, S. J. (2005). Curriculum-based measurement of mathematics competence: From computation to concepts and applications to real-life problem solving. Assessment for Effective Intervention, 30(2), 33-46. doi:10.1177/073724770503000204

Validity

Definition: An assessment is valid if it measures what it purports to measure. A statement or conclusion regarding the relationship between two or more variables is valid if it is backed by sound and rigorous research.

Why it matters: If an assessment claims to measure a numeracy skill that predicts success in later math classes, but it does not actually measure that skill or the skill is not really a good predictor of later success, then the test is invalid. Using invalid tests is a waste of time and money because no one can be sure what, if anything, they measure or what the results mean.

Example of use: Always check the validity of an assessment and the predictive claims made for it.

References:

Rumrill, P. D., Jr., Cook, B. G., & Wiley, A. L. (2011). Research in special education: Designs, methods, and applications. Springfield, IL: Charles C. Thomas.

Adequate Yearly Progress (AYP)

Definition: Mandated by the No Child Left Behind Act of 2001 (NCLB), AYP requires states to implement a single accountability system that demonstrates yearly progress toward state academic content standards for every student. States have some flexibility in defining and measuring AYP, but state tests are the primary measurement.

Why it matters: If a school district fails to meet AYP for two consecutive years, it is identified as in need of improvement. States develop their own rewards and sanctions, but at a minimum low-performing schools must notify parents of their status, students may transfer to another school, supplemental services must be provided, and the school must be given additional assistance. If a school continues to fail, it may be restructured.

Additional Criteria: 95% of all students, as well as 95% of each sub-group of students, must take the state tests each year, and each sub-group, including students with disabilities, must meet or exceed the annual expectations set by the state. Progress is tested yearly in grades 3 through 8 and in one high school grade for reading/language arts and math. All students were to be proficient by the 2013-2014 school year.

References:

Schwarz, R. D., Yen, W. M., & Schafer, W. D. (2001). The challenge and attainability of goals for adequate yearly progress. Educational Measurement: Issues and Practice, 20(3), 26-33.

Yell, M. L. (2012). The law and special education (3rd ed.). Old Tappan, NJ: Merrill/Prentice-Hall.