SPED 5302 Week I Assignment: Assessment Terminology
Module I
Lamar University
SPED 5302: Tests, Measurements & Evaluations
Dr. Marjason
11/12/23
1. Age Equivalent - The score used to compare a student's performance to that of students of the same age, expressed in years and months. For example, if a student gets a score of 12-4, the student performed as well as an average student aged 12 years, 4 months. (Pierangelo & Giuliani, 2017, p. 48)

2. Alternate Forms Reliability - The reliability obtained by comparing the scores of two equivalent forms of a test. For example, if a teacher wants to find out how well students understand fraction problems, the teacher can gather fraction problems, split the set in half, and administer the halves as two tests. The more similar the two sets of scores, the more reliable the test. (Pierangelo & Giuliani, 2017, p. 103)

3. Assessment - The process used when a student shows difficulty in certain educational areas; to identify these weaknesses, tests are administered that produce results pointing out the difficulties. An example of an assessment is the Woodcock Johnson IV Tests of Achievement. (Pierangelo & Giuliani, 2017, p. 5)

4. Chronological Age - The student's exact age at the time of assessment; it is imperative to calculate the precise age for accuracy. For example, a chronological age of 10-4-22 means the student is 10 years, 4 months, and 22 days old. (Pierangelo & Giuliani, 2017, p. 41)

5. Concurrent Validity - The accuracy with which a test reflects a student's current state on a criterion measured at the same time. For example, a test that measures levels of anxiety demonstrates concurrent validity if its results agree with the anxiety the student is demonstrating at the time of testing. (Pierangelo & Giuliani, 2017, p. 99)

6. Construct Validity - The degree to which an assessment measures a theoretical construct, also known as an attribute; it concerns the area being measured and how it is characterized. Level of intelligence is an example of a construct. (Pierangelo & Giuliani, 2017, p. 99)

7. Content Validity - Demonstrated when a test measures the content it is intended to measure. For example, if we are trying to determine how well a student knows Spanish sight words and the student does well on a Spanish spelling test, the test is said to have content validity; the score is true to what was meant to be measured. (Pierangelo & Giuliani, 2017, p. 99)
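As a quick illustration of the chronological-age arithmetic described in term 4, the years-months-days calculation might be sketched as below. This is only a sketch: the function name, the 30-day-month borrowing convention, and the sample dates are illustrative assumptions, not part of the glossary.

```python
from datetime import date

def chronological_age(birth: date, test: date) -> tuple[int, int, int]:
    """Return (years, months, days) between two dates.

    Uses the borrow-style subtraction common in test manuals; treating a
    borrowed month as 30 days is an assumption made for this sketch.
    """
    years = test.year - birth.year
    months = test.month - birth.month
    days = test.day - birth.day
    if days < 0:      # borrow one month, counted as 30 days
        days += 30
        months -= 1
    if months < 0:    # borrow one year
        months += 12
        years -= 1
    return years, months, days

# A student born January 20, 2013 and tested June 11, 2023
# has a chronological age of 10-4-21:
print(chronological_age(date(2013, 1, 20), date(2023, 6, 11)))
```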
8. Content-Referenced Test - A test that measures mastery of specific skills. Such tests indicate whether a student has mastered the skills; they can be given before new content, to see how much prior knowledge the student has, or after going over content, to see how well the student mastered the skills. (Pierangelo & Giuliani, 2017, p. 22)

9. Convergent Validity - Refers to how the scores of two similar tests compare. Convergent validity is high if both tests produce similar scores when they measure the same or similar abilities, for example, a student earning high reading comprehension scores on two different stories. (Pierangelo & Giuliani, 2017, p. 100)

10. Correlation - The relationship between two variables, which can be positive, negative, or zero. Looking at two variables shows how strongly they are associated. For example, the higher the job position, the higher the stress level. (Pierangelo & Giuliani, 2017, p. 37)

11. Criterion-Referenced Test (CRT) - A test that reports a level of achievement against a predetermined standard; if the standard is reached, the student has mastered the skill. An example of a CRT is the state assessment, as the passing score is set for all students. (Pierangelo & Giuliani, 2017, p. 22)

12. Criterion-Related Validity - The process of determining the validity of an instrument or assessment by comparing it to one already established as valid. The closer the two instruments agree, the higher the validity. (Pierangelo & Giuliani, 2017, p. 98)

13. Curriculum-Based Assessment (CBA) - An assessment based on the curriculum in which the student participates. A student in 5th grade may be learning about fractions because they are part of the 5th-grade curriculum; CBA identifies the level of mastery of the instructional curriculum at the student's grade level. (Pierangelo & Giuliani, 2017, p. 23)

14. Curriculum-Based Measurement (CBM) - Refers to the speed with which a student can perform a given task; CBM is often used to determine fluency. One example is timing a kindergarten student for 5 minutes to count the number of sight words read; such measurements can be repeated throughout the school year to track progress. (Pierangelo & Giuliani, 2017, p. 24)
15. Deciles - The division of scores into tenths, or ten equal units. For example, the 8th decile is the point below which 80 percent of the scores fall. (Pierangelo & Giuliani, 2017, p. 44)

16. Discriminant Validity - When the measures of a test are in fact unrelated to a different construct, the test is said to have discriminant validity; the low correlation between the unrelated measures is the evidence for it. (Pierangelo & Giuliani, 2017, p. 313)

17. Dynamic Assessment - An assessment in which data are gathered over time to track progress and ensure that supports are allowing the student to be successful. Past data are compared to present data, and the student's learning is determined from those results. (Pierangelo & Giuliani, 2017, p. 313)

18. Ecological Assessment - Observing and assessing the child in different environments throughout the day. For example, the student may be observed during lunch, math class, reading, and music; from these observations the observer can see how the student interacts in different settings and where they demonstrate difficulty or frustration. (Pierangelo & Giuliani, 2017, p. 23)

19. Grade Equivalent - The score of performance when compared to children in the same grade level. For example, a grade equivalent score of 5.4 means the student is performing as well as an average student in the fourth month of fifth grade. (Pierangelo & Giuliani, 2017, p. 48)

20. Informal Reading Inventory - A nonstandardized, informal measure used to identify reading levels: independent, instructional, and frustration. It is used for word recognition and passage reading. (Pierangelo & Giuliani, 2017, p. 22)

21. Instructional Planning - Planning for instruction after assessments or based on the needs of the student. The information collected is taken into consideration when planning supports for the student, and instruction is tailored to the student's needs. (Pierangelo & Giuliani, 2017, p. 317)

22. Interrater Reliability - The consistency between two observers who gather data on the same specific behaviors; the observers then compare their records to determine the best support for the student. An example would be when a behavioral strategist and a general education
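The decile cut points defined in term 15 can be computed with Python's built-in `statistics.quantiles`. The score list below is invented for illustration only.

```python
from statistics import quantiles

# Hypothetical test scores, sorted, for illustration.
scores = [52, 55, 58, 61, 64, 67, 70, 73, 76, 79,
          82, 84, 86, 88, 90, 92, 94, 96, 98, 99]

# quantiles(..., n=10) returns the 9 cut points dividing the data into tenths.
deciles = quantiles(scores, n=10)

# The 8th decile: 80 percent of the scores fall below this value.
eighth = deciles[7]
print(eighth)  # 93.6 for this sample
```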