For the latest Ontario Ministry of Education document on assessment of learning, for learning, and as learning, please see Growing Success (2010).

Note: This is a PERSONAL blog, not an official Ministry of Education website. This is a forum for sharing.

Please add comments and your favourite resources (and let me know if there are any dead links!) Thank you!

The following is a summary of my talk about Assessment. This is my own work, based on Special Education in Ontario Schools (2008) by Ken Weber & Sheila Bennett.

©Angela onthebellcurve (2012)

Assessment and Evaluation:  What is it?
•An Educated Guess
•A snapshot of a student’s strengths and needs at a particular time
•Tools to get some understanding of a student’s abilities, strengths and needs
•Assessment can be Formal (standardized) or Informal (teacher-created)
•Essentially, a description of a student’s performance or ability
Assessment: Why do it?
•To develop an IEP and make program and placement decisions
•To get information about a student’s academic abilities, intelligence (cognitive ability), behaviour, strengths, needs and so on
Assessment: Who does it?
•Originally only ‘experts’ would conduct ‘assessments’ that would be used to make decisions about placement and program
•Now, assessment is more team-based and involves input from a variety of sources and people
•Team members can include: classroom teacher, the School Special Ed. Teacher, and / or other professionals such as Speech Language Pathologists, Occupational Therapists or Psychologists
Caution: A test is only as good as the person administering it.
•We can only assess what we observe
–Although most testing attempts to assess ‘invisible’ processes such as working memory, visual-spatial ability and vocabulary by having the student demonstrate these abilities, these tests may not capture all of a student’s abilities

There are many things that influence student performance – such as anxiety, inattention, hunger, fatigue, depression

•Assessment gives a snapshot of the student’s performance and ability under particular circumstances at a particular time
•e.g. students with average intelligence may perform very poorly on standardized cognitive assessments like the C-CAT if they have difficulties with attention or impulsivity

Reliability and Validity

  • Reliability – will the assessment measure the same thing repeatedly at different times?  If you measure the same item again, will you get the same measurement?
  • E.g. a broken ruler will always give the same reading for the same object
  • Definition of Reliability – the degree to which a student would obtain the same score if the test were re-administered (assuming no further learning, practice effects or change).
  • Validity  (definition) – the extent to which a test measures what it is designed to measure.
  • In our broken ruler example, the assessment tool is reliable.  It is not valid.
  • Our broken ruler does not correspond with any external measurements.
  • If our broken ruler did correspond with external measurements, it would be valid.

Validity – IQ and Shoes???

Imagine we created a test for intelligence that found shoe size is positively correlated with our definition of intelligence – i.e. the bigger the shoe size, the ‘more’ intelligent you are. Therefore we would measure feet to determine intelligence.

This test would be reliable (we would get the same results over repeated measurements).

However, this test is not valid.  Our measure of intelligence and our definition of intelligence do not correspond to any external measurements (pre-existing intelligence tests or research about intelligence).
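The shoe-size example can be made concrete with a quick simulation. This is just an illustrative sketch with made-up numbers, not a real psychometric analysis: reliability is estimated as the test-retest correlation (administer the “test” twice), and validity as the correlation with an unrelated “true ability” score standing in for an external criterion.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
n = 500

# Each simulated student has a shoe size and an unrelated ability score.
shoe_size = [random.gauss(9, 1.5) for _ in range(n)]
true_ability = [random.gauss(100, 15) for _ in range(n)]

# "Shoe-size IQ test": administered twice, with tiny measurement noise.
test_1 = [s + random.gauss(0, 0.1) for s in shoe_size]
test_2 = [s + random.gauss(0, 0.1) for s in shoe_size]

reliability = pearson(test_1, test_2)     # test-retest agreement: near 1.0
validity = pearson(test_1, true_ability)  # agreement with criterion: near 0.0

print(f"reliability (test-retest): {reliability:.2f}")
print(f"validity (vs. ability):    {validity:.2f}")
```

Run it and the test-retest correlation comes out near 1.0 (highly reliable) while the correlation with ability hovers near 0 (not valid) – exactly the broken-ruler situation.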

Informal Assessments

  • May not be as reliable as formal assessments, but may be more valid!
  • Different assessments and evaluations over time give a more comprehensive view of the student
  • Examples include: portfolios, teacher created tests and assignments, running records / miscue analysis, teacher observations

Benefits of Informal Assessments

  • Assessment conducted by person working with student on ongoing basis
  • Assessment can be tailored to meet specific needs (e.g. decoding ability or borrowing in subtraction)
  • May provide a picture of why and when a student fails to demonstrate a specific skill rather than just confirming that they cannot do it!

Formal Assessments

  • May be called ‘standardized’ because the test results are compared to norms (the groups used by the publisher that are supposed to be representative of the population)
  • Tests are usually timed (this may be difficult for slow or deep thinkers)
  • Group tests usually have single answers to multiple choice questions (this may be difficult for divergent thinkers)
  • Formal Assessments Include: Rating Scales, Inventories & Checklists, Intelligence tests
  • Specific examples include the WISC, Canadian Achievement Tests (CAT) [This assesses academic achievement] and Canadian Cognitive Abilities Test (C-CAT) [This assesses cognitive (Thinking) ability]

Follow this link for details about the WISC.

Follow this link for details about Norms, Percentiles, Stanines, Grade Equivalents, etc.

Important to Know: Percentiles are NOT people

One of my students once said:

Yo Man, I don’t like, um what do you callit… psychologists.  They say this and they say that.  They tell my mom I need meds.  I don’t need meds.  I’ve matured.  (referring to his problems with anger management)

Those psychologists don’t know me.  What do they know about my life?  How can they say that I’m this or I’m that.  They don’t know me.

When reading standardized assessment results, be aware of the Band of Confidence / Standard Error of Measurement

  • Standard Error of Measurement – an estimate of how far a student’s observed score may be from their true score.  This information is in the technical manual for published tests.
  • Band of Confidence – because of the Standard Error of Measurement, a test score can never be considered absolutely correct.  Therefore some test manuals offer a range around a score within which it can be interpreted with confidence.
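Here is a minimal sketch of the arithmetic behind a band of confidence, using a hypothetical standard score of 100 and a hypothetical SEM of 3 points (real test manuals publish their own SEMs and pre-computed bands, so always use the manual’s figures in practice):

```python
# z-values for two common confidence levels
Z = {90: 1.645, 95: 1.96}

def confidence_band(score, sem, level=95):
    """Return a (low, high) band around an observed standard score."""
    margin = Z[level] * sem
    return (round(score - margin), round(score + margin))

# Hypothetical example: observed score 100, SEM of 3 points.
print(confidence_band(100, 3))       # 95% band: (94, 106)
print(confidence_band(100, 3, 90))   # 90% band: (95, 105)
```

In other words, a reported score of 100 with an SEM of 3 is best read as “somewhere between roughly 94 and 106, 95 times out of 100” – not as an exact number.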

Issues Around Formal Assessment

  • Sometimes student performance on Formal Assessments may not reflect their actual ability due to individual student factors such as anxiety, non-compliance, impulsivity, etc.
  • Sometimes Formal Assessments may miss key ecological factors in student performance (i.e. social-emotional issues, triggers for behaviour)
  • Sometimes Formal Assessments may not reflect the frequency or intensity of behaviour
  • To have an accurate profile of the student you need information from a variety of sources and assessment tools
  • You need a balance of Formal and Informal assessments – standardized tests, teacher-created evaluations and observations
  • Information from a variety of sources – parents, teachers, TAs and even the student!




Filed under Assessment

One response to “Assessment”

  1. I haven’t had a chance to go through all of the content on this website yet, but it seems to have a lot of content that could be applied to the Ontario curriculum as well. I also know that this website doesn’t fully solve the difficulty of assessment but anything that helps is good in my opinion.
