Thursday 12 November 2015

Testing Dilemma Part II

Acronyms:
 
SAT – Scholastic Aptitude Test
ACT – American College Testing
PARCC – Partnership for Assessment of Readiness for College and Careers

Remember when the SAT upgraded its 1600-point Math and ELA test to include a new 800-point essay portion?  That was in 2005.  The College Board felt that bubble answers did not sufficiently reflect a student’s “aptitude” for college success.  The other two sections were also modified, but it was the essay section that proved very expensive to administer, because humans had to read and score every essay.

The SAT will now relaunch next March with the essay component being “optional”.  The new test will run three hours and have three parts:  reading, writing and language, and math.  The math portion will include open-ended short answers, but the essay portion (50 min) will be optional and not included in the overall score.  Basically, the SAT will more closely resemble the ACT (introduced in 1959 as a rival to the SAT).

Both the SAT and ACT are now competing for state-funded contracts to offer these tests to public high school students.  In fact, the College Board has begun offering tests to 8th graders to help them “address weaknesses early”.   A third competitor is PARCC, a multi-state consortium created to provide K-12 testing aligned with the Common Core.  All three organizations are strategizing to offer tests that will help students along the way rather than serving simply as college admissions exams.

So, test mania thrives because it is a lucrative business, and in some cases because testing is appropriate and helpful.  But I do not believe any of these testing methods does justice to measuring how well a student will perform in college or in the working world.  The tests are missing the elements that portray the student as a whole person, with creative ideas, possible solutions to world problems, passions and opinions.  Measuring those attributes requires a great deal of human effort and resources.

Time Magazine (Oct. 12, 2015) described what Singapore does for its assessments.  “The government-run test for college-bound students requires them to complete a group project over several weeks that is meant to measure their ability to collaborate, apply knowledge, and communicate – all skills both educators and employers say are critical for the future economy.”  Max McGee, who runs the Palo Alto school system, suggests that a better assessment would “… look like a portfolio students generate over time that reflects their passion, purpose in life, their sense of wonder, and that demonstrates their resilience and persistence and some intellectual rigor.”

In the meantime, it is important to note that top-ranked colleges are beginning either to eliminate or to diminish the focus on SAT or ACT scores.  Harvard, for example, has found that the highest test scorers do only a little better than the lowest scorers.  College officials generally feel that it is the high school transcript that reveals what a student will do.  A major study of 33 colleges found that high school GPA – even at weak schools with easy curricula – was better at predicting success in college than any standardized test.  What does impress school officials is “rigor” and “curriculum”.  Getting good grades in tough high school courses in your field of interest will stand out as much as, or more than, a high entrance exam score at most colleges.

So, it may be worthwhile to start planning your portfolio and what you would like it to include that best represents who you are, what your interests are, and some of the things you can envision for yourself in the future.  It can contain all sorts of things:  pictures, videos, reports, journal entries, volunteer activities, projects (for school or just for fun), ideas, perhaps graded papers or projects that show improvement, awards, honor roll, and on and on.  I would also suggest starting this as soon as you want to.  Your parents can help in the beginning.   I will also be glad to offer help or ideas.  It can be fun and a good habit to start now.  You will always benefit from having a solid portfolio in this highly competitive world.

Saturday 7 November 2015

The Testing Dilemma

Obama has made a statement that is rattling the test-makers, but is undoubtedly getting sighs of relief from teachers, students and parents who believe that the current testing mania in most schools is not helping the learning process.  In fact, it may be a hindrance to good teachers who fail to deliver the scores, and to students who may be learning how to take tests rather than learning important concepts.

Having worked extensively with test development, including reliability and validity analysis – first for students at San Jose State, then later with employees at an insurance company – I can state with confidence that none of these student tests has been validated.  This means there are no studies showing a relationship between higher test scores and whatever criteria the tests are meant to predict.  Would it be better jobs?  More money?  Better quality of life?  How about college, or even high school, graduation?

A validity study first requires defining the criterion.  Let’s say we want a test that will predict high school graduation – simple enough.  We then set up a hypothesis along the lines of:  “Students scoring above a certain point on this test will have a significantly higher rate of high school graduation.”  Here, “significant” means the difference is larger than chance alone would produce, and how much larger is established before the study begins.
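To make that concrete, here is a minimal sketch of how such a hypothesis might be checked with a simple chi-square test.  Every number in it – the cutoff, the group sizes, the graduation counts – is made up purely for illustration, not drawn from any real study.

```python
from scipy.stats import chi2_contingency

# Hypothetical cohort, split at a cutoff score chosen before the study.
# Rows: scored above cutoff / scored at or below cutoff
# Columns: graduated / did not graduate (all counts are invented)
observed = [
    [450, 50],   # above cutoff: 90% graduated
    [400, 100],  # at or below cutoff: 80% graduated
]

chi2, p_value, dof, expected = chi2_contingency(observed)

# Compare the p-value to a threshold fixed before the study (e.g., 0.01).
alpha = 0.01
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Graduation rates differ significantly between score groups.")
else:
    print("No significant difference at the pre-established threshold.")
```

The point of the sketch is simply that the threshold is set in advance and the data must decide whether the score groups really differ.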

When we design such a study, we have to include a method of tracking these students for four or more years, and then matching their earlier test scores to the outcome once we know whether they did or did not graduate.  We may find a very significant relationship, or we may find little or no correlation, meaning the test did not work for our purposes.  But this is the ONLY way to validate the tests.
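As a rough illustration of that tracking-and-correlation step, the sketch below pairs each student’s earlier test score with a later graduation outcome and computes a point-biserial correlation.  The scores and outcomes are invented for illustration only.

```python
from scipy.stats import pointbiserialr

# Hypothetical records gathered after following a cohort for several years:
# (earlier test score, graduated? 1 = yes, 0 = no)
records = [
    (520, 1), (610, 1), (480, 0), (700, 1), (450, 0),
    (590, 1), (530, 1), (410, 0), (660, 1), (500, 1),
]

scores = [score for score, _ in records]
graduated = [outcome for _, outcome in records]

# Point-biserial correlation relates a continuous score to a binary outcome.
# A value near zero would mean the test tells us little about graduation.
r, p_value = pointbiserialr(graduated, scores)
print(f"correlation = {r:.2f}, p = {p_value:.4f}")
```

A strong, significant correlation would support using the test for this purpose; a weak one would tell us the test did not work for what we wanted.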

Since students are only now taking the new tests, there has been no chance to track them and compare their scores with future data.  For this reason, I feel it is important to treat these tests as data-gathering instruments, and definitely not as the basis for decisions about student, teacher, school or district performance.  If researchers can gain access to these scores and track the students, we may have good validity studies in a few years – and THEN we can decide how to use the tests (or discard them).

In the meantime, I feel testing is critical and useful for measuring learning for each student as long as the results are used for feedback to improve the learning program for that student.  We want to see growth and progress.  And, we want the opportunity to step in when it does not occur.  We may also want to compare aggregate scores between classes, schools, districts or countries to see if we can learn better ways of doing things, but the focus should be on improvements in scores, not whether they meet some external benchmark.

At Pinecone, we rely on the results of the weekly tests in our booklets, along with homework and class performance.  Does the student need more practice?  An explanation of a particular concept?  Or has the student mastered the concept and can comfortably move on?

While we do not have rigorous studies (because we do not have access to students’ future grades and scores), we have plenty of anecdotal data showing the effectiveness of our programs across several indicators of success:  higher grades, honor rolls, graduations, post-grad work, and even great careers.  You can check the testimonials on our website.

I would agree with Obama that we need to address this testing frenzy and the misapplication of results.  Hopefully, the Smarter Balanced assessments will yield good information to help each teacher provide, and each student receive, an enhanced education.