Recent MSA testing spurs questions


March 16, 2004 | By Kedamai Fisseha

State official offers some answers


In accordance with federal legislation passed last year, which established new accountability regulations for public schools, Maryland has implemented a new program of standardized testing. On Feb. 25 and 26, Blair sophomores took one of these tests, the Maryland State Assessment (MSA), now in its second year of administration. The state also requires a similar group of tests called the High School Assessments (HSA), which have been in use since the 2001-2002 school year and which Blair students will take during the last week of May.

After collecting questions from Blair teachers, students and administrators, Silver Chips spoke with Ronald A. Peiffer, Ed.D., Deputy State Superintendent for Academic Policy, about Maryland's standardized testing program and gave him the chance to provide the Blair community with some answers.

QUESTIONS SUBMITTED BY TEACHERS

Are there any realistic plans to make students take the test seriously?

Peiffer: Once the HSAs are tied to the diploma, students will take them seriously. This happened with the Maryland Functional Tests when they were first tied to the diploma and also in other states as graduation tests came online. With the MSAs, we're in a bit of a dilemma because we were required by federal law to put a separate test in at the high school level. Our original assumption was that the English I test, which is part of the HSA, would do the job. We were told by the federal government at a fairly late date that that would not work and that we would have to develop our own reading test. So we built it very quickly, and we got it online and in place in the time we had. It is possible to tie the MSA into the HSA, and we're looking at some options down the road. This is not to say that the reading test will become an HSA, required for graduation, but it certainly is an option.

How do you personally feel about the MSA?

Peiffer: It is a good test. It is constructed well and seems to be pretty well accepted around the state. The results on the tests mirror the results on other tests. The same groups who do poorly on the MSAs also do poorly on these other tests.

What happens if (once) a school fails?

Peiffer: The federal law and state regulations call for a series of increasing consequences, such as revamping school programs and, ultimately, staffs. The current accountability program lets schools see the specific deficit area and the subgroup having a problem, and it permits them to focus their remedies on those deficits rather than calling for a complete overhaul. It is a good system overall. The schools never leave the oversight of the school system, but for schools at the extreme end of the school improvement process, their plans must be approved by the State Board. Schools failing to make adequate yearly progress in one or more areas are not necessarily in deep trouble. The law simply asks them to understand where their programs aren't working and to remedy them.

Why are special education and ESOL students included in the testing? What is the rationale for presenting the same test to such a wide assortment of students?

Peiffer: Federal law requires that they be included in assessment. Federal law requires that you test all students on the same content and at the same level of rigor. This means that we cannot have lower expectations for any students. That part of the plan is good. The fact is that some students, particularly English Language Learner (ELL) students, cannot show their best reading skills on some English-language tests. The U.S. Department of Education (USDE) has recently given us some flexibility to adjust the way we include ELL students in our accountability program and the flexibility to develop tests that respond better to their needs. These are in the process of being developed. Schools should find a fairer assessment of ELL scores in this year's accountability reports, with scores bumping up closer to the levels of other groups.

In your personal opinion, how realistic is the No Child Left Behind (NCLB) program?

Peiffer: It is based on the right premises, but there are some glitches in the design. Maryland feels that the program deserves a chance, and we will do our best to implement it fairly. We are at the same time on the lookout for glitches and have a number of issues on our radar screen. By April 1, 2004, we are submitting a list of changes in our state NCLB plan to the U.S. Department of Education for approval. We have a good sense that they will agree to these changes. We will continue to look for ways to improve it.

Have you had any personal experience with the classroom dynamics of the MSA? Do you have firsthand experience with student reaction to the test? If so, what is your impression? If not, what have you heard?

Peiffer: While I have not been in classrooms for the administration of the MSA this year, I spoke to many teachers and principals and to Maryland State Department of Education (MSDE) staff who were in schools during testing. This was a smooth administration all the way around, and we are very appreciative of the many students and teachers who have made the best of it.

QUESTIONS SUBMITTED BY STUDENTS

What is the point of giving us such simple questions?

Peiffer: The test needs to be easy enough that, by 2014, all students can achieve a proficient score. Teachers helped us set the passing scores for the MSA tests and for the HSA tests. The test should have a range of questions, from very easy to more challenging.

Why did the test have to take so long?

Peiffer: The Geometry test and the Algebra test are both HSA designs. They include enough questions to provide valid and reliable data overall and at the sub-score level for students. Fewer questions would make the results invalid. There are additional questions in each HSA (including the Geometry MSA test) that are field-test items intended to count the following year or so. By field-testing items in each test, we can release and make public one full form of each test. At www.mdk12.org, you can see one form for each of the tests since 2000.

How are we graded? What is the scoring scale?

Peiffer: Tests frequently have about 75 points or items each, and the total value of those items is put on an 800-point scale. The multiple-choice items are scored by machine. The short- and long-answer items are scored by trained scorers. Usually, two scorers score each item. If they do not agree on the score, a third person, a supervisor, scores it. Each short-answer item is usually worth about two points, with partial credit permitted. The passing scores fall around 400 or so, depending on the test. In other words, for most of the tests, the HSAs in particular, a student must answer about half of the points or items correctly to receive a passing score.

QUESTIONS SUBMITTED BY ADMINISTRATORS

If so many of our students who are just getting into this school, and in some instances this school system, are expected to pass this test, how does this properly reflect our school's performance?

Peiffer: That's a good question; let me explain the answer. To combat this issue, there are some new rules. Any student who comes into a school after Sept. 30 will not be included in the testing. The same applies to limited-English students within their first year of coming to the United States. These limits help ensure that schools are evaluated fairly.



Kedamai Fisseha sorely misses the computer lab where Silver Chips was born and is daily reborn. He is currently living and writing from London, England, where he is glad for the chance to continue his participation in the organization.
