The MILEX workshop, Measuring Up: Tools for Assessing Information Literacy, held Friday, Nov. 9 at the Loyola Graduate Center, was very productive! It gave me a new and impassioned perspective on what is usually considered a very dry subject. Throughout the entire workshop, the presenters reinforced the idea that assessment just boils down to helping students learn better.
At 9:30 AM, after coffee and a delicious continental breakfast, keynote speaker Patricia Dwyer, the Associate Vice President for Academic Affairs at the College of Notre Dame of Maryland, was the first to stress that assessment is all about student learning. I liked the way she outlined creating a culture of assessment, especially the way she described enabling a safe environment to honestly assess student outcomes. I took this to mean that it is okay if low assessments pop up as opportunities for improving instruction. After all, nobody’s perfect!
In the information literacy assessment culture of my academic library, I sometimes think that we too often find fault with the wording of survey questions whenever there is a low percentage score and don’t own up to potential weaknesses in our instruction program. That’s not being totally honest, is it? Thanks, Patricia Dwyer, for the improvement tips!
The late morning program, The Nuts and Bolts, was co-presented by Beth Mulherrin, Academic Director of the Library Skills 150 Program at University of Maryland University College; Susan Cooperstein from the Loyola Notre Dame Library; and Thomas Arendall-Salvetti, Instruction Coordinator at the University of Baltimore Langsdale Library. They reviewed a variety of assessment tools and effectively organized them into three categories:
· Knowing (objective assessment – Beth)
· Showing (performance assessment – Susan)
· Doing (authentic task assessment – Thomas)
Beth Mulherrin discussed the advantages and disadvantages of objective assessment tests. She noted that they are ideal for assessing large groups on an institutional level and serve as good benchmarks for comparison, offering consistency across cohorts, institutions, consortia, etc. The tool example that Beth gave was Project SAILS (Standardized Assessment of Information Literacy Skills). The main disadvantage of such objective instruments, as Beth pointed out, is the “multiple guess” format common to standardized tests. She also pointed out a couple of flaws in the SAILS instrument, where questions were vague and their responses hardly measurable.
Susan Cooperstein discussed performance assessment and gave iSkills as a relevant example. Formerly called the ICT Literacy Assessment, this ETS tool measures information, communication, and technology literacies. Susan showed us the iSkills tour. Questions and scenarios in this performance-based assessment tool require students to demonstrate real-world tasks focused on life-long learning, but not necessarily library research skills. Susan mentioned that iSkills was often rejected by librarians for just that reason and challenged us to widen our perspectives.
In the final segment of Nuts and Bolts, Thomas Arendall-Salvetti covered authentic task assessment, which measures actual class work. He stressed, however, that this is not the same as summatively grading individual assignments; rather, it means looking at student outcomes as a whole in a formative manner. The advantage of authentic task assessment is that it is a good measure of higher-order information literacy skills. Among examples of authentic task assessment tools, Thomas mentioned research logs and annotated bibliographies. I can really relate to these as a faculty instructor in charge of multiple course sections where I assign both research logs and annotated bibliographies. I would assert that annotated bibliographies are the best assessments of higher-order information literacy skills because they require students to find, access, properly cite, and thoroughly evaluate information sources, and then annotate each source to show how it is synthesized into their information need. Doesn’t that cover just about every tenet of information literacy? Thomas also mentioned how important rubrics are as aids in authentic assessment.
After a delicious lunch of catered sandwiches, fresh salads, and a cookie assortment, we put a few ideas from the late morning Nuts and Bolts session into practice in a hands-on activity called The Building. Groups of attendees cleared away the lunch dishes to make way for wireless laptops and scenarios for which we had to develop assessment tools on the fly. My table worked on measuring students’ ability to discern between scholarly and popular journal sources. We came up with a couple of survey questions that were okay, but a group at another table, led by Ginny Polley from the Villa Julie College Library, astounded us all with an elegant rubric measuring the effectiveness of a thesis statement!
The final event of the day was the impassioned presentation of Dr. Marguerite Weber, Director of the First and Second Year Program at the University of Baltimore. In her Closing the Loop presentation, she approached assessment from an ethical perspective as the “right thing to do,” harking back to our initial passion for higher education and helping students learn. Marguerite stressed that assessment is the passion-driven way to help students learn better. I enjoyed her unique storytelling approach to assessment: she asked me to read an excerpt from Saint-Exupéry’s Little Prince, and Ginny Polley to read from a German fairy tale I have yet to identify (although I took grad school classes in German fairy tales and really should know!).
By 3 PM the program was over. In short, this was a wonderful and empowering workshop! Please leave your comments below, or contact me by email if you’d like to find out how you can contribute reports or entries anytime as a MILEX Member Blogger.