Subtopic Deep Dive

Information Literacy Assessment
Research Guide

What is Information Literacy Assessment?

Information Literacy Assessment encompasses standardized tools, rubrics, and methodologies for measuring information literacy competencies at individual and programmatic levels.

Researchers develop and validate instruments such as the Information Literacy Test (ILT) while addressing reliability and validity concerns. Oakleaf (2008) maps assessment approaches, weighing their dangers and opportunities in higher education contexts (166 citations). Mahmood (2016) reviews 53 studies comparing self-reported and demonstrated skills, confirming the Dunning-Kruger effect (193 citations).

15 Curated Papers · 3 Key Challenges

Why It Matters

Robust assessment supports evidence-based library instruction improvements and accreditation. Oakleaf (2009) introduces the ILIAC cycle for iterative instructional enhancements (126 citations). Sparks et al. (2016) recommend next-generation digital literacy assessments for higher education success (124 citations). Schilling and Applegate (2012) compare methods revealing disparities in attitudes, skills, and behaviors, guiding librarians toward effective evaluations (104 citations).

Key Research Challenges

Dunning-Kruger Overestimation

Individuals overestimate their information literacy skills relative to their demonstrated abilities. Mahmood (2016) systematically reviews 53 studies confirming this effect (193 citations). Valid assessments must correct for self-report bias.
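The pattern Mahmood (2016) reviews can be illustrated with a toy calculation: self-reported skill exceeds demonstrated skill, most sharply among the weakest performers. The scores below are invented for the sketch, not drawn from any study.

```python
# Toy illustration of the Dunning-Kruger pattern: compare self-reported
# and demonstrated scores, splitting respondents by actual performance.
# All numbers are invented for the sketch.

scores = [  # (self_reported, demonstrated), both on a 0-10 scale
    (8, 4), (7, 6), (9, 5), (6, 6), (8, 3), (7, 7),
]

by_performance = sorted(scores, key=lambda s: s[1])  # weakest first
half = len(by_performance) // 2

def mean_gap(pairs):
    """Average overestimation (self-rating minus demonstrated score)."""
    return sum(claimed - demo for claimed, demo in pairs) / len(pairs)

print(f"gap, weakest half:   {mean_gap(by_performance[:half]):+.2f}")
print(f"gap, strongest half: {mean_gap(by_performance[half:]):+.2f}")
```

With these invented numbers the weakest half overestimates by about 4.3 points on average versus about 0.3 for the strongest half, which is the shape of the effect the reviewed studies report.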

Instrument Reliability and Validity

Developing standardized tools that align with the ACRL standards faces psychometric hurdles. Cameron et al. (2007) validate the web-based ILT through rigorous testing (123 citations). Reliability across diverse populations remains inconsistent.
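One standard internal-consistency check reported in instrument validation work of this kind is Cronbach's alpha. The sketch below computes it for an invented 4-item, 5-respondent response matrix; it is not Cameron et al.'s data or procedure, just the textbook formula.

```python
# Cronbach's alpha: internal consistency of a multi-item instrument.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
# The response matrix below is invented for illustration.

responses = [  # rows = respondents, columns = test items (1 = correct)
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
]

def variance(xs):
    """Population variance (divide by n, the usual convention here)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

k = len(responses[0])                       # number of items
item_vars = [variance([row[i] for row in responses]) for i in range(k)]
totals = [sum(row) for row in responses]    # per-respondent total score
alpha = k / (k - 1) * (1 - sum(item_vars) / variance(totals))
print(f"Cronbach's alpha: {alpha:.3f}")
```

For this toy matrix alpha comes out around 0.70, below the 0.8 often expected of standardized tests, which is the kind of shortfall that makes reliability across populations hard to establish.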

Performance Measure Gaps

Disparities persist between attitudes, skills, and actual information use. Schilling and Applegate (2012) compare evaluation methods and show significant gaps (104 citations). Aligning faculty perceptions with instructional practice remains challenging, as noted by DaCosta (2010) (101 citations).

Essential Papers

1. Do People Overestimate Their Information Literacy Skills? A Systematic Review of Empirical Evidence on the Dunning-Kruger Effect

Khalid Mahmood · 2016 · Communications in Information Literacy · 193 citations

This systematic review analyzed 53 studies that assessed and compared people's self-reported and demonstrated information literacy skills. The objective was to collect empirical evidence on the...

2. Dangers and Opportunities: A Conceptual Map of Information Literacy Assessment Approaches

Megan Oakleaf · 2008 · portal: Libraries and the Academy · 166 citations

The culture of assessment in higher education requires academic librarians to demonstrate the impact of information literacy instruction on student learning. As a result, many librarians seek to ga...

3. Information literacy assessment

Andrew Walsh · 2009 · Journal of Librarianship and Information Science · 138 citations

Interest in developing ways to assess information literacy has been growing for several years. Many librarians have developed their own tools to assess aspects of information literacy and have writ...

4. The information literacy instruction assessment cycle

Megan Oakleaf · 2009 · Journal of Documentation · 126 citations

Purpose The aim of this paper is to present the Information Literacy Instruction Assessment Cycle (ILIAC), to describe the seven stages of the ILIAC, and to offer an extended example that demonstra...

5. Assessing Digital Information Literacy in Higher Education: A Review of Existing Frameworks and Assessments With Recommendations for Next‐Generation Assessment

Jesse R. Sparks, Irvin R. Katz, Penny Beile · 2016 · ETS Research Report Series · 124 citations

Abstract Digital information literacy (DIL)—generally defined as the ability to obtain, understand, evaluate, and use information in a variety of digital technology contexts—is a critically impor...

6. The Development and Validation of the Information Literacy Test

Lynn Cameron, Steven L. Wise, Susan Lottridge · 2007 · College & Research Libraries · 123 citations

The Information Literacy Test (ILT) was developed to meet the need for a standardized instrument that measures student proficiency regarding the ACRL Information Literacy Competency Standards for H...

7. Strengthening Connections Between Information Literacy, General Education, and Assessment Efforts

Ilene F. Rockman · 2002 · Illinois Digital Environment for Access to Learning and Scholarship (University of Illinois at Urbana-Champaign) · 118 citations

Academic librarians have a long and rich tradition of collaborating with discipline-based faculty members to advance the mission and goals of the library. Included in this tradition is the area of ...

Reading Guide

Foundational Papers

Start with Oakleaf (2008, 166 citations) for a conceptual map of assessment approaches; Oakleaf (2009, 126 citations) for the ILIAC cycle; Cameron et al. (2007, 123 citations) for ILT development and validation.

Recent Advances

Mahmood (2016, 193 citations) on Dunning-Kruger evidence; Sparks et al. (2016, 124 citations) on digital frameworks.

Core Methods

Web-based multiple-choice tests (ILT, Cameron et al. 2007); performance rubrics (Walsh 2009); pre-post interventions (Mery et al. 2012); systematic reviews (Mahmood 2016).
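The rubric approach surveyed by Walsh (2009) typically rates each criterion on a named performance level, maps levels to points, and aggregates. A minimal sketch, with invented criteria and level names:

```python
# Hypothetical rubric scoring: rate each criterion on a named level,
# map levels to points, and average. Criteria and levels are invented
# for illustration, not taken from any published rubric.

LEVELS = {"beginning": 1, "developing": 2, "proficient": 3, "advanced": 4}

def score_rubric(ratings):
    """Average the point values of one student's per-criterion ratings."""
    points = [LEVELS[level] for level in ratings.values()]
    return sum(points) / len(points)

student = {
    "identifies information need": "proficient",
    "evaluates sources":           "developing",
    "cites ethically":             "advanced",
}
print(f"rubric score: {score_rubric(student):.2f} / 4")
```

Unlike multiple-choice instruments, this yields a criterion-level profile as well as an overall score, which is why rubrics are favored for performance assessment.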

How PapersFlow Helps You Research Information Literacy Assessment

Discover & Search

Research Agent uses searchPapers and citationGraph to map the citation network around Oakleaf's (2008) widely cited assessment map; exaSearch surfaces Dunning-Kruger studies such as Mahmood (2016); findSimilarPapers extends the set to assessment tools like Walsh (2009).

Analyze & Verify

Analysis Agent applies readPaperContent to extract ILIAC stages from Oakleaf (2009), verifies claims with CoVe against the ACRL standards, and applies runPythonAnalysis to ILT validation data from Cameron et al. (2007) for statistical reliability checks using GRADE scoring.

Synthesize & Write

Synthesis Agent detects gaps in self-assessment biases from Mahmood (2016); Writing Agent uses latexEditText and latexSyncCitations to draft rubrics citing Sparks et al. (2016), with latexCompile for accreditation reports and exportMermaid for ILIAC cycle diagrams.

Use Cases

"Compare pre-post test scores in information literacy interventions like Mery et al. 2012"

Research Agent → searchPapers → Analysis Agent → runPythonAnalysis (pandas on score data) → statistical output with p-values and effect sizes.
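The statistical step of this workflow can be sketched in a few lines. The scores below are invented (not Mery et al.'s data), and the sketch uses only the standard library: it computes the mean gain, the paired t statistic, and Cohen's d; a real run would load the assessment export and obtain the p-value from a t-distribution lookup (e.g., via scipy.stats).

```python
# Minimal pre/post comparison sketch: paired t statistic and effect
# size for matched pre- and post-instruction scores. Data invented.
import statistics as st

pre  = [55, 60, 48, 70, 62, 58, 65, 50]
post = [68, 72, 60, 78, 70, 66, 74, 59]

diffs = [b - a for a, b in zip(pre, post)]
mean_diff = st.mean(diffs)
sd_diff = st.stdev(diffs)                            # sample SD of differences
t_stat = mean_diff / (sd_diff / len(diffs) ** 0.5)   # paired t statistic
cohens_d = mean_diff / sd_diff                       # within-subject effect size

print(f"mean gain: {mean_diff:.2f}")
print(f"paired t:  {t_stat:.2f} (df={len(diffs) - 1})")
print(f"Cohen's d: {cohens_d:.2f}")
```

The t statistic and degrees of freedom are what the p-value would be computed from; reporting the effect size alongside it is what makes the comparison useful for instruction improvement decisions.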

"Generate LaTeX rubric for assessing digital information literacy frameworks"

Synthesis Agent → gap detection → Writing Agent → latexEditText + latexSyncCitations (Sparks 2016) → latexCompile → formatted PDF rubric.

"Find GitHub repos validating information literacy tests like Cameron 2007 ILT"

Research Agent → paperExtractUrls → Code Discovery → paperFindGithubRepo → githubRepoInspect → code snippets for psychometric analysis.

Automated Workflows

Deep Research conducts systematic reviews in the style of Mahmood (2016) across 50+ papers, generating structured reports with citation graphs. DeepScan applies its 7-step analysis to Oakleaf's (2008) conceptual map, with CoVe checkpoints verifying assessment-efficacy claims. Theorizer builds theory from the ILIAC cycle (Oakleaf 2009) to support programmatic improvements.

Frequently Asked Questions

What is Information Literacy Assessment?

It involves tools and methods measuring competencies per ACRL standards at individual and program levels.

What are key assessment methods?

Methods include ILT (Cameron et al., 2007), ILIAC cycle (Oakleaf, 2009), and conceptual maps (Oakleaf, 2008).

What are top papers?

Mahmood (2016, 193 citations) on Dunning-Kruger; Oakleaf (2008, 166 citations) on approaches; Walsh (2009, 138 citations) on tools.

What open problems exist?

Bridging self-reported and demonstrated skills (Mahmood 2016); standardizing digital assessments (Sparks 2016); faculty integration (DaCosta 2010).

Research Library Science and Information Literacy with AI

PapersFlow provides specialized AI tools for researchers in your field. Here are the most relevant for this topic:

Start Researching Information Literacy Assessment with AI

Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.