Information Seeking in the Age of Agentic AI

A Half-Day Tutorial at ACM SIGIR CHIIR'26

March 22--26, 2026 | Seattle, WA, USA

Abstract

Agentic AI systems are changing how people seek and use information. However, many common methods for studying, building, and assessing these systems were developed for more static settings, and they often miss the interactive, temporal, and evidence-driven dynamics of real information seeking. This half-day tutorial equips the CHIIR community with a concise, practice-oriented methodology for designing and evaluating information-seeking agents. We first establish a shared vocabulary for agentic systems and connect it to user-centered IR constructs. We then show how to design agent workflows that elicit effective evidence seeking under temporal change, including planning, tool choice, and grounding. Finally, we introduce trace-based rubrics that score correctness, evidence support, sufficiency, and cost. Short case studies and optional demonstrations using open frameworks (for example, Perplexica, local LLMs via Ollama, and metasearch engines such as SearXNG) illustrate how these ideas map to real systems. Attendees will receive reusable materials, including slides and selected supplemental resources (for example, example traces and optional demo notebooks), suitable for research and teaching. The tutorial assumes familiarity with core IR concepts but does not require prior experience with agentic frameworks.

Presenters

Preetam Dammu

University of Washington, Information School

Preetam Dammu is a Ph.D. candidate in Information Science at the University of Washington. He works at the intersection of Information Retrieval and Generative AI, studying how people and AI systems seek, verify, and use information in dynamic, open-world environments. His current research focuses on making information-seeking agents and retrieval-augmented systems more reliable, auditable, and safe, with an emphasis on evidence-grounded behavior, robustness to changing information, and careful evaluation in real-world settings. His work appears in venues including SIGIR, WSDM, EMNLP, IJCAI, and WebConf, and has also received broader media attention through MIT Technology Review. He brings experience from both academia and industry research, including roles at UW, Amazon Science, and AWS AI, and is an inventor on multiple U.S. patents.

Tanya Roosta

UC Berkeley School of Information & Amazon

Tanya is a senior science manager at Amazon, working on generative AI techniques for natural language processing and information retrieval problems, and leading feature development for various aspects of Amazon Shopping. She concurrently holds a lecturer position at the UC Berkeley School of Information. Prior to Amazon, she was the lead research scientist at an early-stage fintech startup, working on efficient topic modeling, sentiment analysis, and social media trending-topic detection. Her research used deep neural networks and advanced statistical modeling, and the resulting features were implemented through AWS APIs. Tanya also has over nine years of experience in quantitative finance and investment banking, including roles as director of risk and finance analytics at Moody's, quantitative researcher in the Economics Department of the Federal Reserve Bank of San Francisco, and quantitative modeler for systematic portfolio management at Allianz. She holds a Ph.D. in Electrical Engineering, a Master's in Mathematical Finance, and a Master's in Statistics. She has published in several conferences and journals, and holds patents from her industry work.

Learning Outcomes

The tutorial is designed for practitioners and researchers interested in understanding how to leverage and evaluate agentic systems for information seeking. It is relevant to all conference attendees, including students as well as early-career and experienced researchers.

This tutorial will provide attendees with:

  • An understanding of agentic workflows for information seeking and how they connect to CHIIR constructs;
  • Insights into designing tasks that emphasize information seeking over summarization;
  • Methods for evaluating agentic systems using trace-aware rubrics and diagnostics;
  • Practical exposure to open frameworks and tools for prototyping agentic IR scenarios.
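To make the trace-aware rubric idea concrete, the sketch below scores a single agent trace on the four dimensions named in the abstract (correctness, evidence support, sufficiency, and cost). The schema and function names are illustrative assumptions for this page, not the tutorial's actual materials:

```python
from dataclasses import dataclass, field

@dataclass
class TraceStep:
    """One observable action in an agent's trace (hypothetical schema)."""
    action: str                 # e.g., "plan", "search", "read", "answer"
    cost_tokens: int = 0
    evidence_ids: list = field(default_factory=list)

@dataclass
class Trace:
    steps: list
    answer_correct: bool        # judged against a gold answer
    cited_evidence: set         # evidence ids cited in the final answer
    required_evidence: set      # evidence a sufficient answer must cover

def score_trace(trace: Trace, token_budget: int = 4000) -> dict:
    """Score one trace on correctness, evidence support, sufficiency, and cost."""
    gathered = {e for step in trace.steps for e in step.evidence_ids}
    cited = trace.cited_evidence
    return {
        # Did the final answer match the gold label?
        "correctness": 1.0 if trace.answer_correct else 0.0,
        # Fraction of cited evidence the agent actually retrieved (grounding).
        "evidence_support": len(cited & gathered) / len(cited) if cited else 0.0,
        # Fraction of required evidence covered by the citations.
        "sufficiency": (len(cited & trace.required_evidence)
                        / len(trace.required_evidence)) if trace.required_evidence else 1.0,
        # Normalized token spend against a budget (lower is cheaper).
        "cost": min(1.0, sum(step.cost_tokens for step in trace.steps) / token_budget),
    }

trace = Trace(
    steps=[TraceStep("search", 200, ["d1", "d2"]), TraceStep("answer", 300)],
    answer_correct=True,
    cited_evidence={"d1"},
    required_evidence={"d1", "d2"},
)
scores = score_trace(trace)
# correctness 1.0, evidence_support 1.0, sufficiency 0.5, cost 0.125
```

The example trace cites only one of the two required documents, so it is fully grounded (support 1.0) yet only half sufficient (0.5), which is exactly the kind of distinction a trace-aware rubric surfaces.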

Tentative Schedule

The following schedule reflects a typical half-day flow; minor adjustments may be made on site for pacing and audience interaction.

Time          Activity
00:00--00:15  Goals, audience, and participation (talk)
              Objective: align on scope, outcomes, and how to participate.
00:15--00:35  Module 1: Information seeking and the agent turn (talk)
              Objective: motivate the agentic shift for CHIIR and define key terms.
00:35--01:10  Module 2: Agentic AI foundations (talk)
              Objective: establish shared vocabulary and evaluation dimensions.
01:10--01:25  Activity A: workflow sketch or trace anatomy (guided activity)
              Objective: surface design choices and observable agent behaviors.
01:25--01:35  Break
01:35--02:05  Module 3: Agentic AI in IR and Generative IR (talk)
              Objective: present workflow patterns for planning, tool choice, and grounding under temporal change.
02:05--02:35  Module 4: Measuring what matters (talk + short demo)
              Objective: introduce trace-aware rubrics for correctness, evidence support, sufficiency, and cost.
02:35--02:55  Activity B: answer evaluation (guided activity + pair debrief)
              Objective: practice applying lightweight rubrics and articulating rationales.
02:55--03:00  Closing: readings, materials, and directions (talk)
              Objective: provide takeaways and pointers for reuse.
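The workflow pattern covered in Module 3 (plan, choose a tool, ground the answer in retrieved evidence) can be sketched as a toy loop. Everything below is an illustrative assumption: the stub `web_search` stands in for a real metasearch backend such as SearXNG, the corpus is invented, and the naive planner is only a placeholder:

```python
def web_search(query: str) -> list:
    """Stub search tool; a real setup might call a metasearch engine here."""
    corpus = {
        "release date": [("doc-1", "The framework's 2.0 release shipped in January 2026.")],
        "maintainer": [("doc-2", "The project is maintained by the Example Lab.")],
    }
    # Return snippets for the first corpus topic mentioned in the query.
    return next((hits for topic, hits in corpus.items() if topic in query), [])

def plan(question: str) -> list:
    """Naive planner: split a compound question into one search step per clause."""
    return [clause.strip() for clause in question.split(" and ")]

def run_agent(question: str) -> dict:
    """Plan, call the search tool per sub-query, and ground the answer in evidence."""
    evidence = []
    for sub_query in plan(question):
        evidence.extend(web_search(sub_query))
    # Grounding: the answer is composed only of retrieved snippets, each citable.
    return {
        "answer": " ".join(snippet for _, snippet in evidence),
        "citations": [doc_id for doc_id, _ in evidence],
    }

result = run_agent("What is the release date and who is the maintainer?")
# result["citations"] == ["doc-1", "doc-2"]
```

Even this toy version makes the trace observable (planned sub-queries, tool calls, cited documents), which is what the evaluation rubrics in Module 4 operate on.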

Intended Audience and Prerequisites

Researchers, students, and practitioners in IR, HCI for IR, evaluation, and applied ML who are interested in user-centered study of agentic information access. Familiarity with core IR concepts is assumed. No prior experience with agentic frameworks is required.

Materials

We will release slides and selected supplemental materials (e.g., example traces and optional demo notebooks) on this website, and we will update them before the tutorial.

Slides

TBD

Example Traces

TBD

Demo Notebooks

TBD

Reading List

TBD