Activity D: Agentic Navigation
A hands-on Jupyter notebook that demonstrates why some questions require
multi-step navigation: following links programmatically rather than relying on
a single search or API call. The agent is implemented with the
ReAct framework (Reason + Act) using plain `requests` and `BeautifulSoup`;
no external agent library is required.
How to run this notebook
Option A — Google Colab (easier)
- Click the Open in Colab button above, or choose File → Open in Colab from the top-left menu. Use a personal Google account rather than a work or university account; institutional accounts sometimes block third-party Colab access.
- Make it editable: File → Save a copy in Drive. The shared notebook is view-only; saving a copy to your own Drive gives you a personal, fully editable version.
- Enable a GPU: Runtime → Change runtime type → T4 GPU → Save. This activity uses Phi-3.5-mini (3.8 B params), which needs ~4 GB VRAM; a T4 is required for comfortable speed.
- Run cells top-to-bottom with Runtime → Run all, or step through them with Shift+Enter.
Option B — Local Jupyter (laptop)
- Download the notebook from the Drive link (File → Download → Download .ipynb).
- Install dependencies: `pip install transformers torch accelerate requests beautifulsoup4 ddgs`
- Open with `jupyter notebook activity_d.ipynb` or in VS Code.
- On CPU the model loads slowly and each generation step is much slower than on GPU; a local GPU or the Colab option is recommended.
What you will learn
- Why some questions require multi-step navigation and cannot be answered by a single search query.
- How the ReAct framework structures agent reasoning: alternating `Thought:`/`Action:`/`Observation:` until an `Answer:` is reached.
- How to implement three navigation tools (`navigate`, `find_link`, `read_page`) with `requests` and `BeautifulSoup`.
- How to replicate the agent's steps manually in a browser and compare the traces.
- What evaluation looks like at Level 5: faithfulness, step efficiency, and robustness to page structure changes.
Complexity Ladder level covered
| Level | Name | Key property |
|---|---|---|
| 5 | Agentic Navigation | Multi-step navigation guided by a Reason+Act loop. The agent discovers the target URL by reading pages and following links — it is not given the URL directly. Evaluation requires checking faithfulness, step efficiency, and robustness. |
Notebook outline
- Setup: install 5 packages; load `Phi-3.5-mini-instruct` (T4 recommended; Qwen2-0.5B as CPU fallback).
- Part 1 — Search fails: run a DuckDuckGo query for the latest commit → results point to blog posts and the repo homepage, not the commit page.
- Part 2 — ReAct agent: define `navigate()`, `find_link()`, `read_page()` tools; implement the ReAct loop; run the agent and watch the full Thought → Action → Observation trace.
- Part 3 — Browser comparison: step-by-step instructions to replicate the agent's navigation manually; observations table comparing search vs. navigation.
- Recap: full Complexity Ladder table; Level 5 evaluation criteria; transition to Level 6 (Corpus Sensemaking).
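The ReAct loop in Part 2 can be sketched as follows. This is an illustrative skeleton, not the notebook's code: `model` is any callable that emits `Thought:`/`Action:` text, `tools` is a name-to-function dict, and the action syntax assumed here (`Action: tool('arg')`) may differ from the notebook's.

```python
import re

# Matches e.g.  Action: navigate('https://example.com')
ACTION_RE = re.compile(r"Action:\s*(\w+)\((.*?)\)")

def react_loop(model, tools, question, max_steps=8):
    """Alternate model generations with tool calls until an Answer: appears."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = model(transcript)          # model emits Thought: ... Action: ...
        transcript += step + "\n"
        if "Answer:" in step:             # terminal state: final answer reached
            return step.split("Answer:", 1)[1].strip()
        m = ACTION_RE.search(step)
        if not m:
            transcript += "Observation: could not parse an action.\n"
            continue
        name, arg = m.group(1), m.group(2).strip("'\" ")
        obs = tools[name](arg) if name in tools else f"unknown tool: {name}"
        transcript += f"Observation: {obs}\n"  # feed result back to the model
    return None  # step budget exhausted without an answer
```

The `max_steps` cap is what makes step efficiency measurable at Level 5: an agent that wanders never reaches `Answer:` within budget.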
Requirements
No API keys or signups needed:
- Web navigation: `requests` + `beautifulsoup4` — fetch and parse any public HTML page.
- Search demo: `ddgs` — DuckDuckGo search, free, no key.
- Default model: `microsoft/Phi-3.5-mini-instruct` — 3.8 B params, ~7 GB, T4 GPU recommended.
- CPU fallback: `Qwen/Qwen2-0.5B-Instruct` — runs on any laptop; ReAct format adherence is less reliable.
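Choosing between the default and fallback models can be automated with a small helper (the helper is ours, not part of the notebook; only the two model IDs come from the list above):

```python
def pick_model() -> str:
    """Return the GPU default if CUDA is available, else the CPU fallback."""
    try:
        import torch
        has_gpu = torch.cuda.is_available()
    except ImportError:
        has_gpu = False  # no torch installed: assume CPU-only environment
    return ("microsoft/Phi-3.5-mini-instruct" if has_gpu
            else "Qwen/Qwen2-0.5B-Instruct")
```

The import is wrapped in `try`/`except` so the check degrades gracefully on machines without PyTorch installed.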
Citation
If you use this tutorial in your work or teaching, please cite:
@inproceedings{dammu2026information,
title={Information Seeking in the Age of Agentic AI: A Half-Day Tutorial},
author={Dammu, Preetam Prabhu Srikar and Roosta, Tanya},
booktitle={Proceedings of the 2026 Conference on Human Information Interaction and Retrieval},
pages={429--430},
year={2026}
}
View on ACM DL · Contact: Preetam Dammu <preetams@uw.edu>, PhD Candidate, University of Washington