Considering AI within the Informational Swirl
This reflection began as coursework during Brown University’s Effective and Ethical AI program and has since been expanded with new perspectives.
When Information Architectures Drift
As a communications and content-systems professional, I’ve seen how fragile information architectures can be. Even well-designed systems age faster than we expect.
Context shifts, attention wanders, and language evolves faster than our taxonomies. As people move on and collective focus shifts, the mental landscape changes too. Categories that once felt natural start to feel misaligned.
AI in the Swirl of Change
Now we’re adding AI to this already shifting terrain. It doesn’t stop the drift; it joins it. Sometimes it’s a stabilizing collaborator. Other times it reveals how fragile the foundations have become.
AI can show us where the structure is weak, but it still takes human judgment to rebuild it. When the goal becomes to “not have to understand,” we’ve missed something essential. The point of clarity isn’t to be done thinking—it’s to keep thinking together.
When Structures Shape Perception
That’s true whether we’re working with human colleagues or AI. I’ve seen this play out again and again. Teams set out to create structures—or inherit ones they barely understand—and somewhere along the way the structure itself starts to shape what people see, and what they stop seeing.
IA Gaps in the Field
For example, at AppDynamics, I advocated for a flexible, faceted knowledge base that could keep pace with our rapidly evolving technology and vocabulary.
That effort eventually stalled, partly because the technology and terminology kept changing, and partly because the subject-matter experts we relied on were stretched thin. As a content expert rather than a technical one, I often found myself translating between languages and priorities, trying to surface shared understanding in limited time.
Looking back, I can see how conversational AI might have changed those dynamics. It could have served as an intermediary—helping me prompt more focused exchanges with busy SMEs or highlight emerging patterns in how people searched, asked, and learned. The goal wouldn’t have been to replace those human conversations, but to make them more possible. With the right tools, we might have found a clearer path through the ambiguity together.
Maybe that’s what collaboration with AI really is: not outsourcing clarity, but widening the circle of who gets to participate in it.