A follow-up to "The Human Touch: Why AI Craft beats AI Tricks"
I need to tell you something: I wrote an entire blog post about AI craft and human judgment, and then realized I had no idea how to actually practice what I was preaching. Not the first time.
You know that person who gives great advice about work-life balance while answering emails at 11 PM? Or who writes eloquently about mindfulness while scrolling Twitter during meditation? That's me with attention management. I can wax poetic about the importance of human wisdom and contextual judgment, but my actual day-to-day reality is... messier.
I'm the person who opens a browser tab to research one thing and somehow ends up reading about 15th-century manuscript illumination three hours later. I start deep work sessions with noble intentions, only to find myself "quickly checking" Insta for what turns into an hour of context-switching chaos. I write about the irreplaceable value of human attention while my own attention scatters like leaves in the wind.
So after posting "The Human Touch," I found myself staring at an uncomfortable question: How do you actually develop the sophisticated attention that lets you work with AI rather than just being guided by it? How do you cultivate the judgment and wisdom I'd been writing about?
This led me down a rabbit hole that I'm still tumbling through—one that feels too important not to share, even though I'm absolutely not an expert. What I've discovered is something I'm calling "attention architecture," and it's becoming the foundation for how I think about staying meaningfully human in an AI world.
Here's where the research got personal in a way I hadn't expected. Studies on meerkats (I know, I know) show something that completely reframed my understanding of attention: their sentinel behavior isn't just about individual survival. They're more likely to stand guard when vulnerable group members are present. Their vigilance serves the collective good, not just their own interests.
These little creatures have mastered what I'm calling "attention architecture"—the ability to adjust their cognitive aperture to match what the situation demands. A meerkat can scan the horizon for threats while simultaneously staying alert to what's happening with the family group below. They instinctively shift between narrow focus on specific dangers and broad awareness of the entire landscape.
This connects to something I touched on in my AI craft post but didn't fully grasp: the most valuable human contribution isn't competing with AI on raw processing power, but bringing distinctly human capacities like contextual judgment and collective wisdom to AI-augmented work.
But here's what I completely missed before: these capacities don't just happen. They require what I'm now thinking of as sophisticated attention architecture—the ability to consciously adjust your cognitive aperture based on what the situation demands.
The meerkats get this instinctively. I'm having to learn it the hard way.
I want to be honest about what I'm trying, because I think the internet has enough confident experts telling you what works. I'm sharing my experiments not because they're proven, but because the process itself has been valuable—even when I fail at it.
Morning Horizon Scanning (Success Rate: Maybe 30%): I'm attempting to start each day with 15-20 minutes of what I call "wide aperture" time. Industry news, broader trends, long-term goals—looking for patterns and weak signals rather than just reacting to immediate demands.
Reality check: Most mornings I still dive straight into email because it feels urgent and important. When I do manage this practice, though, the difference in my day is noticeable. I make better decisions about where to spend my time, and I catch opportunities I would have missed.
Managing Input Variety (Success Rate: Wildly inconsistent): This might be my biggest challenge. I consume a weird mix of podcasts—one day I'm deep in a series about Byzantine history, the next I'm listening to someone explain how AI coding is making programming languages obsolete and "plain English" is the new Python.
The problem is knowing when this variety is valuable cross-pollination versus when it's just intellectual scatter. Sometimes the contrast between ancient history and cutting-edge tech sparks genuine insights about human patterns and technological change. Other times it leaves me feeling like I'm floating on the surface of everything without diving deep into anything.
I'm experimenting with what I call "thematic seasons"—spending a few weeks focused on related topics, then consciously shifting to a different domain. But honestly, I'm still figuring out when I need more variety to stimulate thinking versus when I need to resist the novelty and go deeper.
Transition Rituals (Success Rate: About 40%): Between different types of work, I'm experimenting with brief pauses to consciously "adjust my settings." Before a strategic planning session, I might take a few breaths and expand my attention to include broader context. Before deep technical work, I consciously narrow down to the specific problem at hand.
This sounds simple, but it's surprisingly difficult to remember to do. I'll catch myself switching contexts abruptly and realize I forgot to "change lenses" again. But when I do remember, the quality of my work improves dramatically.
The Two-Minute Check-In (Success Rate: Rising slowly to maybe 50%): Before starting any significant task, I'm trying to pause and ask: "What aperture setting does this require?" Am I scanning for opportunities, focusing on execution, or trying to balance multiple streams?
The failures here are instructive. When I skip this step, I often find myself using the wrong type of attention for the task at hand—trying to scan broadly when I need to focus, or diving too deep when I need to maintain peripheral awareness.
Giving Myself Permission to Not Know Yet (Success Rate: Getting better at this): One practice that's been surprisingly liberating is allowing myself to say "I just don't know enough yet to dig deep—I need more time with a wide aperture." This goes against every productivity instinct I have, but I'm learning that feeling like I must produce something rarely delivers the best results. Only a good deadline does that.
Sometimes the most productive thing I can do is acknowledge that I'm still in the scanning and absorbing phase, rather than forcing myself into execution mode prematurely. It's uncomfortable because it feels unproductive, but it often leads to much better work when I finally do focus down.
This attention work is completely reshaping how I think about working with AI. Instead of just treating AI as a more sophisticated search engine or writing assistant, I'm starting to see AI interaction as requiring different types of human attention.
Wide Aperture AI Work: When I'm using AI to scan for trends, generate multiple perspectives, or explore possibility spaces, I need to maintain broad awareness. I'm looking for unexpected connections, challenging my assumptions, staying open to surprising directions. If I'm too focused, I miss the interesting tangents and novel combinations.
Narrow Aperture AI Work: When I'm using AI for specific tasks—refining ideas, working through structured problems, getting help with the occasional bit of code—I need focused attention on the details: the specific requirements and the context that only I fully understand. The AI handles the heavy lifting, but I need to maintain sharp focus on quality and nuance.
Hyperfocal AI Work: This is the sweet spot I'm still learning to find—using AI as a thinking partner where I'm simultaneously focused on the immediate problem and aware of broader implications. This requires holding multiple levels of attention simultaneously, which is challenging but incredibly powerful when it works.
The difference is stark. When I'm unconscious about my attention state, AI interactions feel mechanical and unsatisfying. When I'm intentional about matching my attention to the task, AI becomes a genuine thinking partner rather than just a fancy tool.
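If you're the kind of reader who likes things concrete, here's a toy sketch of how I've started making that choice explicit before I even open a chat window. None of this is a real tool or API; the mode names and the prompt wording are just my own labels for the framing I want to bring to the conversation.

```python
# A toy sketch, not a real tool: three "aperture" framings I might wrap
# around the same question before handing it to an AI assistant.
# The mode names and prompt wording are entirely my own invention.

APERTURE_FRAMINGS = {
    "wide": (
        "Act as a scanning partner. Surface unexpected connections, "
        "challenge my assumptions, and offer several divergent directions "
        "rather than one polished answer."
    ),
    "narrow": (
        "Act as a focused assistant. Stay strictly on the task below, "
        "ask for any missing context, and optimize for precision and quality."
    ),
    "hyperfocal": (
        "Act as a thinking partner. Help me solve the immediate problem "
        "while flagging broader implications and second-order effects."
    ),
}

def frame_prompt(task: str, mode: str = "narrow") -> str:
    """Prefix a task with the kind of attention I want to bring to it."""
    return f"{APERTURE_FRAMINGS[mode]}\n\nTask: {task}"

# The same task, framed for wide-aperture exploration.
print(frame_prompt("Review my outline for the attention architecture post", "wide"))
```

The code matters far less than the pause it forces: I have to choose a mode before I type the task.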
Here's where this connects back to my earlier post about AI craft, and where I realized how much I'd been missing. The research on collective intelligence shows that the smartest groups aren't just collections of smart individuals—they're characterized by specific interaction patterns: how well members build on each other's contributions, maintain diverse perspectives, and balance individual insight with collective wisdom.
But this only works when individuals can manage their own attention architecture first. If I'm constantly scattered, reactive, and unaware of my own cognitive state, I can't contribute meaningfully to collective intelligence—whether that collective includes other humans, AI systems, or both.
This hit me hard because I realized I'd been thinking about AI craft as primarily an individual skill. But the research suggests something more complex: our ability to work effectively with AI isn't just about personal productivity or even creative uniqueness. It's about developing the capacity to participate in hybrid human-AI collective intelligence.
And that requires a level of attention sophistication I'm only beginning to understand.
Digging into this led me to some thinkers who are reframing how I understand attention and craft:
Jenny Odell's "How to Do Nothing" argues for "resisting the attention economy" not through productivity optimization but through reclaiming ownership of our attention itself. As she puts it, "life is more than an instrument and therefore not something that can be optimized." This helped me realize that attention architecture isn't just another productivity hack—it's about preserving something essentially human.
Matthew Crawford's "The World Beyond Your Head" makes attention itself a moral and political issue. He argues that our inability to pay attention "dissolves our individuality and our freedom." His emphasis on engagement with "the world of flesh, blood, wire, and mud" offers a counterpoint to AI's abstracted intelligence.
Cal Newport's work on deep work and slow productivity—doing fewer things but at a higher level of quality—perfectly complements what I'm learning about attention architecture. As AI handles more shallow work, the humans who can master deep, meaningful work become increasingly valuable.
But here's what's uncomfortable: these aren't just nice ideas. They're describing capabilities I need to develop if I want to stay meaningfully human in an AI world. And I'm honestly not there yet.
I'm still figuring this out, and I expect my understanding to evolve significantly. But a few directions feel promising:
Experimenting Across the Spectrum: Some people love structure and frameworks; others prefer intuitive, meditative approaches. I'm exploring how attention architecture might work across this spectrum, because what's emerging for me might not work for you at all.
Understanding the Stakes: The research suggests that AI's capabilities in pattern recognition and even creative tasks are advancing faster than many anticipated. This makes attention architecture more urgent, not less. The question isn't whether AI will get better at mimicking human capabilities—it's whether we'll develop the distinctly human capacities that remain irreplaceable.
Building Community: I'm curious about creating spaces for others to explore their own attention architecture and share experiments. The response to my AI craft post was quiet but thoughtful—a few people reached out privately to say it resonated, which made me realize there might be others wrestling with similar questions. Because honestly, I need the help.
If any of this resonates, I invite you to experiment alongside me. This isn't about finding the "right" system—there probably isn't one. It's about developing awareness of your own attention patterns and learning to be more intentional about how you allocate this precious resource.
Maybe start simple: Before your next important task, pause and ask yourself what kind of attention it requires. Wide aperture for scanning and connecting? Narrow aperture for focused execution? Or that hyperfocal sweet spot where you can handle multiple streams while maintaining quality?
I'm convinced that learning to consciously architect our attention isn't just about personal effectiveness—it's about staying irreplaceably human while leveraging the incredible tools at our disposal. It's about developing the capacity to synthesize across domains, make contextual judgments, and bring wisdom to complex situations.
But I also think it's about something deeper: preserving our agency in a world where it's increasingly easy to let algorithms make decisions for us. As a Gen-X guy who grew up believing in the promise of technology and now watches it reshape everything in ways that aren't always comfortable, I see attention architecture as a way to stay intentionally human while still engaging with powerful tools.
It's not about rejecting AI or retreating into nostalgia for simpler times. It's about developing the capacity to choose how we engage, rather than just being carried along by whatever captures our attention most effectively.
I don't have this figured out yet. I'm probably failing at it more often than I'm succeeding. But I'm learning, and that feels like progress. If you decide to explore your own attention architecture, I'd love to hear what you discover—especially your failures, because I suspect that's where the real learning happens.
The goal isn't perfection. It's conscious evolution, stumbling forward together toward something more intentionally human.
This post represents my current thinking and many current struggles as of June 2025. I expect both my understanding and my practice to evolve, and I'll share updates as I learn more. If you're experimenting with attention architecture yourself, or if you have research, frameworks, or insights to share, please reach out—this feels like work that benefits from collective intelligence, which I'm still learning how to contribute to meaningfully.