Day 2 at the Society of American Foresters Conference: The Technology-Capability Gap
The exhibit floor at SAF25 is impressive. LiDAR vendors demonstrating individual tree crown resolution. AI companies showing treatment prescriptions generated in seconds. Digital twin platforms simulating decades of forest succession. Remote sensing tools that can detect forest health changes weeks before they're visible to field crews. The technology is phenomenal—and not particularly new.
Walk the aisles, and you'll hear the same quiet refrain from forest managers: "We bought something similar last year. It's... underutilized." From HR directors: "We can't figure out how to classify the position that would actually run this." From district rangers: "Our GIS specialist produces beautiful maps, but the field crews don't know how to validate the outputs."
So why isn't breakthrough technology deployed at scale?
After spending years inside federal forestry—including roles implementing billions of dollars in Bipartisan Infrastructure Law funds and serving as the Forest Service's first AI Program Manager—I learned an uncomfortable lesson: The bottleneck isn't the technology. It's workforce capability.
We had the money. We had access to the tools on display here. What we didn't have were clearly defined roles that bridge traditional field knowledge with digital fluency and governance competence. We couldn't name what we needed, so we couldn't hire for it, train for it, or structure organizations around it.
And this isn't just a federal challenge. It's visible everywhere in environmental management.
-
Here's how the cycle plays out:
A forestry organization sees a demo like the ones here at SAF. The tool is genuinely impressive—AI that can analyze thousands of public comments for NEPA, remote sensing that identifies optimal planting microsites, digital platforms that track landscape-scale restoration progress. The organization procures it.
Both parties declare success. The vendor books revenue. The agency marks the budget line as obligated.
Then the tool sits. Not because it doesn't work, but because no one's job explicitly includes:
→ Validating algorithmic outputs against field conditions and Traditional Ecological Knowledge
→ Maintaining the data infrastructure the tool depends on (provenance, metadata, documentation)
→ Training staff in critical use (when is this guidance versus gospel?)
→ Integrating outputs into governance processes (NEPA, consultation, transparent decision-making)
The transaction happened. The outcome didn't.
I watched this pattern repeat with various technologies during Bipartisan Infrastructure Law and Inflation Reduction Act (BIL/IRA) implementation. We invested billions in ecosystem restoration. Some went to advanced tools and data systems. But we didn't invest proportionally in the workforce capability to operationalize those tools. We hired "boots on the ground"—field crews, firefighters, specialists—because that's what we knew how to do. We didn't hire the translators, the validators, the data stewards, the digital ecologists.
Result: technology underdelivered, and we struggled to demonstrate outcomes at the pace appropriators expected.
-
At the Environmental Policy Innovation Center, we've been researching why the gap between available technology and deployed capacity persists. Our hypothesis:
We haven't explicitly named the modern skillsets that modern technologies require. And integrating those skillsets into the workforce starts with naming them.
Not just "foresters need to learn GIS" (that's been said for decades). But specifically:
→ In curriculum: What does it mean to prepare a forester who will validate AI outputs, maintain digital ecological twins, and navigate Tribal data sovereignty?
→ In job postings: How do you write a description for roles that integrate field ecology, data literacy, and governance fluency when those don't map to traditional classifications?
→ In position descriptions: What competencies define a "Digital Forest Planner" or "Environmental Data Steward" in operational terms HR can understand?
→ In skills frameworks: What does progression look like from entry-level to leadership when the work requires translation across domains, not just depth within one?
Universities can't design programs for roles that don't have names. HR can't approve positions that don't fit classifications. Students can't prepare for careers they can't see described. Employers can't recruit for capabilities they can't articulate.
The unnamed remains unbuilt.
-
Here's the other critical piece: The future workforce doesn't just need technical proficiency with new tools. The technology vendors here can teach someone to run their software in a week.
What takes longer—what's actually scarce—is the integration:
Critical thinking: Knowing when an AI recommendation makes sense and when it's algorithmically confident but ecologically wrong. Understanding what data went into a model and what got left out. Asking "what did this optimize for?" before implementing.
Systems thinking: Recognizing that a fuel treatment affects water quality, wildlife habitat, cultural resources, and community smoke exposure—and that those relationships exist whether or not they're in your model. Designing for emergence and feedback loops, not just linear cause-effect.
Socioemotional intelligence: Navigating the stakeholder complexity these tools often surface. An AI-powered public comment analysis might reveal conflicting values you'd have missed manually—but now what? How do you facilitate across difference? How do you translate uncertainty into trust?
Place-based knowledge: This is where traditional forestry craft becomes even more critical, not less. Technology scales observations, but validation requires intimate familiarity with specific landscapes. You need to know that the LiDAR-derived fuel model looks right everywhere except that north-facing slope where moisture regime creates conditions the algorithm didn't capture. You need to recognize when an AI-generated species recommendation ignores cultural significance identified in Tribal consultation.
A technology-augmented future doesn't replace field sense—it requires field sense plus the ability to interrogate and contextualize what technology produces.
The vendors here can sell you the tools. What they can't sell is the workforce capable of using those tools for outcomes, not just outputs.
-
This is why EPIC is building an environmental-management skills taxonomy—a framework that explicitly names the competencies, roles, and pathways needed in a digitally modern environmental management practice.
Not just for forestry, but across environmental domains: watershed management, habitat restoration, climate adaptation, fire management, ecological monitoring. The challenges are similar: rapidly advancing tools meeting organizations structured for previous eras.
The taxonomy is designed to create shared language:
→ For educators: What should programs teach to prepare graduates for the work that actually needs doing—not just the work we've historically done?
→ For employers: How do you describe positions that integrate domains? What competencies do you hire for? How do you structure teams for translation, not just specialization?
→ For HR and classification systems: What does "forestry" or "environmental management" encompass in 2025? How do you create pathways that reward integration, not just depth within silos?
→ For students and early-career professionals: What skills should you develop? What does career progression look like in a field that's evolving rapidly?
→ For credential and certification bodies: What should professional standards measure? How do you signal competence across the braid of field operations, data fluency, and governance capability?
It maps competencies across three integrated domains—field operations, data infrastructure, and civic governance—and defines role archetypes that bring those competencies together: Digital Forest Planners, Environmental Data Stewards, Model Validators, Permitting Integrators.
These roles exist informally in scattered places—talented individuals who've pieced together integration through initiative and generous mentors. The taxonomy makes them explicit and systematic.
-
The technology on this exhibit floor is remarkable. It can genuinely transform how we manage landscapes at scale, how we make decisions under uncertainty, how we learn and adapt.
But technology alone doesn't create capacity. People with deliberately cultivated skillsets create capacity.
If you're a student looking at these demos and wondering "what do I need to learn to actually do this work?"—we want to hear from you.
If you're an employer struggling to write job descriptions for the roles you actually need, or finding that traditional classifications don't fit modern work—your hiring challenges are the map to what the taxonomy must address.
If you're an educator wondering how to evolve curriculum faster than accreditation cycles typically allow—the taxonomy is designed to be a shared reference point.
If you're in HR or workforce planning trying to figure out how to classify positions that don't fit neat categories—we're building language that can translate to existing systems while pushing them to evolve.
Because the gap between the technology available and the capacity deployed at scale? It's solvable. Not by better tools—those are already here—but by naming, training for, and recruiting for the integration those tools require.
The demos are impressive. Now let's build the workforce that can actually operationalize them.
Help Build the Language We Need
📚 Students: What skills do you see needed that aren't in your program?
🏢 Employers: What roles can't you fill with current pipelines?
🎓 Educators: Where do curriculum and workforce needs diverge?
📧 Share your perspective: jwashebek@policyinnovation.org
📅 Join a listening session: [Register here]

