Automatically Generated EA Content—What Is Left for Architects?

Link: https://www.eatransformation.com/p/automatically-generated-enterprise-architecture-content

From Enterprise Architecture Transformation: A Practical Guide

Let me start with a necessary clarification.

In enterprise architecture (EA), the intent is always to work from source information and to apply architectural judgment. What I am discussing here is something else: content that is produced without an architect’s active involvement. Automatically. Via integrations, mining tools, AI assistants, or the famous “one-click EA.”

This idea is heavily marketed today. Well-known EA tool vendors talk about it, smaller and newer players talk about it even more loudly, and the promise keeps getting broader. I also touched on the topic in the final chapter of my book, because it keeps coming up in real projects and tool selections.

The appeal is obvious. Architecture that updates itself. A current state that is always up to date. No workshops, no interviews, no digging through old PowerPoint decks and SharePoint folders.

And, perhaps most importantly, the promise of finally getting rid of routine work. The idea that architects could stop maintaining inventories and diagrams and instead focus on more “strategic” work. That is a familiar message. Almost every major technology shift of the past decades has been sold with the same promise: automate the boring parts, free people to think at a higher level.

The real question is not whether it sounds attractive. The question is how realistic it actually is today—and at what level.

To keep this grounded, I will structure the discussion into three categories: what is easy, what is possible under certain conditions, and what is either unrealistic or simply not worth doing.

What Is Relatively Easy: IT Current State (If the Sources Exist)

Automatically generated EA content works best when you stay firmly on the IT current-state side.

Application portfolios are the classic example. If an organization already has reasonably structured sources—CMDBs, ITSM tools, or asset inventories—then producing an application list or a basic application map is fairly straightforward. In many cases, the information already exists; EA tooling mainly aggregates and visualizes it.

In practice, the real value here is not the list of applications itself, but their attributes. Ownership, lifecycle state, criticality, vendor, technology stack, data sensitivity, cost signals. These are the things architects actually use when supporting decisions. Automated approaches work reasonably well as long as these attributes are present, consistently defined, and kept up to date in the source systems. That is a big “as long as,” but when it holds, automation genuinely helps.

Even in this “easy” category, problems show up quickly. Coverage is rarely complete. Definitions differ between systems. One source describes applications at a logical level, another at a technical deployment level, and a third mixes the two freely. Depending on the source, entire classes of applications are often missing—especially externally used systems, SaaS tools adopted outside IT, and applications sitting at the far ends of integrations. Ironically, these are often the systems that matter most from a risk and dependency perspective.

The resulting architecture picture can look coherent at first glance, but small inconsistencies accumulate fast. Over time, the model becomes less a representation of reality and more a reflection of whatever data happened to be easiest to integrate. At that point, keeping the model useful usually requires manual work: normalizing definitions, filling gaps, reconciling conflicting sources, and making conscious decisions about what belongs at the EA level and what does not.
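
To make that reconciliation step concrete, here is a minimal sketch in Python. It merges two hypothetical application exports (a CMDB extract and a SaaS register; every name and field is invented for illustration) and reports the gaps and conflicting attribute values that still require an architect’s decision.

```python
# Minimal sketch: merging two hypothetical application exports into one
# portfolio and flagging the gaps and conflicts an architect still has to
# resolve. All names, fields, and values are illustrative.

from collections import defaultdict

cmdb_export = [
    {"name": "CRM Online", "owner": "Sales IT", "lifecycle": "active", "criticality": "high"},
    {"name": "Payroll", "owner": "HR IT", "lifecycle": "active", "criticality": None},
]

saas_register = [
    {"name": "crm online", "owner": "Sales Ops", "lifecycle": "active", "criticality": "high"},
    {"name": "Survey Tool", "owner": None, "lifecycle": None, "criticality": "low"},
]

ATTRIBUTES = ["owner", "lifecycle", "criticality"]

def normalize(name: str) -> str:
    """Crude name normalization; real matching usually needs manual curation."""
    return name.strip().lower()

merged = defaultdict(lambda: {attr: set() for attr in ATTRIBUTES})
for source in (cmdb_export, saas_register):
    for record in source:
        entry = merged[normalize(record["name"])]
        for attr in ATTRIBUTES:
            if record.get(attr) is not None:
                entry[attr].add(record[attr])

# Report what automation alone cannot settle: missing and conflicting attributes.
for app, attrs in sorted(merged.items()):
    gaps = [a for a, values in attrs.items() if not values]
    conflicts = [a for a, values in attrs.items() if len(values) > 1]
    print(f"{app}: gaps={gaps or 'none'}, conflicts={conflicts or 'none'}")
```

Even in this toy example, automation can only surface the conflict between two owners; deciding which one is authoritative, and whether the application belongs in the EA model at all, remains manual work.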

Still, as long as expectations are modest, this is where automated EA content delivers the most value with the least friction. As a continuously updated baseline for IT current state, it can work well—not as a fully self-sustaining architecture, but as something that stays usable precisely because someone takes responsibility for its interpretation and upkeep.


What Is Possible, With Conditions: Processes, Data Flows, Information, and Technology

Once you move beyond relatively static IT inventories, things become more fragile.

Processes and Process Mining

Automatically identifying processes is usually more limited than tooling demos suggest. Process mining relies on event data, which means it only covers processes that are both sufficiently digitized and instrumented. In practice, data sources rarely cover all relevant processes, and almost never the full end-to-end picture. As a result, an architect still needs to identify the relevant processes, understand their dependencies, and decide which parts of the organization and value chain actually matter from an EA perspective.

With well-defined, repeatable processes and good event data, process mining can nevertheless produce genuinely useful insights. You can see real execution paths, variations, bottlenecks, and deviations from the “official” process descriptions.

At an EA level, this supports qualitative assessment rather than precise modeling. Process mining can provide indicators such as throughput times, error rates, and rework frequencies, which can be aggregated and visualized at the EA level—for example as heatmaps. This kind of information can be very valuable when deciding where deeper analysis or change is needed.
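
As an illustration of what aggregating such indicators to the EA level can mean, the sketch below rolls hypothetical per-process metrics up into simple traffic-light ratings of the kind a heatmap would show. The metrics, figures, and thresholds are all invented for the example and do not come from any particular process mining tool.

```python
# Minimal sketch: rolling hypothetical process mining metrics up into
# EA-level heatmap indicators. Figures and thresholds are invented.

process_metrics = {
    "Order to Cash":     {"throughput_days": 12.5, "error_rate": 0.04, "rework_rate": 0.10},
    "Procure to Pay":    {"throughput_days": 30.2, "error_rate": 0.11, "rework_rate": 0.22},
    "Incident Handling": {"throughput_days": 2.1,  "error_rate": 0.02, "rework_rate": 0.05},
}

# Illustrative thresholds for turning raw metrics into a qualitative rating.
THRESHOLDS = {
    "throughput_days": (7.0, 20.0),
    "error_rate": (0.05, 0.10),
    "rework_rate": (0.10, 0.20),
}

def rate(metric: str, value: float) -> str:
    """Map a metric value to a green/amber/red heat level."""
    green_max, amber_max = THRESHOLDS[metric]
    if value <= green_max:
        return "green"
    return "amber" if value <= amber_max else "red"

for process, metrics in process_metrics.items():
    ratings = {m: rate(m, v) for m, v in metrics.items()}
    # The worst individual rating drives the overall heatmap colour.
    overall = max(ratings.values(), key=["green", "amber", "red"].index)
    print(f"{process:18} overall={overall:5} details={ratings}")
```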

The trade-off is abstraction. Process mining tends to operate at a more detailed level than EA normally does. You are no longer describing structure so much as behavior. That does not make it useless, but it does mean that process mining complements EA rather than replacing process maps and other architectural views.

Data Flows and Integrations

There are tools that claim they can automatically discover integrations and data flows. Technically, they often can. APIs, message brokers, configurations, logs, and integration platforms all provide material that can be analyzed and correlated.

The challenge is not detection, but meaning. A technical connection between application A and application B does not yet tell you what information actually matters, why it is exchanged, or how critical it is to the business. Automated discovery tends to produce diagrams that are technically accurate but architecturally blunt. Everything looks equally important, equally complex, and equally connected. In addition, the discovered integrations do not necessarily map cleanly to the logical applications that EA typically deals with. Technical endpoints, middleware components, and shared services often sit in between, obscuring the high-level architectural intent.
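
To illustrate why that mapping matters, the sketch below assumes a discovery tool has produced raw endpoint-to-endpoint connections and that an architect maintains a hand-curated mapping from technical endpoints to logical applications. Everything here is invented for illustration; the point is that the architectural picture only emerges once that mapping exists.

```python
# Minimal sketch: collapsing discovered technical connections into
# logical application-to-application flows. Endpoint names and the
# endpoint-to-application mapping are invented; in practice the mapping
# itself is manual architectural work.

discovered_connections = [
    ("crm-api-gw", "esb-node-1"),
    ("esb-node-1", "erp-queue"),
    ("hr-db-replica", "reporting-etl"),
    ("shadow-saas-webhook", "crm-api-gw"),
]

# Manually curated mapping from technical endpoints to logical applications.
endpoint_to_app = {
    "crm-api-gw": "CRM",
    "esb-node-1": "Integration Platform",
    "erp-queue": "ERP",
    "hr-db-replica": "HR System",
    "reporting-etl": "Reporting",
    # "shadow-saas-webhook" is unmapped: discovery found it, but nobody
    # has decided yet which logical application it belongs to.
}

logical_flows = set()
unmapped = set()
for source, target in discovered_connections:
    apps = (endpoint_to_app.get(source), endpoint_to_app.get(target))
    if None in apps:
        unmapped.update(e for e, a in zip((source, target), apps) if a is None)
        continue
    if apps[0] != apps[1]:
        logical_flows.add(apps)

print("Logical flows:", sorted(logical_flows))
print("Endpoints awaiting an architectural decision:", sorted(unmapped))
```

The unmapped webhook in the output is the typical case: discovery can find it, but only an architectural decision can place it.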

Even integration platforms are not a complete answer. They rarely cover all integrations, and they usually reflect how things were implemented, not how they should be understood at the EA level. Point-to-point integrations often fall outside their scope, even though these are precisely the integrations that tend to be most interesting from an EA perspective.

At this point, automated EA starts to resemble IT asset mining rather than EA. It gives the architect raw material to work with, not a ready-made architectural model that supports decision-making. Without interpretation, you get noise faster than insight.

Information Architecture

Information architecture highlights the same pattern as integrations and processes. Technical metadata, schemas, and data flows can be discovered automatically, and they provide useful raw material. At that level, automation can be genuinely helpful.

At the EA level, however, information architecture is not about tables and fields. It is about meaning, ownership, and usage across processes, capabilities, and applications. Defining what the organization’s core information actually is—and how it should be understood consistently—requires abstraction and shared interpretation. That cannot be inferred reliably from technical sources alone, no matter how complete they appear.

Technology and Infrastructure

A similar logic applies to technology and infrastructure views, but the abstraction challenge is even more explicit. Servers, platforms, cloud services, containers, and environments are already machine-readable by nature, and from a purely technical standpoint, discovering and updating them is not particularly controversial. Most of the work is about reuse and consolidation of existing data sources.

From an EA perspective, the key question is not visibility but level. EA should usually operate at a logical level, not at the level of individual virtual machines, Kubernetes pods, or cloud accounts. The architectural value comes from abstraction: platforms, environments, and major technology choices—not their every instantiation.

This is where linking becomes essential. Automated tooling needs to respect the boundary between EA-level structures and operational detail. Without conscious abstraction, technology views either become unreadable very quickly or drift into operational documentation that belongs elsewhere. In practice, the only workable approach is to keep EA content logical and high-level, while linking it explicitly to more detailed technical inventories and management tools when deeper inspection is needed.
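
One way to keep that boundary explicit is to model EA-level technology elements as logical objects that merely reference the operational tools where the detail lives. The structure below is a hypothetical sketch, not a feature of any particular EA tool; the names and URLs are placeholders.

```python
# Minimal sketch: a logical, EA-level technology element that carries
# links to operational inventories instead of duplicating their detail.
# Names, URLs, and tools referenced are purely illustrative.

from dataclasses import dataclass, field

@dataclass
class LogicalPlatform:
    name: str        # EA-level name, e.g. "Container Platform"
    lifecycle: str   # e.g. "active", "phase-out"
    owner: str
    # References to the systems that hold instance-level detail; the EA
    # model does not try to mirror pods, VMs, or accounts itself.
    detail_links: dict[str, str] = field(default_factory=dict)

container_platform = LogicalPlatform(
    name="Container Platform",
    lifecycle="active",
    owner="Platform Team",
    detail_links={
        "cmdb": "https://cmdb.example.com/ci/container-platform",
        "cloud_console": "https://console.example.com/projects/prod-cluster",
        "monitoring": "https://monitoring.example.com/dashboards/k8s",
    },
)

# The EA view stays at the logical level; deeper inspection follows the links.
print(container_platform.name, container_platform.lifecycle)
for tool, url in container_platform.detail_links.items():
    print(f"  detail in {tool}: {url}")
```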

What Usually Does Not Make Sense: Business Architecture, Target States, and Dependencies

The higher the level of abstraction, the less convincing automated content becomes.

Business architecture—such as capability models—and especially target states are not discovered from data. They are defined through choices. They reflect intent, priorities, and trade-offs, not historical traces left behind in applications or documents.

There is growing interest in using AI solutions to extract business concepts from documents, meeting notes, or strategy decks. As a drafting aid, this can be helpful. As a source of inspiration, even more so. But treating such output as EA content is risky.

Capabilities are not just frequently mentioned nouns. They represent shared agreement about what the organization must be able to do, at what level, and why. That agreement cannot be inferred reliably from transcripts or loosely related documents. It emerges through discussion, negotiation, and conscious decision-making.

Target architectures are even more problematic. They are normative by nature. They describe what should exist, not what already exists. No amount of system telemetry will tell you where the organization wants to go next year, what it is willing to invest in, or which compromises it is prepared to accept. And meeting notes do not fundamentally change this. They may capture intentions, open questions, and provisional agreements, and they can be useful as supporting material. But they usually reflect conversations in motion, not settled architecture-level decisions. Extracting target architectures from meeting notes risks mistaking discussion for commitment and ambiguity for direction. At best, such material can inform architectural thinking; it cannot define a target state on its own.

Finally, what automation does not reliably produce are the connections between architectural perspectives. Linking applications to processes, processes to capabilities, and processes to information requires interpretation and choice. These relationships are not hidden in any single data source. Defining which connections matter—and which do not—remains an architectural responsibility.

It is also worth pointing out that current state is rarely the hardest part of EA work—except, perhaps, for integrations. In practice, setting up and configuring automated data collection, integrations, and tooling often takes more time and money than producing the initial current-state descriptions manually. The real effort usually lies elsewhere: in abstraction, interpretation, prioritization, and change planning. Reality does not present itself in an architecturally meaningful form. Someone still has to do that work.

So What Is Automation Actually Good For?

Automatically generated EA content is not a scam, but it is not a shortcut out of architecture work either. Used well, it removes some mechanical effort and gives architects more time to think. Used poorly, it produces polished diagrams that look authoritative and say very little.

Automation does not remove the need for EA; it shifts where the effort goes. The hard part was never the diagrams themselves, but deciding what matters and at what level—and that remains a human responsibility. Many architects are highly technical and enjoy building and tuning tools, which is fine. But automation is only useful when it serves architectural purpose, not when it becomes an end in itself.

If you want a short, practical takeaway:

  • Use available source data wherever it makes sense, including automated and integrated sources. Application attributes are often the easiest and most useful starting point—if there is a reasonably high-quality source behind them.

  • Treat discovery tools and AI assistants as support, not as replacements for architectural judgment. They can surface patterns, gaps, and candidates for discussion, but they do not decide what matters.

  • Respect the abstraction level of EA. Do not try to model reality at full detail. EA is about selecting, grouping, and simplifying—not about mirroring operational complexity.

If automation truly frees architects to focus on more strategic work, that work still has to be done. EA does not disappear when tools get better—it simply becomes more visible where judgment is missing.


✍️ Author News

My debut novel, Pohjoisen tie (The Northern Road), will be published in Finnish in March 2026 by Momentum Kirjat.

The novel is a psychological adventure story about loss, family secrets, and the search for answers. It follows a young woman traveling through the far North—both geographically and mentally—after receiving an unexpected lead about her father’s disappearance years earlier.

The book can be preordered without commitment, directly from the author. Preordering ensures early delivery at a reduced price, with an optional signed copy.

More details closer to publication.


👨‍💻 About the Author

Eetu Niemi is an enterprise architect, consultant, and author.

Follow him elsewhere: Homepage | LinkedIn | Substack (consulting) | Medium (writing) | Homepage (FI) | Facebook | Instagram
Books: Enterprise Architecture | The Senior Expert Career Playbook | Technology Consultant Fast Track | Successful Technology Consulting | Kokonaisarkkitehtuuri (FI) | Pohjoisen tie (FI) | Little Cthulhu’s Breakfast Time
Web resources: Enterprise Architecture Info Package (FI)


📬 Want More Practical Enterprise Architecture Content?

Subscribe to Enterprise Architecture Transformation for real-world advice on architecture that supports change, strategy, and delivery.