
Knowledge Creation and the Four Great Cultural Transformations of Humanity

[Sidebar:  While starting with a sidebar is unusual, to say the least, I felt it necessary because this post has gotten completely out of hand in terms of length.  But then, when I thought about it, a post attempting to synthesize history and future history should be long.  Also, forgive me for injecting myself into many of the sidebars.]

It’s odd, when you think about it, that historians, archeologists, and other social scientists name the ages of human culture as the early stone age, the late stone age, the copper age, the bronze age, the iron age, and so on.
I think there is a better way to understand the ages of humanity: through humanity’s creation and dissemination of information and knowledge.  All the conventional ages of humanity really rest on four transformations of data, information, and knowledge.

Knowledge Generation before Speech

Prior to the development of speech there were two ways that life created “information and knowledge”.  The first started with the start of life itself on earth.  It was the combination of changes in DNA chains and natural selection by the environment.  [Sidebar: This is still the foundation on which all other forms of knowledge creation are built; that is, except for a relatively small number of differences, human DNA and tree DNA are the same.]  One form of this knowledge creation and communication is instinctual behavior.  Additionally, this forms the basis for the concept of Environmental Determinism.
The second way “information and knowledge” was created was through the evolution of “monkey-see-monkey-do”; that is, through “open instincts”.  An open instinct is one that allows the life-form, generally an animal, to observe its surroundings, orient the observations (food, a place to hide, a threat), decide on an action, and act. [Sidebar: Oh shoot, there goes Boyd’s OODA loop again.]  No longer does DNA alone decide.  To make the decision the life-form must learn to observe, choosing which input is data and which is noise, and must create a mental model in order to orient the observed data.  Both of these require the ability and the time to learn.  This learn-by-doing forms the basis for the concept of “Possibilism”.

The Age of Speech (~350,000 to 80,000 BC)

As noted, the learn-by-doing (monkey-see-monkey-do) process requires both the ability and the time to learn.  According to studies of DNA and archeology, the average man would live about 20 years. [Sidebar: Note that even today a boy becomes a man at age 13 in the Jewish religion.  This means that thousands of years ago, a man would have 7 years to procreate before dying.  Today, that 13-year-old kid is not even in high school.]  So there is a period when the young need adult protection in order to learn.  This may be a few months, as in the case of deer, or a couple of years.
The problem with learn-by-doing is that it requires both the ability to learn and the time to learn.  Because DNA evolution continues with each biological experiment—each child—there will be significant variations in the ability to learn.  So, sometimes knowledge would be lost by the inability of a child to learn-by-doing.  Other times, the parent/coach could die unexpectedly early, so that there was insufficient time for knowledge transfer.  Either way, information and knowledge was lost.
At some point between approximately 350,000 BC and 80,000 BC, possibly in several steps, a new hopeful dragon (to use Carl Sagan’s term) was born. This hopeful dragon had some ability to articulate and an open instinct, with which it most probably created a noun (the name of a thing) and/or a verb (the name of an action).  This gave birth to language.  And language allowed for learning-by-listening, which turned out to be a competitive advantage for the groups and tribes that had it when compared with those that didn’t.
Learning-by-listening resolves the problem of losing knowledge gained by previous generations.  As language evolved it enabled humans to communicate increasingly abstract concepts to others.  Initially (for 100,000 years or so) much of this knowledge was communicated as statements of observations and commands; some of it evolved (likely at a much later date) into stories, odes, epic tales, sagas, and myths.  These tales encapsulated the knowledge of prior generations: the tribal or cultural memory.
Toward the end of the period (~80,000 BC) in which speech and language were born, Homo sapiens started migrating from Africa.  Some researchers believe this was due to the competitive advantage of speech and language, that is, better methods of knowledge accretion and communication when compared with other animals.
The Age of Speech allowed for the accumulation of data, information, and knowledge.  Much of this was passed along in the form of tales, odes, myths, and so on.  At the same time practical skills like hunting and gathering were learned more effectively when verbal instructions, and especially critiques, could be given.  Students learned much faster and at a much higher level.  The result was a differential in knowledge among the many, many small family groups and tribes.
After many millennia of inter-tribal wars, and with some inter-tribal trading, enough data, information, and knowledge was created to begin the long trek to civilization.  [Sidebar: During the “hunter-gatherer” stage of human “civilization” there were no “Noble Savages”, just savages.  According to DNA evidence and studies of tribes in New Guinea, the average male was killed at approximately 20 years old.]  During the time from the Paleolithic through the Neolithic ages, knowledge accumulated very slowly.  Archeologists have found that innovations diffused through the human population over hundreds of years.  Many archeologists want to attribute this to trading, but evidence suggests that much of the time violence was involved.

The Age of Writing (~3000 BC)

Speech and language, enabling and supporting learning-by-doing and learning-by-listening, provided the basis for humans’ knowledge development for the next 70,000+ years. It was not until human organizations grew beyond a few hundred individuals, with a geographic territory beyond what a person could walk in a day, that humans had a need for data, information, and knowledge transfer and communication that went beyond speech.
At about the time the first large kingdoms were formed, the traders of the era apparently found a need to track their trading.  And traders and trading were the main vehicle for communicating data, information, and knowledge during this entire period.  [Sidebar: At least this is what the archeologists have found so far.]  Additionally, the tribal shamans (priests) started to create documents so that their religious beliefs, traditions, knowledge, and tenets would not be lost by their successors.  [Sidebar: These were the scientists of their age.]  Consequently, religious documents, together with trade documents, are among the earliest writing found.
Understand, writing came into existence at about the same time as many large construction projects, like the pyramids and ziggurats.  And this was when city-states, the forerunners of the modern state, formed.
For the next 4400+ years writing continued to be the main medium for documenting and communicating data, information, and knowledge.  During this time many kingdoms and empires rose and fell, including the Roman Empire, and a vast quantity of data, information, and knowledge was created, documented, and lost.  [Sidebar: The worst was the destruction of the Library and Museum (University) at Alexandria.]
Finally, with the beginnings of the European Renaissance in the 1100s AD, schools in Italy and Spain, initially created to teach monks to read and write, began to collect and copy works from earlier times (including Greek and Roman).  The copies were exchanged, and libraries began to appear within these schools, which came to be called, and were, universities.  [Sidebar: This age is called the “Renaissance” because it was the time when, initially, data, information, and knowledge were recovered and then new knowledge was documented.]
During this same period, and in part using the recovered knowledge base, came the slow innovation of new instruments, including the mechanical clock, new navigational instruments, and new methods of ship construction, all leading to an economic sea change in the European kingdoms.  Further, during this time, apprentice schools (schools of learn-by-doing) appeared in greater numbers and with more formality to their coursework.  These schools taught “manual trades”, the start of formal engineering and technology programs.

The Age of Printing (1455 AD)

All during this time, more and more clerics (clerks) were copying more documents.  And though the costs were high, there was major demand for more copies of books, like the Christian Bible.
In about 1440, a German, Johannes Gutenberg, developed a system that could make hundreds of copies.  In 1455, he printed what is known as the Gutenberg Bible, creating the technology infrastructure for a paradigm shift.  He also printed a goodly number of these Bibles.
Another German, Martin Luther, subsequently kick-started this shift by nailing his 95 theses to the church door in 1517.  Prior to Luther most Europeans could not read.  The Roman Catholic clergy, up to and including the Pope, took advantage of this to create highly imaginative church doctrine that would provide them with a large money stream.  Since they had been infected with the edifice complex, they used this money stream to indulge their favorite activity at Rome and elsewhere.
In his theses, Luther expressed intense unhappiness with this church doctrine.  Instead of the Pope being the final authority on Christianity, he preached that the Christian Bible was the final authority and that all Christians had the right to read it.  So, by the late 1500s, there were many printed books in an increasing number of libraries, with an increasing number of Europeans (and shortly, American colonists) who could read.  [Sidebar: Remember that Harvard College, now Harvard University, was founded in 1636.]  And this was only step one of the Age of Printing.
Step two in the Age of Printing was Rev. John Wesley’s creation of “Sunday School”.  Many or most of the members of Wesley’s sect, “The Methodists”, had been tenant farmers, laborers, or cottage-industry owners who had lost their jobs or their businesses in the early stages of the industrial revolution (the late 1600s and through the 1700s).
At this time, machines began to be used on farms and in factories, putting these people out of work.  Wesley and the Methodists, by teaching them to read and write on Sunday, their day off from work, enabled them to move into and participate in the profits of the industrial revolution.  Together with other movements toward “schooling”, the Age of Printing and economic progress happened, creating the “middle class.” [Sidebar: In Colonial New England, early on, in the 1640s, primary schooling became a requirement.  For more information, see my book.]  Glossing over the many upgrades and refinements, knowledge creation and communication was based on printing technology until the 1980s, more or less.
During the Age of Writing, but particularly during the Age of Printing, the methods for communicating data, information, and knowledge began to diverge from trade.  In fact, in the US Constitution the founding fathers treated the US mail as a direct government function because they felt that communications for everyone was so important.  On the other hand, they indicated that the government should “regulate” commerce among the states; and there is a great difference between a function of government and regulation by government.

The Age of Computing (~1940 AD)

There are two roots of the Age of Computing.  Both had to do with improving print-based data storage and the communication of data and printed materials.  The first root was data and information communications.  While there were many early attempts at high-speed communications over long distances in Europe over the ages, the first commercially successful telegraph was developed by Samuel Morse in 1837 [Sidebar: together with a standard code, coincidentally called Morse Code].  By the 1850s this telegraphic system had spread to several continents.
By 1874, Émile Baudot had invented the teletype machine, which allowed any typist to type a message on a typewriter keyboard; the machine would then translate it into Morse code.  A second teletype machine would then print the message out at the other end.  This meant that typists, rather than trained telegraphers, could send and receive messages.  Additionally, the messages could be coded and sent much faster.  Three other inventions/innovations, the facsimile machine, the telephone [Sidebar: a throwback to the Age of Speech], and the modem, complete the initial introduction to the Age of Computing.
The second root was the evolution of the computer itself.  Early in the industrial revolution, Adam Smith discussed the assembly-line process and the fact that tooling can be made to improve the quality and quantity of output in every activity in the process.  Using this process, more or less, the hand tooling of the late 1700s gave way to increasingly complex powered mechanical tooling for manufacturing products in the 1800s and 1900s.
While that helped the manufacturing component of the business, it did not help the “business” component of the business.  And while the need to improve the information-handling component of a business (reducing its time and cost) was recognized as early as the 1500s, it wasn’t until 1851 that a commercially viable adding machine became available to help with the bookkeeping/accounting of a business.  These machines produced a paper tape (printing) on which the inputs and outputs were recorded.
From 1851 to at least 1955 these mechanical wonders were improved, to the point that in the early 1950s they were called “analog computers”.  And for a short time there was discussion about whether analog computers or this new thing called digital computers were better. [Sidebar: Into the 1990s tidal predictions were made by NOAA using analog equipment, since it kept proving to be more accurate.]
The basis for the electronic, digital computer came from several sources, mostly in the United States and in Britain, during the late 1930s and early to mid-1940s. However, it wasn’t until the invention of the transistor in 1948, coupled with the concept of the Turing Machine (which Alan Turing had described in 1936), that the first prototype commercial “electronic computers” were developed.
In 1956 I “played” with my first computer. It consisted of a Hollerith card reader for data input, electronics, a breadboard (a board with a bunch of holes arranged in a matrix) on which a program could be “coded” by connecting the holes with wires (soft wiring), and a 160-character-wide printer for the output.  The part I played with was the card sorter.  Rather than sorting the data in the “computer”, sorting was done by arranging and ordering the Hollerith cards before inserting them into the card reader.  The card sorter enabled the computer’s operator to sort the cards very much faster than attempting to sort them by hand.
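The deck-sorting procedure that the card sorter mechanized is what programmers now call a least-significant-digit radix sort: sort the whole deck into pockets on the rightmost column of the key, restack, and repeat for each column to the left.  A minimal sketch in Python (the sample deck and key width are made up for illustration):

```python
# Least-significant-digit radix sort: the same procedure a Hollerith
# card sorter mechanized, one digit column (one pass) at a time.
def card_sort(cards, key_width):
    for digit in range(key_width - 1, -1, -1):   # rightmost column first
        bins = [[] for _ in range(10)]           # the sorter's ten pockets
        for card in cards:
            bins[int(card[digit])].append(card)  # drop card in its pocket
        cards = [card for pocket in bins for card in pocket]  # restack
    return cards

deck = ["0423", "1907", "0048", "1150"]
print(card_sort(deck, key_width=4))  # -> ['0048', '0423', '1150', '1907']
```

Each pass corresponds to one trip of the deck through the machine, which is why a four-digit key took four passes.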
By 1964, computers had internal memory, about 40K bits, and storage: tape drives (from the recording industry) and disks (giant multi-platter removable disks) holding up to 2MB of data.  [Sidebar: I learned to code on two of these, IBM’s 1401 and 1620. I coded in machine language, Symbolic Programming System, and Fortran 1 and 2.]  These computers had rudimentary operating systems (OS), with input and output being a card reader and a card punch.  And they had teletype machines attached as control keyboards.
Fast forward to 1975; by this time, technology had advanced to the point where teletypewriters were attached as input/output terminals.  These ran at 80 to 120 baud (roughly 8 to 12 characters per second; fast for a human typing, but very slow for a computer).  Some old-style television-like (cathode ray tube, or CRT) terminals were becoming commercially available.  Mostly, these were simply glass versions of teletype printers, allowing the user to type into or read from an 80-character-wide by 24-line-long green screen, at about the same speed as a 120 baud teletype.  But Moore’s Law was in high gear with respect to hardware, so that every two years computers doubled in speed and capacity.
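Doubling every two years compounds faster than intuition suggests; a back-of-the-envelope calculation, sketched here in Python, shows that two decades of Moore’s Law means roughly a thousandfold improvement:

```python
# Moore's Law as arithmetic: capacity doubles every two years,
# so after n years the multiplier is 2**(n / 2).
def moore_multiplier(years, doubling_period=2):
    return 2 ** (years / doubling_period)

print(moore_multiplier(10))  # 10 years -> 32x
print(moore_multiplier(20))  # 20 years -> 1024x, i.e. ~1000-fold
```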
In about 1980 networking started to develop commercially, though there were several services over telephone networks earlier. [Sidebar: The earliest global data network that I know of was NASA’s network for data communications with the Mercury spacecraft in 1961.]  Initially, this development was in terms of a Local Area Network (LAN), linked through the use of telephone cables. [Sidebar: During this time, I set up some LANs at Penn State University and at Haworth, a furniture manufacturing company.]
By 1985 the Internet protocols had evolved.  [Sidebar: Between approximately 1985 and 1993, a significant group of engineers created a set of protocols to international standards; they were called the Open Systems Interconnect, or OSI, protocols.  They were a set of protocols based on a seven-layer model.  This group formed one camp; the other formed around an amorphous, organically evolving group of protocols, TCP/IP.  This group included academics, hackers, and software and hardware suppliers.  They preferred TCP/IP because it was a free open-source technology with few if any real standards (one HP Vice President said of TCP/IP that it was so wonderful because there were so many “standards” to choose from) and because OSI required significantly more computing power, owing to the architectural complexity of its security and other functionality.  Consequently, TCP/IP won, but we are now facing all of the security and functionality issues that would have been resolved by OSI.]  [Sidebar: In 1987, I predicted that the internet would serve as the nervous system of all organizations and was again looked at like I had two heads.]  And technology had evolved to the point that PCs on LANs were replacing CRTs as terminals to mainframe computers.  Additionally, e-mail, word processing, and spreadsheet software were coming into their own, replacing typewriters and mail-carried memos and documents.
In the early 1990s fiber-optic cables from the Corning Glass Works revolutionized data and information transfer: transfer times dropped from minutes to microseconds at approximately the same cost. [Sidebar: Since I worked with data networks from 1980 on, and since I led an advanced networking lab for a major defense contractor, I could go into the gory details for many additional posts, but I will leave it at that.]  As fiber optics replaced copper wires, the speed of transmission went up and the cost went down.  There were two consequences.  First, the number of people connected to the internet drastically increased.  Second, more people became computer literate, at least to the point of using automated devices, especially the children.
By 1995, the Internet was linking home and work PCs with the start of the web (~1993), and by the 1996/1997 timeframe the combination of home computers, e-mail, word processing, and the Internet/web was beginning to disrupt retail commerce and the print information system.  At this point the computer started to affect all data, information, and knowledge systems, and this is disrupting culture worldwide.

User Interfaces and Networking

As I discussed in a previous post and in SOA and User Interface Services: The Challenge of Building a User Interface in Services, The Northrop Grumman Technology Research Journal (Vol. 15, #1, August 2007), pp. 43-60, there are three sets of characteristics of every user interface.  The first is the type of user interface, the second is how rich the interface is, and the third is how smart the interface is.
There are three types of user interfaces: informational, transactional, and authoring.  The first is typical of the “Apps” on your smartphone: getting information.  The second is transaction oriented.  This means interacting with a computer in a repeated manner, as when an operator is adding new records to a database.  The third is authoring.  This doesn’t mean writing only; it means creating anything from a document, a presentation, a movie, a song, or an engineering drawing to a new “App”lication.  This differentiation of the user interface only really developed in the late 1990s and early 2000s, as each of these types requires a different form factor for the interface and increasingly complex software supporting it.
A rich user interface is an interface that performs many functions internally, i.e., does a lot for you.  As computer chips have become smaller, using less power, and much faster, the interface has become much richer.  This started with the first character-addressable terminals (in which there were 24 by 80 addressable locations) in the early 1970s.  Shortly after, real graphics terminals appeared, costing upwards of $100K.  These graphics terminals required considerable computing power from the computers to which they were directly connected.
In an effort to relieve the host computer of having to support the entire set of user-interface functions, Intel and others developed chips for performing those functions.  When some computer geeks looked at the functionality of these chips (the Intel 8008 chip among them), they decided they could construct small computers from them: the genesis of the PC.  [Sidebar: I was one of these.  With two friends, a home-grown electrical engineer and an accountant, I tried to convince a bank to loan us $5000 to start a “home computer” company and failed; most likely because of my lack of marketing acumen.]
A smart user interface is one that takes the information of a rich interface and intercommunicates with mainframe applications (“the cloud”, which marketers like to pretend is a new concept) and their databases to bi-directionally update (share) their data.  Smart interfaces have rapidly evolved as network technology has grown from copper wire in the 1950s to fiber optics, Wi-Fi, and satellite communications as competing interconnection technologies at the physical through network layers of the OSI model.  These enabled first the Blackberry devices and phones, then, in 2007, the iPhone and competing products.  The term “App”, from application, denotes a rich and generally “smart” user interface.  [Sidebar: I put “smart” in quotes because many of these “rich/smart apps” require constant updating, burning data minutes like they are free.  When you allow them to use only Wi-Fi, they complain bitterly.]

The library

Initially, in the late 1970s, information technology started to disrupt the printed-information center, that is, the library.  The library is the repository of the printed documents (encompassing data, information, and knowledge) of the Age of Print.  It uses a card catalog together with an indexing system, like the Dewey Decimal or Library of Congress systems, creating metadata that organizes the documents so that a library’s user can find the documents containing data or information pertaining to the user’s search requirements.
It started from the use of rudimentary databases’ (records management systems’) ability to control inventory; in the case of a library, the inventory of books.  Initially, automation managed the metadata about the library’s microfilm and/or microfiche collections.  [Sidebar: The libraries used microfilm and microfiche technologies to reduce the volume and floor space of their collections as well as to enable easier searches of those collections.  Microfilm and microfiche greatly reduced the size of the material.  For example, an 18 by 24 inch newspaper page could be reduced to less than a two-inch square (or rectangle).  However, with so many articles in each daily paper, library patrons had difficulty finding articles on particular topics; enter automation.]
Initially, the librarians used the one or two terminals connected to the computer either to enter the metadata about what was on the microfilm or fiche or to pull that data for a library’s customer.  They would enter the data using a Key Word In Context (KWIC) indexing system.
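The KWIC idea can be sketched in a few lines of Python: each significant word of a title becomes an index entry, with the title rotated so the keyword leads and the rest of the title supplies the context.  (The stop-word list and sample title here are hypothetical, just to show the mechanics.)

```python
# Key Word In Context (KWIC): index every significant word of each title,
# keeping the surrounding words as context.
STOP_WORDS = {"a", "an", "the", "of", "in", "on"}

def kwic_index(titles):
    entries = []
    for title in titles:
        words = title.split()
        for i, word in enumerate(words):
            if word.lower() in STOP_WORDS:
                continue
            # Rotate the title so the keyword leads, context follows.
            entries.append((word.lower(), " ".join(words[i:] + words[:i])))
    return sorted(entries)

for key, context in kwic_index(["The Age of Printing"]):
    print(f"{key:10} | {context}")
```

A patron scanning the sorted entries could thus find a title under any of its significant words, not just its first word.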
Gradually, as computing systems evolved, the quantity and quality of metadata about what was in the libraries increased, and access within the library’s computing system increased, generally via a terminal or two sitting next to the card catalog.  However, none of the metadata was available outside the library.
With the advent of the World Wide Web standards and software (both servers and browsers) all of that changed.  [Sidebar: Interestingly, at least to me, the two basic markup languages of the web, HTML and XML, were derivatives of SGML, the Standard Generalized Markup Language.  SGML is a standard developed by the printing industry to allow it to transmit electronic texts to any location and allow printers at that location to print the document.  It’s ironic that derivatives of that standard are putting the printing industry out of business. One of the creators of SGML worked for/with me for a while.]
With the advent of the Internet, browser and server software, and HTML (and somewhat later XML), the next step in the disruption of libraries as repositories of data, information, and knowledge started with search engines.  The first commercially successful search engine was Yahoo.  It used (as do all search engines) web-crawler technology to discover metadata about websites and then organize it in a large database.  The most successful search engine to date is Google, the key reason being that it was faster than Yahoo and contained metadata about more websites.  These search engines replaced the card catalogs of libraries before the libraries really understood what they were dealing with.  This has been especially true as a great deal of data and information has migrated to the web in various forms and formats.
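At its core, the database a crawler feeds is an inverted index: a map from each term to the set of pages containing it, with a query answered by intersecting the posting sets.  A toy sketch in Python (the page URLs and contents are, of course, invented):

```python
# A toy inverted index: map each word to the set of pages containing it,
# then answer a query with the intersection of the posting sets.
def build_index(pages):
    index = {}
    for url, text in pages.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(url)
    return index

def search(index, query):
    postings = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*postings) if postings else set()

pages = {
    "a.example": "history of printing and libraries",
    "b.example": "history of computing",
}
index = build_index(pages)
print(search(index, "history computing"))  # -> {'b.example'}
```

Real engines add ranking, which is where Google’s speed and coverage advantage came in, but the card-catalog replacement is essentially this lookup structure.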
One of the things many library users went to the library for, before the advent of the web, was to use encyclopedias, dictionaries, and other such materials.  Now Wikipedia and other sites of this type are the encyclopedias, dictionaries, thesauruses, and so on, of the Age of Computing.  Additionally, many people read newspapers and magazines at the library.  These, too, are now available on any rich, smart user interface.  [Sidebar: For the definitions see my paper on Services at the User Interface Layer for SOA.  There is a link on this blog.]  The net result is that libraries, as physical facilities, are nearly obsolete.  Now “Big Data” (actually a marketing term for the knowledge management of the 1990s) libraries and pattern-analysis algorithms are taking the data, information, and knowledge development of the library to the next level, as I will discuss shortly.

Imaging: Photos, Videos, Television, Movies, and Pictures

One of the greatest transformations, so far, from the Age of Print to the Age of Computing is in the realm of imaging.  Images, pictures if you will, have been found on cave walls inhabited in the early “stone age”, and some written languages are still based on ideographs.  So imaging is one of the oldest forms of communication.
Late in the Age of Writing, in the Italian Renaissance, images became much more realistic with the “discovery” of perspective.  Up to that point images (paintings) had been very “two dimensional”; now they were three.  Early in the Age of Print, actually starting with Gutenberg, woodcut images were included in printed materials.  From 1800 onward, a series of inventors created photography, capturing images on photo-reactive film.  Lithography allowed these images to be converted into printed images.  Next, moving images, the movies, came into being, as did color photography.
From the 1960s, the U.S. Defense Department looked for methods and techniques to gather near-real-time intelligence by flying over the area of interest (in this case, areas in the USSR), and the USSR objected.  The first attempt was through the use of aerial photography, which started with a long-winged version of the B-57, then the U-2, and finally the SR-71.  All of these used the then state-of-the-art film-based photography.  But all had pilots, and only the SR-71 was fast enough to evade anti-aircraft missiles.
So a second approach was used: sending up satellites and then parachuting the film back to earth. There were two major problems with this approach.  First was getting the satellite up in a timely manner; rockets at the time took days to prepare for launch, so getting timely, useful data was difficult.  Second, having the film canister land at the proper location for retrieval was difficult.
Therefore, the US government looked for another solution.  They, and their contractors, came up with digital imaging.  This technology crept into civilian use over the next 20 years. Meanwhile, the photographic industry, in the main, ignored it, in part because of the relatively poor quality of the early images.  But this improved, in both resolution and the number of colors.  Among other consequences, this led to the demise of the photographic-film businesses of companies like Kodak and Fuji.
Another part of the reason the photo-film industry ignored digital imaging was the quantity of storage, and the physical size of the storage units, required to hold digital images.  But as Moore’s Law indicated, the amount of storage went up while the cost dropped drastically, and the size of the hardware needed decreased even more.  With the advent of SD and Micro-SD cards there was no need for film.  And with the advent of image standards like .tif, .gif, and .jpg, digital images could be shared nearly instantly.

Retail Selling

From before the dawn of history until 1893, trade (buying and selling) was a face-to-face business.  In 1893, Sears, Roebuck, and Company started selling watches, and then additional products, by catalog, using the railroad to deliver the goods.  Coupled with the Wells Fargo delivery system, running across the railroad network, this allowed people in small towns to purchase nearly any “ready-made” goods, from dresses to farm implements.  This helped mass-production industries and helped to create cities of significant size.  Sears then followed (or led) the way by building retail outlets (stores) in every town of even modest size.
This model of retailing is still the predominant model, but it is the one being challenged by the Sears, Roebuck catalog model in an electronic, internet-based form of retailing.  Examples include the electronically based Amazon, eBay, and Google.  Amazon rebooted the no-bricks-and-mortar catalog retailing model with an internet version.  It is successfully disrupting the retail industry.  Likewise, eBay recast the earliest market model, trading in the local market, in a global version.  Early in the existence of the internet various groups developed search engines.  Currently, Google is the primary search engine. But it is supporting a concierge service, which the Agility Forum, The Future Manufacturing Consortium, said would be a requirement for the next step in manufacturing and retailing, that is, mass customization.

Additive Manufacturing

Early in my studies in economics, the professors tied the economic progress of the industrial age to mass production and to economies of scale.  However, in the Age of Computing mass production is giving way to mass customization.
Initially, in the 1970s, robotic arms were implemented on mass-production lines to reduce the costs of labor.  [Sidebar: Especially in the automotive industry.  At the time US automakers found it infeasible to fire inept or unreliable employees due to union contracts.  Additionally, the labor costs due to those contracts priced US automobiles out of competition with foreign automakers.  To reduce their labor costs the automakers tried to replace labor with robots and numerically controlled machines.  They had mixed success, due to both the technical and the political issues raised.  This is not unlike the conversion of the railroads from steam to diesel and the “featherbedding” that forced many railroads into contraction or bankruptcy.]  By the 1990s automation, and in particular agile automation (automation that leads to mass customization), was becoming the business-cultural norm in manufacturing and fabrication industries.  Automation is replacing employees in increasingly complex activities.  It will continue to do so and will continue to enable the increasing mass customization of products.
For thousands of years, components for everything from flint arrowheads to automobile engine blocks to sculptures were created by subtracting material from the raw material.  This subtracted material is waste.  A person created a flint arrowhead by removing shards from a flint rock.
Automobile engine blocks are created by metal casting, then milling the casting to smooth the surfaces for the moving engine components.
Stone and wood sculptures use the same material-removal procedures as creating an arrowhead.  These too create waste.  Some cast sculptures may not be milled or polished, but these are the exceptions, and the mold for the casting is still waste material.
Recently, a process similar to casting, called injection molding, has emerged that does create products with relatively little waste.  But most component manufacturing processes create considerable waste.
However, with the rise of ink jet printing technology, people began to experiment with overlaying layers of material and found they could create objects. This technology is called 3D printing or additive manufacturing. It will have a much greater impact on manufacturing and mass customization.
A simple example is car parts for older model vehicles.  A car enthusiast orders a replacement part for the carburetor in his 1960s vintage muscle car.  The after-market parts company can create the part using additive technology rather than warehousing hundreds of thousands of parts, just in case.  The enthusiast gets a part that is as good as, or perhaps better than, the original; the after-market parts company doesn't need to spend money on warehousing; and the manufacturing process doesn't produce waste (or at most a nominal amount).
Researchers are now using this technology to create replacement bones for those shattered in accidents, war, and so on, and nano-versions of it to create a wide variety of products.  [Sidebar: Actually, one of the first "demonstrations" of the concept was on the TV show Star Trek, where the crew went to a device that would synthesize any food or drink they wanted.]
In the future this technology will disrupt all manufacturing processes while creating whole new industries, because it can create products that meet the customer's individual requirements better, at a lower cost, and in less time.  For example, imagine a future where this technology can create a new heart identical to the one that needs replacement, except fully functional; researchers are looking into technology that could, one day, do that.


The automotive industry is already starting to feel the effects of the Age of Computing.  It has been based on cost efficiency since Henry Ford introduced the assembly line, and it was among the first to embrace robots on the assembly line.  But there is much more.
The cell phone is becoming the driver's interactive road map, telling the driver which of several routes is shortest in driving duration, based on current traffic and backups, as well as on speed and distance.
Since the 1970s automobiles have had engine sensors and “a computer” to help with fuel efficiency and identifying engine malfunctions.  These have become increasingly sophisticated.
Right now the automotive industry is driving toward self-driving cars.  There are some on the roads already, and many cars have sensors (and "alerts") that "assist" drivers in one or more ways.

In the Near Future

And there are many industries, like the automotive industry, that are feeling the effects of the Age of Computing.  That is, there are many more systems that the technology and processes of the Age of Computing are disrupting.
While processes are in transformation today, it’s nothing compared with what will happen in the immediate and not very distant future.


Shortly, in the Age of Computing, information technology will disrupt schools.  People learn in two ways: by doing (showing, or "hacking") and by listening.  And everyone learns using a differing combination of these two methods.
Technology can and will be used to “teach” in all of these combinations.  Therefore, “the classroom” is doomed.
Some students learn by doing, a method that "academics" pooh-pooh: only "stupid" children take shop, apprenticeships don't count, and you must have a "degree" to get ahead.
However, children do learn by doing, and enjoy it.  Why do you think that so many boys, in particular, choose to play video games? 
Why is it that pilots of the United States Navy have to go through 100 hours or more of computer simulations before trying a carrier landing?  Because they learn by doing.
In the near future most jobs will require learning by doing.  Learning by doing includes simulations, videos, solving problems, and labs.  Automation has impacted all of these and increasingly will, giving the learn-by-doers the opportunity the current mass production education system doesn't.
The other method for learning is “learn by listening”.  Learn by listening includes reading and audio (audio includes both lectures and recordings of lectures).  Over the past two hundred years, these have been the preferred methods of “teaching” in mass production public schools.
In the main, it has worked "good enough" for a significant percentage of the students, but numbers of students have fallen through the system.  Part of the problem is that some teachers can hold the interest of some students better than others, other teachers may hold the interest of an entirely different group of students, and some may just drone on.
Now, using the technology of the age of computers, students will be able to listen to lectures from the teachers they are best suited to learn from.  This means that the best teachers will be able to teach hundreds of thousands of students across the globe, not just the 30 to 50 possible with the tools of the age of print.
It also means that students can learn in ways that more closely align with their interests. [Sidebar: I saw a personal example of this when I was working on my Ph.D. at the University of Iowa.  The Chair of the Geography Department, Dr. Clyde Kohn, was also a wine connoisseur.  He decided to offer a course, called "The World of Wines," to a group of 10 to 15 students.  He would teach them about the climates and geomorphology (soils, etc.) that create the various varieties of wine.  He would also teach them about wine making and distribution worldwide, so there was physical and economic geography involved.  In the first 5 minutes of enrollment the class was filled, and students were clamoring to get into it.  He opened it up.  By the time all students had enrolled there were 450 students in the geography class, and they probably learned more geographic information than they ever had before.  It also gave the state legislature apoplexy.]  As the technology becomes more refined, students will be able to learn whatever they need to learn without ever going near a classroom.  I suspect that home (computer) schooling will become the norm.  Even "class discussion" can be carried on using tools like Skype or GoToMeeting.  Sports will be team-based rather than school-based.
I will define a prescriptive architecture for education in another post.  It turns the educational system on its head.  [Sidebar: Therefore, it will be ignored by the academic elite.]


Medicine, too, is starting to undergo, and will continue to undergo, a complete disruption of the way it is performed (not practiced).
Currently, most medical performance is at the rational Ouija-board stage and uses mass production methods, not mass customization.  But all people are biological experiments and are, consequently, individuals.  Yet every malfunction is treated the same way.
To get the best result for the individual, each type of drug and dosage of that drug should be customized for the individual from the start—not by trial and error.
In the near future, people will be diagnosed using their complete history, analyzing their DNA, body scanning, and other diagnostic measurements (both current and undiscovered). Then, using additive nano-technology an exact prescription will be created.  The medicine may be a single pill, mixed with a liquid, through a shot, or some other method, introduced into the individual.
Much of this analysis will be done by a computer.  Already, in the 1970s, a program simulated a patient so that medical students could attempt to diagnose the "patient's" problem.  For the program to serve its intended function, the MDs and Computer Assisted Instruction mavens continually refined the data it used.  If this continued, and I suspect it did, the database from this single program could be used by an analysis program to produce a diagnosis comparable with that of expert diagnosticians.
This type of program could be, and likely will be, used by every hospital in the country, saving time and a great deal of money in identifying problems.  The key reason that it is not used today is that it has poor "bedside" manners; but so too do many of the best diagnosticians.
Also, in many situations, this will take “The Doctor” out of the loop.
For example, instead of visiting a doctor, the patient walks into "the office," which may be in front of the home computer.  The analysis "App" asks the patient questions and gets the patient's permission to access his or her medical record.  If the patient is at home and the "Analysis App" needs more information, the app may ask the patient to go to the nearest analysis point of service (APOS) for further tests.
At the APOS the patient would lie on a diagnostic table, not unlike those mocked up in Star Trek.  This table would have all the sensors needed to take the necessary measurements; in fact, there will be a mobile version of this table in the back of a portable APOS vehicle.
Once the analysis is complete, the APOS will use additive manufacturing to incorporate all of the medicines needed in a form usable by the patient.
For physical trauma, or where there is irreparable damage to a bone or organ, additive manufacturing will create the necessary bone or organ, and a robotic system will then transplant it into the patient's body.
The heart of this revolution in medical technology is an Integrated Medical Information System based on the architecture I've presented in the post entitled "An Architecture for Creating an Ultra-secure Network and Datastore."  Without such an ultra-secure system for each individual's medical records, the externalities are too grave to consider.
However, even with an Integrated Medical Information System there will be substantial side effects for all stakeholders: doctors, nurses, technicians, and patients.  There need no longer be any medical professionals, except in medical research organizations.
Because the recurring costs of an APOS are low when compared with the current doctor’s office/hospital facility, all people should be able to pay for their own medical costs.  So there will be little or no need for insurance.
Additionally, because medicines are manufactured on a custom basis as needed by the patient, there will be no need for pharmacies or systems for the production and distribution of medicines.
With no medical professionals, no insurance, and no need for the production and distribution of medicine, this whole concept will be fought, in savage conflict, by those groups, as well as by Wall Street and federal, state, and local welfare agencies, all of whom stand to lose their jobs.  However, it is inevitable, though perhaps greatly slowed by governmental regulation.
Again, I will say a good deal more on this topic in a separate post.

Further into the Future

There are three alternative future cultures possible in the Age of Computing: the Singularity, Multiple Singularities, or the Symbiosis of Humans and Machines.  These may all sound like science fiction or fantasy, but they are based on my 50+ years of watching the Age of Computing and its technology advance.

The Singularity

In a story someone told me in the 1960s, a man created a complex computer with consciousness.  He created it to answer one question: "Is there a god?"  The computer answered, "Now there is."  One definition of "The Singularity" is that all of the computers and computer-controlled devices, like smart phones, become "cells" in a global artificial consciousness.
Many science fiction writers and futurists have speculated on just such an occurrence and its implications.  John von Neumann first used the term "singularity" in the early 1950s, applying it to the acceleration of technological change and its end result.
In 1970, futurist Alvin Toffler wrote Future Shock. In this book, Toffler defines the term “future shock” as a psychological state of individuals and entire societies where there is “too much change in too short a period of time”.

The Singularity Is Near: When Humans Transcend Biology is a 2005 non-fiction book about artificial intelligence and the future of humanity by Ray Kurzweil.
Many science fiction writers and many movies have speculated about what happens when the Singularity arrives.  For the most part these stories take the form of man/machine wars or conflicts.  In the first Star Trek movie, the crew of the Enterprise had to battle a world-consuming machine consciousness.  In the Terminator series of movies it's man versus machine, and man and machine versus a machine.  And in The Matrix, it's about man attempting to liberate himself from being a slave of the machine consciousness. [Sidebar: In the mid-1970s I had a very interesting discussion with Dr. John Crossett about the concept that formed the plot for The Matrix.]
There are literally hundreds of other books and short stories about dealings and conflicts with the singularity.  While this is all science fiction, science fiction has often pointed the way to science and technology fact.

Multiple Singularities

A second scenario is that, because of advances in artificial intelligence, there are multiple singularities.  Again, science fiction has dealt with this scenario.  Isaac Asimov dealt with multiple singularities, and their results, in his I, Robot series of stories.  In that scenario, more than one robot achieves consciousness, and humanity plays a subordinate role to the "artificial intelligences."  These singularities interact with each other in both very human and very un-human ways.

Symbiosis of Humans and Machines

The best set of scenarios, from the perspective of humanity, is the symbiotic one.  All multi-cell life above a very rudimentary level is composed of a symbiosis of cells and bacteria.  So it is reasonable that there could be a symbiosis of humans and machines.
For example, nano-bots could be inserted that would deliver toxins to cancerous cells to kill those cells directly, inhibit their transmission of the cancer-causing agent to other cells, or link with the brain to take orders to repair any damaged cells.  These nano-bots would be excreted when their work is complete.
Taking this a step further, these nano-bots could allow the human brain direct access to the information on the Internet or "in the cloud" (as marketers like to say). [Sidebar: "Cloud Computing" has been with us ever since the first computer terminals used a proprietary network to link themselves to a mainframe computer.  Yes, the technology has been updated, but it's still remote computing and storage.]  This would mean that all you would have to do is think in order to watch a movie or gain some knowledge about the world around you.  The very dark downside is that terrorists, politicians, news commentators, or other gangsters and thugs could control your thinking, i.e., direct mind control.  And the artificial consciousness itself could take over and use humans to its benefit.  [Sidebar: Remember, a thief is nothing more than a retail politician, retail socialist, or retail communist.  Real politicians, socialists, and communists steal at the wholesale level.]  This mind control is the ultimate greedy way to steal: anyone whose mind is controlled is, by definition, a slave of the mind controller.

“Space the Final Frontier”

I see only one way out of the mind-control conundrum: traveling into and through space.  Once humans leave the benign environment of the earth, the symbiosis of humans and machines (computers and other automation) becomes imperative for both humans and their automated brethren.  Allies are not made in peace, only when there are risks or threats.

Even the best astrophysicists readily admit that we don't understand our universe, and that as humans we may never be able to understand it.  There is simply too much to fathom.  However, in symbiosis with an artificial consciousness, we may be able to take a stab at it.
1 month, 5 days ago

An Architecture for Creating an Ultra-secure Network and Datastore

The Problem
According to United States records, from 2006 to 2016 cyber attacks (crimes, intelligence gathering, and warfare) went up 1300 percent.  Other reports identified in Forbes Magazine indicate that between 2015 and 2016 there was a 200 to 450 percent increase in attacks.  I suspect, though, that these numbers vastly underestimate the total number of attacks.  I know that in the late 1980s, one company was averaging 10,000 attacks per day on its website and access points to the internet, of which 4000 originated in Russia (then the USSR), China, North Korea, and the like.
There are two goals for attacks: to disrupt the entire IT infrastructure, or to gather or change protected data for various nefarious purposes.  There is a multiplicity of reasons for these attacks: monetary gain, political change, and so on; the "so on" is too long to enumerate.
The cost for preventing and mitigating the effects of these attacks has spawned a new multi-billion dollar industry.  Consequently, the need is for an entirely new system (network and datastore) that completely defeats all attack vectors.  That is what I’m proposing here.

The Solution, a Disruptive Architecture: The Once and Future System

The Goal

The goal of the architecture presented here is to define a highly secure system for the transmission and storage of data.
The architecture is for a fundamentally different "new" network and datastore.  I put "new" in quotes because I based the architecture on a number of concepts and standards from the late 1970s to the mid-1990s.  For reasons of economics and business politics, these concepts and standards were abandoned.  When I submitted the architecture for a patent, I was told that, even though it uses these old concepts and standards in a new way, it is unpatentable because it is based on well-known concepts and standards.
Consequently, I'm presenting it in this post in the hope that someone will take a serious look at it and communicate with me, so that I can present the details and we can build a secure network and datastore.

The Architecture

My fundamental idea is to create a separate "data only" network and datastore.  Initially, having a worldwide network for the storage and transmission of data separate from the Internet "of everything" may seem a ludicrous idea to those looking at an organization's "short-term" costs; but what would the cost of having data stolen, corrupted, or destroyed be for an organization?  And remember that there are both initial and recurring costs for data security on a cloud or across the internet.
This new architecture has five components.  One of them has evolved over the past twenty years.  One of them was declared obsolete thirty years ago.  One of them is based on petrified standards of the 1980s.  And one uses a new twist on current hardware and software.  The fifth is a particular form of governance.

New User Interface Security

The base technology of the new user interface has been evolving for at least the past twenty years.  It is a combination of three functional technologies.  The first is biometric recognition.  Any secure system requires some form of authentication: proof that you are who you say you are.  Various forms of biometric authentication (facial recognition, fingerprint identification, retinal pattern recognition, and so on) are currently the forms of identification least likely to be broken by cyber attacks.
The second security technology is a version of the smartcard: a credit-card-like device with a data storage computer chip embedded.  Under this new function, the card reader would communicate the location, time of day, and date, whereupon the card would generate a pass code based on those parameters.
At the same time, the reader would generate a pass code based on the same parameters.  The system would accept the identification if and only if the two codes matched.  Since any secure system requires at least two-factor authentication, a user would need both the smart card (which additionally could store the biometric data) and his or her own body.
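As a concrete illustration, here is a minimal sketch of how such a matched pass code scheme might work.  It assumes an HMAC-based derivation over the location, time, and date; the parameter names and the shared secret are hypothetical, not part of any deployed smartcard standard.

```python
import hashlib
import hmac

def passcode(shared_secret: bytes, location: str, time_of_day: str, date: str) -> str:
    """Derive a short one-time pass code from the reader-supplied parameters.
    Card and reader hold the same secret, so each can derive the code
    independently; the system accepts the sign-on only if the codes match."""
    message = f"{location}|{time_of_day}|{date}".encode()
    return hmac.new(shared_secret, message, hashlib.sha256).hexdigest()[:8]

# Hypothetical secret provisioned when the card was issued.
secret = b"provisioned-at-card-issuance"
card_code = passcode(secret, "lobby-reader-3", "14:05", "2017-06-01")
reader_code = passcode(secret, "lobby-reader-3", "14:05", "2017-06-01")
assert hmac.compare_digest(card_code, reader_code)  # identification accepted
```

Because only matching pairs are accepted, a pass code captured in transit is useless at another reader, at another time, or on another date.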
Finally, authorization and access control are both static for a given user interface to the system.  This means that the user of a given device (be it a terminal, PC, smart phone, etc.) can only gain access to the set of data, records, or summaries to which they're entitled.
So a contract specialist has no access to engineering data for the contract, or only to a limited set.  If the contract specialist attempted to sign on from another device, one for which he was not preapproved, he could not get to the data to which he is entitled.  The reason is that an individual must be preapproved for every terminal the individual wants to use.
Or a doctor may not see a patient’s complete medical history without the patient’s permission. This would be a two step process.  The doctor would have to sign in on his or her device using the two-factor authentication, described above.  Then the patient would have to sign on to the same device using the same two-factor authentication to give the doctor permission to access his or her medical record.
The security meta-data and parameters are stored on the ultra-secure data network (USDN).  Any updates or changes must be made and approved through the system’s security governance function.  No dynamic changes can be made until the changes are approved.  In a political/cultural context, this governance process will be the most difficult to secure since users expect changes to be made “NOW” and the process doesn’t allow “NOW” to happen.
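A toy sketch of what static, preapproved (user, device) authorization might look like; all of the names are hypothetical, and in a real USDN the table would live in the security meta-data and change only through the governance process.

```python
# Static access-control table. Entries are installed through governance,
# never changed at run time; all names here are hypothetical.
ENTITLEMENTS = {
    # (user, preapproved device): data sets reachable from that pairing
    ("contract_specialist_7", "terminal_A"): {"contract_data"},
    ("engineer_12", "terminal_B"): {"engineering_data", "contract_data"},
}

def authorized(user: str, device: str, data_set: str) -> bool:
    """Grant access only if the user is preapproved for this exact device
    and the data set falls within that pairing's static entitlement."""
    return data_set in ENTITLEMENTS.get((user, device), set())
```

So the contract specialist reaches contract data from the preapproved terminal, nothing at all from any other terminal, and never the engineering data.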

The Bridge

The second architectural component is the bridge from the Internet to the USDN.  This is really the key component securing the USDN from attacks.  And this is the component that was declared obsolete thirty years ago.  In the early 1980s there were many proprietary data networks.  To communicate data from one network to another required a network bridge.
The following diagram is from the patent that I applied for.  It shows an example of how changing the protocol layers or stacks creates a portcullis in the bridge that provides the ultra-security.  On the left side of the bridge are the standard Internet protocols.  Other than the top layer (called the Application Layer in the OSI model) and the bottom layer (the Physical Layer in the OSI model), all layers link and guide the communications between the sender and receiver.
Notice that the functional protocols on each side of the bridge, with the exception of the physical layer, are different.  On the left side all protocols are current Internet standards.  However, on the right side the bridge uses protocols from the Open Systems Interconnection (OSI) suite.  These protocols were abandoned in the 1990s in favor of the earlier TCP/IP suite, which at the time was less expensive and much less capable. [Sidebar: "The first example of a superior principle is always less capable than a mature example of an inferior principle."]

What this means is that the entire USDN will use these OSI protocols.  Any cyber attack software developed for Internet protocols would have to be redesigned for the OSI protocols.
Even if the hackers of whatever stripe did develop software capable of exploiting vulnerabilities in the OSI protocol stack they would still need to get it onto the network.  But the design of the bridge includes a portcullis in the middle of the bridge.
This portcullis is designed to allow only data and records in well-defined formats to pass.  This means that no documents can move across the bridge.  In this case "documents" includes e-mail, word-processing documents, unformatted text, files, and other unformatted data.
This stringent requirement eliminates nearly every attack vector hackers use.  For example, there is no way a Trojan horse attachment can get into the system, because no e-mail, let alone e-mail with attachments, is allowed across the bridge.
As shown in the diagram, only data in specific and static XML formats is allowed to move through the portcullis.  The XML data structures are installed in the portcullis only after approval using one of the governance processes.
So, for example, medical data would use an XML version of the international medical standards, engineering data would use an XML version of STEP, and so on.  Only data exactly following those standards, and to which the user is entitled, would get through the portcullis.  This would initially impose a very large overburden of meta-security and access control data about all individuals.
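As a rough sketch of the kind of check the portcullis would perform: a real portcullis would validate against governance-approved XML schemas (an XML form of STEP, the medical standards, and so on), whereas this toy whitelist and its element names are entirely hypothetical.

```python
import xml.etree.ElementTree as ET

# Stand-in for the governance-approved XML structures installed in the
# portcullis; the structure and element names here are hypothetical.
APPROVED_STRUCTURES = {
    "patient_record": {"patient_record", "id", "blood_pressure", "dosage"},
}

def pass_portcullis(payload: str) -> bool:
    """Admit a payload only if it is well-formed XML and every element
    belongs to one approved structure; e-mail, documents, free text,
    and unexpected elements are all refused."""
    try:
        root = ET.fromstring(payload)
    except ET.ParseError:
        return False  # not well-formed XML at all
    allowed = APPROVED_STRUCTURES.get(root.tag)
    if allowed is None:
        return False  # structure was never approved through governance
    return all(element.tag in allowed for element in root.iter())
```

Free text and unapproved structures simply never get through, which is the whole point: the attack surface shrinks to the approved record formats.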

The Network

The third architectural component is the network.  The network is based on petrified standards of the 1980s.  Inside the portcullis-bridge data would be free to move among the various nodes of the network using the same OSI protocol stack that is used on the right side of the portcullis-bridge shown in the diagram.
Additionally, it would use improved versions of the Directory Service (X.500) standard.  This would include using static routing meta-data (which many network analysts would say is not an improvement).  However, static routing meta-data means that if an unauthorized node magically appeared on the USDN (because some hacker tapped one of the USDN lines) the node would be recognized as a threat immediately.  Consequently, any attempt to breach the security imposed by the portcullis-bridge by directly attacking the network would fail, as long as good governance is in place.
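The static-routing point can be sketched in a few lines; the node names are hypothetical, and the directory would of course be X.500-style meta-data rather than a Python set.

```python
# Static routing meta-data: the directory of legitimate nodes is fixed at
# configuration time and changed only through governance. Names are hypothetical.
KNOWN_NODES = {"usdn-node-01", "usdn-node-02", "usdn-node-03"}

def screen_node(node_id: str) -> str:
    """With static routing there is no legitimate way for a node to join
    dynamically, so any unknown node is flagged as a threat on sight."""
    return "routable" if node_id in KNOWN_NODES else "THREAT: unknown node"
```

A hacker's tap that surfaces as a new node is therefore self-announcing: it cannot be anything but a threat, because legitimate nodes never appear dynamically.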


The Datastore

The last technical function is data storage.  This datastore function uses a new twist on current hardware and software design for the storage of data and information.  The twist is that only specific data and records are stored, not files from outside the network.
An organization using a USDN-like system would have its data file structures created by authorized personnel inside the USDN.  These file structures would follow the various authorized XML data structures.  No freeform data like e-mail or documents would be allowed.  [Sidebar: Remember, it's much, much simpler to create documents from data than to glean data from documents.]
The only applications that are authorized to run on the USDN and its datastore computers are those that create, read, update, or delete records or data elements.  Reading data would include reading for transfer, and for summarization. 
For example, suppose the medical profession of a state, or of the United States, adopts the USDN to protect patients' medical records.  A medical researcher may be granted access to summaries of certain data elements of the records of patients who have a particular medical problem.  This access would be granted through an approval process, part of governance, prior to obtaining the summaries.
The advantage is that the medical researcher has access to a complete set of data for the population of an area.  The downside for the researcher is that they need a well-formulated and defensible hypothesis to work from in order to obtain the data, and that the governance processes take time.
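A toy sketch of a governance-gated summary query; the researcher, the data element, and the records are all hypothetical, and the point is only that raw records never leave the datastore.

```python
from statistics import mean

# Hypothetical governance state: which researcher may summarize which element.
APPROVED_REQUESTS = {"researcher_4": {"systolic_bp"}}

# Hypothetical patient records; these never leave the datastore.
RECORDS = [
    {"systolic_bp": 128, "dosage_mg": 10},
    {"systolic_bp": 141, "dosage_mg": 20},
    {"systolic_bp": 119, "dosage_mg": 10},
]

def summarize(researcher: str, element: str) -> float:
    """Release a population summary only for a governance-approved
    (researcher, data element) pair; raw records are never released."""
    if element not in APPROVED_REQUESTS.get(researcher, set()):
        raise PermissionError("request not approved through governance")
    return round(mean(record[element] for record in RECORDS), 1)
```

The approved researcher gets the population summary; any element outside the approval is refused before a single record is read.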


The Governance Processes

The governance processes function of the system's architecture is the most critically important of the five functions because it is the only one where humans are involved, big time.  As discussed above, there are many security functions that are static and require administrative functions to change the parameters and meta-data.  While I expect that actually changing the meta-data and parameters will be automated, the various decision-making processes will not be.
One obvious example is in banking.  Some financial data must be secure within a financial institution and only shared with a client.  Other data, in the form of transactions must be shared between and among banks and other financial institutions.
The USDN security meta-data would determine whether data could be sent to another financial organization, what data could be sent, and other characteristics of the transaction.  The transfer would occur within the USDN, not across any portion of the Internet, and the data could then be retrieved by the destination organization.
For example, if all defense contractors were on the USDN, then when teams formed to respond to a DoD Request For Proposal (RFP), the various teams of contractors and subcontractors could share requirements and other data within their team.  When the DoD chose the winning team, program/project, risk, and design data could be shared within the team and with the customer without fear of a cyber attack on one of the sub-contractors leading to the capture or corruption of program or mission critical data.  [Sidebar: Frequently a third- or fourth-tier sub-contractor has more vulnerabilities than the prime contractor.]


Again, "The first instance of a superior principle is always inferior to a mature example of an inferior principle."
There are three issues with the creation of such a system. 
The first is cost; creating an entire nationwide or worldwide network is very expensive in the startup phase.  Creating (or really resurrecting in many cases) software to support the functions of the USDN will be very expensive.  There is the cost of implementing software services to interface with existing organizational applications.  Acquiring the physical cabling for the system will be expensive.  
Modifying routers to use the new protocols will be expensive. Designing, constructing, and testing the new portcullis-bridge will be very expensive.  Most of this investment will need to be done before one data element is protected.
The cost is more than a straight financial issue of building the system.  It will threaten much of the multi-billion dollar cyber security industry’s income stream.  This industry will market and lobby against building out the system.
The second issue may be used by that industry as an argument against the USDN: the system protects only data, not other types of information like e-mail and documents.  This is true.  However, the core of any organization is its data.  Documents can be easily constructed from data, but not the other way around.
The third issue, at least initially, is the response time of the system.  Currently, applications and users have come to expect nanosecond response times to dynamic requests.  Initially, at least, I predict that the response time to requests will be measured in seconds, maybe many.  I saw this with Microsoft DOS (until version 3.1 it was bad), with other products from Microsoft, Apple, and Oracle [Sidebar: I worked with Oracle 4.1], and with many other hardware and software products.  So it will be a rocky start, but ultimately it will cost much less than the recover, rebuild, patch, upgrade, and get-hacked-again systems of today.


While the USDN does not prevent cyber attacks on an organization, it does make the organization's mission critical data nearly invulnerable.  An organization will be able to recover from an attack, and it will be nearly impossible for terrorists, cyber criminals, etc. to get at the personal data or mission critical data it protects.

For anyone who is interested, please comment on this post.  I have much more knowledge of the processes, technology, and construction involved than I can put in a post, but I would be happy to discuss it.
6 months, 21 days ago

Hurricanes and an Architecture for Process

The idea for this post came from a previous post on this blog regarding enterprise architecture and religious organizations.

Definitions or Definitional Taxonomies

There are three definitions needed to understand this post: the definition of Architecture, of process using the OODA Loop, and of the Hierarchy of Knowledge.  These are somewhat personal definitions and you may interpolate them into your model universe.


Architecture—A functional design of a product, system, or service incorporating all the “must-do” and “must-meet” requirements for that product, system, or service.


All processes supporting the mission, strategies, or infrastructure of any organization fall into the OODA model of Col. John Boyd.  The OODA, as discussed in several previous posts, includes four steps: Observe, Orient, Decide, and Act.  I will discuss this shortly (below) using the example of the prediction of a hurricane.
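The four steps can be sketched as a simple driver loop.  This is a minimal, hypothetical illustration of the cycle, not an implementation from the post; the callback names simply mirror Boyd's four steps.

```python
# A minimal sketch of Boyd's OODA cycle as a driver loop.  The four
# callbacks correspond to the four steps; everything else here is my own
# illustration, assuming the caller supplies the domain-specific logic.

def run_ooda(observe, orient, decide, act, cycles=3):
    """Run the Observe-Orient-Decide-Act loop for a fixed number of cycles."""
    feedback = None          # results of the previous action feed the next observation
    decisions = []
    for _ in range(cycles):
        data = observe(feedback)      # Observe: gather raw data, including action results
        model = orient(data)          # Orient: fit the data into a model or paradigm
        decision = decide(model)      # Decide: choose an action within the model
        feedback = act(decision)      # Act: execute, producing new results to observe
        decisions.append(decision)
    return decisions
```

Note that the loop closes on itself: the output of Act becomes an input to the next Observe, which is exactly why the post keeps returning to this structure for maintenance, design, and forecasting processes.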

A Taxonomy for the Hierarchy of Knowledge

I defined the following taxonomy of knowledge based on much reading and some thinking.  I will demonstrate what I mean for each definition by using it in the appropriate place in my example of the hurricane prediction.
Datum—some observation about the Universe at a particular point in four dimensions

Data—a consistent set of datums

Information—patterns abstracted from the data.

Knowledge—identified or abstracted patterns in the information.

Wisdom—the understanding of the consequences of the application of knowledge

Process Architecture and Forecasting Hurricanes

Since the process for predicting or forecasting hurricanes is most likely to be familiar to anyone watching weather on TV, it is the best example I can think of to illustrate how the OODA Loop process architecture works with the taxonomy of knowledge.


Initially, data is gathered by observing some aspect of the current state of the Universe.  This includes data about the results of the observer’s previous actions.  In point of fact,

Datum—some observation about the Universe at a particular point in four dimensions

Data—a consistent set of datums

Obviously observations of current temperature, pressure, humidity, and so on, are basic to making weather forecasts, like the prediction of a hurricane.  And each observation requires latitude, longitude, height above sea level, and the exact time to be useful. So one datum, or data point would include, temperature, latitude, longitude, height, and time.

When the temperature is measured at many latitudes and longitudes concurrently, it produces a set of temperature data.  If other weather measurements, like pressure and humidity, are taken at the same locations at the same time, then you have several data sets with which to make a forecast (or a composite weather data set for a particular time).  Because this is a recurrent measurement process, the weather folk continue to build a successively larger composite weather data set.  But large data sets don’t make a forecast of a hurricane.
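The datum and data definitions above can be made concrete with a small sketch.  The field names and sample values below are my own illustration, assuming a datum fixes one measurement at a point in four dimensions:

```python
from dataclasses import dataclass

# Illustrative only: a "datum" as defined in the post — one observation
# pinned to a point in four dimensions (three of space plus time).
@dataclass(frozen=True)
class WeatherDatum:
    latitude: float      # degrees
    longitude: float     # degrees
    height_m: float      # metres above sea level
    time_utc: str        # ISO-8601 timestamp
    temperature_c: float
    pressure_hpa: float
    humidity_pct: float

# "Data" is then a consistent set of such datums taken concurrently;
# repeated over time, these sets build the composite weather data set.
observations = [
    WeatherDatum(25.0, -80.1, 2.0, "2017-09-01T12:00:00Z", 29.5, 1002.0, 88.0),
    WeatherDatum(26.0, -79.5, 2.0, "2017-09-01T12:00:00Z", 29.8, 998.0, 91.0),
]
```

Each record is useless for forecasting on its own; it is the consistent, concurrent set that gives the forecaster something to orient.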


The Orient step in the process is inserting the new data into an individual’s model of how the world (Universe) works (descriptive) or should work (prescriptive).  These models are sometimes called paradigms.  Rules from Governance enable and support the Orient step by structuring the data within the individual’s or organization’s model.  Sometimes these models or paradigms stick around long after they have been demonstrated to be defective, wrong, or in conflict with most data and other information.  An example would be the model of the Universe of the Flat Earth Society.

Information—patterns abstracted from the data.  This is the start of orienting the observations.  Pattern analysis, based on the current model, converts data into information; that model is derived from the organization’s view of its environment or Universe.  For hurricane forecasting this would mean looking for centers of low pressure within the data.  It would also include identifying areas with high water temperatures in the temperature data, areas of high humidity, and so on.  These and other abstractions from the data provide the information on which to base a forecast.  But it is still not the prediction of a hurricane.
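A toy version of this abstraction step: scan the composite data set for the lowest-pressure observation, a crude stand-in for locating a low-pressure center.  The threshold value and field names are my own assumptions for illustration, not part of any real forecasting model.

```python
# Illustrative pattern abstraction: turn raw pressure data into a piece of
# "information" (a candidate low-pressure centre).  The 1000 hPa threshold
# is an assumed cut-off for this sketch, not a meteorological standard.
def find_low_pressure_centre(observations, threshold_hpa=1000.0):
    """Return the lowest-pressure observation below the threshold, or None."""
    candidates = [o for o in observations if o["pressure_hpa"] < threshold_hpa]
    if not candidates:
        return None   # no pattern found: the data yields no information here
    return min(candidates, key=lambda o: o["pressure_hpa"])
```

The point the post makes holds in miniature here: the function embodies a model (pressure below a threshold matters), and without that model the data set is just numbers.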

Knowledge—identified, combined, or abstracted patterns in the information.  Using the same paradigm, environmental model, or model Universe, people analyze and abstract patterns within the information.  This is their knowledge within the paradigm.  In weather forecasting the weather personnel use the current paradigms (or information structures) to combine and abstract the knowledge that a hurricane is developing.

When they can’t fit information into their model, they often discard it as aberrant, an outlier, or an anomaly.  When enough information doesn’t conveniently fit into their model, the adherents have a crisis.  In science, at least, this is the point of a paradigm shift.  And this is what has happened to weather forecasting models over the past one hundred and fifty years.  The result is that forecasting has gotten much better, though people still complain when a storm moves off its track by fifty miles and causes unforecast wind, rain, and snow events.


Once the organization or individual has the knowledge, he or she inputs that knowledge into their models of the Universe to make decisions.

Wisdom—the understanding of the consequences of the application of knowledge.

This is the hard part of the OODA Loop because it’s difficult to understand both the consequences and the unintended consequences of a decision.  If your paradigm, environment, or Universe model is good, or relatively complete, then you’re more likely to make a good decision.  More frequently than not people make decisions that are “Short term smart and long term dumb.” 

Part of the reason is that they are working with a poor, incomplete, or just plain wrong paradigm (view of the world or universe).  This is where the Risk/Reward balance comes in.  When choosing a path forward, what are the risks and rewards of each path?  [Sidebar:  A risk is an unknown and it is wise to understand that “you don’t know what you don’t know”.]  For weather forecasters it’s whether or not to issue a forecast for a hurricane.  To ameliorate the risk that they are wrong, weather forecasters have invented a two-part forecast: a hurricane watch and a hurricane warning. [Sidebar: It’s split further, with a tropical storm watch and warning as well.]

The reason for this warning hierarchy is that the weather forecasters and services are wise enough to know the public’s reaction to warnings that don’t pan out and lack of warnings that do; they understand the risks.  So when they give a hurricane warning, they are fairly sure that the area they have given the warning for will be the area affected by the hurricane.


Once the decision is made people act on those decisions by planning a mission, strategies, and so on within their paradigm.

For governments and utilities in the U.S. this means putting preplanned hurricane preparations into effect.  For people this generally means boarding up the house, stocking up on food, water, and other essentials, or packing and leaving.  And for the drunk beachcomber this means grabbing the beers out of the fridge and either heading to the hills or back to the beach to watch the show.

Programmed and Unprogrammed Processes

In the 1980s I wrote a number of articles on process types.  After doing much research and much observation, I came to the obvious conclusion that there are two types of processes: Programmed, and those that I will call Unprogrammed, which include design, discovery, and creative processes. [Sidebar: These three sub-types differ in that design processes start from a set of implicit or explicit customer requirements, discovery processes start from inconsistencies in data or in thinking about and modeling the data, and creative processes really have no clear foundation and tend to be intuitive, chaotic, and apparently random.]

Programmed Processes

Programmed processes are a set of repeatable activities leading to a nearly certain outcome.  Almost all processes are of this type.  The classic example is automobile manufacturing and assembly.  Since they are repeatable, how does the process architecture model apply?
The short answer is to keep them repeatable and to increase their cost efficiency (that is, making them less expensive in terms of time and money to create the same outcome).

Keeping Programmed Processes Rolling

To keep your truck or car operating requires maintenance.  How much and when is an OODA Loop process.  For example, you really should check your tires, both for wear and pressure regularly; otherwise the process of driving can become far too exciting.  So you are in the “observe” step.
As the tires wear out or lose pressure too often, the data set becomes information, triggering the “orient” step in the process.  At this point a driver starts to gather data on which tires are wearing fastest and where on the tire they are wearing.  This last point may indicate that tire rotation is all that’s needed.
Based on this data our driver models the information (using the computer between his or her ears) to determine how much longer it’s safe to drive on the tires.  Based on results of that model, the driver will “decide” to buy tires, rotate tires, or wait a little longer.  [Sidebar: Actually, many drivers will go through the buy/wait decision based on additional data, the cost of new tires, and their budget.]  The driver will then “act” on the decision, either get new tires or wait (which is really an action).
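The driver's decide step can be sketched as a tiny rule.  The inputs (tread depth, wear spread, budget) and the thresholds below are my own illustrative assumptions, not a maintenance standard:

```python
# Hedged sketch of the driver's "decide" step from the tire example.
# Thresholds are assumed for illustration; 1.6 mm is a common legal
# minimum tread depth in many jurisdictions, but check your own rules.
def tire_decision(min_tread_mm, wear_spread_mm, budget_ok):
    """Return 'buy', 'rotate', or 'wait' based on simple observed data."""
    if min_tread_mm < 1.6:
        # Worn out: buy if the budget allows, otherwise (unwisely) wait.
        return "buy" if budget_ok else "wait"
    if wear_spread_mm > 1.0:
        # Uneven wear across tires suggests rotation is all that's needed.
        return "rotate"
    return "wait"   # waiting is also an action
```

As the sidebar notes, the budget input is what turns a purely mechanical decision into the buy/wait trade-off many drivers actually make.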
The point here is that like everything else in our Universe, over time, everything breaks down; so repeatable processes require an OODA Loop process to maintain them.

More Cost Efficient

However, the OODA Loop process architecture elucidated here is important for programmed processes for another reason.  For repeatable processes the key OODA process is always how to make them faster or cheaper; that is, how to make a higher quality product at the same or lower price, and produce it in the same or less time.
This is famously what Adam Smith described in his pin example in the first chapter of “The Wealth of Nations”. By creating an assembly line process that he called “the division of labour”, he demonstrated that the same number of workers could produce hundreds of times more pins.  He went on to describe how each worker might then invent (or innovate, that is, refine or enhance) tools for that worker’s job. [Sidebar: Henry Ford went on to prove this in spades, or should I say Model-T’s.]  [Sidebar: To me the workers’ inventing or innovating tools to help them in their jobs is quite interesting and often missed.  Tools are process or productivity multipliers.  In the military they are called force multipliers; you always want your enemy to bring a knife to your gunfight, because a rifle multiplies your force.  Likewise, the tooling in manufacturing can increase the quality of the product and the uniformity of what is produced, while reducing the cost and time to produce…fairly obvious.]
Both the “division of labour” and the creation of tooling require the use of the Architectural OODA Loop, which means that increasing the cost efficiency of manufacturing uses the OODA Loop with the knowledge taxonomy.

Unprogrammed Processes

Unprogrammed processes contain stochastic (creative and unplanned) activities.  There are really three sub-types of unprogrammed processes: design, discovery, and creative. The key differences between them are whether the process is driven by customer requirements and what those requirements are.


The design process has customer requirements.  [Sidebar: As I’m using it, the design process also includes custom development, implementation, and manufacturing.]  It uses a semi-planned process; that is, program or project planning creates a plan to meet the objectives, but with latitude for alternative activities because there is significant risk.  The actual design activities within the programmed process are stochastic from a program management perspective.  That is, the creative element of these activities makes them less predictable, and therefore introduces programmatic risk with respect to cost and schedule.  Therefore, program management must use (some form of) the OODA architecture to manage the program or project.
The stochastic activities are themselves OODA Loop processes.  The designers have to identify (observe) detailed data on the functions they are attempting to create (the “must do” requirements), while working to the specifications (the “must meet” requirements) for the product, system, or service.  The designers then have to “orient” these (creating a functional design or architecture for the function) and “decide” which of several alternate proposed designs is “the best”. Finally, they “act” on the decision.


The requirements for research, risk reduction, and root cause analysis are generally unclear and may be close to non-existent.  One of my favorite research examples, because it’s well known and because it so clearly proves the point, is the Wright Brothers’ research and development of the aircraft.  In 1898 the brothers started their research efforts.  Starting with the data and information then publicly available, they built a series of kites. With each kite, they collected additional data and new types of data.  They then used this to reorient their thinking and their potential design for the next kite.  They found so many problems with the existing data that they created the wind tunnel to create their own data sets on which to base their next set of designs.  By the end of 1902 they had created a glider capable of controllable manned flight, and by 1903 they had created the powered glider known as the Wright Flyer.  It took them at least two more cycles of the OODA Loop to develop a commercially useful aircraft.
Risk reduction also uses the OODA Loop.  A risk is an unknown.  It requires some type of research to convert the unknown into a known.  There are four alternatives.  First, the project/research team must decide whether or not to simply “accept” the risk; many times the team orients the observed risk in a risk/reward model and accepts it.  A second way is to determine whether there is knowledge, or knowledgeable people, that can convert the unknown into a known; that is, “transferring” the risk.  A third method is to “avoid” the risk.  This means redesigning the product, system, or service, or changing the method for achieving the goal or target.  The final way is to “mitigate” the risk.  This is nothing short of, or more than, creating a research project (see above) to convert the unknown into a known.
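The four responses named above can be sketched as a small chooser.  The scoring inputs (probability, impact, availability of expertise) and the cut-off values are my own assumptions for illustration, not a standard risk method:

```python
from enum import Enum

# The four risk responses from the discussion above.  The selection logic
# below is a hypothetical sketch: real teams orient each risk in their own
# risk/reward model rather than with fixed numeric cut-offs.
class RiskResponse(Enum):
    ACCEPT = "accept"       # live with the unknown
    TRANSFER = "transfer"   # hand it to someone who already knows
    AVOID = "avoid"         # redesign so the unknown no longer applies
    MITIGATE = "mitigate"   # research the unknown into a known

def choose_response(probability, impact, expert_available):
    """Pick a response from probability and impact in [0, 1]."""
    exposure = probability * impact
    if exposure < 0.1:
        return RiskResponse.ACCEPT
    if expert_available:
        return RiskResponse.TRANSFER
    if exposure > 0.5:
        return RiskResponse.AVOID
    return RiskResponse.MITIGATE
```

Whatever the thresholds, the structure matches the text: accept cheap risks, transfer to existing knowledge where it exists, design around the worst unknowns, and research the rest.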
Likewise, root cause analysis is a research effort.  However, the target of this analysis is to identify why some function or component is not working the way it is supposed to work.  Again, it’s observing the problem or issue (that is, gathering data), orienting it by modeling the functions based on the data, deciding what the cause or causes of the problem are and how to “fix” it, and then acting on the decision.  Sounds a whole lot like the OODA Loop.


Creative processes like theory building (both through extrapolation and interpolation), and those of the “fine arts” that come from emotions also use the Architecture of Process defined in this post.  For some theories and all fine arts, “the requirements” come from emotion, based on an intuitive belief structure [Sidebar: a religious structure, see my post on architecture of religion].  This intuitive structure provides “the data, information, and knowledge” on which the creativity builds.
However, scientific theories, at least, are sometimes based on inconsistencies in the results of the current paradigm.  The theorist attempts to define a new structure that will account for these aberrant data.

Why Make a Big Deal about the Obvious?

To many it might seem that I’m making a big deal about what is obvious.  If that is the case, why hasn’t it been inculcated into enterprise architecture and used in the construction and refinement of processes and tooling to support the charter, mission, objectives, strategies, and tactics of all organizations?  Using formal architectural models like the architecture for process presented in this post will enable enterprise architects to “orient” their data and information more clearly, making Enterprise Architecture that much more valuable.
6 months, 28 days ago

Organizational Economics and the Enterprise Architecture of a Religious Organization

The Question

A reader of my blog, who is a minister in the Methodist Church, commented on one of my posts (this is my paraphrase of the question): “How do you measure the benefits of a religious organization, like a local church?”  Or, “How would I apply Enterprise Architecture to a religious organization?”, since I posit that all organizations can benefit from Enterprise Architecture, as I’ve discussed in several previous posts.
This post is written with a slant toward the Methodist tradition, of which I am a part, but will apply equally well to all religious organizations.

An Organization’s Enterprise Architecture

Within Organizational Economics, any organization’s Enterprise Architecture has three sub-components: Mission, Governance, and Infrastructure.
·         Mission: What the organization is supposed to do; its goal, target, or objective.
·         Governance: Within what parameters or rules it can perform its mission.
·         Infrastructure: What personnel, intellectual, physical, and financial support it has for achieving its mission.
To support the Mission of an organization, its leadership chooses Strategies (approaches or plans) for going from where it is to where it wants to be.  It implements these strategies using tactics, plans that account for the organization’s Governance and Infrastructure (its rules and talents/abilities/support).  Management then executes the tactics in operations (the actions of the organization).  The operations have two components, processes and tooling.
Additionally, the leadership and management of the organization are responsible for legislating, enforcing, and adjudicating some or all of the laws, rules, and/or regulations that make up the organization’s Governance. [Sidebar: For an individual Methodist church this would be called Administration.]
Finally, the organization must provide for its Infrastructure, “the tools and talents” it needs to perform the operations.  These tools include financial, physical, and intellectual.  For a religious organization this would be the money, time, talents of the adherents and the buildings, property and assets of the organization.

Processes and the OODA Loop

All processes supporting the mission or infrastructure of any organization fall into the OODA model of Col. John Boyd.  The OODA, as discussed in several previous posts, includes four steps: Observe, Orient, Decide, and Act.


Initially, data is gathered by observing some aspect of the current state of the Universe.  This includes data about the results of the observer’s previous actions.  In point of fact,
Datum—some observation about the Universe at a particular point in four dimensions
Data—a consistent set of datums


The Orient step in the process is an individual’s model of how the world (Universe) works (descriptive) or should work (prescriptive).  These models are sometimes called paradigms. 
Rules from Governance enable and support the Orient step, by structuring the data within the individual’s or organization’s model.
Information—patterns abstracted from the data.  This is the start of orienting the observations, the data and information.  The pattern analysis to convert data into information is derived from the organization’s model of its environment or Universe.  For religious organizations this is found in its “bible” and its organizationally related texts like the “Book of Discipline” of the Methodist Church.
Knowledge—identified or abstracted patterns in the information.  Using the same paradigm, environmental model, or model Universe, people analyze and abstract patterns within the information.  This is their knowledge within the paradigm.  When they can’t fit information into their model, they often discard it as aberrant, an outlier, or an anomaly.  When enough information doesn’t conveniently fit into their model, the adherents have a crisis.  In science, at least, this is the point of a paradigm shift.  In religion this is a reformation (the re-forming of the “bible” and/or the “book of discipline”, that is, the rules of governance).  While in science some conservative adherents to the old model lose their reputations after a time, in religion people on both sides of the model’s discontinuity lose their lives.


Once the organization or individual has the knowledge, he or she inputs that knowledge into their models of the Universe to make decisions.
Wisdom—the understanding of the consequences of the application of knowledge.
This is the hard part of the OODA Loop because it’s difficult to understand both the consequences and the unintended consequences of a decision.  If your paradigm, environment, or Universe model is good, or relatively complete, then you’re more likely to make a good decision.  More frequently than not people, even religious people, make decisions that are “Short term smart and long term dumb.”  Part of the reason is that they are working with a poor, incomplete, or just plain wrong paradigm (view of the world or universe).  This is where the Risk/Reward balance comes in.  When choosing a path forward, what are the risks and rewards of each path?  [Sidebar:  A risk is an unknown and it is wise to understand that “you don’t know what you don’t know”.]


Once the decision is made people act on those decisions by planning a mission, strategies, and so on within their paradigm.

Religious Organization’s Orienting Model

Joseph Campbell identified four categories of functions of religion: the metaphysical, the cosmological, the sociological, and the pedagogical.  While there may be much quibbling with some of what Mr. Campbell writes, the four functions of religion (and perhaps culture) ring true.

The Metaphysical Function

Awakening a sense of awe before the mystery of being
“According to Campbell, the absolute mystery of life, what he called transcendent reality, cannot be captured directly in words or images. Symbols and mythic metaphors on the other hand point outside themselves and into that reality. They are what Campbell called “being statements” and their enactment through ritual can give to the participant a sense of that ultimate mystery as an experience. ‘Mythological symbols touch and exhilarate centers of life beyond the reach of reason and coercion…. The first function of mythology is to reconcile waking consciousness to the mysterium tremendum et fascinans of this universe as it is.’”
This is truly the “religious” function of the four; the other three tend to be more cultural than religious.

The Cosmological Function

Explaining the shape of the universe
“For pre-modern societies, myth also functioned as a proto-science, offering explanations for the physical phenomena that surrounded and affected their lives, such as the change of seasons and the life cycles of animals and plants.”
While there still is much proto-science, science is serving the cosmological function in today’s culture and has identified many patterns in information and knowledge, and clarified many previously fuzzy concepts and theories.  Still, at this time, religion plays a significant role in many “ultimate” questions.  These include: What was there before the Big Bang (if there was one), what architected “the laws” of the Universe (e.g., the speed of light), why am I here, and what happens to me after I lose consciousness in the process of dying?

The Sociological Function

Validate and support the existing social order
“Ancient societies had to conform to an existing social order if they were to survive at all. This is because they evolved under “pressure” from necessities much more intense than the ones encountered in our modern world. Mythology confirmed that order, and enforced it by reflecting it into the stories themselves, often describing how the order arrived from divine intervention. Campbell often referred to these “conformity” myths as the “Right Hand Path” to reflect the brain’s left hemisphere’s abilities for logic, order and linearity. Together with these myths however, he observed the existence of the “Left Hand Path”, mythic patterns like the “Hero’s Journey” which are revolutionary in character in that they demand from the individual a surpassing of social norms and sometimes even of morality.”
More than any other, the sociological function of religions leads to culture, to cultural conflict, and to religious wars.  This is the key reason for the incessant wars among the three great monotheistic religions—especially when “the authorities” in each want to hold the political power that comes with the cosmological function (the function of explaining how the Universe and God work).

The Pedagogical Function

Guide the individual through the stages of life
“As a person goes through life, many psychological challenges will be encountered. Myth may serve as a guide for successful passage through the stages of one’s life.”
Within the context of a given combined metaphysical, cosmological, and sociological model or paradigm, teaching the paradigm becomes important so that members of the organization can navigate in an orderly manner through the model.  Order reduces risk and increases cost efficiency, while creativity increases risk but may increase effectiveness.  All religious/cultural models work to decrease risk for their adherents, and teaching adherents the cultural behaviors is seminally important if the religious organization is to last.

The Methodist Denomination; an Example

All religions create a prescriptive paradigm or orienting model that includes all four functions (or dimensions) discussed by Campbell.
All religious orienting models are based on religious authority: priests, shamans, etc., “Holy” texts, or both.

The Catholic Church before 1500

The Catholic Church before Luther and the Reformation, and before Gutenberg and printing, used both written text and Clerical Authority, with the latter being far more important.  Clerical Authority caused the burning of the library and museum (university) at Alexandria and the killing of its faculty, the extermination of the Templars, the near extermination of the Huguenots, and Inquisitions that killed hundreds of people and attempted to rewrite science (see the biographies of Galileo, Copernicus, and others).  A big part of this was that the Catholic Church’s hierarchy believed their paradigm that they were the final authority on knowledge and wisdom.  Their model included an Earth-centered Universe with the Pope or Jerusalem at the very center.  This meant that they were always right, and competing models damned the heretics to Hell.  To this was added a major dose of politics; e.g., “The end justifies the means” attributed to the Jesuits.

Strategies (Based on the Christian Protestant Paradigm)

Enter, initially, Luther and Gutenberg.  By 1455 Gutenberg had perfected the printing press and begun to print the Bible, so that by 1500 there were a comparatively large number floating around, as well as many other books with both ancient and “modern” ideas.  In 1517, Professor Dr. Luther challenged the authority of the Catholic Church hierarchy, saying that the scriptures, not the Pope and his minions, held the core of the Christian paradigm or prescriptive model of how the Universe should work, and that all people should be allowed to read and interpret them for themselves.  This change or shift in strategy was greatly facilitated by the increasing number of printed scriptures.
This meant that people had to learn to read, which meant they learned to write.  The ability to write meant that many more people had the ability to express concepts, ideas, and theories across space and time.  Learning was not just for the clerics and clergy.
One consequence for the Catholic Church was that science took on the cosmological function, reducing the church hierarchy’s political authority.  Another was the increased risk of “Christians” warring against “Christians”.  And finally there was a blossoming of intellectual and economic wealth, since knowledge is the root of all wealth.

John Wesley, Adam Smith, and the United States

In 1738, John Wesley had his epiphany; he called it his “heart-warming” experience.  He continued his work among the poor and ostracized, attempting to bring them into the church.  These people had been tenant farmers, and owners and workers in “cottage industry” manufacturing that supported the farmers and the estates on which they worked.  They were being displaced by the new and very controversial mass production using powered tools; that is, the nascent industrial revolution of the early and middle 1700s.
These people migrated to towns and cities in search of work.  Many who migrated had no skills that were needed in the new industrial economy.  With the debtor laws then in place, they ended up in prison or worse.  By 1811, the displaced workers formed radical groups, called Luddites, who destroyed machinery, especially in cotton and woolen mills, that they believed was threatening their jobs; which the machines were.  These were the people that Wesley sought out and these were the people he reached.
As his “cult”, the Methodists, continued to grow: a) he had to have help, additional “clergy” to preach, teach, and comfort the cultists; b) these people needed to read but many couldn’t; and c) most of the rest of the very early Methodists couldn’t either.  Wesley set about educating his clergy and many of the cult members by teaching them to read.  In turn, reading and other skills taught in Sunday school were used by these “Methodists” to compete for jobs and to become entrepreneurs in their own right; that is, the Church of disciplined learning demonstrated that there was a “Method” to John Wesley’s heretical madness. The Methodist Sunday School (a real school teaching reading, ’riting, and ’rithmetic) enabled Methodists to compete for better paying jobs and join the “Middle Class”.  This follows Wesley’s admonition, “Earn all you can, save all you can, give all you can”.  This is really the credo for the knowledge-based Enlightened Capitalism as espoused by Adam Smith.
As espoused by Smith, Enlightened Capitalism is really about ensuring that there is an even economic and regulatory platform for all individuals to start from; no one individual being favored in an economic or political sense or even perceived as such.  This means that all individuals feel they have a chance to succeed to the full measure of their God (or nature) given talents.
In 1789, the framers of the United States Constitution used many of the concepts from “An Inquiry into the Nature and Causes of the Wealth of Nations”.  These include:
·         Defense of the country
·         Support of the country’s infrastructure through creation and maintenance of standards that cross state boundaries and support of intra-country communications.
Everything else was left to the states and the people.  The Methodist church and other religious organizations noticed there was a need for what is now called “a social safety net”.  This was initially for their members.  So they constructed and supported hospitals, orphanages, old folks’ homes, and so on.  Many of the most prestigious hospitals still include the name of a denomination or religious organization.  Many modest sized towns ended up with a Catholic and a Protestant hospital, while cities might have two or three of each plus a Jewish hospital.
In the 1880s and 90s, most Christian churches recognized the need for kids to have physical activity, since fewer of them were “working the farm”.  So, along with Sunday School to teach them to read, the churches built gyms for them to play in.

The Changed Mission

Politically correct, socially liberal cultists in the Methodist denomination have turned the strategies of the denomination from a focus on religious activities to forcing societal change through political action (tactics).  They no longer give any weight to the other religious functions discussed by Campbell.
In my opinion, in doing so, they have lost focus.  The consequence is that young adults (Gen X and Y) see no difference between the Methodist Church and the Democratic or socialist parties, other than that this is possibly the organization to belong to if you want to earn your way into heaven (but more about Heaven and Hell in my other blog).  So they see no reason to join the Methodist Church.  Those who are looking for a religious organization head to fundamentalist churches, even religious cults, like Jim Jones’s Jonestown.  But defining social injustice is even harder, and religious organizations have three other functions.  “Wicked Clowns Lives Matter” is an organization for “social justice”, but does it serve all four functions of a religion?
Remember: while “Social Justice” is easy to proclaim, it’s hard to remember the individual, as embodied in the song “Easy To Be Hard”:
Especially people who care about strangers
Who care about evil and social injustice
Do you only care about the bleeding crowd
How about a needy friend
I need a friend

Choosing a Mission, the Governance, and Infrastructure to Support a Religious Institution

The Three Great Principles

For a Christian church community any mission should be founded on the three great principles of Christianity. 
·         Love and respect God no matter what
·         Treat all others as you would want to be treated
·         Try to be your own self at your very best all the time.
The first, in the Christian Bible, is that, “Thou shalt love the Lord thy God with all thy heart, and with all thy soul, and with all thy mind. This is the first and great commandment.”  If a religious institution forgets this principle, it is no longer a religious organization, but possibly a civic or political one.  Additionally, from any serious reading of history, it is the principle all people find most difficult to inculcate into their being, and also the one that has caused more wars and more massacres than any other.  The reason is that many religions believe they have a lock on God’s will and how to please him/her/it.  Their mighty God has given them the right to enslave or kill anyone who espouses any variation from their orthodoxy.  This is true of all closed religions.
However, any Christian denomination must have this as their chief goal and guiding principle.
“The second is like unto it, Thou shalt love thy neighbour as thyself. On these two commandments hang all the law and the prophets.”  This is the chief principle of all civic and political organizations, as well as a secondary principle of religious organizations (at least this is what most religious organizations espouse).  This principle is the basis for all laws internal to a culture.  Most people, even those espousing religion, follow the law rather than inculcating the principle into their lives.  My mother said most followers of a particular Christian denomination followed the principle of “sowing their wild oats six days a week and praying for a drought on Sunday.”  Hundreds of laws are needed to ensure that not too many “wild oats” are sown.
There is a significant problem with “loving your neighbor as yourself”, and that is that many (most) people hate themselves in one way or another.  This may be caused by poor brain wiring, by bad experiences, or both.  This is the reason I include the third principle.  People, especially young people, try to distance themselves with drink and drugs, and by destroying anything that might be beautiful. Why? Because they can’t stand or understand themselves and act out on those feelings.  That is, “I’m entitled, and if I can’t…then I’m being disrespected.”
So any local church mission statement must include teaching “my own self at my very best all the time” (which is impossible for any human but should be the goal of all humans).

Organizational Architecture and the Protestant Church

A Mission Statement and the Strategies

There are four dimensions of “my own self”: mental, physical, social, and religious (notice these fit well with Campbell’s functions).  As discussed earlier, John Wesley intuitively understood that the Methodists had to address all of these within the organization that he created.  First and foremost, it addressed the religious needs of its adherents.  Second, from the history of Methodism, it is plain his “methods” and governance created a secure internal environment for his adherents, and that their openness combined with discipline continued to attract more.  Third, his Sunday school addressed their mental dimension, while including gyms, etc., addressed the physical.  And like his mentor, Jesus of Nazareth, the people of early Methodism “…grew in stature (the physical), wisdom (the mental), and in favor with God (the religious) and man (the social).”
Any mission statement or goal, and the strategies for achieving it, should include a balance of all four religious functions, rather than a great emphasis on just one.   Having said that, there needs to be a set of strategies for meeting the goal.  These should encompass all four dimensions.  Once these are decided on, the church organization must decide on processes (ordered sets of activities or “methods”) that move the organization toward the goal.

Processes and Governance

However, the strategies and processes must be limited to those that can function within the governance of the organization.  If the mission simply cannot be met within the rules and regulations of the organization, then either: 1) the governance should change, 2) the strategies should change, or 3) the processes should.  The simplest to change are the processes; the most difficult is the governance.  One other thing: the mission or goal should not be changed.


These follow the practices of organizational architecture.  Finally, the religious organization has to work within the limits of its infrastructure and support systems (even though, with the right blessing, these may greatly multiply to feed the “my own self” of all members).
7 months, 8 days ago

Enterprise Architecture, Systems Engineering, and Regulations: Process Rudders or Process Brakes

Regulations: The “Must Meet” Requirements

Regulations directly affect all organizations, products, systems, and services. Further, they can be a rudder guiding the organization or a brake causing the organization to stop making any progress in meeting its charter, goal, or mission.  This post discusses laws, rules, specifications, standards, and regulations (or simply regulations) as a part of any enterprise architecture or systems engineering effort.

The Customer Requirements Identification and Management tool that I’ve developed, CARRMA®, uses the concept of Must Meet requirements to store both the regulations and the metrics for meeting them.  If more than one project has a particular regulation imposed on it, CARRMA®’s data store allows for reuse.
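CARRMA®’s internals aren’t described here, so the following is only a minimal sketch of the idea: a shared store of “Must Meet” requirements, each carrying its compliance metric, that multiple projects can reference without duplication. All class, field, and project names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MustMeetRequirement:
    """A regulation plus the metric that demonstrates compliance."""
    reg_id: str       # e.g. a standard or regulation citation
    text: str         # what the regulation requires
    metric: str       # how compliance is measured
    threshold: float  # pass/fail value for the metric

class RequirementStore:
    """Shared store so one regulation entry is reused across projects."""
    def __init__(self):
        self._regs: dict[str, MustMeetRequirement] = {}
        self._projects: dict[str, set[str]] = {}

    def register(self, req: MustMeetRequirement) -> None:
        # First registration wins; later projects reuse the same entry.
        self._regs.setdefault(req.reg_id, req)

    def impose(self, project: str, reg_id: str) -> None:
        self._projects.setdefault(project, set()).add(reg_id)

    def requirements_for(self, project: str) -> list[MustMeetRequirement]:
        return [self._regs[r] for r in sorted(self._projects.get(project, ()))]

# Usage: one regulation, imposed on two projects, stored once.
store = RequirementStore()
store.register(MustMeetRequirement(
    "MIL-STD-810", "Environmental test conditions", "test pass rate", 1.0))
store.impose("Project-A", "MIL-STD-810")
store.impose("Project-B", "MIL-STD-810")  # reuse, not re-entry
```

The single `register`/many `impose` split is the point: the regulation and its metric live in one place, so a metric change propagates to every project it is imposed on.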

Regulations: The Rudders of an Organization

Any commandment, law, rule, specification, standard, or regulation creates process friction, by its very nature.  It inhibits what can be done or defines what must be done. For example, saying “Thou Shalt Not Kill” means that it’s not nice to end a vehemently intense discussion by bouncing your opponent “six feet under”, though that might be very satisfying at the moment.  That is, killing is not a good solution to an intra-group disagreement, since it doesn’t promote understanding, knowledge, and therefore value growth of the group, and doesn’t instill trust with other groups.

Hoping that I’ve made the point, by an over-the-top example, that regulations curb action or compel it: a regulation acts like a rudder, making it difficult for a process, and thus a strategy, to go one way, thereby making it easier to go another.  This, in turn, may directly affect the organization’s charter, mission, or goal.  In any rational system it should enable both the charter/mission/goal and the strategies and processes for achieving them.
Fortunately, or more importantly unfortunately, systems and organizations built by humans are not entirely rational, and sometimes not rational at all.  If the economy is the engine that powers an organization’s ship-of-state, then each commandment, law, rule, specification, standard, or regulation enacted is a rudder with its own wheel, guided by part of the crew steering the organization toward their own Avalon.

Arrow’s Paradox and Catch 22s

At some point the number of rudders pointing at all points of the compass is such that the organization, be it private or a ship-of-state, either comes to a complete halt or turns in tight circles.  The rudders have formed a dam that effectively stops all forward progress of the organization, which no amount of churning by the organization’s economic engine can overcome.  Some of these regulation rudders are small and some are very large.  Twist the large ones hard and the ship brakes to a crawl and may not turn at all.

Arrow’s Paradox

A bigger problem is that too many regulation rudders will simply cause the organizational ship to stop and not make any of its ports.  Dr. Kenneth Arrow is known for “Arrow’s Impossibility Theorem” [Sidebar: For which he won the Nobel Prize], also known as Arrow’s Paradox.  Strictly, he demonstrated mathematically that when three or more alternatives are ranked, no voting system can aggregate the individual rankings into a consistent group preference while meeting basic fairness criteria; the organizational analog is that an organization trying to optimize three or more goals will manage to optimize at most two (or, I suspect, sub-optimize on all three).

The point here isn’t that organizations can’t have more than one goal or objective; it is that if the organization attempts to pursue more than two (or maybe three) objectives using the “policy/regulation” strategy, it will slow down, wobble around, and achieve none of them.
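The “wobble around and achieve none of them” behavior can be made concrete with a toy Condorcet cycle: three constituencies ranking three goals produce pairwise majorities that chase each other in a circle, so no consistent aggregate ordering exists. The goal names are invented for illustration.

```python
# Three constituencies, each ranking three organizational goals
# (goal and constituency choices are made up for this sketch).
rankings = [
    ["effectiveness", "cost", "jobs"],
    ["cost", "jobs", "effectiveness"],
    ["jobs", "effectiveness", "cost"],
]

def majority_prefers(a, b):
    """True if a majority of constituencies rank goal a above goal b."""
    votes = sum(1 for r in rankings if r.index(a) < r.index(b))
    return votes > len(rankings) / 2

# The pairwise majorities form a cycle; no goal beats all the others,
# so the "ship" turns in circles rather than steering to one port.
assert majority_prefers("effectiveness", "cost")
assert majority_prefers("cost", "jobs")
assert majority_prefers("jobs", "effectiveness")
```

Each rudder (constituency) wins one pairwise contest, which is exactly the going-in-circles outcome described above.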

Add in that in a democracy the goals and objectives change as controlling constituencies change, and you normally end up with more and more laws, regulations, rules, and standards, each attempting to use its regulatory rudder to change the course of the ship-of-state toward the objective for which it was enacted.  The net result is that the ship-of-state either ends up on the rocks or goes in circles.

From personal experience with federal contracts and from a recent DoD report, I can illustrate the problem.  One goal, which probably should be the only goal of DoD contracting, is to provide the most effective weapons in the world for the US Military.  That is, the weapons should be the greatest force multiplier.

However, the contracting office is faced with a second objective (a second rudder) of acquiring these weapons cost efficiently.  There are two metrics for cost efficiency, the initial cost (including research, development, and construction), and life-cycle costs (maintenance, upgrades, and disposal).  The question for the contracting officer is, “Do you contract for the cheapest initial cost product or for the one that will cost the least over the product’s projected lifespan?”

Then there is a third rudder in the form of the dependability of the product, system, or service. Dependability encompasses all of the “ilities”, including reliability, maintainability, serviceability, and so on.  Each of these has metrics and standards that must be met.  In some cases the metrics for the standard or policy are untestable in a timely manner, and in a few cases infeasible or impossible to meet.  All of these rudders will force additional time and expense into the effort.

A fourth rudder is the reconfigurability and upgradeability of the product, system, or service.  Before the US Civil War, the rate of change of weapons and support systems was such that weapons and weapons systems had no need for this “Must Meet” requirement/policy; the weapon would be worn out long before the technology changed.  Since then, however, it’s obvious that technology has continued to accelerate (to the point that many systems must be upgraded before their development and implementation is completed).  These continue to increase the initial design costs. [Sidebar: Service-Oriented Architecture can reduce these costs greatly for IT systems, and modular systems/products can do the same for hardware of all types.]

A fifth rudder, and the first one that is politically/socially motivated as well as costly, is the implied policy that all congressional districts and states should have jobs related to weapons and intelligence system development, especially where the senator or representative sits on committees dealing with budgets and military programs.  A recent DoD study has shown that this can add 20 to 25 percent to the cost of a product, system, or service for the DoD.
Then come the politically motivated, socially liberal welfare policy rudders (those intended to regulate social welfare and social change).  For weapons and intelligence tools, these require that a certain percentage of the work on the product come from female-owned companies and another percentage from “minority”-owned businesses.  [Sidebar:  The social activists’ idea was that the only way these groups could break into DoD contracting was through regulation.  I think they were correct, because most of the work I observed over the 25 years I was associated with government-contracted engineering demonstrated beyond a doubt that they could not compete on a level playing field.  Many times the prime contractor had to supply the engineering capability to complete the job, over schedule and way over cost.]  While it isn’t Politically Correct to attempt to define how much money was spent on this contracting welfare, from personal experience I expect that it is very significant.

The point of this section is not to discuss the problems with the regulations and informal policies of DoD contracting; rather, it’s to demonstrate that as more laws, policies, regulations, business rules, standards, and so on (the “Must Meet” requirements) are imposed on a program, especially extraneous ones, both the effectiveness of the product and the cost efficiency of the project or program are reduced.  And at some point there are so many “Must Meet” requirements that the effort, even at the enterprise architectural level, will fail.


The extreme case of Arrow’s Paradox is the famous Catch-22, where two regulations are diametrically opposed, leaving whatever effort, project, program, organization, or enterprise going in circles and making no progress in any direction. Even with a ship-of-state the size of the United States (which is a supertanker-sized economy), given enough Catch-22s, nothing will get done: too many steersmen, too many rudders, and too many goals (targets, harbors, or whatever).

A good current and everyday example of a regulatory Catch-22 is deicing roads.  Icy sidewalks can be very exciting and occasionally lethal, so they really are not a good thing, and deicing the sidewalks is mandatory.  Deicing calls for the use of chemicals like sodium chloride (salt).

The problem is that there are regulatory (must meet) requirements on the use and storage of “salt” because it “pollutes the environment” (and it does; you should see my grass near the sidewalk and road).  So you must use chemicals to save lives, but you must not use them, to save the environment.  Two regulatory “must meet” requirements (rudders) are in opposition, one to save lives and one to save the environment.  This is a small example of a big problem that can and will bring the economic ship-of-state to a dead stop.
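The deicing Catch-22 can be stated as two “must meet” predicates whose feasible sets never intersect; when the joint feasible set is empty, no plan satisfies the governance at all. The thresholds below are invented for illustration, not taken from any actual regulation.

```python
# Two "must meet" requirements on a winter-maintenance plan, expressed as
# predicates over pounds of salt applied per lane-mile.
# (The numeric thresholds are hypothetical.)
def safety_rule(salt_lbs):
    """Deice enough to keep sidewalks and roads safe."""
    return salt_lbs >= 500

def environment_rule(salt_lbs):
    """Keep chloride runoff below an environmental cap."""
    return salt_lbs <= 200

# Scan candidate plans for one that satisfies both rudders at once.
options = range(0, 1001, 50)
feasible = [s for s in options if safety_rule(s) and environment_rule(s)]
print(feasible)  # [] -- the two requirements are a Catch-22
```

An empty `feasible` list is the dead-stop condition described above: every candidate plan violates at least one of the opposed requirements.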

Reducing the Number of Regulatory Brakes but Keeping the Rudders

To get any organization to at least head toward a goal, it should be clear that removing internal policies that interfere with attaining the goal is necessary.  For large organizations with many sub-organizations, the issue becomes one of identifying which regulations guide the organization in the direction its charter, goal, or mission states and which are braking it to a stop.  For many organizations, but especially democratic-style governments, there will always be conflicting goals and missions and therefore conflicting regulations.  So how should a large organization or government determine which are rudders and which are brakes?
To my mind, this is a good place to apply Enterprise Architecture and the architectural model that I set out both in my book and in this blog.  The nice thing about that architectural model is that it can start as a static model that can be used to identify customer requirements and end up as a dynamic model of the enterprise (even the Ship-of-State).  As such, it can identify policies that are braking or causing bottlenecks in the processes enabling the strategies for attaining the goal or mission.  Until you can dynamically model the enterprise, you will never really be able to identify the unintended consequences and negative externalities of any policy, standard, or regulation.  Nor, as the goal or mission changes can you identify those policies, standards, or regulations that truly impede progress in the changed direction (though many politicians in the organization will be able to tell you, or so they believe).

For those policies and standards internal to the organization, the leadership should be able to understand which regulations support the organization’s strategies and processes and which don’t.  Additionally, the leadership can propose changes, deletions, and new regulations, which the enterprise architect can then model to determine the likely consequences, both intended and unintended.  Once the enterprise architects have oriented the changes the leadership proposes [Sidebar: See the OODA loop], the leadership can then choose which internal policies, standards, and regulations to change and which changes to implement, with a much lower risk, while seeing their organization move more quickly toward achieving its goal.  [Sidebar: The modeling will also show where leaders and managers of sub-units are working on their own agendas, which might or might not be steering toward the overall goal.  Remember the Systems Engineering axiom, “Optimizing the sub-systems sub-optimizes the system.”]

For governments, especially the legislative branch, architectural modeling is particularly important for identifying conflicting laws, regulations, rules, standards, and codes.  As the architectural models mature and their predictions help make better decisions, there may even be fewer vehemently intense discussions about which laws, regulations, rules, standards, and codes to enact and which to remove or rewrite…Interesting.
7 months, 19 days ago

If You Want to Create an Enterprise Architecture; Don’t!

One of the last presentations I made as an Enterprise Architect for a major DoD contractor was to the Chief Architect of the US Veterans Administration.  I walked in with a fully prepared presentation that was to take about 10 minutes of the time allotted to our team, only to find the Chief Architect cutting the presentation off with a question, “How do we go about creating an IT architecture for the VA?”  Even though I had a very good answer and had applied it on a couple of occasions, my mind blanked.  I want to share with you his problem and the answer I should have given.

The Problem

The problem that the Chief Architect of the VA has is the same problem that plagues Chief Architects of all large organizations and most medium and smaller ones.  The question is based on a very logical idea, the analog of the idea that before you start changing the plumbing, you should know the design of the current plumbing; that is, before you can create a “to be” or “next step” architecture, you need to have a “current” architecture.  Obviously, if you don’t know which pipes connect where and you start making changes to the plumbing, you could end up with some very interesting and exciting results, for which you may need to call your insurance company.  Likewise, if you want to improve the effectiveness and/or cost efficiency of the organizational processes and information systems, most Enterprise Architects assume they must first define and delimit the “as-is” processes and information systems of the organization.

The conundrum is that, in today’s technological environment, by the time an IT architecture team has mapped out (structured and ordered) an “as is” architecture, some, most, or all of the elements and data of the architecture will be obsolete and out of date.  For something as large as a major corporation, a department within a state or the federal government, the cost and effort involved would require a tour de force on a very large perhaps unprecedented scale.  This cost and level of effort would be such that the senior management would cut funding to the effort as a waste of time and money, since having an “as-is” architecture by itself produces little in value to the organization.

As can be found in the literature, there are many ways to “solve” or at least ameliorate the problem of creating an “as-is” architecture.  For example, one of the best, that almost works, is to chop the organization into its components and create an “as-is” architecture for each component separately.  Then try to stitch the architectures together.  I’ve tried this and it works up to a point.

There is a truism in Systems Engineering, Systems Architecture, and Enterprise Architecture: “Optimizing the sub-systems will sub-optimize the system.”  I have demonstrated this to many people many times and experienced it several times.  This is the crux of the problem for those who try to create an Enterprise Architecture for a large organization.

The Solution

The simple answer is “Don’t”.  That is, don’t attempt to create an “as-is” architecture for an organization, especially a large organization, because it will create itself with the proper procedures in place.  So how would I do it?

1. Define, delimit, and structure an initial set of classes and attributes for the organization’s Enterprise Architecture.  These should include:

  • Its Charter, Mission, Goal

  • Its Strategies for achieving its charter, mission, or goal

  • Its Processes supporting its strategies

  • Its Tooling and infrastructure

  • Its Governance that affects any of the above, including:

      • Internal Policies and Standards

      • External Regulations and Standards

I worked with one Enterprise Architecture database that had over fifty classes, each with ten or more attributes.  That was a fairly mature architecture.  My recommendation: don’t try to think of all the classes you may need or all of the attributes for each class; that’s overthinking it.  Instead, start simple and add to the set through successive cycles.

2. Once you have designed and structured the initial set of classes and attributes, create a database structured according to the design.
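As a sketch of what such a deliberately small first-cycle schema might look like in code (the class and attribute names are illustrative assumptions, not drawn from any particular EA tool or database):

```python
from dataclasses import dataclass, field

# Minimal starting classes for the EA data store; each would grow
# attributes over successive proposal cycles rather than up front.
@dataclass
class Process:
    name: str
    tooling: list[str] = field(default_factory=list)  # tooling/infrastructure

@dataclass
class Strategy:
    name: str
    processes: list[Process] = field(default_factory=list)

@dataclass
class Governance:
    internal_policies: list[str] = field(default_factory=list)
    external_regulations: list[str] = field(default_factory=list)

@dataclass
class Enterprise:
    mission: str  # charter / mission / goal
    strategies: list[Strategy] = field(default_factory=list)
    governance: Governance = field(default_factory=Governance)

# Usage: one enterprise, one strategy, one supporting process.
ea = Enterprise(
    mission="Serve the members",
    strategies=[Strategy("Educate", [Process("Sunday school", ["classrooms"])])],
)
```

The nesting mirrors the list above (mission, then strategies, then processes and tooling, with governance alongside), which is what lets later proposals attach themselves at the right level.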

3. Here is the key to creating an “As-Is” Architecture by not creating it…Huh?  Design and implement processes to capture the current state of strategies, processes, and tooling/infrastructure as part of the review of funding for revisions and upgrades to the current systems and processes.

4. When personnel in the organization propose a project, insist that they demonstrate the value of the process or procedure that they intend to update or upgrade. The “value” would include demonstrating which of the processes, strategies, and charter, mission, or goal of the organization the current product, system, or service enables, and how. My experience has been that the initial attempts will be fuzzy and incomplete, but as the number of proposed projects in the database (which is generally called the Asset Management System, and on which the “as-is” architecture is built) increases, both the completeness and the clarity of the current enterprise architecture will increase.
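That value demonstration amounts to tracing each asset up through the processes and strategies it enables to the organization’s goal. A minimal sketch of such a traceability query, with invented VA-flavored names standing in for real asset and strategy records:

```python
# Link tables captured from project proposals (names are hypothetical):
# asset -> processes it enables -> strategies supported -> goal served.
asset_enables = {"claims-portal": ["process-claims"]}
process_supports = {"process-claims": ["serve-veterans-quickly"]}
strategy_serves = {"serve-veterans-quickly": "VA mission"}

def value_case(asset):
    """Trace an asset up through processes and strategies to the goal."""
    chains = []
    for proc in asset_enables.get(asset, []):
        for strat in process_supports.get(proc, []):
            chains.append((proc, strat, strategy_serves[strat]))
    return chains

print(value_case("claims-portal"))
# [('process-claims', 'serve-veterans-quickly', 'VA mission')]
```

An asset whose `value_case` comes back empty is exactly the kind of obsolete process or fiefdom the later discussion says the architecture will expose.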

The reason I say “Don’t” try to create an “as-is” architecture is that every 3 to 7 years every component of the organization’s information system will need replacement.  This means that within a few years, simply by documenting and structuring the inputs from all of these efforts, the organization’s “as-is” architecture will be synergistically created (and at minimal cost). [Sidebar: There will be some cost, because the project proposers will need to think through how their current charter, mission, or goal, and the strategies they support, link to and support the overall charter, mission, or goal of the organization.  This is not necessarily a bad thing.]  For large organizations, no matter how much time or effort is put into attempting to create an “As-Is” Enterprise Architecture directly, it will take a minimum of a year and a great deal of funding; so it simply makes no sense.

As this Enterprise Architecture evolves, you will begin to see a number of things that managers want to obfuscate or hide completely.  For example, a number of processes and component or sub-organizations will be demonstrated to be obsolete.  In this case, obsolete means that the process or component organization no longer supports any of the organization’s strategies or its goal.  Since managers want to build, or at least keep, their fiefdoms, they will not appreciate this much.  Additionally, the architecture will demonstrate which internal policies, regulations, and standards help the organization meet its goal and which hurt it.  Again, the gatekeepers of these policies, regulations, and standards will object, strenuously.

But there are two more insidious problems that a good “As-Is” Enterprise Architecture will reveal: nepotism and the famous Catch-22s.


Nepotism in this case is more broadly defined than what most people think of as “nepotism”.  In the sense I mean, nepotism can include creating a non-level economic playing field. In all large organizations, but especially in the U.S. Federal government (probably in all governments), this type of nepotism is rampant.  In fact, a December 2016 report from the Department of Defense highlights what most federal employees and DoD contractors have known for years: because representatives and senators will only vote for a large program if their district or state gets a part of it, the DoD estimates that the cost of a program increases by approximately 20 percent.  This is “jobs welfare” on a massive scale.  Some major defense contractors have plants in every state for just this reason, not because it makes any sense from a cost-efficiency perspective.  Further, Congress has passed laws to ensure that minority- and female-owned businesses get a share of sub-contracts.  The reason is that minorities and women complained that the good-old-boy network didn’t allow them to compete for sub-contracts. [Sidebar: Actually, the reason for the “good ole boy” network is that the prime contractors have sub-contractors that actually know what they’re doing.  In my experience, primes will many times “encourage”, read subsidize, inexperienced and frequently incompetent minority- and female-owned businesses in order to meet the regulations imposed on their proposals.]  Again, this is a form of social welfare to ensure that all political constituents who scream loudly are appeased.  It adds up to the DoD being one of the larger governmental welfare organizations. [Sidebar: While, seemingly, I’ve picked on government organizations, especially the U.S. DoD, I have found that all governmental organizations in a democracy have this type of nepotism.  This is what lobbying is all about.
Only when it goes so far that it’s plain to all, and when it’s not openly enacted into law, do we call it graft and corruption.]  And it’s not only governments that suffer from this type of nepotism; all large organizations have the same problems, though generally on a smaller scale.  For example, sometimes the nepotism is written into union contracts.  Along with financial engineering, the auto industry in Detroit suffered a near collapse due to contractual nepotism.

This presents a problem for any Enterprise Architect.  The as-is architecture will highlight nepotism of this type more clearly than any report.  The Enterprise Architect won’t need to report it to the management; it will be self-evident.  I’ve experienced situations, as I suspect many of you have, where the management kills the messenger in order not to address the problem.  In my case, I’ve three times been chased off programs after reporting that the effort was subsidizing silliness.


The second significant problem is that policies, regulations, and standards become contradictory to each other, or in combination make it impossible for the organization to achieve its goal.  Again, a good enterprise architecture will highlight these, though frequently, when management from one generation of technology, with its set of policies and standards, finds the next upon them, they will refuse to rescind or modify the existing regulations, preferring instead, again, to kill the messenger.  So, like Systems Engineers, I’ve found that enterprise architects are only respected by other enterprise architects.

5. When the development and implementation team completes a project, and once it goes into operation, then as a final step in their effort they should review the data they gave to the enterprise architect, revising it to accurately reflect the “as-built” instead of the “as-proposed”.  The as-built documentation must include all component, assembly or functional, and customer acceptance testing, and all post-production required changes.  This documentation will inevitably lead to additional class attributes in the Asset Management System and structure in the enterprise architecture.

6. As the Asset Management System and the Enterprise Architecture mature, management should prepare for a paradigm shift in the way projects and other efforts are proposed.  This is where Enterprise Architecture really demonstrates how it can make the organization both more effective and more cost efficient.

A mature enterprise architecture can serve as the basis for a dynamic business or organizational process model for the organization.  Management can use this model to identify obsolete processes, (and as discussed) policies, regulations, and standards; ones that the organization should eliminate.  Additionally, with the help of the Enterprise Architect, management can identify missing or inhibiting processes and tools, and identify bottlenecks and dams in process flows.
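One simple form of that bottleneck identification: if each process stage in the model carries a throughput, the stage with the lowest throughput caps the whole flow. A toy sketch (the stage names and numbers are made up for illustration):

```python
# A crude dynamic view of a process chain: throughput per stage in
# cases/day. The minimum-throughput stage is the bottleneck that caps
# the end-to-end flow, so it is the first candidate for change.
pipeline = {
    "intake": 120,
    "review": 45,    # e.g. a regulatory review stage
    "approve": 90,
    "deliver": 110,
}

bottleneck = min(pipeline, key=pipeline.get)
print(bottleneck, pipeline[bottleneck])  # review 45
```

A real enterprise model would be a graph with queues and feedback rather than a straight chain, but even this crude version shows why a single braking policy can throttle everything downstream of it.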

Further, they can model what happens when the missing or inhibiting processes and tools are added or when the bottlenecks are eliminated or reduced.  This modeling will then indicate where there is a need for new efforts and, to some degree, the effectiveness and cost efficiency of such efforts.  It’s a paradigm shift in that components or sub-units of the organization no longer propose changes.  Instead, senior management, working with the Enterprise Architect and the components or sub-units, will identify and fund efforts.  They now have a way to measure the potential of a change in meeting the organizational goal, which means senior management has a better way of managing organizational change.

Finally, once management has identified targets for change or upgrade, the enterprise architect, together with a system architect, can define alternatives to meet the effort’s requirements.  They can model alternative process and tooling changes to forecast which has the lowest potential risk, the highest potential return, the least disruption of current activities, the lowest initial cost, the lowest lifecycle cost, the most adaptability or agility, or any number of other targets defined by senior management.  This will make the organization much more cost efficient, and perhaps more effective; and this is the purpose of Enterprise Architecture.

To sum up, using this six-step, high-level process is an effective way to create both an Asset Management System (an “As-Is” Architecture) and an effective Enterprise Architecture process; perhaps the only way.

7 months, 20 days ago

Agility, SOA, Virtual Extended Enterprise, Swarming Tactics, and Architecture

Agility and the Virtual Extended EnterpriseIn the 1990s, The Agility Forum of Lehigh University defined Agility as “The ability to successfully respond to unexpected challenges and opportunities.” The forum chartered a technical committe…

5 years, 7 months ago

Enterprise Portfolio Management and Enterprise Architecture Paper Available

I have added another paper to my list of papers.  This one is on the central role of the Enterprise Architect in the Enterprise Portfolio Management Process and how Systems Engineering, System Architecture, and Enterprise Architecture are inter-re…

5 years, 7 months ago

New: Links to My Papers

I added a new page (above and to the right).  It contains links to some of my papers.  I will be posting more from time to time.  I am ordering the papers so that the readers might be better able to follow my discussion in each.  I hope that helps.  Let me know with comments, what you think.


The Papers include:
1. Enterprise Portfolio Management and Enterprise Architecture
Coming shortly

2. The Role and Development of an Enterprise Architect: A Devil’s Advocate’s Perspective

3. Systems Engineering and System Architecture in an Agile and Short-cycle Transformation Environment