The Museum of Future Housing Tech

On June 24th I’ll be at House Party 2014, ‘curating’ a ‘Museum of Future Housing Technology’.

What is a museum of future housing technology?

Well, that is up to whoever visits the museum! The visitors will be creating the exhibits!

During the day visitors to the museum will have the opportunity to:

  1. Join a group of interesting people
  2. Share and discuss their problems and those of their customers
  3. Pick an interesting area or theme from the discussion
  4. Prototype a technology-enabled solution to their problem. Each team will create a prototype of an idea using Post-its, paper, cardboard, plasticine, role play, video, storyboards and whatever else can be found lying around.
  5. Add their prototype to the ‘Museum of Future Housing Tech’ as a new exhibit

Hopefully over the day we’ll build up an interesting and thought-provoking collection of exhibits as together we explore how technology might help us solve problems for ourselves and our customers.

Sessions will run at set times throughout the day and will last approximately 30 minutes, so hopefully there will be plenty of opportunities to drop in, contribute to the museum and view the freshly created exhibits.

The final collection of exhibits will hopefully be:

  • A representation of some key challenges that the housing sector faces
  • A manifestation of some interesting and innovative thinking about how technology might enable solutions to those challenges

Sound interesting? Then, if you haven’t already, head over here to get your ticket.

Fancy helping out and/or contributing to the ‘Museum of Future Housing Technology’? Then please get in touch here.


Not Another Framework? Part 2

In my last post, Oh No! We need another Practice Framework, I developed the theme begun in “Beware the New Silos”. I argued that the widely used frameworks are narrowly discipline-centric and actually inhibit cross-discipline working. I described how my own firm’s experiences have led to the development of a de facto framework (we call it SOAM) and illustrated how this is essentially a value chain, commencing with customer demand and finishing with value added to some enterprise.

I ended by sketching some basic principles, concluding that we need a new framework that is goal-driven and incorporates the entire value chain of capabilities, which may of course selectively reuse parts of existing frameworks. In this post, I suggest a strawman covering a) principles and b) a capability model.

Before diving into principles, it will be useful to declare some scope. Our framework has developed from working with larger enterprises, both commercial and government, in the area of business service and solution delivery. These enterprises share common issues: extensive legacy application assets act as a serious inhibitor to business change, and successive, narrowly scoped solution projects over many years have often resulted in great complexity and technical debt. It is also common in my experience that enterprise architecture functions are routinely bypassed or ignored; that Agile methods have been attempted and found useful on narrowly focused projects but, because of the constrained focus, tend to increase the overall complexity of the ongoing application asset base; that consistent customer experiences are compromised by narrowly focused projects; and that line-of-business managers in large enterprises are frequently dissatisfied with IT application service support.

The objectives of the framework are to:

– describe practices relevant to service and solution delivery in the digital business environment
– achieve a balance between short-term goals and longer-term objectives
– support progressive transformation to an enterprise composed of independent business capabilities
– facilitate continuous, short-cycle-time evolution of business capability
– progressively and continuously resolve legacy portfolio complexities
– enable rapid delivery at low cost without compromising quality

Principles are foundational for any framework. They should be enduring and should support both clear policy communication and policy interpretation in everyday situations. I also find it useful to classify principles by subject.

Capability Model

In business architecture, the capability model has become ubiquitous. And in thinking organizations I observe the delivery of highly independent service and solution components that reduce dependencies and the impact of change, and that mirror the IT architecture on the business organization. Why wouldn’t we use the same approach in defining the set of activities that deliver services and solutions?

If you are uncertain about the capability concept, it’s important to appreciate that the optimum business capability is one that enables:

  • maximum cohesion of internal functional capability, plus consistency of life cycle, strategic class (core, context, innovating . . . ), business partition (global, local, LoB . . . ), standardization, customizability, stability, metrics and drivers
  • defined, stable dependencies that are implemented as services

[Further reading on capability optimization]

In the capability dependency model below, the arrows are dependencies. For example, Demand Shaping depends upon Conceptual Business Modeling and Portfolio Management. So this is not a flow diagram; rather, all the capabilities should be regarded as iterative and, as discussed above, highly independent. (I will come back and discuss how Lean principles operate in a framework like this.)
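
To make the dependency reading concrete, a model like this can be sketched as a small directed graph. In the sketch below, only the Demand Shaping dependencies come from the text; the other entries and the traversal helper are illustrative assumptions about how such a model might be queried.

```python
# Sketch: the capability dependency model as a directed graph.
# Only the Demand Shaping dependencies are taken from the post;
# the traversal helper is an illustrative assumption.

# capability -> capabilities it depends on (the arrows in the diagram)
dependencies = {
    "Demand Shaping": ["Conceptual Business Modeling", "Portfolio Management"],
    "Conceptual Business Modeling": [],
    "Portfolio Management": [],
}

def transitive_dependencies(capability, deps):
    """All capabilities that `capability` depends on, directly or indirectly."""
    seen = set()
    stack = list(deps.get(capability, []))
    while stack:
        current = stack.pop()
        if current not in seen:
            seen.add(current)
            stack.extend(deps.get(current, []))
    return seen

print(transitive_dependencies("Demand Shaping", dependencies))
```

Because the structure is a dependency graph rather than a flow, a query like this answers “what must exist for this capability to work?” without implying any execution order.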

Most of the capabilities in the model are self-explanatory. However, some need explanation:
1. The Conceptual Business Modeling capability is the ability of business stakeholders to describe business improvement in conceptual terms. Many business people speak in solution terms, so most business requirements surface as solutions, some more baked than others. Because the business stakeholder generally holds the budget, the solution vision frequently drives and shapes the project, with outcomes that often compromise the existing and planned portfolio. By educating business stakeholders to communicate in concepts, we create the opportunity to develop the business improvement idea without preconceptions of implementation or product, and to optimize architectural and portfolio integration.
2. Demand Management is reasonably well understood. Demand Shaping is best regarded as a complementary capability that takes raw customer demand, decomposes it into components such as pre-existing or planned services/APIs, considers opportunities for modernization and provisioning, and reassembles it as a set of projects or project components that optimize the progressive development of the portfolio. Demand Shaping is primarily an architectural task, but should be run by a cross-functional team including architect, product management, business design and technical expert roles.
3. The Architecture capability is shown as a decomposition of sub-capabilities, essentially one for each View, plus modernization. Whilst modernization is not classically an architecture view, there is commonly a specialist requirement for modernization architecture, including the identification of appropriate transformation and transitional architecture patterns. The primary objective of all the architecture sub-capabilities is to define realizable structure to meet the demand and, as discussed above, to optimize opportunities for modernization and provisioning. While there is no explicit enterprise architecture View called out, each architecture capability should be executed separately and iteratively for reference, portfolio, program, project and module, thereby defining progressive layers of standard functionality common to the defined scope, as well as situation-specific business functionality.
I will detail all the capabilities in a subsequent post.

Final remarks

This high-level view of the framework has attempted to list a set of principles and associated capabilities required to support the value chain illustrated in Part 1 of this extended blog post. What should by now be clear is the need for architecture capabilities in particular to be involved throughout the value chain. This approach integrates all types of architecture (enterprise, service, solution, deployment . . . ) into the business improvement value chain and creates a better opportunity to demonstrate the ROI of architecture. Further, the approach prevents enterprise architecture from becoming divorced from mainstream business improvement and encourages a better balance of short-term and strategic goals. What will not yet be fully clear is how strongly the framework focuses on realizing architecture in delivered services and solutions, as a series of successive collaborations. I will describe how this is done using a Lean approach in a subsequent post.

One-liner for removing *ALL* ZFS snapshots

Desperately needed this one after tinkering with a ZFS rolling backup script:

sudo zfs list -H -o name -t snapshot | xargs -n1 sudo zfs destroy

Note: This will remove ALL snapshots. Use at your own risk.
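
If you only want to clear the snapshots of a single dataset rather than the whole pool, the filtering step can be made explicit. The sketch below (in Python, for clarity) is illustrative only: the dataset name `tank/data` is an invented example, and the destroy step defaults to a dry run.

```python
# Sketch: destroy only the snapshots of one dataset, with a dry run first.
# The dataset name "tank/data" is an invented example -- substitute your own.
import subprocess

def snapshots_for(dataset, names):
    """Keep only snapshot names that belong to `dataset` (i.e. 'dataset@...')."""
    return [n for n in names if n.startswith(dataset + "@")]

def destroy_snapshots(dataset, dry_run=True):
    # Same listing as the one-liner above: one snapshot name per line.
    out = subprocess.run(
        ["zfs", "list", "-H", "-o", "name", "-t", "snapshot"],
        capture_output=True, text=True, check=True,
    ).stdout
    targets = snapshots_for(dataset, out.splitlines())
    for name in targets:
        if dry_run:
            print("would destroy:", name)
        else:
            subprocess.run(["zfs", "destroy", name], check=True)
    return targets

# The filter on its own, with made-up snapshot names:
print(snapshots_for("tank/data", ["tank/data@daily-1", "tank/other@daily-1"]))
```

Run with `dry_run=True` first and check the printed list before letting it destroy anything.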


Your enterprise and social media?! We’ve got an IDEA!

Social media is inseparable from the organizational environment. Where people collaborate, interaction exists, and since society’s large-scale adoption of the internet, social media has shaped online conversations about, with and within organizations. Social media is a fact of life; the question is no longer whether an organization should use social media, but how. However, research by Gartner shows that most social media initiatives fail to achieve participation from the community or any meaningful purpose. So why do some organizations fail at social media, while others – KLM, for instance – are extremely successful at it? I think it is because many organizations do not understand the importance of adequately incorporating social media initiatives within their organizational structure. They do not know how to use social media in the context of their enterprise and become a ‘Social Enterprise’.

Designing the Social Enterprise

I strongly believe in a sort of ‘manufacturability’ of organizations. By manufacturability I mean designing the organization (and organizational change) through business models, enterprise architecture and process management. These are the fundamentals of delivering customer value in an effective and efficient way (although more than just these disciplines is required). I think that social media should be subject to these disciplines too. Social media may not be as ‘manufacturable’ as a business process or an architecture; in the end it is part of a ‘social system’, and we should think carefully about why and how we participate in it. It is, after all, the field in which most of our customers (internal or external to our organization) are active. That is why social media offers great opportunities, and hazards, for creating and delivering customer value.

IDEA for the Social Enterprise

In the consortium project ‘New Models for the Social Enterprise’ we designed ‘IDEA for the Social Enterprise’. IDEA is an abbreviation of the Interactive Design and Engineering Approach. It offers a method – with its roots in design thinking – to incorporate social media in your organization. Through several diverging and converging phases, it proposes coherent instruments that help you understand the value of social media in relation to your business model, the related business processes and your customers.

To conclude: whether you like it or not, social media is one of the trends we cannot deny from a perspective of organizational design. Social media has become an important channel for creating and delivering customer value. In order to use social media in delivering optimal customer value, I am convinced that organizations need a good IDEA about how to integrate social media in their enterprise!

If you have any questions or interest in IDEA or our research project ‘New models for the Social Enterprise’, feel free to contact me at b.beuger@bizzdesign.nl


Achieving agility with data virtualization (2/2)

This post is the follow-up to a previous post, where we described the need for agility as well as a setting where we believe data virtualization techniques can help.

Following Rick van der Lans’s definition, we see data virtualization as a group of technologies that makes a heterogeneous set of databases and files look like one integrated database – which has some commonality with how many people see the concept of a federated database. As we will see shortly, though, data virtualization picks up where “traditional” data federation stops and provides organizations with a rich set of techniques for data integration issues:

Starting at the bottom, we see a series of source systems (or at least, the data part of them). Their data structures are replicated and wrapped in the data virtualization server: the virtualization software discovers the data structures in the source systems and makes them available as virtual table structures. This achieves the notion of federation mentioned earlier. If desired, the actual content of the source systems may be (partially) cached, with the advantage that queries can be handled mostly in the virtualization environment, preventing huge workloads on the source systems.

Based on this virtualized ‘foundation’ layer, it is fairly straightforward to build new layers of virtual tables on top. This allows for building data structures that are close to the needs of end users (e.g. star schemas). It also allows for easy integration, application of transformation and integration rules, and so on. In practice we increasingly see virtualized data warehouses, master data management hubs and the like.
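
The layering idea can be made concrete with plain SQL views, which is roughly what virtual tables are. The toy sketch below uses Python’s built-in sqlite3 module; every table and view name (`crm_customers`, `v_customer_revenue`, and so on) is invented for illustration, and a real virtualization server of course does far more (source discovery, caching, query pushdown).

```python
# Sketch: virtual-table layering with SQL views over two "source systems".
# All names are invented; a single in-memory database stands in for the sources.
import sqlite3

conn = sqlite3.connect(":memory:")

# Two source tables standing in for separate source systems.
conn.execute("CREATE TABLE crm_customers (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE billing_invoices (customer_id INTEGER, amount REAL)")
conn.execute("INSERT INTO crm_customers VALUES (1, 'Acme'), (2, 'Globex')")
conn.execute("INSERT INTO billing_invoices VALUES (1, 100.0), (1, 50.0), (2, 75.0)")

# Foundation layer: one virtual table per discovered source structure.
conn.execute("CREATE VIEW v_customers AS SELECT id, name FROM crm_customers")
conn.execute("CREATE VIEW v_invoices AS SELECT customer_id, amount FROM billing_invoices")

# Second layer: an integrated, consumer-oriented structure built on the
# foundation views -- no source system is touched to change it.
conn.execute("""
    CREATE VIEW v_customer_revenue AS
    SELECT c.name, SUM(i.amount) AS revenue
    FROM v_customers c JOIN v_invoices i ON i.customer_id = c.id
    GROUP BY c.name
""")

print(conn.execute("SELECT * FROM v_customer_revenue ORDER BY name").fetchall())
# [('Acme', 150.0), ('Globex', 75.0)]
```

Changing the consumer-facing structure here means redefining a view, not converting stored data, which is exactly where the agility claimed below comes from.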

One aspect of agility should be obvious from this discussion: development and data integration within the virtualized environment can be considerably more agile than in a traditional setting. Requirements and specification (e.g. meta-data management) could still be used, but rather than a long build and deploy time, we now have results available immediately in a virtual table structure. As a result, it is easy to learn-while-doing in quick and highly interactive cycles with end users: quick sprints will deliver a working prototype and later adjustments can easily be made without having wasted many valuable development hours.

This also demonstrates that such a system is itself agile:

  • It will be fairly easy – and most of all: fast – to adapt to ever changing business needs for information.

  • Deploying changes to a virtualized data model is easier than changing the data structure of a physical database, which can entail all kinds of data conversion issues.

  • Dealing with the impact of changes is easier, since no software is adapted, lowering the risk of disruptions and keeping the impact localized.

  • Integration of the solution is simple, since existing interfaces remain stable.

Built-in features for security, auditing, logging and monitoring (for example, detecting when things change in the source systems) give the organization the means to stay in control of its data. In short:

  • Data virtualization decouples access to data from the source systems. This allows further manipulation of data without impacting the original systems.

  • Virtualized access to structured and unstructured data allows for uniform querying. Caching avoids heavy workloads on the original transaction systems.

  • Data access can be optimized for various stakeholders with different needs, concept definitions, permissions etc.

  • Virtualization allows for rapid, incremental development & delivery of information with minimal impact on source systems.

This mechanism can be considered a key resource for agility that supports key activities in the organization. A virtual data warehouse with rapid, agile development of new data structures makes it easier to support management that increasingly seeks data-based, rationalized decision making to complement creative strategic skill. Suppose, for example, that there is a feeling that international markets can be conquered with a cross-selling strategy: offer one product at a discount to generate interest in high-end services that will generate revenue. Running the numbers based on historic sales in countries where the organization is already active must be swift. Moreover, when executing this strategy, the system should be flexible enough to monitor actual investments and revenues in near real time.

The other obvious need for system agility in the field of data lies with compliance and regulation. Many industries are heavily regulated – finance and healthcare, for example – and rules for compliance reporting change all the time. In and of itself this need not be an issue. However, we often see that concept definitions change slightly, derivations and key calculations become more complex, other types of information are required, and so on. Here also, rapid development cycles and flexibility pay off.

If you have any questions or suggestions: either drop a note or get in touch via E-mail!


Scaling Agile for the Enterprise

Guest post by Tim Mattix, Mario Gouvea and Vikram Purohit. With the ever-evolving software development landscape, large enterprises are increasingly “going Agile.” Agile is applicable to many scenarios: for example, Extreme Programming (XP) zeroes in on software engineering while wrapping in novel approaches to boost quality, and Scrum is the most widely adopted agile method. While both of these frameworks work well for software development teams, Agile is even suitable for less obvious initiatives, such […]

Does anonymity promote ill-informed consensus?

There is a spate of new social media apps that have emerged lately, all of which allow people to post comments and ideas anonymously. They are being quickly adopted, especially among the very important 13–18-year-old “adolescent market.” They are also being quickly banned for promoting cyberstalking, cyberbullying, and otherwise cruel behavior. Does anonymity protect cruelty? And what does that say about more established anonymous sites, like Wikipedia?

Normally I don’t comment on Social Media.  My regular readers know that I tend to focus primarily on enterprise architectural concerns like business model viability and strategic alignment.  But there is an interesting cross-over between Enterprise Architecture and Social Media, especially anonymous social media: the creation of community consensus.

The state of anonymity

For those not keeping up, there is a spate of new social media apps that have emerged lately, from Whisper to Secret to Yik Yak, that allow smartphone users to sign up and then post messages unfiltered and anonymously.  When in anonymous mode, users tend to say things that they feel uncomfortable saying on Twitter or Facebook (where their friends, family, and coworkers may discover a side of them that they may not agree with). 

Yik Yak is especially troubling because it uses a geolocation filter: you can see things posted by people within a certain distance of you.  Sounds innocent, right?  After all, young adults wandering through Bourbon Street festivities in New Orleans could share that a particular bar was playing really good jazz, or that drinks are strong and cheap across the street.  But you may quickly see the problem when I use two words: middle school.  Already, some high schools and middle schools have had to ban the app because it became a platform for bullying and cruel comments.
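Yik Yak's actual implementation isn't public, but the mechanic it describes, showing only posts whose origin lies within some radius of the reader, is straightforward to sketch. The sketch below uses the standard haversine great-circle formula; the function names, post structure, and 8 km radius are illustrative assumptions, not anything taken from the app.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearby_posts(posts, user_lat, user_lon, radius_km=8.0):
    """Keep only the posts written within radius_km of the reader."""
    return [p for p in posts
            if haversine_km(user_lat, user_lon, p["lat"], p["lon"]) <= radius_km]
```

The point of the example is that the "community" is defined purely by distance: a reader standing on Bourbon Street and a reader at a middle school a few blocks away get essentially the same feed, with no other notion of membership.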

The effects of anonymity

But what does it mean to be anonymous?  What are these comments that the guy next to you would like to send to “the world” without anyone knowing it was him?

You can look for yourself at Whisper.sh.  I spent a few minutes browsing through some of today’s messages.  Most were simple secrets… many were sexual or related to dating.  Some were work related.  Most had responses from equally anonymous people, and most were fairly benign.  Of course, there could be some judicious editing going on for the sake of casual surfers like me who own a Windows phone (and therefore can’t use the app).  Secret and Yik Yak don’t even make an effort to show any of their messages on their websites.  It’s all in the app (once again, only for iPhone or, in the case of Yik Yak, Android).

Of these, I think Yik Yak is the most interesting from a consensus point of view, because it is the only one that attempts to filter according to a community.  Geolocation, especially when it comes to universities or even small towns, is sure to limit the reach of a message to people who share something in common with you.  That sense of “sharing something in common” is really what defines a community, and consensus only really matters within a community.

Anonymity and consensus

Does anonymity work to create consensus?  Sure.  Think of standing in a large crowd.  If one person yells something, you don’t normally turn to them and identify the source before considering, and possibly agreeing with, the content.  This is the very essence of a political rally or a protest march.  Taking in unfiltered ideas and deciding on them, on the spot, is part of how consensus is built.  Of course, there is no good way to take in ONLY good ideas when you are in a crowd.  We count on the crowd to do that for us.  If someone in a political rally yells “Death to the other guys!” we would expect the folks standing next to them to react, possibly causing the rabble-rouser to back down.  (Unless your protest march is in Karachi or Tehran or Cairo… but that’s another post).

In that sense, standing in a crowd is only “partially” anonymous.  There are still people who can see you, and if you do something really outrageous, there are people who could react by hitting you.  This is why you won’t find many people who will go to a crowded Yom Kippur (Jewish) service and stand up in the middle of the crowd and yell “Hitler was right!”  Pandemonium. 

But consensus and anonymity online is very different than standing in a crowd, and I think we need to be aware of the differences. 

The perils of anonymity online

Online, you can make claims that are difficult for another person to dispel, without consequence at all.  There is no one next to you ready to elbow you when you use name calling, or circulate unfounded rumors, or simply make things up!  Even when we use our actual names, we may participate in a discussion where we are not in the same room, or even the same continent, as our peers, and this can cause problems.

I cannot count the number of times I’ve witnessed this on LinkedIn.  A person will ask a question about frameworks, and I may point them to PEAF (a framework created by Kevin Smith).  No problem.  But if Kevin himself gets on the thread and mentions PEAF, his messages are blocked and he may even be kicked out of the discussion.  Why?  Because someone somewhere made a spurious charge (that he makes money when you use PEAF, which is not true).  Since the administrators of most LinkedIn Groups are anonymous, they can make bad decisions without consequence.  There is no good way for Kevin to clear his name of these charges because he does not know who the administrators are, and they appear unwilling to consider the possibility that he is not, in fact, using the platform to promote his own self-interest.  Rumor rules the roost.  Not good.

I believe that the same thing applies to Wikipedia. 

Wikipedia, with its millions of articles, has emerged as one of the chief sources of encyclopedic content on the Internet.  It is widely respected, and most search engines make a point of returning Wikipedia entries near the top of their search results.  However, the administrators on Wikipedia are mostly anonymous.  (They use pseudonyms to do their editing work). 

This causes the same problems to occur in Wikipedia that occur in any other setting where people can be anonymous… mostly benign behavior with occasional outbursts of bad behavior (and nearly no consequence). 

There is an essay (not a policy) on Wikipedia that says “Only Martians Should Edit.”  This essay argues that some topics are so controversial that anyone associated with the actual content would be too biased to edit it in a neutral manner.  Topics dealing with such things as state or provincial politics, national boundary disputes, or whether specific historic events should be counted as genocide trigger strong emotions, so having people edit those articles as though they are “from Mars” can be a good policy.

On the other hand, for some topics that are very narrow, it is not possible to edit the article without knowledge of the subject.  If you are not an expert in African pop music, you may not do a good job discussing Azonto music and dance from Ghana.  In this case, an editor with no grounding in the subject is likely to make mistakes. 

The problem is that Wikipedia is based on consensus, and you may find yourself editing a page on Wikipedia where you have to build consensus among anonymous people, people that may or may not have ANY understanding of the subject matter.  And those people can be nice, or cruel, with no consequence.  There is no one in the crowd next to them ready to elbow them for making an outrageous statement… because the other editors don’t know if the statement is outrageous!  You can build credibility on how well you enforce the rules, and then use that credibility to attack someone, and no one else can tell the difference.

Anonymity: Handle With Care

I’m of the opinion that anonymity on the Internet has to be handled with care.  There are times when it is necessary, especially when attempting to avoid governmental or organized suppression of free speech.  On the other hand, there are times when it is a license for ill-informed people to promote nonsense as a consensus.  After all, one third of Louisiana Republicans have been misled into thinking that Obama is to blame for the poor response to Hurricane Katrina.  I can think of other examples of an ill-fated consensus among the ill-informed, but rarely one so laughable.

I believe that sites and apps should not leverage anonymity as a feature.  I make exceptions for Tahrir Square and Occupy Wall Street, etc., where rumor may be the only information you can trust, but that is not what these apps do.  For normal social interactions, anonymity is actually a problem.  On Wikipedia, I believe that anonymity has outlived its usefulness.