4 years, 8 months ago

Software is disrupting Industries: are you Trendwatcher or Trendsetter?

A couple of months ago I was a keynote speaker at an event about modern application development environments. Here are the slides and a brief abstract. Software is disrupting industries. Startups are changing the playing field. This trend is only speeding up. Existing companies struggle to deal with this. It feels like their IT is stuck in Groundhog Day. They need to radically change how they manage application lifecycles to get…

The post Software is disrupting Industries: are you Trendwatcher or Trendsetter? appeared first on The Enterprise Architect.

4 years, 8 months ago

The PaaS shakeup and what it means for OpenStack

Platform-as-a-Service (PaaS) is hot! Why? Because the year started with a lot of link-bait articles declaring PaaS dead. That’s a clear sign, right? More seriously, the public PaaS market is expected to reach $14 billion by 2017. That’s about 10 percent of the application development and deployment market, which means two things: PaaS is becoming significant and there is still plenty of room to grow. If this doesn’t…

The post The PaaS shakeup and what it means for OpenStack appeared first on The Enterprise Architect.

5 years, 1 month ago

The cloud landscape described, categorized, and compared

“I work for a PaaS company,” I answered him. “Ah, okay, great,” and he moved on to another subject. It was a cold winter day at a hipster cloud conference. He wasn’t the only one who immediately knew what my company did. Most people there knew the difference between Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) and therefore knew exactly what a PaaS company did, right?

Well, not exactly… Nowadays, it’s almost as broad as saying “I work for a software company”. What is your point of reference when you hear the term PaaS? Heroku? Google App Engine? Force.com? Mendix? I think the popular wisdom that cloud comes in three flavors (IaaS, PaaS, SaaS) does not provide a realistic picture of the current landscape. The lines between these categories are blurring, and within these categories there are numerous subcategories that describe a whole range of different approaches. I think it’s time to have a closer look at the cloud landscape, to really understand what IaaS, PaaS, and SaaS are (and are not), and to distinguish the different categories within them. I would like to propose a framework to categorize the different approaches seen on the market today.

Let’s start by taking a look at what’s needed to run an application in “the cloud”. I will start with the hardware and move up the stack until we end up with the actual applications. The end result is a framework with 3 columns and 6 layers as depicted in image 1.

Image 1 – A framework to categorize and compare cloud platforms

In this article I will explain this framework. I will also explain how I constructed this framework, give some example technologies/solutions for each cell of the framework, and show how this framework can be used to compare some of the popular cloud platforms (e.g. OpenStack, AWS, Heroku, CloudFoundry).

Contents:

  • Layer 1: the software-defined datacenter.
  • Compute, communicate, and store: the three aspects that come back on each layer.
  • Layer 2: deploying applications.
  • Layer 3: deploying code.
  • Identifying the target user for each layer.
  • Layer 4: empowering the business engineer.
  • Layer 5: orchestrating pre-defined building blocks.
  • Layer 6: using applications.
  • Using the framework to compare cloud platforms. 

Layer 1: the software-defined datacenter

In the end every application (yes, even “cloud applications” or SaaS) needs to run on physical hardware somewhere in a datacenter. Nowadays most hardware is virtualized. I think it was VMware that coined the term “Software Defined Datacenter” (SDDC) to indicate that the main hardware elements of a datacenter can now be virtualized: servers, networks, and storage. In a Software Defined Datacenter these elements are all provisioned, configured, and managed by a central software solution.

Server virtualization is the most mature part of a Software Defined Datacenter. Virtualization is not just abstracting the logical view of the servers from the physical hardware that implements them. It is also possible to pool the resources of many different servers, thereby creating a virtual layer that offers automatic load balancing and higher levels of availability than the underlying hardware can offer. In principle, commodity hardware can be used, as the more advanced management and reliability features are taken care of by the virtualization layer. When people talk about “cloud” or “IaaS” they often refer to server virtualization. The best-known examples in the market today are probably the Amazon Elastic Compute Cloud (EC2) and OpenStack Nova. But basically every big IT player has an offering, like Microsoft Windows Azure, Google Compute Engine, and IBM SmartCloud, to name a few.

Network virtualization decouples and isolates virtual networks from the underlying network hardware, in the same way that server virtualization abstracts the underlying physical hardware. The network hardware is only used for the physical link; virtual networks can be created programmatically. These virtual networks offer the same features and guarantees as a physical network, but with the benefits of virtualization, like higher utilization of the hardware, more flexibility, and easier management. Network virtualization is often referred to as Software Defined Networking (SDN). The best-known example of SDN is probably Nicira, not least because of their $1 billion acquisition by VMware. OpenStack Neutron (formerly known as Quantum) is also in this space.

It shouldn’t be a surprise that storage can be virtualized too, and that this is referred to as Software Defined Storage (SDS). Storage virtualization means decoupling storage volumes from the underlying physical hardware. This means that more advanced features like caching, snapshotting, and high availability are managed in the software layer (and are thus independent of the specific hardware brand). It also means that a lot of optimizations can be done to organize I/O in such a way that performance degradation due to server virtualization is minimized. An example of a storage hypervisor is Virsto (again, acquired by VMware). Another (different) example is the OpenStack Cinder project, which provides block storage and integrates with a range of storage vendors.

 

Image 2 – The Software Defined Datacenter

What you see in practice of all this software-defined hardware are the APIs that cloud providers offer to provision virtual servers, virtual networks, and all kinds of storage volumes.

Until now I referred to this layer as the Software Defined Datacenter. I deliberately avoided the term Infrastructure-as-a-Service (IaaS), as that term often includes more than just virtualization (depending on who you talk to), but also often less, as most IaaS products focus mainly on server virtualization and offer less functionality in the networking and storage space. Oh, and while we’re at it, let’s just use numbers to indicate the layers to avoid confusion with all kinds of buzzwords. This layer, the software-defined datacenter or “virtual hardware”, will be layer 1 (I leave the 0 to the hardware).
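To make this concrete: provisioning a virtual server on layer 1 is a single API call. A minimal sketch, using the present-day AWS SDK for Python (boto3); the AMI ID, instance type, and region are placeholders, not recommendations.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provision a virtual server: one API call instead of racking hardware.
response = ec2.run_instances(
    ImageId="ami-00000000",   # placeholder machine image
    InstanceType="t2.micro",  # placeholder instance size
    MinCount=1,
    MaxCount=1,
)
print("provisioned", response["Instances"][0]["InstanceId"])
```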

Compute, communicate, and store: the three aspects that come back on each layer

Before we move on to describing layer 2, I want to take a small sidestep and examine whether we can use the distinction we saw in the previous section (compute, networking, storage) for the other layers as well.

These layers are all built on top of the previously described virtualization layer and, of course, consist of software. If we look at software, and especially the object-oriented paradigm, we can distinguish the 3 main aspects of software: state, behavior, and messages (or data, methods, and messages, if you will). This aligns beautifully with the previous distinction: storage (state), compute (behavior), and networking (messages).

The same distinction can be made if you look at the different types of middleware. There is middleware that focuses on facilitating compute, like application servers, process and rule engines, etc. There is also middleware that is mainly focused on facilitating communication, like enterprise service buses (ESBs), message queues, etc. And finally we have all kinds of middleware that facilitate the storage of data, like database management systems in all their flavors (ranging from relational to document- and key/value-oriented storage).

Even for us humans (I expect you to be a human, dear reader) we can distinguish these 3 aspects of working with data: we store data (e.g. memorizing it, writing it down), we process data (e.g. combining, computing), and we communicate data (e.g. speaking, writing).

So, without further ado, I will use three columns in my layered framework: compute, communicate, and store. We will see that they help a lot in clarifying the technology stack on each layer.

Layer 2: deploying applications

Let’s go back to describing the layers that are needed to run an application in the “cloud”. We started by describing the first layer: virtual hardware. On the second layer the focus switches from infrastructure-centric to application-centric. Let’s see what that means for each of the three columns: compute, communicate, and store.

For compute this means that we no longer talk about virtual machines, but about application containers. An application container is an isolated, self-describing, infrastructure-agnostic container for running apps. Usually running an app means compiling the code and bundling it with the necessary runtime, like Java code and a Java Virtual Machine (optionally with additional middleware like Tomcat and/or Spring). The resulting package can be deployed in the app container. Docker is a nice example of such an app container. Other examples are what Heroku calls a Dyno, CloudFoundry Warden, and the most recent one by Google: lmctfy (it seems that creating and open-sourcing your own application container is the new hip thing).

The communicate part of layer 2 focuses on routing messages among app containers, storage systems, and external systems. This includes routing, queuing, and scheduling system instructions for things like app lifecycle management (e.g. the messaging/NATS component of CloudFoundry). It also includes the smart routing and load balancing of incoming requests to application instances (e.g. the Heroku routing layer or the CloudFoundry router).

Lastly, the store part of layer 2 isn’t about block storage anymore but provides object storage. This means you don’t have to bother with mounting, partitioning, and formatting virtual disks; you can just store and retrieve objects via an HTTP API. Examples of object storage are Amazon S3, OpenStack Swift, and Ceph Storage.
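As a minimal sketch of that difference, here is the S3 flavor of “store and retrieve objects via an HTTP API”, again via boto3; the bucket and key names are placeholders, and the bucket is assumed to exist.

```python
import boto3

s3 = boto3.client("s3")

# Store an object: no disks to mount, partition, or format.
s3.put_object(Bucket="example-bucket", Key="reports/q3.txt",
              Body=b"quarterly numbers")

# Retrieve it again through the same HTTP-backed API.
obj = s3.get_object(Bucket="example-bucket", Key="reports/q3.txt")
print(obj["Body"].read())
```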

 

Image 3 – Foundational PaaS

With these 3 elements (a place to run applications, communication to and from these places, and a way to store objects), all with additional features like high availability and horizontal scalability, we have an additional layer on top of (virtual) infrastructure that provides a higher level of abstraction and automation. I would like to call this layer “foundational PaaS” to distinguish it from the higher-level PaaS approaches we will discuss in the next section.

Layer 3: deploying code

While layer 2 is focused on application infrastructure, layer 3 shifts the focus to code. In other words: layer 2 has binaries as input, layer 3 has code as input. With “code” I mean a bunch of text that describes an application in a certain programming language, integration configurations, or database configurations. Let’s have a more in-depth look at these elements while we discuss the three columns of the framework again.

For compute we have language runtimes on top of the application container. This means you can deploy, for example, Java code to such an environment; the Java Virtual Machine (JVM) and other runtime packages are already available as a service. You could see this as an application server as a service. This is mostly referred to as aPaaS (application Platform-as-a-Service). Examples are Google App Engine (which allows you to deploy Java, Python, Go, and PHP) and Heroku Buildpacks. The buildpack mechanism basically works on top of a layer 2 application container: it automatically finds the right runtime when you upload code and compiles the code into a binary that can be deployed into an application container. CloudFoundry borrows the same mechanism from Heroku.

In the communicate column we see what is called iPaaS (integration Platform-as-a-Service). An iPaaS provides the communication among applications, whether they run in the cloud or on-premises. This communication is defined using integration flows. An iPaaS could be seen as an ESB (Enterprise Service Bus) in the cloud and contains features like routing, transformations, queuing, and in most cases a range of pre-defined adapters. Examples of iPaaS are TIBCO Cloud Bus, WSO2 StratosLive, and Windows Azure BizTalk Services.

We also see a higher level of abstraction in the store column. It probably isn’t a surprise that we talk about dbPaaS (database Platform-as-a-Service) here. dbPaaS basically is a database delivered as a service, including things like availability and scalability, in most cases in a multi-tenant setup. These services are not limited to relational ones, but include key-value and column-oriented datastores. Amazon provides a range of database services: SimpleDB, DynamoDB, Relational Database Service, and RedShift. Other examples are Heroku Postgres, Windows Azure SQL Database, and Salesforce Database.com.
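To illustrate the dbPaaS idea with one of the services named above, here is a minimal sketch against Amazon DynamoDB via boto3; the table name and attributes are placeholders, and the table is assumed to already exist with "order_id" as its key.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
orders = dynamodb.Table("orders")

# Write and read an item; availability, replication, and scaling are the
# service's problem, not ours.
orders.put_item(Item={"order_id": "42", "status": "shipped"})
item = orders.get_item(Key={"order_id": "42"})["Item"]
print(item["status"])
```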

 

Image 4 – Platform-as-a-Service

Please note that the “cells” in the framework do not by definition use the cells one layer down in the same column. A database, for example, runs in a container (compute column, one layer down) and uses block storage (not object storage). A layer up in the same column just means that the abstraction and automation increase in such a way that you no longer have to deal with issues on the lower layers yourself.

We could stop at this layer, as the three layers described so far give a clear picture of the biggest part of the current PaaS market. However, as you may suspect from my previous ramblings about productivity and higher abstraction levels, I would like to explore additional layers. These layers appear to bring surprising advantages.

Identifying the target user for each layer

Until now we talked about the focus of each layer (e.g. infrastructure-centric vs. app-centric) and the type of things you can do on each layer (e.g. deploying binaries vs. deploying code). To clarify the distinction between layer 4 and the previous layers we need to look at it from a different angle. Layer 4 mainly distinguishes itself from layer 3 in the type of user that is targeted.

If we look back at the previous layers we clearly see how they target different types of users. Layer 1 (the virtual hardware) is used by infrastructure engineers and/or system administrators. They need deep knowledge of the different infrastructure elements and how to combine them to create a resilient, high-performing infrastructure layer. Layer 2 targets DevOps teams, as the focus is on application deployment and how to automate all the operations around it. These teams work with code and application binaries and select and configure the needed infrastructure services. Layer 3 clearly targets professional developers. They work with code and configurations and can deploy them to a ready-made environment without the need to focus on the operations or infrastructure side of things.

Image 5 – Adding the target user for each layer

Layer 4: empowering the business engineer

Layer 4 is focused on the same kind of artifacts as layer 3, but the specifications and configurations are at a higher abstraction level. Layer 4 aims at non-professional developers, i.e. people who don’t program for a living, and enables them to build applications, connect systems, or manage data. I tend to call these people “business engineers”, as this term emphasizes that they are not professional developers (i.e. they are “business” people) while also telling us that there needs to be some affinity with technology (i.e. they are “engineers”). It might be easier to define these people with some examples: the users of business process tools, the users who configure business rules engines, people who configure reporting and analytics tools, people who define applications using high-level models, etc. Technologies and solutions in layer 4 are able to empower these business engineers by focusing on two things: abstracting away technical details and offering easy-to-learn languages.

The definition of this layer can look a bit artificial at first sight, because the users of layer 3 and layer 4 deliver the same type of things: applications, integrations, and data definitions. However, you will see that it makes perfect sense to have a separate layer because of the clear difference between the solutions/technologies on this layer and those in layer 3. Additionally, you will see that this layer creates a nice transition between layer 3 and layer 5. Let’s make things concrete by looking at the columns of our framework again.

In the compute column we see language runtimes as a service, just as on layer 3. The difference is in the type of languages that are supported. These languages are of a higher abstraction level and are often domain-specific (we call these languages DSLs – Domain-Specific Languages). I would still categorize the solutions and technologies in this column as Application Platform-as-a-Service (aPaaS). It basically is a subset of aPaaS, the subset that is focused on Model-Driven Development (MDD). Examples are workflow or business process management (BPM) tools that are delivered as a service (some people call this bpmPaaS). In this category it will be interesting to keep an eye on Effektif. There are also platforms that cover all aspects of application development (data, UI, logic, integration, etc.) like the Mendix App Platform (full disclosure: I work for Mendix).

Does this mean that tools in this category make professional developers obsolete? In my experience this isn’t the case. First, we will of course need professional developers to develop these tools. Second, professional developers are also needed to work with these tools and focus on the more technical side of things, like integrations, complex algorithms, etc. What I see in practice is mixed teams of business engineers and professional developers in a 3:1 ratio.

In the communicate column we see the same difference as in the compute column. We still talk about iPaaS, but the languages used are aimed at different people: they are DSLs for the integration domain. An example moving in this direction is MuleSoft CloudHub, which is accompanied by a graphical design environment. However, I think there is much to gain here. IFTTT is a great example of how a focus on simplicity can lead to an easy-to-use service that is still very powerful. I’m waiting for the first iPaaS to focus on creating an IFTTT-like experience, combined with enterprise-grade service bus features.
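The core of that IFTTT model is small enough to sketch. A hypothetical illustration (all names invented): a rule pairs a trigger (“if this”) with an action (“then that”), and an engine dispatches incoming events against the rules.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

Event = Dict[str, str]

@dataclass
class Rule:
    trigger: Callable[[Event], bool]  # "if this": does the event match?
    action: Callable[[Event], None]   # "then that": what to do if it does

def dispatch(event: Event, rules: List[Rule]) -> None:
    for rule in rules:
        if rule.trigger(event):
            rule.action(event)

rules = [
    Rule(trigger=lambda e: e.get("type") == "invoice.created",
         action=lambda e: print(f"file invoice {e['id']} in Dropbox")),
]
dispatch({"type": "invoice.created", "id": "INV-7"}, rules)
```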

The store column of layer 4 contains tools that focus on making data storage, data retrieval, and data processing accessible to business engineers. Data has never been available in such huge amounts (that’s why people refer to data as big data in most cases nowadays), and it has never been more important to act upon this data quickly. These trends have pushed vendors to focus strongly on enabling business people to work with their tools. Traditional examples are TIBCO Spotfire and QlikView. With TIBCO Silver Spotfire, TIBCO has started to provide its offering as a service. Other examples are SAP BusinessObjects BI OnDemand and Platfora (basically Hadoop for business users). This category is sometimes referred to as (drum roll please) Business Analytics Platform-as-a-Service (baPaaS).

Image 6 – Model-Driven PaaS

We have seen that most technologies and solutions in this layer make use of models to empower business engineers, most models being graphical. That’s why I prefer to refer to this layer as Model-Driven PaaS. There is only one layer left before we end up with the actual applications. Let’s have a look at what layer 5 entails.

Layer 5: orchestrating pre-defined building blocks

In layer 5 we enter the field between developers and end-users. If we approach layer 5 from a developer perspective we can define it as a layer of out-of-the-box services that can be re-used and orchestrated to become part of a new application. If we approach layer 5 from the end-user perspective we can define it as SaaS extensions or SaaS customizations. We then talk about end-user development or so-called citizen developers. So, to summarize it: layer 5 is about pre-built services that can be configured and composed to extend existing applications or build completely new ones.

Let’s look at some examples. The best-known example for building SaaS extensions or deep customizations that involve integrations is Force.com (for customizing and extending Salesforce). Another example is the AppXpress developer platform, which lets developers extend the GT Nexus platform. The Mendix App Services approach is an example of building and re-using out-of-the-box services (via an App Store) that can be composed into (or used as a basis for) new enterprise applications. The last example I want to mention is MuleSoft, which basically turns any API out there into an app service that can easily be re-used. They have a large number of so-called Anypoint connectors available that give access to third-party services.

The examples mentioned above all provide tools and solutions that cover all three columns of the framework. They are platforms that can be used to develop or connect to any type of customization or service. If we zoom in on the actual service types, we see that the columns of the framework provide a nice way to classify these app services: we can have compute app services (e.g. the Google Prediction API), communicate app services (e.g. the Amazon Simple Email Service), and store app services (e.g. the Dropbox API).
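As a sketch of what consuming one of these “communicate” app services looks like in practice, here is the Amazon Simple Email Service example via boto3; the addresses are placeholders, and the sender is assumed to be verified with SES.

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")

# Send an email through the app service instead of running a mail server.
ses.send_email(
    Source="app@example.com",
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Your order has shipped"},
        "Body": {"Text": {"Data": "It is on its way."}},
    },
)
```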

Image 7 – App Services and SaaS extensions

In this layer we clearly moved from developing things from scratch to customizing or re-using existing services. There is only one step left.

Layer 6: using applications

Here we are, finally! We reached layer 6, which is about the applications themselves. On this layer we no longer talk about the columns of our framework: these applications cover all three columns, as they all need to compute, communicate, and store data.

We call the applications on this layer Software-as-a-Service (SaaS). SaaS is as-a-Service from the user perspective. From the developer/provider perspective SaaS can be built on top of the previously mentioned layers, but if an application, for example, isn’t built using a PaaS and runs on dedicated hardware, it can still be SaaS. This basically holds for each layer in the framework: it can be, but is not necessarily, built on top of the layers below. It is all a matter of choice for the provider which part of their stack they get from another provider as a service and which part of the stack they build themselves.

Now, without further ado, the complete framework:

Image 8 – A framework to categorize and compare cloud platforms

Now that we have the complete framework, we see that the six layers provide a nice continuum between infrastructure and applications. From layer 1 upward, each layer abstracts further away from technical details until we end up with the applications themselves. I think this framework gives a much better overview and classification of the current cloud landscape than the often-used simplification of just having IaaS, PaaS, and SaaS.
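For readers who prefer data over diagrams, the framework itself is just a 6x3 grid. A minimal sketch of its first three layers as a data structure, populated only with examples already named in this article, plus a toy lookup showing how a platform comparison falls out of it:

```python
framework = {
    1: {"name": "Software-defined datacenter",
        "compute": ["Amazon EC2", "OpenStack Nova"],
        "communicate": ["Nicira", "OpenStack Neutron"],
        "store": ["Virsto", "OpenStack Cinder"]},
    2: {"name": "Foundational PaaS",
        "compute": ["Docker", "Heroku Dynos", "CloudFoundry Warden"],
        "communicate": ["Heroku routing layer", "CloudFoundry router/NATS"],
        "store": ["Amazon S3", "OpenStack Swift", "Ceph"]},
    3: {"name": "PaaS",
        "compute": ["Google App Engine", "Heroku Buildpacks"],   # aPaaS
        "communicate": ["TIBCO Cloud Bus", "WSO2 StratosLive"],  # iPaaS
        "store": ["Heroku Postgres", "Amazon DynamoDB"]},        # dbPaaS
}

# Comparing a platform means checking which cells it covers:
for layer, row in framework.items():
    cells = [col for col in ("compute", "communicate", "store")
             if any("Heroku" in tech for tech in row[col])]
    print(f"layer {layer} ({row['name']}): Heroku covers {cells or 'no cells'}")
```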

Using the framework to compare some popular cloud platforms

Besides giving a description of the cloud landscape, the proposed framework can be used to compare cloud platforms. I already mentioned quite a number of examples while describing each cell of the framework in this article. In addition, I want to take a stab at classifying platforms that cover more than one cell of the framework, like OpenStack, Amazon Web Services, Heroku, and CloudFoundry. Please note that these platforms change every month, which means that the classifications presented below can become outdated.


OpenStack


Amazon Web Services

Notes: AWS offers app services, mainly in the communicate category. The AWS marketplace is not taken into account in this picture.

Update: included AWS Elastic Beanstalk and therefore colored the aPaaS cell too. Elastic Beanstalk provides the ability to deploy code and manage applications. It is infrastructure/VM-oriented though (that’s also what is priced). That’s why I didn’t color the “Application container” cell, as solutions in this cell really abstract away from the infrastructure. 


Heroku

Notes: Dynos, Buildpacks, the routing layer, and Heroku Postgres. The add-on marketplace is not taken into account in this picture.


CloudFoundry

Notes: Warden, Router, NATS, buildpacks. Services / add-ons are not taken into account in this picture. Note that this picture is focused on the CloudFoundry open source project; vendors providing CF as a service can deliver additional services that cover other cells.


I hope you enjoyed this article. Any feedback, correction, etc. is welcome!

You can also discuss this article on Hacker News.

5 years, 2 months ago

Holiday: time to do some in-depth reading

I’m just back from a great, relaxing holiday. Apart from having good quality time with my family, a holiday is also a great time to immerse myself in some books. Instead of the quick information gathering I normally do by scanning my Twitter timeline, quick-reading blog posts, and reading some more in-depth articles from time to time, I really focused on a single book for a couple of hours at a time. Reading books really helps me practice my ability to focus (which is necessary in today’s distraction culture – being offline helps a lot, by the way) and, if I read the right books, is very interesting and inspiring.

In this post I just want to share two of the books I read during my holiday. They are very different, but both interesting and inspiring in their own way.

Small Giants

The tagline of “Small Giants” by Bo Burlingham basically says it all: “companies that choose to be great instead of big”. In the book Bo Burlingham describes 14 companies that he selected to be examples of what he calls “small giants”. These small giants are businesses with soul, with mojo (as he calls it). These companies do seven things to generate mojo:

  • Their founders and leaders do not accept the standard menu of options; they recognize the full range of choices they have about the type of company they can create. They question the usual definitions of success in business and imagine possibilities other than the ones all of us are familiar with.
  • Their leaders have overcome the enormous pressures on successful companies to take paths they have not chosen and do not necessarily want to follow. The people in charge have remained in control. The author emphasizes here that private ownership is key; later he states that it is possible to do the same with publicly owned companies, but explains that it is more difficult.
  • Each of these companies has an extraordinarily intimate relationship with the local city, town, or county in which it does business, a relationship that goes well beyond the usual concept of “giving back”.
  • They cultivate exceptionally intimate relationships with customers and suppliers, based on personal contact, one-on-one interaction, and mutual commitment to delivering on promises.
  • They have unusually intimate workplaces. They care for their people in the totality of their lives.
  • These companies have come up with an impressive variety of corporate structures and modes of governance. Some of them created their own management systems and practices. My interpretation: they also innovated and continuously improved their way of working, not just their product.
  • Their leaders have passion for what the companies do; they love the subject matter. They have deep emotional attachments to the business, to the people who work in the company, and to its customers and suppliers.

All these concepts are explained in the book with examples and stories from one or more of the 14 selected companies. The book is an easy read with a lot of stories that inspire and invite you to keep on reading. I really liked it: it is different, thought-provoking, and inspiring!

The principles of Product Development Flow

One of the other books I read was “The Principles of Product Development Flow: Second Generation Lean Product Development” by Donald G. Reinertsen. This book was a whole different cup of tea. In the introduction the author already states it: “I aspire to write books with more content than fluff”. He definitely achieves this goal: it is not a business novel, not a collection of stories of successful companies, but a book with a high density of information, organized in 175 principles.

What I like about the book is that it takes (and shares) lessons from many different fields. The key idea sources are lean manufacturing (of course one of the main sources, but it goes way beyond that), economics, queueing theory, statistics, telecommunications networks, computer operating system design, control engineering, and maneuver warfare. Lessons from all these fields are applied to the subject of product development and lead to principles that describe how the product development flow can be influenced.

What I also like is the design of the book. Because of the focus on principles, the applicability of the book is quite broad. It doesn’t eliminate the need for thinking, but it helps you think more deeply about your choices. After reading the book you will be able to understand and explain why things like Scrum (or other agile approaches) improve software development processes so much in comparison to some other approaches. The book barely mentions the term agile, by the way. I think that’s one of its powers: it focuses on grounded principles that can be applied (and yes, that will lead you to what some people will call an agile process).

This book really inspires you to focus on the things that really matter, and not to measure and act on proxy variables. It conveys a mindset of having proper economic reasons for everything you change (and thus improve) in your product development process. I will definitely use this book as a reference in the future.


Please let me know in the comments what recommendations you have for me to read next!

5 years, 3 months ago

10 reasons why you should organize a FedEx day

As I briefly described in my tale of a 7 year journey in developing software for the enterprise, at Mendix every month we give employees the chance to work on anything they like and deliver it in 24 hours. We call these 24-hour hackathons “FedEx days”: build it and ship it in one day. We normally start on Thursdays at 4pm with a kickoff during which people team up. Dinner…

The post 10 reasons why you should organize a FedEx day appeared first on The Enterprise Architect.

5 years, 9 months ago

Designing the next programming language? Understand how people learn!

Somehow it is a recurring theme in computer science: create a “programming” system that is easier to use and learn than the existing programming approaches. I am not just talking about better tools, like IDEs, but also new languages. It seems as if each self-respecting programmer creates his/her own language or tool-set nowadays, right?

Okay, I have to admit that not all efforts are focused on making things easier; often the focus is on productivity. However, if we, for example, look at the “programming” languages created in the Model-Driven Development field, we see quite some focus on involving domain experts in development by creating higher-level, domain-specific, or sometimes even visual languages. Although there are many more reasons to do Model-Driven Development, it is the ease of use and the lower entry barrier that capture the imagination.

There is a fundamental flaw in our thinking around these approaches, though…

Language fundamentals

Let’s look at the fundamentals of a language before we dive into the details. A language specification consists of three main elements:

  • Concrete syntax: the concrete syntax defines the physical appearance of the language. For a textual language this means that it defines how to form sentences. For a graphical language this means that it defines the graphical appearance of the language concepts and how they may be combined into a model. Multiple concrete syntaxes can be defined for the same language. You could, for example, define both a textual and a graphical concrete syntax.
  • Abstract syntax: the abstract syntax defines the concepts of a language and their relationship to each other.
  • Semantics: the semantics describe the meaning of a sentence or model specified in some language. In the context of programming this means that the semantics of a language describe what the effect is of executing the statements of that language.

All these elements play an important role in how easy it is to learn a language.
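To see all three elements in one place, here is a minimal sketch of a toy expression language in Python (the language and all names are invented for illustration): parse() embodies the concrete syntax, the Num/Add classes form the abstract syntax, and evaluate() gives the semantics.

```python
from dataclasses import dataclass

# Abstract syntax: the concepts (numbers, addition) and their relationships.
@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: object
    right: object

# Concrete syntax: how sentences physically look, e.g. the text "1 + 2 + 3".
def parse(text: str):
    parts = [Num(int(p)) for p in text.split("+")]
    tree = parts[0]
    for part in parts[1:]:
        tree = Add(tree, part)
    return tree

# Semantics: what executing a sentence means, here evaluation to an integer.
def evaluate(node) -> int:
    if isinstance(node, Num):
        return node.value
    return evaluate(node.left) + evaluate(node.right)

assert evaluate(parse("1 + 2 + 3")) == 6
```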

How do we learn a new programming language?

In general we learn in a number of different ways. Up to 10 different learning styles have been defined, but I think they can be boiled down to 3 main styles:

  • Listen: let someone educate you.
  • See: read stuff, watch someone else do it.
  • Experience: just start and try it yourself, experiment, trial and error.

Most people combine multiple styles when they learn something new.

Let’s go one step further: when do people understand how something works (we are not talking about factual knowledge here)? When they hear, see, or experience cause and effect. That’s when we connect the dots. If you hit a play button and the music starts to play, you understand the function of that button, and if you hit different kinds of buttons on different systems that all lead to “the music starts playing”, you will probably understand that the triangle icon means “play”.

When we learn how a system works, or more specifically when we learn a new programming language, we may favor different learning styles, but in the end it is about relating cause and effect, whether you hear someone explain it, see someone do it, or experience it yourself.

The difference between simple and complicated, and thus between easy and difficult to learn, is due to the “distance” between cause and effect. If there are many steps between cause and effect it can be difficult to connect the two. If a “system” is a black box with multiple inputs and outputs, and a complex relation among those inputs and outputs, it is difficult to determine cause and effect and hence difficult to understand the system.

If we want to create a new programming language that has “easy to learn and use” as a core design principle, we should aim our efforts at an easy-to-understand cause-effect relationship between language concepts and the actual semantics of the language.

Let’s see how approaching a language from this angle influences concrete syntax, abstract syntax, and semantics.

On concrete syntax: how does it convey meaning?

Most discussions (and/or flame wars) around languages are focused on concrete syntax (sometimes combined with discussions about abstract syntax, because there is a close relation between abstract and concrete syntax in most languages). Syntactic sugar like whitespace and symbols, as well as how concise the language is, are hugely interesting discussion topics.

Our main question around concrete syntax, however, should be: how does the language convey meaning? If we read the language, do we know what the words mean? Do we understand the meaning of the data (inputs, variables)? Can we easily follow the flow? These are the things that matter!

You want examples, you say? Well, this may be the least readable language of all; that’s the goal, at least. The microflow language we use in our platform is a bit easier to read and understand and hence conveys the meaning of the program much better (and it even leads to art).

On abstract syntax: being declarative is not the holy grail

In the world of Model-Driven Development and Domain-Specific Languages the focus tends to be more on the abstract syntax. The core tenets of Model-Driven Development, or Model-Driven Engineering if you will, are abstraction and automation. Normally, if you want to create a language that is easy to use by domain experts (often non-programmers), you focus on creating an abstract syntax that leaves out technical details and is as declarative as possible, right? Do not focus on technical details (abstract them away) and specify the “what”, not the “how” (declare the program).

However, abstracting away technical details does not automatically make a language easier to understand. It all depends on the relation between the language concepts and the actual behaviour of the application you see (remember: cause and effect). A low-level language creates too much distance between cause and effect as, for example, machine-level instructions are not easy to relate to application behaviour. On the opposite side we have languages that are too high-level and thereby also create a difficult relation between cause and effect. If one line of code (or a single activity in your process diagram) leads to the execution of a range of actions that can result in a plethora of results, it can be hard to grasp the impact of changing something.

This leads us to the subject of declarative languages. Declarative languages can be very powerful, as you just declare the result, not how to get it. Example languages are SQL (well-known, more declarative than procedural), Prolog (if you studied computer science you probably know it), or the languages used by most business rule engines. The nice thing about declarative languages is that you abstract away all the “how”; you avoid implementation details. What these languages promise is that programs created with them are easier to understand and maintain, and sometimes that’s completely true.
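A minimal sketch of the contrast, using Python's built-in sqlite3 module: the same question asked declaratively (declare the result you want) and procedurally (spell out the steps).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("acme", 10.0), ("acme", 5.0), ("globex", 7.5)])

# Declarative: what we want, not how to compute it.
declarative = dict(conn.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer"))

# Procedural: the explicit steps.
procedural = {}
for customer, amount in conn.execute("SELECT customer, amount FROM orders"):
    procedural[customer] = procedural.get(customer, 0.0) + amount

assert declarative == procedural
```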

However, what always bugged me is that users of our App Platform learn to use our procedural DSLs quicker than the more declarative ones. In hindsight that’s maybe not so difficult to explain. It’s probably best understood if you imagine a large system that consists of 1000 rules (or predicates) that are automatically woven into a working system. Do you think a “programmer” can easily connect cause and effect when changing a rule? The lesson: being declarative shouldn’t be a goal on its own; it can even make things more difficult if not applied well. Please note that I focus on the learnability of the language; productivity, conciseness, etc. are different considerations that may influence your choice for declarative languages.

In the end, the abstract syntax of a language should help you reason about the problem you are solving using the concepts of the language. It should be a proper abstraction of the resulting application; this abstraction can be as high-level as possible as long as it decreases the “distance” between cause and effect. The abstract syntax, the concepts of the language, should also help to decompose a problem into manageable pieces and glue them back together as a complete solution.

On semantics: live programming helps to understand cause and effect

The semantics are the most important aspect for people to learn and understand a language. Semantics connect cause (a certain language construct) and effect (the result of executing that language). As explained in the previous sections, the choice of concrete and abstract syntax is important, as it defines the “distance” between cause and effect and hence how easy the semantics of the language are to understand. In addition to the language itself, the tools are equally important in conveying the semantics of a language.

The environment you use can, for example, greatly enhance the readability of a language by providing mouse-overs, integrated documentation, highlighting, etc. But an environment should do more. It should help you to sculpt your program. It should provide refactoring options that help you to start concrete and refactor to more abstract code when needed. It should provide ways to step forward (and backward!) through the program while inspecting state and behaviour.

A great debugger can really help you learn and understand the cause-effect relationship between your language and the actual execution. A nice example is the visual debugger we feature in our platform, and maybe you also know the Eclipse Java debugger (or even the Smalltalk debugger), which allows you to change code during a debugging session (within some limits).

Changing code and directly seeing the effects of it (dubbed “live programming” sometimes) is the ultimate way to closely connect cause and effect. If you combine that with stepping forward and backward through the execution (and thus modify state and code at the same time) you have a powerful environment that really helps a user to relate cause and effect and hence understand the language.

Warning: live programming really helps in understanding the language, but not on its own. Please do not forget the things I previously mentioned about concrete and abstract syntax. Cause and effect can be clearly connected, but if you cannot understand that connection when you read the language it will still be hard to understand the language.

Conclusion

If you want to design the next programming language that is easy to learn and use, you should first understand how people learn. The relation between cause and effect plays an important role in the learnability of a language. If you want to clarify the cause-effect relationship you should focus on both the language design and the tools.

If you like this subject you should definitely read this excellent essay (not written by me; you can safely click the link and will not regret it 😉 ) that goes to great lengths to explain how we can let people understand programming.

5 years, 10 months ago

A tale of a 7 year journey in developing software for the enterprise

Much is written about lean startups, agile software development, continuous integration, or even continuous deployment. All with the goal of helping us understand the dynamics of successful software companies and their development teams. I am inspired by these stories: they show me what the ideal situation looks like, and they challenge me to improve our current way of working.

Today I want to give something back. I want to share with you our tale of developing software for the enterprise over the last 7 years at Mendix. Not to show you the “ideal situation”, but to share with you the challenges we encountered in scaling our teams and in getting customers that bet their business on our App Platform. We have experienced exponential growth ever since we started, and many of the phases we went through are not specific to our situation, so I hope you enjoy the story and perhaps learn something from it.

Start up: what to build?

I will never forget that moment, more than 7 years ago, when I first entered the “office”. Just a few people in one room starting a business. It felt exciting and I couldn’t wait to be part of it. One day later I was sitting at a desk with code in front of me (and two other people constantly using the phone for “sales”).

Did we know what to build? Did we know what our product would look like, even in a month’s time? No. We did have a clear vision, though. A vision that, we believed, would change the world. Our one and only goal in these early months was to create features as quickly as we could, to make our vision tangible and share it with potential customers. In other words: we quickly iterated with the “market”.

The first customers: we need this feature, it blocks this important deal

And then it became even more exciting: the first customers started using our software! However, there was often an accompanying message to the development team: we really need these new features otherwise we cannot deliver what we promised. The situation for our development team in principle stayed the same: cranking out new features as fast as possible. Only we did it for real customer cases this time, instead of for “just” our own vision.

I think every developer recognizes this situation where sales people “demand” new features to be able to close a deal. In this phase of your company you should welcome this (within certain boundaries of course) as your main goal is to search for the right market. The main challenge often is to make sure you do not keep doing it this way, to determine the right moment to switch to a more structured approach.

In the first year we mainly flip-flopped between the customer- and vision-driven modes of operation. How long you iterate between these two modes depends largely on the stability of your vision and how well it fits the actual market you find.

Stabilizing: not all customers want to “live on the edge”

Luckily we found a market that was waiting for a product like ours. There was a lot of pain we could take away with our product. So, we stabilized our product focus and our user base grew quickly. This doesn’t mean our messaging and positioning have been the same ever since; we continuously change and improve those, but the core vision of our product is still the same today.

A growing customer and user base also introduced new challenges. No longer was our focus just on cranking out new features as quickly as possible. There are these customers, you know, that value stability over new features. Whether they do depends on the “maturity” of your customer, but even if they do not value stability, they will quickly learn to. It goes like this: the first time you tell a customer “we fixed the issues you reported and we also added this cool new feature” they will be very happy. But after learning that every tiny change in software also brings new risks, they will start asking for releases without these “cool new features”.

I know, your quality assurance should be so good that users of your product trust every change you release. However, no quality assurance approach can match the “quality increase” a piece of software goes through when it is hardened in production. So, when developing mission-critical software you will end up with different types of releases. In our case we just followed the best practice of having major, minor, and patch releases with the accompanying 3-part versioning scheme (x.y.z).
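A minimal sketch of that 3-part scheme (the function name is invented): patch releases for fixes hardened on the stable line, minor releases for low-risk features, major releases for the roadmap work.

```python
def bump(version: str, part: str) -> str:
    """Bump one part of an x.y.z version string."""
    major, minor, patch = (int(n) for n in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

assert bump("2.4.1", "patch") == "2.4.2"  # bug-fix release
assert bump("2.4.1", "minor") == "2.5.0"  # low-risk features
assert bump("2.4.1", "major") == "3.0.0"  # roadmap release
```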

The additional challenge this brings is how to plan these releases. Ideally, as a development team using an agile software development method, you have a single backlog and you work on a single sprint. Just work with different branches in your version control system to make sure you can release features separately from bug fixes. However, we experienced two difficulties with this approach. First, it becomes a full-time job to discuss the importance of incoming issues with the caller. They quickly learn that the resolution time of their issue depends on how well they explain its severity. Second, it is difficult to keep focus if you mix issues that address any part of your product with the tasks that cover the specific major feature you are working on.

We ended up with a process we still happily use today. We divide our work into chunks of 8 weeks. Each 8-week period contains the following elements:

  • 5 weeks of uninterrupted “roadmap” work. We can really focus on difficult problems, be creative, be innovative, and bring our product to the next level. These 5 weeks are divided into multiple sprints and each sprint results in a working piece of software. One or more of these 5-week periods result in a major release.
  • 3 weeks of “maintenance” work. This period results in a minor release and is focused on processing defects. We include minor features too, but only if the use case is very clear and the risks are low. The changes we do in these minor releases are always merged back to the main line and will thus be included in any future release.
  • 2 “FedEx” days. A monthly day during which developers build a new feature they like, research a new technology, etc.

Organizing our work in this way gives us the following advantages:

  • Our users know that we do a low-impact minor release every 8 weeks. Only if a defect is so severe that a user cannot wait do we discuss a patch release. In general our users really like the clarity of the release scheme and the issue deadline before each release.
  • It makes product management easier. It is very difficult to decide on the priority of small(er) feedback items versus strategic product changes. Now we always dedicate time for the smaller feedback items and we know the amount of time we have for “roadmap” work.
  • It ensures time and focus for innovation, for bold new ideas. It is so easy to just be busy with maintaining the current status quo. You will keep your current users happy, but risk becoming irrelevant in no time.

So, business is coming in, users are happy, development is structured, enough said, right? Well, that could have been the case, but while going through these changes we didn’t see the dark clouds gathering on the horizon…

The 2nd system effect: let’s do it all over again, but better (we thought)

When you develop a product like ours, you make choices, lots of choices, every day. Especially when under pressure to deliver, you need to make choices about leaving out features or implementing a feature less elegantly than you would have done in an ideal situation. Combine that with all the new insights you get when your product is used by more and more customers, and slowly you get the feeling that some fundamental elements need to change to keep up with the growth and the new usage you see (and didn’t predict before you started). At least, that happened to us. We clearly identified a new vision for our product, one that extended the initial vision we had. We were not afraid to aim high, to have ambition. It truly was a so-called Big Hairy Audacious Goal.

So far, so good. We had this great vision and we used our roadmap time to start with the implementation. During the maintenance periods we maintained the current version of our product. The first problems arose when, in the middle of refactoring each and every element of the core of our system, an important, bigger feature was needed by our users. What to do? We only had one option: build it in the current stable line and just keep working on the new major stuff in a separate line. Guess what happened? A couple of months later almost the complete team was working on the “stable” line and the amazing new major release parts were gathering dust somewhere in a corner of our version control system. Finally, more than half a year after we started, we decided to kill the development of the new major release. In hindsight that was one of the best decisions we ever made, and I can tell you that it took a lot of courage.

What did we learn from this experience? First, these kinds of “big mistakes” quickly become the sort of legend that is told to every new team member who joins the company. Second, we realized we had fallen victim to the so-called second-system effect. This effect refers to the tendency of small, elegant systems to have big, feature-creeped systems as their successors (luckily we never released the successor we had in mind). You try to compensate for all the stuff you didn’t get to do in the first version of the system. We thought it was heroic to take the time to re-think everything and to dare to re-implement big parts of our product. We were partly right: it wasn’t the re-thinking that was the problem; it went wrong in the way we defined “working software”, in the way we defined the slices of work for each sprint.

We still worked in sprints and did sprint demo meetings with the stakeholders involved, but we focused too much on horizontal slices. Slicing a system horizontally means implementing it one layer after another, e.g. first build the database layer, followed by the business logic layer, and finally the UI layer (or any other layers you have in your system). Using this approach you cannot show a real end-user feature until you finish almost all layers. Slicing a system vertically means developing just the parts of each layer that are needed for a specific feature, followed by the parts that are needed for the next feature, etc. Vertical slicing enables you to deliver working functionality as soon as possible. It also helps to focus the development of the underlying technical layers, i.e. only build what you need.

By focusing on horizontal slices we moved into a mode of brainstorming and speculating about everything that was needed in a certain layer. This results in discussions and ideas that lead to an endless increase in scope. In fact, years later we can still refer to this version we were trying to build. When we start to build a new feature we often realize: “this was also part of our big master plan years ago, right?”. The number of times this happens is truly astonishing (“did we really think we could include all this in a single major release?”).

So, we changed some things, or better put, we emphasized some parts of our working process. First, the 3-week maintenance period is for minor stuff; if it doesn’t fit in the 3 weeks, it should be part of our roadmap time. Second, during roadmap time we focus on vertical slices and work in sprints of at most 2 weeks that always result in working software.

Sounds trivial, right? Why is it, then, that so many people can relate to a story like this? I think because doing it right takes continuous discipline. As developers we tend to think in horizontal slices when we plan the implementation of new features. Furthermore, a focus on vertical slices can feel wasteful: you have to build things you know you will throw away when you implement one of the next features, and if you already know what else a certain layer will need, why not build it right away?

We had this experience in the past, but keeping our focus is an ongoing challenge. From time to time we need to "reset" ourselves and make sure we have a big vision but operate in small steps. What we truly experience: complexity is easy, simplicity is difficult. And this tendency toward the "complex" isn't the only challenge we faced in keeping our focus…

Ramping up: introducing “departments”

Focus is more than "not doing everything at once". It also means uninterrupted time to do highly skilled work. Our 8-week rhythm helps with that, but as you can imagine we cannot close the doors for 5 weeks during our roadmap period. We also know that a developer who operates in "the zone" is many times more productive than one who is interrupted several times per hour. As your company grows, protecting that becomes quite a challenge.

What we did over the years was introduce departments around product development. There is no definitive guide that tells you when and how to introduce a "department", but these are my main observations:

  • As long as your company has fewer than 20 employees you won't need formal departments. You probably won't even have team leads or "managers" other than the founders.
  • Once you grow beyond, let's say, 20 employees you will probably need to introduce team leads and departments, as the founders can no longer manage everything themselves.
  • There will be a point when the team lead(s) of your product development team(s) are completely occupied with "operational stuff". That's a dangerous situation; you have to detect the early signals and start to "outsource" part of the work to other (probably new) departments.
  • From a development team perspective, one of the first things you will need is people who onboard new customers and, in our case, actually implement solutions for customers using our product. As a company you have to do this yourself at first, to make sure you get the right "lighthouse customers" that help you attract new ones (customers and users need examples to be convinced to start using your product). The dynamics of this type of work are completely different from product development, so you want to split it off to keep your focus.
  • Another type of work with completely different dynamics is support. You want dedicated attention for it as early as possible. Embedding a good support process that ensures knowledge sharing is a key element of scaling your organization. Without proper support you become a victim of your own success: more customers means more developer "distraction". There are of course multiple ways to solve this, but in the end you want a team that gives users dedicated attention, creates FAQs, handles incoming issues, etc. And do not forget we are talking about enterprise software, so there are SLAs to comply with.
  • Tech sales is also something you want to formalize quite early. Selling software to enterprises means answering a lot of questions, and not just explaining how your product works (that, in my opinion, should be explained and written down by the developers who build the features). Think about positioning your product against other types of products and against competitors, creating and giving demos, convincing customer architects, etc. Again, there are smart ways to do this, but someone needs to do it.
  • What you should NOT split out of the product development department is operations (i.e. everything needed to run your software; we're talking "as-a-Service" here). I strongly believe in cross-functional development teams that are responsible for running their own parts of the product (you build it, you run it). This creates a culture of responsibility: developers will think through the consequences of their changes. It also ensures that a vertical slice for a feature incorporates the operational aspects, meaning that finishing such a slice really results in working, releasable software. You can (and probably should) create a separate team that supports the other teams in deploying and monitoring their software, by automating it and providing the developers with the right tools.

By gradually moving to a situation with more "departments" we were able to return to more focused development with enough time for innovation. Every change creates a new challenge, though. Let me tell you a story… after a short intermezzo…

Knowledge transfer: how to keep other departments and users up-to-date?

Outsourcing part of the work to departments other than your product development teams leads to more "distraction" instead of less in the short term. The main reason: knowledge transfer. Support and pre-sales, for example, need to know everything about the product, new releases, etc. We have seen two things that really help with this.

First, you need developers with writing skills. I do not mean "writing code" but documenting new features in a reference guide, explaining how to use them in how-tos, and explaining the rationale behind them in technical blog posts.

Second, we have a "tech-of-the-day": one of the developers is available to answer questions from other departments, and we rotate this role every day (a trivial sketch of such a rotation follows below). This makes sure other departments can always approach someone with in-depth product knowledge, while all other developers keep their focus. It also ensures that each developer is regularly exposed to user feedback and to discussions about how to use or sell our product.
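For the curious, here is a trivial sketch of what such a daily rotation could look like (the roster names are made up, and we use nothing this formal); the nice property is that it is deterministic, so anyone can compute today's contact without a shared tool.

```python
# A deterministic daily rotation for the "tech-of-the-day" role.
# A real version might skip weekends and holidays.

from datetime import date

DEVELOPERS = ["alice", "bob", "carol", "dave"]  # hypothetical roster

def tech_of_the_day(today=None):
    today = today or date.today()
    # Days since a fixed epoch, modulo team size, picks today's developer.
    return DEVELOPERS[today.toordinal() % len(DEVELOPERS)]

print("Today's tech-of-the-day:", tech_of_the_day())
```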

The ivory tower: how to keep the feedback loop connected?

Let’s go to the story I promised you. So, we released this cool new feature. We, as a development team, were enjoying the moment. You have to celebrate your milestones! However, within 5 minutes someone from our “professional services” department stood at my desk with a less joyful face. He had just glanced at our new release and then ran to my desk to explain to me: “this feature you just released is completely useless”. That’s fun, right? What wasn’t so funny was the fact he was right.

How is it possible that we spent weeks crafting a new feature and thought it was the best thing since sliced bread, while one of our users could see the flaws in 5 minutes? Shielding the product development team as much as possible from external "distractions" brings serious issues with it. Isn't "customer collaboration" one of the core values of agile software development? That's for a reason. Without collaboration with your users and stakeholders you are building what you think is valuable, and the better and longer the shield works, the further you move away from reality. Product development slowly turns into an ivory tower. How do you find the right balance between focus and collaboration?

To start with: focus and collaboration are not mutually exclusive. However, we have experienced firsthand that finding the right balance is a challenging, continuous process, especially in a fast-growing company. Here are some things we introduced over the years (some more recently than others) to fight the creation of an ivory tower:

  • The most logical moment to involve users in the development process is the sprint review meeting, which is meant to demo the completed work to the stakeholders. We always try to involve a user in these meetings. This is often quite challenging, as users have busy schedules. If it really isn't possible to have a user attend regularly, the Product Owner should put more effort into aligning with users (at more flexible times) and being the "voice of the user" during the meetings.
  • We start new projects with an "inception" phase that is used to identify the vision and story for the release, bring stakeholders to agreement, and create an initial idea of the user experience. We literally start by writing the press release for the release, present it to stakeholders, and capture all the questions that arise (and their answers) in an initial FAQ. This really helps in guiding and scoping development and involves users as early as possible. What we essentially do is start with the customer and work backwards until we get to the minimum set of technology requirements that satisfies what we are trying to achieve. Through such a continuous, explicit customer focus we aim to drive simplicity throughout the process. But again, we continuously need to remind ourselves: have a big vision, operate in small steps.
  • All of our products include a feedback mechanism that makes giving feedback really effortless (see the sketch after this list). This increases the chance that a user shares his thoughts, frustrations, and awesome ideas with us. It of course also brings an additional challenge: processing all this feedback in such a way that users feel it is valued and taken seriously.
  • We sometimes invite users of our products to present at one of our Friday-afternoon-beer-sessions. In our case this means users from partner companies present the enterprise apps they have built for their customers using our PaaS. This really helps to broaden the perspective of our development teams. These users are also asked to share what they like and dislike. In our experience users are always happy to show their work (and opinion), and they are rewarded: almost every time they mention something that really annoys them (or hint at something they would really like), we can include it in the next minor release.
  • Product management should not be based purely on user feedback. It should be based on multiple influences, among them your own product vision, market trends, marketing, sales, research and development, and user feedback. That being said, product management is a tough job. Most of the time you have to weigh different inputs and say "no" to a lot of them. If you want to please everyone, don't become a product manager; you will create awful, feature-laden products. However, you shouldn't become arrogant either: using just your own vision as input will not lead to the best product.
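As promised, here is a minimal sketch of what such an "effortless" in-product feedback mechanism could capture. All names are hypothetical and our actual implementation differs; the key idea is that the user only types a message while the context is attached automatically.

```python
# Effortless feedback capture: the user types one message,
# the product attaches the context a developer will need.

import json
from datetime import datetime, timezone

FEEDBACK_LOG = "feedback.jsonl"   # stand-in for a real queue or database

def submit_feedback(message: str, user_id: str, screen: str, app_version: str) -> dict:
    entry = {
        "message": message,                    # the only thing the user types
        "user": user_id,                       # context captured automatically
        "screen": screen,
        "version": app_version,
        "at": datetime.now(timezone.utc).isoformat(),
    }
    with open(FEEDBACK_LOG, "a") as f:         # append-only, so nothing gets lost
        f.write(json.dumps(entry) + "\n")
    return entry

submit_feedback("The save button is hard to find", "u42", "editor", "4.1.0")
```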

This list is by no means complete. We feel we still need a lot of improvement in this area, and it remains a point of continuous attention.

Entering the real enterprise world: when you thought functional requirements were the only things that interest customers

We have talked a couple of times about developing "enterprise software". What do we mean by that? Actually, that changed over time. We did not start with a Fortune 100 company as a customer; what we first saw as a big company is small compared to the customers we have now. In delivering software to the really big companies we encountered additional requirements. If you have ever built enterprise software, this shouldn't be a surprise.

First, the product or service you deliver should be covered by a Service-Level Agreement (SLA). I am not a legal expert, so I won't go into the details. Just one piece of advice: make sure you carefully read and understand the SLAs of the services your product depends on. If, like us, you build software that runs in "the cloud", you need to understand the details of the SLAs of the Infrastructure-as-a-Service (IaaS) providers (hint: they can be meaningless). Your own SLA is limited by the SLAs of your providers; if it isn't, you probably want to contact an insurance company.
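To make that limit concrete, here is a back-of-the-envelope sketch (the percentages are hypothetical): for serial dependencies, your worst-case availability is the product of your own availability and your providers'.

```python
# Why your SLA is bounded by your providers' SLAs:
# for serial dependencies, worst-case availability multiplies.

def composite_availability(*availabilities: float) -> float:
    result = 1.0
    for a in availabilities:
        result *= a
    return result

# Hypothetical numbers: your service at 99.95% on an IaaS provider at 99.9%.
a = composite_availability(0.9995, 0.999)
print(f"Composite availability: {a:.4%}")                           # ~99.85%
print(f"Allowed downtime: {(1 - a) * 30 * 24 * 60:.0f} min/month")  # ~65 min
```

So promising customers 99.95% while your provider only commits to 99.9% means you are underwriting the difference yourself.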

For us as a product development team, the nice thing is that these kinds of agreements lead to higher uptime requirements. Hence, more time on the roadmap for things like high availability, horizontal scalability, failover mechanisms, etc. Which is interesting stuff, right?

Another requirement some of our customers (especially in the financial and healthcare industries) have is that we as a company are audited every year. These customers are under a lot of regulation and are only allowed to do business with companies that comply with a number of standards, such as ISAE 3402 and SSAE 16. So, what do auditors look for in your product development processes? In my experience these are the main elements:

  • A traceable software development lifecycle: take the current version of your product that runs in production. You have to be able to show who approved the release and when, what the test report was, the source code for that release including the commit history, and the requirements and design decisions for the new features in that release. This may sound bureaucratic, but with the proper tooling it is fairly easy and adds little overhead (see the sketch after this list). We use a version control system connected to a Scrum tool (part of our own platform) containing the user stories (including related discussions), the results of our test runs are saved (connected to builds), and we have a lightweight approval process for releases (just me giving written approval in response to a request from one of the development teams that contains the build number / revision and test report).
  • Disaster recovery: you have to have a disaster recovery plan (and not just for audit purposes). You have to test the plan at least once a year and document that. Again, not just for audit purposes: an untested backup (or failover environment) equals no backup.
  • Maintenance processes: you have to have change management processes in place that ensure changes to environments are properly scheduled, approved, and traceable. Furthermore, you have to establish processes that ensure (security) patches for the third-party software you use are noticed and applied as soon as possible.
  • Documentation of procedures: most people I know aren't fans of audits because of the focus on documentation, and I have to admit that was my point of view, too. Documentation is, of course, the only way an auditor can verify your processes. In my experience, however, most auditors are quite pragmatic and do not ask for multi-page process descriptions; you can keep it short and to the point. In a growing organization with a growing product you inevitably need more documentation. Not everybody will automatically know and understand everything (as in the early days), and new employees need to learn a lot. Early-day employees often don't realize how complex the current organization is for newcomers, because they never had to learn everything at once; they grew into it.
  • Staff training: for me this is probably the most "difficult" part of the audit. You need to show that your employees are capable and know their stuff (especially with regard to security). That's difficult to show if you have a team of autodidact A-players who use blogs and social media to keep their knowledge up to date. We hire people with such skills; we do not look at certifications, so we do not organize formal training. However, organizing an in-house training day (read: a hacking contest, where you learn a lot about security by hacking example systems of increasing difficulty), with an attendee list and a short description of the day, can often do the job too.
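As referenced in the first bullet, here is a minimal sketch of what release traceability can look like in practice. The tag names and the "ABC-123" story-ID convention are assumptions, not our actual setup: given two release tags, it lists every commit and the user stories it references, so an auditor can walk from release back to requirement.

```python
# Walk a release: list all commits between two tags and the
# user-story IDs referenced in their commit messages.

import re
import subprocess

def commits_between(prev_tag: str, tag: str) -> list[str]:
    out = subprocess.run(
        ["git", "log", "--oneline", f"{prev_tag}..{tag}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def trace_release(prev_tag: str, tag: str) -> None:
    for line in commits_between(prev_tag, tag):
        stories = re.findall(r"[A-Z]+-\d+", line)  # e.g. "ABC-123" in the message
        print(line, "->", stories or "NO STORY REFERENCED")

trace_release("v4.0.0", "v4.1.0")   # assumes these tags exist in the repo
```

A check like this can even run in the build pipeline, rejecting commits that reference no story at all.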

The main challenge? Fighting the obvious risk: becoming a bureaucratic, inert organization. So keep it lightweight and be pragmatic. Focus on the reasons behind these questions and controls; in the end it is all about quality and continuity of service, which are good things to focus on.

SLAs and audits are just two examples of so-called non-functional requirements for enterprise software. In all this, it is again about finding the right balance: non-functionals need more attention than in the early years, but they should not stop innovation from a functional perspective.

Scaling the team: how to keep the “startup” vibe?

If you are still with me, you have probably sensed the growth we experienced over the last years. That growth also influenced the size of our product development team. We started with just a few people; the company was one team. As explained, over time "departments" were introduced and thus a product development "department" was created. Currently, a couple of years later, our product development department consists of 5 teams, and within a couple of months teams 6 and 7 will be created, as teams that grow too big are split.

In product development we didn't grow very fast. If you grow too fast, teams are busier getting new people up to speed than building new versions of the product. Furthermore, it is important to not only get people up to speed with the technical work; they should also become part of the culture, they should breathe it. We call that: painting them Mendix-blue. Your culture is a competitive advantage that cannot be copied, so foster it! Part of a focus on "culture" is the way we evaluate whether someone fits the team. We do not just look at skills; we let candidates talk with some team members and see whether together they add up to more than the sum of the individuals. It is about passion, enthusiasm, and thereby pushing each other to excel.

So, culture is important, but how do you stay focused, aligned, and flexible while also growing the product development team? Here are some of the things we do to prevent ourselves from becoming the next big inert organization:

  • We keep the teams small (8 people at maximum) and autonomous.
  • Teams are cross-functional. A team is responsible for a product, a part of a product (a service), or a certain feature. This responsibility covers everything from design and implementation to releasing and running it.
  • Each team has a Scrum master who is part of the team. In principle a Scrum master is a developer with an additional role. With the small teams we have, that works well.
  • Everybody is a “maker”. Everybody creates stuff, whether it is code, tests, infrastructure automation, or designs. We do not have multiple layers of “management”.
  • We have weekly Scrum-of-Scrums meetings to align the teams. Each team writes down its "what did we do" and "what are we going to do" notes beforehand. During the meeting (attended by a member of each team) we discuss impediments and resolve issues that affect multiple teams. Everything we share and discuss is posted on our internal "social network", so every team member always knows what's going on in the other teams.

Much more can be written about growing a product development team. For now, just a final observation: system architecture tends to follow team structure (Conway's law), even if communication among teams is well-organized. Keep that in mind when you think about your team structure.

Moving on

Well, this piece became a bit longer than initially planned, but I hope you enjoyed it. Please share your experiences, questions, or remarks in the comments!

We will continue our journey and keep improving our way of working, because it's fun, but also because it's necessary. We need to stay innovative and keep challenging ourselves to remain successful.

Let me finish with a meta-observation: the word "focus" appears quite a few times in this story (30 times, actually). That matches my experience: nothing beats a motivated, highly focused team of professionals with a clear goal!

By the way: we are hiring for multiple development positions including a software development manager / agile coach. So, if you enjoyed this article and want to join (and impact) our process of continuous improvement you should check out our open positions.

You can also discuss this article on Hacker News.
