Both InfoQ and QCon focus on topics which we believe fall into the innovator, early adopter, and early majority stages. In essence, what we try to do is identify ideas that fit into what Geoffrey Moore referred to as the early market - where “the customer base is made up of technology enthusiasts and visionaries who are looking to get ahead of either an opportunity or a looming problem” - and that we think are likely to “cross the chasm” to broader adoption. The original basis for this work is the research of Everett Rogers, a professor of communication studies, who popularised the theory in his book Diffusion of Innovations; the book was first published in 1962 and is now in its fifth edition (2003).
This article, following on from the Culture and Methods piece we published last week, provides a summary of how we currently see the operations space, which for us is mainly DevOps and cloud. As we noted in that piece:
If a topic is on the right-hand part of the graph, you will probably find lots of existing content on InfoQ about it – we covered it when it was new, and the lessons learned by the innovators and early adopters are available to help guide individuals, teams and organisations as they adopt these ideas and practices. The things on the left-hand side are the ones we see as emerging now, being used by the innovators and early adopters, and we focus our reporting and content on bringing these ideas to our readers’ attention so they can decide for themselves which they should be exploring now, or waiting to see how they unfold.
Notable changes since we last reviewed this area include the consolidation of container orchestration, with Kubernetes having “won”, though it seems likely that the key cloud platforms will start to abstract some of these details away. We think that Service Mesh technology will become the new focal point for developers, with options including Envoy, Istio, Conduit and Linkerd (both from Buoyant), and NGINX’s nginmesh, amongst others.
Chaos Engineering - defined in the “Principles of Chaos” as “the discipline of experimenting on a distributed system in order to build confidence in the system’s capability to withstand turbulent conditions in production” - is a field where we’re seeing growing interest. We also see Site Reliability Engineering (SRE), and what Google refers to as “Customer Reliability Engineering (CRE)”, as an area to keep an eye on.
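As a rough illustration of the experiment loop behind that definition - hypothesise a steady state, inject a fault, then re-verify the hypothesis - here is a toy Python sketch. The `ReplicatedService` class and its methods are invented purely for this example; real chaos tooling would inject faults into live infrastructure instead.

```python
class ReplicatedService:
    """Toy stand-in for a distributed system with redundant replicas."""

    def __init__(self, replicas=3):
        self.replicas = replicas

    def handle(self, request):
        # As long as at least one replica survives, requests succeed.
        return "pong" if self.replicas > 0 else None

    def kill_random_replica(self):
        # The injected fault: terminate one instance, much as a tool
        # like Chaos Monkey terminates a VM in production.
        if self.replicas > 0:
            self.replicas -= 1


def steady_state_ok(service):
    """Hypothetical steady-state check: the service still answers requests."""
    return service.handle("ping") == "pong"


def run_experiment(service):
    """One chaos experiment: confirm steady state, inject a fault,
    then check whether the steady-state hypothesis still holds."""
    assert steady_state_ok(service), "system unhealthy before experiment"
    service.kill_random_replica()
    return steady_state_ok(service)


svc = ReplicatedService(replicas=3)
print(run_experiment(svc))  # True: the system tolerates losing one replica
```

The point of the pattern is the ordering: the steady state is measured before the fault is injected, so a failed experiment tells you the hypothesis about resilience was wrong, not merely that something broke.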
We have a general sense that Unikernels have so far failed to get much traction and most likely won’t; likewise interest in bootable apps seems to have waned.
For context, here is what the topic graph looked like for the second half of 2017. The 2018 version is at the top of the article.
The following is a lightly-edited copy of the corresponding internal chat log between the InfoQ Cloud and DevOps queue editors, which provides more context for our recommended positioning on the adoption graph.
What’s your sense of what is happening with container orchestration - is that growing? Is Kubernetes winning? We don’t have DevEx here (we do have it in culture - should we?) Are Unikernels still a thing?
Kubernetes is definitely winning the war, and adoption is growing. I would still keep it as early adopter, but rising fast.
I'd place DevEx under innovator. Eventually there will be an "OpsEx" movement too, but I'm not sure if we can add it at this moment :)
Unikernels don't seem to have picked up yet, I think. My gut feeling is that they're still missing some easy-to-adopt way to use them, like Docker provided for Linux containers. I'd keep them at innovator.
Something that's been talked about a lot these days is psychological safety, although it might fall under Culture more than DevOps? It seems to be in the early adopter phase as a thought-out concept.
Also "new"/missing from current graph is multi-cloud infrastructure as code (in other words, Terraform). I'd probably place it as early majority already.
Other changes to current graph for me:
Kubernetes is winning the orchestrator battle, but the war is shifting. IMO, the release of AWS Fargate is the first sign that platforms will start pushing the running of these frameworks under the covers and away from developers/devops (and you can argue that Cloud Foundry's adoption of K8s is also encouraging this). The service mesh will be the new focal point of developer interaction, in combination with a simplified declarative ops model (e.g. developers specify runtime requirements of 2 vCPUs and 1 GB of RAM, but not a full k8s YAML config) and CD with intelligent canarying.
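To make that simplified declarative model concrete, a hypothetical helper could expand a developer's minimal runtime requirements into the Kubernetes-style resource block a platform would otherwise make them write by hand. The function name and spec shape below are assumptions for the sketch; only the `resources.requests.cpu`/`memory` fields mirror real Kubernetes container specs.

```python
def container_spec(image, vcpus, memory_gb):
    """Hypothetical sketch: expand minimal developer-supplied runtime
    requirements into the Kubernetes resource-request block that a
    platform (Fargate-style) could generate on the developer's behalf."""
    return {
        "image": image,
        "resources": {
            "requests": {
                "cpu": str(vcpus),           # e.g. "2" vCPUs
                "memory": f"{memory_gb}Gi",  # e.g. "1Gi" of RAM
            },
        },
    }


# The developer states intent only; the full deployment YAML stays hidden.
spec = container_spec("example/app:1.0", vcpus=2, memory_gb=1)
print(spec["resources"]["requests"])  # {'cpu': '2', 'memory': '1Gi'}
```

The design point is the asymmetry: the developer's input is two numbers, while the platform owns everything else in the deployment manifest.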
I agree with Manuel on DevEx being innovator. I also think there is to some degree an OpsEx thing going on, and we are seeing this with more attention being paid to the control planes of operational tools, e.g. Istio, Terraform, the AWS CLI, etc. There have always been ops control planes, but these have typically been humans (translating requirements to scripts), or super clunky CLIs.
I agree also with Manuel on Unikernels, although I think the limiting factor at the moment is the lack of "killer app". Mainstream OS containers give us 80% of what we need, and most people only use 20% by simply throwing apps into a container package without thought to security or performance benefits that could be utilised :-) I think Unikernels could be interesting in the IoT space, and also with AWS and Oracle pushing bare metal services once again, there could be an opportunity for some (niche?) shops to run their own hypervisors for Unikernels - perhaps for specialist or legacy tech?
+1 on the psychological safety too.
I also like the idea of multi/hybrid cloud support, both at the infra and service layer. You can argue services like CoreOS Tectonic and Rancher are also looking to exploit this space, and exploit fears of single cloud lock-in.
I'll also +1 everything Manuel said with current graph, with one exception:
In addition, here are my thoughts:
My goodness, what a good set of answers so far. Nothing I disagree with. We have CI/CD here, but I'd also add "pipelines for infrastructure." Thinking of continuously updated platforms that power those continuously updated apps. Feels like innovator or early adopter. I'd put "devops for Data" in the same bucket. Seeing more chatter amongst those who want to see DevOps principles applied to storage, network, and security.
I think Manuel/Daniel pretty much nailed it, so a few comments to add some colour:
Unikernels - Justin Cormack posted a blog a few days back where he made the observation that, "The sort of size you can purchase is specified by the vendor. This was one of the issues with unikernel adoption, as the sizes of VMs provided by cloud vendors were generally too large for compact unikernel applications, and the smaller ones are also pretty terrible in terms of noisy neighbour effects and so on." If I look at what's happening with the latest generation of Nitro-powered C5/M5 AWS instances, the situation is getting worse rather than better.
The D in CD meaning Delivery rather than Deployment - as CD has moved on from early majority, it's hit a lot more regulated companies, where a straight-through path all the way to production is seen as violating segregation of duties. So everybody that can do Continuous Deployment probably is already, and the expanding edge (into the later adopters) is all about Continuous Delivery. Also, if you get to Continuous Delivery and decide to flip a bit to enable full Continuous Deployment, it's hardly a big deal given all of the other work and culture change that will have happened to make that possible.
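The "flip a bit" point can be sketched with a toy pipeline (the function and stage names are invented for illustration): Continuous Delivery and Continuous Deployment share every stage, and differ only in whether the final production deploy is automatic or waits on a human approval.

```python
def run_pipeline(build, auto_deploy=False, approved=False):
    """Toy pipeline sketch. Every stage up to 'releasable' is identical
    for Continuous Delivery and Continuous Deployment; the only
    difference is whether the last step needs a human to approve it
    (e.g. to satisfy segregation of duties in a regulated company)."""
    stages = ["commit", "build", "test", "stage"]
    # Continuous Delivery: releasable after every change, but a person
    # pushes the button (approved=True).
    # Continuous Deployment: the same pipeline with auto_deploy=True.
    if auto_deploy or approved:
        stages.append("deploy-to-production")
    return stages


print(run_pipeline("v1"))                    # stops at 'stage': releasable, not released
print(run_pipeline("v1", auto_deploy=True))  # goes all the way to production
```

All the hard work - automation, testing, culture - sits in the shared stages; the bit flip itself is trivial, which is exactly the argument above.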
Kubernetes definitely won the war, and EKS shows the capitulation of the last major holdout. I'm not sure Fargate matters much here (it's just what ECS should have been in the first place); but there's some way to go on how we think about capacity allocation and charging (the white space between billing models predicated on VM sizing and billing models predicated on function invocation). There's also miles to go in terms of making Kubernetes accessible and useful, which is why we see Heptio, Istio, Metaparticle etc.
Multi/Hybrid Cloud - every practical application of this I've seen has been done with Kubernetes. Cloud Brokerage seems to have become problematic, as a broker can only work across a lowest common denominator of services, whilst most users want the ability to harness the latest and greatest - whether that’s new VM instance types or entirely new services (e.g. Barclays declaring Cloud Brokers an anti-pattern).
Atomist - I see this as metaprogramming rather than ChatOps; and we're seeing a fresh dawn for metaprogramming beyond what's happened before with LISP and Ruby DSLs. The next leap will probably come from the ML world (think of what happened to auto-suggest and language translation when Google indexed the whole web, applied to a machine that's 'read' the whole of GitHub and StackExchange) - so the don't repeat yourself (DRY) principle becomes don't repeat anybody (DRA).
Nothing fundamental to add here indeed, so likewise just a few additional cents from me (cloud focused):
I agree that Kubernetes has won. That being said, I fully expect approaches that a) hide its operational complexity and b) embrace billing models based on fine-grained resource usage (function invocation etc.) to grow important and dominant quickly, because the old PaaS mantra that most developers ultimately just want to build apps remains true (mechanical sympathy notwithstanding), as ever so eloquently analyzed by Simon Wardley in general, and regarding AWS in particular. Accordingly, I think Fargate matters, because it provides serverless operations and FaaS style pricing via ECS right away, and for Kubernetes workloads on the most popular cloud platform soon(ish).
From that 'Serverless' angle I've been missing the keyword 'Function as a Service (FaaS)' (though conceptually present) and think we should embrace the distinction between 'Backend as a Service (BaaS)' as fully managed and automatically scaled application components in general (early adopter to early majority already - vast space of course) vs. FaaS as a specific compute pattern that matches particularly well with microservices and event-driven programming (innovator to early adopter still).
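For readers less familiar with the FaaS compute pattern mentioned here, a minimal, purely illustrative handler might look like the sketch below. The event shape and function name are assumptions, loosely modelled on the common event/context handler signature; the essential property is that the function is stateless and invoked once per event, which is what lets the platform scale it out and bill per invocation rather than per provisioned VM.

```python
def handler(event, context=None):
    """Minimal FaaS-style function (illustrative only): stateless,
    invoked once per event. The platform - not the developer - decides
    when, where, and how many copies of this run."""
    order = event.get("order_id", "unknown")  # hypothetical event field
    return {"status": "processed", "order_id": order}


# Simulate a single event-driven invocation, e.g. triggered by a
# message arriving on a queue or an object landing in storage.
print(handler({"order_id": "42"}))  # {'status': 'processed', 'order_id': '42'}
```

This is also why the pattern pairs well with microservices and event-driven designs, as noted above: each function owns one narrow reaction to one event type.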
Everything has already been said about the D being for Delivery, and about pipeline-based configuration as code across the entire stack.
I also agree with Daniel's qualification of ML for Ops insights and alerting as 'innovator' - the need for this was apparent as early as 2012, but back then we lacked sufficiently mature and accessible ML technology, which seems to be changing right now. The same goes for ML in the context of metaprogramming; I like Chris' DRY => DRA evolution here, nice one :) (though there's a way to go, it seems...)
Finally, I'd also add 'edge computing' as an innovator/early-adopter topic (e.g. Lambda@Edge, S3 Select, Amazon Athena, AWS Greengrass, Joyent's Manta) - while e.g. Manta has been around for a while, these capabilities are quickly becoming accessible once attached to already widely adopted cloud services like CloudFront, S3 and Lambda.
One more big thing which we seemed to have overlooked: SRE. Until 2017 I didn't really think this was going to take off. I still think there are more straightforward ways to reach the same goals (if we look at it from a methodology/organizational point of view) for most organizations which are not Google-scale, but the truth is that the term SRE seems to have taken a turn towards reliability as a capability (regardless of the responsible team: dev, ops, or actual SRE engineers), and thus more and more orgs are adopting/claiming to do it. I would place it under early majority.
I missed the launch, and it's very old news now, but perhaps we'll start seeing case studies emerge in 2018.
If you think we’ve missed anything important, or are off base on something, please let us know in the comments.
The InfoQ editorial team is built by recruiting and training expert practitioners to write news items and articles and to analyse current and future trends. Apply to become an editor via the editor page, and get involved with the conversation.
Manuel Pais is a DevOps and Delivery Consultant, focused on teams and flow. Manuel helps organizations adopt test automation and continuous delivery, as well as understand DevOps from both technical and human perspectives. Co-curator of DevOpsTopologies.com. DevOps lead editor for InfoQ. Co-founder of the DevOps Lisbon meetup. Co-author of the upcoming book “Team Guide to Software Releasability”. Tweets @manupaisable.
Daniel Bryant is leading change within organisations and technology. His current work includes enabling agility within organisations by introducing better requirement gathering and planning techniques, focusing on the relevance of architecture within agile development, and facilitating continuous integration/delivery. Daniel’s current technical expertise focuses on ‘DevOps’ tooling, cloud/container platforms and microservice implementations. He is also a leader within the London Java Community (LJC), contributes to several open source projects, writes for well-known technical websites such as InfoQ, DZone and Voxxed, and regularly presents at international conferences such as QCon, JavaOne and Devoxx.
Richard Seroter is a Senior Director of Product at Pivotal, with a master's degree in Engineering from the University of Colorado. He's also a 10-time Microsoft MVP, trainer for developer-centric training company Pluralsight, speaker, the lead InfoQ editor for cloud computing, and author of multiple books on application integration strategies. Richard maintains a regularly updated blog on topics of architecture and solution design and can be found on Twitter as @rseroter .
Chris Swan is CTO for the Global Delivery Organisation at DXC.technology, where he leads the shift towards design for operations across the offerings families, and the use of data to drive optimisation of customer transformation and service fulfilment. He was previously CTO for Global Infrastructure Services and General Manager for x86 and Distributed Compute at CSC. Before that he held CTO and Director of R&D roles at Cohesive Networks, UBS, Capital SCF and Credit Suisse, where he worked on app servers, compute grids, security, mobile, cloud, networking and containers.
Steffen Opel is managing partner at Utoolity, a provider of tools for cloud computing operations and software development processes. With a formal education in C++ and an early focus on rich client technologies, he joined the paradigm shift to RESTful web service architectures early on. The major industry move towards cloud computing refueled his interest in thorough automation of development processes, and his focus shifted to DevOps scenarios, where he enjoys API-driven development in agile teams.