I had a chance to chat with Beth Pariseau a while back for an article she was working on titled AI's Role in Future of DevOps Provokes IT Industry Agitation. I love that we're finally having this discussion and am intrigued by the responses I've seen and heard from Beth's article.

The Real Conversation: New Patterns Breaking our Assumptions

When it comes to the (now) age-old debate as to what DevOps is, how organizations embrace these types of practices, and who might be involved, we're really only working to avoid the actual issues 'under the hood.' If we were to take a whirlwind tour of the history of technology and software in real-world, practical applications, we'd see an interesting trend:

  • In the beginning, there were engineers and they did everything.
  • As time went on, the role of technology grew, and 'engineers' specialized more and more heavily over time.
  • These specializations grew into separate disciplines focused on specific aspects of technology (software, security, networking, operations, etc.).
  • As this continued, these disciplines detached not only from each other but also, in many ways, from direct value to the mission they were serving (business or otherwise).
    • In many ways, I believe the (well-intentioned) assumption was that those in a position of 'technology leadership' would be the unifying layer between these disparate parts.
  • We then see an explosion of programmatically accessible (API) tools and services within the market, and as these capabilities become easier to consume, pervasive automation becomes more accessible.
  • Then we saw (and still see!) companies working to further democratize these types of activities, from configuration management to governance automation.

Interestingly, in this (VERY) brief and focused history of the industry, we spent a lot of time working to make it easier for humans to interact with machines directly (building graphical user interfaces, etc.) until the scale of things reinforced an 'order of magnitude' delta between current capabilities and market demand. That is to say, this 'cloud' thing that has been the harbinger of change on multiple levels for IT professionals effected change not only in terms of convenience (whenever you want it) and a compelling economic model (on-demand); it fundamentally changed the nature of how we could interact with it (programmatic access via APIs).

That last piece is critical to understanding the fervor, excitement, fear, and confusion over DevOps: it's no longer necessary to manage with point-and-click processes, and done right, you can manage massive fleets of resources with relatively few people. Further, it offered an opportunity for software engineers, specialized into their own corner of the process, to have a massive impact in shaping how their software is deployed and managed by applying the same basic patterns they learned from object-oriented programming (OOP) to the servers (or instances) they were deploying to. This level of automation and, in turn, access (and visibility) quickly showed us the power of applying this pattern and created an entire sub-industry within the technology operations world. To be clear, it's not that automation on some level wasn't happening with bash and PowerShell scripts; it's that the engineering discipline, specifically OOP insight and experience, coming to bear unlocked a whole new level of capability and productivity in the IT operations and systems management space.
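To make the 'OOP patterns applied to instances' idea concrete, here's a minimal, hypothetical sketch. None of these class or package names come from a real cloud SDK; they're invented to illustrate the declarative, idempotent pattern the paragraph describes.

```python
# Hypothetical sketch: treating a fleet of instances as objects with a
# declared desired state, in the spirit of applying OOP patterns to
# operations. Names are illustrative only, not a real cloud SDK.
from dataclasses import dataclass, field


@dataclass
class Instance:
    """A single managed server with a declared desired state."""
    name: str
    desired_packages: set = field(default_factory=set)
    installed_packages: set = field(default_factory=set)

    def converge(self):
        """Install anything declared but missing (idempotent by design)."""
        missing = self.desired_packages - self.installed_packages
        self.installed_packages |= missing
        return sorted(missing)


@dataclass
class Fleet:
    """Apply the same pattern uniformly across many instances."""
    instances: list

    def converge_all(self):
        return {i.name: i.converge() for i in self.instances}


fleet = Fleet([
    Instance("web-1", desired_packages={"nginx", "certbot"}),
    Instance("web-2", desired_packages={"nginx"}, installed_packages={"nginx"}),
])
result = fleet.converge_all()
print(result)  # web-1 gets both packages; web-2 needs nothing
```

The point isn't the toy package manager; it's that running `converge_all()` twice is harmless, which is exactly the property that let small teams manage massive fleets.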

Now we sit on the cusp of another, eerily similar opportunity, where patterns in how we leverage data offer incredible opportunities for further efficiency, insight, and improvement. As with the DevOps journey, it's not as if we (I'll include myself here too!) haven't been watching graphs, setting static and even calculated thresholds for alerting on systems and then services, but it's nearly always been reactive. Our fundamental assumption was that by putting people in front of monitors, watching graphs, and setting these alerts, we'd build a safe system we could react to when issues did happen. Done well, it's not a terrible assumption, but even in the best of setups there's always a set of 'unknown unknowns' unless someone is actively seeking them out to reduce risk.
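For readers who haven't built one, a 'calculated threshold' can be as simple as alerting when the newest reading exceeds the recent mean by a few standard deviations. The metric values and window size below are made up for illustration; note how the check only fires after the spike has already happened, which is the reactive posture described above.

```python
# A minimal sketch of a 'calculated threshold' alert: fire when the
# latest reading exceeds the mean of the preceding window by k stdevs.
import statistics


def breaches_threshold(readings, window=10, k=3.0):
    """Return True if the newest reading exceeds mean + k*stdev
    of the preceding window (a reactive, calculated threshold)."""
    history, latest = readings[-window - 1:-1], readings[-1]
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1e-9  # avoid a zero threshold
    return latest > mean + k * stdev


cpu = [41, 43, 40, 42, 44, 41, 43, 42, 40, 42, 95]  # sudden spike at the end
print(breaches_threshold(cpu))  # the spike trips the alert
```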

(r)Evolution of the Role of Technology

What I'm trying to establish here is that, while accessible and productive artificial intelligence and machine learning capabilities are becoming more prevalent, the issue of change in technology isn't something new. Is this a true 'revolution' in terms of how we think about IT operations? Maybe... I'd argue that it's likely bigger than that. These techniques and capabilities fundamentally change how we might look at leveraging the data that sits at our fingertips every day. Applied to the IT operations space, a place where there's a lot of well-known and mostly structured data, it seems like an obvious 'win' from the perspective of working to improve systems visibility, health, performance, and ultimately service uptime and availability.

I think that this is an application of intelligent algorithms and systems where there is such a body of knowledge and experience in the market that training these systems would be fairly straightforward. Depending on the amount of historical data that's been kept, it could even be possible to train and calibrate these systems based on real-world incidents and issues in order to build confidence in the process.
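The calibration idea can be sketched in a few lines: given historical metric samples labeled with whether they coincided with a real incident, pick the cutoff that best separates the two on the historical record. The data below is synthetic and the approach is deliberately naive (a single-feature threshold search, not a real AIOps product), but it shows how past incidents become training signal.

```python
# Sketch (on synthetic data): calibrating an alert threshold against
# labeled historical incidents by choosing the cutoff with the highest
# accuracy on the historical record.
def calibrate_threshold(samples):
    """samples: list of (metric_value, was_incident) pairs.
    Try each observed value as a cutoff; keep the most accurate one."""
    best_cut, best_acc = None, -1.0
    for cut, _ in samples:
        correct = sum((value >= cut) == was_incident
                      for value, was_incident in samples)
        acc = correct / len(samples)
        if acc > best_acc:
            best_cut, best_acc = cut, acc
    return best_cut, best_acc


# Invented history: CPU readings and whether an incident was underway.
history = [(35, False), (40, False), (42, False), (48, False),
           (85, True), (90, True), (97, True)]
cut, acc = calibrate_threshold(history)
print(cut, acc)  # the learned cutoff cleanly separates this history
```

Reporting `acc` alongside the chosen cutoff is one small way to build the confidence in the process that the paragraph above calls for.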

This, the issue of confidence, is where it all comes into focus and where the vast majority of arguments against such a system stem from. Our lack of confidence in the 'system' calls into question whether these capabilities can have the same level of 'insight' and 'intuition' as people, though I'd argue that the two (machine learning and human interaction) aren't mutually exclusive. As simple as it might sound, the easiest path forward may be to be intentional about measuring and reporting on outcome achievement over time. This is far harder to actually accomplish than it seems, as evidenced by the multiple books, seminars, and other publications on the subject.

All the Things?


How far does it go? How far is 'too far'? Is there a 'too far'?

Here's the hard question: how much is enough? How much do you need to invest to even get started and still be able to see it 'work'? Honestly, the jury's still out on that one.

Truth be told, there's so much speculation in the market today around what 'could be' that it's easy to get lost in the art of the possible. I fully believe that there are far fewer impossibilities than there are opportunities that aren't worth the cost (as of now), but as technology continues to mature and we see further acceleration through tooling, automation and other management techniques, I'd expect those costs to go down. Re-evaluating the cost/benefit calculus on a regular basis is critical to knowing when to engage on ideas that were once whimsical and conceptually distant.

Wouldn't it be cool if...

That's the question I want to be asking more often, with greater confidence in my ability to speculate, dream, and even experiment with the question to its fullest extent.

This is where I'm excited to see the pattern replaying for AI and Machine Learning in much the same way it played out for DevOps. Our collective use of automation has helped us accelerate our ability to experiment while the (seemingly) relentless waves of 'new and interesting' things coming out on a nearly daily basis allow us to exercise our curiosity on a more regular basis, even in seemingly minor things. We're getting better at seeking the answer to the 'what if' question, and that makes me excited for how this next wave of innovation will break open new opportunities that we aren't even dreaming about today.
