DevOps, one of the most significant recent trends in software engineering and IT operations, defines a set of processes, tools, practices, and interactions that foster collaboration between developers and operations teams to achieve high-quality business outcomes. In this article, we are going to focus on the “Ops” aspect of DevOps. If you’re a healthcare IT organization and you haven’t yet begun to adopt DevOps principles into your infrastructure provisioning and application deployment processes, you may already be behind.
Traditional IT Vs. DevOps
Healthcare IT organizations have many processes. Traditional IT operations organizations use these processes to tie together the interactions between various operations teams. These teams are typically either purposely organized as, or have evolved into, silos – a team of specialists responsible for a specific function of the operations. One team may be responsible for compute, another for storage, another for networking. Hand-offs between teams may take hours, even days, and a single server may take days or even weeks to fully provision. This delay frustrates the business owner who is waiting for the system.
DevOps requires a fundamental shift from traditional IT operations processes. It’s about taking the standard roles – “Dev” and “Ops” – and melding them together, ensuring that they, along with the business owners, work as one team. The team is then responsible for the outcome and the speed of the outcome. This can be achieved either organizationally (all the players on the same team) or with shared goals, outcomes, and incentives to deliver a product quickly and with high quality. Automation plays a key role in delivering services quickly, but it’s not the only key.
You may think that your infrastructure deployment process is already automated, and you may also be fooled into thinking your organization is “doing the DevOps” already. If that’s the case, ask yourself how your teams work together, or whether they work together at all. Does everyone on a team (or project) have a common goal? That common goal should be rapid delivery of the software or system, enabled by automating as many of the steps that slow down traditional delivery methods as possible. As one step completes, the next kicks off.
Now take a second look at your processes and consider whether your server provisioning or software deployment process is completely automated from start to finish. Are there steps in between that require approval? Emails to be sent? Paperwork to be filled out? Files to be copied from one environment to another? It is still acceptable to require approval steps; in fact, in a healthcare IT environment, these kinds of “gates”, or “checkpoints”, are required for auditing and compliance. These gates should exist to confirm that previous steps have completed successfully, not to block or hold up the deployment, and the processes that implement these checkpoints or interim steps should themselves be automated.
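To make that idea concrete, here is a minimal sketch of an automated gate written in Python. It assumes a hypothetical setup in which the previous pipeline stage writes a status file; the gate verifies that the stage succeeded and appends an audit record before the deployment is allowed to continue. The file paths and function names are illustrative only, not any specific product’s API.

```python
import json
import datetime
from pathlib import Path

# Hypothetical locations; a real pipeline would use its own artifact store.
STAGE_STATUS_FILE = Path("artifacts/test_stage_status.json")
AUDIT_LOG = Path("artifacts/audit_log.jsonl")

def gate_check(required_stage: str) -> bool:
    """Automated checkpoint: verify the previous stage succeeded and
    record an audit entry instead of waiting on a manual approval email."""
    status = json.loads(STAGE_STATUS_FILE.read_text())
    passed = status.get("stage") == required_stage and status.get("result") == "success"

    # Append an auditable record of the gate decision for compliance review.
    entry = {
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "gate": f"after_{required_stage}",
        "decision": "proceed" if passed else "halt",
    }
    with AUDIT_LOG.open("a") as log:
        log.write(json.dumps(entry) + "\n")

    return passed

if __name__ == "__main__":
    if gate_check("integration_tests"):
        print("Gate passed - kicking off deployment stage")
    else:
        raise SystemExit("Gate failed - previous stage did not complete successfully")
```

The point is that the checkpoint still happens, and it still leaves an audit trail, but it runs in seconds rather than waiting on an inbox.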
Tools for Automation
Tools are an absolute necessity for automating these processes, and there are many different tools that can be used at each step. Source control, build systems, testing software, and deployment products are all components that add structure or enable automation, and each category has several vendors and options to choose from. Give them all a quick evaluation.
Don’t get hung up on trying to find a single tool that does everything. Acknowledge that a few tools will be needed, and that your talented software engineers may need to add a few ad-hoc pieces of glue to ensure that the pipeline is completely automated, as in the sketch below.
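As an example of that kind of glue, the following hypothetical Python wrapper chains separate tools – source control, build, tests, deployment – so that each step kicks off the next and any failure stops the pipeline. The commands shown are placeholders for whatever tools your organization has actually chosen.

```python
import subprocess
import sys

# Placeholder commands; substitute the tools your organization actually uses.
PIPELINE = [
    ("checkout", ["git", "pull", "--ff-only"]),
    ("build",    ["make", "build"]),
    ("test",     ["make", "test"]),
    ("deploy",   ["make", "deploy"]),
]

def run_pipeline() -> None:
    """Run each stage in order; as one step completes, the next kicks off.
    A non-zero exit code halts the pipeline immediately."""
    for name, command in PIPELINE:
        print(f"--- starting stage: {name}")
        result = subprocess.run(command)
        if result.returncode != 0:
            sys.exit(f"Stage '{name}' failed with exit code {result.returncode}")
        print(f"--- stage complete: {name}")

if __name__ == "__main__":
    run_pipeline()
```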
For instance, MOVEit Automation can automate the transfer of files securely and reliably. If you are moving sensitive data between teams or external sources, you should consider an automated solution rather than manual copies.
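To illustrate the idea without assuming any particular product’s API, the sketch below scripts a secure file transfer over SFTP using the paramiko library; products such as MOVEit Automation provide this kind of scheduled, audited transfer out of the box. The host, credentials, and paths are hypothetical.

```python
import paramiko

# Hypothetical connection details; in practice these come from a secrets
# store, never from source code.
HOST = "sftp.partner.example.org"
USERNAME = "transfer_svc"
KEY_PATH = "/etc/pipeline/transfer_key"
LOCAL_FILE = "exports/claims_batch.csv"
REMOTE_FILE = "/inbound/claims_batch.csv"

def transfer_file() -> None:
    """Push a file to a partner SFTP server as one automated pipeline step."""
    key = paramiko.RSAKey.from_private_key_file(KEY_PATH)
    transport = paramiko.Transport((HOST, 22))
    try:
        transport.connect(username=USERNAME, pkey=key)
        sftp = paramiko.SFTPClient.from_transport(transport)
        sftp.put(LOCAL_FILE, REMOTE_FILE)
        sftp.close()
    finally:
        transport.close()

if __name__ == "__main__":
    transfer_file()
```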
High Performing IT in Healthcare
DevOps advocates preach concepts such as “fail fast, recover faster”. Healthcare IT vendors and their customers are often fearful of DevOps because of the “fail fast” portion of that concept: failure is deemed unacceptable, and it can cause downtime that hinders patient care. Yet the 2016 State of DevOps Report shows that “high-performing” IT organizations that deploy frequently have a change failure rate three times lower than “low-performing” organizations that deploy less frequently, and they recover from failures 24 times faster.
DevOps is All About People
Lastly, the success of DevOps relies heavily on teamwork between the business, developers, testers, and IT operations personnel. Organizing people so that they are truly a team – with a common set of goals and responsibilities, sometimes even the same management or reporting structure – can help drive the consistency and focus needed to be successful. That teamwork can be a huge cultural challenge for teams that aren’t used to working this way.
A large healthcare organization, rooted in traditional processes, may view DevOps as a radical shift in the way business is done – and it is, but the entire lifecycle for an application, and the infrastructure to which it is deployed, doesn’t have to be transformed on day one. Start by understanding what your business’ needs are. What could you provide that would make their processes quicker? Start with a small, non-mission-critical item, and work with the team to make that process faster. Once the build process and feedback loop are working well, move on to deploying that piece with confidence.
Healthcare IT organizations should begin by building the foundation for rapid deployment: automation, a solid testing framework, a team bonded by common goals, and a sound toolset. These building blocks must remain a focus in order to succeed with DevOps and deliver changes to the customers – the doctors, nurses, and patients receiving care – with the quality and speed that is needed.
Missy Januszko
Missy Januszko is an independent IT consultant, with more than 20 years of experience as an enterprise hosting architect, large-scale infrastructure designer, and hosted application designer. She specializes in DevOps, automation and configuration management, PowerShell, and Active Directory, and has broad experience across the entire line of Microsoft business technologies. Missy is a co-author of “The DSC Book” with Microsoft MVP Don Jones, and she is also a conference speaker on DSC-related topics. She is a contributor to a number of open-source projects, including “Tug”, the open-source DSC pull server, and “Autolab”, an automated, rapid-install lab build.