That was one of the topics of a live discussion organized by Electric Cloud, a DevOps optimization software company, and broadcast over Google Hangouts on Tuesday, April 7, to which I was invited as one of the four panelists.
Below are some of my contributions to the discussion:
My take on this was (and is) that 3-tier systems, i.e. systems that run on a web server, an application server, and a database server, written during the last decade, are good candidates for optimization using DevOps. Review them thoroughly and identify which parts still provide business value to their end users and which parts are a “pile of mud”. Their high operational cost and their manual system upgrades and releases are certainly worth a closer look, as DevOps automation can save a lot of money. Those who pay for software do so for the time they spend using it, not the time you and your team spend on weekends updating it. Perhaps the most publicly quantified case of applying DevOps to automate system releases is that of Fidelity Worldwide Investment. By using IBM UrbanCode Deploy, applications that took days to release now take just a couple of hours, saving the company $2.3 million per year. Read the case study here.
I believe experimentation and innovation are in the nature of everyone who works in technology. However, without a change of culture across the entire organization, they will not deliver results. This change needs to happen organically. Start with yourself and the people who work closest to you, and start small. Always prove the value of an innovation before you take the next step, and the change you desire will slowly propagate to the rest of the organization.
We have helped various clients optimize their pipelines and bring down their operational costs, and we do the same for our own. For our projects, we deploy teams in three geographic areas, New York, North Dakota, and Montenegro, so we need to be very efficient in how we build, test, and deploy systems from Dev, to QC, to UAT, to PROD. Just to give a few examples:
- For our Microsoft-based systems we use Release Management, Microsoft’s ALM suite for Agile project management and DevOps.
- For RPAS Cloud, our financial Research Publishing Automation Platform, which runs on Amazon Web Services on a technology stack of Java, Node.js, and MongoDB, we have built a fully automated pipeline using Jenkins for continuous integration, AWS CloudFormation for configuration management, and AWS OpsWorks for automated deployments. The pipeline allows us to easily deploy multiple releases a day, and we keep looking for ways to optimize further.
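To give a flavor of what “configuration management as code” looks like in a setup like this, here is a minimal, illustrative CloudFormation fragment that defines an OpsWorks stack and a Node.js app. This is a sketch, not our actual template; all names, parameters, and the repository URL are placeholders:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative sketch: an OpsWorks stack and app defined as code",
  "Parameters": {
    "ServiceRoleArn": {
      "Type": "String",
      "Description": "IAM role that lets OpsWorks manage AWS resources on your behalf"
    },
    "InstanceProfileArn": {
      "Type": "String",
      "Description": "Instance profile attached to EC2 instances in the stack"
    }
  },
  "Resources": {
    "AppStack": {
      "Type": "AWS::OpsWorks::Stack",
      "Properties": {
        "Name": "example-stack",
        "ServiceRoleArn": { "Ref": "ServiceRoleArn" },
        "DefaultInstanceProfileArn": { "Ref": "InstanceProfileArn" }
      }
    },
    "NodeApp": {
      "Type": "AWS::OpsWorks::App",
      "Properties": {
        "StackId": { "Ref": "AppStack" },
        "Name": "example-node-app",
        "Type": "nodejs",
        "AppSource": {
          "Type": "git",
          "Url": "git://example.com/example-app.git"
        }
      }
    }
  }
}
```

Because the stack and app are declared in a versioned template rather than configured by hand, a CI server such as Jenkins can recreate or update environments repeatably, which is what makes multiple deployments a day practical.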
Dimitris Papathomopoulos, Director of Technology and Cloud Systems at InfoTech Solutions, has more than 15 years of experience in all stages of the application lifecycle: inception, design, architecture, development, implementation, integration, continuous improvement, and user support. He is a big fan of the Agile movement, the Lean Startup, and DevOps as frameworks for businesses to reduce waste and bring systems to end users quickly. His portfolio includes projects with a diverse array of technologies, e.g. .NET/SQL Server, Java/MySQL, and Node.js/MongoDB. He is currently managing and growing the ecosystem of InfoTech’s RPAS Cloud, a cloud-based platform for financial research data modeling, data publishing, data visualization, and delivery that runs on Amazon Web Services. His areas of interest include Big Data, the Internet of Things (IoT), API Management, and Agile Portfolio Management. Dimitris holds a BS in Computer Engineering and a Master’s in Computer Science.