Rethinking DevOps To Create A More Efficient Delivery Environment

IBA Group
Yuliya Varonina
DevOps Engineer

People often talk of apps as something new – an area of IT development that has only been around since smartphones became commonly used, but that’s not right. At IBA Group we have been developing apps for over 25 years. Before your iPhone, we were developing apps that could run on mainframe systems – we have over 80 teams and over 100 products in this area.

Every year we bring dozens of young people into our development team, often from those we see at the various hackathons we organize. There are still many new ideas for how to improve development inside mainframe culture.

It’s true that mainframes often look like legacy systems. I know that’s how I felt when I approached my first mainframe project three years ago. The infrastructure is quite complicated and the qualifications to work on these systems are quite specialized. It’s not an easy environment, but our teams are enthusiastic and they embrace new ideas.

Some of the key problems developing in the mainframe environment are:

  1. DevOps pain; a lot of manual operations for code building, customization, and setup for various environments.
  2. Development cycle; typically the cycle runs between a week and a month – it’s not a rapid development environment.
  3. Version control; we have systems to help, but nothing is integrated with the modern version control systems most development environments use today.
  4. Limited automation.
  5. Poor visibility and control at all stages of development.

If you also work with mainframe development then you might know about these problems already. So what did we do in our own development environment to try addressing these problems?

  1. Automation; we started using the UrbanCode family of tools to start automating some of the infrastructure tasks.
  2. Integration; we integrated the UrbanCode processes with the Rational tools family – RTC (Rational Team Concert) and RQM (Rational Quality Manager).
  3. Reducing tools; we reduced the number of tools being used so we could focus on using the remaining ones more effectively.
  4. Scalable Pipeline; we built one project using the new DevOps methods and then assisted all teams to develop their projects this way, so these methods scaled across all development teams.
  5. Security; increased automation left gaps in security so we used DevSecOps to embed security functionality.
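To give a flavour of the first step, here is a minimal sketch of automating the kind of per-environment customization that used to be done by hand. All names here (the environments, the descriptor fields, the parameter values) are hypothetical illustrations, not our actual configuration; in practice a tool such as UrbanCode Deploy drives this, rather than a local script.

```python
# Hypothetical sketch: generate environment-specific deployment settings
# from one template, instead of editing them manually per environment.
from string import Template

# A deployment descriptor with placeholders for environment-specific values.
DESCRIPTOR = Template("DATASET=$hlq.APP.LOAD REGION=$region JOBCLASS=$jobclass")

# Per-environment parameters that were previously entered by hand.
# These values are invented for illustration.
ENVIRONMENTS = {
    "dev":  {"hlq": "DEV", "region": "64M",  "jobclass": "A"},
    "test": {"hlq": "TST", "region": "128M", "jobclass": "B"},
    "prod": {"hlq": "PRD", "region": "256M", "jobclass": "P"},
}

def render(env: str) -> str:
    """Produce the customized descriptor for one environment."""
    return DESCRIPTOR.substitute(ENVIRONMENTS[env])

if __name__ == "__main__":
    for env in ENVIRONMENTS:
        print(env, "->", render(env))
```

The point is not the script itself but the shift: once every environment is described as data, the same pipeline step produces every variant, and a manual source of errors disappears.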

I can talk in detail about how we did all this, and the benefits we found, but for the sake of this blog it’s better to just highlight the main benefits we found from this approach to DevOps.

  1. Faster deployment; it’s faster to develop new processes and systems, so your business operations can be more efficient.
  2. Systems Thinking; building this DevOps environment creates a culture of systems thinking, which improves responsibility, transparency, and feedback. Systems thinking creates a much more focused team that works together.
  3. Increased Effectiveness; IT development is typically full of waste. People are waiting on others to deliver and cannot work until a specific part of a project is handed over. Managing pipelines makes deliveries more predictable and allows resources to be allocated more effectively.
  4. Better Quality; we now have more tests, more automation, and User Acceptance Testing. We also trust the pipeline. People are used more effectively, and this also increases the quality of deliveries.

We know from our own experience that our team now spends 20% less time on unplanned work and reworking problems. This has led to a 40x reduction in system failures, and the team is 50x more satisfied with their work. Even when a failure occurs, we can now recover 20x faster than before.

The list of benefits from this approach is endless. If this blog has sparked your interest in what is possible then please leave a comment here or get in touch with me. I can give you more detail and also personal experiences of going on this transformation journey.

Yuliya delivers her DevOps presentation at SHARE Pittsburgh 2019

Are we about to enter a new era of mainframes?

IBA Group

Mark Kobayashi-Hillary

Ask a computer science student in the US or Western Europe what technologies they are studying, and what they want to work with in future, and it is almost one hundred per cent certain they won’t say mainframes.

The mainframe computer – bedrock of the computing industry – has apparently been in decline since the IBM PC invaded desks with DOS, and subsequently Windows, from Microsoft. Yet, though consumers don’t use mainframes and students have no interest in them, it does not mean their use has ceased entirely.

It is mainly large organisations with complex legacy systems, such as retail banks or life insurers, that have extensive mainframe estates. And even where the hardware itself has remained unchanged for many years, the software continues to require updates due to product changes, new regulations, and changes in the law.

So if nobody is studying how to maintain these systems, or the programming languages used to modify them, then how can those important industries still rely on the mainframe?

There are several strong pockets of mainframe resource located around the world. Eastern Europe, and particularly the former Soviet bloc, has a deep pool of expertise in both the ongoing maintenance of these systems – and developing new software for them.

This is a classic example of how outsourcing to an offshore service provider can be about more than just the cost of service. If your legacy systems are running in COBOL on an IBM mainframe, yet the people cannot be found locally to modify the code, then outsourcing is the natural solution. Forget cost; go offshore for access to the skills you need just to keep your business running.

Mainframes are not going to die just yet. Many large organisations have systems that cannot be wound up quickly, and as applications move further into the cloud, perhaps we are about to enter a new era of mainframes?