DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.
Under a DevOps model, development and operations teams are no longer “siloed.” Sometimes, these two teams are merged into a single team where the engineers work across the entire application lifecycle, from development and test to deployment to operations, and develop a range of skills not limited to a single function. Quality assurance and security teams may also become more tightly integrated with development and operations and throughout the application lifecycle.
These teams use practices to automate processes that historically have been manual and slow. They use a technology stack and tooling which help them operate and evolve applications quickly and reliably. These tools also help engineers independently accomplish tasks (for example, deploying code or provisioning infrastructure) that normally would have required help from other teams, and this further increases a team’s velocity.
Traditionally in the enterprise, the development team tested new code in an isolated development environment for quality assurance (QA) and, if requirements were met, released the code to operations for use. The operations team deployed the program and maintained it from that point on. One of the problems with this approach, which is known as waterfall development, is that there was usually a long time between software releases, and because the two teams worked separately, the development team was not always aware of operational roadblocks that might prevent the program from working as anticipated.
The DevOps approach seeks to meld application development and deployment into a more streamlined process that aligns the efforts of the development, quality assurance (QA), and operations teams. This approach also shifts some of the operations team's responsibilities back to the development team in order to facilitate continuous development, continuous integration, continuous delivery, and continuous monitoring processes. The push to tear down the silos between development and operations has been accelerated by the need to release code faster and more often, so that the organization can respond in a more agile manner to changing business requirements. Other drivers for breaking down the silos include the increasing use of cloud computing and advances in software-defined infrastructure, microservices, containers, and automation.
“Ops” is a blanket term for systems engineers, system administrators, operations staff, release engineers, DBAs, network engineers, security professionals, and various other subdisciplines and job titles. “Dev” is used as shorthand for developers in particular, but really in practice it is even wider and means “all the people involved in developing the product,” which can include Product, QA, and other kinds of disciplines.
DevOps has strong affinities with Agile and Lean approaches. The old view of operations tended towards the “Dev” side being the “makers” and the “Ops” side being the “people that deal with the creation after its birth” – the realization of the harm done in the industry by treating those two as siloed concerns is the core driver behind DevOps. In this way, DevOps can be interpreted as an outgrowth of Agile – agile software development prescribes close collaboration of customers, product management, developers, and (sometimes) QA to fill in the gaps and rapidly iterate towards a better product – DevOps says “yes, but service delivery and how the app and systems interact are a fundamental part of the value proposition to the client as well, and so the product team needs to include those concerns as a top-level item.” From this perspective, DevOps is simply extending Agile principles beyond the boundaries of “the code” to the entire delivered service.
Development is the stage in the DevOps lifecycle where software is built continuously, without halting. Here the overall development effort is divided into smaller cycles that run in parallel, so each takes less time and the software can be completed and delivered sooner. Continuous development encompasses writing code, versioning it with tools such as SVN or Git, and then packaging it into executable builds that are handed to the quality analysts for testing.
Quality analysts use tools like Selenium, JUnit, etc. to test the developed software, catch bugs, and ensure that there are no flaws in its functionality. Different parts of the code are tested continuously, and once each part passes, it is integrated with the existing mainline code.
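Python's built-in unittest framework, a rough analogue of JUnit, gives a sense of what these continuous checks look like; the discounted-price function under test is purely hypothetical:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    """Automated checks run on every change, before integration."""

    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically, as a CI test runner would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a continuous-testing setup, a suite like this runs automatically on every commit rather than on demand.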
In the integration stage, any code providing new functionality is merged with the existing code. Because development and testing are continuous, integration must be continuous as well. The updated code must not introduce failures: once new code is added, no errors should occur at runtime. Testing therefore plays an important role in confirming that new code does not break the running system. Jenkins is a tool widely used for integration; it can pick up any change made to the code automatically, or a build can be triggered manually.
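The gating logic a tool like Jenkins applies can be sketched in a few lines of Python; the stage names and pass/fail outcomes below are invented for illustration:

```python
def run_pipeline(stages):
    """Minimal sketch of a CI gate: run stages in order and stop at the
    first failure, so broken code never reaches the main branch. Each
    stage is a callable returning True on success; in a real setup a
    tool like Jenkins runs the equivalent stages on every commit."""
    completed = []
    for name, stage in stages:
        ok = stage()
        completed.append((name, ok))
        if not ok:
            break  # a red stage blocks everything downstream
    return completed

# Example: the test stage fails, so the integrate stage never runs.
outcome = run_pipeline([
    ("build", lambda: True),
    ("test", lambda: False),
    ("integrate", lambda: True),
])
```

The key property is that integration only happens after the preceding stages succeed, which is exactly what keeps a continuously updated mainline stable.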
This is the phase where the software is deployed to production. All developed code is deployed to the servers with the goal of complete accuracy. Because deployment happens continuously, automation tools like SaltStack, Puppet, and Chef play an important role. Deployment should be designed so that a change made to the code at any time does not disrupt the running application, even under heavy website traffic. Hence, during the deployment process, the system administrator keeps scaling the servers up in order to accommodate a growing number of users.
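The scaling decision described above can be sketched as a toy rule; the users-per-server threshold and the minimum fleet size are assumptions for illustration, not figures from any real tool:

```python
def desired_servers(active_users, users_per_server=200, min_servers=2):
    """Toy autoscaling rule: size the server fleet to current traffic
    but never drop below a safety floor. Real deployments delegate this
    decision to autoscaling driven by monitored load."""
    needed = -(-active_users // users_per_server)  # ceiling division
    return max(needed, min_servers)
```

With these assumed numbers, 900 active users call for five servers, while quiet periods fall back to the two-server floor.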
Monitoring is a vital phase in the DevOps lifecycle; it determines the quality of the entire lifecycle. The operations team watches for inappropriate system behavior and for any bugs that surface during user activity. To maintain proper system health, problems need to be highlighted early so that degrading performance can be caught before it worsens. For this, the operations team uses popular tools like Sensu, New Relic, and Nagios. These improve the efficiency and reliability of the system, resulting in fewer support requests. Any major issue highlighted during the monitoring phase is reported back to the developers, who resolve it in the continuous-development stage.
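At its core, a monitoring check compares live metrics against thresholds; this sketch, with invented metric names and limits, shows the idea behind tools like Nagios:

```python
def check_health(metrics, thresholds):
    """Flag any metric that breaches its threshold, as a monitoring
    tool would, so degrading performance is caught before users feel
    it. Metric names and limits here are illustrative."""
    return [name for name, value in metrics.items()
            if value > thresholds.get(name, float("inf"))]

# Example: only the error rate has crossed its limit.
alerts = check_health(
    {"error_rate": 0.07, "p95_latency_ms": 180, "cpu_load": 0.45},
    {"error_rate": 0.05, "p95_latency_ms": 500, "cpu_load": 0.80},
)
```

In practice the flagged metrics would feed an alerting channel that routes the issue back to the developers, closing the loop described above.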
We can move at high velocity so that we can innovate for customers faster, adapt to changing markets better, and grow more efficient at driving business results. The DevOps model enables developers and operations teams to achieve these results. For example, microservices and continuous delivery let teams take ownership of services and then release updates to them more quickly.
Increases the frequency and pace of releases so that we can innovate and improve our product faster. The quicker we can release new features and fix bugs, the faster we can respond to our customers' needs and build competitive advantage. Continuous integration and continuous delivery are practices that automate the software release process, from build to deploy.
Ensure the quality of application updates and infrastructure changes so you can reliably deliver at a more rapid pace while maintaining a positive experience for end users. Use practices like continuous integration and continuous delivery to test that each change is functional and safe. Monitoring and logging practices help you stay informed of performance in real-time.
We can operate and manage the infrastructure and development processes at scale. Automation and consistency will help manage complex or changing systems efficiently and with reduced risk. For example, infrastructure as code helps us manage development, testing, and production environments in a repeatable and more efficient manner.
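The declare-then-reconcile idea behind infrastructure as code can be sketched as a diff between the desired and actual environment; the resource names and shapes below are invented for illustration:

```python
def plan_changes(desired, actual):
    """Toy infrastructure-as-code reconciliation: compare the declared
    (desired) environment with the actual one and emit the changes
    needed to converge. Real IaC tools work on this same principle
    across development, testing, and production environments."""
    changes = []
    for resource, config in desired.items():
        if resource not in actual:
            changes.append(("create", resource))
        elif actual[resource] != config:
            changes.append(("update", resource))
    for resource in actual:
        if resource not in desired:
            changes.append(("delete", resource))
    return changes
```

Because the desired state is a versioned artifact, applying the same declaration to every environment is what makes them repeatable.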
We can build more effective teams under a DevOps cultural model, which emphasizes values such as ownership and accountability. Developers and operations teams collaborate closely, share many responsibilities, and combine their workflows. This reduces inefficiencies and saves time (e.g. reduced handover periods between developers and operations, writing code that takes into account the environment in which it is run).
We can move quickly while retaining control and preserving compliance. We can adopt a DevOps model without sacrificing security by using automated compliance policies, fine-grained controls, and configuration management techniques. For example, using infrastructure as code and policy as code, we can define and then track compliance at scale.
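A policy-as-code rule is simply an executable check over declared resources; this sketch assumes a made-up resource format and a single invented policy (storage buckets must be encrypted and private):

```python
def check_policy(resources):
    """Hypothetical policy-as-code check: every storage bucket must be
    encrypted and non-public. Running this in the pipeline means a
    violation blocks the deploy instead of surfacing in a later audit."""
    violations = []
    for res in resources:
        if res.get("type") == "bucket":
            if not res.get("encrypted", False):
                violations.append((res["name"], "unencrypted"))
            if res.get("public", False):
                violations.append((res["name"], "publicly accessible"))
    return violations

# Example: one compliant bucket, one with two violations, one non-bucket.
violations = check_policy([
    {"type": "bucket", "name": "logs", "encrypted": True, "public": False},
    {"type": "bucket", "name": "backups", "encrypted": False, "public": True},
    {"type": "vm", "name": "web-1"},
])
```

Because the policy is code, it can be versioned, reviewed, and enforced at scale like any other artifact.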
Software and the Internet have transformed the world and its industries, from shopping to entertainment to banking. Software no longer merely supports a business; rather it becomes an integral component of every part of a business. Companies interact with their customers through software delivered as online services or applications and on all sorts of devices. They also use software to increase operational efficiencies by transforming every part of the value chain, such as logistics, communications, and operations. In a similar way that physical goods companies transformed how they design, build, and deliver products using industrial automation throughout the 20th century, companies in today’s world must transform how they build and deliver software.
1. Understand the collaboration and shared tools strategy for the Dev, QA, and infrastructure automation teams
DevOps teams need to come up with a common tools strategy that lets them collaborate across development, testing, and deployment (see Figure 1). This does not mean that you should spend days arguing about tooling; it means you work on a common strategy that includes DevOps...
• Processes
• Communications and collaboration planning
• Continuous development tools
• Continuous integration tools
• Continuous testing tools
• Continuous deployment tools
• Continuous operations and CloudOps tools
Coming up with a common tools strategy does not drive tool selection — at least not at this point. It means picking a common, shared strategy that all can agree upon and that is reflective of your business objectives for DevOps. The tool selection process often drives miscommunication within teams. A common DevOps tools strategy must adhere to a common set of objectives while providing seamless collaboration and integration between tools. The objective is to automate everything: Developers should be able to send new and changed software to deployment and operations without humans getting in the way of the processes.
No ad hoc work or changes should occur outside of the DevOps process, and DevOps tooling should capture every request for new or changed software. This is different from logging the progress of software as it moves through the processes. DevOps provides the ability to automate the acceptance of change requests that come in either from the business or from other parts of the DevOps teams. Examples include changing software to accommodate a new tax model for the business, or changing the software to accommodate a request to improve performance of the database access module.
3. Use agile Kanban project management for automation and DevOps requests that can be dealt with in the tooling
Kanban is a framework used to implement agile development that matches the amount of work in progress to the team's capacity. It gives teams more flexible planning options, faster output, clear focus, and transparency throughout the development cycle. Kanban tools provide the ability to see what you do today, or all the items in context with each other. Also, it limits the amount of work in progress, which helps balance flow-based approaches so that you don’t attempt to do too much at once. Finally, Kanban tools can enhance flow. In Kanban, when one work item is complete, the next highest item from the backlog gets pushed to development.
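The work-in-progress (WIP) limit at the heart of Kanban can be sketched in a few lines; the class below is a toy model, not any real board tool's API:

```python
class KanbanColumn:
    """Minimal Kanban column with a work-in-progress (WIP) limit: a new
    item is pulled from the backlog only when the column has capacity,
    which is how Kanban keeps flow steady instead of overloading the
    team. A real board adds multiple columns, priorities, and views."""

    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.items = []

    def pull(self, backlog):
        """Pull the highest-priority backlog item if under the WIP limit."""
        if backlog and len(self.items) < self.wip_limit:
            self.items.append(backlog.pop(0))
            return True
        return False

    def complete(self, item):
        """Finishing an item frees capacity, letting the next one flow in."""
        self.items.remove(item)
```

Note that work is pulled when capacity frees up, rather than pushed onto the team — that pull discipline is what limits work in progress.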
Select tools that can help you understand the productivity of your DevOps processes, both automated and manual, and to determine if they are working in your favor. You need to do two things with these tools. First, define which metrics are relevant to the DevOps processes, such as speed to deployment versus testing errors found. Second, define automated processes to correct issues without human involvement. An example would be dealing with software scaling problems automatically on cloud-based platforms.
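A first metric of this kind can be as simple as counting deployments and failed changes; the sketch below assumes outcomes are recorded as booleans, which is an illustrative simplification:

```python
def release_metrics(deploy_outcomes):
    """Toy productivity metrics from a list of deployment outcomes
    (True = succeeded): total deployments and the change-failure rate.
    The choice of metrics illustrates the speed-versus-errors trade-off;
    a real tool would also track timings, environments, and causes."""
    total = len(deploy_outcomes)
    failures = deploy_outcomes.count(False)
    return {
        "deployments": total,
        "change_failure_rate": failures / total if total else 0.0,
    }
```

Trends in numbers like these — rather than any single reading — are what tell you whether the automated and manual parts of the process are working in your favor.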
Test automation is more than just automated testing; it’s the ability to take code and data and run standard testing routines to ensure the quality of the code, the data, and the overall solution. With DevOps, testing must be continuous. The ability to toss code and data into the process means you need to place the code into a sandbox, assign test data to the application, and run hundreds — or thousands — of tests that, when completed, will automatically promote the code down the DevOps process, or return it back to the developers for rework.
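The promote-or-return decision at the end of such a test run reduces to a simple gate; the function below is a sketch with invented labels, not any tool's actual API:

```python
def gate(test_results):
    """Promote-or-return decision after an automated test run: an
    all-green suite moves the build down the pipeline, while any
    failure returns it to the developers together with the names of
    the failing tests."""
    failures = sorted(name for name, passed in test_results.items()
                      if not passed)
    if failures:
        return ("return_for_rework", failures)
    return ("promote", [])
```

Returning the failing test names with the build is what lets developers rework the right code without hunting through logs.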
Part of the testing process should define the acceptance tests that will be a part of each deployment, including levels of acceptance for the infrastructure, applications, data, and even the test suite that you'll use. For the tool set selected, those charged with DevOps testing processes should spend time defining the acceptance tests and ensuring that the tests meet the acceptance criteria selected. These tests may be changed at any time by development or operations. And as applications evolve over time, you'll need to bake new requirements into the software, which in turn should be tested against these new requirements. For example, you might need to test changes to compliance issues associated with protecting certain types of data, or performance issues to ensure that the enterprise meets service-level agreements.
Finally, you'll need feedback loops to automate communication between the tests that spot issues and the people and systems that must act on them, and that process needs to be supported by your chosen tool. The right tool must identify the issue using either manual or automated mechanisms, and tag the issue with the artifact so the developers or operators understand what occurred, why it occurred, and where it occurred. The tool should also help to define a chain of communications with all automated and human players in the loop. This includes an approach to correct the problem in collaboration with everyone on the team, a consensus as to what type of resolution you should apply, and a list of any additional code or technology required. Then comes the push to production, where the tool should help you define tracking to report whether the resolution made it through automated testing, automated deployment, and automated operations.
Tools that track software versions as they are released, whether manually or automatically. This means numbering versions, as well as tracking the configuration and any environmental dependencies that are present, such as the type, brand, and version of the database; the operating system details; and even the type of physical or virtual server that’s needed. This category is related to change management tools.
Tools that automate the building and deployment of software throughout the DevOps process, including continuous development and continuous integration.
Tools that provide automated testing, including best practices listed above. Testing tools should provide integrated unit, performance, and security testing services. The objective should be end-to-end automation.
Tools to provision the platforms needed for deployment of the software, as well as monitor and log any changes occurring to the configuration, the data, or the software. These tools ensure that you can get the system back to a stable state, no matter what occurs.