Blog

XP Values – The Forgotten Agile Guidance!

By: Mike Hall | Jun 21, 2017 |  Article,  Team,  Technical Practices

XP Values diagram: Courage, Feedback, Respect, Communication, and Simplicity

When you hear the term “extreme programming”, what do you think of? Coding while skydiving at 15,000 feet? Bug-fixing while scaling the north face of Kilimanjaro? Refactoring while swimming with sharks at the Great Barrier Reef?

For tech nerds like me, we probably think of the wealth of technical practices espoused by this Agile method. These technical practices include stories, pair programming, slack, continuous integration, test-first, and others. Extreme Programming (XP) practices are typically used within a Scrum or Kanban team to help ensure high levels of software quality and continuous integration.

(more…)

Blog

Tech Debt Game

By: Agile Velocity | May 12, 2015 |  Agile Technical Practices,  Agile Tools,  Article,  Technical Practices

David Croley, Agile Player-Coach, presented Tech Debt – Understanding its Sources and Impacts Through a Game at the Keep Austin Agile conference.

Definition of Technical Debt:

  • A useful metaphor similar to financial debt
  • “Interest” is incurred in the form of costlier development effort
  • Can be paid down through refactoring the implementation
  • Unlike monetary debt, it is difficult to quantify

Impact of Tech Debt

  • Long delivery times
  • Poor customer responsiveness
  • Mounting defects
  • Frustrated and poor performing teams
  • Late deliveries
  • Rising development costs

Tech Debt – Understanding its Sources and Impacts Through a Game helps participants:

  • See how unresolved Technical Debt impacts a development team’s efficiency
  • Learn its common sources
  • Identify possible solutions

Click here to see the slide deck.

Click here to download the game.

For more information about the Tech Debt game, see the links above.

Blog

Legacy Applications: Lessons in Coupling

By: Mike Lepine | Feb 09, 2015 |  Agile Technical Practices,  Article,  Technical Practices

I struggled with a number of potential topics for this blog before settling on this one. I understand this isn’t a flashy topic like a comparison of JavaScript MVC frameworks or centralized logging solutions; however, I’ve been working on and off with legacy applications throughout my career and have come to truly appreciate the constrictive nature of coupled logic. Avoiding coupled applications can save your sanity, and possibly your business as well. In this post, I will discuss coupling in software applications, strategies for avoiding it, and a plan for refactoring a legacy application into a decoupled one to help your organization get more value from its product.

Coupling – What is it?

When software contains knowledge of the inner workings of other layers or modules, it is considered coupled. As an example, let’s take a hypothetical student registration system created for web users. Fundamentally, “the system” is composed of registration (business) logic, student data, and a user interface – in this scenario, web pages. Coupling occurs when core application logic appears in one of the other tiers, or when the core application becomes aware of the specifics of external modules. Using our example as a reference, if the core logic were programmed for web page presentation or, worse, application logic seeped into the presentation tier, then it would be considered coupled. In this case, it would be coupled to a web-based scheme.
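
To make the distinction concrete, here is a minimal, hypothetical Java sketch (none of these class names come from a real registration system). The first class bakes the business rule into HTML; the second keeps the rule free of any presentation concerns, so a web page, mobile app, or test can consume the result however it likes.

// Coupled: the registration rule is expressed directly as a web page fragment,
// so it cannot be reused by a mobile app or tested without the UI.
class CoupledRegistration {
    String register(String studentName, int seatsRemaining) {
        if (seatsRemaining <= 0) {
            return "<div class=\"error\">Sorry " + studentName + ", the course is full.</div>";
        }
        return "<div class=\"success\">" + studentName + " is registered.</div>";
    }
}

// Decoupled: the core logic returns a plain result; each tier decides how to
// present or store it.
class RegistrationService {
    enum Result { CONFIRMED, COURSE_FULL }

    Result register(int seatsRemaining) {
        return seatsRemaining > 0 ? Result.CONFIRMED : Result.COURSE_FULL;
    }
}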

A lot of readers will recognize our strategy for the student registration system as an MVC (Model-View-Controller) architecture. This is probably the most pervasive strategy leveraged by software organizations building web-based solutions over the last 15 years. You may not hear MVC touted as frequently these days, but it feels just as relevant (and possibly more pertinent) now than it was years ago. One of the main goals of the MVC pattern is to prevent coupling, typically in a 3-tier architecture. If implemented as designed, then interchanging the endpoints or tiers is not only possible but should require minimal effort. With a decoupled design, a new requirement to support native mobile apps or to switch from a traditional RDBMS to a NoSQL data store should be doable without rewriting the core application logic. The MVC design strategy isolates and identifies the responsibilities of each tier but doesn’t discuss how communication should be handled between the tiers to avoid coupling. This brings me to another design approach that emphasizes a decoupled solution – Hexagonal Architecture.

Decoupling Strategies – Designing for Decoupled Solutions

Alistair Cockburn wrote an article on Hexagonal Architecture in which he explains the motivation for decoupled solutions. The design is known by two common names (and possibly others as well). The first, Hexagonal Architecture, comes from the fact that the figure used to describe the architecture is composed of two hexagons: an inner one representing the application (logic) and an outer one representing the boundary between it and other systems. Alistair said he used hexagons not because the number 6 (hexagons have 6 sides) is important, but to provide room for illustrating multiple ports and adapters.

This brings us to the second common name: the Ports & Adapters strategy. Going back to the two hexagons representing the architecture, the inner hexagon contains the application logic and represents the boundary for ports (connections to it). The area between the inner hexagon and the outer one represents adapters, which manage communication between external devices/systems and the ports.

Now that we understand the origin of the name, let’s focus on the strategy itself. As with the MVC strategy, a fundamental principle is to avoid coupling and keep the application logic from leaking into other tiers. A Hexagonal Architecture surrounds the application with ports that allow interaction with it. These ports can be thought of as APIs. The term port is used to evoke thoughts of a standard port on a network: if I have an application that supports the FTP protocol, I should be able to connect to servers that expose FTP, typically on port 21. The thought here is the same. The application doesn’t need to know what is communicating with a port; anything can communicate with it as long as it conforms to the interface (API). The adapters are the layer between the ports and other devices/systems. They are responsible for managing and translating the communication between the ports and those systems. Using this strategy, the level of effort to allow the application to interact with a new system is directly related to the creation of the adapter.
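
As a rough sketch of what this might look like in code, here is a hypothetical Java version of the registration example (all names are illustrative, not from any real system). The core depends only on its ports; the in-memory adapter could be swapped for a JDBC or NoSQL adapter without touching the core.

// Driving port: how the outside world asks the application to do something.
interface RegisterStudent {
    boolean register(String studentId, String courseId);
}

// Driven port: what the application needs from the outside world.
interface CourseRepository {
    int seatsRemaining(String courseId);
    void recordRegistration(String studentId, String courseId);
}

// The application core depends only on its ports, never on web or database details.
class RegistrationCore implements RegisterStudent {
    private final CourseRepository courses;

    RegistrationCore(CourseRepository courses) {
        this.courses = courses;
    }

    public boolean register(String studentId, String courseId) {
        if (courses.seatsRemaining(courseId) <= 0) {
            return false;
        }
        courses.recordRegistration(studentId, courseId);
        return true;
    }
}

// One possible adapter: an in-memory implementation handy for tests; a JDBC or
// NoSQL adapter would implement the same port.
class InMemoryCourseRepository implements CourseRepository {
    private final java.util.Map<String, Integer> seats = new java.util.HashMap<>();

    InMemoryCourseRepository withCourse(String courseId, int seatCount) {
        seats.put(courseId, seatCount);
        return this;
    }

    public int seatsRemaining(String courseId) {
        return seats.getOrDefault(courseId, 0);
    }

    public void recordRegistration(String studentId, String courseId) {
        seats.merge(courseId, -1, Integer::sum);
    }
}

A web controller would simply be another adapter calling the RegisterStudent port; supporting a native mobile backend or a different data store then only means writing a new adapter.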

This was a very brief overview of the architectural approach and more can be read by using the links in the References section; however, my goal was to introduce the strategy and bring an emphasis to decoupled architectures. Leveraging these strategies separates the core application/domain logic from delivery mechanisms such as the web, mobile apps, and the data store. Each of those concerns can then be tested independently and adapted over time as needs change, without disruption to the rest of the system.

Decoupling Legacy Applications

This sounds great, right? I’m sure anyone reading this and working on a legacy application that suffers from coupling is thinking it’s impossible to get here without rewriting the application entirely. Depending on the size of the application, that may be the case; however, using an agile approach, it can be accomplished iteratively. All is not lost, but it will take discipline and buy-in from not only the team but possibly the organization. First, there has to be a business need. What value can be gained by transitioning to a decoupled solution? Obviously there are numerous technical reasons, but the business must benefit to provide justification. For instance, quality has been an issue and it is difficult and time-consuming to write and maintain tests for the application due to coupling (logic cannot be fully tested without the UI because logic exists there as well). Or, the business is losing opportunities in the mobile space because the application only works properly when rendered in a web browser.

Once the business justification has been adequately identified and the team is in agreement, the strategy should be identified and communicated. Most likely this will require significant refactoring so you’ll need to know how the application functions before you start and be able to verify functionality after refactoring. Do this through automated testing. If unit and integration tests do not already exist, then be sure to include this as part of the effort. Another point to keep in mind is functional and non-functional testing. It is important that the functionality of the system logic is verified before, during and after changes, but it is also important to monitor the non-functional behavior as well. For instance, if performance and security are important to your application, having automated approaches to monitor these during the refactoring process (and beyond) is a good idea. It’s important to know quickly when a change has been made that negatively impacts one of the thresholds critical to your application and business.

With the safety net of automated tests in place, use small, targeted iterations to progressively work through the changes. You will learn more and more as you proceed, and it may take a few iterations to determine the right approach. Once again, with the business justification validated upfront, it is understood that this will take time, needs to be done right, and should not be rushed. I believe the biggest threats to decoupled systems are artificial deadlines and pressure from upper management.

Final Points

Finally, I have obviously only skimmed the surface here and could go on for a few more pages on this topic. (I’m lucky if you have stuck with me this long.) It requires discipline to avoid coupled solutions, but the benefits far outweigh the individual effort to keep systems clean. One final thing: once you’ve encountered a system that suffers from coupling issues and understood the negative impact it has on the business and even the development team, please use that knowledge to help future initiatives and teams you interact with avoid the same pitfalls. A favorite quote of mine to drive this home:

“Those who cannot remember the past are condemned to repeat it.”

– George Santayana.

References

MVC Design Pattern

3-Tier Architecture (simple explanation)

Alistair Cockburn’s article on Hexagonal Architecture

Blog

Virtualization Makes Me a Better Developer

By: Agile Velocity | Jan 19, 2015 |  Article,  ScrumMaster,  Technical Practices

In the past, developers had to choose between using multiple machines or sharing development environments. The first is an expensive solution, and in the second, lots of developers sharing a single environment leads to contention issues. To further exacerbate the problem, the trend is toward application deployments that involve multiple servers.

Luckily, a solution has presented itself in the form of virtualization. Now, all of the systems needed for running a deployed application can be hosted on my laptop using Virtual Machines (VMs). I have been using Oracle’s VirtualBox on my MacBook Pro (16GB RAM), but the same applies to any virtualization technology you choose. By using a VM (or several), I am able to isolate configuration changes from my development system and make my test environment as production-like as possible.

Not Just Linux

Also, I am able to easily switch out dev environments, so I can test on multiple target OSs, all while using the same underlying hardware. “That’s great”, you say, “but what about Windows and its associated licenses?” Well, Microsoft has made VM images available for a full spectrum of host OSs, VM software platforms, and a range of IE and Windows flavors. These Windows images have a 90-day time limit from when you first start using them, but the same VM image file can be used to make new Virtual Machines, and the 90-day time limit starts over. See Ray Bango’s Blog for more on this.

Automate Them

The next step in making the best use of VMs is to automate the process of spinning them up and deploying to them. Vagrant can really help here. I’ve used Vagrant to automate the whole process of provisioning, deploying, and even testing an application, and the benefits were enormous. Note that using Vagrant with a Windows VM can be a bit more difficult; you might need to create your own Vagrant box using a tool like Packer. See the article Creating Windows BaseBoxes and this Packer Windows GitHub repo for pointers on making Windows work with Packer and Vagrant.

You can also take things a step further and automate the deployment of your app with tools like Chef or Puppet, which you should probably be doing anyway.

Costs

You might think that by using VMs I have to give up some performance, but that is not always the case. For example, my current application development requires an Oracle DB, which I run on a Linux VM. I have found that it is much easier and faster to back up and restore snapshots of the entire VM in VirtualBox than to use Oracle’s tools to back up and restore the DB.

Another cost associated with using VMs is memory. This is unavoidable. But, with memory prices at the point that 16GB is under $200, this is a price I am more than willing to pay.

VMs are not just for the Cloud

Because virtualization has been used mostly to support IT and DevOps activities, developers may be unaware of its capabilities or how easily VMs can be used, but with a little effort the payoff can be dramatic. I have also found deploying to locally hosted VMs to be a much faster process than deploying to cloud VMs. In conclusion, by developing and deploying using a virtual tier, I am able to get more accurate and more rapid feedback than when using a shared Dev or QA tier, and that makes me a much better developer.

Blog

The Technical Debt Game

By: Agile Velocity | Oct 21, 2014 |  Article,  Technical Practices

Technical Debt

Every software project has it. Technical debt in a software project is essentially a backlog of technical stories the development team thinks need to be completed; these stories are not visible to the product owner and don’t directly implement user features. They arise either from the inevitable trade-offs made during rapid implementation or from the discovery, after implementation, that some code should be refactored. By itself, technical debt is not a bad thing. It is often more prudent to “minimally” implement some features to get rapid feedback, and only later flesh out the implementation once requirements are clearer.

Impact of Technical Debt

Technical debt has a constant impact on the efficiency of the development team. It makes completing feature stories more difficult than it would otherwise be, can make the code base harder to understand, and can lead to a higher defect rate. Perhaps the most frustrating aspect of technical debt is that its impact is very difficult to quantify. This makes it difficult for the development team to make the case to their Product Owner for reducing tech debt. The team is usually left using financial (or even food) metaphors to help convey why tech debt stories should be tackled sooner rather than later.

The Tech Debt Game

To help demonstrate and better understand the trade-offs involved and the various strategies for dealing with tech debt, my colleagues and I have developed a “Tech Debt Game” which can be used to simulate development iterations with both feature backlogs and technical debt backlogs. Players can try out different strategies for dealing with the tech debt, with the goal of completing the most feature story points. The game can be used to demonstrate the impact of technical debt and help start the conversation with Product Owners and other external stakeholders.

Get the Tech Debt Game

Blog

Behavior Driven Development: Steps to Implement, Part I

By: Agile Velocity | Apr 14, 2014 |  Article,  Technical Practices

Behavior Driven Development (BDD) is the process of writing high-level scenarios verifying that a User Story meets the Acceptance Criteria and the expectations of the stakeholders. The automation behind the scenarios is written to initially fail and then pass once the User Story development is complete.

BDD is largely a collaboration tool that forces people working on a project to come together. It bridges the gap between stakeholders and the development team. Practicing BDD is fairly painless but does require a meeting to discuss the intended behavior the scenarios will verify. The meeting to write the BDD tests is usually an informal one which is led by the Product Owner or stakeholder and involves members of the development team. The goal is to collaborate so everyone is on the same page as to what this User Story is trying to achieve: Start with the story. Talk about the “so that”. Discuss how the software currently works and how this story will change or impact current functionality.

Scenarios are written from a user’s perspective. Because they are run on a fully functioning system with real data, they can be expensive (meaning the time it takes to write, execute, and maintain the tests). However, the scenarios will serve as executable documentation for how the software behaves. This is useful for developers to understand each other’s code and gives them tools to refactor with confidence. Over time, the tests will evolve as they are maintained and also serve as easy-to-read descriptions of features. Documenting behavior in this manner is useful when onboarding new team members by communicating the software’s functionality.

Things to ask yourselves when writing scenarios:

*  What functionality is not possible or feasible to cover with lower-level automated tests? How do these tests differ from unit tests, integration tests, or service tests?

*  What is the “happy path” of the User Story functionality? This is the typical usage of the software.

*  What is the atypical usage? Are there variations of inputs that are possible, but used less frequently?

*  How should the system handle bad input?

*  How does the system prevent bad output? How does it display or log errors?

*  What is the impact on other parts of the system?

*  What are the integration points – other components, other applications? Should the tests include some verification of this integration, or is it covered elsewhere?

What Works Well

The process that has worked well for me is to first write the scenarios together with the Product Owner.

The PO should lead this discussion and write the steps in a way that makes sense to stakeholders. At least one Developer and one Tester should also be present. Some like to call this the “Power of Three”. Read through the Acceptance Criteria asking the questions listed above. Try to use consistent language in the steps and use terms that make sense to people outside the development team. It may be tempting to write steps involving user interaction with the software, like this:

Given I am on the Login screen
When I enter “user1” in the username field
And I enter “mypassword” in the password field
And I click the Login button
Then I should see the error message “Username/password is invalid”

I have found it is better to describe the behavior in broader terms not closely tied to the application layout itself. For example:

Given I am on the Login screen
When I enter invalid credentials
Then I am not logged in and a meaningful error message is displayed

This way, knowledge of the application layout lives in the code behind each step, not in the test scenarios themselves. As the application grows, the test steps will be less likely to require changes, isolating the maintenance to the code behind them.
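
For illustration, here is roughly what the code behind those broader steps might look like with Cucumber-JVM-style annotations; the LoginPage page object below is a hypothetical stand-in for whatever drives your UI. Knowledge of fields and buttons lives in this layer, so the scenario text stays stable as the screen changes.

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

public class LoginSteps {

    // Hypothetical page object; a real suite might drive a browser here.
    static class LoginPage {
        private boolean loggedIn;
        private String error = "";

        void open() { /* navigate to the login screen */ }

        void login(String username, String password) {
            if ("user1".equals(username) && "mypassword".equals(password)) {
                loggedIn = true;
            } else {
                error = "Username/password is invalid";
            }
        }

        boolean isLoggedIn() { return loggedIn; }
        String errorMessage() { return error; }
    }

    private final LoginPage loginPage = new LoginPage();

    @Given("I am on the Login screen")
    public void iAmOnTheLoginScreen() {
        loginPage.open();
    }

    @When("I enter invalid credentials")
    public void iEnterInvalidCredentials() {
        // The step decides which fields to fill and which button to click,
        // so a layout change only affects this code, not the scenario text.
        loginPage.login("no-such-user", "wrong-password");
    }

    @Then("I am not logged in and a meaningful error message is displayed")
    public void iAmNotLoggedInAndAnErrorIsDisplayed() {
        assertFalse(loginPage.isLoggedIn());
        assertTrue(loginPage.errorMessage().length() > 0);
    }
}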

In the second article in this series, we’ll discuss what to do once the scenarios are defined.

Tool references:

  • http://cukes.info/ (Ruby)
  • http://www.specflow.org/specflownew/ (.NET)
  • http://jbehave.org/ (Java)

Blog

Behavior Driven Development: Steps to Implement, Part II

By: Agile Velocity | |  Article,  Technical Practices

In the first article of this two-part series, we discussed how to define scenarios for testing. Once this is done, the Testers can partially implement the new steps so that they fail.

For example, they might add an assertion that a file exists on the file system, or write code that returns a negative result for now.

Maybe the code will eventually query the database to return a positive result, or maybe it will ensure some value is displayed on the UI. Sometimes this part is minimal; sometimes it can include almost all of the step implementation.
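
As a hedged example of what that partial implementation might look like (the report export feature and file path here are made up), a Tester could write a step that asserts on an artifact the unfinished feature cannot produce yet, so the scenario stays red until development is complete:

import io.cucumber.java.en.Then;
import java.nio.file.Files;
import java.nio.file.Paths;
import static org.junit.Assert.assertTrue;

public class ExportSteps {

    @Then("a report file is written to the export folder")
    public void aReportFileIsWrittenToTheExportFolder() {
        // Fails until the feature actually writes the file; the Developer
        // later finishes or adjusts this step as part of making it pass.
        assertTrue("Expected exports/report.csv to exist",
                Files.exists(Paths.get("exports/report.csv")));
    }
}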

Lastly, during feature implementation, the Developer writes the final test code to make the scenario pass.

I encourage Developers and Testers to pair at this stage. This type of teamwork keeps the Tester engaged in how the code is being implemented and ensures they understand how the software works. An informed Tester is a good Tester.

As you probably know, automated tests provide more value the more frequently they are executed. This is why you want to be smart about the tests covered at the user level. Automated testing is an investment. The team should view the tests as code as well.

Automated tests require maintenance to keep them passing, and that maintenance is best shared by all members of the team. When practicing BDD, make sure all scenarios provide value and are not too difficult to automate. Be careful not to include too many variations of input data. If possible, cover the various inputs using tests at lower levels: unit, integration, service-level. Use the BDD scenarios to cover what is not, or better yet, cannot be covered by other types of automated tests. Don’t be afraid to get rid of a scenario altogether if it doesn’t provide value. It is okay to run some tests manually, as long as the team understands that manual tests are executed much less frequently, so feedback is delayed.

Tips for BDD:

*  Write them to be executed locally on a Developer’s machine
*  Monitor execution time and keep it to a minimum
*  Scenarios should not be dependent on each other
*  Scenarios can be executed multiple times in a row and still pass (some cleanup may be necessary)
*  To keep the number of steps from getting out of hand, pass in variables to the steps to maximize reuse (see the sketch after this list)
*  Keep your steps organized in separate files by major functional area
*  Scenarios are grouped to allow for a quick regression test of each major functional area, or a full regression of the entire system
*  Use source control for your test code
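
Here is a small sketch of the parameterized-step tip mentioned above, using Cucumber expression syntax (the SearchPage helper is hypothetical). One step definition can then serve every scenario that searches, regardless of the keyword:

import io.cucumber.java.en.When;

public class SearchSteps {

    // Hypothetical helper standing in for whatever drives the search UI or API.
    static class SearchPage {
        void searchFor(String keyword) { /* perform the search */ }
    }

    private final SearchPage searchPage = new SearchPage();

    // Matches steps such as: When I search the catalog for "biology"
    @When("I search the catalog for {string}")
    public void iSearchTheCatalogFor(String keyword) {
        searchPage.searchFor(keyword);
    }
}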

When BDD is done properly (before implementation), the real value is gained by simply collaborating and discussing the expected behavior of the software!

Once implementation is done, the scenarios ensure the software meets the needs of the stakeholders. From then on, the automated tests act as a safety net for developers to refactor code and implement new features with confidence.

Teams should strive to make execution of at least a portion of the BDD tests part of their Continuous Integration build/deployment process and make the test results visible. Failing test scenarios around existing functionality should be a top priority to fix.

Have fun!

See Behavior Driven Development Part I

Other resources for BDD:

Tool references:

  • http://cukes.info/ (Ruby)
  • http://www.specflow.org/specflownew/ (.NET)
  • http://jbehave.org/ (Java)

Blog

It’s Not Just About Process

By: Agile Velocity | Aug 12, 2013 |  Article,  Scrum,  Technical Practices

Agile software practitioners focus a lot of attention on people, communication, collaboration, and strong values. In many environments, these are unquestionably the best opportunities for improvement. Inevitably, though, all software development teams reach a point where their greatest opportunities for improving the way they implement and deliver a product are of a technical nature. There is no single standard set of universal technical practices, and Best Practices really need to be thought out in the context in which they would be used. Each team should look at what is most appropriate for their context.

Practices

Here are the core practices we see used by high-performing Agile teams:

Agile Testing

Agile testing is a core part of applying the lean principle of Build Quality In as we develop products. Testing in Agile is applied in a way that increases collaboration and understanding, improves feedback, and helps deliver quality products more quickly. While automation is an important practice, ensuring we can still do the types of manual testing that are not easily automated but provide valuable feedback is just as important.

The key practices and principles here are:

Other Resources:

Test-Driven Development

For most teams, automated testing is not new. Some form of unit testing or functional testing is common, but to get a better return on our testing investment, there is a need to level up. We can increase the level and timeliness of feedback by adopting the practice of Test-Driven Development (TDD). In addition to earlier feedback, TDD leverages tests to give the team design feedback that leads to better and less wasteful implementation of features than is usually achieved by testing after implementation.
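
To make the rhythm concrete, here is a minimal red-green sketch in JUnit (the ShippingCalculator and its pricing rules are invented for illustration). The tests are written first and fail; the simplest implementation then makes them pass, and refactoring follows with the tests as a safety net.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ShippingCalculatorTest {

    @Test
    public void ordersOverFiftyDollarsShipFree() {
        // Red: this fails until the free-shipping rule is implemented.
        assertEquals(0.0, new ShippingCalculator().shippingCost(75.00), 0.001);
    }

    @Test
    public void smallOrdersPayAFlatRate() {
        assertEquals(4.99, new ShippingCalculator().shippingCost(20.00), 0.001);
    }
}

// Green: the simplest code that makes both tests pass; any cleanup after this
// point is refactoring, protected by the tests above.
class ShippingCalculator {
    double shippingCost(double orderTotal) {
        return orderTotal >= 50.00 ? 0.0 : 4.99;
    }
}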

The same TDD workflow that is most often applied to Unit Testing can be applied to different levels of tests. The most common TDD-related testing activities are:

Other Resources:

Continuous Integration

Software implementation involves the combined efforts of one or more people brought together to form something that is ultimately deliverable. Feedback should be given at the earliest point possible to tell us when new features do not cleanly integrate with existing ones, or when the behavior of the system has changed in an unexpected way. While merging and integrating code lines is often considered painful, doing it more often and in smaller increments makes it easier and provides earlier feedback.

The most common practices within Continuous Integration are:

  • Use a single source repository or main code line
  • Automate the build
  • Test each build
  • Commit to the mainline frequently (daily)
  • Drive integration, builds, etc. from every commit

Other Resources:

Continuous Delivery

Continuous Delivery is in many ways an evolution or extension of Continuous Integration, extending the benefits of frequent feedback and automation to packaging and deployment. This means continually delivering code to relevant environments (even production) as soon as it is ready by leveraging deployment and infrastructure automation. This is often where much of DevOps is focused, supporting collaboration, sharing, and feedback between development and operations.

With Continuous Delivery we mean:

Other Resources:

Refactoring

The goal of Code Refactoring is to restructure existing code in order to improve its readability, reduce complexity, improve maintainability, and make it more extensible without changing the external behavior of the code. This is done by relying on automated tests to ensure the behavior of the code stays the same while making a series of small, incremental changes to the internal structure of the code.

Many modern Integrated Development Environments (IDEs) and code editors provide functionality for assisting with applying common patterns of refactoring such as extracting a new class from an existing one or renaming a method across a code base for clarity/readability.

Refactoring is a key part of Test-Driven Development but can also be performed outside of the cycle when necessary. Normally, the observation of a Code Smell is the driver for performing refactoring.
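
As a small, hypothetical illustration (the invoice rules below are invented), the external behavior of total() is identical before and after; the refactoring only extracts and names the individual rules so the intent is easier to read and extend:

// Before: one method mixes the subtotal loop, the discount rule, and the tax rule.
class InvoiceBefore {
    double total(double[] lineItems, boolean preferredCustomer) {
        double sum = 0;
        for (double item : lineItems) {
            sum += item;
        }
        if (preferredCustomer) {
            sum = sum * 0.95;
        }
        return sum * 1.0825;
    }
}

// After: the same behavior, with each rule extracted into a named method.
class InvoiceAfter {
    private static final double PREFERRED_DISCOUNT = 0.95;
    private static final double SALES_TAX = 1.0825;

    double total(double[] lineItems, boolean preferredCustomer) {
        return withTax(discounted(subtotal(lineItems), preferredCustomer));
    }

    private double subtotal(double[] lineItems) {
        double sum = 0;
        for (double item : lineItems) {
            sum += item;
        }
        return sum;
    }

    private double discounted(double amount, boolean preferredCustomer) {
        return preferredCustomer ? amount * PREFERRED_DISCOUNT : amount;
    }

    private double withTax(double amount) {
        return amount * SALES_TAX;
    }
}

A unit test asserting the same totals for both versions is what gives the team confidence that the behavior has not changed.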

Other Resources:

Peer Review

Agile Teams are always looking for frequent feedback and knowledge sharing, and achieving both at the same time is a big win. By increasing the number of eyes that see and understand any given part of a code base, design, architecture, infrastructure, etc., we apply more knowledge, perspective, and experience to a solution and increase the number of people who can work effectively in that area.

Core Practices:

Emergent Design

Developers, architects, and teams often struggle with how and when to approach design. In contrast to the more waterfall style of Big Design Up Front in a designated phase, Agile teams tend to design all the time and let the design emerge as features are added. This certainly happens continually through refactoring as part of Test-Driven Development, but other elements of design also take place in planning sessions and designated design sessions. By employing emergent design throughout, we reduce the waste or rework associated with unvalidated or unused designs.

Some key practices and concepts are:

Other Resources:

DevOps

DevOps is often discussed as an outside yet complementary practice to Agile, but its focus on collaboration, shared understanding, and shared responsibilities across Development and Operations is much like the way Agile teams bridge the divide between Developers and Testers or other combinations of roles. For Agile teams, DevOps is a key part of achieving Continuous Deployment and Continuous Delivery. These practices are also important for improving consistency, reducing wait time and other wastes, and sharing knowledge between roles that have traditionally been very siloed.

While collaboration and communication are critical, the key technical practices involved are:

Practices for Improvement

Whether you are Agile or Lean, use Scrum or Kanban, or none of the above, these practices and principles and others like them should be considerations for your toolbox and for initiating team improvement.

Blog

Why Decomposition Matters

By: Agile Velocity | Jun 05, 2013 |  Article,  Technical Practices

As children, we were told by adults to take smaller bites when eating. We didn’t always understand the risk of choking that they saw, but ultimately it was easier and safer to chew those smaller bites. In some cases, taking smaller bites made the difference in finishing more of the meal. But have we translated that lesson to other things in our lives?

Often we try to tackle things that are too big to sufficiently understand, estimate, or complete in a reasonable amount of time and we suffer for it. We don’t always see the risks that hide in more complex items and thus don’t feel compelled to break them down. Other times we fail to break things down due to unknowns, ignorance, uncertainty, pride, optimism, or even laziness.

Variation in Software

Delivering software products is a constant struggle with variation in size and complexity. It’s everywhere, from the size of user stories to tasks to code changes to releases. It is present from the moment someone has an idea to the moment software is deployed. We strive to understand, simplify, prioritize, and execute on different types of things that are difficult to digest. The more variation we see, the more we struggle with staying consistent and predictable.

Because of this variation, we need to be good at breaking things down to more manageable sizes. This need is so pervasive that I propose it be viewed as a fundamental skill in software. I don’t think most teams recognize this, and they certainly don’t develop the skill. Many people on Agile teams exposed to concepts like Story decomposition often don’t realize how frequently they need to apply similar practices in other areas.

Why Decomposition Matters

My example of eating small bites was simplistic. In developing software, there is a lot more to gain but it is still ultimately about reducing risk and making tasks easier.

Progress

According to The Progress Principle, a strong key to people remaining happy, engaged, motivated, and creative is making regular progress on meaningful work. Not surprisingly, we want to get things done, but we also want to feel a sense of pride and accomplishment.

Of course, stakeholders and other parties are interested in seeing progress from those they depend on. When we are able to make more regular progress toward goals, we provide better measurements and visibility for ourselves and others to know how we are doing.

It should be no surprise that smaller items enable quicker progress toward goals if done right. We certainly need to take care to avoid dependencies and wait time. Smaller, more independent goals allow more frequent progress and all the benefits that come with it.

Collaboration

As we have more people working on something, is it easier to put them all on a larger, more monolithic task or divide and conquer? Usually, we prefer to divide and conquer. Yet how we divide is important as well because dependencies and other kinds of blocking issues create wait time and frustration.

Decomposition can be one of the easier ways to get additional people involved with helping accomplish a larger goal. By breaking up work into more isolated items to be done in parallel, we are increasing the ability to swarm on a problem.

Complexity

Complexity is one of the greatest challenges in software development. With more interactions, operations, and behaviors, we are more likely to have edge cases and exposure to risk when anything changes. Large goals are easier places for complexity to hide. The larger the task is when we try to accomplish it, the more details we have to discover, understand, validate, and implement.

Focusing on smaller units of work can be a helpful constraint we place on ourselves. Why add our own constraints? When we constrain ourselves to work on a small portion of a larger task, we are trying to limit the complexity that prevents us from accomplishing something. We want to avoid a downward spiral of “what if?” and “we are going to need” scenarios that, while important, can slow the task at hand and lead to overthinking, overdesigning, and accumulating work we may never need. We are trying to avoid Analysis Paralysis.

Control

By looking at Kanban systems, we can see that wide variations in size impact lead times. If we can be more consistent with the size of items flowing through a system, then we will have more consistent throughput and cycle times. By breaking work down more frequently into items of similar size, cycle times become more stable, and the average size (despite whatever variation remains) becomes more useful for forecasting, thanks to the law of large numbers.

Clarity

It isn’t a coincidence that Break It Down is also a slang phrase in music and pop culture that relates to our goals. Urban Dictionary defines “Break It Down” to mean: to explain at length, clearly, and indisputably. By looking at the pieces of a larger whole individually and in more detail, we can often gain more clarity and understanding of the bigger picture than if we had never spent the extra effort.

Summary

Software Development is a continual exercise in dealing with variation in size and complexity. From early feature ideas to low-level code changes, we have work that can be difficult to understand, manage, and predict, especially when it is large. Decomposition helps us make this work more manageable.

So, we need to remember to Break It Down. It is all about decomposition. And in software, decomposition is everywhere, yet so many struggle with recognizing the need and applying it well. I believe decomposition should be considered one of the most fundamental and critical skills in software development. Getting better at it takes a combination of discipline, practice, and learning but can pay off immensely.

To be effective, even this post required decomposition. We are going to continue with a series of posts exploring many of the individual types of variation in software and how lean/agile teams cope with these different situations.

Blog

Austin DevOps Events Summary – Culture is Important

By: Agile Velocity | May 07, 2013 |  Agile Technical Practices,  Article,  Technical Practices

Last week, some of our team attended several Austin DevOps-related events. We had a great time learning and interacting with other technologists attending both PuppetCamp Austin and DevOps Days Austin.

This edition of the annual DevOps Days event in Austin (which also takes place in other cities each year) was declared the biggest yet. There were great discussions, Ignite talks, and Open Space sessions as well. And while there are always many conversations around technology, there was a noticeable focus on culture.

Many of the talks on both days had a strong cultural component. On the second day, the organizers even mentioned feedback from some attendees asking to move past the culture stuff and on to tech talks. But it was obvious from the sessions that a large number of people felt the cultural conversation was important. Those of you in the Agile community who haven’t looked closely at DevOps will recognize that these are similar to the conversations occurring about Agile in general.

Some notable takeaways:

There were too many great talks to highlight them all here. You can see all the recorded talks on Vimeo. You should also look back through tweets to see what people were saying during the conference.

If you haven’t attended one of these events, you should definitely try to attend at least one they put on each year.