Agile web development experts Jeff Ryan & Carl Shamie discuss the key challenges teams face today and the critical steps to building high-quality, secure sites.
1. What are the key challenges teams face when using agile web development? Do you recommend it?
We strongly recommend an agile approach to development. The customer can provide feedback early, better understand what they're getting, and be more involved and active in the process. Of course, some projects and customers don't align with these techniques for multiple reasons (internal processes, internal gating, and so on). Having a backlog of items prioritized and sized by the development team also builds strong ownership and responsibility within the team, with the goal of improving quality.
We need to formalize the tools and techniques more stringently within projects to make delivery more consistent. A big challenge with Agile development is that for most organizations it is a paradigm shift that requires both business and IT to change the way they interact. As a systems integrator (SI), we adapt our delivery processes to match the expectations of the client, whether that is traditional waterfall, fully Agile, or a hybrid of the two.
2. In light of recent security issues for major institutions, how has the approach to a site’s architecture adapted?
Security and compliance are major concerns for organizations. The ramifications of a breach are significant. Bad press, damaged customer relationships, monetary liabilities, and an overall diminished brand are common outcomes. As consultants it is our responsibility to stay up-to-date on security best practices and design our systems in a secure manner from the ground up.
At Acquity that includes activities like leveraging our custom security filters designed to address common code vulnerabilities, participating in forums and organizations like OWASP, and working with our clients to ensure that security audits take place. In addition to best-practice processes, we continue to see more adoption of auditing tools like SAINT, SATAN, Nessus, SARA, Tiger, and IBM AppScan.
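As a minimal illustration of the kind of defense such filters apply, consider output encoding against cross-site scripting (XSS), one of the common code vulnerabilities OWASP catalogs. This is a hypothetical sketch, not Acquity's actual filter code: it escapes the handful of characters that are significant in HTML content before untrusted input is rendered.

```java
// Hypothetical sketch of an OWASP-style output-encoding defense against XSS.
// Illustration only -- not the custom security filters mentioned above.
public class HtmlEscaper {
    // Escape the five characters significant in HTML content,
    // following common XSS-prevention guidance.
    public static String escapeHtml(String untrusted) {
        StringBuilder sb = new StringBuilder(untrusted.length());
        for (char c : untrusted.toCharArray()) {
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#x27;"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

In a real filter this encoding would be applied centrally (for example, in a servlet filter or view layer) rather than ad hoc at each call site, so individual developers cannot forget it.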
3. QA testing is one of the most important phases of an implementation, yet these hours are usually first to be reduced. How do you recommend continuing quality in light of a budget crunch?
Too often QA is seen as an area where time and resources can be trimmed. Other times, because QA is the last phase, it ends up being squeezed due to delays in earlier phases and timeline pressure. These are two separate issues, but the end result is the same: a program that suffers from quality issues manifesting in high bug counts, longer regression cycles, and customer dissatisfaction.
Recognizing and articulating the value of the QA phase is the best defense against cost cutting. As we move away from the waterfall SDLC, we are engaging QA teams earlier in the project. Bringing the QA lead into the design phase to help identify design considerations that impact testing, or working with the BAs during functional design to further outline QA impacts, reduces wasted cycles later in a project.
Additionally, we continue to increase adoption of automated tools, testing methods, static code analysis, and formalized code review processes. Tools like Sonar, FindBugs, PMD, JUnit, and Selenium all serve to improve delivery quality.
4. What about automated testing methods? Can these be used to drive costs lower for QA?
Tools like JUnit and Selenium are powerful and can be very useful in regression-testing scenarios, but they are not without cost. In general there is a 10-20% increase in build effort to incorporate these tools. There is also an ongoing maintenance effort to keep tests up to date and working properly. These costs can easily be recaptured over time through reduced manual testing effort and reduced risk of changes introducing side effects. However, a significant investment in regression tests can quickly become worthless if an organization does not stay committed to keeping them up to date. Often clients fail to understand the rigor required to adopt automated tests in an enterprise environment, and the tests are discarded or ignored once development and maintenance are fully transitioned to their teams.
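To make the maintenance tradeoff concrete, here is a minimal regression-style check in plain Java (a hypothetical example using built-in `assert` statements for self-containment; a real suite would use JUnit, and Selenium for browser flows). The point is that these checks pin current behavior: any change to `applyDiscount` that alters results is caught automatically, but the tests themselves must also be updated whenever the expected behavior legitimately changes, which is the ongoing cost described above.

```java
// Hypothetical regression-test sketch; a real project would express
// these checks as JUnit test methods instead of plain assertions.
public class PricingRegressionDemo {
    // Unit under test: apply a percentage discount to a price in
    // cents, never going below zero.
    static long applyDiscount(long priceCents, int discountPercent) {
        long discounted = priceCents - (priceCents * discountPercent) / 100;
        return Math.max(0, discounted);
    }

    public static void main(String[] args) {
        // Regression checks pinning current behavior.
        // (Run with `java -ea` so assertions are enabled.)
        assert applyDiscount(10_000, 10) == 9_000 : "10% off $100";
        assert applyDiscount(10_000, 0) == 10_000 : "no discount";
        assert applyDiscount(10_000, 100) == 0 : "full discount";
        System.out.println("all regression checks passed");
    }
}
```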
5. Performance is a key acceptance criterion. What are some common pitfalls and tradeoffs that impact our ability to deliver against performance expectations?
Performance metrics must be discussed early in a project and must be balanced against the project requirements and creative design. Performance cannot be defined in a vacuum. Real-time calls to the backend, CDN usage, third-party integrations, client-side calls, and overall page weight are common requirements that play a part in the real or perceived speed of a site. No amount of code tuning can overcome a page that has 10 MB of images to download. No amount of caching can fix a real-time call to the backend that takes 10 seconds to respond, or a single-page checkout with calls to an AVS service, a fraud service, and a payment service. When performance issues occur, most clients' first reaction is to blame the code, while ours is to blame something environmental.
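The page-weight point can be made with simple arithmetic: transfer time is at least payload size divided by bandwidth, so a heavy page sets a hard floor on load time no matter how well the server code is tuned. A back-of-the-envelope sketch (the 5 Mbps bandwidth figure is illustrative, not from the interview):

```java
// Back-of-the-envelope page-weight estimate. Bandwidth figure is
// illustrative; real load times also involve latency, parallel
// connections, and compression, which this deliberately ignores.
public class PageWeightEstimate {
    // Lower bound on transfer time: size (in megabits) / bandwidth.
    static double secondsToDownload(double megabytes, double megabitsPerSecond) {
        double megabits = megabytes * 8; // 1 byte = 8 bits
        return megabits / megabitsPerSecond;
    }

    public static void main(String[] args) {
        // A 10 MB page on an assumed 5 Mbps connection:
        System.out.printf("at least %.0f seconds to transfer%n",
                secondsToDownload(10, 5));
    }
}
```

Ten megabytes at 5 Mbps is a transfer floor of 16 seconds, which no amount of backend tuning can recover; the fix has to come from the page itself.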
The reality is that performance tuning is an art and there are rarely black-and-white answers. Proper load testing prior to launch and adequately configured QA environments are crucial to identifying and resolving performance issues before the site goes live. A problem found during QA represents an entirely different level of concern than one found after go-live. It can ultimately be the deciding factor between a project viewed as a success and one viewed as a failure.