cross-posted from: https://programming.dev/post/864349
I have spent some time trying to simplify the release process. For a variety of reasons, we can only release on Thursdays. The code is “frozen” on Tuesday before it can be released on Thursday. But we sometimes squeeze in a quick fix on Wednesday as well.
The question is: when should QA test the code?
Here is what I have seen happen:
- Dev writes code and sends it to QA.
- QA finds problems, sends it back to the Dev.
- Dev fixes and sends it back to QA.
I have seen a Dev fix their code on Tuesday, only for QA to come back on Wednesday with problems, when the code should already have been frozen.
I am looking for the best solution here.
We have several problems going on at once:
- Developers test on the same server where QA tests. I am working on switching developers to a separate Dev server, but that is a long-term work in progress.
- We don’t have an easy way to revert code on the QA server. It is easier to build new revisions than to revert changes. We could try to revert code more often, but that will require a culture change.
- QA doesn’t really have a schedule for when they are supposed to do functional testing vs. regression testing.
I don’t know the best way to move forward. Thus far, I haven’t thought much about QA because I was focused on getting releases out. Now that releasing is simpler and we can potentially do weekly releases, I am trying to figure out how we should proceed with the testing process.
Ephemeral test environments are a great tool for this kind of stuff. We do work on a feature branch and spin up a test environment for that feature. QA happens there, then it gets merged to master once they approve it.
In our workflow, it deploys immediately, but you could just as easily adapt that to cut a release every Thursday. Then each release just contains whatever makes its way into master by Thursday. You might need to add more process if there are release notes etc. that need to be coordinated. My suggestion on that front would be to cut the release on Tuesday or Wednesday; whoever writes that documentation can then look at what made the cut, spend a day documenting it, and just press the deploy button on Thursday once they’re done.
Your QA team should be automating their regression tests and other test suites so that they can run locally as part of the development cycle or as a job in your CI system.
That would decouple testing from the incredibly tight release cycle you currently have.
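To make the automation suggestion above concrete, here is a minimal sketch of a QA-authored regression test in pytest style. The `apply_discount` function is a hypothetical stand-in for real production code; the point is that checks like these can run locally during development or as a CI job on every push, instead of only during the freeze window.

```python
# Hypothetical production function under test (stand-in for real code).
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)


# Tests like these are discovered and run automatically by `pytest`.
def test_basic_discount():
    assert apply_discount(100.0, 25) == 75.0


def test_regression_zero_discount():
    # Pins down a (hypothetical) past bug: 0% must return the original price.
    assert apply_discount(100.0, 0) == 100.0
```

Running `pytest` locally or in a CI pipeline on every push means regressions surface when the code is written, not on Wednesday.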
What’s on the test env? In-dev? Approved changes that will go through QA for release? Release-ready changes?
What does freeze consequently mean? Freeze development? Freeze what you consider for a release? Freeze the release?
If you’re developing on the test env while QA is testing changes for approval into a release, you can’t “freeze” what you still have to test. You need an inclusion freeze to leave time for QA testing and approval, and then a cleanup and a factual freeze of what the release includes.
Once a release is factually frozen, you either do the release or you don’t. If an issue is found that is big enough to keep something out of that release, you make a new release or wait a week. Smaller issues can live on prod for a week, or be fixed earlier through patch releases if you do those.
It seems to me that your problem is that your definition of “freeze” allows fixes for QA issues. So it is not a freeze at all, if the idea is to give QA a clean, stable baseline to test on Wednesday.
I see two alternatives:
- Make Tuesday a true freeze. Any defects found by QA drop that feature out of the Thursday release.
- Stop “throwing features over the wall” to QA. Make the QA testers/process part of the development team/process. Features are considered “done” and ready for submission to production only once tested. Freeze on Thursday morning, with only integration testing to be done before release.
In truth, both approaches yield the same results. If programmers have to get it right by Tuesday, then they’ll need to work more closely with QA during development. Eventually, the Wednesday testing becomes little more than a rubber stamp and they’ll push to move the freeze back to Wednesday.
Most importantly, it seems that in this situation the “definition of done” has to be more than just “coding completed”.
I wanted to suggest something like this: code-freeze-wise, you can have “minor” and “major” problems. Major problems block the feature; minor ones let it through (but you now have tech debt, so make sure that the process of fixing those found issues is higher priority than new features). Of course, you decide what is minor and what is major. E.g. maybe a typo in the UI is acceptable, maybe not.
As for throwing features over the wall: I would actually suggest just changing the perspective and involving QA earlier. A feature is not ready and not frozen unless it has been looked at by QA. Then, when a thing is frozen, it’s really ready. (Of course you’ll still have regressions etc., but that’s another topic.)
Your company seems to have significant technical debt in environments, tooling, test automation, and CI/CD that is slowing down releases. Is that problem acknowledged in your org? Will you get support for continued changes, and is there a shared understanding of what a good-enough level is?
Would a fixed weekly or bi-weekly schedule work? Both devs and QA could plan their work accordingly. For example: devs and QA agree on a Thursday what to work on next and how to test it (acceptance criteria). Devs start development, QA prepares tests (functional and regression). Devs work for a week, test, and feature-freeze the code the next Thursday. QA starts acceptance testing on Friday, they ping-pong Fri-Mon-Tue, regression testing and full code freeze happen on Wednesday, release on Thursday. If the quality of the code and tooling makes the process smooth enough, then a similar weekly cycle could also be possible.
Gradually increasing test automation could speed up the whole process. Devs and QA can write and maintain automated tests together.
Feature flags could help isolate bugs that are discovered late in the process.
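A minimal sketch of the feature-flag idea: a buggy feature found late in the week can ship dark (flag off) while the rest of the release goes out on Thursday. The `FLAGS` dict and the checkout functions here are hypothetical; real setups usually read flags from config or a flag service.

```python
# Hypothetical in-process flag store; real systems read this from
# configuration or a feature-flag service rather than a hardcoded dict.
FLAGS = {
    "new_checkout_flow": False,  # bug found on Wednesday: ship it dark
    "improved_search": True,     # passed QA: enabled for this release
}


def is_enabled(flag: str) -> bool:
    """Return whether a flag is on; unknown flags default to off."""
    return FLAGS.get(flag, False)


def new_checkout(cart):
    # Hypothetical new code path, still being fixed up by the devs.
    return f"new:{len(cart)}"


def legacy_checkout(cart):
    # Known-good path that ships in the Thursday release.
    return f"legacy:{len(cart)}"


def checkout(cart):
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

The release itself stays on schedule; only the flag flips once the late-found bug is actually fixed and retested.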
If there are several rounds of back-and-forth between devs and QA, then a root cause analysis can help there as well. Are devs and QA aligned on how to test? Are devs testing enough? Is QA giving devs enough info on the bugs they find? Etc.