this post was submitted on 29 Jan 2024

Hey everyone, I'm part of a company that's been trying to modernize. Our team has switched to Agile, moved to some cloud storage, and is slowly trying to add automated tests to its various legacy applications. I know that normally automated tests would just be done with the user story, as part of the definition of done, and going forward I want to do that with future user stories. But I still want a way to keep track of the large amount of work involved in adding automated tests to cover the huge amount of code that already exists. It will be a fairly large development effort in its own right, done by at least 2-3 devs/juniors, with me kind of leading it while still pretty new at this myself lol.

We're using Azure DevOps, which organizes work from big to small as Epics, Features, User Stories, and Tasks. We're trying to decide how to frame and track the work within that hierarchy. From what I've read, user stories aren't the best fit for this since it isn't user-driven functionality, but they're the best tracking tool we've got. With that context, here are the ideas so far.

  1. One person suggested an Automated Test Feature, stuck under a Global epic we already have for miscellaneous structure and framework work. Then one user story per module covering all of that module's automated tests, a task for each individual class and page to test within the module, and the individual tests for each page/class listed in the description. I think they don't want the backlog diluted with too many of these automated test stories.

  2. Another person suggested creating an Epic for the automated test user stories created up to now, then a Feature for each module, then a user story for each class/page to be tested, then a task for each test the developer has to write. This person was me. It felt more organized, and you can see which dev is working on which piece, but I can see how it balloons the backlog with a ton more user stories. At least it's all in one Epic folder that's easy to ignore.

  3. Our QA wanted only one user story for all automated tests, to really prevent clutter, but they were also okay with the first idea when I kind of pushed back on it. All user stories are usually tested by them, and this is mostly dev-facing work rather than application functionality, so I can see why they want it as small and out of the way in the backlog as possible.

  4. Another person suggested creating a user story for each test, but instead of putting them all in one place, filing each one under the Feature that the originating story (the one being tested) went in. I get the logic of this too, but I was afraid it would be confusing to track with everything scattered around, and with user- and system-driven functionality mixed in with tests. Then again, we also organize things by sprint, so maybe it wouldn't be as confusing as I first thought.

Anyway, if anyone has any suggestions or a better way to organize this than these options, let me know!

top 4 comments
MagicShel@programming.dev 4 points 9 months ago

I'd really need much more of a back-and-forth conversation about the details. Number one, what is "a unit test"? All the tests for a given class? Do you have a tool for checking that your tests are complete, in the sense of covering the various code paths and expected exceptions, or are you just doing happy path testing?

Having a story per actual test is way too much Jira. I'd go for a story per class, perhaps? Ideally a story per module, but that's likely too much for a single sprint, especially given your other work.

Honestly, I think this whole effort is a little misguided. I think you should write unit tests as you touch classes during development, not go out and write tests for classes that aren't causing you issues and that you haven't changed. But if this is the route you're going, it will take a lot of discussion and effort, and most likely you'll need to try a few things and see what works for your team, because any advice I could give would be pretty far removed from your reality.

Best of luck with this. I think it's awesome you've gotten buy-in for this initiative.

WanderingVentra@lemm.ee 3 points 9 months ago

Thanks! Haha, yeah, it may be a little misguided, but I think we were allowed to do this partly as busy work, to give us something to do between releases. We're kind of in a transition period, and it's something to do while my higher-ups negotiate contracts for further work and stakeholders and customers prioritize items for the next release. Admittedly, these transition periods kind of scare me, since you never know when you'll lose work or something. So even if they think it's busy work, for me it's shoring up my resume with tech and leadership experience I should already have and that the rest of the industry will be looking for, just in case the worst happens lol. I think I've been working at this place too long and got complacent, but I've gotten more interested in looking around and catching up on what I've been missing, as the company looks to modernize and we simultaneously approach the release of this version of the software.

(Funny enough, I was initially hired to work on automated testing, since I had done some at my previous company, but I immediately got placed doing other dev work to catch up on our schedule. Now it's been years and I'm trying to remember how this all works lol.)

Right now we're mostly just doing happy path testing, tbh. But that's a good point that we should look into our tools and see how they report code coverage and everything. That might be some reading up I have to do. I think it's a combination of MSTest (or whatever comes with Visual Studio), some Telerik JustMock and Test Studio tools our company already had licenses for, and Selenium.
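To make the happy path question above concrete, here is a minimal MSTest sketch of a happy path test next to the exception path test that happy-path-only suites tend to skip. DiscountCalculator is a made-up class for illustration, not anything from the actual codebase:

    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical class under test, purely for illustration.
    public class DiscountCalculator
    {
        public decimal Apply(decimal price, decimal percent)
        {
            if (percent < 0 || percent > 100)
                throw new ArgumentOutOfRangeException(nameof(percent));
            return price - price * percent / 100m;
        }
    }

    [TestClass]
    public class DiscountCalculatorTests
    {
        // Happy path: valid input produces the expected output.
        [TestMethod]
        public void Apply_ValidPercent_ReducesPrice()
        {
            var calc = new DiscountCalculator();
            Assert.AreEqual(90m, calc.Apply(100m, 10m));
        }

        // Exception path: the branch a happy-path-only suite never exercises.
        [TestMethod]
        public void Apply_NegativePercent_Throws()
        {
            var calc = new DiscountCalculator();
            Assert.ThrowsException<ArgumentOutOfRangeException>(
                () => calc.Apply(100m, -5m));
        }
    }

On the coverage side, if the test projects run under the dotnet CLI, the coverlet.collector package (dotnet test --collect:"XPlat Code Coverage") is one common way to get line and branch numbers; Visual Studio Enterprise also ships a built-in coverage tool.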

You're right that a story per test is probably a bit too much Jira. I was thinking more of a story per class, but even that's probably too much given how big this legacy application has gotten. I don't want to bury us all in backlog management paperwork, so now I'm leaning towards zooming out a bit and doing a story per module.

MagicShel@programming.dev 2 points 9 months ago

My area is Java, so I'm not as familiar with .NET (or whatever you're using), but look into mutation testing and see if there's a tool for it. It helps identify all the various code paths. For example, if you have a line like if (object.value().equals("foo")) { ... }, it will make sure you have a test case where object.value() returns "foo" and one where it doesn't, so that both paths are tested.

In Java the tool I've used for this is pitest, though I don't see that it supports the MS ecosystem. This is way, way better than plain code coverage percentages, because I can cover a lot of lines with assert service.processObject(obj) != null without actually testing the code much at all.
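For the .NET stack mentioned above, Stryker.NET (the dotnet-stryker tool) fills the same niche as pitest. As a hedged sketch of the distinction being drawn here, with a made-up LabelService for illustration: the first test below games line coverage, while the last two are what a mutation tool would actually be satisfied by:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical service, purely for illustration.
    public class LabelService
    {
        public string Describe(string value) =>
            value == "foo" ? "special" : "ordinary";
    }

    [TestClass]
    public class LabelServiceTests
    {
        // Covers the line, but proves almost nothing: if a mutation tool
        // flips == to !=, this test still passes, so the mutant survives.
        [TestMethod]
        public void Describe_ReturnsSomething()
        {
            Assert.IsNotNull(new LabelService().Describe("foo"));
        }

        // These two pin each branch to its output, killing that mutant.
        [TestMethod]
        public void Describe_Foo_IsSpecial()
        {
            Assert.AreEqual("special", new LabelService().Describe("foo"));
        }

        [TestMethod]
        public void Describe_Other_IsOrdinary()
        {
            Assert.AreEqual("ordinary", new LabelService().Describe("bar"));
        }
    }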

atheken@programming.dev 3 points 9 months ago

Writing fast unit tests will require some refactoring that could end up being pretty extensive.

For example, you mentioned “cloud storage”. If that isn’t already behind an interface, one ticket could be to define an interface for accessing cloud storage, so it can be mocked for most tests while the concrete implementation is tested directly to confirm the integration works (see the sketch below). Try to hone that interface down to as few methods as possible, and only expose the parameters you’re actually using. You can add more later if it’s absolutely necessary.

Do this for anything that does I/O and/or is CPU intensive.
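A minimal sketch of that interface extraction, assuming MSTest and a hand-rolled in-memory fake; ICloudStorage, ReportService, and the method names are hypothetical placeholders, not the poster’s actual API:

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Narrow interface: only the operations actually in use today.
    public interface ICloudStorage
    {
        Task<string> ReadTextAsync(string path);
        Task WriteTextAsync(string path, string contents);
    }

    // Hypothetical consumer of the storage abstraction.
    public class ReportService
    {
        private readonly ICloudStorage _storage;
        public ReportService(ICloudStorage storage) => _storage = storage;

        public Task<string> LoadReportAsync(string name) =>
            _storage.ReadTextAsync($"reports/{name}.txt");
    }

    // Hand-rolled fake; no mocking library required for most tests.
    public class InMemoryCloudStorage : ICloudStorage
    {
        private readonly Dictionary<string, string> _files =
            new Dictionary<string, string>();

        public Task<string> ReadTextAsync(string path) =>
            Task.FromResult(_files[path]);

        public Task WriteTextAsync(string path, string contents)
        {
            _files[path] = contents;
            return Task.CompletedTask;
        }
    }

    [TestClass]
    public class ReportServiceTests
    {
        [TestMethod]
        public async Task LoadReportAsync_ReadsFromExpectedPath()
        {
            var storage = new InMemoryCloudStorage();
            await storage.WriteTextAsync("reports/q1.txt", "hello");

            Assert.AreEqual("hello",
                await new ReportService(storage).LoadReportAsync("q1"));
        }
    }

The concrete implementation that wraps the real cloud SDK would then get a small number of direct integration tests, per the comment above.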

So, for tickets, I’d basically say one per refactoring.

Going forward, writing “unit tests” should not be a separate ticket; it should be factored into the estimates for the original stories, and nothing should go out without appropriate tests. The operational burden will decrease over time.

QA should have their own unit of work for how they want to test the application. Usually this is a suite per section of the app. If your app has an API, that probably offers a nice logical breakdown of the different areas, and each could have its own ticket for adding QA-level test suites. The tests that developers write should only be additive and should reduce QA’s workload. What you want to be sure of is that change sets are getting reviewed and through the entire pipeline without getting logjammed at any stage. Ideally, individual PRs go from started to deployed in less than a week.
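One lightweight way to slice those suites in the MSTest world is with test categories, so each area of the app can be run (and ticketed) independently; “Billing” is an invented area name here:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class BillingApiTests
    {
        [TestMethod]
        [TestCategory("Billing")] // one suite per section of the app
        public void CreateInvoice_ReturnsCreated()
        {
            // ...call the billing endpoint and assert on the response...
        }
    }

Then dotnet test --filter "TestCategory=Billing" runs just that area’s suite.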

If you’re interested in more techniques, check out the book “Working Effectively with Legacy Code.” It has a lot of patterns for adding tests to existing codebases.