kersplort

joined 1 year ago
[–] kersplort@programming.dev 2 points 1 year ago (2 children)

It's not a fully controlled environment; that's the point of smokes.

[–] kersplort@programming.dev 1 points 1 year ago

Polling is certainly useful, but at some point adding reliability measures degrades effectiveness. I certainly want to know if the app is unreachable over the open internet, and I absolutely need to know if a partner's API is down.

[–] kersplort@programming.dev 1 points 1 year ago

Wherever possible, this is a good idea. The campsite rule - tests don't touch data they didn't bring with them - helps as well.

However, many end to end tests exist as a pipeline, especially for entities that are core to the business function of the app. Cramming all that sequentiality into a single test gives you all the problems described, just inside one giant test that you have to fish the failing step out of.

[–] kersplort@programming.dev 1 points 1 year ago (6 children)

My experience with E2E testing is that the tools and methods necessary to test a complex app are flaky. Waits, checks for text or selectors, and custom form-field navigation all need careful balancing to make the test effective. On top of this, there is frequently a sequentiality to E2E tests that causes these failures to multiply in frequency, as you're at the mercy of not just the worst test, but the product of every test in sequence.

I agree that the tests cause less flakiness in the app itself, but I have found smokes inherently flaky in a way that unit and integration tests are not.
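
The compounding is worth spelling out: if each step in a sequence passes with probability p, the whole run passes with pⁿ, so ten steps that are each 99% reliable succeed together only about 90% of the time. A minimal sketch of the kind of balancing this forces (the helper and its defaults are hypothetical, not any particular framework's API):

```typescript
// Hypothetical helper: wrap a flaky async check in bounded retries with
// exponential backoff, the kind of balancing E2E waits and selectors need.
async function withRetries<T>(
  check: () => Promise<T>,
  attempts: number = 3,
  baseDelayMs: number = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await check();
    } catch (err) {
      lastError = err;
      // Back off exponentially so a slow environment gets time to settle.
      await new Promise((resolve) =>
        setTimeout(resolve, baseDelayMs * 2 ** attempt),
      );
    }
  }
  throw lastError;
}
```

The tradeoff is exactly the one above: every retry you add hides a little more real flakiness in exchange for a greener pipeline.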

[–] kersplort@programming.dev 6 points 1 year ago (4 children)

My team has just decided to make working smokes a mandatory part of merging a PR. If the smokes don't work on your branch, it doesn't merge to main. I'm somewhat conflicted - on one hand, we had frequent breaks in the smokes that developers didn't fix, including ones that represented real production issues. On the other, smokes can fail for no reason and are time consuming to run.

We use playwright, running on github actions. The default free tier runner has been awful, and we're moving to larger runners on the platform. We have a retry policy on any smokes that need to run in a step by step order, and we aggressively prune and remove smokes that frequently fail or don't test for real issues.
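
For anyone curious, the retry policy lives in Playwright's config; a sketch along these lines (the specific numbers are made up, not our real settings):

```typescript
// playwright.config.ts — sketch of a retry policy for sequential smokes.
import { defineConfig } from "@playwright/test";

export default defineConfig({
  // Retry flaky smokes a couple of times before calling the run a failure.
  retries: 2,
  // Step-by-step smokes depend on each other, so no parallel workers.
  workers: 1,
  use: {
    // Record a trace on retry so "failed for no reason" runs are debuggable.
    trace: "on-first-retry",
  },
});
```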

 

End to end and smoke tests give a really valuable angle on what the app is doing and can warn you about failures before they happen. However, because they're working with a live app and a live database over a live network, they can introduce a lot of flakiness. Beyond just changes to the app, different data in the environment or other issues can cause a smoke test failure.

How do you handle the inherent flakiness of testing against a live app?

When do you run smokes? On every phoenix branch? Pre-prod? Prod only?

Who fixes the issues that the smokes find?

[–] kersplort@programming.dev 2 points 1 year ago* (last edited 1 year ago)

Get good at the three point turn.

  • Add the new code path/behavior. Release - this can be a minor version in semver.
  • Mark the old code path or behavior as deprecated. Release - this can be another minor version.
    • In between here, clean up any dependencies or give your users time to clean up.
  • Remove the old code path or behavior. Release. If you're using semver, this is the major version change.

This is a stable way to make changes on any system that has a dependency on another platform, repository, or system. It's good practice for anything on the web, as users may have logged in or long running sessions, and it works for systems that call each other and get released on different cadences.
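
A sketch of what the three releases look like in code (the function names are invented for illustration):

```typescript
// Release 1 (minor): add the new code path alongside the old one.
function fetchUserV2(id: string): { id: string; source: string } {
  return { id, source: "v2" };
}

// Release 2 (minor): keep the old path working, but mark it deprecated and
// delegate to the new one so both paths behave identically while callers
// migrate.
/** @deprecated Use fetchUserV2; removal planned for the next major version. */
function fetchUser(id: string): { id: string; source: string } {
  return fetchUserV2(id);
}

// Release 3 (major): delete fetchUser entirely.
```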

[–] kersplort@programming.dev 2 points 1 year ago

We use a little bit of property testing to test invariants with fuzzed data. Mutation testing seems like a neat inverse.
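
A hand-rolled sketch of the idea, for anyone who hasn't seen it (a real library like fast-check does shrinking and reproducible seeds for you; the invariant here is just a toy example):

```typescript
// Generate fuzzed input: a random-length array of random integers.
function randomIntArray(): number[] {
  const length = Math.floor(Math.random() * 20);
  return Array.from({ length }, () => Math.floor(Math.random() * 1000));
}

// Run the invariant against many fuzzed inputs; false on any counterexample.
function holdsForAll(
  runs: number,
  invariant: (xs: number[]) => boolean,
): boolean {
  for (let i = 0; i < runs; i++) {
    if (!invariant(randomIntArray())) return false;
  }
  return true;
}

// Example invariant: reversing an array twice gives back the original.
const doubleReverseIsIdentity = (xs: number[]): boolean =>
  JSON.stringify([...xs].reverse().reverse()) === JSON.stringify(xs);
```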

[–] kersplort@programming.dev 2 points 1 year ago

I think the best thing to do with TDD is pair with or convince devs to try it for a feature. Coming at things test first can be novel and interesting, and it does train you to test and use tests better. Once people have tried it, I think it broadens your use of tests pretty well.

However, TDD can be a bit of a cult, and most smart and independent people (like people willing to work at a <20 person company) will notice that TDD isn't the silver bullet its proponents make it out to be.

[–] kersplort@programming.dev 1 points 1 year ago

YAML works better with git than JSON, but so much config work is copy and paste, and YAML is horrible at that.

A format where changing one line doesn't turn into changing three lines, but that you could still copy off a website, would be great.

[–] kersplort@programming.dev 2 points 1 year ago (1 children)

XML to transform XML to import into more XML? Can't we just have a config file that isn't setting up some big tie in?

[–] kersplort@programming.dev 1 points 1 year ago

I don't know if it's actual JSON5, but eslint and some other libraries accept extended, commentable JSON in their config files.
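
For example, something like this parses fine as a legacy-style `.eslintrc.json`, if I remember right that ESLint strips comments before parsing (the rule choices here are arbitrary):

```jsonc
{
  "rules": {
    // Comments survive, so entries can be annotated and copy-pasted freely.
    "no-unused-vars": "error",
    "eqeqeq": "warn"
  }
}
```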

 

I'm like a test unitarian. Unit tests? Great. Integration tests? Awesome. End to end tests? If you're into that kind of thing, go for it. Coverage of lines of code doesn't matter. Coverage of critical business functions does. I think TDD can be a cult, but writing software that way for a little bit is a good training exercise.

I'm a senior engineer at a small startup. We need to move fast, ship new stuff fast, and get things moving. We've got CICD running mocked unit tests, integration tests, and end to end tests, with patterns and tooling for each.

I have support from the CTO in getting more testing in, and I'm able to use testing to cover bugs and regressions, and there's solid testing on a few critical user path features. However, I get resistance from the team on getting enough testing to prevent regressions going forward.

The resistance is usually along lines like:

  • You shouldn't have to refactor to test something
  • We shouldn't use mocks, only integration testing works.
    • Repeat for test types N and M
  • We can't test yet, we're going to make changes soon.

How can I convince the team that the tools available to them will help, and will improve their productivity and cut down time having to firefight?

 

What's something you've gotten into your CICD pipeline recently that you like?

I recently automated a little bot for our GitHub CICD. It runs a few tests that we care about but don't want to block deployment on, and posts the results on the PR. It uses gh pr comment --edit-last so it isn't spamming the thread. It's been pretty helpful in automating some of the more annoying parts of code review.
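
The core of it is a step roughly like this (sketch only; the `optional-checks` script name and `PR_NUMBER` variable are stand-ins, not our real setup):

```shell
# Run the non-blocking checks; don't fail the job if they fail.
results="$(npm run optional-checks --silent || true)"

# Upsert a single bot comment: --edit-last updates this user's previous
# comment, and errors if none exists yet, so fall back to creating one
# on the first run.
gh pr comment "$PR_NUMBER" --edit-last --body "$results" \
  || gh pr comment "$PR_NUMBER" --body "$results"
```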

 

The site's been down in the morning for the last couple days. Running a new server that gets attention is tough - do the admins for this site need anything from this community? Volunteer time? Money?
