Feels like you could maybe (ab)use an ML experiment tracking tool for this, something like MLflow. Instead of training an ML model, you just trigger your tests and report the statistics from those runs back to the tracking tool.
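A minimal sketch of that idea with MLflow's Python client; the tracking URI, experiment name, commit hash, and metric names here are all made up for the example:

```python
# Sketch: report benchmark/test results to an MLflow tracking server
# instead of training runs. Assumes a server is reachable at the URI below.
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")  # hypothetical tracking server
mlflow.set_experiment("ci-benchmarks")            # hypothetical experiment name

with mlflow.start_run(run_name="commit-abc123"):  # e.g. one run per commit
    mlflow.log_param("git_commit", "abc123")
    # Numbers produced by your test/benchmark step (placeholder names/values):
    mlflow.log_metric("ticks_to_threshold", 4321)
    mlflow.log_metric("avg_cpu_per_tick", 3.7)
```

The MLflow UI then gives you per-run history and comparison charts for free, which covers most of the visualization side.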
I'm pretty interested in this too. I've thought about it in the past, and I get stuck at the same point you're asking about (the post-processing and visualizing bit).
I'd thought of having GitHub Actions do the measurement, stashing the results as artifacts, then having another workflow process the results. Obviously pretty DIY, so I'm curious if others have solutions.
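As a rough sketch of the processing half, assuming each run uploads a small JSON file as an artifact and a later workflow downloads them all into a results/ directory (paths and field names are placeholders):

```python
# Sketch: merge per-run JSON artifacts into one CSV ordered by timestamp,
# ready to graph with whatever tool you like.
import csv
import json
from pathlib import Path

rows = []
for path in Path("results").glob("**/*.json"):
    data = json.loads(path.read_text())
    rows.append({
        "commit": data["commit"],        # placeholder field names
        "timestamp": data["timestamp"],
        "test": data["test"],
        "value": data["value"],
    })

rows.sort(key=lambda r: r["timestamp"])

with open("history.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["commit", "timestamp", "test", "value"])
    writer.writeheader()
    writer.writerows(rows)
```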
This is on my list to do - if you find a good solution do let us know!
I was thinking of just doing the quick-and-dirty approach of appending the data to a file in the repo and auto-committing it: append some commit information, the test name, and the results every time. That way HEAD always has the full history of data in order, so you can push/pull that into anything and analyse/graph it without messing about.
I'd probably only do it on push/PR merge, so in the grand scheme of things it would never really become a lot of data, but you could truncate it as you go easily enough.
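A rough sketch of that append step in Python, with placeholder metric names and a commit SHA taken from the CI environment; a follow-up step would `git add`/`commit`/`push` the file:

```python
# Sketch: append one row per benchmark run to a CSV tracked in the repo.
import csv
import os
import time

row = {
    "commit": os.environ.get("GITHUB_SHA", "local"),  # set by the CI runner; fallback for local runs
    "timestamp": int(time.time()),
    "test": "ticks_to_threshold",  # placeholder test name
    "value": 4321,                 # placeholder result
}

write_header = not os.path.exists("benchmarks.csv")
with open("benchmarks.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(row))
    if write_header:
        writer.writeheader()
    writer.writerow(row)
```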
Hard to recommend anything without some hint about your build system. Java via Jenkins? Node via Bitbucket Pipelines? C# via Azure DevOps?
My particular use case is actually for a hobby/fun project: developing a bot in Rust to play a game (specifically, Screeps), and I want to track how fast it hits certain game thresholds with each newly developed feature. I'm using Gitea Actions for CI/CD, but it's all running on my local network/home lab, so I'm happy to shift as needed.
I use Datadog for this specific use case. You can log your own metrics through their API, then set up dashboards and alerting based on specific parameters and thresholds. I mainly use it to track web vitals over time to pinpoint problematic releases or assets, but it can be used for any numeric values you wish to track.
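Roughly, submitting a custom metric point looks something like this, using plain HTTP against the metrics API; the metric name, tags, and DD_API_KEY environment variable are placeholders, so check Datadog's docs for the exact endpoint and payload shape:

```python
# Sketch: push a single gauge point to Datadog's metrics API over HTTP.
import os
import time

import requests

payload = {
    "series": [{
        "metric": "bot.ticks_to_threshold",        # hypothetical custom metric
        "points": [[int(time.time()), 4321]],      # (timestamp, value) pairs
        "type": "gauge",
        "tags": ["branch:main", "feature:remote-mining"],  # placeholder tags
    }]
}

resp = requests.post(
    "https://api.datadoghq.com/api/v1/series",
    headers={"DD-API-KEY": os.environ["DD_API_KEY"]},
    json=payload,
)
resp.raise_for_status()
```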