Five workloads. Measured. Published.
Before placemy was a product, the engine inside it was a dissertation project. These are the five test cases from that evaluation, reproduced without rounding and without cherry-picking. Your workloads will look different — every cloud estate does — but the technique that produced these numbers is the same one running inside the CLI today.
The five test cases
Range 19%–63%. Median 48%.
| Case | Scenario | Before (€/mo) | After (€/mo) | Δ |
|---|---|---|---|---|
| TC-01 | Web tier over-provisioned EC2 | 1,840 | 1,490 | −19% |
| TC-02 | Analytics pipeline idle between runs | 3,220 | 1,180 | −63% |
| TC-03 | Forgotten EBS snapshots | 660 | 340 | −48% |
| TC-04 | Cold storage never tiered down | 2,190 | 890 | −59% |
| TC-05 | Multi-region dev environments | 1,420 | 1,150 | −19% |
Figures reproduced from the 2025 dissertation evaluation. Your workloads will differ — placemy shows you the trade-offs before it touches anything.
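The range and median quoted above are plain arithmetic over the table. Here is a minimal Python sketch that recomputes them from the published figures; the numbers come straight from the table, and nothing here calls placemy:

```python
from statistics import median

# Monthly costs in EUR, copied from the table above: (before, after).
cases = {
    "TC-01": (1840, 1490),
    "TC-02": (3220, 1180),
    "TC-03": (660, 340),
    "TC-04": (2190, 890),
    "TC-05": (1420, 1150),
}

# Percentage saving per case: (before - after) / before.
savings = {c: 100 * (b - a) / b for c, (b, a) in cases.items()}

for case, pct in savings.items():
    print(f"{case}: -{pct:.0f}%")  # matches the Δ column

pcts = sorted(savings.values())
print(f"range {pcts[0]:.0f}%-{pcts[-1]:.0f}%, median {median(pcts):.0f}%")
```

Running it prints −19%, −63%, −48%, −59%, −19%, a range of 19%–63%, and a median of 48%.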
What the numbers cover
Each test case is a real workload scanned before any placemy recommendation was applied (the “Before” column) and re-scanned after the selected recommendations were applied (the “After” column). We did not apply every recommendation, only those the engine flagged with high confidence and low blast radius (a sketch of that selection rule follows below). The delta is the measured drop in monthly spend, not a projection.
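To make “high confidence and low blast radius” concrete, here is a minimal sketch of that selection rule. The `Recommendation` type, its fields, and the threshold values are illustrative assumptions for this post, not placemy's actual API:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    # Illustrative fields only; placemy's real schema may differ.
    resource_id: str
    action: str        # e.g. "rightsize", "tier-down", "delete-snapshot"
    confidence: float  # engine's confidence in the estimated saving, 0.0-1.0
    blast_radius: int  # dependent resources the change could plausibly touch

def select_to_apply(recs, min_confidence=0.9, max_blast_radius=2):
    """Keep only recommendations deemed safe to apply in the evaluation:
    high confidence in the estimate and few dependents that could break."""
    return [
        r for r in recs
        if r.confidence >= min_confidence and r.blast_radius <= max_blast_radius
    ]
```

In the evaluation, recommendations that failed this kind of filter were left unapplied, so the “After” column reflects only the conservative subset.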
What the numbers don't cover
Implementation time, engineering review, regression testing. The savings figures are the ceiling of what the placemy engine surfaces — the floor depends on how your team chooses to act on the report. We think that's the honest way to talk about it.