placemy.cloud
Savings evidence

Five workloads. Measured. Published.

Before placemy was a product, the engine inside it was a dissertation project. These are the five test cases from that evaluation, reproduced without rounding and without cherry-picking. Your workloads will look different — every cloud estate does — but the technique that produced these numbers is the same one running inside the CLI today.

Dissertation evidence

Five scanned workloads, measured

| Case  | Scenario                             | Before / mo | After / mo | Δ   |
|-------|--------------------------------------|-------------|------------|-----|
| TC-01 | Web tier over-provisioned EC2        | €1,840      | €1,490     | 19% |
| TC-02 | Analytics pipeline idle between runs | €3,220      | €1,180     | 63% |
| TC-03 | Forgotten EBS snapshots              | €660        | €340       | 48% |
| TC-04 | Cold storage never tiered down       | €2,190      | €890       | 59% |
| TC-05 | Multi-region dev environments        | €1,420      | €1,150     | 19% |

Figures reproduced from the 2025 dissertation evaluation. Your workloads will differ — placemy shows you the trade-offs before it touches anything.

What the numbers cover

Each test case is a real workload scanned before any placemy recommendation was applied (the “Before” column) and re-scanned after applying the highest-confidence recommendations (the “After” column). We did not apply every recommendation — only the ones the engine flagged with high confidence and low blast radius. The delta is the monthly saving, not a projection.
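The delta column is plain arithmetic over the two scans. A minimal sketch that recomputes the published percentages from the Before/After figures in the table above (the dictionary layout is illustrative, not placemy's internal format):

```python
# Recompute the published deltas from the Before/After scan figures.
# Case IDs and euro amounts come from the table above; the data
# structure itself is just for illustration.
cases = {
    "TC-01": (1840, 1490),
    "TC-02": (3220, 1180),
    "TC-03": (660, 340),
    "TC-04": (2190, 890),
    "TC-05": (1420, 1150),
}

for case, (before, after) in cases.items():
    saving = before - after             # monthly saving in EUR
    pct = round(100 * saving / before)  # percentage, as published
    print(f"{case}: €{saving}/mo saved ({pct}%)")
```

Running this reproduces the Δ column exactly (19%, 63%, 48%, 59%, 19%), which is what "reproduced without rounding" means in practice: the percentages follow directly from the two scan figures.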

What the numbers don't cover

Implementation time, engineering review, and regression testing. The savings figures are the ceiling of what the placemy engine surfaces; the floor depends on how your team chooses to act on the report. We think that's the honest way to talk about it.