Cost Increase after new release
Problem
In a recent scenario, a cost anomaly alert was triggered the day after a new release.
As shown below, the anomaly was detected against the Log Analytics workspaces service.
Analysis
Further analysis showed when the costs started increasing and how they continued to rise over the following days.
Breaking the data down by service name reinforced what the alert was saying: we were seeing a spike in log costs.
A cost drill-down showed that the spike was tied to the data ingestion meter and to one specific Log Analytics workspace.
Using a few KQL queries, you can see the spike in logging data, which coincided with the cost increase.
In this case most of the logging data related to the dependency tracking tables. The query below shows how many records were logged per table per day.
// Daily record count per table (Type) across all tables in the workspace
union *
| where TimeGenerated >= ago(30d)
| extend FormattedDate = format_datetime(TimeGenerated, "yyyy-MM-dd")
| summarize count() by FormattedDate, Type
| order by FormattedDate asc
| render timechart
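Record counts are only a proxy for ingestion cost. A variation on the same query (a sketch, assuming the standard _BilledSize and _IsBillable columns are populated for the tables in your workspace) shows the billed volume per table per day instead:
// Billed ingestion volume in GB per table per day
union *
| where TimeGenerated >= ago(30d)
| where _IsBillable == true
| summarize BilledGB = sum(_BilledSize) / (1024.0 * 1024 * 1024) by bin(TimeGenerated, 1d), Type
| order by TimeGenerated asc
| render timechart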
Using the source fields in the log data, we identified which component was generating the additional logging, as sketched in the query below, then reviewed that component's host.json file.
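This is a minimal sketch of that attribution step, assuming a workspace-based Application Insights resource, so dependency telemetry lands in the AppDependencies table and the emitting component is identified by AppRoleName:
// Dependency records per component (role) per day, to spot the noisy component
AppDependencies
| where TimeGenerated >= ago(14d)
| summarize DependencyRecords = count() by AppRoleName, bin(TimeGenerated, 1d)
| order by TimeGenerated asc
| render timechart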
Solution
A new component had been deployed that was processing a large volume of messages. Dependency tracking for that component was generating significant logging data in Application Insights, which then flowed into the Log Analytics workspace and appeared as a cost there.
In this case the component did not need to log that much data by default. We enabled sampling at an appropriate level and deployed an update to the component. As a result, the daily logging costs roughly halved and we were back within budget.
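For reference, the change is of the following shape. This is a minimal sketch, assuming the component is an Azure Function and sampling is controlled through the Application Insights settings in its host.json; the rate and excluded types shown here are placeholders to be tuned to the workload rather than copied as-is.
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 20,
        "excludedTypes": "Exception"
      }
    }
  }
}
With adaptive sampling enabled, Application Insights keeps telemetry, including the dependency records that drove this spike, at roughly the configured rate once traffic exceeds it, which brings the ingestion volume and the associated workspace cost back down.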