More and more companies are migrating to the cloud for computing power, but many are wasting money on services they never use.
According to a Wall Street Journal story, common mistakes that drive up cloud computing costs include ordering too much computing power, failing to schedule software shutdowns during off hours, not using monitoring tools to track wasted computing cycles, and allowing programmers to believe cycles are free.
Indeed, Boris Goldberg, co-founder and chief technology officer at Cloudyn Ltd., told the WSJ that roughly 60% of cloud software servers can be reduced or terminated because companies have purchased too many.
Many companies have learned their lesson and are more closely monitoring their workers and finding ways to automatically shut off their applications on public cloud services, the WSJ wrote. For example, Netflix developed software that automatically shuts down systems at off-peak times and can predict when to resume activity.
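The off-hours shutdown idea described above can be sketched in a few lines. This is a minimal illustration, not Netflix's actual tooling: the window times and function names are hypothetical, and a real deployment would call the cloud provider's API (for example, stopping instances) where this sketch merely returns a decision.

```python
from datetime import time

# Hypothetical off-peak window: services may be stopped between
# 20:00 and 06:00 local time. Note the window wraps past midnight.
OFF_PEAK_START = time(20, 0)
OFF_PEAK_END = time(6, 0)

def in_off_peak(now, start=OFF_PEAK_START, end=OFF_PEAK_END):
    """Return True if `now` (a datetime.time) falls in the off-peak window."""
    if start <= end:
        return start <= now < end
    # Window wraps past midnight, e.g. 20:00 -> 06:00
    return now >= start or now < end

def should_stop(now):
    # A scheduler (cron or similar) would call this periodically and,
    # in a real system, trigger the provider's stop API when it returns True.
    return in_off_peak(now)
```

A monitoring job running this check each hour would flag workloads left running overnight, which is exactly the waste the WSJ story describes.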
Thermo Fisher information technology executive Mark Field now oversees all procurement of cloud services and reviews payments weekly to identify underutilized servers, that is, servers that are running when no one is using them. Field told the WSJ he adopted the practice after overpaying for cloud-computing services when engineers left computing tasks processing through the weekend.
“Would you like someone leaving the shower running in your house all weekend long?” Field told the WSJ.
Managing cloud computing costs is becoming increasingly important as more companies migrate to the services, the WSJ wrote. According to IDC, global spending on public cloud services is expected to total $59.5 billion for 2014, up from $45.7 billion in 2013. Public cloud spending will grow at a compound annual rate of 23% through 2017, IDC projects.