Decarbonise your IT with sustainable software engineering

Is your software silently adding to global emissions? Discover how sustainable engineering can turn IT into a force for climate action.
February 4, 2025

IT already accounts for around 4% of worldwide greenhouse gas emissions, and this share is likely to increase dramatically in the coming years, not least due to the emerging trend of integrating Generative AI into more and more products.

The Green IT community is tirelessly working on ways to reduce software's carbon footprint, from locally executed binaries to full-size Kubernetes clusters. This knowledge can be applied to the decarbonisation of software companies or, more generally, organisations where IT is the primary driver of value creation.

What is Green IT?

Definition

Green IT (green information technology): Implementing environmentally sustainable computing by reducing energy consumption, minimising electronic waste, and optimising the lifecycle of IT equipment to lower environmental impact.

Green IT aims to reduce the carbon footprint and electronic waste through carbon-efficient software and sustainable resource management. The latter can be achieved through measures like:

  • Using refurbished products,
  • Using products longer before replacement,
  • Preferring products that are easier to repair and recycle,
  • Preferring products with lower energy consumption.

This article, however, will focus on software's carbon efficiency and operation. In other words, we want to reduce software's carbon emissions per computational unit as much as possible.

What are the benefits of Green IT?

Software built with sustainability in mind reduces carbon emissions and leads to considerable cost savings. This is particularly true if your software runs in the cloud, where resource consumption is precisely tracked and billed.

Focusing on sustainability can also lead to better software performance and, thus, higher revenues. A lightweight website, for example, ranks better in SEO (which is still relevant today, even with the rise of GenAI), and fast-loading websites are known to have better conversion rates.

Measuring software emissions

Before you start optimising the software's efficiency, you need a way to measure its carbon footprint. Otherwise, you can't properly evaluate the impact of your efforts. We'll therefore explain what options are available to measure software emissions.

Software indirectly emits carbon through the electricity consumed by the CPU, memory usage, and data transfer to/from storage like hard drives or over the network. Every operation caused by the software thus increases the electricity consumption.

Another source of emissions is the production of the device(s) on which our software operates. This so-called embodied carbon of a device includes packaging, transportation, and treatment at the end of its life, such as disposal or recycling.

Local systems

For local devices, like a physical server or a desktop computer, one can directly measure the consumption by installing a watt-hour meter between the device and the socket. Using the carbon intensity of the local power grid at the time the software is executed, i.e. the amount of carbon equivalents released to produce a kilowatt hour of electricity (usually measured in gCO2eq/kWh), we can derive the carbon emissions.
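
Expressed as a quick back-of-the-envelope calculation (the numbers below are purely illustrative, not real measurements), this boils down to a simple multiplication:

```python
# Minimal sketch: converting a watt-hour meter reading into carbon emissions.
# Both values are illustrative placeholders.

measured_energy_kwh = 0.75         # energy drawn by the device during the test run
grid_intensity_gco2_per_kwh = 380  # carbon intensity of the local grid at run time

emissions_gco2eq = measured_energy_kwh * grid_intensity_gco2_per_kwh
print(f"Estimated operational emissions: {emissions_gco2eq:.0f} gCO2eq")
```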

This setup, however, can be tricky, as there are usually many background tasks running on a system, which can lead to deviations in the measurement. Eliminating measurement errors requires several runs using the same system setup and software operations.

A more straightforward approach is to use the Green Metrics Tool by Green Coding. This open-source tool measures software's energy consumption using interfaces such as the Running Average Power Limit (RAPL) available on certain CPUs. For best results, a dedicated cluster is required where the software can be executed and measured in isolation. A hosted SaaS version is also available if you don’t want to operate the cluster.

If your application is containerised with Docker, consider using the tool GreenFrame. The setup is simpler since it obtains the required usage data directly from Docker. Although the tool is open source, the paid version is needed if you want detailed information.

Cloud-based applications

We obviously can’t use power monitors for applications operating in the cloud. We need to rely on the data that the cloud provider gathers. 

Many major providers, specifically AWS, Google Cloud, and Microsoft Azure, already provide carbon footprint reports. However, this data is not available in a standardised format, making it harder to compare. Some providers also use market-based emissions (calculated emissions that consider power purchase agreements or other compensation methods), which are handy for reporting reasons. Yet for optimising IT systems towards carbon efficiency, location-based emissions, i.e., the “real” emissions caused at the data centre location, are more relevant.

There can also be a delay of up to three months until the provider has gathered all the data from their worldwide data centres and makes the emissions available. That is far too long if you want to optimise workloads promptly.

Cloud Carbon Footprint tool

There is help, though: since cloud providers track every little transaction to bill us for it, we can use this fine-grained data to estimate emissions based on the service used and the usage duration. The Cloud Carbon Footprint tool (CCF) does the heavy lifting for us. 

It uses the cloud usage data it receives via APIs to generate accurate emission estimates. This way, the tool can provide nearly real-time data about your AWS, Google Cloud, or Microsoft Azure account emissions, making it the perfect tool to start your optimisation journey. Moreover, it also considers the embodied carbon, broken down over the typical usage phase of the hardware the cloud services are operating on.
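
To illustrate the principle behind such usage-based estimates, here is a highly simplified sketch; the coefficients are illustrative placeholders, not the actual values used by CCF:

```python
# Simplified sketch of a usage-based emission estimate, loosely following the
# idea behind the Cloud Carbon Footprint tool. All coefficients are
# illustrative assumptions, not the tool's real values.

vcpu_hours = 1200          # compute usage, as read from the provider's billing/usage data
watts_per_vcpu = 3.5       # assumed average power draw per vCPU
pue = 1.2                  # assumed data-centre Power Usage Effectiveness
grid_intensity = 350       # assumed gCO2eq/kWh at the data-centre location

energy_kwh = vcpu_hours * watts_per_vcpu / 1000 * pue
operational_gco2eq = energy_kwh * grid_intensity
print(f"~{operational_gco2eq / 1000:.1f} kgCO2eq operational emissions")
```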

Other resources

The tools mentioned above are indeed worth considering for your setup. You can find more ideas on one of the many link lists, like the Green Software Directory by GitHub or Awesome Green Software by the Green Software Foundation.

How to reduce software emissions

Once you can measure the emissions of your cloud setup daily, it’s finally time to begin the actual optimisation. To maximise your results, we start with actions that require relatively little effort but have significant potential to reduce your software emissions.

Level 1 - Infrastructure

Before optimising your code, you must ensure your infrastructure is not wasting resources. This requires examining every part of your infrastructure closely.

If you are operating your infrastructure in the cloud, you can use the providers’ billing service to identify the biggest cost blocks as starting points. Here are some typical points to look for.

Remove unnecessary resources

Finding and removing unnecessary servers and workloads is an easy and effective way to reduce your carbon footprint. Over time, systems tend to become cluttered. Services get replaced but not immediately deleted and eventually forgotten, or recurring jobs are no longer required but continue to run unnoticed.

It even happens that entire servers (physical or virtual) become obsolete. Yet an idle server still consumes a significant amount of energy because components like the mainboard need to be supplied with electricity. This is why these superfluous servers are also called “Zombies”: they do nothing productive but drain precious resources.

Using system logs and monitoring tools, it is relatively easy to tell whether a resource is still in use. However, we want to be careful not to cause side effects. This can be achieved with so-called “Scream Tests”: if you are confident that a resource is no longer in use, communicate your action and deactivate or disconnect (but do not delete!) the resource in question. If somebody screams that their system is affected, you can quickly enable it again. Remember that some jobs only run weekly or even monthly, so keep the observation period long enough and don’t delete the resource too early.

Rightsize your infrastructure

Cloud services make it easy to provision resources of literally any scale. That often leads to overprovisioning: bigger instance sizes than necessary are chosen to ensure the system has enough computational resources.

The process of optimising instance sizes is also referred to as “Rightsizing”. The aim is to find an instance size that leads to high utilisation without reaching its limits. This is important because a server that is only slightly utilised already consumes a significant amount of energy, while running it at a high utilisation rate won’t consume much more. This somewhat counter-intuitive effect is called Energy Proportionality.

Cloud providers like AWS even help you with this task and offer services like the Cost Optimisation Hub, which recommends which instances are over- or underprovisioned based on your actual usage.

However, the software you offer as a service is likely not used evenly throughout the day. There will be usage peaks, making it challenging to find the right size. If these peaks happen only sporadically and are relatively short, Burstable Instances can be helpful: they offer a certain amount of “extra power”, i.e. you can utilise them at more than 100% of their baseline for a short period. If the usage varies greatly over time, you need to allocate resources dynamically, which we will cover in the next section.

Allocate resources dynamically

A significant advantage of the cloud is that resources can be added or removed automatically based on the current needs. This process of dynamic resource allocation is also known as autoscaling.

Instead of provisioning one large instance with enough buffer for all expected peaks, which is often idling, the workload can be distributed over several smaller instances. Through autoscaling, which all major cloud providers offer, additional resources can be allocated automatically if the load increases. Likewise, they can be deactivated once they are no longer needed.

Be aware, though, that autoscaling requires fine-tuning and should be reviewed regularly. Set sensible upper limits; otherwise, configuration errors can become costly and spoil your carbon emission savings.
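
As an illustration, a target-tracking policy for an EC2 Auto Scaling group can be set up in a few lines with boto3; the group name and the 60% CPU target below are assumptions chosen for the example:

```python
# Sketch: a target-tracking autoscaling policy for an EC2 Auto Scaling group
# using boto3. The group name and the 60% CPU target are illustrative choices.
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",          # hypothetical group name
    PolicyName="keep-cpu-around-60-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 60.0,  # aim for high utilisation without hitting the ceiling
    },
)
```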

Use spot instances

Spot instances are short-term spare capacities that a cloud provider can’t sell, so they are offered for a lower price than regular on-demand instances. A general drawback is that these instances are usually short-lived, and the cloud provider might remove them after a short notification period. However, they are still a good match for controlled workloads that can be interrupted, like the processing of job queues.

The ecological benefit derives from energy proportionality, explained above: spot instances make better use of machines that already exist and are powered on, instead of spinning up additional ones that would probably run at a low utilisation rate.

Collect your garbage

Data at rest, i.e. data stored locally or in the cloud, is another emission source worth considering. Cloud storage is exceptionally cheap, so it is tempting to just save data there and forget about it. However, the data must still be stored on physical drives, which have to be produced, shipped, and hosted in data centres.

The best option, of course, is not to store the data in the first place. If this can’t be avoided, you can use the log or file retention management of cloud systems, where you can configure data to be deleted automatically after a specific time. For self-hosted systems it is also possible, although with a bit more effort, to automatically delete data like backups or logs.
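
For example, on AWS S3 such a retention rule can be expressed as a lifecycle rule; the bucket name, prefix, and retention period below are illustrative:

```python
# Sketch: automatic deletion of old log objects in an S3 bucket via a lifecycle
# rule, using boto3. Bucket name, prefix, and retention period are illustrative.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-logs",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-logs-after-30-days",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},  # delete matching objects after 30 days
            }
        ]
    },
)
```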

Be careful with “Lift and Shift”

If you plan to migrate applications into a cloud environment, avoid “Lift and Shift,” i.e., the direct transfer of all components without redesign. It is often better to utilise cloud-native services that your provider offers. For example, if your application uses a message broker like RabbitMQ, consider switching to a managed service instead of hosting your server instance(s).

Turn off development environments when not needed

Next to the Production environment, there are usually other environments like Development or User Acceptance Testing, which don’t need to run over the weekend or at night when nobody is working on them. They can be automatically shut down or powered on again when required.
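
A small scheduled script is often enough. The sketch below stops all EC2 instances tagged as development environments; the tag key and value are illustrative assumptions:

```python
# Sketch: stopping all running EC2 instances tagged as development environments,
# e.g. triggered by a nightly schedule. The tag key/value are illustrative.
import boto3

ec2 = boto3.client("ec2")
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Environment", "Values": ["development"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    instance["InstanceId"]
    for reservation in reservations
    for instance in reservation["Instances"]
]
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)  # powered on again when needed
```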

Level 2 - Architecture

So far, we have examined possibilities for reducing your software's carbon footprint on the infrastructure level. This usually requires no or only a few code changes. The next level of difficulty is to optimise the way software components interact with each other.

Use micro-services

A small service is easier to scale than a monolithic application. If your application contains parts or modules whose usage can vary greatly over the day, consider moving these parts into an external service. This way, you don’t have to scale the whole monolith. This does not mean splitting up your entire application into services is necessary.

Use serverless functions

Serverless functions don’t require you to provide a dedicated server. If needed, the cloud provider will execute the code on their infrastructure. This enables them to utilise their servers evenly and thus reduce emissions (also see “Rightsize your infrastructure” from the previous chapter).
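
For illustration, a minimal AWS Lambda-style handler in Python looks like this:

```python
# Sketch: a minimal AWS Lambda-style handler. No server has to be provisioned;
# the cloud provider runs this function on shared, well-utilised infrastructure.
import json

def handler(event, context):
    # 'event' carries the request payload, e.g. from an API gateway
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```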

Leverage caching

The most efficient computational operation is the one you can avoid entirely. Although caching is often not easy to implement, it can significantly reduce resource consumption if set up and configured correctly.

There are many possible ways to implement caching: in-memory caching on the code level, response caching of API endpoints, or browser caching. Caching usually benefits websites and applications by making them faster, which your users will appreciate. However, caching requires additional memory, which must be considered when implemented.
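
As a minimal code-level example, Python's built-in functools.lru_cache turns any pure function into an in-memory cache:

```python
# Minimal in-memory caching example using Python's built-in functools.lru_cache:
# repeated calls with the same arguments reuse the stored result instead of
# recomputing it, trading a little memory for less CPU work.
from functools import lru_cache

@lru_cache(maxsize=1024)
def exchange_rate(base: str, target: str) -> float:
    # Placeholder for an expensive lookup (e.g. an external API call or a
    # heavy database query); the returned value is illustrative only.
    return 1.08

exchange_rate("EUR", "USD")  # computed on the first call
exchange_rate("EUR", "USD")  # served from the in-memory cache afterwards
```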

Reduce data transfer

Data networks consume a lot of energy, so it is crucial to reduce the amount of data in transit as much as possible. There are many ways to achieve this, like choosing data formats with the smallest file sizes or avoiding sending unnecessary data with every API request.

Another option is to use compression. Although compressing and decompressing the data requires energy, modern algorithms are highly optimised, and the energy saved is greater than the energy expended.
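
A quick sketch of the idea using only Python's standard library (the payload is illustrative):

```python
# Sketch: compressing a JSON payload with gzip before sending it over the
# network. The payload is illustrative; real savings depend on the data.
import gzip
import json

payload = json.dumps({"readings": [{"sensor": i, "value": i * 0.5} for i in range(1000)]})
compressed = gzip.compress(payload.encode("utf-8"))

print(f"uncompressed: {len(payload)} bytes, compressed: {len(compressed)} bytes")
```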

Implement an Event-Driven architecture (EDA)

Implementing an Event-Driven architecture (EDA) avoids unnecessary status update requests among services. Instead of constantly checking for changes (e.g. Long Polling), the service will be informed through a server push that an event, like a data change, has occurred and action might be required.
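
The sketch below illustrates the publish/subscribe idea in-process; in a real system, a message broker or a managed event service would take the role of the dispatcher:

```python
# Minimal in-process sketch of the event-driven idea: instead of services
# polling for changes, subscribers are notified when an event occurs.
from collections import defaultdict
from typing import Callable

subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(event_type: str, handler: Callable[[dict], None]) -> None:
    subscribers[event_type].append(handler)

def publish(event_type: str, payload: dict) -> None:
    for handler in subscribers[event_type]:
        handler(payload)  # push the event to everyone who registered interest

subscribe("order.created", lambda event: print("update stock for order", event["order_id"]))
publish("order.created", {"order_id": 42})  # no polling loop needed
```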

Optimise your CI/CD pipeline

Continuous Integration is an integral part of the Software Development Life Cycle. Hence, the CI/CD pipeline is often heavily used throughout the day. That’s why you should add it to your carbon monitoring as well. 

Try to optimise it by keeping the tests' execution time low. For example, run tests in parallel if multiple CPU cores are available. Reducing the number of build dependencies or using small Docker images (e.g., based on Alpine Linux) can cut the execution time even further.

Choose the right tools

Choosing the right tool for the job is always important, and Green IT is no exception. For example, using a relational database for logging or caching wastes resources, since dedicated tools are much better suited to these tasks.

Additional readings

It’s worth mentioning that many cloud providers, such as Google, Microsoft, or AWS, also publish recommendations on how to run software optimally in their clouds.

Level 3 - Code

On the next level, we’ll finally examine the actual code of our software application.

Choose the most efficient algorithm

The Big O Notation describes how an algorithm's runtime or space requirements grow as the input data size increases. This enables developers to choose the most efficient algorithm for a specific use case.
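
A classic illustration is membership testing: a lookup in a list is O(n), while a lookup in a set is O(1) on average:

```python
# Illustration of why algorithmic complexity matters: checking membership in a
# list scans the whole list, while a set lookup is a constant-time hash lookup.
import timeit

items_list = list(range(1_000_000))
items_set = set(items_list)

print(timeit.timeit(lambda: 999_999 in items_list, number=100))  # O(n) scan
print(timeit.timeit(lambda: 999_999 in items_set, number=100))   # O(1) lookup
```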

Optimise database schemas and queries

Databases are powerful and highly optimised. At the previous architecture level, we learned they require careful configuration to operate efficiently.

There is often vast potential for performance improvements at the code level, which we consider database schemas and queries to be part of. Especially for relational databases like MySQL or Postgres, it’s important to log slow queries and analyse them with built-in tools like the EXPLAIN statement to gain valuable insights into where the queries can be optimised.

Another often overlooked part is the usage of table indexes, which can drastically reduce the execution time of queries, especially on large tables.
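
The sketch below uses SQLite (bundled with Python) to keep the example self-contained; for MySQL or Postgres the analysis statement is EXPLAIN, but the idea is the same:

```python
# Sketch: EXPLAIN QUERY PLAN in SQLite shows whether a query scans the whole
# table, and how adding an index changes the plan.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")

query = "SELECT total FROM orders WHERE customer_id = ?"
print(conn.execute(f"EXPLAIN QUERY PLAN {query}", (42,)).fetchall())  # full table scan

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(conn.execute(f"EXPLAIN QUERY PLAN {query}", (42,)).fetchall())  # uses the index
```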

Abstraction layers like ORMs usually reduce development time but disconnect developers from the underlying database, so it is worth inspecting the queries they actually generate.

Agree on performance budgets

Add performance considerations to your acceptance criteria to prioritise code efficiency. A performance budget defines certain limits depending on the use case. For example, it represents a website's maximum loading time or the total size of all downloaded assets. Performance budgets can be integrated into your Continuous Integration pipeline and help you detect performance regressions early.
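
A performance budget check in a CI pipeline can be as simple as the following sketch; the URL and the 500 kB budget are illustrative choices:

```python
# Sketch of a performance-budget check for a CI pipeline: fail the build if the
# homepage payload exceeds an agreed size budget. URL and budget are illustrative.
import sys
import urllib.request

BUDGET_BYTES = 500 * 1024  # agreed budget: 500 kB

with urllib.request.urlopen("https://example.com/") as response:
    page_size = len(response.read())

if page_size > BUDGET_BYTES:
    print(f"FAIL: page weighs {page_size} bytes, budget is {BUDGET_BYTES} bytes")
    sys.exit(1)
print(f"OK: page weighs {page_size} bytes")
```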

Leverage code profiling

Profiling will help you identify long-running functions within your code. Resolving those bottlenecks can drastically reduce the execution time and thus increase the efficiency of your software. Commercial solutions even allow you to profile your application in production environments. 
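
For example, Python's built-in cProfile module already goes a long way; process_report below is a hypothetical stand-in for real work:

```python
# Sketch: profiling a function with Python's built-in cProfile to find the
# call paths that consume the most time.
import cProfile
import pstats

def process_report():
    return sum(i * i for i in range(1_000_000))  # stand-in for real work

profiler = cProfile.Profile()
profiler.enable()
process_report()
profiler.disable()

stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(10)  # show the 10 heaviest call paths
```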

Use third-party code carefully

Various third-party packages or libraries exist to solve virtually every problem. However, many packages have become increasingly feature-rich over time, which results in slower code execution and bigger file sizes. 

That’s why the most popular package, or the one you are most familiar with, is not necessarily the most efficient. If you want to use a third-party package, check for more sustainable alternatives, for example by comparing candidates with a code profiler or other performance benchmarking tools.

Prefer native functionality

Particularly for programming languages like Python, Javascript, or PHP, which are not compiled but interpreted, it’s worth checking if there is a built-in way to solve a problem before you use a third-party package or devise your solution. 

Those native functions might not be as easy to use as their counterparts, but they usually offer unparalleled performance and can help solve performance problems. You also reduce the dependency on external code.
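
A small example: concatenating strings in a Python-level loop versus the natively implemented str.join:

```python
# Example: the built-in str.join is implemented natively and is typically much
# faster (and allocates less) than concatenating strings in a Python-level loop.
parts = [str(i) for i in range(10_000)]

# Slower: builds many intermediate string objects
result = ""
for part in parts:
    result += part + ","

# Faster: a single native call
result = ",".join(parts)
```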

Cleanup your code

Over time, software accumulates clutter and gains unnecessary weight. Legacy code should be removed as soon as it is no longer needed. Less code generally means better performance, and you also save time in the CI/CD pipeline thanks to fewer tests and faster code quality checks. Removing unnecessary packages reduces the data downloaded on every update or deployment.

Level 4 - Carbon Awareness

Although it would fit into the chapter Level 2 - Architecture, the concept of Carbon-Aware Computing is still very young and rapidly evolving, so we want to give it some extra space.

Understand Carbon Awareness

During the transition phase from fossil fuels to renewable energy, the carbon intensity (i.e. the “dirtiness” of the electricity used) varies depending on the availability of clean energy sources.

In some countries, the output of renewable energies is occasionally curtailed because there is not enough demand for them, and the electricity grid is not yet able to distribute the energy evenly throughout the country.

Until sufficient storage capacity is available, and probably also beyond that point, the basic idea of Carbon Awareness is to use green energy when available and reduce the demand when the carbon intensity is high.

Data sources, such as ElectricityMaps or WattTime, inform us about the current carbon intensity in many countries. Both services offer Web APIs via which the information can be retrieved. Other services are region-specific, like Energy Charts for Germany or Carbon Intensity for the UK.
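
As a sketch, fetching the current carbon intensity could look like the following; the endpoint, query parameter, header name, and response field are assumptions for illustration, so check the provider's API documentation for the exact shape:

```python
# Sketch: fetching the current carbon intensity from a provider such as
# ElectricityMaps. URL, query parameter, header name, and response field are
# assumptions for illustration; consult the provider's API docs for specifics.
import json
import urllib.request

ZONE = "DE"
API_TOKEN = "your-api-token"  # hypothetical credential

request = urllib.request.Request(
    f"https://api.electricitymap.org/v3/carbon-intensity/latest?zone={ZONE}",
    headers={"auth-token": API_TOKEN},
)
with urllib.request.urlopen(request) as response:
    data = json.load(response)

print(f"Current carbon intensity in {ZONE}: {data['carbonIntensity']} gCO2eq/kWh")
```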

How can we make use of this information? The Green Software Foundation, for example, outlines several approaches to lowering carbon emissions, which we will discuss in the following sections.

Shape customer demands

The term demand shaping may sound counterintuitive at first: how could we shape the demand of our customers? By demand, however, the Green Software Foundation means the demand for electricity rather than the demand for products.

Companies that stream data, like video or audio, could, for example, reduce the default quality with which the content is delivered when the carbon intensity is high. 

Shift workloads into the future

If the current carbon intensity is high, chances are good that, depending on the weather conditions, the electricity will be greener again in a couple of hours because fewer fossil fuel power plants will have to fill the gap. But how do we know when more renewable energy will be available again?

Luckily, all the above-mentioned services provide carbon intensity forecasts for the next 12 hours or even longer. This is possible by combining different types of information, like weather forecasts or the day-ahead prices on the energy markets.

If the electricity is currently dirty, we might be able to postpone workloads until we know it is greener again. This approach is called Temporal shifting. A major challenge, however, is finding tasks that can actually be delayed. We will discuss this problem in the next chapter.
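
A minimal sketch of such a scheduling decision (threshold and forecast values are illustrative assumptions):

```python
# Sketch of temporal shifting: run a deferrable job now if the grid is clean
# enough, otherwise wait for the greener hour predicted by the forecast.
# The threshold and the forecast data are illustrative assumptions.

THRESHOLD_GCO2_PER_KWH = 250

def pick_start_hour(forecast: list[tuple[int, float]]) -> int:
    """forecast: list of (hours_from_now, predicted_gCO2eq_per_kWh)."""
    now_intensity = forecast[0][1]
    if now_intensity <= THRESHOLD_GCO2_PER_KWH:
        return 0  # clean enough: run immediately
    # otherwise postpone to the greenest slot within the forecast window
    return min(forecast, key=lambda entry: entry[1])[0]

forecast = [(0, 420.0), (3, 310.0), (6, 180.0), (9, 240.0)]
print(f"Start the batch job in {pick_start_hour(forecast)} hours")
```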

Use greener locations

A quick look at the carbon intensity map of the world shows us that some countries have significantly greener electricity than others for various reasons. Since we operate software in the cloud, it is easy to use data centres worldwide and benefit from their lower carbon emissions.

This approach is called Spatial shifting and is relatively easy to implement, yet it also comes with some challenges. Firstly, if everyone ran their software in the greenest region, demand would soon exceed the capacity available there, so it is not a holistic solution.

Secondly, farther-away regions don’t come without a price: data simply has to travel longer distances, which increases emissions and raises latencies, possibly leaving users dissatisfied. These green regions are also often more expensive than others.

Spatial shifting can significantly reduce emissions, but it is not the ideal approach for all problems. Use cases could include computation-heavy workloads, such as training large language models.

Know the problems with Carbon Awareness

The concept of Carbon Awareness is not the silver bullet to green your software, as it comes with a few challenges. Firstly, if it gets applied unregulated on a large scale, it could have unpredictable effects on the electricity grid's stability.

Another challenge is the users' acceptance: They are accustomed to their actions being carried out directly and immediately. Will they accept waiting a couple of hours until the energy mix is greener again? We will investigate this problem more closely in the next chapter.

Level 5 - Product design

With the last level, we leave the realm of technology and leap into the domain of Product and User-Experience design. Similar to most, if not all, industries, we can’t hope to fight climate change through the use of technology alone. It is necessary to change how we use products, too.

Reducing the streaming quality of videos during times of high carbon intensity, mentioned in the previous chapter, is already an example of Sustainable Product Design. Although the impact of this measure can be significant, we ranked it as Level 5 since it requires extensive coordination with other departments and stakeholders.

Inform your users

Measures like reducing the video streaming bitrate should not be taken without informing the users and other important stakeholders. Otherwise, you will end up with unhappy customers complaining about the low video quality and uninformed helpdesk employees who can’t help but forward the bug reports to the engineering teams.

Informing users can be combined with raising awareness and understanding. For example, you can feature the current carbon intensity in your application by signalling times of high carbon intensity.

Give them a choice

Your customers should be able to actively opt in to (or out of) carbon-saving measures, just as they are used to from, e.g., their washing machines, where they can select an eco mode that takes longer but saves energy. Would your software's users accept waiting for specific jobs to be executed if they knew it would save emissions?

But keep in mind: It doesn’t help if customers churn because of too rigid curtailments. Sometimes, getting the job done as quickly as possible is necessary, even if this results in higher carbon emissions.

Design for sustainability

Designing your product with sustainability in mind can significantly reduce carbon emissions. For example, does the data dashboard always need to be up to date and require constant database queries? Or would the user be okay with data that can be a couple of minutes old but allows the backend to cache requests and thus save resources?

Other aspects, like using recognised design patterns and avoiding deceptive ones, are explained in the User-Experience section of the Web Sustainability Guidelines (WSG) 1.0. At this point, Green Software Engineering also blends into areas such as accessibility and ethics, which are beyond the scope of this article but are nevertheless essential to consider.

Defining a strategy

Now that we have the necessary knowledge and tools to reduce software-related carbon emissions, the next step is to define a strategy for applying them in your organisation. At this point, we assume that management has already agreed to introduce Green IT measures, i.e., you have the buy-in.

A good starting point would be analysing the emissions/costs data to identify those areas of your infrastructure that bear the most significant cost and carbon reduction potential. Once a couple of goals have been formulated, this can be laid out into a project plan.

Potential project KPIs could be total carbon emissions and infrastructure costs per week. Tracked over a longer period of time, they quickly show progress.

If you track the working hours on the project, you can calculate the ROI at the end of the project.

Keep it running

Yet, a single Green IT project is not sufficient. Once the first carbon savings have been achieved, it is crucial not to stop but to keep the momentum going. As the typical IT landscape is ever-changing, constant monitoring and analysis are required to identify more reduction potentials and, equally important, to avoid increased carbon emissions, e.g., through configuration problems or simply unaware employees.

A simple measure would be to add the Green IT KPIs to the regular management reporting. This also keeps sustainability in the minds of decision-makers.

Takeaways: How to decarbonise your IT

This article explained how software's carbon footprint can be reduced by making it more carbon-efficient. The first step is to measure the emissions of your software setup.

Once you have reliable emission figures, you can reduce them by taking action on the following levels:

  • Level 1: Avoid unnecessary resource usage
  • Level 2: Use a carbon-efficient software architecture
  • Level 3: Optimise software performance
  • Level 4: Consider carbon-aware software engineering
  • Level 5: Design your product in an environmentally friendly way

The levels are ordered by the difficulty of implementation and the expected carbon-saving potential to give you a first orientation. For example, many Level 1 actions are relatively easy to implement while highly likely to achieve significant emissions reductions.

Lastly, to apply the Green IT measures to your application, you should define a strategy for implementing them. This could be as easy as a simple project plan, where you define concrete outcomes and make the process transparent through KPIs.

Once you finish your first project, the process must start again because Green IT is no one-off project but a shift in mindset and operations.

Start your company's decarbonisation journey today. Book a demo to experience the best of sustainability solutions.
