Optimizing CPU Load in C#: Key Approaches and Strategies

Everything you wanted to know about optimizing algorithms and CPU load in C#

Hi everyone! Last time we looked at optimizing C# code from the point of view of RAM usage. Efficient use of computer resources, and of the central processing unit (CPU) in particular, is one of the core concerns of software development. This time we will talk about reducing CPU load when writing C# code, which can significantly improve application performance and lower power consumption, something especially critical on mobile platforms and the web. In this article, we will look at several key approaches and strategies for optimizing CPU load in C#.

Using Efficient Algorithms

One of the most important aspects of CPU load optimization is choosing efficient algorithms. When writing C# code, make sure you use algorithms with minimal runtime complexity. For example, when searching for an element in a large sorted array, use an algorithm with O(log n) or O(1) time complexity, such as binary search, instead of an O(n) algorithm such as sequential (linear) search.

Search Algorithms

Linear search - also known as sequential search. This simple algorithm checks each element in a collection until the desired value is found. Linear search works on both sorted and unsorted collections, but it is only practical for small ones.

public static int LinearSearch(int[] arr, int target) {
    for (int i = 0; i < arr.Length; i++)
        if (arr[i] == target)
            return i;

    return -1;
}

Binary search - a more efficient search algorithm that halves the search interval at each iteration. Binary search requires the collection to be sorted in ascending or descending order.

public static int BinarySearch(int[] arr, int target) {
    int left = 0;
    int right = arr.Length - 1;

    while (left <= right){
        int mid = left + (right - left) / 2; // avoids int overflow on very large arrays

        if (arr[mid] == target)
            return mid;
        else if (arr[mid] < target)
            left = mid + 1;
        else
            right = mid - 1;
    }

    return -1; // target not found
}

Interpolation search - a variant of binary search that works best on uniformly distributed collections. It uses an interpolation formula to estimate the position of the target element.

public static int InterpolationSearch(int[] arr, int target) {
    int left = 0;
    int right = arr.Length - 1;

    while (left <= right && target >= arr[left] && target <= arr[right]) {
        // Guard against division by zero when the remaining values are all equal
        if (arr[right] == arr[left])
            return arr[left] == target ? left : -1;

        int pos = left + ((target - arr[left]) * (right - left)) / (arr[right] - arr[left]);

        if (arr[pos] == target)
            return pos;
        else if (arr[pos] < target)
            left = pos + 1;
        else
            right = pos - 1;
    }

    return -1; // target not found
}

Jump search - another search algorithm for sorted collections. Instead of halving the interval, it jumps ahead by a fixed number of steps (typically the square root of the collection size) and then scans linearly within the block where the target may be.

public static int JumpSearch(int[] arr, int target) {
    int n = arr.Length;
    int step = (int)Math.Sqrt(n);
    int prev = 0;

    while (arr[Math.Min(step, n) - 1] < target) {
        prev = step;
        step += (int)Math.Sqrt(n);

        if (prev >= n)
            return -1; // target not found
    }

    while (arr[prev] < target) {
        prev++;

        if (prev == Math.Min(step, n))
            return -1; // target not found
    }


    if (arr[prev] == target)
        return prev;

    return -1; // target not found
}

As you can see, there are many search algorithms, and each is suited to different situations. Binary search is the most commonly used because it is fast and well established, but that does not mean you must use it exclusively: choose the algorithm that matches your data and how you access it.
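
In practice you rarely need to hand-roll these algorithms: the .NET base class library already provides a binary search over sorted arrays. Below is a minimal illustrative sketch (the sample data is made up) using Array.BinarySearch, which returns a non-negative index on a hit and a negative value otherwise:

int[] sorted = { 1, 3, 5, 7, 9, 11 }; // must be sorted in ascending order

// Built-in binary search from the base class library: O(log n)
int index = Array.BinarySearch(sorted, 7);   // 3, because 7 is at index 3
int missing = Array.BinarySearch(sorted, 8); // negative: bitwise complement of the insertion point

Console.WriteLine(index);    // 3
Console.WriteLine(~missing); // 4, the index where 8 would be inserted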

Sorting Algorithms

Bubble sort - a straightforward sorting algorithm that iterates through a list, comparing adjacent elements and swapping them if they are in the incorrect order. This process is repeated until the list is completely sorted. Below is the C# code implementation for bubble sort:

public static void BubbleSort(int[] arr) {
    int n = arr.Length;
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
            }
        }
    }
}

Selection sort - a comparison-based sorting algorithm that operates in place. It partitions the input list into two sections: the left end represents the sorted portion, initially empty, while the right end denotes the unsorted portion of the entire list. The algorithm works by locating the smallest element within the unsorted section and swapping it with the leftmost unsorted element, progressively expanding the sorted region by one element.

public static void SelectionSort(int[] arr) {
    int n = arr.Length;
    for (int i = 0; i < n - 1; i++) {
        int minIndex = i;
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[minIndex])
                minIndex = j;
        }

        int temp = arr[i];
        arr[i] = arr[minIndex];
        arr[minIndex] = temp;
    }
}

Insertion sort - a basic sorting algorithm that builds the sorted array gradually, one item at a time. It is less efficient than more advanced algorithms like quicksort, heapsort, or merge sort, especially for large lists. The algorithm works by taking each element in turn and inserting it into its correct position within the already-sorted left part of the array, shifting larger elements one position to the right.

public static void InsertionSort(int[] arr) {
    int n = arr.Length;
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}

Quicksort - a sorting algorithm based on the divide-and-conquer approach. It begins by choosing a pivot element from the array and divides the remaining elements into two sub-arrays based on whether they are smaller or larger than the pivot. These sub-arrays are then recursively sorted.

public static void QuickSort(int[] arr, int left, int right){
    if (left < right) {
        int pivotIndex = Partition(arr, left, right);
        QuickSort(arr, left, pivotIndex - 1);
        QuickSort(arr, pivotIndex + 1, right);
    }
}

private static int Partition(int[] arr, int left, int right){
    int pivot = arr[right];
    int i = left - 1;

    for (int j = left; j < right; j++) {
        if (arr[j] < pivot) {
            i++;
            int temp = arr[i];
            arr[i] = arr[j];
            arr[j] = temp;
        }
    }

    int temp2 = arr[i + 1];
    arr[i + 1] = arr[right];
    arr[right] = temp2;
    return i + 1;
}

Merge sort - a sorting algorithm based on the divide-and-conquer principle. It begins by dividing an array into two halves, recursively applying itself to each half, and then merging the two sorted halves back together. The merge operation plays a crucial role in this algorithm.

public static void MergeSort(int[] arr, int left, int right){
    if (left < right) {
        int middle = (left + right) / 2;
        MergeSort(arr, left, middle);
        MergeSort(arr, middle + 1, right);
        Merge(arr, left, middle, right);
    }
}

private static void Merge(int[] arr, int left, int middle, int right) {
    int[] temp = new int[arr.Length];
    for (int i = left; i <= right; i++){
        temp[i] = arr[i];
    }

    int j = left;
    int k = middle + 1;
    int l = left;

    while (j <= middle && k <= right){
        if (temp[j] <= temp[k]) {
            arr[l] = temp[j];
            j++;
        } else {
            arr[l] = temp[k];
            k++;
        }
        l++;
    }

    // Copy any remaining elements from the left half.
    // Remaining elements from the right half are already in their correct positions.
    while (j <= middle) {
        arr[l] = temp[j];
        l++;
        j++;
    }
}

Like search algorithms, sorting algorithms come in many varieties. Each has its own trade-offs in time complexity, memory usage, and stability, so choose the one that fits your particular case.
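
And again, for most production code you can rely on the sorting routines built into .NET instead of writing your own. A minimal illustrative sketch (the data is made up) using Array.Sort and List<T>.Sort with a custom comparison:

int[] numbers = { 5, 2, 9, 1, 7 };

// Built-in sort (an introsort hybrid): O(n log n) on average
Array.Sort(numbers);

// Sorting a list with a custom comparison, descending order in this case
List<int> list = new List<int> { 5, 2, 9, 1, 7 };
list.Sort((a, b) => b.CompareTo(a));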

Loop Optimization

Loops are one of the most common sources of CPU load. When writing loops in C#, try to minimize the amount of work done inside the loop body and avoid redundant iterations. Also pay attention to how loops are nested: careless nesting makes execution time grow very quickly with the input size and can also contribute to the memory problems I wrote about in the previous article.

Suppose we have a loop that performs a calculation on the elements of an array. We can optimize this loop by avoiding unnecessary repeated reads and unnecessary calls to properties and methods of objects inside the loop:

// Our array and accumulator
int[] numbers = { 1, 2, 3, 4, 5 };
int sum = 0;

// Less optimal loop: the element is read twice on every iteration
for (int i = 0; i < numbers.Length; i++) {
    sum += numbers[i] * numbers[i];
}

// Better loop: the element is read once, and Length is hoisted into a local variable
for (int i = 0, len = numbers.Length; i < len; i++) {
    int num = numbers[i];
    sum += num * num;
}

This example demonstrates how to avoid repeated reads and repeated property and method calls inside a loop: the element is copied into the local variable num once, and the array's Length property is read once into the local variable len instead of on every iteration. In hot loops over large amounts of data such micro-optimizations can add up, but always measure: for plain arrays the JIT compiler often performs these optimizations on its own.
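
The same idea matters far more when the call inside the loop condition or body is genuinely expensive, such as a property that scans a collection or a method that performs I/O. A small hypothetical sketch (ExpensiveCount and the surrounding method are made up for illustration):

// Hypothetical expensive call whose result does not change while the loop runs
static int ExpensiveCount(List<int> data) {
    // imagine a database query or a full scan here
    return data.Count;
}

static long SumSquares(List<int> data) {
    long sum = 0;

    // Hoist the expensive call out of the loop instead of putting it in the loop condition
    int count = ExpensiveCount(data);
    for (int i = 0; i < count; i++) {
        int value = data[i];
        sum += (long)value * value;
    }

    return sum;
}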

Use of Parallelism

C# has powerful tools for parallelism, such as multithreading, the Task Parallel Library, and concurrent collections. By parallelizing computations you can make efficient use of multiprocessor systems and reduce overall execution time. However, be careful when using parallelism: improper thread management can lead to race conditions, other synchronization problems, and memory leaks.

So, let's look at a bad example of parallelism in C#:

long sum = 0;
int[] numbers = new int[1000000];
Random random = new Random();

// Just fill random numbers for example
for (int i = 0; i < numbers.Length; i++) {
    numbers[i] = random.Next(100);
}

// Bad example: every iteration updates the shared variable 'sum' without synchronization,
// so concurrent threads race on it and the result is non-deterministic
Parallel.For(0, numbers.Length, i => {
    sum += numbers[i] * numbers[i];
});

And an improved example:

long sum = 0;
object locker = new object();
int[] numbers = new int[1000000];
Random random = new Random();

// Just fill random numbers for the example
for (int i = 0; i < numbers.Length; i++) {
    numbers[i] = random.Next(100);
}

// Each thread accumulates into its own local partial sum,
// and only the final merge into 'sum' is synchronized with a lock
Parallel.For(0, numbers.Length, () => 0L, (i, state, partialSum) => {
    partialSum += numbers[i] * numbers[i];
    return partialSum;
}, partialSum => {
    lock (locker) {
        sum += partialSum;
    }
});

In this improved example we still use Parallel.For to parallelize the calculation, but instead of modifying the shared variable sum directly, each thread accumulates into its own local variable partialSum. When a thread's portion of the work completes, its partial sum is added to the shared variable sum inside a lock, so access to the shared state is synchronized. This way we avoid race conditions and ensure the parallel program produces a correct result.
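
For simple aggregations like this, you can also let PLINQ handle the partitioning and merging of partial results for you. A minimal sketch, assuming the same numbers array as above (requires System.Linq):

// PLINQ partitions the data, computes partial sums per partition, and merges them
long plinqSum = numbers.AsParallel().Sum(x => (long)x * x);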

Don't forget that threads and the objects that coordinate them still need to be shut down and cleaned up: stop background work when it is no longer needed and dispose of IDisposable resources (for example with a using statement) to avoid leaks.

If you develop projects in Unity, I really recommend taking a look at UniTask.

Data Caching

Efficient use of the CPU cache can significantly improve the performance of your application. When working with large amounts of data, try to minimize memory accesses and maximize data locality. This can be achieved by caching frequently used data and optimizing access to it.

Let's look at an example:

// Our cache dictionary
static Dictionary<int, int> cache = new Dictionary<int, int>();

// Example of an expensive operation with caching
static int ExpensiveOperation(int input) {
    // Return the cached result if we have already computed it (a single lookup with TryGetValue)
    if (cache.TryGetValue(input, out int cachedResult)) {
        return cachedResult;
    }

    // The expensive operation itself (it may be a web request or something else)
    int result = input * input;

    // Save the result to the cache
    cache[input] = result;
    return result;
}

In this example, we use a cache dictionary to store the results of expensive operations. Before executing the operation, we check whether a result for the given input is already in the cache. If it is, we return it straight from the cache, which avoids re-executing the operation and reduces CPU load. If it is not, we perform the operation, store the result in the cache, and then return it.

This example demonstrates how data caching (often called memoization) can reduce CPU overhead by avoiding repeated computations for the same input data. If you only need to remember which values have already been seen, without storing an associated result, a HashSet<T> gives you the same fast lookups with less overhead.
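
Note that Dictionary<TKey, TValue> is not thread-safe. If the cache may be hit from several threads at once, a sketch along these lines using ConcurrentDictionary (same hypothetical squaring operation as above) is a safer choice:

// Thread-safe cache (requires System.Collections.Concurrent)
static ConcurrentDictionary<int, int> concurrentCache = new ConcurrentDictionary<int, int>();

static int ExpensiveOperationThreadSafe(int input) {
    // GetOrAdd runs the value factory only when the key is missing
    // (under heavy contention the factory may occasionally run more than once)
    return concurrentCache.GetOrAdd(input, key => key * key);
}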

Additional Optimization in Unity

Of course, you should not forget that if you work with Unity, you also need to take into account the rendering process and the game engine itself. When optimizing CPU usage in Unity, I advise paying attention to the following aspects first of all:

  1. Try to minimize the use of coroutines and replace them with asynchronous calculations, for example with UniTask.

  2. Excessive use of high-poly models and unoptimized shaders causes overload, which strains the rendering process.

  3. Use simple colliders and reduce real-time physics calculations.

  4. Optimize UI overdraw. Do not use Animators on UI, simplify the rendering hierarchy, split canvases, use sprite atlases, and avoid render targets and rich text.

  5. Synchronous and on-the-fly loading of large assets disrupts gameplay continuity and hurts playability. Load assets asynchronously, for example with Addressable Assets.

  6. Avoiding redundant operations. Frequently calling functions like Update() or performing unnecessary calculations can slow down a game. It's essential to ensure that operations are only executed when needed.

  7. Object pooling. Instead of continuously instantiating and destroying objects, which can be CPU-intensive, reuse objects via an object pool (see the sketch after this list).

  8. Optimize loops. Nested loops or loops that iterate over large datasets should be optimized or avoided when possible.

  9. Use LODs (Levels of Detail). Instead of always rendering high-poly models, developers can use LODs to display lower-poly models when objects are farther from the camera.

  10. Compress textures. High-resolution textures can be memory-intensive. Compressing them without significant quality loss can save valuable resources. Use Crunch Compression.

  11. Optimize animations. Developers should streamline animation as much as possible, as well as remove unnecessary keyframes, and use efficient rigs.

  12. Garbage collection. While Unity's garbage collector helps manage memory, frequent garbage collection can cause performance hitches. Minimize object allocations during gameplay to reduce the frequency of garbage collection.

  13. Use static variables. A static field is allocated once and lives for the lifetime of the application, so caching frequently used data in it avoids repeated allocations and lookups.

  14. Unload unused assets. Regularly unload assets that are no longer needed using Resources.UnloadUnusedAssets() to free up memory.

  15. Optimize shaders. Custom shaders can enhance visuals but can be performance-heavy. Ensure they are optimized and use Unity's built-in shaders when possible.

  16. Use batching. Unity can batch small objects that use the same material, reducing draw calls and improving performance.

  17. Optimize AI pathfinding. Instead of calculating paths every frame, do it at intervals or when specific events occur.

  18. Use layers. Ensure that physics objects only interact with layers they need to, reducing unnecessary calculations.

  19. Use scene streaming. Instead of loading an entire level at once, stream parts based on the player's location, ensuring smoother gameplay.

  20. Optimize level geometry. Ensure that the game's levels are designed with performance in mind, using modular design and avoiding overly complex geometry.

  21. Cull non-essential elements. Remove or reduce the detail of objects that don't significantly impact gameplay or aesthetics.

  22. Use shader compilation pragma directives to adapt shader compilation to each target platform.

  23. Bake your lightmaps; do not use real-time lighting.

  24. Minimize reflections and reflection probes, and do not use real-time reflections.

  25. Shadow casting can be disabled per Mesh Renderer and light. Disable shadows whenever possible to reduce draw calls.

  26. Reduce unnecessary string creation and manipulation, and avoid parsing string-based data files such as JSON and XML at runtime.

  27. Use GameObject.CompareTag instead of manually comparing a string with GameObject.tag (reading tag returns a new string and creates garbage).

  28. Avoid passing a value-typed variable where a reference type (object) is expected: the value type is implicitly boxed into a temporary object, and that boxing creates garbage.

  29. Avoid LINQ and regular expressions if performance is an issue.
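
To illustrate the object pooling point above, here is a minimal, hypothetical pooling sketch for Unity components (recent Unity versions also ship a built-in UnityEngine.Pool.ObjectPool<T> that you may prefer):

// A minimal, illustrative object pool for Unity components (requires System.Collections.Generic)
public class SimplePool<T> where T : UnityEngine.Component {
    private readonly T prefab;
    private readonly Stack<T> inactive = new Stack<T>();

    public SimplePool(T prefab) {
        this.prefab = prefab;
    }

    // Reuse an inactive instance if one is available, otherwise instantiate a new one
    public T Get() {
        T instance = inactive.Count > 0 ? inactive.Pop() : UnityEngine.Object.Instantiate(prefab);
        instance.gameObject.SetActive(true);
        return instance;
    }

    // Instead of destroying the object, deactivate it and keep it for later reuse
    public void Release(T instance) {
        instance.gameObject.SetActive(false);
        inactive.Push(instance);
    }
}

Bullets or particles spawned from such a pool can be released back instead of destroyed, avoiding both Instantiate/Destroy overhead and garbage collection spikes.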

Profiling and Optimization

Finally, don't forget to profile your application and look for the bottlenecks where most of the CPU time is spent. There are many profiling tools for C#, such as dotTrace, ANTS Performance Profiler, and the Unity Profiler, that can help you identify and fix performance problems.
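
For quick, targeted measurements you can also time a suspect code path directly with System.Diagnostics.Stopwatch before reaching for a full profiler. A minimal sketch (DoSuspectedHotPath is a hypothetical method you want to measure):

// Requires System.Diagnostics
Stopwatch stopwatch = Stopwatch.StartNew();

DoSuspectedHotPath(); // hypothetical method under measurement

stopwatch.Stop();
Console.WriteLine($"Elapsed: {stopwatch.ElapsedMilliseconds} ms");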

In Conclusion

Optimizing CPU load in C# code is an art that requires balancing performance, readability, and maintainability. By choosing the right algorithms, optimizing loops, using parallelism, caching data, and profiling, you can create high-performance applications on the .NET platform or in Unity.

And of course, thank you for reading this article. I would be happy to discuss various aspects of optimization and code with you.


You can also support writing tutorials, articles and see ready-made solutions for your projects:

My Discord | My Blog | My GitHub | Buy me a Beer

BTC: bc1qef2d34r4xkrm48zknjdjt7c0ea92ay9m2a7q55

ETH: 0x1112a2Ef850711DF4dE9c432376F255f416ef5d0