Real World Coding Issues - Design, Diagnostics and Other Common Code Issues

In Part 2 of this series on real-world coding issues, I highlighted the violations in the "Style" category that I discovered in the codebase from a contract I worked on. The codebase exhibited over 125K code violations, and I provided statistical data to support this in Part 1. If you have not done so already, I encourage you to read Part 2 by clicking here: https://www.c-sharpcorner.com/article/real-world-coding-issues-style-and-performance-issues/

In this article, I will discuss common coding issues that I frequently encounter in the projects I work on. These issues fall into the "Design" and "Diagnostics" categories of code analyzers, but it is worth noting that many of them can also impact code performance, an area in which I have expertise. In the Odds & Ends section, I will discuss other common issues I see in the solutions I work on.

Design

Enums Should Always Have a Zero Value

When reviewing code, I frequently come across a common issue: the absence of a default or "not set" value for Enums. This consideration is crucial in most cases as it helps indicate when a value has not been assigned. Let me provide an example to illustrate the problem:

public enum UserRole
{
    Administrator = 1,
    Guest = 2,
    Service = 3
}

This serves as a perfect example because, in this scenario, a user's role should initially be set as "none" until it is assigned by the code after the user has been validated. Here's the recommended approach to address this issue:

public enum UserRole
{
    NotAuthorized = 0,
    Administrator = 1,
    Guest = 2,
    Service = 3
}

Here is a comprehensive list of reasons highlighting the significance of including a zero value when defining an Enum:

  1. Default initialization: When an Enum variable is declared without explicitly assigning a value, it is automatically initialized to zero, the default for its underlying integral type. Defining a zero member ensures that this default is a valid, named value.
  2. Implicit conversions: Enums in .NET are essentially a set of named integral constants, where each constant is assigned a unique value. The constant 0 is the only value that implicitly converts to an enum type; every other conversion between an enum and its underlying integral type (usually int) requires an explicit cast. Defining a zero member gives that implicit default a meaningful name.
  3. Error handling and default cases: Enum values are often used in switch statements or as return values from methods. By convention, developers often define a "default" or "unknown" value as the zero value, which can be used as a catch-all case or to indicate an invalid or uninitialized state.

When Enums are used as parameters for methods or properties, it is crucial to validate them. In .NET, the underlying type of an Enum is int by default. Consequently, even though an enum may only define the values 0, 1, 2 and 3, any integer can be cast to the enum type and passed into the method. To ensure proper validation, it is recommended to utilize the Enum.IsDefined() method. Alternatively, you can leverage the ArgumentDefined() or CheckIsDefined() functions available in the Spargine Core NuGet package.
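Here is a minimal sketch of that validation using the UserRole enum above (the method itself is just for illustration):

public void AssignRole(UserRole role)
{
    // Any integer can be cast to UserRole, so reject values that are not defined.
    if (!Enum.IsDefined(typeof(UserRole), role))
    {
        throw new ArgumentOutOfRangeException(nameof(role), "Value is not a defined UserRole.");
    }

    // Safe to use the role from here on.
}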

Diagnostics

The Rijndael and RijndaelManaged Types Are Superseded

Since September 2021, the Rijndael and RijndaelManaged types have been marked obsolete in .NET. Rijndael is the algorithm that AES was standardized from, but it permits block sizes other than the 128-bit block AES requires, so it can produce output that is not AES-compatible. Instead, use the AES types, which offer a secure, validated implementation; on modern .NET, Aes.Create() is the preferred way to obtain one (AesManaged is itself marked obsolete in .NET 6 and later). Below is a code example demonstrating the use of both approaches:

using System;
using System.Security.Cryptography;
using System.Text;
public class Program
{
    public static void Main()
    {
        // Encryption using RijndaelManaged (not recommended; obsolete as of .NET 6)
        string plainText = "Hello, World!";
        byte[] key = Encoding.UTF8.GetBytes("0123456789ABCDEF"); // 16 bytes = 128-bit key (never hard-code keys in production)
        byte[] iv = Encoding.UTF8.GetBytes("1234567890ABCDEF");  // 16 bytes = 128-bit IV

        using (var rijndael = new RijndaelManaged())
        {
            rijndael.Key = key;
            rijndael.IV = iv;

            byte[] encryptedData = Encrypt(plainText, rijndael);
            string decryptedText = Decrypt(encryptedData, rijndael);

            Console.WriteLine("RijndaelManaged - Encrypted Data: " + Convert.ToBase64String(encryptedData));
            Console.WriteLine("RijndaelManaged - Decrypted Text: " + decryptedText);
        }

        Console.WriteLine();

        // Encryption using AES (recommended). Aes.Create() returns the platform's
        // preferred implementation and avoids the obsolete AesManaged class.
        using (var aes = Aes.Create())
        {
            aes.Key = key;
            aes.IV = iv;

            byte[] encryptedData = Encrypt(plainText, aes);
            string decryptedText = Decrypt(encryptedData, aes);

            Console.WriteLine("Aes - Encrypted Data: " + Convert.ToBase64String(encryptedData));
            Console.WriteLine("Aes - Decrypted Text: " + decryptedText);
        }
    }

    public static byte[] Encrypt(string plainText, SymmetricAlgorithm algorithm)
    {
        byte[] encryptedData;

        using (var encryptor = algorithm.CreateEncryptor())
        {
            byte[] plainBytes = Encoding.UTF8.GetBytes(plainText);

            encryptedData = encryptor.TransformFinalBlock(plainBytes, 0, plainBytes.Length);
        }

        return encryptedData;
    }

    public static string Decrypt(byte[] encryptedData, SymmetricAlgorithm algorithm)
    {
        string decryptedText;

        using (var decryptor = algorithm.CreateDecryptor())
        {
            byte[] decryptedBytes = decryptor.TransformFinalBlock(encryptedData, 0, encryptedData.Length);

            decryptedText = Encoding.UTF8.GetString(decryptedBytes);
        }

        return decryptedText;
    }
}

Failing to stay updated on the most secure algorithms can result in your company being fined during a security audit, as I witnessed during my time at a previous company. To observe how I implemented this in Spargine, please refer to the EncryptionHelper class in DotNetTips.Spargine.6.Core.

UTF-7 Encoding Is Insecure

In the codebase I reviewed for this article, there were multiple violations due to using UTF-7 encoding. UTF-7 encoding is now prohibited by many specifications, and .NET 5 marked Encoding.UTF7 obsolete. Here are more reasons why it should be avoided:

  1. Limited character support: UTF-7 is designed to encode only a subset of Unicode characters. It can represent ASCII characters directly, but for other characters outside the ASCII range, it requires encoding them using a combination of ASCII characters. This can lead to inefficient encoding and decoding processes.
  2. Security vulnerabilities: UTF-7 encoding introduces potential security risks, especially when used in web applications or when dealing with user input. The encoding scheme allows for the use of certain characters that can be exploited for various attacks, such as cross-site scripting (XSS) attacks or Unicode-based attacks.
  3. Compatibility issues: Not all software and systems fully support UTF-7 encoding. If you rely on UTF-7 for encoding and decoding data, it can cause interoperability problems when interacting with systems or libraries that do not support it properly. It is generally better to use more widely supported encodings, such as UTF-8 or UTF-16.
  4. Performance overhead: Encoding and decoding UTF-7 can be computationally expensive compared to other encodings. The need to convert characters to and from their ASCII-based representations can result in slower processing times, especially when dealing with large amounts of data.

Here is an example of the issue:

var lines = File.ReadAllLines(fileName, Encoding.UTF7);

To fix, simply use one of the other encodings like this:

var lines = File.ReadAllLines(fileName, Encoding.UTF8);

Encoding Performance

As mentioned above, using UTF-7 can be a performance issue. Let’s look at the performance of the different encodings. Here is a list of them:

  • Encoding.ASCII: Represents the American Standard Code for Information Interchange (ASCII) character encoding. It encodes characters using 7 bits, supporting only the basic Latin alphabet (0-127).
  • Encoding.UTF7: Represents the UTF-7 (Unicode Transformation Format 7-bit) character encoding. It is a variable-length encoding that uses ASCII characters to represent Unicode characters.
  • Encoding.UTF8: Represents the UTF-8 (Unicode Transformation Format 8-bit) character encoding. It is a variable-length encoding that supports the entire Unicode character set.
  • Encoding.Unicode: Represents the UTF-16 (Unicode Transformation Format 16-bit) character encoding. It uses 16 bits per character and supports the entire Unicode character set.
  • Encoding.UTF32: Represents the UTF-32 (Unicode Transformation Format 32-bit) character encoding. It uses 32 bits per character and supports the entire Unicode character set.
  • Encoding.BigEndianUnicode: Represents the UTF-16 (Unicode Transformation Format 16-bit) character encoding with big-endian byte order. It uses 16 bits per code unit, and the byte order is reversed compared to Encoding.Unicode, which is little-endian.
  • Encoding.Default: Represents the system's default encoding. On the .NET Framework this is the operating system's active ANSI code page; on .NET Core and later, Encoding.Default is always UTF-8.
  • Other encodings: Besides the above encodings, .NET also provides various other encodings such as UTF-32BE (big-endian), ISO-8859-1 (Latin-1), Windows-1252, and many more.

Benchmark Results

Below are the benchmark results for encoding and decoding using all the encoding settings above.

As you can see in these charts, UTF7 is the slowest for encoding, and for decoding it is not the slowest but close.
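If you want to measure this yourself, here is a minimal BenchmarkDotNet sketch (my own setup, not the exact benchmark behind these charts); the #pragma is needed because Encoding.UTF7 is obsolete on modern .NET:

using System.Text;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class EncodingBenchmarks
{
    private const string Text = "The quick brown fox jumps over the lazy dog.";

    [Benchmark(Baseline = true)]
    public byte[] EncodeUtf8() => Encoding.UTF8.GetBytes(Text);

#pragma warning disable SYSLIB0001 // Encoding.UTF7 is obsolete
    [Benchmark]
    public byte[] EncodeUtf7() => Encoding.UTF7.GetBytes(Text);
#pragma warning restore SYSLIB0001

    [Benchmark]
    public byte[] EncodeUnicode() => Encoding.Unicode.GetBytes(Text);
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<EncodingBenchmarks>();
}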

Odds & Ends

Here are more issues that I found in this codebase; I have seen most of them many times in other projects.

Code in Unit Tests

I know that unit tests aren't shipped, but it is equally important that proper coding techniques are applied to them. For example, in this codebase, types that implement IDisposable were not being handled properly. Yes, I know the objects in unit tests are destroyed when the tests are done, but you need to make sure the code in Dispose() runs, especially for the types in your project that implement IDisposable.
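A minimal sketch of what that looks like in a test (FileProcessor is a hypothetical IDisposable type, and the MSTest attributes are just one option):

[TestMethod]
public void Process_ValidFile_ReturnsTrue()
{
    // The using block guarantees Dispose() runs, even if the assert throws.
    using (var processor = new FileProcessor())
    {
        var result = processor.Process("test.txt");

        Assert.IsTrue(result);
    }
}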

Use ConfigureAwait with Async Calls

Generally, in reusable assemblies that make async calls, ConfigureAwait(false) should be used. In the codebase I used for this article, ConfigureAwait() was not used on any async calls. Here is an example from my OSS project Spargine:

await request.Body.CopyToAsync(ms, CancellationToken.None).ConfigureAwait(false);

Here are more reasons why ConfigureAwait() should be used and how:

  1. Performance improvement: By default, when an await expression is encountered, the current synchronization context is captured, and the continuation after the async operation is scheduled to run on the same context. This behavior can be beneficial in GUI applications, but in other scenarios, such as server-side applications or CPU-bound work, capturing the synchronization context can introduce unnecessary overhead. By using ConfigureAwait(false), you can avoid the overhead of capturing and restoring the synchronization context, leading to potential performance improvements.
  2. Avoiding deadlocks: When an async method resumes on the captured synchronization context, it expects the context to be available. If you're awaiting an async operation within a context (e.g., in a UI event handler), and that operation also relies on the same context, it can result in a deadlock. This situation often occurs when using Task.Wait or Task.Result in combination with await. By using ConfigureAwait(false), you can explicitly state that you don't need the original synchronization context, which helps prevent potential deadlocks.
  3. Scalability: In scenarios where there's limited availability of synchronization contexts, such as in highly concurrent applications, using ConfigureAwait(false) can help improve scalability. Since capturing a synchronization context may lead to serialization of execution, releasing the context can allow the async operations to execute on different threads concurrently, resulting in better overall performance.

For async calls in GUI applications, the continuation needs to resume on the UI thread; that is the default behavior, so you can either omit ConfigureAwait() or use ConfigureAwait(true) to make the intent explicit.
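Here is a sketch of what this looks like in a reusable library method (the method and its names are my own illustration):

using System.IO;
using System.Threading.Tasks;

public static async Task<string> ReadAllTextAsync(Stream stream)
{
    using (var reader = new StreamReader(stream))
    {
        // No UI context is needed here, so skip capturing it.
        return await reader.ReadToEndAsync().ConfigureAwait(false);
    }
}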

Security Issues in SQL

One of the big security recommendations when writing T-SQL that calls SQL Server is to never build inline T-SQL by concatenating strings. Instead, the code should use parameters. There were numerous places where this codebase was using inline T-SQL. Here are the reasons why this should never be done:

  1. Security vulnerabilities: Inline SQL statements are prone to SQL injection attacks. When you concatenate user-supplied input directly into SQL queries, it becomes possible for malicious users to manipulate the input and inject additional SQL commands. This can lead to unauthorized access, data breaches, or even data loss. Using parameterized queries or stored procedures helps prevent SQL injection by separating the SQL logic from user input.
  2. Code maintainability: Inline SQL can make code harder to read and maintain, especially when SQL statements are scattered throughout the codebase. It becomes challenging to track and modify queries when they are embedded within different methods or classes. Separating SQL logic into separate query files, stored procedures, or using an Object-Relational Mapping (ORM) framework improves code organization and maintainability.
  3. Performance optimizations: Inline SQL often lacks optimization capabilities. The database engine may need to recompile the query each time it is executed, resulting in decreased performance. By using stored procedures or parameterized queries, the database engine can optimize query execution plans, caching them for improved performance.
  4. Database portability: Inline SQL can make it harder to switch between different database systems. Each database has its own SQL dialect and syntax, so if you have inline SQL statements specific to one database, you will need to modify them to work with another database system. By using an abstraction layer like an ORM or writing database-agnostic queries, you can write code that is more portable across different database systems.
  5. Separation of concerns: Inline SQL tends to mix database access code with business logic, violating the principle of separation of concerns. This can lead to a tight coupling between the application and the database, making it harder to change the database schema or switch to a different data storage solution. Using an ORM or following a layered architecture helps maintain a clear separation between different parts of the application.

This is why I like using ORMs like Entity Framework: they provide better security, code maintainability, performance, and portability compared to inline SQL statements.
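As a sketch of the parameterized approach (assuming Microsoft.Data.SqlClient and a hypothetical dbo.Users table):

using Microsoft.Data.SqlClient;

public static int? GetUserId(string connectionString, string userName)
{
    // BAD (inline SQL): "SELECT Id FROM dbo.Users WHERE UserName = '" + userName + "'"
    // GOOD: the parameter keeps user input out of the SQL text entirely.
    const string sql = "SELECT Id FROM dbo.Users WHERE UserName = @UserName;";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        command.Parameters.AddWithValue("@UserName", userName);
        connection.Open();

        return (int?)command.ExecuteScalar();
    }
}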

Use WaitAll() for Async Calls

There were many places in this solution where a single method made three or more async calls, one after another. I felt that many of these calls could be completed faster by using Task.WaitAll(). Here are the advantages of using WaitAll():

  1. Simultaneous waiting: WaitAll allows you to wait for multiple asynchronous operations to complete concurrently. Instead of waiting for each operation individually, you can wait for all of them at once, which can lead to improved performance and efficiency.
  2. Synchronization: By using WaitAll, you can synchronize the execution flow of your program. It ensures that the program waits for all the asynchronous operations to finish before proceeding to the next steps. This is particularly useful in scenarios where you need the results of multiple operations before proceeding.
  3. Error handling: WaitAll provides a straightforward way to handle errors in multiple asynchronous operations. If any of the operations encounters an exception, WaitAll will throw an aggregate exception that contains all the exceptions from the individual operations. This allows you to handle errors in a centralized manner, making it easier to log or handle exceptions consistently.
  4. Improved readability: Using WaitAll can make your code more readable and maintainable. By explicitly waiting for multiple operations to complete, you can clearly express the intent of your code, making it easier for other developers to understand the logic and flow of the program.
  5. Parallelism: WaitAll can be especially beneficial when combined with parallel programming techniques. If the underlying asynchronous operations can be executed concurrently, using WaitAll allows you to take advantage of parallelism, potentially speeding up the overall execution time.

You can also use Task.WhenAll, which provides a convenient and efficient way to await multiple asynchronous operations concurrently. It promotes non-blocking asynchronous programming, composition of tasks, and centralized exception handling, making it a powerful tool in asynchronous code. Unlike WaitAll(), it does not block the calling thread, which is why it is the right choice inside async methods.

Here is an example, generated by ChatGPT, that uses Task.WhenAll:

using System;
using System.Threading.Tasks;

public class Program
{
    static async Task Main()
    {
        Task task1 = Task.Run(() => DoSomething(1, 2000));
        Task task2 = Task.Run(() => DoSomething(2, 1000));
        Task task3 = Task.Run(() => DoSomething(3, 3000));

        Task[] tasks = { task1, task2, task3 };

        Console.WriteLine("Tasks started. Waiting for completion...");

        // Asynchronously wait for all three tasks without blocking the calling thread.
        await Task.WhenAll(tasks);

        Console.WriteLine("All tasks completed.");
    }

    // Simulates work that takes the given number of milliseconds.
    static void DoSomething(int id, int milliseconds)
    {
        Task.Delay(milliseconds).Wait();
        Console.WriteLine($"Task {id} finished.");
    }
}

Watch Your Cyclomatic Complexity!

This project had many, many methods with a high cyclomatic complexity. Let’s go over what cyclomatic complexity means. In programming, Cyclomatic Complexity is a software metric used to measure the complexity of a program. It provides a quantitative measure of the number of independent paths through the source code of a program.

Cyclomatic Complexity is based on the concept of control flow within a program. It counts the number of decision points (such as conditional statements or loops) in the code and uses that count to determine the complexity of the program. Each decision point adds to the overall complexity, as it creates additional paths that need to be considered during testing and analysis.

The higher the Cyclomatic Complexity value of a program, the more complex it is considered to be. A high Cyclomatic Complexity can indicate potential issues in the code, such as increased difficulty in understanding and maintaining the program, as well as a higher likelihood of bugs and errors.

By measuring Cyclomatic Complexity, developers and software quality analysts can assess the complexity of a program and identify areas that may require refactoring or additional testing to improve code quality and maintainability.
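A small illustration (my own example, not from the reviewed codebase) of how decision points add up:

public static string ClassifyOrder(decimal total, bool isRush)
{
    // The straight-line path counts as 1.
    if (total <= 0)             // +1 decision point
    {
        return "Invalid";
    }

    if (total > 1000 && isRush) // +2: the 'if' and the short-circuit '&&'
    {
        return "PriorityReview";
    }

    return "Standard";          // Cyclomatic complexity: 4
}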

With that said, what cyclomatic complexity means to me is the minimum number of unit tests needed to properly cover a method. The highest cyclomatic complexity I could find for a single method in this project was 195. That means it really needs that many unit tests; sadly, this method had none. If you remember back to the first article, this solution has a total cyclomatic complexity of over 77K, yet it had only roughly 400 unit tests.

I use Refactor (free from DevExpress) to tell me the cyclomatic complexity number. I also use it to refactor methods quickly into smaller, more manageable ones.

Globalization & Localization

One thing that I see in 100% of the projects I review is that they do not properly code for globalization and localization. While working on this project, the team started running into issues because a big release for India is coming soon. I found over 700 globalization and localization issues. I know from experience that fixing these issues years after the code was first written takes a lot of time and is costly, so do it correctly from the start! I recently wrote an article on this subject on dotNetTips.com called Globalizing Strings in Microsoft .NET C# with the Multilingual App Toolkit. I hope you will give it a read.
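A minimal sketch of the kind of fix involved (the type, member names, and culture choice are my own illustration):

using System.Globalization;

public static class PriceFormatter
{
    // Format for a specific culture instead of silently using the machine's
    // current culture (here, Hindi as used in India).
    public static string ForDisplay(decimal price) =>
        price.ToString("C", CultureInfo.GetCultureInfo("hi-IN"));

    // Use the invariant culture for machine-readable output (logs, files, APIs).
    public static string ForStorage(decimal price) =>
        price.ToString(CultureInfo.InvariantCulture);
}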

I See Duplicate Code

Using Visual Studio, I found that this project had over 13K lines of duplicate code. Duplicate code is a maintenance nightmare. This is why I say that 90% of the code in a project should live in reusable assemblies; that is the best way to tackle code duplication. It also limits the damage when someone writes bad code: in this solution, as in many I review, flawed code was copied and pasted all over the projects.

I See Dead Code!

This solution had the most unused and commented-out code I have ever seen. Dead code should never be checked into source control! I found many types and methods that were no longer used, and I spent a significant amount of time removing them. I even found many places where commented-out code was wrapped in a region called "Commented Out Code". I have never seen that before!

Don’t Use Non-Generic Collections

For the first time in a very long time, I saw code that was using the old ArrayList from the .NET Framework. ArrayList has been effectively superseded ever since generics were added to .NET in 2005, and it is a performance issue because every value type it stores must be boxed and unboxed.
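A quick comparison of the two (a sketch; note the boxing and the cast the old collection forces on you):

using System.Collections;
using System.Collections.Generic;

// Old: ArrayList stores object, so each int is boxed on the way in
// and must be cast (and unboxed) on the way out.
var oldList = new ArrayList { 1, 2, 3 };
int first = (int)oldList[0];

// Preferred: List<T> is strongly typed -- no boxing, no casts.
var newList = new List<int> { 1, 2, 3 };
int firstTyped = newList[0];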

Summary

Well, there you have it. I hope you learned from this three-part article series. Again, I see almost all of these issues at just about every contract I work on. This is why it is critical that code quality and performance be treated as a feature of every project or sprint. If not, you will be left with a solution that is buggy, hard to fix, and hard to add features to. This is even more critical when you hire contractors, which this team did a lot; I think they had more contractors than permanent developers. It is worth noting that most of the issues discussed in this article can be easily addressed by utilizing the Visual Studio extension called CodeRush, a free tool available for download at the following link: https://www.devexpress.com/products/coderush/

For further guidance and insights, I highly recommend obtaining a copy of my book, "Rock Your Code: Coding Standards for Microsoft .NET", available on Amazon.com. Additionally, to explore more performance tips for .NET, I encourage you to acquire the latest edition of "Rock Your Code: Code & App Performance for Microsoft .NET", also available on Amazon.com.

To analyze your code using the same settings I used in these articles, I encourage you to incorporate my EditorConfig file. It can be found at the following link: https://bit.ly/dotNetDaveEditorConfig. I update this file quarterly, so remember to keep yours up to date as well. I hope you will check out my OSS project Spargine by using this link: https://bit.ly/Spargine.

Please feel free to leave a comment below. I would appreciate hearing your thoughts and feedback.

