By Ian Griffiths, Technical Fellow I
C# 8.0 nullable references: get better results with nullability attributes

If you enable nullable references, the C# 8.0 compiler goes to considerable lengths to analyze your code to try to determine whether it contains null-related programming errors.

However, it won't look inside other components—it won't attempt to decompile a library to gain a deeper understanding of its null handling.

Component boundaries therefore present a challenge for nullability analysis: the compiler has to depend on what is visible at the component's public face. (In fact, nullability analysis is done one method at a time, so in practice the compiler depends on what's visible in the signatures of methods, properties, and constructors.)

This would be quite limiting if the only null-related information were the presence or absence of a ?. Fortunately, it's possible for a component to supply more detailed information using certain custom attributes. This enables the compiler to identify more problems correctly, and to reduce false positives.

Why not inspect the MSIL?

It's worth pondering why the compiler doesn't attempt to dig deeper. After all, in the past Microsoft provided tools that could perform some limited null handling checks by doing exactly this—inspecting the IL produced by the compiler. But while those tools were better than nothing, they mainly worked at the scope of an individual method; inspecting IL is not a good technique for inferring how code is intended to be used, which is what you need if your goal is to detect null-related mistakes in programs that call into that code. Moreover, there's a fundamental problem: the relevant IL might not even be available.

When you use a .NET class library type, the compiler typically doesn't have a reference to the real implementation. For example, when compiling a .NET Core 3.1 program that uses List<T>, the .NET SDK provides the compiler with the definition of this class through a reference to this path:

C:\Program Files\dotnet\packs\Microsoft.NETCore.App.Ref\3.1.0\ref\netcoreapp3.1\System.Collections.dll

If I open that up in a tool such as ILDASM, I can see that every member of every type has an empty implementation. The list type's Add contains a single IL ret instruction, returning immediately without doing anything. The dictionary's TryGetValue method just throws an exception.

Clearly these aren't the real implementations. The SDKs provide the compiler with these hollowed-out reference assemblies because we can't necessarily know in advance what the real implementation will be.

These types are built into .NET itself, so the implementation we get depends on the version of .NET we end up running on.

But these hollowed-out reference assemblies are ultimately just a feature of how the .NET SDK currently builds things; it wasn't always that way and it could conceivably be changed back to the old approach of compiling against the real thing (or at least something realistic) if it were truly necessary.

But that wouldn't help: what is the compiler supposed to make of an interface? Even the real definition of an interface that gets used at runtime has no code associated with it (unless you're using another new C# 8 language feature, default interface implementations, but those are rare). For nullability analysis to be useful, we want it to work with interfaces, so it really can't rely on peering into the IL.

Enabling Nullable for older vs newer target frameworks

In an earlier post in this series, in which I talked about inferred (non-)nullness, I showed some code that uses the IDictionary<TKey,TValue> interface from .NET's class libraries:
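The example was along these lines (a reconstruction for illustration—the method and variable names here are mine, not necessarily those in the original listing):

using System;
using System.Collections.Generic;

public static class Example
{
    public static void ShowLength(IDictionary<string, string> d, string key)
    {
        if (d.TryGetValue(key, out string? value))
        {
            // Against an older, un-annotated target the compiler cannot tell
            // that value is non-null when TryGetValue returns true, so this
            // dereference can trigger a possible-null warning.
            Console.WriteLine(value.Length);
        }
    }
}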

As I said in that article, you will get a warning when compiling this code against .NET Standard 2.0, but not against newer targets such as .NET Core 3.1 or .NET Standard 2.1. (Specifically, it will warn that value.Length might be dereferencing a null.) You don't get a warning with those newer libraries because the TryGetValue method has been annotated with an attribute:
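On those newer targets, the declaration of TryGetValue on IDictionary<TKey,TValue> looks roughly like this (simplified here—the attribute is the important part):

bool TryGetValue(TKey key, [MaybeNullWhen(false)] out TValue value);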

This is the mechanism by which the compiler is able to learn more about a method or property than it could from the signature alone. (As it happens, even nullability annotations such as string vs string? are represented through custom attributes, because that is not a distinction the CLR understands: it's part of the C# type system, and since the CLR has no support for it, C# uses attributes to embed that information in the metadata. However, for this article I'm ignoring those—from a C# perspective they don't really exist as attributes, because they become a facet of the type system. I'm looking purely at the things that still look like attributes even in a fully nullable-aware world.)

Null-awareness attributes

.NET Standard 2.1 and .NET Core 3.0 introduced a set of custom attributes you can use to provide more detailed information to the compiler's null handling analysis. In the next few posts in this blog series, I will explain each of these, but for now I'll give you an outline of what you can do.

AllowNull makes it possible to write asymmetric properties, which can be set to null but will never return null. More unusually, DisallowNull supports the converse, in which a property may return null but must not be set to null.
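For example, a property that never returns null but accepts null as a way of asking for a default might look like this (a sketch—the Widget type and its behaviour are invented for illustration):

using System.Diagnostics.CodeAnalysis;

public class Widget
{
    private string _name = string.Empty;

    // Declared as non-nullable string, so reads never produce null, but
    // [AllowNull] lets callers assign null; the setter maps that to a default.
    [AllowNull]
    public string Name
    {
        get => _name;
        set => _name = value ?? string.Empty;
    }
}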

NotNull is useful for out and ref parameters, enabling you to state that the method will set these to a non-null value before returning. It can also be used on ordinary parameters, enabling the compiler to infer that the argument was non-null whenever the method returns (rather than throwing).
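A couple of sketches of how that can look (hypothetical helpers of my own, not library methods):

using System;
using System.Diagnostics.CodeAnalysis;

public static class Guard
{
    // ref parameter: whatever it held on entry, it is non-null by the time
    // this method returns, and the compiler knows it.
    public static void EnsureInitialized([NotNull] ref string? cache)
    {
        cache ??= string.Empty;
    }

    // Ordinary parameter: if this method returns instead of throwing, the
    // argument must have been non-null, so callers can dereference it freely.
    public static void ThrowIfNull([NotNull] object? argument, string paramName)
    {
        if (argument is null)
        {
            throw new ArgumentNullException(paramName);
        }
    }
}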

MaybeNull deals with a gnarly issue around nullability and generics. Because nullability was grafted onto C# nearly two decades after the language first appeared, you often can't write T? if T is an unconstrained type parameter. This attribute provides a means to express what the type system cannot.
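The classic case is a method that returns default(T) when it has nothing better to return. Here's a sketch (my own FirstOrDefault-style example, not the real LINQ implementation):

using System.Collections.Generic;
using System.Diagnostics.CodeAnalysis;

public static class Sequence
{
    // We can't declare the return type as T? when T is unconstrained, so the
    // attribute tells callers that the result may be null (default(T) for an
    // empty source when T is a reference type).
    [return: MaybeNull]
    public static T FirstOrDefault<T>(IEnumerable<T> source)
    {
        foreach (T item in source)
        {
            return item;
        }

        return default!;
    }
}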

The NotNullWhen, MaybeNullWhen, and NotNullIfNotNull attributes enable the compiler to understand more about when values will definitely be non-null, and when they might not be in conditional scenarios.
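MaybeNullWhen is the one we already saw on TryGetValue; here are sketches of the other two (hypothetical methods for illustration):

using System.Diagnostics.CodeAnalysis;

public static class PathHelpers
{
    // When this returns true, callers know extension is non-null without
    // needing a further check.
    public static bool TryGetExtension(string path, [NotNullWhen(true)] out string? extension)
    {
        int i = path.LastIndexOf('.');
        extension = i >= 0 ? path.Substring(i) : null;
        return extension != null;
    }

    // The result is non-null whenever the path argument was non-null.
    [return: NotNullIfNotNull("path")]
    public static string? Normalize(string? path) => path?.Trim();
}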

Finally, the DoesNotReturn and DoesNotReturnIf attributes, which indicate that a method (or a particular argument value) prevents it from returning normally, also have an effect on nullability analysis.
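For instance (hypothetical helpers again):

using System;
using System.Diagnostics.CodeAnalysis;

public static class Check
{
    // Code following a call to this method is unreachable, so the compiler
    // won't raise nullability warnings for those paths.
    [DoesNotReturn]
    public static void Fail(string message) =>
        throw new InvalidOperationException(message);

    // If condition is false this never returns, so on any path where it does
    // return, the compiler knows the condition was true.
    public static void Assert([DoesNotReturnIf(false)] bool condition, string message)
    {
        if (!condition)
        {
            Fail(message);
        }
    }
}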

Conclusion

The effect of these attributes is that they make it practical for the compiler to detect possible nullability issues with much greater sensitivity, while producing a far lower rate of false positives than would otherwise be possible.

In other words, thanks to these attributes, the compiler can find more problems for us without drowning us in inappropriate warnings.

Next up, I'll start to explain each of these in more detail.

Ian Griffiths

Technical Fellow I

Ian has worked in various aspects of computing, including computer networking, embedded real-time systems, broadcast television systems, medical imaging, and all forms of cloud computing. Ian is a Technical Fellow at endjin, and Microsoft MVP in Developer Technologies. He is the author of O'Reilly's Programming C# 8.0, and has written Pluralsight courses on WPF and the TPL. Technology brings him joy.