Dotnet Navigate Devops Testing

The document provides an overview of using GitHub Actions for .NET application development, detailing how to automate CI/CD processes through workflow files and various GitHub Actions. It also discusses best practices for unit testing in .NET, deploying applications, and using the .NET CLI in CI environments. Additionally, it includes installation options for CI build servers and examples of configuring popular CI services such as Travis CI, AppVeyor, and Azure DevOps Services.

.NET DevOps, testing, and deployment


documentation
Learn about DevOps, GitHub Actions, testing, and deployment in .NET.

GitHub Actions

OVERVIEW
Overview

QUICKSTART
Create a build GitHub workflow

Unit test with .NET

OVERVIEW
Unit testing in .NET

TUTORIAL
Test C# code using dotnet test and xUnit
Unit test with NUnit
Unit test with MSTest

CONCEPT
Best practices
MSTest runner overview

Deploy .NET apps

OVERVIEW
Publish overview
ReadyToRun
Trimming
Native AOT deployment
Docker and .NET

REFERENCE
RID catalog
GitHub Actions and .NET
Article • 12/14/2023

In this overview, you'll learn what role GitHub Actions play in .NET application
development. GitHub Actions allow your source code repositories to automate
continuous integration (CI) and continuous delivery (CD). Beyond that, GitHub Actions
expose more advanced scenarios, providing hooks for automation with code reviews,
branch management, and issue triaging. With your .NET source code in GitHub, you can
use GitHub Actions in many ways.

GitHub Actions
GitHub Actions represent standalone commands, such as:

- actions/checkout: This action checks out your repository under $GITHUB_WORKSPACE, so your workflow can access it.
- actions/setup-dotnet: This action sets up a .NET CLI environment for use in actions.
- dotnet/versionsweeper: This action sweeps .NET repos for out-of-support target versions of .NET.

While these commands are isolated to a single action, they're powerful through
workflow composition. In workflow composition, you define the events that trigger the
workflow. Once a workflow is running, there are various jobs it's instructed to perform,
with each job defining any number of steps. The steps delegate out to GitHub Actions, or
alternatively call command-line scripts.

For more information, see Introduction to GitHub Actions. Think of a workflow file as a
composition that represents the various steps to build, test, and/or publish an
application. Many .NET CLI commands are available, most of which could be used in the
context of a GitHub Action.
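As a minimal sketch of that composition, the following hypothetical workflow combines checkout, SDK setup, and a .NET CLI build step (the SDK version shown is an assumption; use the version your project targets):

```yaml
name: build
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository under $GITHUB_WORKSPACE.
      - uses: actions/checkout@v3

      # Install the .NET SDK.
      - uses: actions/setup-dotnet@v3
        with:
          dotnet-version: '6.0.x'

      # Delegate the build to the .NET CLI.
      - run: dotnet build --configuration Release
```

Each step is either a published action (`uses:`) or a command-line script (`run:`), matching the two delegation styles described above.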

Custom GitHub Actions


While there are plenty of GitHub Actions available in the Marketplace, you may want
to author your own. You can create GitHub Actions that run .NET applications. For more
information, see Tutorial: Create a GitHub Action with .NET.

Workflow file
GitHub Actions are utilized through a workflow file. The workflow file must be located in
the .github/workflows directory of the repository and is expected to be YAML (either
*.yml or *.yaml). Workflow files define the workflow composition. A workflow is a
configurable automated process made up of one or more jobs. For more information,
see Workflow syntax for GitHub Actions.

Example workflow files


There are many examples of .NET workflow files provided as tutorials and quickstarts.
Here are several good examples of workflow file names:

- build-validation.yml: Compiles (or builds) the source code. If the source code doesn't compile, this workflow fails.
- build-and-test.yml: Exercises the unit tests within the repository. To run tests, the source code must first be compiled, so this is really both a build and test workflow (it would supersede the build-validation.yml workflow). Failing unit tests cause the workflow to fail.
- publish-app.yml: Packages and publishes the source code to a destination.
- codeql-analysis.yml: Analyzes your code for security vulnerabilities and coding errors. Any discovered vulnerabilities could cause failure.

Encrypted secrets
To use encrypted secrets in your workflow files, you reference the secrets using the
workflow expression syntax from the secrets context object.

YAML

${{ secrets.MY_SECRET_VALUE }} # MY_SECRET_VALUE must exist in the repository as a secret

Secret values are never printed in the logs. Instead, their values are masked with
asterisks. For example, as each step runs within a job, all of the
values it uses are output to the action log. Secret values render similar to the following:

Console

MY_SECRET_VALUE: ***

Important

The secrets context provides the GitHub authentication token that is scoped to
the repository, branch, and action. It's provided by GitHub without any user
intervention:

yml

${{ secrets.GITHUB_TOKEN }}

For more information, see Using encrypted secrets in a workflow.
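As a sketch of how a secret typically reaches a tool, a step can expose it through an environment variable (MY_SECRET_VALUE is a hypothetical secret name that would need to exist in the repository; the package path is also an assumption):

```yaml
steps:
  - name: Push NuGet package
    env:
      # MY_SECRET_VALUE is a hypothetical repository secret.
      API_KEY: ${{ secrets.MY_SECRET_VALUE }}
    run: dotnet nuget push ./artifacts/*.nupkg --api-key "$API_KEY"
```

If this step's log is inspected, the key renders as `***` rather than its actual value.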

Events
Workflows are triggered by many different types of events. In addition to Webhook
events, which are the most common, there are also scheduled events and manual
events.

Example webhook event

The following example shows how to specify a webhook event trigger for a workflow:

yml

name: code coverage

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
      - staging

jobs:
  coverage:
    runs-on: ubuntu-latest

    # steps omitted for brevity

In the preceding workflow, the push and pull_request events will trigger the workflow
to run.

Example scheduled event

The following example shows how to specify a scheduled (cron job) event trigger for a
workflow:

yml

name: scan
on:
  schedule:
    - cron: '0 0 1 * *'
  # additional events omitted for brevity

jobs:
  build:
    runs-on: ubuntu-latest

    # steps omitted for brevity

In the preceding workflow, the schedule event specifies a cron expression of '0 0 1 * *',
which triggers the workflow to run on the first day of every month. Running workflows on a
schedule is great for workflows that take a long time to run, or perform actions that
require less frequent attention.

Example manual event


The following example shows how to specify a manual event trigger for a workflow:

yml

name: build
on:
  workflow_dispatch:
    inputs:
      reason:
        description: 'The reason for running the workflow'
        required: true
        default: 'Manual run'
  # additional events omitted for brevity

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
    - name: 'Print manual run reason'
      if: ${{ github.event_name == 'workflow_dispatch' }}
      run: |
        echo 'Reason: ${{ github.event.inputs.reason }}'

    # additional steps omitted for brevity

In the preceding workflow, the workflow_dispatch event requires a reason as input.
GitHub sees this, and its UI dynamically changes to prompt the user to provide the
reason for manually running the workflow. The steps print the reason provided by
the user.

For more information, see Events that trigger workflows .

.NET CLI
The .NET command-line interface (CLI) is a cross-platform toolchain for developing,
building, running, and publishing .NET applications. The .NET CLI is commonly run as part
of individual steps within a workflow file. Common commands include:

- dotnet workload install
- dotnet restore
- dotnet build
- dotnet test
- dotnet publish

For more information, see .NET CLI overview.
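These commands map naturally onto workflow steps. As a sketch (the configuration and output path are assumptions), a typical CI sequence runs them in order, reusing the results of earlier steps:

```yaml
steps:
  # Restore NuGet dependencies once, then reuse the results.
  - run: dotnet restore
  - run: dotnet build --no-restore --configuration Release
  - run: dotnet test --no-build --configuration Release
  - run: dotnet publish --no-build --configuration Release --output ./artifacts
```

The `--no-restore` and `--no-build` flags avoid repeating work that an earlier step already performed.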

See also
For a more in-depth look at GitHub Actions with .NET, consider the following resources:

Quickstart(s):
Quickstart: Create a build validation GitHub Action
Quickstart: Create a test validation GitHub Action
Quickstart: Create a publish app GitHub Action
Quickstart: Create a security scan GitHub Action

Tutorial(s):
Tutorial: Create a GitHub Action with .NET

Collaborate with us on GitHub

.NET is an open source project. The source for this content can be found on GitHub,
where you can also create and review issues and pull requests. For more information,
see our contributor guide.

.NET feedback. Select a link to provide feedback:

Open a documentation issue
Provide product feedback
Use the .NET SDK in continuous integration (CI) environments
Article • 05/25/2023

This article outlines how to use the .NET SDK and its tools on a build server. The .NET
toolset works both interactively, where a developer types commands at a command
prompt, and automatically, where a continuous integration (CI) server runs a build script.
The commands, options, inputs, and outputs are the same, and the only things you
supply are a way to acquire the tooling and a system to build your app. This article
focuses on scenarios of tool acquisition for CI with recommendations on how to design
and structure your build scripts.

Installation options for CI build servers


If you're using GitHub, the installation is straightforward. You can rely on GitHub Actions
to install the .NET SDK in your workflow. The recommended way to install the .NET SDK
in a workflow is with the actions/setup-dotnet action. For more information, see
the Setup .NET Core SDK action in the GitHub marketplace. For examples of how this
works, see Quickstart: Create a build validation GitHub workflow.

Native installers
Native installers are available for macOS, Linux, and Windows. The installers require
admin (sudo) access to the build server. The advantage of using a native installer is that
it installs all of the native dependencies required for the tooling to run. Native installers
also provide a system-wide installation of the SDK.

macOS users should use the PKG installers. On Linux, there's a choice of using a feed-
based package manager, such as apt-get for Ubuntu or yum for CentOS, or using the
packages themselves, DEB or RPM. On Windows, use the MSI installer.

The latest stable binaries are found at .NET downloads. If you wish to use the latest
(and potentially unstable) pre-release tooling, use the links provided at the
dotnet/installer GitHub repository. For Linux distributions, tar.gz archives (also
known as tarballs) are available; use the installation scripts within the archives to install
.NET.

Installer script
Using the installer script allows for non-administrative installation on your build server
and easy automation for obtaining the tooling. The script takes care of downloading the
tooling and extracting it into a default or specified location for use. You can also specify
a version of the tooling that you wish to install and whether you want to install the
entire SDK or only the shared runtime.

The installer script is automated to run at the start of the build to fetch and install the
desired version of the SDK. The desired version is whatever version of the SDK your
projects require to build. The script allows you to install the SDK in a local directory on
the server, run the tools from the installed location, and then clean up (or let the CI
service clean up) after the build. This provides encapsulation and isolation to your entire
build process. The installation script reference is found in the dotnet-install article.

Note

When using the installer script, native dependencies aren't installed automatically.
You must install the native dependencies if the operating system doesn't have
them. For more information, see .NET dependencies and requirements.

CI setup examples
This section describes a manual setup using a PowerShell or bash script, along with
descriptions of software as a service (SaaS) CI solutions. The SaaS CI solutions covered
are Travis CI , AppVeyor , and Azure Pipelines. For GitHub Actions, see GitHub Actions
and .NET

Manual setup
Each SaaS service has its methods for creating and configuring a build process. If you
use a different SaaS solution than those listed or require customization beyond the pre-
packaged support, you must perform at least some manual configuration.

In general, a manual setup requires you to acquire a version of the tools (or the latest
nightly builds of the tools) and run your build script. You can use a PowerShell or bash
script to orchestrate the .NET commands or use a project file that outlines the build
process. The orchestration section provides more detail on these options.

After you create a script that performs a manual CI build server setup, use it on your dev
machine to build your code locally for testing purposes. Once you confirm that the
script is running well locally, deploy it to your CI build server. A relatively simple
PowerShell script demonstrates how to obtain the .NET SDK and install it on a Windows
build server:

You provide the implementation for your build process at the end of the script. The
script acquires the tools and then executes your build process.

PowerShell

$ErrorActionPreference="Stop"
$ProgressPreference="SilentlyContinue"

# $LocalDotnet is the path to the locally-installed SDK to ensure the
# correct version of the tools are executed.
$LocalDotnet=""
# $InstallDir and $CliVersion variables can come from options to the
# script.
$InstallDir = "./cli-tools"
$CliVersion = "6.0.7"

# Test the path provided by $InstallDir to confirm it exists. If it
# does, it's removed. This is not strictly required, but it's a
# good way to reset the environment.
if (Test-Path $InstallDir)
{
    rm -Recurse $InstallDir
}
New-Item -Type "directory" -Path $InstallDir

Write-Host "Downloading the CLI installer..."

# Use the Invoke-WebRequest PowerShell cmdlet to obtain the
# installation script and save it into the installation directory.
Invoke-WebRequest `
    -Uri "https://dot.net/v1/dotnet-install.ps1" `
    -OutFile "$InstallDir/dotnet-install.ps1"

Write-Host "Installing the CLI requested version ($CliVersion) ..."

# Install the SDK of the version specified in $CliVersion into the
# specified location ($InstallDir).
& $InstallDir/dotnet-install.ps1 -Version $CliVersion `
    -InstallDir $InstallDir

Write-Host "Downloading and installation of the SDK is complete."

# $LocalDotnet holds the path to dotnet.exe for future use by the
# script.
$LocalDotnet = "$InstallDir/dotnet"

# Run the build process now. Implement your build script here.

Travis CI
You can configure Travis CI to install the .NET SDK using the csharp language and the
dotnet key. For more information, see the official Travis CI docs on Building a C#, F#, or
Visual Basic Project. Note that the community-maintained language: csharp identifier
works for all .NET languages, including F# and Mono.

Travis CI runs both macOS and Linux jobs in a build matrix, where you specify a
combination of runtime, environment, and exclusions/inclusions to cover the build
combinations for your app. For more information, see the Customizing the Build
article in the Travis CI documentation. The MSBuild-based tools include the long-term
support (LTS) and standard-term support (STS) runtimes in the package; so by installing
the SDK, you receive everything you need to build.
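As a hedged sketch of such a configuration (the pinned SDK version is an assumption; pick the version your project requires, and check the Travis CI docs for currently valid values of the dotnet key):

```yaml
# .travis.yml (sketch)
language: csharp
mono: none          # .NET only; don't install Mono
dotnet: 6.0.100     # hypothetical pinned SDK version
script:
  - dotnet restore
  - dotnet build
  - dotnet test
```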

AppVeyor
AppVeyor installs the .NET 6 SDK with the Visual Studio 2022 build worker image.
Other build images with different versions of the .NET SDK are available. For more
information, see the Build worker images article in the AppVeyor docs.

The .NET SDK binaries are downloaded and unzipped in a subdirectory using the install
script, and then they're added to the PATH environment variable. Add a build matrix to
run integration tests with multiple versions of the .NET SDK:

YAML

environment:
  matrix:
    - CLI_VERSION: 6.0.7
    - CLI_VERSION: Latest
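Building on that matrix, a sketch of an appveyor.yml that fetches the matching SDK with the official install script and runs the build (the install directory and PATH handling here are assumptions, not the exact steps AppVeyor documents):

```yaml
environment:
  matrix:
    - CLI_VERSION: 6.0.7
    - CLI_VERSION: Latest

install:
  # Download the official install script and install the matrix version
  # into a local subdirectory, then put it on PATH.
  - ps: Invoke-WebRequest -Uri "https://dot.net/v1/dotnet-install.ps1" -OutFile "dotnet-install.ps1"
  - ps: ./dotnet-install.ps1 -Version $env:CLI_VERSION -InstallDir "./sdk"
  - ps: $env:PATH = "$(Resolve-Path ./sdk);$env:PATH"

build_script:
  - dotnet build

test_script:
  - dotnet test
```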

Azure DevOps Services


Configure Azure DevOps Services to build .NET projects using one of these approaches:

Run the script from the manual setup step using your commands.
Create a build composed of several Azure DevOps Services built-in build tasks that
are configured to use .NET tools.

Both solutions are valid. Using a manual setup script, you control the version of the tools
that you receive, since you download them as part of the build. The build is run from a
script that you must create. This article only covers the manual option. For more
information on composing a build with Azure DevOps Services build tasks, see the Azure
Pipelines documentation.

To use a manual setup script in Azure DevOps Services, create a new build definition and
specify the script to run for the build step. This is accomplished using the Azure DevOps
Services user interface:

1. Start by creating a new build definition. Once you reach the screen that provides
you an option to define what kind of a build you wish to create, select the Empty
option.
2. After configuring the repository to build, you're directed to the build definitions.
Select Add build step:

3. You're presented with the Task catalog. The catalog contains tasks that you use in
the build. Since you have a script, select the Add button for PowerShell: Run a
PowerShell script.
4. Configure the build step. Add the script from the repository that you're building:
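The same manual-setup script can also be wired into a YAML pipeline instead of the classic editor. A sketch, assuming the setup-and-build script lives at build.ps1 in the repository root:

```yaml
# azure-pipelines.yml (sketch)
trigger:
  - main

pool:
  vmImage: 'windows-latest'

steps:
  # Run the manual setup script, which installs the SDK and then builds.
  - pwsh: ./build.ps1
    displayName: 'Install .NET SDK and build'
```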

Orchestrating the build


Most of this document describes how to acquire the .NET tools and configure various CI
services without providing information on how to orchestrate, or actually build, your
code with .NET. The choices on how to structure the build process depend on many
factors that can't be covered in a general way here. For more information on
orchestrating your builds with each technology, explore the resources and samples
provided in the documentation sets of Travis CI , AppVeyor , and Azure Pipelines.

Two general approaches that you take in structuring the build process for .NET code
using the .NET tools are using MSBuild directly or using the .NET command-line
commands. Which approach you should take is determined by your comfort level with
the approaches and trade-offs in complexity. MSBuild provides you the ability to express
your build process as tasks and targets, but it comes with the added complexity of
learning MSBuild project file syntax. Using the .NET command-line tools is perhaps
simpler, but it requires you to write orchestration logic in a scripting language like bash
or PowerShell.

Tip

One MSBuild property you'll want to set to true is ContinuousIntegrationBuild.
This property enables settings that only apply to official builds as opposed to local
development builds.
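For example, a CI step can set the property on the command line; as a sketch, in a GitHub Actions workflow this looks like:

```yaml
steps:
  # Enable settings intended for official (non-local) builds.
  - run: dotnet build -p:ContinuousIntegrationBuild=true --configuration Release
```

The same `-p:ContinuousIntegrationBuild=true` switch works with any CI service that invokes dotnet build or MSBuild directly.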

See also
GitHub Actions and .NET
.NET downloads - Linux
.NET-related GitHub Actions
Article • 08/01/2024

This article lists some of the first party .NET GitHub actions that are hosted on the
dotnet GitHub organization .

Note

This article is a work-in-progress, and might not list all the available .NET GitHub
Actions.

.NET version sweeper


dotnet/versionsweeper

This action sweeps .NET repos for out-of-support target versions of .NET.

The .NET docs team uses the .NET version sweeper GitHub Action to automate issue
creation. The Action runs on a schedule (as a cron job). When it detects that .NET
projects target out-of-support versions, it creates issues to report its findings. The
output is configurable and helpful for tracking .NET version support concerns.

The Action is available on GitHub Marketplace .
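As a hedged sketch of wiring the action into a scheduled workflow (the input names shown mirror those used elsewhere in this document, but verify them against the Marketplace listing before use):

```yaml
name: version sweep
on:
  schedule:
    - cron: '0 0 1 * *' # first day of every month

jobs:
  version-sweep:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: dotnet/versionsweeper@main
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          owner: ${{ github.repository_owner }}
          name: ${{ github.repository }}
          branch: ${{ github.ref }}
```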

Tutorial: Create a GitHub Action with .NET
Article • 12/14/2023

Learn how to create a .NET app that can be used as a GitHub Action. GitHub Actions
enable workflow automation and composition. With GitHub Actions, you can build, test,
and deploy source code from GitHub. Additionally, actions expose the ability to
programmatically interact with issues, create pull requests, perform code reviews, and
manage branches. For more information on continuous integration with GitHub Actions,
see Building and testing .NET .

In this tutorial, you learn how to:

- Prepare a .NET app for GitHub Actions
- Define action inputs and outputs
- Compose a workflow

Prerequisites
A GitHub account
The .NET 6 SDK or later
A .NET integrated development environment (IDE)
Feel free to use the Visual Studio IDE

The intent of the app


The app in this tutorial performs code metric analysis by:

- Scanning and discovering *.csproj and *.vbproj project files.
- Analyzing the discovered source code within these projects for:
  - Cyclomatic complexity
  - Maintainability index
  - Depth of inheritance
  - Class coupling
  - Number of lines of source code
  - Approximated lines of executable code
- Creating (or updating) a CODE_METRICS.md file.


The app is not responsible for creating a pull request with the changes to the
CODE_METRICS.md file. These changes are managed as part of the workflow
composition.

References to the source code in this tutorial have portions of the app omitted for
brevity. The complete app code is available on GitHub .

Explore the app


The .NET console app uses the CommandLineParser NuGet package to parse
arguments into the ActionInputs object.

C#

using CommandLine;

namespace DotNet.GitHubAction;

public class ActionInputs
{
    string _repositoryName = null!;
    string _branchName = null!;

    public ActionInputs()
    {
        if (Environment.GetEnvironmentVariable("GREETINGS") is { Length: > 0 } greetings)
        {
            Console.WriteLine(greetings);
        }
    }

    [Option('o', "owner",
        Required = true,
        HelpText = "The owner, for example: \"dotnet\". Assign from `github.repository_owner`.")]
    public string Owner { get; set; } = null!;

    [Option('n', "name",
        Required = true,
        HelpText = "The repository name, for example: \"samples\". Assign from `github.repository`.")]
    public string Name
    {
        get => _repositoryName;
        set => ParseAndAssign(value, str => _repositoryName = str);
    }

    [Option('b', "branch",
        Required = true,
        HelpText = "The branch name, for example: \"refs/heads/main\". Assign from `github.ref`.")]
    public string Branch
    {
        get => _branchName;
        set => ParseAndAssign(value, str => _branchName = str);
    }

    [Option('d', "dir",
        Required = true,
        HelpText = "The root directory to start recursive searching from.")]
    public string Directory { get; set; } = null!;

    [Option('w', "workspace",
        Required = true,
        HelpText = "The workspace directory, or repository root directory.")]
    public string WorkspaceDirectory { get; set; } = null!;

    static void ParseAndAssign(string? value, Action<string> assign)
    {
        if (value is { Length: > 0 } && assign is not null)
        {
            assign(value.Split("/")[^1]);
        }
    }
}

The preceding action inputs class defines several required inputs for the app to run
successfully. The constructor will write the "GREETINGS" environment variable value, if
one is available in the current execution environment. The Name and Branch properties
are parsed and assigned from the last segment of a "/" delimited string.

With the defined action inputs class, focus on the Program.cs file.

C#

using System.Text;
using CommandLine;
using DotNet.GitHubAction;
using DotNet.GitHubAction.Extensions;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using static CommandLine.Parser;

HostApplicationBuilder builder = Host.CreateApplicationBuilder(args);

builder.Services.AddGitHubActionServices();

using IHost host = builder.Build();

ParserResult<ActionInputs> parser = Default.ParseArguments<ActionInputs>(() => new(), args);
parser.WithNotParsed(
    errors =>
    {
        host.Services
            .GetRequiredService<ILoggerFactory>()
            .CreateLogger("DotNet.GitHubAction.Program")
            .LogError("{Errors}", string.Join(
                Environment.NewLine, errors.Select(error => error.ToString())));

        Environment.Exit(2);
    });

await parser.WithParsedAsync(
    async options => await StartAnalysisAsync(options, host));

await host.RunAsync();

static async ValueTask StartAnalysisAsync(ActionInputs inputs, IHost host)
{
    // Omitted for brevity, here is the pseudo code:
    // - Read projects
    // - Calculate code metric analytics
    // - Write the CODE_METRICS.md file
    // - Set the outputs

    var updatedMetrics = true;
    var title = "Updated 2 projects";
    var summary = "Calculated code metrics on two projects.";

    // Do the work here...

    // Write GitHub Action workflow outputs.
    var gitHubOutputFile = Environment.GetEnvironmentVariable("GITHUB_OUTPUT");
    if (!string.IsNullOrWhiteSpace(gitHubOutputFile))
    {
        using StreamWriter textWriter = new(gitHubOutputFile, true, Encoding.UTF8);
        textWriter.WriteLine($"updated-metrics={updatedMetrics}");
        textWriter.WriteLine($"summary-title={title}");
        textWriter.WriteLine($"summary-details={summary}");
    }

    await ValueTask.CompletedTask;

    Environment.Exit(0);
}

The Program file is simplified for brevity; to explore the full sample source, see
Program.cs. The mechanics in place demonstrate the boilerplate code required to use:

- Top-level statements
- Generic Host
- Dependency injection

External project or package references can be used and registered with dependency
injection. The Get<TService> is a static local function, which requires the IHost instance,
and is used to resolve required services. With the CommandLine.Parser.Default singleton,
the app parses the args into an ActionInputs instance. When the arguments can't be
parsed, the app exits with a non-zero exit code. For more information, see Setting exit
codes for actions.

When the args are successfully parsed, the app was called correctly with the required
inputs. In this case, a call to the primary functionality StartAnalysisAsync is made.

To write output values, you must follow the format recognized by GitHub Actions:
Setting an output parameter.
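Once written to GITHUB_OUTPUT, these values become available to later workflow steps through the steps context. A sketch (inputs to the action are omitted for brevity; the step id is hypothetical here but matches the workflow composition shown later in this tutorial):

```yaml
steps:
  - name: .NET code metrics
    id: dotnet-code-metrics
    uses: dotnet/samples/github-actions/DotNet.GitHubAction@main
    # with: inputs omitted for brevity

  - name: Echo outputs
    run: |
      echo "${{ steps.dotnet-code-metrics.outputs.summary-title }}"
      echo "${{ steps.dotnet-code-metrics.outputs.summary-details }}"
```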

Prepare the .NET app for GitHub Actions


GitHub Actions support two variations of app development:

- JavaScript (optionally TypeScript)
- Docker container (any app that runs on Docker)

The virtual environment where the GitHub Action is hosted may or may not have .NET
installed. For information about what is preinstalled in the target environment, see
GitHub Actions Virtual Environments . While it's possible to run .NET CLI commands
from the GitHub Actions workflows, for a more fully functioning .NET-based GitHub
Action, we recommend that you containerize the app. For more information, see
Containerize a .NET app.

The Dockerfile
A Dockerfile is a set of instructions to build an image. For .NET applications, the
Dockerfile usually sits in the root of the directory next to a solution file.

Dockerfile

# Set the base image as the .NET 7.0 SDK (this includes the runtime)
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build-env

# Copy everything and publish the release (publish implicitly restores and builds)
WORKDIR /app
COPY . ./
RUN dotnet publish ./DotNet.GitHubAction/DotNet.GitHubAction.csproj -c Release -o out --no-self-contained

# Label the container
LABEL maintainer="David Pine <david.pine@microsoft.com>"
LABEL repository="https://github.com/dotnet/samples"
LABEL homepage="https://github.com/dotnet/samples"

# Label as GitHub action
LABEL com.github.actions.name="The name of your GitHub Action"
# Limit to 160 characters
LABEL com.github.actions.description="The description of your GitHub Action."
# See branding:
# https://docs.github.com/actions/creating-actions/metadata-syntax-for-github-actions#branding
LABEL com.github.actions.icon="activity"
LABEL com.github.actions.color="orange"

# Relayer the .NET SDK, anew with the build output
FROM mcr.microsoft.com/dotnet/sdk:7.0
COPY --from=build-env /app/out .
ENTRYPOINT [ "dotnet", "/DotNet.GitHubAction.dll" ]

Note

The .NET app in this tutorial relies on the .NET SDK as part of its functionality. The
Dockerfile creates a new set of Docker layers, independent from the previous ones.
It starts from scratch with the SDK image, and adds the build output from the
previous set of layers. Applications that don't require the .NET SDK as part of
their functionality should rely on just the .NET runtime instead. This greatly
reduces the size of the image.

Dockerfile

FROM mcr.microsoft.com/dotnet/runtime:7.0

Warning

Pay close attention to every step within the Dockerfile, as it does differ from the
standard Dockerfile created from the "add docker support" functionality. In
particular, the last few steps vary by not specifying a new WORKDIR, which would
change the path to the app's ENTRYPOINT.
The preceding Dockerfile steps include:

- Setting the base image from mcr.microsoft.com/dotnet/sdk:7.0 as the alias build-env.
- Copying the contents and publishing the .NET app:
  - The app is published using the dotnet publish command.
- Applying labels to the container.
- Relayering the .NET SDK image from mcr.microsoft.com/dotnet/sdk:7.0.
- Copying the published build output from the build-env.
- Defining the entry point, which delegates to dotnet /DotNet.GitHubAction.dll.

Tip

The MCR in mcr.microsoft.com stands for "Microsoft Container Registry" and is
Microsoft's syndicated container catalog from the official Docker hub. For more
information, see Microsoft syndicates container catalog.

Caution

If you use a global.json file to pin the SDK version, you should explicitly refer to that
version in your Dockerfile. For example, if you've used global.json to pin SDK version
5.0.300, your Dockerfile should use mcr.microsoft.com/dotnet/sdk:5.0.300. This
prevents breaking the GitHub Actions when a new minor revision is released.

Define action inputs and outputs


In the Explore the app section, you learned about the ActionInputs class. This object
represents the inputs for the GitHub Action. For GitHub to recognize that the repository
is a GitHub Action, you need to have an action.yml file at the root of the repository.

yml

name: 'The title of your GitHub Action'
description: 'The description of your GitHub Action'
branding:
  icon: activity
  color: orange
inputs:
  owner:
    description: 'The owner of the repo. Assign from github.repository_owner. Example, "dotnet".'
    required: true
  name:
    description: 'The repository name. Example, "samples".'
    required: true
  branch:
    description: 'The branch name. Assign from github.ref. Example, "refs/heads/main".'
    required: true
  dir:
    description: 'The root directory to work from. Examples, "path/to/code".'
    required: false
    default: '/github/workspace'
outputs:
  summary-title:
    description: 'The title of the code metrics action.'
  summary-details:
    description: 'A detailed summary of all the projects that were flagged.'
  updated-metrics:
    description: 'A boolean value, indicating whether or not the action updated metrics.'
runs:
  using: 'docker'
  image: 'Dockerfile'
  args:
    - '-o'
    - ${{ inputs.owner }}
    - '-n'
    - ${{ inputs.name }}
    - '-b'
    - ${{ inputs.branch }}
    - '-d'
    - ${{ inputs.dir }}

The preceding action.yml file defines:

- The name and description of the GitHub Action
- The branding, which is used in the GitHub Marketplace to help more uniquely identify your action
- The inputs, which maps one-to-one with the ActionInputs class
- The outputs, which are written to in the Program and used as part of workflow composition
- The runs node, which tells GitHub that the app is a Docker application and what arguments to pass to it

For more information, see Metadata syntax for GitHub Actions.


Pre-defined environment variables
With GitHub Actions, you'll get a lot of environment variables by default. For instance,
the variable GITHUB_REF will always contain a reference to the branch or tag that
triggered the workflow run. GITHUB_REPOSITORY has the owner and repository name, for
example, dotnet/docs .

You should explore the pre-defined environment variables and use them accordingly.
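To see a few of these in a run, a workflow step can simply echo them to the log; this sketch is illustrative and uses only documented default variables:

```yml
steps:
  - name: Print predefined environment variables
    run: |
      echo "Ref: $GITHUB_REF"
      echo "Repository: $GITHUB_REPOSITORY"
      echo "Workspace: $GITHUB_WORKSPACE"
```

Each variable is populated automatically by the runner; no with or env configuration is needed.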

Workflow composition
With the .NET app containerized, and the action inputs and outputs defined, you're
ready to consume the action. GitHub Actions are not required to be published in the
GitHub Marketplace to be used. Workflows are defined in the .github/workflows
directory of a repository as YAML files.

yml

# The name of the workflow. Badges will use this name
name: '.NET code metrics'

on:
push:
branches: [ main ]
paths:
- 'github-actions/DotNet.GitHubAction/**' # run on all changes to this dir
- '!github-actions/DotNet.GitHubAction/CODE_METRICS.md' # ignore this file
workflow_dispatch:
inputs:
reason:
description: 'The reason for running the workflow'
required: true
default: 'Manual run'

jobs:
analysis:

runs-on: ubuntu-latest
permissions:
contents: write
pull-requests: write

steps:
- uses: actions/checkout@v3

- name: 'Print manual run reason'


if: ${{ github.event_name == 'workflow_dispatch' }}
run: |
echo 'Reason: ${{ github.event.inputs.reason }}'

- name: .NET code metrics


id: dotnet-code-metrics
uses: dotnet/samples/github-actions/DotNet.GitHubAction@main
env:
GREETINGS: 'Hello, .NET developers!' # ${{ secrets.GITHUB_TOKEN }}
with:
owner: ${{ github.repository_owner }}
name: ${{ github.repository }}
branch: ${{ github.ref }}
dir: ${{ './github-actions/DotNet.GitHubAction' }}

- name: Create pull request


uses: peter-evans/create-pull-request@v4
if: ${{ steps.dotnet-code-metrics.outputs.updated-metrics == 'true' }}
with:
title: '${{ steps.dotnet-code-metrics.outputs.summary-title }}'
body: '${{ steps.dotnet-code-metrics.outputs.summary-details }}'
commit-message: '.NET code metrics, automated pull request.'

Important

For containerized GitHub Actions, you're required to use runs-on: ubuntu-latest .
For more information, see Workflow syntax jobs.<job_id>.runs-on .

The preceding workflow YAML file defines three primary nodes:

- The name of the workflow. This name is also what's used when creating a workflow status badge .
- The on node defines when and how the action is triggered.
- The jobs node outlines the various jobs and steps within each job. Individual steps consume GitHub Actions.

For more information, see Creating your first workflow .

Focusing on the steps node, the composition is more obvious:

yml

steps:
- uses: actions/checkout@v3

- name: 'Print manual run reason'


if: ${{ github.event_name == 'workflow_dispatch' }}
run: |
echo 'Reason: ${{ github.event.inputs.reason }}'
- name: .NET code metrics
id: dotnet-code-metrics
uses: dotnet/samples/github-actions/DotNet.GitHubAction@main
env:
GREETINGS: 'Hello, .NET developers!' # ${{ secrets.GITHUB_TOKEN }}
with:
owner: ${{ github.repository_owner }}
name: ${{ github.repository }}
branch: ${{ github.ref }}
dir: ${{ './github-actions/DotNet.GitHubAction' }}

- name: Create pull request


uses: peter-evans/create-pull-request@v4
if: ${{ steps.dotnet-code-metrics.outputs.updated-metrics == 'true' }}
with:
title: '${{ steps.dotnet-code-metrics.outputs.summary-title }}'
body: '${{ steps.dotnet-code-metrics.outputs.summary-details }}'
commit-message: '.NET code metrics, automated pull request.'

The jobs.steps represents the workflow composition. Steps are orchestrated such that
they're sequential, communicative, and composable. With various GitHub Actions
representing steps, each having inputs and outputs, workflows can be composed.

In the preceding steps, you can observe:

1. The repository is checked out .

2. A message is printed to the workflow log when the workflow is manually run .

3. A step identified as dotnet-code-metrics :

   - uses: dotnet/samples/github-actions/DotNet.GitHubAction@main is the location of the containerized .NET app in this tutorial.
   - env creates an environment variable GREETINGS , which is printed in the execution of the app.
   - with specifies each of the required action inputs.

4. A conditional step, named Create pull request , runs when the dotnet-code-metrics step reports an updated-metrics output value of true .

Important

GitHub allows for the creation of encrypted secrets . Secrets can be used within
workflow composition, using the ${{ secrets.SECRET_NAME }} syntax. In the context
of a GitHub Action, there is a GitHub token that is automatically populated by
default: ${{ secrets.GITHUB_TOKEN }} . For more information, see Context and
expression syntax for GitHub Actions .

Put it all together


The dotnet/samples GitHub repository is home to many .NET sample source code
projects, including the app in this tutorial .

The generated CODE_METRICS.md file is navigable. This file represents the hierarchy
of the projects it analyzed. Each project has a top-level section, and an emoji that
represents the overall status of the highest cyclomatic complexity for nested objects. As
you navigate the file, each section exposes drill-down opportunities with a summary of
each area. The markdown has collapsible sections as an added convenience.

The hierarchy progresses from:

Project file to assembly


Assembly to namespace
Namespace to named-type
Each named-type has a table, and each table has:
Links to line numbers for fields, methods, and properties
Individual ratings for code metrics

In action
The workflow specifies that on a push to the main branch, the action is triggered to run.
When it runs, the Actions tab in GitHub reports the live log stream of its execution.

Performance improvements
If you followed along with the sample, you might have noticed that every time this action is
used, it performs a docker build for that image. So, every trigger incurs the time it takes
to build the container before running it. Before releasing your GitHub Actions to the
marketplace, you should:

1. Build the Docker image automatically.
2. Push the Docker image to the GitHub Container Registry (or any other public container registry).
3. Change the action to use a prebuilt image from a public registry instead of building it.

YAML

# Rest of action.yml content removed for readability

# using Dockerfile
runs:
  using: 'docker'
  image: 'Dockerfile' # Change this line

# using container image from public registry
runs:
  using: 'docker'
  image: 'docker://ghcr.io/some-user/some-registry' # Starting with docker:// is important!

For more information, see GitHub Docs: Working with the Container registry .
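As a sketch of steps 1 and 2, a separate workflow could build and push the image on each release. The image name reuses the placeholder from the example above, and the docker/login-action and docker/build-push-action steps shown here are one common approach, not the only one:

```yml
name: publish container image

on:
  release:
    types: [published]

permissions:
  packages: write

jobs:
  push-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ghcr.io/some-user/some-registry:latest
```

With the image published, the action only needs to reference ghcr.io instead of the local Dockerfile.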
See also
.NET Generic Host
Dependency injection in .NET
Code metrics values
Open-source GitHub Action built in .NET, with a workflow for building and
pushing the Docker image automatically.

Next steps
.NET GitHub Actions sample code

Quickstart: Create a build validation
GitHub workflow
Article • 10/07/2022

In this quickstart, you will learn how to create a GitHub workflow to validate the
compilation of your .NET source code in GitHub. Compiling your .NET code is one of the
most basic validation steps that you can take to help ensure the quality of updates to
your code. If code doesn't compile (or build), it's an easy deterrent and should be a clear
sign that the code needs to be fixed.

Prerequisites
A GitHub account .
A .NET source code repository.

Create a workflow file


In the GitHub repository, add a new YAML file to the .github/workflows directory. Choose
a meaningful file name, something that will clearly indicate what the workflow is
intended to do. For more information, see Workflow file.

Important

GitHub requires that workflow files be placed within the
.github/workflows directory.

Workflow files typically define a composition of one or more GitHub Actions via
jobs.<job_id>.steps[*] . For more information, see Workflow syntax for GitHub
Actions .

Create a new file named build-validation.yml, copy and paste the following YML
contents into it:

yml

name: build

on:
push:
pull_request:
branches: [ main ]
paths:
- '**.cs'
- '**.csproj'

env:
DOTNET_VERSION: '6.0.401' # The .NET SDK version to use

jobs:
build:

name: build-${{matrix.os}}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macOS-latest]

steps:
- uses: actions/checkout@v3
- name: Setup .NET Core
uses: actions/setup-dotnet@v3
with:
dotnet-version: ${{ env.DOTNET_VERSION }}

- name: Install dependencies


run: dotnet restore

- name: Build
run: dotnet build --configuration Release --no-restore

In the preceding workflow composition:

The name: build defines the name; "build" will appear in workflow status badges.

yml

name: build

The on node signifies the events that trigger the workflow:

yml

on:
push:
pull_request:
branches: [ main ]
paths:
- '**.cs'
- '**.csproj'
Triggered when a push or pull_request occurs on the main branch where any
changed files end with the .cs or .csproj file extensions.

The env node defines named environment variables (env var).

yml

env:
DOTNET_VERSION: '6.0.401' # The .NET SDK version to use

The environment variable DOTNET_VERSION is assigned the value '6.0.401' . The


environment variable is later referenced to specify the dotnet-version of the
actions/setup-dotnet@v3 GitHub Action.

The jobs node builds out the steps for the workflow to take.

yml

jobs:
build:

name: build-${{matrix.os}}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macOS-latest]

steps:
- uses: actions/checkout@v3
- name: Setup .NET Core
uses: actions/setup-dotnet@v3
with:
dotnet-version: ${{ env.DOTNET_VERSION }}

- name: Install dependencies


run: dotnet restore

- name: Build
run: dotnet build --configuration Release --no-restore

There is a single job, named build-<os> where the <os> is the operating system
name from the strategy/matrix . The name and runs-on elements are dynamic
for each value in the matrix/os . This will run on the latest versions of Ubuntu,
Windows, and macOS.

The actions/setup-dotnet@v3 GitHub Action is required to set up the .NET SDK


with the specified version from the DOTNET_VERSION environment variable.
(Optionally) Additional steps may be required, depending on your .NET
workload. They're omitted from this example, but you may need additional tools
installed to build your apps.
For example, when building an ASP.NET Core Blazor WebAssembly
application with Ahead-of-Time (AoT) compilation you'd install the
corresponding workload before running restore/build/publish operations.

YAML

- name: Install WASM Tools Workload


run: dotnet workload install wasm-tools

For more information on .NET workloads, see dotnet workload install.

The dotnet restore command is called.

The dotnet build command is called.

In this case, think of a workflow file as a composition that represents the various steps to
build an application. Many .NET CLI commands are available, most of which could be
used in the context of a GitHub Action.
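For example, steps invoking other .NET CLI commands compose the same way as the restore and build steps; the following additions are illustrative, not required:

```yml
- name: Verify formatting
  run: dotnet format --verify-no-changes

- name: Create NuGet package
  run: dotnet pack --configuration Release --no-build
```

Each step is just another CLI invocation, so anything you'd script locally can become a workflow step.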

Create a workflow status badge


It's a common convention for GitHub repositories to have a README.md file at the root
of the repository directory. Likewise, it's nice to report the latest status of the various
workflows. Every workflow can generate a status badge, which is visually appealing
within the README.md file. To add the workflow status badge:

1. From the GitHub repository select the Actions navigation option.

2. All repository workflows are displayed on the left-side, select the desired workflow
and the ellipsis (...) button.

The ellipsis (...) button expands the menu options for the selected workflow.

3. Select the Create status badge menu option.


4. Select the Copy status badge Markdown button.

5. Paste the Markdown into the README.md file, save the file, commit and push the
changes.

For more, see Adding a workflow status badge .
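The copied Markdown follows a predictable shape. As a sketch, with placeholder OWNER, REPO, and workflow file name values:

```md
![build](https://github.com/OWNER/REPO/actions/workflows/build-validation.yml/badge.svg)
```

The badge.svg endpoint renders the latest status of the named workflow's default-branch runs.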

Example build workflow status badge

Badge images showing: build passing, build failing, and build no status.

See also
dotnet restore
dotnet build
actions/checkout
actions/setup-dotnet

Next steps
Quickstart: Create a .NET test GitHub workflow
Quickstart: Create a test validation
GitHub workflow
Article • 10/07/2022

In this quickstart, you will learn how to create a GitHub workflow to test your .NET
source code. Automatically testing your .NET code within GitHub is referred to as
continuous integration (CI), where pull requests or changes to the source trigger
workflows to run. Along with building the source code, testing ensures that the
compiled source code functions as the author intended. More often than not, unit tests
serve as an immediate feedback loop to help ensure the validity of changes to source code.

Prerequisites
A GitHub account .
A .NET source code repository.

Create a workflow file


In the GitHub repository, add a new YAML file to the .github/workflows directory. Choose
a meaningful file name, something that will clearly indicate what the workflow is
intended to do. For more information, see Workflow file.

Important

GitHub requires that workflow files be placed within the
.github/workflows directory.

Workflow files typically define a composition of one or more GitHub Actions via
jobs.<job_id>.steps[*] . For more information, see Workflow syntax for GitHub
Actions .

Create a new file named build-and-test.yml, copy and paste the following YML contents
into it:

yml

name: build and test

on:
push:
pull_request:
branches: [ main ]
paths:
- '**.cs'
- '**.csproj'

env:
DOTNET_VERSION: '6.0.401' # The .NET SDK version to use

jobs:
build-and-test:

name: build-and-test-${{matrix.os}}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macOS-latest]

steps:
- uses: actions/checkout@v3
- name: Setup .NET Core
uses: actions/setup-dotnet@v3
with:
dotnet-version: ${{ env.DOTNET_VERSION }}

- name: Install dependencies


run: dotnet restore

- name: Build
run: dotnet build --configuration Release --no-restore

- name: Test
run: dotnet test --no-restore --verbosity normal

In the preceding workflow composition:

The name: build and test defines the name; "build and test" will appear in
workflow status badges.

yml

name: build and test

The on node signifies the events that trigger the workflow:

yml

on:
push:
pull_request:
branches: [ main ]
paths:
- '**.cs'
- '**.csproj'

Triggered when a push or pull_request occurs on the main branch where any
changed files end with the .cs or .csproj file extensions.

The env node defines named environment variables (env var).

yml

env:
DOTNET_VERSION: '6.0.401' # The .NET SDK version to use

The environment variable DOTNET_VERSION is assigned the value '6.0.401' . The


environment variable is later referenced to specify the dotnet-version of the
actions/setup-dotnet@v3 GitHub Action.

The jobs node builds out the steps for the workflow to take.

yml

jobs:
build-and-test:

name: build-and-test-${{matrix.os}}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macOS-latest]

steps:
- uses: actions/checkout@v3
- name: Setup .NET Core
uses: actions/setup-dotnet@v3
with:
dotnet-version: ${{ env.DOTNET_VERSION }}

- name: Install dependencies


run: dotnet restore

- name: Build
run: dotnet build --configuration Release --no-restore

- name: Test
run: dotnet test --no-restore --verbosity normal

There is a single job, named build-and-test-<os> where the <os> is the operating
system name from the strategy/matrix . The name and runs-on elements are dynamic
for each value in the matrix/os . This will run on the latest versions of Ubuntu,
Windows, and macOS.
The actions/setup-dotnet@v3 GitHub Action is used to set up the .NET SDK with
the specified version from the DOTNET_VERSION environment variable.
The dotnet restore command is called.
The dotnet build command is called.
The dotnet test command is called.
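If you also want the test results retained after the run, a hedged variation of the Test step can write TRX files and upload them as an artifact (the artifact and directory names here are illustrative):

```yml
- name: Test
  run: dotnet test --no-restore --verbosity normal --logger trx --results-directory ./test-results

- name: Upload test results
  if: ${{ always() }}
  uses: actions/upload-artifact@v3
  with:
    name: test-results-${{ matrix.os }}
    path: ./test-results
```

The always() condition ensures the results upload even when tests fail.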

Create a workflow status badge


It's a common convention for GitHub repositories to have a README.md file at the root
of the repository directory. Likewise, it's nice to report the latest status of the various
workflows. Every workflow can generate a status badge, which is visually appealing
within the README.md file. To add the workflow status badge:

1. From the GitHub repository select the Actions navigation option.

2. All repository workflows are displayed on the left-side, select the desired workflow
and the ellipsis (...) button.

The ellipsis (...) button expands the menu options for the selected workflow.

3. Select the Create status badge menu option.

4. Select the Copy status badge Markdown button.


5. Paste the Markdown into the README.md file, save the file, commit and push the
changes.

For more, see Adding a workflow status badge .

Example test workflow status badge

Badge images showing: test passing, test failing, and test no status.

See also
dotnet restore
dotnet build
dotnet test
Unit testing .NET apps
actions/checkout
actions/setup-dotnet

Next steps
Quickstart: Create a GitHub workflow to publish your .NET app
Quickstart: Create a GitHub workflow to
publish an app
Article • 10/07/2022

In this quickstart, you will learn how to create a GitHub workflow to publish your .NET
app from source code. Automatically publishing your .NET app from GitHub to a
destination is referred to as continuous deployment (CD). There are many possible
destinations to publish an application; in this quickstart, you'll publish to Azure.

Prerequisites
A GitHub account .
A .NET source code repository.
An Azure account with an active subscription. Create an account for free .
An ASP.NET Core web app.
An Azure App Service resource.

Add publish profile


To publish the app to Azure, open the Azure portal for the App Service instance of the
application. In the resource Overview, select Get publish profile and save the
*.PublishSetting file locally.

Warning
The publish profile contains sensitive information, such as credentials for accessing
your Azure App Service resource. This information should always be treated very
carefully.

In the GitHub repository, navigate to Settings and select Secrets from the left navigation
menu. Select New repository secret, to add a new secret.

Enter AZURE_PUBLISH_PROFILE as the Name, and paste the XML content from the publish
profile into the Value text area. Select Add secret. For more information, see Encrypted
secrets.

Create a workflow file


In the GitHub repository, add a new YAML file to the .github/workflows directory. Choose
a meaningful file name, something that will clearly indicate what the workflow is
intended to do. For more information, see Workflow file.

Important

GitHub requires that workflow files be placed within the
.github/workflows directory.

Workflow files typically define a composition of one or more GitHub Actions via
jobs.<job_id>.steps[*] . For more information, see Workflow syntax for GitHub
Actions .
Create a new file named publish-app.yml, copy and paste the following YML contents
into it:

yml

name: publish

on:
push:
branches: [ production ]

env:
AZURE_WEBAPP_NAME: DotNetWeb
AZURE_WEBAPP_PACKAGE_PATH: '.' # Set this to the path to your web app project; defaults to the repository root
DOTNET_VERSION: '6.0.401' # The .NET SDK version to use

jobs:
publish:

runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v3
- name: Setup .NET Core
uses: actions/setup-dotnet@v3
with:
dotnet-version: ${{ env.DOTNET_VERSION }}

- name: Install dependencies


run: dotnet restore

- name: Build
run: |
cd DotNet.WebApp
dotnet build --configuration Release --no-restore
dotnet publish -c Release -o ../dotnet-webapp -r linux-x64 --self-contained true /p:UseAppHost=true
- name: Test
run: |
cd DotNet.WebApp.Tests
dotnet test --no-restore --verbosity normal

- uses: azure/webapps-deploy@v2
name: Deploy
with:
app-name: ${{ env.AZURE_WEBAPP_NAME }}
publish-profile: ${{ secrets.AZURE_PUBLISH_PROFILE }}
package: '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/dotnet-webapp'

In the preceding workflow composition:


The name: publish defines the name; "publish" will appear in workflow status
badges.

yml

name: publish

The on node signifies the events that trigger the workflow:

yml

on:
push:
branches: [ production ]

Triggered when a push occurs on the production branch.

The env node defines named environment variables (env var).

yml

env:
AZURE_WEBAPP_NAME: DotNetWeb
AZURE_WEBAPP_PACKAGE_PATH: '.' # Set this to the path to your web app project; defaults to the repository root
DOTNET_VERSION: '6.0.401' # The .NET SDK version to use

The environment variable AZURE_WEBAPP_NAME is assigned the value DotNetWeb .


The environment variable AZURE_WEBAPP_PACKAGE_PATH is assigned the value '.' .
The environment variable DOTNET_VERSION is assigned the value '6.0.401' . The
environment variable is later referenced to specify the dotnet-version of the
actions/setup-dotnet@v3 GitHub Action.

The jobs node builds out the steps for the workflow to take.

yml

jobs:
publish:

runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v3
- name: Setup .NET Core
uses: actions/setup-dotnet@v3
with:
dotnet-version: ${{ env.DOTNET_VERSION }}
- name: Install dependencies
run: dotnet restore

- name: Build
run: |
cd DotNet.WebApp
dotnet build --configuration Release --no-restore
dotnet publish -c Release -o ../dotnet-webapp -r linux-x64 --self-contained true /p:UseAppHost=true
- name: Test
run: |
cd DotNet.WebApp.Tests
dotnet test --no-restore --verbosity normal

- uses: azure/webapps-deploy@v2
name: Deploy
with:
app-name: ${{ env.AZURE_WEBAPP_NAME }}
publish-profile: ${{ secrets.AZURE_PUBLISH_PROFILE }}
package: '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/dotnet-webapp'

There is a single job, named publish , that runs on the latest version of
Ubuntu.
The actions/setup-dotnet@v3 GitHub Action is used to set up the .NET SDK with
the specified version from the DOTNET_VERSION environment variable.
The dotnet restore command is called.
The dotnet build command is called.
The dotnet publish command is called.
The dotnet test command is called.
The azure/webapps-deploy@v2 GitHub Action deploys the app with the given
publish-profile and package .

The publish-profile is assigned from the AZURE_PUBLISH_PROFILE repository


secret.

Create a workflow status badge


It's a common convention for GitHub repositories to have a README.md file at the root
of the repository directory. Likewise, it's nice to report the latest status of the various
workflows. Every workflow can generate a status badge, which is visually appealing
within the README.md file. To add the workflow status badge:

1. From the GitHub repository select the Actions navigation option.

2. All repository workflows are displayed on the left-side, select the desired workflow
and the ellipsis (...) button.
The ellipsis (...) button expands the menu options for the selected workflow.

3. Select the Create status badge menu option.

4. Select the Copy status badge Markdown button.

5. Paste the Markdown into the README.md file, save the file, commit and push the
changes.

For more, see Adding a workflow status badge .

Example publish workflow status badge


Badge images showing: publish passing, publish failing, and publish no status.

See also
dotnet restore
dotnet build
dotnet test
dotnet publish

Next steps
Quickstart: Create a CodeQL GitHub workflow
Quickstart: Create a security scan
GitHub workflow
Article • 02/18/2022

In this quickstart, you will learn how to create a CodeQL GitHub workflow to automate
the discovery of vulnerabilities in your .NET codebase.

In CodeQL, code is treated as data. Security vulnerabilities, bugs, and other errors
are modeled as queries that can be executed against databases extracted from
code.

— GitHub CodeQL: About

Prerequisites
A GitHub account .
A .NET source code repository.

Create a workflow file


In the GitHub repository, add a new YAML file to the .github/workflows directory. Choose
a meaningful file name, something that will clearly indicate what the workflow is
intended to do. For more information, see Workflow file.

Important

GitHub requires that workflow files be placed within the
.github/workflows directory.

Workflow files typically define a composition of one or more GitHub Actions via
jobs.<job_id>.steps[*] . For more information, see Workflow syntax for GitHub
Actions .

Create a new file named codeql-analysis.yml, copy and paste the following YML contents
into it:

yml

name: "CodeQL"
on:
push:
branches: [main]
paths:
- '**.cs'
- '**.csproj'
pull_request:
branches: [main]
paths:
- '**.cs'
- '**.csproj'
schedule:
- cron: '0 8 * * 4'

jobs:
analyze:

name: analyze
runs-on: ubuntu-latest

strategy:
fail-fast: false
matrix:
language: ['csharp']

steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
fetch-depth: 2

- run: git checkout HEAD^2


if: ${{ github.event_name == 'pull_request' }}

- name: Initialize CodeQL


uses: github/codeql-action/init@v1
with:
languages: ${{ matrix.language }}

- name: Autobuild
uses: github/codeql-action/autobuild@v1

- name: Perform CodeQL Analysis


uses: github/codeql-action/analyze@v1

In the preceding workflow composition:

The name: CodeQL defines the name; "CodeQL" will appear in workflow status
badges.

yml

name: "CodeQL"
The on node signifies the events that trigger the workflow:

yml

on:
push:
branches: [main]
paths:
- '**.cs'
- '**.csproj'
pull_request:
branches: [main]
paths:
- '**.cs'
- '**.csproj'
schedule:
- cron: '0 8 * * 4'

Triggered when a push or pull_request occurs on the main branch where any
changed files end with the .cs or .csproj file extensions.
As a cron job (on a schedule), to run at 8:00 UTC every Thursday.

The jobs node builds out the steps for the workflow to take.

yml

jobs:
analyze:

name: analyze
runs-on: ubuntu-latest

strategy:
fail-fast: false
matrix:
language: ['csharp']

steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
fetch-depth: 2

- run: git checkout HEAD^2


if: ${{ github.event_name == 'pull_request' }}

- name: Initialize CodeQL


uses: github/codeql-action/init@v1
with:
languages: ${{ matrix.language }}
- name: Autobuild
uses: github/codeql-action/autobuild@v1

- name: Perform CodeQL Analysis


uses: github/codeql-action/analyze@v1

There is a single job, named analyze , that runs on the latest version of
Ubuntu.
The strategy defines C# as the language .
The github/codeql-action/init@v1 GitHub Action is used to initialize CodeQL.
The github/codeql-action/autobuild@v1 GitHub Action builds the .NET project.
The github/codeql-action/analyze@v1 GitHub Action performs the CodeQL
analysis.

For more information, see GitHub Actions: Configure code scanning .

Create a workflow status badge


It's a common convention for GitHub repositories to have a README.md file at the root
of the repository directory. Likewise, it's nice to report the latest status of the various
workflows. Every workflow can generate a status badge, which is visually appealing
within the README.md file. To add the workflow status badge:

1. From the GitHub repository select the Actions navigation option.

2. All repository workflows are displayed on the left-side, select the desired workflow
and the ellipsis (...) button.

The ellipsis (...) button expands the menu options for the selected workflow.

3. Select the Create status badge menu option.


4. Select the Copy status badge Markdown button.

5. Paste the Markdown into the README.md file, save the file, commit and push the
changes.

For more, see Adding a workflow status badge .

Example CodeQL workflow status badge

Badge images showing: CodeQL passing, CodeQL failing, and CodeQL no status.

See also
Secure coding guidelines
actions/checkout
actions/setup-dotnet

Next steps
Tutorial: Create a GitHub Action with .NET
Testing in .NET
Article • 12/16/2023

This article introduces the concept of testing and illustrates how different kinds of tests
can be used to validate code. Various tools are available for testing .NET applications,
such as the .NET CLI or Integrated Development Environments (IDEs).

Test types
Automated tests are a great way to ensure that the application code does what its
authors intend. This article covers unit tests, integration tests, and load tests.

Unit tests
A unit test is a test that exercises individual software components or methods, also
known as a "unit of work." Unit tests should only test code within the developer's
control. They don't test infrastructure concerns. Infrastructure concerns include
interacting with databases, file systems, and network resources.

For more information on creating unit tests, see Testing tools.
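As a minimal illustration, a unit test exercises a single method in isolation; the Calculator class and names below are invented for this example, using xUnit attributes from the tools covered later in this article:

```csharp
using Xunit;

public class Calculator
{
    public int Add(int a, int b) => a + b;
}

public class CalculatorTests
{
    [Fact]
    public void Add_ReturnsSumOfInputs()
    {
        // Arrange: create the unit under test
        var calculator = new Calculator();

        // Act: exercise the single method
        int result = calculator.Add(2, 3);

        // Assert: verify the expected outcome
        Assert.Equal(5, result);
    }
}
```

The test touches no database, file system, or network, which keeps it fast and deterministic.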

Integration tests
An integration test differs from a unit test in that it exercises two or more software
components' ability to function together, also known as their "integration." These tests
operate on a broader spectrum of the system under test, whereas unit tests focus on
individual components. Often, integration tests do include infrastructure concerns.

Load tests
A load test aims to determine whether or not a system can handle a specified load, for
example, the number of concurrent users of an application and the app's ability to
handle their interactions responsively. For more information on load testing of web
applications, see ASP.NET Core load/stress testing.

Test considerations
Keep in mind there are best practices for writing tests. For example, Test Driven
Development (TDD) is when you write a unit test before the code it's meant to check.
TDD is like creating an outline for a book before you write it. The unit test helps
developers write simpler, readable, and efficient code.

Testing tools
.NET is a multi-language development platform, and you can write various test types for
C#, F#, and Visual Basic. For each of these languages, you can choose between several
test frameworks.

xUnit
xUnit is a free, open-source, community-focused unit testing tool for .NET. The
original inventor of NUnit v2 wrote xUnit.net. xUnit.net is the latest technology for unit
testing .NET apps. It also works with ReSharper, CodeRush, TestDriven.NET, and
Xamarin . xUnit.net is a project of the .NET Foundation and operates under its code
of conduct.

For more information, see the following resources:

Unit testing with C#


Unit testing with F#
Unit testing with Visual Basic

NUnit
NUnit is a unit-testing framework for all .NET languages. Initially, NUnit was ported
from JUnit, and the current production release has been rewritten with many new
features and support for a wide range of .NET platforms. It's a project of the .NET
Foundation .

For more information, see the following resources:

Unit testing with C#


Unit testing with F#
Unit testing with Visual Basic

MSTest
MSTest is the Microsoft test framework for all .NET languages. It's extensible and
works with both .NET CLI and Visual Studio. For more information, see the following
resources:
Unit testing with C#
Unit testing with F#
Unit testing with Visual Basic

MSTest runner
The MSTest runner is a lightweight and portable alternative to VSTest for running tests
in continuous integration (CI) pipelines, and in Visual Studio Test Explorer. For more
information, see MSTest runner overview.

.NET CLI
You can run a solution's unit tests from the .NET CLI with the dotnet test command. The
.NET CLI exposes most of the functionality that Integrated Development Environments
(IDEs) make available through user interfaces. The .NET CLI is cross-platform and
available to use as part of continuous integration and delivery pipelines. The .NET CLI is
used with scripted processes to automate common tasks.
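For example, the following commands are typical in local runs and CI scripts. The filter value shown is illustrative; substitute a namespace or test name from your own solution:

```shell
# Run all tests in the solution or project in the current directory.
dotnet test

# Run only tests whose fully qualified name contains "PrimeService".
dotnet test --filter "FullyQualifiedName~PrimeService"

# Write results to a TRX file that CI systems can publish.
dotnet test --logger "trx;LogFileName=test-results.trx"
```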

IDE
Whether you're using Visual Studio or Visual Studio Code, there are graphical user
interfaces for testing functionality. IDEs also offer features that the CLI doesn't, for
example, Live Unit Testing. For more information, see Including and excluding
tests with Visual Studio.

See also
For more information, see the following articles:

Unit testing best practices with .NET


Integration tests in ASP.NET Core
Running selective unit tests
Use code coverage for unit testing

Unit testing best practices with .NET
Core and .NET Standard
Article • 11/04/2022

There are numerous benefits of writing unit tests; they help with regression, provide
documentation, and facilitate good design. However, hard to read and brittle unit tests
can wreak havoc on your code base. This article describes some best practices regarding
unit test design for your .NET Core and .NET Standard projects.

In this guide, you learn some best practices when writing unit tests to keep your tests
resilient and easy to understand.

By John Reese with special thanks to Roy Osherove

Why unit test?

Less time performing functional tests


Functional tests are expensive. They typically involve opening up the application and
performing a series of steps that you (or someone else) must follow in order to validate
the expected behavior. These steps might not always be known to the tester. They'll
have to reach out to someone more knowledgeable in the area in order to carry out the
test. Testing itself could take seconds for trivial changes, or minutes for larger changes.
Lastly, this process must be repeated for every change that you make in the system.

Unit tests, on the other hand, take milliseconds, can be run at the press of a button, and
don't necessarily require any knowledge of the system at large. Whether or not the test
passes or fails is up to the test runner, not the individual.

Protection against regression


Regression defects are defects that are introduced when a change is made to the
application. It's common for testers to not only test their new feature but also test
features that existed beforehand in order to verify that previously implemented features
still function as expected.

With unit testing, it's possible to rerun your entire suite of tests after every build or even
after you change a line of code, giving you confidence that your new code doesn't
break existing functionality.
Executable documentation
It might not always be obvious what a particular method does or how it behaves given a
certain input. You might ask yourself: How does this method behave if I pass it a blank
string? Null?

When you have a suite of well-named unit tests, each test should be able to clearly
explain the expected output for a given input. In addition, it should be able to verify that
it actually works.

Less coupled code


When code is tightly coupled, it can be difficult to unit test. Without creating unit tests
for the code that you're writing, coupling might be less apparent.

Writing tests for your code will naturally decouple your code, because it would be more
difficult to test otherwise.

Characteristics of a good unit test


Fast: It isn't uncommon for mature projects to have thousands of unit tests. Unit
tests should take little time to run. Milliseconds.
Isolated: Unit tests are standalone, can be run in isolation, and have no
dependencies on any outside factors such as a file system or database.
Repeatable: Running a unit test should be consistent with its results, that is, it
always returns the same result if you don't change anything in between runs.
Self-Checking: The test should be able to automatically detect if it passed or failed
without any human interaction.
Timely: A unit test shouldn't take a disproportionately long time to write compared
to the code being tested. If you find testing the code taking a large amount of
time compared to writing the code, consider a design that is more testable.

Code coverage
A high code coverage percentage is often associated with a higher quality of code.
However, the measurement itself can't determine the quality of code. Setting an overly
ambitious code coverage percentage goal can be counterproductive. Imagine a complex
project with thousands of conditional branches, and imagine that you set a goal of 95%
code coverage. Currently the project maintains 90% code coverage. The amount of time
it takes to account for all of the edge cases in the remaining 5% could be a massive
undertaking, and the value proposition quickly diminishes.
A high code coverage percentage isn't an indicator of success, nor does it imply high
code quality. It just represents the amount of code that is covered by unit tests. For
more information, see unit testing code coverage.
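To measure coverage rather than guess at it, the coverlet collector (included in the dotnet new xunit template as the coverlet.collector package) can produce a report from the CLI:

```shell
# Collect code coverage while running tests.
# Results are written as coverage.cobertura.xml under a TestResults directory.
dotnet test --collect:"XPlat Code Coverage"
```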

Let's speak the same language


The term mock is unfortunately often misused when talking about testing. The following
points define the most common types of fakes when writing unit tests:

Fake - A fake is a generic term that can be used to describe either a stub or a mock
object. Whether it's a stub or a mock depends on the context in which it's used. So in
other words, a fake can be a stub or a mock.

Mock - A mock object is a fake object in the system that decides whether or not a unit
test has passed or failed. A mock starts out as a Fake until it's asserted against.

Stub - A stub is a controllable replacement for an existing dependency (or collaborator)


in the system. By using a stub, you can test your code without dealing with the
dependency directly. By default, a stub starts out as a fake.

Consider the following code snippet:

C#

var mockOrder = new MockOrder();


var purchase = new Purchase(mockOrder);

purchase.ValidateOrders();

Assert.True(purchase.CanBeShipped);

The preceding example shows a stub being referred to as a mock. In this case, it's
a stub. You're just passing in the Order as a means to instantiate Purchase
(the system under test). The name MockOrder is also misleading because, again, the order
isn't a mock.

A better approach would be:

C#

var stubOrder = new FakeOrder();


var purchase = new Purchase(stubOrder);

purchase.ValidateOrders();

Assert.True(purchase.CanBeShipped);
By renaming the class to FakeOrder , you've made the class a lot more generic. The class
can be used as a mock or a stub, whichever is better for the test case. In the preceding
example, FakeOrder is used as a stub. You're not using FakeOrder in any shape or form
during the assert. FakeOrder was passed into the Purchase class to satisfy the
requirements of the constructor.

To use it as a Mock, you could do something like the following code:

C#

var mockOrder = new FakeOrder();


var purchase = new Purchase(mockOrder);

purchase.ValidateOrders();

Assert.True(mockOrder.Validated);

In this case, you're checking a property on the Fake (asserting against it), so in the
preceding code snippet, the mockOrder is a Mock.

) Important

It's important to get this terminology correct. If you call your stubs "mocks," other
developers are going to make false assumptions about your intent.

The main thing to remember about mocks versus stubs is that mocks are just like stubs,
but you assert against the mock object, whereas you don't assert against a stub.

Best practices
Try not to introduce dependencies on infrastructure when writing unit tests. The
dependencies make the tests slow and brittle and should be reserved for integration
tests. You can avoid these dependencies in your application by following the Explicit
Dependencies Principle and using Dependency Injection. You can also keep your unit
tests in a separate project from your integration tests. This approach ensures your unit
test project doesn't have references to or dependencies on infrastructure packages.

Naming your tests


The name of your test should consist of three parts:
The name of the method being tested.
The scenario under which it's being tested.
The expected behavior when the scenario is invoked.

Why?
Naming standards are important because they explicitly express the intent of the test.
Tests are more than just making sure your code works; they also provide documentation.
Just by looking at the suite of unit tests, you should be able to infer the behavior of your
code without even looking at the code itself. Additionally, when tests fail, you can see
exactly which scenarios don't meet your expectations.

Bad:

C#

[Fact]
public void Test_Single()
{
var stringCalculator = new StringCalculator();

var actual = stringCalculator.Add("0");

Assert.Equal(0, actual);
}

Better:

C#

[Fact]
public void Add_SingleNumber_ReturnsSameNumber()
{
var stringCalculator = new StringCalculator();

var actual = stringCalculator.Add("0");

Assert.Equal(0, actual);
}

Arranging your tests


Arrange, Act, Assert is a common pattern when unit testing. As the name implies, it
consists of three main actions:
Arrange your objects, create and set them up as necessary.
Act on an object.
Assert that something is as expected.

Why?
Clearly separates what is being tested from the arrange and assert steps.
Less chance to intermix assertions with "Act" code.

Readability is one of the most important aspects when writing a test. Separating each of
these actions within the test clearly highlights the dependencies required to call your
code, how your code is being called, and what you're trying to assert. While it might be
possible to combine some steps and reduce the size of your test, the primary goal is to
make the test as readable as possible.

Bad:

C#

[Fact]
public void Add_EmptyString_ReturnsZero()
{
// Arrange
var stringCalculator = new StringCalculator();

// Assert
Assert.Equal(0, stringCalculator.Add(""));
}

Better:

C#

[Fact]
public void Add_EmptyString_ReturnsZero()
{
// Arrange
var stringCalculator = new StringCalculator();

// Act
var actual = stringCalculator.Add("");

// Assert
Assert.Equal(0, actual);
}
Write minimally passing tests
The input to be used in a unit test should be the simplest possible in order to verify the
behavior that you're currently testing.

Why?

Tests become more resilient to future changes in the codebase.


Closer to testing behavior over implementation.

Tests that include more information than required to pass the test have a higher chance
of introducing errors into the test and can make the intent of the test less clear. When
writing tests, you want to focus on the behavior. Setting extra properties on models or
using non-zero values when not required only detracts from what you're trying to
prove.

Bad:

C#

[Fact]
public void Add_SingleNumber_ReturnsSameNumber()
{
var stringCalculator = new StringCalculator();

var actual = stringCalculator.Add("42");

Assert.Equal(42, actual);
}

Better:

C#

[Fact]
public void Add_SingleNumber_ReturnsSameNumber()
{
var stringCalculator = new StringCalculator();

var actual = stringCalculator.Add("0");

Assert.Equal(0, actual);
}
Avoid magic strings
Naming variables in unit tests is as important, if not more important, than naming variables
in production code. Unit tests shouldn't contain magic strings.

Why?

Prevents the need for the reader of the test to inspect the production code in
order to figure out what makes the value special.
Explicitly shows what you're trying to prove rather than trying to accomplish.

Magic strings can cause confusion to the reader of your tests. If a string looks out of the
ordinary, they might wonder why a certain value was chosen for a parameter or return
value. This type of string value might lead them to take a closer look at the
implementation details, rather than focus on the test.

 Tip

When writing tests, you should aim to express as much intent as possible. In the
case of magic strings, a good approach is to assign these values to constants.

Bad:

C#

[Fact]
public void Add_BigNumber_ThrowsException()
{
var stringCalculator = new StringCalculator();

Action actual = () => stringCalculator.Add("1001");

Assert.Throws<OverflowException>(actual);
}

Better:

C#

[Fact]
public void Add_MaximumSumResult_ThrowsOverflowException()
{
var stringCalculator = new StringCalculator();
const string MAXIMUM_RESULT = "1001";

Action actual = () => stringCalculator.Add(MAXIMUM_RESULT);

Assert.Throws<OverflowException>(actual);
}

Avoid logic in tests


When writing your unit tests, avoid manual string concatenation and logical conditions,
such as if , while , for , and switch .

Why?
Less chance to introduce a bug inside of your tests.
Focus on the end result, rather than implementation details.

When you introduce logic into your test suite, the chance of introducing a bug into it
increases dramatically. The last place that you want to find a bug is within your test
suite. You should have a high level of confidence that your tests work, otherwise, you
won't trust them. Tests that you don't trust, don't provide any value. When a test fails,
you want to have a sense that something is wrong with your code and that it can't be
ignored.

 Tip

If logic in your test seems unavoidable, consider splitting the test up into two or
more different tests.

Bad:

C#

[Fact]
public void Add_MultipleNumbers_ReturnsCorrectResults()
{
var stringCalculator = new StringCalculator();
var expected = 0;
var testCases = new[]
{
"0,0,0",
"0,1,2",
"1,2,3"
};
foreach (var test in testCases)
{
Assert.Equal(expected, stringCalculator.Add(test));
expected += 3;
}
}

Better:

C#

[Theory]
[InlineData("0,0,0", 0)]
[InlineData("0,1,2", 3)]
[InlineData("1,2,3", 6)]
public void Add_MultipleNumbers_ReturnsSumOfNumbers(string input, int
expected)
{
var stringCalculator = new StringCalculator();

var actual = stringCalculator.Add(input);

Assert.Equal(expected, actual);
}

Prefer helper methods to setup and teardown


If you require a similar object or state for your tests, prefer a helper method to
Setup and Teardown attributes, if they exist.

Why?

Less confusion when reading the tests since all of the code is visible from within
each test.
Less chance of setting up too much or too little for the given test.
Less chance of sharing state between tests, which creates unwanted dependencies
between them.

In unit testing frameworks, Setup is called before each and every unit test within your
test suite. While some might see this as a useful tool, it generally ends up leading to
bloated and hard to read tests. Each test will generally have different requirements in
order to get the test up and running. Unfortunately, Setup forces you to use the exact
same requirements for each test.
7 Note

xUnit has removed both SetUp and TearDown as of version 2.x

Bad:

C#

private readonly StringCalculator stringCalculator;


public StringCalculatorTests()
{
stringCalculator = new StringCalculator();
}

C#

// more tests...

C#

[Fact]
public void Add_TwoNumbers_ReturnsSumOfNumbers()
{
var result = stringCalculator.Add("0,1");

Assert.Equal(1, result);
}

Better:

C#

[Fact]
public void Add_TwoNumbers_ReturnsSumOfNumbers()
{
var stringCalculator = CreateDefaultStringCalculator();

var actual = stringCalculator.Add("0,1");

Assert.Equal(1, actual);
}

C#
// more tests...

C#

private StringCalculator CreateDefaultStringCalculator()


{
return new StringCalculator();
}

Avoid multiple acts


When writing your tests, try to only include one act per test. Common approaches to
using only one act include:

Create a separate test for each act.


Use parameterized tests.

Why?

When the test fails, it is clear which act is failing.


Ensures that the test is focused on just a single case.
Gives you the entire picture as to why your tests are failing.

Multiple acts need to be individually asserted, and it isn't guaranteed that all of the
asserts will be executed. In most unit testing frameworks, once an assert fails in a unit
test, the remaining asserts are never executed. This kind of process
can be confusing because functionality that is actually working can be shown as failing.

Bad:

C#

[Fact]
public void Add_EmptyEntries_ShouldBeTreatedAsZero()
{
// Act
var actual1 = stringCalculator.Add("");
var actual2 = stringCalculator.Add(",");

// Assert
Assert.Equal(0, actual1);
Assert.Equal(0, actual2);
}
Better:

C#

[Theory]
[InlineData("", 0)]
[InlineData(",", 0)]
public void Add_EmptyEntries_ShouldBeTreatedAsZero(string input, int
expected)
{
// Arrange
var stringCalculator = new StringCalculator();

// Act
var actual = stringCalculator.Add(input);

// Assert
Assert.Equal(expected, actual);
}

Validate private methods by unit testing public methods


In most cases, there shouldn't be a need to test a private method. Private methods are
an implementation detail and never exist in isolation. At some point, there's going to be
a public facing method that calls the private method as part of its implementation. What
you should care about is the end result of the public method that calls into the private
one.

Consider the following case:

C#

public string ParseLogLine(string input)


{
var sanitizedInput = TrimInput(input);
return sanitizedInput;
}

private string TrimInput(string input)


{
return input.Trim();
}

Your first reaction might be to start writing a test for TrimInput because you want to
ensure that the method is working as expected. However, it's entirely possible that
ParseLogLine manipulates sanitizedInput in such a way that you don't expect,
rendering a test against TrimInput useless.
The real test should be done against the public facing method ParseLogLine because
that is what you should ultimately care about.

C#

public void ParseLogLine_StartsAndEndsWithSpace_ReturnsTrimmedResult()


{
var parser = new Parser();

var result = parser.ParseLogLine(" a ");

    Assert.Equal("a", result);
}

With this viewpoint, if you see a private method, find the public method and write your
tests against that method. Just because a private method returns the expected result,
doesn't mean the system that eventually calls the private method uses the result
correctly.

Stub static references


One of the principles of a unit test is that it must have full control of the system under
test. This principle can be problematic when production code includes calls to static
references (for example, DateTime.Now ). Consider the following code:

C#

public int GetDiscountedPrice(int price)


{
if (DateTime.Now.DayOfWeek == DayOfWeek.Tuesday)
{
return price / 2;
}
else
{
return price;
}
}

How can this code possibly be unit tested? You might try an approach such as:

C#

public void GetDiscountedPrice_NotTuesday_ReturnsFullPrice()


{
var priceCalculator = new PriceCalculator();
var actual = priceCalculator.GetDiscountedPrice(2);

    Assert.Equal(2, actual);
}

public void GetDiscountedPrice_OnTuesday_ReturnsHalfPrice()


{
var priceCalculator = new PriceCalculator();

var actual = priceCalculator.GetDiscountedPrice(2);

    Assert.Equal(1, actual);
}

Unfortunately, you'll quickly realize that there are a couple of problems with your tests.

If the test suite is run on a Tuesday, the second test will pass, but the first test will
fail.
If the test suite is run on any other day, the first test will pass, but the second test
will fail.

To solve these problems, you'll need to introduce a seam into your production code.
One approach is to wrap the code that you need to control in an interface and have the
production code depend on that interface.

C#

public interface IDateTimeProvider


{
DayOfWeek DayOfWeek();
}

public int GetDiscountedPrice(int price, IDateTimeProvider dateTimeProvider)


{
if (dateTimeProvider.DayOfWeek() == DayOfWeek.Tuesday)
{
return price / 2;
}
else
{
return price;
}
}

Your test suite now becomes as follows:

C#

public void GetDiscountedPrice_NotTuesday_ReturnsFullPrice()


{
var priceCalculator = new PriceCalculator();
var dateTimeProviderStub = new Mock<IDateTimeProvider>();
dateTimeProviderStub.Setup(dtp =>
dtp.DayOfWeek()).Returns(DayOfWeek.Monday);

    var actual = priceCalculator.GetDiscountedPrice(2, dateTimeProviderStub.Object);

    Assert.Equal(2, actual);
}

public void GetDiscountedPrice_OnTuesday_ReturnsHalfPrice()


{
var priceCalculator = new PriceCalculator();
var dateTimeProviderStub = new Mock<IDateTimeProvider>();
dateTimeProviderStub.Setup(dtp =>
dtp.DayOfWeek()).Returns(DayOfWeek.Tuesday);

    var actual = priceCalculator.GetDiscountedPrice(2, dateTimeProviderStub.Object);

    Assert.Equal(1, actual);
}

Now the test suite has full control over DateTime.Now and can stub any value when
calling into the method.
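The Mock<IDateTimeProvider> type in the preceding tests comes from a mocking library (the API shown resembles Moq). If you'd rather not take a library dependency, a hand-written stub of the same interface works just as well. The stub class name here is illustrative:

```csharp
// A hand-written stub for IDateTimeProvider; no mocking library required.
public class FixedDayOfWeekStub : IDateTimeProvider
{
    private readonly DayOfWeek _dayOfWeek;

    public FixedDayOfWeekStub(DayOfWeek dayOfWeek) => _dayOfWeek = dayOfWeek;

    public DayOfWeek DayOfWeek() => _dayOfWeek;
}

[Fact]
public void GetDiscountedPrice_OnTuesday_ReturnsHalfPrice()
{
    var priceCalculator = new PriceCalculator();
    var dateTimeProviderStub = new FixedDayOfWeekStub(DayOfWeek.Tuesday);

    var actual = priceCalculator.GetDiscountedPrice(2, dateTimeProviderStub);

    Assert.Equal(1, actual);
}
```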
Unit testing C# in .NET using dotnet test
and xUnit
Article • 03/07/2024

This tutorial shows how to build a solution containing a unit test project and source
code project. To follow the tutorial using a pre-built solution, view or download the
sample code . For download instructions, see Samples and Tutorials.

Create the solution


In this section, a solution is created that contains the source and test projects. The
completed solution has the following directory structure:

txt

/unit-testing-using-dotnet-test
unit-testing-using-dotnet-test.sln
/PrimeService
PrimeService.cs
PrimeService.csproj
/PrimeService.Tests
PrimeService_IsPrimeShould.cs
        PrimeService.Tests.csproj

The following instructions provide the steps to create the test solution. See Commands
to create test solution for instructions to create the test solution in one step.

Open a shell window.

Run the following command:

.NET CLI

dotnet new sln -o unit-testing-using-dotnet-test

The dotnet new sln command creates a new solution in the unit-testing-using-
dotnet-test directory.

Change directory to the unit-testing-using-dotnet-test folder.

Run the following command:

.NET CLI
dotnet new classlib -o PrimeService

The dotnet new classlib command creates a new class library project in the
PrimeService folder. The new class library will contain the code to be tested.

Rename Class1.cs to PrimeService.cs.

Replace the code in PrimeService.cs with the following code:

C#

using System;

namespace Prime.Services
{
public class PrimeService
{
public bool IsPrime(int candidate)
{
throw new NotImplementedException("Not implemented.");
}
}
}

The preceding code:


Throws a NotImplementedException with a message indicating it's not
implemented.
Is updated later in the tutorial.

In the unit-testing-using-dotnet-test directory, run the following command to add


the class library project to the solution:

.NET CLI

dotnet sln add ./PrimeService/PrimeService.csproj

Create the PrimeService.Tests project by running the following command:

.NET CLI

dotnet new xunit -o PrimeService.Tests

The preceding command:


Creates the PrimeService.Tests project in the PrimeService.Tests directory. The
test project uses xUnit as the test library.
Configures the test runner by adding the following <PackageReference
/> elements to the project file:

Microsoft.NET.Test.Sdk
xunit

xunit.runner.visualstudio

coverlet.collector

Add the test project to the solution file by running the following command:

.NET CLI

dotnet sln add ./PrimeService.Tests/PrimeService.Tests.csproj

Add the PrimeService class library as a dependency to the PrimeService.Tests


project:

.NET CLI

dotnet add ./PrimeService.Tests/PrimeService.Tests.csproj reference


./PrimeService/PrimeService.csproj

Commands to create the solution


This section summarizes all the commands in the previous section. Skip this section if
you've completed the steps in the previous section.

The following commands create the test solution on a Windows machine. For macOS
and Linux, replace the ren command with mv to rename the file:

.NET CLI

dotnet new sln -o unit-testing-using-dotnet-test


cd unit-testing-using-dotnet-test
dotnet new classlib -o PrimeService
ren .\PrimeService\Class1.cs PrimeService.cs
dotnet sln add ./PrimeService/PrimeService.csproj
dotnet new xunit -o PrimeService.Tests
dotnet add ./PrimeService.Tests/PrimeService.Tests.csproj reference
./PrimeService/PrimeService.csproj
dotnet sln add ./PrimeService.Tests/PrimeService.Tests.csproj

Follow the instructions for "Replace the code in PrimeService.cs with the following code"
in the previous section.
Create a test
A popular approach in test driven development (TDD) is to write a (failing) test before
implementing the target code. This tutorial uses the TDD approach. The IsPrime
method is callable, but not implemented. A test call to IsPrime fails. With TDD, a test is
written that is known to fail. The target code is updated to make the test pass. You keep
repeating this approach, writing a failing test and then updating the target code to pass.

Update the PrimeService.Tests project:

Delete PrimeService.Tests/UnitTest1.cs.
Create a PrimeService.Tests/PrimeService_IsPrimeShould.cs file.
Replace the code in PrimeService_IsPrimeShould.cs with the following code:

C#

using Xunit;
using Prime.Services;

namespace Prime.UnitTests.Services
{
public class PrimeService_IsPrimeShould
{
[Fact]
public void IsPrime_InputIs1_ReturnFalse()
{
var primeService = new PrimeService();
bool result = primeService.IsPrime(1);

Assert.False(result, "1 should not be prime");


}
}
}

The [Fact] attribute declares a test method that's run by the test runner. From the
PrimeService.Tests folder, run dotnet test . The dotnet test command builds both
projects and runs the tests. The xUnit test runner contains the program entry point to
run the tests. dotnet test starts the test runner using the unit test project.

The test fails because IsPrime hasn't been implemented. Using the TDD approach, write
only enough code so this test passes. Update IsPrime with the following code:

C#

public bool IsPrime(int candidate)


{
if (candidate == 1)
{
return false;
}
throw new NotImplementedException("Not fully implemented.");
}

Run dotnet test . The test passes.

Add more tests


Add prime number tests for 0 and -1. You could copy the test created in the preceding
step and make copies of the following code to test 0 and -1. But don't do it, as there's a
better way.

C#

var primeService = new PrimeService();


bool result = primeService.IsPrime(1);

Assert.False(result, "1 should not be prime");

Copying test code when only a parameter changes results in code duplication and test
bloat. The following xUnit attributes enable writing a suite of similar tests:

[Theory] represents a suite of tests that execute the same code but have different

input arguments.
[InlineData] attribute specifies values for those inputs.

Rather than creating new tests, apply the preceding xUnit attributes to create a single
theory. Replace the following code:

C#

[Fact]
public void IsPrime_InputIs1_ReturnFalse()
{
var primeService = new PrimeService();
bool result = primeService.IsPrime(1);

Assert.False(result, "1 should not be prime");


}

with the following code:

C#
[Theory]
[InlineData(-1)]
[InlineData(0)]
[InlineData(1)]
public void IsPrime_ValuesLessThan2_ReturnFalse(int value)
{
var result = _primeService.IsPrime(value);

Assert.False(result, $"{value} should not be prime");


}

In the preceding code, [Theory] and [InlineData] enable testing several values less
than two. Two is the smallest prime number.

Add the following code after the class declaration and before the [Theory] attribute:

C#

private readonly PrimeService _primeService;

public PrimeService_IsPrimeShould()
{
_primeService = new PrimeService();
}

Run dotnet test , and two of the tests fail. To make all of the tests pass, update the
IsPrime method with the following code:

C#

public bool IsPrime(int candidate)


{
if (candidate < 2)
{
return false;
}
throw new NotImplementedException("Not fully implemented.");
}

Following the TDD approach, add more failing tests, then update the target code. See
the finished version of the tests and the complete implementation of the library .

The completed IsPrime method is not an efficient algorithm for testing primality.
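As a follow-up exercise, a more efficient implementation checks only odd divisors up to the square root of the candidate. This sketch isn't part of the tutorial's finished sample:

```csharp
public bool IsPrime(int candidate)
{
    if (candidate < 2)
    {
        return false;
    }

    // 2 is the only even prime.
    if (candidate % 2 == 0)
    {
        return candidate == 2;
    }

    // Check odd divisors up to the square root of the candidate.
    for (int divisor = 3; divisor * divisor <= candidate; divisor += 2)
    {
        if (candidate % divisor == 0)
        {
            return false;
        }
    }

    return true;
}
```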

Additional resources
xUnit.net official site
Testing controller logic in ASP.NET Core
dotnet add reference

Unit testing F# libraries in .NET Core
using dotnet test and xUnit
Article • 09/15/2021

This tutorial takes you through an interactive experience building a sample solution
step-by-step to learn unit testing concepts. If you prefer to follow the tutorial using a
pre-built solution, view or download the sample code before you begin. For download
instructions, see Samples and Tutorials.

This article is about testing a .NET Core project. If you're testing an ASP.NET Core
project, see Integration tests in ASP.NET Core.

Creating the source project


Open a shell window. Create a directory called unit-testing-with-fsharp to hold the
solution. Inside this new directory, run dotnet new sln to create a new solution. This
makes it easier to manage both the class library and the unit test project. Inside the
solution directory, create a MathService directory. The directory and file structure thus
far is shown below:

/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService

Make MathService the current directory, and run dotnet new classlib -lang "F#" to
create the source project. You'll create a failing implementation of the math service:

F#

module MyMath =
let squaresOfOdds xs = raise (System.NotImplementedException("You
haven't written a test yet!"))

Change the directory back to the unit-testing-with-fsharp directory. Run dotnet sln add
.\MathService\MathService.fsproj to add the class library project to the solution.

Creating the test project


Next, create the MathService.Tests directory. The following outline shows the directory
structure:

/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService
Source Files
MathService.fsproj
/MathService.Tests

Make the MathService.Tests directory the current directory and create a new project
using dotnet new xunit -lang "F#" . This creates a test project that uses xUnit as the test
library. The generated template configures the test runner in the MathServiceTests.fsproj:

XML

<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.3.0-
preview-20170628-02" />
<PackageReference Include="xunit" Version="2.2.0" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.2.0" />
</ItemGroup>

The test project requires other packages to create and run unit tests. dotnet new in the
previous step added xUnit and the xUnit runner. Now, add the MathService class library
as another dependency to the project. Use the dotnet add reference command:

.NET CLI

dotnet add reference ../MathService/MathService.fsproj

You can see the entire file in the samples repository on GitHub.

You have the following final solution layout:

/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService
Source Files
MathService.fsproj
/MathService.Tests
Test Source Files
MathServiceTests.fsproj
Execute dotnet sln add .\MathService.Tests\MathService.Tests.fsproj in the unit-
testing-with-fsharp directory.

Creating the first test


You write one failing test, make it pass, then repeat the process. Open Tests.fs and add
the following code:

F#

[<Fact>]
let ``My test`` () =
    Assert.True(true)

[<Fact>]
let ``Fail every time`` () = Assert.True(false)

The [<Fact>] attribute denotes a test method that is run by the test runner. From the
unit-testing-with-fsharp, execute dotnet test to build the tests and the class library and
then run the tests. The xUnit test runner contains the program entry point to run your
tests. dotnet test starts the test runner using the unit test project you've created.

These two tests show the most basic passing and failing tests. My test passes, and Fail
every time fails. Now, create a test for the squaresOfOdds method. The squaresOfOdds
method returns a sequence of the squares of all odd integer values that are part of the
input sequence. Rather than trying to write all of those functions at once, you can
iteratively create tests that validate the functionality. Making each test pass means
creating the necessary functionality for the method.

The simplest test we can write is to call squaresOfOdds with all even numbers, where the
result should be an empty sequence of integers. Here's that test:

F#

[<Fact>]
let ``Sequence of Evens returns empty collection`` () =
    let expected = Seq.empty<int>
    let actual = MyMath.squaresOfOdds [2; 4; 6; 8; 10]
    Assert.Equal<Collections.Generic.IEnumerable<int>>(expected, actual)

Your test fails. You haven't created the implementation yet. Make this test pass by
writing the simplest code in the MathService class that works:
F#

let squaresOfOdds xs =
    Seq.empty<int>

In the unit-testing-with-fsharp directory, run dotnet test again. The dotnet test
command runs a build for the MathService project and then for the MathService.Tests
project. After building both projects, it runs this single test. It passes.

Completing the requirements


Now that you've made one test pass, it's time to write more. The next simple case works
with a sequence whose only odd number is 1 . The number 1 is easier because the
square of 1 is 1. Here's that next test:

F#

[<Fact>]
let ``Sequences of Ones and Evens returns Ones`` () =
    let expected = [1; 1; 1; 1]
    let actual = MyMath.squaresOfOdds [2; 1; 4; 1; 6; 1; 8; 1; 10]
    Assert.Equal<Collections.Generic.IEnumerable<int>>(expected, actual)

Executing dotnet test runs your tests and shows you that the new test fails. Now,
update the squaresOfOdds method to handle this new test. You filter all the even
numbers out of the sequence to make this test pass. You can do that by writing a small
filter function and using Seq.filter :

F#

let private isOdd x = x % 2 <> 0

let squaresOfOdds xs =
    xs
    |> Seq.filter isOdd

There's one more step to go: square each of the odd numbers. Start by writing a new
test:

F#

[<Fact>]
let ``SquaresOfOdds works`` () =
    let expected = [1; 9; 25; 49; 81]
    let actual = MyMath.squaresOfOdds [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]
    Assert.Equal(expected, actual)

You can fix the test by piping the filtered sequence through a map operation to
compute the square of each odd number:

F#

let private square x = x * x

let private isOdd x = x % 2 <> 0

let squaresOfOdds xs =
    xs
    |> Seq.filter isOdd
    |> Seq.map square

You've built a small library and a set of unit tests for that library. You've structured the
solution so that adding new packages and tests is part of the normal workflow. You've
concentrated most of your time and effort on solving the goals of the application.

See also
dotnet new
dotnet sln
dotnet add reference
dotnet test
Unit testing Visual Basic .NET Core
libraries using dotnet test and xUnit
Article • 09/29/2022

This tutorial shows how to build a solution containing a unit test project and library
project. To follow the tutorial using a pre-built solution, view or download the sample
code . For download instructions, see Samples and Tutorials.

Create the solution


In this section, a solution is created that contains the source and test projects. The
completed solution has the following directory structure:

/unit-testing-using-dotnet-test
unit-testing-using-dotnet-test.sln
/PrimeService
PrimeService.vb
PrimeService.vbproj
/PrimeService.Tests
PrimeService_IsPrimeShould.vb
PrimeService.Tests.vbproj

The following instructions provide the steps to create the test solution. See Commands
to create test solution for instructions to create the test solution in one step.

Open a shell window.

Run the following command:

.NET CLI

dotnet new sln -o unit-testing-using-dotnet-test

The dotnet new sln command creates a new solution in the unit-testing-using-
dotnet-test directory.

Change directory to the unit-testing-using-dotnet-test folder.

Run the following command:

.NET CLI
dotnet new classlib -o PrimeService --lang VB

The dotnet new classlib command creates a new class library project in the
PrimeService folder. The new class library will contain the code to be tested.

Rename Class1.vb to PrimeService.vb.

Replace the code in PrimeService.vb with the following code:

VB

Imports System

Namespace Prime.Services
    Public Class PrimeService
        Public Function IsPrime(candidate As Integer) As Boolean
            Throw New NotImplementedException("Not implemented.")
        End Function
    End Class
End Namespace

The preceding code:


Throws a NotImplementedException with a message indicating it's not
implemented.
Is updated later in the tutorial.

In the unit-testing-using-dotnet-test directory, run the following command to add


the class library project to the solution:

.NET CLI

dotnet sln add ./PrimeService/PrimeService.vbproj

Create the PrimeService.Tests project by running the following command:

.NET CLI

dotnet new xunit -o PrimeService.Tests --lang VB

The preceding command:


Creates the PrimeService.Tests project in the PrimeService.Tests directory. The
test project uses xUnit as the test library.
Configures the test runner by adding the following <PackageReference />
elements to the project file:
"Microsoft.NET.Test.Sdk"
"xunit"
"xunit.runner.visualstudio"

Add the test project to the solution file by running the following command:

.NET CLI

dotnet sln add ./PrimeService.Tests/PrimeService.Tests.vbproj

Add the PrimeService class library as a dependency to the PrimeService.Tests
project:

.NET CLI

dotnet add ./PrimeService.Tests/PrimeService.Tests.vbproj reference ./PrimeService/PrimeService.vbproj

Commands to create the solution


This section summarizes all the commands in the previous section. Skip this section if
you've completed the steps in the previous section.

The following commands create the test solution on a Windows machine. For macOS
and Linux, replace the ren command with mv to rename the file:

.NET CLI

dotnet new sln -o unit-testing-using-dotnet-test
cd unit-testing-using-dotnet-test
dotnet new classlib -o PrimeService --lang VB
ren .\PrimeService\Class1.vb PrimeService.vb
dotnet sln add ./PrimeService/PrimeService.vbproj
dotnet new xunit -o PrimeService.Tests --lang VB
dotnet add ./PrimeService.Tests/PrimeService.Tests.vbproj reference ./PrimeService/PrimeService.vbproj
dotnet sln add ./PrimeService.Tests/PrimeService.Tests.vbproj

Follow the instructions for "Replace the code in PrimeService.vb with the following code"
in the previous section.

Create a test
A popular approach in test driven development (TDD) is to write a test before
implementing the target code. This tutorial uses the TDD approach. The IsPrime
method is callable, but not implemented. A test call to IsPrime fails. With TDD, a test is
written that is known to fail. The target code is updated to make the test pass. You keep
repeating this approach, writing a failing test and then updating the target code to pass.

Update the PrimeService.Tests project:

Delete PrimeService.Tests/UnitTest1.vb.
Create a PrimeService.Tests/PrimeService_IsPrimeShould.vb file.
Replace the code in PrimeService_IsPrimeShould.vb with the following code:

VB

Imports Xunit

Namespace PrimeService.Tests
    Public Class PrimeService_IsPrimeShould
        Private ReadOnly _primeService As Prime.Services.PrimeService

        Public Sub New()
            _primeService = New Prime.Services.PrimeService()
        End Sub

        <Fact>
        Sub IsPrime_InputIs1_ReturnFalse()
            Dim result As Boolean = _primeService.IsPrime(1)

            Assert.False(result, "1 should not be prime")
        End Sub
    End Class
End Namespace

The <Fact> attribute declares a test method that's run by the test runner. From the
PrimeService.Tests folder, run dotnet test . The dotnet test command builds both
projects and runs the tests. The xUnit test runner contains the program entry point to
run the tests. dotnet test starts the test runner using the unit test project.

The test fails because IsPrime hasn't been implemented. Using the TDD approach, write
only enough code so this test passes. Update IsPrime with the following code:

VB

Public Function IsPrime(candidate As Integer) As Boolean
    If candidate = 1 Then
        Return False
    End If
    Throw New NotImplementedException("Not implemented.")
End Function

Run dotnet test . The test passes.

Add more tests


Add prime number tests for 0 and -1. You could copy the preceding test and change the
following code to use 0 and -1:

VB

Dim result As Boolean = _primeService.IsPrime(1)

Assert.False(result, "1 should not be prime")

Copying test code when only a parameter changes results in code duplication and test
bloat. The following xUnit attributes enable writing a suite of similar tests:

<Theory> represents a suite of tests that execute the same code but have different
input arguments.
The <InlineData> attribute specifies values for those inputs.

Rather than creating new tests, apply the preceding xUnit attributes to create a single
theory. Replace the following code:

VB

<Fact>
Sub IsPrime_InputIs1_ReturnFalse()
    Dim result As Boolean = _primeService.IsPrime(1)

    Assert.False(result, "1 should not be prime")
End Sub

with the following code:

VB

<Theory>
<InlineData(-1)>
<InlineData(0)>
<InlineData(1)>
Sub IsPrime_ValuesLessThan2_ReturnFalse(ByVal value As Integer)
    Dim result As Boolean = _primeService.IsPrime(value)
    Assert.False(result, $"{value} should not be prime")
End Sub

In the preceding code, <Theory> and <InlineData> enable testing several values less
than two. Two is the smallest prime number.

Run dotnet test , two of the tests fail. To make all of the tests pass, update the IsPrime
method with the following code:

VB

Public Function IsPrime(candidate As Integer) As Boolean
    If candidate < 2 Then
        Return False
    End If
    Throw New NotImplementedException("Not fully implemented.")
End Function

Following the TDD approach, add more failing tests, then update the target code. See
the finished version of the tests and the complete implementation of the library .

The completed IsPrime method is not an efficient algorithm for testing primality.

Additional resources
xUnit.net official site
Testing controller logic in ASP.NET Core
dotnet add reference
Organizing and testing projects with the
.NET CLI
Article • 04/19/2022

This tutorial follows Tutorial: Create a console application with .NET using Visual Studio
Code, taking you beyond the creation of a simple console app to develop advanced and
well-organized applications. After showing you how to use folders to organize your
code, the tutorial shows you how to extend a console application with the xUnit
testing framework.

7 Note

This tutorial recommends that you place the application project and test project in
separate folders. Some developers prefer to keep these projects in the same folder.
For more information, see GitHub issue dotnet/docs #26395 .

Using folders to organize code


If you want to introduce new types into a console app, you can do so by adding files
containing the types to the app. For example, if you add files containing
AccountInformation and MonthlyReportRecords types to your project, the project file
structure is flat and easy to navigate:

/MyProject
|__AccountInformation.cs
|__MonthlyReportRecords.cs
|__MyProject.csproj
|__Program.cs

However, this flat structure only works well when the size of your project is relatively
small. Can you imagine what will happen if you add 20 types to the project? The project
definitely wouldn't be easy to navigate and maintain with that many files littering the
project's root directory.

To organize the project, create a new folder and name it Models to hold the type files.
Place the type files into the Models folder:
/MyProject
|__/Models
|__AccountInformation.cs
|__MonthlyReportRecords.cs
|__MyProject.csproj
|__Program.cs

Projects that logically group files into folders are easy to navigate and maintain. In the
next section, you create a more complex sample with folders and unit testing.

Organizing and testing using the NewTypes


Pets Sample

Prerequisites
.NET 5.0 SDK or a later version.

Building the sample


For the following steps, you can either follow along using the NewTypes Pets Sample
or create your own files and folders. The types are logically organized into a folder
structure that permits the addition of more types later, and tests are also logically
placed in folders permitting the addition of more tests later.

The sample contains two types, Dog and Cat , and has them implement a common
interface, IPet . For the NewTypes project, your goal is to organize the pet-related types
into a Pets folder. If another set of types is added later, WildAnimals for example, they're
placed in the NewTypes folder alongside the Pets folder. The WildAnimals folder may
contain types for animals that aren't pets, such as Squirrel and Rabbit types. In this
way as types are added, the project remains well organized.

Create the following folder structure with file content indicated:

/NewTypes
|__/src
|__/NewTypes
|__/Pets
|__Dog.cs
|__Cat.cs
|__IPet.cs
|__Program.cs
|__NewTypes.csproj

IPet.cs:

C#

using System;

namespace Pets
{
public interface IPet
{
string TalkToOwner();
}
}

Dog.cs:

C#

using System;

namespace Pets
{
public class Dog : IPet
{
public string TalkToOwner() => "Woof!";
}
}

Cat.cs:

C#

using System;

namespace Pets
{
public class Cat : IPet
{
public string TalkToOwner() => "Meow!";
}
}

Program.cs:

C#
using System;
using Pets;
using System.Collections.Generic;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            List<IPet> pets = new List<IPet>
            {
                new Dog(),
                new Cat()
            };

            foreach (var pet in pets)
            {
                Console.WriteLine(pet.TalkToOwner());
            }
        }
    }
}

NewTypes.csproj:

XML

<Project Sdk="Microsoft.NET.Sdk">

<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net6.0</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>

</Project>

Execute the following command:

.NET CLI

dotnet run

Obtain the following output:

Console
Woof!
Meow!

Optional exercise: You can add a new pet type, such as a Bird , by extending this project.
Make the bird's TalkToOwner method give a Tweet! to the owner. Run the app again.
The output will include Tweet!
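
One possible Bird implementation for this exercise is sketched below. The IPet interface is repeated from earlier in the tutorial only so that the snippet stands alone; in the project, Bird.cs would contain just the Bird class.

C#

```csharp
using System;

namespace Pets
{
    // Repeated from IPet.cs so this snippet compiles on its own;
    // omit it in the real project.
    public interface IPet
    {
        string TalkToOwner();
    }

    // A possible Bird type for the optional exercise.
    public class Bird : IPet
    {
        public string TalkToOwner() => "Tweet!";
    }
}
```

After adding new Bird() to the pets list in Program.cs, running dotnet run prints Tweet! along with the other sounds.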

Testing the sample


The NewTypes project is in place, and you've organized it by keeping the pets-related
types in a folder. Next, create your test project and start writing tests with the xUnit
test framework. Unit testing allows you to automatically check the behavior of your pet
types to confirm that they're operating properly.

Navigate back to the NewTypes folder and create a test folder with a NewTypesTests folder
within it. At a command prompt from the NewTypesTests folder, execute dotnet new
xunit . This command produces two files: NewTypesTests.csproj and UnitTest1.cs.

The test project can't currently test the types in NewTypes and requires a project
reference to the NewTypes project. To add a project reference, use the dotnet add
reference command:

.NET CLI

dotnet add reference ../../src/NewTypes/NewTypes.csproj

Or, you also have the option of manually adding the project reference by adding an
<ItemGroup> node to the NewTypesTests.csproj file:

XML

<ItemGroup>
<ProjectReference Include="../../src/NewTypes/NewTypes.csproj" />
</ItemGroup>

NewTypesTests.csproj:

XML

<Project Sdk="Microsoft.NET.Sdk">

<PropertyGroup>
<TargetFramework>net6.0</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>

<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.6.2" />
<PackageReference Include="xunit" Version="2.4.2" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.4.5" />
</ItemGroup>

<ItemGroup>
<ProjectReference Include="../../src/NewTypes/NewTypes.csproj"/>
</ItemGroup>

</Project>

The NewTypesTests.csproj file contains the following package references:

Microsoft.NET.Test.Sdk , the .NET testing infrastructure
xunit , the xUnit testing framework
xunit.runner.visualstudio , the test runner
NewTypes , the code to test

Change the name of UnitTest1.cs to PetTests.cs and replace the code in the file with the
following code:

C#

using System;
using Xunit;
using Pets;

public class PetTests
{
    [Fact]
    public void DogTalkToOwnerReturnsWoof()
    {
        string expected = "Woof!";
        string actual = new Dog().TalkToOwner();

        Assert.NotEqual(expected, actual);
    }

    [Fact]
    public void CatTalkToOwnerReturnsMeow()
    {
        string expected = "Meow!";
        string actual = new Cat().TalkToOwner();

        Assert.NotEqual(expected, actual);
    }
}
Optional exercise: If you added a Bird type earlier that yields a Tweet! to the owner,
add a test method to the PetTests.cs file, BirdTalkToOwnerReturnsTweet , to check that the
TalkToOwner method works correctly for the Bird type.

7 Note

Although you expect that the expected and actual values are equal, an initial
assertion with the Assert.NotEqual check specifies that these values are not equal.
Always initially create a test to fail in order to check the logic of the test. After you
confirm that the test fails, adjust the assertion to allow the test to pass.

The following shows the complete project structure:

/NewTypes
|__/src
|__/NewTypes
|__/Pets
|__Dog.cs
|__Cat.cs
|__IPet.cs
|__Program.cs
|__NewTypes.csproj
|__/test
|__NewTypesTests
|__PetTests.cs
|__NewTypesTests.csproj

Start in the test/NewTypesTests directory. Run the tests with the dotnet test command.
This command starts the test runner specified in the project file.

As expected, testing fails, and the console displays the following output:

Output

Test run for C:\Source\dotnet\docs\samples\snippets\core\tutorials\testing-with-cli\csharp\test\NewTypesTests\bin\Debug\net5.0\NewTypesTests.dll (.NETCoreApp,Version=v5.0)
Microsoft (R) Test Execution Command Line Tool Version 16.8.1
Copyright (c) Microsoft Corporation. All rights reserved.

Starting test execution, please wait...

A total of 1 test files matched the specified pattern.
[xUnit.net 00:00:00.50] PetTests.DogTalkToOwnerReturnsWoof [FAIL]
  Failed PetTests.DogTalkToOwnerReturnsWoof [6 ms]
  Error Message:
   Assert.NotEqual() Failure
Expected: Not "Woof!"
Actual:   "Woof!"
  Stack Trace:
     at PetTests.DogTalkToOwnerReturnsWoof() in C:\Source\dotnet\docs\samples\snippets\core\tutorials\testing-with-cli\csharp\test\NewTypesTests\PetTests.cs:line 13

Failed! - Failed: 1, Passed: 1, Skipped: 0, Total: 2, Duration: 8 ms - NewTypesTests.dll (net5.0)

Change the assertions of your tests from Assert.NotEqual to Assert.Equal :

C#

using System;
using Xunit;
using Pets;

public class PetTests
{
    [Fact]
    public void DogTalkToOwnerReturnsWoof()
    {
        string expected = "Woof!";
        string actual = new Dog().TalkToOwner();

        Assert.Equal(expected, actual);
    }

    [Fact]
    public void CatTalkToOwnerReturnsMeow()
    {
        string expected = "Meow!";
        string actual = new Cat().TalkToOwner();

        Assert.Equal(expected, actual);
    }
}

Rerun the tests with the dotnet test command and obtain the following output:

Output

Test run for C:\Source\dotnet\docs\samples\snippets\core\tutorials\testing-with-cli\csharp\test\NewTypesTests\bin\Debug\net5.0\NewTypesTests.dll (.NETCoreApp,Version=v5.0)
Microsoft (R) Test Execution Command Line Tool Version 16.8.1
Copyright (c) Microsoft Corporation. All rights reserved.

Starting test execution, please wait...
A total of 1 test files matched the specified pattern.

Passed! - Failed: 0, Passed: 2, Skipped: 0, Total: 2, Duration: 2 ms - NewTypesTests.dll (net5.0)

Testing passes. The pet types' methods return the correct values when talking to the
owner.

You've learned techniques for organizing and testing projects using xUnit. Go forward
with these techniques applying them in your own projects. Happy coding!
Unit testing C# with NUnit and .NET
Core
Article • 04/05/2024

This tutorial takes you through an interactive experience building a sample solution
step-by-step to learn unit testing concepts. If you prefer to follow the tutorial using a
pre-built solution, view or download the sample code before you begin. For download
instructions, see Samples and Tutorials.

This article is about testing a .NET Core project. If you're testing an ASP.NET Core
project, see Integration tests in ASP.NET Core.

Prerequisites
.NET 8.0 or later versions.
A text editor or code editor of your choice.

Creating the source project


Open a shell window. Create a directory called unit-testing-using-nunit to hold the
solution. Inside this new directory, run the following command to create a new solution
file for the class library and the test project:

.NET CLI

dotnet new sln

Next, create a PrimeService directory. The following outline shows the directory and file
structure so far:

Console

/unit-testing-using-nunit
unit-testing-using-nunit.sln
/PrimeService

Make PrimeService the current directory and run the following command to create the
source project:

.NET CLI
dotnet new classlib

Rename Class1.cs to PrimeService.cs. You create a failing implementation of the
PrimeService class:

C#

using System;

namespace Prime.Services
{
    public class PrimeService
    {
        public bool IsPrime(int candidate)
        {
            throw new NotImplementedException("Please create a test first.");
        }
    }
}

Change the directory back to the unit-testing-using-nunit directory. Run the following
command to add the class library project to the solution:

.NET CLI

dotnet sln add PrimeService/PrimeService.csproj

Creating the test project


Next, create the PrimeService.Tests directory. The following outline shows the directory
structure:

Console

/unit-testing-using-nunit
unit-testing-using-nunit.sln
/PrimeService
Source Files
PrimeService.csproj
/PrimeService.Tests

Make the PrimeService.Tests directory the current directory and create a new project
using the following command:
.NET CLI

dotnet new nunit

The dotnet new command creates a test project that uses NUnit as the test library. The
generated template configures the test runner in the PrimeService.Tests.csproj file:

XML

<ItemGroup>
<PackageReference Include="nunit" Version="4.1.0" />
<PackageReference Include="NUnit3TestAdapter" Version="4.5.0" />
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.9.0" />
<PackageReference Include="NUnit.Analyzers" Version="4.1.0">
<PrivateAssets>all</PrivateAssets>
<IncludeAssets>runtime; build; native; contentfiles;
analyzers</IncludeAssets>
</PackageReference>
</ItemGroup>

7 Note

Prior to .NET 9, the generated code may reference older versions of the NUnit test
framework. You may use dotnet CLI to update the packages. Alternatively, open the
PrimeService.Tests.csproj file and replace the contents of the package references
item group with the code above.

The test project requires other packages to create and run unit tests. The dotnet new
command in the previous step added the Microsoft test SDK, the NUnit test framework,
and the NUnit test adapter. Now, add the PrimeService class library as another
dependency to the project. Use the dotnet add reference command:

.NET CLI

dotnet add reference ../PrimeService/PrimeService.csproj

You can see the entire file in the samples repository on GitHub.

The following outline shows the final solution layout:

Console

/unit-testing-using-nunit
unit-testing-using-nunit.sln
/PrimeService
Source Files
PrimeService.csproj
/PrimeService.Tests
Test Source Files
PrimeService.Tests.csproj

Execute the following command in the unit-testing-using-nunit directory:

.NET CLI

dotnet sln add ./PrimeService.Tests/PrimeService.Tests.csproj

Creating the first test


You write one failing test, make it pass, and then repeat the process. In the
PrimeService.Tests directory, rename the UnitTest1.cs file to
PrimeService_IsPrimeShould.cs and replace its entire contents with the following code:

C#

using NUnit.Framework;
using Prime.Services;

namespace Prime.UnitTests.Services
{
    [TestFixture]
    public class PrimeService_IsPrimeShould
    {
        private PrimeService _primeService;

        [SetUp]
        public void SetUp()
        {
            _primeService = new PrimeService();
        }

        [Test]
        public void IsPrime_InputIs1_ReturnFalse()
        {
            var result = _primeService.IsPrime(1);

            Assert.That(result, Is.False, "1 should not be prime");
        }
    }
}

The [TestFixture] attribute denotes a class that contains unit tests. The [Test]
attribute indicates a method is a test method.
Save this file and execute the dotnet test command to build the tests and the class
library and run the tests. The NUnit test runner contains the program entry point to run
your tests. dotnet test starts the test runner using the unit test project you've created.

Your test fails. You haven't created the implementation yet. Make the test pass by
writing the simplest code in the PrimeService class that works:

C#

public bool IsPrime(int candidate)
{
    if (candidate == 1)
    {
        return false;
    }
    throw new NotImplementedException("Please create a test first.");
}

In the unit-testing-using-nunit directory, run dotnet test again. The dotnet test
command runs a build for the PrimeService project and then for the
PrimeService.Tests project. After you build both projects, it runs this single test. It
passes.

Adding more features


Now that you've made one test pass, it's time to write more. There are a few other
simple cases for prime numbers: 0, -1. You could add new tests with the [Test]
attribute, but that quickly becomes tedious. There are other NUnit attributes that enable
you to write a suite of similar tests. A [TestCase] attribute is used to create a suite of
tests that execute the same code but have different input arguments. You can use the
[TestCase] attribute to specify values for those inputs.

Instead of creating new tests, apply this attribute to create a single data-driven test. The
data driven test is a method that tests several values less than two, which is the lowest
prime number:

C#

[TestCase(-1)]
[TestCase(0)]
[TestCase(1)]
public void IsPrime_ValuesLessThan2_ReturnFalse(int value)
{
    var result = _primeService?.IsPrime(value);
    Assert.That(result, Is.False, $"{value} should not be prime");
}

Run dotnet test , and two of these tests fail. To make all of the tests pass, change the
if clause at the beginning of the IsPrime method in the PrimeService.cs file:

C#

if (candidate < 2)

Continue to iterate by adding more tests, theories, and code in the main library. You
have the finished version of the tests and the complete implementation of the
library .
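
As one example of where that iteration can end up, a simple trial-division version of IsPrime might look like the following. This is a sketch, not the sample repository's exact code:

C#

```csharp
using System;

namespace Prime.Services
{
    public class PrimeService
    {
        public bool IsPrime(int candidate)
        {
            if (candidate < 2)
            {
                return false;
            }
            // Trial division: a composite number must have a divisor
            // no greater than its square root.
            for (int divisor = 2; divisor * divisor <= candidate; divisor++)
            {
                if (candidate % divisor == 0)
                {
                    return false;
                }
            }
            return true;
        }
    }
}
```

Trial division is simple rather than fast, which is fine for a tutorial-sized library.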

You've built a small library and a set of unit tests for that library. You've also structured
the solution so that adding new packages and tests is part of the standard workflow.
You've concentrated most of your time and effort on solving the goals of the
application.

Unit testing F# libraries in .NET Core
using dotnet test and NUnit
Article • 12/11/2021

This tutorial takes you through an interactive experience building a sample solution
step-by-step to learn unit testing concepts. If you prefer to follow the tutorial using a
pre-built solution, view or download the sample code before you begin. For download
instructions, see Samples and Tutorials.

This article is about testing a .NET Core project. If you're testing an ASP.NET Core
project, see Integration tests in ASP.NET Core.

Prerequisites
.NET Core 2.1 SDK or later versions.
A text editor or code editor of your choice.

Creating the source project


Open a shell window. Create a directory called unit-testing-with-fsharp to hold the
solution. Inside this new directory, run the following command to create a new solution
file for the class library and the test project:

.NET CLI

dotnet new sln

Next, create a MathService directory. The following outline shows the directory and file
structure so far:

/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService

Make MathService the current directory and run the following command to create the
source project:

.NET CLI
dotnet new classlib -lang "F#"

You create a failing implementation of the math service:

F#

module MyMath =
    let squaresOfOdds xs = raise (System.NotImplementedException("You haven't written a test yet!"))

Change the directory back to the unit-testing-with-fsharp directory. Run the following
command to add the class library project to the solution:

.NET CLI

dotnet sln add .\MathService\MathService.fsproj

Creating the test project


Next, create the MathService.Tests directory. The following outline shows the directory
structure:

/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService
Source Files
MathService.fsproj
/MathService.Tests

Make the MathService.Tests directory the current directory and create a new project
using the following command:

.NET CLI

dotnet new nunit -lang "F#"

This creates a test project that uses NUnit as the test framework. The generated
template configures the test runner in MathService.Tests.fsproj:

XML
<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.5.0" />
<PackageReference Include="NUnit" Version="3.9.0" />
<PackageReference Include="NUnit3TestAdapter" Version="3.9.0" />
</ItemGroup>

The test project requires other packages to create and run unit tests. dotnet new in the
previous step added NUnit and the NUnit test adapter. Now, add the MathService class
library as another dependency to the project. Use the dotnet add reference command:

.NET CLI

dotnet add reference ../MathService/MathService.fsproj

You can see the entire file in the samples repository on GitHub.

You have the following final solution layout:

/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService
Source Files
MathService.fsproj
/MathService.Tests
Test Source Files
MathService.Tests.fsproj

Execute the following command in the unit-testing-with-fsharp directory:

.NET CLI

dotnet sln add .\MathService.Tests\MathService.Tests.fsproj

Creating the first test


You write one failing test, make it pass, then repeat the process. Open UnitTest1.fs and
add the following code:

F#

namespace MathService.Tests

open System
open NUnit.Framework
open MathService

[<TestFixture>]
type TestClass () =

    [<Test>]
    member this.TestMethodPassing() =
        Assert.True(true)

    [<Test>]
    member this.FailEveryTime() = Assert.True(false)

The [<TestFixture>] attribute denotes a class that contains tests. The [<Test>]
attribute denotes a test method that is run by the test runner. From the unit-testing-
with-fsharp directory, execute dotnet test to build the tests and the class library and
then run the tests. The NUnit test runner contains the program entry point to run your
tests. dotnet test starts the test runner using the unit test project you've created.

These two tests show the most basic passing and failing tests. My test passes, and Fail
every time fails. Now, create a test for the squaresOfOdds method. The squaresOfOdds
method returns a sequence of the squares of all odd integer values that are part of the
input sequence. Rather than trying to write all of those functions at once, you can
iteratively create tests that validate the functionality. Making each test pass means
creating the necessary functionality for the method.

The simplest test we can write is to call squaresOfOdds with all even numbers, where the
result should be an empty sequence of integers. Here's that test:

F#

[<Test>]
member this.TestEvenSequence() =
    let expected = Seq.empty<int> |> Seq.toList
    let actual = MyMath.squaresOfOdds [2; 4; 6; 8; 10]
    Assert.That(actual, Is.EqualTo(expected))

Notice that the expected sequence has been converted to a list. The NUnit framework
relies on many standard .NET types. That dependency means that your public interface
and expected results support ICollection rather than IEnumerable.

When you run the test, you see that your test fails. You haven't created the
implementation yet. Make this test pass by writing the simplest code in the Library.fs
class in your MathService project that works:
F#

let squaresOfOdds xs =
    Seq.empty<int>

In the unit-testing-with-fsharp directory, run dotnet test again. The dotnet test
command runs a build for the MathService project and then for the MathService.Tests
project. After building both projects, it runs your tests. Two tests pass now.

Completing the requirements


Now that you've made one test pass, it's time to write more. The next simple case works
with a sequence whose only odd number is 1. The number 1 is easier because the
square of 1 is 1. Here's that next test:

F#

[<Test>]
member public this.TestOnesAndEvens() =
    let expected = [1; 1; 1; 1]
    let actual = MyMath.squaresOfOdds [2; 1; 4; 1; 6; 1; 8; 1; 10]
    Assert.That(actual, Is.EqualTo(expected))

Executing dotnet test fails the new test. You must update the squaresOfOdds method to
handle this new test. You must filter all the even numbers out of the sequence to make
this test pass. You can do that by writing a small filter function and using Seq.filter :

F#

let private isOdd x = x % 2 <> 0

let squaresOfOdds xs =
    xs
    |> Seq.filter isOdd

There's one more step to go: square each of the odd numbers. Start by writing a new
test:

F#

[<Test>]
member public this.TestSquaresOfOdds() =
    let expected = [1; 9; 25; 49; 81]
    let actual = MyMath.squaresOfOdds [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]
    Assert.That(actual, Is.EqualTo(expected))

You can fix the test by piping the filtered sequence through a map operation to
compute the square of each odd number:

F#

let private square x = x * x

let private isOdd x = x % 2 <> 0

let squaresOfOdds xs =
    xs
    |> Seq.filter isOdd
    |> Seq.map square

You've built a small library and a set of unit tests for that library. You've structured the
solution so that adding new packages and tests is part of the normal workflow. You've
concentrated most of your time and effort on solving the goals of the application.

See also
dotnet add reference
dotnet test
Unit testing Visual Basic .NET Core
libraries using dotnet test and NUnit
Article • 03/27/2024

This tutorial takes you through an interactive experience building a sample solution
step-by-step to learn unit testing concepts. If you prefer to follow the tutorial using a
pre-built solution, view or download the sample code before you begin. For download
instructions, see Samples and Tutorials.

This article is about testing a .NET Core project. If you're testing an ASP.NET Core
project, see Integration tests in ASP.NET Core.

Prerequisites
.NET 8 SDK or later versions.
A text editor or code editor of your choice.

Creating the source project


Open a shell window. Create a directory called unit-testing-vb-nunit to hold the solution.
Inside this new directory, run the following command to create a new solution file for
the class library and the test project:

.NET CLI

dotnet new sln

Next, create a PrimeService directory. The following outline shows the file structure so
far:

Console

/unit-testing-vb-nunit
unit-testing-vb-nunit.sln
/PrimeService

Make PrimeService the current directory and run the following command to create the
source project:

.NET CLI
dotnet new classlib -lang VB

Rename Class1.VB to PrimeService.VB. You create a failing implementation of the
PrimeService class:

VB

Namespace Prime.Services
    Public Class PrimeService
        Public Function IsPrime(candidate As Integer) As Boolean
            Throw New NotImplementedException("Please create a test first.")
        End Function
    End Class
End Namespace

Change the directory back to the unit-testing-vb-nunit directory. Run the
following command to add the class library project to the solution:

.NET CLI

dotnet sln add .\PrimeService\PrimeService.vbproj

Creating the test project


Next, create the PrimeService.Tests directory. The following outline shows the directory
structure:

Console

/unit-testing-vb-nunit
unit-testing-vb-nunit.sln
/PrimeService
Source Files
PrimeService.vbproj
/PrimeService.Tests

Make the PrimeService.Tests directory the current directory and create a new project
using the following command:

.NET CLI

dotnet new nunit -lang VB


The dotnet new command creates a test project that uses NUnit as the test library. The
generated template configures the test runner in the PrimeService.Tests.vbproj file:

XML

<ItemGroup>
<PackageReference Include="nunit" Version="4.1.0" />
<PackageReference Include="NUnit3TestAdapter" Version="4.5.0" />
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.9.0" />
</ItemGroup>

7 Note

Prior to .NET 9, the generated code may reference older versions of the NUnit test
framework. You may use dotnet CLI to update the packages. Alternatively, open the
PrimeService.Tests.vbproj file and replace the contents of the package references
item group with the code above.

The test project requires other packages to create and run unit tests. dotnet new in the
previous step added NUnit and the NUnit test adapter. Now, add the PrimeService class
library as another dependency to the project. Use the dotnet add reference command:

.NET CLI

dotnet add reference ../PrimeService/PrimeService.vbproj

You can see the entire file in the samples repository on GitHub.

You have the following final solution layout:

Console

/unit-testing-vb-nunit
unit-testing-vb-nunit.sln
/PrimeService
Source Files
PrimeService.vbproj
/PrimeService.Tests
Test Source Files
PrimeService.Tests.vbproj

Execute the following command in the unit-testing-vb-nunit directory:

.NET CLI
dotnet sln add .\PrimeService.Tests\PrimeService.Tests.vbproj

Creating the first test


You write one failing test, make it pass, then repeat the process. In the PrimeService.Tests
directory, rename the UnitTest1.vb file to PrimeService_IsPrimeShould.VB and replace its
entire contents with the following code:

VB

Imports NUnit.Framework

Namespace PrimeService.Tests
    <TestFixture>
    Public Class PrimeService_IsPrimeShould
        Private _primeService As Prime.Services.PrimeService = New Prime.Services.PrimeService()

        <Test>
        Sub IsPrime_InputIs1_ReturnFalse()
            Dim result As Boolean = _primeService.IsPrime(1)

            Assert.That(result, [Is].False, $"1 should not be prime")
        End Sub
    End Class
End Namespace

The <TestFixture> attribute indicates a class that contains tests. The <Test> attribute
denotes a method that is run by the test runner. From the unit-testing-vb-nunit directory, execute
dotnet test to build the tests and the class library and then run the tests. The NUnit test
runner contains the program entry point to run your tests. dotnet test starts the test
runner using the unit test project you've created.

Your test fails. You haven't created the implementation yet. Make this test pass by
writing the simplest code in the PrimeService class that works:

VB

Public Function IsPrime(candidate As Integer) As Boolean
    If candidate = 1 Then
        Return False
    End If
    Throw New NotImplementedException("Please create a test first.")
End Function

In the unit-testing-vb-nunit directory, run dotnet test again. The dotnet test
command runs a build for the PrimeService project and then for the
PrimeService.Tests project. After building both projects, it runs this single test. It
passes.

Adding more features


Now that you've made one test pass, it's time to write more. There are a few other
simple cases for prime numbers: 0, -1. You could add those cases as new tests with the
<Test> attribute, but that quickly becomes tedious. There are other NUnit attributes that
enable you to write a suite of similar tests. A <TestCase> attribute represents a suite of
tests that execute the same code but have different input arguments. You can use the
<TestCase> attribute to specify values for those inputs.

Instead of creating new tests, apply the <TestCase> attribute to create a series of tests that
test several values less than two, which is the lowest prime number:

VB

<TestFixture>
Public Class PrimeService_IsPrimeShould
    Private _primeService As Prime.Services.PrimeService = New Prime.Services.PrimeService()

    <TestCase(-1)>
    <TestCase(0)>
    <TestCase(1)>
    Sub IsPrime_ValuesLessThan2_ReturnFalse(value As Integer)
        Dim result As Boolean = _primeService.IsPrime(value)

        Assert.That(result, [Is].False, $"{value} should not be prime")
    End Sub

    <TestCase(2)>
    <TestCase(3)>
    <TestCase(5)>
    <TestCase(7)>
    Public Sub IsPrime_PrimesLessThan10_ReturnTrue(value As Integer)
        Dim result As Boolean = _primeService.IsPrime(value)

        Assert.That(result, [Is].True, $"{value} should be prime")
    End Sub

    <TestCase(4)>
    <TestCase(6)>
    <TestCase(8)>
    <TestCase(9)>
    Public Sub IsPrime_NonPrimesLessThan10_ReturnFalse(value As Integer)
        Dim result As Boolean = _primeService.IsPrime(value)

        Assert.That(result, [Is].False, $"{value} should not be prime")
    End Sub
End Class

Run dotnet test , and two of these tests fail. To make all of the tests pass, change the
If clause at the beginning of the IsPrime function in the PrimeService.VB file:

VB

If candidate < 2 Then

Continue to iterate by adding more tests, more theories, and more code in the main
library. You have the finished version of the tests and the complete implementation of
the library.

You've built a small library and a set of unit tests for that library. You've structured the
solution so that adding new packages and tests is part of the normal workflow. You've
concentrated most of your time and effort on solving the goals of the application.

Collaborate with us on GitHub
The source for this content can be found on GitHub, where you can also create and review issues and pull requests. For more information, see our contributor guide.

.NET feedback
.NET is an open source project. Select a link to provide feedback:
Open a documentation issue
Provide product feedback
NUnit runner overview
Article • 05/23/2024

The NUnit runner is a lightweight and portable alternative to VSTest for running tests
in all contexts (for example, continuous integration (CI) pipelines, CLI, Visual Studio Test
Explorer, and VS Code Test Explorer). The NUnit runner is embedded directly in your
NUnit test projects, and there are no other app dependencies, such as vstest.console
or dotnet test , needed to run your tests.

The NUnit runner is open source, and builds on the Microsoft.Testing.Platform library. You
can find the Microsoft.Testing.Platform code in the microsoft/testfx GitHub repository. The
NUnit runner comes bundled with NUnit3TestAdapter 5.0.0-beta.2 or newer.

Enable NUnit runner in a NUnit project


You can enable the NUnit runner by adding the EnableNUnitRunner property and setting
OutputType to Exe in your project file. You also need to ensure that you're using
NUnit3TestAdapter 5.0.0-beta.2 or newer.

Consider the following example project file:

XML

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <!-- Enable the NUnit runner, this is an opt-in feature -->
    <EnableNUnitRunner>true</EnableNUnitRunner>
    <OutputType>Exe</OutputType>

    <TargetFramework>net8.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>

    <IsPackable>false</IsPackable>
    <IsTestProject>true</IsTestProject>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.9.0" />
    <PackageReference Include="NUnit" Version="4.1.0" />
    <PackageReference Include="NUnit.Analyzers" Version="4.2.0">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
    <PackageReference Include="NUnit3TestAdapter" Version="5.0.0-beta.2" />

    <!--
      Coverlet collector isn't compatible with the NUnit runner. You can
      either switch to Microsoft CodeCoverage (as shown below),
      or switch to using the coverlet global tool:
      https://github.com/coverlet-coverage/coverlet#net-global-tool-guide-suffers-from-possible-known-issue
    -->
    <PackageReference Include="Microsoft.Testing.Extensions.CodeCoverage" Version="17.10.1" />
  </ItemGroup>

</Project>

Configurations and filters

.runsettings
The NUnit runner supports .runsettings files through the command-line option
--settings . The following commands show examples.

Using dotnet run :

.NET CLI

dotnet run --project Contoso.MyTests -- --settings config.runsettings

Using dotnet exec :

.NET CLI

dotnet exec Contoso.MyTests.dll --settings config.runsettings

-or-

.NET CLI

dotnet Contoso.MyTests.dll --settings config.runsettings

Using the executable:

.NET CLI
Contoso.MyTests.exe --settings config.runsettings
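
For reference, a minimal config.runsettings file passed through --settings might look like the following. This is a sketch using standard VSTest-style RunConfiguration elements; the NUnit runner supports only a subset of runsettings options, so verify each element against your runner version:

```xml
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <!-- Standard VSTest-style run configuration -->
  <RunConfiguration>
    <!-- Directory where test results are written -->
    <ResultsDirectory>./TestResults</ResultsDirectory>
    <!-- Environment variables made available to the tests -->
    <EnvironmentVariables>
      <SAMPLE_VARIABLE>value</SAMPLE_VARIABLE>
    </EnvironmentVariables>
  </RunConfiguration>
</RunSettings>
```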

Tests filter
You can provide a test filter seamlessly using the command-line option --filter . The
following commands show some examples.

Using dotnet run :

.NET CLI

dotnet run --project Contoso.MyTests -- --filter "FullyQualifiedName~UnitTest1|TestCategory=CategoryA"

Using dotnet exec :

.NET CLI

dotnet exec Contoso.MyTests.dll --filter "FullyQualifiedName~UnitTest1|TestCategory=CategoryA"

-or-

.NET CLI

dotnet Contoso.MyTests.dll --filter "FullyQualifiedName~UnitTest1|TestCategory=CategoryA"

Using the executable:

.NET CLI

Contoso.MyTests.exe --filter "FullyQualifiedName~UnitTest1|TestCategory=CategoryA"

MSTest overview
Article • 03/19/2024

MSTest, the Microsoft Testing Framework, is a test framework for .NET applications. It allows
you to write and execute tests, and provides test suites with integration into the Visual Studio
and Visual Studio Code Test Explorers, the .NET CLI, and many CI pipelines.

MSTest is a fully supported, open-source, cross-platform test framework that
works with all supported .NET targets (.NET Framework, .NET Core, .NET, UWP, WinUI,
and so on), hosted on GitHub .

MSTest support policy


Since v3.0.0, MSTest is strictly following semantic versioning.

The MSTest team only supports the latest released version and strongly encourages its
users and customers to always update to latest version to benefit from new
improvements and security patches. Preview releases aren't supported by Microsoft, but
they are offered for public testing ahead of the final release.

Get started with MSTest
Article • 08/13/2024

MSTest functionality is split into multiple NuGet packages:

MSTest.TestFramework : Contains the attributes and classes that are used to
define MSTest tests.
MSTest.TestAdapter : Contains the test adapter that discovers and runs MSTest
tests.
MSTest.Analyzers : Contains the analyzers that help you write high-quality tests.

We recommend that you don't install these packages directly into your test projects.
Instead, you should use either:

MSTest.Sdk : An MSBuild project SDK that includes all the recommended packages
and greatly simplifies all the boilerplate configuration. Although this is shipped as
a NuGet package, it's not intended to be installed as a regular package
dependency; instead, you should modify the Sdk part of your project (for example,
<Project Sdk="MSTest.Sdk"> or <Project Sdk="MSTest.Sdk/X.Y.Z"> , where X.Y.Z is the
MSTest version). For more information, see MSTest SDK overview.

the MSTest NuGet package, which includes all the recommended packages:
MSTest.TestFramework , MSTest.TestAdapter , MSTest.Analyzers , and
Microsoft.NET.Test.Sdk .

If you are creating a test infrastructure project that is intended to be used as a helper by
multiple test projects, you should install the MSTest.TestFramework and
MSTest.Analyzers packages directly into that project.

MSTest SDK overview
Article • 09/10/2024

Introduced in .NET 9, MSTest.Sdk is an MSBuild project SDK for building MSTest apps.
It's possible to build an MSTest app without this SDK; however, the MSTest SDK is:

Tailored towards providing a first-class experience for testing with MSTest.


The recommended target for most users.
Easy to configure for other users.

The MSTest SDK discovers and runs your tests using the MSTest runner.

You can enable MSTest.Sdk in a project by simply updating the Sdk attribute of the
Project node of your project:

XML

<Project Sdk="MSTest.Sdk/3.3.1">

<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
</PropertyGroup>

<!-- references to the code to test -->

</Project>

7 Note

/3.3.1 is given as an example as it's the first version of the SDK, but it can be
replaced with any newer version.

To simplify handling of versions, we recommend setting the SDK version at solution level
using the global.json file. For example, your project file would look like:

XML

<Project Sdk="MSTest.Sdk">

<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
</PropertyGroup>

<!-- references to the code to test -->


</Project>

Then, specify the MSTest.Sdk version in the global.json file as follows:

JSON

{
"msbuild-sdks": {
"MSTest.Sdk": "3.3.1"
}
}

For more information, see Use MSBuild project SDKs.

When you build the project, all the needed components are restored and installed
using the standard NuGet workflow set by your project.

You don't need anything else to build and run your tests and you can use the same
tooling (for example, dotnet test or Visual Studio) used by a "classic" MSTest project.

) Important

By switching to the MSTest.Sdk , you opt in to using the MSTest runner, including
with dotnet test. That requires modifying your CI and local CLI calls, and also
impacts the available entries of the .runsettings. You can use MSTest.Sdk and still
keep the old integrations and tools by instead switching the runner.

Select the runner


By default, MSTest SDK relies on MSTest runner, but you can switch to VSTest by adding
the property <UseVSTest>true</UseVSTest> .
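
As a sketch, a project file that uses the MSTest SDK but opts back into VSTest might look like the following (the SDK version and target framework shown here are illustrative):

```xml
<Project Sdk="MSTest.Sdk/3.3.1">

  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <!-- Opt out of the MSTest runner; use VSTest instead -->
    <UseVSTest>true</UseVSTest>
  </PropertyGroup>

  <!-- references to the code to test -->

</Project>
```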

Extend MSTest runner


You can customize MSTest runner experience through a set of NuGet package
extensions. To simplify and improve this experience, MSTest SDK introduces two
features:

MSTest runner profile


Enable or disable extensions
MSTest runner profile
The concept of profiles allows you to select the default set of configurations and
extensions that will be applied to your test project.

You can set the profile using the property TestingExtensionsProfile with one of the
following three profiles:

Default - Enables the recommended extensions for this version of MSTest.Sdk.
This is the default when the property isn't set explicitly.

None - No extensions are enabled.

AllMicrosoft - Enables all extensions shipped by Microsoft (including extensions

Here's a full example, using the None profile:

XML

<Project Sdk="MSTest.Sdk/3.3.1">

<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<TestingExtensionsProfile>None</TestingExtensionsProfile>
</PropertyGroup>

<!-- references to the code to test -->

</Project>

Enable or disable extensions


Extensions can be enabled and disabled by MSBuild properties with the pattern
Enable[NugetPackageNameWithoutDots] .

For example, to enable the crash dump extension (NuGet package
Microsoft.Testing.Extensions.CrashDump ), set the property
EnableMicrosoftTestingExtensionsCrashDump to true :

XML

<Project Sdk="MSTest.Sdk/3.3.1">

  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <EnableMicrosoftTestingExtensionsCrashDump>true</EnableMicrosoftTestingExtensionsCrashDump>
  </PropertyGroup>

  <!-- references to the code to test -->

</Project>

For a list of all available extensions, see Microsoft.Testing.Platform extensions.

2 Warning

It's important to review the licensing terms for each extension as they might vary.

Enabled and disabled extensions are combined with the extensions provided by your
selected extension profile.

This property pattern can be used to enable an additional extension on top of the
implicit Default profile (as seen in the previous CrashDumpExtension example).

You can also disable an extension that's coming from the selected profile. For example,
disable the MS Code Coverage extension by setting
<EnableMicrosoftTestingExtensionsCodeCoverage>false</EnableMicrosoftTestingExtensionsCodeCoverage> :

XML

<Project Sdk="MSTest.Sdk/3.3.1">

  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <EnableMicrosoftTestingExtensionsCodeCoverage>false</EnableMicrosoftTestingExtensionsCodeCoverage>
  </PropertyGroup>

  <!-- references to the code to test -->

</Project>

Features

Outside of the selection of the runner and runner-specific extensions, MSTest.Sdk also
provides additional features to simplify and enhance your testing experience.

Test with .NET Aspire
.NET Aspire is an opinionated, cloud-ready stack for building observable, production-ready,
distributed applications. .NET Aspire is delivered through a collection of NuGet
packages that handle specific cloud-native concerns. For more information, see the .NET
Aspire docs.

7 Note

This feature is available from MSTest.Sdk 3.4.0

By setting the property EnableAspireTesting to true , you can bring in all the dependencies
and default using directives you need for testing with Aspire and MSTest .

XML

<Project Sdk="MSTest.Sdk/3.4.0">

<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<EnableAspireTesting>true</EnableAspireTesting>
</PropertyGroup>

<!-- references to the code to test -->

</Project>

Test with Playwright


Playwright enables reliable end-to-end testing for modern web apps. For more
information, see the official Playwright docs .

7 Note

This feature is available from MSTest.Sdk 3.4.0

By setting the property EnablePlaywright to true you can bring in all the dependencies
and default using directives you need for testing with Playwright and MSTest .

XML

<Project Sdk="MSTest.Sdk/3.4.0">
<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<EnablePlaywright>true</EnablePlaywright>
</PropertyGroup>

<!-- references to the code to test -->

</Project>

Migrate to MSTest SDK


Consider the following steps that are required to migrate to the MSTest SDK.

Update your project


When migrating an existing MSTest test project to MSTest SDK, start by replacing the
Sdk="Microsoft.NET.Sdk" entry at the top of your test project file with
Sdk="MSTest.Sdk" :

diff

- Sdk="Microsoft.NET.Sdk"
+ Sdk="MSTest.Sdk"

Add the version to your global.json :

JSON

{
"msbuild-sdks": {
"MSTest.Sdk": "3.3.1"
}
}

You can then start simplifying your project.

Remove default properties:

diff

- <EnableMSTestRunner>true</EnableMSTestRunner>
- <OutputType>Exe</OutputType>
- <IsPackable>false</IsPackable>
- <IsTestProject>true</IsTestProject>
Remove default package references:

diff

- <PackageReference Include="MSTest"
- <PackageReference Include="MSTest.TestFramework"
- <PackageReference Include="MSTest.TestAdapter"
- <PackageReference Include="MSTest.Analyzers"
- <PackageReference Include="Microsoft.NET.Test.Sdk"

Finally, based on the extensions profile you're using, you can also remove some of the
Microsoft.Testing.Extensions.* packages.

Update your CI
Once you've updated your projects, if you're using MSTest runner (default) and if you
rely on dotnet test to run your tests, you must update your CI configuration. For more
information and to guide your understanding of all the required changes, see dotnet
test integration.

Here's an example update when using the DotNetCoreCLI task in Azure DevOps:

diff

  - task: DotNetCoreCLI@2
    inputs:
      command: 'test'
      projects: '**/**.sln'
-     arguments: '--configuration Release'
+     arguments: '--configuration Release -p:TestingPlatformCommandLineArguments="--report-trx --results-directory $(Agent.TempDirectory) --coverage"'

See also
Test project–related properties
Write tests with MSTest
Article • 07/25/2024

In this article, you will learn about the APIs and conventions used by MSTest to help you
write and shape your tests.

Attributes
MSTest uses custom attributes to identify and customize tests.

To help provide a clearer overview of the testing framework, this section organizes the
members of the Microsoft.VisualStudio.TestTools.UnitTesting namespace into groups of
related functionality.

7 Note

Attribute elements, whose names end with "Attribute", can be used with or without
"Attribute" at the end. Attributes that have a parameterless constructor can be
written with or without parentheses. The following code examples work identically:

[TestClass()]

[TestClassAttribute()]

[TestClass]

[TestClassAttribute]

MSTest attributes are divided into the following categories:

Attributes used to identify test classes and methods


Attributes used for data-driven testing
Attributes used to provide initialization and cleanups
Attributes used to control test execution
Utilities attributes
Metadata attributes

Assertions
Use the Assert classes of the Microsoft.VisualStudio.TestTools.UnitTesting namespace to
verify specific functionality. A test method exercises the code of a method in your
application's code, but it reports the correctness of the code's behavior only if you
include Assert statements.

MSTest assertions are divided into the following classes:

The Assert class


The StringAssert class
The CollectionAssert class
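
As an illustrative sketch of how the three assert families differ (the test class and values here are invented for the example), each class targets a different kind of check:

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AssertExamples
{
    [TestMethod]
    public void DemonstrateAssertClasses()
    {
        // Assert: general-purpose checks on values
        Assert.AreEqual(4, 2 + 2);
        Assert.IsTrue("hello".Length == 5);

        // StringAssert: checks specific to strings
        StringAssert.Contains("hello world", "world");
        StringAssert.StartsWith("hello world", "hello");

        // CollectionAssert: checks on collections
        var actual = new List<int> { 1, 2, 3 };
        CollectionAssert.AreEqual(new List<int> { 1, 2, 3 }, actual);
        CollectionAssert.Contains(actual, 2);
    }
}
```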

Testing private members


You can generate a test for a private method. This generation creates a private accessor
class, which instantiates an object of the PrivateObject class. The PrivateObject class is a
wrapper class that uses reflection as part of the private accessor process. The
PrivateType class is similar, but is used for calling private static methods instead of
calling private instance methods.

PrivateObject
PrivateType
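
As an illustrative sketch (the Calculator class and its Double method are hypothetical), PrivateObject can invoke a private instance method by name via reflection:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Calculator
{
    // A private method exercised from a test through PrivateObject
    private int Double(int value) => value * 2;
}

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Double_ReturnsTwiceTheInput()
    {
        var privateObject = new PrivateObject(new Calculator());

        // Invoke the private instance method by name
        var result = (int)privateObject.Invoke("Double", 21);

        Assert.AreEqual(42, result);
    }
}
```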

Unit testing C# with MSTest and .NET
Article • 03/18/2023

This tutorial takes you through an interactive experience building a sample solution
step-by-step to learn unit testing concepts. If you prefer to follow the tutorial using a
pre-built solution, view or download the sample code before you begin. For download
instructions, see Samples and Tutorials.

This article is about testing a .NET Core project. If you're testing an ASP.NET Core
project, see Integration tests in ASP.NET Core.

Prerequisites
The .NET 6.0 SDK or later

Create the source project


Open a shell window. Create a directory called unit-testing-using-mstest to hold the
solution. Inside this new directory, run dotnet new sln to create a new solution file for
the class library and the test project. Create a PrimeService directory. The following
outline shows the directory and file structure thus far:

Console

/unit-testing-using-mstest
unit-testing-using-mstest.sln
/PrimeService

Make PrimeService the current directory and run dotnet new classlib to create the
source project. Rename Class1.cs to PrimeService.cs. Replace the code in the file with the
following code to create a failing implementation of the PrimeService class:

C#

using System;

namespace Prime.Services
{
    public class PrimeService
    {
        public bool IsPrime(int candidate)
        {
            throw new NotImplementedException("Please create a test first.");
        }
    }
}

Change the directory back to the unit-testing-using-mstest directory. Run dotnet sln add
to add the class library project to the solution:

.NET CLI

dotnet sln add PrimeService/PrimeService.csproj

Create the test project


Create the PrimeService.Tests directory. The following outline shows the directory
structure:

Console

/unit-testing-using-mstest
unit-testing-using-mstest.sln
/PrimeService
Source Files
PrimeService.csproj
/PrimeService.Tests

Make the PrimeService.Tests directory the current directory and create a new project
using dotnet new mstest. The dotnet new command creates a test project that uses
MSTest as the test library. The template configures the test runner in the
PrimeServiceTests.csproj file:

XML

<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="16.7.1" />
<PackageReference Include="MSTest.TestAdapter" Version="2.1.1" />
<PackageReference Include="MSTest.TestFramework" Version="2.1.1" />
<PackageReference Include="coverlet.collector" Version="1.3.0" />
</ItemGroup>

The test project requires other packages to create and run unit tests. dotnet new in the
previous step added the Microsoft test SDK, the MSTest test framework, the MSTest test
adapter, and coverlet for code coverage reporting.
Add the PrimeService class library as another dependency to the project. Use the
dotnet add reference command:

.NET CLI

dotnet add reference ../PrimeService/PrimeService.csproj

You can see the entire file in the samples repository on GitHub.

The following outline shows the final solution layout:

Console

/unit-testing-using-mstest
unit-testing-using-mstest.sln
/PrimeService
Source Files
PrimeService.csproj
/PrimeService.Tests
Test Source Files
PrimeServiceTests.csproj

Change to the unit-testing-using-mstest directory, and run dotnet sln add:

.NET CLI

dotnet sln add ./PrimeService.Tests/PrimeService.Tests.csproj

Create the first test


Write a failing test, make it pass, then repeat the process. Remove UnitTest1.cs from the
PrimeService.Tests directory and create a new C# file named
PrimeService_IsPrimeShould.cs with the following content:

C#

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Prime.Services;

namespace Prime.UnitTests.Services
{
    [TestClass]
    public class PrimeService_IsPrimeShould
    {
        private readonly PrimeService _primeService;

        public PrimeService_IsPrimeShould()
        {
            _primeService = new PrimeService();
        }

        [TestMethod]
        public void IsPrime_InputIs1_ReturnFalse()
        {
            bool result = _primeService.IsPrime(1);

            Assert.IsFalse(result, "1 should not be prime");
        }
    }
}

The TestClass attribute denotes a class that contains unit tests. The TestMethod attribute
indicates a method is a test method.

Save this file and execute dotnet test to build the tests and the class library and then run
the tests. The MSTest test runner contains the program entry point to run your tests.
dotnet test starts the test runner using the unit test project you've created.

Your test fails. You haven't created the implementation yet. Make this test pass by
writing the simplest code in the PrimeService class that works:

C#

public bool IsPrime(int candidate)
{
    if (candidate == 1)
    {
        return false;
    }
    throw new NotImplementedException("Please create a test first.");
}


In the unit-testing-using-mstest directory, run dotnet test again. The dotnet test
command runs a build for the PrimeService project and then for the
PrimeService.Tests project. After building both projects, it runs this single test. It
passes.

Add more features


Now that you've made one test pass, it's time to write more. There are a few other
simple cases for prime numbers: 0, -1. You could add new tests with the TestMethod
attribute, but that quickly becomes tedious. There are other MSTest attributes that
enable you to write a suite of similar tests. A test method can execute the same code
but have different input arguments. You can use the DataRow attribute to specify values
for those inputs.

Instead of creating new tests, apply these two attributes to create a single data-driven
test. The data-driven test is a method that tests several values less than two, which is the
lowest prime number. Add a new test method in PrimeService_IsPrimeShould.cs:

C#

[TestMethod]
[DataRow(-1)]
[DataRow(0)]
[DataRow(1)]
public void IsPrime_ValuesLessThan2_ReturnFalse(int value)
{
var result = _primeService.IsPrime(value);

Assert.IsFalse(result, $"{value} should not be prime");
}

Run dotnet test , and two of these tests fail. To make all of the tests pass, change the
if clause at the beginning of the IsPrime method in the PrimeService.cs file:

C#

if (candidate < 2)

Continue to iterate by adding more tests, more theories, and more code in the main
library. You have the finished version of the tests and the complete implementation of
the library.
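
The tutorial leaves the finished implementation to the samples repository. As a sketch only (not necessarily the version in the sample), a trial-division IsPrime that passes the tests above could look like this:

C#

public bool IsPrime(int candidate)
{
    if (candidate < 2)
    {
        return false;
    }

    // Trial division: test every divisor up to the square root of the candidate.
    for (int divisor = 2; divisor * divisor <= candidate; divisor++)
    {
        if (candidate % divisor == 0)
        {
            return false;
        }
    }

    return true;
}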

You've built a small library and a set of unit tests for that library. You've structured the
solution so that adding new packages and tests is part of the normal workflow. You've
concentrated most of your time and effort on solving the goals of the application.

See also
Microsoft.VisualStudio.TestTools.UnitTesting
Use the MSTest framework in unit tests
MSTest V2 test framework docs

Unit testing F# libraries in .NET Core using dotnet test and MSTest
Article • 09/15/2021

This tutorial takes you through an interactive experience building a sample solution
step-by-step to learn unit testing concepts. If you prefer to follow the tutorial using a
pre-built solution, view or download the sample code before you begin. For download
instructions, see Samples and Tutorials.

This article is about testing a .NET Core project. If you're testing an ASP.NET Core
project, see Integration tests in ASP.NET Core.

Creating the source project


Open a shell window. Create a directory called unit-testing-with-fsharp to hold the
solution. Inside this new directory, run dotnet new sln to create a new solution. This
makes it easier to manage both the class library and the unit test project. Inside the
solution directory, create a MathService directory. The directory and file structure thus
far is shown below:

/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService

Make MathService the current directory and run dotnet new classlib -lang "F#" to
create the source project. You'll create a failing implementation of the math service:

F#

module MyMath =
    let squaresOfOdds xs = raise (System.NotImplementedException("You haven't written a test yet!"))

Change the directory back to the unit-testing-with-fsharp directory. Run dotnet sln add
.\MathService\MathService.fsproj to add the class library project to the solution.

Creating the test project


Next, create the MathService.Tests directory. The following outline shows the directory
structure:

Console

/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService
Source Files
MathService.fsproj
/MathService.Tests

Make the MathService.Tests directory the current directory and create a new project
using dotnet new mstest -lang "F#" . This creates a test project that uses MSTest as the
test framework. The generated template configures the test runner in the
MathService.Tests.fsproj:

XML

<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.3.0-preview-20170628-02" />
<PackageReference Include="MSTest.TestAdapter" Version="1.1.18" />
<PackageReference Include="MSTest.TestFramework" Version="1.1.18" />
</ItemGroup>

The test project requires other packages to create and run unit tests. dotnet new in the
previous step added MSTest and the MSTest runner. Now, add the MathService class
library as another dependency to the project. Use the dotnet add reference command:

.NET CLI

dotnet add reference ../MathService/MathService.fsproj

You can see the entire file in the samples repository on GitHub.

You have the following final solution layout:

/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService
Source Files
MathService.fsproj
/MathService.Tests
Test Source Files
MathService.Tests.fsproj

Execute dotnet sln add .\MathService.Tests\MathService.Tests.fsproj in the
unit-testing-with-fsharp directory.

Creating the first test


You write one failing test, make it pass, then repeat the process. Open Tests.fs and add
the following code:

F#

namespace MathService.Tests

open System
open Microsoft.VisualStudio.TestTools.UnitTesting
open MathService

[<TestClass>]
type TestClass () =

[<TestMethod>]
member this.TestMethodPassing() =
Assert.IsTrue(true)

[<TestMethod>]
member this.FailEveryTime() = Assert.IsTrue(false)

The [<TestClass>] attribute denotes a class that contains tests. The [<TestMethod>]
attribute denotes a test method that is run by the test runner. From the unit-testing-
with-fsharp directory, execute dotnet test to build the tests and the class library and
then run the tests. The MSTest test runner contains the program entry point to run your
tests. dotnet test starts the test runner using the unit test project you've created.

These two tests show the most basic passing and failing tests: TestMethodPassing
passes, and FailEveryTime fails. Now, create a test for the squaresOfOdds method. The squaresOfOdds

method returns a list of the squares of all odd integer values that are part of the input
sequence. Rather than trying to write all of those functions at once, you can iteratively
create tests that validate the functionality. Making each test pass means creating the
necessary functionality for the method.

The simplest test we can write is to call squaresOfOdds with all even numbers, where the
result should be an empty sequence of integers. Here's that test:
F#

[<TestMethod>]
member this.TestEvenSequence() =
let expected = Seq.empty<int> |> Seq.toList
let actual = MyMath.squaresOfOdds [2; 4; 6; 8; 10]
Assert.AreEqual(expected, actual)

Notice that the expected sequence has been converted to a list. The MSTest library relies
on many standard .NET types. That dependency means that your public interface and
expected results support ICollection rather than IEnumerable.

When you run the test, you see that your test fails. You haven't created the
implementation yet. Make this test pass by writing the simplest code in the MathService
class that works:

F#

let squaresOfOdds xs =
Seq.empty<int> |> Seq.toList

In the unit-testing-with-fsharp directory, run dotnet test again. The dotnet test
command runs a build for the MathService project and then for the MathService.Tests
project. After building both projects, it runs this single test. It passes.

Completing the requirements


Now that you've made one test pass, it's time to write more. The next simple case works
with a sequence whose only odd number is 1 . The number 1 is easier because the
square of 1 is 1. Here's that next test:

F#

[<TestMethod>]
member public this.TestOnesAndEvens() =
let expected = [1; 1; 1; 1]
let actual = MyMath.squaresOfOdds [2; 1; 4; 1; 6; 1; 8; 1; 10]
Assert.AreEqual(expected, actual)

Executing dotnet test fails the new test. You must update the squaresOfOdds method to
handle this new test. You must filter all the even numbers out of the sequence to make
this test pass. You can do that by writing a small filter function and using Seq.filter :
F#

let private isOdd x = x % 2 <> 0

let squaresOfOdds xs =
xs
|> Seq.filter isOdd |> Seq.toList

Notice the call to Seq.toList . That creates a list, which implements the ICollection
interface.

There's one more step to go: square each of the odd numbers. Start by writing a new
test:

F#

[<TestMethod>]
member public this.TestSquaresOfOdds() =
let expected = [1; 9; 25; 49; 81]
let actual = MyMath.squaresOfOdds [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]
Assert.AreEqual(expected, actual)

You can fix the test by piping the filtered sequence through a map operation to
compute the square of each odd number:

F#

let private square x = x * x


let private isOdd x = x % 2 <> 0

let squaresOfOdds xs =
xs
|> Seq.filter isOdd
|> Seq.map square
|> Seq.toList

You've built a small library and a set of unit tests for that library. You've structured the
solution so that adding new packages and tests is part of the normal workflow. You've
concentrated most of your time and effort on solving the goals of the application.

See also
dotnet new
dotnet sln
dotnet add reference
dotnet test
Unit testing Visual Basic .NET Core libraries using dotnet test and MSTest
Article • 09/15/2021

This tutorial takes you through an interactive experience building a sample solution
step-by-step to learn unit testing concepts. If you prefer to follow the tutorial using a
pre-built solution, view or download the sample code before you begin. For download
instructions, see Samples and Tutorials.

This article is about testing a .NET Core project. If you're testing an ASP.NET Core
project, see Integration tests in ASP.NET Core.

Creating the source project


Open a shell window. Create a directory called unit-testing-vb-mstest to hold the
solution. Inside this new directory, run dotnet new sln to create a new solution. This
practice makes it easier to manage both the class library and the unit test project. Inside
the solution directory, create a PrimeService directory. You have the following directory
and file structure thus far:

Console

/unit-testing-vb-mstest
unit-testing-vb-mstest.sln
/PrimeService

Make PrimeService the current directory and run dotnet new classlib -lang VB to create
the source project. Rename Class1.VB to PrimeService.VB. You create a failing
implementation of the PrimeService class:

VB

Namespace Prime.Services
Public Class PrimeService
Public Function IsPrime(candidate As Integer) As Boolean
Throw New NotImplementedException("Please create a test first")
End Function
End Class
End Namespace

Change the directory back to the unit-testing-vb-mstest directory. Run dotnet sln
add .\PrimeService\PrimeService.vbproj to add the class library project to the solution.

Creating the test project

Next, create the PrimeService.Tests directory. The following outline shows the directory
structure:

Console

/unit-testing-vb-mstest
unit-testing-vb-mstest.sln
/PrimeService
Source Files
PrimeService.vbproj
/PrimeService.Tests

Make the PrimeService.Tests directory the current directory and create a new project
using dotnet new mstest -lang VB. This command creates a test project that uses MSTest
as the test library. The generated template configures the test runner in the
PrimeService.Tests.vbproj:

XML

<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.5.0" />
<PackageReference Include="MSTest.TestAdapter" Version="1.1.18" />
<PackageReference Include="MSTest.TestFramework" Version="1.1.18" />
</ItemGroup>

The test project requires other packages to create and run unit tests. dotnet new in the
previous step added MSTest and the MSTest runner. Now, add the PrimeService class
library as another dependency to the project. Use the dotnet add reference command:

.NET CLI

dotnet add reference ../PrimeService/PrimeService.vbproj

You can see the entire file in the samples repository on GitHub.

You have the following final solution layout:

Console

/unit-testing-vb-mstest
unit-testing-vb-mstest.sln
/PrimeService
Source Files
PrimeService.vbproj
/PrimeService.Tests
Test Source Files
PrimeService.Tests.vbproj

Execute dotnet sln add .\PrimeService.Tests\PrimeService.Tests.vbproj in the
unit-testing-vb-mstest directory.

Creating the first test


You write one failing test, make it pass, then repeat the process. Remove UnitTest1.vb
from the PrimeService.Tests directory and create a new Visual Basic file named
PrimeService_IsPrimeShould.VB. Add the following code:

VB

Imports Microsoft.VisualStudio.TestTools.UnitTesting

Namespace PrimeService.Tests
<TestClass>
Public Class PrimeService_IsPrimeShould
Private _primeService As Prime.Services.PrimeService = New Prime.Services.PrimeService()

<TestMethod>
Sub IsPrime_InputIs1_ReturnFalse()
Dim result As Boolean = _primeService.IsPrime(1)

Assert.IsFalse(result, "1 should not be prime")
End Sub

End Class
End Namespace

The <TestClass> attribute indicates a class that contains tests. The <TestMethod>
attribute denotes a method that is run by the test runner. From the unit-testing-vb-mstest
directory, execute dotnet test to build the tests and the class library and then run the tests.
The MSTest test runner contains the program entry point to run your tests. dotnet test
starts the test runner using the unit test project you've created.

Your test fails. You haven't created the implementation yet. Make this test pass by
writing the simplest code in the PrimeService class that works:

VB

Public Function IsPrime(candidate As Integer) As Boolean
If candidate = 1 Then
Return False
End If
Throw New NotImplementedException("Please create a test first.")
End Function

In the unit-testing-vb-mstest directory, run dotnet test again. The dotnet test
command runs a build for the PrimeService project and then for the
PrimeService.Tests project. After building both projects, it runs this single test. It passes.

Adding more features


Now that you've made one test pass, it's time to write more. There are a few other
simple cases for prime numbers: 0, -1. You could add those cases as new tests with the
<TestMethod> attribute, but that quickly becomes tedious. There are other MSTest

attributes that enable you to write a suite of similar tests. A <DataTestMethod> attribute
represents a suite of tests that execute the same code but have different input
arguments. You can use the <DataRow> attribute to specify values for those inputs.

Instead of creating new tests, apply these two attributes to create a single theory. The
theory is a method that tests several values less than two, which is the lowest prime
number:

VB

<TestClass>
Public Class PrimeService_IsPrimeShould
Private _primeService As Prime.Services.PrimeService = New Prime.Services.PrimeService()

<DataTestMethod>
<DataRow(-1)>
<DataRow(0)>
<DataRow(1)>
Sub IsPrime_ValuesLessThan2_ReturnFalse(value As Integer)
Dim result As Boolean = _primeService.IsPrime(value)

Assert.IsFalse(result, $"{value} should not be prime")
End Sub

<DataTestMethod>
<DataRow(2)>
<DataRow(3)>
<DataRow(5)>
<DataRow(7)>
Public Sub IsPrime_PrimesLessThan10_ReturnTrue(value As Integer)
Dim result As Boolean = _primeService.IsPrime(value)
Assert.IsTrue(result, $"{value} should be prime")
End Sub

<DataTestMethod>
<DataRow(4)>
<DataRow(6)>
<DataRow(8)>
<DataRow(9)>
Public Sub IsPrime_NonPrimesLessThan10_ReturnFalse(value As Integer)
Dim result As Boolean = _primeService.IsPrime(value)

Assert.IsFalse(result, $"{value} should not be prime")
End Sub
End Class

Run dotnet test , and two of these tests fail. To make all of the tests pass, change the
if clause at the beginning of the method:

VB

If candidate < 2 Then

Continue to iterate by adding more tests, more theories, and more code in the main
library. You have the finished version of the tests and the complete implementation of
the library.

You've built a small library and a set of unit tests for that library. You've structured the
solution so that adding new packages and tests is part of the normal workflow. You've
concentrated most of your time and effort on solving the goals of the application.
MSTest attributes
Article • 07/25/2024

MSTest uses custom attributes to identify and customize tests.

To help provide a clearer overview of the testing framework, this section organizes the
members of the Microsoft.VisualStudio.TestTools.UnitTesting namespace into groups of
related functionality.

Note

Attributes whose names end with "Attribute" can be used with or without
"Attribute" at the end. Attributes that have a parameterless constructor can be
written with or without parentheses. The following code examples work identically:

[TestClass()]

[TestClassAttribute()]

[TestClass]

[TestClassAttribute]

Attributes used to identify test classes and methods

Every test class must have the TestClass attribute, and every test method must have the
TestMethod attribute.

TestClassAttribute

The TestClass attribute marks a class that contains tests and, optionally, initialize or
cleanup methods.

This attribute can be extended to change or extend the default behavior.

Example:

C#
[TestClass]
public class MyTestClass
{
}

TestMethodAttribute

The TestMethod attribute is used inside a TestClass to define the actual test method to
run.

The method should be a public instance method defined as void , Task , or ValueTask
(starting with MSTest v3.3). It can optionally be async but should not be async void .

The method should have zero parameters, unless it's used with [DataRow] ,
[DynamicData] , or a similar attribute that provides test case data to the test method.

Consider the following example test class:

C#

[TestClass]
public class MyTestClass
{
[TestMethod]
public void TestMethod()
{
}
}

Attributes used for data-driven testing


Use the following elements to set up data-driven tests. For more information, see Create
a data-driven unit test and Use a configuration file to define a data source.

DataRowAttribute
DataSourceAttribute
DataTestMethodAttribute
DynamicDataAttribute

DataRowAttribute
The DataRowAttribute allows you to run the same test method with multiple different
inputs. It can appear one or multiple times on a test method. It should be combined
with TestMethodAttribute or DataTestMethodAttribute .

The number and types of arguments must exactly match the test method signature.
Consider the following example of a valid test class demonstrating the DataRow attribute
usage with inline arguments that align to test method parameters:

C#

[TestClass]
public class TestClass
{
[TestMethod]
[DataRow(1, "message", true, 2.0)]
public void TestMethod1(int i, string s, bool b, float f)
{
// Omitted for brevity.
}

[TestMethod]
[DataRow(new string[] { "line1", "line2" })]
public void TestMethod2(string[] lines)
{
// Omitted for brevity.
}

[TestMethod]
[DataRow(null)]
public void TestMethod3(object o)
{
// Omitted for brevity.
}

[TestMethod]
[DataRow(new string[] { "line1", "line2" }, new string[] { "line1.", "line2." })]
public void TestMethod4(string[] input, string[] expectedOutput)
{
// Omitted for brevity.
}
}

Note

You can also use the params feature to capture multiple inputs of the DataRow .

C#
[TestClass]
public class TestClass
{
[TestMethod]
[DataRow(1, 2, 3, 4)]
public void TestMethod(params int[] values) {}
}

Examples of invalid combinations:

C#

[TestClass]
public class TestClass
{
[TestMethod]
[DataRow(1, 2)] // Not valid, we are passing 2 inline data but signature expects 1
public void TestMethod1(int i) {}

[TestMethod]
[DataRow(1)] // Not valid, we are passing 1 inline data but signature expects 2
public void TestMethod2(int i, int j) {}

[TestMethod]
[DataRow(1)] // Not valid, count matches but types do not match
public void TestMethod3(string s) {}
}

Note

Starting with MSTest v3, when you want to pass exactly two arrays, you no longer
need to wrap the second array in an object array.
Before v3: [DataRow(new string[] { "a" }, new object[] { new string[] { "b" } })]
Starting with v3: [DataRow(new string[] { "a" }, new string[] { "b" })]

You can modify the display name used in Visual Studio and loggers for each instance of
DataRowAttribute by setting the DisplayName property.

C#

[TestClass]
public class TestClass
{
[TestMethod]
[DataRow(1, 2, DisplayName = "Functional Case FC100.1")]
public void TestMethod(int i, int j) {}
}

You can also create your own specialized data row attribute by inheriting the
DataRowAttribute .

C#

[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class MyCustomDataRowAttribute : DataRowAttribute
{
}

[TestClass]
public class TestClass
{
[TestMethod]
[MyCustomDataRow(1)]
public void TestMethod(int i) {}
}
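
The DynamicDataAttribute from the list above has no example in this section. As a minimal sketch (the member name AdditionCases and the test logic are illustrative, not from the original), test data can come from a static property instead of inline values:

C#

[TestClass]
public class DynamicDataExample
{
    // Each object[] supplies one set of arguments for the test method.
    public static IEnumerable<object[]> AdditionCases =>
        new List<object[]>
        {
            new object[] { 1, 1, 2 },
            new object[] { 2, 3, 5 },
        };

    [TestMethod]
    [DynamicData(nameof(AdditionCases))]
    public void Add_ReturnsSum(int a, int b, int expected)
    {
        Assert.AreEqual(expected, a + b);
    }
}

By default, DynamicData treats the named member as a property; other source kinds can be selected through the attribute's constructor overloads.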

Attributes used to provide initialization and cleanups

Setup and cleanup logic that is common to multiple tests can be extracted to a separate
method and marked with one of the attributes listed below to run it at the appropriate
time, for example before every test. For more information, see Anatomy of a unit test.

Assembly level
AssemblyInitialize is called right after your assembly is loaded and AssemblyCleanup is
called right before your assembly is unloaded.

The methods marked with these attributes should be defined as static void , static
Task or static ValueTask (starting with MSTest v3.3), in a TestClass , and appear only

once. The initialize part requires one argument of type TestContext and the cleanup no
argument.

C#

[TestClass]
public class MyTestClass
{
[AssemblyInitialize]
public static void AssemblyInitialize(TestContext testContext)
{
}

[AssemblyCleanup]
public static void AssemblyCleanup()
{
}
}

C#

[TestClass]
public class MyOtherTestClass
{
[AssemblyInitialize]
public static async Task AssemblyInitialize(TestContext testContext)
{
}

[AssemblyCleanup]
public static async Task AssemblyCleanup()
{
}
}

Class level
ClassInitialize is called right before your class is loaded (but after the static constructor)
and ClassCleanup is called right after your class is unloaded.

It's possible to control the inheritance behavior: only for current class using
InheritanceBehavior.None or for all derived classes using

InheritanceBehavior.BeforeEachDerivedClass .

It's also possible to configure whether the class cleanup should be run at the end of the
class or at the end of the assembly.

The methods marked with these attributes should be defined as static void , static
Task or static ValueTask (starting with MSTest v3.3), in a TestClass , and appear only

once. The initialize part requires one argument of type TestContext and the cleanup no
argument.

C#

[TestClass]
public class MyTestClass
{
[ClassInitialize]
public static void ClassInitialize(TestContext testContext)
{
}

[ClassCleanup]
public static void ClassCleanup()
{
}
}

C#

[TestClass]
public class MyOtherTestClass
{
[ClassInitialize]
public static async Task ClassInitialize(TestContext testContext)
{
}

[ClassCleanup]
public static async Task ClassCleanup()
{
}
}

Test level
TestInitialize is called right before your test is started and TestCleanup is called right
after your test is finished.

The TestInitialize is similar to the class constructor but is usually more suitable for
long or async initializations. The TestInitialize is always called after the constructor
and called for each test (including each data row of data-driven tests).

The TestCleanup is similar to the class Dispose (or DisposeAsync ) but is usually more
suitable for long or async cleanups. The TestCleanup is always called just before the
DisposeAsync / Dispose and called for each test (including each data row of data-driven

tests).

The methods marked with these attributes should be defined as void , Task or
ValueTask (starting with MSTest v3.3), in a TestClass , be parameterless, and appear one

or multiple times.

C#
[TestClass]
public class MyTestClass
{
[TestInitialize]
public void TestInitialize()
{
}

[TestCleanup]
public void TestCleanup()
{
}
}

C#

[TestClass]
public class MyOtherTestClass
{
[TestInitialize]
public async Task TestInitialize()
{
}

[TestCleanup]
public async Task TestCleanup()
{
}
}

Attributes used to control test execution


The following attributes can be used to modify the way tests are executed.

TimeoutAttribute

The Timeout attribute can be used to specify the maximum time in milliseconds that a
test method is allowed to run. If the test method runs longer than the specified time, the
test will be aborted and marked as failed.

This attribute can be applied to any test method or any fixture method (initialization and
cleanup methods). It is also possible to specify the timeout globally for either all test
methods or all test fixture methods by using the timeout properties of the runsettings
file.
Note

The timeout is not guaranteed to be precise. The test will be aborted after the
specified time has passed, but it may take a few milliseconds longer.

When using the timeout feature, a separate thread/task is created to run the test
method. The main thread/task is responsible for monitoring the timeout and
unobserving the method thread/task if the timeout is reached.

Starting with MSTest 3.6, it is possible to specify the CooperativeCancellation property on
the attribute (or globally through runsettings) to enable cooperative cancellation. In this
mode, the method is responsible for checking the cancellation token and aborting the
test if it is signaled, as you would do in a typical async method. This mode is more
performant, allows more precise control over the cancellation process, and
can be applied to both async and sync methods.
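
A minimal sketch of both styles follows; the method names and empty bodies are placeholders, and the cooperative variant assumes MSTest 3.6 or later:

C#

[TestClass]
public class TimeoutExamples
{
    // Aborted and reported as failed if it runs longer than two seconds.
    [TestMethod]
    [Timeout(2000)]
    public void ClassicTimeout()
    {
    }

    // Cooperative mode: the test method itself is expected to observe cancellation.
    [TestMethod]
    [Timeout(2000, CooperativeCancellation = true)]
    public void CooperativeTimeout()
    {
    }
}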

STATestClassAttribute

When applied to a test class, the [STATestClass] attribute indicates that all test
methods (and the [ClassInitialize] and [ClassCleanup] methods) in the class should
be run in a single-threaded apartment (STA). This attribute is useful when the test
methods interact with COM objects that require STA.

Note

This is only supported on Windows.

STATestMethodAttribute

When applied to a test method, the [STATestMethod] attribute indicates that the test
method should be run in a single-threaded apartment (STA). This attribute is useful
when the test method interacts with COM objects that require STA.

Note

This is only supported on Windows.
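
A minimal sketch of both attributes (the class and method names are illustrative, and both only work on Windows):

C#

// Every test method in this class runs in a single-threaded apartment.
[STATestClass]
public class ComInteropTests
{
    [TestMethod]
    public void UsesComObject()
    {
    }
}

[TestClass]
public class MixedTests
{
    // Only this method runs in a single-threaded apartment.
    [STATestMethod]
    public void AlsoUsesComObject()
    {
    }
}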

ParallelizeAttribute
By default, MSTest runs tests in a sequential order. The Parallelize attribute can be used
to run tests in parallel. It's an assembly-level attribute. You can specify whether the
parallelism should be at class level (multiple classes can be run in parallel but tests in a
given class are run sequentially) or at method level.

It's also possible to specify the maximum number of threads to use for parallel
execution. A value of 0 (default value) means that the number of threads is equal to the
number of logical processors on the machine.

It is also possible to specify the parallelism through the parallelization properties of the
runsettings file.

DoNotParallelizeAttribute

The DoNotParallelize attribute can be used to prevent parallel execution of tests in a
given assembly. This attribute can be applied at the assembly level, class level, or
method level.

Note

By default, MSTest runs tests in sequential order so you only need to use this
attribute if you have applied the [Parallelize] attribute at the assembly level.
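
Putting the two attributes together, here is a sketch of an assembly that opts in to method-level parallelism while keeping one test sequential (the class and method names are illustrative):

C#

// Assembly-level opt-in: run test methods in parallel on up to four threads.
[assembly: Parallelize(Workers = 4, Scope = ExecutionScope.MethodLevel)]

[TestClass]
public class ParallelExamples
{
    [TestMethod]
    public void RunsInParallel()
    {
    }

    // Excluded from parallel execution, for example because it mutates shared state.
    [TestMethod]
    [DoNotParallelize]
    public void RunsSequentially()
    {
    }
}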

Utilities attributes

DeploymentItemAttribute

The MSTest framework introduced DeploymentItemAttribute for copying files or folders
specified as deployment items to the deployment directory. (Without a custom
output path, the copied files end up in the TestResults folder inside the project folder.) The
deployment directory is where all the deployment items are present along with the test
project DLL.

It can be used either on test classes (classes marked with TestClass attribute) or on test
methods (methods marked with TestMethod attribute).

Users can have multiple instances of the attribute to specify more than one item. See
the DeploymentItemAttribute constructors for the available overloads.

Example
C#

[TestClass]
[DeploymentItem(@"C:\classLevelDepItem.xml")] // Copy file using some absolute path
public class UnitTest1
{
[TestMethod]
[DeploymentItem(@"..\..\methodLevelDepItem1.xml")] // Copy file using a relative path from the dll output location
[DeploymentItem(@"C:\DataFiles\methodLevelDepItem2.xml", "SampleDataFiles")] // File will be added under a SampleDataFiles folder in the deployment directory
public void TestMethod1()
{
string textFromFile = File.ReadAllText("classLevelDepItem.xml");
}
}

Warning

We do not recommend the usage of this attribute for copying files to the
deployment directory.

ExpectedExceptionAttribute

The MSTest framework introduced ExpectedExceptionAttribute for marking a test
method to expect an exception of a specific type. The test will pass if the expected
exception is thrown and the exception message matches the expected message.

Warning

This attribute exists for backward compatibility and is not recommended for new
tests. Instead, use the Assert.ThrowsException method.
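
As a sketch of the recommended approach (the Divide helper is a hypothetical system under test, not part of the original article):

C#

[TestMethod]
public void Divide_WhenDenominatorIsZero_Throws()
{
    // Fails the test unless the lambda throws exactly DivideByZeroException.
    Assert.ThrowsException<DivideByZeroException>(() => Divide(1, 0));
}

private static int Divide(int numerator, int denominator)
    => numerator / denominator;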

Metadata attributes
The following attributes and the values assigned to them appear in the Visual Studio
Properties window for a particular test method. These attributes aren't meant to be
accessed through the code of the test. Instead, they affect the ways the test is used or
run, either by you through the IDE of Visual Studio, or by the Visual Studio test engine.
For example, some of these attributes appear as columns in the Test Manager window
and Test Results window, which means that you can use them to group and sort tests
and test results. One such attribute is TestPropertyAttribute, which you use to add
arbitrary metadata to tests.

For example, you could use it to store the name of a "test pass" that this test covers, by
marking the test with [TestProperty("Feature", "Accessibility")] . Or, you could use it
to store an indicator of the kind of test it is with [TestProperty("ProductMilestone",
"42")] . The property you create by using this attribute, and the property value you
assign, are both displayed in the Visual Studio Properties window under the heading
Test specific.

DescriptionAttribute
IgnoreAttribute
OwnerAttribute
PriorityAttribute
TestCategoryAttribute
TestPropertyAttribute
WorkItemAttribute
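
A sketch combining several of these attributes on one test method (the values are arbitrary examples):

C#

[TestMethod]
[TestCategory("Integration")]
[Priority(2)]
[Owner("contoso-qa")]
[TestProperty("Feature", "Accessibility")]
public void MetadataDecoratedTest()
{
}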

The attributes below relate the test method that they decorate to entities in the project
hierarchy of a Team Foundation Server team project:

CssIterationAttribute
CssProjectStructureAttribute

MSTest assertions
Article • 07/25/2024

Use the Assert classes of the Microsoft.VisualStudio.TestTools.UnitTesting namespace to


verify specific functionality. A test method exercises the code of a method in your
application's code, but it reports the correctness of the code's behavior only if you
include Assert statements.

The Assert class


In your test method, you can call any methods of the
Microsoft.VisualStudio.TestTools.UnitTesting.Assert class, such as Assert.AreEqual. The
Assert class has many methods to choose from, and many of the methods have several
overloads.

The StringAssert class


Use the StringAssert class to compare and examine strings. This class contains a variety
of useful methods, such as StringAssert.Contains, StringAssert.Matches, and
StringAssert.StartsWith.

The CollectionAssert class


Use the CollectionAssert class to compare collections of objects, or to verify the state of
a collection.
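
A short sketch showing one method from each class (the values are arbitrary):

C#

[TestMethod]
public void AssertionExamples()
{
    // Assert: general-purpose value checks.
    Assert.AreEqual(42, 40 + 2);

    // StringAssert.Contains takes the string first and the expected substring second.
    StringAssert.Contains("hello world", "world");

    // AreEquivalent ignores element order; AreEqual would not.
    CollectionAssert.AreEquivalent(new[] { 3, 2, 1 }, new[] { 1, 2, 3 });
}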

Run tests with MSTest
Article • 07/25/2024

There are several ways to run MSTest tests depending on your needs. You can run tests
from an IDE (for example, Visual Studio, Visual Studio Code, or JetBrains Rider), or from
the command line, or from a CI service (such as GitHub Actions or Azure DevOps).

Historically, MSTest relied on VSTest for running tests in all contexts but starting with
version 3.2.0, MSTest has its own test runner. This new runner is more lightweight and
faster than VSTest, and it's the recommended way to run MSTest tests.

MSTest runner overview
Article • 09/10/2024

The MSTest runner is a lightweight and portable alternative to VSTest for running tests
in all contexts (for example, continuous integration (CI) pipelines, the CLI, Visual
Studio Test Explorer, and VS Code Test Explorer). The MSTest runner is embedded directly
in your MSTest test projects, and there are no other app dependencies, such as
vstest.console or dotnet test , needed to run your tests.

The MSTest runner is open source and builds on the Microsoft.Testing.Platform library.
You can find the Microsoft.Testing.Platform code in the microsoft/testfx GitHub repository.
The MSTest runner comes bundled with MSTest in 3.2.0-preview.23623.1 or newer.

Enable MSTest runner in an MSTest project


It's recommended to use the MSTest SDK, as it greatly simplifies your project
configuration and updates, and it ensures proper alignment between the versions of the
platform (the MSTest runner) and its extensions.

When you use the MSTest SDK, you're opted in to using the MSTest runner by default.

XML

<Project Sdk="MSTest.Sdk/3.3.1">

<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
</PropertyGroup>

</Project>

Alternatively, you can enable the MSTest runner by adding the EnableMSTestRunner property
and setting OutputType to Exe in your project file. You also need to ensure that you're
using MSTest 3.2.0-preview.23623.1 or newer.

Consider the following example project file:

XML

<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<!-- Enable the MSTest runner, this is an opt-in feature -->
<EnableMSTestRunner>true</EnableMSTestRunner>
<OutputType>Exe</OutputType>

<TargetFramework>net8.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>

<IsPackable>false</IsPackable>
<IsTestProject>true</IsTestProject>
</PropertyGroup>

<ItemGroup>
<!--
MSTest meta package is the recommended way to reference MSTest.
It's equivalent to referencing:
Microsoft.NET.Test.Sdk
MSTest.TestAdapter
MSTest.TestFramework
MSTest.Analyzers
-->
<PackageReference Include="MSTest" Version="3.2.0" />

  <!--
    Coverlet collector isn't compatible with the MSTest runner. You can
    either switch to Microsoft CodeCoverage (as shown below), or switch
    to using the coverlet global tool:
    https://github.com/coverlet-coverage/coverlet#net-global-tool-guide-suffers-from-possible-known-issue
  -->
  <PackageReference Include="Microsoft.Testing.Extensions.CodeCoverage"
                    Version="17.10.1" />
</ItemGroup>

</Project>

Configurations and filters

.runsettings
The MSTest runner supports runsettings through the command-line option --settings.
For the full list of supported MSTest entries, see Configure MSTest: Runsettings.
The following commands show various usage examples.

Using dotnet run :

.NET CLI
dotnet run --project Contoso.MyTests -- --settings config.runsettings

Using dotnet exec :

.NET CLI

dotnet exec Contoso.MyTests.dll --settings config.runsettings

-or-

.NET CLI

dotnet Contoso.MyTests.dll --settings config.runsettings

Using the executable:

.NET CLI

Contoso.MyTests.exe --settings config.runsettings

Tests filter
You can provide the tests filter seamlessly using the command line option --filter . The
following commands show some examples.

Using dotnet run :

.NET CLI

dotnet run --project Contoso.MyTests -- --filter "FullyQualifiedName~UnitTest1|TestCategory=CategoryA"

Using dotnet exec :

.NET CLI

dotnet exec Contoso.MyTests.dll --filter "FullyQualifiedName~UnitTest1|TestCategory=CategoryA"

-or-

.NET CLI
dotnet Contoso.MyTests.dll --filter
"FullyQualifiedName~UnitTest1|TestCategory=CategoryA"

Using the executable:

.NET CLI

Contoso.MyTests.exe --filter
"FullyQualifiedName~UnitTest1|TestCategory=CategoryA"
Configure MSTest
Article • 09/12/2024

MSTest, the Microsoft Testing Framework, is a test framework for .NET applications. It allows
you to write and execute tests, and it provides test suites with integration into the Visual
Studio and Visual Studio Code Test Explorers, the .NET CLI, and many CI pipelines.

MSTest is a fully supported, open-source, cross-platform test framework that works with all
supported .NET targets (.NET Framework, .NET Core, .NET, UWP, WinUI, and so on), hosted on
GitHub .

Runsettings
A .runsettings file can be used to configure how unit tests are run. To learn more about
runsettings and the configurations related to the platform, see the VSTest runsettings
documentation or the MSTest runner runsettings documentation.

MSTest element
The following runsettings entries let you configure how MSTest behaves.

AssemblyCleanupTimeout (default: 0)
Specify globally the timeout to apply on each instance of an assembly cleanup method. A [Timeout] attribute specified on the assembly cleanup method overrides the global timeout.

AssemblyInitializeTimeout (default: 0)
Specify globally the timeout to apply on each instance of an assembly initialize method. A [Timeout] attribute specified on the assembly initialize method overrides the global timeout.

AssemblyResolution (default: false)
You can specify paths to extra assemblies when finding and running unit tests. For example, use these paths for dependency assemblies that aren't in the same directory as the test assembly. To specify a path, use a Directory Path element. Paths can include environment variables.

<AssemblyResolution>
  <Directory path="D:\myfolder\bin\" includeSubDirectories="false"/>
</AssemblyResolution>

This feature is only applied when using a .NET Framework target.

CaptureTraceOutput (default: true)
Capture text messages coming from the Console.Write* , Trace.Write* , and Debug.Write* APIs that will be associated with the current running test.

ClassCleanupLifecycle (default: EndOfClass)
If you want the class cleanup to occur at the end of the assembly, set it to EndOfAssembly. (No longer supported starting from MSTest v4, as EndOfClass is the default and only ClassCleanup behavior.)

ClassCleanupTimeout (default: 0)
Specify globally the timeout to apply on each instance of a class cleanup method. A [Timeout] attribute specified on the class cleanup method overrides the global timeout.

ClassInitializeTimeout (default: 0)
Specify globally the timeout to apply on each instance of a class initialize method. A [Timeout] attribute specified on the class initialize method overrides the global timeout.

DeleteDeploymentDirectoryAfterTestRunIsComplete (default: true)
To retain the deployment directory after a test run, set this value to false.

DeploymentEnabled (default: true)
If you set the value to false, deployment items that you specify in your test method aren't copied to the deployment directory.

DeployTestSourceDependencies (default: true)
A value indicating whether the test source references are to be deployed.

EnableBaseClassTestMethodsFromOtherAssemblies (default: true)
A value indicating whether to enable discovery of test methods from base classes in a different assembly from the inheriting test class.

ForcedLegacyMode (default: false)
In older versions of Visual Studio, the MSTest adapter was optimized to make it faster and more scalable. Some behavior, such as the order in which tests are run, might not be exactly as it was in previous editions of Visual Studio. Set the value to true to use the older test adapter. For example, you might use this setting if you have an app.config file specified for a unit test. We recommend that you consider refactoring your tests to allow you to use the newer adapter.

MapInconclusiveToFailed (default: false)
If a test completes with an inconclusive status, it's mapped to the skipped status in Test Explorer. If you want inconclusive tests to be shown as failed, set the value to true.

MapNotRunnableToFailed (default: true)
A value indicating whether a not runnable result is mapped to a failed test.

Parallelize
Used to set the parallelization settings:
Workers: The number of threads/workers to be used for parallelization, which is by default the number of processors on the current machine.
Scope: The scope of parallelization. You can set it to MethodLevel. By default, it's ClassLevel.

<Parallelize>
  <Workers>32</Workers>
  <Scope>MethodLevel</Scope>
</Parallelize>

SettingsFile
You can specify a test settings file to use with the MSTest adapter here. You can also specify a test settings file from the settings menu. If you specify this value, you must also set ForcedLegacyMode to true.

<ForcedLegacyMode>true</ForcedLegacyMode>

TestCleanupTimeout (default: 0)
Specify globally the timeout to apply on each instance of a test cleanup method. A [Timeout] attribute specified on the test cleanup method overrides the global timeout.

TestInitializeTimeout (default: 0)
Specify globally the timeout to apply on each instance of a test initialize method. A [Timeout] attribute specified on the test initialize method overrides the global timeout.

TestTimeout (default: 0)
Gets the specified global test case timeout.

TreatClassAndAssemblyCleanupWarningsAsErrors (default: false)
To see your failures in class cleanups as errors, set this value to true.

TreatDiscoveryWarningsAsErrors (default: false)
To report test discovery warnings as errors, set this value to true.

ConsiderFixturesAsSpecialTests (default: false)
To display AssemblyInitialize , AssemblyCleanup , ClassInitialize , and ClassCleanup as individual entries in the Visual Studio and Visual Studio Code Test Explorers and the .trx log, set this value to true.
TestRunParameter element

XML

<TestRunParameters>
<Parameter name="webAppUrl" value="http://localhost" />
</TestRunParameters>

Test run parameters provide a way to define variables and values that are available to the tests
at run time. Access the parameters using the MSTest TestContext.Properties property:

C#

private string _appUrl;

public TestContext TestContext { get; set; }

[TestMethod]
public void HomePageTest()
{
    // Properties returns object; assign to the field rather than
    // declaring a new local that would shadow it.
    _appUrl = TestContext.Properties["webAppUrl"].ToString();
}

To use test run parameters, add a public TestContext property to your test class.

Example .runsettings file


The following XML shows the contents of a typical .runsettings file. Copy this code and edit it to
suit your needs.

Each element of the file is optional because it has a default value.

XML

<?xml version="1.0" encoding="utf-8"?>


<RunSettings>
<!-- Parameters used by tests at run time -->
<TestRunParameters>
<Parameter name="webAppUrl" value="http://localhost" />
<Parameter name="webAppUserName" value="Admin" />
<Parameter name="webAppPassword" value="Password" />
</TestRunParameters>

<!-- MSTest -->


<MSTest>
<MapInconclusiveToFailed>True</MapInconclusiveToFailed>
<CaptureTraceOutput>false</CaptureTraceOutput>

<DeleteDeploymentDirectoryAfterTestRunIsComplete>False</DeleteDeploymentDirectoryAfterTestRunIsComplete>
<DeploymentEnabled>False</DeploymentEnabled>
<ConsiderFixturesAsSpecialTests>False</ConsiderFixturesAsSpecialTests>
<AssemblyResolution>
<Directory path="D:\myfolder\bin\" includeSubDirectories="false"/>
</AssemblyResolution>
</MSTest>

</RunSettings>
MSTest code analysis
Article • 08/13/2024

MSTest analysis ("MSTESTxxxx") rules inspect your C# or Visual Basic code for security,
performance, design and other issues.

 Tip

If you're using Visual Studio, many analyzer rules have associated code fixes that
you can apply to correct the problem. Code fixes are shown in the light bulb icon
menu.

The rules are organized into categories such as design, performance, and usage.

Categories
Design rules

Design rules will help you create and maintain test suites that adhere to proper design
and good practices.

Performance rules

Rules that support high-performance testing.

Suppression rules

Rules that support suppressing diagnostics from other rules.

Usage rules

Rules that support proper usage of MSTest.

6 Collaborate with us on
GitHub .NET feedback
.NET is an open source project.
The source for this content can
Select a link to provide feedback:
be found on GitHub, where you
can also create and review
 Open a documentation issue
issues and pull requests. For
 Provide product feedback
more information, see our
contributor guide.
MSTest design rules
Article • 05/29/2024

Design rules will help you create and maintain test suites that adhere to proper design
and good practices.

MSTEST0004 (PublicTypeShouldBeTestClassAnalyzer): It's considered a good practice to have
only test classes marked public in a test project.

MSTEST0006 (AvoidExpectedExceptionAttributeAnalyzer): Prefer Assert.ThrowsException or
Assert.ThrowsExceptionAsync over [ExpectedException] as it ensures that only the expected
call throws the expected exception. The assert APIs also provide more flexibility and
allow you to assert extra properties of the exception.

MSTEST0015 (TestMethodShouldNotBeIgnored): Test methods should not be ignored (marked
with [Ignore] ).

MSTEST0016 (TestClassShouldHaveTestMethod): Test class should have at least one test
method or be 'static' with method(s) marked by [AssemblyInitialize] and/or
[AssemblyCleanup] .

MSTEST0019 (PreferTestInitializeOverConstructorAnalyzer): Prefer TestInitialize methods
over constructors.

MSTEST0020 (PreferConstructorOverTestInitializeAnalyzer): Prefer constructors over
TestInitialize methods.

MSTEST0021 (PreferDisposeOverTestCleanupAnalyzer): Prefer Dispose over TestCleanup
methods.

MSTEST0022 (PreferTestCleanupOverDisposeAnalyzer): Prefer TestCleanup over Dispose
methods.

MSTEST0025 (PreferAssertFailOverAlwaysFalseConditionsAnalyzer): Use 'Assert.Fail' instead
of an always-failing assert.
MSTEST0004: Public types should be
test classes
Article • 08/13/2024

Rule ID: MSTEST0004
Title: Public types should be test classes
Category: Design
Fix is breaking or non-breaking: Breaking
Enabled by default: No
Default severity: Disabled
Introduced in version: 3.2.0
There is a code fix: Yes

Cause
A public type is not a test class (class marked with the [TestClass] attribute).

Rule description
It's considered a good practice to keep all helper and base classes internal and have
only test classes marked public in a test project.

How to fix violations


Change the accessibility of the type to not be public .
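For example, making a helper type internal resolves the diagnostic (a minimal sketch; TestHelpers is a hypothetical helper class):

C#

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Before: 'public static class TestHelpers' would trigger MSTEST0004.
internal static class TestHelpers
{
    public static int SampleValue() => 42;
}

// Only test classes stay public.
[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void UsesHelper() => Assert.AreEqual(42, TestHelpers.SampleValue());
}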

When to suppress warnings


You can suppress instances of this diagnostic if the type should remain public for
compatibility reason.
MSTEST0006: Avoid
[ExpectedException]
Article • 08/13/2024

Rule ID: MSTEST0006
Title: Avoid [ExpectedException]
Category: Design
Fix is breaking or non-breaking: Non-breaking
Enabled by default: Yes
Default severity: Info
Introduced in version: 3.2.0
There is a code fix: No

Cause
A method is marked with the [ExpectedException] attribute.

Rule description
Prefer Assert.ThrowsException or Assert.ThrowsExceptionAsync over the
[ExpectedException] attribute as it ensures that only the expected line of code throws

the expected exception, instead of acting on the whole body of the test. The assert APIs
also provide more flexibility and allow you to assert extra properties of the exception.

C#

[TestClass]
public class TestClass
{
[TestMethod]
[ExpectedException(typeof(InvalidOperationException))] // Violation
public void TestMethod()
{
// Arrange
var person = new Person
{
FirstName = "John",
LastName = "Doe",
};
person.SetAge(-1);

// Act
person.GrowOlder();
}
}

How to fix violations


Replace the usage of the [ExpectedException] attribute with a call to
Assert.ThrowsException or Assert.ThrowsExceptionAsync .

C#

[TestClass]
public class TestClass
{
[TestMethod]
public void TestMethod()
{
// Arrange
var person = new Person
{
FirstName = "John",
LastName = "Doe",
};
person.SetAge(-1);

// Act
        Assert.ThrowsException<InvalidOperationException>(() => person.GrowOlder());
}
}

When to suppress warnings


It is safe to suppress this diagnostic when the method is a one-liner.

C#

[TestClass]
public class TestClass
{
[TestMethod]
[ExpectedException(typeof(ArgumentNullException))]
public void TestMethod()
{
new Person(null);
}
}

MSTEST0015: Test method should not
be ignored
Article • 08/13/2024

Rule ID: MSTEST0015
Title: Test method should not be ignored
Category: Design
Fix is breaking or non-breaking: Non-breaking
Enabled by default: Yes
Default severity: Info
Introduced in version: 3.3.0
There is a code fix: No

Cause
A test method is marked with the [Ignore] attribute.

Rule description
Test methods should not be ignored (marked with [Ignore] ).

How to fix violations


Ensure that the test method isn't ignored.

When to suppress warnings


Do not suppress a warning from this rule. If you ignore this rule, the test method will
stay ignored and never run.
MSTEST0016: Test class should have test
method
Article • 08/13/2024

Rule ID: MSTEST0016
Title: Test class should have test method
Category: Design
Fix is breaking or non-breaking: Non-breaking
Enabled by default: Yes
Default severity: Info
Introduced in version: 3.3.0
There is a code fix: No

Cause
A test class has no test methods.

Rule description
A test class should have at least one test method, or be static and have methods
attributed with [AssemblyInitialize] or [AssemblyCleanup] .

How to fix violations

Ensure that the test class has a test method, or is static and has methods attributed
with [AssemblyInitialize] or [AssemblyCleanup] .
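For example, both of the following shapes satisfy the rule (a minimal sketch):

C#

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Valid: the class contains at least one test method.
[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_ReturnsSum() => Assert.AreEqual(4, 2 + 2);
}

// Also valid: a static class hosting only assembly-level fixture methods.
[TestClass]
public static class AssemblyFixtures
{
    [AssemblyInitialize]
    public static void Init(TestContext context) { /* one-time setup */ }

    [AssemblyCleanup]
    public static void Cleanup() { /* one-time teardown */ }
}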

When to suppress warnings


Do not suppress a warning from this rule. If you ignore this rule, the test class will be
ignored.

MSTEST0019: Prefer TestInitialize
methods over constructors
Article • 08/13/2024

Rule ID: MSTEST0019
Title: Prefer TestInitialize methods over constructors
Category: Design
Fix is breaking or non-breaking: Non-breaking
Enabled by default: No
Default severity: Info
Introduced in version: 3.4.0
There is a code fix: No

Cause
This rule raises a diagnostic when there is a parameterless explicit constructor declared
on a test class (class marked with [TestClass] ).

Rule description
Use this rule to enforce using [TestInitialize] for both synchronous and asynchronous
test initialization. Asynchronous (async/await) test initialization requires the use of
[TestInitialize] methods, because the resulting Task needs to be awaited.

How to fix violations


Replace the constructor call with a [TestInitialize] method.
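For example (a minimal sketch; the _payload field is hypothetical):

C#

using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ServiceTests
{
    private string _payload;

    // Before: 'public ServiceTests() { _payload = "ready"; }' triggers MSTEST0019.
    // After: [TestInitialize] also supports async initialization.
    [TestInitialize]
    public async Task InitAsync() => _payload = await Task.FromResult("ready");

    [TestMethod]
    public void PayloadIsReady() => Assert.AreEqual("ready", _payload);
}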

When to suppress warnings


You usually don't want to suppress warnings from this rule if you decided to opt in to
it.

MSTEST0020: Prefer constructors over
TestInitialize methods
Article • 08/13/2024

Rule ID: MSTEST0020
Title: Prefer constructors over TestInitialize methods
Category: Design
Fix is breaking or non-breaking: Non-breaking
Enabled by default: No
Default severity: Info
Introduced in version: 3.4.0
There is a code fix: No

Cause
This rule raises a diagnostic when there is a void [TestInitialize] method.

Rule description
It is usually better to rely on constructors for non-async initialization as you can then
rely on readonly and get better compiler feedback when developing your tests. This is
especially true when dealing with nullable enabled contexts.

How to fix violations


Replace void-returning [TestInitialize] methods with constructors.
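For example (a minimal sketch):

C#

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class InventoryTests
{
    // readonly is only possible with constructor initialization,
    // and the compiler can verify the field is always assigned.
    private readonly List<int> _items;

    public InventoryTests() => _items = new List<int> { 1, 2, 3 };

    [TestMethod]
    public void HasThreeItems() => Assert.AreEqual(3, _items.Count);
}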

When to suppress warnings


You usually don't want to suppress warnings from this rule if you decided to opt in to
it.

MSTEST0021: Prefer Dispose over
TestCleanup methods
Article • 08/13/2024

Rule ID: MSTEST0021
Title: Prefer Dispose over TestCleanup methods
Category: Design
Fix is breaking or non-breaking: Non-breaking
Enabled by default: No
Default severity: Info
Introduced in version: 3.4.0
There is a code fix: No

Cause
This rule raises a diagnostic when there is a void [TestCleanup] method, or for any
[TestCleanup] method if the target framework supports the IAsyncDisposable interface.

Rule description
Using Dispose or DisposeAsync is a more common pattern and some developers prefer
to always use this pattern even for tests.

How to fix violations


Replace the [TestCleanup] method with the Dispose or DisposeAsync pattern.
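For example (a minimal sketch; it assumes MSTest disposes the test class instance after each test, which is the documented behavior for test classes implementing IDisposable):

C#

using System;
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class WriterTests : IDisposable
{
    private readonly StringWriter _writer = new StringWriter();

    [TestMethod]
    public void WritesText()
    {
        _writer.Write("hello");
        Assert.AreEqual("hello", _writer.ToString());
    }

    // Replaces a void [TestCleanup] method.
    public void Dispose() => _writer.Dispose();
}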

When to suppress warnings


You usually don't want to suppress warnings from this rule if you decided to opt in to
it.

MSTEST0022: Prefer TestCleanup over
Dispose methods
Article • 08/13/2024

Rule ID: MSTEST0022
Title: Prefer TestCleanup over Dispose methods
Category: Design
Fix is breaking or non-breaking: Non-breaking
Enabled by default: No
Default severity: Info
Introduced in version: 3.4.0
There is a code fix: No

Cause
This rule raises a diagnostic when a Dispose or DisposeAsync method is detected.

Rule description
Although Dispose or DisposeAsync is a more common pattern, some developers prefer
to always use [TestCleanup] for their test cleanup phase, because the method allows an
async pattern even on older versions of .NET.

How to fix violations


Replace Dispose or DisposeAsync methods with [TestCleanup] .
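For example (a minimal sketch):

C#

using System.IO;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class WriterTests
{
    private StringWriter _writer;

    [TestInitialize]
    public void Setup() => _writer = new StringWriter();

    // An async [TestCleanup] can await teardown work, even on target
    // frameworks that don't support IAsyncDisposable.
    [TestCleanup]
    public async Task CleanupAsync()
    {
        await Task.Run(() => _writer.Dispose());
    }
}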

When to suppress warnings


You usually don't want to suppress warnings from this rule if you decided to opt in to
it.

MSTEST0025: Use 'Assert.Fail' instead of
an always-failing assert
Article • 05/14/2024

Rule ID: MSTEST0025
Title: Use 'Assert.Fail' instead of an always-failing assert
Category: Design
Fix is breaking or non-breaking: Non-breaking
Enabled by default: Yes
Default severity: Info
Introduced in version: 3.4.0

Cause
This rule raises a diagnostic when a call to an assertion produces an always-false
condition.

Rule description
Using Assert.Fail over an always-failing assertion call provides clearer intent and
better documentation for the code.

When you encounter an assertion that always fails (for example, Assert.IsTrue(false) ),
it might not be immediately obvious to someone reading the code why the assertion is
there or what condition it's trying to check. This can lead to confusion and wasted time
for developers who come across the code later on.

In contrast, using Assert.Fail allows you to provide a custom failure message, making
it clear why the assertion is failing and what specific condition or scenario it's
addressing. This message serves as documentation for the intent behind the assertion,
helping other developers understand the purpose of the assertion without needing to
dive deep into the code.
Overall, using Assert.Fail promotes clarity, documentation, and maintainability in your
codebase, making it a better choice than an always-failing assertion call.

How to fix violations


Ensure that calls to Assert.IsTrue , Assert.IsFalse , Assert.AreEqual ,
Assert.AreNotEqual or Assert.IsNotNull are not producing always-failing conditions.
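For example (a minimal sketch; the scenario is hypothetical):

C#

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FallbackTests
{
    [TestMethod]
    public void UnreachableBranch()
    {
        // Before: 'Assert.IsTrue(false);' leaves the intent unclear.
        // After: the failure documents itself.
        Assert.Fail("The fallback branch should never be reached in this scenario.");
    }
}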

When to suppress warnings


We do not recommend suppressing warnings from this rule.

MSTEST0029: Public method should be
test method
Article • 09/07/2024

Rule ID: MSTEST0029
Title: Public method should be test method
Category: Design
Fix is breaking or non-breaking: Non-breaking
Enabled by default: No
Default severity: Info
Introduced in version: 3.5.0
There is a code fix: Yes

Cause
A public method of a test class isn't marked as a test method.

Rule description
A public method of a class marked with [TestClass] should be a test method (marked
with [TestMethod] ). The rule ignores methods that are marked with the
[TestInitialize] or [TestCleanup] attributes.

How to fix violations


Ensure that the public method is a test method (marked with [TestMethod] ).
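For example (a minimal sketch; the helper method is hypothetical):

C#

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    // Before: 'public void CheckSum() { ... }' triggers the rule.
    // Fixed: either mark it as a test...
    [TestMethod]
    public void CheckSum() => Assert.AreEqual(4, 2 + 2);

    // ...or make the helper non-public.
    private static int Double(int x) => x * 2;
}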

When to suppress warnings


Do not suppress a warning from this rule. If you ignore this rule, the public method
won't be considered a test method.
MSTEST0036: Do not use shadowing
inside test class.
Article • 08/28/2024

Rule ID: MSTEST0036
Title: Do not use shadowing inside test class.
Category: Design
Fix is breaking or non-breaking: Non-breaking
Enabled by default: Yes
Default severity: Warning
Introduced in version: 3.6.0
There is a code fix: No

Cause
A member of a test class shadows a member inherited from a base test class.

Rule description
Shadowing test members could cause testing issues, such as a NullReferenceException
(NRE).

How to fix violations


Delete the shadowing member.
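For example (a minimal sketch; the base class is hypothetical):

C#

using Microsoft.VisualStudio.TestTools.UnitTesting;

public class BaseTest
{
    public TestContext TestContext { get; set; }
}

[TestClass]
public class MyTests : BaseTest
{
    // Violation: 'public new TestContext TestContext { get; set; }' would
    // shadow the base member, so MSTest may inject into the wrong property
    // and leave the one your test reads as null.

    [TestMethod]
    public void UsesContext() => Assert.IsNotNull(TestContext);
}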

When to suppress warnings


Don't suppress warnings from this rule, as shadowing could cause testing issues (such as
NRE).
MSTest performance rules
Article • 02/01/2024

Rules that support high-performance testing.

MSTEST0001 (UseParallelizeAttributeAnalyzer): By default, MSTest runs tests sequentially,
which can lead to severe performance limitations. It is recommended to enable the
assembly attribute [Parallelize] or, if the assembly is known to not be parallelizable,
to explicitly use the assembly-level attribute [DoNotParallelize] .

MSTEST0001: Explicitly enable or disable
tests parallelization
Article • 02/01/2024

Rule ID: MSTEST0001
Title: Explicitly enable or disable tests parallelization
Category: Performance
Fix is breaking or non-breaking: Non-breaking
Enabled by default: Yes
Default severity: Info
Introduced in version: 3.2.0

Cause
The assembly is not marked with [assembly: Parallelize] or [assembly:
DoNotParallelize] attribute.

Rule description
By default, MSTest runs tests within the same assembly sequentially, which can lead to
severe performance limitations. It is recommended to enable assembly attribute
[assembly: Parallelize] to run tests in parallel, or if the assembly is known to not be
parallelizable, to use explicitly the assembly level attribute [assembly: DoNotParallelize].

The default configuration of [assembly: Parallelize] is equivalent to [assembly:
Parallelize(Scope = ExecutionScope.ClassLevel)] , meaning that the parallelization will
be set at class level (not method level) and will use as many threads as possible
(depending on internal implementation).

How to fix violations

To fix a violation of this rule, add the [assembly: Parallelize] or [assembly:
DoNotParallelize] attribute. We recommend using [assembly: Parallelize(Scope =
ExecutionScope.MethodLevel)] to get the best parallelization.
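For example, the attribute is typically placed in an assembly-level file (a minimal sketch; the file name AssemblyInfo.cs is a convention, not a requirement):

C#

// AssemblyInfo.cs
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Run test methods in parallel, using the default number of workers.
[assembly: Parallelize(Scope = ExecutionScope.MethodLevel)]

// Or, if the tests share mutable state and can't run in parallel:
// [assembly: DoNotParallelize]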

When to suppress warnings


Do not suppress a warning from this rule. Many libraries can benefit from a massive
performance boost when enabling parallelization. When the test application is designed
in a way that prevents parallelization, having the attribute explicitly set helps new
developers to understand the limitations of the library.

MSTest usage rules
Article • 03/21/2024

Rules that support proper usage of MSTest.

MSTEST0002 (TestClassShouldBeValidAnalyzer): Test classes, classes marked with the [TestClass] attribute, should respect the following layout to be considered valid by MSTest:
- it should be public (or internal if the [assembly: DiscoverInternals] attribute is set)
- it should not be static
- it should not be generic.

MSTEST0003 (TestMethodShouldBeValidAnalyzer): Test methods, methods marked with the [TestMethod] attribute, should respect the following layout to be considered valid by MSTest:
- it should be public (or internal if the [assembly: DiscoverInternals] attribute is set)
- it should not be static
- it should not be generic
- it should not be abstract
- return type should be void or Task
- it should not be async void
- it should not be a special method (finalizer, operator...).

MSTEST0005 (TestContextShouldBeValidAnalyzer): The TestContext property should follow this layout to be valid:
- it should be a property
- it should be public (or internal if the [assembly: DiscoverInternals] attribute is set)
- it should not be static
- it should not be readonly.

MSTEST0007 (UseAttributeOnTestMethodAnalyzer): The following test attributes should only be applied on methods marked with the TestMethodAttribute attribute:
- [CssIteration]
- [CssProjectStructure]
- [Description]
- [ExpectedException]
- [Owner]
- [Priority]
- [TestProperty]
- [WorkItem]

MSTEST0008 (TestInitializeShouldBeValidAnalyzer): Methods marked with [TestInitialize] should follow this layout to be valid:
- it should be public
- it should not be static
- it should not be generic
- it should not be abstract
- it should not take any parameter
- return type should be void, Task, or ValueTask
- it should not be async void
- it should not be a special method (finalizer, operator...).

MSTEST0009 (TestCleanupShouldBeValidAnalyzer): Methods marked with [TestCleanup] should follow this layout to be valid:
- it should be public
- it should not be static
- it should not be generic
- it should not be abstract
- it should not take any parameter
- return type should be void, Task, or ValueTask
- it should not be async void
- it should not be a special method (finalizer, operator...).

MSTEST0010 (ClassInitializeShouldBeValidAnalyzer): Methods marked with [ClassInitialize] should follow this layout to be valid:
- it should be public
- it should be static
- it should not be generic
- it should take one parameter of type TestContext
- return type should be void, Task, or ValueTask
- it should not be async void
- it should not be a special method (finalizer, operator...).

MSTEST0011 (ClassCleanupShouldBeValidAnalyzer): Methods marked with [ClassCleanup] should follow this layout to be valid:
- it should be public
- it should be static
- it should not be generic
- it should not take any parameter
- return type should be void, Task, or ValueTask
- it should not be async void
- it should not be a special method (finalizer, operator...).

MSTEST0012 (AssemblyInitializeShouldBeValidAnalyzer): Methods marked with [AssemblyInitialize] should follow this layout to be valid:
- it should be public
- it should be static
- it should not be generic
- it should take one parameter of type TestContext
- return type should be void, Task, or ValueTask
- it should not be async void
- it should not be a special method (finalizer, operator...).

MSTEST0013 (AssemblyCleanupShouldBeValidAnalyzer): Methods marked with [AssemblyCleanup] should follow this layout to be valid:
- it should be public
- it should be static
- it should not be generic
- it should not take any parameter
- return type should be void, Task, or ValueTask
- it should not be async void
- it should not be a special method (finalizer, operator...).

MSTEST0014 (DataRowShouldBeValidAnalyzer): [DataRow] instances should have the following layout to be valid:
- they should only be set on a test method
- argument count should match method parameters count
- argument type should match method argument type

MSTEST0017 (AssertionArgsShouldBePassedInCorrectOrder): Assertion arguments should be passed in the correct order.

MSTEST0023 (DoNotNegateBooleanAssertionAnalyzer): Do not negate boolean assertions.

MSTEST0024 (DoNotStoreStaticTestContextAnalyzer): Do not store TestContext in a static member.

MSTEST0002: Test classes should have
valid layout
Article • 08/13/2024


Property Value

Rule ID MSTEST0002

Title Test classes should have valid layout

Category Usage

Fix is breaking or non-breaking Breaking

Enabled by default Yes

Default severity Warning

Introduced in version 3.2.0

There is a code fix No

Cause
A test class does not follow one or more points of the required test class layout.

Rule description
Test classes (classes marked with the [TestClass] attribute) should follow the given
layout to be considered valid by MSTest:

they should be public (or internal if the [assembly: DiscoverInternals]


assembly attribute is set)
they should not be static
they should not be generic

How to fix violations


Ensure that the class matches the required layout described above.
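A minimal sketch of a class that satisfies the layout, with some counterexamples (class names are hypothetical):

C#

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Valid: public, non-static, non-generic.
[TestClass]
public class CalculatorTests
{
}

// Examples flagged by MSTEST0002:
// [TestClass] internal class HiddenTests { }        // not public (without [assembly: DiscoverInternals])
// [TestClass] public static class StaticTests { }   // static
// [TestClass] public class GenericTests<T> { }      // generic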
When to suppress warnings
Do not suppress a warning from this rule. Ignoring this rule will result in tests being
ignored, because MSTest will not consider this class to be a test class.

MSTEST0003: Test methods should have
valid layout
Article • 08/13/2024


Property Value

Rule ID MSTEST0003

Title Test methods should have valid layout

Category Usage

Fix is breaking or non-breaking Breaking

Enabled by default Yes

Default severity Warning

Introduced in version 3.2.0

There is a code fix Yes

Cause
A test method does not follow one or more points of the required test method layout.

Rule description
Test methods (methods marked with the [TestMethod] attribute) should follow the given
layout to be considered valid by MSTest:

they should be public (or internal if [assembly: DiscoverInternals] attribute is


set)
they should not be static
they should not be generic
they should not be abstract
they should return void or Task
they should not be async void
they should not be a special method (constructor, finalizer, operator...)
the type declaring this method should be public

How to fix violations


Ensure that the test method matches the required layout described above.
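A minimal sketch of test methods that satisfy the layout (method names are hypothetical):

C#

using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    // Valid: public, non-static, non-generic, returns void.
    [TestMethod]
    public void Add_ReturnsSum()
    {
        Assert.AreEqual(4, 2 + 2);
    }

    // Valid: an async test must return Task, never async void.
    [TestMethod]
    public async Task AddAsync_ReturnsSum()
    {
        await Task.CompletedTask;
        Assert.AreEqual(4, 2 + 2);
    }

    // Flagged by MSTEST0003:
    // [TestMethod] public async void Add_Broken() { }   // async void
    // [TestMethod] internal void Add_Hidden() { }       // not public
}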

When to suppress warnings


Do not suppress a warning from this rule. Ignoring this rule will result in tests being
ignored, because MSTest will not consider this method to be a test method.

MSTEST0005: Test context property
should have valid layout
Article • 09/07/2024


Property Value

Rule ID MSTEST0005

Title Test context property should have valid layout

Category Usage

Fix is breaking or non-breaking Non-breaking

Enabled by default Yes

Default severity Warning

Introduced in version 3.2.0

There is a code fix Yes

Cause
A test context property does not follow one or more points of the required test context layout.

Rule description
TestContext properties should follow the given layout to be considered valid by MSTest:

they should be properties and not fields


they should be named TestContext (case insensitive)
they should be public (or internal if the [assembly: DiscoverInternals]
assembly attribute is set)
they should not be static
they should not be readonly

How to fix violations


Ensure that the TestContext property matches the required layout described above.
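A minimal sketch of a conforming TestContext property:

C#

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    // Valid: a public, non-static, writable property named TestContext.
    public TestContext TestContext { get; set; }

    // Alternatives flagged by MSTEST0005:
    // public TestContext TestContext;                       // a field, not a property
    // public static TestContext TestContext { get; set; }  // static
}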

When to suppress warnings


Do not suppress a warning from this rule. Ignoring this rule will result in the
TestContext not being injected by MSTest, resulting in a NullReferenceException or an
inconsistent state when using the property.


MSTEST0007: Use test attributes only on
test methods
Article • 08/13/2024


Property Value

Rule ID MSTEST0007

Title Use test attributes only on test methods

Category Usage

Fix is breaking or non-breaking Non-breaking

Enabled by default Yes

Default severity Info

Introduced in version 3.3.0

There is a code fix No

Cause
A method that's not marked with TestMethodAttribute has one or more test attributes
applied to it.

Rule description
The following test attributes should only be applied on methods marked with the
TestMethodAttribute attribute:

CssIterationAttribute
CssProjectStructureAttribute
DescriptionAttribute
ExpectedExceptionAttribute
OwnerAttribute
PriorityAttribute
TestPropertyAttribute
WorkItemAttribute
How to fix violations
To fix a violation of this rule, either convert the method on which you applied the test
attributes to a test method by setting the [TestMethod] attribute or remove the test
attributes altogether.
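As a sketch (method and owner names are hypothetical):

C#

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderTests
{
    // Flagged by MSTEST0007: [Priority] has no effect without [TestMethod].
    // [Priority(1)]
    // public void Helper() { }

    // Fixed: the attributes are applied to an actual test method.
    [TestMethod]
    [Priority(1)]
    [Owner("qa-team")]
    public void PlaceOrder_Succeeds() { }
}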

When to suppress warnings


Do not suppress a warning from this rule. If you ignore this rule, your attributes will be
ignored since they are designed for use only in a test context.

MSTEST0008: TestInitialize method
should have valid layout
Article • 08/13/2024


Property Value

Rule ID MSTEST0008

Title TestInitialize method should have valid layout

Category Usage

Fix is breaking or non-breaking Non-breaking

Enabled by default Yes

Default severity Warning

Introduced in version 3.3.0

There is a code fix Yes

Cause
A method marked with [TestInitialize] does not have a valid layout.

Rule description
Methods marked with [TestInitialize] should follow this layout to be valid:

it should be public
it should not be abstract
it should not be async void
it should not be static
it should not be a special method (finalizer, operator...).
it should not be generic
it should not take any parameter
return type should be void , Task or ValueTask

The type declaring these methods should also respect the following rules:
The type should be a class .
The class should be public or internal (if the test project is using the
[DiscoverInternals] attribute).

The class shouldn't be static .


If the class is sealed , it should be marked with [TestClass] (or a derived
attribute).

How to fix violations


Ensure that the method matches the layout described above.
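A minimal sketch of a conforming initializer (names are hypothetical):

C#

using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DatabaseTests
{
    // Valid: public, non-static, parameterless, returning void.
    [TestInitialize]
    public void Setup()
    {
        // Runs before each test in this class.
    }

    // An async initializer is also valid when it returns Task (never async void):
    // [TestInitialize]
    // public async Task SetupAsync() { /* ... */ }
}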

When to suppress warnings


Do not suppress a warning from this rule. If you ignore this rule, flagged instances will
be either skipped or result in runtime error.

MSTEST0009: TestCleanup method
should have valid layout
Article • 08/13/2024


Property Value

Rule ID MSTEST0009

Title TestCleanup method should have valid layout

Category Usage

Fix is breaking or non-breaking Non-breaking

Enabled by default Yes

Default severity Warning

Introduced in version 3.3.0

There is a code fix Yes

Cause
A method marked with [TestCleanup] does not have a valid layout.

Rule description
Methods marked with [TestCleanup] should follow this layout to be valid:

it should be public
it should not be abstract
it should not be async void
it should not be static
it should not be a special method (finalizer, operator...).
it should not be generic
it should not take any parameter
return type should be void , Task or ValueTask

The type declaring these methods should also respect the following rules:
The type should be a class .
The class should be public or internal (if the test project is using the
[DiscoverInternals] attribute).

The class shouldn't be static .


If the class is sealed , it should be marked with [TestClass] (or a derived
attribute).

How to fix violations


Ensure that the method matches the layout described above.

When to suppress warnings


Do not suppress a warning from this rule. If you ignore this rule, flagged instances will
be either skipped or result in runtime error.

MSTEST0010: ClassInitialize method
should have valid layout
Article • 08/13/2024


Property Value

Rule ID MSTEST0010

Title ClassInitialize method should have valid layout

Category Usage

Fix is breaking or non-breaking Non-breaking

Enabled by default Yes

Default severity Warning

Introduced in version 3.3.0

There is a code fix Yes

Cause
A method marked with [ClassInitialize] does not have a valid layout.

Rule description
Methods marked with [ClassInitialize] should follow this layout to be valid:

it can't be declared on a generic class unless the InheritanceBehavior mode is set
it should be public
it should be static
it should not be async void
it should not be a special method (finalizer, operator...).
it should not be generic
it should take one parameter of type TestContext
return type should be void , Task or ValueTask
the InheritanceBehavior.BeforeEachDerivedClass attribute parameter should be specified if the class is abstract
the InheritanceBehavior.BeforeEachDerivedClass attribute parameter should not be specified if the class is sealed

The type declaring these methods should also respect the following rules:

The type should be a class .


The class should be public or internal (if the test project is using the
[DiscoverInternals] attribute).

The class shouldn't be static .


If the class is sealed , it should be marked with [TestClass] (or a derived
attribute).

The class should not be generic.

How to fix violations


Ensure that the method matches the layout described above.
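A minimal sketch of a conforming class initializer (names are hypothetical):

C#

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DatabaseTests
{
    // Valid: public, static, taking a single TestContext parameter.
    [ClassInitialize]
    public static void ClassSetup(TestContext context)
    {
        // Runs once, before the first test in this class.
    }
}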

When to suppress warnings


Do not suppress a warning from this rule. If you ignore this rule, flagged instances will
be either skipped or result in runtime error.

MSTEST0011: ClassCleanup method
should have valid layout
Article • 08/13/2024


Property Value

Rule ID MSTEST0011

Title ClassCleanup method should have valid layout

Category Usage

Fix is breaking or non-breaking Non-breaking

Enabled by default Yes

Default severity Warning

Introduced in version 3.3.0

There is a code fix Yes

Cause
A method marked with [ClassCleanup] does not have a valid layout.

Rule description
Methods marked with [ClassCleanup] should follow this layout to be valid:

it can't be declared on a generic class unless the InheritanceBehavior mode is set
it should be public
it should be static
it should not be async void
it should not be a special method (finalizer, operator...).
it should not be generic
it should not take any parameter
return type should be void , Task or ValueTask
the InheritanceBehavior.BeforeEachDerivedClass attribute parameter should be specified if the class is abstract
the InheritanceBehavior.BeforeEachDerivedClass attribute parameter should not be specified if the class is sealed

The type declaring these methods should also respect the following rules:

The type should be a class .


The class should be public or internal (if the test project is using the
[DiscoverInternals] attribute).

The class shouldn't be static .


If the class is sealed , it should be marked with [TestClass] (or a derived
attribute).

The class should not be generic.

How to fix violations


Ensure that the method matches the layout described above.

When to suppress warnings


Do not suppress a warning from this rule. If you ignore this rule, flagged instances will
be either skipped or result in runtime error.

MSTEST0012: AssemblyInitialize method
should have valid layout
Article • 08/13/2024


Property Value

Rule ID MSTEST0012

Title AssemblyInitialize method should have valid layout

Category Usage

Fix is breaking or non-breaking Non-breaking

Enabled by default Yes

Default severity Warning

Introduced in version 3.3.0

There is a code fix Yes

Cause
A method marked with [AssemblyInitialize] does not have a valid layout.

Rule description
Methods marked with [AssemblyInitialize] should follow this layout to be valid:

it can't be declared on a generic class


it should be public
it should be static
it should not be async void
it should not be a special method (finalizer, operator...).
it should not be generic
it should take one parameter of type TestContext
return type should be void , Task or ValueTask
The type declaring these methods should also respect the following rules:

The type should be a class.


The class should be public or internal (if the test project is using the
[DiscoverInternals] attribute).
The class shouldn't be static.
The class should be marked with [TestClass] (or a derived attribute)
The class should not be generic.

How to fix violations


Ensure that the method matches the layout described above.

When to suppress warnings


Do not suppress a warning from this rule. If you ignore this rule, flagged instances will
be either skipped or result in runtime error.

MSTEST0013: AssemblyCleanup method
should have valid layout
Article • 08/13/2024


Property Value

Rule ID MSTEST0013

Title AssemblyCleanup method should have valid layout

Category Usage

Fix is breaking or non-breaking Non-breaking

Enabled by default Yes

Default severity Warning

Introduced in version 3.3.0

There is a code fix Yes

Cause
A method marked with [AssemblyCleanup] does not have a valid layout.

Rule description
Methods marked with [AssemblyCleanup] should follow this layout to be valid:

it can't be declared on a generic class


it should be public
it should be static
it should not be async void
it should not be a special method (finalizer, operator...).
it should not be generic
it should not take any parameter
return type should be void , Task or ValueTask

The type declaring these methods should also respect the following rules:
The type should be a class.
The class should be public or internal (if the test project is using the
[DiscoverInternals] attribute).
The class shouldn't be static.
The class should be marked with [TestClass] (or a derived attribute)
The class should not be generic.

How to fix violations


Ensure that the method matches the layout described above.

When to suppress warnings


Do not suppress a warning from this rule. If you ignore this rule, flagged instances will
be either skipped or result in runtime error.

MSTEST0014: DataRow should be valid
Article • 08/13/2024


Property Value

Rule ID MSTEST0014

Title DataRow should be valid

Category Usage

Fix is breaking or non-breaking Non-breaking

Enabled by default Yes

Default severity Warning

Introduced in version 3.3.0

There is a code fix No

Cause
An instance of [DataRow] does not follow one or more points of the required DataRow layout.

Rule description
[DataRow] instances should have the following layout to be valid:

they should only be set on a test method


argument count should match method parameters count
argument type should match method argument type

How to fix violations


Ensure that the DataRow instance matches the required layout described above.
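A minimal sketch of conforming [DataRow] usage (names are hypothetical):

C#

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MathTests
{
    // Valid: each row's argument count and types match the method parameters.
    [TestMethod]
    [DataRow(1, 2, 3)]
    [DataRow(-1, 1, 0)]
    public void Add_ReturnsSum(int a, int b, int expected)
    {
        Assert.AreEqual(expected, a + b);
    }

    // [DataRow(1, "two")] on the method above would be flagged (MSTEST0014),
    // because the argument count and types don't match the parameters.
}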

When to suppress warnings


Do not suppress a warning from this rule. If you ignore this rule, flagged instances will
be either skipped or result in runtime error.

MSTEST0017: Assertion arguments
should be passed in the correct order
Article • 03/21/2024


Property Value

Rule ID MSTEST0017

Title Assertion arguments should be passed in the correct order

Category Usage

Fix is breaking or non-breaking Non-breaking

Enabled by default Yes

Default severity Info

Introduced in version 3.4.0

Cause
This rule raises an issue when calls to Assert.AreEqual , Assert.AreNotEqual ,
Assert.AreSame or Assert.AreNotSame are following one or multiple of the patterns

below:

actual argument is a constant or literal value

actual argument variable starts with expected , _expected or Expected


expected or notExpected argument variable starts with actual

actual is not a local variable

Rule description
MSTest Assert.AreEqual , Assert.AreNotEqual , Assert.AreSame and Assert.AreNotSame
expect the first argument to be the expected/unexpected value and the second
argument to be the actual value.

Having the expected value and the actual value in the wrong order will not alter the
outcome of the test (succeeds/fails when it should), but the assertion failure will contain
misleading information.
How to fix violations
Ensure that the actual and expected / notExpected arguments are passed in the correct
order.
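As a sketch (Calculator.Add is a hypothetical helper):

C#

int actual = Calculator.Add(2, 3);

// Flagged: the actual value is passed where the expected value belongs,
// so a failure message would report the values the wrong way around.
Assert.AreEqual(actual, 5); // MSTEST0017

// Fixed: expected value first, actual value second.
Assert.AreEqual(5, actual);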

When to suppress warnings


Do not suppress a warning from this rule, as doing so would result in misleading output.

MSTEST0018: DynamicData should be
valid
Article • 08/13/2024


Property Value

Rule ID MSTEST0018

Title DynamicData should be valid

Category Usage

Fix is breaking or non-breaking Non-breaking

Enabled by default Yes

Default severity Warning

Introduced in version 3.6.0

Cause
A method marked with [DynamicData] does not have a valid layout.

Rule description
Methods marked with [DynamicData] should also be marked with [TestMethod] (or a
derived attribute).

The "data source" member referenced:

should exist on the specified type (current class if no type is specified)


should not have overloads
should be of the same kind (method or property) as the DataSourceType property
should be public
should be static
should not be generic
should be parameterless
should return IEnumerable<object[]> , IEnumerable<Tuple<T,...>> or
IEnumerable<ValueTuple<,...>>
The "display name" member referenced:

should exist on the specified type (current class if no type is specified)


should not have overloads
should be a method
should be public
should be static
should not be generic
should return string
should take exactly 2 parameters, the first being MethodInfo and the second being
object[]

Example:

C#

public static string GetDisplayName(MethodInfo methodInfo, object[] data)
{
    return string.Format("{0} ({1})", methodInfo.Name, string.Join(",", data));
}

How to fix violations


Ensure that the attribute matches the conditions described above.

When to suppress warnings


Do not suppress a warning from this rule. If you ignore this rule, flagged instances will
be either skipped or result in runtime error.
MSTEST0023: Do not negate boolean
assertions
Article • 08/13/2024


Property Value

Rule ID MSTEST0023

Title Do not negate boolean assertions

Category Usage

Fix is breaking or non-breaking Non-breaking

Enabled by default Yes

Default severity Info

Introduced in version 3.4.0

There is a code fix No

Cause
This rule raises a diagnostic when a call to Assert.IsTrue or Assert.IsFalse contains a
negated argument.

Rule description
The MSTest assertion library contains complementary APIs that make it easier to test both
true and false cases. Use the right API for each case, as doing so improves
readability and provides better information in case of failure.

How to fix violations


When negating the argument in an Assert.IsTrue call, use Assert.IsFalse instead.
When negating the argument in an Assert.IsFalse call, use Assert.IsTrue instead.
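As a sketch (the list variable is hypothetical):

C#

bool isEmpty = list.Count == 0;

// Flagged: a negated argument hides the intent and the failure message.
Assert.IsTrue(!isEmpty); // MSTEST0023

// Fixed: use the opposite assertion instead.
Assert.IsFalse(isEmpty);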
When to suppress warnings
Do not suppress warnings from this rule.

MSTEST0024: Do not store TestContext
in a static member
Article • 08/13/2024


Property Value

Rule ID MSTEST0024

Title Do not store TestContext in a static member

Category Usage

Fix is breaking or non-breaking Non-breaking

Enabled by default Yes

Default severity Info

Introduced in version 3.4.0

There is a code fix No

Cause
This rule raises a diagnostic when a TestContext parameter is assigned to a static member.

Rule description
The TestContext parameter passed to each initialize method ( [AssemblyInitialize] or
[ClassInitialize] ) is specific to the current context and is not updated on each test
execution. Storing this TestContext object for reuse will, most of the time, lead to issues.

How to fix violations


Do not store the [AssemblyInitialize] or [ClassInitialize] TestContext parameter.
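A sketch of the pattern the rule flags, and the supported alternative:

C#

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FixtureTests
{
    // Flagged by MSTEST0024: the context passed here is not updated per test.
    // private static TestContext s_testContext;
    //
    // [ClassInitialize]
    // public static void Init(TestContext context) => s_testContext = context;

    // Preferred: use the instance property, which MSTest keeps current for each test.
    public TestContext TestContext { get; set; }
}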

When to suppress warnings


You can suppress warnings from this rule if you are sure that the behavior matches
what you intend.

MSTEST0026: Avoid conditional access
in assertions
Article • 08/13/2024


Property Value

Rule ID MSTEST0026

Title Avoid conditional access in assertions

Category Usage

Fix is breaking or non-breaking Non-breaking

Enabled by default Yes

Default severity Info

Introduced in version 3.5.0

There is a code fix No

Cause
This rule raises a diagnostic when an argument containing a null-conditional operator
( ?. or ?[] ) is passed to the assertion methods below:

Assert.IsTrue
Assert.IsFalse

Assert.AreEqual
Assert.AreNotEqual

Assert.AreSame

Assert.AreNotSame
CollectionAssert.AreEqual

CollectionAssert.AreNotEqual
CollectionAssert.AreEquivalent

CollectionAssert.AreNotEquivalent

CollectionAssert.Contains
CollectionAssert.DoesNotContain

CollectionAssert.AllItemsAreNotNull
CollectionAssert.AllItemsAreUnique
CollectionAssert.AllItemsAreInstancesOfType

CollectionAssert.IsSubsetOf
CollectionAssert.IsNotSubsetOf

StringAssert.Contains

StringAssert.StartsWith
StringAssert.EndsWith

StringAssert.Matches
StringAssert.DoesNotMatch

Rule description
The purpose of assertions in unit tests is to verify that certain conditions are met. When
a conditional access operator is used in an assertion, it introduces an additional
condition that may or may not be met, depending on the state of the object being
accessed. This can lead to inconsistent test results and make tests less clear.

How to fix violations


Ensure that arguments passed to the assertion methods do not contain ?. or ?[] .
Instead, perform null checks before making the assertion.

C#

Company? company = GetCompany();


Assert.AreEqual(company?.Name, "Contoso"); // MSTEST0026
StringAssert.Contains(company?.Address, "Brazil"); // MSTEST0026

// Fixed code
Assert.IsNotNull(company);
Assert.AreEqual(company.Name, "Contoso");
StringAssert.Contains(company.Address, "Brazil");

When to suppress warnings


We do not recommend suppressing warnings from this rule.

MSTEST0030: Type containing
[TestMethod] should be marked with
[TestClass]
Article • 08/13/2024


Property Value

Rule ID MSTEST0030

Title Type containing [TestMethod] should be marked with


[TestClass]

Category Usage

Fix is breaking or non- Non-breaking


breaking

Enabled by default Yes

Default severity Info

Introduced in version 3.5.0

There is a code fix No

Cause
Type containing [TestMethod] should be marked with [TestClass] , otherwise the test
method will be silently ignored.

Rule description
MSTest considers test methods only in the context of a test class container (a class
marked with [TestClass] or derived attribute) which could lead to some tests being
silently ignored. If your class is supposed to represent common test behavior to be
executed by children classes, it's recommended to mark the type as abstract to clarify
the intent for other developers reading the code.

How to fix violations


A non-abstract class that contains test methods should be marked with [TestClass] .
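As a sketch (class and method names are hypothetical):

C#

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Flagged: without [TestClass], the test method below is silently ignored.
// public class OrderTests
// {
//     [TestMethod]
//     public void PlaceOrder_Succeeds() { }
// }

// Fixed: mark the container class.
[TestClass]
public class OrderTests
{
    [TestMethod]
    public void PlaceOrder_Succeeds() { }
}

// Alternatively, if the class only holds shared test behavior for subclasses,
// mark it abstract to make that intent explicit.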

When to suppress warnings


It's safe to suppress the diagnostic if you are sure that your class is being inherited and
that the tests declared on this class should only be run in the context of subclasses.
Nonetheless, we recommend marking the class as abstract.

MSTEST0031: System.ComponentModel.DescriptionAttribute has no effect on test methods
Article • 08/13/2024


Property Value

Rule ID MSTEST0031

Title System.ComponentModel.DescriptionAttribute has no effect on test


methods.

Category Usage

Fix is breaking or non- Non-breaking


breaking

Enabled by default Yes

Default severity Info

Introduced in version 3.5.0

There is a code fix No

Cause
'System.ComponentModel.DescriptionAttribute' has no effect in the context of tests.

Rule description
'System.ComponentModel.DescriptionAttribute' has no effect in the context of tests, so
the user likely intended to use
'Microsoft.VisualStudio.TestTools.UnitTesting.DescriptionAttribute' instead.

How to fix violations


Remove System.ComponentModel.DescriptionAttribute or replace it with
Microsoft.VisualStudio.TestTools.UnitTesting.DescriptionAttribute .
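A minimal sketch of the fix (the class and method names here are hypothetical):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    // Violates MSTEST0031: this attribute has no effect on a test method.
    // [System.ComponentModel.Description("Adds two numbers")]

    // Fixed: use the MSTest attribute instead.
    [TestMethod]
    [Description("Adds two numbers")] // Microsoft.VisualStudio.TestTools.UnitTesting.DescriptionAttribute
    public void Add_ReturnsSum() { /* ... */ }
}
```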
When to suppress warnings
We don't recommend suppressing this diagnostic, because
System.ComponentModel.DescriptionAttribute has no effect in the context of tests.

MSTEST0032: Review or remove the assertion as its condition is known to be always true.
Article • 08/13/2024

Property                         Value
Rule ID                          MSTEST0032
Title                            Review or remove the assertion as its condition is known to be always true.
Category                         Usage
Fix is breaking or non-breaking  Non-breaking
Enabled by default               Yes
Default severity                 Info
Introduced in version            3.5.0
There is a code fix              No

Cause
This rule raises a diagnostic when a call to an assertion produces an always-true
condition.

Rule description
When you encounter an assertion that always passes (for example,
Assert.IsTrue(true) ), it's not obvious to someone reading the code why the assertion
is there or what condition it's trying to check. This can lead to confusion and wasted
time for developers who come across the code later on.

How to fix violations


Ensure that calls to Assert.IsTrue , Assert.IsFalse , Assert.AreEqual ,
Assert.AreNotEqual , Assert.IsNull or Assert.IsNotNull aren't producing always-true

conditions.
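A minimal sketch showing an always-true assertion and a meaningful replacement (the Order type is hypothetical, used only for illustration):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderTests
{
    [TestMethod]
    public void Order_IsCreated()
    {
        var order = new Order();

        // Violates MSTEST0032: the condition is always true, so the
        // assertion checks nothing.
        // Assert.IsTrue(true);

        // Better: assert a condition that can actually fail.
        Assert.IsNotNull(order);
    }
}

// Hypothetical type used only for illustration.
public class Order { }
```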

When to suppress warnings


It's not recommended to suppress warnings from this rule.

MSTEST0034: Use ClassCleanupBehavior.EndOfClass with the [ClassCleanup].
Article • 08/13/2024

Property                         Value
Rule ID                          MSTEST0034
Title                            Use ClassCleanupBehavior.EndOfClass with the [ClassCleanup].
Category                         Usage
Fix is breaking or non-breaking  Non-breaking
Enabled by default               Yes
Default severity                 Info
Introduced in version            3.6.0
There is a code fix              No

Cause
This rule raises a diagnostic when ClassCleanupBehavior.EndOfClass isn't set with the
[ClassCleanup] .

Rule description
Without ClassCleanupBehavior.EndOfClass , the [ClassCleanup] method runs by default
at the end of the assembly, not at the end of the class.

How to fix violations


Use ClassCleanupBehavior.EndOfClass with the [ClassCleanup] .
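A minimal sketch of the fix (the class name is hypothetical):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DatabaseTests
{
    // Without an explicit behavior, this cleanup runs at the end of the
    // assembly rather than at the end of this class:
    // [ClassCleanup]

    // Fixed: run the cleanup once the last test of this class finishes.
    [ClassCleanup(ClassCleanupBehavior.EndOfClass)]
    public static void Cleanup()
    {
        // Release resources shared by the tests in this class.
    }
}
```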

When to suppress warnings


It's not recommended to suppress warnings from this rule, as you can use
[AssemblyCleanup] instead.

MSTEST0035: [DeploymentItem] can be specified only on test class or test method.
Article • 09/12/2024

Property                         Value
Rule ID                          MSTEST0035
Title                            [DeploymentItem] can be specified only on test class or test method.
Category                         Usage
Fix is breaking or non-breaking  Non-breaking
Enabled by default               Yes
Default severity                 Info
Introduced in version            3.6.0
There is a code fix              No

Cause
This rule raises a diagnostic when [DeploymentItem] is specified on something other
than a test class or a test method.

Rule description
When [DeploymentItem] is applied to anything other than a test class or a test method,
it's ignored.

How to fix violations


Ensure the attribute [DeploymentItem] is specified on a test class or a test method,
otherwise remove the attribute.
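A minimal sketch (the class, method, and file names are hypothetical):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FileTests
{
    // Violates MSTEST0035: the attribute is ignored here because this
    // helper isn't a test method.
    // [DeploymentItem("testdata.json")]
    // private void Helper() { }

    // Fixed: apply [DeploymentItem] to the test method (or the test class).
    [TestMethod]
    [DeploymentItem("testdata.json")]
    public void ReadsDeployedFile() { /* ... */ }
}
```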
When to suppress warnings
It's not recommended to suppress warnings from this rule as the [DeploymentItem] will
be ignored.
Microsoft.Testing.Platform overview
Article • 09/07/2024

Microsoft.Testing.Platform is a lightweight and portable alternative to VSTest for


running tests in all contexts, including continuous integration (CI) pipelines, CLI, Visual
Studio Test Explorer, and VS Code Test Explorer. The Microsoft.Testing.Platform is
embedded directly in your test projects, and there are no other app dependencies, such as
vstest.console or dotnet test , needed to run your tests.

Microsoft.Testing.Platform is open source. You can find Microsoft.Testing.Platform

code in microsoft/testfx GitHub repository.

Supported test frameworks


MSTest. In MSTest, the support of Microsoft.Testing.Platform is done via MSTest
runner.
NUnit. In NUnit, the support of Microsoft.Testing.Platform is done via NUnit
runner.
xUnit: work in progress, for more information, see Microsoft Testing Platform for
xUnit .
TUnit: entirely constructed on top of the Microsoft.Testing.Platform , for more
information, see TUnit documentation

Run and debug tests


Microsoft.Testing.Platform test projects are built as executables that can be run (or

debugged) directly. There's no extra test running console or command. The app exits
with a nonzero exit code if there's an error, as typical with most executables. For more
information on the known exit codes, see Microsoft.Testing.Platform exit codes.

) Important

By default, Microsoft.Testing.Platform collects telemetry. For more information


and options on opting out, see Microsoft.Testing.Platform telemetry.

.NET CLI
Publishing the test project using dotnet publish and running the app directly is
another way to run your tests, for example, executing ./Contoso.MyTests.exe .
In some scenarios it's also viable to use dotnet build to produce the executable,
but there can be edge cases to consider, such as Native AOT.

Use dotnet run


The dotnet run command can be used to build and run your test project. This is the
easiest, although sometimes slowest, way to run your tests. Using dotnet run is
practical when you're editing and running tests locally, because it ensures that the
test project is rebuilt when needed. dotnet run will also automatically find the
project in the current folder.

.NET CLI

dotnet run --project Contoso.MyTests

For more information on dotnet run , see dotnet run.

Use dotnet exec


The dotnet exec or dotnet command is used to execute (or run) an already built
test project; this is an alternative to running the application directly. dotnet exec
requires the path to the built test project dll.

.NET CLI

dotnet exec Contoso.MyTests.dll

or

.NET CLI

dotnet Contoso.MyTests.dll

7 Note

Providing the path to the test project executable (*.exe) results in an error:

Output
Error:
An assembly specified in the application dependencies manifest
(Contoso.MyTests.deps.json) has already been found but with a
different
file extension:
package: 'Contoso.MyTests', version: '1.0.0'
path: 'Contoso.MyTests.dll'
previously found assembly:
'S:\t\Contoso.MyTests\bin\Debug\net8.0\Contoso.MyTests.exe'

For more information on dotnet exec , see dotnet exec.

Use dotnet test


Microsoft.Testing.Platform offers a compatibility layer with vstest.console.exe
and dotnet test , ensuring you can run your tests as before while enabling new
execution scenarios.

.NET CLI

dotnet test Contoso.MyTests.dll

Options
The following list describes only the platform options. To see the options specific to
each extension, refer to the extension's documentation page or use the --help
option.

--diagnostic

Enables the diagnostic logging. The default log level is Trace . The file is written in the
output directory with the following name format, log_[MMddHHssfff].diag .

--diagnostic-filelogger-synchronouswrite

Forces the built-in file logger to synchronously write logs. Useful for scenarios where
you don't want to lose any log entries (if the process crashes). This does slow down the
test execution.

--diagnostic-output-directory
The output directory of the diagnostic logging, if not specified the file is generated in
the default TestResults directory.

--diagnostic-output-fileprefix

The prefix for the log file name. Defaults to "log_" .

--diagnostic-verbosity

Defines the verbosity level when the --diagnostic switch is used. The available values
are Trace , Debug , Information , Warning , Error , or Critical .

--help

Prints out a description of how to use the command.

--ignore-exit-code

Allows some non-zero exit codes to be ignored, and instead returned as 0 . For more
information, see Ignore specific exit codes.

--info

Displays advanced information about the .NET Test Application such as:

The platform.
The environment.
Each registered command line provider, such as its name , version , description ,
and options .
Each registered tool, such as its command , name , version , description , and all
command line providers.

This feature is used to understand extensions that would be registering the same
command line option or the changes in available options between multiple versions of
an extension (or the platform).

--list-tests

List available tests. Tests will not be executed.

--minimum-expected-tests

Specifies the minimum number of tests that are expected to run. By default, at least one
test is expected to run.

--results-directory
The directory where the test results are going to be placed. If the specified directory
doesn't exist, it's created. The default is TestResults in the directory that contains the
test application.

MSBuild integration
The NuGet package Microsoft.Testing.Platform.MSBuild provides various integrations
for Microsoft.Testing.Platform with MSBuild:

Support for dotnet test . For more information, see dotnet test integration.
Support for ProjectCapability required by Visual Studio and Visual Studio Code
Test Explorers.
Automatic generation of the entry point ( Main method).
Automatic generation of the configuration file.

7 Note

This integration works in a transitive way (a project that references another project
referencing this package will behave as if it references the package) and can be
disabled through the IsTestingPlatformApplication MSBuild property.
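For example, a non-test project that references a test project can opt out of the integration with the property mentioned in the note; a minimal sketch of the referencing project's .csproj:

```xml
<PropertyGroup>
  <!-- Don't treat this project as a testing platform application,
       even though it references a test project. -->
  <IsTestingPlatformApplication>false</IsTestingPlatformApplication>
</PropertyGroup>
```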

See also
Microsoft.Testing.Platform and VSTest comparison
Microsoft.Testing.Platform extensions
Microsoft.Testing.Platform telemetry
Microsoft.Testing.Platform exit codes
Microsoft.Testing.Platform FAQ
Article • 09/10/2024

This article contains answers to commonly asked questions about


Microsoft.Testing.Platform .

error CS8892: Method 'TestingPlatformEntryPoint.Main(string[])' will not be used as an
entry point because a synchronous entry point 'Program.Main(string[])' was found
Manually defining an entry point ( Main ) in a test project or referencing a test project
from an application that already has an entry point results in a conflict with the entry
point generated by Microsoft.Testing.Platform . To avoid this issue, take one of these
steps:

Remove your manually defined entry point, typically the Main method in Program.cs,
and let the testing platform generate one for you.

Disable the generation of the entry point by setting the


<GenerateTestingPlatformEntryPoint>false</GenerateTestingPlatformEntryPoint>

MSBuild property.

Completely disable the transitive dependency to


Microsoft.Testing.Platform.MSBuild by setting the

<IsTestingPlatformApplication>false</IsTestingPlatformApplication> MSBuild

property in the project that references a test project. This is needed when you
reference a test project from a non-test project, for example, a console app that
references a test application.
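The two MSBuild properties described above are set in the project file; a minimal sketch:

```xml
<PropertyGroup>
  <!-- In the test project: keep your own Main method and skip the
       generated entry point. -->
  <GenerateTestingPlatformEntryPoint>false</GenerateTestingPlatformEntryPoint>

  <!-- Or, in a non-test project that references a test project: disable
       the transitive Microsoft.Testing.Platform.MSBuild integration. -->
  <IsTestingPlatformApplication>false</IsTestingPlatformApplication>
</PropertyGroup>
```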
Microsoft.Testing.Platform and VSTest
comparison
Article • 03/19/2024

Microsoft.Testing.Platform is a lightweight and portable alternative to VSTest for


running tests in command line, in continuous integration (CI) pipelines, in Visual Studio
Test Explorer, and in Visual Studio Code. In this article, you learn the key differences
between Microsoft.Testing.Platform and VSTest.

Differences in test execution


Tests are executed in different ways depending on the runner.

Execute VSTest tests


VSTest ships with Visual Studio, the .NET SDK, and as a standalone tool in the
Microsoft.TestPlatform NuGet package. VSTest uses a runner executable to run tests,
called vstest.console.exe , which can be used directly or through dotnet test .

Execute Microsoft.Testing.Platform tests


Microsoft.Testing.Platform is embedded directly into your test project and doesn't ship
any extra executables. When you run your project executable, your tests run. For more
information on running Microsoft.Testing.Platform tests, see Microsoft.Testing.Platform
overview: Run and debug tests.

Namespaces and NuGet packages


To familiarize yourself with Microsoft.Testing.Platform and VSTest, it's helpful to
understand the namespaces and NuGet packages that are used by each.

VSTest namespaces
VSTest is a collection of testing tools that are also known as the Test Platform. The
VSTest source code is open-source and available in the microsoft/vstest GitHub
repository. The code uses the Microsoft.TestPlatform.* namespace.
VSTest is extensible and common types are placed in
Microsoft.TestPlatform.ObjectModel NuGet package.

Microsoft.Testing.Platform namespaces
Microsoft.Testing.Platform is based on Microsoft.Testing.Platform NuGet package and
other libraries in the Microsoft.Testing.* namespace. Like VSTest, the
Microsoft.Testing.Platform is open-source and has a microsoft/testfx GitHub
repository.

Communication protocol

7 Note

The Visual Studio Test Explorer supports the Microsoft.Testing.Platform protocol


from version 17.10 onward. If you run/debug your tests using earlier versions of
Visual Studio, Test Explorer will use vstest.console.exe and the old protocol to run
these tests.

Microsoft.Testing.Platform uses a JSON-RPC based protocol to communicate between


Visual Studio and the test runner process. The protocol is documented in the MSTest
GitHub repository .

VSTest also uses a JSON based communication protocol, but it's not JSON-RPC based.

Disabling the new protocol


To disable the use of the new protocol in Test Explorer, you can edit the csproj and
remove the TestingPlatformServer capability.

XML

<ItemGroup>
<ProjectCapability Remove="TestingPlatformServer" />
</ItemGroup>

Executables
VSTest ships multiple executables, notably vstest.console.exe , testhost.exe , and
datacollector.exe . However, Microsoft.Testing.Platform is embedded directly into your
test project and doesn't ship any other executables. The executable your test project
compiles to is used to host all the testing tools and carry out all the tasks needed to
run tests.

Microsoft.Testing.Platform
configuration settings
Article • 08/16/2024

Microsoft.Testing.Platform supports the use of configuration files and environment


variables to configure the behavior of the test platform. This article describes the
configuration settings that you can use to configure the test platform.

testconfig.json
The test platform uses a configuration file named [appname].testconfig.json to configure
the behavior of the test platform. The testconfig.json file is a JSON file that contains
configuration settings for the test platform.

The testconfig.json file has the following structure:

JSON

{
"platformOptions": {
"config-property-name1": "config-value1",
"config-property-name2": "config-value2"
}
}

The platform automatically detects and loads the [appname].testconfig.json file located
in the output directory of the test project (next to the executable).

When using Microsoft.Testing.Platform.MSBuild , you can simply create a
testconfig.json file that is automatically renamed to [appname].testconfig.json and
moved to the output directory of the test project.

7 Note

The [appname].testconfig.json file will get overwritten on subsequent builds.

Environment variables
Environment variables can be used to supply some runtime configuration information.
7 Note

Environment variables take precedence over configuration settings in the


testconfig.json file.

Microsoft.Testing.Platform extensions
Article • 06/27/2024

Microsoft.Testing.Platform can be customized through extensions. These extensions are
either built-in or can be installed as NuGet packages. Extensions installed through
NuGet packages auto-register the extensions they contain, making them available
in test execution.

Each extension ships with its own licensing model (some less permissive); be sure to
refer to the license associated with the extensions you want to use.

Extensions
Code Coverage

Extensions designed to provide code coverage support.

Diagnostics

Extensions offering diagnostics and troubleshooting functionalities.

Hosting

Extensions affecting how the test execution is hosted.

Policy

Extensions allowing to define policies around the test execution.

Test Reports

Extensions allowing to produce test report files that contains information about the
execution and outcome of the tests.

VSTest Bridge

This extension provides a compatibility layer with VSTest allowing the test frameworks
depending on it to continue supporting running in VSTest mode ( vstest.console.exe ,
usual dotnet test , VSTest task on AzDo, Test Explorers of Visual Studio and Visual
Studio Code...).

Microsoft Fakes
This extension provides support to execute a test project that makes use of Microsoft
Fakes .

Code coverage extensions
Article • 04/17/2024

This article lists and explains all Microsoft Testing Platform extensions related to the
code coverage capability.

You can use the code coverage feature to determine what proportion of your project's
code is being tested by coded tests such as unit tests. To effectively guard against bugs,
your tests should exercise or cover a large proportion of your code.

Coverlet
There's currently no Coverlet extension, but you can use Coverlet .NET global tool .

Microsoft code coverage


Microsoft Code Coverage analysis is possible for both managed (CLR) and unmanaged
(native) code. Both static and dynamic instrumentation are supported. This extension is
shipped as part of Microsoft.Testing.Extensions.CodeCoverage NuGet package.

7 Note

Unmanaged (native) code coverage is disabled in the extension by default. Use


flags EnableStaticNativeInstrumentation and EnableDynamicNativeInstrumentation
to enable it if needed. For more information about unmanaged code coverage, see
Static and dynamic native instrumentation.

) Important

The package is shipped with Microsoft .NET library closed-source free to use
licensing model.

For more information about Microsoft code coverage, see its GitHub page .

Microsoft Code Coverage provides the following options:

Option                    Description
--coverage                Collect the code coverage using dotnet-coverage tool.
--coverage-output         Output file.
--coverage-output-format  Output file format. Supported values are: 'coverage', 'xml', and 'cobertura'.
--coverage-settings       XML code coverage settings.

For more information about the available options, see settings and samples .

Diagnostics extensions
Article • 04/11/2024

This article lists and explains all Microsoft Testing Platform extensions related to the
diagnostics capability.

Built-in options
The following platform options provide useful information for troubleshooting your test
apps:

--info
--diagnostic
--diagnostic-filelogger-synchronouswrite
--diagnostic-verbosity
--diagnostic-output-fileprefix
--diagnostic-output-directory

You can also enable the diagnostics logs using the environment variables:

Environment variable name                                 Description
TESTINGPLATFORM_DIAGNOSTIC                                If set to 1 , enables the diagnostic logging.
TESTINGPLATFORM_DIAGNOSTIC_VERBOSITY                      Defines the verbosity level. The available values are Trace , Debug , Information , Warning , Error , or Critical .
TESTINGPLATFORM_DIAGNOSTIC_OUTPUT_DIRECTORY               The output directory of the diagnostic logging; if not specified, the file is generated in the default TestResults directory.
TESTINGPLATFORM_DIAGNOSTIC_OUTPUT_FILEPREFIX              The prefix for the log file name. Defaults to "log_" .
TESTINGPLATFORM_DIAGNOSTIC_FILELOGGER_SYNCHRONOUSWRITE    Forces the built-in file logger to synchronously write logs. Useful for scenarios where you don't want to lose any log entries (if the process crashes). This does slow down the test execution.

7 Note

Environment variables take precedence over the command line arguments.

Crash dump
This extension allows you to create a crash dump file if the process crashes. This
extension is shipped as part of Microsoft.Testing.Extensions.CrashDump NuGet
package.

) Important

The package is shipped with Microsoft .NET library closed-source free to use
licensing model.

To configure the crash dump file generation, use the following options:

Option                Description
--crashdump           Generates a dump file when the test host process crashes. Supported in .NET 6.0+.
--crashdump-filename  Specifies the file name of the dump.
--crashdump-type      Specifies the type of the dump. Valid values are Mini , Heap , Triage , Full . Defaults to Full . For more information, see Types of mini dumps.

U Caution

The extension isn't compatible with .NET Framework and will be silently ignored.
For .NET Framework support, you enable the postmortem debugging with
Sysinternals ProcDump. For more information, see Enabling Postmortem
Debugging: Window Sysinternals ProcDump. The postmortem debugging solution
will also collect process crash information for .NET so you can avoid the use of the
extension if you're targeting both .NET and .NET Framework test applications.

Hang dump
This extension allows you to create a dump file after a given timeout. This extension is
shipped as part of Microsoft.Testing.Extensions.HangDump package.

) Important

The package is shipped with Microsoft .NET library closed-source free to use
licensing model.

To configure the hang dump file generation, use the following options:

Option               Description
--hangdump           Generates a dump file in case the test host process hangs.
--hangdump-filename  Specifies the file name of the dump.
--hangdump-timeout   Specifies the timeout after which the dump is generated. The timeout value is specified in one of the following formats: 1.5h , 1.5hour , 1.5hours ; 90m , 90min , 90minute , 90minutes ; 5400s , 5400sec , 5400second , 5400seconds . Defaults to 30m (30 minutes).
--hangdump-type      Specifies the type of the dump. Valid values are Mini , Heap , Triage , Full . Defaults to Full . For more information, see Types of mini dumps.

Fakes extension
Article • 06/27/2024

The Microsoft.Testing.Extensions.Fakes extension provides support to execute a test


project that makes use of Microsoft Fakes .

Microsoft Fakes allows you to better test your code by either generating Stubs (for
instance, creating a testable implementation of INotifyPropertyChanged ) or by
Shimming methods and static methods (replacing the implementation of File.Open with
one you can control in your tests).

7 Note

This extension requires a Visual Studio Enterprise installation with the minimum
version of 17.11 preview 1 in order to work correctly.

Upgrade your project to the new extension


To use the new extension with an existing project, update the existing
Microsoft.QualityTools.Testing.Fakes reference with
Microsoft.Testing.Extensions.Fakes .

diff

- <Reference Include="Microsoft.QualityTools.Testing.Fakes,
Version=12.0.0.0, Culture=Neutral">
- <SpecificVersion>False</SpecificVersion>
- </Reference>
+ <PackageReference Include="Microsoft.Testing.Extensions.Fakes"
Version="17.11.0-beta.24319.3" />

Hosting extensions
Article • 04/17/2024

This article lists and explains all Microsoft Testing Platform extensions related to the
hosting capability.

Hot reload
Hot reload lets you modify your app's managed source code while the application is
running, without the need to manually pause or hit a breakpoint. Simply make a
supported change while the app is running and select the Apply code changes button
in Visual Studio to apply your edits.

7 Note

The current version is limited to supporting hot reload in "console mode" only.
There is currently no support for hot reload in Test Explorer for Visual Studio or
Visual Studio Code.

This extension is shipped as part of the Microsoft.Testing.Extensions.HotReload


package.

7 Note

The package is shipped with the restrictive Microsoft Testing Platform Tools license.
The full license is available at
https://www.nuget.org/packages/Microsoft.Testing.Extensions.HotReload/1.0.0/Li
cense .

You can easily enable hot reload support by setting the


TESTINGPLATFORM_HOTRELOAD_ENABLED environment variable to "1" .

For SDK-style projects, you can add "TESTINGPLATFORM_HOTRELOAD_ENABLED": "1" in the


environmentVariables section of the launchSettings.json file. The following snippet

shows an example file:

JSON

{
"profiles": {
"Contoso.MyTests": {
"commandName": "Project",
"environmentVariables": {
"TESTINGPLATFORM_HOTRELOAD_ENABLED": "1"
}
}
}
}

Output extensions
Article • 08/30/2024

This article lists and explains all Microsoft Testing Platform extensions related to the
terminal output.

Terminal test reporter


Terminal test reporter is the default implementation of status and progress reporting to
the terminal (console).

It's built into Microsoft.Testing.Platform and offers ANSI and non-ANSI modes
and a progress indicator.

Output modes
There are two output modes available:

Normal : the output contains the banner, reports full test failures and warning
messages, and writes a summary of the run.

Detailed : the same as Normal , but also reports passed tests.

ANSI
Internally, there are two output formatters that auto-detect the terminal's capability
to handle ANSI escape codes.
The ANSI formatter is used when the terminal is capable of rendering the escape
codes.
The non-ANSI formatter is used when the terminal can't handle the escape
codes, when --no-ansi is used, or when output is redirected.

The default is to auto-detect the capabilities.

Progress
A progress indicator is written to the terminal. The progress indicator shows the number
of passed, failed, and skipped tests, followed by the name of the tested assembly, its
target framework, and its architecture.

The progress bar is written based on the selected mode:

ANSI: the progress bar is animated, sticks to the bottom of the screen, and is
refreshed every 500 ms. The progress bar hides once test execution is done.
non-ANSI: the progress bar is written to screen as is every 3 seconds, and the
progress remains in the output.

Options
The available options are as follows:

Option         Description
--no-progress  Disable reporting progress to screen.
--no-ansi      Disable outputting ANSI escape characters to screen.
--output       Output verbosity when reporting tests. Valid values are 'Normal' and 'Detailed'. Default is 'Normal'.
Policy extensions
Article • 04/17/2024

This article lists and explains all Microsoft Testing Platform extensions related to the
policy capability.

Retry
A .NET test resilience and transient-fault-handling extension.

This extension is intended for integration tests where the test depends heavily on the
state of the environment and could experience transient faults.

This extension is shipped as part of Microsoft.Testing.Extensions.Retry package.

7 Note

The package is shipped with the restrictive Microsoft Testing Platform Tools license.
The full license is available at
https://www.nuget.org/packages/Microsoft.Testing.Extensions.Retry/1.0.0/Licens
e .

The available options are as follows:

Option                               Description
--retry-failed-tests                 Reruns any failed tests until they pass or until the maximum number of attempts is reached.
--retry-failed-tests-max-percentage  Avoids rerunning tests when the percentage of failed test cases crosses the specified threshold.
--retry-failed-tests-max-tests       Avoids rerunning tests when the number of failed test cases crosses the specified limit.

Test reports extensions
Article • 04/11/2024

This article lists and explains all Microsoft Testing Platform extensions related to the
test report capability.

A test report is a file that contains information about the execution and outcome of the
tests.

Visual Studio test reports


The Visual Studio test result file (or TRX) is the default format for publishing test results.
This extension is shipped as part of Microsoft.Testing.Extensions.TrxReport package.

) Important

The package is shipped with Microsoft .NET library closed-source free to use
licensing model.

The available options are as follows:

Option                 Description
--report-trx           Generates the TRX report.
--report-trx-filename  The name of the generated TRX report. The default name matches the following format: <UserName>_<MachineName>_<yyyy-MM-dd HH:mm:ss>.trx .

The report is saved inside the default TestResults folder, which can be changed through
the --results-directory command line argument.

VSTest Bridge extension
Article • 04/11/2024

This extension provides a compatibility layer with VSTest, allowing test frameworks
that depend on it to continue to support running in VSTest mode ( vstest.console.exe ,
the usual dotnet test , the VSTest task on Azure DevOps, and the Test Explorers of
Visual Studio and Visual Studio Code). This extension is shipped as part of the
Microsoft.Testing.Extensions.VSTestBridge package.

Important

The package is shipped with the Microsoft .NET library closed-source, free-to-use
licensing model.

Compatibility with VSTest


The main purpose of this extension is to offer an easy and smooth upgrade experience
for VSTest users by allowing a dual mode where the new platform is enabled and in
parallel a compatibility mode is offered to allow the usual workflows to continue
working.

Runsettings support
This extension allows you to provide a VSTest .runsettings file, but not all options in this
file are picked up by the platform. The following sections describe which settings are
supported, which are not, and the alternatives for the most commonly used VSTest
configuration options.

When enabled by the test framework, you can use --settings <SETTINGS_FILE> to
provide the .runsettings file.

RunConfiguration element
The RunConfiguration element can include the following elements. None of these
settings are respected by Microsoft.Testing.Platform :

| Node | Description | Reason / Workaround |
|------|-------------|---------------------|
| MaxCpuCount | This setting controls the level of parallelism on process-level. Use 0 to enable the maximum process-level parallelism. | When Microsoft.Testing.Platform is used with MSBuild, this option is offloaded to MSBuild. When a single executable is run, this option has no meaning for Microsoft.Testing.Platform. |
| ResultsDirectory | The directory where test results are placed. The path is relative to the directory that contains the .runsettings file. | Use the command-line option --results-directory to determine the directory where the test results are going to be placed. If the specified directory doesn't exist, it's created. The default is TestResults in the directory that contains the test application. |
| TargetFrameworkVersion | This setting defines the framework version, or framework family, to use to run tests. | This option is ignored. The <TargetFramework> or <TargetFrameworks> MSBuild properties determine the target framework of the application. The tests are hosted in the final application. |
| TargetPlatform | This setting defines the architecture to use to run tests. | <RuntimeIdentifier> determines the architecture of the final application that hosts the tests. |
| TreatTestAdapterErrorsAsWarnings | Suppresses test adapter errors to become warnings. | Microsoft.Testing.Platform allows only one type of tests to be run from a single assembly, and failure to load the test framework or other parts of infrastructure will become an unskippable error, because it signifies that some tests could not be discovered or run. |
| TestAdaptersPaths | One or more paths to the directory where the TestAdapters are located. | Microsoft.Testing.Platform does not use the concept of test adapters and does not allow dynamic loading of extensions unless they are part of the build, and are registered in Program.cs, either automatically via build targets or manually. |
| TestCaseFilter | A filter to limit tests which will run. | To filter tests, use the --filter command line option. |
| TestSessionTimeout | Allows users to terminate a test session when it exceeds a given timeout. | There is no alternative option. |
| DotnetHostPath | Specify a custom path to the dotnet host that is used to run the test host. | Microsoft.Testing.Platform is not doing any additional resolving of dotnet. It depends fully on how dotnet resolves itself, which can be controlled by environment variables such as DOTNET_HOST_PATH. |
| TreatNoTestsAsError | Exit with non-zero exit code when no tests are discovered. | Microsoft.Testing.Platform will error by default when no tests are discovered or run in a test application. You can set how many tests you expect to find in the assembly by using the --minimum-expected-tests command line parameter, which defaults to 1. |

DataCollectors element
Microsoft.Testing.Platform doesn't use data collectors. Instead, it has the concept of
in-process and out-of-process extensions. Each extension is configured by its respective
configuration file or through the command line.

The most important of these are the hang and crash dump extensions and the code
coverage extension.

LoggerRunSettings element
Loggers in Microsoft.Testing.Platform are configured through command-line
parameters or by settings in code.

VSTest filter support


This extension also offers the ability to use the VSTest filtering mechanism to discover
or run only the tests that match the filter expression. For more information, see the
Filter option details section, or for framework-specific details, see the Running
selective unit tests page.

When enabled by the test framework, you can use --filter <FILTER_EXPRESSION> .

Microsoft Testing Platform diagnostics
overview
Article • 05/31/2024

Microsoft Testing Platform analysis ("TPXXX") rules inspect your code for security,
performance, design, and other issues.

TPEXP
Several APIs of Microsoft Testing Platform are decorated with the ExperimentalAttribute.
This attribute indicates that the API is experimental and may be removed or changed in
future versions of Microsoft Testing Platform. The attribute is used to identify APIs that
aren't yet stable and may not be suitable for production use.

To suppress this diagnostic with the SuppressMessageAttribute , add the following code
to your project:

C#

using System.Diagnostics.CodeAnalysis;

[assembly: SuppressMessage("TPEXP", "Justification")]

Alternatively, you can suppress this diagnostic with preprocessor directive by adding the
following code to your project:

C#

#pragma warning disable TPEXP


// API that is causing the warning.
#pragma warning restore TPEXP

Microsoft.Testing.Platform telemetry
Article • 03/19/2024

Microsoft.Testing.Platform collects telemetry data, which is used to help understand

how to improve the product. For example, this usage data helps to debug issues, such
as slow start-up times, and to prioritize new features. While these insights are
appreciated, you're free to disable telemetry. For more information on telemetry, see
privacy statement .

Types of telemetry data


Microsoft.Testing.Platform only collects telemetry of type Usage Data. The usage data

is used to understand how features are consumed and where time is spent when
executing the test app. This helps prioritize product improvements.

Disable telemetry reporting


To disable telemetry, set either TESTINGPLATFORM_TELEMETRY_OPTOUT or
DOTNET_CLI_TELEMETRY_OPTOUT environment variable to 1 .
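For example, in a bash-like shell you can opt out for the current session. The variable names come from the description above; everything else is standard shell usage:

```shell
# Opt out of Microsoft.Testing.Platform telemetry for this shell session.
export TESTINGPLATFORM_TELEMETRY_OPTOUT=1

# The .NET CLI-wide variable is honored as well.
export DOTNET_CLI_TELEMETRY_OPTOUT=1

echo "$TESTINGPLATFORM_TELEMETRY_OPTOUT $DOTNET_CLI_TELEMETRY_OPTOUT"
```

Any test executable or dotnet command run afterward in the same session then skips telemetry collection.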

Disclosure
Microsoft.Testing.Platform displays text similar to the following when you first run
your executable. The output text might vary slightly depending on the version of
Microsoft.Testing.Platform you're running. This "first run" experience is how Microsoft
notifies you about data collection.

Console

Telemetry
---------
Microsoft.Testing.Platform collects usage data in order to help us improve
your experience.
The data is collected by Microsoft and are not shared.
You can opt-out of telemetry by setting the TESTINGPLATFORM_TELEMETRY_OPTOUT
or DOTNET_CLI_TELEMETRY_OPTOUT environment variable to '1' or 'true' using
your favorite shell.

Read more about Microsoft.Testing.Platform telemetry:


https://aka.ms/testingplatform-telemetry
Data points
The telemetry feature doesn't collect personal data, such as usernames or email
addresses. It doesn't scan your code, and it doesn't extract project-level data such as
repository or author. It does extract the name of your executable and sends it in hashed
form.
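The documentation doesn't state which hash algorithm is used, so the following is a purely illustrative sketch of what "hashed form" means; SHA-256 here is an assumption, not the platform's actual implementation:

```shell
# Hypothetical illustration only: reduce an executable name to an
# irreversible fixed-length digest before it leaves the machine.
printf '%s' 'MyTests.exe' | sha256sum | cut -d ' ' -f 1
```

The output is a hex digest that identifies the name for aggregation without revealing it.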

It doesn't extract the contents of any data files accessed or created by your apps, dumps
of any memory occupied by your apps' objects, or the contents of the clipboard.

The data is sent securely to Microsoft servers using Azure Monitor technology, held
under restricted access, and published under strict security controls from secure Azure
Storage systems.

Protecting your privacy is important to Microsoft! If you suspect the telemetry is


collecting sensitive data or the data is being insecurely or inappropriately handled, file
an issue in the microsoft/testfx GitHub repository or send an email to
dotnet@microsoft.com for investigation.

The telemetry feature collects the following data points:

| Version | Data |
|---------|------|
| All | .NET Runtime version. |
| All | Application mode, such as 'server'. |
| All | Count of test retries that failed. |
| All | Count of test retries that passed. |
| All | Count of tests that failed. |
| All | Count of tests that passed. |
| All | Count of tests that ran. |
| All | The DisplayName of the extensions you're using, as a hashed value. |
| All | If debug build of platform is used. |
| All | If debugger was attached to the process. |
| All | If filter of tests was used. |
| All | If Hot reload is enabled. |
| All | If the application crashed. |
| All | If the application is running as NativeAOT. |
| All | If the repository is our own repository. Based on the telemetry:isDevelopmentRepository setting in testingplatformconfig.json. |
| All | Name of the test framework you're using, as a hashed value. |
| All | Name of your executable (which is usually the same as the name of the project), as a hashed value. |
| All | Operating system, version and architecture. |
| All | Process architecture. |
| All | Runtime ID (RID). For more information, see .NET RID Catalog. |
| All | The exit code of the application. |
| All | Three octet IP address used to determine the geographical location. |
| All | Timestamp of invocation, timestamp of start and end of various steps in the execution. |
| All | Version of the platform. |
| All | Version of your extensions. |
| All | Version of your test adapter. |
| All | Guid to correlate events from a single runner. |
| 1.0.3 | Guid to correlate events from a single test run. |

Continuous integration detection


In order to detect if the .NET CLI is running in a continuous integration environment, the
.NET CLI probes for the presence and values of several well-known environment
variables that common CI providers set.

The full list of environment variables, and what is done with their values, is detailed in
the following table:

| Environment variable(s) | Provider | Action |
|-------------------------|----------|--------|
| APPVEYOR | Appveyor | Parse boolean value. |
| BUILD_ID, BUILD_URL | Jenkins | Check if all are present and non-null. |
| BUILD_ID, PROJECT_ID | Google Cloud Build | Check if all are present and non-null. |
| CI | Many/Most | Parse boolean value. |
| CIRCLECI | Circle CI | Parse boolean value. |
| CODEBUILD_BUILD_ID, AWS_REGION | Amazon Web Services CodeBuild | Check if all are present and non-null. |
| GITHUB_ACTIONS | GitHub Actions | Parse boolean value. |
| JB_SPACE_API_URL | JetBrains Space | Check if present and non-null. |
| TEAMCITY_VERSION | TeamCity | Check if present and non-null. |
| TF_BUILD | Azure Pipelines | Parse boolean value. |
| TRAVIS | Travis CI | Parse boolean value. |
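As a rough sketch (not the .NET CLI's actual implementation), the probing logic in the table above amounts to something like the following bash function. It mirrors the two kinds of actions: "parse boolean value" versus "check if present and non-null":

```shell
# Illustrative sketch of CI detection: returns success (0) when a known
# CI environment variable indicates a CI environment.
is_ci() {
  # Providers whose variable is parsed as a boolean.
  for v in "$CI" "$GITHUB_ACTIONS" "$TF_BUILD" "$TRAVIS" "$APPVEYOR" "$CIRCLECI"; do
    case "$v" in [Tt]rue|1) return 0 ;; esac
  done
  # Providers checked only for presence of a non-null value.
  [ -n "$TEAMCITY_VERSION" ] && return 0
  [ -n "$JB_SPACE_API_URL" ] && return 0
  # Jenkins requires both variables to be present and non-null.
  [ -n "$BUILD_ID" ] && [ -n "$BUILD_URL" ] && return 0
  return 1
}

GITHUB_ACTIONS=true
if is_ci; then echo "CI detected"; else echo "not CI"; fi
```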

Microsoft.Testing.Platform exit codes
Article • 08/28/2024

Microsoft.Testing.Platform uses known exit codes to communicate test failure or app

errors. Exit codes start at 0 and are non-negative. Consider the following table that
details the various exit codes and their corresponding reasons:

| Exit code | Details |
|-----------|---------|
| 0 | The 0 exit code indicates success. All tests that were chosen to run ran to completion and there were no errors. |
| 1 | The 1 exit code indicates unknown errors and acts as a catch-all. To find additional error information and details, look in the output. |
| 2 | An exit code of 2 indicates that there was at least one test failure. |
| 3 | The exit code 3 indicates that the test session was aborted. A session can be aborted using Ctrl+C, as an example. |
| 4 | The exit code 4 indicates that the setup of the used extensions is invalid and the test session cannot run. |
| 5 | The exit code 5 indicates that the command-line arguments passed to the test app are invalid. |
| 6 | The exit code 6 indicates that the test session is using a nonimplemented feature. |
| 7 | The exit code 7 indicates that a test session was unable to complete successfully, and likely crashed. It's possible that this was caused by a test session that was run via a test controller's extension point. |
| 8 | The exit code 8 indicates that the test session ran zero tests. |
| 9 | The exit code 9 indicates that the minimum execution policy for the executed tests was violated. |
| 10 | The exit code 10 indicates that the test adapter (Testing.Platform Test Framework, MSTest, NUnit, or xUnit) failed to run tests for an infrastructure reason unrelated to the tests themselves. An example is failing to create a fixture needed by tests. |
| 11 | The exit code 11 indicates that the test process exits when a dependent process exits. |
| 12 | The exit code 12 indicates that the test session was unable to run because the client does not support any of the supported protocol versions. |
To enable verbose logging and troubleshoot issues, see Microsoft.Testing.Platform
Diagnostics extensions.

Ignore specific exit codes


Microsoft.Testing.Platform is designed to be strict by default but allows for
configurability. As such, it's possible for users to decide which exit codes should be
ignored (an exit code of 0 is returned instead of the original exit code).

To ignore specific exit codes, use the --ignore-exit-code command line option or the
TESTINGPLATFORM_EXITCODE_IGNORE environment variable. The valid format accepted is a
semicolon-separated list of exit codes to ignore (for example, --ignore-exit-code
2;3;8 ). A common scenario is to consider that test failures shouldn't result in a
nonzero exit code (which corresponds to ignoring exit code 2 ).
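The semicolon-separated format can be illustrated with a small wrapper sketch. The ignore_exit_code helper below is hypothetical glue code, not part of Microsoft.Testing.Platform; it only shows how a list such as 2;3;8 is interpreted:

```shell
# Hypothetical helper: succeed when exit code $1 appears in the
# semicolon-separated ignore list $2, mirroring the format of
# TESTINGPLATFORM_EXITCODE_IGNORE and --ignore-exit-code.
ignore_exit_code() {
  case ";$2;" in
    *";$1;"*) return 0 ;;
    *)        return 1 ;;
  esac
}

if ignore_exit_code 2 "2;3;8"; then echo "exit code 2 ignored"; fi
if ! ignore_exit_code 7 "2;3;8"; then echo "exit code 7 kept"; fi
```

Surrounding the list with extra semicolons before matching keeps code 1 from accidentally matching a list that contains 12.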


Use Microsoft.Testing.Platform with
dotnet test
Article • 09/10/2024

This article describes how to use dotnet test to run tests when using
Microsoft.Testing.Platform , and the various options that are available to configure the

MSBuild output produced when running tests through Microsoft.Testing.Platform.

This article shows how to use dotnet test to run all tests in a solution (*.sln) that uses
Microsoft.Testing.Platform .

dotnet test integration


The dotnet test command is a way to run tests from solutions, projects, or already built
assemblies. Microsoft.Testing.Platform hooks up into this infrastructure to provide a
unified way to run tests, especially when migrating from VSTest to
Microsoft.Testing.Platform .

dotnet test integration - VSTest mode

Microsoft.Testing.Platform provides a compatibility layer (VSTest Bridge) to work with

dotnet test seamlessly.

Tests can be run by running:

.NET CLI

dotnet test

This layer runs test through VSTest and integrates with it on VSTest Test Framework
Adapter level.

dotnet test - Microsoft.Testing.Platform mode

By default, VSTest is used to run Microsoft.Testing.Platform tests. You can enable full
Microsoft.Testing.Platform mode by specifying the
<TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport> setting in
your project file. This setting disables VSTest and, thanks to the transitive dependency
on the Microsoft.Testing.Platform.MSBuild NuGet package, directly runs all
Microsoft.Testing.Platform empowered test projects in your solution. It also works
seamlessly if you pass a direct Microsoft.Testing.Platform test project.

XML

<Project Sdk="Microsoft.NET.Sdk">

<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>

<IsPackable>false</IsPackable>
<IsTestProject>true</IsTestProject>

<OutputType>Exe</OutputType>
<EnableMSTestRunner>true</EnableMSTestRunner>

<!-- Add this to your project file. -->

<TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport>

</PropertyGroup>

<!-- ... -->

</Project>

In this mode, you can supply extra parameters that are used to call the testing
application in one of the following ways:

Beginning with Microsoft.Testing.Platform version 1.4 (included with MSTest
version 3.6), you can pass options after the double dash -- on the command line:

.NET CLI

dotnet test -- --minimum-expected-tests 10

By using the TestingPlatformCommandLineArguments MSBuild property on the


command line:

.NET CLI

dotnet test -p:TestingPlatformCommandLineArguments="--minimum-expected-


tests 10"

Or in the project file:


XML

<PropertyGroup>
...
<TestingPlatformCommandLineArguments>--minimum-expected-tests
10</TestingPlatformCommandLineArguments>
</PropertyGroup>

Additional MSBuild options


The MSBuild integration provides options that can be specified in the project file or
through global properties on the command line, such as -
p:TestingPlatformShowTestsFailure=true .

These are the available options:

Show failure per test


Show complete platform output

Show failure per test


By default, test failures are summarized into a .log file, and a single failure per test
project is reported to MSBuild.

To show errors per failed test, specify -p:TestingPlatformShowTestsFailure=true on the


command line, or add the
<TestingPlatformShowTestsFailure>true</TestingPlatformShowTestsFailure> property to
your project file.

On command line:

.NET CLI

dotnet test -p:TestingPlatformShowTestsFailure=true

Or in project file:

XML

<Project Sdk="Microsoft.NET.Sdk">

<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>

<IsPackable>false</IsPackable>
<IsTestProject>true</IsTestProject>

<OutputType>Exe</OutputType>
<EnableMSTestRunner>true</EnableMSTestRunner>

<TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport>

<!-- Add this to your project file. -->


<TestingPlatformShowTestsFailure>true</TestingPlatformShowTestsFailure>

</PropertyGroup>

<!-- ... -->

</Project>

Show complete platform output


By default, all console output that the underlying test executable writes is captured and
hidden from the user. This includes the banner, version information, and formatted test
information.

To show this information together with the MSBuild output, use
<TestingPlatformCaptureOutput>false</TestingPlatformCaptureOutput> .

This option doesn't impact how the testing framework captures user output written by
Console.WriteLine or other similar ways to write to the console.

On command line:

.NET CLI

dotnet test -p:TestingPlatformCaptureOutput=false

Or in project file:

XML

<Project Sdk="Microsoft.NET.Sdk">

<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
<IsPackable>false</IsPackable>
<IsTestProject>true</IsTestProject>

<OutputType>Exe</OutputType>
<EnableMSTestRunner>true</EnableMSTestRunner>

<TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport>

<!-- Add this to your project file. -->


<TestingPlatformCaptureOutput>false</TestingPlatformCaptureOutput>

</PropertyGroup>

<!-- ... -->

</Project>
Run selected unit tests
Article • 06/18/2022

With the dotnet test command in .NET Core, you can use a filter expression to run
selected tests. This article demonstrates how to filter tests. The examples use dotnet
test . If you're using vstest.console.exe , replace --filter with --testcasefilter: .

Syntax
.NET CLI

dotnet test --filter <Expression>

Expression is in the format <Property><Operator><Value>[|&<Expression>] .

Expressions can be joined with boolean operators: | for boolean or, & for boolean
and.

Expressions can be enclosed in parentheses. For example: (Name~MyClass) |


(Name~MyClass2) .

An expression without any operator is interpreted as a contains on the


FullyQualifiedName property. For example, dotnet test --filter xyz is the same
as dotnet test --filter FullyQualifiedName~xyz .
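As a rough analogy (not the actual test runner implementation), the default "contains" match on FullyQualifiedName behaves like a case-insensitive substring search over the fully qualified test names; the names below are made up for illustration:

```shell
# Case-insensitive substring match over fully qualified test names,
# analogous to: dotnet test --filter xyz
printf '%s\n' 'MyApp.Tests.XyzTests.Test1' 'MyApp.Tests.Other.Test2' | grep -i 'xyz'
```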

Property is an attribute of the Test Case . For example, the following properties are
supported by popular unit test frameworks.

| Test framework | Supported properties |
|----------------|----------------------|
| MSTest | FullyQualifiedName<br>Name<br>ClassName<br>Priority<br>TestCategory |
| xUnit | FullyQualifiedName<br>DisplayName<br>Traits |
| NUnit | FullyQualifiedName<br>Name<br>Priority<br>TestCategory |

Operators
- = exact match
- != not exact match
- ~ contains
- !~ doesn't contain

Value is a string. All the lookups are case insensitive.

Character escaping
To use an exclamation mark ( ! ) in a filter expression, you have to escape it in some
Linux or macOS shells by putting a backslash in front of it ( \! ). For example, the
following filter skips all tests in a namespace that contains IntegrationTests :

.NET CLI

dotnet test --filter FullyQualifiedName\!~IntegrationTests

For FullyQualifiedName values that include a comma for generic type parameters,
escape the comma with %2C . For example:

.NET CLI

dotnet test --filter


"FullyQualifiedName=MyNamespace.MyTestsClass<ParameterType1%2CParameterType2
>.MyTestMethod"

MSTest examples
C#

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MSTestNamespace
{
[TestClass]
public class UnitTest1
{
[TestMethod, Priority(1), TestCategory("CategoryA")]
public void TestMethod1()
{
}

[TestMethod, Priority(2)]
public void TestMethod2()
{
}
}
}

| Expression | Result |
|------------|--------|
| dotnet test --filter Method | Runs tests whose FullyQualifiedName contains Method. |
| dotnet test --filter Name~TestMethod1 | Runs tests whose name contains TestMethod1. |
| dotnet test --filter ClassName=MSTestNamespace.UnitTest1 | Runs tests that are in class MSTestNamespace.UnitTest1. Note: The ClassName value should have a namespace, so ClassName=UnitTest1 won't work. |
| dotnet test --filter FullyQualifiedName!=MSTestNamespace.UnitTest1.TestMethod1 | Runs all tests except MSTestNamespace.UnitTest1.TestMethod1. |
| dotnet test --filter TestCategory=CategoryA | Runs tests that are annotated with [TestCategory("CategoryA")]. |
| dotnet test --filter Priority=2 | Runs tests that are annotated with [Priority(2)]. |

Examples using the conditional operators | and & :

To run tests that have UnitTest1 in their FullyQualifiedName or


TestCategoryAttribute is "CategoryA" .

.NET CLI

dotnet test --filter


"FullyQualifiedName~UnitTest1|TestCategory=CategoryA"
To run tests that have UnitTest1 in their FullyQualifiedName and have a
TestCategoryAttribute of "CategoryA" .

.NET CLI

dotnet test --filter


"FullyQualifiedName~UnitTest1&TestCategory=CategoryA"

To run tests that have either FullyQualifiedName containing UnitTest1 and have a
TestCategoryAttribute of "CategoryA" or have a PriorityAttribute with a priority of
1.

.NET CLI

dotnet test --filter "


(FullyQualifiedName~UnitTest1&TestCategory=CategoryA)|Priority=1"

See also
dotnet test
dotnet test --filter

Next steps
Order unit tests
Order unit tests
Article • 08/23/2024

Occasionally, you may want to have unit tests run in a specific order. Ideally, the order in
which unit tests run should not matter, and it is best practice to avoid ordering unit
tests. Regardless, there may be a need to do so. In that case, this article demonstrates
how to order test runs.

If you prefer to browse the source code, see the order .NET Core unit tests sample
repository.

Tip

In addition to the ordering capabilities outlined in this article, consider creating


custom playlists with Visual Studio as an alternative.

Order alphabetically
MSTest discovers tests in the same order in which they are defined in the test class.

When running through Test Explorer (in Visual Studio, or in Visual Studio Code), the tests
are ordered in alphabetical order based on their test name.

When running outside of Test Explorer, tests are executed in the order in which they are
defined in the test class.

Note

A test named Test14 will run before Test2 even though the number 2 is less than
14 . This is because test name ordering uses the text name of the test.

C#

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MSTest.Project;

[TestClass]
public class ByAlphabeticalOrder
{
public static bool Test1Called;
public static bool Test2Called;
public static bool Test3Called;

[TestMethod]
public void Test2()
{
Test2Called = true;

Assert.IsTrue(Test1Called);
Assert.IsFalse(Test3Called);
}

[TestMethod]
public void Test1()
{
Test1Called = true;

Assert.IsFalse(Test2Called);
Assert.IsFalse(Test3Called);
}

[TestMethod]
public void Test3()
{
Test3Called = true;

Assert.IsTrue(Test1Called);
Assert.IsTrue(Test2Called);
}
}
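The Test14-before-Test2 behavior described in the note above is plain ordinal string ordering, which any lexical sort demonstrates:

```shell
# Ordinal (byte-wise) sort: '1' < '2', so "Test14" sorts before "Test2".
printf 'Test2\nTest14\n' | LC_ALL=C sort
```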

Next Steps
Unit test code coverage
Use code coverage for unit testing
Article • 01/30/2024

Important

This article explains the creation of the example project. If you already have a
project, you can skip ahead to the Code coverage tooling section.

Unit tests help to ensure functionality and provide a means of verification for refactoring
efforts. Code coverage is a measurement of the amount of code that is run by unit tests
- either lines, branches, or methods. As an example, if you have a simple application with
only two conditional branches of code (branch a, and branch b), a unit test that verifies
conditional branch a will report branch code coverage of 50%.

This article discusses the usage of code coverage for unit testing with Coverlet and
report generation using ReportGenerator. While this article focuses on C# and xUnit as
the test framework, both MSTest and NUnit would also work. Coverlet is an open source
project on GitHub that provides a cross-platform code coverage framework for C#.
Coverlet is part of the .NET Foundation . Coverlet collects Cobertura coverage test run
data, which is used for report generation.

Additionally, this article details how to use the code coverage information collected
from a Coverlet test run to generate a report. The report generation is possible using
another open source project on GitHub - ReportGenerator . ReportGenerator converts
coverage reports generated by Cobertura among many others, into human-readable
reports in various formats.

This article is based on the sample source code project, available on samples browser.

System under test


The "system under test" refers to the code that you're writing unit tests against. This
could be an object, a service, or anything else that exposes testable functionality. For
this article, you'll create a class library that will be the system under test, and two
corresponding unit test projects.

Create a class library


From a command prompt in a new directory named UnitTestingCodeCoverage , create a
new .NET standard class library using the dotnet new classlib command:
.NET CLI

dotnet new classlib -n Numbers

The snippet below defines a simple PrimeService class that provides functionality to
check if a number is prime. Copy the snippet below and replace the contents of the
Class1.cs file that was automatically created in the Numbers directory. Rename the
Class1.cs file to PrimeService.cs.

C#

namespace System.Numbers
{
public class PrimeService
{
public bool IsPrime(int candidate)
{
if (candidate < 2)
{
return false;
}

for (int divisor = 2; divisor <= Math.Sqrt(candidate);


++divisor)
{
if (candidate % divisor == 0)
{
return false;
}
}
return true;
}
}
}

Tip

It is worth mentioning that the Numbers class library was intentionally added to the
System namespace. This allows for System.Math to be accessible without a using
System; namespace declaration. For more information, see namespace (C#

Reference).

Create test projects


Create two new xUnit Test Project (.NET Core) templates from the same command
prompt using the dotnet new xunit command:

.NET CLI

dotnet new xunit -n XUnit.Coverlet.Collector

.NET CLI

dotnet new xunit -n XUnit.Coverlet.MSBuild

Both of the newly created xUnit test projects need to add a project reference of the
Numbers class library. This is so that the test projects have access to the PrimeService for
testing. From the command prompt, use the dotnet add command:

.NET CLI

dotnet add XUnit.Coverlet.Collector\XUnit.Coverlet.Collector.csproj


reference Numbers\Numbers.csproj

.NET CLI

dotnet add XUnit.Coverlet.MSBuild\XUnit.Coverlet.MSBuild.csproj reference


Numbers\Numbers.csproj

The MSBuild project is named appropriately, as it will depend on the coverlet.msbuild


NuGet package. Add this package dependency by running the dotnet add package
command:

.NET CLI

cd XUnit.Coverlet.MSBuild && dotnet add package coverlet.msbuild && cd ..

The previous command changed directories effectively scoping to the MSBuild test
project, then added the NuGet package. When that was done, it then changed
directories, stepping up one level.

Open both of the UnitTest1.cs files, and replace their contents with the following snippet.
Rename the UnitTest1.cs files to PrimeServiceTests.cs.

C#

using System.Numbers;
using Xunit;
namespace XUnit.Coverlet
{
public class PrimeServiceTests
{
readonly PrimeService _primeService;

public PrimeServiceTests() => _primeService = new PrimeService();

[Theory]
[InlineData(-1), InlineData(0), InlineData(1)]
public void IsPrime_ValuesLessThan2_ReturnFalse(int value) =>
Assert.False(_primeService.IsPrime(value), $"{value} should not
be prime");

[Theory]
[InlineData(2), InlineData(3), InlineData(5), InlineData(7)]
public void IsPrime_PrimesLessThan10_ReturnTrue(int value) =>
Assert.True(_primeService.IsPrime(value), $"{value} should be
prime");

[Theory]
[InlineData(4), InlineData(6), InlineData(8), InlineData(9)]
public void IsPrime_NonPrimesLessThan10_ReturnFalse(int value) =>
Assert.False(_primeService.IsPrime(value), $"{value} should not
be prime");
}
}

Create a solution
From the command prompt, create a new solution to encapsulate the class library and
the two test projects. Using the dotnet sln command:

.NET CLI

dotnet new sln -n XUnit.Coverage

This command creates a new solution file named XUnit.Coverage in the
UnitTestingCodeCoverage directory. Add the projects to the root of the solution.

Windows

.NET CLI

dotnet sln XUnit.Coverage.sln add (ls **/*.csproj) --in-root


Build the solution using the dotnet build command:

.NET CLI

dotnet build

If the build is successful, you've created the three projects, appropriately referenced
projects and packages, and updated the source code correctly. Well done!

Code coverage tooling


There are two types of code coverage tools:

DataCollectors: DataCollectors monitor test execution and collect information


about test runs. They report the collected information in various output formats,
such as XML and JSON. For more information, see your first DataCollector .
Report generators: Use data collected from test runs to generate reports, often as
styled HTML.

In this section, the focus is on data collector tools.

.NET includes a built-in code coverage data collector, which is also available in Visual
Studio. This data collector generates a binary .coverage file that can be used to generate
reports in Visual Studio. The binary file is not human-readable, and it must be converted
to a human-readable format before it can be used to generate reports outside of Visual
Studio.

Tip

The dotnet-coverage tool is a cross-platform tool that can be used to convert the
binary coverage test results file to a human-readable format. For more information,
see dotnet-coverage.

Coverlet is an open-source alternative to the built-in collector. It generates test results
as human-readable Cobertura XML files, which can then be used to generate HTML
reports. To use Coverlet for code coverage, an existing unit test project must have the
appropriate package dependencies, or alternatively rely on .NET global tooling and the
corresponding coverlet.console NuGet package.

Integrate with .NET test


The xUnit test project template already integrates with coverlet.collector by default.
From the command prompt, change directories to the XUnit.Coverlet.Collector project,
and run the dotnet test command:

.NET CLI

cd XUnit.Coverlet.Collector && dotnet test --collect:"XPlat Code Coverage"

Note

The "XPlat Code Coverage" argument is a friendly name that corresponds to the
data collectors from Coverlet. This name is required but is case insensitive. To use
.NET's built-in Code Coverage data collector, use "Code Coverage" .

As part of the dotnet test run, a resulting coverage.cobertura.xml file is output to the
TestResults directory. The XML file contains the results. This is a cross-platform option
that relies on the .NET CLI, and it is great for build systems where MSBuild is not
available.

Below is the example coverage.cobertura.xml file.

XML

<?xml version="1.0" encoding="utf-8"?>
<coverage line-rate="1" branch-rate="1" version="1.9" timestamp="1592248008"
          lines-covered="12" lines-valid="12" branches-covered="6" branches-valid="6">
  <sources>
    <source>C:\</source>
  </sources>
  <packages>
    <package name="Numbers" line-rate="1" branch-rate="1" complexity="6">
      <classes>
        <class name="Numbers.PrimeService" line-rate="1" branch-rate="1" complexity="6"
               filename="Numbers\PrimeService.cs">
          <methods>
            <method name="IsPrime" signature="(System.Int32)" line-rate="1" branch-rate="1" complexity="6">
              <lines>
                <line number="8" hits="11" branch="False" />
                <line number="9" hits="11" branch="True" condition-coverage="100% (2/2)">
                  <conditions>
                    <condition number="7" type="jump" coverage="100%" />
                  </conditions>
                </line>
                <line number="10" hits="3" branch="False" />
                <line number="11" hits="3" branch="False" />
                <line number="14" hits="22" branch="True" condition-coverage="100% (2/2)">
                  <conditions>
                    <condition number="57" type="jump" coverage="100%" />
                  </conditions>
                </line>
                <line number="15" hits="7" branch="False" />
                <line number="16" hits="7" branch="True" condition-coverage="100% (2/2)">
                  <conditions>
                    <condition number="27" type="jump" coverage="100%" />
                  </conditions>
                </line>
                <line number="17" hits="4" branch="False" />
                <line number="18" hits="4" branch="False" />
                <line number="20" hits="3" branch="False" />
                <line number="21" hits="4" branch="False" />
                <line number="23" hits="11" branch="False" />
              </lines>
            </method>
          </methods>
          <lines>
            <line number="8" hits="11" branch="False" />
            <line number="9" hits="11" branch="True" condition-coverage="100% (2/2)">
              <conditions>
                <condition number="7" type="jump" coverage="100%" />
              </conditions>
            </line>
            <line number="10" hits="3" branch="False" />
            <line number="11" hits="3" branch="False" />
            <line number="14" hits="22" branch="True" condition-coverage="100% (2/2)">
              <conditions>
                <condition number="57" type="jump" coverage="100%" />
              </conditions>
            </line>
            <line number="15" hits="7" branch="False" />
            <line number="16" hits="7" branch="True" condition-coverage="100% (2/2)">
              <conditions>
                <condition number="27" type="jump" coverage="100%" />
              </conditions>
            </line>
            <line number="17" hits="4" branch="False" />
            <line number="18" hits="4" branch="False" />
            <line number="20" hits="3" branch="False" />
            <line number="21" hits="4" branch="False" />
            <line number="23" hits="11" branch="False" />
          </lines>
        </class>
      </classes>
    </package>
  </packages>
</coverage>
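
The Cobertura file is plain XML, so build scripts can post-process it directly. As an illustrative sketch (not part of the tutorial's tooling; shown in Python for brevity), the top-level rates and per-line hit counts could be read like this:

```python
import xml.etree.ElementTree as ET

def summarize_cobertura(path):
    """Summarize a Cobertura report such as coverage.cobertura.xml.

    Returns the top-level line/branch rates plus a count of <line>
    elements with at least one hit. (Cobertura repeats lines at both
    method and class level, so the raw count can include duplicates.)
    """
    root = ET.parse(path).getroot()  # the <coverage> element
    lines_with_hits = sum(
        1 for line in root.iter("line") if int(line.get("hits", "0")) > 0
    )
    return {
        "line_rate": float(root.get("line-rate")),
        "branch_rate": float(root.get("branch-rate")),
        "lines_with_hits": lines_with_hits,
    }
```

A script like this can gate a CI build, for example by failing when line_rate drops below a chosen threshold.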

Tip

As an alternative, you could use the MSBuild package if your build system already
makes use of MSBuild. From the command prompt, change directories to the
XUnit.Coverlet.MSBuild project, and run the dotnet test command:

.NET CLI

dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura

The resulting coverage.cobertura.xml file is output. For more information, follow the
MSBuild integration guide.

Generate reports
Now that you're able to collect data from unit test runs, you can generate reports using
ReportGenerator . To install the ReportGenerator NuGet package as a .NET global
tool, use the dotnet tool install command:

.NET CLI

dotnet tool install -g dotnet-reportgenerator-globaltool

Run the tool and provide the desired options, given the output coverage.cobertura.xml
file from the previous test run.

Console

reportgenerator
-reports:"Path\To\TestProject\TestResults\{guid}\coverage.cobertura.xml"
-targetdir:"coveragereport"
-reporttypes:Html

After running this command, an HTML report is generated in the coveragereport directory.

See also
Visual Studio unit test code coverage
GitHub - Coverlet repository
GitHub - ReportGenerator repository
ReportGenerator project site
Azure: Publish code coverage results
Azure: Review code coverage results
.NET CLI test command
dotnet-coverage
Sample source code

Next Steps
Unit testing best practices

Collaborate with us on GitHub
The source for this content can be found on GitHub, where you can also create and
review issues and pull requests. For more information, see our contributor guide.

.NET feedback
.NET is an open source project. Select a link to provide feedback:
Open a documentation issue · Provide product feedback
Test published output with dotnet vstest
Article • 05/18/2022

You can run tests on already published output by using the dotnet vstest command.
This works with xUnit, MSTest, and NUnit tests. Locate the DLL file that was part
of your published output and run:

.NET CLI

dotnet vstest <MyPublishedTests>.dll

Where <MyPublishedTests> is the name of your published test project.

Example
The commands below demonstrate running tests on a published DLL.

.NET CLI

dotnet new mstest -o MyProject.Tests
cd MyProject.Tests
dotnet publish -o out
dotnet vstest out/MyProject.Tests.dll

Note

If your app targets a framework other than netcoreapp , you can still run the
dotnet vstest command by passing in the targeted framework with a framework
flag. For example, dotnet vstest <MyPublishedTests>.dll --
Framework:".NETFramework,Version=v4.6" . In Visual Studio 2017 Update 5 and later,
the desired framework is automatically detected.

See also
Unit Testing with dotnet test and xUnit
Unit Testing with dotnet test and NUnit
Unit Testing with dotnet test and MSTest
Get started with Live Unit Testing
Article • 11/02/2023

When you enable Live Unit Testing in a Visual Studio solution, it visually depicts your test
coverage and the status of your tests. Live Unit Testing also dynamically executes tests
whenever you modify your code and immediately notifies you when your changes cause
tests to fail.

Live Unit Testing can be used to test solutions that target .NET Framework, .NET
Core, or .NET 5+. In this tutorial, you'll learn to use Live Unit Testing by creating a simple
class library that targets .NET, and you'll create an MSTest project that targets .NET to
test it.

The complete C# solution can be downloaded from the MicrosoftDocs/visualstudio-
docs repo on GitHub.

Prerequisites
This tutorial requires that you've installed Visual Studio Enterprise edition with the .NET
desktop development workload.

Create the solution and the class library project


Begin by creating a Visual Studio solution named UtilityLibraries that consists of a single
.NET class library project, StringLibrary.

The solution is just a container for one or more projects. To create a blank solution, open
Visual Studio and do the following:

1. Select File > New > Project from the top-level Visual Studio menu.

2. Type solution into the template search box, and then select the Blank Solution
template. Name the project UtilityLibraries.

3. Finish creating the solution.

Now that you've created the solution, you'll create a class library named StringLibrary
that contains a number of extension methods for working with strings.

1. In Solution Explorer, right-click on the UtilityLibraries solution and select Add >
New Project.
2. Type class library into the template search box, and then select the Class Library
template that targets .NET or .NET Standard. Click Next.

3. Name the project StringLibrary.

4. Click Create to create the project.

5. Replace all of the existing code in the code editor with the following code:

C#

using System;

namespace UtilityLibraries
{
    public static class StringLibrary
    {
        public static bool StartsWithUpper(this string s)
        {
            if (String.IsNullOrWhiteSpace(s))
                return false;

            return Char.IsUpper(s[0]);
        }

        public static bool StartsWithLower(this string s)
        {
            if (String.IsNullOrWhiteSpace(s))
                return false;

            return Char.IsLower(s[0]);
        }

        public static bool HasEmbeddedSpaces(this string s)
        {
            foreach (var ch in s.Trim())
            {
                if (ch == ' ')
                    return true;
            }
            return false;
        }
    }
}

StringLibrary has three static methods:

StartsWithUpper returns true if a string starts with an uppercase character;
otherwise, it returns false .

StartsWithLower returns true if a string starts with a lowercase character;
otherwise, it returns false .

HasEmbeddedSpaces returns true if a string contains an embedded whitespace
character; otherwise, it returns false .

6. Select Build > Build Solution from the top-level Visual Studio menu. The build
should succeed.
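
Before moving on, it may help to see the behavior these methods encode in isolation. The following is an illustrative sketch of the same logic in Python (the tutorial's real code is the C# above; note that HasEmbeddedSpaces deliberately checks only for U+0020, which is the bug explored later in this article):

```python
def starts_with_upper(s):
    """True if s starts with an uppercase character (mirrors StartsWithUpper)."""
    if s is None or not s.strip():   # analogous to String.IsNullOrWhiteSpace
        return False
    return s[0].isupper()

def starts_with_lower(s):
    """True if s starts with a lowercase character (mirrors StartsWithLower)."""
    if s is None or not s.strip():
        return False
    return s[0].islower()

def has_embedded_spaces(s):
    """True if the trimmed string contains U+0020 (mirrors HasEmbeddedSpaces,
    including its too-narrow definition of a 'space')."""
    return any(ch == ' ' for ch in s.strip())
```

The unit tests you write next exercise exactly these true/false branches.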

Create the test project


The next step is to create the unit test project to test the StringLibrary library. Create the
unit tests by performing the following steps:

1. In Solution Explorer, right-click on the UtilityLibraries solution and select Add >
New Project.

2. Type unit test into the template search box, select C# as the language, and then
select the MSTest Unit Test Project for .NET template. Click Next.

Note

In Visual Studio 2019 version 16.9, the MSTest project template name is Unit
Test Project.

3. Name the project StringLibraryTests and click Next.

4. Choose either the recommended target framework or .NET 8, and then choose
Create.

Note

This getting started tutorial uses Live Unit Testing with the MSTest test
framework. You can also use the xUnit and NUnit test frameworks.

5. The unit test project can't automatically access the class library that it is testing.
You give the test library access by adding a reference to the class library project. To
do this, right-click the StringLibraryTests project and select Add > Project
Reference. In the Reference Manager dialog, make sure the Solution tab is
selected, and select the StringLibrary project, as shown in the following illustration.
6. Replace the boilerplate unit test code provided by the template with the following
code:

C#

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using UtilityLibraries;

namespace StringLibraryTest
{
    [TestClass]
    public class UnitTest1
    {
        [TestMethod]
        public void TestStartsWithUpper()
        {
            // Tests that we expect to return true.
            string[] words = { "Alphabet", "Zebra", "ABC", "Αθήνα", "Москва" };
            foreach (var word in words)
            {
                bool result = word.StartsWithUpper();
                Assert.IsTrue(result,
                    $"Expected for '{word}': true; Actual: {result}");
            }
        }

        [TestMethod]
        public void TestDoesNotStartWithUpper()
        {
            // Tests that we expect to return false.
            string[] words = { "alphabet", "zebra", "abc",
                               "αυτοκινητοβιομηχανία", "государство",
                               "1234", ".", ";", " " };
            foreach (var word in words)
            {
                bool result = word.StartsWithUpper();
                Assert.IsFalse(result,
                    $"Expected for '{word}': false; Actual: {result}");
            }
        }

        [TestMethod]
        public void DirectCallWithNullOrEmpty()
        {
            // Tests that we expect to return false.
            string[] words = { String.Empty, null };
            foreach (var word in words)
            {
                bool result = StringLibrary.StartsWithUpper(word);
                Assert.IsFalse(result,
                    $"Expected for '{(word == null ? "<null>" : word)}': " +
                    $"false; Actual: {result}");
            }
        }
    }
}

7. Save your project by selecting the Save icon on the toolbar.

Because the unit test code includes some non-ASCII characters, you will see the
following dialog to warn that some characters will be lost if you save the file in its
default ASCII format.

8. Choose the Save with Other Encoding button.


9. In the Encoding drop-down list of the Advanced Save Options dialog, choose
Unicode (UTF-8 without signature) - Codepage 65001, as the following illustration
shows:

10. Compile the unit test project by selecting Build > Rebuild Solution from the top-
level Visual Studio menu.

You've created a class library as well as some unit tests for it. You've now finished the
preliminaries needed to use Live Unit Testing.

Enable Live Unit Testing


So far, although you've written the tests for the StringLibrary class library, you haven't
executed them. Live Unit Testing executes them automatically once you enable it. To do
that, do the following:

1. Optionally, select the code editor window that contains the code for StringLibrary.
This is either Class1.cs for a C# project or Class1.vb for a Visual Basic project. (This
step lets you visually inspect the result of your tests and the extent of your code
coverage once you enable Live Unit Testing.)

2. Select Test > Live Unit Testing > Start from the top-level Visual Studio menu.

3. Verify the configuration for Live Unit Testing by ensuring the Repository Root
includes the path to the source files for both the utility project and the test project.
Select Next and then Finish.

4. In the Live Unit Testing window, select the include all tests link (Alternatively, select
the Playlist button icon, then select the StringLibraryTest, which selects all the
tests underneath it. Then deselect the Playlist button to exit edit mode.)

5. Visual Studio will rebuild the project and start Live Unit Testing, which automatically
runs all of your tests.
When it finishes running your tests, Live Unit Testing displays both the overall results
and the result of individual tests. In addition, the code editor window graphically
displays both your test code coverage and the result for your tests. As the following
illustration shows, all three tests have executed successfully. It also shows that our tests
have covered all code paths in the StartsWithUpper method, and those tests all
executed successfully (which is indicated by the green check mark, "✓"). Finally, it shows
that none of the other methods in StringLibrary have code coverage (which is indicated
by a blue line, "➖").

You can also get more detailed information about test coverage and test results by
selecting a particular code coverage icon in the code editor window. To examine this
detail, do the following:

1. Click on the green check mark on the line that reads if
(String.IsNullOrWhiteSpace(s)) in the StartsWithUpper method. As the following
illustration shows, Live Unit Testing indicates that three tests cover that line of
code, and that all have executed successfully.
2. Click on the green check mark on the line that reads return Char.IsUpper(s[0]) in
the StartsWithUpper method. As the following illustration shows, Live Unit Testing
indicates that only two tests cover that line of code, and that all have executed
successfully.

The major issue that Live Unit Testing identifies is incomplete code coverage. You'll
address it in the next section.

Expand test coverage


In this section, you'll extend your unit tests to the StartsWithLower method. While you
do that, Live Unit Testing will dynamically continue to test your code.

To extend code coverage to the StartsWithLower method, do the following:

1. Add the following TestStartsWithLower and TestDoesNotStartWithLower methods
to your project's test source code file:

C#
// Code to add to UnitTest1.cs
[TestMethod]
public void TestStartsWithLower()
{
    // Tests that we expect to return true.
    string[] words = { "alphabet", "zebra", "abc",
                       "αυτοκινητοβιομηχανία", "государство" };
    foreach (var word in words)
    {
        bool result = word.StartsWithLower();
        Assert.IsTrue(result,
            $"Expected for '{word}': true; Actual: {result}");
    }
}

[TestMethod]
public void TestDoesNotStartWithLower()
{
    // Tests that we expect to return false.
    string[] words = { "Alphabet", "Zebra", "ABC", "Αθήνα", "Москва",
                       "1234", ".", ";", " " };
    foreach (var word in words)
    {
        bool result = word.StartsWithLower();
        Assert.IsFalse(result,
            $"Expected for '{word}': false; Actual: {result}");
    }
}

2. Modify the DirectCallWithNullOrEmpty method by adding the following code
immediately after the call to the
Microsoft.VisualStudio.TestTools.UnitTesting.Assert.IsFalse method.

C#

// Code to add to UnitTest1.cs
result = StringLibrary.StartsWithLower(word);
Assert.IsFalse(result,
    $"Expected for '{(word == null ? "<null>" : word)}': " +
    $"false; Actual: {result}");

3. Live Unit Testing automatically executes new and modified tests when you modify
your source code. As the following illustration shows, all of the tests, including the
two you've added and the one you've modified, have succeeded.
4. Switch to the window that contains the source code for the StringLibrary class. Live
Unit Testing now shows that our code coverage is extended to the
StartsWithLower method.
In some cases, successful tests in Test Explorer might be grayed-out. That indicates that
a test is currently executing, or that the test has not run again because there have been
no code changes that would impact the test since it was last executed.

So far, all of our tests have succeeded. In the next section, we'll examine how you can
handle test failure.

Handle a test failure


In this section, you'll explore how you can use Live Unit Testing to identify, troubleshoot,
and address test failures. You'll do this by expanding test coverage to the
HasEmbeddedSpaces method.

1. Add the following method to your test file:

C#

[TestMethod]
public void TestHasEmbeddedSpaces()
{
    // Tests that we expect to return true.
    string[] phrases = { "one car", "Name\u0009Description",
                         "Line1\nLine2", "Line3\u000ALine4",
                         "Line5\u000BLine6", "Line7\u000CLine8",
                         "Line0009\u000DLine10", "word1\u00A0word2" };
    foreach (var phrase in phrases)
    {
        bool result = phrase.HasEmbeddedSpaces();
        Assert.IsTrue(result,
            $"Expected for '{phrase}': true; Actual: {result}");
    }
}

2. When the test executes, Live Unit Testing indicates that the TestHasEmbeddedSpaces
method has failed, as the following illustration shows:
3. Select the window that displays the library code. Live Unit Testing has expanded
code coverage to the HasEmbeddedSpaces method. It also reports the test failure by
adding a red "🞩" to lines covered by failing tests.

4. Hover over the line with the HasEmbeddedSpaces method signature. Live Unit Testing
displays a tooltip that reports that the method is covered by one test, as the
following illustration shows:

5. Select the failed TestHasEmbeddedSpaces test. Live Unit Testing gives you a few
options such as running all tests and debugging all tests, as the following
illustration shows:
6. Select Debug All to debug the failed test.

7. Visual Studio executes the test in debug mode.

The test assigns each string in an array to a variable named phrase and passes it to
the HasEmbeddedSpaces method. Program execution pauses and invokes the
debugger the first time the assert expression is false . The exception dialog that
results from the unexpected value in the
Microsoft.VisualStudio.TestTools.UnitTesting.Assert.IsTrue method call is shown in
the following illustration.

In addition, all of the debugging tools that Visual Studio provides are available to
help us troubleshoot our failed test, as the following illustration shows:

Note in the Autos window that the value of the phrase variable is
"Name\tDescription", which is the second element of the array. The test method
expects HasEmbeddedSpaces to return true when it is passed this string; instead, it
returns false . Evidently, it does not recognize "\t", the tab character, as an
embedded space.

8. Select Debug > Continue, press F5, or click the Continue button on the toolbar to
continue executing the test program. Because an unhandled exception occurred,
the test terminates. This provides enough information for a preliminary
investigation of the bug. Either TestHasEmbeddedSpaces (the test routine) made an
incorrect assumption, or HasEmbeddedSpaces does not correctly recognize all
embedded spaces.

9. To diagnose and correct the problem, start with the
StringLibrary.HasEmbeddedSpaces method. Look at the comparison in the
HasEmbeddedSpaces method. It considers an embedded space to be U+0020.

However, the Unicode Standard includes a number of other space characters. This
suggests that the library code has incorrectly tested for a whitespace character.

10. Replace the equality comparison with a call to the System.Char.IsWhiteSpace
method:

C#

if (Char.IsWhiteSpace(ch))
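
The distinction matters because Unicode defines many whitespace code points beyond U+0020. As an illustrative aside (in Python, where str.isspace plays the role of Char.IsWhiteSpace), comparing the naive literal-space check against a Unicode-aware one over separators drawn from the failing test's data shows why only the first case passed:

```python
# Separator characters drawn from the failing TestHasEmbeddedSpaces data:
# space, tab, line feed, carriage return, and no-break space.
separators = ["\u0020", "\u0009", "\u000A", "\u000D", "\u00A0"]

# Naive check, equivalent to the buggy `ch == ' '` comparison.
naive = [ch == " " for ch in separators]

# Unicode-aware check, analogous to Char.IsWhiteSpace(ch).
aware = [ch.isspace() for ch in separators]
```

Only the literal space satisfies the naive check, while every separator satisfies the Unicode-aware one.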

11. Live Unit Testing automatically reruns the failed test method.

Live Unit Testing shows the updated results appear, which also appear in the code
editor window.

Related content
Live Unit Testing in Visual Studio
Live Unit Testing Frequently Asked Questions

Feedback
Was this page helpful?  Yes  No
.NET application publishing overview
Article • 03/19/2024

Applications you create with .NET can be published in two different modes, and the
mode affects how a user runs your app.

Publishing your app as self-contained produces an application that includes the .NET
runtime and libraries, and your application and its dependencies. Users of the
application can run it on a machine that doesn't have the .NET runtime installed.

Publishing your app as framework-dependent produces an application that includes only
your application itself and its dependencies. Users of the application have to separately
install the .NET runtime.

Both publishing modes produce a platform-specific executable by default. Framework-
dependent applications can be created without an executable, and these applications
are cross-platform.

When an executable is produced, you can specify the target platform with a runtime
identifier (RID). For more information about RIDs, see .NET RID Catalog.

The following table outlines the commands used to publish an app as framework-
dependent or self-contained:

Type                                                      Command
framework-dependent executable for the current platform   dotnet publish
framework-dependent executable for a specific platform    dotnet publish -r <RID>
framework-dependent binary                                dotnet publish
self-contained executable                                 dotnet publish -r <RID> --self-contained

For more information, see .NET dotnet publish command.

Produce an executable
Executables aren't cross-platform, they're specific to an operating system and CPU
architecture. When publishing your app and creating an executable, you can publish the
app as self-contained or framework-dependent. Publishing an app as self-contained
includes the .NET runtime with the app, and users of the app don't have to worry about
installing .NET before running the app. Publishing an app as framework-dependent
doesn't include the .NET runtime; only the app and third-party dependencies are
included.

The following commands produce an executable:

Type                                                      Command
framework-dependent executable for the current platform   dotnet publish
framework-dependent executable for a specific platform    dotnet publish -r <RID>
self-contained executable                                 dotnet publish -r <RID> --self-contained

Produce a cross-platform binary


Cross-platform binaries are created when you publish your app as framework-
dependent, in the form of a dll file. The dll file is named after your project. For example,
if you have an app named word_reader, a file named word_reader.dll is created. Apps
published in this way are run with the dotnet <filename.dll> command and can be run
on any platform.

Cross-platform binaries can be run on any operating system as long as the targeted
.NET runtime is already installed. If the targeted .NET runtime isn't installed, the app may
run using a newer runtime if the app is configured to roll-forward. For more information,
see framework-dependent apps roll forward.

You can choose to run the app as a platform-specific executable or as a cross-platform
binary via the dotnet command. There should be no app behavior difference when
launching the platform-specific executable versus the dotnet command for ordinary
server apps. Launching via a platform-specific executable gives you better integration
with the underlying OS. For example:
You see the application executable name in your process list and not dotnet , which
could be confusing if there's more than one.
You can customize the platform-specific executable with OS specific features. For
example, see this discussion about configuring default stack size on Windows .

The following command produces a cross-platform binary:

Type                                        Command
framework-dependent cross-platform binary   dotnet publish

Publish framework-dependent
Apps published as framework-dependent are cross-platform and don't include the .NET
runtime. The user of your app is required to install the .NET runtime.

Publishing an app as framework-dependent produces a cross-platform binary as a dll
file, and a platform-specific executable that targets your current platform. The dll is
cross-platform while the executable isn't. For example, if you publish an app named
word_reader and target Windows, a word_reader.exe executable is created along with
word_reader.dll. When targeting Linux or macOS, a word_reader executable is created
along with word_reader.dll. If the app uses a NuGet package that has platform-specific
implementations, dependencies for all platforms are copied to the publish\runtimes\
{platform} folder.

The cross-platform binary of your app can be run with the dotnet <filename.dll>
command, and can be run on any platform.

Platform-specific and framework-dependent


You can publish a framework-dependent app that's platform-specific by passing the -r
<RID> parameters to the dotnet publish command. Publishing in this way is the same as
publish framework-dependent, except that platform-specific dependencies are handled
differently. If the app uses a NuGet package that has platform-specific implementations,
only the targeted platform's dependencies are copied. These dependencies are copied
directly to the publish folder.

While technically the binary produced is cross-platform, by targeting a specific platform,
your app isn't guaranteed to run cross-platform. You can run dotnet <filename.dll> ,
but the app may crash when it tries to access platform-specific dependencies that are
missing.

For more information about RIDs, see .NET RID Catalog.

Advantages
Small deployment
Only your app and its dependencies are distributed. The .NET runtime and libraries
are installed by the user and all apps share the runtime.

Cross-platform
Your app and any .NET-based library runs on other operating systems. You don't
need to define a target platform for your app. For information about the .NET file
format, see .NET Assembly File Format.

Uses the latest patched runtime
The app uses the latest runtime (within the targeted major-minor family of .NET)
installed on the target system. This means your app automatically uses the latest
patched version of the .NET runtime. This default behavior can be overridden. For
more information, see framework-dependent apps roll forward.

Disadvantages
Requires pre-installing the runtime
Your app can run only if the version of .NET your app targets is already installed on
the host system. You can configure roll-forward behavior for the app to either
require a specific version of .NET or allow a newer version of .NET. For more
information, see framework-dependent apps roll forward.

.NET may change
It's possible for the .NET runtime and libraries to be updated on the machine
where the app is run. In rare cases, this may change the behavior of your app if you
use the .NET libraries, which most apps do. You can configure how your app uses
newer versions of .NET. For more information, see framework-dependent apps roll
forward.

Examples
Publish an app as cross-platform and framework-dependent. An executable that targets
your current platform is created along with the dll file. Any platform-specific
dependencies are published with the app.
.NET CLI

dotnet publish

Publish an app as platform-specific and framework-dependent. A Linux 64-bit


executable is created along with the dll file. Only the targeted platform's dependencies
are published with the app.

.NET CLI

dotnet publish -r linux-x64

Publish self-contained
Publishing your app as self-contained produces a platform-specific executable. The
output publishing folder contains all components of the app, including the .NET libraries
and target runtime. The app is isolated from other .NET apps and doesn't use a locally
installed shared runtime. The user of your app isn't required to download and install
.NET.

You can publish a self-contained app by passing the --self-contained parameter to the
dotnet publish command. The executable binary is produced for the specified target
platform. For example, if you have an app named word_reader, and you publish a self-
contained executable for Windows, a word_reader.exe file is created. Publishing for Linux
or macOS, a word_reader file is created. The target platform and architecture is specified
with the -r <RID> parameter for the dotnet publish command. For more information
about RIDs, see .NET RID Catalog.

If the app has platform-specific dependencies, such as a NuGet package containing


platform-specific dependencies, these are copied to the publish folder along with the
app.

Advantages
Control .NET version
You control which version of .NET is deployed with your app.

Platform-specific targeting
Because you have to publish your app for each platform, you know where your app
runs. If .NET introduces a new platform, users can't run your app on that platform
until you release a version targeting that platform. You can test your app for
compatibility problems before your users run your app on the new platform.

Disadvantages
Larger deployments
Because your app includes the .NET runtime and all of your app dependencies, the
download size and hard drive space required is greater than a framework-
dependent version.

Tip

You can reduce the size of your deployment on Linux systems by
approximately 28 MB by using .NET globalization invariant mode. This
forces your app to treat all cultures like the invariant culture.

Tip

IL trimming can further reduce the size of your deployment.

Harder to update the .NET version
.NET Runtime (distributed with your app) can only be upgraded by releasing a new
version of your app.

Examples
Publish an app self-contained. A macOS 64-bit executable is created.

.NET CLI

dotnet publish -r osx-x64 --self-contained

Publish an app self-contained. A Windows 64-bit executable is created.

.NET CLI

dotnet publish -r win-x64 --self-contained

Publish with ReadyToRun images


Publishing with ReadyToRun images improves the startup time of your application at the
cost of increasing the size of your application. For more information, see ReadyToRun.

Advantages
Improved startup time
The application spends less time running the JIT.

Disadvantages
Larger size
The application is larger on disk.

Examples
Publish an app self-contained and ReadyToRun. A macOS 64-bit executable is created.

.NET CLI

dotnet publish -c Release -r osx-x64 --self-contained -p:PublishReadyToRun=true

Publish an app self-contained and ReadyToRun. A Windows 64-bit executable is created.

.NET CLI

dotnet publish -c Release -r win-x64 --self-contained -p:PublishReadyToRun=true

See also
Deploying .NET Apps with .NET CLI.
Deploying .NET Apps with Visual Studio.
.NET Runtime Identifier (RID) catalog.
Select the .NET version to use.

Deploy .NET Core apps with Visual Studio
Article • 10/11/2022

You can deploy a .NET Core application either as a framework-dependent deployment,
which includes your application binaries but depends on the presence of .NET Core on
the target system, or as a self-contained deployment, which includes both your
application and .NET Core binaries. For an overview of .NET Core application
deployment, see .NET Core Application Deployment.

The following sections show how to use Microsoft Visual Studio to create the following
kinds of deployments:

Framework-dependent deployment
Framework-dependent deployment with third-party dependencies
Self-contained deployment
Self-contained deployment with third-party dependencies

For information on using Visual Studio to develop .NET Core applications, see .NET Core
dependencies and requirements.

Framework-dependent deployment
Deploying a framework-dependent deployment with no third-party dependencies
simply involves building, testing, and publishing the app. A simple example written in C#
illustrates the process.

1. Create the project.

Select File > New > Project. In the New Project dialog, expand your language's
(C# or Visual Basic) project categories in the Installed project types pane, choose
.NET Core, and then select the Console App (.NET Core) template in the center
pane. Enter a project name, such as "FDD", in the Name text box. Select the OK
button.

2. Add the application's source code.

Open the Program.cs or Program.vb file in the editor and replace the
autogenerated code with the following code. It prompts the user to enter text and
displays the individual words entered by the user. It uses the regular expression
\w+ to separate the words in the input text.
C#

using System;
using System.Text.RegularExpressions;

namespace Applications.ConsoleApps
{
    public class ConsoleParser
    {
        public static void Main()
        {
            Console.WriteLine("Enter any text, followed by <Enter>:\n");
            String? s = Console.ReadLine();
            ShowWords(s ?? "You didn't enter anything.");
            Console.Write("\nPress any key to continue... ");
            Console.ReadKey();
        }

        private static void ShowWords(String s)
        {
            String pattern = @"\w+";
            var matches = Regex.Matches(s, pattern);
            if (matches.Count == 0)
            {
                Console.WriteLine("\nNo words were identified in your input.");
            }
            else
            {
                Console.WriteLine($"\nThere are {matches.Count} words in your string:");
                for (int ctr = 0; ctr < matches.Count; ctr++)
                {
                    Console.WriteLine($"   #{ctr,2}: '{matches[ctr].Value}' at position {matches[ctr].Index}");
                }
            }
        }
    }
}

3. Create a Debug build of your app.

Select Build > Build Solution. You can also compile and run the Debug build of
your application by selecting Debug > Start Debugging.

4. Deploy your app.

After you've debugged and tested the program, create the files to be deployed
with your app. To publish from Visual Studio, do the following:
a. Change the solution configuration from Debug to Release on the toolbar to
build a Release (rather than a Debug) version of your app.

b. Right-click on the project (not the solution) in Solution Explorer and select
Publish.

c. In the Publish tab, select Publish. Visual Studio writes the files that comprise
your application to the local file system.

d. The Publish tab now shows a single profile, FolderProfile. The profile's
configuration settings are shown in the Summary section of the tab.

The resulting files are placed in a directory named Publish (on Windows; publish on
Unix systems) inside your project's .\bin\release\netcoreapp2.1 subdirectory.

Along with your application's files, the publishing process emits a program database
(.pdb) file that contains debugging information about your app. The file is useful
primarily for debugging exceptions. You can choose not to package it with your
application's files. You should, however, save it in the event that you want to debug the
Release build of your app.

Deploy the complete set of application files in any way you like. For example, you can
package them in a Zip file, use a simple copy command, or deploy them with any
installation package of your choice. Once installed, users can then execute your
application by using the dotnet command and providing the application filename, such
as dotnet fdd.dll .
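The "package them in a Zip file" option above can be scripted in a few lines of C#. The following is a minimal sketch using System.IO.Compression; the directory and file names are placeholders created under the temp folder for illustration, not your real publish output.

```csharp
using System;
using System.IO;
using System.IO.Compression;

// Placeholder publish folder for illustration; point this at your real
// publish output (for example, bin/Release/<TFM>/publish).
string publishDir = Path.Combine(Path.GetTempPath(), "fdd-publish-sample");
Directory.CreateDirectory(publishDir);
File.WriteAllText(Path.Combine(publishDir, "readme.txt"), "sample publish output");

string zipPath = Path.Combine(Path.GetTempPath(), "fdd-publish-sample.zip");
if (File.Exists(zipPath))
{
    File.Delete(zipPath);
}

// Bundle the entire publish folder into a single archive for distribution.
ZipFile.CreateFromDirectory(publishDir, zipPath);
Console.WriteLine($"Created {zipPath}");
```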

In addition to the application binaries, your installer should also either bundle the
shared framework installer or check for it as a prerequisite as part of the application
installation. Installation of the shared framework requires Administrator/root access
since it is machine-wide.

Framework-dependent deployment with third-party dependencies
Deploying a framework-dependent deployment with one or more third-party
dependencies requires that any dependencies be available to your project. The following
additional steps are required before you can build your app:

1. Use the NuGet Package Manager to add a reference to a NuGet package to your
project; and if the package is not already available on your system, install it. To
open the package manager, select Tools > NuGet Package Manager > Manage
NuGet Packages for Solution.

2. Confirm that your third-party dependencies (for example, Newtonsoft.Json ) are
installed on your system and, if they aren't, install them. The Installed tab lists
NuGet packages installed on your system. If Newtonsoft.Json is not listed there,
select the Browse tab and enter "Newtonsoft.Json" in the search box. Select
Newtonsoft.Json and, in the right pane, select your project before selecting Install.

3. If Newtonsoft.Json is already installed on your system, add it to your project by
selecting your project in the right pane of the Manage Packages for Solution tab.

A framework-dependent deployment with third-party dependencies is only as portable
as its third-party dependencies. For example, if a third-party library only supports
macOS, the app isn't portable to Windows systems. This happens if the third-party
dependency itself depends on native code. A good example of this is Kestrel server,
which requires a native dependency on libuv . When an FDD is created for an
application with this kind of third-party dependency, the published output contains a
folder for each Runtime Identifier (RID) that the native dependency supports (and that
exists in its NuGet package).

Self-contained deployment without third-party dependencies
Deploying a self-contained deployment with no third-party dependencies involves
creating the project, modifying the csproj file, building, testing, and publishing the app.
A simple example written in C# illustrates the process. You begin by creating, coding,
and testing your project just as you would a framework-dependent deployment:

1. Create the project.

Select File > New > Project. In the New Project dialog, expand your language's
(C# or Visual Basic) project categories in the Installed project types pane, choose
.NET Core, and then select the Console App (.NET Core) template in the center
pane. Enter a project name, such as "SCD", in the Name text box, and select the OK
button.

2. Add the application's source code.

Open the Program.cs or Program.vb file in your editor, and replace the
autogenerated code with the following code. It prompts the user to enter text and
displays the individual words entered by the user. It uses the regular expression
\w+ to separate the words in the input text.

C#

using System;
using System.Text.RegularExpressions;

namespace Applications.ConsoleApps
{
    public class ConsoleParser
    {
        public static void Main()
        {
            Console.WriteLine("Enter any text, followed by <Enter>:\n");
            String? s = Console.ReadLine();
            ShowWords(s ?? "You didn't enter anything.");
            Console.Write("\nPress any key to continue... ");
            Console.ReadKey();
        }

        private static void ShowWords(String s)
        {
            String pattern = @"\w+";
            var matches = Regex.Matches(s, pattern);
            if (matches.Count == 0)
            {
                Console.WriteLine("\nNo words were identified in your input.");
            }
            else
            {
                Console.WriteLine($"\nThere are {matches.Count} words in your string:");
                for (int ctr = 0; ctr < matches.Count; ctr++)
                {
                    Console.WriteLine($"   #{ctr,2}: '{matches[ctr].Value}' at position {matches[ctr].Index}");
                }
            }
        }
    }
}

3. Determine whether you want to use globalization invariant mode.

Particularly if your app targets Linux, you can reduce the total size of your
deployment by taking advantage of globalization invariant mode . Globalization
invariant mode is useful for applications that are not globally aware and that can
use the formatting conventions, casing conventions, and string comparison and
sort order of the invariant culture.
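The effect of the invariant culture can be seen in ordinary formatting code. The following is a small C# sketch (independent of the project setting) showing the output your app would always produce once every culture behaves like CultureInfo.InvariantCulture: a '.' decimal separator and English month names, regardless of the machine's regional settings.

```csharp
using System;
using System.Globalization;

// Formatting with the invariant culture is stable across machines.
double price = 1234.5;
string formatted = price.ToString(CultureInfo.InvariantCulture);

var date = new DateTime(2024, 3, 1);
string month = date.ToString("MMMM", CultureInfo.InvariantCulture);

Console.WriteLine(formatted); // 1234.5
Console.WriteLine(month);     // March
```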

To enable invariant mode, right-click on your project (not the solution) in Solution
Explorer, and select Edit SCD.csproj or Edit SCD.vbproj. Then add the following
highlighted lines to the file:

XML

<Project Sdk="Microsoft.NET.Sdk">

<PropertyGroup>
<TargetFramework>net6.0</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>

<ItemGroup>
<RuntimeHostConfigurationOption
Include="System.Globalization.Invariant" Value="true" />
</ItemGroup>

</Project>

4. Create a Debug build of your application.

Select Build > Build Solution. You can also compile and run the Debug build of
your application by selecting Debug > Start Debugging. This debugging step lets
you identify problems with your application when it's running on your host
platform. You still will have to test it on each of your target platforms.

If you've enabled globalization invariant mode, be particularly sure to test whether
the absence of culture-sensitive data is suitable for your application.

Once you've finished debugging, you can publish your self-contained deployment:

Visual Studio 15.6 and earlier

After you've debugged and tested the program, create the files to be deployed with
your app for each platform that it targets.

To publish your app from Visual Studio, do the following:

1. Define the platforms that your app will target.

a. Right-click on your project (not the solution) in Solution Explorer and
select Edit SCD.csproj.
b. Create a <RuntimeIdentifiers> tag in the <PropertyGroup> section of your
csproj file that defines the platforms your app targets, and specify the
runtime identifier (RID) of each platform that you target. You also need to
add a semicolon to separate the RIDs. See Runtime identifier catalog for a
list of runtime identifiers.

For example, the following example indicates that the app runs on 64-bit
Windows 10 operating systems and the 64-bit OS X Version 10.11 operating
system.

XML

<PropertyGroup>
<RuntimeIdentifiers>win10-x64;osx.10.11-x64</RuntimeIdentifiers>
</PropertyGroup>

The <RuntimeIdentifiers> element can go into any <PropertyGroup> that you
have in your csproj file. A complete sample csproj file appears later in this
section.

2. Publish your app.

After you've debugged and tested the program, create the files to be
deployed with your app for each platform that it targets.

To publish your app from Visual Studio, do the following:

a. Change the solution configuration from Debug to Release on the toolbar
to build a Release (rather than a Debug) version of your app.

b. Right-click on the project (not the solution) in Solution Explorer and select
Publish.

c. In the Publish tab, select Publish. Visual Studio writes the files that
comprise your application to the local file system.

d. The Publish tab now shows a single profile, FolderProfile. The profile's
configuration settings are shown in the Summary section of the tab. Target
Runtime identifies which runtime has been published, and Target Location
identifies where the files for the self-contained deployment were written.

e. Visual Studio by default writes all published files to a single directory. For
convenience, it's best to create separate profiles for each target runtime
and to place published files in a platform-specific directory. This involves
creating a separate publishing profile for each target platform. So now
rebuild the application for each platform by doing the following:

i. Select Create new profile in the Publish dialog.

ii. In the Pick a publish target dialog, change the Choose a folder location
to bin\Release\PublishOutput\win10-x64. Select OK.

iii. Select the new profile (FolderProfile1) in the list of profiles, and make
sure that the Target Runtime is win10-x64 . If it isn't, select Settings. In
the Profile Settings dialog, change the Target Runtime to win10-x64
and select Save. Otherwise, select Cancel.

iv. Select Publish to publish your app for 64-bit Windows 10 platforms.

v. Follow the previous steps again to create a profile for the osx.10.11-x64
platform. The Target Location is bin\Release\PublishOutput\osx.10.11-
x64, and the Target Runtime is osx.10.11-x64 . The name that Visual
Studio assigns to this profile is FolderProfile2.

Each target location contains the complete set of files (both your app files and
all .NET Core files) needed to launch your app.

Along with your application's files, the publishing process emits a program
database (.pdb) file that contains debugging information about your app. The file is
useful primarily for debugging exceptions. You can choose not to package it with
your application's files. You should, however, save it in the event that you want to
debug the Release build of your app.

Deploy the published files in any way you like. For example, you can package them
in a Zip file, use a simple copy command, or deploy them with any installation
package of your choice.

The following is the complete csproj file for this project.

XML

<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp2.1</TargetFramework>
<RuntimeIdentifiers>win10-x64;osx.10.11-x64</RuntimeIdentifiers>
</PropertyGroup>
</Project>

Self-contained deployment with third-party dependencies
Deploying a self-contained deployment with one or more third-party dependencies
involves adding the dependencies. The following additional steps are required before
you can build your app:

1. Use the NuGet Package Manager to add a reference to a NuGet package to your
project; and if the package is not already available on your system, install it. To
open the package manager, select Tools > NuGet Package Manager > Manage
NuGet Packages for Solution.

2. Confirm that your third-party dependencies (for example, Newtonsoft.Json ) are
installed on your system and, if they aren't, install them. The Installed tab lists
NuGet packages installed on your system. If Newtonsoft.Json is not listed there,
select the Browse tab and enter "Newtonsoft.Json" in the search box. Select
Newtonsoft.Json and, in the right pane, select your project before selecting Install.

3. If Newtonsoft.Json is already installed on your system, add it to your project by
selecting your project in the right pane of the Manage Packages for Solution tab.

The following is the complete csproj file for this project:

Visual Studio 15.6 and earlier

XML

<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp2.1</TargetFramework>
<RuntimeIdentifiers>win10-x64;osx.10.11-x64</RuntimeIdentifiers>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Newtonsoft.Json" Version="10.0.2" />
</ItemGroup>
</Project>

When you deploy your application, any third-party dependencies used in your app are
also contained with your application files. Third-party libraries aren't required on the
system on which the app is running.
You can only deploy a self-contained deployment with a third-party library to platforms
supported by that library. This is similar to having third-party dependencies with native
dependencies in your framework-dependent deployment, where the native
dependencies won't exist on the target platform unless they were previously installed
there.

See also
.NET Core Application Deployment
.NET Core Runtime Identifier (RID) catalog
Publish .NET apps with the .NET CLI
Article • 09/05/2024

This article demonstrates how you can publish your .NET application from the command
line. .NET provides three ways to publish your applications. Framework-dependent
deployment produces a cross-platform .dll file that uses the locally installed .NET
runtime. Framework-dependent executable produces a platform-specific executable that
uses the locally installed .NET runtime. Self-contained executable produces a platform-
specific executable and includes a local copy of the .NET runtime.

For an overview of these publishing modes, see .NET Application Deployment.

Looking for some quick help on using the CLI? The following table shows some
examples of how to publish your app. You can specify the target framework with the
-f <TFM> parameter or by editing the project file. For more information, see
Publishing basics.

Publish Mode                     Command

Framework-dependent deployment   dotnet publish -c Release -p:UseAppHost=false

Framework-dependent executable   dotnet publish -c Release -r <RID> --self-contained false
                                 dotnet publish -c Release

Self-contained deployment        dotnet publish -c Release -r <RID> --self-contained true

Note

The -c Release parameter isn't required. It's provided as a reminder to publish the
Release build of your app.

In .NET SDK 3.1 or higher, framework-dependent executable is the default publishing
mode when running the basic dotnet publish command.

Publishing basics
The <TargetFramework> setting of the project file specifies the default target framework
when you publish your app. You can change the target framework to any valid Target
Framework Moniker (TFM). For example, if your project uses
<TargetFramework>net8.0</TargetFramework> , a binary that targets .NET 8 is created. The

TFM specified in this setting is the default target used by the dotnet publish command.

If you want to target more than one framework, you can set the <TargetFrameworks>
setting to multiple TFM values, separated by a semicolon. When you build your app, a
build is produced for each target framework. However, when you publish your app, you
must specify the target framework with the dotnet publish -f <TFM> command.
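For example, a project file targeting two frameworks might look like the following sketch (the TFM values here are placeholders; use the ones your project needs):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <!-- One build is produced per TFM; dotnet publish requires -f <TFM>. -->
    <TargetFrameworks>net8.0;net6.0</TargetFrameworks>
  </PropertyGroup>
</Project>
```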

The default BUILD-CONFIGURATION mode is Debug unless changed with the -c parameter.

The default output directory of the dotnet publish command is
./bin/<BUILD-CONFIGURATION>/<TFM>/publish/ . For example, dotnet publish -c Release
-f net8.0 publishes to ./bin/Release/net8.0/publish/ . However, you can opt in to a
simplified output path and folder structure for all build outputs. For more
information, see Artifacts output layout.

Native dependencies
If your app has native dependencies, it may not run on a different operating system. For
example, if your app uses the native Windows API, it won't run on macOS or Linux. You
would need to provide platform-specific code and compile an executable for each
platform.

Consider also, if a library you referenced has a native dependency, your app may not run
on every platform. However, it's possible a NuGet package you're referencing has
included platform-specific versions to handle the required native dependencies for you.

When distributing an app with native dependencies, you may need to use the dotnet
publish -r <RID> switch to specify the target platform you want to publish for. For a list

of runtime identifiers, see Runtime Identifier (RID) catalog.

More information about platform-specific binaries is covered in the
Framework-dependent executable and Self-contained deployment sections.

Sample app
You can use the following app to explore the publishing commands. The app is created
by running the following commands in your terminal:

.NET CLI
mkdir apptest1
cd apptest1
dotnet new console
dotnet add package Figgle

The Program.cs or Program.vb file that is generated by the console template needs to
be changed to the following:

C#

using System;

namespace apptest1
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine(Figgle.FiggleFonts.Standard.Render("Hello, World!"));
        }
    }
}

When you run the app (dotnet run), the following output is displayed:

terminal

  _   _      _ _         __        __         _     _ _
 | | | | ___| | | ___    \ \      / /__  _ __| | __| | |
 | |_| |/ _ \ | |/ _ \    \ \ /\ / / _ \| '__| |/ _` | |
 |  _  |  __/ | | (_) |    \ V  V / (_) | |  | | (_| |_|
 |_| |_|\___|_|_|\___( )    \_/\_/ \___/|_|  |_|\__,_(_)
                     |/

Framework-dependent deployment
When you publish your app as an FDD, a <PROJECT-NAME>.dll file is created in the
./bin/<BUILD-CONFIGURATION>/<TFM>/publish/ folder. To run your app, navigate to the

output folder and use the dotnet <PROJECT-NAME>.dll command.

Your app is configured to target a specific version of .NET. That targeted .NET runtime is
required to be on any machine where your app runs. For example, if your app targets
.NET 8, any machine that your app runs on must have the .NET 8 runtime
installed. As stated in the Publishing basics section, you can edit your project file to
change the default target framework or to target more than one framework.

Publishing an FDD creates an app that automatically rolls-forward to the latest .NET
security patch available on the system that runs the app. For more information on
version binding at compile time, see Select the .NET version to use.

Publish Mode                     Command

Framework-dependent deployment   dotnet publish -c Release -p:UseAppHost=false

Framework-dependent executable
Framework-dependent executable (FDE) is the default mode for the basic dotnet
publish command. You don't need to specify any other parameters, as long as you want

to target the current operating system.

In this mode, a platform-specific executable host is created to host your cross-platform
app. This mode is similar to FDD, as FDD requires a host in the form of the dotnet
command. The host executable filename varies per platform and is named something
similar to <PROJECT-FILE>.exe . You can run this executable directly instead of calling
dotnet <PROJECT-FILE>.dll , which is still an acceptable way to run the app.

Your app is configured to target a specific version of .NET. That targeted .NET runtime is
required to be on any machine where your app runs. For example, if your app targets
.NET 8, any machine that your app runs on must have the .NET 8 runtime installed. As
stated in the Publishing basics section, you can edit your project file to change the
default target framework or to target more than one framework.

Publishing an FDE creates an app that automatically rolls-forward to the latest .NET
security patch available on the system that runs the app. For more information on
version binding at compile time, see Select the .NET version to use.

Publish Mode                     Command

Framework-dependent executable   dotnet publish -c Release -r <RID> --self-contained false
                                 dotnet publish -c Release

Whenever you use the -r switch, the output folder path changes to:
./bin/<BUILD-CONFIGURATION>/<TFM>/<RID>/publish/

If you use the example app, run dotnet publish -f net6.0 -r win-x64 --self-contained
false . This command creates the following executable:
./bin/Debug/net6.0/win-x64/publish/apptest1.exe

Note

You can reduce the total size of your deployment by enabling globalization
invariant mode. This mode is useful for applications that are not globally aware
and that can use the formatting conventions, casing conventions, and string
comparison and sort order of the invariant culture. For more information about
globalization invariant mode and how to enable it, see .NET Globalization
Invariant Mode.

Configure .NET install search behavior
In .NET 9 and later versions, you can configure the .NET installation search paths of the
published executable via the AppHostDotNetSearch and AppHostRelativeDotNet
properties.

AppHostDotNetSearch allows specifying one or more locations where the executable will
look for a .NET installation:

AppLocal : app executable's folder
AppRelative : path relative to the app executable
EnvironmentVariables : value of DOTNET_ROOT[_<arch>] environment variables
Global : registered and default global install locations

AppHostRelativeDotNet specifies the path relative to the executable that will be searched

when AppHostDotNetSearch contains AppRelative .
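Putting the two properties together, a project might contain something like the following sketch. The values here are illustrative, and combining multiple search locations with a semicolon is an assumption based on the "one or more locations" wording above; adjust to your layout.

```xml
<PropertyGroup>
  <!-- Search the app's own folder first, then a path relative to the executable. -->
  <AppHostDotNetSearch>AppLocal;AppRelative</AppHostDotNetSearch>
  <AppHostRelativeDotNet>runtime</AppHostRelativeDotNet>
</PropertyGroup>
```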

For more information, see AppHostDotNetSearch, AppHostRelativeDotNet and install
location options in apphost.

Self-contained deployment
When you publish a self-contained deployment (SCD), the .NET SDK creates a
platform-specific executable. Publishing an SCD includes all required .NET files to run
your app but it doesn't include the native dependencies of .NET (for example, for
.NET 6 on Linux or .NET 8 on Linux). These dependencies must be present on the
system before the app runs.

Publishing an SCD creates an app that doesn't roll forward to the latest available .NET
security patch. For more information on version binding at compile time, see Select the
.NET version to use.

You must use the following switches with the dotnet publish command to publish an
SCD:

-r <RID>

This switch uses an identifier (RID) to specify the target platform. For a list of
runtime identifiers, see Runtime Identifier (RID) catalog.

--self-contained true

This switch tells the .NET SDK to create an executable as an SCD.

Publish Mode                Command

Self-contained deployment   dotnet publish -c Release -r <RID> --self-contained true

Tip

In .NET 6 and later versions, you can reduce the total size of compatible
self-contained apps by publishing trimmed. This enables the trimmer to remove
parts of the framework and referenced assemblies that are not on any code
path or potentially referenced in runtime reflection. See trimming
incompatibilities to determine if trimming makes sense for your application.

You can reduce the total size of your deployment by enabling globalization
invariant mode. This mode is useful for applications that are not globally
aware and that can use the formatting conventions, casing conventions, and
string comparison and sort order of the invariant culture. For more
information about globalization invariant mode and how to enable it, see
.NET Core Globalization Invariant Mode.
See also
.NET Application Deployment Overview
.NET Runtime Identifier (RID) catalog
How to create a NuGet package with
the .NET CLI
Article • 02/04/2022

Note

The following shows command-line samples using Unix. The dotnet pack
command as shown here works the same way on Windows.

.NET Standard and .NET Core libraries are expected to be distributed as NuGet
packages. This is in fact how all of the .NET Standard libraries are distributed and
consumed. This is most easily done with the dotnet pack command.

Imagine that you just wrote an awesome new library that you would like to distribute
over NuGet. You can create a NuGet package with cross-platform tools to do exactly
that! The following example assumes a library called SuperAwesomeLibrary that targets
netstandard1.0 .

If you have transitive dependencies, that is, a project that depends on another package,
make sure to restore packages for the entire solution with the dotnet restore
command before you create a NuGet package. Failing to do so will result in the dotnet
pack command not working properly.

You don't have to run dotnet restore because it's run implicitly by all commands that
require a restore to occur, such as dotnet new , dotnet build , dotnet run , dotnet test ,
dotnet publish , and dotnet pack . To disable implicit restore, use the --no-restore

option.

The dotnet restore command is still useful in certain scenarios where explicitly
restoring makes sense, such as continuous integration builds in Azure DevOps Services
or in build systems that need to explicitly control when the restore occurs.

For information about how to manage NuGet feeds, see the dotnet restore
documentation.

After ensuring packages are restored, you can navigate to the directory where a library
lives:

Console
cd src/SuperAwesomeLibrary

Then it's just a single command from the command line:

.NET CLI

dotnet pack

Your /bin/Debug folder will now look like this:

Console

$ ls bin/Debug
netstandard1.0/
SuperAwesomeLibrary.1.0.0.nupkg
SuperAwesomeLibrary.1.0.0.symbols.nupkg

This produces a package that is capable of being debugged. If you want to build a
NuGet package with release binaries, all you need to do is add the --configuration (or
-c ) switch and use release as the argument.

.NET CLI

dotnet pack --configuration release

Your /bin folder will now have a release folder containing your NuGet package with
release binaries:

Console

$ ls bin/release
netstandard1.0/
SuperAwesomeLibrary.1.0.0.nupkg
SuperAwesomeLibrary.1.0.0.symbols.nupkg

And now you have the necessary files to publish a NuGet package!

Don't confuse dotnet pack with dotnet publish
It is important to note that at no point is the dotnet publish command involved. The
dotnet publish command is for deploying applications with all of their dependencies in
the same bundle -- not for generating a NuGet package to be distributed and
consumed via NuGet.

See also
Quickstart: Create and publish a package
Self-contained deployment runtime roll
forward
Article • 09/15/2021

.NET Core self-contained application deployments include both the .NET Core libraries
and the .NET Core runtime. Starting in .NET Core 2.1 SDK (version 2.1.300), a
self-contained application deployment publishes the highest patch runtime on your
machine. By default, dotnet publish for a self-contained deployment selects the latest
version installed as part of the SDK on the publishing machine. This enables your
deployed application to run with security fixes (and other fixes) available during
publish. The application must be republished to obtain a new patch. Self-contained
applications are created by specifying -r <RID> on the dotnet publish command or by
specifying the runtime identifier (RID) in the project file (csproj / vbproj) or on the
command line.

Patch version roll forward overview


restore, build and publish are dotnet commands that can run separately. The runtime
choice is part of the restore operation, not publish or build . If you call publish , the
latest patch version will be chosen. If you call publish with the --no-restore argument,
then you may not get the desired patch version because a prior restore may not have
been executed with the new self-contained application publishing policy. In this case, a
build error is generated with text similar to the following:

"The project was restored using Microsoft.NETCore.App version 2.0.0, but with current
settings, version 2.0.6 would be used instead. To resolve this issue, make sure the same
settings are used for restore and for subsequent operations such as build or publish.
Typically this issue can occur if the RuntimeIdentifier property is set during build or
publish but not during restore."

Note

restore and build can be run implicitly as part of another command, like publish .
When run implicitly as part of another command, they are provided with additional
context so that the right artifacts are produced. When you publish with a runtime
(for example, dotnet publish -r linux-x64 ), the implicit restore restores packages
for the linux-x64 runtime. If you call restore explicitly, it does not restore runtime
packages by default, because it doesn't have that context.
How to avoid restore during publish
Running restore as part of the publish operation may be undesirable for your scenario.
To avoid restore during publish while creating self-contained applications, do the
following:

Set the RuntimeIdentifiers property to a semicolon-separated list of all the RIDs to be published.
Set the TargetLatestRuntimePatch property to true.
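As a sketch, those two settings might look like this in a project file (the RID list shown is only an example):

```xml
<PropertyGroup>
  <!-- Example RIDs only; list every RID you intend to publish for. -->
  <RuntimeIdentifiers>win-x64;linux-x64</RuntimeIdentifiers>
  <TargetLatestRuntimePatch>true</TargetLatestRuntimePatch>
</PropertyGroup>
```

With both properties set, restore already knows about every runtime you publish for, so a later dotnet publish --no-restore picks the expected patch version.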

No-restore argument with dotnet publish options

If you want to create both self-contained applications and framework-dependent
applications with the same project file, and you want to use the --no-restore argument
with dotnet publish , then choose one of the following:

1. Prefer the framework-dependent behavior. If the application is framework-dependent, this is the default behavior. If the application is self-contained, and can use an unpatched 2.1.0 local runtime, set TargetLatestRuntimePatch to false in the project file.

2. Prefer the self-contained behavior. If the application is self-contained, this is the default behavior. If the application is framework-dependent, and requires the latest patch installed, set TargetLatestRuntimePatch to true in the project file.

3. Take explicit control of the runtime framework version by setting RuntimeFrameworkVersion to the specific patch version in the project file.
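For the third option, a minimal project-file sketch might look like this (the version shown is only an example; pin to the exact patch you've tested against):

```xml
<PropertyGroup>
  <!-- Example value only; set to the specific runtime patch version you require. -->
  <RuntimeFrameworkVersion>2.1.6</RuntimeFrameworkVersion>
</PropertyGroup>
```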
Single-file deployment
Article • 03/11/2023

Bundling all application-dependent files into a single binary provides an application developer with the attractive option to deploy and distribute the application as a single file. Single-file deployment is available for both the framework-dependent deployment model and self-contained applications.

The size of the single file in a self-contained application is large since it includes the
runtime and the framework libraries. In .NET 6, you can publish trimmed to reduce the
total size of trim-compatible applications. The single file deployment option can be
combined with ReadyToRun and Trim publish options.

Sample project file

Here's a sample project file that specifies single file publishing:

XML

<Project Sdk="Microsoft.NET.Sdk">

<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net6.0</TargetFramework>
<PublishSingleFile>true</PublishSingleFile>
<SelfContained>true</SelfContained>
<RuntimeIdentifier>win-x64</RuntimeIdentifier>
</PropertyGroup>

</Project>

These properties have the following functions:

PublishSingleFile. Enables single file publishing. Also enables single file warnings during dotnet build.
SelfContained. Determines whether the app is self-contained or framework-dependent.
RuntimeIdentifier. Specifies the OS and CPU type you're targeting. Also sets <SelfContained>true</SelfContained> by default.

Single file apps are always OS and architecture specific. You need to publish for each
configuration, such as Linux x64, Linux Arm64, Windows x64, and so forth.
Runtime configuration files, such as *.runtimeconfig.json and *.deps.json, are included in
the single file. If an extra configuration file is needed, you can place it beside the single
file.

Publish a single-file app


CLI

Publish a single file application using the dotnet publish command.

1. Add <PublishSingleFile>true</PublishSingleFile> to your project file.

This change produces a single file app on self-contained publish. It also shows
single file compatibility warnings during build.

XML

<PropertyGroup>
<PublishSingleFile>true</PublishSingleFile>
</PropertyGroup>

2. Publish the app for a specific runtime identifier using dotnet publish -r <RID>.

The following example publishes the app for Windows as a self-contained single file application.

dotnet publish -r win-x64

The following example publishes the app for Linux as a framework dependent
single file application.

dotnet publish -r linux-x64 --self-contained false

<PublishSingleFile> should be set in the project file to enable file analysis during build, but it's also possible to pass these options as dotnet publish arguments:

.NET CLI

dotnet publish -r linux-x64 -p:PublishSingleFile=true --self-contained false

For more information, see Publish .NET Core apps with .NET CLI.
Exclude files from being embedded
Certain files can be explicitly excluded from being embedded in the single file by setting
the following metadata:

XML

<ExcludeFromSingleFile>true</ExcludeFromSingleFile>

For example, to place some files in the publish directory but not bundle them in the file:

XML

<ItemGroup>
<Content Update="Plugin.dll">
<CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
<ExcludeFromSingleFile>true</ExcludeFromSingleFile>
</Content>
</ItemGroup>

Include PDB files inside the bundle

The PDB file for an assembly can be embedded into the assembly itself (the .dll) using the setting below. Since the symbols are part of the assembly, they're part of the application as well:

XML

<DebugType>embedded</DebugType>

For example, add the following property to the project file of an assembly to embed the
PDB file to that assembly:

XML

<PropertyGroup>
<DebugType>embedded</DebugType>
</PropertyGroup>

Other considerations
Single file applications have all related PDB files alongside the application, not bundled
by default. If you want to include PDBs inside the assembly for projects you build, set
the DebugType to embedded . See Include PDB files inside the bundle.

Managed C++ components aren't well suited for single-file deployment. For single-file compatibility, we recommend that you write applications in C# or another language that isn't managed C++.

Native libraries
Only managed DLLs are bundled with the app into a single executable. When the app
starts, the managed DLLs are extracted and loaded in memory, avoiding the extraction
to a folder. With this approach, the managed binaries are embedded in the single file
bundle, but the native binaries of the core runtime itself are separate files.

To embed those files for extraction and get one output file, set the property
IncludeNativeLibrariesForSelfExtract to true .
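For example, a project file might opt the native libraries into the bundle like this (a minimal sketch using the property named above):

```xml
<PropertyGroup>
  <PublishSingleFile>true</PublishSingleFile>
  <IncludeNativeLibrariesForSelfExtract>true</IncludeNativeLibrariesForSelfExtract>
</PropertyGroup>
```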

Specifying IncludeAllContentForSelfExtract extracts all files, including the managed assemblies, before running the executable. This may be helpful for rare application compatibility problems.

Important

If extraction is used, the files are extracted to disk before the app starts:

If the DOTNET_BUNDLE_EXTRACT_BASE_DIR environment variable is set to a path, the files are extracted to a directory under that path.
Otherwise, if running on Linux or macOS, the files are extracted to a directory under $HOME/.net.
If running on Windows, the files are extracted to a directory under %TEMP%/.net.

To prevent tampering, these directories shouldn't be writable by users or services with different privileges. Don't use /tmp or /var/tmp on most Linux and macOS systems.

Note

In some Linux environments, such as under systemd, the default extraction doesn't work because $HOME isn't defined. In such cases, it's recommended that you set $DOTNET_BUNDLE_EXTRACT_BASE_DIR explicitly.

For systemd, a good alternative is to define DOTNET_BUNDLE_EXTRACT_BASE_DIR in your service's unit file as %h/.net, which systemd expands correctly to $HOME/.net for the account running the service.

text

[Service]
Environment="DOTNET_BUNDLE_EXTRACT_BASE_DIR=%h/.net"

API incompatibility
Some APIs aren't compatible with single file deployment. Applications might require
modification if they use these APIs. If you use a third-party framework or package, it's
possible that they might use one of these APIs and need modification. The most
common cause of problems is dependence on file paths for files or DLLs shipped with
the application.

The table below has the relevant runtime library API details for single file use.

API Note

Assembly.CodeBase Throws PlatformNotSupportedException.

Assembly.EscapedCodeBase Throws PlatformNotSupportedException.

Assembly.GetFile Throws IOException.

Assembly.GetFiles Throws IOException.

Assembly.Location Returns an empty string.

AssemblyName.CodeBase Returns null .

AssemblyName.EscapedCodeBase Returns null .

Module.FullyQualifiedName Returns a string with the value of <Unknown> or throws an exception.

Marshal.GetHINSTANCE Returns -1.

Module.Name Returns a string with the value of <Unknown> .

We have some recommendations for fixing common scenarios:

To access files next to the executable, use AppContext.BaseDirectory.

To find the file name of the executable, use the first element of
Environment.GetCommandLineArgs(), or starting with .NET 6, use the file name
from ProcessPath.

To avoid shipping loose files entirely, consider using embedded resources.
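As an illustrative C# sketch of the first two recommendations (GetDataFilePath is a hypothetical helper name, not an API from this article):

```csharp
using System;
using System.IO;

// In a single-file app, Assembly.Location returns an empty string, so resolve
// files that ship next to the executable from AppContext.BaseDirectory instead.
static string GetDataFilePath(string fileName) =>
    Path.Combine(AppContext.BaseDirectory, fileName);

Console.WriteLine(GetDataFilePath("appsettings.json"));

// The running executable's own path; single-file safe starting with .NET 6.
Console.WriteLine(Environment.ProcessPath);
```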

Post-processing binaries before bundling

Some workflows require post-processing of binaries before bundling. A common example is signing. The dotnet SDK provides MSBuild extension points to allow processing binaries just before single-file bundling. The available APIs are:

A target PrepareForBundle that will be called before GenerateSingleFileBundle.
An <ItemGroup><FilesToBundle /></ItemGroup> containing all files that will be bundled.
A property AppHostFile that will specify the apphost template. Post-processing might want to exclude the apphost from processing.

To plug into this, create a target that executes between PrepareForBundle and GenerateSingleFileBundle.

Consider the following .NET project Target node example:

XML

<Target Name="MySignedBundledFile" BeforeTargets="GenerateSingleFileBundle" DependsOnTargets="PrepareForBundle">

It's possible that tooling will need to copy files in the process of signing. That could
happen if the original file is a shared item not owned by the build, for example, the file
comes from a NuGet cache. In such a case, it's expected that the tool will modify the
path of the corresponding FilesToBundle item to point to the modified copy.
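A hedged sketch of such a target might look like the following. The target name and the signtool command line are illustrative assumptions, not part of the SDK:

```xml
<Target Name="SignBundledFiles"
        BeforeTargets="GenerateSingleFileBundle"
        DependsOnTargets="PrepareForBundle">
  <!-- Sign each file that is about to be bundled, skipping the apphost template.
       The signing command shown here is a placeholder for your own tooling. -->
  <Exec Command="signtool sign /fd SHA256 &quot;%(FilesToBundle.Identity)&quot;"
        Condition="'%(FilesToBundle.Identity)' != '$(AppHostFile)'" />
</Target>
```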

Compress assemblies in single-file apps

Single-file apps can be created with compression enabled on the embedded assemblies. Set the EnableCompressionInSingleFile property to true. The single file that's produced will have all of the embedded assemblies compressed, which can significantly reduce the size of the executable.
Compression comes with a performance cost. On application start, the assemblies must
be decompressed into memory, which takes some time. We recommend that you
measure both the size change and startup cost of enabling compression before using it.
The impact can vary significantly between different applications.
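For example, compression can be enabled alongside single-file publishing in the project file:

```xml
<PropertyGroup>
  <PublishSingleFile>true</PublishSingleFile>
  <EnableCompressionInSingleFile>true</EnableCompressionInSingleFile>
</PropertyGroup>
```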

Inspect a single-file app

Single file apps can be inspected using the ILSpy tool. The tool can show all of the files bundled into the application and can inspect the contents of managed assemblies.

See also
.NET Core application deployment
Publish .NET apps with .NET CLI
Publish .NET Core apps with Visual Studio
dotnet publish command
ReadyToRun Compilation
Article • 06/29/2022

.NET application startup time and latency can be improved by compiling your application assemblies in the ReadyToRun (R2R) format. R2R is a form of ahead-of-time (AOT) compilation.

R2R binaries improve startup performance by reducing the amount of work the just-in-
time (JIT) compiler needs to do as your application loads. The binaries contain similar
native code compared to what the JIT would produce. However, R2R binaries are larger
because they contain both intermediate language (IL) code, which is still needed for
some scenarios, and the native version of the same code. R2R is only available when you
publish an app that targets specific runtime environments (RID) such as Linux x64 or
Windows x64.

To compile your project as ReadyToRun, the application must be published with the
PublishReadyToRun property set to true .

There are two ways to publish your app as ReadyToRun:

1. Specify the PublishReadyToRun flag directly to the dotnet publish command. See
dotnet publish for details.

.NET CLI

dotnet publish -c Release -r win-x64 -p:PublishReadyToRun=true

2. Specify the property in the project.

Add the <PublishReadyToRun> setting to your project.

XML

<PropertyGroup>
<PublishReadyToRun>true</PublishReadyToRun>
</PropertyGroup>

Publish the application without any special parameters.

.NET CLI

dotnet publish -c Release -r win-x64


Impact of using the ReadyToRun feature

Ahead-of-time compilation has a complex impact on application performance, which can be difficult to predict. In general, the size of an assembly grows to between two and three times its original size. This increase in the physical size of the file may reduce the performance of loading the assembly from disk and increase the working set of the process. However, in return, the number of methods compiled at run time is typically reduced substantially. The result is that most applications that have large amounts of code receive large performance benefits from enabling ReadyToRun. Applications that have small amounts of code will likely not experience a significant improvement from enabling ReadyToRun, as the .NET runtime libraries have already been precompiled with ReadyToRun.

The startup improvement discussed here applies not only to application startup, but also
to the first use of any code in the application. For instance, ReadyToRun can be used to
reduce the response latency of the first use of Web API in an ASP.NET application.

Interaction with tiered compilation

Ahead-of-time generated code is not as highly optimized as code produced by the JIT. To address this issue, tiered compilation will replace commonly used ReadyToRun methods with JIT-generated methods.

How is the set of precompiled assemblies chosen?

The SDK will precompile the assemblies that are distributed with the application. For self-contained applications, this set of assemblies will include the framework. C++/CLI binaries are not eligible for ReadyToRun compilation.

To exclude specific assemblies from ReadyToRun processing, use the <PublishReadyToRunExclude> list.

XML

<ItemGroup>
<PublishReadyToRunExclude Include="Contoso.Example.dll" />
</ItemGroup>

How is the set of methods to precompile chosen?
The compiler will attempt to pre-compile as many methods as it can. However, for
various reasons, it's not expected that using the ReadyToRun feature will prevent the JIT
from executing. Such reasons may include, but are not limited to:

Use of generic types defined in separate assemblies.
Interop with native code.
Use of hardware intrinsics that the compiler cannot prove are safe to use on a target machine.
Certain unusual IL patterns.
Dynamic method creation via reflection or LINQ.

Symbol generation for use with profilers

When compiling an application with ReadyToRun, profilers may require symbols for examining the generated ReadyToRun files. To enable symbol generation, specify the <PublishReadyToRunEmitSymbols> property.

XML

<PropertyGroup>
<PublishReadyToRunEmitSymbols>true</PublishReadyToRunEmitSymbols>
</PropertyGroup>

These symbols are placed in the publish directory, with a file extension of .ni.pdb on Windows and .r2rmap on Linux. These files are not generally redistributed to end customers, but instead would typically be stored in a symbol server. In general, these symbols are useful for debugging performance issues related to startup of applications, as Tiered Compilation will replace the ReadyToRun-generated code with dynamically generated code. However, if you're profiling an application that disables Tiered Compilation, the symbols will be useful.

Composite ReadyToRun
Normal ReadyToRun compilation produces binaries that can be serviced and
manipulated individually. Starting in .NET 6, support for Composite ReadyToRun
compilation has been added. Composite ReadyToRun compiles a set of assemblies that
must be distributed together. This has the advantage that the compiler is able to
perform better optimizations and reduces the set of methods that cannot be compiled
via the ReadyToRun process. However, as a tradeoff, compilation speed is significantly
decreased, and the overall file size of the application is significantly increased. Due to
these tradeoffs, use of Composite ReadyToRun is only recommended for applications
that disable Tiered Compilation or applications running on Linux that are seeking the
best startup time with self-contained deployment. To enable composite ReadyToRun
compilation, specify the <PublishReadyToRunComposite> property.

XML

<PropertyGroup>
<PublishReadyToRunComposite>true</PublishReadyToRunComposite>
</PropertyGroup>

Note

In .NET 6, Composite ReadyToRun is only supported for self-contained deployment.

Cross platform/architecture restrictions

For some SDK platforms, the ReadyToRun compiler is capable of cross-compiling for other target platforms.

Supported compilation targets are described in the table below when targeting .NET 6
and later versions.

SDK platform Supported target platforms

Windows X64 Windows (X86, X64, Arm64), Linux (X64, Arm32, Arm64), macOS (X64, Arm64)

Windows X86 Windows (X86), Linux (Arm32)

Linux X64 Linux (X64, Arm32, Arm64), macOS (X64, Arm64)

Linux Arm32 Linux Arm32

Linux Arm64 Linux (X64, Arm32, Arm64), macOS (X64, Arm64)

macOS X64 Linux (X64, Arm32, Arm64), macOS (X64, Arm64)

macOS Arm64 Linux (X64, Arm32, Arm64), macOS (X64, Arm64)

Supported compilation targets are described in the table below when targeting .NET 5 and earlier versions.
SDK platform Supported target platforms

Windows X64 Windows X86, Windows X64, Windows Arm64

Windows X86 Windows X86, Windows Arm32

Linux X64 Linux X86, Linux X64, Linux Arm32, Linux Arm64

Linux Arm32 Linux Arm32

Linux Arm64 Linux Arm64

macOS X64 macOS X64


Trim self-contained deployments and executables
Article • 01/26/2022

The framework-dependent deployment model has been the most successful deployment model since the inception of .NET. In this scenario, the application developer bundles only the application and third-party assemblies with the expectation that the .NET runtime and runtime libraries will be available on the client machine. This deployment model continues to be the dominant one in the latest .NET release; however, there are some scenarios where the framework-dependent model is not the best choice. The alternative is to publish a self-contained application, where the .NET runtime and runtime libraries are bundled together with the application and third-party assemblies.

The trim-self-contained deployment model is a specialized version of the self-contained deployment model that is optimized to reduce deployment size. Minimizing deployment size is a critical requirement for some client-side scenarios like Blazor applications. Depending on the complexity of the application, only a subset of the framework assemblies are referenced, and a subset of the code within each assembly is required to run the application. The unused parts of the libraries are unnecessary and can be trimmed from the packaged application.

However, there is a risk that the build-time analysis of the application can cause failures
at run time, due to not being able to reliably analyze various problematic code patterns
(largely centered on reflection use). To mitigate these problems, warnings are produced
whenever the trimmer cannot fully analyze a code pattern. For information on what the
trim warnings mean and how to resolve them, see Introduction to trim warnings.

Note

Trimming is fully supported in .NET 6 and later versions. In .NET Core 3.1 and .NET 5, trimming was an experimental feature.
Trimming is only available to applications that are published self-contained.

Components that cause trimming problems

Warning

Not all project types can be trimmed. For more information, see Known trimming incompatibilities.

Any code that causes build time analysis challenges isn't suitable for trimming. Some
common coding patterns that are problematic when used by an application originate
from unbounded reflection usage and external dependencies that aren't visible at build
time. An example of unbounded reflection is a legacy serializer, such as XML
serialization, and an example of invisible external dependencies is built-in COM. To
address trim warnings in your application, see Introduction to trim warnings, and to
make your library compatible with trimming, see Prepare .NET libraries for trimming.

Enable trimming
1. Add <PublishTrimmed>true</PublishTrimmed> to your project file.

This property will produce a trimmed app on self-contained publish. It also turns
off trim-incompatible features and shows trim compatibility warnings during build.

XML

<PropertyGroup>
<PublishTrimmed>true</PublishTrimmed>
</PropertyGroup>

2. Then publish your app using either the dotnet publish command or Visual Studio.

Publish with the CLI

The following example publishes the app for Windows as a trimmed self-contained application.

dotnet publish -r win-x64

Trimming is only supported for self-contained apps.

<PublishTrimmed> should be set in the project file so that trim-incompatible features are disabled during dotnet build. However, you can also set this option as an argument to dotnet publish:

dotnet publish -r win-x64 -p:PublishTrimmed=true

For more information, see Publish .NET apps with .NET CLI.
Publish with Visual Studio
1. In Solution Explorer, right-click on the project you want to publish and select
Publish.

If you don't already have a publishing profile, follow the instructions to create one
and choose the Folder target-type.

2. Choose More actions > Edit.

3. In the Profile settings dialog, set the following options:

Set Deployment mode to Self-contained.


Set Target runtime to the platform you want to publish to.
Select Trim unused code.
Choose Save to save the settings and return to the Publish dialog.

4. Choose Publish to publish your app trimmed.

For more information, see Publish .NET Core apps with Visual Studio.

Publish with Visual Studio for Mac

Visual Studio for Mac doesn't provide options to publish your app. You'll need to publish manually by following the instructions from the Publish with the CLI section. For more information, see Publish .NET apps with .NET CLI.

See also
.NET Core application deployment.
Publish .NET apps with .NET CLI.
Publish .NET Core apps with Visual Studio.
dotnet publish command.
Introduction to trim warnings
Article • 11/05/2023

Conceptually, trimming is simple: when you publish an application, the .NET SDK
analyzes the entire application and removes all unused code. However, it can be difficult
to determine what is unused, or more precisely, what is used.

To prevent changes in behavior when trimming applications, the .NET SDK provides
static analysis of trim compatibility through trim warnings. The trimmer produces trim
warnings when it finds code that might not be compatible with trimming. Code that's
not trim-compatible can produce behavioral changes, or even crashes, in an application
after it has been trimmed. Ideally, all applications that use trimming shouldn't produce
any trim warnings. If there are any trim warnings, the app should be thoroughly tested
after trimming to ensure that there are no behavior changes.

This article helps you understand why some patterns produce trim warnings, and how
these warnings can be addressed.

Examples of trim warnings

For most C# code, it's straightforward to determine what code is used and what code is unused: the trimmer can walk method calls, field and property references, and so on, and determine what code is accessed. Unfortunately, some features, like reflection, present a significant problem. Consider the following code:

C#

string s = Console.ReadLine();
Type type = Type.GetType(s);
foreach (var m in type.GetMethods())
{
Console.WriteLine(m.Name);
}

In this example, GetType() dynamically requests a type with an unknown name, and then
prints the names of all of its methods. Because there's no way to know at publish-time
what type name is going to be used, there's no way for the trimmer to know which type
to preserve in the output. It's likely that this code could have worked before trimming
(as long as the input is something known to exist in the target framework), but would
probably produce a null reference exception after trimming, as Type.GetType returns
null when the type isn't found.
In this case, the trimmer issues a warning on the call to Type.GetType , indicating that it
can't determine which type is going to be used by the application.

Reacting to trim warnings

Trim warnings are meant to bring predictability to trimming. There are two large categories of warnings that you'll likely see:

1. Functionality isn't compatible with trimming
2. Functionality has certain requirements on the input to be trim compatible

Functionality incompatible with trimming

These are typically methods that either don't work at all, or might be broken in some cases if they're used in a trimmed application. A good example is the Type.GetType method from the previous example. In a trimmed app it might work, but there's no guarantee. Such APIs are marked with RequiresUnreferencedCodeAttribute.

RequiresUnreferencedCodeAttribute is simple and broad: it's an attribute that means the member has been annotated as incompatible with trimming. This attribute is used when code is fundamentally not trim compatible, or the trim dependency is too complex to explain to the trimmer. This is often true for methods that dynamically load code (for example, via LoadFrom(String)), enumerate or search through all types in an application or assembly (for example, via GetType()), use the C# dynamic keyword, or use other runtime code generation technologies. An example would be:

C#

[RequiresUnreferencedCode("This functionality is not compatible with trimming. Use 'MethodFriendlyToTrimming' instead")]
void MethodWithAssemblyLoad()
{
...
Assembly.LoadFrom(...);
...
}

void TestMethod()
{
// IL2026: Using method 'MethodWithAssemblyLoad' which has
'RequiresUnreferencedCodeAttribute'
// can break functionality when trimming application code. This
functionality is not compatible with trimming. Use
'MethodFriendlyToTrimming' instead.
MethodWithAssemblyLoad();
}

There aren't many workarounds for RequiresUnreferencedCode . The best fix is to avoid
calling the method at all when trimming and use something else that's trim-compatible.

Mark functionality as incompatible with trimming

If you're writing a library and it's not in your control whether or not to use incompatible
functionality, you can mark it with RequiresUnreferencedCode . This annotates your
method as incompatible with trimming. Using RequiresUnreferencedCode silences all trim
warnings in the given method, but produces a warning whenever someone else calls it.

The RequiresUnreferencedCodeAttribute requires you to specify a Message. The message is shown as part of a warning reported to the developer who calls the marked method. For example:

Console

IL2026: Using member <incompatible method> which has 'RequiresUnreferencedCodeAttribute' can break functionality when trimming application code. <The message value>

With the example above, a warning for a specific method might look like this:

Console

IL2026: Using member 'MethodWithAssemblyLoad()' which has 'RequiresUnreferencedCodeAttribute' can break functionality when trimming application code. This functionality is not compatible with trimming. Use 'MethodFriendlyToTrimming' instead.

Developers calling such APIs are generally not going to be interested in the particulars
of the affected API or specifics as it relates to trimming.

A good message should state what functionality isn't compatible with trimming and
then guide the developer what are their potential next steps. It might suggest to use a
different functionality or change how the functionality is used. It might also simply state
that the functionality isn't yet compatible with trimming without a clear replacement.

If the guidance to the developer becomes too long to be included in a warning message, you can add an optional Url to the RequiresUnreferencedCodeAttribute to point the developer to a web page describing the problem and possible solutions in greater detail.

For example:

C#

[RequiresUnreferencedCode("This functionality is not compatible with trimming. Use 'MethodFriendlyToTrimming' instead", Url = "https://site/trimming-and-method")]
void MethodWithAssemblyLoad() { ... }

This produces a warning:

Console

IL2026: Using member 'MethodWithAssemblyLoad()' which has 'RequiresUnreferencedCodeAttribute' can break functionality when trimming application code. This functionality is not compatible with trimming. Use 'MethodFriendlyToTrimming' instead. https://site/trimming-and-method

Using RequiresUnreferencedCode often leads to marking more methods with it, due to
the same reason. This is common when a high-level method becomes incompatible with
trimming because it calls a low-level method that isn't trim-compatible. You "bubble up"
the warning to a public API. Each usage of RequiresUnreferencedCode needs a message,
and in these cases the messages are likely the same. To avoid duplicating strings and
making it easier to maintain, use a constant string field to store the message:

C#

class Functionality
{
const string IncompatibleWithTrimmingMessage = "This functionality is
not compatible with trimming. Use 'FunctionalityFriendlyToTrimming'
instead";

[RequiresUnreferencedCode(IncompatibleWithTrimmingMessage)]
private void ImplementationOfAssemblyLoading()
{
...
}

[RequiresUnreferencedCode(IncompatibleWithTrimmingMessage)]
public void MethodWithAssemblyLoad()
{
ImplementationOfAssemblyLoading();
}
}

Functionality with requirements on its input

Trimming provides APIs to specify more requirements on input to methods and other members that lead to trim-compatible code. These requirements are usually about reflection and the ability to access certain members or operations on a type. Such requirements are specified using the DynamicallyAccessedMembersAttribute.

Unlike RequiresUnreferencedCode, reflection can sometimes be understood by the trimmer as long as it's annotated correctly. Let's take another look at the original example:

C#

string s = Console.ReadLine();
Type type = Type.GetType(s);
foreach (var m in type.GetMethods())
{
Console.WriteLine(m.Name);
}

In the previous example, the real problem is Console.ReadLine() . Because any type
could be read, the trimmer has no way to know if you need methods on
System.DateTime or System.Guid or any other type. On the other hand, the following

code would be fine:

C#

Type type = typeof(System.DateTime);
foreach (var m in type.GetMethods())
{
Console.WriteLine(m.Name);
}

Here the trimmer can see the exact type being referenced: System.DateTime. Now it can use flow analysis to determine that it needs to keep all public methods on System.DateTime. So where does DynamicallyAccessedMembers come in? When reflection is split across multiple methods. In the following code, the type System.DateTime flows to Method3, where reflection is used to access System.DateTime's methods:
C#

void Method1()
{
Method2<System.DateTime>();
}
void Method2<T>()
{
Type t = typeof(T);
Method3(t);
}
void Method3(Type type)
{
var methods = type.GetMethods();
...
}

If you compile the previous code, the following warning is produced:

IL2070: Program.Method3(Type): 'this' argument does not satisfy 'DynamicallyAccessedMemberTypes.PublicMethods' in call to 'System.Type.GetMethods()'. The parameter 'type' of method 'Program.Method3(Type)' does not have matching annotations. The source value must declare at least the same requirements as those declared on the target location it is assigned to.

For performance and stability, flow analysis isn't performed between methods, so an
annotation is needed to pass information between methods, from the reflection call
( GetMethods ) to the source of the Type . In the previous example, the trimmer warning is
saying that GetMethods requires the Type object instance it's called on to have the
PublicMethods annotation, but the type variable doesn't have the same requirement. In

other words, we need to pass the requirements from GetMethods up to the caller:

C#

void Method1()
{
Method2<System.DateTime>();
}
void Method2<T>()
{
Type t = typeof(T);
Method3(t);
}
void Method3(
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
    Type type)
{
var methods = type.GetMethods();
...
}

After annotating the parameter type , the original warning disappears, but another
appears:

IL2087: 'type' argument does not satisfy


'DynamicallyAccessedMemberTypes.PublicMethods' in call to
'Program.Method3(Type)'. The generic parameter 'T' of 'Program.Method2<T>()'
does not have matching annotations.

We propagated annotations up to the parameter type of Method3 , but now Method2 has a
similar issue. The trimmer is able to track the value T as it flows through the call to
typeof , is assigned to the local variable t , and passed to Method3 . At that point it sees
that the parameter type requires PublicMethods but there are no requirements on T ,
and produces a new warning. To fix this, we must "annotate and propagate" by applying
annotations all the way up the call chain until we reach a statically known type (like
System.DateTime or System.Tuple ), or another annotated value. In this case, we need to

annotate the type parameter T of Method2 .

C#

void Method1()
{
Method2<System.DateTime>();
}
void Method2<
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)] T>()
{
Type t = typeof(T);
Method3(t);
}
void Method3(
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
    Type type)
{
var methods = type.GetMethods();
...
}
Now there are no warnings because the trimmer knows which members might be
accessed via runtime reflection (public methods) and on which types ( System.DateTime ),
and it preserves them. It's best practice to add annotations so the trimmer knows what
to preserve.

Warnings produced by these extra requirements are automatically suppressed if the


affected code is in a method with RequiresUnreferencedCode .

Unlike RequiresUnreferencedCode , which simply reports the incompatibility, adding


DynamicallyAccessedMembers makes the code compatible with trimming.

Suppressing trimmer warnings


If you can somehow determine that the call is safe, and all the code that's needed won't
be trimmed away, you can also suppress the warning using
UnconditionalSuppressMessageAttribute. For example:

C#

[RequiresUnreferencedCode("Use 'MethodFriendlyToTrimming' instead")]


void MethodWithAssemblyLoad() { ... }

[UnconditionalSuppressMessage("AssemblyLoadTrimming",
"IL2026:RequiresUnreferencedCode",
Justification = "Everything referenced in the loaded assembly is
manually preserved, so it's safe")]
void TestMethod()
{
InitializeEverything();

MethodWithAssemblyLoad(); // Warning suppressed

ReportResults();
}

Warning

Be very careful when suppressing trim warnings. It's possible that the call may be
trim-compatible now, but as you change your code that may change, and you may
forget to review all the suppressions.

UnconditionalSuppressMessage is like SuppressMessage but it can be seen by publish

and other post-build tools.


Important

Do not use SuppressMessage or #pragma warning disable to suppress trimmer


warnings. These only work for the compiler, but are not preserved in the compiled
assembly. Trimmer operates on compiled assemblies and would not see the
suppression.

The suppression applies to the entire method body. So in our sample above it
suppresses all IL2026 warnings from the method. This makes it harder to understand, as
it's not clear which method is the problematic one, unless you add a comment. More
importantly, if the code changes in the future, such as if ReportResults becomes trim-
incompatible as well, no warning is reported for this method call.

You can resolve this by refactoring the problematic method call into a separate method
or local function and then applying the suppression to just that method:

C#

void TestMethod()
{
InitializeEverything();

CallMethodWithAssemblyLoad();

ReportResults();

[UnconditionalSuppressMessage("AssemblyLoadTrimming",
"IL2026:RequiresUnreferencedCode",
Justification = "Everything referenced in the loaded assembly is
manually preserved, so it's safe")]
void CallMethodWithAssemblyLoad()
{
MethodWithAssemblyLoad(); // Warning suppressed
}
}

Known trimming incompatibilities
Article • 11/07/2023

There are some patterns that are known to be incompatible with trimming. Some of
these patterns might become compatible as tooling improves or as libraries make
modifications to become trimming compatible.

Reflection-based serializers
Alternative: Reflection-free serializers.

Many uses of reflection can be made trimming-compatible, as described in Introduction


to trim warnings. However, serializers tend to have complex uses of reflection. Many of
these uses can't be made analyzable at build time. Unfortunately, the best option is
often to rewrite the system to use source generation instead.

Popular reflection-based serializers and their recommended alternatives:

Newtonsoft.Json. Recommended alternative: source generated System.Text.Json


System.Configuration.ConfigurationManager. Recommended alternative: source
generated Microsoft.Extensions.Configuration
System.Runtime.Serialization.Formatters.Binary.BinaryFormatter. Recommended
alternative: Migrate away from BinaryFormatter serialization due to its security and
reliability flaws.
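As a sketch of the source-generation approach, the following example registers a type
with a System.Text.Json JsonSerializerContext so that serialization uses generated code
instead of run-time reflection (the Person record and AppJsonContext name are
illustrative):

C#

using System;
using System.Text.Json;
using System.Text.Json.Serialization;

public record Person(string Name, int Age);

// The source generator produces trim-safe serialization code
// for every type listed in a [JsonSerializable] attribute.
[JsonSerializable(typeof(Person))]
public partial class AppJsonContext : JsonSerializerContext
{
}

public static class Program
{
    public static void Main()
    {
        // Passing the generated type info avoids run-time reflection entirely.
        string json = JsonSerializer.Serialize(
            new Person("Ada", 36), AppJsonContext.Default.Person);
        Console.WriteLine(json);
    }
}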

Dynamic assembly loading and execution


Trimming and dynamic assembly loading is a common problem for systems that support
plugins or extensions, usually through APIs like LoadFrom(String). Trimming relies on
seeing all assemblies at build time, so it knows which code is used and can't be trimmed
away. Most plugin systems load third-party code dynamically, so it's not possible for the
trimmer to identify what code is needed.
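Because a plugin loader usually can't be made trim-safe, the best a library can do is
surface the incompatibility to callers with RequiresUnreferencedCode rather than hide
it. A minimal sketch (the path and type-name parameters are illustrative):

C#

using System;
using System.Diagnostics.CodeAnalysis;
using System.Reflection;

public static class PluginHost
{
    // The trimmer can't know which members of a dynamically loaded
    // assembly are needed, so the method is annotated rather than suppressed.
    [RequiresUnreferencedCode(
        "Plugin assemblies are loaded dynamically and can't be analyzed for trimming.")]
    public static object? CreatePlugin(string assemblyPath, string typeName)
    {
        Assembly assembly = Assembly.LoadFrom(assemblyPath);
        Type? type = assembly.GetType(typeName);
        return type is null ? null : Activator.CreateInstance(type);
    }
}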

Windows platform incompatibilities


The following sections list known incompatibilities with trimming on Windows.

Built-in COM marshalling


Alternative: COM Wrappers
Automatic COM marshalling has been built into .NET since .NET Framework 1.0. It uses
run-time code analysis to automatically convert between native COM objects and
managed .NET objects. Unfortunately, trimming analysis can't always predict what .NET
code needs to be preserved for automatic COM marshalling. However, if COM Wrappers
are used instead, trimming analysis can guarantee that all used code will be correctly
preserved.

WPF
The Windows Presentation Foundation (WPF) framework makes substantial use of
reflection and some features are heavily reliant on run-time code inspection. It's not
possible for trimming analysis to preserve all necessary code for WPF applications.
Unfortunately, almost no WPF apps are runnable after trimming, so trimming support
for WPF is currently disabled in the .NET SDK. See WPF is not trim-compatible issue
for progress on enabling trimming for WPF.

Windows Forms
The Windows Forms framework makes minimal use of reflection, but is heavily reliant on
built-in COM marshalling. Unfortunately, almost no Windows Forms apps are runnable
without built-in COM marshalling, so trimming support for Windows Forms apps is
disabled in the .NET SDK currently. See Make WinForms trim compatible issue for
progress on enabling trimming for Windows Forms.

Trimming options
Article • 09/04/2024

The MSBuild properties and items described in this article influence the behavior of
trimmed, self-contained deployments. Some of the options mention ILLink , which is
the name of the underlying tool that implements trimming. For more information about
the underlying tool, see the Trimmer documentation .

Trimming with PublishTrimmed was introduced in .NET Core 3.0. The other options are
available in .NET 5 and later versions.

Enable trimming
<PublishTrimmed>true</PublishTrimmed>

Enable trimming during publish. This setting also turns off trim-incompatible
features and enables trim analysis during build. In .NET 8 and later apps, this
setting also enables the configuration binding and request delegate source
generators.

Note

If you specify trimming as enabled from the command line, your debugging
experience will differ and you might encounter additional bugs in the final product.

Place this setting in the project file to ensure that the setting applies during dotnet
build , not just dotnet publish .
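For example, a minimal project file with trimming enabled might look like the
following (the target framework is illustrative):

XML

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <PublishTrimmed>true</PublishTrimmed>
  </PropertyGroup>

</Project>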

This setting enables trimming and trims all assemblies by default. In .NET 6, only
assemblies that opted-in to trimming via [AssemblyMetadata("IsTrimmable", "True")]
(added in projects that set <IsTrimmable>true</IsTrimmable> ) were trimmed by default.
You can return to the previous behavior by using <TrimMode>partial</TrimMode> .

This setting also enables the trim-compatibility Roslyn analyzer and disables features
that are incompatible with trimming.

Trimming granularity
Use the TrimMode property to set the trimming granularity to either partial or full .
The default setting for console apps (and, starting in .NET 8, Web SDK apps) is full :

XML

<TrimMode>full</TrimMode>

To only trim assemblies that have opted-in to trimming, set the property to partial :

XML

<TrimMode>partial</TrimMode>

If you change the trim mode to partial , you can opt-in individual assemblies to
trimming by using a <TrimmableAssembly> MSBuild item.

XML

<ItemGroup>
<TrimmableAssembly Include="MyAssembly" />
</ItemGroup>

This is equivalent to setting [AssemblyMetadata("IsTrimmable", "True")] when building


the assembly.

Root assemblies
If an assembly is not trimmed, it's considered "rooted", which means that it and all of its
statically understood dependencies will be kept. Additional assemblies can be "rooted"
by name (without the .dll extension):

XML

<ItemGroup>
<TrimmerRootAssembly Include="MyAssembly" />
</ItemGroup>

Root descriptors
Another way to specify roots for analysis is using an XML file that uses the trimmer
descriptor format . This lets you root specific members instead of a whole assembly.
XML

<ItemGroup>
<TrimmerRootDescriptor Include="MyRoots.xml" />
</ItemGroup>

For example, MyRoots.xml might root a specific method that's dynamically accessed by
the application:

XML

<linker>
<assembly fullname="MyAssembly">
<type fullname="MyAssembly.MyClass">
<method name="DynamicallyAccessedMethod" />
</type>
</assembly>
</linker>

Analysis warnings
<SuppressTrimAnalysisWarnings>false</SuppressTrimAnalysisWarnings>

Enable trim analysis warnings.

Trimming removes IL that's not statically reachable. Apps that use reflection or other
patterns that create dynamic dependencies might be broken by trimming. To warn
about such patterns, set <SuppressTrimAnalysisWarnings> to false . This setting will
surface warnings about the entire app, including your own code, library code, and
framework code.

Roslyn analyzer
Setting PublishTrimmed in .NET 6+ also enables a Roslyn analyzer that shows a limited
set of analysis warnings. You can also enable or disable the analyzer independently of
PublishTrimmed .

<EnableTrimAnalyzer>true</EnableTrimAnalyzer>

Enable a Roslyn analyzer for a subset of trim analysis warnings.

Suppress warnings
You can suppress individual warning codes using the usual MSBuild properties
respected by the toolchain, including NoWarn , WarningsAsErrors , WarningsNotAsErrors ,
and TreatWarningsAsErrors . There's an additional option that controls the ILLink warn-
as-error behavior independently:

<ILLinkTreatWarningsAsErrors>false</ILLinkTreatWarningsAsErrors>

Don't treat ILLink warnings as errors. This might be useful to avoid turning trim
analysis warnings into errors when treating compiler warnings as errors globally.
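For example, a project that treats compiler warnings as errors but keeps trim analysis
warnings as plain warnings might combine the two properties like this:

XML

<PropertyGroup>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  <ILLinkTreatWarningsAsErrors>false</ILLinkTreatWarningsAsErrors>
</PropertyGroup>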

Show detailed warnings


In .NET 6+, trim analysis produces at most one warning for each assembly that comes
from a PackageReference , indicating that the assembly's internals are not compatible
with trimming. You can also show individual warnings for all assemblies:

<TrimmerSingleWarn>false</TrimmerSingleWarn>

Show all detailed warnings, instead of collapsing them to a single warning per
assembly.

Remove symbols
Symbols are usually trimmed to match the trimmed assemblies. You can also remove all
symbols:

<TrimmerRemoveSymbols>true</TrimmerRemoveSymbols>

Remove symbols from the trimmed application, including embedded PDBs and
separate PDB files. This applies to both the application code and any dependencies
that come with symbols.

The SDK also makes it possible to disable debugger support using the property
DebuggerSupport . When debugger support is disabled, trimming removes symbols

automatically ( TrimmerRemoveSymbols will default to true).
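For example, the following sketch removes all symbols from a trimmed publish:

XML

<PropertyGroup>
  <PublishTrimmed>true</PublishTrimmed>
  <TrimmerRemoveSymbols>true</TrimmerRemoveSymbols>
</PropertyGroup>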

Trim framework library features


Several feature areas of the framework libraries come with trimmer directives that make
it possible to remove the code for disabled features.
The following list shows each MSBuild property and what it does:

AutoreleasePoolSupport: When set to false , removes code that creates autorelease
pools on supported platforms. false is the default for the .NET SDK.

DebuggerSupport: When set to false , removes code that enables better debugging
experiences. This setting also removes symbols.

EnableUnsafeBinaryFormatterSerialization: When set to false , removes
BinaryFormatter serialization support. For more information, see BinaryFormatter
serialization methods are obsolete and In-box BinaryFormatter implementation
removed and always throws.

EnableUnsafeUTF7Encoding: When set to false , removes insecure UTF-7 encoding
code. For more information, see UTF-7 code paths are obsolete.

EventSourceSupport: When set to false , removes EventSource-related code and logic.

HttpActivityPropagationSupport: When set to false , removes code related to
diagnostics support for System.Net.Http.

InvariantGlobalization: When set to true , removes globalization-specific code and
data. For more information, see Invariant mode.

MetadataUpdaterSupport: When set to false , removes metadata update-specific logic
related to hot reload.

MetricsSupport: When set to false , removes support for System.Diagnostics.Metrics
instrumentation.

StackTraceSupport (.NET 8+): When set to false , removes support for generating stack
traces (for example, Environment.StackTrace or Exception.ToString) by the runtime. The
amount of information that is removed from stack trace strings might depend on other
deployment options. This option does not affect stack traces generated by debuggers.

UseNativeHttpHandler: When set to true , uses the default platform implementation of
HttpMessageHandler for Android and iOS and removes the managed implementation.

UseSystemResourceKeys: When set to true , strips exception messages for System.*
assemblies. When an exception is thrown from a System.* assembly, the message is a
simplified resource ID instead of the full message.

XmlResolverIsNetworkingEnabledByDefault (.NET 8+): When set to false , removes
support for resolving non-file URLs in System.Xml. Only file-system resolving is
supported.

These properties cause the related code to be trimmed and also disable features via the
runtimeconfig file. For more information about these properties, including the
corresponding runtimeconfig options, see feature switches . Some SDKs might have
default values for these properties.

Framework features disabled when trimming


The following features are incompatible with trimming because they require code that's
not statically referenced. These features are disabled by default in trimmed apps.

Warning

Enable these features at your own risk. They are likely to break trimmed apps
without extra work to preserve the dynamically referenced code.

<BuiltInComInteropSupport>

Built-in COM support is disabled.

<CustomResourceTypesSupport>

Use of custom resource types isn't supported. ResourceManager code paths that
use reflection for custom resource types are trimmed.

<EnableCppCLIHostActivation>

C++/CLI host activation is disabled.

<EnableUnsafeBinaryFormatterInDesigntimeLicenseContextSerialization>

DesigntimeLicenseContextSerializer use of BinaryFormatter serialization is


disabled.
<StartupHookSupport>

Running code before Main with DOTNET_STARTUP_HOOKS isn't supported. For more
information, see host startup hook .
Prepare .NET libraries for trimming
Article • 09/02/2023

The .NET SDK makes it possible to reduce the size of self-contained apps by trimming.
Trimming removes unused code from the app and its dependencies. Not all code is
compatible with trimming. .NET provides trim analysis warnings to detect patterns that
may break trimmed apps. This article:

Describes how to prepare libraries for trimming.


Provides recommendations for resolving common trimming warnings.

Prerequisites
.NET 8 SDK or later.

Enable library trim warnings


Trim warnings in a library can be found with either of the following methods:

Enabling project-specific trimming using the IsTrimmable property.


Creating a trimming test app that uses the library and enabling trimming for the
test app. It's not necessary to reference all the APIs in the library.

We recommend using both approaches. Project-specific trimming is convenient and


shows trim warnings for one project, but relies on the references being marked trim-
compatible to see all warnings. Trimming a test app is more work, but shows all
warnings.

Enable project-specific trimming


Set <IsTrimmable>true</IsTrimmable> in the project file.

XML

<PropertyGroup>
<IsTrimmable>true</IsTrimmable>
</PropertyGroup>

Setting <IsTrimmable>true</IsTrimmable> marks the assembly as "trimmable" and


enables trim warnings. "Trimmable" means the project:

Is considered compatible with trimming.
Shouldn't generate trim-related warnings when building.
When used in a trimmed app, has its unused members trimmed in the final output.

The IsTrimmable property defaults to true when configuring a project as AOT-


compatible with <IsAotCompatible>true</IsAotCompatible> . For more information, see
AOT-compatibility analyzers.

To generate trim warnings without marking the project as trim-compatible, use


<EnableTrimAnalyzer>true</EnableTrimAnalyzer> rather than
<IsTrimmable>true</IsTrimmable> .

Show all warnings with test app


To show all analysis warnings for a library, the trimmer must analyze the
implementation:

Of the library.
All dependencies the library uses.

When building and publishing a library:

The implementations of the dependencies aren't available.


The available reference assemblies don't have enough information for the trimmer
to determine if they're compatible with trimming.

Because of the dependency limitations, a self-contained test app which uses the library
and its dependencies must be created. The test app includes all the information the
trimmer requires to issue warning on trim incompatibilities in:

The library code.


The code that the library references from its dependencies.

Note: If the library has different behavior depending on the target framework, create a
trimming test app for each of the target frameworks that support trimming. For
example, if the library uses conditional compilation such as #if NET7_0 to change
behavior.

To create the trimming test app:

Create a separate console application project.


Add a reference to the library.
Modify the project similar to the project shown below using the following list:
Note: If the library targets a TFM that isn't trimmable, for example net472 or
netstandard2.0 , there's no benefit to creating a trimming test app. Trimming is only
supported for .NET 6 and later.

Add <PublishTrimmed>true</PublishTrimmed> .
Add a reference to the library project with <ProjectReference
Include="/Path/To/YourLibrary.csproj" /> .

Specify the library as a trimmer root assembly with <TrimmerRootAssembly


Include="YourLibraryName" /> .
TrimmerRootAssembly ensures that every part of the library is analyzed. It tells

the trimmer that this assembly is a "root". A "root" assembly means the trimmer
analyzes every call in the library and traverses all code paths that originate from
that assembly.

.csproj file
XML

<Project Sdk="Microsoft.NET.Sdk">

<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net8.0</TargetFramework>
<PublishTrimmed>true</PublishTrimmed>
</PropertyGroup>

<ItemGroup>
<ProjectReference Include="..\MyLibrary\MyLibrary.csproj" />
<TrimmerRootAssembly Include="MyLibrary" />
</ItemGroup>

</Project>

Once the project file is updated, run dotnet publish with the target runtime identifier
(RID).

.NET CLI

dotnet publish -c Release -r <RID>

Follow the preceding pattern for multiple libraries. To see trim analysis warnings for
more than one library at a time, add them all to the same project as ProjectReference
and TrimmerRootAssembly items. Adding all the libraries to the same project with
ProjectReference and TrimmerRootAssembly items warns about dependencies if any of
the root libraries use a trim-unfriendly API in a dependency. To see warnings that have
to do with only a particular library, reference that library only.
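For example, a test app that roots two libraries at once might use an item group like
the following (the library names are illustrative):

XML

<ItemGroup>
  <ProjectReference Include="..\LibraryA\LibraryA.csproj" />
  <ProjectReference Include="..\LibraryB\LibraryB.csproj" />
  <TrimmerRootAssembly Include="LibraryA" />
  <TrimmerRootAssembly Include="LibraryB" />
</ItemGroup>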

Note: The analysis results depend on the implementation details of the dependencies.
Updating to a new version of a dependency may introduce analysis warnings:

If the new version added non-understood reflection patterns.


Even if there were no API changes.
Introducing trim analysis warnings is a breaking change when the library is used
with PublishTrimmed .

Resolve trim warnings


The preceding steps produce warnings about code that may cause problems when used
in a trimmed app. The following examples show the most common warnings with
recommendations for fixing them.

RequiresUnreferencedCode
Consider the following code that uses [RequiresUnreferencedCode] to indicate that the
specified method requires dynamic access to code that is not referenced statically, for
example, through System.Reflection.

C#

public class MyLibrary


{
public static void MyMethod()
{
// warning IL2026 :
// MyLibrary.MyMethod: Using 'MyLibrary.DynamicBehavior'
// which has [RequiresUnreferencedCode] can break functionality
// when trimming app code.
DynamicBehavior();
}

[RequiresUnreferencedCode(
"DynamicBehavior is incompatible with trimming.")]
static void DynamicBehavior()
{
}
}

The preceding highlighted code indicates the library calls a method that has explicitly
been annotated as incompatible with trimming. To get rid of the warning, consider
whether MyMethod needs to call DynamicBehavior . If so, annotate the caller MyMethod with
[RequiresUnreferencedCode] which propagates the warning so that callers of MyMethod

get a warning instead:

C#

public class MyLibrary


{
[RequiresUnreferencedCode("Calls DynamicBehavior.")]
public static void MyMethod()
{
DynamicBehavior();
}

[RequiresUnreferencedCode(
"DynamicBehavior is incompatible with trimming.")]
static void DynamicBehavior()
{
}
}

Once you've propagated the attribute all the way up to the public API, apps calling the
library:

Get warnings only for public methods that aren't trimmable.


Don't get warnings like IL2104: Assembly 'MyLibrary' produced trim warnings .
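For illustration, a trimmed app calling the annotated method now sees the warning at
its own call site instead of an assembly-level warning (the exact warning text here is a
sketch):

C#

static void Main()
{
    // warning IL2026: Using member 'MyLibrary.MyMethod()' which has
    // 'RequiresUnreferencedCodeAttribute' can break functionality when
    // trimming application code: Calls DynamicBehavior.
    MyLibrary.MyMethod();
}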

DynamicallyAccessedMembers
C#

public class MyLibrary3


{
static void UseMethods(Type type)
{
        // warning IL2070: MyLibrary.UseMethods(Type): 'this' argument does not
        // satisfy 'DynamicallyAccessedMemberTypes.PublicMethods' in call to
        // 'System.Type.GetMethods()'.
        // The parameter 't' of method 'MyLibrary.UseMethods(Type)' doesn't have
        // matching annotations.
foreach (var method in type.GetMethods())
{
// ...
}
}
}
In the preceding code, UseMethods is calling a reflection method that has a
[DynamicallyAccessedMembers] requirement. The requirement states that the type's
public methods are available. Satisfy the requirement by adding the same requirement
to the parameter of UseMethods .

C#

static void UseMethods(
    // State the requirement in the UseMethods parameter.
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
    Type type)
{
// ...
}

Now any calls to UseMethods produce warnings if they pass in values that don't satisfy
the PublicMethods requirement. Similar to [RequiresUnreferencedCode] , once you have
propagated up such warnings to public APIs, you're done.

In the following example, an unknown Type flows into the annotated method parameter.
The unknown Type is from a field:

C#

static Type type;


static void UseMethodsHelper()
{
    // warning IL2077: MyLibrary.UseMethodsHelper(Type): 'type' argument does not
    // satisfy 'DynamicallyAccessedMemberTypes.PublicMethods' in call to
    // 'MyLibrary.UseMethods(Type)'.
    // The field 'System.Type MyLibrary::type' does not have matching annotations.
    UseMethods(type);
}

Similarly, here the problem is that the field type is passed into a parameter with these
requirements. It's fixed by adding [DynamicallyAccessedMembers] to the field.
[DynamicallyAccessedMembers] warns about code that assigns incompatible values to the

field. Sometimes this process continues until a public API is annotated, and other times
it ends when a concrete type flows into a location with these requirements. For example:

C#
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
static Type type;

static void UseMethodsHelper()


{
MyLibrary.type = typeof(System.Tuple);
}

In this case, the trim analysis keeps public methods of Tuple, and produces further
warnings.

Recommendations
Avoid reflection when possible. When using reflection, minimize reflection scope
so that it's reachable only from a small part of the library.
Annotate code with DynamicallyAccessedMembers to statically express the trimming
requirements when possible.
Consider reorganizing code to make it follow an analyzable pattern that can be
annotated with DynamicallyAccessedMembers .
When code is incompatible with trimming, annotate it with
RequiresUnreferencedCode and propagate this annotation to callers until the

relevant public APIs are annotated.


Avoid using code that uses reflection in a way not understood by the static
analysis. For example, reflection in static constructors should be avoided. Using
statically unanalyzable reflection in static constructors results in the warning
propagating to all members of the class.
Avoid annotating virtual methods or interface methods. Annotating virtual or
interface methods requires all overrides to have matching annotations.
If an API is mostly trim-incompatible, alternative coding approaches to the API may
need to be considered. A common example is reflection-based serializers. In these
cases, consider adopting other technology like source generators to produce code
that is more easily statically analyzed. For example, see How to use source
generation in System.Text.Json

Resolve warnings for non-analyzable patterns


It's better to resolve warnings by expressing the intent of your code using
[RequiresUnreferencedCode] and DynamicallyAccessedMembers when possible. However,

in some cases, you may be interested in enabling trimming of a library that uses
patterns that can't be expressed with those attributes, or without refactoring existing
code. This section describes some advanced ways to resolve trim analysis warnings.

Warning

These techniques might change the behavior of your code or result in run-time
exceptions if used incorrectly.

UnconditionalSuppressMessage
Consider code that:

The intent can't be expressed with the annotations.


Generates a warning but doesn't represent a real issue at run time.

Such warnings can be suppressed with UnconditionalSuppressMessageAttribute. This
attribute is similar to SuppressMessageAttribute , but it's persisted in IL and respected
during trim analysis.

Warning

When suppressing warnings, you are responsible for guaranteeing the trim
compatibility of the code based on invariants that you know to be true by
inspection and testing. Use caution with these annotations, because if they are
incorrect, or if invariants of your code change, they might end up hiding incorrect
code.

For example:

C#

class TypeCollection
{
Type[] types;

    // Ensure that only types with preserved constructors are stored in the array.
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicParameterlessConstructor)]
    public Type this[int i]
    {
        // warning IL2063: TypeCollection.Item.get: Value returned from method
        // 'TypeCollection.Item.get' can't be statically determined and may not
        // meet 'DynamicallyAccessedMembersAttribute' requirements.
        get => types[i];
        set => types[i] = value;
    }
}

class TypeCreator
{
TypeCollection types;

public void CreateType(int i)


{
types[i] = typeof(TypeWithConstructor);
Activator.CreateInstance(types[i]); // No warning!
}
}

class TypeWithConstructor
{
}

In the preceding code, the indexer property has been annotated so that the returned
Type meets the requirements of CreateInstance . This ensures that the

TypeWithConstructor constructor is kept, and that the call to CreateInstance doesn't


warn. The indexer setter annotation ensures that any types stored in the Type[] have a
constructor. However, the analysis isn't able to see this and produces a warning for the
getter, because it doesn't know that the returned type has its constructor preserved.

If you're sure that the requirements are met, you can silence this warning by adding
[UnconditionalSuppressMessage] to the getter:

C#

class TypeCollection
{
Type[] types;

    // Ensure that only types with preserved constructors are stored in the array.
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicParameterlessConstructor)]
    public Type this[int i]
    {
        [UnconditionalSuppressMessage("ReflectionAnalysis", "IL2063",
            Justification = "The list only contains types stored through the annotated setter.")]
        get => types[i];
        set => types[i] = value;
    }
}
class TypeCreator
{
TypeCollection types;

public void CreateType(int i)


{
types[i] = typeof(TypeWithConstructor);
Activator.CreateInstance(types[i]); // No warning!
}
}

class TypeWithConstructor
{
}

It's important to underline that it's only valid to suppress a warning if there are
annotations or code that ensure the reflected-on members are visible targets of
reflection. It isn't sufficient that the member was a target of a call, field or property
access. It may appear to be the case sometimes but such code is bound to break
eventually as more trimming optimizations are added. Properties, fields, and methods
that aren't visible targets of reflection could be inlined, have their names removed, get
moved to different types, or otherwise optimized in ways that break reflecting on them.
When suppressing a warning, it's only permissible to reflect on targets that were visible
targets of reflection to the trimming analyzer elsewhere.

C#

// Invalid justification and suppression: property being non-reflectively
// used by the app doesn't guarantee that the property will be available
// for reflection. Properties that are not visible targets of reflection
// are already optimized away with Native AOT trimming and may be
// optimized away for non-native deployment in the future as well.
[UnconditionalSuppressMessage("ReflectionAnalysis", "IL2063",
    Justification = "*INVALID* Only need to serialize properties that are used by "
        + "the app. *INVALID*")]
public string Serialize(object o)
{
StringBuilder sb = new StringBuilder();
foreach (var property in o.GetType().GetProperties())
{
AppendProperty(sb, property, o);
}
return sb.ToString();
}

DynamicDependency
The [DynamicDependency] attribute can be used to indicate that a member has a
dynamic dependency on other members. This results in the referenced members being
kept whenever the member with the attribute is kept, but doesn't silence warnings on its
own. Unlike the other attributes, which inform the trim analysis about the reflection
behavior of the code, [DynamicDependency] only keeps other members. This can be used
together with [UnconditionalSuppressMessage] to fix some analysis warnings.

Warning

Use the [DynamicDependency] attribute only as a last resort when the other approaches
aren't viable. It's preferable to express the reflection behavior using
[RequiresUnreferencedCode] or [DynamicallyAccessedMembers].

C#

[DynamicDependency("Helper", "MyType", "MyAssembly")]
static void RunHelper()
{
    var helper = Assembly.Load("MyAssembly").GetType("MyType").GetMethod("Helper");
    helper.Invoke(null, null);
}

Without DynamicDependency , trimming might remove Helper from MyAssembly , or
remove MyAssembly completely if it's not referenced elsewhere, producing a warning
that indicates a possible failure at run time. The attribute ensures that Helper is kept.

The attribute specifies the members to keep via a string or via
DynamicallyAccessedMemberTypes . The type and assembly are either implicit in the
attribute context, or explicitly specified in the attribute (by Type , or by string s for the
type and assembly name).

The type and member strings use a variation of the C# documentation comment ID
string format, without the member prefix. The member string shouldn't include the
name of the declaring type, and may omit parameters to keep all members of the
specified name. Some examples of the format are shown in the following code:

C#

[DynamicDependency("MyMethod()")]
[DynamicDependency("MyMethod(System.Boolean,System.String)")]
[DynamicDependency("MethodOnDifferentType()", typeof(ContainingType))]
[DynamicDependency("MemberName")]
[DynamicDependency("MemberOnUnreferencedAssembly", "ContainingType",
    "UnreferencedAssembly")]
[DynamicDependency("MemberName", "Namespace.ContainingType.NestedType",
    "Assembly")]
// generics
[DynamicDependency("GenericMethodName``1")]
[DynamicDependency("GenericMethod``2(``0,``1)")]
[DynamicDependency(
    "MethodWithGenericParameterTypes(System.Collections.Generic.List{System.String})")]
[DynamicDependency("MethodOnGenericType(`0)", "GenericType`1",
    "UnreferencedAssembly")]
[DynamicDependency("MethodOnGenericType(`0)", typeof(GenericType<>))]

The [DynamicDependency] attribute is designed to be used in cases where a method
contains reflection patterns that can't be analyzed even with the help of
DynamicallyAccessedMembersAttribute .
IL2001: Descriptor file tried to preserve
fields on type that has no fields
Article • 03/11/2022

Cause
An XML descriptor file is trying to preserve fields on a type with no fields.

Rule description
Descriptor files are used to direct the IL trimmer to always keep certain members in an
assembly, regardless of whether the trimmer can find references to them. However,
trying to preserve members that cannot be found will trigger a warning.

Example
XML

<linker>
<assembly fullname="test">
<type fullname="TestType" preserve="fields" />
</assembly>
</linker>

C#

// IL2001: Type 'TestType' has no fields to preserve


class TestType
{
void OnlyMethod() {}
}
IL2002: Descriptor file tried to preserve
methods on type that has no methods
Article • 03/11/2022

Cause
An XML descriptor file is trying to preserve methods on a type with no methods.

Rule description
Descriptor files are used to direct the IL trimmer to always keep certain members in an
assembly, regardless of whether the trimmer can find references to them. However,
trying to preserve members that cannot be found will trigger a warning.

Example
XML

<linker>
<assembly fullname="test">
<type fullname="TestType" preserve="methods" />
</assembly>
</linker>

C#

// IL2002: Type 'TestType' has no methods to preserve


struct TestType
{
public int Number;
}
IL2003: Could not resolve dependency
assembly specified in a
'PreserveDependency' attribute
Article • 03/11/2022

Cause
The assembly specified in a PreserveDependencyAttribute could not be resolved.

Rule description
The trimmer keeps a cache of the assemblies it has seen. If the assembly specified in
the PreserveDependencyAttribute is not found in this cache, the trimmer does not have a
way to find the member to preserve.

Example
C#

// IL2003: Could not resolve dependency assembly 'NonExistentAssembly'
// specified in a 'PreserveDependency' attribute
[PreserveDependency("MyMethod", "MyType", "NonExistentAssembly")]
void TestMethod()
{
}
IL2004: Could not resolve dependency
type specified in a
'PreserveDependency' attribute
Article • 03/11/2022

Cause
The type specified in a PreserveDependencyAttribute could not be resolved.

Rule description
The trimmer keeps a cache of the assemblies it has seen. If the type specified in the
PreserveDependencyAttribute is not found inside an assembly in this cache, the trimmer
does not have a way to find the member to preserve.

Example
C#

// IL2004: Could not resolve dependency type 'NonExistentType' specified in
// a 'PreserveDependency' attribute
[PreserveDependency("MyMethod", "NonExistentType", "MyAssembly")]
void TestMethod()
{
}
IL2005: Could not resolve dependency
member specified in a
'PreserveDependency' attribute
Article • 03/11/2022

Cause
The member of a type specified in a PreserveDependencyAttribute could not be
resolved.

Example
C#

// IL2005: Could not resolve dependency member 'NonExistentMethod' declared
// on type 'MyType' specified in a 'PreserveDependency' attribute
[PreserveDependency("NonExistentMethod", "MyType", "MyAssembly")]
void TestMethod()
{
}
IL2007: Could not resolve assembly
specified in descriptor file
Article • 03/11/2022

Cause
An assembly specified in a descriptor file could not be resolved.

Rule description
Descriptor files are used to instruct the trimmer to always keep certain items in an
assembly, regardless of whether the trimmer could find any references to them.

The assembly specified in the descriptor file by its full name could not be found in any
of the assemblies seen by the trimmer.

Example
XML

<!-- IL2007: Could not resolve assembly 'NonExistentAssembly' -->
<linker>
  <assembly fullname="NonExistentAssembly" />
</linker>
IL2008: Could not resolve type specified
in descriptor file
Article • 03/11/2022

Cause
A type specified in a descriptor file could not be resolved.

Rule description
Descriptor files are used to instruct the trimmer to always keep certain items in an
assembly, regardless of whether the trimmer could find any references to them.

A type specified in a descriptor file could not be found in the assembly matching the
fullname argument that was passed to the parent of the type element.

Example
XML

<!-- IL2008: Could not resolve type 'NonExistentType' -->
<linker>
  <assembly fullname="MyAssembly">
    <type fullname="NonExistentType" />
  </assembly>
</linker>
IL2009: Could not resolve method
specified in descriptor file
Article • 03/11/2022

Cause
A method specified on a type in a descriptor file could not be resolved.

Rule description
Descriptor files are used to instruct the trimmer to always keep certain items in an
assembly, regardless of whether the trimmer could find any references to them.

A method specified in a descriptor file could not be found in the type matching the
fullname argument that was passed to the parent of the method element.

Example
XML

<!-- IL2009: Could not find method 'NonExistentMethod' on type 'MyType' -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<method name="NonExistentMethod" />
</type>
</assembly>
</linker>
IL2010: Invalid value on a method
substitution
Article • 03/11/2022

Cause
The value used in a substitution file for replacing a method's body does not represent a
value of a built-in type or match the return type of the method.

Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with a throw statement or to return constant statements.

The value passed to the value argument of a method element could not be converted
by the trimmer to a type matching the return type of the specified method.

Example
XML

<!-- IL2010: Invalid value for 'MyType.MyMethodReturningInt()' stub -->
<linker>
  <assembly fullname="MyAssembly">
    <type fullname="MyType">
      <method name="MyMethodReturningInt" body="stub" value="NonNumber" />
    </type>
  </assembly>
</linker>
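For comparison, a substitution that would be accepted supplies a value convertible to
the method's return type. The following is a sketch assuming MyMethodReturningInt
returns int (the value 42 is a hypothetical stand-in):

XML

<linker>
  <assembly fullname="MyAssembly">
    <type fullname="MyType">
      <method name="MyMethodReturningInt" body="stub" value="42" />
    </type>
  </assembly>
</linker>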
IL2011: Unknown body modification
action
Article • 03/11/2022

Cause
The action value passed to the body argument of a method element in a substitution file
is invalid.

Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with a throw statement or to return constant statements.

The value passed to the body argument of a method element was invalid. The only
supported options for this argument are remove and stub .

Example
XML

<!-- IL2011: Unknown body modification 'nonaction' for 'MyType.MyMethod()' -->
<linker>
  <assembly fullname="MyAssembly">
    <type fullname="MyType">
      <method name="MyMethod" body="nonaction" value="NonNumber" />
    </type>
  </assembly>
</linker>
IL2012: Could not find field on type in
substitution file
Article • 03/11/2022

Cause
A field specified for substitution in a substitution file could not be found.

Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with either a throw or return constant statements.

A field specified in a substitution file could not be found in the type matching the
fullname argument that was passed to the parent of the field element.

Example
XML

<!-- IL2012: Could not find field 'NonExistentField' on type 'MyType' -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<field name="NonExistentField" />
</type>
</assembly>
</linker>
IL2013: Substituted fields must be static
or constant
Article • 03/11/2022

Cause
A field specified for substitution in a substitution file is non-static or constant.

Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with either a throw or return constant statements.

The trimmer cannot substitute non-static or constant fields.

Example
XML

<!-- IL2013: Substituted field 'MyType.InstanceField' needs to be static field -->
<linker>
  <assembly fullname="MyAssembly">
    <type fullname="MyType">
      <field name="InstanceField" value="5" />
    </type>
  </assembly>
</linker>
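A valid substitution targets a static field. The following sketch assumes MyType
declares a hypothetical static int field named StaticField:

XML

<linker>
  <assembly fullname="MyAssembly">
    <type fullname="MyType">
      <field name="StaticField" value="5" />
    </type>
  </assembly>
</linker>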
IL2014: Missing value for field
substitution
Article • 03/11/2022

Cause
A field was specified for substitution in a substitution file but no value to be substituted
for was given.

Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with either a throw or return constant statements.

A field element specified in the substitution file does not specify the required value
argument.

Example
XML

<!-- IL2014: Missing 'value' attribute for field 'MyType.MyField' -->
<linker>
  <assembly fullname="MyAssembly">
    <type fullname="MyType">
      <field name="MyField" />
    </type>
  </assembly>
</linker>
IL2015: Invalid value for field
substitution
Article • 03/11/2022

Cause
The value used in a substitution file for replacing a field's value does not represent a
value of a built-in type or does not match the type of the field.

Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with either a throw or return constant statements.

The value passed to the value argument of a field element could not be converted by
the trimmer to a type matching the return type of the specified field.

Example
XML

<!-- IL2015: Invalid value 'NonNumber' for 'MyType.IntField' -->
<linker>
  <assembly fullname="MyAssembly">
    <type fullname="MyType">
      <field name="IntField" value="NonNumber" />
    </type>
  </assembly>
</linker>
IL2016: Could not find event on type
Article • 04/24/2024

Cause
Could not find event on type.

Rule description
An event specified in an XML file for the trimmer could not be found in the type
matching the fullname argument that was passed to the parent of the event element.

Example
XML

<!-- IL2016: Could not find event 'NonExistentEvent' on type 'MyType' -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<event name="NonExistentEvent" />
</type>
</assembly>
</linker>

IL2017: Could not find property on type
Article • 04/24/2024

Cause
Could not find property on type.

Rule description
A property specified in an XML file for the trimmer could not be found in the type
matching the fullname argument that was passed to the parent of the property element.

Example
XML

<!-- IL2017: Could not find property 'NonExistentProperty' on type 'MyType' -->
<linker>
  <assembly fullname="MyAssembly">
    <type fullname="MyType">
      <property name="NonExistentProperty" />
    </type>
  </assembly>
</linker>

IL2018: Could not find the get accessor
of property on type in descriptor file
Article • 03/11/2022

Cause
A get accessor specified in a descriptor file could not be found.

Rule description
Descriptor files are used to instruct the trimmer to always keep certain items in an
assembly, regardless of whether the trimmer could find any references to them.

A get accessor specified in a descriptor file could not be found in the property
matching the signature argument that was passed to the property element.

Example
XML

<!-- IL2018: Could not find the get accessor of property 'SetOnlyProperty'
     on type 'MyType' -->
<linker>
  <assembly fullname="MyAssembly">
    <type fullname="MyType">
      <property signature="System.Boolean SetOnlyProperty" accessors="get" />
    </type>
  </assembly>
</linker>
IL2019: Could not find the set accessor
of property on type in descriptor file
Article • 03/11/2022

Cause
A set accessor specified in a descriptor file could not be found.

Rule description
Descriptor files are used to instruct the trimmer to always keep certain items in an
assembly, regardless of whether the trimmer could find any references to them.

A set accessor specified in a descriptor file could not be found in the property
matching the signature argument that was passed to the property element.

Example
XML

<!-- IL2019: Could not find the set accessor of property 'GetOnlyProperty'
     on type 'MyType' -->
<linker>
  <assembly fullname="MyAssembly">
    <type fullname="MyType">
      <property signature="System.Boolean GetOnlyProperty" accessors="set" />
    </type>
  </assembly>
</linker>
IL2022: Could not find matching
constructor for custom attribute
specified in custom attribute
annotations file
Article • 03/11/2022

Cause
The constructor of a custom attribute specified in a custom attribute annotations file
could not be found.

Rule description
Custom attribute annotation files are used to instruct the trimmer to behave as if the
specified item has a given attribute. Attribute annotations can only be used to add
attributes that have an effect on the trimmer behavior; all other attributes are ignored.
Attributes added via attribute annotations only influence the trimmer behavior and they
are never added to the output assembly.

A value passed to an argument child of an attribute element could not be converted by
the trimmer to a type matching the attribute's constructor argument type.

Example
XML

<!-- IL2022: Could not find matching constructor for custom attribute
'attribute-type' arguments -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<attribute fullname="AttributeWithNoParametersAttribute">
<argument>ExtraArgumentValue</argument>
</attribute>
</type>
</assembly>
</linker>
IL2023: There is more than one return
child element specified for a method in
a custom attribute annotations file
Article • 03/11/2022

Cause
A method has more than one return element specified. There can only be one return
element when putting an attribute on the return parameter of a method.

Rule description
Custom attribute annotation files are used to instruct the trimmer to behave as if the
specified item has a given attribute. Attribute annotations can only be used to add
attributes that have an effect on the trimmer behavior. All other attributes are ignored.
Attributes added via attribute annotations only influence the trimmer behavior, and they
are never added to the output assembly.

A method element has more than one return element specified. Trimmer only allows
one attribute annotation on the return type of a given method.

Example
XML

<!-- IL2023: There is more than one 'return' child element specified for
method 'method' -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<method name="MyMethod">
<return>
<attribute fullname="FirstAttribute"/>
</return>
<return>
<attribute fullname="SecondAttribute"/>
</return>
</method>
</type>
</assembly>
</linker>
IL2024: There is more than one value
specified for the same method
parameter in a custom attribute
annotations file
Article • 03/11/2022

Cause
A method parameter has more than one value element specified. There can only be one
value specified for each method parameter.

Rule description
Custom attribute annotation files are used to instruct the trimmer to behave as if the
specified item has a given attribute. Attribute annotations can only be used to add
attributes that have an effect on the trimmer behavior; all other attributes are ignored.
Attributes added via attribute annotations only influence the trimmer behavior and they
are never added to the output assembly.

There is more than one parameter element with the same name value in a given method .
All attributes on a parameter should be put in a single element.

Example
XML

<!-- IL2024: More than one value specified for parameter 'parameter' of
method 'method' -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<method name="MyMethod">
<parameter name="methodParameter">
<attribute fullname="FirstAttribute"/>
</parameter>
<parameter name="methodParameter">
<attribute fullname="SecondAttribute"/>
</parameter>
</method>
</type>
</assembly>
</linker>
IL2025: Duplicate preserve of a member
in a descriptor file
Article • 03/11/2022

Cause
A member on a type is marked for preservation more than once in a descriptor file.

Rule description
Descriptor files are used to instruct the trimmer to always keep certain items in an
assembly, regardless of whether the trimmer could find any references to them.

Members in this file should only appear once.

Example
XML

<!-- IL2025: Duplicate preserve of 'method' -->
<linker>
  <assembly fullname="MyAssembly">
    <type fullname="MyType">
      <method name="MyMethod"/>
      <method name="MyMethod"/>
    </type>
  </assembly>
</linker>
IL2026: Members attributed with
RequiresUnreferencedCode may break
when trimming
Article • 03/11/2022

Cause
Calling (or accessing via reflection) a member annotated with
RequiresUnreferencedCodeAttribute.

For example:

C#

[RequiresUnreferencedCode("Use 'MethodFriendlyToTrimming' instead",
    Url = "http://help/unreferencedcode")]
void MethodWithUnreferencedCodeUsage()
{
}

void TestMethod()
{
    // IL2026: Using method 'MethodWithUnreferencedCodeUsage' which has
    // 'RequiresUnreferencedCodeAttribute' can break functionality when
    // trimming application code. Use 'MethodFriendlyToTrimming' instead.
    // http://help/unreferencedcode
    MethodWithUnreferencedCodeUsage();
}

Rule description
RequiresUnreferencedCodeAttribute indicates that the member references code that
may be removed by the trimmer.

Common examples include:

Load(String) is marked as RequiresUnreferencedCode because the Assembly being
loaded may access members that have been trimmed away. The trimmer removes
all members from the framework except the ones directly used by the application,
so it is likely that loading new assemblies at run time will try to access missing
members.

XmlSerializer is marked as RequiresUnreferencedCode because XmlSerializer uses
complex reflection to scan input types. The reflection cannot be tracked by the
trimmer, so members transitively used by the input types may be trimmed away.
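When the call can't be avoided, a common way to address IL2026 is to propagate the
annotation to the caller, so the warning is reported at the caller's own call sites
instead. A sketch, reusing MethodWithUnreferencedCodeUsage from the example above:

C#

// Propagating the annotation instead of suppressing the warning. TestMethod
// itself now carries 'RequiresUnreferencedCodeAttribute', so IL2026 moves to
// whoever calls TestMethod.
[RequiresUnreferencedCode("Calls MethodWithUnreferencedCodeUsage")]
void TestMethod()
{
    // No IL2026 here: the annotation on TestMethod acknowledges the risk.
    MethodWithUnreferencedCodeUsage();
}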
IL2027: Known trimmer attribute used
more than once on a single member
Article • 11/17/2021

Cause
Trimmer found multiple instances of the same trimmer-supported attribute on a single
member.
IL2028: Known trimmer attribute does
not have the required number of
parameters
Article • 11/17/2021

Cause
Trimmer found an instance of a known attribute that lacks the required constructor
parameters or has more than the accepted parameters. This can happen if a custom
assembly defines a custom attribute whose full name conflicts with the trimmer-known
attributes, since the trimmer recognizes custom attributes by matching namespace and
type name.
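For illustration, a full-name collision that could trigger this warning might look like
the following sketch. This is a hypothetical user-defined attribute, not code from the
trimmer; it mirrors the namespace and type name of a trimmer-known attribute but
declares a different constructor:

C#

// Hypothetical: this type shares its full name with the trimmer-known
// System.Diagnostics.CodeAnalysis.DynamicallyAccessedMembersAttribute but
// declares a parameterless constructor. Because the trimmer matches known
// attributes by namespace and type name, it expects the known constructor
// signature and warns when the parameters don't match.
namespace System.Diagnostics.CodeAnalysis
{
    [AttributeUsage(AttributeTargets.All)]
    public sealed class DynamicallyAccessedMembersAttribute : Attribute
    {
        public DynamicallyAccessedMembersAttribute() { }
    }
}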
IL2029: Attribute element in custom
attribute annotations file does not have
required argument fullname or it is
empty
Article • 03/11/2022

Cause
An attribute element in a custom attribute annotations file does not have required
argument fullname or its value is an empty string.

Rule description
Custom attribute annotation files are used to instruct the trimmer to behave as if the
specified item has a given attribute. Attribute annotations can only be used to add
attributes that have an effect on the trimmer behavior. All other attributes are ignored.
Attributes added via attribute annotations only influence the trimmer behavior, and
they're never added to the output assembly.

All attribute elements must have the required fullname argument and its value cannot
be an empty string.

Example
XML

<!-- IL2029: 'attribute' element does not contain required attribute
     'fullname' or it's empty -->
<linker>
  <assembly fullname="MyAssembly">
    <attribute/>
  </assembly>
</linker>
IL2030: Could not resolve an assembly
specified in a custom attribute
annotations file
Article • 08/20/2022

Cause
Could not resolve assembly from the assembly argument of an attribute element in a
custom attribute annotations file.

Rule description
Custom attribute annotation files are used to instruct the trimmer to behave as if the
specified item has a given attribute. Attribute annotations can only be used to add
attributes that have an effect on the trimmer behavior; all other attributes are ignored.
Attributes added via attribute annotations only influence the trimmer behavior and they
are never added to the output assembly.

The value of the assembly argument in an attribute element does not match any of the
assemblies seen by the trimmer.

Example
XML

<!-- IL2030: Could not resolve assembly 'NonExistentAssembly' for attribute
     'MyAttribute' -->
<linker>
  <assembly fullname="MyAssembly">
    <attribute fullname="MyAttribute" assembly="NonExistentAssembly"/>
  </assembly>
</linker>
IL2031: Could not resolve custom
attribute specified in a custom attribute
annotations file
Article • 03/11/2022

Cause
Could not resolve custom attribute from the type name specified in the fullname
argument of an attribute element in a custom attribute annotations file.

Rule description
Custom attribute annotation files are used to instruct the trimmer to behave as if the
specified item has a given attribute. Attribute annotations can only be used to add
attributes which have effect on the trimmer behavior, all other attributes will be ignored.
Attributes added via attribute annotations only influence the trimmer behavior and they
are never added to the output assembly.

An attribute specified in a custom attribute annotations file could not be found in the
assembly matching the fullname argument that was passed to the parent of the
attribute element.

Example
XML

<!-- IL2031: Attribute type 'NonExistentTypeAttribute' could not be found -->
<linker>
  <assembly fullname="MyAssembly">
    <attribute fullname="NonExistentTypeAttribute"/>
  </assembly>
</linker>
IL2032: Unrecognized value passed to
the parameter 'parameter' of
'System.Activator.CreateInstance'
method
Article • 03/11/2022

Cause
The value passed to the assembly or type name of the CreateInstance method cannot
be statically analyzed. The trimmer cannot guarantee the availability of the target type.

Example
C#

void TestMethod(string assemblyName, string typeName)
{
    // IL2032 Trim analysis: Unrecognized value passed to the parameter
    // 'typeName' of method 'System.Activator.CreateInstance(string, string)'.
    // It's not possible to guarantee the availability of the target type.
    Activator.CreateInstance("MyAssembly", typeName);

    // IL2032 Trim analysis: Unrecognized value passed to the parameter
    // 'assemblyName' of method 'System.Activator.CreateInstance(string, string)'.
    // It's not possible to guarantee the availability of the target type.
    Activator.CreateInstance(assemblyName, "MyType");
}
IL2033: 'PreserveDependencyAttribute'
is deprecated
Article • 03/11/2022

Cause
PreserveDependencyAttribute was an internal attribute used by the trimmer and is not

supported. Use DynamicDependencyAttribute instead.

Example
C#

// IL2033: 'PreserveDependencyAttribute' is deprecated. Use
// 'DynamicDependencyAttribute' instead.
[PreserveDependency("OtherMethod")]
public void TestMethod()
{
}
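The fix suggested by the warning message is mechanical: replace the deprecated
attribute with DynamicDependencyAttribute, which takes the same member string. A
sketch:

C#

// Fixed: DynamicDependencyAttribute is the supported replacement and keeps
// 'OtherMethod' on the declaring type, just as the old attribute did.
[DynamicDependency("OtherMethod")]
public void TestMethod()
{
}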
IL2034: 'DynamicDependencyAttribute'
could not be analyzed
Article • 11/17/2021

Cause
Application contains an invalid use of DynamicDependencyAttribute. Ensure that you are
using one of the officially supported constructors.
IL2035: Unresolved assembly in
'DynamicDependencyAttribute'
Article • 03/11/2022

Cause
The value passed to the assemblyName parameter of a DynamicDependencyAttribute
could not be resolved.

Example
C#

// IL2035: Unresolved assembly 'NonExistentAssembly' in
// 'DynamicDependencyAttribute'
[DynamicDependency("Method", "Type", "NonExistentAssembly")]
public void TestMethod()
{
}
IL2036: Unresolved type in
'DynamicDependencyAttribute'
Article • 03/11/2022

Cause
The value passed to the typeName parameter of a DynamicDependencyAttribute could
not be resolved.

Example
C#

// IL2036: Unresolved type 'NonExistentType' in 'DynamicDependencyAttribute'
[DynamicDependency("Method", "NonExistentType", "MyAssembly")]
public void TestMethod()
{
}
IL2037: Unresolved member in
'DynamicDependencyAttribute'
Article • 03/11/2022

Cause
The value passed to the member signature parameter of a
DynamicDependencyAttribute could not resolve to any member. Ensure that the value
passed refers to an existing member and that it uses the correct ID string format .

Example
C#

// IL2037: Unresolved member 'NonExistentMethod' in
// 'DynamicDependencyAttribute'
[DynamicDependency("NonExistentMethod", "MyType", "MyAssembly")]
public void TestMethod()
{
}
IL2038: Missing name argument on a
resource element in a substitution file
Article • 03/11/2022

Cause
A resource element in a substitution file does not specify the required name argument.

Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with either a throw or return constant statements.

All resource elements in a substitution file must have the required name argument
specifying the resource to remove.

Example
XML

<!-- IL2038: Missing 'name' attribute for resource. -->
<linker>
  <assembly fullname="MyAssembly">
    <resource />
  </assembly>
</linker>
IL2039: Invalid action value on resource
element in a substitution file
Article • 03/11/2022

Cause
The value passed to the action argument of a resource element in a substitution file is
not valid.

Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with either a throw or return constant statements.

The value passed to the action argument of a resource element was invalid. The only
supported value for this argument is remove .

Example
XML

<!-- IL2039: Invalid value 'NonExistentAction' for attribute 'action' for
     resource 'MyResource'. -->
<linker>
  <assembly fullname="MyAssembly">
    <resource name="MyResource" action="NonExistentAction"/>
  </assembly>
</linker>
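Since remove is the only accepted action value, the corrected element reads:

XML

<linker>
  <assembly fullname="MyAssembly">
    <resource name="MyResource" action="remove"/>
  </assembly>
</linker>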
IL2040: Could not find embedded
resource specified in a substitution file
Article • 03/11/2022

Cause
No embedded resource with name matching the value used in the name argument could
be found in the specified assembly.

Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with either a throw or return constant statements.

The resource name in a substitution file could not be found in the specified assembly.
The name of the resource to remove must match the name of an embedded resource in
the assembly.

Example
XML

<!-- IL2040: Could not find embedded resource 'NonExistentResource' to
     remove in assembly 'MyAssembly'. -->
<linker>
  <assembly fullname="MyAssembly">
    <resource name="NonExistentResource" action="remove"/>
  </assembly>
</linker>
IL2041:
'DynamicallyAccessedMembersAttribute
' is not allowed on methods
Article • 03/11/2022

Cause
DynamicallyAccessedMembersAttribute was put directly on a method. This is only
allowed for instance methods on Type. This attribute should usually be placed on the
return value of the method or one of the parameters.

Example
C#

// IL2041: The 'DynamicallyAccessedMembersAttribute' is not allowed on
// methods. It is allowed on method return value or method parameters though.
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
[return: DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
public Type GetInterestingType()
{
}
IL2042: Could not find a unique backing
field to propagate the
'DynamicallyAccessedMembersAttribute
' annotation on a property
Article • 03/11/2022

Cause
The trimmer could not determine the backing field of a property annotated with
DynamicallyAccessedMembersAttribute.

Example
C#

// IL2042: Could not find a unique backing field for property 'MyProperty'
// to propagate 'DynamicallyAccessedMembersAttribute'
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
public Type MyProperty
{
    get { return GetTheValue(); }
    set { }
}

// To fix this, annotate the accessors manually:
public Type MyProperty
{
    [return: DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
    get { return GetTheValue(); }

    [param: DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
    set { }
}
IL2043:
'DynamicallyAccessedMembersAttribute
' on property conflicts with the same
attribute on its accessor method
Article • 03/11/2022

Cause
While propagating DynamicallyAccessedMembersAttribute from the annotated property
to its accessor method, the trimmer found that the accessor already has such an
attribute. Only the existing attribute will be used.

Example
C#

// IL2043: 'DynamicallyAccessedMembersAttribute' on property 'MyProperty'
// conflicts with the same attribute on its accessor 'get_MyProperty'.
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
public Type MyProperty
{
    [return: DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicFields)]
    get { return GetTheValue(); }
}
IL2044: Could not find any type in a
namespace specified in a descriptor file
Article • 03/11/2022

Cause
The descriptor file specified a namespace that has no types in it.

Rule description
Descriptor files are used to instruct the trimmer to always keep certain items in an
assembly, regardless of whether the trimmer could find any references to them.

A namespace specified in the descriptor file could not be found in the assembly
matching the fullname argument that was passed to the parent of the namespace
element.

Example
XML

<!-- IL2044: Could not find any type in namespace 'NonExistentNamespace' -->
<linker>
<assembly fullname="MyAssembly">
<namespace fullname="NonExistentNamespace" />
</assembly>
</linker>
IL2045: Custom attribute is referenced
in code but the trimmer was instructed
to remove all of its instances
Article • 03/11/2022

Cause
The trimmer was instructed to remove all instances of a custom attribute but kept its
type as part of its analysis. This will likely result in breaking the code where the custom
attribute's type is being used.

Example
XML

<linker>
<assembly fullname="MyAssembly">
<type fullname="MyAttribute">
<attribute internal="RemoveAttributeInstances"/>
</type>
</assembly>
</linker>

C#

// This attribute instance will be removed
[MyAttribute]
class MyType
{
}

public void TestMethod()
{
    // IL2045 for 'MyAttribute' reference
    typeof(MyType).GetCustomAttributes(typeof(MyAttribute), false);
}
IL2046: All interface implementations
and method overrides must have
annotations matching the interface or
overridden virtual method
'RequiresUnreferencedCodeAttribute'
annotations
Article • 08/20/2022

Cause
There is a mismatch in the RequiresUnreferencedCodeAttribute annotations between an
interface and its implementation or a virtual method and its override.

Example
A base member has the attribute but the derived member does not have the attribute.

C#

public class Base
{
    [RequiresUnreferencedCode("Message")]
    public virtual void TestMethod() {}
}

public class Derived : Base
{
    // IL2046: Base member 'Base.TestMethod' with
    // 'RequiresUnreferencedCodeAttribute' has a derived member
    // 'Derived.TestMethod()' without 'RequiresUnreferencedCodeAttribute'. For
    // all interfaces and overrides the implementation attribute must match the
    // definition attribute.
    public override void TestMethod() {}
}

A derived member has the attribute but the overridden base member does not have the
attribute.

C#
public class Base
{
    public virtual void TestMethod() {}
}

public class Derived : Base
{
    // IL2046: Member 'Derived.TestMethod()' with
    // 'RequiresUnreferencedCodeAttribute' overrides base member
    // 'Base.TestMethod()' without 'RequiresUnreferencedCodeAttribute'. For all
    // interfaces and overrides the implementation attribute must match the
    // definition attribute.
    [RequiresUnreferencedCode("Message")]
    public override void TestMethod() {}
}

An interface member has the attribute but its implementation does not have the
attribute.

C#

interface IRUC
{
    [RequiresUnreferencedCode("Message")]
    void TestMethod();
}

class Implementation : IRUC
{
    // IL2046: Interface member 'IRUC.TestMethod()' with
    // 'RequiresUnreferencedCodeAttribute' has an implementation member
    // 'Implementation.TestMethod()' without 'RequiresUnreferencedCodeAttribute'.
    // For all interfaces and overrides the implementation attribute must match
    // the definition attribute.
    public void TestMethod() { }
}

An implementation member has the attribute but the interface that it implements does
not have the attribute.

C#

interface IRUC
{
    void TestMethod();
}

class Implementation : IRUC
{
    [RequiresUnreferencedCode("Message")]
    // IL2046: Member 'Implementation.TestMethod()' with
    // 'RequiresUnreferencedCodeAttribute' implements interface member
    // 'IRUC.TestMethod()' without 'RequiresUnreferencedCodeAttribute'. For all
    // interfaces and overrides the implementation attribute must match the
    // definition attribute.
    public void TestMethod() { }
}
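As a sketch of one possible fix for the last case above, giving the implementation the same RequiresUnreferencedCode annotation as the interface member makes the two declarations match, so IL2046 is no longer reported. The snippet below adapts the example into a runnable program (the string return value and the "ok" message are illustrative additions, not part of the original example):

```csharp
using System;
using System.Diagnostics.CodeAnalysis;

interface IRUC
{
    [RequiresUnreferencedCode("Message")]
    string TestMethod();
}

class Implementation : IRUC
{
    // The implementation carries the same attribute as the interface member,
    // so the annotations match and IL2046 is not produced.
    [RequiresUnreferencedCode("Message")]
    public string TestMethod() => "ok";
}

class Program
{
    [RequiresUnreferencedCode("Calls IRUC.TestMethod")]
    static void Main()
    {
        IRUC instance = new Implementation();
        Console.WriteLine(instance.TestMethod());
    }
}
```

Note that the attribute is viral: callers of the annotated method (here, Main) must either carry RequiresUnreferencedCode themselves or suppress the resulting warning.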
IL2048: Internal trimmer attribute
'RemoveAttributeInstances' is being
used on a member
Article • 03/11/2022

Cause
Internal trimmer attribute RemoveAttributeInstances is being used on a member but it
can only be used on a type.

Example
XML

<!-- IL2048: Internal attribute 'RemoveAttributeInstances' can only be used
     on a type, but is being used on 'MyMethod' -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<method name="MyMethod">
<attribute internal="RemoveAttributeInstances" />
</method>
</type>
</assembly>
</linker>
IL2049: Unrecognized internal attribute
Article • 03/11/2022

Cause
An internal attribute name specified in a custom attribute annotations file is not
supported by the trimmer.

Example
XML

<!-- IL2049: Unrecognized internal attribute 'InvalidInternalAttributeName' -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<method name="MyMethod">
<attribute internal="InvalidInternalAttributeName" />
</method>
</type>
</assembly>
</linker>
IL2050: Correctness of COM interop
cannot be guaranteed
Article • 03/11/2022

Cause
The trimmer found a P/Invoke method that declares a parameter requiring COM
marshalling. Correctness of COM interop cannot be guaranteed after trimming.

Example
C#

// IL2050: M1(): P/invoke method 'M2(C)' declares a parameter with COM
// marshalling. Correctness of COM interop cannot be guaranteed after trimming.
// Interfaces and interface members might be removed.
static void M1 ()
{
    M2 (null);
}

[DllImport ("Foo")]
static extern void M2 (C autoLayout);

[StructLayout (LayoutKind.Auto)]
public class C
{
}
IL2051: Property element does not have
required argument name in custom
attribute annotations file
Article • 03/11/2022

Cause
A property element in a custom attribute annotations file does not specify the
required name argument.

Example
XML

<!-- IL2051: Property element does not contain attribute 'name' -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<attribute fullname="MyAttribute">
<property>UnspecifiedPropertyName</property>
</attribute>
</type>
</assembly>
</linker>
IL2052: Could not find property
specified in custom attribute
annotations file
Article • 03/11/2022

Cause
Could not find a property matching the value of the name argument specified in a
property element in a custom attribute annotations file.

Example
XML

<!-- IL2052: Property 'NonExistentPropertyName' could not be found -->


<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<attribute fullname="MyAttribute">
<property name="NonExistentPropertyName">SomeValue</property>
</attribute>
</type>
</assembly>
</linker>
IL2053: Invalid value used in property
element in custom attribute annotations
file
Article • 09/21/2022

Note

This warning code is obsolete in .NET 7 and no longer produced by the tools.

Cause
Value used in a property element in a custom attribute annotations file does not match
the type of the attribute's property.

Example
XML

<!-- IL2053: Invalid value 'StringValue' for property 'IntProperty' -->


<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<attribute fullname="MyAttribute">
<property name="IntProperty">StringValue</property>
</attribute>
</type>
</assembly>
</linker>
IL2054: Invalid argument value in
custom attribute annotations file
Article • 09/21/2022

Note

This warning code is obsolete in .NET 7 and no longer produced by the tools.

Cause
Value used in an argument element in a custom attribute annotations file does not
match the type of the attribute's constructor arguments.

Example
XML

<!-- IL2054: Invalid argument value 'NonExistentEnumValue' for parameter of
     type 'MyEnumType' of attribute 'AttributeWithEnumParameterAttribute' -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<attribute fullname="AttributeWithEnumParameterAttribute">
<argument>NonExistentEnumValue</argument>
</attribute>
</type>
</assembly>
</linker>
IL2055: Call to
'System.Type.MakeGenericType' cannot
be statically analyzed by the trimmer
Article • 03/11/2022

Cause
A call to Type.MakeGenericType(Type[]) cannot be statically analyzed by the trimmer.

Rule description
This can either be that the type on which MakeGenericType(Type[]) is called cannot be
statically determined, or that the type parameters to be used for generic arguments
cannot be statically determined. If the open generic type has
DynamicallyAccessedMembersAttribute annotations on any of its generic parameters,
the trimmer currently cannot validate that the requirements are fulfilled by the calling
method.

Example
C#

class Lazy<[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicParameterlessConstructor)] T>
{
    // ...
}

void TestMethod(Type unknownType)
{
    // IL2055 Trim analysis: Call to 'System.Type.MakeGenericType(Type[])'
    // can not be statically analyzed. It's not possible to guarantee the
    // availability of requirements of the generic type.
    typeof(Lazy<>).MakeGenericType(new Type[] { typeof(TestType) });

    // IL2055 Trim analysis: Call to 'System.Type.MakeGenericType(Type[])'
    // can not be statically analyzed. It's not possible to guarantee the
    // availability of requirements of the generic type.
    unknownType.MakeGenericType(new Type[] { typeof(TestType) });
}
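For contrast, here is a sketch of a call that does not warn. When the open generic type is statically known and its generic parameters carry no DynamicallyAccessedMembers annotations, the trimmer has nothing left to validate (Container below is a hypothetical stand-in, not a type from the example above):

```csharp
using System;

// Hypothetical generic type with no annotations on its generic parameter.
class Container<T>
{
}

class Program
{
    public static Type MakeClosed()
    {
        // Both the open generic type and the type argument are statically
        // known, and 'T' has no 'DynamicallyAccessedMembers' requirements,
        // so the trimmer can analyze this call and no IL2055 is produced.
        return typeof(Container<>).MakeGenericType(typeof(int));
    }

    static void Main()
    {
        Console.WriteLine(MakeClosed() == typeof(Container<int>));
    }
}
```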
IL2056: A 'System.Diagnostics.CodeAnalysis.DynamicallyAccessedMembersAttribute' annotation on a property conflicts with the same attribute on its backing field
Article • 03/11/2022

Cause
Property annotated with DynamicallyAccessedMembersAttribute also has that attribute
on its backing field.

Rule description
While propagating DynamicallyAccessedMembersAttribute from a property to its
backing field, the trimmer found its backing field to be already annotated. Only the
existing attribute will be used.

The trimmer will only propagate annotations to compiler generated backing fields,
making this warning only possible when the backing field is explicitly annotated with
CompilerGeneratedAttribute.

Example
C#

[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)]
[CompilerGenerated]
Type backingField;

[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)]
Type PropertyWithAnnotatedBackingField
{
    get { return backingField; }
    set { backingField = value; }
}
IL2057: Unrecognized value passed to
the typeName parameter of
'System.Type.GetType(String)'
Article • 03/11/2022

Cause
An unrecognized value was passed to the typeName parameter of Type.GetType(String).

Rule description
If the type name passed to the typeName parameter of GetType(String) is statically
known, the trimmer can make sure it is preserved and that the application code will
continue to work after trimming. If the type is unknown and the trimmer cannot see the
type being used anywhere else, the trimmer might end up removing it from the
application, potentially breaking it.

Example
C#

void TestMethod()
{
    string typeName = ReadName();

    // IL2057 Trim analysis: Unrecognized value passed to the parameter
    // 'typeName' of method 'System.Type.GetType(String typeName)'
    Type.GetType(typeName);
}
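One way to avoid the warning, sketched below, is to pass a constant type name so the trimmer can resolve, and therefore preserve, the requested type. 'System.String' is used here purely as an illustration of a statically known name:

```csharp
using System;

class Program
{
    public static Type? ResolveKnownType()
    {
        // The type name is a string literal, so the trimmer can see exactly
        // which type is requested and keep it in the trimmed output.
        return Type.GetType("System.String");
    }

    static void Main()
    {
        Console.WriteLine(ResolveKnownType() == typeof(string));
    }
}
```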
IL2058: Parameters passed to
'Assembly.CreateInstance' cannot be
statically analyzed
Article • 03/11/2022

Cause
A call to CreateInstance was found in the analyzed code.

Rule description
The trimmer does not analyze assembly instances and thus does not know which
assembly CreateInstance was called on.

Example
C#

void TestMethod()
{
    // IL2058 Trim analysis: Parameters passed to method
    // 'Assembly.CreateInstance(string)' cannot be analyzed. Consider using
    // methods 'System.Type.GetType' and 'System.Activator.CreateInstance'
    // instead.
    AssemblyLoadContext.Default.Assemblies
        .First(a => a.GetName().Name == "MyAssembly")
        .CreateInstance("MyType");

    // This can be replaced by
    Activator.CreateInstance(Type.GetType("MyType, MyAssembly"));
}

How to fix
The trimmer has support for Type.GetType(String). The resulting Type can be
passed to Activator.CreateInstance(Type) to create an instance of the type.
IL2059: Unrecognized value passed to the type parameter of 'System.Runtime.CompilerServices.RuntimeHelpers.RunClassConstructor'
Article • 03/11/2022

Cause
An unrecognized value was passed to the type parameter of
RuntimeHelpers.RunClassConstructor(RuntimeTypeHandle).

Rule description
If the type passed to RunClassConstructor(RuntimeTypeHandle) is not statically known,
the trimmer cannot guarantee the availability of the target static constructor.

Example
C#

void TestMethod(Type type)
{
    // IL2059 Trim analysis: Unrecognized value passed to the parameter 'type'
    // of method 'System.Runtime.CompilerServices.RuntimeHelpers.RunClassConstructor(RuntimeTypeHandle type)'.
    // It's not possible to guarantee the availability of the target static
    // constructor.
    RuntimeHelpers.RunClassConstructor(type.TypeHandle);
}
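A sketch of the usual fix: pass a statically known type via typeof so the trimmer can preserve its static constructor. WithCctor below is a hypothetical example type, not part of the original sample:

```csharp
using System;
using System.Runtime.CompilerServices;

static class WithCctor
{
    public static bool Initialized;
    static WithCctor() => Initialized = true;
}

class Program
{
    public static void RunInitializer()
    {
        // typeof(WithCctor) is statically known, so the trimmer can keep the
        // static constructor and IL2059 is not produced.
        RuntimeHelpers.RunClassConstructor(typeof(WithCctor).TypeHandle);
    }

    static void Main()
    {
        RunInitializer();
        Console.WriteLine(WithCctor.Initialized);
    }
}
```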
IL2060: Call to 'System.Reflection.MethodInfo.MakeGenericMethod' cannot be statically analyzed by the trimmer
Article • 03/11/2022

Cause
A call to MethodInfo.MakeGenericMethod(Type[]) cannot be statically analyzed by the
trimmer.

Rule description
This can either be that the method on which the MakeGenericMethod(Type[]) is called
cannot be statically determined, or that the type parameters to be used for the generic
arguments cannot be statically determined. If the open generic method has
DynamicallyAccessedMembersAttribute annotations on any of its generic parameters,
the trimmer currently cannot validate that the requirements are fulfilled by the calling
method.

Example
C#

class Test
{
    public static void TestGenericMethod<[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicProperties)] T>()
    {
    }

    void TestMethod(MethodInfo unknownMethod)
    {
        // IL2060 Trim analysis: Call to
        // 'System.Reflection.MethodInfo.MakeGenericMethod' can not be
        // statically analyzed. It's not possible to guarantee the availability
        // of requirements of the generic method
        typeof(Test).GetMethod("TestGenericMethod").MakeGenericMethod(new Type[] { typeof(TestType) });

        // IL2060 Trim analysis: Call to
        // 'System.Reflection.MethodInfo.MakeGenericMethod' can not be
        // statically analyzed. It's not possible to guarantee the availability
        // of requirements of the generic method
        unknownMethod.MakeGenericMethod(new Type[] { typeof(TestType) });
    }
}
IL2061: Assembly name passed to method 'Activator.CreateInstance' references an assembly that could not be resolved
Article • 03/11/2022

Cause
A call to CreateInstance had an assembly that could not be resolved.

Example
C#

void TestMethod()
{
    // IL2061 Trim analysis: The assembly name 'NonExistentAssembly' passed
    // to method 'System.Activator.CreateInstance(string, string)' references
    // an assembly which is not available.
    Activator.CreateInstance("NonExistentAssembly", "MyType");
}
IL2062: Value passed to a method parameter annotated with 'DynamicallyAccessedMembersAttribute' cannot be statically determined and may not meet the attribute's requirements
Article • 03/11/2022

Cause
The parameter 'parameter' of method 'method' has a
DynamicallyAccessedMembersAttribute annotation, but the value passed to it can't
be statically analyzed. The trimmer cannot make sure that the requirements
declared by the attribute are met by the argument value.

Example
C#

void NeedsPublicConstructors([DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)] Type type)
{
    // ...
}

void TestMethod(Type[] types)
{
    // IL2062: Value passed to parameter 'type' of method
    // 'NeedsPublicConstructors' can not be statically determined and may not
    // meet 'DynamicallyAccessedMembersAttribute' requirements.
    NeedsPublicConstructors(types[1]);
}
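Array elements cannot carry DynamicallyAccessedMembers annotations, so one common fix, sketched below, is to pass a statically known type instead of an array element. typeof(object) is used only as an illustration of a statically known value:

```csharp
using System;
using System.Diagnostics.CodeAnalysis;

class Program
{
    public static int NeedsPublicConstructors(
        [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)] Type type)
    {
        // The annotation guarantees public constructors are preserved.
        return type.GetConstructors().Length;
    }

    static void Main()
    {
        // typeof(object) is statically known, so the analyzer can prove the
        // requirement is satisfied and IL2062 is not produced.
        Console.WriteLine(NeedsPublicConstructors(typeof(object)));
    }
}
```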
IL2063: Value returned from a method whose return type is annotated with 'DynamicallyAccessedMembersAttribute' cannot be statically determined and may not meet the attribute's requirements
Article • 03/11/2022

Cause
The return value of method 'method' has a DynamicallyAccessedMembersAttribute
annotation, but the value returned from the method cannot be statically
analyzed. The trimmer cannot make sure that the requirements declared by the
attribute are met by the returned value.

Example
C#

[return: DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)]
Type TestMethod(Type[] types)
{
    // IL2063 Trim analysis: Value returned from method 'TestMethod' can not
    // be statically determined and may not meet
    // 'DynamicallyAccessedMembersAttribute' requirements.
    return types[1];
}
IL2064: Value assigned to a field annotated with 'DynamicallyAccessedMembersAttribute' cannot be statically determined and may not meet the attribute's requirements
Article • 03/11/2022

Cause
The field 'field' has a DynamicallyAccessedMembersAttribute annotation, but the
value assigned to it cannot be statically analyzed. The trimmer cannot make sure
that the requirements declared by the attribute are met by the assigned value.

Example
C#

[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)]
Type _typeField;

void TestMethod(Type[] types)
{
    // IL2064 Trim analysis: Value assigned to field '_typeField' can not be
    // statically determined and may not meet
    // 'DynamicallyAccessedMembersAttribute' requirements.
    _typeField = types[1];
}
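A sketch of one fix: annotate the method parameter whose value flows into the field, so the requirement is visible to the analyzer at every assignment. TypeHolder and PublicConstructorCount below are illustrative names added to make the snippet runnable:

```csharp
using System;
using System.Diagnostics.CodeAnalysis;

class TypeHolder
{
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)]
    Type _typeField = typeof(object);

    public void TestMethod(
        [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)] Type type)
    {
        // The parameter now declares the same requirements as the field,
        // so the assignment satisfies the annotation and IL2064 goes away.
        _typeField = type;
    }

    public int PublicConstructorCount => _typeField.GetConstructors().Length;
}

class Program
{
    static void Main()
    {
        var holder = new TypeHolder();
        holder.TestMethod(typeof(object));
        Console.WriteLine(holder.PublicConstructorCount);
    }
}
```

As with IL2067, this moves the obligation to the callers of TestMethod, which must in turn supply an appropriately annotated or statically known value.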
IL2065: Value passed to the implicit this parameter of a method annotated with 'DynamicallyAccessedMembersAttribute' cannot be statically determined and may not meet the attribute's requirements
Article • 03/11/2022

Cause
The method 'method' has a DynamicallyAccessedMembersAttribute annotation (which
applies to the implicit this parameter), but the value used for the this
parameter cannot be statically analyzed. The trimmer cannot make sure that the
requirements declared by the attribute are met by the this value.

Example
C#

void TestMethod(Type[] types)
{
    // IL2065 Trim analysis: Value passed to implicit 'this' parameter of
    // method 'Type.GetMethods()' can not be statically determined and may not
    // meet 'DynamicallyAccessedMembersAttribute' requirements.
    types[1].GetMethods(); // Type.GetMethods has [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)] attribute
}
IL2066: Type passed to generic parameter 'parameter' of 'type' (or 'method') cannot be statically determined and may not meet 'DynamicallyAccessedMembersAttribute' requirements
Article • 03/11/2022

Cause
The generic parameter 'parameter' of 'type' (or 'method') is annotated with
DynamicallyAccessedMembersAttribute, but the value used for it cannot be
statically analyzed. The trimmer cannot make sure that the requirements declared
on the attribute are met by the value.

Example
C#

static void MethodWithUnresolvedGenericArgument<[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)] T>()
{
}

// IL2066: TestMethod(): Type passed to generic parameter 'T' of
// 'TypeWithUnresolvedGenericArgument<T>' can not be statically determined and
// may not meet 'DynamicallyAccessedMembersAttribute' requirements.
// IL2066: TestMethod(): Type passed to generic parameter 'T' of
// 'MethodWithUnresolvedGenericArgument<T>()' can not be statically determined
// and may not meet 'DynamicallyAccessedMembersAttribute' requirements.
static void TestMethod()
{
    var _ = new TypeWithUnresolvedGenericArgument<Dependencies.UnresolvedType>();
    MethodWithUnresolvedGenericArgument<Dependencies.UnresolvedType>();
}
IL2067: 'target parameter' argument does not satisfy 'DynamicallyAccessedMembersAttribute' in call to 'target method'. The parameter 'source parameter' of method 'source method' does not have matching annotations. The source value must declare at least the same requirements as those declared on the target location it is assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target if necessary.

Example
C#

void NeedsPublicConstructors([DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)] Type type)
{
    // ...
}

void TestMethod(Type type)
{
    // IL2067 Trim analysis: 'type' argument does not satisfy
    // 'DynamicallyAccessedMembersAttribute' in call to
    // 'NeedsPublicConstructors'. The parameter 'type' of method 'TestMethod'
    // does not have matching annotations. The source value must declare at
    // least the same requirements as those declared on the target location it
    // is assigned to.
    NeedsPublicConstructors(type);
}

Fixing
See Fixing Warnings for guidance.
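In this case the guidance usually boils down to flowing the annotation through: annotating the source parameter with the same (or a stronger) requirement makes the call site valid and moves the obligation to the callers. A minimal runnable sketch based on the example above (the int return value is an illustrative addition):

```csharp
using System;
using System.Diagnostics.CodeAnalysis;

class Program
{
    static int NeedsPublicConstructors(
        [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)] Type type)
    {
        return type.GetConstructors().Length;
    }

    // The parameter now declares the same requirement, so the inner call
    // satisfies IL2067; the obligation moves to this method's callers.
    public static int TestMethod(
        [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)] Type type)
    {
        return NeedsPublicConstructors(type);
    }

    static void Main()
    {
        Console.WriteLine(TestMethod(typeof(object)));
    }
}
```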
IL2068: 'target method' method return value does not satisfy 'DynamicallyAccessedMembersAttribute' requirements. The parameter 'source parameter' of method 'source method' does not have matching annotations. The source value must declare at least the same requirements as those declared on the target location it is assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.

Example
C#

[return: DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)]
Type TestMethod(Type type)
{
    // IL2068 Trim analysis: 'TestMethod' method return value does not satisfy
    // 'DynamicallyAccessedMembersAttribute' requirements. The parameter 'type'
    // of method 'TestMethod' does not have matching annotations. The source
    // value must declare at least the same requirements as those declared on
    // the target location it is assigned to.
    return type;
}
Fixing
See Fixing Warnings for guidance.
IL2069: Value stored in field 'target field' does not satisfy 'DynamicallyAccessedMembersAttribute' requirements. The parameter 'source parameter' of method 'source method' does not have matching annotations. The source value must declare at least the same requirements as those declared on the target location it is assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.

Example
C#

[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)]
Type _typeField;

void TestMethod(Type type)
{
    // IL2069 Trim analysis: value stored in field '_typeField' does not
    // satisfy 'DynamicallyAccessedMembersAttribute' requirements. The
    // parameter 'type' of method 'TestMethod' does not have matching
    // annotations. The source value must declare at least the same
    // requirements as those declared on the target location it is assigned to.
    _typeField = type;
}

Fixing
See Fixing Warnings for guidance.
IL2070: 'this' argument does not satisfy 'DynamicallyAccessedMembersAttribute' in call to 'target method'. The parameter 'source parameter' of method 'source method' does not have matching annotations. The source value must declare at least the same requirements as those declared on the target location it is assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.

Example
C#

void GenericWithAnnotation<[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.Interfaces)] T>()
{
}

void TestMethod(Type type)
{
    // IL2070 Trim analysis: 'this' argument does not satisfy
    // 'DynamicallyAccessedMemberTypes.Interfaces' in call to
    // 'System.Reflection.MethodInfo.MakeGenericMethod(Type[])'. The parameter
    // 'type' of method 'TestMethod' does not have matching annotations. The
    // source value must declare at least the same requirements as those
    // declared on the target location it is assigned to.
    typeof(AnnotatedGenerics).GetMethod(nameof(GenericWithAnnotation)).MakeGenericMethod(type);
}

Fixing
See Fixing Warnings for guidance.
IL2071: 'target generic parameter' generic argument does not satisfy 'DynamicallyAccessedMembersAttribute' in 'target method or type'. The parameter 'source parameter' of method 'source method' does not have matching annotations. The source value must declare at least the same requirements as those declared on the target location it is assigned to
Article • 08/27/2024

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.

Example
C#

public void GenericWithAnnotation<[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.Interfaces)] T>()
{
}

void TestMethod(Type type)
{
    // IL2071 Trim Analysis: 'T' generic argument does not satisfy
    // 'DynamicallyAccessedMemberTypes.Interfaces' in
    // 'GenericWithAnnotation<T>()'. The parameter 'type' of method
    // 'TestMethod(Type)' does not have matching annotations. The source value
    // must declare at least the same requirements as those declared on the
    // target location it is assigned to.
    typeof(AnnotatedGenerics).GetMethod(nameof(GenericWithAnnotation)).MakeGenericMethod(type);
}

Fixing
See Fixing Warnings for guidance.
IL2072: 'target parameter' argument does not satisfy 'DynamicallyAccessedMembersAttribute' in call to 'target method'. The return value of method 'source method' does not have matching annotations. The source value must declare at least the same requirements as those declared on the target location it is assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.

Example
C#

Type GetCustomType() { return typeof(CustomType); }

void NeedsPublicConstructors([DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)] Type type)
{
    // ...
}

void TestMethod()
{
    // IL2072 Trim analysis: 'type' argument does not satisfy
    // 'DynamicallyAccessedMembersAttribute' in call to
    // 'NeedsPublicConstructors'. The return value of method 'GetCustomType'
    // does not have matching annotations. The source value must declare at
    // least the same requirements as those declared on the target location it
    // is assigned to.
    NeedsPublicConstructors(GetCustomType());
}

Fixing
See Fixing Warnings for guidance.
IL2073: 'target method' method return value does not satisfy 'DynamicallyAccessedMembersAttribute' requirements. The return value of method 'source method' does not have matching annotations. The source value must declare at least the same requirements as those declared on the target location it is assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.

Example
C#

Type GetCustomType() { return typeof(CustomType); }

[return: DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)]
Type TestMethod()
{
    // IL2073 Trim analysis: 'TestMethod' method return value does not satisfy
    // 'DynamicallyAccessedMembersAttribute' requirements. The return value of
    // method 'GetCustomType' does not have matching annotations. The source
    // value must declare at least the same requirements as those declared on
    // the target location it is assigned to.
    return GetCustomType();
}
Fixing
See Fixing Warnings for guidance.
IL2074: Value stored in field 'target field' does not satisfy 'DynamicallyAccessedMembersAttribute' requirements. The return value of method 'source method' does not have matching annotations. The source value must declare at least the same requirements as those declared on the target location it is assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.

Example
C#

Type GetCustomType() { return typeof(CustomType); }

[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)]
Type _typeField;

void TestMethod()
{
    // IL2074 Trim analysis: value stored in field '_typeField' does not
    // satisfy 'DynamicallyAccessedMembersAttribute' requirements. The return
    // value of method 'GetCustomType' does not have matching annotations. The
    // source value must declare at least the same requirements as those
    // declared on the target location it is assigned to.
    _typeField = GetCustomType();
}
Fixing
See Fixing Warnings for guidance.
IL2075: 'this' argument does not satisfy 'DynamicallyAccessedMembersAttribute' in call to 'target method'. The return value of method 'source method' does not have matching annotations. The source value must declare at least the same requirements as those declared on the target location it is assigned to
Article • 08/24/2024

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.

Example
C#

Type GetCustomType() { return typeof(CustomType); }

void TestMethod()
{
    // IL2075 Trim analysis: 'this' argument does not satisfy
    // 'DynamicallyAccessedMembersAttribute' in call to 'GetMethods'. The
    // return value of method 'GetCustomType' does not have matching
    // annotations. The source value must declare at least the same
    // requirements as those declared on the target location it is assigned to.
    GetCustomType().GetMethods(); // Type.GetMethods is annotated with DynamicallyAccessedMemberTypes.PublicMethods
}

To solve this issue, add a DynamicallyAccessedMembersAttribute to the return
value of the method that returns the Type object that you call an annotated
instance method on.
C#

using System.Diagnostics.CodeAnalysis;

[return: DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
Type GetCustomType() { return typeof(CustomType); }

void TestMethod()
{
    // The return value of 'GetCustomType' now declares the same requirements
    // as 'Type.GetMethods', so no warning is produced.
    GetCustomType().GetMethods(); // Type.GetMethods is annotated with DynamicallyAccessedMemberTypes.PublicMethods
}

Another common situation is calling reflection APIs on a Type object returned
by GetType().

C#

void MyMethod(MyType argument)
{
    Type t = argument.GetType();
    // IL2075 Trim analysis: 'this' argument does not satisfy
    // 'DynamicallyAccessedMembersAttribute' in call to 'GetMethods'. The
    // return value of method 'Object.GetType' does not have matching
    // annotations. The source value must declare at least the same
    // requirements as those declared on the target location it is assigned to.
    t.GetMethods();
}

In this scenario, the solution is to annotate the definition of MyType with
DynamicallyAccessedMembersAttribute.

C#

using System.Diagnostics.CodeAnalysis;

[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
class MyType
{
...
}

void MyMethod(MyType argument)
{
Type t = argument.GetType();
    // No IL2075 warning: 'MyType' is annotated, so the Type returned by
    // 'argument.GetType()' carries the PublicMethods requirement.
    t.GetMethods();
}

Applying DynamicallyAccessedMembersAttribute to a class, interface, or struct indicates
to the linker that the specified members may be accessed dynamically on Type instances
returned from calling GetType() on instances of that class, interface, or struct.

Note

Applying DynamicallyAccessedMembersAttribute to a type definition will "root" all
the indicated DynamicallyAccessedMemberTypes on the type and all its derived types
(or implementing types when placed on an interface). This means the members will
be kept, as well as any metadata referenced by the members. Be careful to use the
minimum DynamicallyAccessedMemberTypes required, and apply it on the most
specific type possible.

More information
See Fixing Warnings for more information.
IL2076: 'target generic parameter'
generic argument does not satisfy
'DynamicallyAccessedMembersAttribute
' in 'target method or type'. The return
value of method 'source method' does
not have matching annotations. The
source value must declare at least the
same requirements as those declared on
the target location it is assigned to
Article • 08/27/2024

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.

Example
C#

public void
GenericWithAnnotation<[DynamicallyAccessedMembers(DynamicallyAccessedMemberT
ypes.Interfaces)] T>()
{
}

Type GetType() => typeof(int);

void TestMethod()
{
// IL2076 Trim Analysis: AnnotatedGenerics.TestMethod(Type): 'T' generic
argument does not satisfy 'DynamicallyAccessedMemberTypes.Interfaces' in
'GenericWithAnnotation<T>()'. The return value of method 'GetType()' does
not have matching annotations. The source value must declare at least the
same requirements as those declared on the target location it is assigned to
typeof(AnnotatedGenerics).GetMethod(nameof(GenericWithAnnotation)).MakeGener
icMethod(GetType());
}

Fixing
See Fixing Warnings for guidance.
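One possible fix, sketched below, is to annotate the return value of the method that produces the Type so it declares the same requirement as the generic parameter. The method is renamed to GetInterfaceType here (an assumption, to avoid hiding Object.GetType); the class name AnnotatedGenerics is taken from the warning message above.

```csharp
using System;
using System.Diagnostics.CodeAnalysis;
using System.Reflection;

class AnnotatedGenerics
{
    public void GenericWithAnnotation<[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.Interfaces)] T>()
    {
    }

    // The return value now declares the Interfaces requirement, matching the
    // annotation on the generic parameter 'T'.
    [return: DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.Interfaces)]
    Type GetInterfaceType() => typeof(int);

    void TestMethod()
    {
        // No IL2076 warning: the source and target annotations match.
        typeof(AnnotatedGenerics).GetMethod(nameof(GenericWithAnnotation)).MakeGenericMethod(GetInterfaceType());
    }
}
```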
IL2077: 'target parameter' argument
does not satisfy
'DynamicallyAccessedMembersAttribute
' in call to 'target method'. The field
'source field' does not have matching
annotations. The source value must
declare at least the same requirements
as those declared on the target location
it is assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.

Example
C#

void
NeedsPublicConstructors([DynamicallyAccessedMembers(DynamicallyAccessedMembe
rTypes.PublicConstructors)] Type type)
{
// ...
}

Type _typeField;

void TestMethod()
{
// IL2077 Trim analysis: 'type' argument does not satisfy
'DynamicallyAccessedMembersAttribute' in call to 'NeedsPublicConstructors'.
The field '_typeField' does not have matching annotations. The source value
must declare at least the same requirements as those declared on the target
location it is assigned to.
NeedsPublicConstructors(_typeField);
}

Fixing
See Fixing Warnings for guidance.
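For this example, one possible fix (a sketch, not the only option) is to annotate the field with the same requirement as the parameter; any value assigned to the field must then also satisfy PublicConstructors.

```csharp
using System;
using System.Diagnostics.CodeAnalysis;

class Example
{
    void NeedsPublicConstructors([DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)] Type type)
    {
        // ...
    }

    // The field declares the same requirement as the parameter it is passed to.
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)]
    Type _typeField;

    void TestMethod()
    {
        // No IL2077 warning: the field's annotation matches the parameter's.
        NeedsPublicConstructors(_typeField);
    }
}
```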
IL2078: 'target method' method return
value does not satisfy
'DynamicallyAccessedMembersAttribute
' requirements. The field 'source field'
does not have matching annotations.
The source value must declare at least
the same requirements as those
declared on the target location it is
assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.

Example
C#

Type _typeField;

[return:
DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors
)]
Type TestMethod()
{
// IL2078 Trim analysis: 'TestMethod' method return value does not
satisfy 'DynamicallyAccessedMembersAttribute' requirements. The field
'_typeField' does not have matching annotations. The source value must
declare at least the same requirements as those declared on the target
location it is assigned to.
return _typeField;
}
Fixing
See Fixing Warnings for guidance.
IL2079: Value stored in field 'target field'
does not satisfy
'DynamicallyAccessedMembersAttribute
' requirements. The field 'source field'
does not have matching annotations.
The source value must declare at least
the same requirements as those
declared on the target location it is
assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.

Example
C#

Type _typeField;

[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructor
s)]
Type _typeFieldWithRequirements;

void TestMethod()
{
// IL2079 Trim analysis: value stored in field
'_typeFieldWithRequirements' does not satisfy
'DynamicallyAccessedMembersAttribute' requirements. The field '_typeField'
does not have matching annotations. The source value must declare at least
the same requirements as those declared on the target location it is
assigned to.
_typeFieldWithRequirements = _typeField;
}

Fixing
See Fixing Warnings for guidance.
IL2080: 'this' argument does not satisfy
'DynamicallyAccessedMembersAttribute
' in call to 'target method'. The field
'source field' does not have matching
annotations. The source value must
declare at least the same requirements
as those declared on the target location
it is assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.

Example
C#

[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructor
s)]
Type _typeFieldWithRequirements;

void TestMethod()
{
// IL2080 Trim analysis: 'this' argument does not satisfy
'DynamicallyAccessedMemberTypes' in call to 'GetMethod'. The field
'_typeFieldWithRequirements' does not have matching annotations. The source
value must declare at least the same requirements as those declared on the
target location it is assigned to.
_typeFieldWithRequirements.GetMethod("Foo");
}
Fixing
See Fixing Warnings for guidance.
IL2081: 'target generic parameter'
generic argument does not satisfy
'DynamicallyAccessedMembersAttribute
' in 'target method or type'. The field
'source field' does not have matching
annotations. The source value must
declare at least the same requirements
as those declared on the target location
it is assigned to
Article • 08/27/2024

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.

Example
C#

public void
GenericWithAnnotation<[DynamicallyAccessedMembers(DynamicallyAccessedMemberT
ypes.Interfaces)] T>()
{
}

Type typeField;

void TestMethod()
{
// IL2081 Trim Analysis: 'T' generic argument does not satisfy
'DynamicallyAccessedMemberTypes.Interfaces' in 'GenericWithAnnotation<T>()'.
The field 'typeField' does not have matching annotations. The source value
must declare at least the same requirements as those declared on the target
location it is assigned to.
typeof(AnnotatedGenerics).GetMethod(nameof(GenericWithAnnotation)).MakeGener
icMethod(typeField);
}

Fixing
See Fixing Warnings for guidance.
IL2082: 'target parameter' argument
does not satisfy
'DynamicallyAccessedMembersAttribute
' in call to 'target method'. The implicit
'this' argument of method 'source
method' does not have matching
annotations. The source value must
declare at least the same requirements
as those declared on the target location
it is assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.

Example
C#

void
NeedsPublicConstructors([DynamicallyAccessedMembers(DynamicallyAccessedMembe
rTypes.PublicConstructors)] Type type)
{
// ...
}

// This can only happen within methods of System.Type type (or derived
types). Assume the below method is declared on System.Type
void TestMethod()
{
// IL2082 Trim analysis: 'type' argument does not satisfy
'DynamicallyAccessedMembersAttribute' in call to 'NeedsPublicConstructors'.
The implicit 'this' argument of method 'TestMethod' does not have matching
annotations. The source value must declare at least the same requirements as
those declared on the target location it is assigned to.
NeedsPublicConstructors(this);
}

Fixing
See Fixing Warnings for guidance.
IL2083: 'target method' method return
value does not satisfy
'DynamicallyAccessedMembersAttribute
' requirements. The implicit 'this'
argument of method 'source method'
does not have matching annotations.
The source value must declare at least
the same requirements as those
declared on the target location it is
assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.

Example
C#

// This can only happen within methods of System.Type type (or derived
types). Assume the below method is declared on System.Type
[DynamicallyAccessedMembers (DynamicallyAccessedMemberTypes.PublicMethods)]
[return:
DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors
)]
Type TestMethod()
{
// IL2083 Trim analysis: 'TestMethod' method return value does not
satisfy 'DynamicallyAccessedMembersAttribute' requirements. The implicit
'this' argument of method 'TestMethod' does not have matching annotations.
The source value must declare at least the same requirements as those
declared on the target location it is assigned to.
return this;
}

Fixing
See Fixing Warnings for guidance.
IL2084: Value stored in field 'target field'
does not satisfy
'DynamicallyAccessedMembersAttribute
' requirements. The implicit 'this'
argument of method 'source method'
does not have matching annotations.
The source value must declare at least
the same requirements as those
declared on the target location it is
assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.

Example
C#

[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructor
s)]
Type _typeFieldWithRequirements;

// This can only happen within methods of System.Type type (or derived
types). Assume the below method is declared on System.Type
void TestMethod()
{
// IL2084 Trim analysis: value stored in field
'_typeFieldWithRequirements' does not satisfy
'DynamicallyAccessedMembersAttribute' requirements. The implicit 'this'
argument of method 'TestMethod' does not have matching annotations. The
source value must declare at least the same requirements as those declared
on the target location it is assigned to.
_typeFieldWithRequirements = this;
}

Fixing
See Fixing Warnings for guidance.
IL2085: 'this' argument does not satisfy
'DynamicallyAccessedMembersAttribute
' in call to 'target method'. The implicit
'this' argument of method 'source
method' does not have matching
annotations. The source value must
declare at least the same requirements
as those declared on the target location
it is assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.

Example
C#

// This can only happen within methods of System.Type type (or derived
types). Assume the below method is declared on System.Type
void TestMethod()
{
// IL2085 Trim analysis: 'this' argument does not satisfy
'DynamicallyAccessedMembersAttribute' in call to 'GetMethods'. The implicit
'this' argument of method 'TestMethod' does not have matching annotations.
The source value must declare at least the same requirements as those
declared on the target location it is assigned to.
this.GetMethods(); // Type.GetMethods is annotated with
DynamicallyAccessedMemberTypes.PublicMethods
}
Fixing
See Fixing Warnings for guidance.
IL2087: 'target parameter' argument
does not satisfy
'DynamicallyAccessedMembersAttribute
' in call to 'target method'. The generic
parameter 'source generic parameter' of
'source method or type' does not have
matching annotations. The source value
must declare at least the same
requirements as those declared on the
target location it is assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.

Example
C#

void
NeedsPublicConstructors([DynamicallyAccessedMembers(DynamicallyAccessedMembe
rTypes.PublicConstructors)] Type type)
{
// ...
}

void TestMethod<TSource>()
{
// IL2087 Trim analysis: 'type' argument does not satisfy
'DynamicallyAccessedMembersAttribute' in call to 'NeedsPublicConstructor'.
The generic parameter 'TSource' of 'TestMethod' does not have matching
annotations. The source value must declare at least the same requirements as
those declared on the target location it is assigned to.
NeedsPublicConstructors(typeof(TSource));
}

Fixing
See Fixing Warnings for guidance.
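For this example, one possible fix (sketched below) is to annotate the generic parameter TSource so the requirement flows to every instantiation of TestMethod.

```csharp
using System;
using System.Diagnostics.CodeAnalysis;

class Example
{
    void NeedsPublicConstructors([DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)] Type type)
    {
        // ...
    }

    // 'TSource' now carries the PublicConstructors requirement, so callers of
    // TestMethod must supply a type argument that satisfies it.
    void TestMethod<[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors)] TSource>()
    {
        // No IL2087 warning: typeof(TSource) satisfies the parameter's annotation.
        NeedsPublicConstructors(typeof(TSource));
    }
}
```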
IL2088: 'target method' method return
value does not satisfy
'DynamicallyAccessedMembersAttribute
' requirements. The generic parameter
'source generic parameter' of 'source
method or type' does not have
matching annotations. The source value
must declare at least the same
requirements as those declared on the
target location it is assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.

Example
C#

[DynamicallyAccessedMembers (DynamicallyAccessedMemberTypes.PublicMethods)]
[return:
DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors
)]
Type TestMethod<TSource>()
{
// IL2088 Trim analysis: 'TestMethod' method return value does not
satisfy 'DynamicallyAccessedMembersAttribute' requirements. The generic
parameter 'TSource' of 'TestMethod' does not have matching annotations. The
source value must declare at least the same requirements as those declared
on the target location it is assigned to.
return typeof(TSource);
}

Fixing
See Fixing Warnings for guidance.
IL2089: Value stored in field 'target field'
does not satisfy
'DynamicallyAccessedMembersAttribute
' requirements. The generic parameter
'source generic parameter' of 'source
method or type' does not have
matching annotations. The source value
must declare at least the same
requirements as those declared on the
target location it is assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.

Example
C#

[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructor
s)]
Type _typeFieldWithRequirements;

void TestMethod<TSource>()
{
// IL2089 Trim analysis: value stored in field
'_typeFieldWithRequirements' does not satisfy
'DynamicallyAccessedMembersAttribute' requirements. The generic parameter
'TSource' of 'TestMethod' does not have matching annotations. The source
value must declare at least the same requirements as those declared on the
target location it is assigned to.
_typeFieldWithRequirements = typeof(TSource);
}

Fixing
See Fixing Warnings for guidance.
IL2090: 'this' argument does not satisfy
'DynamicallyAccessedMembersAttribute
' in call to 'target method'. The generic
parameter 'source generic parameter' of
'source method or type' does not have
matching annotations. The source value
must declare at least the same
requirements as those declared on the
target location it is assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.

Example
C#

void TestMethod<TSource>()
{
// IL2090 Trim analysis: 'this' argument does not satisfy
'DynamicallyAccessedMembersAttribute' in call to 'GetMethods'. The generic
parameter 'TSource' of 'TestMethod' does not have matching annotations. The
source value must declare at least the same requirements as those declared
on the target location it is assigned to.
typeof(TSource).GetMethods(); // Type.GetMethods is annotated with
DynamicallyAccessedMemberTypes.PublicMethods
}

Fixing
See Fixing Warnings for guidance.
IL2091: 'target generic parameter'
generic argument does not satisfy
'DynamicallyAccessedMembersAttribute
' in 'target method or type'. The generic
parameter 'source target parameter' of
'source method or type' does not have
matching annotations. The source value
must declare at least the same
requirements as those declared on the
target location it is assigned to
Article • 09/16/2022

Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.

Example
C#

void
NeedsPublicConstructors<[DynamicallyAccessedMembers(DynamicallyAccessedMembe
rTypes.PublicConstructors)] TTarget>()
{
// ...
}

void TestMethod<TSource>()
{
// IL2091 Trim analysis: 'TTarget' generic argument does not satisfy
'DynamicallyAccessedMembersAttribute' in 'NeedsPublicConstructors'. The
generic parameter 'TSource' of 'TestMethod' does not have matching
annotations. The source value must declare at least the same requirements as
those declared on the target location it is assigned to.
NeedsPublicConstructors<TSource>();
}

Fixing
See Fixing Warnings for guidance.
IL2092: The
'DynamicallyAccessedMemberTypes'
value used in a
'DynamicallyAccessedMembersAttribute
' annotation on a method's parameter
does not match the
'DynamicallyAccessedMemberTypes'
value of the overridden parameter
annotation. All overridden members
must have the same attribute's usage
Article • 03/11/2022

Cause
All overrides of a virtual method, including the base method, must have the same
DynamicallyAccessedMembersAttribute usage on all their components (return value,
parameters, and generic parameters).

Example
C#

public class Base
{
public virtual void
TestMethod([DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.Public
Methods)] Type type) {}
}

public class Derived : Base
{
// IL2092: 'DynamicallyAccessedMemberTypes' in
'DynamicallyAccessedMembersAttribute' on the parameter 'type' of method
'Derived.TestMethod' don't match overridden parameter 'type' of method
'Base.TestMethod'. All overridden members must have the same
'DynamicallyAccessedMembersAttribute' usage.
public override void
TestMethod([DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.Public
Fields)] Type type) {}
}
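To fix the warning, make the override use the same annotation as the base method, for example:

```csharp
using System;
using System.Diagnostics.CodeAnalysis;

public class Base
{
    public virtual void TestMethod([DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)] Type type) { }
}

public class Derived : Base
{
    // The parameter annotation matches Base.TestMethod, so no IL2092 is reported.
    public override void TestMethod([DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)] Type type) { }
}
```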
IL2093: The
'DynamicallyAccessedMemberTypes'
value used in a
'DynamicallyAccessedMembersAttribute
' annotation on a method's return type
does not match the
'DynamicallyAccessedMemberTypes'
value of the overridden return type
annotation. All overridden members
must have the same attribute's usage
Article • 03/11/2022

Cause
All overrides of a virtual method including the base method must have the same
DynamicallyAccessedMembersAttribute usage on all its components (return value,
parameters and generic parameters).

Example
C#

public class Base
{
[return:
DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
public virtual Type TestMethod() {}
}

public class Derived : Base
{
// IL2093: 'DynamicallyAccessedMemberTypes' in
'DynamicallyAccessedMembersAttribute' on the return value of method
'Derived.TestMethod' don't match overridden return value of method
'Base.TestMethod'. All overridden members must have the same
'DynamicallyAccessedMembersAttribute' usage.
[return:
DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicFields)]
public override Type TestMethod() {}
}
IL2094: The
'DynamicallyAccessedMemberTypes'
value used in a
'DynamicallyAccessedMembersAttribute
' annotation on a method's implicit this
parameter does not match the
'DynamicallyAccessedMemberTypes'
value of the overridden this parameter
annotation. All overridden members
must have the same attribute's usage
Article • 03/11/2022

Cause
All overrides of a virtual method including the base method must have the same
DynamicallyAccessedMembersAttribute usage on all its components (return value,
parameters and generic parameters).

Example
C#

// This only works on methods in System.Type and derived classes - this is
// just an example
public class Type
{
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
public virtual void TestMethod() {}
}

public class DerivedType : Type
{
// IL2094: 'DynamicallyAccessedMemberTypes' in
'DynamicallyAccessedMembersAttribute' on the implicit 'this' parameter of
method 'DerivedType.TestMethod' don't match overridden implicit 'this'
parameter of method 'Type.TestMethod'. All overridden members must have the
same 'DynamicallyAccessedMembersAttribute' usage.
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicFields)]
public override void TestMethod() {}
}
IL2095: The
'DynamicallyAccessedMemberTypes'
value used in a
'DynamicallyAccessedMembersAttribute
' annotation on a method's generic
parameter does not match the
'DynamicallyAccessedMemberTypes'
value of the overridden generic
parameter annotation. All overridden
members must have the same
attribute's usage
Article • 03/11/2022

Cause
All overrides of a virtual method including the base method must have the same
DynamicallyAccessedMembersAttribute usage on all its components (return value,
parameters and generic parameters).

Example
C#

public class Base
{
public virtual void
TestMethod<[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.Public
Methods)] T>() {}
}

public class Derived : Base
{
// IL2095: 'DynamicallyAccessedMemberTypes' in
'DynamicallyAccessedMembersAttribute' on the generic parameter 'T' of method
'Derived.TestMethod<T>' don't match overridden generic parameter 'T' of
method 'Base.TestMethod<T>'. All overridden members must have the same
'DynamicallyAccessedMembersAttribute' usage.
public override void
TestMethod<[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.Public
Fields)] T>() {}
}
IL2096: Call to
'Type.GetType(System.String,System.Bo
olean,System.Boolean)' can perform
case insensitive lookup of the type.
Currently, Trimmer cannot guarantee
presence of all the matching types"
Article • 03/11/2022

Cause
Specifying a case-insensitive search on an overload of GetType(String, Boolean, Boolean)
is not supported by Trimmer. Specify false to perform a case-sensitive search or use an
overload that does not use an ignoreCase boolean.

Example
C#

void TestMethod()
{
// IL2096 Trim analysis: Call to
'System.Type.GetType(String,Boolean,Boolean)' can perform case insensitive
lookup of the type, currently ILLink can not guarantee presence of all the
matching types
Type.GetType ("typeName", false, true);
}
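To fix the warning, pass false for ignoreCase so the search is case sensitive, or use an overload that has no ignoreCase parameter:

```csharp
using System;

class Example
{
    void TestMethod()
    {
        // Case-sensitive lookup: the trimmer can guarantee the matching type is kept.
        Type.GetType("typeName", throwOnError: false, ignoreCase: false);

        // Alternatively, use an overload without the ignoreCase parameter.
        Type.GetType("typeName");
    }
}
```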
IL2097: Field annotated with
'DynamicallyAccessedMembersAttribute
' is not of type 'System.Type',
'System.String', or derived
Article • 03/11/2022

Cause
DynamicallyAccessedMembersAttribute is only applicable to items of type Type, String,
or derived. On all other types the attribute will be ignored. Using the attribute on any
other type is likely incorrect and unintentional.

Example
C#

// IL2097: Field '_valueField' has 'DynamicallyAccessedMembersAttribute',
// but that attribute can only be applied to fields of type 'System.Type' or
// 'System.String'
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
object _valueField;
IL2098: Method's parameter annotated
with
'DynamicallyAccessedMembersAttribute
' is not of type 'System.Type',
'System.String', or derived
Article • 03/11/2022

Cause
DynamicallyAccessedMembersAttribute is only applicable to items of type Type, String,
or derived. On all other types the attribute will be ignored. Using the attribute on any
other type is likely incorrect and unintentional.

Example
C#

// IL2098: Parameter 'param' of method 'TestMethod' has
// 'DynamicallyAccessedMembersAttribute', but that attribute can only be
// applied to parameters of type 'System.Type' or 'System.String'
void
TestMethod([DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.Public
Methods)] object param)
{
// param.GetType()....
}
IL2099: Property annotated with
'DynamicallyAccessedMembersAttribute
' is not of type 'System.Type;,
'System.String', or derived
Article • 03/11/2022

Cause
DynamicallyAccessedMembersAttribute is only applicable to items of type Type, String,
or derived. On all other types the attribute will be ignored. Using the attribute on any
other type is likely incorrect and unintentional.

Example
C#

// IL2099: Property 'TestProperty' has 'DynamicallyAccessedMembersAttribute',
// but that attribute can only be applied to properties of type 'System.Type'
// or 'System.String'
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
object TestProperty { get; set; }
IL2100: Trimmer XML contains
unsupported wildcard on argument
fullname of an assembly element
Article • 03/11/2022

Cause
A wildcard cannot be the value of the fullname argument for an assembly element in a
Trimmer XML. Use a specific assembly name instead.

Example
XML

<!-- IL2100: XML contains unsupported wildcard for assembly "fullname" attribute -->
<linker>
<assembly fullname="*">
<type fullname="MyType" />
</assembly>
</linker>
IL2101: Assembly's embedded XML
references a different assembly in
fullname argument of an assembly
element
Article • 03/11/2022

Cause
Embedded attribute or substitution XML may only contain elements that apply to the
containing assembly. Attempting to modify another assembly will not have any effect.

Example
XML

<!-- IL2101: Embedded XML in assembly 'ContainingAssembly' contains assembly "fullname" attribute for another assembly 'OtherAssembly' -->
<linker>
<assembly fullname="OtherAssembly">
<type fullname="MyType" />
</assembly>
</linker>
IL2102: Invalid
'Reflection.AssemblyMetadataAttribute'
attribute found in assembly. Value must
be True
Article • 03/11/2022

Cause
AssemblyMetadataAttribute may be used at the assembly level to turn on trimming for
the assembly. The attribute contains an unsupported value. The only supported value is
True.

Example
C#

// IL2102: Invalid 'AssemblyMetadataAttribute' found in assembly 'MyAssembly'. Value must be "True"
[assembly: System.Reflection.AssemblyMetadata("IsTrimmable", "False")]
IL2103: Value passed to the
'propertyAccessor' parameter of
method
'System.Linq.Expressions.Expression.Pro
perty(Expression, MethodInfo)' cannot
be statically determined as a property
accessor
Article • 03/11/2022

Cause
The value passed to the propertyAccessor parameter of Property(Expression,
MethodInfo) was not recognized as a property accessor method. Trimmer can't
guarantee the presence of the property.

Example
C#

void TestMethod(MethodInfo methodInfo)
{
// IL2103: Value passed to the 'propertyAccessor' parameter of method
'System.Linq.Expressions.Expression.Property(Expression, MethodInfo)' cannot
be statically determined as a property accessor.
Expression.Property(null, methodInfo);
}
IL2104: Assembly produced trim
warnings
Article • 09/06/2023

Cause
The assembly 'assembly' produced trim analysis warnings in the context of the app. This
means the assembly has not been fully annotated for trimming. Consider contacting the
library author to request they add trim annotations to the library. To see detailed
warnings, turn off grouped warnings by setting
<TrimmerSingleWarn>false</TrimmerSingleWarn> property in your project file. For
more information on annotating libraries for trimming, see Prepare .NET libraries for
trimming.

IL2105: Type 'type' was not found in the
caller assembly nor in the base library.
Type name strings used for dynamically
accessing a type should be assembly
qualified
Article • 03/11/2022

Cause
Type name strings representing dynamically accessed types must be assembly qualified.
Otherwise, the linker will first search for the type name in the caller's assembly and then in
System.Private.CoreLib. If the linker fails to resolve the type name, null is returned.

Example
C#

void TestInvalidTypeName()
{
RequirePublicMethodOnAType("Foo.Unqualified.TypeName");
}

void RequirePublicMethodOnAType(
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
    string typeName)
{
}
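To fix the warning, pass an assembly-qualified type name so the trimmer can resolve the type without guessing which assembly defines it. In this sketch, "MyAssembly" is a placeholder for the assembly that actually contains the type.

```csharp
using System;
using System.Diagnostics.CodeAnalysis;

class Example
{
    void TestValidTypeName()
    {
        // Assembly-qualified type name; "MyAssembly" is a placeholder.
        RequirePublicMethodOnAType("Foo.Qualified.TypeName, MyAssembly");
    }

    void RequirePublicMethodOnAType(
        [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
        string typeName)
    {
    }
}
```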
IL2106: Return type of method 'method'
has
'DynamicallyAccessedMembersAttribute
', but that attribute can only be applied
to properties of type 'System.Type' or
'System.String'
Article • 03/11/2022

Cause
DynamicallyAccessedMembersAttribute is only applicable to items of type Type or String
(or derived). On all other types, the attribute will be ignored. Using the attribute on any
other type is likely incorrect and unintentional.

Example
C#

// IL2106: Return type of method 'TestMethod' has
// 'DynamicallyAccessedMembersAttribute', but that attribute can only be
// applied to properties of type 'System.Type' or 'System.String'
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
object TestMethod()
{
return typeof(TestType);
}
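One way to resolve the warning, sketched below rather than taken from the article, is to change the return type to System.Type so the annotation becomes applicable:

```csharp
using System;
using System.Diagnostics.CodeAnalysis;

public class TestType { }

public static class Demo
{
    // Fix sketch: with a System.Type return value the annotation is applicable,
    // and the trimmer preserves public methods of the returned type.
    [return: DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
    public static Type TestMethod() => typeof(TestType);
}
```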
IL2107: Methods 'method1' and 'method2' are both associated with state machine type 'type'. This is currently unsupported and may lead to incorrectly reported warnings
Article • 03/11/2022
Cause
The trimmer cannot correctly handle cases where the same compiler-generated state
machine type is associated (via the state-machine attributes) with two different
methods. The trimmer derives warning suppressions from the method that produced
the state machine and doesn't support reprocessing the same method or type more
than once.

Example
C#

class <compiler_generated_state_machine>_type {
    void MoveNext()
    {
        // This should normally produce IL2026
        CallSomethingWhichRequiresUnreferencedCode();
    }
}

[RequiresUnreferencedCode("")] // This should suppress all warnings from the method
[IteratorStateMachine(typeof(<compiler_generated_state_machine>_type))]
IEnumerable<int> UserDefinedMethod()
{
    // Uses the state machine type
    // The IL2026 from the state machine should be suppressed in this case
}

// IL2107: Methods 'UserDefinedMethod' and 'SecondUserDefinedMethod' are
// both associated with state machine type
// '<compiler_generated_state_machine>_type'.
[IteratorStateMachine(typeof(<compiler_generated_state_machine>_type))]
IEnumerable<int> SecondUserDefinedMethod()
{
    // Uses the state machine type
    // The IL2026 from the state machine should be reported in this case
}
IL2108: Invalid scope 'scope' used in 'UnconditionalSuppressMessageAttribute' on module 'module' with target 'target'
Article • 03/11/2022
Cause
The only scopes supported on global unconditional suppressions are module, type, and
member. If the scope and target arguments are null or missing on a global suppression,
it's assumed that the suppression is put on the module. Global unconditional
suppressions using invalid scopes are ignored.

Example
C#

// IL2108: Invalid scope 'method' used in
// 'UnconditionalSuppressMessageAttribute' on module 'Warning' with target
// 'MyTarget'.
[module: UnconditionalSuppressMessage("Test suppression with invalid scope", "IL2026", Scope = "method", Target = "MyTarget")]

class Warning
{
    static void Main(string[] args)
    {
        Foo();
    }

    [RequiresUnreferencedCode("Warn when Foo() is called")]
    static void Foo()
    {
    }
}
IL2109: Type derives from base type that has 'RequiresUnreferencedCodeAttribute'
Article • 04/12/2022
Cause
A type is referenced in code, and this type derives from a base type with
RequiresUnreferencedCodeAttribute, which can break functionality of a trimmed
application. Types that derive from a base class with
RequiresUnreferencedCodeAttribute need to explicitly use the
RequiresUnreferencedCodeAttribute or suppress this warning.

Example
C#

[RequiresUnreferencedCode("Using any of the members inside this class is trim unsafe", Url = "http://help/unreferencedcode")]
public class UnsafeClass {
    public UnsafeClass() {}
    public static void UnsafeMethod();
}

// IL2109: Type 'Derived' derives from 'UnsafeClass' which has
// 'RequiresUnreferencedCodeAttribute'. Using any of the members inside this
// class is trim unsafe. http://help/unreferencedcode
class Derived : UnsafeClass {}
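A sketch of the fix mentioned in the cause, using illustrative class names: propagate the annotation to the derived type so callers of the derived type are warned instead. Note that RequiresUnreferencedCode on class declarations requires .NET 7 or later.

```csharp
using System;
using System.Diagnostics.CodeAnalysis;

[RequiresUnreferencedCode("Using any of the members inside this class is trim unsafe")]
public class UnsafeBase { }

// Fix sketch: the derived type carries the annotation explicitly,
// so IL2109 isn't reported on the type definition.
[RequiresUnreferencedCode("Derives from UnsafeBase, which is trim unsafe")]
public class Derived : UnsafeBase { }
```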
IL2110: Field with 'DynamicallyAccessedMembersAttribute' is accessed via reflection. Trimmer cannot guarantee availability of the requirements of the field
Article • 03/11/2022
Cause
The trimmer can't guarantee that all requirements of the
DynamicallyAccessedMembersAttribute are fulfilled if the field is accessed via reflection.

Example
C#

[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
Type _field;

void TestMethod()
{
    // IL2110: Field '_field' with 'DynamicallyAccessedMembersAttribute' is
    // accessed via reflection. Trimmer can't guarantee availability of the
    // requirements of the field.
    typeof(Test).GetField("_field");
}
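One way to avoid the warning, sketched here with illustrative names, is to reference the annotated field directly instead of looking it up through reflection, so the trimmer can see the access and track the annotation statically:

```csharp
using System;
using System.Diagnostics.CodeAnalysis;

public class Test
{
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
    public static Type Field = typeof(string);

    public static void Main()
    {
        // Direct access instead of typeof(Test).GetField("Field"):
        // the trimmer sees this use and can honor the annotation.
        Console.WriteLine(Test.Field.Name);
    }
}
```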
IL2111: Method with parameters or return value with 'DynamicallyAccessedMembersAttribute' is accessed via reflection. Trimmer cannot guarantee availability of the requirements of the method
Article • 07/02/2024
Cause
The trimmer can't guarantee that all requirements of the
DynamicallyAccessedMembersAttribute are fulfilled if the method is accessed via
reflection.

Example
This warning can be caused by directly accessing a method with a
DynamicallyAccessedMembersAttribute on its parameters or return type.

C#

void MethodWithRequirements(
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)] Type type)
{
}

void TestMethod()
{
    // IL2111: Method 'MethodWithRequirements' with parameters or return
    // value with 'DynamicallyAccessedMembersAttribute' is accessed via reflection.
    // Trimmer can't guarantee availability of the requirements of the method.
    typeof(Test).GetMethod("MethodWithRequirements");
}

This warning can also be caused by passing a type to a field, parameter, argument, or
return value that is annotated with DynamicallyAccessedMembersAttribute.
DynamicallyAccessedMembersAttribute implies reflection access over all of the listed
DynamicallyAccessedMemberTypes. This means that when a type is passed to a
parameter, field, generic parameter, or return value annotated with PublicMethods, .NET
tooling assumes that all public methods are accessed via reflection. If a type that
contains a method with an annotated parameter or return value is passed to a location
annotated with PublicMethods, then IL2111 is raised.

C#

class TypeWithAnnotatedMethod
{
    void MethodWithRequirements(
        [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicFields)] Type type)
    {
    }
}

class OtherType
{
    void AccessMethodViaReflection(
        [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)] Type type)
    {
    }

    void PassTypeToAnnotatedMethod()
    {
        // IL2111: Method 'MethodWithRequirements' with parameters or return
        // value with 'DynamicallyAccessedMembersAttribute' is accessed via reflection.
        // Trimmer can't guarantee availability of the requirements of the method.
        AccessMethodViaReflection(typeof(TypeWithAnnotatedMethod));
    }
}
IL2112: 'DynamicallyAccessedMembersAttribute' on 'type' or one of its base types references 'member', which requires unreferenced code
Article • 03/11/2022
Cause
A type is annotated with DynamicallyAccessedMembersAttribute indicating that the
type may dynamically access some members declared on the type or its derived types.
This instructs the trimmer to keep the specified members, but one of them is annotated
with RequiresUnreferencedCodeAttribute, which can break functionality when trimming.
The DynamicallyAccessedMembersAttribute annotation may be directly on the type, or
implied by an annotation on one of its base or interface types. This warning originates
from the member with RequiresUnreferencedCodeAttribute.

Example
C#

[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
public class AnnotatedType {
    // Trim analysis warning IL2112: AnnotatedType.Method():
    // 'DynamicallyAccessedMembersAttribute' on 'AnnotatedType' or one of its
    // base types references 'AnnotatedType.Method()' which requires
    // unreferenced code. Using this member is trim unsafe.
    [RequiresUnreferencedCode("Using this member is trim unsafe")]
    public static void Method() { }
}
IL2113: 'DynamicallyAccessedMembersAttribute' on 'type' or one of its base types references 'member', which requires unreferenced code
Article • 03/11/2022
Cause
A type is annotated with DynamicallyAccessedMembersAttribute indicating that the type
may dynamically access some members declared on the type or its base types. This
instructs the trimmer to keep the specified members, but a member of one of the base
or interface types is annotated with RequiresUnreferencedCodeAttribute, which can
break functionality when trimming. The DynamicallyAccessedMembersAttribute annotation
may be directly on the type, or implied by an annotation on one of its base or interface
types. This warning originates from the type which has the
DynamicallyAccessedMembersAttribute requirements.

Example
C#

public class BaseType {
    [RequiresUnreferencedCode("Using this member is trim unsafe")]
    public static void Method() { }
}

// Trim analysis warning IL2113: AnnotatedType:
// 'DynamicallyAccessedMembersAttribute' on 'AnnotatedType' or one of its
// base types references 'BaseType.Method()' which requires unreferenced
// code. Using this member is trim unsafe.
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
public class AnnotatedType : BaseType {
}
IL2114: 'DynamicallyAccessedMembersAttribute' on 'type' or one of its base types references 'member' which has 'DynamicallyAccessedMembersAttribute' requirements
Article • 03/11/2022
Cause
A type is annotated with DynamicallyAccessedMembersAttribute indicating that the type
may dynamically access some members declared on the type or its derived types. This
instructs the trimmer to keep the specified members, but one of them is annotated with
DynamicallyAccessedMembersAttribute, which cannot be statically verified. The
DynamicallyAccessedMembersAttribute annotation may be directly on the type, or implied
by an annotation on one of its base or interface types. This warning originates from the
member with the DynamicallyAccessedMembersAttribute requirements.

Example
C#

[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicFields)]
public class AnnotatedType {
    // Trim analysis warning IL2114: System.Type AnnotatedType::Field:
    // 'DynamicallyAccessedMembersAttribute' on 'AnnotatedType' or one of its
    // base types references 'System.Type AnnotatedType::Field' which has
    // 'DynamicallyAccessedMembersAttribute' requirements.
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicProperties)]
    public static Type Field;
}
IL2115: 'DynamicallyAccessedMembersAttribute' on 'type' or one of its base types references 'member' which has 'DynamicallyAccessedMembersAttribute' requirements
Article • 03/11/2022
Cause
A type is annotated with DynamicallyAccessedMembersAttribute indicating that the
type may dynamically access some members declared on one of the derived types. This
instructs the trimmer to keep the specified members, but a member of one of the base
or interface types is annotated with DynamicallyAccessedMembersAttribute which
cannot be statically verified. The DynamicallyAccessedMembersAttribute annotation
may be directly on the type, or implied by an annotation on one of its base or interface
types. This warning originates from the type which has
DynamicallyAccessedMembersAttribute requirements.

Example
C#

public class BaseType {
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicProperties)]
    public static Type Field;
}

// Trim analysis warning IL2115: AnnotatedType:
// 'DynamicallyAccessedMembersAttribute' on 'AnnotatedType' or one of its
// base types references 'System.Type BaseType::Field' which has
// 'DynamicallyAccessedMembersAttribute' requirements.
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicFields)]
public class AnnotatedType : BaseType {
}
IL2116: 'RequiresUnreferencedCodeAttribute' cannot be placed directly on a static constructor. Consider placing it on the type declaration instead
Article • 03/11/2022
Cause
RequiresUnreferencedCodeAttribute is not allowed on static constructors, since they
aren't callable by the user. Placing the attribute directly on a static constructor has no
effect. Annotate the method's containing type instead.

Example
C#

public class MyClass {
    // Trim analysis warning IL2116: 'RequiresUnreferencedCodeAttribute'
    // cannot be placed directly on static constructor 'MyClass..cctor()', consider
    // placing 'RequiresUnreferencedCodeAttribute' on the type declaration instead.
    [RequiresUnreferencedCode("Dangerous")]
    static MyClass() { }
}
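As the message suggests, a sketch of the fix is to move the attribute from the static constructor to the type declaration (class-level RequiresUnreferencedCode requires .NET 7 or later):

```csharp
using System;
using System.Diagnostics.CodeAnalysis;

// Fix sketch: annotating the type covers its static constructor as well,
// so the attribute is no longer placed where it would be ignored.
[RequiresUnreferencedCode("Dangerous")]
public class MyClass
{
    static MyClass() { }
}
```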
IL2117: Methods 'method1' and 'method2' are both associated with lambda or local function 'method'. This is currently unsupported and may lead to incorrectly reported warnings
Article • 09/16/2022
Cause
The trimmer currently can't correctly handle cases where the same compiler-generated
lambda or local function is associated with two different methods. No known C#
patterns cause this problem, but it is possible to write such code in IL.

Example
Only a meta-sample:

C#

[RequiresUnreferencedCode("")] // This should suppress all warnings from the method
void UserDefinedMethod()
{
    // Uses the compiler-generated local function method
    // The IL2026 from the local function should be suppressed in this case
}

// IL2117: Methods 'UserDefinedMethod' and 'SecondUserDefinedMethod' are
// both associated with lambda or local function
// '<UserDefinedMethod>g__LocalFunction|0_0()'.
[RequiresUnreferencedCode("")] // This should suppress all warnings from the method
void SecondUserDefinedMethod()
{
    // Uses the compiler-generated local function method
    // The IL2026 from the local function should be suppressed in this case
}

internal static void <UserDefinedMethod>g__LocalFunction|0_0()
{
    // Compiler-generated method emitted for a local function.
    // This should only ever be called from one user-defined method.
}
IL2122: Type 'type' is not assembly qualified. Type name strings used for dynamically accessing a type should be assembly qualified
Article • 08/27/2024
Cause
Type name strings representing dynamically accessed types must be assembly qualified.
Otherwise, the lookup semantics of Type.GetType will search the assembly with the
Type.GetType callsite and the core library. The assembly with the Type.GetType callsite
may be different than the assembly which passes the type name string to a location with
DynamicallyAccessedMembersAttribute, so the tooling cannot determine which
assemblies to search.

Example
C#

// In Assembly 1
void TestInvalidTypeName()
{
    // IL2122: Type 'MyType' is not assembly qualified. Type name strings
    // used for dynamically accessing a type should be assembly qualified.
    RequirePublicMethodOnAType("MyType");
}

void ForwardTypeNameToAnotherMethod(
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)] string typeName)
{
    MyTypeFromAnotherAssembly.GetType(typeName);
}

C#

// In Assembly 2
public class MyTypeFromAnotherAssembly
{
    void GetTypeAndSearchThroughMethods(
        [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)] string typeName)
    {
        Type.GetType(typeName).GetMethods();
    }
}

class MyType
{
    // ...
}

In a non-trimmed app, at runtime the Type.GetType call will discover MyType in
Assembly 2. However, trimming will remove MyType because the trimming tools don't
have enough information to determine where the type will be found at runtime.

To fix this, consider using a fully-qualified type name instead:

C#

RequirePublicMethodOnAType("MyType,Assembly2");

Another option is to pass the unqualified type name string directly to Type.GetType , and
avoid annotations on string :

C#

var type = Type.GetType("MyType");

void SearchThroughMethods(
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)] Type type)
{
    type.GetMethods();
}

This gives the trimming tools enough information to look up the type using the same
semantics that GetType(String) has at runtime, causing the type (and public methods in
this example) to be preserved.
Native AOT deployment
Article • 09/18/2024

Publishing your app as Native AOT produces an app that's self-contained and that has
been ahead-of-time (AOT) compiled to native code. Native AOT apps have faster startup
time and smaller memory footprints. These apps can run on machines that don't have
the .NET runtime installed.

The benefit of Native AOT is most significant for workloads with a high number of
deployed instances, such as cloud infrastructure and hyper-scale services. .NET 8 adds
ASP.NET Core support for native AOT.

The Native AOT deployment model uses an ahead-of-time compiler to compile IL to
native code at the time of publish. Native AOT apps don't use a just-in-time (JIT)
compiler when the application runs. Native AOT apps can run in restricted environments
where a JIT isn't allowed. Native AOT applications target a specific runtime environment,
such as Linux x64 or Windows x64, just like publishing a self-contained app.

Prerequisites
Windows

Visual Studio 2022 , including the Desktop development with C++ workload with
all default components.

Publish Native AOT using the CLI


1. Add <PublishAot>true</PublishAot> to your project file.

This property enables Native AOT compilation during publish. It also enables
dynamic code-usage analysis during build and editing. It's preferable to put this
setting in the project file rather than passing it on the command line, since it
controls behaviors outside publish.

XML

<PropertyGroup>
<PublishAot>true</PublishAot>
</PropertyGroup>
2. Publish the app for a specific runtime identifier using dotnet publish -r <RID> .

The following example publishes the app for Windows as a Native AOT application
on a machine with the required prerequisites installed.

dotnet publish -r win-x64 -c Release

The following example publishes the app for Linux as a Native AOT application. A
Native AOT binary produced on a Linux machine only works on the same or a newer
Linux version. For example, a Native AOT binary produced on Ubuntu 20.04 runs on
Ubuntu 20.04 and later, but it doesn't run on Ubuntu 18.04.

dotnet publish -r linux-arm64 -c Release

The app is available in the publish directory and contains all the code needed to run it,
including a stripped-down version of the coreclr runtime.

Check out the Native AOT samples available in the dotnet/samples repository on
GitHub. The samples include Linux and Windows Dockerfiles that demonstrate how
to automate installation of prerequisites and publish .NET projects with Native AOT
using containers.

AOT-compatibility analyzers
The IsAotCompatible property is used to indicate whether a library is compatible with
Native AOT. For example, consider a library that sets the IsAotCompatible property to
true:

XML

<PropertyGroup>
<IsAotCompatible>true</IsAotCompatible>
</PropertyGroup>

The preceding configuration assigns a default of true to the following properties:

IsTrimmable
EnableTrimAnalyzer
EnableSingleFileAnalyzer
EnableAotAnalyzer

These analyzers help to ensure that a library is compatible with Native AOT.
Native debug information
By default, Native AOT publishing produces debug information in a separate file:

Linux: .dbg
Windows: .pdb
macOS: .dSYM folder

The debug file is necessary for running the app under the debugger or inspecting crash
dumps. On Unix-like platforms, set the StripSymbols property to false to include the
debug information in the native binary. Including debug information makes the native
binary larger.

XML

<PropertyGroup>
<StripSymbols>false</StripSymbols>
</PropertyGroup>

Limitations of Native AOT deployment

Native AOT apps have the following limitations:

No dynamic loading, for example, Assembly.LoadFile .
No run-time code generation, for example, System.Reflection.Emit .
No C++/CLI.
Windows: No built-in COM.
Requires trimming, which has limitations.
Implies compilation into a single file, which has known incompatibilities.
Apps include required runtime libraries (just like self-contained apps, increasing
their size as compared to framework-dependent apps).
System.Linq.Expressions always use their interpreted form, which is slower than
run-time generated compiled code.
Not all the runtime libraries are fully annotated to be Native AOT compatible. That
is, some warnings in the runtime libraries aren't actionable by end developers.
Diagnostic support for debugging and profiling with some limitations.
Support for some ASP.NET Core features. For more information, see ASP.NET Core
support for Native AOT.

The publish process analyzes the entire project and its dependencies for possible
limitations. Warnings are issued for each limitation the published app might encounter
at run time.
Platform/architecture restrictions
The following table shows supported compilation targets.

.NET 8

Platform         Supported architecture   Notes
Windows          x64, Arm64
Linux            x64, Arm64
macOS            x64, Arm64
iOS              Arm64                    Experimental support
iOSSimulator     x64, Arm64               Experimental support
tvOS             Arm64                    Experimental support
tvOSSimulator    x64, Arm64               Experimental support
MacCatalyst      x64, Arm64               Experimental support
Android          x64, Arm64               Experimental, no built-in Java interop
Optimize AOT deployments
Article • 09/04/2024
The Native AOT publishing process generates a self-contained executable with a subset
of the runtime libraries that are tailored specifically for your app. The compilation
generally relies on static analysis of the application to generate the best possible output.
However, the term "best possible" can have many meanings. Sometimes, you can
improve the output of the compilation by providing hints to the publish process.

Optimize for size or speed
During the compilation, the publishing process makes decisions and tradeoffs between
generating the theoretically fastest possible executable and the size of the executable.
By default, the compiler chooses a blended approach: generate fast code, but be
mindful of the size of the application.

The <OptimizationPreference> MSBuild property can be used to communicate a general
optimization goal instead of the blended default approach:

XML

<OptimizationPreference>Size</OptimizationPreference>

Setting OptimizationPreference to Size instructs the publishing process to favor the
size of the executable instead of other performance metrics. The size of the app is
expected to be smaller, but other performance metrics might be affected.

XML

<OptimizationPreference>Speed</OptimizationPreference>

Setting OptimizationPreference to Speed instructs the publishing process to favor code
execution speed. The peak throughput of the app is expected to be higher, but other
performance metrics might be affected.

Further size optimization options

Since Native AOT deployments imply the use of trimming, it's possible to further
improve the size of the application by specifying more trimming options. For example,
the Trim framework library features section discusses how to disable library features
such as globalization.
Diagnostics and instrumentation
Article • 04/22/2024

Native AOT shares some, but not all, diagnostics and instrumentation capabilities with
CoreCLR. Because of CoreCLR's rich selection of diagnostic utilities, it's sometimes
appropriate to diagnose and debug problems in CoreCLR. Apps that are
trim-compatible shouldn't have behavioral differences, so investigations often apply to
both runtimes. Nonetheless, some information can only be gathered after publishing, so
Native AOT also provides post-publish diagnostic tooling.

.NET 8 Native AOT diagnostic support

The following table summarizes diagnostic features supported for Native AOT
deployments:


Feature Fully supported Partially supported Not supported

Observability and telemetry ✔️

Development-time diagnostics ✔️

Native debugging ✔️

CPU Profiling ✔️

Heap analysis ❌

Observability and telemetry
As of .NET 8, the Native AOT runtime supports EventPipe, which is the base layer used
by many logging and tracing libraries. You can interface with EventPipe directly through
APIs like EventSource.WriteEvent or you can use libraries built on top, like
OpenTelemetry. EventPipe support also allows .NET diagnostic tools like dotnet-trace,
dotnet-counters, and dotnet-monitor to work seamlessly with Native AOT or CoreCLR
applications. EventPipe is an optional component in Native AOT. To include EventPipe
support, set the EventSourceSupport MSBuild property to true .

XML
<PropertyGroup>
<EventSourceSupport>true</EventSourceSupport>
</PropertyGroup>
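As a sketch of interfacing with EventPipe directly through EventSource, the following minimal provider writes one event that tools such as dotnet-trace can collect. The provider name Demo-Events and the event shape are illustrative, not from the article.

```csharp
using System;
using System.Diagnostics.Tracing;

// Illustrative EventSource; "Demo-Events" is a made-up provider name.
[EventSource(Name = "Demo-Events")]
public sealed class DemoEventSource : EventSource
{
    public static readonly DemoEventSource Log = new DemoEventSource();

    [Event(1, Level = EventLevel.Informational)]
    public void RequestStarted(string url) => WriteEvent(1, url);
}

public class Program
{
    public static void Main()
    {
        // Emits an event through EventPipe when a listener or trace session is attached.
        DemoEventSource.Log.RequestStarted("https://example.com");
        Console.WriteLine("event written");
    }
}
```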

Native AOT provides partial support for some well-known event providers. Not all
runtime events are supported in Native AOT.

Development-time diagnostics
The .NET CLI tooling ( dotnet SDK) and Visual Studio offer separate commands for build
and publish . build (or Start in Visual Studio) uses CoreCLR. Only publish creates a
Native AOT application. Publishing your app as Native AOT produces an app that has
been ahead-of-time (AOT) compiled to native code. As mentioned previously, not all
diagnostic tools work seamlessly with published Native AOT applications in .NET 8.
However, all .NET diagnostic tools are available for developers during the application
building stage. We recommend developing, debugging, and testing the applications as
usual and publishing the working app with Native AOT as one of the last steps.

Native debugging
When you run your app during development, like inside Visual Studio, or with
dotnet run , dotnet build , or dotnet test , it runs on CoreCLR by default. However, if
PublishAot is present in the project file, the behavior should be the same between
CoreCLR and Native AOT. This characteristic allows you to use the standard Visual Studio
managed debugging engine for development and testing.

After publishing, Native AOT applications are true native binaries. The managed
debugger won't work on them. However, the Native AOT compiler generates fully native
executable files that native debuggers can debug on your platform of choice (for
example, WinDbg or Visual Studio on Windows and gdb or lldb on Unix-like systems).

The Native AOT compiler generates information about line numbers, types, locals, and
parameters. The native debugger lets you inspect stack trace and variables, step into or
over source lines, or set line breakpoints.

To debug managed exceptions, set a breakpoint on the RhThrowEx method, which is
called whenever a managed exception is thrown. The exception is stored in the rcx or
x0 register. If your debugger supports viewing C++ objects, you can cast the register to
S_P_CoreLib_System_Exception* to see more information about the exception.
Collecting a dump file for a Native AOT application involves some manual steps in .NET
8.

Visual Studio-specific notes

You can launch a Native AOT-compiled executable under the Visual Studio debugger by
opening it in the Visual Studio IDE. You need to open the executable itself in Visual
Studio.

To set a breakpoint that breaks whenever an exception is thrown, choose the
Breakpoints option from the Debug > Windows menu. In the new window, select New
> Function breakpoint. Specify RhThrowEx as the Function Name and leave the
Language option at All Languages (don't select C#).

To see what exception was thrown, start debugging (Debug > Start Debugging or F5 ),
open the Watch window (Debug > Windows > Watch), and add following expression as
one of the watches: (S_P_CoreLib_System_Exception*)@rcx . This mechanism leverages
the fact that at the time RhThrowEx is called, the x64 CPU register RCX contains the
thrown exception. You can also paste the expression into the Immediate window; the
syntax is the same as for watches.

Importance of the symbol file

When publishing, the Native AOT compiler produces both an executable and a symbol
file. Native debugging, and related activities like profiling, require access to the native
symbol file. If this file isn't present, you might have degraded or broken results.

For information about the name and location of the symbol file, see Native debug
information.

CPU profiling
Platform-specific tools like PerfView and Perf can be used to collect CPU samples of
a Native AOT application.

Heap analysis
Managed heap analysis isn't currently supported in Native AOT. Heap analysis tools like
dotnet-gcdump, PerfView , and Visual Studio heap analysis tools don't work in Native
AOT in .NET 8.
Native code interop with Native AOT
Article • 03/30/2024

Native code interop is a technology that allows you to access unmanaged libraries from
managed code, or expose managed libraries to unmanaged code (the opposite
direction).

While native code interop works similarly in Native AOT and non-AOT deployments,
there are some specifics that differ when publishing as Native AOT.

Direct P/Invoke calls
The P/Invoke calls in AOT-compiled binaries are bound lazily at run time by default, for
better compatibility. You can configure the AOT compiler to generate direct calls for
selected P/Invoke methods that are bound during startup by the dynamic loader that
comes with the operating system. The unmanaged libraries and entry points referenced
via direct calls must always be available at run time, otherwise the native binary fails to
start.

The benefits of direct P/Invoke calls are:

They have better steady-state performance.
They make it possible to link the unmanaged dependencies statically.

You can configure the direct P/Invoke generation using <DirectPInvoke> items in the
project file. The item name can be either <modulename>, which enables direct calls for
all entry points in the module, or <modulename!entrypointname>, which enables a
direct call for the specific module and entry point only.

To specify a list of entry points in an external file, use <DirectPInvokeList> items in the
project file. A list is useful when the number of direct P/Invoke calls is large and it's
impractical to specify them using individual <DirectPInvoke> items. The file can contain
empty lines and comments starting with # .

Examples:

XML

<ItemGroup>
<!-- Generate direct PInvoke calls for everything in __Internal -->
<!-- This option replicates Mono AOT behavior that generates direct
PInvoke calls for __Internal -->
<DirectPInvoke Include="__Internal" />
<!-- Generate direct PInvoke calls for everything in libc (also matches
libc.so on Linux or libc.dylib on macOS) -->
<DirectPInvoke Include="libc" />
<!-- Generate direct PInvoke calls for Sleep in kernel32 (also matches
kernel32.dll on Windows) -->
<DirectPInvoke Include="kernel32!Sleep" />
<!-- Generate direct PInvoke for all APIs listed in DirectXAPIs.txt -->
<DirectPInvokeList Include="DirectXAPIs.txt" />
</ItemGroup>

On Windows, Native AOT uses a prepopulated list of direct P/Invoke methods that are
available on all supported versions of Windows.

2 Warning

Because direct P/Invoke methods are resolved by the operating system dynamic
loader and not by the Native AOT runtime library, direct P/Invoke methods will not
respect the DefaultDllImportSearchPathsAttribute. The library search order will
follow the dynamic loader rules as defined by the operating system. Some
operating systems and loaders offer ways to control dynamic loading through
linker flags (such as /DEPENDENTLOADFLAG on Windows or -rpath on Linux). For more
information on how to specify linker flags, see the Linking section.

Linking
To statically link against an unmanaged library, you need to specify
<NativeLibrary Include="filename" /> pointing to a .lib file on Windows and a .a file
on Unix-like systems.

Examples:

XML

<ItemGroup>
  <!-- Generate direct PInvokes for Dependency -->
  <DirectPInvoke Include="Dependency" />
  <!-- Specify library to link against -->
  <NativeLibrary Include="Dependency.lib" Condition="$(RuntimeIdentifier.StartsWith('win'))" />
  <NativeLibrary Include="Dependency.a" Condition="!$(RuntimeIdentifier.StartsWith('win'))" />
</ItemGroup>

To specify additional flags to the native linker, use the <LinkerArg> item.

Examples:

XML

<ItemGroup>
  <!-- link.exe is used as the linker on Windows -->
  <LinkerArg Include="/DEPENDENTLOADFLAG:0x800" Condition="$(RuntimeIdentifier.StartsWith('win'))" />

  <!-- Native AOT invokes clang/gcc as the linker, so arguments need to be prefixed with "-Wl," -->
  <LinkerArg Include="-Wl,-rpath,'/bin/'" Condition="$(RuntimeIdentifier.StartsWith('linux'))" />
</ItemGroup>

Native exports
The Native AOT compiler exports methods annotated with UnmanagedCallersOnlyAttribute with a nonempty EntryPoint property as public C entry points. This makes it possible to either dynamically or statically link the AOT-compiled modules into external programs. Only methods marked UnmanagedCallersOnly in the published assembly are considered. Methods in project references or NuGet packages won't be exported. For more information, see the NativeLibrary sample.
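As an illustrative sketch (the method name and entry-point name here are hypothetical, not taken from the sample), an exported method might look like this:

```csharp
using System.Runtime.InteropServices;

public static class Exports
{
    // Exported as the C entry point "add"; a C caller would declare it as: int add(int, int);
    // Parameters and return type must be blittable for UnmanagedCallersOnly methods.
    [UnmanagedCallersOnly(EntryPoint = "add")]
    public static int Add(int a, int b) => a + b;
}
```

The external program resolves the symbol by its EntryPoint name, not by the .NET method name.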

Building native libraries
Article • 06/07/2024

Publishing .NET class libraries as Native AOT allows creating libraries that can be consumed from non-.NET programming languages. The produced native library is self-contained and doesn't require a .NET runtime to be installed.

Note

Only "shared libraries" (also known as DLLs on Windows) are supported. Static
libraries are not officially supported and may require compiling Native AOT from
source. Unloading Native AOT libraries (via dlclose or FreeLibrary , for example) is
not supported.

Publishing a class library as Native AOT creates a native library that exposes methods of
the class library annotated with UnmanagedCallersOnlyAttribute with a non-null
EntryPoint field. For more information, see the native library sample available in the
dotnet/samples repository on GitHub.
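As a sketch, the class library's project file opts into Native AOT with the documented PublishAot property:

```xml
<PropertyGroup>
  <PublishAot>true</PublishAot>
</PropertyGroup>
```

Publishing with dotnet publish -r win-x64 -c Release (for example) then produces the native library in the publish output for that runtime identifier.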

Cross-compilation
Article • 12/05/2023

Cross-compilation is a process of creating executable code for a platform other than the
one on which the compiler is running. The platform difference might be a different OS
or a different architecture. For instance, compiling for Windows from Linux, or for Arm64
from x64. On Linux, the difference can also be between the standard C library
implementations - glibc (e.g. Ubuntu Linux) or musl (e.g. Alpine Linux).

Native AOT uses platform tools (linkers) to link platform libraries (static and dynamic)
together with AOT-compiled managed code into the final executable file. The availability
of cross-linkers and static/dynamic libraries for the target system limits the
OS/architecture pairs that can cross-compile.

Since there's no standardized way to obtain native macOS SDK for use on
Windows/Linux, or Windows SDK for use on Linux/macOS, or a Linux SDK for use on
Windows/macOS, Native AOT does not support cross-OS compilation. Cross-OS
compilation with Native AOT requires some form of emulation, like a virtual machine or
Windows WSL.

However, Native AOT does have limited support for cross-architecture compilation. As
long as the necessary native toolchain is installed, it's possible to cross-compile between
the x64 and the arm64 architectures on Windows, Mac, or Linux.
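For example, on an x64 host with the arm64 toolchain installed, publishing for the other architecture is a matter of passing the target runtime identifier (a sketch; the exact RID depends on your OS):

```
dotnet publish -r linux-arm64 -c Release
```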

Windows
Cross-compiling from x64 Windows to ARM64 Windows or vice versa works as long as
the appropriate VS 2022 C++ build tools are installed. To target ARM64 make sure the
Visual Studio component "VS 2022 C++ ARM64/ARM64EC build tools (Latest)" is
installed. To target x64, look for "VS 2022 C++ x64/x86 build tools (Latest)" instead.

Mac
macOS provides the x64 and arm64 toolchains in the default Xcode install.

Linux
Every Linux distribution has a different system for installing native toolchain
dependencies. Consult the documentation for your Linux distribution to determine the
necessary steps.
The necessary dependencies are:

A cross-linker, or a linker that can emit for the target. clang is one such linker.
A target-compatible objcopy or strip , if StripSymbols is enabled for your project.
Object files for the C runtime of the target architecture.
Object files for zlib for the target architecture.

The following commands may suffice for compiling for linux-arm64 on Ubuntu 22.04
amd64, although this is not documented or guaranteed by Ubuntu:

Bash

sudo dpkg --add-architecture arm64
sudo bash -c 'cat > /etc/apt/sources.list.d/arm64.list <<EOF
deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ jammy main restricted
deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ jammy-updates main restricted
deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ jammy-backports main restricted universe multiverse
EOF'
sudo sed -i -e 's/deb http/deb [arch=amd64] http/g' /etc/apt/sources.list
sudo sed -i -e 's/deb mirror/deb [arch=amd64] mirror/g' /etc/apt/sources.list
sudo apt update
sudo apt install -y clang llvm binutils-aarch64-linux-gnu gcc-aarch64-linux-gnu zlib1g-dev:arm64

Security features
Article • 09/18/2024

.NET offers many facilities to help address security concerns when building apps. Native
AOT deployment builds on top of these facilities and provides several that can help
harden your apps.

No run-time code generation


Since native AOT generates all code at the time of publishing the app, no new
executable code needs to be generated at run time. This allows running your apps in
environments that disallow creation of new executable code pages at run time. All the
code that the CPU executes can be digitally signed.

Restricted reflection surface


When apps are published with native AOT, the compiler analyzes the usage of reflection
within the app. Only the program elements that were deemed to be targets of reflection
are available for reflection at run time. Places within the program that attempt to do
unconstrained reflection are flagged using trimming warnings. Program elements that
weren't intended to be targets of reflection cannot be reflected on. This restriction can
prevent a class of issues where a malicious actor gets in control of what the program
reflects on and invokes unintended code. This restriction includes approaches that use
Assembly.LoadFrom or Reflection.Emit - neither of those work with native AOT, and

their use is flagged with a warning at build time.

Control Flow Guard


Control Flow Guard is a highly optimized platform security feature on Windows that was
created to combat memory corruption vulnerabilities. By placing tight restrictions on
where an application can execute code from, it makes it much harder for exploits to
execute arbitrary code through vulnerabilities such as buffer overflows.

To enable Control Flow Guard on your native AOT app, set the ControlFlowGuard
property in the published project.

XML

<PropertyGroup>
  <!-- Enable control flow guard -->
  <ControlFlowGuard>Guard</ControlFlowGuard>
</PropertyGroup>

Control-flow Enforcement Technology Shadow Stack (.NET 9+)
Control-flow Enforcement Technology (CET) Shadow Stack is a computer processor
feature. It provides capabilities to defend against return-oriented programming (ROP)
based malware attacks.

CET is enabled by default when publishing for Windows. To disable CET, set the
CetCompat property in the published project.

XML

<PropertyGroup>
  <!-- Disable Control-flow Enforcement Technology -->
  <CetCompat>false</CetCompat>
</PropertyGroup>
Introduction to AOT warnings
Article • 09/12/2023

When publishing your application as Native AOT, the build process produces all the
native code and data structures required to support the application at run time. This is
different from non-native deployments, which execute the application from formats that
describe the application in abstract terms (a program for a virtual machine) and create
native representations on demand at run time.

The abstract representations of program parts don't have a one-to-one mapping to native representation. For example, the abstract description of the generic List<T>.Add method maps to potentially infinite native method bodies that need to be specialized for the given T (for example, List<int>.Add and List<double>.Add).

Because the relationship of abstract code to native code is not one-to-one, the build
process needs to create a complete list of native code bodies and data structures at
build time. It can be difficult to create this list at build time for some of the .NET APIs. If
the API is used in a way that wasn't anticipated at build time, an exception will be
thrown at run time.

To prevent changes in behavior when deploying as Native AOT, the .NET SDK provides
static analysis of AOT compatibility through "AOT warnings." AOT warnings are
produced when the build finds code that may not be compatible with AOT. Code that's
not AOT-compatible may produce behavioral changes or even crashes in an application
after it's been built as Native AOT. Ideally, all applications that use Native AOT should
have no AOT warnings. If there are any AOT warnings, ensure there are no behavior
changes by thoroughly testing your app after building as Native AOT.
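If you maintain a library, one way to surface these warnings early is to enable the AOT analyzer on the library project itself; assuming the .NET 8 SDK or later, the documented IsAotCompatible property turns it on:

```xml
<PropertyGroup>
  <IsAotCompatible>true</IsAotCompatible>
</PropertyGroup>
```

This reports AOT warnings at build time without requiring a Native AOT publish.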

Examples of AOT warnings


For most C# code, it's straightforward to determine what native code needs to be
generated. The native compiler can walk the method bodies and find what native code
and data structures are accessed. Unfortunately, some features, like reflection, present a
significant problem. Consider the following code:

C#

Type t = typeof(int);
while (true)
{
    t = typeof(GenericType<>).MakeGenericType(t);
    Console.WriteLine(Activator.CreateInstance(t));
}

struct GenericType<T> { }

While the above program is not very useful, it represents an extreme case that requires
an infinite number of generic types to be created when building the application as
Native AOT. Without Native AOT, the program would run until it runs out of memory.
With Native AOT, we would not be able to even build it if we were to generate all the
necessary types (the infinite number of them).

In this case, Native AOT build issues the following warning on the MakeGenericType line:

AOT analysis warning IL3050: Program.<Main>$(String[]): Using member 'System.Type.MakeGenericType(Type[])' which has 'RequiresDynamicCodeAttribute' can break functionality when AOT compiling. The native code for this instantiation might not be available at runtime.

At run time, the application will indeed throw an exception from the MakeGenericType
call.

React to AOT warnings


The AOT warnings are meant to bring predictability to Native AOT builds. A majority of AOT warnings are about possible run-time exceptions in situations where native code wasn't generated to support the scenario. The broadest category is RequiresDynamicCodeAttribute.

RequiresDynamicCode
RequiresDynamicCodeAttribute is simple and broad: it's an attribute that means the
member has been annotated as being incompatible with AOT. This annotation means
that the member might use reflection or another mechanism to create new native code
at run time. This attribute is used when code is fundamentally not AOT compatible, or
the native dependency is too complex to statically predict at build time. This would
often be true for methods that use the Type.MakeGenericType API, reflection emit, or
other run-time code generation technologies. The following code shows an example.

C#

[RequiresDynamicCode("Use 'MethodFriendlyToAot' instead")]
void MethodWithReflectionEmit() { ... }

void TestMethod()
{
    // IL3050: Using method 'MethodWithReflectionEmit' which has 'RequiresDynamicCodeAttribute'
    // can break functionality when AOT compiling. Use 'MethodFriendlyToAot' instead.
    MethodWithReflectionEmit();
}

There aren't many workarounds for RequiresDynamicCode . The best fix is to avoid calling
the method at all when building as Native AOT and use something else that's AOT
compatible. If you're writing a library and it's not in your control whether or not to call
the method, you can also add RequiresDynamicCode to your own method. This will
annotate your method as not AOT compatible. Adding RequiresDynamicCode silences all
AOT warnings in the annotated method but will produce a warning whenever someone
else calls it. For this reason, it's mostly useful to library authors to "bubble up" the
warning to a public API.

If you can somehow determine that the call is safe, and all native code will be available
at run time, you can also suppress the warning using
UnconditionalSuppressMessageAttribute. For example:

C#

[RequiresDynamicCode("Use 'MethodFriendlyToAot' instead")]
void MethodWithReflectionEmit() { ... }

[UnconditionalSuppressMessage("Aot", "IL3050:RequiresDynamicCode",
    Justification = "The unfriendly method is not reachable with AOT")]
void TestMethod()
{
    if (RuntimeFeature.IsDynamicCodeSupported)
        MethodWithReflectionEmit(); // warning suppressed
}

UnconditionalSuppressMessage is like SuppressMessage but it can be seen by publish and other post-build tools. SuppressMessage and #pragma directives are only present in source, so they can't be used to silence warnings from the build.

Caution

Be careful when suppressing AOT warnings. The call might be AOT-compatible now,
but as you update your code, that might change, and you might forget to review all
the suppressions.
Intrinsic APIs marked
RequiresDynamicCode
Article • 09/11/2024

Under normal circumstances, calling APIs annotated with RequiresDynamicCodeAttribute in an app published with native AOT triggers warning IL3050 (Avoid calling members annotated with 'RequiresDynamicCodeAttribute' when publishing as native AOT). APIs that trigger the warning might not behave correctly after AOT compilation.

Some APIs annotated RequiresDynamicCode can still be used without triggering the
warning when called in a specific pattern. When used as part of a pattern, the call to the
API can be statically analyzed by the compiler, does not generate a warning, and
behaves as expected at run time.

Enum.GetValues(Type) Method
Calls to this API don't trigger a warning if the concrete enum type is statically visible in
the calling method body. For example, Enum.GetValues(typeof(AttributeTargets)) does
not trigger a warning, but Enum.GetValues(typeof(T)) and Enum.GetValues(someType)
do.
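To illustrate the pattern (someType here is a hypothetical Type variable whose value isn't statically known):

```csharp
using System;

// Concrete enum type statically visible in the method body: no warning
Array values = Enum.GetValues(typeof(AttributeTargets));

// Type not statically visible: triggers IL3050
Array unknown = Enum.GetValues(someType);
```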

Marshal.DestroyStructure(IntPtr, Type) Method


Calls to this API don't trigger a warning if the concrete type is statically visible in the
calling method body. For example, Marshal.DestroyStructure(offs, typeof(bool)) does
not trigger a warning, but Marshal.DestroyStructure(offs, typeof(T)) and
Marshal.DestroyStructure(offs, someType) do.

Marshal.GetDelegateForFunctionPointer(IntPtr,
Type) Method
Calls to this API don't trigger a warning if the concrete type is statically visible in the
calling method body. For example, Marshal.GetDelegateForFunctionPointer(ptr,
typeof(bool)) does not trigger a warning, but
Marshal.GetDelegateForFunctionPointer(ptr, typeof(T)) and

Marshal.GetDelegateForFunctionPointer(ptr, someType) do.


Marshal.OffsetOf(Type, String) Method
Calls to this API don't trigger a warning if the concrete type is statically visible in the
calling method body. For example, Marshal.OffsetOf(typeof(Point), someField) does
not trigger a warning, but Marshal.OffsetOf(typeof(T), someField) and
Marshal.OffsetOf(someType, someField) do.

Marshal.PtrToStructure(IntPtr, Type) Method


Calls to this API don't trigger a warning if the concrete type is statically visible in the
calling method body. For example, Marshal.PtrToStructure(offs, typeof(bool)) does
not trigger a warning, but Marshal.PtrToStructure(offs, typeof(T)) and
Marshal.PtrToStructure(offs, someType) do.

Marshal.SizeOf(Type) Method
Calls to this API don't trigger a warning if the concrete type is statically visible in the
calling method body. For example, Marshal.SizeOf(typeof(bool)) does not trigger a
warning, but Marshal.SizeOf(typeof(T)) and Marshal.SizeOf(someType) do.

MethodInfo.MakeGenericMethod(Type[]) Method (.NET 9+)
Calls to this API don't trigger a warning if both the generic method definition and the instantiation arguments are statically visible within the calling method body. For example, typeof(SomeType).GetMethod("GenericMethod").MakeGenericMethod(typeof(int)). It's also possible to use a generic parameter as the argument: typeof(SomeType).GetMethod("GenericMethod").MakeGenericMethod(typeof(T)) also doesn't warn.

If the generic type definition is statically visible within the calling method body and all
the generic parameters of it are constrained to be a class, the call also doesn't trigger
the IL3050 warning. In this case, the arguments don't have to be statically visible. For
example:

C#

// No IL3050 warning on MakeGenericMethod because T is constrained to be class
typeof(SomeType).GetMethod("GenericMethod").MakeGenericMethod(Type.GetType(Console.ReadLine()));

class SomeType
{
    public void GenericMethod<T>() where T : class { }
}

All the other cases, such as someMethod.MakeGenericMethod(typeof(int)) or typeof(SomeType).GetMethod("GenericMethod").MakeGenericMethod(someType) where someType has an unknown value, trigger a warning.

Type.MakeGenericType(Type[]) Method (.NET 9+)
Calls to this API don't trigger a warning if both the generic type definition and the instantiation arguments are statically visible within the calling method body. For example, typeof(List<>).MakeGenericType(typeof(int)). It's also possible to use a generic parameter as the argument: typeof(List<>).MakeGenericType(typeof(T)) also doesn't warn.

If the generic type definition is statically visible within the calling method body and all
the generic parameters of it are constrained to be a class, the call also doesn't trigger
the IL3050 warning. In this case, the arguments don't have to be statically visible. For
example:

C#

// No IL3050 warning on MakeGenericType because T is constrained to be class
typeof(Generic<>).MakeGenericType(Type.GetType(Console.ReadLine()));

class Generic<T> where T : class { }

All the other cases, such as someType.MakeGenericType(typeof(int)) or typeof(List<>).MakeGenericType(someType) where someType has an unknown value, trigger a warning.
IL3050: Avoid calling members
annotated with
'RequiresDynamicCodeAttribute' when
publishing as Native AOT
Article • 09/11/2024

Cause
When you publish an app as Native AOT (by setting the PublishAot property to true in a project), calling members annotated with the RequiresDynamicCodeAttribute attribute might result in exceptions at run time. Members annotated with this attribute might require the ability to dynamically create new code at run time, and the Native AOT publishing model doesn't provide a way to generate native code at run time.

Rule description
RequiresDynamicCodeAttribute indicates that the member references code that might
require code generation at run time.

Example
C#

// AOT analysis warning IL3050: Program.<Main>$(String[]): Using member 'System.Type.MakeGenericType(Type[])'
// which has 'RequiresDynamicCodeAttribute' can break functionality when AOT compiling. The native code for
// this instantiation might not be available at runtime.
typeof(Generic<>).MakeGenericType(unknownType);

class Generic<T> { }

struct SomeStruct { }

How to fix violations


Members annotated with the RequiresDynamicCodeAttribute attribute have a message
that provides useful information to users who are publishing as Native AOT. Consider
adapting existing code to the attribute's message or removing the violating call.

Some APIs annotated with RequiresDynamicCodeAttribute don't trigger a warning when


called in a specific pattern. For more information, see Intrinsic APIs marked
RequiresDynamicCode.
IL3051: 'RequiresDynamicCodeAttribute'
attribute must be consistently applied
on virtual and interface methods
Article • 09/02/2022

Cause
There is a mismatch in the RequiresDynamicCodeAttribute annotations between an
interface and its implementation or a virtual method and its override.

Example
A base member has the attribute but the derived member does not have the attribute.

C#

public class Base
{
    [RequiresDynamicCode("Message")]
    public virtual void TestMethod() {}
}

public class Derived : Base
{
    // IL3051: Base member 'Base.TestMethod' with 'RequiresDynamicCodeAttribute' has a derived member
    // 'Derived.TestMethod()' without 'RequiresDynamicCodeAttribute'. For all interfaces and overrides
    // the implementation attribute must match the definition attribute.
    public override void TestMethod() {}
}

A derived member has the attribute but the overridden base member does not have the
attribute.

C#

public class Base
{
    public virtual void TestMethod() {}
}

public class Derived : Base
{
    // IL3051: Member 'Derived.TestMethod()' with 'RequiresDynamicCodeAttribute' overrides base member
    // 'Base.TestMethod()' without 'RequiresDynamicCodeAttribute'. For all interfaces and overrides the
    // implementation attribute must match the definition attribute.
    [RequiresDynamicCode("Message")]
    public override void TestMethod() {}
}

An interface member has the attribute but its implementation does not have the
attribute.

C#

interface IRDC
{
    [RequiresDynamicCode("Message")]
    void TestMethod();
}

class Implementation : IRDC
{
    // IL3051: Interface member 'IRDC.TestMethod()' with 'RequiresDynamicCodeAttribute' has an
    // implementation member 'Implementation.TestMethod()' without 'RequiresDynamicCodeAttribute'.
    // For all interfaces and overrides the implementation attribute must match the definition
    // attribute.
    public void TestMethod() { }
}

An implementation member has the attribute but the interface that it implements does
not have the attribute.

C#

interface IRDC
{
    void TestMethod();
}

class Implementation : IRDC
{
    [RequiresDynamicCode("Message")]
    // IL3051: Member 'Implementation.TestMethod()' with 'RequiresDynamicCodeAttribute' implements
    // interface member 'IRDC.TestMethod()' without 'RequiresDynamicCodeAttribute'. For all
    // interfaces and overrides the implementation attribute must match the definition attribute.
    public void TestMethod() { }
}
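To fix the violation, the annotation must match on both sides. A corrected version of the interface case might look like this (a sketch):

```csharp
interface IRDC
{
    [RequiresDynamicCode("Message")]
    void TestMethod();
}

class Implementation : IRDC
{
    // Matching annotation on the implementation: no IL3051 warning
    [RequiresDynamicCode("Message")]
    public void TestMethod() { }
}
```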
IL3052: COM interop is not supported
with full ahead of time compilation
Article • 09/12/2023

Cause
Built-in COM is not supported with Native AOT compilation. Use COM wrappers instead.

When the unsupported code path is reached at run time, an exception will be thrown.

Example
C#

using System.Runtime.InteropServices;

// AOT analysis warning IL3052: CorRuntimeHost.CorRuntimeHost(): COM interop is not supported
// with full ahead of time compilation.
new CorRuntimeHost();

[Guid("CB2F6723-AB3A-11D2-9C40-00C04FA30A3E")]
[ComImport]
[ClassInterface(ClassInterfaceType.None)]
public class CorRuntimeHost
{
}

IL3053: Assembly produced AOT
warnings
Article • 09/02/2022

Cause
The assembly produced one or more AOT analysis warnings. The warnings have been
collapsed into a single warning message because they refer to code that likely comes
from a third party and is not directly actionable. Using the library with native AOT might
be problematic.

To see the detailed warnings, specify <TrimmerSingleWarn>false</TrimmerSingleWarn> in your project file.
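In the project file, that setting looks like this:

```xml
<PropertyGroup>
  <TrimmerSingleWarn>false</TrimmerSingleWarn>
</PropertyGroup>
```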
IL3054: Generic expansion to a method
or type was aborted due to generic
recursion
Article • 09/05/2023

Cause
Methods on generic types and generic methods that are instantiated over different
types are supported by different native code bodies specialized for the given type
parameter.

It is possible to form a cycle between generic instantiations in a way that the number of
native code bodies required to support the application becomes unbounded. Because
Native AOT deployments require generating all native method bodies at the time of
publishing the application, this would require compiling an infinite number of methods.

When the AOT compilation process detects such unbounded growth, it cuts off the
growth by generating a throwing method. If the application goes beyond the cutoff
point at run time, an exception is thrown.

Even though it's unlikely the throwing method body will be reached at run time, it's
advisable to remove the generic recursion by restructuring the code. Generic recursion
negatively affects compilation speed and the size of the output executable.

In .NET, generic code instantiated over reference type is shared across all reference
typed instantiations (for example, the code to support List<string> and List<object>
is the same). However, additional native data structures are needed to express the
"generic context" (the thing that gets substituted for T ). It is possible to form generic
recursion within these data structures as well. For example, this can happen if the
generic context for Foo<T> needs to refer to Foo<Foo<T>> that in turn needs
Foo<Foo<Foo<T>>> .

Example
The following program will work correctly for input "2" but throws an exception for
input "100".

C#
// AOT analysis warning IL3054:
// Program.<<Main>$>g__CauseGenericRecursion|0_0<Struct`1<Struct`1<Struct`1<Struct`1<Int32>>>>>(Int32):
// Generic expansion to 'Program.<<Main>$>g__CauseGenericRecursion|0_0<Struct`1<Struct`1<Struct`1<Struct`1<Struct`1<Int32>>>>>>(Int32)'
// was aborted due to generic recursion. An exception will be thrown at runtime if this codepath
// is ever reached. Generic recursion also negatively affects compilation speed and the size of
// the compilation output. It is advisable to remove the source of the generic recursion
// by restructuring the program around the source of recursion. The source of generic recursion
// might include: 'Program.<<Main>$>g__CauseGenericRecursion|0_0<T>(Int32)'

using System;

int number = int.Parse(Console.ReadLine());
Console.WriteLine(CauseGenericRecursion<int>(number));

// Recurses once per decrement of i, instantiating a deeper Struct<T> each time
static int CauseGenericRecursion<T>(int i) => i > 0 ? 1 + CauseGenericRecursion<Struct<T>>(i - 1) : 0;

struct Struct<T> { }

The behavior of the program at run time for input "100":

Unhandled Exception: System.TypeLoadException: Could not load type 'Program' from assembly 'repro, Version=7.0.0.0, Culture=neutral, PublicKeyToken=null'.
   at Internal.Runtime.CompilerHelpers.ThrowHelpers.ThrowTypeLoadExceptionWithArgument(ExceptionStringID, String, String, String) + 0x42
   at Program.<<Main>$>g__CauseGenericRecursion|0_0[T](Int32) + 0x29
   at Program.<<Main>$>g__CauseGenericRecursion|0_0[T](Int32) + 0x1f
   at Program.<<Main>$>g__CauseGenericRecursion|0_0[T](Int32) + 0x1f
   at Program.<<Main>$>g__CauseGenericRecursion|0_0[T](Int32) + 0x1f
   at Program.<<Main>$>g__CauseGenericRecursion|0_0[T](Int32) + 0x1f
   at Program.<Main>$(String[]) + 0x3a

Similarly, the following program causes recursion within native data structures (as
opposed to generic recursion within native code), since the instantiation is over a
reference type, but has a cycle:

C#
// AOT analysis warning IL3054:
// Program.<<Main>$>g__Recursive|0_0<List`1<List`1<List`1<List`1<Object>>>>>():
// Generic expansion to 'Program.<<Main>$>g__Recursive|0_0<List`1<List`1<List`1<List`1<List`1<Object>>>>>>()'
// was aborted due to generic recursion. An exception will be thrown at runtime if this codepath
// is ever reached. Generic recursion also negatively affects compilation speed and the size of
// the compilation output. It is advisable to remove the source of the generic recursion
// by restructuring the program around the source of recursion. The source of generic recursion
// might include: 'Program.<<Main>$>g__Recursive|0_0<T>()'

using System.Collections.Generic;

Recursive<object>();

static void Recursive<T>() => Recursive<List<T>>();

IL3055: P/Invoke method declares a
parameter with an abstract delegate
Article • 09/02/2022

Cause
P/Invoke marshalling code needs to be generated ahead of time. If marshalling code for
a delegate wasn't pregenerated, P/Invoke marshalling will fail with an exception at run
time.

Marshalling code is generated for delegate types that either:

Are used in signatures of P/Invoke methods.
Appear as fields of types passed to native code via P/Invoke.
Are decorated with UnmanagedFunctionPointerAttribute.

If a concrete type cannot be inferred from the P/Invoke signature, marshalling code
might not be available at run time and the P/Invoke will throw an exception.

Replace Delegate or MulticastDelegate in the P/Invoke signature with a concrete delegate type.

Example
C#

using System;
using System.Runtime.InteropServices;

PinvokeMethod(() => { });

// AOT analysis warning IL3055: Program.<Main>$(String[]): P/invoke method
// 'Program.<<Main>$>g__PinvokeMethod|0_1(Delegate)' declares a parameter with an abstract delegate.
// Correctness of interop for abstract delegates cannot be guaranteed after native compilation:
// the marshalling code for the delegate might not be available. Use a non-abstract delegate type
// or ensure any delegate instance passed as parameter is marked with `UnmanagedFunctionPointerAttribute`.
[DllImport("library")]
static extern void PinvokeMethod(Delegate del);
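A fixed declaration replaces the abstract Delegate with a concrete delegate type; as a sketch, using Action as the concrete type:

```csharp
using System;
using System.Runtime.InteropServices;

// Concrete delegate type: marshalling code can be pregenerated, so no IL3055 warning
[DllImport("library")]
static extern void PinvokeMethod(Action del);
```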
IL3056: RequiresDynamicCodeAttribute
cannot be placed directly on a static
constructor
Article • 09/02/2022

Cause
RequiresDynamicCodeAttribute is not allowed on static constructors since these are not
callable by the user. Placing the attribute directly on a static constructor will have no
effect. Annotate the method's containing type instead.

Example
C#

public class MyClass
{
    // AOT analysis warning IL3056: 'RequiresDynamicCodeAttribute' cannot be placed directly on
    // static constructor 'MyClass..cctor()', consider placing 'RequiresDynamicCodeAttribute' on
    // the type declaration instead.
    [RequiresDynamicCode("Dangerous")]
    static MyClass() { }
}
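The fix, as the text above suggests, is to annotate the containing type instead (a sketch):

```csharp
// Annotating the type declaration instead of the static constructor
[RequiresDynamicCode("Dangerous")]
public class MyClass
{
    static MyClass() { }
}
```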
Runtime package store
Article • 11/03/2021

Starting with .NET Core 2.0, it's possible to package and deploy apps against a known
set of packages that exist in the target environment. The benefits are faster
deployments, lower disk space usage, and improved startup performance in some cases.

This feature is implemented as a runtime package store, which is a directory on disk where packages are stored (typically at /usr/local/share/dotnet/store on macOS/Linux and C:/Program Files/dotnet/store on Windows). Under this directory, there are subdirectories for architectures and target frameworks. The file layout is similar to the way that NuGet assets are laid out on disk:

\dotnet
\store
\x64
\netcoreapp2.0
\microsoft.applicationinsights
\microsoft.aspnetcore
...
\x86
\netcoreapp2.0
\microsoft.applicationinsights
\microsoft.aspnetcore
...

A target manifest file lists the packages in the runtime package store. Developers can
target this manifest when publishing their app. The target manifest is typically provided
by the owner of the targeted production environment.

Preparing a runtime environment


The administrator of a runtime environment can optimize apps for faster deployments
and lower disk space use by building a runtime package store and the corresponding
target manifest.

The first step is to create a package store manifest that lists the packages that compose
the runtime package store. This file format is compatible with the project file format
(csproj).

XML
<Project Sdk="Microsoft.NET.Sdk">
<ItemGroup>
<PackageReference Include="NUGET_PACKAGE" Version="VERSION" />
<!-- Include additional packages here -->
</ItemGroup>
</Project>

Example

The following example package store manifest (packages.csproj) is used to add Newtonsoft.Json and Moq to a runtime package store:

XML

<Project Sdk="Microsoft.NET.Sdk">
<ItemGroup>
<PackageReference Include="Newtonsoft.Json" Version="10.0.3" />
<PackageReference Include="Moq" Version="4.7.63" />
</ItemGroup>
</Project>

Provision the runtime package store by executing dotnet store with the package store
manifest, runtime, and framework:

.NET CLI

dotnet store --manifest <PATH_TO_MANIFEST_FILE> --runtime <RUNTIME_IDENTIFIER> --framework <FRAMEWORK>

Example

.NET CLI

dotnet store --manifest packages.csproj --runtime win10-x64 --framework netcoreapp2.0 --framework-version 2.0.0

You can pass multiple target package store manifest paths to a single dotnet store
command by repeating the option and path in the command.
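For example, assuming two hypothetical manifest files named packages1.csproj and packages2.csproj, the --manifest option can be repeated as follows:

.NET CLI

```
dotnet store --manifest packages1.csproj --manifest packages2.csproj --runtime win10-x64 --framework netcoreapp2.0
```

The resulting store contains the union of the packages listed in both manifests.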

By default, the output of the command is a package store under the .dotnet/store
subdirectory of the user's profile. You can specify a different location using the --output
<OUTPUT_DIRECTORY> option. The root directory of the store contains a target manifest
artifact.xml file. This file can be made available for download and be used by app
authors who want to target this store when publishing.
Example

The following artifact.xml file is produced after running the previous example. Note that Castle.Core is a dependency of Moq , so it's included automatically and appears in the artifact.xml manifest file.

XML

<StoreArtifacts>
<Package Id="Newtonsoft.Json" Version="10.0.3" />
<Package Id="Castle.Core" Version="4.1.0" />
<Package Id="Moq" Version="4.7.63" />
</StoreArtifacts>

Publishing an app against a target manifest


If you have a target manifest file on disk, you specify the path to the file when
publishing your app with the dotnet publish command:

.NET CLI

dotnet publish --manifest <PATH_TO_MANIFEST_FILE>

Example

.NET CLI

dotnet publish --manifest manifest.xml

You deploy the resulting published app to an environment that has the packages
described in the target manifest. Failing to do so results in the app failing to start.

Specify multiple target manifests when publishing an app by repeating the option and
path (for example, --manifest manifest1.xml --manifest manifest2.xml ). When you do
so, the app is trimmed for the union of packages specified in the target manifest files
provided to the command.

If you deploy an application with a manifest dependency that's present in the deployment (the assembly is present in the bin folder), the runtime package store isn't used on the host for that assembly. The bin folder assembly is used regardless of its presence in the runtime package store on the host.
The version of the dependency indicated in the manifest must match the version of the
dependency in the runtime package store. If you have a version mismatch between the
dependency in the target manifest and the version that exists in the runtime package
store and the app doesn't include the required version of the package in its deployment,
the app fails to start. The exception includes the name of the target manifest that called
for the runtime package store assembly, which helps you troubleshoot the mismatch.

When the deployment is trimmed on publish, only the specific versions of the manifest
packages you indicate are withheld from the published output. The packages at the
versions indicated must be present on the host for the app to start.

Specifying target manifests in the project file


An alternative to specifying target manifests with the dotnet publish command is to
specify them in the project file as a semicolon-separated list of paths under a
<TargetManifestFiles> tag.

XML

<PropertyGroup>
<TargetManifestFiles>manifest1.xml;manifest2.xml</TargetManifestFiles>
</PropertyGroup>

Specify the target manifests in the project file only when the target environment for the
app is well-known, such as for .NET Core projects. This isn't the case for open-source
projects. The users of an open-source project typically deploy it to different production
environments. These production environments generally have different sets of packages
pre-installed. You can't make assumptions about the target manifest in such
environments, so you should use the --manifest option of dotnet publish.

ASP.NET Core implicit store (.NET Core 2.0 only)


The ASP.NET Core implicit store applies only to ASP.NET Core 2.0. We strongly
recommend applications use ASP.NET Core 2.1 and later, which does not use the
implicit store. ASP.NET Core 2.1 and later use the shared framework.

For .NET Core 2.0, the runtime package store feature is used implicitly by an ASP.NET
Core app when the app is deployed as a framework-dependent deployment app. The
targets in Microsoft.NET.Sdk.Web include manifests referencing the implicit package
store on the target system. Additionally, any framework-dependent app that depends
on the Microsoft.AspNetCore.All package results in a published app that contains only
the app and its assets and not the packages listed in the Microsoft.AspNetCore.All
metapackage. It's assumed that those packages are present on the target system.

The runtime package store is installed on the host when the .NET SDK is installed. Other
installers may provide the runtime package store, including Zip/tarball installations of
the .NET SDK, apt-get , Red Hat Yum, the .NET Core Windows Server Hosting bundle,
and manual runtime package store installations.

When deploying a framework-dependent deployment app, make sure that the target
environment has the .NET SDK installed. If the app is deployed to an environment that
doesn't include ASP.NET Core, you can opt out of the implicit store by specifying
<PublishWithAspNetCoreTargetManifest> set to false in the project file as in the
following example:

XML

<PropertyGroup>
  <PublishWithAspNetCoreTargetManifest>false</PublishWithAspNetCoreTargetManifest>
</PropertyGroup>

Note

For self-contained deployment apps, it's assumed that the target system doesn't
necessarily contain the required manifest packages. Therefore,
<PublishWithAspNetCoreTargetManifest> cannot be set to true for a self-contained app.

See also
dotnet-publish
dotnet-store
.NET RID Catalog
Article • 07/11/2024

RID is short for runtime identifier. RID values are used to identify target platforms where
the application runs. They're used by .NET packages to represent platform-specific
assets in NuGet packages. The following values are examples of RIDs: linux-x64 , win-
x64 , or osx-x64 . For the packages with native dependencies, the RID designates on

which platforms the package can be restored.

A single RID can be set in the <RuntimeIdentifier> element of your project file. Multiple
RIDs can be defined as a semicolon-delimited list in the project file's
<RuntimeIdentifiers> element. They're also used via the --runtime option with the
following .NET CLI commands:

dotnet build
dotnet clean
dotnet pack
dotnet publish
dotnet restore
dotnet run
dotnet store

RIDs that represent concrete operating systems usually follow this pattern: [os].[version]-[architecture]-[additional qualifiers] where:

[os] is the operating system/platform moniker. For example, ubuntu .

[version] is the operating system version in the form of a dot-separated ( . ) version number. For example, 15.10 .

The version shouldn't be a marketing version, as marketing versions often represent multiple discrete versions of the operating system with varying platform API surface area.

[architecture] is the processor architecture. For example: x86 , x64 , arm , or arm64 .

[additional qualifiers] further differentiate different platforms. For example: aot .

RID graph
The RID graph or runtime fallback graph is a list of RIDs that are compatible with each
other.

These RIDs are defined in PortableRuntimeIdentifierGraph.json in the dotnet/runtime repository. In this file, you can see that all RIDs, except for the base one, contain an "#import" statement. These statements indicate compatible RIDs.

Before .NET 8, version-specific and distro-specific RIDs were regularly added to the
runtime.json file, which is located in the dotnet/runtime repository. This graph is no
longer updated and exists as a backwards compatibility option. Developers should use
RIDs that are non-version-specific and non-distro-specific.

When NuGet restores packages, it tries to find an exact match for the specified runtime.
If an exact match is not found, NuGet walks back the graph until it finds the closest
compatible system according to the RID graph.

The following example is the actual entry for the osx-x64 RID:

JSON

"osx-x64": {
"#import": [ "osx", "unix-x64" ]
}

The above RID specifies that osx-x64 imports unix-x64 . So, when NuGet restores
packages, it tries to find an exact match for osx-x64 in the package. If NuGet can't find
the specific runtime, it can restore packages that specify unix-x64 runtimes, for
example.

The following example shows a slightly bigger RID graph also defined in the
runtime.json file:

linux-arm64    linux-arm32
    |   \       /   |
    |     linux     |
    |       |       |
unix-arm64  |   unix-x64
      \     |     /
           unix
            |
           any
Alternatively, you can use the RidGraph tool to easily visualize the RID graph (or any
subset of the graph).

All RIDs eventually map back to the root any RID.

There are some considerations about RIDs that you have to keep in mind when working
with them:

Don't try to parse RIDs to retrieve component parts.

Use RIDs that are already defined for the platform.

The RIDs need to be specific, so don't assume anything from the actual RID value.

Don't build RIDs programmatically unless absolutely necessary.

Some apps need to compute RIDs programmatically. If so, the computed RIDs
must match the catalog exactly, including in casing. RIDs with different casing
would cause problems when the OS is case sensitive, for example, Linux, because
the value is often used when constructing things like output paths. For example,
consider a custom publishing wizard in Visual Studio that relies on information
from the solution configuration manager and project properties. If the solution
configuration passes an invalid value, for example, ARM64 instead of arm64 , it could
result in an invalid RID, such as win-ARM64 .

Using RIDs
To be able to use RIDs, you have to know which RIDs exist. For the latest and complete
version, see the PortableRuntimeIdentifierGraph.json in the dotnet/runtime
repository.

RIDs that are considered 'portable'—that is, aren't tied to a specific version or OS
distribution—are the recommended choice. This means that portable RIDs should be
used for both building a platform-specific application and creating a NuGet package
with RID-specific assets.

Starting with .NET 8, the default behavior of the .NET SDK and runtime is to only
consider non-version-specific and non-distro-specific RIDs. When restoring and
building, the SDK uses a smaller portable RID graph. The
RuntimeInformation.RuntimeIdentifier returns the platform for which the runtime was
built. At run time, .NET finds RID-specific assets using a known set of portable RIDs.
When building an application with RID-specific assets that may be ignored at runtime,
the SDK will emit a warning: NETSDK1206.
Loading assets for a specific OS version or distribution
.NET no longer attempts to provide first-class support for resolving dependencies that
are specific to an OS version or distribution. If your application or package needs to load
different assets based on OS version or distribution, it should implement the logic to
conditionally load assets.

To get information about the platform, use System.OperatingSystem APIs. On Windows and macOS, Environment.OSVersion will return the operating system version. On Linux, it may be the kernel version. To get the Linux distro name and version information, the recommended approach is to read the /etc/os-release file.

.NET provides various extension points for customizing loading logic—for example,
NativeLibrary.SetDllImportResolver(Assembly, DllImportResolver),
AssemblyLoadContext.ResolvingUnmanagedDll, AssemblyLoadContext.Resolving, and
AppDomain.AssemblyResolve. These can be used to load the asset corresponding to the
current platform.
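As a sketch of one of these extension points (the library names here are hypothetical), a DllImportResolver can choose a distro-specific native binary at load time:

C#

```csharp
using System;
using System.IO;
using System.Reflection;
using System.Runtime.InteropServices;

static class NativeLoader
{
    public static void Register() =>
        NativeLibrary.SetDllImportResolver(
            Assembly.GetExecutingAssembly(),
            static (libraryName, assembly, searchPath) =>
            {
                // Hypothetical example: prefer a musl build on Alpine Linux.
                if (libraryName == "mylib" && File.Exists("/etc/alpine-release"))
                {
                    return NativeLibrary.Load("mylib-musl", assembly, searchPath);
                }

                // IntPtr.Zero defers to the default resolution behavior.
                return IntPtr.Zero;
            });
}
```

Call NativeLoader.Register() early in startup, before the first P/Invoke into the library, so the resolver is in place when resolution occurs.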

Known RIDs
The following list shows a small subset of the most common RIDs used for each OS. For
the latest and complete version, see the PortableRuntimeIdentifierGraph.json in the
dotnet/runtime repository.

Windows RIDs
win-x64

win-x86

win-arm64

For more information, see Install .NET on Windows.

Linux RIDs
linux-x64 (Most desktop distributions like CentOS Stream, Debian, Fedora, Ubuntu, and derivatives)

linux-musl-x64 (Lightweight distributions using musl like Alpine Linux)

linux-musl-arm64 (Used to build Docker images for 64-bit Arm v8 and minimalistic base images)

linux-arm (Linux distributions running on Arm like Raspbian on Raspberry Pi Model 2+)

linux-arm64 (Linux distributions running on 64-bit Arm like Ubuntu Server 64-bit on Raspberry Pi Model 3+)

linux-bionic-arm64 (Distributions using Android's bionic libc, for example, Termux)
For more information, see .NET dependencies and requirements.

macOS RIDs
macOS RIDs use the older "OSX" branding.

osx-x64 (Minimum OS version is macOS 10.12 Sierra)


osx-arm64

For more information, see .NET dependencies and requirements.

iOS RIDs
ios-arm64

Android RIDs
android-arm64

See also
Runtime IDs

How resource manifest files are named
Article • 11/03/2021

When MSBuild compiles a .NET Core project, XML resource files, which have the .resx file
extension, are converted into binary .resources files. The binary files are embedded into
the output of the compiler and can be read by the ResourceManager. This article
describes how MSBuild chooses a name for each .resources file.

Tip

If you explicitly add a resource item to your project file, and it's also included with
the default include globs for .NET Core, you will get a build error. To manually
include resource files as EmbeddedResource items, set the
EnableDefaultEmbeddedResourceItems property to false.
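A minimal sketch of this opt-out, using a hypothetical resource file name, might look like the following in the project file:

XML

```xml
<PropertyGroup>
  <EnableDefaultEmbeddedResourceItems>false</EnableDefaultEmbeddedResourceItems>
</PropertyGroup>

<ItemGroup>
  <!-- Resource files must now be included explicitly. -->
  <EmbeddedResource Include="Resources\Strings.resx" />
</ItemGroup>
```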

Default name
In .NET Core 3.0 and later, the default name for a resource manifest is used when both of
the following conditions are met:

The resource file is not explicitly included in the project file as an EmbeddedResource
item with LogicalName , ManifestResourceName , or DependentUpon metadata.
The EmbeddedResourceUseDependentUponConvention property is not set to false in
the project file. By default, this property is set to true . For more information, see
EmbeddedResourceUseDependentUponConvention.

If the resource file is colocated with a source file (.cs or .vb) of the same root file name,
the full name of the first type that's defined in the source file is used for the manifest file
name. For example, if MyNamespace.Form1 is the first type defined in Form1.cs, and
Form1.cs is colocated with Form1.resx, the generated manifest name for that resource file
is MyNamespace.Form1.resources.

LogicalName metadata
If a resource file is explicitly included in the project file as an EmbeddedResource item with
LogicalName metadata, the LogicalName value is used as the manifest name. LogicalName
takes precedence over any other metadata or setting.
For example, the manifest name for the resource file that's defined in the following
project file snippet is SomeName.resources.

XML

<EmbeddedResource Include="X.resx" LogicalName="SomeName.resources" />

-or-

XML

<EmbeddedResource Include="X.fr-FR.resx" LogicalName="SomeName.resources" />

Note

If LogicalName is not specified, an EmbeddedResource with two dots ( . ) in the file name doesn't work, which means that GetManifestResourceNames doesn't return that file.

The following example works correctly:

XML

<EmbeddedResource Include="X.resx" />

The following example doesn't work:

XML

<EmbeddedResource Include="X.fr-FR.resx" />
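To check which manifest names were actually embedded in a compiled assembly, you can list them at run time. This is a minimal sketch:

C#

```csharp
using System;
using System.Reflection;

// Prints the manifest resource names embedded in the current assembly,
// for example "MyNamespace.Form1.resources". A resource whose name was
// dropped by the two-dot issue described in the note won't appear here.
foreach (string name in Assembly.GetExecutingAssembly().GetManifestResourceNames())
{
    Console.WriteLine(name);
}
```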

ManifestResourceName metadata
If a resource file is explicitly included in the project file as an EmbeddedResource item with ManifestResourceName metadata (and LogicalName is absent), the ManifestResourceName value, combined with the file extension .resources, is used as the manifest file name.

For example, the manifest name for the resource file that's defined in the following
project file snippet is SomeName.resources.

XML
<EmbeddedResource Include="X.resx" ManifestResourceName="SomeName" />

The manifest name for the resource file that's defined in the following project file
snippet is SomeName.fr-FR.resources.

XML

<EmbeddedResource Include="X.fr-FR.resx" ManifestResourceName="SomeName.fr-FR" />

DependentUpon metadata
If a resource file is explicitly included in the project file as an EmbeddedResource item with DependentUpon metadata (and LogicalName and ManifestResourceName are absent), information from the source file defined by DependentUpon is used for the resource manifest file name. Specifically, the name of the first type that's defined in the source file is used in the manifest name as follows: Namespace.Classname[.Culture].resources.

For example, the manifest name for the resource file that's defined in the following
project file snippet is Namespace.Classname.resources (where Namespace.Classname is the
first class that's defined in MyTypes.cs).

XML

<EmbeddedResource Include="X.resx" DependentUpon="MyTypes.cs">

The manifest name for the resource file that's defined in the following project file
snippet is Namespace.Classname.fr-FR.resources (where Namespace.Classname is the first
class that's defined in MyTypes.cs).

XML

<EmbeddedResource Include="X.fr-FR.resx" DependentUpon="MyTypes.cs">

EmbeddedResourceUseDependentUponConvention property
If EmbeddedResourceUseDependentUponConvention is set to false in the project file, each
resource manifest file name is based off the root namespace for the project and the
relative path from the project root to the .resx file. More specifically, the generated
resource manifest file name is RootNamespace.RelativePathWithDotsForSlashes.
[Culture.]resources. This is also the logic used to generate manifest names in .NET Core
versions prior to 3.0.

Note

If RootNamespace is not defined, it defaults to the project name.

If LogicalName , ManifestResourceName , or DependentUpon metadata is specified for an EmbeddedResource item in the project file, this naming rule does not apply to that resource file.
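As an illustration with hypothetical names, consider a project with the following properties and a resource file at Resources/Strings.fr-FR.resx:

XML

```xml
<PropertyGroup>
  <RootNamespace>MyCompany.MyApp</RootNamespace>
  <EmbeddedResourceUseDependentUponConvention>false</EmbeddedResourceUseDependentUponConvention>
</PropertyGroup>
```

Under this rule, the generated manifest name would be MyCompany.MyApp.Resources.Strings.fr-FR.resources.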

See also
How Manifest Resource Naming Works
MSBuild properties for .NET SDK projects
MSBuild breaking changes
Introduction to .NET and Docker
Article • 01/04/2024

Containers are one of the most popular ways for deploying and hosting cloud
applications, with tools like Docker , Kubernetes , and Podman . Many developers
choose containers because it's straightforward to package an app with its dependencies
and get that app to reliably run on any container host. There's extensive support for
using .NET with containers .

Docker provides a great overview of containers. Docker Desktop: Community Edition is a good tool for working with containers on a developer desktop machine.

.NET images
Official .NET container images are published to the Microsoft Artifact Registry and are
discoverable on the Docker Hub . There are runtime images for production and SDK
images for building your code, for Linux (Alpine, Debian, Ubuntu, Mariner) and
Windows. For more information, see .NET container images.

.NET images are regularly updated whenever a new .NET patch is published or when an
operating system base image is updated.

Chiseled container images are Ubuntu container images with a minimal set of
components required by the .NET runtime. These images are ~100 MB smaller than the
regular Ubuntu images and have fewer CVEs since they have fewer components. In
particular, they don't contain a shell or package manager, which significantly improves
their security profile. They also include a non-root user and are configured with that
user enabled.

Building container images


You can build a container image with a Dockerfile or rely on the .NET SDK to produce an image. For samples on building images, see dotnet/dotnet-docker and dotnet/sdk-container-builds .

The following example demonstrates building and running a container image in a few
quick steps (supported with .NET 8 and .NET 7.0.300).

Bash
$ dotnet new webapp -o webapp
$ cd webapp/
$ dotnet publish -t:PublishContainer
MSBuild version 17.8.3+195e7f5a3 for .NET
Determining projects to restore...
All projects are up-to-date for restore.
webapp -> /home/rich/webapp/bin/Release/net8.0/webapp.dll
webapp -> /home/rich/webapp/bin/Release/net8.0/publish/
Building image 'webapp' with tags 'latest' on top of base image
'mcr.microsoft.com/dotnet/aspnet:8.0'.
Pushed image 'webapp:latest' to local registry via 'docker'.
$ docker run --rm -d -p 8000:8080 webapp
7c7ad33409e52ddd3a9d330902acdd49845ca4575e39a6494952b642e584016e
$ curl -s http://localhost:8000 | grep ASP.NET
<p>Learn about <a
href="https://learn.microsoft.com/aspnet/core">building Web apps with
ASP.NET Core</a>.</p>
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES
7c7ad33409e5 webapp "dotnet webapp.dll" About a minute ago Up About
a minute 0.0.0.0:8000->8080/tcp, :::8000->8080/tcp jovial_shtern
$ docker kill 7c7ad33409e5

docker init is a new option for developers wanting to use Dockerfiles.

Ports
Port mapping is a key part of using containers. Ports must be published outside the
container in order to respond to external web requests. ASP.NET Core container images
changed in .NET 8 to listen on port 8080 , by default. .NET 6 and 7 listen on port 80 .

In the prior example with docker run , the host port 8000 is mapped to the container
port 8080 . Kubernetes works in a similar way.

The ASPNETCORE_HTTP_PORTS , ASPNETCORE_HTTPS_PORTS , and ASPNETCORE_URLS environment variables can be used to configure this behavior.
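As a sketch (the webapp image name and port values here are illustrative), the listening port can be overridden with an environment variable and mapped to a host port:

Console

```
docker run --rm -p 8000:5000 -e ASPNETCORE_HTTP_PORTS=5000 webapp
```

With this configuration, the app inside the container listens on port 5000 and is reachable from the host at http://localhost:8000.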

Users
Starting with .NET 8, all images include a non-root user called app . By default, chiseled images are configured with this user enabled. The publish app as .NET container feature (demonstrated in the Building container images section) also configures images with this user enabled by default. In all other scenarios, the app user can be set manually, for example with the USER Dockerfile instruction. If an image has been configured with app and commands need to run as root , then the USER instruction can be used to set the user to root .

Staying informed
Container-related news is posted to dotnet/dotnet-docker discussions and to the
.NET Blog "containers" category .

Azure services
Various Azure services support containers. You create a Docker image for your
application and deploy it to one of the following services:

Azure Kubernetes Service (AKS)


Scale and orchestrate Windows & Linux containers using Kubernetes.

Azure App Service


Deploy web apps or APIs using containers in a PaaS environment.

Azure Container Apps


Run your container workloads without managing servers, orchestration, or
infrastructure and leverage native support for Dapr and KEDA for observability
and scaling to zero.

Azure Container Instances


Create individual containers in the cloud without any higher-level management
services.

Azure Batch
Run repetitive compute jobs using containers.

Azure Service Fabric


Lift, shift, and modernize .NET applications to microservices using Windows &
Linux containers.

Azure Container Registry


Store and manage container images across all types of Azure deployments.

Next steps
Learn how to containerize a .NET Core application.
Learn how to containerize an ASP.NET Core application.
Try the Learn ASP.NET Core Microservice tutorial.
Learn about Container Tools in Visual Studio

Tutorial: Containerize a .NET app
Article • 03/21/2024

In this tutorial, you learn how to containerize a .NET application with Docker. Containers
have many features and benefits, such as being an immutable infrastructure, providing a
portable architecture, and enabling scalability. The image can be used to create
containers for your local development environment, private cloud, or public cloud.

In this tutorial, you:

" Create and publish a simple .NET app


" Create and configure a Dockerfile for .NET
" Build a Docker image
" Create and run a Docker container

You explore the Docker container build and deploy tasks for a .NET application. The
Docker platform uses the Docker engine to quickly build and package apps as Docker
images. These images are written in the Dockerfile format to be deployed and run in a
layered container.

Note

This tutorial is not for ASP.NET Core apps. If you're using ASP.NET Core, see the
Learn how to containerize an ASP.NET Core application tutorial.

Prerequisites
Install the following prerequisites:

.NET 8+ SDK .
If you have .NET installed, use the dotnet --info command to determine which
SDK you're using.
Docker Community Edition .
A temporary working folder for the Dockerfile and .NET example app. In this
tutorial, the name docker-working is used as the working folder.

Create .NET app


You need a .NET app that the Docker container runs. Open your terminal, create a
working folder if you haven't already, and enter it. In the working folder, run the
following command to create a new project in a subdirectory named App:

.NET CLI

dotnet new console -o App -n DotNet.Docker

Your folder tree looks similar to the following directory structure:

Directory

📁 docker-working
└──📂 App
├──DotNet.Docker.csproj
├──Program.cs
└──📂 obj
├── DotNet.Docker.csproj.nuget.dgspec.json
├── DotNet.Docker.csproj.nuget.g.props
├── DotNet.Docker.csproj.nuget.g.targets
├── project.assets.json
└── project.nuget.cache

The dotnet new command creates a new folder named App and generates a "Hello
World" console application. Now, you change directories and navigate into the App
folder from your terminal session. Use the dotnet run command to start the app. The
application runs, and prints Hello World! below the command:

.NET CLI

cd App
dotnet run
Hello World!

The default template creates an app that prints to the terminal and then immediately
terminates. For this tutorial, you use an app that loops indefinitely. Open the Program.cs
file in a text editor.

Tip

If you're using Visual Studio Code, from the previous terminal session type the
following command:

Console

code .
This will open the App folder that contains the project in Visual Studio Code.

The Program.cs should look like the following C# code:

C#

Console.WriteLine("Hello World!");

Replace the file with the following code that counts numbers every second:

C#

var counter = 0;
var max = args.Length is not 0 ? Convert.ToInt32(args[0]) : -1;
while (max is -1 || counter < max)
{
Console.WriteLine($"Counter: {++counter}");
await Task.Delay(TimeSpan.FromMilliseconds(1_000));
}

Save the file and test the program again with dotnet run . Remember that this app runs
indefinitely. Use the cancel command Ctrl+C to stop it. Consider the following example
output:

.NET CLI

dotnet run
Counter: 1
Counter: 2
Counter: 3
Counter: 4
^C

If you pass a number on the command line to the app, it will only count up to that
amount and then exit. Try it with dotnet run -- 5 to count to five.

Important

Any parameters after -- are not passed to the dotnet run command and instead
are passed to your application.

Publish .NET app


Before adding the .NET app to the Docker image, it must first be published. It's best to have the container run the published version of the app. To publish the app, run the following command:

.NET CLI

dotnet publish -c Release

This command compiles your app to the publish folder. The path to the publish folder
from the working folder should be .\App\bin\Release\net8.0\publish\ .

Windows

From the App folder, get a directory listing of the publish folder to verify that the
DotNet.Docker.dll file was created.

PowerShell

dir .\bin\Release\net8.0\publish\

Directory: C:\Users\default\App\bin\Release\net8.0\publish

Mode LastWriteTime Length Name


---- ------------- ------ ----
-a--- 9/22/2023 9:17 AM 431
DotNet.Docker.deps.json
-a--- 9/22/2023 9:17 AM 6144 DotNet.Docker.dll
-a--- 9/22/2023 9:17 AM 157696 DotNet.Docker.exe
-a--- 9/22/2023 9:17 AM 11688 DotNet.Docker.pdb
-a--- 9/22/2023 9:17 AM 353
DotNet.Docker.runtimeconfig.json

Create the Dockerfile


The Dockerfile file is used by the docker build command to create a container image.
This file is a text file named Dockerfile that doesn't have an extension.

Create a file named Dockerfile in the directory containing the .csproj and open it in a text
editor. This tutorial uses the ASP.NET Core runtime image (which contains the .NET
runtime image) and corresponds with the .NET console application.

docker
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build-env
WORKDIR /App

# Copy everything
COPY . ./
# Restore as distinct layers
RUN dotnet restore
# Build and publish a release
RUN dotnet publish -c Release -o out

# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /App
COPY --from=build-env /App/out .
ENTRYPOINT ["dotnet", "DotNet.Docker.dll"]

Note

The ASP.NET Core runtime image is used intentionally here, although the
mcr.microsoft.com/dotnet/runtime:8.0 image could have been used.

Tip

This Dockerfile uses multi-stage builds, which optimizes the final size of the image
by layering the build and leaving only required artifacts. For more information, see
Docker Docs: multi-stage builds .

The FROM keyword requires a fully qualified Docker container image name. The
Microsoft Container Registry (MCR, mcr.microsoft.com) is a syndicate of Docker Hub,
which hosts publicly accessible containers. The dotnet segment is the container
repository, whereas the sdk or aspnet segment is the container image name. The image
is tagged with 8.0 , which is used for versioning. Thus,
mcr.microsoft.com/dotnet/aspnet:8.0 is the .NET 8.0 runtime. Make sure that you pull
the runtime version that matches the runtime targeted by your SDK. For example, the
app created in the previous section used the .NET 8.0 SDK, and the base image referred
to in the Dockerfile is tagged with 8.0.

Important

When using Windows-based container images, you need to specify the image tag beyond simply 8.0 , for example, mcr.microsoft.com/dotnet/aspnet:8.0-nanoserver-1809 instead of mcr.microsoft.com/dotnet/aspnet:8.0 . Select an image name based on whether you're using Nano Server or Windows Server Core and which version of that OS. You can find a full list of all supported tags on .NET's Docker Hub page .

Save the Dockerfile file. The directory structure of the working folder should look like the
following. Some of the deeper-level files and folders have been omitted to save space in
the article:

Directory

📁 docker-working
└──📂 App
├── Dockerfile
├── DotNet.Docker.csproj
├── Program.cs
├──📂 bin
│ └──📂 Release
│ └──📂 net8.0
│ └──📂 publish
│ ├── DotNet.Docker.deps.json
│ ├── DotNet.Docker.exe
│ ├── DotNet.Docker.dll
│ ├── DotNet.Docker.pdb
│ └── DotNet.Docker.runtimeconfig.json
└──📁 obj
└──...

The ENTRYPOINT instruction sets dotnet as the host for the DotNet.Docker.dll . However,
it's possible to instead define the ENTRYPOINT as the app executable itself, relying on the
OS as the app host:

Dockerfile

ENTRYPOINT ["./DotNet.Docker"]

This causes the app to be executed directly, without dotnet , and instead relies on the
app host and the underlying OS. For more information on deploying cross-platform
binaries, see Produce a cross-platform binary.
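The following Dockerfile is a minimal sketch of that variant, under the assumption that you publish self-contained for a specific runtime identifier (here linux-x64) so the native app host is produced, and that the runtime-deps base image is sufficient because the app carries its own runtime:

```dockerfile
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build-env
WORKDIR /App

COPY . ./
RUN dotnet restore
# Publish self-contained for a specific RID so the native app host is produced
RUN dotnet publish -c Release -r linux-x64 --self-contained true -o out

# runtime-deps contains only native dependencies; the app carries its own runtime
FROM mcr.microsoft.com/dotnet/runtime-deps:8.0
WORKDIR /App
COPY --from=build-env /App/out .
ENTRYPOINT ["./DotNet.Docker"]
```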

To build the container, from your terminal, run the following command:

Console

docker build -t counter-image -f Dockerfile .


Docker will process each line in the Dockerfile. The . in the docker build command sets
the build context of the image. The -f switch is the path to the Dockerfile. This
command builds the image and creates a local repository named counter-image that
points to that image. After this command finishes, run docker images to see a list of
images installed:

Console

docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
counter-image latest 2f15637dc1f6 10 minutes ago 217MB

The counter-image repository is the name of the image, and latest is the tag that identifies it. 2f15637dc1f6 is the image ID, 10 minutes ago is when the image was created, and 217MB is its size. The final stage of the Dockerfile builds the runtime image: it pulls the base image, copies the published app into the container, and defines the entry point.

Dockerfile

FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /App
COPY --from=build-env /App/out .
ENTRYPOINT ["dotnet", "DotNet.Docker.dll"]

The FROM command specifies the base image and tag to use. The WORKDIR command
changes the current directory inside of the container to App.

The COPY command tells Docker to copy the specified source directory to a destination
folder. In this example, the publish contents in the build-env layer were output into the
folder named App/out, so that's the source to copy from. All of the published contents in
the App/out directory are copied into the current working directory (App).

The next command, ENTRYPOINT , tells Docker to configure the container to run as an
executable. When the container starts, the ENTRYPOINT command runs. When this
command ends, the container will automatically stop.

Tip

Before .NET 8, containers configured to run as read-only may fail with Failed to
create CoreCLR, HRESULT: 0x8007000E . To address this issue, specify a
DOTNET_EnableDiagnostics environment variable as 0 (just before the ENTRYPOINT step):

Dockerfile

ENV DOTNET_EnableDiagnostics=0

For more information on various .NET environment variables, see .NET environment variables.

Note

.NET 6 standardizes on the prefix DOTNET_ instead of COMPlus_ for environment
variables that configure .NET run-time behavior. However, the COMPlus_ prefix
continues to work. If you're using a previous version of the .NET runtime, you should
still use the COMPlus_ prefix for environment variables.
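As a sketch, the same setting can be expressed with either prefix in a Dockerfile, depending on the runtime version in the base image:

```dockerfile
# .NET 6 and later read the DOTNET_ prefix:
ENV DOTNET_EnableDiagnostics=0

# On earlier runtimes, use the legacy prefix instead:
# ENV COMPlus_EnableDiagnostics=0
```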

Create a container
Now that you have an image that contains your app, you can create a container. You can
create a container in two ways. First, create a new container that is stopped.

Console

docker create --name core-counter counter-image

This docker create command creates a container based on the counter-image image.
The output of that command shows you the CONTAINER ID (yours will be different) of
the created container:

Console

d0be06126f7db6dd1cee369d911262a353c9b7fb4829a0c11b4b2eb7b2d429cf

To see a list of all containers, use the docker ps -a command:

Console

docker ps -a
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
d0be06126f7d counter-image "dotnet DotNet.Docke…" 12 seconds ago
Created core-counter

Manage the container


The container was created with a specific name core-counter . This name is used to
manage the container. The following example uses the docker start command to start
the container, and then uses the docker ps command to only show containers that are
running:

Console

docker start core-counter


core-counter

docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
cf01364df453 counter-image "dotnet DotNet.Docke…" 53 seconds ago Up
10 seconds core-counter

Similarly, the docker stop command stops the container. The following example uses
the docker stop command to stop the container, and then uses the docker ps
command to show that no containers are running:

Console

docker stop core-counter


core-counter

docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Connect to a container
After a container is running, you can connect to it to see the output. Use the docker
start and docker attach commands to start the container and peek at the output
stream. In this example, the Ctrl+C keystroke is used to detach from the running
container. By default, this keystroke would also end the process in the container,
which would stop the container. The --sig-proxy=false parameter ensures that Ctrl+C
doesn't stop the process in the container.


After you detach from the container, reattach to verify that it's still running and
counting.

Console

docker start core-counter


core-counter

docker attach --sig-proxy=false core-counter


Counter: 7
Counter: 8
Counter: 9
^C

docker attach --sig-proxy=false core-counter


Counter: 17
Counter: 18
Counter: 19
^C

Delete a container
For this article, you don't want containers hanging around that don't do anything.
Delete the container you previously created. If the container is running, stop it.

Console

docker stop core-counter

The following example lists all containers. It then uses the docker rm command to
delete the container and then checks a second time for any running containers.

Console

docker ps -a
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
2f6424a7ddce counter-image "dotnet DotNet.Dock…" 7 minutes ago
Exited (143) 20 seconds ago core-counter

docker rm core-counter
core-counter

docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Single run
Docker provides the docker run command to create and run the container as a single
command. This command eliminates the need to run docker create and then docker
start . You can also set this command to automatically delete the container when the

container stops. For example, use docker run -it --rm to do two things, first,
automatically use the current terminal to connect to the container, and then when the
container finishes, remove it:

Console

docker run -it --rm counter-image


Counter: 1
Counter: 2
Counter: 3
Counter: 4
Counter: 5
^C

You can also pass parameters into the execution of the .NET app. To instruct the
.NET app to count only to three, pass in 3.

Console

docker run -it --rm counter-image 3


Counter: 1
Counter: 2
Counter: 3

With docker run -it , the Ctrl+C command stops the process that's running in the
container, which in turn, stops the container. Since the --rm parameter was provided,
the container is automatically deleted when the process is stopped. Verify that it doesn't
exist:

Console

docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Change the ENTRYPOINT


The docker run command also lets you modify the ENTRYPOINT command from the
Dockerfile and run something else, but only for that container. For example, use the
following command to run bash or cmd.exe . Edit the command as necessary.

Windows

In this example, ENTRYPOINT is changed to cmd.exe . Ctrl+C is pressed to end the
process and stop the container.

Console

docker run -it --rm --entrypoint "cmd.exe" counter-image

Microsoft Windows [Version 10.0.17763.379]


(c) 2018 Microsoft Corporation. All rights reserved.

C:\>dir
Volume in drive C has no label.
Volume Serial Number is 3005-1E84

Directory of C:\

04/09/2019 08:46 AM <DIR> app


03/07/2019 10:25 AM 5,510 License.txt
04/02/2019 01:35 PM <DIR> Program Files
04/09/2019 01:06 PM <DIR> Users
04/02/2019 01:35 PM <DIR> Windows
1 File(s) 5,510 bytes
4 Dir(s) 21,246,517,248 bytes free

C:\>^C

Essential commands
Docker has many different commands that create, manage, and interact with containers
and images. These Docker commands are essential to managing your containers:

docker build
docker run
docker ps
docker stop
docker rm
docker rmi
docker image

Clean up resources
During this tutorial, you created containers and images. If you want, delete these
resources using the following commands:

1. List all containers

Console

docker ps -a

2. Stop running containers by name.

Console

docker stop core-counter

3. Delete the container

Console

docker rm core-counter

Next, delete any images that you no longer want on your machine. Delete the image
created by your Dockerfile and then delete the .NET image the Dockerfile was based on.
You can use the IMAGE ID or the REPOSITORY:TAG formatted string.

Console

docker rmi counter-image:latest


docker rmi mcr.microsoft.com/dotnet/aspnet:8.0

Use the docker images command to see a list of images installed.

Tip

Image files can be large. Typically, you would remove temporary containers you
created while testing and developing your app. You usually keep the base images
with the runtime installed if you plan on building other images based on that
runtime.

Next steps
Containerize a .NET app with dotnet publish
.NET container images
Containerize an ASP.NET Core application
Azure services that support containers
Dockerfile commands
Container Tools for Visual Studio

Containerize a .NET app with dotnet
publish
Article • 08/13/2024

Containers have many features and benefits, such as being an immutable infrastructure,
providing a portable architecture, and enabling scalability. The image can be used to
create containers for your local development environment, private cloud, or public
cloud. In this tutorial, you learn how to containerize a .NET application using the dotnet
publish command without the use of a Dockerfile. Additionally, you explore how to
configure the container image and execution, and how to clean up resources.

Prerequisites
Install the following prerequisites:

.NET 8+ SDK
If you have .NET installed, use the dotnet --info command to determine which
SDK you're using.
Docker Community Edition

In addition to these prerequisites, it's recommended that you're familiar with Worker
Services in .NET.

Create .NET app


You need a .NET app to containerize, so start by creating a new app from a template.
Open your terminal, create a working folder (sample-directory) if you haven't already,
and change directories so that you're in it. In the working folder, run the following
command to create a new project in a subdirectory named Worker:

.NET CLI

dotnet new worker -o Worker -n DotNet.ContainerImage

Your folder tree looks like the following:

Directory

📁 sample-directory
└──📂 Worker
├──appsettings.Development.json
├──appsettings.json
├──DotNet.ContainerImage.csproj
├──Program.cs
├──Worker.cs
└──📂 obj
├── DotNet.ContainerImage.csproj.nuget.dgspec.json
├── DotNet.ContainerImage.csproj.nuget.g.props
├── DotNet.ContainerImage.csproj.nuget.g.targets
├── project.assets.json
└── project.nuget.cache

The dotnet new command creates a new folder named Worker and generates a worker
service that, when run, logs a message every second. From your terminal session,
change directories and navigate into the Worker folder. Use the dotnet run command
to start the app.

.NET CLI

dotnet run
Building...
info: DotNet.ContainerImage.Worker[0]
Worker running at: 10/18/2022 08:56:00 -05:00
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
Content root path: .\Worker
info: DotNet.ContainerImage.Worker[0]
Worker running at: 10/18/2022 08:56:01 -05:00
info: DotNet.ContainerImage.Worker[0]
Worker running at: 10/18/2022 08:56:02 -05:00
info: DotNet.ContainerImage.Worker[0]
Worker running at: 10/18/2022 08:56:03 -05:00
info: Microsoft.Hosting.Lifetime[0]
Application is shutting down...
Attempting to cancel the build...

The worker template loops indefinitely. Use the cancel command Ctrl+C to stop it.

Add NuGet package


Starting with .NET SDK version 8.0.200, the PublishContainer target is available for every
project. To avoid depending on the Microsoft.NET.Build.Containers NuGet package,
ensure that you're using the latest .NET SDK version. Additionally, your project file needs
to have IsPublishable set to true and enable SDK container support.
Important

By default, the IsPublishable property is set to true for console , webapp , and
worker templates.

To enable SDK container support, set the EnableSdkContainerSupport property to true
in your project file.

XML

<PropertyGroup>
<IsPublishable>true</IsPublishable>
<EnableSdkContainerSupport>true</EnableSdkContainerSupport>
</PropertyGroup>

Set the container image name


There are various configuration options available when publishing an app as a container.

By default, the container image name is the AssemblyName of the project. If that name is
invalid as a container image name, you can override it by specifying a
ContainerRepository as shown in the following project file:

XML

<Project Sdk="Microsoft.NET.Sdk.Worker">

<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<Nullable>enable</Nullable>
<ImplicitUsings>enable</ImplicitUsings>
<UserSecretsId>dotnet-DotNet.ContainerImage-2e40c179-a00b-4cc9-9785-
54266210b7eb</UserSecretsId>
<ContainerRepository>dotnet-worker-image</ContainerRepository>
</PropertyGroup>

<ItemGroup>
<PackageReference Include="Microsoft.Extensions.Hosting" Version="8.0.0"
/>
</ItemGroup>
</Project>

For more information, see ContainerRepository.


Publish .NET app
To publish the .NET app as a container, use the following dotnet publish command:

.NET CLI

dotnet publish --os linux --arch x64 /t:PublishContainer

The preceding .NET CLI command publishes the app as a container:

Targeting Linux as the OS ( --os linux ).
Specifying an x64 architecture ( --arch x64 ).

Important

To build the container locally, you must have the Docker daemon running. If it isn't
running when you attempt to publish the app as a container, you'll experience an
error similar to the following:

Console

..\build\Microsoft.NET.Build.Containers.targets(66,9): error MSB4018:


The "CreateNewImage" task failed unexpectedly.
[..\Worker\DotNet.ContainerImage.csproj]

The command produces output similar to the example output:

.NET CLI

Determining projects to restore...


All projects are up-to-date for restore.
DotNet.ContainerImage -> .\Worker\bin\Release\net8.0\linux-
x64\DotNet.ContainerImage.dll
DotNet.ContainerImage -> .\Worker\bin\Release\net8.0\linux-x64\publish\
Building image 'dotnet-worker-image' with tags latest on top of base image
mcr.microsoft.com/dotnet/aspnet:8.0
Pushed container 'dotnet-worker-image:latest' to Docker daemon

This command compiles your worker app to the publish folder and pushes the container
to your local docker registry.

Configure container image


You can control many aspects of the generated container through MSBuild properties. In
general, if you can use a command in a Dockerfile to set some configuration, you can do
the same via MSBuild.

Note

The only exceptions to this are RUN commands. Due to the way containers are built,
those cannot be emulated. If you need this functionality, you'll need to use a
Dockerfile to build your container images.

ContainerArchiveOutputPath

Starting in .NET 8, you can create a container directly as a tar.gz archive. This feature is
useful if your workflow isn't straightforward and requires that you, for example, run a
scanning tool over your images before pushing them. Once the archive is created, you
can move it, scan it, or load it into a local Docker toolchain.

To publish to an archive, add the ContainerArchiveOutputPath property to your dotnet


publish command, for example:

.NET CLI

dotnet publish \
-p PublishProfile=DefaultContainer \
-p ContainerArchiveOutputPath=./images/sdk-container-demo.tar.gz

You can specify either a folder name or a path with a specific file name. If you specify the
folder name, the generated image archive file is named $(ContainerRepository).tar.gz .
These archives can contain multiple tags inside them; only a single file is created for all
ContainerImageTags .

Container image naming configuration


Container images follow a specific naming convention. The name of the image is
composed of several parts, the registry, optional port, repository, and optional tag and
family.

Dockerfile

REGISTRY[:PORT]/REPOSITORY[:TAG[-FAMILY]]
For example, consider the fully qualified mcr.microsoft.com/dotnet/runtime:8.0-alpine
image name:

mcr.microsoft.com is the registry (and in this case represents the Microsoft

container registry).
dotnet/runtime is the repository (but some consider this the user/repository ).

8.0-alpine is the tag and family (the family is an optional specifier that helps

disambiguate OS packaging).
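As an illustration of this convention, the following shell sketch (a hypothetical helper, not part of the SDK tooling) splits a fully qualified image name into its parts using parameter expansion:

```shell
# Split a fully qualified image name into registry, repository, and tag.
ref="mcr.microsoft.com/dotnet/runtime:8.0-alpine"

registry="${ref%%/*}"    # everything before the first slash
rest="${ref#*/}"         # dotnet/runtime:8.0-alpine
repository="${rest%:*}"  # everything before the last colon
tag="${rest##*:}"        # everything after the last colon

echo "$registry | $repository | $tag"
```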

Some properties described in the following sections correspond to managing parts of
the generated image name. Consider the following table that maps the relationship
between the image name and the build properties:


Image name part MSBuild property Example values

REGISTRY[:PORT] ContainerRegistry mcr.microsoft.com:443

PORT ContainerPort :443

REPOSITORY ContainerRepository dotnet/runtime

TAG ContainerImageTag 8.0

FAMILY ContainerFamily -alpine

The following sections describe the various properties that can be used to control the
generated container image.

ContainerBaseImage

The container base image property controls the image used as the basis for your image.
By default, the following values are inferred based on the properties of your project:

If your project is self-contained, the mcr.microsoft.com/dotnet/runtime-deps image


is used as the base image.
If your project is an ASP.NET Core project, the mcr.microsoft.com/dotnet/aspnet
image is used as the base image.
Otherwise the mcr.microsoft.com/dotnet/runtime image is used as the base image.

The tag of the image is inferred to be the numeric component of your chosen
TargetFramework . For example, a project targeting net6.0 results in the 6.0 tag of the

inferred base image, and a net7.0-linux project uses the 7.0 tag, and so on.
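The inference described above can be sketched in shell (an assumption-level illustration, not the SDK's actual implementation): strip the net prefix from the TargetFramework value and drop any platform suffix to get the tag:

```shell
# Derive the base-image tag from a TargetFramework value.
tfm="net7.0-linux"

tag="${tfm#net}"   # 7.0-linux
tag="${tag%%-*}"   # 7.0

echo "$tag"
```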
If you set a value here, you should set the fully qualified name of the image to use as
the base, including any tag you prefer:

XML

<PropertyGroup>

<ContainerBaseImage>mcr.microsoft.com/dotnet/runtime:8.0</ContainerBaseImage
>
</PropertyGroup>

Starting with .NET SDK version 8.0.200, the ContainerBaseImage inference has been
improved to optimize size and security:

Targeting the linux-musl-x64 or linux-musl-arm64 runtime identifiers
automatically chooses the alpine image variants to ensure your project runs:
If the project uses PublishAot=true , the nightly/runtime-deps jammy-chiseled-aot
variant of the base image is used for the best size and security.
If the project uses InvariantGlobalization=false , the -extra variants are
used to ensure localization still works.

For more information regarding the image variants sizes and characteristics, see .NET 8.0
Container Image Size Report .

ContainerFamily

Starting with .NET 8, you can use the ContainerFamily MSBuild property to choose a
different family of Microsoft-provided container images as the base image for your app.
When set, this value is appended to the end of the selected TFM-specific tag, changing
the tag provided. For example, to use the Alpine Linux variants of the .NET base images,
you can set ContainerFamily to alpine :

XML

<PropertyGroup>
<ContainerFamily>alpine</ContainerFamily>
</PropertyGroup>

The preceding project configuration results in a final tag of 8.0-alpine for a .NET 8-
targeting app.

This field is free-form, and often can be used to select different operating system
distributions, default package configurations, or any other flavor of changes to a base
image. This field is ignored when ContainerBaseImage is set. For more information, see
.NET container images.

ContainerRuntimeIdentifier

The container runtime identifier property controls the operating system and architecture
used by your container if your ContainerBaseImage supports more than one platform.
For example, the mcr.microsoft.com/dotnet/runtime image currently supports linux-
x64 , linux-arm , linux-arm64 and win10-x64 images all behind the same tag, so the

tooling needs a way to be told which of these versions you intend to use. By default, this
is set to the value of the RuntimeIdentifier that you chose when you published the
container. This property rarely needs to be set explicitly; instead, use the -r option of
the dotnet publish command. If the image you've chosen doesn't support the
RuntimeIdentifier you've chosen, an error is reported that describes the
runtime identifiers the image does support.

You can always set the ContainerBaseImage property to a fully qualified image name,
including the tag, to avoid needing to use this property at all.

XML

<PropertyGroup>
<ContainerRuntimeIdentifier>linux-arm64</ContainerRuntimeIdentifier>
</PropertyGroup>

For more information regarding the runtime identifiers supported by .NET, see RID
catalog.

ContainerRegistry

The container registry property controls the destination registry, the place that the
newly created image will be pushed to. By default it's pushed to the local Docker
daemon, but you can also specify a remote registry. When using a remote registry that
requires authentication, you authenticate using the well-known docker login
mechanisms. For more information, See authenticating to container registries for
more details. For a concrete example of using this property, consider the following XML
example:

XML

<PropertyGroup>
<ContainerRegistry>registry.mycorp.com:1234</ContainerRegistry>
</PropertyGroup>

This tooling supports publishing to any registry that supports the Docker Registry HTTP
API V2 . This includes the following registries explicitly (and likely many more
implicitly):

Azure Container Registry


Amazon Elastic Container Registry
Google Artifact Registry
Docker Hub
GitHub Packages
GitLab-hosted Container Registry
Quay.io

For notes on working with these registries, see the registry-specific notes .

ContainerRepository

The container repository is the name of the image itself, for example, dotnet/runtime or
my-app . By default, the AssemblyName of the project is used.

XML

<PropertyGroup>
<ContainerRepository>my-app</ContainerRepository>
</PropertyGroup>

Image names consist of one or more slash-delimited segments, each of which can only
contain lowercase alphanumeric characters, periods, underscores, and dashes, and must
start with a letter or number. Any other characters result in an error being thrown.
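These rules can be mirrored with a small shell check (a hypothetical validator for illustration; the SDK performs its own validation):

```shell
# Return success when a repository name matches the rules above:
# slash-delimited segments of lowercase alphanumerics, periods,
# underscores, and dashes, each starting with a letter or number.
valid_repository() {
  echo "$1" | grep -Eq '^[a-z0-9][a-z0-9._-]*(/[a-z0-9][a-z0-9._-]*)*$'
}

valid_repository "dotnet/runtime" && first=ok || first=bad
valid_repository "My-App" && second=ok || second=bad   # uppercase is rejected

echo "$first $second"
```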

ContainerImageTag(s)

The container image tag property controls the tags that are generated for the image. To
specify a single tag use ContainerImageTag and for multiple tags use
ContainerImageTags .

Important

When you use ContainerImageTags , you'll end up with multiple images, one per
unique tag.
Tags are often used to refer to different versions of an app, but they can also refer to
different operating system distributions, or even different configurations.

Starting with .NET 8, when a tag isn't provided the default is latest .

To override the default, specify either of the following:

XML

<PropertyGroup>
<ContainerImageTag>1.2.3-alpha2</ContainerImageTag>
</PropertyGroup>

To specify multiple tags, use a semicolon-delimited set of tags in the


ContainerImageTags property, similar to setting multiple TargetFrameworks :

XML

<PropertyGroup>
<ContainerImageTags>1.2.3-alpha2;latest</ContainerImageTags>
</PropertyGroup>

Tags can only contain up to 127 alphanumeric characters, periods, underscores, and
dashes. They must start with an alphanumeric character or an underscore. Any other
form results in an error being thrown.
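The tag rules can likewise be sketched as a shell check (hypothetical, for illustration only):

```shell
# A tag is valid when it starts with an alphanumeric character or an
# underscore and contains at most 127 characters in total.
valid_tag() {
  echo "$1" | grep -Eq '^[A-Za-z0-9_][A-Za-z0-9._-]{0,126}$'
}

valid_tag "1.2.3-alpha2" && first=ok || first=bad
valid_tag "-latest" && second=ok || second=bad   # cannot start with a dash

echo "$first $second"
```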

Note

When using ContainerImageTags , the tags are delimited by a ; character. If you're
calling dotnet publish from the command line (as is the case with most CI/CD
environments), you need to wrap the value in an outer pair of single quotes and an
inner pair of double quotes, for example ='"tag-1;tag-2"' . Consider the following
dotnet publish command:

.NET CLI

dotnet publish -p ContainerImageTags='"1.2.3-alpha2;latest"'

This results in two images being generated: my-app:1.2.3-alpha2 and my-app:latest .

Tip
If you experience issues with the ContainerImageTags property, consider scoping an
environment variable ContainerImageTags instead:

.NET CLI

ContainerImageTags='1.2.3;latest' dotnet publish

ContainerLabel

The container label adds a metadata label to the container. Labels have no impact on
the container at run time, but are often used to store version and authoring metadata
for use by security scanners and other infrastructure tools. You can specify any number
of container labels.

The ContainerLabel node has two attributes:

Include : The key of the label.

Value : The value of the label (this may be empty).

XML

<ItemGroup>
<ContainerLabel Include="org.contoso.businessunit" Value="contoso-
university" />
</ItemGroup>

For a list of labels that are created by default, see default container labels.

Configure container execution


To control the execution of the container, you can use the following MSBuild properties.

ContainerWorkingDirectory

The container working directory node controls the working directory of the container,
that is, the directory commands are executed in if no other command is specified.

By default, the /app directory value is used as the working directory.

XML
<PropertyGroup>
<ContainerWorkingDirectory>/bin</ContainerWorkingDirectory>
</PropertyGroup>

ContainerPort

The container port adds TCP or UDP ports to the list of known ports for the container.
This enables container runtimes like Docker to map these ports to the host machine
automatically. This is often used as documentation for the container, but can also be
used to enable automatic port mapping.

The ContainerPort node has two attributes:

Include : The port number to expose.


Type : Defaults to tcp , valid values are either tcp or udp .

XML

<ItemGroup>
<ContainerPort Include="80" Type="tcp" />
</ItemGroup>

Starting with .NET 8, the ContainerPort is inferred when not explicitly provided based on
several well-known ASP.NET environment variables:

ASPNETCORE_URLS

ASPNETCORE_HTTP_PORTS
ASPNETCORE_HTTPS_PORTS

If these environment variables are present, their values are parsed and converted to TCP
port mappings. These environment variables are read from your base image, if present,
or from the environment variables defined in your project through
ContainerEnvironmentVariable items. For more information, see

ContainerEnvironmentVariable.
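The inference can be sketched as follows (a simplified assumption; the SDK's actual parsing handles more URL shapes):

```shell
# Extract the port numbers from a semicolon-delimited ASPNETCORE_URLS value.
ASPNETCORE_URLS="http://+:8080;https://+:8443"

ports=$(echo "$ASPNETCORE_URLS" | tr ';' '\n' | sed -E 's#.*:([0-9]+)$#\1#' | xargs)

echo "$ports"
```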

ContainerEnvironmentVariable

The container environment variable node allows you to add environment variables to
the container. Environment variables are accessible to the app running in the container
immediately, and are often used to change the run-time behavior of the running app.

The ContainerEnvironmentVariable node has two attributes:


Include : The name of the environment variable.
Value : The value of the environment variable.

XML

<ItemGroup>
<ContainerEnvironmentVariable Include="LOGGER_VERBOSITY" Value="Trace" />
</ItemGroup>

For more information, see .NET environment variables.

Configure container commands


By default, the container tools launch your app using either the generated AppHost
binary for your app (if your app uses an AppHost), or the dotnet command plus your
app's DLL.

However, you can control how your app is executed by using some combination of
ContainerAppCommand , ContainerAppCommandArgs , ContainerDefaultArgs , and

ContainerAppCommandInstruction .

These different configuration points exist because different base images use different
combinations of the container ENTRYPOINT and COMMAND properties, and you want to be
able to support all of them. The defaults should be useable for most apps, but if you
want to customize your app launch behavior you should:

Identify the binary to run and set it as ContainerAppCommand


Identify which arguments are required for your application to run and set them as
ContainerAppCommandArgs

Identify which arguments (if any) are optional and could be overridden by a user
and set them as ContainerDefaultArgs
Set ContainerAppCommandInstruction to DefaultArgs
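The steps above can be combined into a single project-file sketch; the tool name and argument values here are hypothetical placeholders, not values the SDK requires:

```xml
<PropertyGroup>
  <ContainerAppCommandInstruction>DefaultArgs</ContainerAppCommandInstruction>
</PropertyGroup>

<ItemGroup>
  <!-- The binary to run (hypothetical placeholder) -->
  <ContainerAppCommand Include="./my-tool" />
  <!-- Arguments the app always needs -->
  <ContainerAppCommandArgs Include="--config;/app/settings.json" />
  <!-- Defaults a user can override at container run time -->
  <ContainerDefaultArgs Include="--verbosity;info" />
</ItemGroup>
```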

For more information, see the following configuration items.

ContainerAppCommand

The app command configuration item is the logical entry point of your app. For most
apps, this is the AppHost, the generated executable binary for your app. If your app
doesn't generate an AppHost, then this command will typically be dotnet <your project
dll> . These values are applied after any ENTRYPOINT in your base container, or directly if

no ENTRYPOINT is defined.

The ContainerAppCommand configuration has a single Include property, which represents


the command, option, or argument to use in the entrypoint command:

XML

<ItemGroup Label="ContainerAppCommand Assignment">


<!-- This is how you would start the dotnet ef tool in your container -->
<ContainerAppCommand Include="dotnet" />
<ContainerAppCommand Include="ef" />

<!-- This shorthand syntax means the same thing, note the semicolon
separating the tokens. -->
<ContainerAppCommand Include="dotnet;ef" />
</ItemGroup>

ContainerAppCommandArgs

This app command args configuration item represents any logically required arguments
for your app that should be applied to the ContainerAppCommand . By default, none are
generated for an app. When present, the args are applied to your container when it's
run.

The ContainerAppCommandArgs configuration has a single Include property, which


represents the option or argument to apply to the ContainerAppCommand command.

XML

<ItemGroup>
<!-- Assuming the ContainerAppCommand defined above,
this would be the way to force the database to update.
-->
<ContainerAppCommandArgs Include="database" />
<ContainerAppCommandArgs Include="update" />

<!-- This is the shorthand syntax for the same idea -->
<ContainerAppCommandArgs Include="database;update" />
</ItemGroup>

ContainerDefaultArgs

This default args configuration item represents any user-overridable arguments for your
app. This is a good way to provide defaults that your app might need to run in a way
that makes it easy to start, yet still easy to customize.

The ContainerDefaultArgs configuration has a single Include property, which


represents the option or argument to apply to the ContainerAppCommand command.

XML

<ItemGroup>
<!-- Assuming the ContainerAppCommand defined above,
this would be the way to force the database to update.
-->
<ContainerDefaultArgs Include="database" />
<ContainerDefaultArgs Include="update" />

<!-- This is the shorthand syntax for the same idea -->
<ContainerDefaultArgs Include="database;update" />
</ItemGroup>

ContainerAppCommandInstruction

The app command instruction configuration helps control the way the
ContainerEntrypoint , ContainerEntrypointArgs , ContainerAppCommand ,
ContainerAppCommandArgs , and ContainerDefaultArgs are combined to form the final
command that is run in the container. This depends greatly on whether an ENTRYPOINT is
present in the base image. This property takes one of three values: "DefaultArgs" ,
"Entrypoint" , or "None" .

Entrypoint :
In this mode, the entrypoint is defined by ContainerAppCommand ,
ContainerAppCommandArgs , and ContainerDefaultArgs .

None :
In this mode, the entrypoint is defined by ContainerEntrypoint ,
ContainerEntrypointArgs , and ContainerDefaultArgs .

DefaultArgs :
This is the most complex mode—if none of the ContainerEntrypoint[Args]
items are present, the ContainerAppCommand[Args] and ContainerDefaultArgs
are used to create the entrypoint and command. The base image entrypoint for
base images that have it hard-coded to dotnet or /usr/bin/dotnet is skipped
so that you have complete control.
If both ContainerEntrypoint and ContainerAppCommand are present, then
ContainerEntrypoint becomes the entrypoint, and ContainerAppCommand
becomes the command.
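
As a minimal sketch, this property is set in a PropertyGroup in your project file. The DefaultArgs value here is purely illustrative; pick the mode that matches your base image:

XML

<PropertyGroup>
  <!-- Illustrative: explicitly choose how the entrypoint and command are combined. -->
  <ContainerAppCommandInstruction>DefaultArgs</ContainerAppCommandInstruction>
</PropertyGroup>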


Note

The ContainerEntrypoint and ContainerEntrypointArgs configuration items have
been deprecated as of .NET 8.

Important

This is for advanced users; most apps shouldn't need to customize their entrypoint
to this degree. For more information and if you'd like to provide use cases for your
scenarios, see GitHub: .NET SDK container builds discussions .

ContainerUser

The user configuration property controls the default user that the container runs as. This
is often used to run the container as a non-root user, which is a best practice for
security. There are a few constraints for this configuration to be aware of:

It can take various forms—username, Linux user ID, group name, Linux group ID,
username:groupname , and other ID variants.

There's no verification that the user or group specified exists on the image.
Changing the user can alter the behavior of the app, especially in regard to things
like file system permissions.

The default value of this field varies by project TFM and target operating system:

If you're targeting .NET 8 or higher and using the Microsoft runtime images, then:
on Linux the rootless user app is used (though it's referenced by its user ID)
on Windows the rootless user ContainerUser is used
Otherwise, no default ContainerUser is used

XML

<PropertyGroup>
<ContainerUser>my-existing-app-user</ContainerUser>
</PropertyGroup>

 Tip

The APP_UID environment variable is used to set user information in your container.
This value can come from environment variables defined in your base image (as the
Microsoft .NET images do), or you can set it yourself via the
ContainerEnvironmentVariable syntax.
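
For example, here's a minimal sketch of setting APP_UID yourself; the user ID 1654 is only an illustrative value, not a requirement:

XML

<ItemGroup>
  <!-- Illustrative value; use a user ID that actually exists in your image. -->
  <ContainerEnvironmentVariable Include="APP_UID" Value="1654" />
</ItemGroup>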

To configure your app to run as a root user, set the ContainerUser property to root . In
your project file, add the following:

XML

<PropertyGroup>
<ContainerUser>root</ContainerUser>
</PropertyGroup>

Alternatively, you can set this value when calling dotnet publish from the command
line:

.NET CLI

dotnet publish -p ContainerUser=root

Default container labels

Labels are often used to provide consistent metadata on container images. This package
provides some default labels to encourage better maintainability of the generated
images.

org.opencontainers.image.created is set to the ISO 8601 format of the current UTC
DateTime .

For more information, see Implement conventional labels on top of existing label
infrastructure .
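
In addition to the defaults, you can attach your own labels with the ContainerLabel item. The label name and value below are made up purely for illustration:

XML

<ItemGroup>
  <!-- Hypothetical label name and value. -->
  <ContainerLabel Include="org.contoso.image-owner" Value="platform-team" />
</ItemGroup>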

Clean up resources
In this article, you published a .NET worker as a container image. If you want, delete this
resource. Use the docker images command to see a list of installed images.

Console

docker images

Consider the following example output:


Console

REPOSITORY TAG IMAGE ID CREATED SIZE
dotnet-worker-image 1.0.0 25aeb97a2e21 12 seconds ago 191MB

 Tip

Image files can be large. Typically, you would remove temporary containers you
created while testing and developing your app. You usually keep the base images
with the runtime installed if you plan on building other images based on that
runtime.

To delete the image, copy the image ID and run the docker image rm command:

Console

docker image rm 25aeb97a2e21

Next steps
Announcing built-in container support for the .NET SDK
Tutorial: Containerize a .NET app
.NET container images
Review the Azure services that support containers
Read about Dockerfile commands
Explore the container tools in Visual Studio

.NET container images
Article • 08/30/2024

.NET provides various container images for different scenarios. This article describes the
different types of images and how they're used. For more information about official
images, see the Docker Hub: Microsoft .NET repository.

Tagging scheme
Starting with .NET 8, container images are more pragmatic in how they're differentiated.
The following characteristics are used to differentiate images:

The target framework moniker (TFM) of the app.
The OS, version, and architecture.
The image type (for example, runtime , aspnet , sdk ).
The image variant (for example, *-distroless , *-chiseled ).
The image feature (for example, *-aot , *-extra ).

Images optimized for size


The following images are focused on resulting in the smallest possible image size:

Alpine
Mariner distroless
Ubuntu chiseled

These images are smaller, as they don't include globalization dependencies such as ICU
or tzdata. These images only work with apps that are configured for globalization
invariant mode. To configure an app for invariant globalization, add the following
property to the project file:

XML

<PropertyGroup>
<InvariantGlobalization>true</InvariantGlobalization>
</PropertyGroup>

 Tip
SDK images aren't produced for *-distroless or *-chiseled image types.
Composite images are the smallest aspnet offering for Core CLR.

Images suitable for globalization


Containerized apps that require globalization inflate the image size, as they require
globalization dependencies. Ubuntu and Debian images have ICU and tzdata installed
already.

The tzdata dependency was added to the following images:

runtime-deps:8.0-jammy
runtime-deps:8.0-bookworm-slim

This globalization tactic is used by runtime , aspnet , and sdk images with the same tag.

Important

Adding tzdata to Debian bookworm images has no practical effect, unless there's
an update to tzdata (that isn't yet included in Debian), at which point .NET images
would include a newer tzdata.

Some packages are still optional, such as Kerberos, LDAP, and msquic. These packages
are only required in niche scenarios.

Scenario-based images
The runtime-deps images have significant value, particularly since they include a
standard user and port definitions. They're convenient to use for self-contained and
native AOT scenarios. However, solely providing runtime-deps images that are needed
by the runtime and sdk images isn't sufficient to enable all the imaginable scenarios
or generate optimal images.

The need for runtime-deps extends to native AOT, *-distroless , and *-chiseled image
types as well. For each OS, three image variants are provided (all in runtime-deps ).
Consider the following example using *-chiseled images:

8.0-jammy-chiseled : Images for Core CLR, no tzdata or ICU.
8.0-jammy-chiseled-aot : Images for native AOT, no tzdata, ICU, or stdc++.
8.0-jammy-chiseled-extra : Image for both Core CLR and native AOT, includes
tzdata, ICU, and stdc++.

In terms of scenarios:

The 8.0-jammy-chiseled images are the base for runtime and aspnet images of the
same tag. By default, native AOT apps can use the 8.0-jammy-chiseled-aot image, since
it's optimized for size. Native AOT apps and Core CLR self-contained/single file apps that
require globalization functionality can use 8.0-jammy-chiseled-extra .

Alpine and Mariner images use the same scheme.
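
To make this concrete, a self-contained or native AOT project opting into one of these variants could set its base image in the project file. This is only a sketch; match the tag to your TFM and globalization needs:

XML

<PropertyGroup>
  <!-- Sketch: size-optimized base for a native AOT app with no globalization needs. -->
  <ContainerBaseImage>mcr.microsoft.com/dotnet/runtime-deps:8.0-jammy-chiseled-aot</ContainerBaseImage>
</PropertyGroup>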

Note

Debian and Ubuntu (non-chiseled) runtime-deps images don't have multiple
variants.

Native AOT container images


Native AOT images are published to the sdk repository, and tagged with the -aot
suffix. These images enable building native AOT apps. They're created for distros with
matching runtime-deps:*-aot images. These images are large, commonly twice the size
of regular SDK images.

AOT images are published for:

Alpine
Mariner
Ubuntu

For more information, see Native AOT deployment.

Docker Hub repositories

All of the official Microsoft images for .NET are published to the microsoft-dotnet
Docker Hub organization. Consider the following repositories.

.NET stable image repositories:

Image repository Image

sdk mcr.microsoft.com/dotnet/sdk

aspnet mcr.microsoft.com/dotnet/aspnet

runtime mcr.microsoft.com/dotnet/runtime

runtime-deps mcr.microsoft.com/dotnet/runtime-deps

monitor mcr.microsoft.com/dotnet/monitor

aspire-dashboard mcr.microsoft.com/dotnet/aspire-dashboard

samples mcr.microsoft.com/dotnet/samples

.NET nightly image repositories:


Image repository Image

nightly-aspnet mcr.microsoft.com/dotnet/nightly/aspnet

nightly-monitor mcr.microsoft.com/dotnet/nightly/monitor

nightly-runtime-deps mcr.microsoft.com/dotnet/nightly/runtime-deps

nightly-runtime mcr.microsoft.com/dotnet/nightly/runtime

nightly-sdk mcr.microsoft.com/dotnet/nightly/sdk

nightly-aspire-dashboard mcr.microsoft.com/dotnet/nightly/aspire-dashboard

.NET Framework image repositories:


Image repository Image

framework mcr.microsoft.com/dotnet/framework

framework-aspnet mcr.microsoft.com/dotnet/framework/aspnet

framework-runtime mcr.microsoft.com/dotnet/framework/runtime

framework-samples mcr.microsoft.com/dotnet/framework/samples

framework-sdk mcr.microsoft.com/dotnet/framework/sdk

framework-wcf mcr.microsoft.com/dotnet/framework/wcf

See also
What's new in .NET 8: Container images
New approach for differentiating .NET 8+ images
Visual Studio Container Tools for Docker
Article • 07/23/2024

The tools included in Visual Studio for developing with Docker containers are easy to
use, and greatly simplify building, debugging, and deployment for containerized
applications. You can work with a container for a single project, or use container
orchestration with Docker Compose or Service Fabric to work with multiple services in
containers.

Prerequisites
Docker Desktop
Visual Studio 2022 with the Web Development, Azure Tools workload, and/or
.NET desktop development workload installed
To publish to Azure Container Registry, an Azure subscription. Sign up for a free
trial .

Docker support in Visual Studio


Docker support is available for ASP.NET projects, ASP.NET Core projects, and .NET Core
and .NET Framework console projects.

The support for Docker in Visual Studio has changed over a number of releases in
response to customer needs. There are several options to add Docker support to a
project, and the supported options vary by the type of project and the version of Visual
Studio. With some supported project types, if you just want a container for a single
project, without using orchestration, you can do that by adding Docker support. The
next level is container orchestration support, which adds appropriate support files for
the particular orchestrator you choose.

With Visual Studio 2022 version 17.9 and later, when you add Docker support to a .NET
7 or later project, you have two container build types to choose from for adding Docker
support. You can choose to add a Dockerfile to specify how to build the container
images, or you can choose to use the built-in container support provided by the .NET
SDK.

Also, with Visual Studio 2022 and later, when you choose container orchestration, you
can use Docker Compose or Service Fabric as container orchestration services.

Note
If you are using the full .NET Framework console project template, the supported
option is Add Container Orchestrator support after project creation, with options
to use Service Fabric or Docker Compose. Adding support at project creation and
Add Docker support for a single project without orchestration are not available
options.

In Visual Studio 2022, the Containers window is available, which lets you view running
containers, browse available images, view environment variables, logs, and port
mappings, inspect the filesystem, attach a debugger, or open a terminal window inside
the container environment. See Use the Containers window.

Note

Docker's licensing requirements might be different for different versions of Docker
Desktop. Refer to the Docker documentation to understand the current licensing
requirements for using your version of Docker Desktop for development in your
situation.

Adding Docker support


You can enable Docker support during project creation by selecting Enable Docker
Support when creating a new project, as shown in the following screenshot:
Note

For .NET Framework projects (not .NET Core), only Windows containers are
available.

You can add Docker support to an existing project by selecting Add > Docker Support
in Solution Explorer. The Add > Docker Support and Add > Container Orchestrator
Support commands are located on the right-click menu (or context menu) of the project
node for an ASP.NET Core project in Solution Explorer, as shown in the following
screenshot:

Add Docker support using the Dockerfile container build type

When you add or enable Docker support to a .NET 7 or later project, Visual Studio
shows the Container Scaffolding Options dialog box, which gives you the choice of
operating system (Linux or Windows), but also the ability to choose the container build
type, either Dockerfile or .NET SDK. This dialog box does not appear in .NET Framework
projects or Azure Functions projects.
In 17.11 and later, you can also specify the Container Image Distro and the Docker
Build Context.

Container Image Distro specifies which OS image your containers use as the base
image. This list changes if you switch between Linux and Windows as the container type.

The following images are available:

Windows:

Windows Nano Server (recommended, only available 8.0 and later, not available for
Native Ahead-of-time (AOT) deployment projects)
Windows Server Core (only available 8.0 and later)

Linux:

Default (Debian, but the tag is "8.0")


Debian
Ubuntu
Chiseled Ubuntu
Alpine

Note

Containers based on the Chiseled Ubuntu image and that use Native Ahead-of-
time (AOT) deployment can only be debugged in Fast Mode. See Customize
Docker containers in Visual Studio.

Docker Build Context specifies the folder that is used for the Docker build. See Docker
build context . The default is the solution folder, which is strongly recommended. All
the files needed for a build need to be under this folder, which is usually not the case if
you choose the project folder or some other folder.

If you choose Dockerfile, Visual Studio adds the following to the project:

a Dockerfile file
a .dockerignore file
a NuGet package reference to the
Microsoft.VisualStudio.Azure.Containers.Tools.Targets package

The Dockerfile you add will resemble the following code. In this example, the project
was named WebApplication-Docker , and you chose Linux containers:

Dockerfile

#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["WebApplication-Docker/WebApplication-Docker.csproj", "WebApplication-Docker/"]
RUN dotnet restore "WebApplication-Docker/WebApplication-Docker.csproj"
COPY . .
WORKDIR "/src/WebApplication-Docker"
RUN dotnet build "WebApplication-Docker.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "WebApplication-Docker.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WebApplication-Docker.dll"]

Containerize a .NET app without a Dockerfile


With Visual Studio 2022 17.9 and later with the .NET 7 SDK installed, in ASP.NET Core
projects that target .NET 6 or later, you have the option of using .NET SDK's built-in
support for container builds, which means you don't need a Dockerfile; see Containerize
a .NET app with dotnet publish. Instead, you configure your containers using MSBuild
properties in the project file, and the settings for launching the containers with Visual
Studio are encoded in a .json configuration file, launchSettings.json.

Here, choose .NET SDK as the container build type to use .NET SDK's container
management instead of a Dockerfile.

Container Image Distro specifies which OS image your containers use as the base
image. This list changes if you switch between Linux and Windows as the container type.
See the previous section for a list of available images.
The .NET SDK container build entry in launchSettings.json looks like the following code:

JSON

"Container (.NET SDK)": {


"commandName": "SdkContainer",
"launchBrowser": true,
"launchUrl": "{Scheme}://{ServiceHost}:{ServicePort}",
"environmentVariables": {
"ASPNETCORE_HTTPS_PORTS": "8081",
"ASPNETCORE_HTTP_PORTS": "8080"
},
"publishAllPorts": true,
"useSSL": true
}

The .NET SDK manages some of the settings that would have been encoded in a
Dockerfile, such as the container base image, and the environment variables to set. The
settings available in the project file for container configuration are listed at Customizing
your container . For example, the Container Image Distro is saved in the project file as
the ContainerBaseImage property. You can change it later by editing the project file.

XML

<PropertyGroup>
  <ContainerBaseImage>mcr.microsoft.com/dotnet/runtime:8.0-alpine-amd64</ContainerBaseImage>
</PropertyGroup>

Use the Containers window


The Containers window lets you view containers and images on your machine and see
what's going on with them. You can view the filesystem, volumes mounted, environment
variables, ports used, and examine log files.

Open the Containers window by using the quick launch (Ctrl+Q) and typing containers .
You can use the docking controls to put the window somewhere. Because of the width
of the window, it works best when docked at the bottom of the screen.

Select a container, and use the tabs to view the information that's available. To check it
out, run your Docker-enabled app, open the Files tab, and expand the app folder to see
your deployed app on the container.
For more information, see Use the Containers window.

Docker Compose support


When you want to compose a multi-container solution using Docker Compose, add
container orchestrator support to your projects. This lets you run and debug a group of
containers (a whole solution or group of projects) at the same time if they're defined in
the same docker-compose.yml file.

To add container orchestrator support using Docker Compose, right-click on the project
node in Solution Explorer, and choose Add > Container Orchestrator Support. Then
choose Docker Compose to manage the containers.

After you add container orchestrator support to your project, you see a Dockerfile added
to the project (if there wasn't one there already) and a docker-compose folder added to
the solution in Solution Explorer, as shown here:
If docker-compose.yml already exists, Visual Studio just adds the required lines of
configuration code to it.

Repeat the process with the other projects that you want to control using Docker
Compose.
If you work with a large number of services, you can save time and computing resources
by selecting which subset of services you want to start in your debugging session. See
Start a subset of Compose services.

Note

Remote Docker hosts are not supported in Visual Studio tooling.

Service Fabric support


With Service Fabric tools in Visual Studio, you can develop and debug for Azure Service
Fabric, run and debug locally, and deploy to Azure.

Visual Studio 2019 and later support developing containerized microservices using
Windows containers and Service Fabric orchestration.

For a detailed tutorial, see Tutorial: Deploy a .NET application in a Windows container to
Azure Service Fabric.

For more information on Azure Service Fabric, see Service Fabric.

Continuous delivery and continuous integration (CI/CD)
Visual Studio integrates readily with Azure Pipelines for automated and continuous
integration and delivery of changes to your service code and configuration. To get
started, see Create your first pipeline.

For Service Fabric, see Tutorial: Deploy your ASP.NET Core app to Azure Service Fabric
by using Azure DevOps Projects.

Next steps
For further details on the services implementation and use of Visual Studio tools for
working with containers, read the following articles:

Debugging apps in a local Docker container

Deploy an ASP.NET container to a container registry using Visual Studio


.NET distribution packaging
Article • 12/09/2023

As .NET 5 (and .NET Core) and later versions become available on more and more
platforms, it's useful to learn how to package, name, and version apps and libraries that
use it. This way, package maintainers can help ensure a consistent experience no matter
where users choose to run .NET. This article is useful for users that are:

Attempting to build .NET from source.
Wanting to make changes to the .NET CLI that could impact the resulting layout or
packages produced.

Disk layout
When installed, .NET consists of several components that are laid out as follows in the
file system:

{dotnet_root} (0) (*)
├── dotnet (1)
├── LICENSE.txt (8)
├── ThirdPartyNotices.txt (8)
├── host (*)
│ └── fxr (*)
│ └── <fxr version> (2)
├── sdk (*)
│ └── <sdk version> (3)
├── sdk-manifests (4) (*)
│ └── <sdk feature band version>
├── library-packs (4) (*)
├── metadata (4) (*)
│ └── workloads
│ └── <sdk feature band version>
├── template-packs (4) (*)
├── packs (*)
│ ├── Microsoft.AspNetCore.App.Ref (*)
│ │ └── <aspnetcore ref version> (11)
│ ├── Microsoft.NETCore.App.Ref (*)
│ │ └── <netcore ref version> (12)
│ ├── Microsoft.NETCore.App.Host.<rid> (*)
│ │ └── <apphost version> (13)
│ ├── Microsoft.WindowsDesktop.App.Ref (*)
│ │ └── <desktop ref version> (14)
│ ├── NETStandard.Library.Ref (*)
│ │ └── <netstandard version> (15)
│ ├── Microsoft.NETCore.App.Runtime.<rid> (*)
│ │ └── <runtime version> (18)
│ └── Microsoft.AspNetCore.App.Runtime.<rid> (*)
│ └── <aspnetcore version> (18)
├── shared (*)
│ ├── Microsoft.NETCore.App (*)
│ │ └── <runtime version> (5)
│ ├── Microsoft.AspNetCore.App (*)
│ │ └── <aspnetcore version> (6)
│ ├── Microsoft.AspNetCore.All (*)
│ │ └── <aspnetcore version> (6)
│ └── Microsoft.WindowsDesktop.App (*)
│ └── <desktop app version> (7)
└── templates (*)
│ └── <templates version> (17)
/
├── etc/dotnet
│ └── install_location (16)
├── usr/share/man/man1
│ └── dotnet.1.gz (9)
└── usr/bin
└── dotnet (10)

(0) {dotnet_root} is a shared root for all .NET major and minor versions. If multiple
runtimes are installed, they share the {dotnet_root} folder, for example,
{dotnet_root}/shared/Microsoft.NETCore.App/6.0.11 and
{dotnet_root}/shared/Microsoft.NETCore.App/7.0.0 . The name of the

{dotnet_root} folder should be version agnostic, that is, simply dotnet .

(1) dotnet The host (also known as the "muxer") has two distinct roles: activate a
runtime to launch an application, and activate an SDK to dispatch commands to it.
The host is a native executable ( dotnet.exe ).

While there's a single host, most of the other components are in versioned directories
(2,3,5,6). This means multiple versions can be present on the system since they're
installed side by side.

(2) host/fxr/<fxr version> contains the framework resolution logic used by the
host. The host uses the latest hostfxr that is installed. The hostfxr is responsible for
selecting the appropriate runtime when executing a .NET application. For example,
an application built for .NET 7.0.0 uses the 7.0.5 runtime when it's available.
Similarly, hostfxr selects the appropriate SDK during development.

(3) sdk/<sdk version> The SDK (also known as "the tooling") is a set of managed
tools that are used to write and build .NET libraries and applications. The SDK
includes the .NET CLI, the managed languages compilers, MSBuild, and associated
build tasks and targets, NuGet, new project templates, and so on.
(4) sdk-manifests/<sdk feature band version> The names and versions of the
assets that an optional workload installation requires are maintained in workload
manifests stored in this folder. The folder name is the feature band version of the
SDK. So for an SDK version such as 7.0.102, this folder would still be named
7.0.100. When a workload is installed, the following folders are created as needed
for the workload's assets: library-packs, metadata, and template-packs. A
distribution can create an empty /metadata/workloads/<sdkfeatureband>/userlocal
file if workloads should be installed under a user path rather than in the dotnet
folder. For more information, see GitHub issue dotnet/installer#12104 .
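
As a sketch of that packaging step, a distro build script could stage the marker file like this; the staging root and the 8.0.100 feature band are illustrative values only:

```shell
# Sketch: mark workloads as user-local for one SDK feature band,
# under a staged packaging root (all paths here are illustrative).
DOTNET_ROOT=./staging/dotnet
FEATURE_BAND=8.0.100
mkdir -p "$DOTNET_ROOT/metadata/workloads/$FEATURE_BAND"
touch "$DOTNET_ROOT/metadata/workloads/$FEATURE_BAND/userlocal"
```

The presence of the empty userlocal file is the whole signal; it contains no data.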

The shared folder contains frameworks. A shared framework provides a set of libraries at
a central location so they can be used by different applications.

(5) shared/Microsoft.NETCore.App/<runtime version> This framework contains
the .NET runtime and supporting managed libraries.

(6) shared/Microsoft.AspNetCore.{App,All}/<aspnetcore version> contains the
ASP.NET Core libraries. The libraries under Microsoft.AspNetCore.App are
developed and supported as part of the .NET project. The libraries under
Microsoft.AspNetCore.All are a superset that also contains third-party libraries.

(7) shared/Microsoft.WindowsDesktop.App/<desktop app version> contains the
Windows desktop libraries. This isn't included on non-Windows platforms.

(8) LICENSE.txt, ThirdPartyNotices.txt are the .NET license and licenses of
third-party libraries used in .NET, respectively.

(9,10) dotnet.1.gz, dotnet dotnet.1.gz is the dotnet manual page. dotnet is a
symlink to the dotnet host (1). These files are installed at well-known locations for
system integration.

(11,12) Microsoft.NETCore.App.Ref, Microsoft.AspNetCore.App.Ref describe the
API of an x.y version of .NET and ASP.NET Core respectively. These packs are used
when compiling for those target versions.

(13) Microsoft.NETCore.App.Host.<rid> contains a native binary for platform rid .
This binary is a template when compiling a .NET application into a native binary for
that platform.

(14) Microsoft.WindowsDesktop.App.Ref describes the API of the x.y version of
Windows Desktop applications. These files are used when compiling for that
target. This isn't provided on non-Windows platforms.
(15) NETStandard.Library.Ref describes the netstandard x.y API. These files are
used when compiling for that target.

(16) /etc/dotnet/install_location is a file that contains the full path for
{dotnet_root} . The path may end with a newline. It's not necessary to add this file
when the root is /usr/share/dotnet .

(17) templates contains the templates used by the SDK. For example, dotnet new
finds project templates here.

(18) Microsoft.NETCore.App.Runtime.<rid>/<runtime
version>,Microsoft.AspNetCore.App.Runtime.<rid>/<aspnetcore version> These
files enable building self-contained applications. These directories contain
symbolic links to files in (2), (5) and (6).

The folders marked with (*) are used by multiple packages. Some package formats (for
example, rpm ) require special handling of such folders. The package maintainer must
take care of this.

Recommended packages
.NET versioning is based on the runtime component [major].[minor] version numbers.
The SDK version uses the same [major].[minor] and has an independent [patch] that
combines feature and patch semantics for the SDK. For example: SDK version 7.0.302 is
the second patch release of the third feature release of the SDK that supports the 7.0
runtime. For more information about how versioning works, see .NET versioning
overview.

Some of the packages include part of the version number in their name. This allows you
to install a specific version. The rest of the version isn't included in the version name.
This allows the OS package manager to update the packages (for example, automatically
installing security fixes). Supported package managers are Linux specific.

The following lists the recommended packages:

dotnet-sdk-[major].[minor] - Installs the latest SDK for specific runtime

Version: <sdk version>


Example: dotnet-sdk-7.0
Contains: (3),(4),(18)
Dependencies: dotnet-runtime-[major].[minor] , aspnetcore-runtime-[major].[minor] ,
dotnet-targeting-pack-[major].[minor] , aspnetcore-targeting-pack-[major].[minor] ,
netstandard-targeting-pack-[netstandard_major].[netstandard_minor] ,
dotnet-apphost-pack-[major].[minor] , dotnet-templates-[major].[minor]

aspnetcore-runtime-[major].[minor] - Installs a specific ASP.NET Core runtime

Version: <aspnetcore runtime version>


Example: aspnetcore-runtime-7.0
Contains: (6)
Dependencies: dotnet-runtime-[major].[minor]

dotnet-runtime-deps-[major].[minor] (Optional) - Installs the dependencies for

running self-contained applications


Version: <runtime version>
Example: dotnet-runtime-deps-7.0
Dependencies: distribution-specific dependencies

dotnet-runtime-[major].[minor] - Installs a specific runtime

Version: <runtime version>


Example: dotnet-runtime-7.0
Contains: (5)
Dependencies: dotnet-hostfxr-[major].[minor] , dotnet-runtime-deps-[major].
[minor]

dotnet-hostfxr-[major].[minor] - dependency

Version: <runtime version>


Example: dotnet-hostfxr-7.0
Contains: (2)
Dependencies: dotnet-host

dotnet-host - dependency

Version: <runtime version>


Example: dotnet-host
Contains: (1),(8),(9),(10),(16)

dotnet-apphost-pack-[major].[minor] - dependency

Version: <runtime version>


Contains: (13)

dotnet-targeting-pack-[major].[minor] - Allows targeting a non-latest runtime

Version: <runtime version>


Contains: (12)
aspnetcore-targeting-pack-[major].[minor] - Allows targeting a non-latest

runtime
Version: <aspnetcore runtime version>
Contains: (11)

netstandard-targeting-pack-[netstandard_major].[netstandard_minor] - Allows

targeting a netstandard version


Version: <sdk version>
Contains: (15)

dotnet-templates-[major].[minor]

Version: <sdk version>
Contains: (17)

The following two meta packages are optional. They bring value for end users in that
they abstract the top-level package (dotnet-sdk), which simplifies the installation of the
full set of .NET packages. These meta packages reference a specific .NET SDK version.

dotnet[major] - Installs the specified SDK version

Version: <sdk version>


Example: dotnet7
Dependencies: dotnet-sdk-[major].[minor]

dotnet - Installs a specific SDK version determined by distros to be the primary

version—usually the latest available


Version: <sdk version>
Example: dotnet
Dependencies: dotnet-sdk-[major].[minor]

The dotnet-runtime-deps-[major].[minor] package requires understanding the
distro-specific dependencies. Because the distro build system may be able to derive this
automatically, the package is optional, in which case these dependencies are added
directly to the dotnet-runtime-[major].[minor] package.

When package content is under a versioned folder, the package name's [major].[minor] matches the versioned folder name. For all packages except netstandard-targeting-pack-[netstandard_major].[netstandard_minor] , this also matches the .NET version.

Dependencies between packages should use an equal-or-greater-than version requirement. For example, dotnet-sdk-7.0:7.0.401 requires aspnetcore-runtime-7.0 >= 7.0.6 . This makes it possible for the user to upgrade their installation via a root package (for example, dnf update dotnet-sdk-7.0 ).
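As an illustration of the greater-than-or-equal behavior, the following shell sketch checks whether an installed version satisfies such a constraint using sort -V; the version numbers here are made up for the example:

```shell
#!/bin/sh
# Sketch: check that a hypothetical installed version satisfies a
# "greater than or equal" dependency constraint such as
# aspnetcore-runtime-7.0 >= 7.0.6. Versions below are invented.
required="7.0.6"
installed="7.0.11"

# sort -V orders version strings numerically; if the required version
# sorts first (or they are equal), the constraint is satisfied.
lowest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "constraint satisfied: $installed >= $required"
else
  echo "upgrade needed: $installed < $required"
fi
```

Package managers such as dnf perform an equivalent comparison when resolving the >= requirement during an upgrade.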


Most distributions require all artifacts to be built from source. This has some impact on the packages:

The third-party libraries under shared/Microsoft.AspNetCore.All can't be easily built from source, so that folder is omitted from the aspnetcore-runtime package.

The NuGetFallbackFolder is populated using binary artifacts from nuget.org. It should remain empty.

Multiple dotnet-sdk packages may provide the same files for the NuGetFallbackFolder .
To avoid issues with the package manager, these files should be identical (checksum,
modification date, and so on).
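One quick way to catch violations of the identical-files requirement before publishing is to compare checksums across package staging directories. The sketch below uses mock staging directories and file names invented for illustration:

```shell
#!/bin/sh
# Sketch: two SDK packages that both ship a NuGetFallbackFolder file must
# ship byte-identical copies. The staging directories here are mock examples.
stage="$(mktemp -d)"
mkdir -p "$stage/dotnet-sdk-7.0.100" "$stage/dotnet-sdk-7.0.101"
printf 'nupkg-bytes' > "$stage/dotnet-sdk-7.0.100/some.nupkg"
printf 'nupkg-bytes' > "$stage/dotnet-sdk-7.0.101/some.nupkg"

# Compare the payloads by checksum; identical checksums mean the package
# manager won't see conflicting file contents.
a=$(sha256sum "$stage/dotnet-sdk-7.0.100/some.nupkg" | cut -d' ' -f1)
b=$(sha256sum "$stage/dotnet-sdk-7.0.101/some.nupkg" | cut -d' ' -f1)
if [ "$a" = "$b" ]; then
  echo "identical payloads"
else
  echo "conflict: payloads differ"
fi
```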

Debug packages
Debug content should be packaged in debug-named packages that follow the .NET
package split described previously in this article. For instance, debug content for the
dotnet-sdk-[major].[minor] package should be included in a package named dotnet-

sdk-dbg-[major].[minor] . You should install debug content to the same location as the

binaries.

Here are a few binary examples:

In the {dotnet_root}/sdk/<sdk version> directory, the following two files are expected:

dotnet.dll - installed with dotnet-sdk-[major].[minor] package

dotnet.pdb - installed with dotnet-sdk-dbg-[major].[minor] package

In the {dotnet_root}/shared/Microsoft.NETCore.App/<runtime version> directory, the following two files are expected:

System.Text.Json.dll - installed with dotnet-runtime-[major].[minor] package

System.Text.Json.pdb - installed with dotnet-runtime-dbg-[major].[minor] package

In the {dotnet_root}/shared/Microsoft.AspNetCore.App/<aspnetcore version> directory, the following two files are expected:

Microsoft.AspNetCore.Routing.dll - installed with aspnetcore-runtime-[major].[minor] package

Microsoft.AspNetCore.Routing.pdb - installed with aspnetcore-runtime-dbg-[major].[minor] package
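The dll/pdb pairing can be spot-checked with a small script that walks a shared framework directory and reports managed assemblies without a matching .pdb. The layout below is a mock install root created for illustration; on a real system you would point root at the actual install location (commonly /usr/lib/dotnet):

```shell
#!/bin/sh
# Sketch: report .dll files without a matching .pdb, as would happen when
# a runtime package is installed without its -dbg counterpart.
root="$(mktemp -d)"
fx="$root/shared/Microsoft.NETCore.App/8.0.0"
mkdir -p "$fx"
touch "$fx/System.Text.Json.dll" "$fx/System.Text.Json.pdb"
touch "$fx/System.Runtime.dll"   # simulate a missing debug file

missing=0
for dll in "$root"/shared/*/*/*.dll; do
  [ -e "$dll" ] || continue
  pdb="${dll%.dll}.pdb"           # debug file sits next to the binary
  if [ ! -f "$pdb" ]; then
    echo "no symbols for: ${dll#"$root"/}"
    missing=$((missing + 1))
  fi
done
echo "assemblies without symbols: $missing"
```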
Starting with .NET 8.0, all .NET debug content (PDB files) produced by source-build is available in a tarball named dotnet-symbols-sdk-<version>-<rid>.tar.gz . This archive contains PDBs in subdirectories that match the directory structure of the .NET SDK tarball, dotnet-sdk-<version>-<rid>.tar.gz .

While all debug content is available in the debug tarball, not all debug content is equally important. End users are mostly interested in the content of the shared/Microsoft.AspNetCore.App/<aspnetcore version> and shared/Microsoft.NETCore.App/<runtime version> directories.

The SDK content under sdk/<sdk version> is useful for debugging .NET SDK toolsets.

The following packages are the recommended debug packages:

aspnetcore-runtime-dbg-[major].[minor] - Installs debug content for a specific ASP.NET Core runtime

Version: <aspnetcore runtime version>
Example: aspnetcore-runtime-dbg-8.0
Contains: debug content for (6)
Dependencies: aspnetcore-runtime-[major].[minor]

dotnet-runtime-dbg-[major].[minor] - Installs debug content for a specific runtime

Version: <runtime version>


Example: dotnet-runtime-dbg-8.0
Contains: debug content for (5)
Dependencies: dotnet-runtime-[major].[minor]

The following debug package is optional:

dotnet-sdk-dbg-[major].[minor] - Installs debug content for a specific SDK version

Version: <sdk version>


Example: dotnet-sdk-dbg-8.0
Contains: debug content for (3),(4),(18)
Dependencies: dotnet-sdk-[major].[minor]

The debug tarball also contains some debug content under packs , which represents copies of content under shared . In the .NET layout, the packs directory is used for building .NET applications. Since it has no debugging scenarios, you shouldn't package the debug content that the debug tarball carries under packs .
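When repackaging symbols from the debug tarball, the packs subtree can simply be excluded. The sketch below builds a mock symbols layout and repacks it without packs using GNU tar's --exclude option; all paths and names are illustrative:

```shell
#!/bin/sh
# Sketch: repack a symbols layout while excluding packs/, whose PDBs
# duplicate content under shared/. Mock layout for illustration only.
src="$(mktemp -d)"
mkdir -p "$src/shared/Microsoft.NETCore.App/8.0.0" \
         "$src/packs/Microsoft.NETCore.App.Ref/8.0.0"
touch "$src/shared/Microsoft.NETCore.App/8.0.0/System.Text.Json.pdb"
touch "$src/packs/Microsoft.NETCore.App.Ref/8.0.0/System.Text.Json.pdb"

# Repack everything except the packs directory.
out="$(mktemp -d)/symbols-trimmed.tar.gz"
tar --exclude='packs' -czf "$out" -C "$src" .

if tar -tzf "$out" | grep -q 'packs'; then
  echo "packs content leaked"
else
  echo "packs excluded; shared content kept"
fi
```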

Building packages
The dotnet/source-build repository provides instructions on how to build a source
tarball of the .NET SDK and all its components. The output of the source-build
repository matches the layout described in the first section of this article.
