GitHub Actions and .NET
Article • 12/14/2023
In this overview, you'll learn what role GitHub Actions play in .NET application
development. GitHub Actions allow your source code repositories to automate
continuous integration (CI) and continuous delivery (CD). Beyond that, GitHub Actions
expose more advanced scenarios — providing hooks for automation with code reviews,
branch management, and issue triaging. With your .NET source code in GitHub, you can use GitHub Actions in many ways.
GitHub Actions
GitHub Actions represent standalone commands, such as actions/checkout (which fetches your repository) and actions/setup-dotnet (which installs the .NET SDK on the runner).
While these commands are isolated to a single action, they're powerful through
workflow composition. In workflow composition, you define the events that trigger the
workflow. Once a workflow is running, there are various jobs it's instructed to perform —
with each job defining any number of steps. The steps delegate out to GitHub Actions, or
alternatively call command-line scripts.
For more information, see Introduction to GitHub Actions . Think of a workflow file as a
composition that represents the various steps to build, test, and/or publish an
application. Many .NET CLI commands are available, most of which could be used in the
context of a GitHub Action.
Workflow file
GitHub Actions are utilized through a workflow file. The workflow file must be located in
the .github/workflows directory of the repository, and is expected to be YAML (either
*.yml or *.yaml). Workflow files define the workflow composition. A workflow is a
configurable automated process made up of one or more jobs. For more information,
see Workflow syntax for GitHub Actions .
The .NET quickstarts use the following workflow files:

build-validation.yml: Compiles (or builds) the source code. If the source code doesn't compile, the workflow fails.
build-and-test.yml: Exercises the unit tests within the repository. To run tests, the source code must first be compiled — this is really both a build and test workflow (it supersedes the build-validation.yml workflow). Failing unit tests cause the workflow to fail.
publish-app.yml: Publishes (or deploys) the app to a destination, such as Azure.
codeql-analysis.yml: Analyzes your code for security vulnerabilities and coding errors. Any discovered vulnerabilities can cause the workflow to fail.
Encrypted secrets
To use encrypted secrets in your workflow files, you reference the secrets using the
workflow expression syntax from the secrets context object.
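For example — a minimal sketch, where MY_SECRET_VALUE is a hypothetical repository secret — a step can surface the secret through an environment variable:

yml
steps:
- name: Use a secret
  env:
    MY_SECRET_VALUE: ${{ secrets.MY_SECRET_VALUE }}
  run: ./use-secret.sh # hypothetical script that reads MY_SECRET_VALUE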
Secret values are never printed in the logs. Instead, their names are printed with an
asterisk representing their values. For example, as each step runs within a job, all of the
values it uses are output to the action log. Secret values render similar to the following:
Console
MY_SECRET_VALUE: ***
Important
The secrets context provides the GitHub authentication token that is scoped to
the repository, branch, and action. It's provided by GitHub without any user
intervention:
yml
${{ secrets.GITHUB_TOKEN }}
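For example — a sketch — the token can be passed to an action or step that needs repository access:

yml
steps:
- uses: actions/checkout@v3
  with:
    token: ${{ secrets.GITHUB_TOKEN }}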
Events
Workflows are triggered by many different types of events. In addition to Webhook
events, which are the most common, there are also scheduled events and manual
events.
The following example shows how to specify a webhook event trigger for a workflow:
yml
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
      - staging

jobs:
  coverage:
    runs-on: ubuntu-latest
In the preceding workflow, the push and pull_request events will trigger the workflow
to run.
The following example shows how to specify a scheduled (cron job) event trigger for a
workflow:
yml
name: scan

on:
  schedule:
    - cron: '0 0 1 * *'
  # additional events omitted for brevity

jobs:
  build:
    runs-on: ubuntu-latest
In the preceding workflow, the schedule event specifies the cron expression '0 0 1 * *', which triggers the workflow to run on the first day of every month. Running workflows on a schedule is great for workflows that take a long time to run or that perform actions requiring less frequent attention.
The following example shows how to specify a manual event trigger for a workflow:

yml
name: build

on:
  workflow_dispatch:
    inputs:
      reason:
        description: 'The reason for running the workflow'
        required: true
        default: 'Manual run'
  # additional events omitted for brevity

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - name: 'Print manual run reason'
      if: ${{ github.event_name == 'workflow_dispatch' }}
      run: |
        echo 'Reason: ${{ github.event.inputs.reason }}'
.NET CLI
The .NET command-line interface (CLI) is a cross-platform toolchain for developing, building, running, and publishing .NET applications. The .NET CLI is used as part of individual steps within a workflow file. Common commands include:

dotnet restore
dotnet build
dotnet test
dotnet publish
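For example, a workflow job can invoke these commands directly from run steps — a minimal sketch using the actions referenced throughout these articles:

yml
steps:
- uses: actions/checkout@v3
- uses: actions/setup-dotnet@v3
  with:
    dotnet-version: '6.0.x'
- run: dotnet restore
- run: dotnet build --configuration Release --no-restore
- run: dotnet test --configuration Release --no-build --verbosity normal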
See also
For a more in-depth look at GitHub Actions with .NET, consider the following resources:
Quickstart(s):
Quickstart: Create a build validation GitHub Action
Quickstart: Create a test validation GitHub Action
Quickstart: Create a publish app GitHub Action
Quickstart: Create a security scan GitHub Action
Tutorial(s):
Tutorial: Create a GitHub Action with .NET
Use the .NET SDK in continuous
integration (CI) environments
Article • 05/25/2023
This article outlines how to use the .NET SDK and its tools on a build server. The .NET
toolset works both interactively, where a developer types commands at a command
prompt, and automatically, where a continuous integration (CI) server runs a build script.
The commands, options, inputs, and outputs are the same, and the only things you
supply are a way to acquire the tooling and a system to build your app. This article
focuses on scenarios of tool acquisition for CI with recommendations on how to design
and structure your build scripts.
Native installers
Native installers are available for macOS, Linux, and Windows. The installers require
admin (sudo) access to the build server. The advantage of using a native installer is that
it installs all of the native dependencies required for the tooling to run. Native installers
also provide a system-wide installation of the SDK.
macOS users should use the PKG installers. On Linux, there's a choice of using a feed-
based package manager, such as apt-get for Ubuntu or yum for CentOS, or using the
packages themselves, DEB or RPM. On Windows, use the MSI installer.
The latest stable binaries are found at .NET downloads . If you wish to use the latest
(and potentially unstable) pre-release tooling, use the links provided at the
dotnet/installer GitHub repository . For Linux distributions, tar.gz archives (also
known as tarballs ) are available; use the installation scripts within the archives to install
.NET.
Installer script
Using the installer script allows for non-administrative installation on your build server
and easy automation for obtaining the tooling. The script takes care of downloading the
tooling and extracting it into a default or specified location for use. You can also specify
a version of the tooling that you wish to install and whether you want to install the
entire SDK or only the shared runtime.
The installer script is automated to run at the start of the build to fetch and install the
desired version of the SDK. The desired version is whatever version of the SDK your
projects require to build. The script allows you to install the SDK in a local directory on
the server, run the tools from the installed location, and then clean up (or let the CI
service clean up) after the build. This provides encapsulation and isolation to your entire
build process. The installation script reference is found in the dotnet-install article.
Note
When using the installer script, native dependencies aren't installed automatically.
You must install the native dependencies if the operating system doesn't have
them. For more information, see .NET dependencies and requirements.
CI setup examples
This section describes a manual setup using a PowerShell or bash script, along with
descriptions of software as a service (SaaS) CI solutions. The SaaS CI solutions covered
are Travis CI, AppVeyor, and Azure Pipelines. For GitHub Actions, see GitHub Actions and .NET.
Manual setup
Each SaaS service has its methods for creating and configuring a build process. If you
use a different SaaS solution than those listed or require customization beyond the pre-
packaged support, you must perform at least some manual configuration.
In general, a manual setup requires you to acquire a version of the tools (or the latest
nightly builds of the tools) and run your build script. You can use a PowerShell or bash
script to orchestrate the .NET commands or use a project file that outlines the build
process. The orchestration section provides more detail on these options.
After you create a script that performs a manual CI build server setup, use it on your dev
machine to build your code locally for testing purposes. Once you confirm that the
script is running well locally, deploy it to your CI build server. A relatively simple
PowerShell script demonstrates how to obtain the .NET SDK and install it on a Windows
build server:
You provide the implementation for your build process at the end of the script. The
script acquires the tools and then executes your build process.
PowerShell
$ErrorActionPreference="Stop"
$ProgressPreference="SilentlyContinue"
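A minimal sketch of how such a script might continue — assuming the documented dotnet-install.ps1 script, an illustrative SDK version, and a hypothetical ./cli-tools install directory:

PowerShell
# Download the documented installer script to a temporary location.
$installScript = Join-Path $env:TEMP "dotnet-install.ps1"
Invoke-WebRequest -UseBasicParsing -Uri "https://dot.net/v1/dotnet-install.ps1" -OutFile $installScript

# Install the desired SDK version into a local, non-administrative folder.
& $installScript -Version 6.0.401 -InstallDir "./cli-tools"

# Put the locally installed SDK first on the PATH for the remainder of the build.
$env:PATH = "$(Resolve-Path './cli-tools');$env:PATH"

# Your build process goes here, for example:
dotnet build --configuration Release
dotnet test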
Travis CI
You can configure Travis CI to install the .NET SDK using the csharp language and the
dotnet key. For more information, see the official Travis CI docs on Building a C#, F#, or
Visual Basic Project. Note that the community-maintained language: csharp language identifier works for all .NET languages, including F#, and Mono.
Travis CI runs both macOS and Linux jobs in a build matrix, where you specify a
combination of runtime, environment, and exclusions/inclusions to cover the build
combinations for your app. For more information, see the Customizing the Build
article in the Travis CI documentation. The MSBuild-based tools include the long-term
support (LTS) and standard-term support (STS) runtimes in the package; so by installing
the SDK, you receive everything you need to build.
AppVeyor
AppVeyor installs the .NET 6 SDK with the Visual Studio 2022 build worker image.
Other build images with different versions of the .NET SDK are available. For more
information, see the Build worker images article in the AppVeyor docs.
The .NET SDK binaries are downloaded and unzipped in a subdirectory using the install
script, and then they're added to the PATH environment variable. Add a build matrix to
run integration tests with multiple versions of the .NET SDK:
YAML
environment:
  matrix:
    - CLI_VERSION: 6.0.7
    - CLI_VERSION: Latest
Azure Pipelines
To perform the build in Azure Pipelines, choose one of two approaches:

Run the script from the manual setup step using your commands.
Create a build composed of several Azure DevOps Services built-in build tasks that are configured to use .NET tools.
Both solutions are valid. Using a manual setup script, you control the version of the tools
that you receive, since you download them as part of the build. The build is run from a
script that you must create. This article only covers the manual option. For more
information on composing a build with Azure DevOps Services build tasks, see the Azure
Pipelines documentation.
To use a manual setup script in Azure DevOps Services, create a new build definition and
specify the script to run for the build step. This is accomplished using the Azure DevOps
Services user interface:
1. Start by creating a new build definition. Once you reach the screen that provides
you an option to define what kind of a build you wish to create, select the Empty
option.
2. After configuring the repository to build, you're directed to the build definitions.
Select Add build step:
3. You're presented with the Task catalog. The catalog contains tasks that you use in
the build. Since you have a script, select the Add button for PowerShell: Run a
PowerShell script.
4. Configure the build step. Add the script from the repository that you're building:
Orchestration
Two general approaches to structuring the build process for .NET code using the .NET tools are using MSBuild directly or using the .NET command-line commands. Which approach you should take is determined by your comfort level with
the approaches and trade-offs in complexity. MSBuild provides you the ability to express
your build process as tasks and targets, but it comes with the added complexity of
learning MSBuild project file syntax. Using the .NET command-line tools is perhaps
simpler, but it requires you to write orchestration logic in a scripting language like bash
or PowerShell.
See also
GitHub Actions and .NET
.NET downloads - Linux
.NET-related GitHub Actions
Article • 08/01/2024
This article lists some of the first party .NET GitHub actions that are hosted on the
dotnet GitHub organization .
Note
This article is a work-in-progress, and might not list all the available .NET GitHub
Actions.
.NET version sweeper
This action sweeps .NET repos for out-of-support target versions of .NET.
The .NET docs team uses the .NET version sweeper GitHub Action to automate issue
creation. The Action runs on a schedule (as a cron job). When it detects that .NET
projects target out-of-support versions, it creates issues to report its findings. The
output is configurable and helpful for tracking .NET version support concerns.
Tutorial: Create a GitHub Action with
.NET
Article • 12/14/2023
Learn how to create a .NET app that can be used as a GitHub Action. GitHub Actions
enable workflow automation and composition. With GitHub Actions, you can build, test,
and deploy source code from GitHub. Additionally, actions expose the ability to
programmatically interact with issues, create pull requests, perform code reviews, and
manage branches. For more information on continuous integration with GitHub Actions,
see Building and testing .NET .
Prerequisites
A GitHub account
The .NET 6 SDK or later
A .NET integrated development environment (IDE)
Feel free to use the Visual Studio IDE
References to the source code in this tutorial have portions of the app omitted for
brevity. The complete app code is available on GitHub .
C#
using CommandLine;

namespace DotNet.GitHubAction;

public class ActionInputs
{
    string _repositoryName = null!;
    string _branchName = null!;

    public ActionInputs()
    {
        if (Environment.GetEnvironmentVariable("GREETINGS") is { Length: > 0 } greetings)
        {
            Console.WriteLine(greetings);
        }
    }

    [Option('o', "owner",
        Required = true,
        HelpText = "The owner, for example: \"dotnet\". Assign from `github.repository_owner`.")]
    public string Owner { get; set; } = null!;

    [Option('n', "name",
        Required = true,
        HelpText = "The repository name, for example: \"samples\". Assign from `github.repository`.")]
    public string Name
    {
        get => _repositoryName;
        set => ParseAndAssign(value, str => _repositoryName = str);
    }

    [Option('b', "branch",
        Required = true,
        HelpText = "The branch name, for example: \"refs/heads/main\". Assign from `github.ref`.")]
    public string Branch
    {
        get => _branchName;
        set => ParseAndAssign(value, str => _branchName = str);
    }

    [Option('d', "dir",
        Required = true,
        HelpText = "The root directory to start recursive searching from.")]
    public string Directory { get; set; } = null!;

    [Option('w', "workspace",
        Required = true,
        HelpText = "The workspace directory, or repository root directory.")]
    public string WorkspaceDirectory { get; set; } = null!;

    // Assigns the last segment of a "/" delimited value, for example "dotnet/samples" -> "samples".
    static void ParseAndAssign(string? value, Action<string> assign)
    {
        if (value is { Length: > 0 })
        {
            assign(value.Split("/")[^1]);
        }
    }
}
The preceding action inputs class defines several required inputs for the app to run
successfully. The constructor will write the "GREETINGS" environment variable value, if
one is available in the current execution environment. The Name and Branch properties
are parsed and assigned from the last segment of a "/" delimited string.
With the defined action inputs class, focus on the Program.cs file.
C#
using System.Text;
using CommandLine;
using DotNet.GitHubAction;
using DotNet.GitHubAction.Extensions;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using static CommandLine.Parser;

HostApplicationBuilder builder = Host.CreateApplicationBuilder(args);
builder.Services.AddGitHubActionServices();
using IHost host = builder.Build();

ParserResult<ActionInputs> parser = Default.ParseArguments<ActionInputs>(() => new(), args);
parser.WithNotParsed(errors =>
{
    Get<ILoggerFactory>(host)
        .CreateLogger("DotNet.GitHubAction.Program")
        .LogError(string.Join(Environment.NewLine, errors.Select(error => error.ToString())));

    Environment.Exit(2);
});

await parser.WithParsedAsync(
    async options => await StartAnalysisAsync(options, host));
await host.RunAsync();

static TService Get<TService>(IHost host) where TService : notnull =>
    host.Services.GetRequiredService<TService>();

static async ValueTask StartAnalysisAsync(ActionInputs inputs, IHost host)
{
    // Analysis implementation omitted for brevity...
    await ValueTask.CompletedTask;

    Environment.Exit(0);
}
The Program file is simplified for brevity, to explore the full sample source, see
Program.cs . The mechanics in place demonstrate the boilerplate code required to use:
Top-level statements
Generic Host
Dependency injection
External project or package references can be used, and registered with dependency
injection. Get<TService> is a static local function, which requires the IHost instance and is used to resolve required services. With the CommandLine.Parser.Default singleton, the app gets a parser instance from the args. When the arguments can't be parsed, the app exits with a non-zero exit code. For more information, see Setting exit
codes for actions .
When the args are successfully parsed, the app was called correctly with the required
inputs. In this case, a call to the primary functionality StartAnalysisAsync is made.
To write output values, you must follow the format recognized by GitHub Actions:
Setting an output parameter .
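As a minimal sketch (not taken from the sample app), a containerized .NET action can write an output parameter by appending a name=value pair to the file that the runner exposes through the GITHUB_OUTPUT environment variable; updated-metrics is the output name consumed later in this tutorial's workflow:

C#
using System;
using System.IO;

// Append an output parameter in the name=value format that GitHub Actions recognizes.
// The runner sets GITHUB_OUTPUT to the path of a file it reads after the step completes.
var githubOutputFile = Environment.GetEnvironmentVariable("GITHUB_OUTPUT");
if (!string.IsNullOrEmpty(githubOutputFile))
{
    File.AppendAllText(githubOutputFile, $"updated-metrics=true{Environment.NewLine}");
}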
The virtual environment where the GitHub Action is hosted may or may not have .NET
installed. For information about what is preinstalled in the target environment, see
GitHub Actions Virtual Environments . While it's possible to run .NET CLI commands
from the GitHub Actions workflows, for a more fully functioning .NET-based GitHub
Action, we recommend that you containerize the app. For more information, see
Containerize a .NET app.
The Dockerfile
A Dockerfile is a set of instructions to build an image. For .NET applications, the
Dockerfile usually sits in the root of the directory next to a solution file.
Dockerfile
# Set the base image as the .NET 7.0 SDK (this includes the runtime)
FROM mcr.microsoft.com/dotnet/sdk:7.0 as build-env

# Copy everything and publish the release (publish implicitly restores and builds)
WORKDIR /app
COPY . ./
RUN dotnet publish ./DotNet.GitHubAction/DotNet.GitHubAction.csproj -c Release -o out --no-self-contained

# Relayer the .NET SDK, anew with the build output
FROM mcr.microsoft.com/dotnet/sdk:7.0
COPY --from=build-env /app/out .
ENTRYPOINT [ "dotnet", "/DotNet.GitHubAction.dll" ]
Note
The .NET app in this tutorial relies on the .NET SDK as part of its functionality. The Dockerfile creates a new set of Docker layers, independent from the previous ones. It starts from scratch with the SDK image and adds the build output from the previous set of layers. Applications that don't require the .NET SDK as part of their functionality should rely on just the .NET runtime instead, which greatly reduces the size of the image.
Dockerfile
FROM mcr.microsoft.com/dotnet/runtime:7.0
Warning
Pay close attention to every step within the Dockerfile, as it does differ from the
standard Dockerfile created from the "add docker support" functionality. In
particular, the last few steps vary by not specifying a new WORKDIR which would
change the path to the app's ENTRYPOINT .
The preceding Dockerfile steps include:
Setting the base image from mcr.microsoft.com/dotnet/sdk:7.0 as the alias build-env.
Copying the contents of the repository and publishing the .NET app.
Relayering the .NET SDK image and copying the published build output from the build-env layers.
Defining the entry point, which delegates to dotnet /DotNet.GitHubAction.dll.
Caution
If you use a global.json file to pin the SDK version, you should explicitly refer to that
version in your Dockerfile. For example, if you've used global.json to pin SDK version
5.0.300 , your Dockerfile should use mcr.microsoft.com/dotnet/sdk:5.0.300 . This
prevents breaking the GitHub Actions when a new minor revision is released.
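A Docker container action also needs an action metadata file (action.yml) in the repository root that declares its inputs and outputs and tells GitHub to build and run the Dockerfile. The following is only a minimal sketch — the input names mirror the ActionInputs class above, and the descriptions and output are illustrative:

yml
name: 'A .NET code metrics action (sketch)'
description: 'Illustrative metadata for a Docker container action.'
inputs:
  owner:
    description: 'The owner of the repository, for example: dotnet.'
    required: true
  name:
    description: 'The repository name, for example: samples.'
    required: true
  branch:
    description: 'The branch name, for example: refs/heads/main.'
    required: true
  dir:
    description: 'The root directory to start recursive searching from.'
    required: true
outputs:
  updated-metrics:
    description: 'Whether or not the CODE_METRICS.md file was updated.'
runs:
  using: 'docker'
  image: 'Dockerfile'
  args:
  - '-o'
  - ${{ inputs.owner }}
  - '-n'
  - ${{ inputs.name }}
  - '-b'
  - ${{ inputs.branch }}
  - '-d'
  - ${{ inputs.dir }}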
You should explore the pre-defined environment variables and use them accordingly.
Workflow composition
With the .NET app containerized, and the action inputs and outputs defined, you're
ready to consume the action. GitHub Actions are not required to be published in the
GitHub Marketplace to be used. Workflows are defined in the .github/workflows
directory of a repository as YAML files.
yml
# The name of the workflow. Badges will use this name
name: '.NET code metrics'

on:
  push:
    branches: [ main ]
    paths:
    - 'github-actions/DotNet.GitHubAction/**'               # run on all changes to this dir
    - '!github-actions/DotNet.GitHubAction/CODE_METRICS.md'  # ignore this file
  workflow_dispatch:
    inputs:
      reason:
        description: 'The reason for running the workflow'
        required: true
        default: 'Manual run'

jobs:
  analysis:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
    - uses: actions/checkout@v3
In the preceding workflow file:
The name of the workflow is '.NET code metrics'. This name is also what's used when creating a workflow status badge.
The on node defines when and how the action is triggered.
The jobs node outlines the various jobs and steps within each job. Individual steps consume GitHub Actions.
yml
steps:
- uses: actions/checkout@v3
The jobs.steps represents the workflow composition. Steps are orchestrated such that
they're sequential, communicative, and composable. With various GitHub Actions
representing steps, each having inputs and outputs, workflows can be composed.
A conditional step, named Create pull request, runs when the dotnet-code-metrics step specifies an output parameter of updated-metrics with a value of true.
Important
GitHub allows for the creation of encrypted secrets . Secrets can be used within
workflow composition, using the ${{ secrets.SECRET_NAME }} syntax. In the context
of a GitHub Action, there is a GitHub token that is automatically populated by
default: ${{ secrets.GITHUB_TOKEN }} . For more information, see Context and
expression syntax for GitHub Actions .
The generated CODE_METRICS.md file is navigable. This file represents the hierarchy
of the projects it analyzed. Each project has a top-level section, and an emoji that
represents the overall status of the highest cyclomatic complexity for nested objects. As
you navigate the file, each section exposes drill-down opportunities with a summary of
each area. The markdown has collapsible sections as an added convenience.
In action
The workflow specifies that on a push to the main branch, the action is triggered to run. When it runs, the Actions tab in GitHub reports the live log stream of its execution.
Performance improvements
If you followed along with the sample, you might have noticed that every time this action is used, it does a docker build for that image. So, every trigger incurs some time to build the container before running it. Before releasing your GitHub Actions to the marketplace, you should build the Docker image ahead of time and push it to a container registry, such as the GitHub Container Registry, so that the action pulls a prebuilt image instead of building one on every run.
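With a prebuilt image published to a registry, the action metadata can reference that image directly instead of building the Dockerfile on every run — a sketch, using a hypothetical image name:

yml
runs:
  using: 'docker'
  # Hypothetical prebuilt image published to the GitHub Container registry.
  image: 'docker://ghcr.io/some-owner/some-action:1.0.0'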
For more information, see GitHub Docs: Working with the Container registry .
See also
.NET Generic Host
Dependency injection in .NET
Code metrics values
Open-source GitHub Action built with .NET, with a workflow for building and pushing the Docker image automatically.
Next steps
.NET GitHub Actions sample code
Quickstart: Create a build validation
GitHub workflow
Article • 10/07/2022
In this quickstart, you will learn how to create a GitHub workflow to validate the
compilation of your .NET source code in GitHub. Compiling your .NET code is one of the
most basic validation steps that you can take to help ensure the quality of updates to
your code. If code doesn't compile (or build), it's an easy deterrent and should be a clear
sign that the code needs to be fixed.
Prerequisites
A GitHub account .
A .NET source code repository.
Important
Workflow files typically define a composition of one or more GitHub Actions via the jobs.<job_id>/steps[*] node. For more information, see Workflow syntax for GitHub Actions.
Create a new file named build-validation.yml, copy and paste the following YML
contents into it:
yml
name: build

on:
  push:
  pull_request:
    branches: [ main ]
    paths:
    - '**.cs'
    - '**.csproj'

env:
  DOTNET_VERSION: '6.0.401' # The .NET SDK version to use

jobs:
  build:
    name: build-${{matrix.os}}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macOS-latest]
    steps:
    - uses: actions/checkout@v3
    - name: Setup .NET Core
      uses: actions/setup-dotnet@v3
      with:
        dotnet-version: ${{ env.DOTNET_VERSION }}
    - name: Install dependencies
      run: dotnet restore
    - name: Build
      run: dotnet build --configuration Release --no-restore
The name: build defines the name of the workflow; "build" will appear in workflow status badges.

yml
name: build

The on node signifies the events that trigger the workflow:

yml
on:
  push:
  pull_request:
    branches: [ main ]
    paths:
    - '**.cs'
    - '**.csproj'
Triggered when a push or pull_request occurs on the main branch and any changed files end with the .cs or .csproj file extensions.
yml
env:
  DOTNET_VERSION: '6.0.401' # The .NET SDK version to use
The jobs node builds out the steps for the workflow to take.
yml
jobs:
  build:
    name: build-${{matrix.os}}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macOS-latest]
    steps:
    - uses: actions/checkout@v3
    - name: Setup .NET Core
      uses: actions/setup-dotnet@v3
      with:
        dotnet-version: ${{ env.DOTNET_VERSION }}
    - name: Install dependencies
      run: dotnet restore
    - name: Build
      run: dotnet build --configuration Release --no-restore
There is a single job, named build-<os> where the <os> is the operating system
name from the strategy/matrix . The name and runs-on elements are dynamic
for each value in the matrix/os . This will run on the latest versions of Ubuntu,
Windows, and macOS.
In this case, think of a workflow file as a composition that represents the various steps to build an application. Many .NET CLI commands are available, most of which could be used in the context of a GitHub Action.

Create a workflow status badge
1. Navigate to the Actions tab of the repository in GitHub.
2. All repository workflows are displayed on the left side; select the desired workflow and the ellipsis (...) button. The ellipsis (...) button expands the menu options for the selected workflow.
3. Select the Create status badge menu option.
4. Copy the status badge Markdown.
5. Paste the Markdown into the README.md file, save the file, then commit and push the changes.
See also
dotnet restore
dotnet build
actions/checkout
actions/setup-dotnet
Next steps
Quickstart: Create a .NET test GitHub workflow
Quickstart: Create a test validation
GitHub workflow
Article • 10/07/2022
In this quickstart, you will learn how to create a GitHub workflow to test your .NET
source code. Automatically testing your .NET code within GitHub is referred to as
continuous integration (CI), where pull requests or changes to the source trigger
workflows to exercise. Along with building the source code, testing ensures that the compiled source code functions as the author intended. More often than not, unit tests serve as an immediate feedback loop to help ensure the validity of changes to source code.
Prerequisites
A GitHub account .
A .NET source code repository.
Important
Workflow files typically define a composition of one or more GitHub Actions via the jobs.<job_id>/steps[*] node. For more information, see Workflow syntax for GitHub Actions.
Create a new file named build-and-test.yml, copy and paste the following YML contents
into it:
yml
name: build and test

on:
  push:
  pull_request:
    branches: [ main ]
    paths:
    - '**.cs'
    - '**.csproj'

env:
  DOTNET_VERSION: '6.0.401' # The .NET SDK version to use

jobs:
  build-and-test:
    name: build-and-test-${{matrix.os}}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macOS-latest]
    steps:
    - uses: actions/checkout@v3
    - name: Setup .NET Core
      uses: actions/setup-dotnet@v3
      with:
        dotnet-version: ${{ env.DOTNET_VERSION }}
    - name: Install dependencies
      run: dotnet restore
    - name: Build
      run: dotnet build --configuration Release --no-restore
    - name: Test
      run: dotnet test --no-restore --verbosity normal
The name: build and test defines the name of the workflow; "build and test" will appear in workflow status badges.

yml
name: build and test

The on node signifies the events that trigger the workflow:

yml
on:
  push:
  pull_request:
    branches: [ main ]
    paths:
    - '**.cs'
    - '**.csproj'
Triggered when a push or pull_request occurs on the main branch and any changed files end with the .cs or .csproj file extensions.
yml
env:
  DOTNET_VERSION: '6.0.401' # The .NET SDK version to use
The jobs node builds out the steps for the workflow to take.
yml
jobs:
  build-and-test:
    name: build-and-test-${{matrix.os}}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macOS-latest]
    steps:
    - uses: actions/checkout@v3
    - name: Setup .NET Core
      uses: actions/setup-dotnet@v3
      with:
        dotnet-version: ${{ env.DOTNET_VERSION }}
    - name: Install dependencies
      run: dotnet restore
    - name: Build
      run: dotnet build --configuration Release --no-restore
    - name: Test
      run: dotnet test --no-restore --verbosity normal
There is a single job, named build-and-test-<os> where the <os> is the operating system name from the strategy/matrix. The name and runs-on elements are dynamic for each value in the matrix/os. This runs on the latest versions of Ubuntu, Windows, and macOS.
The actions/setup-dotnet@v3 GitHub Action is used to setup the .NET SDK with
the specified version from the DOTNET_VERSION environment variable.
The dotnet restore command is called.
The dotnet build command is called.
The dotnet test command is called.
Create a workflow status badge
1. Navigate to the Actions tab of the repository in GitHub.
2. All repository workflows are displayed on the left side; select the desired workflow and the ellipsis (...) button. The ellipsis (...) button expands the menu options for the selected workflow.
3. Select the Create status badge menu option.
4. Copy the status badge Markdown.
5. Paste the Markdown into the README.md file, save the file, then commit and push the changes.
See also
dotnet restore
dotnet build
dotnet test
Unit testing .NET apps
actions/checkout
actions/setup-dotnet
Next steps
Quickstart: Create a GitHub workflow to publish your .NET app
Quickstart: Create a GitHub workflow to
publish an app
Article • 10/07/2022
In this quickstart, you will learn how to create a GitHub workflow to publish your .NET app from source code. Automatically publishing your .NET app from GitHub to a destination is referred to as continuous deployment (CD). There are many possible destinations for publishing an application; in this quickstart, you'll publish to Azure.
Prerequisites
A GitHub account .
A .NET source code repository.
An Azure account with an active subscription. Create an account for free .
An ASP.NET Core web app.
An Azure App Service resource.
Download the publish profile for your Azure App Service resource from the Azure portal (on the app's Overview page, select Get publish profile).

Warning
The publish profile contains sensitive information, such as credentials for accessing your Azure App Service resource. This information should always be treated very carefully.
In the GitHub repository, navigate to Settings and select Secrets from the left navigation
menu. Select New repository secret, to add a new secret.
Enter AZURE_PUBLISH_PROFILE as the Name, and paste the XML content from the publish
profile into the Value text area. Select Add secret. For more information, see Encrypted
secrets.
Important
Workflow files typically define a composition of one or more GitHub Actions via the jobs.<job_id>/steps[*] node. For more information, see Workflow syntax for GitHub Actions.
Create a new file named publish-app.yml, copy and paste the following YML contents
into it:
yml
name: publish

on:
  push:
    branches: [ production ]

env:
  AZURE_WEBAPP_NAME: DotNetWeb
  AZURE_WEBAPP_PACKAGE_PATH: '.'   # Set this to the path to your web app project, defaults to the repository root
  DOTNET_VERSION: '6.0.401'        # The .NET SDK version to use

jobs:
  publish:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v3

    - name: Setup .NET Core
      uses: actions/setup-dotnet@v3
      with:
        dotnet-version: ${{ env.DOTNET_VERSION }}

    - name: Install dependencies
      run: dotnet restore

    - name: Build
      run: |
        cd DotNet.WebApp
        dotnet build --configuration Release --no-restore
        dotnet publish -c Release -o ../dotnet-webapp -r linux-x64 --self-contained true /p:UseAppHost=true

    - name: Test
      run: |
        cd DotNet.WebApp.Tests
        dotnet test --no-restore --verbosity normal

    - uses: azure/webapps-deploy@v2
      name: Deploy
      with:
        app-name: ${{ env.AZURE_WEBAPP_NAME }}
        publish-profile: ${{ secrets.AZURE_PUBLISH_PROFILE }}
        package: '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/dotnet-webapp'
yml
name: publish
yml
on:
  push:
    branches: [ production ]
yml
env:
  AZURE_WEBAPP_NAME: DotNetWeb
  AZURE_WEBAPP_PACKAGE_PATH: '.'   # Set this to the path to your web app project, defaults to the repository root
  DOTNET_VERSION: '6.0.401'        # The .NET SDK version to use
The jobs node builds out the steps for the workflow to take.
yml
jobs:
  publish:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v3

    - name: Setup .NET Core
      uses: actions/setup-dotnet@v3
      with:
        dotnet-version: ${{ env.DOTNET_VERSION }}

    - name: Install dependencies
      run: dotnet restore

    - name: Build
      run: |
        cd DotNet.WebApp
        dotnet build --configuration Release --no-restore
        dotnet publish -c Release -o ../dotnet-webapp -r linux-x64 --self-contained true /p:UseAppHost=true

    - name: Test
      run: |
        cd DotNet.WebApp.Tests
        dotnet test --no-restore --verbosity normal

    - uses: azure/webapps-deploy@v2
      name: Deploy
      with:
        app-name: ${{ env.AZURE_WEBAPP_NAME }}
        publish-profile: ${{ secrets.AZURE_PUBLISH_PROFILE }}
        package: '${{ env.AZURE_WEBAPP_PACKAGE_PATH }}/dotnet-webapp'
There is a single job, named publish, that runs on the latest version of Ubuntu.
The actions/setup-dotnet@v3 GitHub Action is used to set up the .NET SDK with
the specified version from the DOTNET_VERSION environment variable.
The dotnet restore command is called.
The dotnet build command is called.
The dotnet publish command is called.
The dotnet test command is called.
The azure/webapps-deploy@v2 GitHub Action deploys the app with the given
publish-profile and package .
Create a workflow status badge
1. Navigate to the Actions tab of the repository in GitHub.
2. All repository workflows are displayed on the left side; select the desired workflow and the ellipsis (...) button. The ellipsis (...) button expands the menu options for the selected workflow.
3. Select the Create status badge menu option.
4. Copy the status badge Markdown.
5. Paste the Markdown into the README.md file, save the file, then commit and push the changes.
See also
dotnet restore
dotnet build
dotnet test
dotnet publish
Next steps
Quickstart: Create a CodeQL GitHub workflow
Quickstart: Create a security scan
GitHub workflow
Article • 02/18/2022
In this quickstart, you will learn how to create a CodeQL GitHub workflow to automate
the discovery of vulnerabilities in your .NET codebase.
In CodeQL, code is treated as data. Security vulnerabilities, bugs, and other errors
are modeled as queries that can be executed against databases extracted from
code.
Prerequisites
A GitHub account .
A .NET source code repository.
Important
Workflow files typically define a composition of one or more GitHub Actions via the jobs.<job_id>/steps[*] node. For more information, see Workflow syntax for GitHub Actions.
Create a new file named codeql-analysis.yml, copy and paste the following YML contents
into it:
yml
name: "CodeQL"

on:
  push:
    branches: [main]
    paths:
    - '**.cs'
    - '**.csproj'
  pull_request:
    branches: [main]
    paths:
    - '**.cs'
    - '**.csproj'
  schedule:
  - cron: '0 8 * * 4'

jobs:
  analyze:
    name: analyze
    runs-on: ubuntu-latest

    strategy:
      fail-fast: false
      matrix:
        language: ['csharp']

    steps:
    - name: Checkout repository
      uses: actions/checkout@v3
      with:
        fetch-depth: 2

    - name: Initialize CodeQL
      uses: github/codeql-action/init@v1
      with:
        languages: ${{ matrix.language }}

    - name: Autobuild
      uses: github/codeql-action/autobuild@v1

    - name: Perform CodeQL Analysis
      uses: github/codeql-action/analyze@v1
The name: CodeQL defines the name of the workflow; "CodeQL" will appear in workflow status badges.
yml
name: "CodeQL"
The on node signifies the events that trigger the workflow:
yml
on:
  push:
    branches: [main]
    paths:
    - '**.cs'
    - '**.csproj'
  pull_request:
    branches: [main]
    paths:
    - '**.cs'
    - '**.csproj'
  schedule:
  - cron: '0 8 * * 4'
Triggered when a push or pull_request occurs on the main branch and any changed files end with the .cs or .csproj file extensions.
Also triggered as a cron job (on a schedule), to run at 8:00 UTC every Thursday.
The jobs node builds out the steps for the workflow to take.
yml
jobs:
  analyze:
    name: analyze
    runs-on: ubuntu-latest

    strategy:
      fail-fast: false
      matrix:
        language: ['csharp']

    steps:
    - name: Checkout repository
      uses: actions/checkout@v3
      with:
        fetch-depth: 2
There is a single job, named analyze that will run on the latest version of
Ubuntu.
The strategy defines C# as the language .
The github/codeql-action/init@v1 GitHub Action is used to initialize CodeQL.
The github/codeql-action/autobuild@v1 GitHub Action builds the .NET project.
The github/codeql-action/analyze@v1 GitHub Action performs the CodeQL
analysis.
Create a workflow status badge
1. Navigate to the Actions tab of the repository in GitHub.
2. All repository workflows are displayed on the left side; select the desired workflow and the ellipsis (...) button. The ellipsis (...) button expands the menu options for the selected workflow.
3. Select the Create status badge menu option.
4. Copy the status badge Markdown.
5. Paste the Markdown into the README.md file, save the file, then commit and push the changes.
See also
Secure coding guidelines
actions/checkout
actions/setup-dotnet
Next steps
Tutorial: Create a GitHub Action with .NET
Testing in .NET
Article • 12/16/2023
This article introduces the concept of testing and illustrates how different kinds of tests
can be used to validate code. Various tools are available for testing .NET applications,
such as the .NET CLI or Integrated Development Environments (IDEs).
Test types
Automated tests are a great way to ensure that the application code does what its
authors intend. This article covers unit tests, integration tests, and load tests.
Unit tests
A unit test is a test that exercises individual software components or methods, also
known as a "unit of work." Unit tests should only test code within the developer's
control. They don't test infrastructure concerns. Infrastructure concerns include
interacting with databases, file systems, and network resources.
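For example — a minimal sketch using xUnit, with a hypothetical Calculator class — a unit test exercises a single method in isolation:

C#
using Xunit;

public class Calculator
{
    public int Add(int a, int b) => a + b;
}

public class CalculatorTests
{
    [Fact]
    public void Add_TwoPositiveNumbers_ReturnsSum()
    {
        var calculator = new Calculator();

        int actual = calculator.Add(2, 3);

        Assert.Equal(5, actual);
    }
}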
Integration tests
An integration test differs from a unit test in that it exercises two or more software
components' ability to function together, also known as their "integration." These tests
operate on a broader spectrum of the system under test, whereas unit tests focus on
individual components. Often, integration tests do include infrastructure concerns.
Load tests
A load test aims to determine whether or not a system can handle a specified load. For
example, the number of concurrent users using an application and the app's ability to
handle interactions responsively. For more information on load testing of web
applications, see ASP.NET Core load/stress testing.
Test considerations
Keep in mind there are best practices for writing tests. For example, Test Driven
Development (TDD) is when you write a unit test before the code it's meant to check.
TDD is like creating an outline for a book before you write it. The unit test helps
developers write simpler, readable, and efficient code.
Testing tools
.NET is a multi-language development platform, and you can write various test types for
C#, F#, and Visual Basic. For each of these languages, you can choose between several
test frameworks.
xUnit
xUnit is a free, open-source, community-focused unit testing tool for .NET. The
original inventor of NUnit v2 wrote xUnit.net. xUnit.net is the latest technology for unit
testing .NET apps. It also works with ReSharper, CodeRush, TestDriven.NET, and
Xamarin . xUnit.net is a project of the .NET Foundation and operates under its code
of conduct.
NUnit
NUnit is a unit-testing framework for all .NET languages. Initially, NUnit was ported
from JUnit, and the current production release has been rewritten with many new
features and support for a wide range of .NET platforms. It's a project of the .NET
Foundation .
MSTest
MSTest is the Microsoft test framework for all .NET languages. It's extensible and
works with both .NET CLI and Visual Studio. For more information, see the following
resources:
Unit testing with C#
Unit testing with F#
Unit testing with Visual Basic
MSTest runner
The MSTest runner is a lightweight and portable alternative to VSTest for running tests
in continuous integration (CI) pipelines, and in Visual Studio Test Explorer. For more
information, see MSTest runner overview.
.NET CLI
You can run a solution's unit tests from the .NET CLI with the dotnet test command. The
.NET CLI exposes most of the functionality that Integrated Development Environments
(IDEs) make available through user interfaces. The .NET CLI is cross-platform and
available to use as part of continuous integration and delivery pipelines. The .NET CLI is
used with scripted processes to automate common tasks.
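For example, a CI step can build and run all tests for the current solution or project with a single command (the options shown are illustrative):

.NET CLI
dotnet test --configuration Release --verbosity normal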
IDE
Whether you're using Visual Studio or Visual Studio Code, there are graphical user
interfaces for testing functionality. There are more features available to IDEs than the
CLI, for example, Live Unit Testing. For more information, see Including and excluding
tests with Visual Studio.
See also
For more information, see the following articles:
Unit testing best practices with .NET
Core and .NET Standard
Article • 11/04/2022
There are numerous benefits of writing unit tests; they help with regression, provide
documentation, and facilitate good design. However, hard to read and brittle unit tests
can wreak havoc on your code base. This article describes some best practices regarding
unit test design for your .NET Core and .NET Standard projects.
In this guide, you learn some best practices when writing unit tests to keep your tests
resilient and easy to understand.
Manual (functional) testing of an application is time consuming: it typically involves opening the application and performing a series of steps to validate the expected behavior. Unit tests, on the other hand, take milliseconds, can be run at the press of a button, and don't necessarily require any knowledge of the system at large. Whether or not a test passes or fails is up to the test runner, not the individual.

With unit testing, it's possible to rerun your entire suite of tests after every build or even after you change a line of code. This gives you confidence that your new code doesn't break existing functionality.
Executable documentation
It might not always be obvious what a particular method does or how it behaves given a
certain input. You might ask yourself: How does this method behave if I pass it a blank
string? Null?
When you have a suite of well-named unit tests, each test should be able to clearly
explain the expected output for a given input. In addition, it should be able to verify that
it actually works.
Less coupled code
Writing tests for your code will naturally decouple your code, because otherwise it would be more difficult to test.
Code coverage
A high code coverage percentage is often associated with a higher quality of code.
However, the measurement itself can't determine the quality of code. Setting an overly
ambitious code coverage percentage goal can be counterproductive. Imagine a complex
project with thousands of conditional branches, and imagine that you set a goal of 95%
code coverage. Currently the project maintains 90% code coverage. The amount of time
it takes to account for all of the edge cases in the remaining 5% could be a massive
undertaking, and the value proposition quickly diminishes.
A high code coverage percentage isn't an indicator of success, nor does it imply high
code quality. It just represents the amount of code that is covered by unit tests. For
more information, see unit testing code coverage.
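For example, when the test project references the coverlet.collector package (as the xUnit project templates used in these tutorials do), coverage data can be collected from the CLI — a sketch:

.NET CLI
dotnet test --collect:"XPlat Code Coverage"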
When talking about test doubles, a few terms come up repeatedly:

Fake - A fake is a generic term that can be used to describe either a stub or a mock object. Whether it's a stub or a mock depends on the context in which it's used. So in other words, a fake can be a stub or a mock.

Mock - A mock object is a fake object in the system that decides whether or not a unit test has passed or failed. A mock starts out as a Fake until it's asserted against.

Stub - A stub is a controllable replacement for an existing dependency (or collaborator) in the system. By using a stub, you can test your code without dealing with the dependency directly. By default, a stub starts out as a fake.
C#
var mockOrder = new MockOrder();
var purchase = new Purchase(mockOrder);

purchase.ValidateOrders();

Assert.True(purchase.CanBeShipped);
The preceding example would be of a stub being referred to as a mock. In this case, it's
a stub. You're just passing in the Order as a means to be able to instantiate Purchase
(the system under test). The name MockOrder is also misleading because again, the order
isn't a mock.
C#
var fakeOrder = new FakeOrder();
var purchase = new Purchase(fakeOrder);

purchase.ValidateOrders();

Assert.True(purchase.CanBeShipped);
By renaming the class to FakeOrder , you've made the class a lot more generic. The class
can be used as a mock or a stub, whichever is better for the test case. In the preceding
example, FakeOrder is used as a stub. You're not using FakeOrder in any shape or form
during the assert. FakeOrder was passed into the Purchase class to satisfy the
requirements of the constructor.
C#
var mockOrder = new FakeOrder();
var purchase = new Purchase(mockOrder);

purchase.ValidateOrders();

Assert.True(mockOrder.Validated);
In this case, you're checking a property on the Fake (asserting against it), so in the
preceding code snippet, the mockOrder is a Mock.
Important
It's important to get this terminology correct. If you call your stubs "mocks," other
developers are going to make false assumptions about your intent.
The main thing to remember about mocks versus stubs is that mocks are just like stubs,
but you assert against the mock object, whereas you don't assert against a stub.
Best practices
Try not to introduce dependencies on infrastructure when writing unit tests. The
dependencies make the tests slow and brittle and should be reserved for integration
tests. You can avoid these dependencies in your application by following the Explicit
Dependencies Principle and using Dependency Injection. You can also keep your unit
tests in a separate project from your integration tests. This approach ensures your unit
test project doesn't have references to or dependencies on infrastructure packages.
Name your tests
The name of your test should consist of three parts:
The name of the method being tested.
The scenario under which it's being tested.
The expected behavior when the scenario is invoked.

Why?
Naming standards are important because they explicitly express the intent of the test. Tests are more than just making sure your code works, they also provide documentation. Just by looking at the suite of unit tests, you should be able to infer the behavior of your code without even looking at the code itself. Additionally, when tests fail, you can see exactly which scenarios don't meet your expectations.
Bad:
C#
[Fact]
public void Test_Single()
{
    var stringCalculator = new StringCalculator();

    var actual = stringCalculator.Add("0");

    Assert.Equal(0, actual);
}
Better:
C#
[Fact]
public void Add_SingleNumber_ReturnsSameNumber()
{
    var stringCalculator = new StringCalculator();

    var actual = stringCalculator.Add("0");

    Assert.Equal(0, actual);
}
Arrange your tests
The "Arrange, Act, Assert" pattern is a common approach for writing tests: arrange your objects (create and set them up as necessary), act on an object, and then assert that something is as expected.

Why?
Clearly separates what is being tested from the arrange and assert steps.
Less chance to intermix assertions with "Act" code.
Readability is one of the most important aspects when writing a test. Separating each of
these actions within the test clearly highlight the dependencies required to call your
code, how your code is being called, and what you're trying to assert. While it might be
possible to combine some steps and reduce the size of your test, the primary goal is to
make the test as readable as possible.
Bad:
C#
[Fact]
public void Add_EmptyString_ReturnsZero()
{
// Arrange
var stringCalculator = new StringCalculator();
// Assert
Assert.Equal(0, stringCalculator.Add(""));
}
Better:
C#
[Fact]
public void Add_EmptyString_ReturnsZero()
{
// Arrange
var stringCalculator = new StringCalculator();
// Act
var actual = stringCalculator.Add("");
// Assert
Assert.Equal(0, actual);
}
Write minimally passing tests
The input to be used in a unit test should be the simplest possible in order to verify the
behavior that you're currently testing.
Why?
Tests that include more information than required to pass the test have a higher chance
of introducing errors into the test and can make the intent of the test less clear. When
writing tests, you want to focus on the behavior. Setting extra properties on models or
using non-zero values when not required, only detracts from what you are trying to
prove.
Bad:
C#
[Fact]
public void Add_SingleNumber_ReturnsSameNumber()
{
    var stringCalculator = new StringCalculator();

    var actual = stringCalculator.Add("42");

    Assert.Equal(42, actual);
}
Better:
C#
[Fact]
public void Add_SingleNumber_ReturnsSameNumber()
{
    var stringCalculator = new StringCalculator();

    var actual = stringCalculator.Add("0");

    Assert.Equal(0, actual);
}
Avoid magic strings
Naming variables in unit tests is important, if not more important, than naming variables
in production code. Unit tests shouldn't contain magic strings.
Why?
Prevents the need for the reader of the test to inspect the production code in
order to figure out what makes the value special.
Explicitly shows what you're trying to prove rather than trying to accomplish.
Magic strings can cause confusion to the reader of your tests. If a string looks out of the
ordinary, they might wonder why a certain value was chosen for a parameter or return
value. This type of string value might lead them to take a closer look at the
implementation details, rather than focus on the test.
Tip
When writing tests, you should aim to express as much intent as possible. In the
case of magic strings, a good approach is to assign these values to constants.
Bad:
C#
[Fact]
public void Add_BigNumber_ThrowsException()
{
    var stringCalculator = new StringCalculator();

    Action actual = () => stringCalculator.Add("1001");

    Assert.Throws<OverflowException>(actual);
}
Better:
C#
[Fact]
void Add_MaximumSumResult_ThrowsOverflowException()
{
    var stringCalculator = new StringCalculator();
    const string MAXIMUM_RESULT = "1001";

    Action actual = () => stringCalculator.Add(MAXIMUM_RESULT);

    Assert.Throws<OverflowException>(actual);
}
Avoid logic in tests
When writing your unit tests, avoid manual string concatenation and logical conditions, such as if, while, for, and switch.

Why?
Less chance to introduce a bug inside of your tests.
Focus on the end result, rather than implementation details.
When you introduce logic into your test suite, the chance of introducing a bug into it
increases dramatically. The last place that you want to find a bug is within your test
suite. You should have a high level of confidence that your tests work, otherwise, you
won't trust them. Tests that you don't trust, don't provide any value. When a test fails,
you want to have a sense that something is wrong with your code and that it can't be
ignored.
Tip
If logic in your test seems unavoidable, consider splitting the test up into two or
more different tests.
Bad:
C#
[Fact]
public void Add_MultipleNumbers_ReturnsCorrectResults()
{
var stringCalculator = new StringCalculator();
var expected = 0;
var testCases = new[]
{
"0,0,0",
"0,1,2",
"1,2,3"
};
foreach (var test in testCases)
{
Assert.Equal(expected, stringCalculator.Add(test));
expected += 3;
}
}
Better:
C#
[Theory]
[InlineData("0,0,0", 0)]
[InlineData("0,1,2", 3)]
[InlineData("1,2,3", 6)]
public void Add_MultipleNumbers_ReturnsSumOfNumbers(string input, int expected)
{
    var stringCalculator = new StringCalculator();

    var actual = stringCalculator.Add(input);

    Assert.Equal(expected, actual);
}
Prefer helper methods to setup and teardown
If you require a similar object or state for your tests, prefer a helper method over a Setup or Teardown attribute, if one exists.

Why?
Less confusion when reading the tests since all of the code is visible from within
each test.
Less chance of setting up too much or too little for the given test.
Less chance of sharing state between tests, which creates unwanted dependencies
between them.
In unit testing frameworks, Setup is called before each and every unit test within your
test suite. While some might see this as a useful tool, it generally ends up leading to
bloated and hard to read tests. Each test will generally have different requirements in
order to get the test up and running. Unfortunately, Setup forces you to use the exact
same requirements for each test.
Note
xUnit has removed both SetUp and TearDown as of version 2.x.

Bad:

C#
private readonly StringCalculator stringCalculator;

public StringCalculatorTests()
{
    stringCalculator = new StringCalculator();
}

// more tests...

[Fact]
public void Add_TwoNumbers_ReturnsSumOfNumbers()
{
    var result = stringCalculator.Add("0,1");

    Assert.Equal(1, result);
}
Better:
C#
[Fact]
public void Add_TwoNumbers_ReturnsSumOfNumbers()
{
    var stringCalculator = CreateDefaultStringCalculator();

    var actual = stringCalculator.Add("0,1");

    Assert.Equal(1, actual);
}
C#
// more tests...

private StringCalculator CreateDefaultStringCalculator()
{
    return new StringCalculator();
}
Avoid multiple acts
When writing your tests, try to only include one act per test.

Why?
Multiple acts need to be individually asserted, and it isn't guaranteed that all of the asserts will be executed. In most unit testing frameworks, once an assert fails in a unit test, the remaining asserts are automatically considered to be failing. This kind of process can be confusing, as functionality that is actually working will be shown as failing.
Bad:
C#
[Fact]
public void Add_EmptyEntries_ShouldBeTreatedAsZero()
{
    // Arrange
    var stringCalculator = new StringCalculator();

    // Act
    var actual1 = stringCalculator.Add("");
    var actual2 = stringCalculator.Add(",");

    // Assert
    Assert.Equal(0, actual1);
    Assert.Equal(0, actual2);
}
Better:
C#
[Theory]
[InlineData("", 0)]
[InlineData(",", 0)]
public void Add_EmptyEntries_ShouldBeTreatedAsZero(string input, int
expected)
{
// Arrange
var stringCalculator = new StringCalculator();
// Act
var actual = stringCalculator.Add(input);
// Assert
Assert.Equal(expected, actual);
}
Validate private methods by unit testing public methods
In most cases, there isn't a need to test a private method; private methods are an implementation detail and never exist in isolation. Consider the following example, where a public ParseLogLine method delegates to a private TrimInput method:

C#
public string ParseLogLine(string input)
{
    var sanitizedInput = TrimInput(input);
    return sanitizedInput;
}

private string TrimInput(string input)
{
    return input.Trim();
}

Your first reaction might be to start writing a test for TrimInput because you want to ensure that the method is working as expected. However, it's entirely possible that ParseLogLine manipulates sanitizedInput in such a way that you don't expect, rendering a test against TrimInput useless.

The real test should be done against the public facing method ParseLogLine because that is what you should ultimately care about:

C#
public void ParseLogLine_StartsAndEndsWithSpace_ReturnsTrimmedResult()
{
    var parser = new Parser();

    var result = parser.ParseLogLine(" a ");

    Assert.Equals("a", result);
}
With this viewpoint, if you see a private method, find the public method and write your
tests against that method. Just because a private method returns the expected result,
doesn't mean the system that eventually calls the private method uses the result
correctly.
Stub static references
One of the principles of a unit test is that it must have full control of the system under test. This can be problematic when production code includes calls to static references, for example, DateTime.Now. Consider the following code:

C#
public int GetDiscountedPrice(int price)
{
    if (DateTime.Now.DayOfWeek == DayOfWeek.Tuesday)
    {
        return price / 2;
    }
    else
    {
        return price;
    }
}

How can this code possibly be unit tested? You might try an approach such as:

C#
public void GetDiscountedPrice_NotTuesday_ReturnsFullPrice()
{
    var priceCalculator = new PriceCalculator();

    var actual = priceCalculator.GetDiscountedPrice(2);

    Assert.Equals(2, actual);
}

public void GetDiscountedPrice_OnTuesday_ReturnsHalfPrice()
{
    var priceCalculator = new PriceCalculator();

    var actual = priceCalculator.GetDiscountedPrice(2);

    Assert.Equals(1, actual);
}
Unfortunately, you'll quickly realize that there are a couple of problems with your tests.
If the test suite is run on a Tuesday, the second test will pass, but the first test will
fail.
If the test suite is run on any other day, the first test will pass, but the second test
will fail.
To solve these problems, you'll need to introduce a seam into your production code.
One approach is to wrap the code that you need to control in an interface and have the
production code depend on that interface.
C#
public interface IDateTimeProvider
{
    DayOfWeek DayOfWeek();
}

public int GetDiscountedPrice(int price, IDateTimeProvider dateTimeProvider)
{
    if (dateTimeProvider.DayOfWeek() == DayOfWeek.Tuesday)
    {
        return price / 2;
    }
    else
    {
        return price;
    }
}

The tests can then stub the day of the week, for example with a mocking library:

C#
public void GetDiscountedPrice_NotTuesday_ReturnsFullPrice()
{
    var priceCalculator = new PriceCalculator();
    var dateTimeProviderStub = new Mock<IDateTimeProvider>();
    dateTimeProviderStub.Setup(dtp => dtp.DayOfWeek()).Returns(DayOfWeek.Monday);

    var actual = priceCalculator.GetDiscountedPrice(2, dateTimeProviderStub.Object);

    Assert.Equals(2, actual);
}

public void GetDiscountedPrice_OnTuesday_ReturnsHalfPrice()
{
    var priceCalculator = new PriceCalculator();
    var dateTimeProviderStub = new Mock<IDateTimeProvider>();
    dateTimeProviderStub.Setup(dtp => dtp.DayOfWeek()).Returns(DayOfWeek.Tuesday);

    var actual = priceCalculator.GetDiscountedPrice(2, dateTimeProviderStub.Object);

    Assert.Equals(1, actual);
}
Now the test suite has full control over DateTime.Now and can stub any value when
calling into the method.
Unit testing C# in .NET using dotnet test
and xUnit
Article • 03/07/2024
This tutorial shows how to build a solution containing a unit test project and source
code project. To follow the tutorial using a pre-built solution, view or download the
sample code . For download instructions, see Samples and Tutorials.
txt
/unit-testing-using-dotnet-test
unit-testing-using-dotnet-test.sln
/PrimeService
PrimeService.cs
PrimeService.csproj
/PrimeService.Tests
PrimeService_IsPrimeShould.cs
PrimeServiceTests.csproj
The following instructions provide the steps to create the test solution. See Commands
to create test solution for instructions to create the test solution in one step.
.NET CLI
dotnet new sln -o unit-testing-using-dotnet-test

The dotnet new sln command creates a new solution in the unit-testing-using-dotnet-test directory. Change to the unit-testing-using-dotnet-test directory, then run the following command:
.NET CLI
dotnet new classlib -o PrimeService
The dotnet new classlib command creates a new class library project in the
PrimeService folder. The new class library will contain the code to be tested.
C#
using System;
namespace Prime.Services
{
public class PrimeService
{
public bool IsPrime(int candidate)
{
throw new NotImplementedException("Not implemented.");
}
}
}
.NET CLI
dotnet sln add ./PrimeService/PrimeService.csproj

The dotnet sln add command adds the class library project to the solution. Create the PrimeService.Tests test project by running the following command:

.NET CLI
dotnet new xunit -o PrimeService.Tests

The dotnet new xunit command creates a test project that uses xUnit as the test library, and adds the following package references:

Microsoft.NET.Test.Sdk
xunit
xunit.runner.visualstudio
coverlet.collector

Add the test project to the solution file by running the following command:

.NET CLI
dotnet sln add ./PrimeService.Tests/PrimeService.Tests.csproj

Add the PrimeService class library as a dependency of the test project:

.NET CLI
dotnet add ./PrimeService.Tests/PrimeService.Tests.csproj reference ./PrimeService/PrimeService.csproj

Commands to create test solution
This section summarizes the commands in the previous section. The following commands create the test solution on a Windows machine. For macOS and Unix, update the ren command to the OS version of ren to rename a file:

.NET CLI
dotnet new sln -o unit-testing-using-dotnet-test
cd unit-testing-using-dotnet-test
dotnet new classlib -o PrimeService
ren .\PrimeService\Class1.cs PrimeService.cs
dotnet sln add .\PrimeService\PrimeService.csproj
dotnet new xunit -o PrimeService.Tests
dotnet add .\PrimeService.Tests\PrimeService.Tests.csproj reference .\PrimeService\PrimeService.csproj
dotnet sln add .\PrimeService.Tests\PrimeService.Tests.csproj
Follow the instructions for "Replace the code in PrimeService.cs with the following code"
in the previous section.
Create a test
A popular approach in test driven development (TDD) is to write a (failing) test before
implementing the target code. This tutorial uses the TDD approach. The IsPrime
method is callable, but not implemented. A test call to IsPrime fails. With TDD, a test is
written that is known to fail. The target code is updated to make the test pass. You keep
repeating this approach, writing a failing test and then updating the target code to pass.
Delete PrimeService.Tests/UnitTest1.cs.
Create a PrimeService.Tests/PrimeService_IsPrimeShould.cs file.
Replace the code in PrimeService_IsPrimeShould.cs with the following code:
C#
using Xunit;
using Prime.Services;

namespace Prime.UnitTests.Services
{
    public class PrimeService_IsPrimeShould
    {
        [Fact]
        public void IsPrime_InputIs1_ReturnFalse()
        {
            var primeService = new PrimeService();
            bool result = primeService.IsPrime(1);

            Assert.False(result, "1 should not be prime");
        }
    }
}
The [Fact] attribute declares a test method that's run by the test runner. From the
PrimeService.Tests folder, run dotnet test . The dotnet test command builds both
projects and runs the tests. The xUnit test runner contains the program entry point to
run the tests. dotnet test starts the test runner using the unit test project.
The test fails because IsPrime hasn't been implemented. Using the TDD approach, write
only enough code so this test passes. Update IsPrime with the following code:
C#
public bool IsPrime(int candidate)
{
    if (candidate == 1)
    {
        return false;
    }
    throw new NotImplementedException("Not fully implemented.");
}

Run dotnet test. The test passes. Add tests for additional values less than two, such as 0 and -1.
Copying test code when only a parameter changes results in code duplication and test
bloat. The following xUnit attributes enable writing a suite of similar tests:
[Theory] represents a suite of tests that execute the same code but have different
input arguments.
[InlineData] attribute specifies values for those inputs.
Rather than creating new tests, apply the preceding xUnit attributes to create a single
theory. Replace the following code:
C#
[Fact]
public void IsPrime_InputIs1_ReturnFalse()
{
var primeService = new PrimeService();
bool result = primeService.IsPrime(1);

    Assert.False(result, "1 should not be prime");
}
C#
[Theory]
[InlineData(-1)]
[InlineData(0)]
[InlineData(1)]
public void IsPrime_ValuesLessThan2_ReturnFalse(int value)
{
var result = _primeService.IsPrime(value);

    Assert.False(result, $"{value} should not be prime");
}
In the preceding code, [Theory] and [InlineData] enable testing several values less
than two. Two is the smallest prime number.
Add the following code after the class declaration and before the [Theory] attribute:
C#
private readonly PrimeService _primeService;

public PrimeService_IsPrimeShould()
{
_primeService = new PrimeService();
}
Run dotnet test , and two of the tests fail. To make all of the tests pass, update the
IsPrime method with the following code:
C#
Following the TDD approach, add more failing tests, then update the target code. See
the finished version of the tests and the complete implementation of the library .
The completed IsPrime method is not an efficient algorithm for testing primality.
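For reference, a straightforward (and intentionally unoptimized) trial-division implementation along those lines is sketched below; the finished sample in the repository may differ in details.
C#
public bool IsPrime(int candidate)
{
    if (candidate < 2)
    {
        return false;
    }

    // Trial division: check every divisor from 2 up to the square root of the candidate.
    for (int divisor = 2; divisor * divisor <= candidate; divisor++)
    {
        if (candidate % divisor == 0)
        {
            return false;
        }
    }

    return true;
}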
Additional resources
xUnit.net official site
Testing controller logic in ASP.NET Core
dotnet add reference
Unit testing F# libraries in .NET Core
using dotnet test and xUnit
Article • 09/15/2021
This tutorial takes you through an interactive experience building a sample solution
step-by-step to learn unit testing concepts. If you prefer to follow the tutorial using a
pre-built solution, view or download the sample code before you begin. For download
instructions, see Samples and Tutorials.
This article is about testing a .NET Core project. If you're testing an ASP.NET Core
project, see Integration tests in ASP.NET Core.
/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService
Make MathService the current directory, and run dotnet new classlib -lang "F#" to
create the source project. You'll create a failing implementation of the math service:
F#
module MyMath =
let squaresOfOdds xs = raise (System.NotImplementedException("You
haven't written a test yet!"))
Change the directory back to the unit-testing-with-fsharp directory. Run dotnet sln add
.\MathService\MathService.fsproj to add the class library project to the solution.
/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService
Source Files
MathService.fsproj
/MathService.Tests
Make the MathService.Tests directory the current directory and create a new project
using dotnet new xunit -lang "F#" . This creates a test project that uses xUnit as the test
library. The generated template configures the test runner in the MathServiceTests.fsproj:
XML
<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.3.0-
preview-20170628-02" />
<PackageReference Include="xunit" Version="2.2.0" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.2.0" />
</ItemGroup>
The test project requires other packages to create and run unit tests. dotnet new in the
previous step added xUnit and the xUnit runner. Now, add the MathService class library
as another dependency to the project. Use the dotnet add reference command:
.NET CLI
You can see the entire file in the samples repository on GitHub.
/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService
Source Files
MathService.fsproj
/MathService.Tests
Test Source Files
MathServiceTests.fsproj
Execute dotnet sln add .\MathService.Tests\MathService.Tests.fsproj in the unit-
testing-with-fsharp directory.
F#
[<Fact>]
let ``My test`` () =
Assert.True(true)
[<Fact>]
let ``Fail every time`` () = Assert.True(false)
The [<Fact>] attribute denotes a test method that is run by the test runner. From the
unit-testing-with-fsharp directory, execute dotnet test to build the tests and the class library and
then run the tests. The xUnit test runner contains the program entry point to run your
tests. dotnet test starts the test runner using the unit test project you've created.
These two tests show the most basic passing and failing tests. My test passes, and Fail
every time fails. Now, create a test for the squaresOfOdds method. The squaresOfOdds
method returns a sequence of the squares of all odd integer values that are part of the
input sequence. Rather than trying to write all of those functions at once, you can
iteratively create tests that validate the functionality. Making each test pass means
creating the necessary functionality for the method.
The simplest test we can write is to call squaresOfOdds with all even numbers, where the
result should be an empty sequence of integers. Here's that test:
F#
[<Fact>]
let ``Sequence of Evens returns empty collection`` () =
let expected = Seq.empty<int>
let actual = MyMath.squaresOfOdds [2; 4; 6; 8; 10]
Assert.Equal<Collections.Generic.IEnumerable<int>>(expected, actual)
Your test fails. You haven't created the implementation yet. Make this test pass by
writing the simplest code in the MathService class that works:
F#
let squaresOfOdds xs =
Seq.empty<int>
In the unit-testing-with-fsharp directory, run dotnet test again. The dotnet test
command runs a build for the MathService project and then for the MathService.Tests
project. After building both projects, it runs this single test. It passes.
F#
[<Fact>]
let ``Sequences of Ones and Evens returns Ones`` () =
let expected = [1; 1; 1; 1]
let actual = MyMath.squaresOfOdds [2; 1; 4; 1; 6; 1; 8; 1; 10]
Assert.Equal<Collections.Generic.IEnumerable<int>>(expected, actual)
Executing dotnet test runs your tests and shows you that the new test fails. Now,
update the squaresOfOdds method to handle this new test. You filter all the even
numbers out of the sequence to make this test pass. You can do that by writing a small
filter function and using Seq.filter :
F#
let squaresOfOdds xs =
xs
|> Seq.filter isOdd
There's one more step to go: square each of the odd numbers. Start by writing a new
test:
F#
[<Fact>]
let ``SquaresOfOdds works`` () =
let expected = [1; 9; 25; 49; 81]
let actual = MyMath.squaresOfOdds [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]
Assert.Equal(expected, actual)
You can fix the test by piping the filtered sequence through a map operation to
compute the square of each odd number:
F#
let squaresOfOdds xs =
xs
|> Seq.filter isOdd
|> Seq.map square
You've built a small library and a set of unit tests for that library. You've structured the
solution so that adding new packages and tests is part of the normal workflow. You've
concentrated most of your time and effort on solving the goals of the application.
See also
dotnet new
dotnet sln
dotnet add reference
dotnet test
Unit testing Visual Basic .NET Core
libraries using dotnet test and xUnit
Article • 09/29/2022
This tutorial shows how to build a solution containing a unit test project and library
project. To follow the tutorial using a pre-built solution, view or download the sample
code . For download instructions, see Samples and Tutorials.
/unit-testing-using-dotnet-test
unit-testing-using-dotnet-test.sln
/PrimeService
PrimeService.vb
PrimeService.vbproj
/PrimeService.Tests
PrimeService_IsPrimeShould.vb
PrimeServiceTests.vbproj
The following instructions provide the steps to create the test solution. See Commands
to create test solution for instructions to create the test solution in one step.
.NET CLI
The dotnet new sln command creates a new solution in the unit-testing-using-
dotnet-test directory.
.NET CLI
dotnet new classlib -o PrimeService --lang VB
The dotnet new classlib command creates a new class library project in the
PrimeService folder. The new class library will contain the code to be tested.
VB
Imports System
Namespace Prime.Services
Public Class PrimeService
Public Function IsPrime(candidate As Integer) As Boolean
Throw New NotImplementedException("Not implemented.")
End Function
End Class
End Namespace
.NET CLI
.NET CLI
Add the test project to the solution file by running the following command:
.NET CLI
.NET CLI
The following commands create the test solution on a Windows machine. For macOS
and Unix, update the ren command to the OS version of ren to rename a file:
.NET CLI
Follow the instructions for "Replace the code in PrimeService.vb with the following code"
in the previous section.
Create a test
A popular approach in test driven development (TDD) is to write a test before
implementing the target code. This tutorial uses the TDD approach. The IsPrime
method is callable, but not implemented. A test call to IsPrime fails. With TDD, a test is
written that is known to fail. The target code is updated to make the test pass. You keep
repeating this approach, writing a failing test and then updating the target code to pass.
Delete PrimeService.Tests/UnitTest1.vb.
Create a PrimeService.Tests/PrimeService_IsPrimeShould.vb file.
Replace the code in PrimeService_IsPrimeShould.vb with the following code:
VB
Imports Xunit
Namespace PrimeService.Tests
Public Class PrimeService_IsPrimeShould
Private ReadOnly _primeService As Prime.Services.PrimeService = New Prime.Services.PrimeService()

        <Fact>
        Sub IsPrime_InputIs1_ReturnFalse()
            Dim result As Boolean = _primeService.IsPrime(1)

            Assert.False(result, "1 should not be prime")
        End Sub
    End Class
End Namespace
The <Fact> attribute declares a test method that's run by the test runner. From the
PrimeService.Tests folder, run dotnet test . The dotnet test command builds both
projects and runs the tests. The xUnit test runner contains the program entry point to
run the tests. dotnet test starts the test runner using the unit test project.
The test fails because IsPrime hasn't been implemented. Using the TDD approach, write
only enough code so this test passes. Update IsPrime with the following code:
VB
VB
Copying test code when only a parameter changes results in code duplication and test
bloat. The following xUnit attributes enable writing a suite of similar tests:
<Theory> represents a suite of tests that execute the same code but have different
input arguments.
<InlineData> attribute specifies values for those inputs.
Rather than creating new tests, apply the preceding xUnit attributes to create a single
theory. Replace the following code:
VB
<Fact>
Sub IsPrime_InputIs1_ReturnFalse()
Dim result As Boolean = _primeService.IsPrime(1)
VB
<Theory>
<InlineData(-1)>
<InlineData(0)>
<InlineData(1)>
Sub IsPrime_ValuesLessThan2_ReturnFalse(ByVal value As Integer)
Dim result As Boolean = _primeService.IsPrime(value)
Assert.False(result, $"{value} should not be prime")
End Sub
In the preceding code, <Theory> and <InlineData> enable testing several values less
than two. Two is the smallest prime number.
Run dotnet test , and two of the tests fail. To make all of the tests pass, update the IsPrime
method with the following code:
VB
Following the TDD approach, add more failing tests, then update the target code. See
the finished version of the tests and the complete implementation of the library .
The completed IsPrime method is not an efficient algorithm for testing primality.
Additional resources
xUnit.net official site
Testing controller logic in ASP.NET Core
dotnet add reference
Organizing and testing projects with the
.NET CLI
Article • 04/19/2022
This tutorial follows Tutorial: Create a console application with .NET using Visual Studio
Code, taking you beyond the creation of a simple console app to develop advanced and
well-organized applications. After showing you how to use folders to organize your
code, the tutorial shows you how to extend a console application with the xUnit
testing framework.
7 Note
This tutorial recommends that you place the application project and test project in
separate folders. Some developers prefer to keep these projects in the same folder.
For more information, see GitHub issue dotnet/docs #26395 .
/MyProject
|__AccountInformation.cs
|__MonthlyReportRecords.cs
|__MyProject.csproj
|__Program.cs
However, this flat structure only works well when the size of your project is relatively
small. Can you imagine what will happen if you add 20 types to the project? The project
definitely wouldn't be easy to navigate and maintain with that many files littering the
project's root directory.
To organize the project, create a new folder and name it Models to hold the type files.
Place the type files into the Models folder:
/MyProject
|__/Models
|__AccountInformation.cs
|__MonthlyReportRecords.cs
|__MyProject.csproj
|__Program.cs
Projects that logically group files into folders are easy to navigate and maintain. In the
next section, you create a more complex sample with folders and unit testing.
Prerequisites
.NET 5.0 SDK or a later version.
The sample contains two types, Dog and Cat , and has them implement a common
interface, IPet . For the NewTypes project, your goal is to organize the pet-related types
into a Pets folder. If another set of types is added later, WildAnimals for example, they're
placed in the NewTypes folder alongside the Pets folder. The WildAnimals folder may
contain types for animals that aren't pets, such as Squirrel and Rabbit types. In this
way as types are added, the project remains well organized.
/NewTypes
|__/src
|__/NewTypes
|__/Pets
|__Dog.cs
|__Cat.cs
|__IPet.cs
|__Program.cs
|__NewTypes.csproj
IPet.cs:
C#
using System;
namespace Pets
{
public interface IPet
{
string TalkToOwner();
}
}
Dog.cs:
C#
using System;
namespace Pets
{
public class Dog : IPet
{
public string TalkToOwner() => "Woof!";
}
}
Cat.cs:
C#
using System;
namespace Pets
{
public class Cat : IPet
{
public string TalkToOwner() => "Meow!";
}
}
Program.cs:
C#
using System;
using Pets;
using System.Collections.Generic;
namespace ConsoleApplication
{
public class Program
{
public static void Main(string[] args)
{
List<IPet> pets = new List<IPet>
{
new Dog(),
new Cat()
};

            foreach (IPet pet in pets)
            {
                Console.WriteLine(pet.TalkToOwner());
            }
        }
    }
}
NewTypes.csproj:
XML
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net6.0</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>
</Project>
.NET CLI
dotnet run
Console
Woof!
Meow!
Optional exercise: You can add a new pet type, such as a Bird , by extending this project.
Make the bird's TalkToOwner method give a Tweet! to the owner. Run the app again.
The output will include Tweet!
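For example, a Bird type placed alongside Dog.cs and Cat.cs in the Pets folder could look like this (a sketch for the optional exercise, not part of the sample as provided):
C#
namespace Pets
{
    public class Bird : IPet
    {
        public string TalkToOwner() => "Tweet!";
    }
}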
Navigate back to the src folder and create a test folder with a NewTypesTests folder
within it. At a command prompt from the NewTypesTests folder, execute dotnet new
xunit . This command produces two files: NewTypesTests.csproj and UnitTest1.cs.
The test project can't currently test the types in NewTypes and requires a project
reference to the NewTypes project. To add a project reference, use the dotnet add
reference command:
.NET CLI
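Run from the NewTypesTests folder, the command takes the relative path used in the <ItemGroup> shown next:
.NET CLI
dotnet add reference ../../src/NewTypes/NewTypes.csproj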
Or, you also have the option of manually adding the project reference by adding an
<ItemGroup> node to the NewTypesTests.csproj file:
XML
<ItemGroup>
<ProjectReference Include="../../src/NewTypes/NewTypes.csproj" />
</ItemGroup>
NewTypesTests.csproj:
XML
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>net6.0</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.6.2" />
<PackageReference Include="xunit" Version="2.4.2" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.4.5" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="../../src/NewTypes/NewTypes.csproj"/>
</ItemGroup>
</Project>
Change the name of UnitTest1.cs to PetTests.cs and replace the code in the file with the
following code:
C#
using System;
using Xunit;
using Pets;

public class PetTests
{
    [Fact]
    public void DogTalkToOwnerReturnsWoof()
    {
        string expected = "Woof!";
        string actual = new Dog().TalkToOwner();

        Assert.NotEqual(expected, actual);
    }
[Fact]
public void CatTalkToOwnerReturnsMeow()
{
string expected = "Meow!";
string actual = new Cat().TalkToOwner();
Assert.NotEqual(expected, actual);
}
}
Optional exercise: If you added a Bird type earlier that yields a Tweet! to the owner,
add a test method to the PetTests.cs file, BirdTalkToOwnerReturnsTweet , to check that the
TalkToOwner method works correctly for the Bird type.
7 Note
Although you expect that the expected and actual values are equal, an initial
assertion with the Assert.NotEqual check specifies that these values are not equal.
Always initially create a test to fail in order to check the logic of the test. After you
confirm that the test fails, adjust the assertion to allow the test to pass.
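If you completed the Bird exercise, a matching test method, following the same fail-first convention described in the note, might look like this:
C#
[Fact]
public void BirdTalkToOwnerReturnsTweet()
{
    string expected = "Tweet!";
    string actual = new Bird().TalkToOwner();

    // Intentionally fails first; change to Assert.Equal after confirming the failure.
    Assert.NotEqual(expected, actual);
}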
/NewTypes
|__/src
|__/NewTypes
|__/Pets
|__Dog.cs
|__Cat.cs
|__IPet.cs
|__Program.cs
|__NewTypes.csproj
|__/test
|__NewTypesTests
|__PetTests.cs
|__NewTypesTests.csproj
Start in the test/NewTypesTests directory. Run the tests with the dotnet test command.
This command starts the test runner specified in the project file.
As expected, testing fails, and the console displays the following output:
Output
C#
using System;
using Xunit;
using Pets;

public class PetTests
{
    [Fact]
    public void DogTalkToOwnerReturnsWoof()
    {
        string expected = "Woof!";
        string actual = new Dog().TalkToOwner();

        Assert.Equal(expected, actual);
    }
[Fact]
public void CatTalkToOwnerReturnsMeow()
{
string expected = "Meow!";
string actual = new Cat().TalkToOwner();
Assert.Equal(expected, actual);
}
}
Rerun the tests with the dotnet test command and obtain the following output:
Output
Testing passes. The pet types' methods return the correct values when talking to the
owner.
You've learned techniques for organizing and testing projects using xUnit. Go forward
with these techniques, applying them in your own projects. Happy coding!
Unit testing C# with NUnit and .NET
Core
Article • 04/05/2024
This tutorial takes you through an interactive experience building a sample solution
step-by-step to learn unit testing concepts. If you prefer to follow the tutorial using a
pre-built solution, view or download the sample code before you begin. For download
instructions, see Samples and Tutorials.
This article is about testing a .NET Core project. If you're testing an ASP.NET Core
project, see Integration tests in ASP.NET Core.
Prerequisites
.NET 8.0 or later versions.
A text editor or code editor of your choice.
.NET CLI
Next, create a PrimeService directory. The following outline shows the directory and file
structure so far:
Console
/unit-testing-using-nunit
unit-testing-using-nunit.sln
/PrimeService
Make PrimeService the current directory and run the following command to create the
source project:
.NET CLI
dotnet new classlib
C#
using System;
namespace Prime.Services
{
public class PrimeService
{
public bool IsPrime(int candidate)
{
throw new NotImplementedException("Please create a test
first.");
}
}
}
Change the directory back to the unit-testing-using-nunit directory. Run the following
command to add the class library project to the solution:
.NET CLI
Console
/unit-testing-using-nunit
unit-testing-using-nunit.sln
/PrimeService
Source Files
PrimeService.csproj
/PrimeService.Tests
Make the PrimeService.Tests directory the current directory and create a new project
using the following command:
.NET CLI
The dotnet new command creates a test project that uses NUnit as the test library. The
generated template configures the test runner in the PrimeService.Tests.csproj file:
XML
<ItemGroup>
<PackageReference Include="nunit" Version="4.1.0" />
<PackageReference Include="NUnit3TestAdapter" Version="4.5.0" />
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.9.0" />
<PackageReference Include="NUnit.Analyzers" Version="4.1.0">
<PrivateAssets>all</PrivateAssets>
<IncludeAssets>runtime; build; native; contentfiles;
analyzers</IncludeAssets>
</PackageReference>
</ItemGroup>
7 Note
Prior to .NET 9, the generated code may reference older versions of the NUnit test
framework. You may use dotnet CLI to update the packages. Alternatively, open the
PrimeService.Tests.csproj file and replace the contents of the package references
item group with the code above.
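For example, one way to update those packages from the CLI is dotnet add package, which installs the latest stable version of each package when no version is specified (run from the PrimeService.Tests directory):
.NET CLI
dotnet add package NUnit
dotnet add package NUnit3TestAdapter
dotnet add package Microsoft.NET.Test.Sdk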
The test project requires other packages to create and run unit tests. The dotnet new
command in the previous step added the Microsoft test SDK, the NUnit test framework,
and the NUnit test adapter. Now, add the PrimeService class library as another
dependency to the project. Use the dotnet add reference command:
.NET CLI
You can see the entire file in the samples repository on GitHub.
Console
/unit-testing-using-nunit
unit-testing-using-nunit.sln
/PrimeService
Source Files
PrimeService.csproj
/PrimeService.Tests
Test Source Files
PrimeService.Tests.csproj
.NET CLI
C#
using NUnit.Framework;
using Prime.Services;
namespace Prime.UnitTests.Services
{
[TestFixture]
public class PrimeService_IsPrimeShould
{
private PrimeService _primeService;
[SetUp]
public void SetUp()
{
_primeService = new PrimeService();
}
[Test]
public void IsPrime_InputIs1_ReturnFalse()
{
var result = _primeService.IsPrime(1);

            Assert.That(result, Is.False, "1 should not be prime");
        }
    }
}
The [TestFixture] attribute denotes a class that contains unit tests. The [Test]
attribute indicates a method is a test method.
Save this file and execute the dotnet test command to build the tests and the class
library and run the tests. The NUnit test runner contains the program entry point to run
your tests. dotnet test starts the test runner using the unit test project you've created.
Your test fails. You haven't created the implementation yet. Make the test pass by
writing the simplest code in the PrimeService class that works:
C#
In the unit-testing-using-nunit directory, run dotnet test again. The dotnet test
command runs a build for the PrimeService project and then for the
PrimeService.Tests project. After you build both projects, it runs this single test. It
passes.
Instead of creating new tests, apply the [TestCase] attribute to create a single data-driven test. The
data driven test is a method that tests several values less than two, which is the lowest
prime number:
C#
[TestCase(-1)]
[TestCase(0)]
[TestCase(1)]
public void IsPrime_ValuesLessThan2_ReturnFalse(int value)
{
var result = _primeService?.IsPrime(value);
Assert.That(result, Is.False, $"{value} should not be prime");
}
Run dotnet test , and two of these tests fail. To make all of the tests pass, change the
if clause at the beginning of the IsPrime method in the PrimeService.cs file:
C#
if (candidate < 2)
Continue to iterate by adding more tests, theories, and code in the main library. You
have the finished version of the tests and the complete implementation of the
library .
You've built a small library and a set of unit tests for that library. You've also structured
the solution so that adding new packages and tests is part of the standard workflow.
You've concentrated most of your time and effort on solving the goals of the
application.
Unit testing F# libraries in .NET Core
using dotnet test and NUnit
Article • 12/11/2021
This tutorial takes you through an interactive experience building a sample solution
step-by-step to learn unit testing concepts. If you prefer to follow the tutorial using a
pre-built solution, view or download the sample code before you begin. For download
instructions, see Samples and Tutorials.
This article is about testing a .NET Core project. If you're testing an ASP.NET Core
project, see Integration tests in ASP.NET Core.
Prerequisites
.NET Core 2.1 SDK or later versions.
A text editor or code editor of your choice.
.NET CLI
Next, create a MathService directory. The following outline shows the directory and file
structure so far:
/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService
Make MathService the current directory and run the following command to create the
source project:
.NET CLI
dotnet new classlib -lang "F#"
F#
module MyMath =
let squaresOfOdds xs = raise (System.NotImplementedException("You
haven't written a test yet!"))
Change the directory back to the unit-testing-with-fsharp directory. Run the following
command to add the class library project to the solution:
.NET CLI
/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService
Source Files
MathService.fsproj
/MathService.Tests
Make the MathService.Tests directory the current directory and create a new project
using the following command:
.NET CLI
This creates a test project that uses NUnit as the test framework. The generated
template configures the test runner in the MathServiceTests.fsproj:
XML
<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.5.0" />
<PackageReference Include="NUnit" Version="3.9.0" />
<PackageReference Include="NUnit3TestAdapter" Version="3.9.0" />
</ItemGroup>
The test project requires other packages to create and run unit tests. dotnet new in the
previous step added NUnit and the NUnit test adapter. Now, add the MathService class
library as another dependency to the project. Use the dotnet add reference command:
.NET CLI
You can see the entire file in the samples repository on GitHub.
/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService
Source Files
MathService.fsproj
/MathService.Tests
Test Source Files
MathService.Tests.fsproj
.NET CLI
F#
namespace MathService.Tests
open System
open NUnit.Framework
open MathService
[<TestFixture>]
type TestClass () =
[<Test>]
member this.TestMethodPassing() =
Assert.True(true)
[<Test>]
member this.FailEveryTime() = Assert.True(false)
The [<TestFixture>] attribute denotes a class that contains tests. The [<Test>]
attribute denotes a test method that is run by the test runner. From the unit-testing-
with-fsharp directory, execute dotnet test to build the tests and the class library and
then run the tests. The NUnit test runner contains the program entry point to run your
tests. dotnet test starts the test runner using the unit test project you've created.
These two tests show the most basic passing and failing tests. TestMethodPassing passes, and
FailEveryTime fails. Now, create a test for the squaresOfOdds method. The squaresOfOdds
method returns a sequence of the squares of all odd integer values that are part of the
input sequence. Rather than trying to write all of those functions at once, you can
iteratively create tests that validate the functionality. Making each test pass means
creating the necessary functionality for the method.
The simplest test we can write is to call squaresOfOdds with all even numbers, where the
result should be an empty sequence of integers. Here's that test:
F#
[<Test>]
member this.TestEvenSequence() =
let expected = Seq.empty<int>
let actual = MyMath.squaresOfOdds [2; 4; 6; 8; 10]
Assert.That(actual, Is.EqualTo(expected))
Notice that the expected sequence has been converted to a list. The NUnit framework
relies on many standard .NET types. That dependency means that your public interface
and expected results support ICollection rather than IEnumerable.
When you run the test, you see that your test fails. You haven't created the
implementation yet. Make this test pass by writing the simplest code in the Library.fs
class in your MathService project that works:
F#
let squaresOfOdds xs =
Seq.empty<int>
In the unit-testing-with-fsharp directory, run dotnet test again. The dotnet test
command runs a build for the MathService project and then for the MathService.Tests
project. After building both projects, it runs your tests. Two tests pass now.
F#
[<Test>]
member public this.TestOnesAndEvens() =
let expected = [1; 1; 1; 1]
let actual = MyMath.squaresOfOdds [2; 1; 4; 1; 6; 1; 8; 1; 10]
Assert.That(actual, Is.EqualTo(expected))
Executing dotnet test fails the new test. You must update the squaresOfOdds method to
handle this new test. You must filter all the even numbers out of the sequence to make
this test pass. You can do that by writing a small filter function and using Seq.filter :
F#
let squaresOfOdds xs =
xs
|> Seq.filter isOdd
There's one more step to go: square each of the odd numbers. Start by writing a new
test:
F#
[<Test>]
member public this.TestSquaresOfOdds() =
let expected = [1; 9; 25; 49; 81]
let actual = MyMath.squaresOfOdds [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]
Assert.That(actual, Is.EqualTo(expected))
You can fix the test by piping the filtered sequence through a map operation to
compute the square of each odd number:
F#
let squaresOfOdds xs =
xs
|> Seq.filter isOdd
|> Seq.map square
You've built a small library and a set of unit tests for that library. You've structured the
solution so that adding new packages and tests is part of the normal workflow. You've
concentrated most of your time and effort on solving the goals of the application.
See also
dotnet add reference
dotnet test
Unit testing Visual Basic .NET Core
libraries using dotnet test and NUnit
Article • 03/27/2024
This tutorial takes you through an interactive experience building a sample solution
step-by-step to learn unit testing concepts. If you prefer to follow the tutorial using a
pre-built solution, view or download the sample code before you begin. For download
instructions, see Samples and Tutorials.
This article is about testing a .NET Core project. If you're testing an ASP.NET Core
project, see Integration tests in ASP.NET Core.
Prerequisites
.NET 8 SDK or later versions.
A text editor or code editor of your choice.
.NET CLI
Next, create a PrimeService directory. The following outline shows the file structure so
far:
Console
/unit-testing-vb-nunit
unit-testing-vb-nunit.sln
/PrimeService
Make PrimeService the current directory and run the following command to create the
source project:
.NET CLI
dotnet new classlib -lang VB
VB
Namespace Prime.Services
Public Class PrimeService
Public Function IsPrime(candidate As Integer) As Boolean
Throw New NotImplementedException("Please create a test first.")
End Function
End Class
End Namespace
.NET CLI
Console
/unit-testing-vb-nunit
unit-testing-vb-nunit.sln
/PrimeService
Source Files
PrimeService.vbproj
/PrimeService.Tests
Make the PrimeService.Tests directory the current directory and create a new project
using the following command:
.NET CLI
XML
<ItemGroup>
<PackageReference Include="nunit" Version="4.1.0" />
<PackageReference Include="NUnit3TestAdapter" Version="4.5.0" />
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.9.0" />
</ItemGroup>
7 Note
Prior to .NET 9, the generated code may reference older versions of the NUnit test
framework. You may use dotnet CLI to update the packages. Alternatively, open the
PrimeService.Tests.vbproj file and replace the contents of the package references
item group with the code above.
The test project requires other packages to create and run unit tests. dotnet new in the
previous step added NUnit and the NUnit test adapter. Now, add the PrimeService class
library as another dependency to the project. Use the dotnet add reference command:
.NET CLI
You can see the entire file in the samples repository on GitHub.
Console
/unit-testing-vb-nunit
unit-testing-vb-nunit.sln
/PrimeService
Source Files
PrimeService.vbproj
/PrimeService.Tests
Test Source Files
PrimeService.Tests.vbproj
.NET CLI
dotnet sln add .\PrimeService.Tests\PrimeService.Tests.vbproj
VB
Imports NUnit.Framework
Namespace PrimeService.Tests
<TestFixture>
Public Class PrimeService_IsPrimeShould
Private _primeService As Prime.Services.PrimeService = New
Prime.Services.PrimeService()
<Test>
Sub IsPrime_InputIs1_ReturnFalse()
Dim result As Boolean = _primeService.IsPrime(1)
End Class
End Namespace
The <TestFixture> attribute indicates a class that contains tests. The <Test> attribute
denotes a method that is run by the test runner. From the unit-testing-vb-nunit directory, execute
dotnet test to build the tests and the class library and then run the tests. The NUnit test
runner contains the program entry point to run your tests. dotnet test starts the test
runner using the unit test project you've created.
Your test fails. You haven't created the implementation yet. Make this test pass by
writing the simplest code in the PrimeService class that works:
VB
In the unit-testing-vb-nunit directory, run dotnet test again. After building both projects, it runs this single test. It passes.
NUnit provides attributes that enable you to write a suite of similar tests. A <TestCase> attribute represents a suite of
tests that execute the same code but have different input arguments. You can use the
<TestCase> attribute to specify values for those inputs.
Instead of creating new tests, apply these two attributes to create a series of tests that
test several values less than two, which is the lowest prime number:
VB
<TestFixture>
Public Class PrimeService_IsPrimeShould
Private _primeService As Prime.Services.PrimeService = New
Prime.Services.PrimeService()
<TestCase(-1)>
<TestCase(0)>
<TestCase(1)>
Sub IsPrime_ValuesLessThan2_ReturnFalse(value As Integer)
Dim result As Boolean = _primeService.IsPrime(value)
<TestCase(2)>
<TestCase(3)>
<TestCase(5)>
<TestCase(7)>
Public Sub IsPrime_PrimesLessThan10_ReturnTrue(value As Integer)
Dim result As Boolean = _primeService.IsPrime(value)
<TestCase(4)>
<TestCase(6)>
<TestCase(8)>
<TestCase(9)>
Public Sub IsPrime_NonPrimesLessThan10_ReturnFalse(value As Integer)
Dim result As Boolean = _primeService.IsPrime(value)
Run dotnet test , and two of these tests fail. To make all of the tests pass, change the
if clause at the beginning of the IsPrime method in the PrimeService.vb file:
VB
If candidate < 2 Then
Continue to iterate by adding more tests, more theories, and more code in the main
library. You have the finished version of the tests and the complete implementation of
the library .
You've built a small library and a set of unit tests for that library. You've structured the
solution so that adding new packages and tests is part of the normal workflow. You've
concentrated most of your time and effort on solving the goals of the application.
NUnit runner overview
Article • 05/23/2024
The NUnit runner is a lightweight and portable alternative to VSTest for running tests
in all contexts (for example, continuous integration (CI) pipelines, CLI, Visual Studio Test
Explorer, and VS Code Test Explorer). The NUnit runner is embedded directly in your
NUnit test projects, and there are no other app dependencies, such as vstest.console
or dotnet test , needed to run your tests.
The NUnit runner is open source, and builds on a Microsoft.Testing.Platform library. You
can find Microsoft.Testing.Platform code in microsoft/testfx GitHub repository. The
NUnit runner comes bundled with NUnit3TestAdapter 5.0.0-beta.2 or newer.
XML
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<!-- Enable the NUnit runner, this is an opt-in feature -->
<EnableNUnitRunner>true</EnableNUnitRunner>
<OutputType>Exe</OutputType>
<TargetFramework>net8.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
<IsPackable>false</IsPackable>
<IsTestProject>true</IsTestProject>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.9.0" />
<PackageReference Include="NUnit" Version="4.1.0" />
<PackageReference Include="NUnit.Analyzers" Version="4.2.0">
<IncludeAssets>runtime; build; native; contentfiles; analyzers;
buildtransitive</IncludeAssets>
<PrivateAssets>all</PrivateAssets>
</PackageReference>
<PackageReference Include="NUnit3TestAdapter" Version="5.0.0-beta.2" />
<!--
Coverlet collector isn't compatible with NUnit runner, you can
either switch to Microsoft CodeCoverage (as shown below),
or switch to be using coverlet global tool
https://github.com/coverlet-coverage/coverlet#net-global-tool-guide-
suffers-from-possible-known-issue
-->
<PackageReference Include="Microsoft.Testing.Extensions.CodeCoverage"
Version="17.10.1" />
</ItemGroup>
</Project>
.runsettings
The NUnit runner supports the runsettings through the command-line option --
settings . The following commands show examples.
.NET CLI
.NET CLI
-or-
.NET CLI
.NET CLI
Contoso.MyTests.exe --settings config.runsettings
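If you launch the test project through the .NET CLI rather than the produced executable, arguments after -- are forwarded to the runner, so an equivalent invocation would look something like this (project name assumed from the example above):
.NET CLI
dotnet run --project Contoso.MyTests -- --settings config.runsettings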
Tests filter
You can provide the tests filter seamlessly using the command line option --filter . The
following commands show some examples.
.NET CLI
.NET CLI
-or-
.NET CLI
.NET CLI
Contoso.MyTests.exe --filter
"FullyQualifiedName~UnitTest1|TestCategory=CategoryA"
MSTest overview
Article • 03/19/2024
MSTest, the Microsoft Testing Framework, is a test framework for .NET applications. It allows
you to write and execute tests, and provides test suites with integration into Visual Studio
and Visual Studio Code Test Explorers, the .NET CLI, and many CI pipelines.
The MSTest team only supports the latest released version and strongly encourages its
users and customers to always update to latest version to benefit from new
improvements and security patches. Preview releases aren't supported by Microsoft, but
they are offered for public testing ahead of the final release.
Get started with MSTest
Article • 08/13/2024
We recommend that you don't install these packages directly into your test projects.
Instead, you should use either:
MSTest.Sdk : A MSBuild project SDK that includes all the recommended packages
and greatly simplifies all the boilerplate configuration. Although this is shipped as
a NuGet package, it's not intended to be installed as a regular package
dependency, instead you should modify the Sdk part of your project (e.g. <Project
Sdk="MSTest.Sdk"> or <Project Sdk="MSTest.Sdk/X.Y.Z"> where X.Y.Z is MSTest
Microsoft.NET.Test.Sdk .
If you are creating a test infrastructure project that is intended to be used as a helper by
multiple test projects, you should install the MSTest.TestFramework and
MSTest.Analyzers packages directly into that project.
MSTest SDK overview
Article • 09/10/2024
Introduced in .NET 9, MSTest.Sdk is an MSBuild project SDK for building MSTest apps.
It's possible to build an MSTest app without this SDK; however, the MSTest SDK is recommended.
The MSTest SDK discovers and runs your tests using the MSTest runner.
You can enable MSTest.Sdk in a project by simply updating the Sdk attribute of the
Project node of your project:
XML
<Project Sdk="MSTest.Sdk/3.3.1">
<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
</PropertyGroup>
</Project>
7 Note
/3.3.1 is given as an example as it's the first version of the SDK, but it can be replaced with any newer version.
To simplify handling of versions, we recommend setting the SDK version at solution level
using the global.json file. For example, your project file would look like:
XML
<Project Sdk="MSTest.Sdk">
<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
</PropertyGroup>
</Project>
with the SDK version pinned in the global.json file:
JSON
{
"msbuild-sdks": {
"MSTest.Sdk": "3.3.1"
}
}
When you build the project, all the needed components are restored and installed
using the standard NuGet workflow set by your project.
You don't need anything else to build and run your tests and you can use the same
tooling (for example, dotnet test or Visual Studio) used by a "classic" MSTest project.
) Important
By switching to the MSTest.Sdk , you opt in to using the MSTest runner, including
with dotnet test. That requires modifying your CI and local CLI calls, and also
impacts the available entries of the .runsettings. You can use MSTest.Sdk and still
keep the old integrations and tools by instead switching the runner.
You can set the profile using the property TestingExtensionsProfile with one of the
following three profiles:
XML
<Project Sdk="MSTest.Sdk/3.3.1">
<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<TestingExtensionsProfile>None</TestingExtensionsProfile>
</PropertyGroup>
</Project>
XML
<Project Sdk="MSTest.Sdk/3.3.1">
<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<EnableMicrosoftTestingExtensionsCrashDump>true</EnableMicrosoftTestingExten
sionsCrashDump>
</PropertyGroup>
</Project>
2 Warning
It's important to review the licensing terms for each extension as they might vary.
Enabled and disabled extensions are combined with the extensions provided by your
selected extension profile.
This property pattern can be used to enable an additional extension on top of the
implicit Default profile (as seen in the previous CrashDumpExtension example).
You can also disable an extension that's coming from the selected profile. For example,
disable the MS Code Coverage extension by setting
<EnableMicrosoftTestingExtensionsCodeCoverage>false</EnableMicrosoftTestingExtensio
nsCodeCoverage> :
XML
<Project Sdk="MSTest.Sdk/3.3.1">
<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<EnableMicrosoftTestingExtensionsCodeCoverage>false</EnableMicrosoftTestingE
xtensionsCodeCoverage>
</PropertyGroup>
</Project>
Features
Outside of the selection of the runner and runner-specific extensions, MSTest.Sdk also
provides additional features to simplify and enhance your testing experience.
Test with .NET Aspire
.NET Aspire is an opinionated, cloud-ready stack for building observable, production
ready, distributed applications. .NET Aspire is delivered through a collection of NuGet
packages that handle specific cloud-native concerns. For more information, see the .NET
Aspire docs.
7 Note
By setting the property EnableAspireTesting to true , you can bring all dependencies
and default using directives you need for testing with Aspire and MSTest .
XML
<Project Sdk="MSTest.Sdk/3.4.0">
<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<EnableAspireTesting>true</EnableAspireTesting>
</PropertyGroup>
</Project>
7 Note
By setting the property EnablePlaywright to true you can bring in all the dependencies
and default using directives you need for testing with Playwright and MSTest .
XML
<Project Sdk="MSTest.Sdk/3.4.0">
<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<EnablePlaywright>true</EnablePlaywright>
</PropertyGroup>
</Project>
Sdk="MSTest.Sdk/3.3.1"
diff
- Sdk="Microsoft.NET.Sdk"
+ Sdk="MSTest.Sdk"
JSON
{
"msbuild-sdks": {
"MSTest.Sdk": "3.3.1"
}
}
diff
- <EnableMSTestRunner>true</EnableMSTestRunner>
- <OutputType>Exe</OutputType>
- <IsPackable>false</IsPackable>
- <IsTestProject>true</IsTestProject>
Remove default package references:
diff
- <PackageReference Include="MSTest"
- <PackageReference Include="MSTest.TestFramework"
- <PackageReference Include="MSTest.TestAdapter"
- <PackageReference Include="MSTest.Analyzers"
- <PackageReference Include="Microsoft.NET.Test.Sdk"
Finally, based on the extensions profile you're using, you can also remove some of the
Microsoft.Testing.Extensions.* packages.
Update your CI
Once you've updated your projects, if you're using MSTest runner (default) and if you
rely on dotnet test to run your tests, you must update your CI configuration. For more
information and to guide your understanding of all the required changes, see dotnet
test integration.
Here's an example update when using the DotNetCoreCLI task in Azure DevOps:
diff
\- task: DotNetCoreCLI@2
inputs:
command: 'test'
projects: '**/**.sln'
- arguments: '--configuration Release'
+ arguments: '--configuration Release -p:TestingPlatformCommandLineArguments="--report-trx --results-directory $(Agent.TempDirectory) --coverage"'
See also
Test project–related properties
Write tests with MSTest
Article • 07/25/2024
In this article, you will learn about the APIs and conventions used by MSTest to help you
write and shape your tests.
Attributes
MSTest uses custom attributes to identify and customize tests.
To help provide a clearer overview of the testing framework, this section organizes the
members of the Microsoft.VisualStudio.TestTools.UnitTesting namespace into groups of
related functionality.
7 Note
Attribute elements, whose names end with "Attribute", can be used with or without
"Attribute" at the end. Attributes that have a parameterless constructor can be
written with or without parentheses. The following code examples work identically:
[TestClass()]
[TestClassAttribute()]
[TestClass]
[TestClassAttribute]
Assertions
Use the Assert classes of the Microsoft.VisualStudio.TestTools.UnitTesting namespace to
verify specific functionality. A test method exercises the code of a method in your
application's code, but it reports the correctness of the code's behavior only if you
include Assert statements.
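As a minimal illustration (the class, method, and values here are arbitrary), a test method only verifies behavior because of its Assert call:
C#
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_TwoPlusTwo_ReturnsFour()
    {
        int sum = 2 + 2;

        // Without this Assert, the test would pass without verifying anything.
        Assert.AreEqual(4, sum);
    }
}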
PrivateObject
PrivateType
Unit testing C# with MSTest and .NET
Article • 03/18/2023
This tutorial takes you through an interactive experience building a sample solution
step-by-step to learn unit testing concepts. If you prefer to follow the tutorial using a
pre-built solution, view or download the sample code before you begin. For download
instructions, see Samples and Tutorials.
This article is about testing a .NET Core project. If you're testing an ASP.NET Core
project, see Integration tests in ASP.NET Core.
Prerequisites
The .NET 6.0 SDK or later
Console
/unit-testing-using-mstest
unit-testing-using-mstest.sln
/PrimeService
Make PrimeService the current directory and run dotnet new classlib to create the
source project. Rename Class1.cs to PrimeService.cs. Replace the code in the file with the
following code to create a failing implementation of the PrimeService class:
C#
using System;
namespace Prime.Services
{
public class PrimeService
{
public bool IsPrime(int candidate)
{
throw new NotImplementedException("Please create a test
first.");
}
}
}
Change the directory back to the unit-testing-using-mstest directory. Run dotnet sln add
to add the class library project to the solution:
.NET CLI
Console
/unit-testing-using-mstest
unit-testing-using-mstest.sln
/PrimeService
Source Files
PrimeService.csproj
/PrimeService.Tests
Make the PrimeService.Tests directory the current directory and create a new project
using dotnet new mstest. The dotnet new command creates a test project that uses
MSTest as the test library. The template configures the test runner in the
PrimeServiceTests.csproj file:
XML
<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="16.7.1" />
<PackageReference Include="MSTest.TestAdapter" Version="2.1.1" />
<PackageReference Include="MSTest.TestFramework" Version="2.1.1" />
<PackageReference Include="coverlet.collector" Version="1.3.0" />
</ItemGroup>
The test project requires other packages to create and run unit tests. dotnet new in the
previous step added the Microsoft test SDK, the MSTest test framework, the MSTest test
adapter, and coverlet for code coverage reporting.
Add the PrimeService class library as another dependency to the project. Use the
dotnet add reference command:
.NET CLI
You can see the entire file in the samples repository on GitHub.
Console
/unit-testing-using-mstest
unit-testing-using-mstest.sln
/PrimeService
Source Files
PrimeService.csproj
/PrimeService.Tests
Test Source Files
PrimeServiceTests.csproj
.NET CLI
C#
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Prime.Services;
namespace Prime.UnitTests.Services
{
[TestClass]
public class PrimeService_IsPrimeShould
{
private readonly PrimeService _primeService;
public PrimeService_IsPrimeShould()
{
_primeService = new PrimeService();
}
[TestMethod]
public void IsPrime_InputIs1_ReturnFalse()
{
bool result = _primeService.IsPrime(1);

            Assert.IsFalse(result, "1 should not be prime");
        }
    }
}
The TestClass attribute denotes a class that contains unit tests. The TestMethod attribute
indicates a method is a test method.
Save this file and execute dotnet test to build the tests and the class library and then run
the tests. The MSTest test runner contains the program entry point to run your tests.
dotnet test starts the test runner using the unit test project you've created.
Your test fails. You haven't created the implementation yet. Make this test pass by
writing the simplest code in the PrimeService class that works:
C#
In the unit-testing-using-mstest directory, run dotnet test again. The dotnet test
command runs a build for the PrimeService project and then for the
PrimeService.Tests project. After building both projects, it runs this single test. It
passes.
Instead of creating new tests, apply these two attributes to create a single data driven
test. The data driven test is a method that tests several values less than two, which is the
lowest prime number. Add a new test method in PrimeService_IsPrimeShould.cs:
C#
[TestMethod]
[DataRow(-1)]
[DataRow(0)]
[DataRow(1)]
public void IsPrime_ValuesLessThan2_ReturnFalse(int value)
{
var result = _primeService.IsPrime(value);

    Assert.IsFalse(result, $"{value} should not be prime");
}
Run dotnet test , and two of these tests fail. To make all of the tests pass, change the
if clause at the beginning of the IsPrime method in the PrimeService.cs file:
C#
if (candidate < 2)
Continue to iterate by adding more tests, more theories, and more code in the main
library. You have the finished version of the tests and the complete implementation of
the library .
You've built a small library and a set of unit tests for that library. You've structured the
solution so that adding new packages and tests is part of the normal workflow. You've
concentrated most of your time and effort on solving the goals of the application.
See also
Microsoft.VisualStudio.TestTools.UnitTesting
Use the MSTest framework in unit tests
MSTest V2 test framework docs
Unit testing F# libraries in .NET Core
using dotnet test and MSTest
Article • 09/15/2021
This tutorial takes you through an interactive experience building a sample solution
step-by-step to learn unit testing concepts. If you prefer to follow the tutorial using a
pre-built solution, view or download the sample code before you begin. For download
instructions, see Samples and Tutorials.
This article is about testing a .NET Core project. If you're testing an ASP.NET Core
project, see Integration tests in ASP.NET Core.
/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService
Make MathService the current directory and run dotnet new classlib -lang "F#" to
create the source project. You'll create a failing implementation of the math service:
F#
module MyMath =
let squaresOfOdds xs = raise (System.NotImplementedException("You
haven't written a test yet!"))
Change the directory back to the unit-testing-with-fsharp directory. Run dotnet sln add
.\MathService\MathService.fsproj to add the class library project to the solution.
Console
/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService
Source Files
MathService.fsproj
/MathService.Tests
Make the MathService.Tests directory the current directory and create a new project
using dotnet new mstest -lang "F#" . This creates a test project that uses MSTest as the
test framework. The generated template configures the test runner in the
MathServiceTests.fsproj:
XML
<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.3.0-
preview-20170628-02" />
<PackageReference Include="MSTest.TestAdapter" Version="1.1.18" />
<PackageReference Include="MSTest.TestFramework" Version="1.1.18" />
</ItemGroup>
The test project requires other packages to create and run unit tests. dotnet new in the
previous step added MSTest and the MSTest runner. Now, add the MathService class
library as another dependency to the project. Use the dotnet add reference command:
.NET CLI
You can see the entire file in the samples repository on GitHub.
/unit-testing-with-fsharp
unit-testing-with-fsharp.sln
/MathService
Source Files
MathService.fsproj
/MathService.Tests
Test Source Files
MathServiceTests.fsproj
F#
namespace MathService.Tests
open System
open Microsoft.VisualStudio.TestTools.UnitTesting
open MathService
[<TestClass>]
type TestClass () =
[<TestMethod>]
member this.TestMethodPassing() =
Assert.IsTrue(true)
[<TestMethod>]
member this.FailEveryTime() = Assert.IsTrue(false)
The [<TestClass>] attribute denotes a class that contains tests. The [<TestMethod>]
attribute denotes a test method that is run by the test runner. From the unit-testing-
with-fsharp directory, execute dotnet test to build the tests and the class library and
then run the tests. The MSTest test runner contains the program entry point to run your
tests. dotnet test starts the test runner using the unit test project you've created.
These two tests show the most basic passing and failing tests. TestMethodPassing passes, and
FailEveryTime fails. Now, create a test for the squaresOfOdds method. The squaresOfOdds
method returns a list of the squares of all odd integer values that are part of the input
sequence. Rather than trying to write all of those functions at once, you can iteratively
create tests that validate the functionality. Making each test pass means creating the
necessary functionality for the method.
The simplest test we can write is to call squaresOfOdds with all even numbers, where the
result should be an empty sequence of integers. Here's that test:
F#
[<TestMethod>]
member this.TestEvenSequence() =
let expected = Seq.empty<int> |> Seq.toList
let actual = MyMath.squaresOfOdds [2; 4; 6; 8; 10]
Assert.AreEqual(expected, actual)
Notice that the expected sequence has been converted to a list. The MSTest library relies
on many standard .NET types. That dependency means that your public interface and
expected results support ICollection rather than IEnumerable.
When you run the test, you see that your test fails. You haven't created the
implementation yet. Make this test pass by writing the simplest code in the Mathservice
class that works:
F#
let squaresOfOdds xs =
Seq.empty<int> |> Seq.toList
In the unit-testing-with-fsharp directory, run dotnet test again. The dotnet test
command runs a build for the MathService project and then for the MathService.Tests
project. After building both projects, it runs this single test. It passes.
F#
[<TestMethod>]
member public this.TestOnesAndEvens() =
let expected = [1; 1; 1; 1]
let actual = MyMath.squaresOfOdds [2; 1; 4; 1; 6; 1; 8; 1; 10]
Assert.AreEqual(expected, actual)
Executing dotnet test fails the new test. You must update the squaresOfOdds method to
handle this new test. You must filter all the even numbers out of the sequence to make
this test pass. You can do that by writing a small filter function and using Seq.filter :
F#
let squaresOfOdds xs =
xs
|> Seq.filter isOdd |> Seq.toList
Notice the call to Seq.toList . That creates a list, which implements the ICollection
interface.
There's one more step to go: square each of the odd numbers. Start by writing a new
test:
F#
[<TestMethod>]
member public this.TestSquaresOfOdds() =
let expected = [1; 9; 25; 49; 81]
let actual = MyMath.squaresOfOdds [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]
Assert.AreEqual(expected, actual)
You can fix the test by piping the filtered sequence through a map operation to
compute the square of each odd number:
F#
let squaresOfOdds xs =
xs
|> Seq.filter isOdd
|> Seq.map square
|> Seq.toList
You've built a small library and a set of unit tests for that library. You've structured the
solution so that adding new packages and tests is part of the normal workflow. You've
concentrated most of your time and effort on solving the goals of the application.
See also
dotnet new
dotnet sln
dotnet add reference
dotnet test
Unit testing Visual Basic .NET Core
libraries using dotnet test and MSTest
Article • 09/15/2021
This tutorial takes you through an interactive experience building a sample solution
step-by-step to learn unit testing concepts. If you prefer to follow the tutorial using a
pre-built solution, view or download the sample code before you begin. For download
instructions, see Samples and Tutorials.
This article is about testing a .NET Core project. If you're testing an ASP.NET Core
project, see Integration tests in ASP.NET Core.
Console
/unit-testing-vb-mstest
unit-testing-vb-mstest.sln
/PrimeService
Make PrimeService the current directory and run dotnet new classlib -lang VB to create
the source project. Rename Class1.VB to PrimeService.VB. You create a failing
implementation of the PrimeService class:
VB
Namespace Prime.Services
Public Class PrimeService
Public Function IsPrime(candidate As Integer) As Boolean
Throw New NotImplementedException("Please create a test first")
End Function
End Class
End Namespace
Change the directory back to the unit-testing-vb-mstest directory. Run dotnet sln
add .\PrimeService\PrimeService.vbproj to add the class library project to the solution.
Creating the test project
Next, create the PrimeService.Tests directory. The following outline shows the directory
structure:
Console
/unit-testing-vb-mstest
unit-testing-vb-mstest.sln
/PrimeService
Source Files
PrimeService.vbproj
/PrimeService.Tests
Make the PrimeService.Tests directory the current directory and create a new project
using dotnet new mstest -lang VB. This command creates a test project that uses MSTest
as the test library. The generated template configures the test runner in the
PrimeServiceTests.vbproj:
XML
<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.5.0" />
<PackageReference Include="MSTest.TestAdapter" Version="1.1.18" />
<PackageReference Include="MSTest.TestFramework" Version="1.1.18" />
</ItemGroup>
The test project requires other packages to create and run unit tests. dotnet new in the
previous step added MSTest and the MSTest runner. Now, add the PrimeService class
library as another dependency to the project. Use the dotnet add reference command:
.NET CLI
dotnet add reference ../PrimeService/PrimeService.vbproj
You can see the entire file in the samples repository on GitHub.
Console
/unit-testing-vb-mstest
unit-testing-vb-mstest.sln
/PrimeService
Source Files
PrimeService.vbproj
/PrimeService.Tests
Test Source Files
PrimeServiceTests.vbproj
Write a test class with a first test method that checks that 1 isn't prime:
VB
Imports Microsoft.VisualStudio.TestTools.UnitTesting

Namespace PrimeService.Tests
    <TestClass>
    Public Class PrimeService_IsPrimeShould
        Private _primeService As Prime.Services.PrimeService = New Prime.Services.PrimeService()

        <TestMethod>
        Sub IsPrime_InputIs1_ReturnFalse()
            Dim result As Boolean = _primeService.IsPrime(1)

            Assert.IsFalse(result, "1 should not be prime")
        End Sub
    End Class
End Namespace
The <TestClass> attribute indicates a class that contains tests. The <TestMethod>
attribute denotes a method that is run by the test runner. From the unit-testing-vb-
mstest, execute dotnet test to build the tests and the class library and then run the tests.
The MSTest test runner contains the program entry point to run your tests. dotnet test
starts the test runner using the unit test project you've created.
Your test fails. You haven't created the implementation yet. Make this test pass by
writing the simplest code in the PrimeService class that works:
VB
Public Function IsPrime(candidate As Integer) As Boolean
    If candidate = 1 Then
        Return False
    End If
    Throw New NotImplementedException("Please create a test first")
End Function
In the unit-testing-vb-mstest directory, run dotnet test again. The dotnet test
command runs a build for the PrimeService project and then for the
PrimeService.Tests project. After building both projects, it runs this single test. It
passes.
MSTest provides attributes that enable you to write a suite of similar tests. A <DataTestMethod> attribute
represents a suite of tests that execute the same code but have different input
arguments. You can use the <DataRow> attribute to specify values for those inputs.
Instead of creating new tests, apply these two attributes to create a single theory. The
theory is a method that tests several values less than two, which is the lowest prime
number:
VB
<TestClass>
Public Class PrimeService_IsPrimeShould
    Private _primeService As Prime.Services.PrimeService = New Prime.Services.PrimeService()

    <DataTestMethod>
    <DataRow(-1)>
    <DataRow(0)>
    <DataRow(1)>
    Sub IsPrime_ValuesLessThan2_ReturnFalse(value As Integer)
        Dim result As Boolean = _primeService.IsPrime(value)

        Assert.IsFalse(result, $"{value} should not be prime")
    End Sub

    <DataTestMethod>
    <DataRow(2)>
    <DataRow(3)>
    <DataRow(5)>
    <DataRow(7)>
    Public Sub IsPrime_PrimesLessThan10_ReturnTrue(value As Integer)
        Dim result As Boolean = _primeService.IsPrime(value)

        Assert.IsTrue(result, $"{value} should be prime")
    End Sub

    <DataTestMethod>
    <DataRow(4)>
    <DataRow(6)>
    <DataRow(8)>
    <DataRow(9)>
    Public Sub IsPrime_NonPrimesLessThan10_ReturnFalse(value As Integer)
        Dim result As Boolean = _primeService.IsPrime(value)

        Assert.IsFalse(result, $"{value} should not be prime")
    End Sub
End Class
Run dotnet test, and two of these tests fail. To make all of the tests pass, change the if clause at the beginning of the method:
VB
If candidate < 2 Then
    Return False
End If
Continue to iterate by adding more tests, more theories, and more code in the main library. You have the finished version of the tests and the complete implementation of the library.
You've built a small library and a set of unit tests for that library. You've structured the
solution so that adding new packages and tests is part of the normal workflow. You've
concentrated most of your time and effort on solving the goals of the application.
MSTest attributes
Article • 07/25/2024
To help provide a clearer overview of the testing framework, this section organizes the
members of the Microsoft.VisualStudio.TestTools.UnitTesting namespace into groups of
related functionality.
Note
Attributes whose names end with "Attribute" can be used with or without "Attribute" at the end. Attributes that have a parameterless constructor can be written with or without parentheses. The following code examples work identically:
[TestClass()]
[TestClassAttribute()]
[TestClass]
[TestClassAttribute]
TestClassAttribute
The TestClass attribute marks a class that contains tests and, optionally, initialize or
cleanup methods.
Example:
C#
[TestClass]
public class MyTestClass
{
}
TestMethodAttribute
The TestMethod attribute is used inside a TestClass to define the actual test method to
run.
The method should be an instance public method defined as void , Task , or ValueTask
(starting with MSTest v3.3). It can optionally be async but should not be async void .
The method should have zero parameters, unless it's used with [DataRow] ,
[DynamicData] or similar attribute that provides test case data to the test method.
C#
[TestClass]
public class MyTestClass
{
[TestMethod]
public void TestMethod()
{
}
}
DataRowAttribute
DataSourceAttribute
DataTestMethodAttribute
DynamicDataAttribute
DataRowAttribute
The DataRowAttribute allows you to run the same test method with multiple different
inputs. It can appear one or multiple times on a test method. It should be combined
with TestMethodAttribute or DataTestMethodAttribute .
The number and types of arguments must exactly match the test method signature.
Consider the following example of a valid test class demonstrating the DataRow attribute
usage with inline arguments that align to test method parameters:
C#
[TestClass]
public class TestClass
{
[TestMethod]
[DataRow(1, "message", true, 2.0)]
public void TestMethod1(int i, string s, bool b, float f)
{
// Omitted for brevity.
}
[TestMethod]
[DataRow(new string[] { "line1", "line2" })]
public void TestMethod2(string[] lines)
{
// Omitted for brevity.
}
[TestMethod]
[DataRow(null)]
public void TestMethod3(object o)
{
// Omitted for brevity.
}
[TestMethod]
[DataRow(new string[] { "line1", "line2" }, new string[] { "line1.", "line2." })]
public void TestMethod4(string[] input, string[] expectedOutput)
{
// Omitted for brevity.
}
}
Note
You can also use the params feature to capture multiple inputs of the DataRow.
C#
[TestClass]
public class TestClass
{
[TestMethod]
[DataRow(1, 2, 3, 4)]
public void TestMethod(params int[] values) {}
}
The following examples aren't valid because the inline data doesn't match the test method signature:
C#
[TestClass]
public class TestClass
{
[TestMethod]
[DataRow(1, 2)] // Not valid, we are passing 2 inline data but signature expects 1
public void TestMethod1(int i) {}
[TestMethod]
[DataRow(1)] // Not valid, we are passing 1 inline data but signature expects 2
public void TestMethod2(int i, int j) {}
[TestMethod]
[DataRow(1)] // Not valid, count matches but types do not match
public void TestMethod3(string s) {}
}
Note
Starting with MSTest v3, when you want to pass exactly two arrays, you no longer need to wrap the second array in an object array.
Before: [DataRow(new string[] { "a" }, new object[] { new string[] { "b" } })]
Starting with v3: [DataRow(new string[] { "a" }, new string[] { "b" })]
You can modify the display name used in Visual Studio and loggers for each instance of
DataRowAttribute by setting the DisplayName property.
C#
[TestClass]
public class TestClass
{
[TestMethod]
[DataRow(1, 2, DisplayName = "Functional Case FC100.1")]
public void TestMethod(int i, int j) {}
}
You can also create your own specialized data row attribute by inheriting the
DataRowAttribute .
C#
[TestClass]
public class TestClass
{
[TestMethod]
[MyCustomDataRow(1)]
public void TestMethod(int i) {}
}
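The MyCustomDataRow attribute used above isn't an MSTest built-in type; a minimal sketch of such a derived attribute, assuming it simply forwards a single value to the base DataRowAttribute constructor, could look like this:
C#
// Hypothetical custom data row attribute that forwards its value to DataRowAttribute.
public class MyCustomDataRowAttribute : DataRowAttribute
{
    public MyCustomDataRowAttribute(int value)
        : base(value)
    {
    }
}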
Assembly level
AssemblyInitialize is called right after your assembly is loaded and AssemblyCleanup is
called right before your assembly is unloaded.
The methods marked with these attributes should be defined as static void , static
Task or static ValueTask (starting with MSTest v3.3), in a TestClass , and appear only
once. The initialize method requires one argument of type TestContext, and the cleanup method takes no arguments.
C#
[TestClass]
public class MyTestClass
{
[AssemblyInitialize]
public static void AssemblyInitialize(TestContext testContext)
{
}
[AssemblyCleanup]
public static void AssemblyCleanup()
{
}
}
C#
[TestClass]
public class MyOtherTestClass
{
[AssemblyInitialize]
public static async Task AssemblyInitialize(TestContext testContext)
{
}
[AssemblyCleanup]
public static async Task AssemblyCleanup()
{
}
}
Class level
ClassInitialize is called right before your class is loaded (but after static constructor) and
ClassCleanup is called right after your class is unloaded.
It's possible to control the inheritance behavior: only for current class using
InheritanceBehavior.None or for all derived classes using
InheritanceBehavior.BeforeEachDerivedClass .
It's also possible to configure whether the class cleanup should be run at the end of the
class or at the end of the assembly.
The methods marked with these attributes should be defined as static void , static
Task or static ValueTask (starting with MSTest v3.3), in a TestClass , and appear only
once. The initialize method requires one argument of type TestContext, and the cleanup method takes no arguments.
C#
[TestClass]
public class MyTestClass
{
[ClassInitialize]
public static void ClassInitialize(TestContext testContext)
{
}
[ClassCleanup]
public static void ClassCleanup()
{
}
}
C#
[TestClass]
public class MyOtherTestClass
{
[ClassInitialize]
public static async Task ClassInitialize(TestContext testContext)
{
}
[ClassCleanup]
public static async Task ClassCleanup()
{
}
}
Test level
TestInitialize is called right before your test is started and TestCleanup is called right
after your test is finished.
The TestInitialize is similar to the class constructor but is usually more suitable for
long or async initializations. The TestInitialize is always called after the constructor
and called for each test (including each data row of data-driven tests).
The TestCleanup is similar to the class Dispose (or DisposeAsync ) but is usually more
suitable for long or async cleanups. The TestCleanup is always called just before the
DisposeAsync / Dispose and called for each test (including each data row of data-driven
tests).
The methods marked with these attributes should be defined as void , Task or
ValueTask (starting with MSTest v3.3), in a TestClass , be parameterless, and appear one
or multiple times.
C#
[TestClass]
public class MyTestClass
{
[TestInitialize]
public void TestInitialize()
{
}
[TestCleanup]
public void TestCleanup()
{
}
}
C#
[TestClass]
public class MyOtherTestClass
{
[TestInitialize]
public async Task TestInitialize()
{
}
[TestCleanup]
public async Task TestCleanup()
{
}
}
TimeoutAttribute
The Timeout attribute can be used to specify the maximum time in milliseconds that a
test method is allowed to run. If the test method runs longer than the specified time, the
test will be aborted and marked as failed.
This attribute can be applied to any test method or any fixture method (initialization and
cleanup methods). It is also possible to specify the timeout globally for either all test
methods or all test fixture methods by using the timeout properties of the runsettings
file.
Note
The timeout is not guaranteed to be precise. The test will be aborted after the
specified time has passed, but it may take a few milliseconds longer.
When using the timeout feature, a separate thread/task is created to run the test
method. The main thread/task is responsible for monitoring the timeout and
unobserving the method thread/task if the timeout is reached.
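For example, a test method can be capped at a fixed number of milliseconds; the 2000 ms value below is only an illustration:
C#
[TestClass]
public class MyTestClass
{
    [TestMethod]
    [Timeout(2000)] // The test is aborted and marked as failed if it runs longer than 2 seconds.
    public void LongRunningTest()
    {
    }
}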
STATestClassAttribute
When applied to a test class, the [STATestClass] attribute indicates that all test
methods (and the [ClassInitialize] and [ClassCleanup] methods) in the class should
be run in a single-threaded apartment (STA). This attribute is useful when the test
methods interact with COM objects that require STA.
STATestMethodAttribute
When applied to a test method, the [STATestMethod] attribute indicates that the test
method should be run in a single-threaded apartment (STA). This attribute is useful
when the test method interacts with COM objects that require STA.
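As a sketch (assuming a version of MSTest that ships these attributes), the class-level and method-level forms look like this; the type and method names are illustrative:
C#
[STATestClass] // Every test method in this class runs in a single-threaded apartment.
public class ComInteropTests
{
    [TestMethod]
    public void TestUsingComObject()
    {
    }
}

[TestClass]
public class MixedTests
{
    [STATestMethod] // Only this test method runs in a single-threaded apartment.
    public void TestUsingComObject()
    {
    }
}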
ParallelizeAttribute
By default, MSTest runs tests in a sequential order. The Parallelize attribute can be used
to run tests in parallel. This is an assembly level attribute. You can specify if the
parallelism should be at class level (multiple classes can be run in parallel but tests in a
given class are run sequentially) or at method level.
It's also possible to specify the maximum number of threads to use for parallel
execution. A value of 0 (default value) means that the number of threads is equal to the
number of logical processors on the machine.
It is also possible to specify the parallelism through the parallelization properties of the
runsettings file.
DoNotParallelizeAttribute
Note
By default, MSTest runs tests in sequential order so you only need to use this
attribute if you have applied the [Parallelize] attribute at the assembly level.
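A sketch combining the assembly-level opt-in with a method-level opt-out (a worker count of 0 uses as many threads as there are logical processors):
C#
// Typically placed once in the test project, for example in AssemblyInfo.cs.
[assembly: Parallelize(Workers = 0, Scope = ExecutionScope.MethodLevel)]

[TestClass]
public class MyTestClass
{
    [TestMethod]
    public void RunsInParallel()
    {
    }

    [TestMethod]
    [DoNotParallelize] // Excluded from parallel execution.
    public void RunsSequentially()
    {
    }
}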
Utilities attributes
DeploymentItemAttribute
It can be used either on test classes (classes marked with TestClass attribute) or on test
methods (methods marked with TestMethod attribute).
Users can have multiple instances of the attribute to specify more than one item.
Example
C#
[TestClass]
[DeploymentItem(@"C:\classLevelDepItem.xml")] // Copy file using some
absolute path
public class UnitTest1
{
[TestMethod]
[DeploymentItem(@"..\..\methodLevelDepItem1.xml")] // Copy file using
a relative path from the dll output location
[DeploymentItem(@"C:\DataFiles\methodLevelDepItem2.xml",
"SampleDataFiles")] // File will be added under a SampleDataFiles in the
deployment directory
public void TestMethod1()
{
string textFromFile = File.ReadAllText("classLevelDepItem.xml");
}
}
Warning
We don't recommend using this attribute to copy files to the deployment directory.
ExpectedExceptionAttribute
Warning
This attribute exists for backward compatibility and is not recommended for new
tests. Instead, use the Assert.ThrowsException method.
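For example, instead of decorating the test with [ExpectedException], the assertion can target only the statement that is expected to throw (the Person type here is illustrative):
C#
[TestMethod]
public void Constructor_NullName_Throws()
{
    // Only this call is expected to throw, not the whole test body.
    Assert.ThrowsException<ArgumentNullException>(() => new Person(null));
}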
Metadata attributes
The following attributes and the values assigned to them appear in the Visual Studio
Properties window for a particular test method. These attributes aren't meant to be
accessed through the code of the test. Instead, they affect the ways the test is used or
run, either by you through the IDE of Visual Studio, or by the Visual Studio test engine.
For example, some of these attributes appear as columns in the Test Manager window
and Test Results window, which means that you can use them to group and sort tests
and test results. One such attribute is TestPropertyAttribute, which you use to add
arbitrary metadata to tests.
For example, you could use it to store the name of a "test pass" that this test covers, by
marking the test with [TestProperty("Feature", "Accessibility")] . Or, you could use it
to store an indicator of the kind of test it is with [TestProperty("ProductMilestone", "42")]. The property you create by using this attribute, and the property value you
assign, are both displayed in the Visual Studio Properties window under the heading
Test specific.
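For example, using the values mentioned above (the test method name is illustrative):
C#
[TestMethod]
[TestProperty("Feature", "Accessibility")]
[TestProperty("ProductMilestone", "42")]
public void HighContrastMode_RendersCorrectly()
{
}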
DescriptionAttribute
IgnoreAttribute
OwnerAttribute
PriorityAttribute
TestCategoryAttribute
TestPropertyAttribute
WorkItemAttribute
The attributes below relate the test method that they decorate to entities in the project
hierarchy of a Team Foundation Server team project:
CssIterationAttribute
CssProjectStructureAttribute
MSTest assertions
Article • 07/25/2024
Run tests with MSTest
Article • 07/25/2024
There are several ways to run MSTest tests depending on your needs. You can run tests
from an IDE (for example, Visual Studio, Visual Studio Code, or JetBrains Rider), or from
the command line, or from a CI service (such as GitHub Actions or Azure DevOps).
Historically, MSTest relied on VSTest for running tests in all contexts but starting with
version 3.2.0, MSTest has its own test runner. This new runner is more lightweight and
faster than VSTest, and it's the recommended way to run MSTest tests.
MSTest runner overview
Article • 09/10/2024
The MSTest runner is a lightweight and portable alternative to VSTest for running tests
in all contexts (for example, continuous integration (CI) pipelines, CLI, Visual Studio Test
Explorer, and VS Code Text Explorer). The MSTest runner is embedded directly in your
MSTest test projects, and there are no other app dependencies, such as vstest.console
or dotnet test , needed to run your tests.
When you use the MSTest SDK, you're opted in to using the MSTest runner by default.
XML
<Project Sdk="MSTest.Sdk/3.3.1">
<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
</PropertyGroup>
</Project>
Alternatively, you can enable MSTest runner by adding the EnableMSTestRunner property
and setting OutputType to Exe in your project file. You also need to ensure that you're
using MSTest 3.2.0-preview.23623.1 or newer.
XML
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<!-- Enable the MSTest runner, this is an opt-in feature -->
<EnableMSTestRunner>true</EnableMSTestRunner>
<OutputType>Exe</OutputType>
<TargetFramework>net8.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
<IsPackable>false</IsPackable>
<IsTestProject>true</IsTestProject>
</PropertyGroup>
<ItemGroup>
<!--
MSTest meta package is the recommended way to reference MSTest.
It's equivalent to referencing:
Microsoft.NET.Test.Sdk
MSTest.TestAdapter
MSTest.TestFramework
MSTest.Analyzers
-->
<PackageReference Include="MSTest" Version="3.2.0" />
<!--
Coverlet collector isn't compatible with MSTest runner, you can
either switch to Microsoft CodeCoverage (as shown below),
or switch to be using coverlet global tool
https://github.com/coverlet-coverage/coverlet#net-global-tool-guide-
suffers-from-possible-known-issue
-->
<PackageReference Include="Microsoft.Testing.Extensions.CodeCoverage"
Version="17.10.1" />
</ItemGroup>
</Project>
.runsettings
The MSTest runner supports runsettings through the --settings command-line option. For the full list of supported MSTest entries, see Configure MSTest:
.NET CLI
dotnet run --project Contoso.MyTests -- --settings config.runsettings
-or-
.NET CLI
dotnet Contoso.MyTests.dll --settings config.runsettings
-or-
.NET CLI
Contoso.MyTests.exe --settings config.runsettings
Tests filter
You can provide the tests filter seamlessly using the command line option --filter . The
following commands show some examples.
.NET CLI
dotnet run --project Contoso.MyTests -- --filter "FullyQualifiedName~UnitTest1|TestCategory=CategoryA"
-or-
.NET CLI
dotnet Contoso.MyTests.dll --filter "FullyQualifiedName~UnitTest1|TestCategory=CategoryA"
-or-
.NET CLI
Contoso.MyTests.exe --filter "FullyQualifiedName~UnitTest1|TestCategory=CategoryA"
Configure MSTest
Article • 09/12/2024
MSTest, the Microsoft Testing Framework, is a test framework for .NET applications. It allows you to write and execute tests, and provides test suites with integration into the Visual Studio and Visual Studio Code Test Explorers, the .NET CLI, and many CI pipelines.
MSTest is a fully supported, open-source, cross-platform test framework that works with all supported .NET targets (.NET Framework, .NET Core, .NET, UWP, WinUI, and so on), hosted on GitHub.
Runsettings
A .runsettings file can be used to configure how unit tests are being run. To learn more about
the runsettings and the configurations related to the platform, you can check out VSTest
runsettings documentation or MSTest runner runsettings documentation.
MSTest element
The following runsettings entries let you configure how MSTest behaves.
Example configuration values:
<AssemblyResolution>
  <Directory path="D:\myfolder\bin\" includeSubDirectories="false"/>
</AssemblyResolution>
<Parallelize><Workers>32</Workers><Scope>MethodLevel</Scope></Parallelize>
<ForcedLegacyMode>true</ForcedLegacyMode>
TestRunParameter element
XML
<TestRunParameters>
<Parameter name="webAppUrl" value="http://localhost" />
</TestRunParameters>
Test run parameters provide a way to define variables and values that are available to the tests
at run time. Access the parameters using the MSTest TestContext.Properties property:
C#
[TestMethod]
public void HomePageTest()
{
    // Properties values are stored as objects, so cast to the expected type.
    string _appUrl = (string)TestContext.Properties["webAppUrl"];
}
To use test run parameters, add a public TestContext property to your test class.
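For example (the class name is illustrative):
C#
[TestClass]
public class HomePageTests
{
    // MSTest assigns the current test context to this property before each test runs.
    public TestContext TestContext { get; set; }
}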
XML
<RunSettings>
  <MSTest>
    <DeleteDeploymentDirectoryAfterTestRunIsComplete>False</DeleteDeploymentDirectoryAfterTestRunIsComplete>
    <DeploymentEnabled>False</DeploymentEnabled>
    <ConsiderFixturesAsSpecialTests>False</ConsiderFixturesAsSpecialTests>
    <AssemblyResolution>
      <Directory path="D:\myfolder\bin\" includeSubDirectories="false"/>
    </AssemblyResolution>
  </MSTest>
</RunSettings>
MSTest code analysis
Article • 08/13/2024
MSTest analysis ("MSTESTxxxx") rules inspect your C# or Visual Basic code for security,
performance, design and other issues.
Tip
If you're using Visual Studio, many analyzer rules have associated code fixes that
you can apply to correct the problem. Code fixes are shown in the light bulb icon
menu.
Categories
Design rules
Design rules will help you create and maintain test suites that adhere to proper design
and good practices.
Performance rules
Suppression rules
Usage rules
MSTest design rules
Article • 05/29/2024
Design rules will help you create and maintain test suites that adhere to proper design
and good practices.
MSTEST0004: Public types should be
test classes
Article • 08/13/2024
Property Value
Rule ID MSTEST0004
Category Design
Enabled by default No
Cause
A public type is not a test class (class marked with the [TestClass] attribute).
Rule description
It's considered a good practice to keep all helper and base classes internal and have
only test classes marked public in a test project.
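For example (the type names are illustrative):
C#
// Violates MSTEST0004: a public type that isn't a test class.
public class TestDataBuilder
{
}

// Compliant: helper and base classes stay internal ...
internal class InternalTestDataBuilder
{
}

// ... and only test classes are public.
[TestClass]
public class CalculatorTests
{
}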
MSTEST0006: Avoid [ExpectedException]
Property Value
Rule ID MSTEST0006
Category Design
Cause
A method is marked with the [ExpectedException] attribute.
Rule description
Prefer Assert.ThrowsException or Assert.ThrowsExceptionAsync over the
[ExpectedException] attribute as it ensures that only the expected line of code throws
the expected exception, instead of acting on the whole body of the test. The assert APIs
also provide more flexibility and allow you to assert extra properties of the exception.
C#
[TestClass]
public class TestClass
{
[TestMethod]
[ExpectedException(typeof(InvalidOperationException))] // Violation
public void TestMethod()
{
// Arrange
var person = new Person
{
FirstName = "John",
LastName = "Doe",
};
person.SetAge(-1);
// Act
person.GrowOlder();
}
}
C#
[TestClass]
public class TestClass
{
[TestMethod]
public void TestMethod()
{
// Arrange
var person = new Person
{
FirstName = "John",
LastName = "Doe",
};
person.SetAge(-1);
// Act
Assert.ThrowsException<InvalidOperationException>(() => person.GrowOlder());
}
}
C#
[TestClass]
public class TestClass
{
[TestMethod]
[ExpectedException(typeof(ArgumentNullException))]
public void TestMethod()
{
new Person(null);
}
}
MSTEST0015: Test method should not
be ignored
Article • 08/13/2024
Property Value
Rule ID MSTEST0015
Category Design
Cause
A Test method should not be ignored.
Rule description
Test methods should not be ignored (marked with [Ignore] ).
MSTEST0016: Test class should have test method
Property Value
Rule ID MSTEST0016
Category Design
Cause
A test class should have a test method.
Rule description
A test class should have at least one test method or be static and have methods that are attributed with [AssemblyInitialize] or [AssemblyCleanup].
MSTEST0019: Prefer TestInitialize
methods over constructors
Article • 08/13/2024
Property Value
Rule ID MSTEST0019
Category Design
Enabled by default No
Cause
This rule raises a diagnostic when there is a parameterless explicit constructor declared
on a test class (class marked with [TestClass] ).
Rule description
Use this rule to enforce using [TestInitialize] for both synchronous and asynchronous test initialization. Asynchronous (async/await) test initialization requires the use of [TestInitialize] methods, because the resulting Task needs to be awaited.
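A sketch of the initialization style the rule encourages; the connection-string setup is illustrative:
C#
[TestClass]
public class MyTestClass
{
    private string _connectionString;

    // Preferred over a constructor when initialization is asynchronous,
    // because MSTest awaits the returned Task.
    [TestInitialize]
    public async Task InitializeAsync()
    {
        _connectionString = await LoadConnectionStringAsync();
    }

    // Hypothetical asynchronous setup helper.
    private static Task<string> LoadConnectionStringAsync() =>
        Task.FromResult("Server=localhost;Database=Tests");
}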
MSTEST0020: Prefer constructors over
TestInitialize methods
Article • 08/13/2024
Property Value
Rule ID MSTEST0020
Category Design
Enabled by default No
Cause
This rule raises a diagnostic when there is a void [TestInitialize] method.
Rule description
It is usually better to rely on constructors for non-async initialization as you can then
rely on readonly and get better compiler feedback when developing your tests. This is
especially true when dealing with nullable enabled contexts.
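A sketch of the constructor-based style this rule prefers for synchronous setup (the field and values are illustrative):
C#
[TestClass]
public class MyTestClass
{
    // readonly works here because the field is assigned in the constructor,
    // which runs before every test.
    private readonly List<int> _numbers;

    public MyTestClass()
    {
        _numbers = new List<int> { 1, 2, 3 };
    }

    [TestMethod]
    public void ContainsThreeItems() => Assert.AreEqual(3, _numbers.Count);
}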
MSTEST0021: Prefer Dispose over
TestCleanup methods
Article • 08/13/2024
Property Value
Rule ID MSTEST0021
Category Design
Enabled by default No
Cause
This rule raises a diagnostic when there is a void [TestCleanup] method, or on any [TestCleanup] method if the target framework supports the IAsyncDisposable interface.
Rule description
Using Dispose or DisposeAsync is a more common pattern and some developers prefer
to always use this pattern even for tests.
MSTEST0022: Prefer TestCleanup over
Dispose methods
Article • 08/13/2024
Property Value
Rule ID MSTEST0022
Category Design
Enabled by default No
Cause
This rule raises a diagnostic when a Dispose or DisposeAsync method is detected.
Rule description
Although Dispose or DisposeAsync is a more common pattern, some developers prefer to always use [TestCleanup] for their test cleanup phase, as the method allows an async pattern even in older versions of .NET.
MSTEST0025: Use 'Assert.Fail' instead of
an always-failing assert
Article • 05/14/2024
Property Value
Rule ID MSTEST0025
Category Design
Cause
This rule raises a diagnostic when a call to an assertion produces an always-false
condition.
Rule description
Using Assert.Fail over an always-failing assertion call provides clearer intent and
better documentation for the code.
When you encounter an assertion that always fails (for example, Assert.IsTrue(false) ),
it might not be immediately obvious to someone reading the code why the assertion is
there or what condition it's trying to check. This can lead to confusion and wasted time
for developers who come across the code later on.
In contrast, using Assert.Fail allows you to provide a custom failure message, making
it clear why the assertion is failing and what specific condition or scenario it's
addressing. This message serves as documentation for the intent behind the assertion,
helping other developers understand the purpose of the assertion without needing to
dive deep into the code.
Overall, using Assert.Fail promotes clarity, documentation, and maintainability in your
codebase, making it a better choice over an always failing assertion call.
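For example (the failure message text is illustrative):
C#
// Violates MSTEST0025: the intent behind the always-failing assertion is unclear.
Assert.IsTrue(false);

// Preferred: the message documents why this code path must not be reached.
Assert.Fail("The payment callback should never be invoked for a zero amount.");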
MSTEST0029: Public method should be
test method
Article • 09/07/2024
Property Value
Rule ID MSTEST0029
Category Design
Enabled by default No
Cause
A public method should be a test method.
Rule description
A public method of a class marked with [TestClass] should be a test method (marked with [TestMethod]). The rule ignores methods that are marked with the [TestInitialize] or [TestCleanup] attributes.
MSTEST0036: Do not use shadowing inside test class hierarchy
Property Value
Rule ID MSTEST0036
Category Design
Cause
Shadowing test members could cause testing issues (such as NRE).
Rule description
Shadowing test members could cause testing issues (such as NRE).
MSTEST0001: Explicitly enable or disable
tests parallelization
Article • 02/01/2024
Property Value
Rule ID MSTEST0001
Category Performance
Cause
The assembly is not marked with [assembly: Parallelize] or [assembly:
DoNotParallelize] attribute.
Rule description
By default, MSTest runs tests within the same assembly sequentially, which can lead to severe performance limitations. It's recommended to enable the assembly-level attribute [assembly: Parallelize] to run tests in parallel, or, if the assembly is known to not be parallelizable, to explicitly use the assembly-level attribute [assembly: DoNotParallelize]. When [assembly: Parallelize] is used without arguments, parallelization will be set at class level (not method level) and will use as many threads as possible (depending on internal implementation).
MSTest usage rules
Article • 03/21/2024
MSTEST0002: Test classes should have
valid layout
Article • 08/13/2024
Property Value
Rule ID MSTEST0002
Category Usage
Cause
A test class is not following one or multiple points of the required test class layout.
Rule description
Test classes (classes marked with the [TestClass] attribute) should follow the given
layout to be considered valid by MSTest:
MSTEST0003: Test methods should have
valid layout
Article • 08/13/2024
Property Value
Rule ID MSTEST0003
Category Usage
Cause
A test method is not following one or multiple points of the required test method layout.
Rule description
Test methods (methods marked with the [TestMethod] attribute) should follow the given
layout to be considered valid by MSTest:
MSTEST0005: Test context property
should have valid layout
Article • 09/07/2024
Property Value
Rule ID MSTEST0005
Category Usage
Cause
A test context property is not following one or multiple points of the required test context layout.
Rule description
TestContext properties should follow the given layout to be considered valid by MSTest:
MSTEST0007: Use test attributes only on test methods
Property Value
Rule ID MSTEST0007
Category Usage
Cause
A method that's not marked with TestMethodAttribute has one or more test attributes
applied to it.
Rule description
The following test attributes should only be applied on methods marked with the
TestMethodAttribute attribute:
CssIterationAttribute
CssProjectStructureAttribute
DescriptionAttribute
ExpectedExceptionAttribute
OwnerAttribute
PriorityAttribute
TestPropertyAttribute
WorkItemAttribute
How to fix violations
To fix a violation of this rule, either convert the method on which you applied the test
attributes to a test method by setting the [TestMethod] attribute or remove the test
attributes altogether.
MSTEST0008: TestInitialize method
should have valid layout
Article • 08/13/2024
Property Value
Rule ID MSTEST0008
Category Usage
Cause
A method marked with [TestInitialize] should have valid layout.
Rule description
Methods marked with [TestInitialize] should follow the following layout to be valid:
it should be public
it should not be abstract
it should not be async void
it should not be static
it should not be a special method (finalizer, operator...).
it should not be generic
it should not take any parameter
return type should be void , Task or ValueTask
The type declaring these methods should also respect the following rules:
The type should be a class .
The class should be public or internal (if the test project is using the
[DiscoverInternals] attribute).
MSTEST0009: TestCleanup method
should have valid layout
Article • 08/13/2024
Property Value
Rule ID MSTEST0009
Category Usage
Cause
A method marked with [TestCleanup] should have valid layout.
Rule description
Methods marked with [TestCleanup] should follow the following layout to be valid:
it should be public
it should not be abstract
it should not be async void
it should not be static
it should not be a special method (finalizer, operator...).
it should not be generic
it should not take any parameter
return type should be void , Task or ValueTask
The type declaring these methods should also respect the following rules:
The type should be a class .
The class should be public or internal (if the test project is using the
[DiscoverInternals] attribute).
MSTEST0010: ClassInitialize method
should have valid layout
Article • 08/13/2024
Property Value
Rule ID MSTEST0010
Category Usage
Cause
A method marked with [ClassInitialize] should have valid layout.
Rule description
Methods marked with [ClassInitialize] should follow the following layout to be valid:
The type declaring these methods should also respect the following rules:
MSTEST0011: ClassCleanup method
should have valid layout
Article • 08/13/2024
Property Value
Rule ID MSTEST0011
Category Usage
Cause
A method marked with [ClassCleanup] should have valid layout.
Rule description
Methods marked with [ClassCleanup] should follow the following layout to be valid:
The type declaring these methods should also respect the following rules:
MSTEST0012: AssemblyInitialize method
should have valid layout
Article • 08/13/2024
Property Value
Rule ID MSTEST0012
Category Usage
Cause
A method marked with [AssemblyInitialize] should have valid layout.
Rule description
Methods marked with [AssemblyInitialize] should follow the following layout to be
valid:
MSTEST0013: AssemblyCleanup method
should have valid layout
Article • 08/13/2024
Property Value
Rule ID MSTEST0013
Category Usage
Cause
A method marked with [AssemblyCleanup] should have valid layout.
Rule description
Methods marked with [AssemblyCleanup] should follow the following layout to be valid:
The type declaring these methods should also respect the following rules:
The type should be a class.
The class should be public or internal (if the test project is using the
[DiscoverInternals] attribute).
The class shouldn't be static.
The class should be marked with [TestClass] (or a derived attribute)
The class should not be generic.
MSTEST0014: DataRow should be valid
Article • 08/13/2024
Property Value
Rule ID MSTEST0014
Category Usage
Cause
An instance of [DataRow] is not following one or multiple points of the required DataRow
layout.
Rule description
[DataRow] instances should have the following layout to be valid:
MSTEST0017: Assertion arguments
should be passed in the correct order
Article • 03/21/2024
Property Value
Rule ID MSTEST0017
Category Usage
Cause
This rule raises an issue when calls to Assert.AreEqual , Assert.AreNotEqual ,
Assert.AreSame or Assert.AreNotSame are following one or multiple of the patterns
below:
Rule description
MSTest Assert.AreEqual , Assert.AreNotEqual , Assert.AreSame and Assert.AreNotSame
expect the first argument to be the expected/unexpected value and the second
argument to be the actual value.
Having the expected value and the actual value in the wrong order will not alter the
outcome of the test (succeeds/fails when it should), but the assertion failure will contain
misleading information.
How to fix violations
Ensure that the actual and expected/notExpected arguments are passed in the correct order.
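For example (the variable and the expected value are illustrative):
C#
int actualCount = items.Count; // 'items' is the collection under test.

// Violation: expected and actual are swapped, so a failure reports misleading values.
Assert.AreEqual(actualCount, 3);

// Correct order: expected value first, actual value second.
Assert.AreEqual(3, actualCount);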
MSTEST0018: DynamicData should be
valid
Article • 08/13/2024
Property Value
Rule ID MSTEST0018
Category Usage
Cause
A method marked with [DynamicData] should have valid layout.
Rule description
Methods marked with [DynamicData] should also be marked with [TestMethod] (or a
derived attribute).
Example:
C#
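// The original sample isn't shown here; this is a minimal sketch of a valid usage,
// assuming the test data comes from a static property on the same class.
[TestClass]
public class TestClass
{
    public static IEnumerable<object[]> Data =>
        new[]
        {
            new object[] { 1, "one" },
            new object[] { 2, "two" },
        };

    [TestMethod]
    [DynamicData(nameof(Data))] // By default, the data source is treated as a property.
    public void TestMethod1(int value, string name)
    {
        // Omitted for brevity.
    }
}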
MSTEST0023: Do not negate boolean assertions
Property Value
Rule ID MSTEST0023
Category Usage
Cause
This rule raises a diagnostic when a call to Assert.IsTrue or Assert.IsFalse contains a
negated argument.
Rule description
The MSTest assertion library contains opposite APIs that make it easier to test true and false cases. It's recommended to use the right API for the right case, as it improves readability and produces clearer failure messages.
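For example (the condition name is illustrative):
C#
// Violates MSTEST0023: the negation hides the intent and worsens the failure message.
Assert.IsTrue(!user.IsActive);

// Preferred: use the opposite assertion API.
Assert.IsFalse(user.IsActive);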
MSTEST0024: Do not store TestContext
in a static member
Article • 08/13/2024
Property Value
Rule ID MSTEST0024
Category Usage
Cause
This rule raises a diagnostic when an assignment to a static member of a TestContext
parameter is done.
Rule description
The TestContext parameter passed to each initialize method ([AssemblyInitialize] or [ClassInitialize]) is specific to the current context and is not updated on each test execution. Storing this TestContext object for reuse will, most of the time, lead to issues.
MSTEST0026: Avoid conditional access
in assertions
Article • 08/13/2024
Property Value
Rule ID MSTEST0026
Category Usage
Cause
This rule raises a diagnostic when an argument containing a null conditional operator
(?.) or ?[] is passed to the assertion methods below:
Assert.IsTrue
Assert.IsFalse
Assert.AreEqual
Assert.AreNotEqual
Assert.AreSame
Assert.AreNotSame
CollectionAssert.AreEqual
CollectionAssert.AreNotEqual
CollectionAssert.AreEquivalent
CollectionAssert.AreNotEquivalent
CollectionAssert.Contains
CollectionAssert.DoesNotContain
CollectionAssert.AllItemsAreNotNull
CollectionAssert.AllItemsAreUnique
CollectionAssert.AllItemsAreInstancesOfType
CollectionAssert.IsSubsetOf
CollectionAssert.IsNotSubsetOf
StringAssert.Contains
StringAssert.StartsWith
StringAssert.EndsWith
StringAssert.Matches
StringAssert.DoesNotMatch
Rule description
The purpose of assertions in unit tests is to verify that certain conditions are met. When
a conditional access operator is used in an assertion, it introduces an additional
condition that may or may not be met, depending on the state of the object being
accessed. This can lead to inconsistent test results and make the test less clear.
C#
// Fixed code: assert non-nullness first, then assert without conditional access.
Assert.IsNotNull(company);
Assert.AreEqual("Contoso", company.Name);
StringAssert.Contains(company.Address, "Brazil");
MSTEST0030: Type containing
[TestMethod] should be marked with
[TestClass]
Article • 08/13/2024
Property Value
Rule ID MSTEST0030
Category Usage
Cause
Type containing [TestMethod] should be marked with [TestClass] , otherwise the test
method will be silently ignored.
Rule description
MSTest considers test methods only in the context of a test class container (a class marked with [TestClass] or a derived attribute), which could lead to some tests being silently ignored. If your class is supposed to represent common test behavior to be executed by child classes, it's recommended to mark the type as abstract to clarify the intent for other developers reading the code.
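For example (the class and method names are illustrative):
C#
// Violation: the test method is silently ignored because the containing
// class isn't marked with [TestClass].
public class AccountTests
{
    [TestMethod]
    public void Withdraw_ReducesBalance()
    {
    }
}

// Fix: mark the container as a test class (or as abstract, if it only defines
// shared behavior for derived test classes).
[TestClass]
public class FixedAccountTests
{
    [TestMethod]
    public void Withdraw_ReducesBalance()
    {
    }
}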
MSTEST0031:
System.ComponentModel.DescriptionAttri
bute has no effect on test methods.
Article • 08/13/2024
Property Value
Rule ID MSTEST0031
Category Usage
Cause
'System.ComponentModel.DescriptionAttribute' has no effect in the context of tests.
Rule description
'System.ComponentModel.DescriptionAttribute' has no effect in the context of tests, so the user likely intended to use 'Microsoft.VisualStudio.TestTools.UnitTesting.DescriptionAttribute' instead.
MSTEST0032: Review or remove the
assertion as its condition is known to be
always true.
Article • 08/13/2024
Property Value
Rule ID MSTEST0032
Category Usage
Cause
This rule raises a diagnostic when a call to an assertion produces an always-true
condition.
Rule description
When you encounter an assertion that always passes (for example,
Assert.IsTrue(true) ), it's not obvious to someone reading the code why the assertion is
there or what condition it's trying to check. This can lead to confusion and wasted time
for developers who come across the code later on.
Review the assertion and update it so that it checks meaningful conditions, or remove it.
MSTEST0034: Use
ClassCleanupBehavior.EndOfClass with
the [ClassCleanup] .
Article • 08/13/2024
Property Value
Rule ID MSTEST0034
Category Usage
Cause
This rule raises a diagnostic when ClassCleanupBehavior.EndOfClass isn't set with the
[ClassCleanup] .
Rule description
Without using ClassCleanupBehavior.EndOfClass , the [ClassCleanup] will by default be
run at the end of the assembly and not at the end of the class.
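For example, passing the behavior to the attribute keeps the cleanup scoped to the class:
C#
[TestClass]
public class MyTestClass
{
    // Runs when the last test of this class finishes, rather than at the end of the assembly.
    [ClassCleanup(ClassCleanupBehavior.EndOfClass)]
    public static void ClassCleanup()
    {
    }
}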
MSTEST0035: [DeploymentItem] can be
specified only on test class or test
method.
Article • 09/12/2024
Property Value
Rule ID MSTEST0035
Category Usage
Cause
This rule raises a diagnostic when [DeploymentItem] is applied to something that isn't a test class or a test method.
Rule description
When [DeploymentItem] isn't placed on a test class or a test method, it's ignored.
The test project builds into an executable that you can run (and debug) directly. There's no extra test running console or command. The app exits with a nonzero exit code if there's an error, as typical with most executables. For more information on the known exit codes, see Microsoft.Testing.Platform exit codes.
Important
Publishing the test project using dotnet publish and running the app directly is another way to run your tests. For example, executing ./Contoso.MyTests.exe. In some scenarios it's also viable to use dotnet build to produce the executable, but there can be edge cases to consider, such as Native AOT.
.NET CLI
dotnet run --project Contoso.MyTests
-or-
.NET CLI
dotnet Contoso.MyTests.dll
Note
Providing the path to the test project executable (*.exe) results in an error:
Output
Error:
  An assembly specified in the application dependencies manifest (Contoso.MyTests.deps.json) has already been found but with a different file extension:
    package: 'Contoso.MyTests', version: '1.0.0'
    path: 'Contoso.MyTests.dll'
    previously found assembly: 'S:\t\Contoso.MyTests\bin\Debug\net8.0\Contoso.MyTests.exe'
The runner also integrates with dotnet test, ensuring you can run your tests as before while enabling the new execution scenario.
.NET CLI
dotnet test
Options
The list below describes only the platform options. To see the specific options brought by each extension, either refer to the extension documentation page or use the --help option.
--diagnostic
Enables the diagnostic logging. The default log level is Trace . The file is written in the
output directory with the following name format, log_[MMddHHssfff].diag .
--diagnostic-filelogger-synchronouswrite
Forces the built-in file logger to synchronously write logs. Useful for scenarios where
you don't want to lose any log entries (if the process crashes). This does slow down the
test execution.
--diagnostic-output-directory
The output directory of the diagnostic logging, if not specified the file is generated in
the default TestResults directory.
--diagnostic-output-fileprefix
--diagnostic-verbosity
Defines the verbosity level when the --diagnostic switch is used. The available values
are Trace , Debug , Information , Warning , Error , or Critical .
--help
--ignore-exit-code
Allows some non-zero exit codes to be ignored, and instead returned as 0 . For more
information, see Ignore specific exit codes.
--info
Displays advanced information about the .NET Test Application such as:
The platform.
The environment.
Each registered command line provider, such as its name, version, description, and options.
Each registered tool, such as its command, name, version, description, and all command line providers.
This feature is used to understand extensions that would be registering the same
command line option or the changes in available options between multiple versions of
an extension (or the platform).
--list-tests
--minimum-expected-tests
Specifies the minimum number of tests that are expected to run. By default, at least one
test is expected to run.
--results-directory
The directory where the test results are going to be placed. If the specified directory
doesn't exist, it's created. The default is TestResults in the directory that contains the
test application.
MSBuild integration
The NuGet package Microsoft.Testing.Platform.MSBuild provides various integrations
for Microsoft.Testing.Platform with MSBuild:
Support for dotnet test . For more information, see dotnet test integration.
Support for ProjectCapability required by Visual Studio and Visual Studio Code
Test Explorers.
Automatic generation of the entry point ( Main method).
Automatic generation of the configuration file.
Note
This integration works in a transitive way (a project that references another project
referencing this package will behave as if it references the package) and can be
disabled through the IsTestingPlatformApplication MSBuild property.
See also
Microsoft.Testing.Platform and VSTest comparison
Microsoft.Testing.Platform extensions
Microsoft.Testing.Platform telemetry
Microsoft.Testing.Platform exit codes
Microsoft.Testing.Platform FAQ
Article • 09/10/2024
Remove your manually defined entry point, typically the Main method in Program.cs, and let the testing platform generate one for you.
Set the <IsTestingPlatformApplication>false</IsTestingPlatformApplication> MSBuild property in the project that references a test project. This is needed when you reference a test project from a non-test project, for example, a console app that references a test application.
Microsoft.Testing.Platform and VSTest
comparison
Article • 03/19/2024
VSTest namespaces
VSTest is a collection of testing tools that are also known as the Test Platform. The
VSTest source code is open-source and available in the microsoft/vstest GitHub
repository. The code uses the Microsoft.TestPlatform.* namespace.
VSTest is extensible and common types are placed in
Microsoft.TestPlatform.ObjectModel NuGet package.
Microsoft.Testing.Platform namespaces
Microsoft.Testing.Platform is based on Microsoft.Testing.Platform NuGet package and
other libraries in the Microsoft.Testing.* namespace. Like VSTest, the
Microsoft.Testing.Platform is open-source and has a microsoft/testfx GitHub
repository.
Communication protocol
Note
VSTest also uses a JSON based communication protocol, but it's not JSON-RPC based.
XML
<ItemGroup>
<ProjectCapability Remove="TestingPlatformServer" />
</ItemGroup>
Executables
VSTest ships multiple executables, notably vstest.console.exe, testhost.exe, and datacollector.exe. In contrast, Microsoft.Testing.Platform is embedded directly into your test project and doesn't ship any other executables. The executable your test project compiles to is used to host all the testing tools and carry out all the tasks needed to run tests.
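For example, assuming a test project named Contoso.MyTests (hypothetical) that targets net8.0, you build it and then invoke the produced executable directly; a minimal sketch:
.NET CLI
dotnet build Contoso.MyTests
./Contoso.MyTests/bin/Debug/net8.0/Contoso.MyTests --list-tests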
Microsoft.Testing.Platform
configuration settings
Article • 08/16/2024
testconfig.json
The test platform uses a configuration file named [appname].testconfig.json to configure
the behavior of the test platform. The testconfig.json file is a JSON file that contains
configuration settings for the test platform.
JSON
{
"platformOptions": {
"config-property-name1": "config-value1",
"config-property-name2": "config-value2"
}
}
The platform will automatically detect and load the [appname].testconfig.json file located
in the output directory of the test project (close to the executable).
Environment variables
Environment variables can be used to supply some runtime configuration information.
Microsoft.Testing.Platform extensions
Article • 06/27/2024
Each extension ships with its own licensing model (some are less permissive than others), so be sure to review the license associated with any extension you want to use.
Extensions
Code Coverage
Diagnostics
Hosting
Policy
Test Reports
Extensions that produce test report files containing information about the execution and outcome of the tests.
VSTest Bridge
This extension provides a compatibility layer with VSTest that allows test frameworks depending on it to continue to support running in VSTest mode (vstest.console.exe, the usual dotnet test, the VSTest task on Azure DevOps, and the Test Explorers of Visual Studio and Visual Studio Code).
Microsoft Fakes
This extension provides support to execute a test project that makes use of Microsoft
Fakes .
Code coverage extensions
Article • 04/17/2024
This article lists and explains all Microsoft Testing Platform extensions related to the code coverage capability.
You can use the code coverage feature to determine what proportion of your project's
code is being tested by coded tests such as unit tests. To effectively guard against bugs,
your tests should exercise or cover a large proportion of your code.
Coverlet
There's currently no Coverlet extension, but you can use Coverlet .NET global tool .
) Important
The package is shipped under the Microsoft .NET library closed-source, free-to-use licensing model.
For more information about Microsoft code coverage, see its GitHub page .
Option | Description
--coverage-output-format | Output file format. Supported values are: 'coverage', 'xml', and 'cobertura'.
For more information about the available options, see settings and samples .
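For example, assuming the extension also exposes a --coverage switch that turns collection on (an assumption; only --coverage-output-format appears in the table above), an invocation of a hypothetical test executable might look like this sketch:
Console
./Contoso.MyTests --coverage --coverage-output-format cobertura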
Diagnostics extensions
Article • 04/11/2024
This article lists and explains all Microsoft Testing Platform extensions related to the diagnostics capability.
Built-in options
The following platform options provide useful information for troubleshooting your test
apps:
--info
--diagnostic
--diagnostic-filelogger-synchronouswrite
--diagnostic-verbosity
--diagnostic-output-fileprefix
--diagnostic-output-directory
You can also enable the diagnostics logs using environment variables.
Crash dump
This extension allows you to create a crash dump file if the process crashes. This
extension is shipped as part of Microsoft.Testing.Extensions.CrashDump NuGet
package.
) Important
The package is shipped under the Microsoft .NET library closed-source, free-to-use licensing model.
To configure the crash dump file generation, use the following options:
Option | Description
--crashdump | Generates a dump file when the test host process crashes. Supported in .NET 6.0+.
--crashdump-filename | Specifies the file name of the dump.
--crashdump-type | Specifies the type of the dump. Valid values are Mini, Heap, Triage, Full. Defaults to Full. For more information, see Types of mini dumps.
U Caution
The extension isn't compatible with .NET Framework and will be silently ignored. For .NET Framework support, you can enable postmortem debugging with Sysinternals ProcDump. For more information, see Enabling Postmortem Debugging: Windows Sysinternals ProcDump. The postmortem debugging solution also collects process crash information for .NET, so you can avoid using the extension if you're targeting both .NET and .NET Framework test applications.
Hang dump
This extension allows you to create a dump file after a given timeout. This extension is
shipped as part of Microsoft.Testing.Extensions.HangDump package.
) Important
The package is shipped under the Microsoft .NET library closed-source, free-to-use licensing model.
To configure the hang dump file generation, use the following options:
Option | Description
--hangdump | Generates a dump file in case the test host process hangs.
--hangdump-filename | Specifies the file name of the dump.
--hangdump-timeout | Specifies the timeout after which the dump is generated. The timeout value is specified in one of the following formats: 1.5h, 1.5hour, 1.5hours; 90m, 90min, 90minute, 90minutes; 5400s, 5400sec, 5400second, 5400seconds. Defaults to 30m (30 minutes).
--hangdump-type | Specifies the type of the dump. Valid values are Mini, Heap, Triage, Full. Defaults to Full. For more information, see Types of mini dumps.
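For example, a run that produces a dump if the host hangs for more than 10 minutes or if it crashes might look like the following sketch (Contoso.MyTests is a hypothetical test executable):
Console
./Contoso.MyTests --crashdump --crashdump-type Full --hangdump --hangdump-timeout 10m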
Fakes extension
Article • 06/27/2024
Microsoft Fakes allows you to better test your code by either generating Stubs (for instance, creating a testable implementation of INotifyPropertyChanged) or by shimming methods and static methods (replacing the implementation of File.Open with one you can control in your tests).
7 Note
This extension requires a Visual Studio Enterprise installation with the minimum
version of 17.11 preview 1 in order to work correctly.
diff
- <Reference Include="Microsoft.QualityTools.Testing.Fakes, Version=12.0.0.0, Culture=Neutral">
-   <SpecificVersion>False</SpecificVersion>
- </Reference>
+ <PackageReference Include="Microsoft.Testing.Extensions.Fakes" Version="17.11.0-beta.24319.3" />
Hosting extensions
Article • 04/17/2024
This article lists and explains all Microsoft Testing Platform extensions related to the hosting capability.
Hot reload
Hot reload lets you modify your app's managed source code while the application is
running, without the need to manually pause or hit a breakpoint. Simply make a
supported change while the app is running and select the Apply code changes button
in Visual Studio to apply your edits.
7 Note
The current version is limited to supporting hot reload in "console mode" only.
There is currently no support for hot reload in Test Explorer for Visual Studio or
Visual Studio Code.
7 Note
The package is shipped with the restrictive Microsoft Testing Platform Tools license. The full license is available at https://www.nuget.org/packages/Microsoft.Testing.Extensions.HotReload/1.0.0/License .
JSON
{
"profiles": {
"Contoso.MyTests": {
"commandName": "Project",
"environmentVariables": {
"TESTINGPLATFORM_HOTRELOAD_ENABLED": "1"
}
}
}
}
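Alternatively, the same environment variable can be set in the shell before launching the test project. A minimal sketch, assuming a Bash-like shell and dotnet watch to apply code changes:
Console
export TESTINGPLATFORM_HOTRELOAD_ENABLED=1
dotnet watch run --project Contoso.MyTests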
Output extensions
Article • 08/30/2024
This article lists and explains all Microsoft Testing Platform extensions related to the terminal output.
The terminal output comes built in with Microsoft.Testing.Platform, and offers ANSI and non-ANSI modes and a progress indicator.
Output modes
There are two output modes available:
Normal , the output contains the banner, reports full failures of tests, warning
ANSI
Internally, there are two different output formatters that auto-detect whether the terminal can handle ANSI escape codes.
The ANSI formatter is used when the terminal is capable of rendering the escape codes.
The non-ANSI formatter is used when the terminal cannot handle the escape codes, when --no-ansi is used, or when output is redirected.
Progress
A progress indicator is written to the terminal. The progress indicator shows the number of passed, failed, and skipped tests, followed by the name of the tested assembly, its target framework, and its architecture.
In ANSI mode, the progress bar is animated, sticks to the bottom of the screen, and is refreshed every 500 ms. The progress bar is hidden once test execution is done.
In non-ANSI mode, the progress is written to the screen as-is every 3 seconds and remains in the output.
Options
The available options are as follows:
Option | Description
--output | Output verbosity when reporting tests. Valid values are 'Normal' and 'Detailed'. Default is 'Normal'.
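For example, to get detailed reporting without ANSI escape codes, an invocation might look like this sketch (Contoso.MyTests is a hypothetical test executable):
Console
./Contoso.MyTests --output Detailed --no-ansi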
Policy extensions
Article • 04/17/2024
This article lists and explains all Microsoft Testing Platform extensions related to the policy capability.
Retry
A .NET test resilience and transient-fault-handling extension.
This extension is intended for integration tests where the test depends heavily on the
state of the environment and could experience transient faults.
7 Note
The package is shipped with the restrictive Microsoft Testing Platform Tools license. The full license is available at https://www.nuget.org/packages/Microsoft.Testing.Extensions.Retry/1.0.0/License .
Option | Description
--retry-failed-tests | Reruns any failed tests until they pass or until the maximum number of attempts is reached.
--retry-failed-tests-max-percentage | Avoids rerunning tests when the percentage of failed test cases crosses the specified threshold.
--retry-failed-tests-max-tests | Avoids rerunning tests when the number of failed test cases crosses the specified limit.
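For example, assuming --retry-failed-tests takes the maximum number of attempts as its argument (an assumption based on its description), a run that retries failing tests up to three times but gives up if more than 10 percent of tests fail might look like this sketch:
Console
./Contoso.MyTests --retry-failed-tests 3 --retry-failed-tests-max-percentage 10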
Test reports extensions
Article • 04/11/2024
This article lists and explains all Microsoft Testing Platform extensions related to the test report capability.
A test report is a file that contains information about the execution and outcome of the
tests.
) Important
The package is shipped under the Microsoft .NET library closed-source, free-to-use licensing model.
Option | Description
--report-trx-filename | The name of the generated TRX report. The default name matches the format <UserName>_<MachineName>_<yyyy-MM-dd HH:mm:ss>.trx.
The report is saved inside the default TestResults folder, which can be specified through the --results-directory command line argument.
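For example, assuming the extension also exposes a --report-trx switch that enables report generation (an assumption; only --report-trx-filename appears above), an invocation might look like this sketch:
Console
./Contoso.MyTests --report-trx --report-trx-filename Contoso.MyTests.trx --results-directory ./TestResults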
VSTest Bridge extension
Article • 04/11/2024
This extension provides a compatibility layer with VSTest that allows test frameworks depending on it to continue to support running in VSTest mode (vstest.console.exe, the usual dotnet test, the VSTest task on Azure DevOps, and the Test Explorers of Visual Studio and Visual Studio Code). This extension is shipped as part of the Microsoft.Testing.Extensions.VSTestBridge package.
) Important
The package is shipped under the Microsoft .NET library closed-source, free-to-use licensing model.
Runsettings support
This extension allows you to provide a VSTest .runsettings file, but not all options in this file are picked up by the platform. The following sections describe the supported and unsupported settings, and the alternatives for the most commonly used VSTest configuration options.
When enabled by the test framework, you can use --settings <SETTINGS_FILE> to
provide the .runsettings file.
RunConfiguration element
The RunConfiguration element can include several child elements; none of these settings are respected by Microsoft.Testing.Platform.
DataCollectors element
Microsoft.Testing.Platform doesn't use data collectors. Instead, it has the concept of extensions, most importantly the hang and crash dump extensions and the code coverage extension.
LoggerRunSettings element
Loggers in Microsoft.Testing.Platform are configured through command-line
parameters or by settings in code.
When enabled by the test framework, you can use --filter <FILTER_EXPRESSION> .
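For example, assuming the test framework enables both options, a run that applies a runsettings file and a VSTest-style filter might look like the following sketch (the file name and category are hypothetical):
Console
./Contoso.MyTests --settings ./integration.runsettings --filter "TestCategory=Integration"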
Microsoft Testing Platform diagnostics
overview
Article • 05/31/2024
Microsoft Testing Platform analysis ("TPXXX") rules inspect your code for security, performance, design, and other issues.
TPEXP
Several APIs of Microsoft Testing Platform are decorated with the ExperimentalAttribute.
This attribute indicates that the API is experimental and may be removed or changed in
future versions of Microsoft Testing Platform. The attribute is used to identify APIs that
aren't yet stable and may not be suitable for production use.
To suppress this diagnostic with the SuppressMessageAttribute , add the following code
to your project:
C#
using System.Diagnostics.CodeAnalysis;
Alternatively, you can suppress this diagnostic with a preprocessor directive by adding the following code to your project:
C#
#pragma warning disable TPEXP
// Code that uses experimental Microsoft.Testing.Platform APIs
#pragma warning restore TPEXP
Microsoft.Testing.Platform telemetry
Article • 03/19/2024
Microsoft.Testing.Platform collects usage data to help understand how to improve the product. For example, this usage data helps to debug issues, such as slow start-up times, and to prioritize new features. While these insights are appreciated, you're free to disable telemetry. For more information on telemetry, see the privacy statement .
The collected data is used to understand how features are consumed and where time is spent when executing the test app. This helps prioritize product improvements.
Disclosure
Microsoft.Testing.Platform displays text similar to the following when you first run your executable. The output text might vary slightly depending on the version of Microsoft.Testing.Platform you're running. This "first run" experience is how Microsoft notifies you about data collection.
Console
Telemetry
---------
Microsoft.Testing.Platform collects usage data in order to help us improve
your experience.
The data is collected by Microsoft and are not shared.
You can opt-out of telemetry by setting the TESTINGPLATFORM_TELEMETRY_OPTOUT
or DOTNET_CLI_TELEMETRY_OPTOUT environment variable to '1' or 'true' using
your favorite shell.
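For example, in a Bash-like shell you could opt out before running the test executable; a minimal sketch (Contoso.MyTests is a hypothetical name):
Console
export TESTINGPLATFORM_TELEMETRY_OPTOUT=1
./Contoso.MyTests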
It doesn't extract the contents of any data files accessed or created by your apps, dumps
of any memory occupied by your apps' objects, or the contents of the clipboard.
The data is sent securely to Microsoft servers using Azure Monitor technology, held
under restricted access, and published under strict security controls from secure Azure
Storage systems.
Version | Data
All | Name of your executable (which is usually the same as the name of the project), as a hashed value.
All | Runtime ID (RID). For more information, see .NET RID Catalog.
All | Timestamp of invocation, timestamp of start and end of various steps in the execution.
The full list of environment variables, and what is done with their values, is detailed in
the following table:
Environment variable(s) | Provider | Action
BUILD_ID, PROJECT_ID | Google Cloud Build | Check if all are present and non-null.
CODEBUILD_BUILD_ID, AWS_REGION | Amazon Web Services CodeBuild | Check if all are present and non-null.
Microsoft.Testing.Platform exit codes
Article • 08/28/2024
Microsoft.Testing.Platform uses exit codes to communicate test failures or errors. Exit codes start at 0 and are non-negative. Consider the following table that details the various exit codes and their corresponding reasons:
Exit code | Details
0 | The 0 exit code indicates success. All tests that were chosen to run ran to completion and there were no errors.
1 | The 1 exit code indicates unknown errors and acts as a catch-all. To find additional error information and details, look in the output.
2 | An exit code of 2 is used to indicate that there was at least one test failure.
3 | The exit code 3 indicates that the test session was aborted. A session can be aborted using Ctrl+C, as an example.
4 | The exit code 4 indicates that the setup of the used extensions is invalid and the test session cannot run.
5 | The exit code 5 indicates that the command line arguments passed to the test app are invalid.
6 | The exit code 6 indicates that the test session is using a feature that isn't implemented.
7 | The exit code 7 indicates that a test session was unable to complete successfully, and likely crashed. It's possible that this was caused by a test session that was run via a test controller's extension point.
8 | The exit code 8 indicates that the test session ran zero tests.
9 | The exit code 9 indicates that the minimum execution policy for the executed tests was violated.
10 | The exit code 10 indicates that the test adapter (the Testing.Platform Test Framework, MSTest, NUnit, or xUnit) failed to run tests for an infrastructure reason unrelated to the tests themselves. An example is failing to create a fixture needed by tests.
11 | The exit code 11 indicates that the test process will exit if the dependent process exits.
12 | The exit code 12 indicates that the test session was unable to run because the client does not support any of the supported protocol versions.
To enable verbose logging and troubleshoot issues, see Microsoft.Testing.Platform
Diagnostics extensions.
Ignore specific exit codes
The platform offers exit-code configurability. As such, it's possible for users to decide which exit codes should be ignored (an exit code of 0 will be returned instead of the original exit code).
To ignore specific exit codes, use the --ignore-exit-code command line option or the TESTINGPLATFORM_EXITCODE_IGNORE environment variable. The accepted format is a list of exit codes separated by semicolons (for example, 2;3;8).
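For example, to treat a run that executes zero tests (exit code 8) as success, either of the following works; this is a minimal sketch against a hypothetical test executable:
Console
./Contoso.MyTests --ignore-exit-code 8
# or, equivalently, via the environment variable:
TESTINGPLATFORM_EXITCODE_IGNORE=8 ./Contoso.MyTests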
dotnet test integration
This article describes how to use dotnet test to run tests when using Microsoft.Testing.Platform, and the various options that are available to configure how dotnet test behaves. It shows how to use dotnet test to run all tests in a solution (*.sln) that uses Microsoft.Testing.Platform.
.NET CLI
dotnet test
This layer runs tests through VSTest and integrates with it at the VSTest Test Framework Adapter level.
By default, VSTest is used to run Microsoft.Testing.Platform tests. You can enable full Microsoft.Testing.Platform support by specifying the
<TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport> setting in your project file. This setting disables VSTest and, thanks to the transitive dependency on the Microsoft.Testing.Platform.MSBuild NuGet package, directly runs all Microsoft.Testing.Platform-empowered test projects in your solution.
XML
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
<IsPackable>false</IsPackable>
<IsTestProject>true</IsTestProject>
<OutputType>Exe</OutputType>
<EnableMSTestRunner>true</EnableMSTestRunner>
<TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport>
</PropertyGroup>
</Project>
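With TestingPlatformDotnetTestSupport enabled as shown above, a plain dotnet test invocation from the solution or project directory runs the tests directly through Microsoft.Testing.Platform. For example:
.NET CLI
dotnet test --configuration Release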
In this mode, you can supply extra parameters that are used to call the testing
application in one of the following ways:
.NET CLI
.NET CLI
<PropertyGroup>
  ...
  <TestingPlatformCommandLineArguments>--minimum-expected-tests 10</TestingPlatformCommandLineArguments>
</PropertyGroup>
On command line:
.NET CLI
Or in project file:
XML
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
<IsPackable>false</IsPackable>
<IsTestProject>true</IsTestProject>
<OutputType>Exe</OutputType>
<EnableMSTestRunner>true</EnableMSTestRunner>
<TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport>
</PropertyGroup>
</Project>
This option doesn't impact how the testing framework captures user output written by
Console.WriteLine or other similar ways to write to the console.
On command line:
.NET CLI
Or in project file:
XML
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
<IsPackable>false</IsPackable>
<IsTestProject>true</IsTestProject>
<OutputType>Exe</OutputType>
<EnableMSTestRunner>true</EnableMSTestRunner>
<TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport>
</PropertyGroup>
</Project>
Run selected unit tests
Article • 06/18/2022
With the dotnet test command in .NET Core, you can use a filter expression to run
selected tests. This article demonstrates how to filter tests. The examples use dotnet
test . If you're using vstest.console.exe , replace --filter with --testcasefilter: .
Syntax
.NET CLI
dotnet test --filter <Expression>
Expressions can be joined with boolean operators: | for boolean or, & for boolean
and.
Property is an attribute of the Test Case . For example, the following properties are supported by popular unit test frameworks.
Test framework | Supported properties
MSTest | FullyQualifiedName, Name, ClassName, Priority, TestCategory
xUnit | FullyQualifiedName, DisplayName, Traits
NUnit | FullyQualifiedName, Name, Priority, TestCategory
Operators
= exact match
~ contains
!~ doesn't contain
Character escaping
To use an exclamation mark ( ! ) in a filter expression, you have to escape it in some
Linux or macOS shells by putting a backslash in front of it ( \! ). For example, the
following filter skips all tests in a namespace that contains IntegrationTests :
.NET CLI
For FullyQualifiedName values that include a comma for generic type parameters,
escape the comma with %2C . For example:
.NET CLI
MSTest examples
C#
using Microsoft.VisualStudio.TestTools.UnitTesting;
namespace MSTestNamespace
{
[TestClass]
public class UnitTest1
{
[TestMethod, Priority(1), TestCategory("CategoryA")]
public void TestMethod1()
{
}
[TestMethod, Priority(2)]
public void TestMethod2()
{
}
}
}
Expression | Result
dotnet test --filter TestCategory=CategoryA | Runs tests that are annotated with [TestCategory("CategoryA")].
dotnet test --filter Priority=2 | Runs tests that are annotated with [Priority(2)].
.NET CLI
.NET CLI
To run tests that have a FullyQualifiedName containing UnitTest1 and a TestCategoryAttribute of "CategoryA", or that have a PriorityAttribute with a priority of 1:
.NET CLI
dotnet test --filter "(FullyQualifiedName~UnitTest1&TestCategory=CategoryA)|Priority=1"
See also
dotnet test
dotnet test --filter
Next steps
Order unit tests
Order unit tests
Article • 08/23/2024
Occasionally, you may want to have unit tests run in a specific order. Ideally, the order in
which unit tests run should not matter, and it is best practice to avoid ordering unit
tests. Regardless, there may be a need to do so. In that case, this article demonstrates
how to order test runs.
If you prefer to browse the source code, see the order .NET Core unit tests sample
repository.
Tip
Order alphabetically
MSTest discovers tests in the same order in which they are defined in the test class.
When running through Test Explorer (in Visual Studio, or in Visual Studio Code), the tests
are ordered in alphabetical order based on their test name.
When running outside of Test Explorer, tests are executed in the order in which they are
defined in the test class.
7 Note
A test named Test14 will run before Test2 even though the number 2 is less than
14 . This is because test name ordering uses the text name of the test.
C#
using Microsoft.VisualStudio.TestTools.UnitTesting;
namespace MSTest.Project;
[TestClass]
public class ByAlphabeticalOrder
{
public static bool Test1Called;
public static bool Test2Called;
public static bool Test3Called;
[TestMethod]
public void Test2()
{
Test2Called = true;
Assert.IsTrue(Test1Called);
Assert.IsFalse(Test3Called);
}
[TestMethod]
public void Test1()
{
Test1Called = true;
Assert.IsFalse(Test2Called);
Assert.IsFalse(Test3Called);
}
[TestMethod]
public void Test3()
{
Test3Called = true;
Assert.IsTrue(Test1Called);
Assert.IsTrue(Test2Called);
}
}
Next Steps
Unit test code coverage
Use code coverage for unit testing
Article • 01/30/2024
) Important
This article explains the creation of the example project. If you already have a
project, you can skip ahead to the Code coverage tooling section.
Unit tests help to ensure functionality and provide a means of verification for refactoring
efforts. Code coverage is a measurement of the amount of code that is run by unit tests
- either lines, branches, or methods. As an example, if you have a simple application with
only two conditional branches of code (branch a, and branch b), a unit test that verifies
conditional branch a will report branch code coverage of 50%.
This article discusses the usage of code coverage for unit testing with Coverlet and
report generation using ReportGenerator. While this article focuses on C# and xUnit as
the test framework, both MSTest and NUnit would also work. Coverlet is an open source
project on GitHub that provides a cross-platform code coverage framework for C#.
Coverlet is part of the .NET Foundation . Coverlet collects Cobertura coverage test run
data, which is used for report generation.
Additionally, this article details how to use the code coverage information collected
from a Coverlet test run to generate a report. The report generation is possible using
another open source project on GitHub - ReportGenerator . ReportGenerator converts
coverage reports generated by Cobertura among many others, into human-readable
reports in various formats.
This article is based on the sample source code project, available in the samples browser.
The snippet below defines a simple PrimeService class that provides functionality to
check if a number is prime. Copy the snippet below and replace the contents of the
Class1.cs file that was automatically created in the Numbers directory. Rename the
Class1.cs file to PrimeService.cs.
C#
namespace System.Numbers
{
    public class PrimeService
    {
        public bool IsPrime(int candidate)
        {
            if (candidate < 2)
            {
                return false;
            }
            for (int divisor = 2; divisor <= Math.Sqrt(candidate); ++divisor)
            {
                if (candidate % divisor == 0)
                    return false;
            }
            return true;
        }
    }
}
Tip
It is worth mentioning that the Numbers class library was intentionally added to the
System namespace. This allows for System.Math to be accessible without a using
System; namespace declaration. For more information, see namespace (C#
Reference).
.NET CLI
.NET CLI
Both of the newly created xUnit test projects need to add a project reference of the
Numbers class library. This is so that the test projects have access to the PrimeService for
testing. From the command prompt, use the dotnet add command:
.NET CLI
.NET CLI
.NET CLI
The previous command changed directories effectively scoping to the MSBuild test
project, then added the NuGet package. When that was done, it then changed
directories, stepping up one level.
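The exact commands depend on your directory layout. Assuming project folders named XUnit.Coverlet.Collector and XUnit.Coverlet.MSBuild and the Coverlet packages coverlet.collector and coverlet.msbuild (assumed names), the sequence might look like this sketch:
.NET CLI
cd XUnit.Coverlet.Collector && dotnet add package coverlet.collector && cd ..
cd XUnit.Coverlet.MSBuild && dotnet add package coverlet.msbuild && cd ..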
Open both of the UnitTest1.cs files, and replace their contents with the following snippet.
Rename the UnitTest1.cs files to PrimeServiceTests.cs.
C#
using System.Numbers;
using Xunit;

namespace XUnit.Coverlet
{
    public class PrimeServiceTests
    {
        readonly PrimeService _primeService;

        public PrimeServiceTests() => _primeService = new PrimeService();

        [Theory]
        [InlineData(-1), InlineData(0), InlineData(1)]
        public void IsPrime_ValuesLessThan2_ReturnFalse(int value) =>
            Assert.False(_primeService.IsPrime(value), $"{value} should not be prime");

        [Theory]
        [InlineData(2), InlineData(3), InlineData(5), InlineData(7)]
        public void IsPrime_PrimesLessThan10_ReturnTrue(int value) =>
            Assert.True(_primeService.IsPrime(value), $"{value} should be prime");

        [Theory]
        [InlineData(4), InlineData(6), InlineData(8), InlineData(9)]
        public void IsPrime_NonPrimesLessThan10_ReturnFalse(int value) =>
            Assert.False(_primeService.IsPrime(value), $"{value} should not be prime");
    }
}
Create a solution
From the command prompt, create a new solution to encapsulate the class library and
the two test projects. Using the dotnet sln command:
.NET CLI
This will create a new solution file named XUnit.Coverage in the UnitTestingCodeCoverage directory. Add the projects to the root of the solution.
Windows
.NET CLI
.NET CLI
dotnet build
If the build is successful, you've created the three projects, appropriately referenced
projects and packages, and updated the source code correctly. Well done!
.NET includes a built-in code coverage data collector, which is also available in Visual
Studio. This data collector generates a binary .coverage file that can be used to generate
reports in Visual Studio. The binary file is not human-readable, and it must be converted
to a human-readable format before it can be used to generate reports outside of Visual
Studio.
Tip
The dotnet-coverage tool is a cross-platform tool that can be used to convert the
binary coverage test results file to a human-readable format. For more information,
see dotnet-coverage.
.NET CLI
dotnet test --collect:"XPlat Code Coverage"
7 Note
The "XPlat Code Coverage" argument is a friendly name that corresponds to the
data collectors from Coverlet. This name is required but is case insensitive. To use
.NET's built-in Code Coverage data collector, use "Code Coverage" .
As part of the dotnet test run, a resulting coverage.cobertura.xml file is output to the
TestResults directory. The XML file contains the results. This is a cross-platform option
that relies on the .NET CLI, and it is great for build systems where MSBuild is not
available.
XML
Tip
As an alternative, you could use the MSBuild package if your build system already
makes use of MSBuild. From the command prompt, change directories to the
XUnit.Coverlet.MSBuild project, and run the dotnet test command:
.NET CLI
Generate reports
Now that you're able to collect data from unit test runs, you can generate reports using
ReportGenerator . To install the ReportGenerator NuGet package as a .NET global
tool, use the dotnet tool install command:
.NET CLI
dotnet tool install -g dotnet-reportgenerator-globaltool
Run the tool and provide the desired options, given the output coverage.cobertura.xml
file from the previous test run.
Console
reportgenerator
-reports:"Path\To\TestProject\TestResults\{guid}\coverage.cobertura.xml"
-targetdir:"coveragereport"
-reporttypes:Html
After running this command, an HTML file represents the generated report.
See also
Visual Studio unit test code coverage
GitHub - Coverlet repository
GitHub - ReportGenerator repository
ReportGenerator project site
Azure: Publish code coverage results
Azure: Review code coverage results
.NET CLI test command
dotnet-coverage
Sample source code
Next Steps
Unit testing best practices
Test published output with dotnet vstest
Article • 05/18/2022
You can run tests on already published output by using the dotnet vstest command. This works for xUnit, MSTest, and NUnit tests. Locate the DLL file that was part of your published output and run:
.NET CLI
dotnet vstest <MyPublishedTests>.dll
Example
The following commands demonstrate running tests on a published DLL.
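For instance, assuming a test project that publishes to a folder named published and produces MyTestProject.dll (hypothetical names), the flow might look like this sketch:
.NET CLI
dotnet publish --output ./published
dotnet vstest ./published/MyTestProject.dll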
7 Note
If your app targets a framework other than netcoreapp , you can still run the dotnet vstest command by passing in the targeted framework with a framework switch.
See also
Unit Testing with dotnet test and xUnit
Unit Testing with dotnet test and NUnit
Unit Testing with dotnet test and MSTest
Get started with Live Unit Testing
Article • 11/02/2023
When you enable Live Unit Testing in a Visual Studio solution, it visually depicts your test
coverage and the status of your tests. Live Unit Testing also dynamically executes tests
whenever you modify your code and immediately notifies you when your changes cause
tests to fail.
Live Unit Testing can be used to test solutions that target either .NET Framework, .NET
Core, or .NET 5+. In this tutorial, you'll learn to use Live Unit Testing by creating a simple
class library that targets .NET, and you'll create an MSTest project that targets .NET to
test it.
Prerequisites
This tutorial requires that you've installed Visual Studio Enterprise edition with the .NET
desktop development workload.
The solution is just a container for one or more projects. To create a blank solution, open
Visual Studio and do the following:
1. Select File > New > Project from the top-level Visual Studio menu.
2. Type solution into the template search box, and then select the Blank Solution
template. Name the project UtilityLibraries.
Now that you've created the solution, you'll create a class library named StringLibrary
that contains a number of extension methods for working with strings.
1. In Solution Explorer, right-click on the UtilityLibraries solution and select Add >
New Project.
2. Type class library into the template search box, and then select the Class Library template that targets .NET or .NET Standard. Click Next.
5. Replace all of the existing code in the code editor with the following code:
C#
using System;

namespace UtilityLibraries
{
    public static class StringLibrary
    {
        public static bool StartsWithUpper(this string s)
        {
            if (String.IsNullOrWhiteSpace(s))
                return false;

            return Char.IsUpper(s[0]);
        }

        public static bool StartsWithLower(this string s)
        {
            if (String.IsNullOrWhiteSpace(s))
                return false;

            return Char.IsLower(s[0]);
        }
    }
}
6. Select Build > Build Solution from the top-level Visual Studio menu. The build
should succeed.
1. In Solution Explorer, right-click on the UtilityLibraries solution and select Add >
New Project.
2. Type unit test into the template search box, select C# as the language, and then
select the MSTest Unit Test Project for .NET template. Click Next.
7 Note
In Visual Studio 2019 version 16.9, the MSTest project template name is Unit
Test Project.
4. Choose either the recommended target framework or .NET 8, and then choose
Create.
7 Note
This getting started tutorial uses Live Unit Testing with the MSTest test
framework. You can also use the xUnit and NUnit test frameworks.
5. The unit test project can't automatically access the class library that it is testing.
You give the test library access by adding a reference to the class library project. To
do this, right-click the StringLibraryTests project and select Add > Project
Reference. In the Reference Manager dialog, make sure the Solution tab is
selected, and select the StringLibrary project, as shown in the following illustration.
6. Replace the boilerplate unit test code provided by the template with the following
code:
C#
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using UtilityLibraries;

namespace StringLibraryTest
{
    [TestClass]
    public class UnitTest1
    {
        [TestMethod]
        public void TestStartsWithUpper()
        {
            // Tests that we expect to return true.
            string[] words = { "Alphabet", "Zebra", "ABC", "Αθήνα", "Москва" };
            foreach (var word in words)
            {
                bool result = word.StartsWithUpper();
                Assert.IsTrue(result,
                    $"Expected for '{word}': true; Actual: {result}");
            }
        }

        [TestMethod]
        public void TestDoesNotStartWithUpper()
        {
            // Tests that we expect to return false.
            string[] words = { "alphabet", "zebra", "abc", "αυτοκινητοβιομηχανία",
                               "государство", "1234", ".", ";", " " };
            foreach (var word in words)
            {
                bool result = word.StartsWithUpper();
                Assert.IsFalse(result,
                    $"Expected for '{word}': false; Actual: {result}");
            }
        }

        [TestMethod]
        public void DirectCallWithNullOrEmpty()
        {
            // Tests that we expect to return false.
            string[] words = { String.Empty, null };
            foreach (var word in words)
            {
                bool result = StringLibrary.StartsWithUpper(word);
                Assert.IsFalse(result,
                    $"Expected for '{(word == null ? "<null>" : word)}': " +
                    $"false; Actual: {result}");
            }
        }
    }
}
Because the unit test code includes some non-ASCII characters, you will see the
following dialog to warn that some characters will be lost if you save the file in its
default ASCII format.
10. Compile the unit test project by selecting Build > Rebuild Solution from the top-
level Visual Studio menu.
You've created a class library as well as some unit tests for it. You've now finished the
preliminaries needed to use Live Unit Testing.
1. Optionally, select the code editor window that contains the code for StringLibrary.
This is either Class1.cs for a C# project or Class1.vb for a Visual Basic project. (This
step lets you visually inspect the result of your tests and the extent of your code
coverage once you enable Live Unit Testing.)
2. Select Test > Live Unit Testing > Start from the top-level Visual Studio menu.
3. Verify the configuration for Live Unit Testing by ensuring the Repository Root
includes the path to the source files for both the utility project and the test project.
Select Next and then Finish.
4. In the Live Unit Testing window, select the include all tests link (Alternatively, select
the Playlist button icon, then select the StringLibraryTest, which selects all the
tests underneath it. Then deselect the Playlist button to exit edit mode.)
5. Visual Studio will rebuild the project and start Live Unit Test, which automatically
runs all of your tests.
When it finishes running your tests, Live Unit Testing displays both the overall results
and the result of individual tests. In addition, the code editor window graphically
displays both your test code coverage and the result for your tests. As the following
illustration shows, all three tests have executed successfully. It also shows that our tests
have covered all code paths in the StartsWithUpper method, and those tests all
executed successfully (which is indicated by the green check mark, "✓"). Finally, it shows
that none of the other methods in StringLibrary have code coverage (which is indicated
by a blue line, "➖").
You can also get more detailed information about test coverage and test results by
selecting a particular code coverage icon in the code editor window. To examine this
detail, do the following:
1. Click on the green check mark on the line that reads if (String.IsNullOrWhiteSpace(s)) in the StartsWithUpper method. As the following illustration shows, Live Unit Testing indicates that three tests cover that line of code, and that all have executed successfully.
2. Click on the green check mark on the line that reads return Char.IsUpper(s[0]) in
the StartsWithUpper method. As the following illustration shows, Live Unit Testing
indicates that only two tests cover that line of code, and that all have executed
successfully.
The major issue that Live Unit Testing identifies is incomplete code coverage. You'll
address it in the next section.
C#
// Code to add to UnitTest1.cs
[TestMethod]
public void TestStartsWithLower()
{
    // Tests that we expect to return true.
    string[] words = { "alphabet", "zebra", "abc",
                       "αυτοκινητοβιομηχανία", "государство" };
    foreach (var word in words)
    {
        bool result = word.StartsWithLower();
        Assert.IsTrue(result,
            $"Expected for '{word}': true; Actual: {result}");
    }
}

[TestMethod]
public void TestDoesNotStartWithLower()
{
    // Tests that we expect to return false.
    string[] words = { "Alphabet", "Zebra", "ABC", "Αθήνα", "Москва",
                       "1234", ".", ";", " " };
    foreach (var word in words)
    {
        bool result = word.StartsWithLower();
        Assert.IsFalse(result,
            $"Expected for '{word}': false; Actual: {result}");
    }
}
C#
3. Live Unit Testing automatically executes new and modified tests when you modify
your source code. As the following illustration shows, all of the tests, including the
two you've added and the one you've modified, have succeeded.
4. Switch to the window that contains the source code for the StringLibrary class. Live
Unit Testing now shows that our code coverage is extended to the
StartsWithLower method.
In some cases, successful tests in Test Explorer might be grayed-out. That indicates that
a test is currently executing, or that the test has not run again because there have been
no code changes that would impact the test since it was last executed.
So far, all of our tests have succeeded. In the next section, we'll examine how you can
handle test failure.
C#
[TestMethod]
public void TestHasEmbeddedSpaces()
{
    // Tests that we expect to return true.
    string[] phrases = { "one car", "Name\u0009Description",
                         "Line1\nLine2", "Line3\u000ALine4",
                         "Line5\u000BLine6", "Line7\u000CLine8",
                         "Line0009\u000DLine10", "word1\u00A0word2" };
    foreach (var phrase in phrases)
    {
        bool result = phrase.HasEmbeddedSpaces();
        Assert.IsTrue(result,
            $"Expected for '{phrase}': true; Actual: {result}");
    }
}
2. When the test executes, Live Unit Testing indicates that the TestHasEmbeddedSpaces
method has failed, as the following illustration shows:
3. Select the window that displays the library code. Live Unit Testing has expanded
code coverage to the HasEmbeddedSpaces method. It also reports the test failure by
adding a red "🞩" to lines covered by failing tests.
4. Hover over the line with the HasEmbeddedSpaces method signature. Live Unit Testing
displays a tooltip that reports that the method is covered by one test, as the
following illustration shows:
5. Select the failed TestHasEmbeddedSpaces test. Live Unit Testing gives you a few
options such as running all tests and debugging all tests, as the following
illustration shows:
6. Select Debug All to debug the failed test.
The test assigns each string in an array to a variable named phrase and passes it to
the HasEmbeddedSpaces method. Program execution pauses and invokes the
debugger the first time the assert expression is false . The exception dialog that
results from the unexpected value in the
Microsoft.VisualStudio.TestTools.UnitTesting.Assert.IsTrue method call is shown in
the following illustration.
In addition, all of the debugging tools that Visual Studio provides are available to
help us troubleshoot our failed test, as the following illustration shows:
Note in the Autos window that the value of the phrase variable is
"Name\tDescription", which is the second element of the array. The test method
expects HasEmbeddedSpaces to return true when it is passed this string; instead, it
returns false . Evidently, it does not recognize "\t", the tab character, as an
embedded space.
8. Select Debug > Continue, press F5, or click the Continue button on the toolbar to
continue executing the test program. Because an unhandled exception occurred,
the test terminates. This provides enough information for a preliminary
investigation of the bug. Either TestHasEmbeddedSpaces (the test routine) made an
incorrect assumption, or HasEmbeddedSpaces does not correctly recognize all
embedded spaces.
However, the Unicode Standard includes a number of other space characters. This
suggests that the library code has incorrectly tested for a whitespace character.
C#
if (Char.IsWhiteSpace(ch))
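Putting that fix in context, a minimal sketch of the corrected extension method (the exact body may differ from the sample's) could look like this:
C#
public static bool HasEmbeddedSpaces(this string s)
{
    // Treat any Unicode whitespace character inside the trimmed string as an embedded space.
    foreach (var ch in s.Trim())
    {
        if (Char.IsWhiteSpace(ch))
            return true;
    }
    return false;
}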
11. Live Unit Testing automatically reruns the failed test method.
Live Unit Testing shows the updated results, which also appear in the code editor window.
Related content
Live Unit Testing in Visual Studio
Live Unit Testing Frequently Asked Questions
.NET application publishing overview
Article • 03/19/2024
Applications you create with .NET can be published in two different modes, and the
mode affects how a user runs your app.
Publishing your app as self-contained produces an application that includes the .NET
runtime and libraries, and your application and its dependencies. Users of the
application can run it on a machine that doesn't have the .NET runtime installed.
When an executable is produced, you can specify the target platform with a runtime
identifier (RID). For more information about RIDs, see .NET RID Catalog.
The following table outlines the commands used to publish an app as framework-
dependent or self-contained:
Produce an executable
Executables aren't cross-platform, they're specific to an operating system and CPU
architecture. When publishing your app and creating an executable, you can publish the
app as self-contained or framework-dependent. Publishing an app as self-contained
includes the .NET runtime with the app, and users of the app don't have to worry about
installing .NET before running the app. Publishing an app as framework-dependent
doesn't include the .NET runtime; only the app and third-party dependencies are
included.
Cross-platform binaries can be run on any operating system as long as the targeted
.NET runtime is already installed. If the targeted .NET runtime isn't installed, the app may
run using a newer runtime if the app is configured to roll-forward. For more information,
see framework-dependent apps roll forward.
Publish framework-dependent
Apps published as framework-dependent are cross-platform and don't include the .NET
runtime. The user of your app is required to install the .NET runtime.
The cross-platform binary of your app can be run with the dotnet <filename.dll>
command, and can be run on any platform.
Advantages
Small deployment
Only your app and its dependencies are distributed. The .NET runtime and libraries
are installed by the user and all apps share the runtime.
Cross-platform
Your app and any .NET-based library runs on other operating systems. You don't
need to define a target platform for your app. For information about the .NET file
format, see .NET Assembly File Format.
Disadvantages
Requires pre-installing the runtime
Your app can run only if the version of .NET your app targets is already installed on
the host system. You can configure roll-forward behavior for the app to either
require a specific version of .NET or allow a newer version of .NET. For more
information, see framework-dependent apps roll forward.
Examples
Publish an app as cross-platform and framework-dependent. An executable that targets
your current platform is created along with the dll file. Any platform-specific
dependencies are published with the app.
.NET CLI
dotnet publish
.NET CLI
Publish self-contained
Publishing your app as self-contained produces a platform-specific executable. The
output publishing folder contains all components of the app, including the .NET libraries
and target runtime. The app is isolated from other .NET apps and doesn't use a locally
installed shared runtime. The user of your app isn't required to download and install
.NET.
You can publish a self-contained app by passing the --self-contained parameter to the
dotnet publish command. The executable binary is produced for the specified target
platform. For example, if you have an app named word_reader, and you publish a self-
contained executable for Windows, a word_reader.exe file is created. When publishing for Linux or macOS, a word_reader file is created. The target platform and architecture are specified
with the -r <RID> parameter for the dotnet publish command. For more information
about RIDs, see .NET RID Catalog.
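For example, to produce a self-contained executable of the word_reader app for 64-bit Windows (the RID is illustrative; choose the one for your target platform):
.NET CLI
dotnet publish -r win-x64 --self-contained true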
Advantages
Control .NET version
You control which version of .NET is deployed with your app.
Platform-specific targeting
Because you have to publish your app for each platform, you know where your app
runs. If .NET introduces a new platform, users can't run your app on that platform
until you release a version targeting that platform. You can test your app for
compatibility problems before your users run your app on the new platform.
Disadvantages
Larger deployments
Because your app includes the .NET runtime and all of your app dependencies, the
download size and hard drive space required is greater than a framework-
dependent version.
Examples
Publish an app self-contained. A macOS 64-bit executable is created.
.NET CLI
.NET CLI
ReadyToRun
Advantages
Improved startup time
The application spends less time running the JIT.
Disadvantages
Larger size
The application is larger on disk.
Examples
Publish an app self-contained and ReadyToRun. A macOS 64-bit executable is created.
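A publish invocation for that scenario might look like the following sketch, assuming ReadyToRun compilation is enabled through the PublishReadyToRun MSBuild property (an assumption):
.NET CLI
dotnet publish -c Release -r osx-x64 --self-contained true -p:PublishReadyToRun=true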
See also
Deploying .NET Apps with .NET CLI.
Deploying .NET Apps with Visual Studio.
.NET Runtime Identifier (RID) catalog.
Select the .NET version to use.
Deploy .NET Core apps with Visual
Studio
Article • 10/11/2022
The following sections show how to use Microsoft Visual Studio to create the following
kinds of deployments:
Framework-dependent deployment
Framework-dependent deployment with third-party dependencies
Self-contained deployment
Self-contained deployment with third-party dependencies
For information on using Visual Studio to develop .NET Core applications, see .NET Core
dependencies and requirements.
Framework-dependent deployment
Deploying a framework-dependent deployment with no third-party dependencies
simply involves building, testing, and publishing the app. A simple example written in C#
illustrates the process.
Select File > New > Project. In the New Project dialog, expand your language's
(C# or Visual Basic) project categories in the Installed project types pane, choose
.NET Core, and then select the Console App (.NET Core) template in the center
pane. Enter a project name, such as "FDD", in the Name text box. Select the OK
button.
Open the Program.cs or Program.vb file in the editor and replace the
autogenerated code with the following code. It prompts the user to enter text and
displays the individual words entered by the user. It uses the regular expression
\w+ to separate the words in the input text.
C#
using System;
using System.Text.RegularExpressions;

namespace Applications.ConsoleApps
{
    public class ConsoleParser
    {
        public static void Main()
        {
            Console.WriteLine("Enter any text, followed by <Enter>:\n");
            String? s = Console.ReadLine();
            ShowWords(s ?? "You didn't enter anything.");
            Console.Write("\nPress any key to continue... ");
            Console.ReadKey();
        }
Select Build > Build Solution. You can also compile and run the Debug build of
your application by selecting Debug > Start Debugging.
After you've debugged and tested the program, create the files to be deployed
with your app. To publish from Visual Studio, do the following:
a. Change the solution configuration from Debug to Release on the toolbar to
build a Release (rather than a Debug) version of your app.
b. Right-click on the project (not the solution) in Solution Explorer and select
Publish.
c. In the Publish tab, select Publish. Visual Studio writes the files that comprise
your application to the local file system.
d. The Publish tab now shows a single profile, FolderProfile. The profile's
configuration settings are shown in the Summary section of the tab.
The resulting files are placed in a directory named Publish (on Windows) or publish (on Unix systems), which is a subdirectory of your project's .\bin\release\netcoreapp2.1 directory.
Along with your application's files, the publishing process emits a program database
(.pdb) file that contains debugging information about your app. The file is useful
primarily for debugging exceptions. You can choose not to package it with your
application's files. You should, however, save it in the event that you want to debug the
Release build of your app.
Deploy the complete set of application files in any way you like. For example, you can
package them in a Zip file, use a simple copy command, or deploy them with any
installation package of your choice. Once installed, users can then execute your
application by using the dotnet command and providing the application filename, such
as dotnet fdd.dll .
In addition to the application binaries, your installer should also either bundle the
shared framework installer or check for it as a prerequisite as part of the application
installation. Installation of the shared framework requires Administrator/root access
since it is machine-wide.
1. Use the NuGet Package Manager to add a reference to a NuGet package to your
project; and if the package is not already available on your system, install it. To
open the package manager, select Tools > NuGet Package Manager > Manage
NuGet Packages for Solution.
Select File > New > Project. In the New Project dialog, expand your language's
(C# or Visual Basic) project categories in the Installed project types pane, choose
.NET Core, and then select the Console App (.NET Core) template in the center
pane. Enter a project name, such as "SCD", in the Name text box, and select the OK
button.
Open the Program.cs or Program.vb file in your editor, and replace the
autogenerated code with the following code. It prompts the user to enter text and
displays the individual words entered by the user. It uses the regular expression
\w+ to separate the words in the input text.
C#
using System;
using System.Text.RegularExpressions;

namespace Applications.ConsoleApps
{
    public class ConsoleParser
    {
        public static void Main()
        {
            Console.WriteLine("Enter any text, followed by <Enter>:\n");
            String? s = Console.ReadLine();
            ShowWords(s ?? "You didn't enter anything.");
            Console.Write("\nPress any key to continue... ");
            Console.ReadKey();
        }
Particularly if your app targets Linux, you can reduce the total size of your
deployment by taking advantage of globalization invariant mode . Globalization
invariant mode is useful for applications that are not globally aware and that can
use the formatting conventions, casing conventions, and string comparison and
sort order of the invariant culture.
To enable invariant mode, right-click on your project (not the solution) in Solution
Explorer, and select Edit SCD.csproj or Edit SCD.vbproj. Then add the following
highlighted lines to the file:
XML
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <Nullable>enable</Nullable>
  </PropertyGroup>
  <ItemGroup>
    <RuntimeHostConfigurationOption Include="System.Globalization.Invariant" Value="true" />
  </ItemGroup>
</Project>
Select Build > Build Solution. You can also compile and run the Debug build of
your application by selecting Debug > Start Debugging. This debugging step lets
you identify problems with your application when it's running on your host
platform. You still will have to test it on each of your target platforms.
Once you've finished debugging, you can publish your self-contained deployment:
After you've debugged and tested the program, create the files to be deployed with
your app for each platform that it targets.
For example, the following example indicates that the app runs on 64-bit
Windows 10 operating systems and the 64-bit OS X Version 10.11 operating
system.
XML
<PropertyGroup>
<RuntimeIdentifiers>win10-x64;osx.10.11-x64</RuntimeIdentifiers>
</PropertyGroup>
b. Right-click on the project (not the solution) in Solution Explorer and select
Publish.
c. In the Publish tab, select Publish. Visual Studio writes the files that
comprise your application to the local file system.
d. The Publish tab now shows a single profile, FolderProfile. The profile's
configuration settings are shown in the Summary section of the tab. Target
Runtime identifies which runtime has been published, and Target Location
identifies where the files for the self-contained deployment were written.
e. Visual Studio by default writes all published files to a single directory. For
convenience, it's best to create separate profiles for each target runtime
and to place published files in a platform-specific directory. This involves
creating a separate publishing profile for each target platform. So now
rebuild the application for each platform by doing the following:
ii. In the Pick a publish target dialog, change the Choose a folder location
to bin\Release\PublishOutput\win10-x64. Select OK.
iii. Select the new profile (FolderProfile1) in the list of profiles, and make
sure that the Target Runtime is win10-x64 . If it isn't, select Settings. In
the Profile Settings dialog, change the Target Runtime to win10-x64
and select Save. Otherwise, select Cancel.
iv. Select Publish to publish your app for 64-bit Windows 10 platforms.
v. Follow the previous steps again to create a profile for the osx.10.11-x64
platform. The Target Location is bin\Release\PublishOutput\osx.10.11-
x64, and the Target Runtime is osx.10.11-x64 . The name that Visual
Studio assigns to this profile is FolderProfile2.
Each target location contains the complete set of files (both your app files and
all .NET Core files) needed to launch your app.
Along with your application's files, the publishing process emits a program
database (.pdb) file that contains debugging information about your app. The file is
useful primarily for debugging exceptions. You can choose not to package it with
your application's files. You should, however, save it in the event that you want to
debug the Release build of your app.
Deploy the published files in any way you like. For example, you can package them
in a Zip file, use a simple copy command, or deploy them with any installation
package of your choice.
XML
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp2.1</TargetFramework>
<RuntimeIdentifiers>win10-x64;osx.10.11-x64</RuntimeIdentifiers>
</PropertyGroup>
</Project>
Self-contained deployment with third-party
dependencies
Deploying a self-contained deployment with one or more third-party dependencies
involves adding the dependencies. The following additional steps are required before
you can build your app:
1. Use the NuGet Package Manager to add a reference to a NuGet package to your
project; and if the package is not already available on your system, install it. To
open the package manager, select Tools > NuGet Package Manager > Manage
NuGet Packages for Solution.
XML
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp2.1</TargetFramework>
<RuntimeIdentifiers>win10-x64;osx.10.11-x64</RuntimeIdentifiers>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Newtonsoft.Json" Version="10.0.2" />
</ItemGroup>
</Project>
When you deploy your application, any third-party dependencies used in your app are
also contained with your application files. Third-party libraries aren't required on the
system on which the app is running.
You can only deploy a self-contained deployment with a third-party library to platforms
supported by that library. This is similar to having third-party dependencies with native
dependencies in your framework-dependent deployment, where the native
dependencies won't exist on the target platform unless they were previously installed
there.
See also
.NET Core Application Deployment
.NET Core Runtime Identifier (RID) catalog
Publish .NET apps with the .NET CLI
Article • 09/05/2024
This article demonstrates how you can publish your .NET application from the command
line. .NET provides three ways to publish your applications. Framework-dependent
deployment produces a cross-platform .dll file that uses the locally installed .NET
runtime. Framework-dependent executable produces a platform-specific executable that
uses the locally installed .NET runtime. Self-contained executable produces a platform-
specific executable and includes a local copy of the .NET runtime.
Looking for some quick help on using the CLI? The following examples show some
common ways to publish your app. You can specify the target framework with the -f
<TFM> parameter or by editing the project file. For more information, see Publishing
basics.
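For instance, commands along these lines cover the deployment modes described below (the Release configuration and the win-x64 RID are illustrative):
.NET CLI
# Framework-dependent cross-platform binary (no platform-specific executable)
dotnet publish -c Release -p:UseAppHost=false

# Framework-dependent executable for the current platform
dotnet publish -c Release

# Framework-dependent executable for a specific platform
dotnet publish -c Release -r win-x64 --self-contained false

# Self-contained executable for a specific platform
dotnet publish -c Release -r win-x64 --self-contained true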
Publishing basics
The <TargetFramework> setting of the project file specifies the default target framework
when you publish your app. You can change the target framework to any valid Target
Framework Moniker (TFM). For example, if your project uses
<TargetFramework>net8.0</TargetFramework> , a binary that targets .NET 8 is created. The
TFM specified in this setting is the default target used by the dotnet publish command.
If you want to target more than one framework, you can set the <TargetFrameworks>
setting to multiple TFM values, separated by a semicolon. When you build your app, a
build is produced for each target framework. However, when you publish your app, you
must specify the target framework with the dotnet publish -f <TFM> command.
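For example, with <TargetFrameworks>net6.0;net8.0</TargetFrameworks> in the project file (illustrative TFMs), each framework is published with its own command:
.NET CLI
dotnet publish -f net6.0
dotnet publish -f net8.0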
Native dependencies
If your app has native dependencies, it may not run on a different operating system. For
example, if your app uses the native Windows API, it won't run on macOS or Linux. You
would need to provide platform-specific code and compile an executable for each
platform.
Consider also, if a library you referenced has a native dependency, your app may not run
on every platform. However, it's possible a NuGet package you're referencing has
included platform-specific versions to handle the required native dependencies for you.
When distributing an app with native dependencies, you may need to use the dotnet
publish -r <RID> switch to specify the target platform you want to publish for. For a list
of runtime identifiers, see Runtime Identifier (RID) catalog.
Sample app
You can use the following app to explore the publishing commands. The app is created
by running the following commands in your terminal:
.NET CLI
mkdir apptest1
cd apptest1
dotnet new console
dotnet add package Figgle
The Program.cs or Program.vb file that is generated by the console template needs to
be changed to the following:
C#
using System;

namespace apptest1
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine(Figgle.FiggleFonts.Standard.Render("Hello, World!"));
        }
    }
}
When you run the app (dotnet run), the following output is displayed:
terminal
_ _ _ _ __ __ _ _ _
| | | | ___| | | ___ \ \ / /__ _ __| | __| | |
| |_| |/ _ \ | |/ _ \ \ \ /\ / / _ \| '__| |/ _` | |
| _ | __/ | | (_) | \ V V / (_) | | | | (_| |_|
|_| |_|\___|_|_|\___( ) \_/\_/ \___/|_| |_|\__,_(_)
|/
Framework-dependent deployment
When you publish your app as an FDD, a <PROJECT-NAME>.dll file is created in the
./bin/<BUILD-CONFIGURATION>/<TFM>/publish/ folder. To run your app, navigate to the
output folder and use the dotnet <PROJECT-NAME>.dll command.
Your app is configured to target a specific version of .NET. That targeted .NET runtime is
required to be on any machine where your app runs. For example, if your app targets
.NET 8, any machine that your app runs on must have the .NET 8 runtime
installed. As stated in the Publishing basics section, you can edit your project file to
change the default target framework or to target more than one framework.
Publishing an FDD creates an app that automatically rolls forward to the latest .NET
security patch available on the system that runs the app. For more information on
version binding at compile time, see Select the .NET version to use.
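As a sketch, the cross-platform FDD binary is produced by publishing without a platform-specific executable; turning off the app host with the UseAppHost property is one way to do that (the Release configuration is illustrative):
.NET CLI
dotnet publish -c Release -p:UseAppHost=false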
Framework-dependent executable
Framework-dependent executable (FDE) is the default mode for the basic dotnet
publish command. You don't need to specify any other parameters, as long as you want
to target the current operating system.
Your app is configured to target a specific version of .NET. That targeted .NET runtime is
required to be on any machine where your app runs. For example, if your app targets
.NET 8, any machine that your app runs on must have the .NET 8 runtime installed. As
stated in the Publishing basics section, you can edit your project file to change the
default target framework or to target more than one framework.
Publishing an FDE creates an app that automatically rolls forward to the latest .NET
security patch available on the system that runs the app. For more information on
version binding at compile time, see Select the .NET version to use.
If you use the example app, run dotnet publish -f net6.0 -r win-x64 --self-contained false. This command creates the following executable: ./bin/Debug/net6.0/win-x64/publish/apptest1.exe
7 Note
You can reduce the total size of your deployment by enabling globalization
invariant mode. This mode is useful for applications that are not globally aware
and that can use the formatting conventions, casing conventions, and string
comparison and sort order of the invariant culture. For more information about
globalization invariant mode and how to enable it, see .NET Globalization
Invariant Mode .
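One way to enable invariant mode at publish time is the InvariantGlobalization MSBuild property, available in .NET 6 and later (shown here on the command line as a sketch; it can also be set in the project file):
.NET CLI
dotnet publish -c Release -p:InvariantGlobalization=true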
AppHostDotNetSearch allows specifying one or more locations where the executable will search for the .NET runtime.
AppHostRelativeDotNet specifies the path relative to the executable that will be searched for the .NET runtime.
Self-contained deployment
When you publish a self-contained deployment (SCD), the .NET SDK creates a platform-
specific executable. Publishing an SCD includes all required .NET files to run your app
but it doesn't include the native dependencies of .NET (for example, for .NET 6 on
Linux or .NET 8 on Linux ). These dependencies must be present on the system
before the app runs.
Publishing an SCD creates an app that doesn't roll forward to the latest available .NET
security patch. For more information on version binding at compile time, see Select the
.NET version to use.
You must use the following switches with the dotnet publish command to publish an
SCD:
-r <RID>
This switch uses an identifier (RID) to specify the target platform. For a list of
runtime identifiers, see Runtime Identifier (RID) catalog.
--self-contained true
This switch tells the .NET SDK to create an executable as an SCD.
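For example, the following command produces an SCD for 64-bit Windows (the RID is illustrative):
.NET CLI
dotnet publish -r win-x64 --self-contained true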
Tip
In .NET 6 and later versions, you can reduce the total size of compatible self-
contained apps by publishing trimmed. This enables the trimmer to remove
parts of the framework and referenced assemblies that are not on any code
path or potentially referenced in runtime reflection. See trimming
incompatibilities to determine if trimming makes sense for your application.
You can reduce the total size of your deployment by enabling globalization
invariant mode. This mode is useful for applications that are not globally
aware and that can use the formatting conventions, casing conventions, and
string comparison and sort order of the invariant culture. For more
information about globalization invariant mode and how to enable it, see
.NET Core Globalization Invariant Mode .
See also
.NET Application Deployment Overview
.NET Runtime Identifier (RID) catalog
How to create a NuGet package with
the .NET CLI
Article • 02/04/2022
7 Note
The following shows command-line samples using Unix. The dotnet pack
command as shown here works the same way on Windows.
.NET Standard and .NET Core libraries are expected to be distributed as NuGet
packages. This is in fact how all of the .NET Standard libraries are distributed and
consumed. This is most easily done with the dotnet pack command.
Imagine that you just wrote an awesome new library that you would like to distribute
over NuGet. You can create a NuGet package with cross-platform tools to do exactly
that! The following example assumes a library called SuperAwesomeLibrary that targets
netstandard1.0 .
If you have transitive dependencies, that is, a project that depends on another package,
make sure to restore packages for the entire solution with the dotnet restore
command before you create a NuGet package. Failing to do so will result in the dotnet
pack command not working properly.
You don't have to run dotnet restore because it's run implicitly by all commands that
require a restore to occur, such as dotnet new , dotnet build , dotnet run , dotnet test ,
dotnet publish , and dotnet pack . To disable implicit restore, use the --no-restore
option.
The dotnet restore command is still useful in certain scenarios where explicitly
restoring makes sense, such as continuous integration builds in Azure DevOps Services
or in build systems that need to explicitly control when the restore occurs.
For information about how to manage NuGet feeds, see the dotnet restore
documentation.
After ensuring packages are restored, you can navigate to the directory where a library
lives:
Console
cd src/SuperAwesomeLibrary
.NET CLI
dotnet pack
Console
$ ls bin/Debug
netstandard1.0/
SuperAwesomeLibrary.1.0.0.nupkg
SuperAwesomeLibrary.1.0.0.symbols.nupkg
This produces a package that is capable of being debugged. If you want to build a
NuGet package with release binaries, all you need to do is add the --configuration (or
-c ) switch and use release as the argument.
.NET CLI
dotnet pack --configuration release
Your /bin folder will now have a release folder containing your NuGet package with
release binaries:
Console
$ ls bin/release
netstandard1.0/
SuperAwesomeLibrary.1.0.0.nupkg
SuperAwesomeLibrary.1.0.0.symbols.nupkg
And now you have the necessary files to publish a NuGet package!
See also
Quickstart: Create and publish a package
Self-contained deployment runtime roll
forward
Article • 09/15/2021
.NET Core self-contained application deployments include both the .NET Core libraries
and the .NET Core runtime. Starting in .NET Core 2.1 SDK (version 2.1.300), a self-
contained application deployment publishes the highest patch runtime on your
machine . By default, dotnet publish for a self-contained deployment selects the latest
version installed as part of the SDK on the publishing machine. This enables your
deployed application to run with security fixes (and other fixes) available during
publish . The application must be republished to obtain a new patch. Self-contained
"The project was restored using Microsoft.NETCore.App version 2.0.0, but with current
settings, version 2.0.6 would be used instead. To resolve this issue, make sure the same
settings are used for restore and for subsequent operations such as build or publish.
Typically this issue can occur if the RuntimeIdentifier property is set during build or
publish but not during restore."
7 Note
restore and build can be run implicitly as part of another command, like publish .
When run implicitly as part of another command, they are provided with additional
context so that the right artifacts are produced. When you publish with a runtime
(for example, dotnet publish -r linux-x64 ), the implicit restore restores packages
for the linux-x64 runtime. If you call restore explicitly, it does not restore runtime
packages by default, because it doesn't have that context.
How to avoid restore during publish
Running restore as part of the publish operation may be undesirable for your scenario.
To avoid restore during publish while creating self-contained applications, do the
following:
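A minimal sketch of the idea, assuming you restore explicitly for the target runtime first and then publish with restore skipped (the linux-x64 RID is illustrative):
.NET CLI
dotnet restore -r linux-x64
dotnet publish -r linux-x64 --self-contained true --no-restore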
The size of the single file in a self-contained application is large since it includes the
runtime and the framework libraries. In .NET 6, you can publish trimmed to reduce the
total size of trim-compatible applications. The single file deployment option can be
combined with ReadyToRun and Trim publish options.
XML
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net6.0</TargetFramework>
<PublishSingleFile>true</PublishSingleFile>
<SelfContained>true</SelfContained>
<RuntimeIdentifier>win-x64</RuntimeIdentifier>
</PropertyGroup>
</Project>
PublishSingleFile. Enables single file publishing. Also enables single file warnings during dotnet build.
SelfContained. Determines whether the app is self-contained or framework-dependent.
RuntimeIdentifier. Specifies the OS and CPU type you're targeting. Also sets
<SelfContained>true</SelfContained> by default.
Single file apps are always OS and architecture specific. You need to publish for each
configuration, such as Linux x64, Linux Arm64, Windows x64, and so forth.
Runtime configuration files, such as *.runtimeconfig.json and *.deps.json, are included in
the single file. If an extra configuration file is needed, you can place it beside the single
file.
1. Add <PublishSingleFile>true</PublishSingleFile> to your project file. This change
produces a single file app on self-contained publish. It also shows single file
compatibility warnings during build.
XML
<PropertyGroup>
<PublishSingleFile>true</PublishSingleFile>
</PropertyGroup>
2. Publish the app for a specific runtime identifier using dotnet publish -r <RID>.
The following example publishes the app for Linux as a framework-dependent
single file application.
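A command along these lines performs that publish (linux-x64 is the assumed RID):
.NET CLI
dotnet publish -r linux-x64 --self-contained false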
<PublishSingleFile> should be set in the project file to enable file analysis during
build, but it's also possible to pass these options as dotnet publish arguments:
.NET CLI
dotnet publish -r linux-x64 -p:PublishSingleFile=true --self-contained false
For more information, see Publish .NET Core apps with .NET CLI.
Exclude files from being embedded
Certain files can be explicitly excluded from being embedded in the single file by setting
the following metadata:
XML
<ExcludeFromSingleFile>true</ExcludeFromSingleFile>
For example, to place some files in the publish directory but not bundle them in the file:
XML
<ItemGroup>
<Content Update="Plugin.dll">
<CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
<ExcludeFromSingleFile>true</ExcludeFromSingleFile>
</Content>
</ItemGroup>
XML
<DebugType>embedded</DebugType>
For example, add the following property to the project file of an assembly to embed the
PDB file to that assembly:
XML
<PropertyGroup>
<DebugType>embedded</DebugType>
</PropertyGroup>
Other considerations
Single file applications have all related PDB files alongside the application, not bundled
by default. If you want to include PDBs inside the assembly for projects you build, set
the DebugType to embedded . See Include PDB files inside the bundle.
Managed C++ components aren't well suited for single file deployment. We
recommend that you write applications in C# or another non-C++ managed language
to be single file compatible.
Native libraries
Only managed DLLs are bundled with the app into a single executable. When the app
starts, the managed DLLs are extracted and loaded in memory, avoiding the extraction
to a folder. With this approach, the managed binaries are embedded in the single file
bundle, but the native binaries of the core runtime itself are separate files.
To embed those files for extraction and get one output file, set the property
IncludeNativeLibrariesForSelfExtract to true .
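As a sketch, the property can also be passed at publish time together with single file publishing (the RID is illustrative; the property can equally be set in the project file):
.NET CLI
dotnet publish -r win-x64 -p:PublishSingleFile=true -p:IncludeNativeLibrariesForSelfExtract=true --self-contained true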
) Important
If extraction is used, the files are extracted to disk before the app starts:
7 Note
In some Linux environments, such as under systemd , the default extraction doesn't
work because $HOME isn't defined. In such cases, it's recommended that you set
$DOTNET_BUNDLE_EXTRACT_BASE_DIR explicitly.
text
[Service]
Environment="DOTNET_BUNDLE_EXTRACT_BASE_DIR=%h/.net"
API incompatibility
Some APIs aren't compatible with single file deployment. Applications might require
modification if they use these APIs. If you use a third-party framework or package, it's
possible that they might use one of these APIs and need modification. The most
common cause of problems is dependence on file paths for files or DLLs shipped with
the application.
Several runtime library APIs have special behavior in single file deployments. To find the
file name of the executable, use the first element of Environment.GetCommandLineArgs(),
or, starting with .NET 6, use the file name from Environment.ProcessPath.
Plugging into this process involves creating a target that executes between
PrepareForBundle and GenerateSingleFileBundle.
XML
It's possible that tooling will need to copy files in the process of signing. That could
happen if the original file is a shared item not owned by the build, for example, the file
comes from a NuGet cache. In such a case, it's expected that the tool will modify the
path of the corresponding FilesToBundle item to point to the modified copy.
See also
.NET Core application deployment
Publish .NET apps with .NET CLI
Publish .NET Core apps with Visual Studio
dotnet publish command
ReadyToRun Compilation
Article • 06/29/2022
.NET application startup time and latency can be improved by compiling your
application assemblies as ReadyToRun (R2R) format. R2R is a form of ahead-of-time
(AOT) compilation.
R2R binaries improve startup performance by reducing the amount of work the just-in-
time (JIT) compiler needs to do as your application loads. The binaries contain similar
native code compared to what the JIT would produce. However, R2R binaries are larger
because they contain both intermediate language (IL) code, which is still needed for
some scenarios, and the native version of the same code. R2R is only available when you
publish an app that targets specific runtime environments (RID) such as Linux x64 or
Windows x64.
To compile your project as ReadyToRun, the application must be published with the
PublishReadyToRun property set to true .
1. Specify the PublishReadyToRun flag directly to the dotnet publish command. See
dotnet publish for details.
.NET CLI
dotnet publish -c Release -r win-x64 -p:PublishReadyToRun=true
2. Specify the property in the project file:
XML
<PropertyGroup>
<PublishReadyToRun>true</PublishReadyToRun>
</PropertyGroup>
Then publish the application without any special parameters:
.NET CLI
dotnet publish -c Release -r win-x64
The startup improvement discussed here applies not only to application startup, but also
to the first use of any code in the application. For instance, ReadyToRun can be used to
reduce the response latency of the first use of Web API in an ASP.NET application.
To exclude specific assemblies from ReadyToRun compilation, use the
PublishReadyToRunExclude item list:
XML
<ItemGroup>
<PublishReadyToRunExclude Include="Contoso.Example.dll" />
</ItemGroup>
How is the set of methods to precompile
chosen?
The compiler will attempt to pre-compile as many methods as it can. However, for
various reasons, it's not expected that using the ReadyToRun feature will prevent the JIT
from executing. Such reasons may include, but are not limited to:
XML
<PropertyGroup>
<PublishReadyToRunEmitSymbols>true</PublishReadyToRunEmitSymbols>
</PropertyGroup>
These symbols will be placed in the publish directory and for Windows will have a file
extension of .ni.pdb, and for Linux will have a file extension of .r2rmap. These files are
not generally redistributed to end customers, but instead would typically be stored in a
symbol server. In general these symbols are useful for debugging performance issues
related to startup of applications, as Tiered Compilation will replace the ReadyToRun
generated code with dynamically generated code. However, if attempting to profile an
application that disables Tiered Compilation the symbols will be useful.
Composite ReadyToRun
Normal ReadyToRun compilation produces binaries that can be serviced and
manipulated individually. Starting in .NET 6, support for Composite ReadyToRun
compilation has been added. Composite ReadyToRun compiles a set of assemblies that
must be distributed together. This has the advantage that the compiler is able to
perform better optimizations and reduces the set of methods that cannot be compiled
via the ReadyToRun process. However, as a tradeoff, compilation speed is significantly
decreased, and the overall file size of the application is significantly increased. Due to
these tradeoffs, use of Composite ReadyToRun is only recommended for applications
that disable Tiered Compilation or applications running on Linux that are seeking the
best startup time with self-contained deployment. To enable composite ReadyToRun
compilation, specify the <PublishReadyToRunComposite> property.
XML
<PropertyGroup>
<PublishReadyToRunComposite>true</PublishReadyToRunComposite>
</PropertyGroup>
7 Note
Supported compilation targets when targeting .NET 6 and later versions: for example, a
Windows X64 SDK can target Windows (X86, X64, Arm64), Linux (X64, Arm32, Arm64), and
macOS (X64, Arm64).
Supported compilation targets when targeting .NET 5 and earlier versions: for example, a
Linux X64 SDK can target Linux X86, Linux X64, Linux Arm32, and Linux Arm64.
However, there is a risk that the build-time analysis of the application can cause failures
at run time, due to not being able to reliably analyze various problematic code patterns
(largely centered on reflection use). To mitigate these problems, warnings are produced
whenever the trimmer cannot fully analyze a code pattern. For information on what the
trim warnings mean and how to resolve them, see Introduction to trim warnings.
7 Note
Trimming is fully supported in .NET 6 and later versions. In .NET Core 3.1 and
.NET 5, trimming was an experimental feature.
Trimming is only available to applications that are published self-contained.
2 Warning
Not all project types can be trimmed. For more information, see Known trimming
incompatibilities.
Any code that causes build time analysis challenges isn't suitable for trimming. Some
common coding patterns that are problematic when used by an application originate
from unbounded reflection usage and external dependencies that aren't visible at build
time. An example of unbounded reflection is a legacy serializer, such as XML
serialization, and an example of invisible external dependencies is built-in COM. To
address trim warnings in your application, see Introduction to trim warnings, and to
make your library compatible with trimming, see Prepare .NET libraries for trimming.
Enable trimming
1. Add <PublishTrimmed>true</PublishTrimmed> to your project file.
This property will produce a trimmed app on self-contained publish. It also turns
off trim-incompatible features and shows trim compatibility warnings during build.
XML
<PropertyGroup>
<PublishTrimmed>true</PublishTrimmed>
</PropertyGroup>
2. Then publish your app using either the dotnet publish command or Visual Studio.
<PublishTrimmed> should be set in the project file so that trim-incompatible features are
disabled during dotnet build . However, you can also set this option as an argument to
dotnet publish :
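A command along these lines does that (the <RID> placeholder follows the convention used elsewhere in this article):
.NET CLI
dotnet publish -r <RID> -p:PublishTrimmed=true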
For more information, see Publish .NET apps with .NET CLI.
Publish with Visual Studio
1. In Solution Explorer, right-click on the project you want to publish and select
Publish.
If you don't already have a publishing profile, follow the instructions to create one
and choose the Folder target-type.
For more information, see Publish .NET Core apps with Visual Studio.
See also
.NET Core application deployment.
Publish .NET apps with .NET CLI.
Publish .NET Core apps with Visual Studio.
dotnet publish command.
Introduction to trim warnings
Article • 11/05/2023
Conceptually, trimming is simple: when you publish an application, the .NET SDK
analyzes the entire application and removes all unused code. However, it can be difficult
to determine what is unused, or more precisely, what is used.
To prevent changes in behavior when trimming applications, the .NET SDK provides
static analysis of trim compatibility through trim warnings. The trimmer produces trim
warnings when it finds code that might not be compatible with trimming. Code that's
not trim-compatible can produce behavioral changes, or even crashes, in an application
after it has been trimmed. Ideally, all applications that use trimming shouldn't produce
any trim warnings. If there are any trim warnings, the app should be thoroughly tested
after trimming to ensure that there are no behavior changes.
This article helps you understand why some patterns produce trim warnings, and how
these warnings can be addressed.
C#
string s = Console.ReadLine();
Type type = Type.GetType(s);
foreach (var m in type.GetMethods())
{
Console.WriteLine(m.Name);
}
In this example, GetType() dynamically requests a type with an unknown name, and then
prints the names of all of its methods. Because there's no way to know at publish-time
what type name is going to be used, there's no way for the trimmer to know which type
to preserve in the output. It's likely that this code could have worked before trimming
(as long as the input is something known to exist in the target framework), but would
probably produce a null reference exception after trimming, as Type.GetType returns
null when the type isn't found.
In this case, the trimmer issues a warning on the call to Type.GetType , indicating that it
can't determine which type is going to be used by the application.
C#
void TestMethod()
{
// IL2026: Using method 'MethodWithAssemblyLoad' which has 'RequiresUnreferencedCodeAttribute'
// can break functionality when trimming application code. This functionality is not
// compatible with trimming. Use 'MethodFriendlyToTrimming' instead.
MethodWithAssemblyLoad();
}
There aren't many workarounds for RequiresUnreferencedCode . The best fix is to avoid
calling the method at all when trimming and use something else that's trim-compatible.
If you're writing a library and it's not in your control whether or not to use incompatible
functionality, you can mark it with RequiresUnreferencedCode . This annotates your
method as incompatible with trimming. Using RequiresUnreferencedCode silences all trim
warnings in the given method, but produces a warning whenever someone else calls it.
Console
With the example above, a warning for a specific method might look like this:
Console
Developers calling such APIs are generally not going to be interested in the particulars
of the affected API or specifics as it relates to trimming.
A good message should state what functionality isn't compatible with trimming and
then guide the developer what are their potential next steps. It might suggest to use a
different functionality or change how the functionality is used. It might also simply state
that the functionality isn't yet compatible with trimming without a clear replacement.
For example:
C#
Console
Using RequiresUnreferencedCode often leads to marking more methods with it, due to
the same reason. This is common when a high-level method becomes incompatible with
trimming because it calls a low-level method that isn't trim-compatible. You "bubble up"
the warning to a public API. Each usage of RequiresUnreferencedCode needs a message,
and in these cases the messages are likely the same. To avoid duplicating strings and
making it easier to maintain, use a constant string field to store the message:
C#
class Functionality
{
const string IncompatibleWithTrimmingMessage = "This functionality is not compatible with trimming. Use 'FunctionalityFriendlyToTrimming' instead";
[RequiresUnreferencedCode(IncompatibleWithTrimmingMessage)]
private void ImplementationOfAssemblyLoading()
{
...
}
[RequiresUnreferencedCode(IncompatibleWithTrimmingMessage)]
public void MethodWithAssemblyLoad()
{
ImplementationOfAssemblyLoading();
}
}
C#
string s = Console.ReadLine();
Type type = Type.GetType(s);
foreach (var m in type.GetMethods())
{
Console.WriteLine(m.Name);
}
In the previous example, the real problem is Console.ReadLine() . Because any type
could be read, the trimmer has no way to know if you need methods on
System.DateTime or System.Guid or any other type. On the other hand, the following
code doesn't have that problem:
C#
Type type = typeof(System.DateTime);
foreach (var m in type.GetMethods())
{
Console.WriteLine(m.Name);
}
Here the trimmer can see the exact type being referenced: System.DateTime . Now it can
use flow analysis to determine that it needs to keep all public methods on
System.DateTime. So where does DynamicallyAccessedMembers come in? It comes in when
reflection is split across multiple methods. In the following code, we can see that the type
System.DateTime flows to Method3, where reflection is used to access System.DateTime's
methods:
C#
void Method1()
{
Method2<System.DateTime>();
}
void Method2<T>()
{
Type t = typeof(T);
Method3(t);
}
void Method3(Type type)
{
var methods = type.GetMethods();
...
}
For performance and stability, flow analysis isn't performed between methods, so an
annotation is needed to pass information between methods, from the reflection call
( GetMethods ) to the source of the Type . In the previous example, the trimmer warning is
saying that GetMethods requires the Type object instance it's called on to have the
PublicMethods annotation, but the type variable doesn't have the same requirement. In
other words, we need to pass the requirements from GetMethods up to the caller:
C#
void Method1()
{
Method2<System.DateTime>();
}
void Method2<T>()
{
Type t = typeof(T);
Method3(t);
}
void Method3(
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
Type type)
{
var methods = type.GetMethods();
...
}
After annotating the parameter type , the original warning disappears, but another
appears:
C#
void Method1()
{
Method2<System.DateTime>();
}
void Method2<[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)] T>()
{
Type t = typeof(T);
Method3(t);
}
void Method3(
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
Type type)
{
var methods = type.GetMethods();
...
}
Now there are no warnings because the trimmer knows which members might be
accessed via runtime reflection (public methods) and on which types ( System.DateTime ),
and it preserves them. It's best practice to add annotations so the trimmer knows what
to preserve.
C#
[UnconditionalSuppressMessage("AssemblyLoadTrimming",
"IL2026:RequiresUnreferencedCode",
Justification = "Everything referenced in the loaded assembly is
manually preserved, so it's safe")]
void TestMethod()
{
InitializeEverything();
ReportResults();
}
2 Warning
Be very careful when suppressing trim warnings. It's possible that the call may be
trim-compatible now, but as you change your code that may change, and you may
forget to review all the suppressions.
The suppression applies to the entire method body. So in our sample above it
suppresses all IL2026 warnings from the method. This makes it harder to understand, as
it's not clear which method is the problematic one, unless you add a comment. More
importantly, if the code changes in the future, such as if ReportResults becomes trim-
incompatible as well, no warning is reported for this method call.
You can resolve this by refactoring the problematic method call into a separate method
or local function and then applying the suppression to just that method:
C#
void TestMethod()
{
InitializeEverything();
CallMethodWithAssemblyLoad();
ReportResults();
[UnconditionalSuppressMessage("AssemblyLoadTrimming",
"IL2026:RequiresUnreferencedCode",
Justification = "Everything referenced in the loaded assembly is
manually preserved, so it's safe")]
void CallMethodWithAssemblyLoad()
{
MethodWithAssemblyLoad(); // Warning suppressed
}
}
Known trimming incompatibilities
Article • 11/07/2023
There are some patterns that are known to be incompatible with trimming. Some of
these patterns might become compatible as tooling improves or as libraries make
modifications to become trimming compatible.
Reflection-based serializers
Alternative: Reflection-free serializers.
WPF
The Windows Presentation Foundation (WPF) framework makes substantial use of
reflection and some features are heavily reliant on run-time code inspection. It's not
possible for trimming analysis to preserve all necessary code for WPF applications.
Unfortunately, almost no WPF apps are runnable after trimming, so trimming support
for WPF is currently disabled in the .NET SDK. See WPF is not trim-compatible issue
for progress on enabling trimming for WPF.
Windows Forms
The Windows Forms framework makes minimal use of reflection, but is heavily reliant on
built-in COM marshalling. Unfortunately, almost no Windows Forms apps are runnable
without built-in COM marshalling, so trimming support for Windows Forms apps is
disabled in the .NET SDK currently. See Make WinForms trim compatible issue for
progress on enabling trimming for Windows Forms.
Trimming options
Article • 09/04/2024
The MSBuild properties and items described in this article influence the behavior of
trimmed, self-contained deployments. Some of the options mention ILLink , which is
the name of the underlying tool that implements trimming. For more information about
the underlying tool, see the Trimmer documentation .
Trimming with PublishTrimmed was introduced in .NET Core 3.0. The other options are
available in .NET 5 and later versions.
Enable trimming
<PublishTrimmed>true</PublishTrimmed>
Enable trimming during publish. This setting also turns off trim-incompatible
features and enables trim analysis during build. In .NET 8 and later apps, this
setting also enables the configuration binding and request delegate source
generators.
7 Note
If you specify trimming as enabled from the command line, your debugging
experience will differ and you might encounter additional bugs in the final product.
Place this setting in the project file to ensure that the setting applies during dotnet
build , not just dotnet publish .
This setting enables trimming and trims all assemblies by default. In .NET 6, only
assemblies that opted-in to trimming via [AssemblyMetadata("IsTrimmable", "True")]
(added in projects that set <IsTrimmable>true</IsTrimmable> ) were trimmed by default.
You can return to the previous behavior by using <TrimMode>partial</TrimMode> .
This setting also enables the trim-compatibility Roslyn analyzer and disables features
that are incompatible with trimming.
Trimming granularity
Use the TrimMode property to set the trimming granularity to either partial or full .
The default setting for console apps (and, starting in .NET 8, Web SDK apps) is full :
XML
<TrimMode>full</TrimMode>
To only trim assemblies that have opted-in to trimming, set the property to partial :
XML
<TrimMode>partial</TrimMode>
If you change the trim mode to partial , you can opt-in individual assemblies to
trimming by using a <TrimmableAssembly> MSBuild item.
XML
<ItemGroup>
<TrimmableAssembly Include="MyAssembly" />
</ItemGroup>
Root assemblies
If an assembly is not trimmed, it's considered "rooted", which means that it and all of its
statically understood dependencies will be kept. Additional assemblies can be "rooted"
by name (without the .dll extension):
XML
<ItemGroup>
<TrimmerRootAssembly Include="MyAssembly" />
</ItemGroup>
Root descriptors
Another way to specify roots for analysis is using an XML file that uses the trimmer
descriptor format . This lets you root specific members instead of a whole assembly.
XML
<ItemGroup>
<TrimmerRootDescriptor Include="MyRoots.xml" />
</ItemGroup>
For example, MyRoots.xml might root a specific method that's dynamically accessed by
the application:
XML
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyAssembly.MyClass">
<method name="DynamicallyAccessedMethod" />
</type>
</assembly>
</linker>
Analysis warnings
<SuppressTrimAnalysisWarnings>false</SuppressTrimAnalysisWarnings>
Trimming removes IL that's not statically reachable. Apps that use reflection or other
patterns that create dynamic dependencies might be broken by trimming. To warn
about such patterns, set <SuppressTrimAnalysisWarnings> to false . This setting will
surface warnings about the entire app, including your own code, library code, and
framework code.
Roslyn analyzer
Setting PublishTrimmed in .NET 6+ also enables a Roslyn analyzer that shows a limited
set of analysis warnings. You can also enable or disable the analyzer independently of
PublishTrimmed .
<EnableTrimAnalyzer>true</EnableTrimAnalyzer>
Suppress warnings
You can suppress individual warning codes using the usual MSBuild properties
respected by the toolchain, including NoWarn , WarningsAsErrors , WarningsNotAsErrors ,
and TreatWarningsAsErrors . There's an additional option that controls the ILLink warn-
as-error behavior independently:
<ILLinkTreatWarningsAsErrors>false</ILLinkTreatWarningsAsErrors>
Don't treat ILLink warnings as errors. This might be useful to avoid turning trim
analysis warnings into errors when treating compiler warnings as errors globally.
<TrimmerSingleWarn>false</TrimmerSingleWarn>
Show all detailed warnings, instead of collapsing them to a single warning per
assembly.
Remove symbols
Symbols are usually trimmed to match the trimmed assemblies. You can also remove all
symbols:
<TrimmerRemoveSymbols>true</TrimmerRemoveSymbols>
Remove symbols from the trimmed application, including embedded PDBs and
separate PDB files. This applies to both the application code and any dependencies
that come with symbols.
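As with the other options in this article, the property can be set in the project file or passed on the command line; a sketch of the latter:
.NET CLI
dotnet publish -p:PublishTrimmed=true -p:TrimmerRemoveSymbols=true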
The SDK also makes it possible to disable debugger support using the property
DebuggerSupport . When debugger support is disabled, trimming removes symbols
These properties cause the related code to be trimmed and also disable features via the
runtimeconfig file. For more information about these properties, including the
corresponding runtimeconfig options, see feature switches . Some SDKs might have
default values for these properties.
2 Warning
Enable these features at your own risk. They are likely to break trimmed apps
without extra work to preserve the dynamically referenced code.
<BuiltInComInteropSupport>
<CustomResourceTypesSupport>
Use of custom resource types isn't supported. ResourceManager code paths that
use reflection for custom resource types are trimmed.
<EnableCppCLIHostActivation>
<EnableUnsafeBinaryFormatterInDesigntimeLicenseContextSerialization>
Running code before Main with DOTNET_STARTUP_HOOKS isn't supported. For more
information, see host startup hook .
Prepare .NET libraries for trimming
Article • 09/02/2023
The .NET SDK makes it possible to reduce the size of self-contained apps by trimming.
Trimming removes unused code from the app and its dependencies. Not all code is
compatible with trimming. .NET provides trim analysis warnings to detect patterns that
may break trimmed apps. This article:
Prerequisites
.NET 8 SDK or later.
XML
<PropertyGroup>
<IsTrimmable>true</IsTrimmable>
</PropertyGroup>
Of the library.
All dependencies the library uses.
Because of the dependency limitations, a self-contained test app which uses the library
and its dependencies must be created. The test app includes all the information the
trimmer requires to issue warnings on trim incompatibilities in:
Note: If the library has different behavior depending on the target framework, create a
trimming test app for each of the target frameworks that support trimming. For
example, if the library uses conditional compilation such as #if NET7_0 to change
behavior.
Add <PublishTrimmed>true</PublishTrimmed> .
Add a reference to the library project with <ProjectReference
Include="/Path/To/YourLibrary.csproj" /> .
Add <TrimmerRootAssembly Include="YourLibrary" /> to tell the trimmer that this
assembly is a "root". A "root" assembly means the trimmer
analyzes every call in the library and traverses all code paths that originate from
that assembly.
.csproj file
XML
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net8.0</TargetFramework>
<PublishTrimmed>true</PublishTrimmed>
</PropertyGroup>
<ItemGroup>
<ProjectReference Include="..\MyLibrary\MyLibrary.csproj" />
<TrimmerRootAssembly Include="MyLibrary" />
</ItemGroup>
</Project>
Once the project file is updated, run dotnet publish with the target runtime identifier
(RID).
.NET CLI
dotnet publish -c Release -r <RID>
Follow the preceding pattern for multiple libraries. To see trim analysis warnings for
more than one library at a time, add them all to the same project as ProjectReference
and TrimmerRootAssembly items. Adding all the libraries to the same project with
ProjectReference and TrimmerRootAssembly items warns about dependencies if any of
the root libraries use a trim-unfriendly API in a dependency. To see warnings that have
to do with only a particular library, reference that library only.
Note: The analysis results depend on the implementation details of the dependencies.
Updating to a new version of a dependency may introduce analysis warnings.
RequiresUnreferencedCode
Consider the following code that uses [RequiresUnreferencedCode] to indicate that the
specified method requires dynamic access to code that is not referenced statically, for
example, through System.Reflection.
C#
[RequiresUnreferencedCode(
"DynamicBehavior is incompatible with trimming.")]
static void DynamicBehavior()
{
}
}
The preceding code indicates the library calls a method that has explicitly
been annotated as incompatible with trimming. To get rid of the warning, consider
whether MyMethod needs to call DynamicBehavior . If so, annotate the caller MyMethod with
[RequiresUnreferencedCode], which propagates the warning so that callers of MyMethod
get a warning instead:
C#
[RequiresUnreferencedCode(
"DynamicBehavior is incompatible with trimming.")]
static void DynamicBehavior()
{
}
}
Once you have propagated up the attribute all the way to public API, apps calling the
library:
DynamicallyAccessedMembers
Consider the following method, which annotates its parameter to indicate that the public
methods of the passed-in type are accessed through reflection:
C#
static void UseMethods(
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
Type type)
{
// ...
}
Now any calls to UseMethods produce warnings if they pass in values that don't satisfy
the PublicMethods requirement. Similar to [RequiresUnreferencedCode] , once you have
propagated up such warnings to public APIs, you're done.
In the following example, an unknown Type flows into the annotated method parameter.
The unknown Type is from a field:
C#
Similarly, here the problem is that the field type is passed into a parameter with these
requirements. It's fixed by adding [DynamicallyAccessedMembers] to the field.
[DynamicallyAccessedMembers] warns about code that assigns incompatible values to the
field. Sometimes this process continues until a public API is annotated, and other times
it ends when a concrete type flows into a location with these requirements. For example:
C#
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
static Type type = typeof(System.Tuple);
In this case, the trim analysis keeps the public methods of Tuple, and doesn't produce
further warnings.
Recommendations
Avoid reflection when possible. When using reflection, minimize reflection scope
so that it's reachable only from a small part of the library.
Annotate code with DynamicallyAccessedMembers to statically express the trimming
requirements when possible.
Consider reorganizing code to make it follow an analyzable pattern that can be
annotated with DynamicallyAccessedMembers
When code is incompatible with trimming, annotate it with
RequiresUnreferencedCode and propagate this annotation to callers until it reaches a public API.
In some cases, you may be interested in enabling trimming of a library that uses
patterns that can't be expressed with those attributes, or without refactoring existing
code. This section describes some advanced ways to resolve trim analysis warnings.
2 Warning
These techniques might change the behavior or your code or result in run time
exceptions if used incorrectly.
UnconditionalSuppressMessage
Consider code that:
2 Warning
When suppressing warnings, you are responsible for guaranteeing the trim
compatibility of the code based on invariants that you know to be true by
inspection and testing. Use caution with these annotations, because if they are
incorrect, or if invariants of your code change, they might end up hiding incorrect
code.
For example:
C#
class TypeCollection
{
Type[] types;
// Ensure that only types with preserved constructors are stored in the array
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicParameterlessConstructor)]
public Type this[int i]
{
// warning IL2063: TypeCollection.Item.get: Value returned from method
// 'TypeCollection.Item.get' can't be statically determined and may not meet
// 'DynamicallyAccessedMembersAttribute' requirements.
get => types[i];
set => types[i] = value;
}
}
class TypeCreator
{
TypeCollection types;
class TypeWithConstructor
{
}
In the preceding code, the indexer property has been annotated so that the returned
Type meets the requirements of CreateInstance . This ensures that the
If you're sure that the requirements are met, you can silence this warning by adding
[UnconditionalSuppressMessage] to the getter:
C#
class TypeCollection
{
Type[] types;
// Ensure that only types with preserved constructors are stored in the array
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicParameterlessConstructor)]
public Type this[int i]
{
[UnconditionalSuppressMessage("ReflectionAnalysis", "IL2063",
Justification = "The list only contains types stored through the
annotated setter.")]
get => types[i];
set => types[i] = value;
}
}
class TypeCreator
{
TypeCollection types;
class TypeWithConstructor
{
}
It's important to underline that it's only valid to suppress a warning if there are
annotations or code that ensure the reflected-on members are visible targets of
reflection. It isn't sufficient that the member was a target of a call, field or property
access. It may appear to be the case sometimes but such code is bound to break
eventually as more trimming optimizations are added. Properties, fields, and methods
that aren't visible targets of reflection could be inlined, have their names removed, get
moved to different types, or otherwise optimized in ways that break reflecting on them.
When suppressing a warning, it's only permissible to reflect on targets that were visible
targets of reflection to the trimming analyzer elsewhere.
C#
DynamicDependency
The [DynamicDependency] attribute can be used to indicate that a member has a
dynamic dependency on other members. This results in the referenced members being
kept whenever the member with the attribute is kept, but doesn't silence warnings on its
own. Unlike the other attributes, which inform the trim analysis about the reflection
behavior of the code, [DynamicDependency] only keeps other members. This can be used
together with [UnconditionalSuppressMessage] to fix some analysis warnings.
2 Warning
Use [DynamicDependency] attribute only as a last resort when the other approaches
aren't viable. It is preferable to express the reflection behavior using
[RequiresUnreferencedCode] or [DynamicallyAccessedMembers].
C#
attribute context, or explicitly specified in the attribute (by Type , or by string s for the
type and assembly name).
The type and member strings use a variation of the C# documentation comment ID
string format, without the member prefix. The member string shouldn't include the
name of the declaring type, and may omit parameters to keep all members of the
specified name. Some examples of the format are shown in the following code:
C#
[DynamicDependency("MyMethod()")]
[DynamicDependency("MyMethod(System,Boolean,System.String)")]
[DynamicDependency("MethodOnDifferentType()", typeof(ContainingType))]
[DynamicDependency("MemberName")]
[DynamicDependency("MemberOnUnreferencedAssembly", "ContainingType"
, "UnreferencedAssembly")]
[DynamicDependency("MemberName", "Namespace.ContainingType.NestedType",
"Assembly")]
// generics
[DynamicDependency("GenericMethodName``1")]
[DynamicDependency("GenericMethod``2(``0,``1)")]
[DynamicDependency("MethodWithGenericParameterTypes(System.Collections.Generic.List{System.String})")]
[DynamicDependency("MethodOnGenericType(`0)", "GenericType`1",
"UnreferencedAssembly")]
[DynamicDependency("MethodOnGenericType(`0)", typeof(GenericType<>))]
Cause
An XML descriptor file is trying to preserve fields on a type with no fields.
Rule description
Descriptor files are used to direct the IL trimmer to always keep certain members in an
assembly, regardless of whether the trimmer can find references to them. However,
trying to preserve members that cannot be found will trigger a warning.
Example
XML
<linker>
<assembly fullname="test">
<type fullname="TestType" preserve="fields" />
</assembly>
</linker>
C#
Cause
An XML descriptor file is trying to preserve methods on a type with no methods.
Rule description
Descriptor files are used to direct the IL trimmer to always keep certain members in an
assembly, regardless of whether the trimmer can find references to them. However,
trying to preserve members that cannot be found will trigger a warning.
Example
XML
<linker>
<assembly fullname="test">
<type fullname="TestType" preserve="methods" />
</assembly>
</linker>
C#
Cause
The assembly specified in a PreserveDependencyAttribute could not be resolved.
Rule description
Trimmer keeps a cache with the assemblies that it has seen. If the assembly specified in
the PreserveDependencyAttribute is not found in this cache, the trimmer does not have a
way to find the member to preserve.
Example
C#
Cause
The type specified in a PreserveDependencyAttribute could not be resolved.
Rule description
Trimmer keeps a cache with the assemblies that it has seen. If the type specified in the
PreserveDependencyAttribute is not found inside an assembly in this cache, the trimmer
Example
C#
Cause
The member of a type specified in a PreserveDependencyAttribute could not be
resolved.
Example
C#
Cause
An assembly specified in a descriptor file could not be resolved.
Rule description
Descriptor files are used to instruct the trimmer to always keep certain items in an
assembly, regardless of whether the trimmer could find any references to them.
The assembly specified in the descriptor file by its full name could not be found in any
of the assemblies seen by the trimmer.
Example
XML
Cause
A type specified in a descriptor file could not be resolved.
Rule description
Descriptor files are used to instruct the trimmer to always keep certain items in an
assembly, regardless of whether the trimmer could find any references to them.
A type specified in a descriptor file could not be found in the assembly matching the
fullname argument that was passed to the parent of the type element.
Example
XML
Cause
A method specified on a type in a descriptor file could not be resolved.
Rule description
Descriptor files are used to instruct the trimmer to always keep certain items in an
assembly, regardless of whether the trimmer could find any references to them.
A method specified in a descriptor file could not be found in the type matching the
fullname argument that was passed to the parent of the method element.
Example
XML
<!-- IL2009: Could not find method 'NonExistentMethod' on type 'MyType' -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<method name="NonExistentMethod" />
</type>
</assembly>
</linker>
IL2010: Invalid value on a method
substitution
Article • 03/11/2022
Cause
The value used in a substitution file for replacing a method's body does not represent a
value of a built-in type or match the return type of the method.
Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with a throw statement or to return constant statements.
The value passed to the value argument of a method element could not be converted
by the trimmer to a type matching the return type of the specified method.
Example
XML
Cause
The action value passed to the body argument of a method element in a substitution file
is invalid.
Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with a throw statement or to return constant statements.
The value passed to the body argument of a method element was invalid. The only
supported options for this argument are remove and stub .
Example
XML
Cause
A field specified for substitution in a substitution file could not be found.
Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with either a throw or return constant statements.
A field specified in a substitution file could not be found in the type matching the
fullname argument that was passed to the parent of the field element.
Example
XML
<!-- IL2012: Could not find field 'NonExistentField' on type 'MyType' -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<field name="NonExistentField" />
</type>
</assembly>
</linker>
IL2013: Substituted fields must be static
or constant
Article • 03/11/2022
Cause
A field specified for substitution in a substitution file is non-static or constant.
Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with either a throw or return constant statements.
Example
XML
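A minimal sketch (placeholder names) of a substitution that targets an instance field, which cannot be substituted:
<linker>
  <assembly fullname="MyAssembly">
    <type fullname="MyType">
      <!-- '_instanceField' is a non-static field on MyType, so it cannot be substituted -->
      <field name="_instanceField" value="5" />
    </type>
  </assembly>
</linker>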
Cause
A field was specified for substitution in a substitution file but no value to be substituted
for was given.
Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with either a throw or return constant statements.
A field element specified in the substitution file does not specify the required value
argument.
Example
XML
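A minimal sketch (placeholder names) of a field substitution that omits the required value argument:
<linker>
  <assembly fullname="MyAssembly">
    <type fullname="MyType">
      <!-- The 'value' argument is missing, so the trimmer has nothing to substitute -->
      <field name="MyNumericField" />
    </type>
  </assembly>
</linker>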
Cause
The value used in a substitution file for replacing a field's value does not represent a
value of a built-in type or does not match the type of the field.
Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with either a throw or return constant statements.
The value passed to the value argument of a field element could not be converted by
the trimmer to a type matching the return type of the specified field.
Example
XML
Cause
Could not find event on type.
Rule description
An event specified in an XML file for the trimmer could not be found in the type
matching the fullname argument that was passed to the parent of the event element.
Example
XML
<!-- IL2016: Could not find event 'NonExistentEvent' on type 'MyType' -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<event name="NonExistentEvent" />
</type>
</assembly>
</linker>
IL2017: Could not find property on type
Article • 04/24/2024
Cause
Could not find property on type.
Rule description
A property specified in an XML file for the trimmer could not be found in the type
matching the fullname argument that was passed to the parent of the property
element.
Example
XML
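A minimal sketch (placeholder names) of a descriptor that names a property that does not exist on the type:
<linker>
  <assembly fullname="MyAssembly">
    <type fullname="MyType">
      <!-- 'NonExistentProperty' is not defined on MyType -->
      <property name="NonExistentProperty" />
    </type>
  </assembly>
</linker>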
IL2018: Could not find the get accessor
of property on type in descriptor file
Article • 03/11/2022
Cause
A get accessor specified in a descriptor file could not be found.
Rule description
Descriptor files are used to instruct the trimmer to always keep certain items in an
assembly, regardless of whether the trimmer could find any references to them.
A get accessor specified in a descriptor file could not be found in the property
matching the signature argument that was passed to the property element.
Example
XML
<!-- IL2018: Could not find the get accessor of property 'SetOnlyProperty'
on type 'MyType' -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<property signature="System.Boolean SetOnlyProperty" accessors="get"
/>
</type>
</assembly>
</linker>
IL2019: Could not find the set accessor
of property on type in descriptor file
Article • 03/11/2022
Cause
A set accessor specified in a descriptor file could not be found.
Rule description
Descriptor files are used to instruct the trimmer to always keep certain items in an
assembly, regardless of whether the trimmer could find any references to them.
A set accessor specified in a descriptor file could not be found in the property
matching the signature argument that was passed to the property element.
Example
XML
<!-- IL2019: Could not find the set accessor of property 'GetOnlyProperty'
on type 'MyType' -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<property signature="System.Boolean GetOnlyProperty" accessors="set"
/>
</type>
</assembly>
</linker>
IL2022: Could not find matching
constructor for custom attribute
specified in custom attribute
annotations file
Article • 03/11/2022
Cause
The constructor of a custom attribute specified in a custom attribute annotations file
could not be found.
Rule description
Custom attribute annotation files are used to instruct the trimmer to behave as if the
specified item has a given attribute. Attribute annotations can only be used to add
attributes that have an effect on the trimmer behavior; all other attributes are ignored.
Attributes added via attribute annotations only influence the trimmer behavior and they
are never added to the output assembly.
Example
XML
<!-- IL2022: Could not find matching constructor for custom attribute
'attribute-type' arguments -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<attribute fullname="AttributeWithNoParametersAttribute">
<argument>ExtraArgumentValue</argument>
</attribute>
</type>
</assembly>
</linker>
IL2023: There is more than one return
child element specified for a method in
a custom attribute annotations file
Article • 03/11/2022
Cause
A method has more than one return element specified. There can only be one return
element when putting an attribute on the return parameter of a method.
Rule description
Custom attribute annotation files are used to instruct the trimmer to behave as if the
specified item has a given attribute. Attribute annotations can only be used to add
attributes that have effect on the trimmer behavior. All other attributes are ignored.
Attributes added via attribute annotations only influence the trimmer behavior, and they
are never added to the output assembly.
A method element has more than one return element specified. The trimmer only allows
a single return element for a given method; all attribute annotations on the return value
must be placed in that one element.
Example
XML
<!-- IL2023: There is more than one 'return' child element specified for
method 'method' -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<method name="MyMethod">
<return>
<attribute fullname="FirstAttribute"/>
</return>
<return>
<attribute fullname="SecondAttribute"/>
</return>
</method>
</type>
</assembly>
</linker>
IL2024: There is more than one value
specified for the same method
parameter in a custom attribute
annotations file
Article • 03/11/2022
Cause
A method parameter has more than one value element specified. There can only be one
value specified for each method parameter.
Rule description
Custom attribute annotation files are used to instruct the trimmer to behave as if the
specified item has a given attribute. Attribute annotations can only be used to add
attributes that have an effect on the trimmer behavior; all other attributes are ignored.
Attributes added via attribute annotations only influence the trimmer behavior and they
are never added to the output assembly.
There is more than one parameter element with the same name value in a given method
element. All attributes on a parameter should be put in a single element.
Example
XML
<!-- IL2024: More than one value specified for parameter 'parameter' of
method 'method' -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<method name="MyMethod">
<parameter name="methodParameter">
<attribute fullname="FirstAttribute"/>
</parameter>
<parameter name="methodParameter">
<attribute fullname="SecondAttribute"/>
</parameter>
</method>
</type>
</assembly>
</linker>
IL2025: Duplicate preserve of a member
in a descriptor file
Article • 03/11/2022
Cause
A member on a type is marked for preservation more than once in a descriptor file.
Rule description
Descriptor files are used to instruct the trimmer to always keep certain items in an
assembly, regardless of whether the trimmer could find any references to them.
Example
XML
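A minimal sketch (placeholder names) of a descriptor that marks the same member for preservation twice:
<linker>
  <assembly fullname="MyAssembly">
    <type fullname="MyType">
      <method name="MyMethod" />
      <!-- Duplicate preserve of the same member -->
      <method name="MyMethod" />
    </type>
  </assembly>
</linker>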
Cause
Calling (or accessing via reflection) a member annotated with
RequiresUnreferencedCodeAttribute.
For example:
C#
void TestMethod()
{
// IL2026: Using method 'MethodWithUnreferencedCodeUsage' which has
'RequiresUnreferencedCodeAttribute'
// can break functionality when trimming application code. Use
'MethodFriendlyToTrimming' instead. http://help/unreferencedcode
MethodWithUnreferencedCodeUsage();
}
Rule description
RequiresUnreferencedCodeAttribute indicates that the member references code that
may be removed by the trimmer.
Cause
Trimmer found multiple instances of the same trimmer-supported attribute on a single
member.
IL2028: Known trimmer attribute does
not have the required number of
parameters
Article • 11/17/2021
Cause
Trimmer found an instance of a known attribute that lacks the required constructor
parameters or has more than the accepted parameters. This can happen if a custom
assembly defines a custom attribute whose full name conflicts with the trimmer-known
attributes, since the trimmer recognizes custom attributes by matching namespace and
type name.
IL2029: Attribute element in custom
attribute annotations file does not have
required argument fullname or it is
empty
Article • 03/11/2022
Cause
An attribute element in a custom attribute annotations file does not have required
argument fullname or its value is an empty string.
Rule description
Custom attribute annotation files are used to instruct the trimmer to behave as if the
specified item has a given attribute. Attribute annotations can only be used to add
attributes that have effect on the trimmer behavior. All other attributes are ignored.
Attributes added via attribute annotations only influence the trimmer behavior, and
they're never added to the output assembly.
All attribute elements must have the required fullname argument and its value cannot
be an empty string.
Example
XML
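A minimal sketch (placeholder names) of an attribute annotation that omits the required fullname argument:
<linker>
  <assembly fullname="MyAssembly">
    <type fullname="MyType">
      <!-- The 'fullname' argument is missing, so the trimmer cannot tell which attribute to apply -->
      <attribute />
    </type>
  </assembly>
</linker>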
Cause
Could not resolve assembly from the assembly argument of an attribute element in a
custom attribute annotations file.
Rule description
Custom attribute annotation files are used to instruct the trimmer to behave as if the
specified item has a given attribute. Attribute annotations can only be used to add
attributes that have an effect on the trimmer behavior; all other attributes are ignored.
Attributes added via attribute annotations only influence the trimmer behavior and they
are never added to the output assembly.
The value of the assembly argument in an attribute element does not match any of the
assemblies seen by the trimmer.
Example
XML
Cause
Could not resolve custom attribute from the type name specified in the fullname
argument of an attribute element in a custom attribute annotations file.
Rule description
Custom attribute annotation files are used to instruct the trimmer to behave as if the
specified item has a given attribute. Attribute annotations can only be used to add
attributes that have an effect on the trimmer behavior; all other attributes are ignored.
Attributes added via attribute annotations only influence the trimmer behavior and they
are never added to the output assembly.
An attribute specified in a custom attribute annotations file could not be found in the
assembly matching the fullname argument that was passed to the parent of the
attribute element.
Example
XML
Cause
The value passed to the assembly or type name of the CreateInstance method cannot
be statically analyzed. The trimmer cannot guarantee the availability of the target type.
Example
C#
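A minimal sketch of the pattern: the type name is read at run time, so the trimmer cannot tell which type must be kept. The assembly name and input source here are placeholders:
void TestMethod()
{
    // The type name comes from user input; the trimmer cannot statically analyze it
    string typeName = Console.ReadLine();
    Activator.CreateInstance("MyAssembly", typeName);
}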
Cause
PreserveDependencyAttribute was an internal attribute used by the trimmer and is no
longer supported. Use DynamicDependencyAttribute instead.
Example
C#
Cause
Application contains an invalid use of DynamicDependencyAttribute. Ensure that you are
using one of the officially supported constructors.
IL2035: Unresolved assembly in
'DynamicDependencyAttribute'
Article • 03/11/2022
Cause
The value passed to the assemblyName parameter of a DynamicDependencyAttribute
could not be resolved.
Example
C#
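A minimal sketch, assuming 'NonExistentAssembly' is not referenced by the application and therefore cannot be resolved by the trimmer:
using System.Diagnostics.CodeAnalysis;

class MyClass
{
    // The assembly name passed to DynamicDependencyAttribute cannot be resolved
    [DynamicDependency("MyMethod", "MyType", "NonExistentAssembly")]
    static void TestMethod()
    {
    }
}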
Cause
The value passed to the typeName parameter of a DynamicDependencyAttribute could
not be resolved.
Example
C#
Cause
The value passed to the member signature parameter of a
DynamicDependencyAttribute could not resolve to any member. Ensure that the value
passed refers to an existing member and that it uses the correct ID string format .
Example
C#
Cause
A resource element in a substitution file does not specify the required name argument.
Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with either a throw or return constant statements.
All resource elements in a substitution file must have the required name argument
specifying the resource to remove.
Example
XML
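A minimal sketch (placeholder assembly name) of a resource element that omits the required name argument:
<linker>
  <assembly fullname="MyAssembly">
    <!-- The 'name' argument is missing, so the trimmer does not know which resource to remove -->
    <resource action="remove" />
  </assembly>
</linker>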
Cause
The value passed to the action argument of a resource element in a substitution file is
not valid.
Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with either a throw or return constant statements.
The value passed to the action argument of a resource element was invalid. The only
supported value for this argument is remove .
Example
XML
Cause
No embedded resource with name matching the value used in the name argument could
be found in the specified assembly.
Rule description
Substitution files are used to instruct the trimmer to replace specific method bodies
with either a throw or return constant statements.
The resource name in a substitution file could not be found in the specified assembly.
The name of the resource to remove must match the name of an embedded resource in
the assembly.
Example
XML
Cause
DynamicallyAccessedMembersAttribute was put directly on a method. This is only
allowed for instance methods on Type. This attribute should usually be placed on the
return value of the method or one of the parameters.
Example
C#
[return: DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
public Type GetInterestingType()
{
    // ...
}
IL2042: Could not find a unique backing
field to propagate the
'DynamicallyAccessedMembersAttribute
' annotation on a property
Article • 03/11/2022
Cause
The trimmer could not determine the backing field of a property annotated with
DynamicallyAccessedMembersAttribute.
Example
C#
// IL2042: Could not find a unique backing field for property 'MyProperty'
// to propagate 'DynamicallyAccessedMembersAttribute'
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
public Type MyProperty
{
    get { return GetTheValue(); }
    [param: DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
    set { }
}
IL2043:
'DynamicallyAccessedMembersAttribute
' on property conflicts with the same
attribute on its accessor method
Article • 03/11/2022
Cause
While propagating DynamicallyAccessedMembersAttribute from the annotated property
to its accessor method, the trimmer found that the accessor already has such an
attribute. Only the existing attribute will be used.
Example
C#
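A minimal sketch (placeholder member names): the property-level annotation would be propagated to the getter's return value, but the getter already carries its own annotation, so only the existing one is used:
using System;
using System.Diagnostics.CodeAnalysis;

class MyClass
{
    private Type _field;

    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
    public Type MyProperty
    {
        // The accessor already has an annotation, which conflicts with the one on the property
        [return: DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicFields)]
        get { return _field; }
        set { _field = value; }
    }
}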
Cause
The descriptor file specified a namespace that has no types in it.
Rule description
Descriptor files are used to instruct the trimmer to always keep certain items in an
assembly, regardless of whether the trimmer could find any references to them.
A namespace specified in the descriptor file could not be found in the assembly
matching the fullname argument that was passed to the parent of the namespace
element.
Example
XML
<!-- IL2044: Could not find any type in namespace 'NonExistentNamespace' -->
<linker>
<assembly fullname="MyAssembly">
<namespace fullname="NonExistentNamespace" />
</assembly>
</linker>
IL2045: Custom attribute is referenced
in code but the trimmer was instructed
to remove all of its instances
Article • 03/11/2022
Cause
The trimmer was instructed to remove all instances of a custom attribute but kept its
type as part of its analysis. This will likely result in breaking the code where the custom
attribute's type is being used.
Example
XML
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyAttribute">
<attribute internal="RemoveAttributeInstances"/>
</type>
</assembly>
</linker>
C#
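A minimal sketch of code that would break: 'MyAttribute' matches the type named in the XML above, so its instances are removed by the trimmer, and the reflection check below no longer finds the attribute after trimming:
class MyAttribute : System.Attribute { }

// The attribute instance below is removed by the trimmer per the XML configuration
[My]
class TypeWithAttribute { }

class Consumer
{
    static bool HasMyAttribute() =>
        typeof(TypeWithAttribute).IsDefined(typeof(MyAttribute), inherit: false);
}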
Cause
There is a mismatch in the RequiresUnreferencedCodeAttribute annotations between an
interface and its implementation or a virtual method and its override.
Example
A base member has the attribute but the derived member does not have the attribute.
C#
A derived member has the attribute but the overridden base member does not have the
attribute.
C#
public class Base
{
public virtual void TestMethod() {}
}
An interface member has the attribute but its implementation does not have the
attribute.
C#
interface IRUC
{
[RequiresUnreferencedCode("Message")]
void TestMethod();
}
An implementation member has the attribute but the interface that it implements does
not have the attribute.
C#
interface IRUC
{
void TestMethod();
}
Cause
Internal trimmer attribute RemoveAttributeInstances is being used on a member but it
can only be used on a type.
Example
XML
Cause
An internal attribute name specified in a custom attribute annotations file is not
supported by the trimmer.
Example
XML
Cause
Trimmer found a p/invoke method that declares a parameter with COM marshalling.
Correctness of COM interop cannot be guaranteed after trimming.
Example
C#
[DllImport ("Foo")]
static extern void M2 (C autoLayout);
[StructLayout (LayoutKind.Auto)]
public class C
{
}
IL2051: Property element does not have
required argument name in custom
attribute annotations file
Article • 03/11/2022
Cause
A property element in a custom attribute annotations file does not specify the required
argument name .
Example
XML
<!-- IL2051: Property element does not contain attribute 'name' -->
<linker>
<assembly fullname="MyAssembly">
<type fullname="MyType">
<attribute fullname="MyAttribute">
<property>UnspecifiedPropertyName</property>
</attribute>
</type>
</assembly>
</linker>
IL2052: Could not find property
specified in custom attribute
annotations file
Article • 03/11/2022
Cause
Could not find a property matching the value of the name argument specified in a
property element in a custom attribute annotations file.
Example
XML
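A minimal sketch (placeholder names) of an attribute annotation that sets a property that does not exist on the attribute type:
<linker>
  <assembly fullname="MyAssembly">
    <type fullname="MyType">
      <attribute fullname="MyAttribute">
        <!-- 'NonExistentProperty' is not defined on MyAttribute -->
        <property name="NonExistentProperty">SomeValue</property>
      </attribute>
    </type>
  </assembly>
</linker>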
Note
This warning code is obsolete in .NET 7 and no longer produced by the tools.
Cause
Value used in a property element in a custom attribute annotations file does not match
the type of the attribute's property.
Example
XML
Note
This warning code is obsolete in .NET 7 and no longer produced by the tools.
Cause
Value used in an argument element in a custom attribute annotations file does not
match the type of the attribute's constructor arguments.
Example
XML
Cause
A call to Type.MakeGenericType(Type[]) cannot be statically analyzed by the trimmer.
Rule description
This can either be that the type on which MakeGenericType(Type[]) is called cannot be
statically determined, or that the type parameters to be used for generic arguments
cannot be statically determined. If the open generic type has
DynamicallyAccessedMembersAttribute annotations on any of its generic parameters,
the trimmer currently cannot validate that the requirements are fulfilled by the calling
method.
Example
C#
class
Lazy<[DynamicallyAccessedMembers(DynamicallyAccessedMemberType.PublicParamet
erlessConstructor)] T>
{
// ...
}
Cause
Property annotated with DynamicallyAccessedMembersAttribute also has that attribute
on its backing field.
Rule description
While propagating DynamicallyAccessedMembersAttribute from a property to its
backing field, the trimmer found its backing field to be already annotated. Only the
existing attribute will be used.
The trimmer will only propagate annotations to compiler generated backing fields,
making this warning only possible when the backing field is explicitly annotated with
CompilerGeneratedAttribute.
Example
C#
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructor
s)]
[CompilerGenerated]
Type backingField;
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructor
s)]
Type PropertyWithAnnotatedBackingField
{
get { return backingField; }
set { backingField = value; }
}
IL2057: Unrecognized value passed to
the typeName parameter of
'System.Type.GetType(String)'
Article • 03/11/2022
Cause
An unrecognized value was passed to the typeName parameter of Type.GetType(String).
Rule description
If the type name passed to the typeName parameter of GetType(String) is statically
known, the trimmer can make sure it is preserved and that the application code will
continue to work after trimming. If the type is unknown and the trimmer cannot see the
type being used anywhere else, the trimmer might end up removing it from the
application, potentially breaking it.
Example
C#
void TestMethod()
{
string typeName = ReadName();
Cause
A call to CreateInstance was found in the analyzed code.
Rule description
Trimmer does not analyze assembly instances and thus does not know on which
assembly CreateInstance was called.
Example
C#
void TestMethod()
{
    // IL2058 Trim analysis: Parameters passed to method 'Assembly.CreateInstance(string)'
    // cannot be analyzed. Consider using methods 'System.Type.GetType' and
    // 'System.Activator.CreateInstance' instead.
    AssemblyLoadContext.Default.Assemblies
        .First(a => a.GetName().Name == "MyAssembly")
        .CreateInstance("MyType");
}
How to fix
Trimmer has support for Type.GetType(String). The result can be passed to
CreateInstance to create an instance of the type.
IL2059: Unrecognized value passed to
the type parameter of
'System.Runtime.CompilerServices.Runti
meHelpers.RunClassConstructor'
Article • 03/11/2022
Cause
An unrecognized value was passed to the type parameter of
RuntimeHelpers.RunClassConstructor(RuntimeTypeHandle).
Rule description
If the type passed to RunClassConstructor(RuntimeTypeHandle) is not statically known,
the trimmer cannot guarantee the availability of the target static constructor.
Example
C#
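A minimal sketch of the pattern: the type handle comes from a parameter the trimmer cannot analyze, so it cannot guarantee that the type's static constructor is preserved:
using System;
using System.Runtime.CompilerServices;

class MyClass
{
    static void TestMethod(Type unknownType)
    {
        // The trimmer cannot determine which type's static constructor must be kept
        RuntimeHelpers.RunClassConstructor(unknownType.TypeHandle);
    }
}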
Cause
A call to MethodInfo.MakeGenericMethod(Type[]) cannot be statically analyzed by the
trimmer.
Rule description
This can either be that the method on which the MakeGenericMethod(Type[]) is called
cannot be statically determined, or that the type parameters to be used for the generic
arguments cannot be statically determined. If the open generic method has
DynamicallyAccessedMembersAttribute annotations on any of its generic parameters,
the trimmer currently cannot validate that the requirements are fulfilled by the calling
method.
Example
C#
class Test
{
public static void
TestGenericMethod<[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes
.PublicProperties)] T>()
{
}
Cause
A call to CreateInstance had an assembly that could not be resolved.
Example
C#
void TestMethod()
{
// IL2061 Trim analysis: The assembly name 'NonExistentAssembly' passed
to method 'System.Activator.CreateInstance(string, string)' references
assembly which is not available.
Activator.CreateInstance("NonExistentAssembly", "MyType");
}
IL2062: Value passed to a method
parameter annotated with
'DynamicallyAccessedMembersAttribute
' cannot be statically determined and
may not meet the attribute's
requirements
Article • 03/11/2022
Cause
The parameter 'parameter' of method 'method' has a
DynamicallyAccessedMembersAttribute annotation, but the value passed to it can't be
statically analyzed. Trimmer cannot make sure that the requirements declared by the
attribute are met by the argument value.
Example
C#
void
NeedsPublicConstructors([DynamicallyAccessedMembers(DynamicallyAccessedMembe
rTypes.PublicConstructors)] Type type)
{
// ...
}
Cause
The return value of method 'method' has a DynamicallyAccessedMembersAttribute
annotation, but the value returned from the method cannot be statically analyzed.
Trimmer cannot make sure that the requirements declared by the attribute are met by
the returned value.
Example
C#
[return:
DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors
)]
Type TestMethod(Type[] types)
{
// IL2063 Trim analysis: Value returned from method 'TestMethod' can not
be statically determined and may not meet
'DynamicallyAccessedMembersAttribute' requirements.
return types[1];
}
IL2064: Value assigned to a field
annotated with
'DynamicallyAccessedMembersAttribute
' cannot be statically determined and
may not meet the attribute's
requirements.
Article • 03/11/2022
Cause
The field 'field' has a DynamicallyAccessedMembersAttribute annotation, but the value
assigned to it can not be statically analyzed. Trimmer cannot make sure that the
requirements declared by the attribute are met by the assigned value.
Example
C#
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructor
s)]
Type _typeField;
Cause
The method 'method' has a DynamicallyAccessedMembersAttribute annotation (which
applies to the implicit this parameter), but the value used for the this parameter
cannot be statically analyzed. Trimmer cannot make sure that the requirements declared
by the attribute are met by the this value.
Example
C#
Cause
The generic parameter 'parameter' of 'type' (or 'method') is annotated with
DynamicallyAccessedMembersAttribute, but the value used for it cannot be statically
analyzed. Trimmer cannot make sure that the requirements declared on the attribute are
met by the value.
Example
C#
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target if necessary.
Example
C#
void
NeedsPublicConstructors([DynamicallyAccessedMembers(DynamicallyAccessedMembe
rTypes.PublicConstructors)] Type type)
{
// ...
}
Fixing
See Fixing Warnings for guidance.
IL2068: 'target method' method return
value does not satisfy
'DynamicallyAccessedMembersAttribute
' requirements. The parameter 'source
parameter' of method 'source method'
does not have matching annotations.
The source value must declare at least
the same requirements as those
declared on the target location it is
assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.
Example
C#
[return:
DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors
)]
Type TestMethod(Type type)
{
// IL2068 Trim analysis: 'TestMethod' method return value does not
satisfy 'DynamicallyAccessedMembersAttribute' requirements. The parameter
'type' of method 'TestMethod' does not have matching annotations. The source
value must declare at least the same requirements as those declared on the
target location it is assigned to.
return type;
}
Fixing
See Fixing Warnings for guidance.
IL2069: Value stored in field 'target field'
does not satisfy
'DynamicallyAccessedMembersAttribute
' requirements. The parameter 'source
parameter' of method 'source method'
does not have matching annotations.
The source value must declare at least
the same requirements as those
declared on the target location it is
assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.
Example
C#
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructor
s)]
Type _typeField;
Fixing
See Fixing Warnings for guidance.
IL2070: 'this' argument does not satisfy
'DynamicallyAccessedMembersAttribute
' in call to 'target method'. The
parameter 'source parameter' of
method 'source method' does not have
matching annotations. The source value
must declare at least the same
requirements as those declared on the
target location it is assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.
Example
C#
void
GenericWithAnnotation<[DynamicallyAccessedMembers(DynamicallyAccessedMemberT
ypes.Interfaces)] T>()
{
}
Fixing
See Fixing Warnings for guidance.
IL2071: 'target generic parameter'
generic argument does not satisfy
'DynamicallyAccessedMembersAttribute
' in 'target method or type'. The
parameter 'source parameter' of
method 'source method' does not have
matching annotations. The source value
must declare at least the same
requirements as those declared on the
target location it is assigned to
Article • 08/27/2024
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.
Example
C#
public void
GenericWithAnnotation<[DynamicallyAccessedMembers(DynamicallyAccessedMemberT
ypes.Interfaces)] T>()
{
}
typeof(AnnotatedGenerics).GetMethod(nameof(GenericWithAnnotation)).MakeGener
icMethod(type);
}
Fixing
See Fixing Warnings for guidance.
IL2072: 'target parameter' argument
does not satisfy
'DynamicallyAccessedMembersAttribute
' in call to 'target method'. The return
value of method 'source method' does
not have matching annotations. The
source value must declare at least the
same requirements as those declared on
the target location it is assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.
Example
C#
void
NeedsPublicConstructors([DynamicallyAccessedMembers(DynamicallyAccessedMembe
rTypes.PublicConstructors)] Type type)
{
// ...
}
void TestMethod()
{
// IL2072 Trim analysis: 'type' argument does not satisfy
'DynamicallyAccessedMembersAttribute' in call to 'NeedsPublicConstructors'.
The return value of method 'GetCustomType' does not have matching
annotations. The source value must declare at least the same requirements as
those declared on the target location it is assigned to.
NeedsPublicConstructors(GetCustomType());
}
Fixing
See Fixing Warnings for guidance.
IL2073: 'target method' method return
value does not satisfy
'DynamicallyAccessedMembersAttribute
' requirements. The return value of
method 'source method' does not have
matching annotations. The source value
must declare at least the same
requirements as those declared on the
target location it is assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.
Example
C#
[return:
DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors
)]
Type TestMethod()
{
// IL2073 Trim analysis: 'TestMethod' method return value does not
satisfy 'DynamicallyAccessedMembersAttribute' requirements. The return value
of method 'GetCustomType' does not have matching annotations. The source
value must declare at least the same requirements as those declared on the
target location it is assigned to.
return GetCustomType();
}
Fixing
See Fixing Warnings for guidance.
IL2074: Value stored in field 'target field'
does not satisfy
'DynamicallyAccessedMembersAttribute
' requirements. The return value of
method 'source method' does not have
matching annotations. The source value
must declare at least the same
requirements as those declared on the
target location it is assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.
Example
C#
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructor
s)]
Type _typeField;
void TestMethod()
{
// IL2074 Trim analysis: value stored in field '_typeField_' does not
satisfy 'DynamicallyAccessedMembersAttribute' requirements. The return value
of method 'GetCustomType' does not have matching annotations. The source
value must declare at least the same requirements as those declared on the
target location it is assigned to.
_typeField = GetCustomType();
}
Fixing
See Fixing Warnings for guidance.
IL2075: 'this' argument does not satisfy
'DynamicallyAccessedMembersAttribute
' in call to 'target method'. The return
value of method 'source method' does
not have matching annotations. The
source value must declare at least the
same requirements as those declared on
the target location it is assigned to
Article • 08/24/2024
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.
Example
C#
void TestMethod()
{
// IL2075 Trim analysis: 'this' argument does not satisfy
'DynamicallyAccessedMembersAttribute' in call to 'GetMethods'. The return
value of method 'GetCustomType' does not have matching annotations. The
source value must declare at least the same requirements as those declared
on the target location it is assigned to.
GetCustomType().GetMethods(); // Type.GetMethods is annotated with
DynamicallyAccessedMemberTypes.PublicMethods
}
using System.Diagnostics.CodeAnalysis;
[return:
DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
Type GetCustomType() { return typeof(CustomType); }
void TestMethod()
{
// IL2075 Trim analysis: 'this' argument does not satisfy
'DynamicallyAccessedMembersAttribute' in call to 'GetMethods'. The return
value of method 'GetCustomType' does not have matching annotations. The
source value must declare at least the same requirements as those declared
on the target location it is assigned to.
GetCustomType().GetMethods(); // Type.GetMethods is annotated with
DynamicallyAccessedMemberTypes.PublicMethods
}
C#
using System.Diagnostics.CodeAnalysis;
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
class MyType
{
...
}
Note
More information
See Fixing Warnings for more information.
IL2076: 'target generic parameter'
generic argument does not satisfy
'DynamicallyAccessedMembersAttribute
' in 'target method or type'. The return
value of method 'source method' does
not have matching annotations. The
source value must declare at least the
same requirements as those declared on
the target location it is assigned to
Article • 08/27/2024
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.
Example
C#
public void
GenericWithAnnotation<[DynamicallyAccessedMembers(DynamicallyAccessedMemberT
ypes.Interfaces)] T>()
{
}
void TestMethod()
{
// IL2076 Trim Analysis: AnnotatedGenerics.TestMethod(Type): 'T' generic
argument does not satisfy 'DynamicallyAccessedMemberTypes.Interfaces' in
'GenericWithAnnotation<T>()'. The return value of method 'GetType()' does
not have matching annotations. The source value must declare at least the
same requirements as those declared on the target location it is assigned to
typeof(AnnotatedGenerics).GetMethod(nameof(GenericWithAnnotation)).MakeGener
icMethod(GetType());
}
Fixing
See Fixing Warnings for guidance.
IL2077: 'target parameter' argument
does not satisfy
'DynamicallyAccessedMembersAttribute
' in call to 'target method'. The field
'source field' does not have matching
annotations. The source value must
declare at least the same requirements
as those declared on the target location
it is assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.
Example
C#
void
NeedsPublicConstructors([DynamicallyAccessedMembers(DynamicallyAccessedMembe
rTypes.PublicConstructors)] Type type)
{
// ...
}
Type _typeField;
void TestMethod()
{
// IL2077 Trim analysis: 'type' argument does not satisfy
'DynamicallyAccessedMembersAttribute' in call to 'NeedsPublicConstructors'.
The field '_typeField' does not have matching annotations. The source value
must declare at least the same requirements as those declared on the target
location it is assigned to.
NeedsPublicConstructors(_typeField);
}
Fixing
See Fixing Warnings for guidance.
IL2078: 'target method' method return
value does not satisfy
'DynamicallyAccessedMembersAttribute
' requirements. The field 'source field'
does not have matching annotations.
The source value must declare at least
the same requirements as those
declared on the target location it is
assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.
Example
C#
Type _typeField;
[return:
DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors
)]
Type TestMethod()
{
// IL2078 Trim analysis: 'TestMethod' method return value does not
satisfy 'DynamicallyAccessedMembersAttribute' requirements. The field
'_typeField' does not have matching annotations. The source value must
declare at least the same requirements as those declared on the target
location it is assigned to.
return _typeField;
}
Fixing
See Fixing Warnings for guidance.
IL2079: Value stored in field 'target field'
does not satisfy
'DynamicallyAccessedMembersAttribute
' requirements. The field 'source field'
does not have matching annotations.
The source value must declare at least
the same requirements as those
declared on the target location it is
assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.
Example
C#
Type _typeField;
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructor
s)]
Type _typeFieldWithRequirements;
void TestMethod()
{
// IL2079 Trim analysis: value stored in field
'_typeFieldWithRequirements' does not satisfy
'DynamicallyAccessedMembersAttribute' requirements. The field '_typeField'
does not have matching annotations. The source value must declare at least
the same requirements as those declared on the target location it is
assigned to.
_typeFieldWithRequirements = _typeField;
}
Fixing
See Fixing Warnings for guidance.
IL2080: 'this' argument does not satisfy
'DynamicallyAccessedMembersAttribute
' in call to 'target method'. The field
'source field' does not have matching
annotations. The source value must
declare at least the same requirements
as those declared on the target location
it is assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.
Example
C#
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructor
s)]
Type _typeFieldWithRequirements;
void TestMethod()
{
// IL2080 Trim analysis: 'this' argument does not satisfy
'DynamicallyAccessedMemberTypes' in call to 'GetMethod'. The field
'_typeFieldWithRequirements' does not have matching annotations. The source
value must declare at least the same requirements as those declared on the
target location it is assigned to.
_typeFieldWithRequirements.GetMethod("Foo");
}
Fixing
See Fixing Warnings for guidance.
IL2081: 'target generic parameter'
generic argument does not satisfy
'DynamicallyAccessedMembersAttribute
' in 'target method or type'. The field
'source field' does not have matching
annotations. The source value must
declare at least the same requirements
as those declared on the target location
it is assigned to
Article • 08/27/2024
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.
Example
C#
public void
GenericWithAnnotation<[DynamicallyAccessedMembers(DynamicallyAccessedMemberT
ypes.Interfaces)] T>()
{
}
Type typeField;
void TestMethod()
{
// IL2081 Trim Analysis: 'T' generic argument does not satisfy
'DynamicallyAccessedMemberTypes.Interfaces' in 'GenericWithAnnotation<T>()'.
The field 'typeField' does not have matching annotations. The source value
must declare at least the same requirements as those declared on the target
location it is assigned to.
typeof(AnnotatedGenerics).GetMethod(nameof(GenericWithAnnotation)).MakeGener
icMethod(typeField);
}
Fixing
See Fixing Warnings for guidance.
IL2082: 'target parameter' argument
does not satisfy
'DynamicallyAccessedMembersAttribute
' in call to 'target method'. The implicit
'this' argument of method 'source
method' does not have matching
annotations. The source value must
declare at least the same requirements
as those declared on the target location
it is assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be declared by the
source value also via the DynamicallyAccessedMembersAttribute. The source value can
declare more requirements than the target, if necessary.
Example
C#
void
NeedsPublicConstructors([DynamicallyAccessedMembers(DynamicallyAccessedMembe
rTypes.PublicConstructors)] Type type)
{
// ...
}
// This can only happen within methods of System.Type type (or derived
types). Assume the below method is declared on System.Type
void TestMethod()
{
// IL2082 Trim analysis: 'type' argument does not satisfy
'DynamicallyAccessedMembersAttribute' in call to 'NeedsPublicConstructors'.
The implicit 'this' argument of method 'TestMethod' does not have matching
annotations. The source value must declare at least the same requirements as
those declared on the target location it is assigned to.
NeedsPublicConstructors(this);
}
Fixing
See Fixing Warnings for guidance.
IL2083: 'target method' method return
value does not satisfy
'DynamicallyAccessedMembersAttribute
' requirements. The implicit 'this'
argument of method 'source method'
does not have matching annotations.
The source value must declare at least
the same requirements as those
declared on the target location it is
assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.
Example
C#
// This can only happen within methods of System.Type type (or derived
types). Assume the below method is declared on System.Type
[DynamicallyAccessedMembers (DynamicallyAccessedMemberTypes.PublicMethods)]
[return:
DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors
)]
Type TestMethod()
{
// IL2083 Trim analysis: 'TestMethod' method return value does not
satisfy 'DynamicallyAccessedMembersAttribute' requirements. The implicit
'this' argument of method 'TestMethod' does not have matching annotations.
The source value must declare at least the same requirements as those
declared on the target location it is assigned to.
return this;
}
Fixing
See Fixing Warnings for guidance.
IL2084: Value stored in field 'target field'
does not satisfy
'DynamicallyAccessedMembersAttribute
' requirements. The implicit 'this'
argument of method 'source method'
does not have matching annotations.
The source value must declare at least
the same requirements as those
declared on the target location it is
assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.
Example
C#
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructor
s)]
Type _typeFieldWithRequirements;
// This can only happen within methods of System.Type type (or derived
types). Assume the below method is declared on System.Type
void TestMethod()
{
// IL2084 Trim analysis: value stored in field
'_typeFieldWithRequirements' does not satisfy
'DynamicallyAccessedMembersAttribute' requirements. The implicit 'this'
argument of method 'TestMethod' does not have matching annotations. The
source value must declare at least the same requirements as those declared
on the target location it is assigned to.
_typeFieldWithRequirements = this;
}
Fixing
See Fixing Warnings for guidance.
IL2085: 'this' argument does not satisfy
'DynamicallyAccessedMembersAttribute
' in call to 'target method'. The implicit
'this' argument of method 'source
method' does not have matching
annotations. The source value must
declare at least the same requirements
as those declared on the target location
it is assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.
Example
C#
// This can only happen within methods of System.Type type (or derived
types). Assume the below method is declared on System.Type
void TestMethod()
{
// IL2085 Trim analysis: 'this' argument does not satisfy
'DynamicallyAccessedMembersAttribute' in call to 'GetMethods'. The implicit
'this' argument of method 'TestMethod' does not have matching annotations.
The source value must declare at least the same requirements as those
declared on the target location it is assigned to.
this.GetMethods(); // Type.GetMethods is annotated with
DynamicallyAccessedMemberTypes.PublicMethods
}
Fixing
See Fixing Warnings for guidance.
IL2087: 'target parameter' argument
does not satisfy
'DynamicallyAccessedMembersAttribute
' in call to 'target method'. The generic
parameter 'source generic parameter' of
'source method or type' does not have
matching annotations. The source value
must declare at least the same
requirements as those declared on the
target location it is assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.
Example
C#
void
NeedsPublicConstructors([DynamicallyAccessedMembers(DynamicallyAccessedMembe
rTypes.PublicConstructors)] Type type)
{
// ...
}
void TestMethod<TSource>()
{
// IL2087 Trim analysis: 'type' argument does not satisfy
'DynamicallyAccessedMembersAttribute' in call to 'NeedsPublicConstructor'.
The generic parameter 'TSource' of 'TestMethod' does not have matching
annotations. The source value must declare at least the same requirements as
those declared on the target location it is assigned to.
NeedsPublicConstructors(typeof(TSource));
}
Fixing
See Fixing Warnings for guidance.
IL2088: 'target method' method return
value does not satisfy
'DynamicallyAccessedMembersAttribute
' requirements. The generic parameter
'source generic parameter' of 'source
method or type' does not have
matching annotations. The source value
must declare at least the same
requirements as those declared on the
target location it is assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.
Example
C#
[DynamicallyAccessedMembers (DynamicallyAccessedMemberTypes.PublicMethods)]
[return:
DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors
)]
Type TestMethod<TSource>()
{
// IL2088 Trim analysis: 'TestMethod' method return value does not
satisfy 'DynamicallyAccessedMembersAttribute' requirements. The generic
parameter 'TSource' of 'TestMethod' does not have matching annotations. The
source value must declare at least the same requirements as those declared
on the target location it is assigned to.
return typeof(TSource);
}
Fixing
See Fixing Warnings for guidance.
IL2089: Value stored in field 'target field'
does not satisfy
'DynamicallyAccessedMembersAttribute
' requirements. The generic parameter
'source generic parameter' of 'source
method or type' does not have
matching annotations. The source value
must declare at least the same
requirements as those declared on the
target location it is assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.
Example
C#
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructor
s)]
Type _typeFieldWithRequirements;
void TestMethod<TSource>()
{
// IL2089 Trim analysis: value stored in field
'_typeFieldWithRequirements' does not satisfy
'DynamicallyAccessedMembersAttribute' requirements. The generic parameter
'TSource' of 'TestMethod' does not have matching annotations. The source
value must declare at least the same requirements as those declared on the
target location it is assigned to.
_typeFieldWithRequirements = typeof(TSource);
}
Fixing
See Fixing Warnings for guidance.
IL2090: 'this' argument does not satisfy
'DynamicallyAccessedMembersAttribute
' in call to 'target method'. The generic
parameter 'source generic parameter' of
'source method or type' does not have
matching annotations. The source value
must declare at least the same
requirements as those declared on the
target location it is assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.
Example
C#
void TestMethod<TSource>()
{
// IL2090 Trim analysis: 'this' argument does not satisfy
'DynamicallyAccessedMembersAttribute' in call to 'GetMethods'. The generic
parameter 'TSource' of 'TestMethod' does not have matching annotations. The
source value must declare at least the same requirements as those declared
on the target location it is assigned to.
typeof(TSource).GetMethods(); // Type.GetMethods is annotated with
DynamicallyAccessedMemberTypes.PublicMethods
}
Fixing
See Fixing Warnings for guidance.
IL2091: 'target generic parameter'
generic argument does not satisfy
'DynamicallyAccessedMembersAttribute
' in 'target method or type'. The generic
parameter 'source target parameter' of
'source method or type' does not have
matching annotations. The source value
must declare at least the same
requirements as those declared on the
target location it is assigned to
Article • 09/16/2022
Cause
The target location declares some requirements on the type value via its
DynamicallyAccessedMembersAttribute. Those requirements must be met by those
declared on the source value also via the DynamicallyAccessedMembersAttribute. The
source value can declare more requirements than the target, if necessary.
Example
C#
void
NeedsPublicConstructors<[DynamicallyAccessedMembers(DynamicallyAccessedMembe
rTypes.PublicConstructors)] TTarget>()
{
// ...
}
void TestMethod<TSource>()
{
// IL2091 Trim analysis: 'TTarget' generic argument does not satisfy
'DynamicallyAccessedMembersAttribute' in 'NeedsPublicConstructors'. The
generic parameter 'TSource' of 'TestMethod' does not have matching
annotations. The source value must declare at least the same requirements as
those declared on the target location it is assigned to.
NeedsPublicConstructors<TSource>();
}
Fixing
See Fixing Warnings for guidance.
IL2092: The
'DynamicallyAccessedMemberTypes'
value used in a
'DynamicallyAccessedMembersAttribute
' annotation on a method's parameter
does not match the
'DynamicallyAccessedMemberTypes'
value of the overridden parameter
annotation. All overridden members
must have the same attribute's usage
Article • 03/11/2022
Cause
All overrides of a virtual method, including the base method, must have the same
DynamicallyAccessedMembersAttribute usage on all their components (return value,
parameters, and generic parameters).
Example
C#
Cause
All overrides of a virtual method including the base method must have the same
DynamicallyAccessedMembersAttribute usage on all its components (return value,
parameters and generic parameters).
Example
C#
Cause
All overrides of a virtual method including the base method must have the same
DynamicallyAccessedMembersAttribute usage on all its components (return value,
parameters and generic parameters).
Example
C#
Cause
All overrides of a virtual method including the base method must have the same
DynamicallyAccessedMembersAttribute usage on all its components (return value,
parameters and generic parameters).
Example
C#
Cause
Specifying a case-insensitive search on an overload of GetType(String, Boolean, Boolean)
is not supported by Trimmer. Specify false to perform a case-sensitive search or use an
overload that does not use an ignoreCase boolean.
Example
C#
void TestMethod()
{
// IL2096 Trim analysis: Call to
'System.Type.GetType(String,Boolean,Boolean)' can perform case insensitive
lookup of the type, currently ILLink can not guarantee presence of all the
matching types
Type.GetType ("typeName", false, true);
}
IL2097: Field annotated with
'DynamicallyAccessedMembersAttribute
' is not of type 'System.Type',
'System.String', or derived
Article • 03/11/2022
Cause
DynamicallyAccessedMembersAttribute is only applicable to items of type Type, String,
or derived. On all other types the attribute will be ignored. Using the attribute on any
other type is likely incorrect and unintentional.
Example
C#
Cause
DynamicallyAccessedMembersAttribute is only applicable to items of type Type, String,
or derived. On all other types the attribute will be ignored. Using the attribute on any
other type is likely incorrect and unintentional.
Example
C#
Cause
DynamicallyAccessedMembersAttribute is only applicable to items of type Type, String,
or derived. On all other types the attribute will be ignored. Using the attribute on any
other type is likely incorrect and unintentional.
Example
C#
Cause
A wildcard cannot be the value of the fullname argument for an assembly element in a
Trimmer XML. Use a specific assembly name instead.
Example
XML
Cause
Embedded attribute or substitution XML may only contain elements that apply to the
containing assembly. Attempting to modify another assembly will not have any effect.
Example
XML
Cause
AssemblyMetadataAttribute may be used at the assembly level to turn on trimming for
the assembly. The attribute contains an unsupported value. The only supported value is
True .
Example
XML
Cause
The value passed to the propertyAccessor parameter of Property(Expression,
MethodInfo) was not recognized as a property accessor method. Trimmer can't
guarantee the presence of the property.
Example
C#
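// A sketch (type and method names are illustrative): the MethodInfo passed as the
// 'propertyAccessor' argument isn't a property get or set accessor, so the trimmer can't
// tell which property should be preserved and reports this warning.
using System.Linq.Expressions;
using System.Reflection;

class ExpressionSample
{
    public static int NotAnAccessor() => 0;

    static void TestMethod()
    {
        MethodInfo method = typeof(ExpressionSample).GetMethod(nameof(NotAnAccessor))!;
        Expression.Property(null, method);
    }
}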
Cause
The assembly 'assembly' produced trim analysis warnings in the context of the app. This
means the assembly has not been fully annotated for trimming. Consider contacting the
library author to request they add trim annotations to the library. To see detailed
warnings, turn off grouped warnings by setting
<TrimmerSingleWarn>false</TrimmerSingleWarn> property in your project file. For
more information on annotating libraries for trimming, see Prepare .NET libraries for
trimming.
IL2105: Type 'type' was not found in the caller assembly nor in the base library. Type name strings used for dynamically accessing a type should be assembly qualified
Article • 03/11/2022
Cause
Type name strings representing dynamically accessed types must be assembly qualified.
Otherwise, the linker first searches for the type name in the caller's assembly and then in
System.Private.CoreLib. If the linker fails to resolve the type name, null is returned.
Example
C#
void TestInvalidTypeName()
{
RequirePublicMethodOnAType("Foo.Unqualified.TypeName");
}
void RequirePublicMethodOnAType(
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
string typeName)
{
}
IL2106: Return type of method 'method' has 'DynamicallyAccessedMembersAttribute', but that attribute can only be applied to properties of type 'System.Type' or 'System.String'
Article • 03/11/2022
Cause
DynamicallyAccessedMembersAttribute is only applicable to items of type Type or String
(or derived). On all other types, the attribute will be ignored. Using the attribute on any
other type is likely incorrect and unintentional.
Example
C#
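// A minimal sketch (the method name is illustrative): the return type is 'int', not
// 'System.Type' or 'System.String', so the annotation has no effect and the warning is reported.
using System.Diagnostics.CodeAnalysis;

class Methods
{
    [return: DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
    static int GetValue() => 42;
}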
Cause
The trimmer can't correctly handle cases where the same compiler-generated state machine type
is associated (via the state-machine attributes) with two different methods. The trimmer
derives warning suppressions from the method that produced the state machine and doesn't
support reprocessing the same method or type more than once.
Example
C#
class <compiler_generated_state_machine>_type {
void MoveNext()
{
// This should normally produce IL2026
CallSomethingWhichRequiresUnreferencedCode ();
}
}
Cause
The only scopes supported on global unconditional suppressions are module , type , and
member . If the scope and target arguments are null or missing on a global suppression,
it's assumed that the suppression is put on the module. Global unconditional
suppressions using invalid scopes are ignored.
Example
C#
// The scope "method" isn't one of module, type, or member, so this suppression is ignored.
[module: UnconditionalSuppressMessage("Trim analysis", "IL2026", Scope = "method", Target = "Foo")]
class Warning
{
    static void Main(string[] args)
    {
        Foo();
    }
    [RequiresUnreferencedCode("Message")]
    static void Foo() { }
}
Cause
A type is referenced in code, and this type derives from a base type with
RequiresUnreferencedCodeAttribute, which can break functionality of a trimmed
application. Types that derive from a base class with
RequiresUnreferencedCodeAttribute need to explicitly use the
RequiresUnreferencedCodeAttribute or suppress this warning.
Example
C#
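// A sketch (type names are illustrative): the base class is annotated as trim-unsafe, so a
// type deriving from it must either carry the same attribute or suppress this warning.
using System.Diagnostics.CodeAnalysis;

[RequiresUnreferencedCode("Uses members that may be removed by trimming.")]
class BaseWithRequirements { }

class Derived : BaseWithRequirements { } // warns here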
Cause
The trimmer can't guarantee that all requirements of the
DynamicallyAccessedMembersAttribute are fulfilled if the field is accessed via reflection.
Example
C#
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
Type _field;

void TestMethod()
{
    // IL2110: Field '_field' with 'DynamicallyAccessedMembersAttribute' is accessed via
    // reflection. Trimmer can't guarantee availability of the requirements of the field.
    typeof(Test).GetField("_field");
}
IL2111: Method with parameters or return value with 'DynamicallyAccessedMembersAttribute' is accessed via reflection. Trimmer cannot guarantee availability of the requirements of the method
Article • 07/02/2024
Cause
The trimmer can't guarantee that all requirements of the
DynamicallyAccessedMembersAttribute are fulfilled if the method is accessed via
reflection.
Example
This warning can be caused by directly accessing a method with a
DynamicallyAccessedMembersAttribute on its parameters or return type.
C#
void MethodWithRequirements([DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)] Type type)
{
}
void TestMethod()
{
    // IL2111: Method 'MethodWithRequirements' with parameters or return value with
    // 'DynamicallyAccessedMembersAttribute' is accessed via reflection. Trimmer can't
    // guarantee availability of the requirements of the method.
    typeof(Test).GetMethod("MethodWithRequirements");
}
This warning can also be caused by passing a type to a field, parameter, argument, or
return value that is annotated with DynamicallyAccessedMembersAttribute.
DynamicallyAccessedMembersAttribute implies reflection access over all of the listed
DynamicallyAccessedMemberTypes. This means that when a type is passed to a
parameter, field, generic parameter, or return value annotated with PublicMethods, .NET
tooling assumes that all public methods are accessed via reflection. If a type that
contains a method with an annotated parameter or return value is passed to a location
annotated with PublicMethods, then IL2111 is raised.
C#
class TypeWithAnnotatedMethod
{
    void MethodWithRequirements([DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicFields)] Type type)
    {
    }
}

class OtherType
{
    void AccessMethodViaReflection([DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)] Type type)
    {
    }

    void PassTypeToAnnotatedMethod()
    {
        // IL2111: Method 'MethodWithRequirements' with parameters or return value with
        // 'DynamicallyAccessedMembersAttribute' is accessed via reflection. Trimmer can't
        // guarantee availability of the requirements of the method.
        AccessMethodViaReflection(typeof(TypeWithAnnotatedMethod));
    }
}
IL2112: 'DynamicallyAccessedMembersAttribute' on 'type' or one of its base types references 'member', which requires unreferenced code
Article • 03/11/2022
Cause
A type is annotated with DynamicallyAccessedMembersAttribute indicating that the
type may dynamically access some members declared on the type or its derived types.
This instructs the trimmer to keep the specified members, but one of them is annotated
with RequiresUnreferencedCodeAttribute, which can break functionality when trimming.
The DynamicallyAccessedMembersAttribute annotation may be directly on the type, or
implied by an annotation on one of its base or interface types. This warning originates
from the member with RequiresUnreferencedCodeAttribute.
Example
C#
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
public class AnnotatedType {
    // Trim analysis warning IL2112: AnnotatedType.Method():
    // 'DynamicallyAccessedMembersAttribute' on 'AnnotatedType' or one of its
    // base types references 'AnnotatedType.Method()' which requires
    // unreferenced code. Using this member is trim unsafe.
    [RequiresUnreferencedCode("Using this member is trim unsafe")]
    public static void Method() { }
}
IL2113: 'DynamicallyAccessedMembersAttribute' on 'type' or one of its base types references 'member', which requires unreferenced code
Article • 03/11/2022
Cause
A type is annotated with DynamicallyAccessedMembersAttribute, indicating that the type
may dynamically access some members declared on one of its derived types. This
instructs the trimmer to keep the specified members, but a member of one of the base
or interface types is annotated with RequiresUnreferencedCodeAttribute, which can
break functionality when trimming. The DynamicallyAccessedMembersAttribute annotation
may be directly on the type, or implied by an annotation on one of its base or interface
types. This warning originates from the type which has the
RequiresUnreferencedCodeAttribute requirements.
Example
C#
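// A sketch (type and member names are illustrative): the annotation on 'AnnotatedType' asks
// the trimmer to keep public methods, including the inherited 'BaseType.MethodWithRequirements()',
// which is itself marked as requiring unreferenced code, so the warning is reported.
using System.Diagnostics.CodeAnalysis;

public class BaseType
{
    [RequiresUnreferencedCode("Message")]
    public void MethodWithRequirements() { }
}

[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
public class AnnotatedType : BaseType
{
}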
Cause
A type is annotated with DynamicallyAccessedMembersAttribute, indicating that the type
may dynamically access some members declared on the type or its derived types. This
instructs the trimmer to keep the specified members, but one of them is annotated with
DynamicallyAccessedMembersAttribute, which cannot be statically verified. The
DynamicallyAccessedMembersAttribute annotation may be directly on the type, or implied
by an annotation on one of its base or interface types. This warning originates from the
member with DynamicallyAccessedMembersAttribute requirements.
Example
C#
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicFields)]
public class AnnotatedType {
    // Trim analysis warning IL2114: System.Type AnnotatedType::Field:
    // 'DynamicallyAccessedMembersAttribute' on 'AnnotatedType' or one of its
    // base types references 'System.Type AnnotatedType::Field' which has
    // 'DynamicallyAccessedMembersAttribute' requirements.
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicProperties)]
    public static Type Field;
}
IL2115: 'DynamicallyAccessedMembersAttribute' on 'type' or one of its base types references 'member' which has 'DynamicallyAccessedMembersAttribute' requirements
Article • 03/11/2022
Cause
A type is annotated with DynamicallyAccessedMembersAttribute indicating that the
type may dynamically access some members declared on one of the derived types. This
instructs the trimmer to keep the specified members, but a member of one of the base
or interface types is annotated with DynamicallyAccessedMembersAttribute which
cannot be statically verified. The DynamicallyAccessedMembersAttribute annotation
may be directly on the type, or implied by an annotation on one of its base or interface
types. This warning originates from the type which has
DynamicallyAccessedMembersAttribute requirements.
Example
C#
class BaseType
{
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicProperties)]
    public static Type Field;
}
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicFields)]
class AnnotatedType : BaseType { } // IL2115 originates from 'BaseType.Field'
Cause
RequiresUnreferencedCodeAttribute is not allowed on static constructors since these are
not callable by the user. Placing the attribute directly on a static constructor will have no
effect. Annotate the method's containing type instead.
Example
C#
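// A sketch (the type name is illustrative): static constructors aren't directly callable by
// user code, so the attribute has no effect here and the warning is reported. Annotate the
// containing type (or the members that need it) instead.
using System.Diagnostics.CodeAnalysis;

class MyClass
{
    [RequiresUnreferencedCode("Message")]
    static MyClass() { }
}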
Cause
The trimmer currently can't correctly handle cases where the same compiler-generated lambda
or local function is associated with two different methods. We don't know of any C# patterns
that would cause this problem, but it's possible to write such code in IL.
Example
Only a meta-sample:
C#
Cause
Type name strings representing dynamically accessed types must be assembly qualified.
Otherwise, the lookup semantics of Type.GetType will search the assembly containing the
Type.GetType callsite and the core library. The assembly with the Type.GetType callsite
may be different from the assembly that passes the type name string to a location with
DynamicallyAccessedMembersAttribute, so the tooling cannot determine which
assemblies to search.
Example
C#
// In Assembly 1
void TestInvalidTypeName()
{
    // IL2122: Type 'MyType' is not assembly qualified. Type name strings used for
    // dynamically accessing a type should be assembly qualified.
    RequirePublicMethodOnAType("MyType");
}

void ForwardTypeNameToAnotherMethod(
    [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
    string typeName)
{
    MyTypeFromAnotherAssembly.GetType(typeName);
}
C#
// In Assembly 2
public class MyTypeFromAnotherAssembly
{
void GetTypeAndSearchThroughMethods(
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
string typeName)
{
Type.GetType(typeName).GetMethods();
}
}
class MyType
{
// ...
}
One fix is to pass an assembly-qualified type name string to the annotated location:
C#
RequirePublicMethodsOnAType("MyType,Assembly2");
Another option is to pass the unqualified type name string directly to Type.GetType and
avoid annotations on string:
C#
void SearchThroughMethods(
[DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
Type type)
{
type.GetMethods();
}
This gives the trimming tools enough information to look up the type using the same
semantics that GetType(String) has at runtime, causing the type (and public methods in
this example) to be preserved.
Native AOT deployment
Article • 09/18/2024
Publishing your app as Native AOT produces an app that's self-contained and that has
been ahead-of-time (AOT) compiled to native code. Native AOT apps have faster startup
time and smaller memory footprints. These apps can run on machines that don't have
the .NET runtime installed.
The benefit of Native AOT is most significant for workloads with a high number of
deployed instances, such as cloud infrastructure and hyper-scale services. .NET 8 adds
ASP.NET Core support for native AOT.
Prerequisites
Windows
Visual Studio 2022 , including the Desktop development with C++ workload with
all default components.
1. Add the PublishAot property to your project file. This property enables Native AOT
compilation during publish. It also enables dynamic code-usage analysis during build and
editing. It's preferable to put this setting in the project file rather than passing it on
the command line, since it controls behaviors outside publish.
XML
<PropertyGroup>
<PublishAot>true</PublishAot>
</PropertyGroup>
2. Publish the app for a specific runtime identifier using dotnet publish -r <RID> .
For example, dotnet publish -r win-x64 publishes the app for Windows as a Native AOT
application on a machine with the required prerequisites installed. Publishing with a
Linux RID (for example, dotnet publish -r linux-x64) produces a Linux Native AOT
application. A Native AOT binary produced on a Linux machine only works on the same or a
newer Linux version. For example, a Native AOT binary produced on Ubuntu 20.04 runs on
Ubuntu 20.04 and later, but it doesn't run on Ubuntu 18.04.
The app is available in the publish directory and contains all the code needed to run it,
including a stripped-down version of the coreclr runtime.
Check out the Native AOT samples available in the dotnet/samples repository on
GitHub. The samples include Linux and Windows Dockerfiles that demonstrate how
to automate installation of prerequisites and publish .NET projects with Native AOT
using containers.
AOT-compatibility analyzers
The IsAotCompatible property is used to indicate whether a library is compatible with
Native AOT. Consider when a library sets the IsAotCompatible property to true , for
example:
XML
<PropertyGroup>
<IsAotCompatible>true</IsAotCompatible>
</PropertyGroup>
Setting IsAotCompatible to true also implicitly sets the following properties to true:
IsTrimmable
EnableTrimAnalyzer
EnableSingleFileAnalyzer
EnableAotAnalyzer
These analyzers help to ensure that a library is compatible with Native AOT.
Native debug information
By default, Native AOT publishing produces debug information in a separate file:
Linux: .dbg
Windows: .pdb
macOS: .dSYM folder
The debug file is necessary for running the app under the debugger or inspecting crash
dumps. On Unix-like platforms, set the StripSymbols property to false to include the
debug information in the native binary. Including debug information makes the native
binary larger.
XML
<PropertyGroup>
<StripSymbols>false</StripSymbols>
</PropertyGroup>
The publish process analyzes the entire project and its dependencies for possible
limitations. Warnings are issued for each limitation the published app might encounter
at run time.
Platform/architecture restrictions
The following table shows supported compilation targets.
.NET 8
The Native AOT publishing process generates a self-contained executable with a subset
of the runtime libraries that are tailored specifically for your app. The compilation
generally relies on static analysis of the application to generate the best possible output.
However, the term "best possible" can have many meanings. Sometimes, you can
improve the output of the compilation by providing hints to the publish process.
XML
<OptimizationPreference>Size</OptimizationPreference>
XML
<OptimizationPreference>Speed</OptimizationPreference>
Native AOT shares some, but not all, diagnostics and instrumentation capabilities with
CoreCLR. Because of CoreCLR's rich selection of diagnostic utilities, it's sometimes
appropriate to diagnose and debug problems in CoreCLR. Apps that are trim-
compatible shouldn't have behavioral differences, so investigations often apply to both
runtimes. Nonetheless, some information can only be gathered after publishing, so
Native AOT also provides post-publish diagnostic tooling.
Development-time diagnostics ✔️
Native debugging ✔️
CPU Profiling ✔️
Heap analysis ❌
To enable EventSource support, which is turned off by default to reduce app size, set the
EventSourceSupport property:
XML
<PropertyGroup>
<EventSourceSupport>true</EventSourceSupport>
</PropertyGroup>
Native AOT provides partial support for some well-known event providers. Not all
runtime events are supported in Native AOT.
Development-time diagnostics
The .NET CLI tooling ( dotnet SDK) and Visual Studio offer separate commands for build
and publish . build (or Start in Visual Studio) uses CoreCLR. Only publish creates a
Native AOT application. Publishing your app as Native AOT produces an app that has
been ahead-of-time (AOT) compiled to native code. As mentioned previously, not all
diagnostic tools work seamlessly with published Native AOT applications in .NET 8.
However, all .NET diagnostic tools are available for developers during the application
building stage. We recommend developing, debugging, and testing the applications as
usual and publishing the working app with Native AOT as one of the last steps.
Native debugging
When you run your app during development, like inside Visual Studio, or with dotnet
run , dotnet build , or dotnet test , it runs on CoreCLR by default. However, if
PublishAot is present in the project file, the behavior should be the same between
CoreCLR and Native AOT. This characteristic allows you to use the standard Visual Studio
managed debugging engine for development and testing.
After publishing, Native AOT applications are true native binaries. The managed
debugger won't work on them. However, the Native AOT compiler generates fully native
executable files that native debuggers can debug on your platform of choice (for
example, WinDbg or Visual Studio on Windows and gdb or lldb on Unix-like systems).
The Native AOT compiler generates information about line numbers, types, locals, and
parameters. The native debugger lets you inspect stack trace and variables, step into or
over source lines, or set line breakpoints.
Set a breakpoint on the RhThrowEx method, which is called whenever a managed exception is
thrown. To see what exception was thrown, start debugging (Debug > Start Debugging or F5),
open the Watch window (Debug > Windows > Watch), and add the following expression as
one of the watches: (S_P_CoreLib_System_Exception*)@rcx. This mechanism leverages
the fact that at the time RhThrowEx is called, the x64 CPU register RCX contains the
thrown exception. You can also paste the expression into the Immediate window; the
syntax is the same as for watches.
For information about the name and location of the symbol file, see Native debug
information.
CPU profiling
Platform-specific tools like PerfView and Perf can be used to collect CPU samples of
a Native AOT application.
Heap analysis
Managed heap analysis isn't currently supported in Native AOT. Heap analysis tools like
dotnet-gcdump, PerfView , and Visual Studio heap analysis tools don't work in Native
AOT in .NET 8.
6 Collaborate with us on
GitHub .NET feedback
.NET is an open source project.
The source for this content can
Select a link to provide feedback:
be found on GitHub, where you
can also create and review
Open a documentation issue
issues and pull requests. For
more information, see our
Provide product feedback
contributor guide.
Native code interop with Native AOT
Article • 03/30/2024
Native code interop is a technology that allows you to access unmanaged libraries from
managed code, or expose managed libraries to unmanaged code (the opposite
direction).
While native code interop works similarly in Native AOT and non-AOT deployments,
there are some specifics that differ when publishing as Native AOT.
You can configure the direct P/Invoke generation using <DirectPInvoke> items in the
project file. The item name can be either <modulename>, which enables direct calls for
all entry points in the module, or <modulename!entrypointname>, which enables a
direct call for the specific module and entry point only.
To specify a list of entry points in an external file, use <DirectPInvokeList> items in the
project file. A list is useful when the number of direct P/Invoke calls is large and it's
impractical to specify them using individual <DirectPInvoke> items. The file can contain
empty lines and comments starting with # .
Examples:
XML
<ItemGroup>
  <!-- Generate direct PInvoke calls for everything in __Internal -->
  <!-- This option replicates Mono AOT behavior that generates direct PInvoke calls for __Internal -->
  <DirectPInvoke Include="__Internal" />
  <!-- Generate direct PInvoke calls for everything in libc (also matches libc.so on Linux or libc.dylib on macOS) -->
  <DirectPInvoke Include="libc" />
  <!-- Generate direct PInvoke calls for Sleep in kernel32 (also matches kernel32.dll on Windows) -->
  <DirectPInvoke Include="kernel32!Sleep" />
  <!-- Generate direct PInvoke for all APIs listed in DirectXAPIs.txt -->
  <DirectPInvokeList Include="DirectXAPIs.txt" />
</ItemGroup>
On Windows, Native AOT uses a prepopulated list of direct P/Invoke methods that are
available on all supported versions of Windows.
2 Warning
Because direct P/Invoke methods are resolved by the operating system dynamic
loader and not by the Native AOT runtime library, direct P/Invoke methods will not
respect the DefaultDllImportSearchPathsAttribute. The library search order will
follow the dynamic loader rules as defined by the operating system. Some
operating systems and loaders offer ways to control dynamic loading through
linker flags (such as /DEPENDENTLOADFLAG on Windows or -rpath on Linux). For more
information on how to specify linker flags, see the Linking section.
Linking
To statically link against an unmanaged library, you need to specify <NativeLibrary
Include="filename" /> pointing to a .lib file on Windows and a .a file on Unix-like
systems.
Examples:
XML
<ItemGroup>
  <!-- Generate direct PInvokes for Dependency -->
  <DirectPInvoke Include="Dependency" />
  <!-- Specify library to link against -->
  <NativeLibrary Include="Dependency.lib" Condition="$(RuntimeIdentifier.StartsWith('win'))" />
  <NativeLibrary Include="Dependency.a" Condition="!$(RuntimeIdentifier.StartsWith('win'))" />
</ItemGroup>
To specify additional flags to the native linker, use the <LinkerArg> item.
Examples:
XML
<ItemGroup>
  <!-- link.exe is used as the linker on Windows -->
  <LinkerArg Include="/DEPENDENTLOADFLAG:0x800" Condition="$(RuntimeIdentifier.StartsWith('win'))" />
</ItemGroup>
Native exports
The Native AOT compiler exports methods annotated with
UnmanagedCallersOnlyAttribute with a nonempty EntryPoint property as public C entry
points. This makes it possible to either dynamically or statically link the AOT compiled
modules into external programs. Only methods marked UnmanagedCallersOnly in the
published assembly are considered. Methods in project references or NuGet packages
won't be exported. For more information, see NativeLibrary sample .
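For example, a minimal export might look like the following sketch (the entry-point and method names are illustrative):
C#
using System.Runtime.InteropServices;

public static class Exports
{
    // Exposed from the AOT-compiled library as a C-callable entry point named "add".
    [UnmanagedCallersOnly(EntryPoint = "add")]
    public static int Add(int a, int b) => a + b;
}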
Building native libraries
Article • 06/07/2024
Publishing .NET class libraries as Native AOT allows creating libraries that can be
consumed from non-.NET programming languages. The produced native library is self-
contained and doesn't require a .NET runtime to be installed.
7 Note
Only "shared libraries" (also known as DLLs on Windows) are supported. Static
libraries are not officially supported and may require compiling Native AOT from
source. Unloading Native AOT libraries (via dlclose or FreeLibrary , for example) is
not supported.
Publishing a class library as Native AOT creates a native library that exposes methods of
the class library annotated with UnmanagedCallersOnlyAttribute with a non-null
EntryPoint field. For more information, see the native library sample available in the
dotnet/samples repository on GitHub.
Cross-compilation
Article • 12/05/2023
Cross-compilation is a process of creating executable code for a platform other than the
one on which the compiler is running. The platform difference might be a different OS
or a different architecture. For instance, compiling for Windows from Linux, or for Arm64
from x64. On Linux, the difference can also be between the standard C library
implementations - glibc (e.g. Ubuntu Linux) or musl (e.g. Alpine Linux).
Native AOT uses platform tools (linkers) to link platform libraries (static and dynamic)
together with AOT-compiled managed code into the final executable file. The availability
of cross-linkers and static/dynamic libraries for the target system limits the
OS/architecture pairs that can cross-compile.
Since there's no standardized way to obtain native macOS SDK for use on
Windows/Linux, or Windows SDK for use on Linux/macOS, or a Linux SDK for use on
Windows/macOS, Native AOT does not support cross-OS compilation. Cross-OS
compilation with Native AOT requires some form of emulation, like a virtual machine or
Windows WSL.
However, Native AOT does have limited support for cross-architecture compilation. As
long as the necessary native toolchain is installed, it's possible to cross-compile between
the x64 and the arm64 architectures on Windows, Mac, or Linux.
Windows
Cross-compiling from x64 Windows to ARM64 Windows or vice versa works as long as
the appropriate VS 2022 C++ build tools are installed. To target ARM64 make sure the
Visual Studio component "VS 2022 C++ ARM64/ARM64EC build tools (Latest)" is
installed. To target x64, look for "VS 2022 C++ x64/x86 build tools (Latest)" instead.
Mac
macOS provides the x64 and arm64 toolchains in the default Xcode install.
Linux
Every Linux distribution has a different system for installing native toolchain
dependencies. Consult the documentation for your Linux distribution to determine the
necessary steps.
The necessary dependencies are:
A cross-linker, or a linker that can emit for the target. clang is one such linker.
A target-compatible objcopy or strip , if StripSymbols is enabled for your project.
Object files for the C runtime of the target architecture.
Object files for zlib for the target architecture.
The following commands may suffice for compiling for linux-arm64 on Ubuntu 22.04
amd64, although this is not documented or guaranteed by Ubuntu:
Bash
Security features
Article • 09/18/2024
.NET offers many facilities to help address security concerns when building apps. Native
AOT deployment builds on top of these facilities and provides several that can help
harden your apps.
To enable Control Flow Guard on your native AOT app, set the ControlFlowGuard
property in the published project.
XML
<PropertyGroup>
<!-- Enable control flow guard -->
<ControlFlowGuard>Guard</ControlFlowGuard>
</PropertyGroup>
CET is enabled by default when publishing for Windows. To disable CET, set the
CetCompat property in the published project.
XML
<PropertyGroup>
<!-- Disable Control-flow Enforcement Technology -->
<CetCompat>false</CetCompat>
</PropertyGroup>
Introduction to AOT warnings
Article • 09/12/2023
When publishing your application as Native AOT, the build process produces all the
native code and data structures required to support the application at run time. This is
different from non-native deployments, which execute the application from formats that
describe the application in abstract terms (a program for a virtual machine) and create
native representations on demand at run time.
Because the relationship of abstract code to native code is not one-to-one, the build
process needs to create a complete list of native code bodies and data structures at
build time. It can be difficult to create this list at build time for some of the .NET APIs. If
the API is used in a way that wasn't anticipated at build time, an exception will be
thrown at run time.
To prevent changes in behavior when deploying as Native AOT, the .NET SDK provides
static analysis of AOT compatibility through "AOT warnings." AOT warnings are
produced when the build finds code that may not be compatible with AOT. Code that's
not AOT-compatible may produce behavioral changes or even crashes in an application
after it's been built as Native AOT. Ideally, all applications that use Native AOT should
have no AOT warnings. If there are any AOT warnings, ensure there are no behavior
changes by thoroughly testing your app after building as Native AOT.
C#
Type t = typeof(int);
while (true)
{
t = typeof(GenericType<>).MakeGenericType(t);
Console.WriteLine(Activator.CreateInstance(t));
}
struct GenericType<T> { }
While the above program is not very useful, it represents an extreme case that requires
an infinite number of generic types to be created when building the application as
Native AOT. Without Native AOT, the program would run until it runs out of memory.
With Native AOT, we would not be able to even build it if we were to generate all the
necessary types (the infinite number of them).
In this case, Native AOT build issues the following warning on the MakeGenericType line:
At run time, the application will indeed throw an exception from the MakeGenericType
call.
RequiresDynamicCode
RequiresDynamicCodeAttribute is simple and broad: it's an attribute that means the
member has been annotated as being incompatible with AOT. This annotation means
that the member might use reflection or another mechanism to create new native code
at run time. This attribute is used when code is fundamentally not AOT compatible, or
the native dependency is too complex to statically predict at build time. This would
often be true for methods that use the Type.MakeGenericType API, reflection emit, or
other run-time code generation technologies. The following code shows an example.
C#
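// A sketch (names are illustrative): the annotation tells callers that this method may need
// to create new generic instantiations at run time, which Native AOT can't do.
using System;
using System.Collections.Generic;
using System.Diagnostics.CodeAnalysis;

class ListFactory
{
    [RequiresDynamicCode("Creates a List<T> instantiation that may not exist in the AOT image.")]
    public static object CreateList(Type itemType)
    {
        Type listType = typeof(List<>).MakeGenericType(itemType);
        return Activator.CreateInstance(listType)!;
    }
}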
There aren't many workarounds for RequiresDynamicCode . The best fix is to avoid calling
the method at all when building as Native AOT and use something else that's AOT
compatible. If you're writing a library and it's not in your control whether or not to call
the method, you can also add RequiresDynamicCode to your own method. This will
annotate your method as not AOT compatible. Adding RequiresDynamicCode silences all
AOT warnings in the annotated method but will produce a warning whenever someone
else calls it. For this reason, it's mostly useful to library authors to "bubble up" the
warning to a public API.
If you can somehow determine that the call is safe, and all native code will be available
at run time, you can also suppress the warning using
UnconditionalSuppressMessageAttribute. For example:
C#
[UnconditionalSuppressMessage("Aot", "IL3050:RequiresDynamicCode",
    Justification = "The unfriendly method is not reachable with AOT")]
void TestMethod()
{
    if (RuntimeFeature.IsDynamicCodeSupported)
        MethodWithReflectionEmit(); // warning suppressed
}
UnconditionalSuppressMessageAttribute is preserved in the compiled assembly, so it's
respected by the trimmer, the Native AOT compiler, and other post-build tools.
SuppressMessage and #pragma directives are only present in source, so they can't be used
to silence warnings from the build.
U Caution
Be careful when suppressing AOT warnings. The call might be AOT-compatible now,
but as you update your code, that might change, and you might forget to review all
the suppressions.
Intrinsic APIs marked
RequiresDynamicCode
Article • 09/11/2024
Some APIs annotated RequiresDynamicCode can still be used without triggering the
warning when called in a specific pattern. When used as part of a pattern, the call to the
API can be statically analyzed by the compiler, does not generate a warning, and
behaves as expected at run time.
Enum.GetValues(Type) Method
Calls to this API don't trigger a warning if the concrete enum type is statically visible in
the calling method body. For example, Enum.GetValues(typeof(AttributeTargets)) does
not trigger a warning, but Enum.GetValues(typeof(T)) and Enum.GetValues(someType)
do.
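A short sketch of the two shapes (the helper names are illustrative):
C#
using System;

class EnumValuesSample
{
    // The enum type is statically visible, so the values can be precomputed: no warning.
    static Array Ok() => Enum.GetValues(typeof(AttributeTargets));

    // The enum type isn't statically known here, so the call warns.
    static Array Warns(Type someEnumType) => Enum.GetValues(someEnumType);
}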
Marshal.GetDelegateForFunctionPointer(IntPtr,
Type) Method
Calls to this API don't trigger a warning if the concrete type is statically visible in the
calling method body. For example, Marshal.GetDelegateForFunctionPointer(ptr,
typeof(bool)) does not trigger a warning, but
Marshal.GetDelegateForFunctionPointer(ptr, typeof(T)) and
Marshal.GetDelegateForFunctionPointer(ptr, someType) do.
Marshal.SizeOf(Type) Method
Calls to this API don't trigger a warning if the concrete type is statically visible in the
calling method body. For example, Marshal.SizeOf(typeof(bool)) does not trigger a
warning, but Marshal.SizeOf(typeof(T)) and Marshal.SizeOf(someType) do.
MethodInfo.MakeGenericMethod(Type[])
Method (.NET 9+)
Calls to this API don't trigger a warning if both the generic method definition and the
instantiation arguments are statically visible within the calling method body. For
example,
typeof(SomeType).GetMethod("GenericMethod").MakeGenericMethod(typeof(int)) . It's also
possible to use a generic parameter as the argument:
typeof(SomeType).GetMethod("GenericMethod").MakeGenericMethod(typeof(T)) also
doesn't warn.
If the generic type definition is statically visible within the calling method body and all
the generic parameters of it are constrained to be a class, the call also doesn't trigger
the IL3050 warning. In this case, the arguments don't have to be statically visible. For
example:
C#
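// A sketch of the reference-type-constrained case described above (type and method names
// are illustrative): because every type argument must be a class, the shared generic code
// can be used and the argument doesn't need to be statically visible to avoid the warning.
using System;
using System.Reflection;

class SomeType
{
    public static void GenericMethod<T>() where T : class { }
}

class Caller
{
    static void Invoke(Type referenceType)
    {
        typeof(SomeType).GetMethod(nameof(SomeType.GenericMethod))!
            .MakeGenericMethod(referenceType)
            .Invoke(null, null);
    }
}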
IL3050: Avoid calling members annotated with 'RequiresDynamicCodeAttribute' when publishing as Native AOT
Article • 09/11/2024
Cause
When you publish an app as Native AOT (by setting the PublishAot property to true in
a project), calling members annotated with the RequiresDynamicCodeAttribute attribute
might result in exceptions at run time. Members annotated with this attribute might
require the ability to dynamically create new code at run time, and the Native AOT
publishing model doesn't provide a way to generate native code at run time.
Rule description
RequiresDynamicCodeAttribute indicates that the member references code that might
require code generation at run time.
Example
C#
class Generic<T> { }
struct SomeStruct { }
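// For instance (the parameter name is illustrative): the type argument isn't statically
// known here, so the required instantiation of Generic<T> (for example, Generic<SomeStruct>)
// may not have been generated at publish time and IL3050 is reported.
class Demo
{
    static void TestMethod(System.Type unknownType)
    {
        typeof(Generic<>).MakeGenericType(unknownType);
    }
}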
Cause
There is a mismatch in the RequiresDynamicCodeAttribute annotations between an
interface and its implementation or a virtual method and its override.
Example
A base member has the attribute but the derived member does not have the attribute.
C#
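// A sketch (type names are illustrative): the base method carries the attribute but the
// override doesn't, so the annotations mismatch and the warning is reported.
using System.Diagnostics.CodeAnalysis;

class Base
{
    [RequiresDynamicCode("Message")]
    public virtual void TestMethod() { }
}

class Derived : Base
{
    public override void TestMethod() { }
}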
A derived member has the attribute but the overridden base member does not have the
attribute.
C#
An interface member has the attribute but its implementation does not have the
attribute.
C#
interface IRDC
{
    [RequiresDynamicCode("Message")]
    void TestMethod();
}
class Implementation : IRDC
{
    // The implementation lacks the matching attribute.
    public void TestMethod() { }
}
An implementation member has the attribute but the interface that it implements does
not have the attribute.
C#
interface IRDC
{
    void TestMethod();
}
class Implementation : IRDC
{
    [RequiresDynamicCode("Message")]
    public void TestMethod() { } // the interface member lacks the attribute
}
Cause
Built-in COM is not supported with Native AOT compilation. Use COM wrappers instead.
When the unsupported code path is reached at run time, an exception will be thrown.
Example
C#
using System.Runtime.InteropServices;
[Guid("CB2F6723-AB3A-11D2-9C40-00C04FA30A3E")]
[ComImport]
[ClassInterface(ClassInterfaceType.None)]
public class CorRuntimeHost
{
}
IL3053: Assembly produced AOT
warnings
Article • 09/02/2022
Cause
The assembly produced one or more AOT analysis warnings. The warnings have been
collapsed into a single warning message because they refer to code that likely comes
from a third party and is not directly actionable. Using the library with native AOT might
be problematic.
Cause
Methods on generic types and generic methods that are instantiated over different
types are supported by different native code bodies specialized for the given type
parameter.
It is possible to form a cycle between generic instantiations in a way that the number of
native code bodies required to support the application becomes unbounded. Because
Native AOT deployments require generating all native method bodies at the time of
publishing the application, this would require compiling an infinite number of methods.
When the AOT compilation process detects such unbounded growth, it cuts off the
growth by generating a throwing method. If the application goes beyond the cutoff
point at run time, an exception is thrown.
Even though it's unlikely the throwing method body will be reached at run time, it's
advisable to remove the generic recursion by restructuring the code. Generic recursion
negatively affects compilation speed and the size of the output executable.
In .NET, generic code instantiated over reference type is shared across all reference
typed instantiations (for example, the code to support List<string> and List<object>
is the same). However, additional native data structures are needed to express the
"generic context" (the thing that gets substituted for T ). It is possible to form generic
recursion within these data structures as well. For example, this can happen if the
generic context for Foo<T> needs to refer to Foo<Foo<T>> that in turn needs
Foo<Foo<Foo<T>>> .
Example
The following program will work correctly for input "2" but throws an exception for
input "100".
C#
// AOT analysis warning IL3054:
// Program.<<Main>$>g__CauseGenericRecursion|0_0<Struct`1<Struct`1<Struct`1<Struct`1<Int32>>>>>(Int32):
// Generic expansion to
// 'Program.<<Main>$>g__CauseGenericRecursion|0_0<Struct`1<Struct`1<Struct`1<Struct`1<Struct`1<Int32>>>>>>(Int32)'
// was aborted due to generic recursion. An exception will be thrown at runtime if this codepath
// is ever reached. Generic recursion also negatively affects compilation speed and the size of
// the compilation output. It is advisable to remove the source of the generic recursion
// by restructuring the program around the source of recursion. The source of generic recursion
// might include: 'Program.<<Main>$>g__CauseGenericRecursion|0_0<T>(Int32)'
using System;

struct Struct<T> { }
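// A sketch of the kind of method the warning above refers to (the full program, which reads
// the recursion depth from input, isn't shown here): each call instantiates the method over
// Struct<T>, a new value type that needs its own native code body, so the expansion is unbounded.
static class Recursion
{
    public static void CauseGenericRecursion<T>(int depth)
    {
        if (depth > 0)
            CauseGenericRecursion<Struct<T>>(depth - 1);
    }
}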
Similarly, the following program causes recursion within native data structures (as
opposed to generic recursion within native code), since the instantiation is over a
reference type, but has a cycle:
C#
// AOT analysis warning IL3054:
// Program.<<Main>$>g__Recursive|0_0<List`1<List`1<List`1<List`1<Object>>>>>():
// Generic expansion to
// 'Program.<<Main>$>g__Recursive|0_0<List`1<List`1<List`1<List`1<List`1<Object>>>>>>()'
// was aborted due to generic recursion. An exception will be thrown at runtime if this codepath
// is ever reached. Generic recursion also negatively affects compilation speed and the size of
// the compilation output. It is advisable to remove the source of the generic recursion
// by restructuring the program around the source of recursion. The source of generic recursion
// might include: 'Program.<<Main>$>g__Recursive|0_0<T>()'
using System.Collections.Generic;

Recursive<object>();
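// A sketch of the recursive method the call above refers to: the code is shared across all
// reference-type instantiations, but the generic context data structure for Recursive<List<T>>
// refers to the one for Recursive<List<List<T>>>, and so on without bound.
static void Recursive<T>()
{
    Recursive<List<T>>();
}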
IL3055: P/Invoke method declares a
parameter with an abstract delegate
Article • 09/02/2022
Cause
P/Invoke marshalling code needs to be generated ahead of time. If marshalling code for
a delegate wasn't pregenerated, P/Invoke marshalling will fail with an exception at run
time.
If a concrete type cannot be inferred from the P/Invoke signature, marshalling code
might not be available at run time and the P/Invoke will throw an exception.
Example
C#
using System;
using System.Runtime.InteropServices;
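// A sketch (library and method names are illustrative): the parameter is typed as the
// abstract 'Delegate', so no concrete delegate marshalling stub can be pregenerated and
// the warning is reported for this P/Invoke.
static class NativeMethods
{
    [DllImport("nativelib")]
    public static extern void RegisterCallback(Delegate callback);
}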
Cause
RequiresDynamicCodeAttribute is not allowed on static constructors since these are not
callable by the user. Placing the attribute directly on a static constructor will have no
effect. Annotate the method's containing type instead.
Example
C#
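// A sketch (the type name is illustrative): static constructors aren't directly callable by
// user code, so the attribute has no effect here and the warning is reported. Annotate the
// containing type (or the members that need it) instead.
using System.Diagnostics.CodeAnalysis;

class MyClass
{
    [RequiresDynamicCode("Message")]
    static MyClass() { }
}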
Runtime package store
Starting with .NET Core 2.0, it's possible to package and deploy apps against a known
set of packages that exist in the target environment. The benefits are faster
deployments, lower disk space usage, and improved startup performance in some cases.
\dotnet
\store
\x64
\netcoreapp2.0
\microsoft.applicationinsights
\microsoft.aspnetcore
...
\x86
\netcoreapp2.0
\microsoft.applicationinsights
\microsoft.aspnetcore
...
A target manifest file lists the packages in the runtime package store. Developers can
target this manifest when publishing their app. The target manifest is typically provided
by the owner of the targeted production environment.
The first step is to create a package store manifest that lists the packages that compose
the runtime package store. This file format is compatible with the project file format
(csproj).
XML
<Project Sdk="Microsoft.NET.Sdk">
<ItemGroup>
<PackageReference Include="NUGET_PACKAGE" Version="VERSION" />
<!-- Include additional packages here -->
</ItemGroup>
</Project>
Example
XML
<Project Sdk="Microsoft.NET.Sdk">
<ItemGroup>
<PackageReference Include="Newtonsoft.Json" Version="10.0.3" />
<PackageReference Include="Moq" Version="4.7.63" />
</ItemGroup>
</Project>
Provision the runtime package store by executing dotnet store with the package store
manifest, runtime, and framework:
.NET CLI
dotnet store --manifest <PATH_TO_MANIFEST_FILE> --runtime <RUNTIME_IDENTIFIER> --framework <FRAMEWORK> --framework-version <FRAMEWORK_VERSION>
Example
.NET CLI
dotnet store --manifest packages.csproj --runtime win-x64 --framework netcoreapp2.0 --framework-version 2.0.0
You can pass multiple target package store manifest paths to a single dotnet store
command by repeating the option and path in the command.
By default, the output of the command is a package store under the .dotnet/store
subdirectory of the user's profile. You can specify a different location using the --output
<OUTPUT_DIRECTORY> option. The root directory of the store contains a target manifest
artifact.xml file. This file can be made available for download and be used by app
authors who want to target this store when publishing.
Example
The following artifact.xml file is produced after running the previous example. Note that
Castle.Core is a dependency of Moq, so it's included automatically and appears in the
artifact.xml manifest file.
XML
<StoreArtifacts>
<Package Id="Newtonsoft.Json" Version="10.0.3" />
<Package Id="Castle.Core" Version="4.1.0" />
<Package Id="Moq" Version="4.7.63" />
</StoreArtifacts>
Target the manifest when publishing your app by passing it to dotnet publish with the
--manifest option:
.NET CLI
dotnet publish --manifest <PATH_TO_MANIFEST_FILE>
Example
.NET CLI
dotnet publish --manifest manifest1.xml
You deploy the resulting published app to an environment that has the packages
described in the target manifest. Failing to do so results in the app failing to start.
Specify multiple target manifests when publishing an app by repeating the option and
path (for example, --manifest manifest1.xml --manifest manifest2.xml ). When you do
so, the app is trimmed for the union of packages specified in the target manifest files
provided to the command.
When the deployment is trimmed on publish, only the specific versions of the manifest
packages you indicate are withheld from the published output. The packages at the
versions indicated must be present on the host for the app to start.
You can also specify the target manifests in your project file using the TargetManifestFiles property:
XML
<PropertyGroup>
<TargetManifestFiles>manifest1.xml;manifest2.xml</TargetManifestFiles>
</PropertyGroup>
Specify the target manifests in the project file only when the target environment for the
app is well-known, such as for .NET Core projects. This isn't the case for open-source
projects. The users of an open-source project typically deploy it to different production
environments. These production environments generally have different sets of packages
pre-installed. You can't make assumptions about the target manifest in such
environments, so you should use the --manifest option of dotnet publish.
For .NET Core 2.0, the runtime package store feature is used implicitly by an ASP.NET
Core app when the app is deployed as a framework-dependent deployment app. The
targets in Microsoft.NET.Sdk.Web include manifests referencing the implicit package
store on the target system. Additionally, any framework-dependent app that depends
on the Microsoft.AspNetCore.All package results in a published app that contains only
the app and its assets and not the packages listed in the Microsoft.AspNetCore.All
metapackage. It's assumed that those packages are present on the target system.
The runtime package store is installed on the host when the .NET SDK is installed. Other
installers may provide the runtime package store, including Zip/tarball installations of
the .NET SDK, apt-get , Red Hat Yum, the .NET Core Windows Server Hosting bundle,
and manual runtime package store installations.
When deploying a framework-dependent deployment app, make sure that the target
environment has the .NET SDK installed. If the app is deployed to an environment that
doesn't include ASP.NET Core, you can opt out of the implicit store by specifying
<PublishWithAspNetCoreTargetManifest> set to false in the project file as in the
following example:
XML
<PropertyGroup>
<PublishWithAspNetCoreTargetManifest>false</PublishWithAspNetCoreTargetManif
est>
</PropertyGroup>
7 Note
For self-contained deployment apps, it's assumed that the target system doesn't
necessarily contain the required manifest packages. Therefore,
<PublishWithAspNetCoreTargetManifest> cannot be set to true for a self-contained app.
See also
dotnet-publish
dotnet-store
.NET RID Catalog
Article • 07/11/2024
RID is short for runtime identifier. RID values are used to identify target platforms where
the application runs. They're used by .NET packages to represent platform-specific
assets in NuGet packages. The following values are examples of RIDs: linux-x64, win-x64,
or osx-x64. For packages with native dependencies, the RID designates the platforms on
which the package can be restored.
A single RID can be set in the <RuntimeIdentifier> element of your project file. Multiple
RIDs can be defined as a semicolon-delimited list in the project file's
<RuntimeIdentifiers> element. They're also used via the --runtime option with the
following .NET CLI commands:
dotnet build
dotnet clean
dotnet pack
dotnet publish
dotnet restore
dotnet run
dotnet store
RIDs that represent concrete operating systems usually follow this pattern: [os].
[version]-[architecture]-[additional qualifiers], where:
[os] is the operating system or platform moniker (for example, ubuntu).
[version] is the operating system version as a dot-separated version number (for example, 22.04).
[architecture] is the processor architecture (for example, x86, x64, arm, or arm64).
[additional qualifiers] further differentiate platforms (for example, aot).
RID graph
The RID graph or runtime fallback graph is a list of RIDs that are compatible with each
other.
Before .NET 8, version-specific and distro-specific RIDs were regularly added to the
runtime.json file, which is located in the dotnet/runtime repository. This graph is no
longer updated and exists as a backwards compatibility option. Developers should use
RIDs that are non-version-specific and non-distro-specific.
When NuGet restores packages, it tries to find an exact match for the specified runtime.
If an exact match is not found, NuGet walks back the graph until it finds the closest
compatible system according to the RID graph.
The following example is the actual entry for the osx-x64 RID:
JSON
"osx-x64": {
"#import": [ "osx", "unix-x64" ]
}
The preceding entry specifies that osx-x64 imports osx and unix-x64. So, when NuGet
restores packages, it tries to find an exact match for osx-x64 in the package. If NuGet
can't find the specific runtime, it can restore packages that specify unix-x64 runtimes, for
example.
The following example shows a slightly bigger RID graph also defined in the
runtime.json file:
linux-arm64        linux-arm32
    |    \          /    |
    |       linux        |
    |         |          |
unix-arm64    |      unix-x64
     \        |       /
            unix
              |
             any
Alternatively, you can use the RidGraph tool to easily visualize the RID graph (or any
subset of the graph).
There are some considerations about RIDs that you have to keep in mind when working
with them:
The RIDs need to be specific, so don't assume anything from the actual RID value.
Some apps need to compute RIDs programmatically. If so, the computed RIDs
must match the catalog exactly, including in casing. RIDs with different casing
would cause problems when the OS is case sensitive, for example, Linux, because
the value is often used when constructing things like output paths. For example,
consider a custom publishing wizard in Visual Studio that relies on information
from the solution configuration manager and project properties. If the solution
configuration passes an invalid value, for example, ARM64 instead of arm64 , it could
result in an invalid RID, such as win-ARM64 .
Using RIDs
To be able to use RIDs, you have to know which RIDs exist. For the latest and complete
version, see the PortableRuntimeIdentifierGraph.json in the dotnet/runtime
repository.
RIDs that are considered 'portable'—that is, aren't tied to a specific version or OS
distribution—are the recommended choice. This means that portable RIDs should be
used for both building a platform-specific application and creating a NuGet package
with RID-specific assets.
Starting with .NET 8, the default behavior of the .NET SDK and runtime is to only
consider non-version-specific and non-distro-specific RIDs. When restoring and
building, the SDK uses a smaller portable RID graph. The
RuntimeInformation.RuntimeIdentifier returns the platform for which the runtime was
built. At run time, .NET finds RID-specific assets using a known set of portable RIDs.
When building an application with RID-specific assets that may be ignored at runtime,
the SDK will emit a warning: NETSDK1206.
Loading assets for a specific OS version or distribution
.NET no longer attempts to provide first-class support for resolving dependencies that
are specific to an OS version or distribution. If your application or package needs to load
different assets based on OS version or distribution, it should implement the logic to
conditionally load assets.
.NET provides various extension points for customizing loading logic—for example,
NativeLibrary.SetDllImportResolver(Assembly, DllImportResolver),
AssemblyLoadContext.ResolvingUnmanagedDll, AssemblyLoadContext.Resolving, and
AppDomain.AssemblyResolve. These can be used to load the asset corresponding to the
current platform.
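For example, a sketch of a resolver that picks a distro-specific native library at run time (the library and file names are illustrative):
C#
using System;
using System.IO;
using System.Reflection;
using System.Runtime.InteropServices;

static class NativeResolver
{
    public static void Register() =>
        NativeLibrary.SetDllImportResolver(typeof(NativeResolver).Assembly, Resolve);

    static IntPtr Resolve(string libraryName, Assembly assembly, DllImportSearchPath? searchPath)
    {
        if (libraryName == "mynative")
        {
            // Prefer a musl build on Alpine-like systems; otherwise use the glibc build.
            string candidate = OperatingSystem.IsLinux() && File.Exists("/etc/alpine-release")
                ? "libmynative-musl.so"
                : "libmynative.so";
            if (NativeLibrary.TryLoad(candidate, assembly, searchPath, out IntPtr handle))
                return handle;
        }
        return IntPtr.Zero; // fall back to the default resolution behavior
    }
}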
Known RIDs
The following list shows a small subset of the most common RIDs used for each OS. For
the latest and complete version, see the PortableRuntimeIdentifierGraph.json in the
dotnet/runtime repository.
Windows RIDs
win-x64
win-x86
win-arm64
Linux RIDs
linux-x64 (Most desktop distributions like CentOS Stream, Debian, Fedora, Ubuntu, and derivatives)
linux-arm (Linux distributions running on Arm like Raspbian on Raspberry Pi Model 2+)
linux-arm64 (Linux distributions running on 64-bit Arm like Ubuntu Server 64-bit on Raspberry Pi Model 3+)
macOS RIDs
macOS RIDs use the older "OSX" branding.
osx-x64
osx-arm64
iOS RIDs
ios-arm64
Android RIDs
android-arm64
See also
Runtime IDs
How resource manifest files are named
Article • 11/03/2021
When MSBuild compiles a .NET Core project, XML resource files, which have the .resx file
extension, are converted into binary .resources files. The binary files are embedded into
the output of the compiler and can be read by the ResourceManager. This article
describes how MSBuild chooses a name for each .resources file.
Tip
If you explicitly add a resource item to your project file, and it's also included with
the default include globs for .NET Core, you will get a build error. To manually
include resource files as EmbeddedResource items, set the
EnableDefaultEmbeddedResourceItems property to false.
Default name
In .NET Core 3.0 and later, the default name for a resource manifest is used when both of
the following conditions are met:
The resource file is not explicitly included in the project file as an EmbeddedResource
item with LogicalName , ManifestResourceName , or DependentUpon metadata.
The EmbeddedResourceUseDependentUponConvention property is not set to false in
the project file. By default, this property is set to true . For more information, see
EmbeddedResourceUseDependentUponConvention.
If the resource file is colocated with a source file (.cs or .vb) of the same root file name,
the full name of the first type that's defined in the source file is used for the manifest file
name. For example, if MyNamespace.Form1 is the first type defined in Form1.cs, and
Form1.cs is colocated with Form1.resx, the generated manifest name for that resource file
is MyNamespace.Form1.resources.
LogicalName metadata
If a resource file is explicitly included in the project file as an EmbeddedResource item with
LogicalName metadata, the LogicalName value is used as the manifest name. LogicalName
takes precedence over any other metadata or setting.
For example, the manifest name for the resource file that's defined in the following
project file snippet is SomeName.resources.
XML
-or-
XML
7 Note
XML
XML
ManifestResourceName metadata
If a resource file is explicitly included in the project file as an EmbeddedResource item with
ManifestResourceName metadata (and LogicalName is absent), the ManifestResourceName
value, combined with the file extension .resources, is used as the manifest file name.
For example, the manifest name for the resource file that's defined in the following
project file snippet is SomeName.resources.
XML
<EmbeddedResource Include="X.resx" ManifestResourceName="SomeName" />
The manifest name for the resource file that's defined in the following project file
snippet is SomeName.fr-FR.resources.
XML
DependentUpon metadata
If a resource file is explicitly included in the project file as an EmbeddedResource item with
DependentUpon metadata (and LogicalName and ManifestResourceName are absent),
information from the source file defined by DependentUpon is used for the resource
manifest file name. Specifically, the name of the first type that's defined in the source file
is used in the manifest name as follows: Namespace.Classname[.Culture].resources.
For example, the manifest name for the resource file that's defined in the following
project file snippet is Namespace.Classname.resources (where Namespace.Classname is the
first class that's defined in MyTypes.cs).
XML
The manifest name for the resource file that's defined in the following project file
snippet is Namespace.Classname.fr-FR.resources (where Namespace.Classname is the first
class that's defined in MyTypes.cs).
XML
EmbeddedResourceUseDependentUponConvention
property
If EmbeddedResourceUseDependentUponConvention is set to false in the project file, each
resource manifest file name is based off the root namespace for the project and the
relative path from the project root to the .resx file. More specifically, the generated
resource manifest file name is RootNamespace.RelativePathWithDotsForSlashes.
[Culture.]resources. This is also the logic used to generate manifest names in .NET Core
versions prior to 3.0.
7 Note
See also
How Manifest Resource Naming Works
MSBuild properties for .NET SDK projects
MSBuild breaking changes
Introduction to .NET and Docker
Article • 01/04/2024
Containers are one of the most popular ways for deploying and hosting cloud
applications, with tools like Docker , Kubernetes , and Podman . Many developers
choose containers because it's straightforward to package an app with its dependencies
and get that app to reliably run on any container host. There's extensive support for
using .NET with containers .
.NET images
Official .NET container images are published to the Microsoft Artifact Registry and are
discoverable on the Docker Hub . There are runtime images for production and SDK
images for building your code, for Linux (Alpine, Debian, Ubuntu, Mariner) and
Windows. For more information, see .NET container images.
.NET images are regularly updated whenever a new .NET patch is published or when an
operating system base image is updated.
Chiseled container images are Ubuntu container images with a minimal set of
components required by the .NET runtime. These images are ~100 MB smaller than the
regular Ubuntu images and have fewer CVEs since they have fewer components. In
particular, they don't contain a shell or package manager, which significantly improves
their security profile. They also include a non-root user and are configured with that
user enabled.
The following example demonstrates building and running a container image in a few
quick steps (supported with .NET 8 and .NET 7.0.300).
Bash
$ dotnet new webapp -o webapp
$ cd webapp/
$ dotnet publish -t:PublishContainer
MSBuild version 17.8.3+195e7f5a3 for .NET
Determining projects to restore...
All projects are up-to-date for restore.
webapp -> /home/rich/webapp/bin/Release/net8.0/webapp.dll
webapp -> /home/rich/webapp/bin/Release/net8.0/publish/
Building image 'webapp' with tags 'latest' on top of base image
'mcr.microsoft.com/dotnet/aspnet:8.0'.
Pushed image 'webapp:latest' to local registry via 'docker'.
$ docker run --rm -d -p 8000:8080 webapp
7c7ad33409e52ddd3a9d330902acdd49845ca4575e39a6494952b642e584016e
$ curl -s http://localhost:8000 | grep ASP.NET
<p>Learn about <a
href="https://learn.microsoft.com/aspnet/core">building Web apps with
ASP.NET Core</a>.</p>
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES
7c7ad33409e5 webapp "dotnet webapp.dll" About a minute ago Up About
a minute 0.0.0.0:8000->8080/tcp, :::8000->8080/tcp jovial_shtern
$ docker kill 7c7ad33409e5
Ports
Port mapping is a key part of using containers. Ports must be published outside the
container in order to respond to external web requests. ASP.NET Core container images
changed in .NET 8 to listen on port 8080 , by default. .NET 6 and 7 listen on port 80 .
In the prior example with docker run , the host port 8000 is mapped to the container
port 8080 . Kubernetes works in a similar way.
Users
Starting with .NET 8, all images include a non-root user called app . By default, chiseled
images are configured with this user enabled. The publish app as .NET container feature
(demonstrated in the Building container images section) also configures images with
this user enabled by default. In all other scenarios, the app user can be set manually, for
example with the USER Dockerfile instruction. If an image has been configured with app
and commands need to run as root , then the USER instruction can be used to set the
user to root .
Staying informed
Container-related news is posted to dotnet/dotnet-docker discussions and to the
.NET Blog "containers" category .
Azure services
Various Azure services support containers. You create a Docker image for your
application and deploy it to one of the following services:
Azure Batch
Run repetitive compute jobs using containers.
Next steps
Learn how to containerize a .NET Core application.
Learn how to containerize an ASP.NET Core application.
Try the Learn ASP.NET Core Microservice tutorial.
Learn about Container Tools in Visual Studio
Tutorial: Containerize a .NET app
Article • 03/21/2024
In this tutorial, you learn how to containerize a .NET application with Docker. Containers
have many features and benefits, such as being an immutable infrastructure, providing a
portable architecture, and enabling scalability. The image can be used to create
containers for your local development environment, private cloud, or public cloud.
You explore the Docker container build and deploy tasks for a .NET application. The
Docker platform uses the Docker engine to quickly build and package apps as Docker
images. These images are written in the Dockerfile format to be deployed and run in a
layered container.
7 Note
This tutorial is not for ASP.NET Core apps. If you're using ASP.NET Core, see the
Learn how to containerize an ASP.NET Core application tutorial.
Prerequisites
Install the following prerequisites:
.NET 8+ SDK .
If you have .NET installed, use the dotnet --info command to determine which
SDK you're using.
Docker Community Edition .
A temporary working folder for the Dockerfile and .NET example app. In this
tutorial, the name docker-working is used as the working folder.
.NET CLI
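The exact command isn't reproduced here; based on the folder and project names shown below, it was likely similar to the following:
dotnet new console -o App -n DotNet.Docker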
Directory
📁 docker-working
└──📂 App
├──DotNet.Docker.csproj
├──Program.cs
└──📂 obj
├── DotNet.Docker.csproj.nuget.dgspec.json
├── DotNet.Docker.csproj.nuget.g.props
├── DotNet.Docker.csproj.nuget.g.targets
├── project.assets.json
└── project.nuget.cache
The dotnet new command creates a new folder named App and generates a "Hello
World" console application. Now, you change directories and navigate into the App
folder from your terminal session. Use the dotnet run command to start the app. The
application runs, and prints Hello World! below the command:
.NET CLI
cd App
dotnet run
Hello World!
The default template creates an app that prints to the terminal and then immediately
terminates. For this tutorial, you use an app that loops indefinitely. Open the Program.cs
file in a text editor.
Tip
If you're using Visual Studio Code, from the previous terminal session type the
following command:
Console
code .
This will open the App folder that contains the project in Visual Studio Code.
C#
Console.WriteLine("Hello World!");
Replace the file with the following code that counts numbers every second:
C#
var counter = 0;
var max = args.Length is not 0 ? Convert.ToInt32(args[0]) : -1;
while (max is -1 || counter < max)
{
Console.WriteLine($"Counter: {++counter}");
await Task.Delay(TimeSpan.FromMilliseconds(1_000));
}
Save the file and test the program again with dotnet run . Remember that this app runs
indefinitely. Use the cancel command Ctrl+C to stop it. Consider the following example
output:
.NET CLI
dotnet run
Counter: 1
Counter: 2
Counter: 3
Counter: 4
^C
If you pass a number on the command line to the app, it will only count up to that
amount and then exit. Try it with dotnet run -- 5 to count to five.
) Important
Any parameters after -- are not passed to the dotnet run command and instead
are passed to your application.
.NET CLI
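The publish command itself isn't shown here; given the output path described below, it was likely:
dotnet publish -c Release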
This command compiles your app to the publish folder. The path to the publish folder
from the working folder should be .\App\bin\Release\net8.0\publish\ .
Windows
From the App folder, get a directory listing of the publish folder to verify that the
DotNet.Docker.dll file was created.
PowerShell
dir .\bin\Release\net8.0\publish\
Directory: C:\Users\default\App\bin\Release\net8.0\publish
Create a file named Dockerfile in the directory containing the .csproj and open it in a text
editor. This tutorial uses the ASP.NET Core runtime image (which contains the .NET
runtime image) and corresponds with the .NET console application.
docker
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build-env
WORKDIR /App
# Copy everything
COPY . ./
# Restore as distinct layers
RUN dotnet restore
# Build and publish a release
RUN dotnet publish -c Release -o out
7 Note
The ASP.NET Core runtime image is used intentionally here, although the
mcr.microsoft.com/dotnet/runtime:8.0 image could have been used.
Tip
This Dockerfile uses multi-stage builds, which optimizes the final size of the image
by layering the build and leaving only required artifacts. For more information, see
Docker Docs: multi-stage builds .
The FROM keyword requires a fully qualified Docker container image name. The
Microsoft Container Registry (MCR, mcr.microsoft.com) is a syndicate of Docker Hub,
which hosts publicly accessible containers. The dotnet segment is the container
repository, whereas the sdk or aspnet segment is the container image name. The image
is tagged with 8.0 , which is used for versioning. Thus,
mcr.microsoft.com/dotnet/aspnet:8.0 is the .NET 8.0 runtime. Make sure that you pull
the runtime version that matches the runtime targeted by your SDK. For example, the
app created in the previous section used the .NET 8.0 SDK, and the base image referred
to in the Dockerfile is tagged with 8.0.
) Important
When using Windows-based container images, you need to specify the image tag
beyond simply 8.0 , for example, mcr.microsoft.com/dotnet/aspnet:8.0-nanoserver-
1809 instead of mcr.microsoft.com/dotnet/aspnet:8.0 . Select an image name based
on whether you're using Nano Server or Windows Server Core and which version of
that OS. You can find a full list of all supported tags on .NET's Docker Hub page .
Save the Dockerfile file. The directory structure of the working folder should look like the
following. Some of the deeper-level files and folders have been omitted to save space in
the article:
Directory
📁 docker-working
└──📂 App
├── Dockerfile
├── DotNet.Docker.csproj
├── Program.cs
├──📂 bin
│ └──📂 Release
│ └──📂 net8.0
│ └──📂 publish
│ ├── DotNet.Docker.deps.json
│ ├── DotNet.Docker.exe
│ ├── DotNet.Docker.dll
│ ├── DotNet.Docker.pdb
│ └── DotNet.Docker.runtimeconfig.json
└──📁 obj
└──...
The ENTRYPOINT instruction sets dotnet as the host for the DotNet.Docker.dll . However,
it's possible to instead define the ENTRYPOINT as the app executable itself, relying on the
OS as the app host:
Dockerfile
ENTRYPOINT ["./DotNet.Docker"]
This causes the app to be executed directly, without dotnet , and instead relies on the
app host and the underlying OS. For more information on deploying cross-platform
binaries, see Produce a cross-platform binary.
To build the container, from your terminal, run the following command:
Console
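The exact build command isn't reproduced here; based on the image name used throughout this tutorial, it was likely the following. Once the build finishes, the docker images command (shown next) lists the images on your machine.
docker build -t counter-image -f Dockerfile .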
Console
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
counter-image latest 2f15637dc1f6 10 minutes ago 217MB
The counter-image repository is the name of the image. The latest tag is the tag that is
used to identify the image. The 2f15637dc1f6 is the image ID. The 10 minutes ago is the
time the image was created. The 217MB is the size of the image. The final steps of the
Dockerfile start a new stage from the ASP.NET Core runtime base image, copy the published
app into it, and define the entry point.
Dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /App
COPY --from=build-env /App/out .
ENTRYPOINT ["dotnet", "DotNet.Docker.dll"]
The FROM command specifies the base image and tag to use. The WORKDIR command
changes the current directory inside of the container to App.
The COPY command tells Docker to copy the specified source directory to a destination
folder. In this example, the publish contents in the build-env layer were output into the
folder named App/out, so it's the source to copy from. All of the published contents in
the App/out directory are copied into the current working directory (App).
The next command, ENTRYPOINT , tells Docker to configure the container to run as an
executable. When the container starts, the ENTRYPOINT command runs. When this
command ends, the container will automatically stop.
Tip
Before .NET 8, containers configured to run as read-only may fail with Failed to
create CoreCLR, HRESULT: 0x8007000E . To address this issue, specify a
DOTNET_EnableDiagnostics environment variable as 0 (just before the ENTRYPOINT
step):
Dockerfile
ENV DOTNET_EnableDiagnostics=0
7 Note
Create a container
Now that you have an image that contains your app, you can create a container. You can
create a container in two ways. First, create a new container that is stopped.
Console
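The original command isn't shown here; given the image and container names used below, it was likely:
docker create --name core-counter counter-image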
This docker create command creates a container based on the counter-image image.
The output of that command shows you the CONTAINER ID (yours will be different) of
the created container:
Console
d0be06126f7db6dd1cee369d911262a353c9b7fb4829a0c11b4b2eb7b2d429cf
Console
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
d0be06126f7d counter-image "dotnet DotNet.Docke…" 12 seconds ago
Created core-counter
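To move the container from the Created state to running, start it. The original command isn't shown here, but given the container name it was most likely the following; the docker ps output after it reflects the running container.
Console
docker start core-counter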
Console
docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
cf01364df453 counter-image "dotnet DotNet.Docke…" 53 seconds ago Up
10 seconds core-counter
Similarly, the docker stop command stops the container. The following example uses
the docker stop command to stop the container, and then uses the docker ps
command to show that no containers are running:
Console
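The stop command itself isn't reproduced here; it was likely:
docker stop core-counter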
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Connect to a container
After a container is running, you can connect to it to see the output. Use the docker
start and docker attach commands to start the container and peek at the output
stream. In this example, the Ctrl+C keystroke is used to detach from the running
container. This keystroke ends the process in the container unless otherwise specified,
which would stop the container. The --sig-proxy=false parameter ensures that Ctrl+C doesn't stop the process in the container:
Console
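The original commands aren't reproduced here; based on the description above, they were likely:
docker start core-counter
docker attach --sig-proxy=false core-counter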
Delete a container
For this article, you don't want containers hanging around that don't do anything.
Delete the container you previously created. If the container is running, stop it.
Console
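The stop command itself isn't shown here; it was likely:
docker stop core-counter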
The following example lists all containers. It then uses the docker rm command to
delete the container and then checks a second time for any running containers.
Console
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
2f6424a7ddce counter-image "dotnet DotNet.Dock…" 7 minutes ago
Exited (143) 20 seconds ago core-counter
docker rm core-counter
core-counter
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Single run
Docker provides the docker run command to create and run the container as a single
command. This command eliminates the need to run docker create and then docker
start . You can also set this command to automatically delete the container when the
container stops. For example, use docker run -it --rm to do two things: first,
automatically use the current terminal to connect to the container, and then, when the
container finishes, remove it:
Console
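The exact command isn't reproduced here; based on the description above, it was likely:
docker run -it --rm counter-image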
The container also passes parameters into the execution of the .NET app. To instruct the
.NET app to count only to three, pass in 3.
Console
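The exact command isn't reproduced here; it was likely:
docker run -it --rm counter-image 3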
With docker run -it , the Ctrl+C command stops the process that's running in the
container, which in turn, stops the container. Since the --rm parameter was provided,
the container is automatically deleted when the process is stopped. Verify that it doesn't
exist:
Console
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Windows
Console
C:\>dir
Volume in drive C has no label.
Volume Serial Number is 3005-1E84
Directory of C:\
C:\>^C
Essential commands
Docker has many different commands that create, manage, and interact with containers
and images. These Docker commands are essential to managing your containers:
docker build
docker run
docker ps
docker stop
docker rm
docker rmi
docker image
Clean up resources
During this tutorial, you created containers and images. If you want, delete these
resources. Use the following commands to list, stop, and delete any remaining containers:
Console
docker ps -a
Console
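If the container is still running, stop it first. The exact command isn't shown here; it was likely:
docker stop core-counter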
Console
docker rm core-counter
Next, delete any images that you no longer want on your machine. Delete the image
created by your Dockerfile and then delete the .NET image the Dockerfile was based on.
You can use the IMAGE ID or the REPOSITORY:TAG formatted string.
Console
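The exact commands aren't reproduced here; based on the image names used in this tutorial, they were likely:
docker rmi counter-image:latest
docker rmi mcr.microsoft.com/dotnet/aspnet:8.0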
Tip
Image files can be large. Typically, you would remove temporary containers you
created while testing and developing your app. You usually keep the base images
with the runtime installed if you plan on building other images based on that
runtime.
Next steps
Containerize a .NET app with dotnet publish
.NET container images
Containerize an ASP.NET Core application
Azure services that support containers
Dockerfile commands
Container Tools for Visual Studio
Containerize a .NET app with dotnet
publish
Article • 08/13/2024
Containers have many features and benefits, such as being an immutable infrastructure,
providing a portable architecture, and enabling scalability. The image can be used to
create containers for your local development environment, private cloud, or public
cloud. In this tutorial, you learn how to containerize a .NET application using the dotnet
publish command without the use of a Dockerfile. Additionally, you explore how to
configure the container image and execution, and how to clean up resources.
Prerequisites
Install the following prerequisites:
.NET 8+ SDK
If you have .NET installed, use the dotnet --info command to determine which
SDK you're using.
Docker Community Edition
In addition to these prerequisites, it's recommended that you're familiar with Worker
Services in .NET.
.NET CLI
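The exact command isn't reproduced here; based on the folder and project names shown below, it was likely similar to the following:
dotnet new worker -o Worker -n DotNet.ContainerImage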
Directory
📁 sample-directory
└──📂 Worker
├──appsettings.Development.json
├──appsettings.json
├──DotNet.ContainerImage.csproj
├──Program.cs
├──Worker.cs
└──📂 obj
├── DotNet.ContainerImage.csproj.nuget.dgspec.json
├── DotNet.ContainerImage.csproj.nuget.g.props
├── DotNet.ContainerImage.csproj.nuget.g.targets
├── project.assets.json
└── project.nuget.cache
The dotnet new command creates a new folder named Worker and generates a worker
service that, when run, logs a message every second. From your terminal session,
change directories and navigate into the Worker folder. Use the dotnet run command
to start the app.
.NET CLI
dotnet run
Building...
info: DotNet.ContainerImage.Worker[0]
Worker running at: 10/18/2022 08:56:00 -05:00
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
Content root path: .\Worker
info: DotNet.ContainerImage.Worker[0]
Worker running at: 10/18/2022 08:56:01 -05:00
info: DotNet.ContainerImage.Worker[0]
Worker running at: 10/18/2022 08:56:02 -05:00
info: DotNet.ContainerImage.Worker[0]
Worker running at: 10/18/2022 08:56:03 -05:00
info: Microsoft.Hosting.Lifetime[0]
Application is shutting down...
Attempting to cancel the build...
The worker template loops indefinitely. Use the cancel command Ctrl+C to stop it.
By default, the IsPublishable property is set to true for console , webapp , and
worker templates.
XML
<PropertyGroup>
<IsPublishable>true</IsPublishable>
<EnableSdkContainerSupport>true</EnableSdkContainerSupport>
</PropertyGroup>
By default, the container image name is the AssemblyName of the project. If that name is
invalid as a container image name, you can override it by specifying a
ContainerRepository as shown in the following project file:
XML
<Project Sdk="Microsoft.NET.Sdk.Worker">
<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
<Nullable>enable</Nullable>
<ImplicitUsings>enable</ImplicitUsings>
<UserSecretsId>dotnet-DotNet.ContainerImage-2e40c179-a00b-4cc9-9785-
54266210b7eb</UserSecretsId>
<ContainerRepository>dotnet-worker-image</ContainerRepository>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.Extensions.Hosting" Version="8.0.0"
/>
</ItemGroup>
</Project>
.NET CLI
) Important
To build the container locally, you must have the Docker daemon running. If it isn't
running when you attempt to publish the app as a container, you'll experience an
error similar to the following:
Console
.NET CLI
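The publish command isn't reproduced here; it was likely the same form shown earlier in this document:
dotnet publish -t:PublishContainer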
This command compiles your worker app to the publish folder and pushes the container
to your local docker registry.
7 Note
The only exceptions to this are RUN commands. Due to the way containers are built,
those cannot be emulated. If you need this functionality, you'll need to use a
Dockerfile to build your container images.
ContainerArchiveOutputPath
Starting in .NET 8, you can create a container directly as a tar.gz archive. This feature is
useful if your workflow isn't straightforward and requires that you, for example, run a
scanning tool over your images before pushing them. Once the archive is created, you
can move it, scan it, or load it into a local Docker toolchain.
.NET CLI
dotnet publish \
-p PublishProfile=DefaultContainer \
-p ContainerArchiveOutputPath=./images/sdk-container-demo.tar.gz
You can specify either a folder name or a path with a specific file name. If you specify the
folder name, the filename generated for the image archive file will be
$(ContainerRepository).tar.gz . These archives can contain multiple tags inside them.
Fully qualified container image names follow this general pattern:
Dockerfile
REGISTRY[:PORT]/REPOSITORY[:TAG[-FAMILY]]
For example, consider the fully qualified mcr.microsoft.com/dotnet/runtime:8.0-alpine
image name:
mcr.microsoft.com is the registry (in this case representing the Microsoft
container registry).
dotnet/runtime is the repository (but some consider this the user/repository ).
8.0-alpine is the tag and family (the family is an optional specifier that helps
disambiguate OS packaging).
The following sections describe the various properties that can be used to control the
generated container image.
ContainerBaseImage
The container base image property controls the image used as the basis for your image.
By default, the following values are inferred based on the properties of your project:
The tag of the image is inferred to be the numeric component of your chosen
TargetFramework . For example, a project targeting net6.0 results in the 6.0 tag of the
inferred base image, and a net7.0-linux project uses the 7.0 tag, and so on.
If you set a value here, you should set the fully qualified name of the image to use as
the base, including any tag you prefer:
XML
<PropertyGroup>
<ContainerBaseImage>mcr.microsoft.com/dotnet/runtime:8.0</ContainerBaseImage
>
</PropertyGroup>
Starting with .NET SDK version 8.0.200, the ContainerBaseImage inference has been
improved to optimize for size and security.
For more information regarding the image variant sizes and characteristics, see .NET 8.0
Container Image Size Report .
ContainerFamily
Starting with .NET 8, you can use the ContainerFamily MSBuild property to choose a
different family of Microsoft-provided container images as the base image for your app.
When set, this value is appended to the end of the selected TFM-specific tag, changing
the tag provided. For example, to use the Alpine Linux variants of the .NET base images,
you can set ContainerFamily to alpine :
XML
<PropertyGroup>
<ContainerFamily>alpine</ContainerFamily>
</PropertyGroup>
The preceding project configuration results in a final tag of 8.0-alpine for a .NET 8-
targeting app.
This field is free-form, and often can be used to select different operating system
distributions, default package configurations, or any other flavor of changes to a base
image. This field is ignored when ContainerBaseImage is set. For more information, see
.NET container images.
ContainerRuntimeIdentifier
The container runtime identifier property controls the operating system and architecture
used by your container if your ContainerBaseImage supports more than one platform.
For example, the mcr.microsoft.com/dotnet/runtime image currently supports linux-
x64 , linux-arm , linux-arm64 and win10-x64 images all behind the same tag, so the
tooling needs a way to be told which of these versions you intend to use. By default, this
is set to the value of the RuntimeIdentifier that you chose when you published the
container. This property rarely needs to be set explicitly - instead use the -r option to
the dotnet publish command. If the image you've chosen doesn't support the
RuntimeIdentifier you've chosen, results in an error that describes the
You can always set the ContainerBaseImage property to a fully qualified image name,
including the tag, to avoid needing to use this property at all.
XML
<PropertyGroup>
<ContainerRuntimeIdentifier>linux-arm64</ContainerRuntimeIdentifier>
</PropertyGroup>
For more information regarding the runtime identifiers supported by .NET, see RID
catalog.
ContainerRegistry
The container registry property controls the destination registry, the place that the
newly created image will be pushed to. By default it's pushed to the local Docker
daemon, but you can also specify a remote registry. When using a remote registry that
requires authentication, you authenticate using the well-known docker login
mechanisms. For more information, see authenticating to container registries . For a
concrete example of using this property, consider the following XML
example:
XML
<PropertyGroup>
<ContainerRegistry>registry.mycorp.com:1234</ContainerRegistry>
</PropertyGroup>
This tooling supports publishing to any registry that supports the Docker Registry HTTP
API V2 . This includes the following registries explicitly (and likely many more
implicitly):
For notes on working with these registries, see the registry-specific notes .
ContainerRepository
The container repository is the name of the image itself, for example, dotnet/runtime or
my-app . By default, the AssemblyName of the project is used.
XML
<PropertyGroup>
<ContainerRepository>my-app</ContainerRepository>
</PropertyGroup>
Image names consist of one or more slash-delimited segments, each of which can only
contain lowercase alphanumeric characters, periods, underscores, and dashes, and must
start with a letter or number. Any other characters result in an error being thrown.
ContainerImageTag(s)
The container image tag property controls the tags that are generated for the image. To
specify a single tag use ContainerImageTag and for multiple tags use
ContainerImageTags .
) Important
When you use ContainerImageTags , you'll end up with multiple images, one per
unique tag.
Tags are often used to refer to different versions of an app, but they can also refer to
different operating system distributions, or even different configurations.
Starting with .NET 8, when a tag isn't provided the default is latest .
XML
<PropertyGroup>
<ContainerImageTag>1.2.3-alpha2</ContainerImageTag>
</PropertyGroup>
XML
<PropertyGroup>
<ContainerImageTags>1.2.3-alpha2;latest</ContainerImageTags>
</PropertyGroup>
Tags can only contain up to 127 alphanumeric characters, periods, underscores, and
dashes. They must start with an alphanumeric character or an underscore. Any other
form results in an error being thrown.
7 Note
.NET CLI
Tip
If you experience issues with the ContainerImageTags property, consider scoping an
environment variable ContainerImageTags instead:
.NET CLI
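The exact command isn't shown here; in PowerShell, for example, it would be along these lines (setting the property through an environment variable that MSBuild picks up):
$Env:ContainerImageTags='1.2.3;latest'
dotnet publish -t:PublishContainer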
ContainerLabel
The container label adds a metadata label to the container. Labels have no impact on
the container at run time, but are often used to store version and authoring metadata
for use by security scanners and other infrastructure tools. You can specify any number
of container labels.
XML
<ItemGroup>
<ContainerLabel Include="org.contoso.businessunit" Value="contoso-
university" />
</ItemGroup>
For a list of labels that are created by default, see default container labels.
ContainerWorkingDirectory
The container working directory node controls the working directory of the container,
the directory that commands are executed within if no other command is run.
XML
<PropertyGroup>
<ContainerWorkingDirectory>/bin</ContainerWorkingDirectory>
</PropertyGroup>
ContainerPort
The container port adds TCP or UDP ports to the list of known ports for the container.
This enables container runtimes like Docker to map these ports to the host machine
automatically. This is often used as documentation for the container, but can also be
used to enable automatic port mapping.
XML
<ItemGroup>
<ContainerPort Include="80" Type="tcp" />
</ItemGroup>
Starting with .NET 8, the ContainerPort is inferred when not explicitly provided based on
several well-known ASP.NET environment variables:
ASPNETCORE_URLS
ASPNETCORE_HTTP_PORTS
ASPNETCORE_HTTPS_PORTS
If these environment variables are present, their values are parsed and converted to TCP
port mappings. These environment variables are read from your base image, if present,
or from the environment variables defined in your project through
ContainerEnvironmentVariable items. For more information, see
ContainerEnvironmentVariable.
ContainerEnvironmentVariable
The container environment variable node allows you to add environment variables to
the container. Environment variables are accessible to the app running in the container
immediately, and are often used to change the run-time behavior of the running app.
XML
<ItemGroup>
<ContainerEnvironmentVariable Include="LOGGER_VERBOSITY" Value="Trace" />
</ItemGroup>
However, you can control how your app is executed by using some combination of
ContainerAppCommand , ContainerAppCommandArgs , ContainerDefaultArgs , and
ContainerAppCommandInstruction .
These different configuration points exist because different base images use different
combinations of the container ENTRYPOINT and COMMAND properties, and you want to be
able to support all of them. The defaults should be usable for most apps, but if you
want to customize your app launch behavior you should:
Identify which arguments (if any) are optional and could be overridden by a user
and set them as ContainerDefaultArgs
Set ContainerAppCommandInstruction to DefaultArgs
ContainerAppCommand
The app command configuration item is the logical entry point of your app. For most
apps, this is the AppHost, the generated executable binary for your app. If your app
doesn't generate an AppHost, then this command will typically be dotnet <your project
dll> . These values are applied after any ENTRYPOINT in your base container, or directly if
no ENTRYPOINT is defined.
XML
<ItemGroup>
  <ContainerAppCommand Include="dotnet" />
  <ContainerAppCommand Include="ef" />
  <!-- This shorthand syntax means the same thing, note the semicolon
  separating the tokens. -->
  <ContainerAppCommand Include="dotnet;ef" />
</ItemGroup>
ContainerAppCommandArgs
This app command args configuration item represents any logically required arguments
for your app that should be applied to the ContainerAppCommand . By default, none are
generated for an app. When present, the args are applied to your container when it's
run.
XML
<ItemGroup>
<!-- Assuming the ContainerAppCommand defined above,
this would be the way to force the database to update.
-->
<ContainerAppCommandArgs Include="database" />
<ContainerAppCommandArgs Include="update" />
<!-- This is the shorthand syntax for the same idea -->
<ContainerAppCommandArgs Include="database;update" />
</ItemGroup>
ContainerDefaultArgs
This default args configuration item represents any user-overridable arguments for your
app. This is a good way to provide defaults that your app might need to run in a way
that makes it easy to start, yet still easy to customize.
XML
<ItemGroup>
<!-- Assuming the ContainerAppCommand defined above,
this would be the way to force the database to update.
-->
<ContainerDefaultArgs Include="database" />
<ContainerDefaultArgs Include="update" />
<!-- This is the shorthand syntax for the same idea -->
<ContainerDefaultArgs Include="database;update" />
</ItemGroup>
ContainerAppCommandInstruction
The app command instruction configuration helps control the way the
ContainerEntrypoint , ContainerEntrypointArgs , ContainerAppCommand , ContainerAppCommandArgs , and ContainerDefaultArgs are combined to form the final command that runs in the container. The value can be one of the following:
Entrypoint :
None :
DefaultArgs :
) Important
This is for advanced users; most apps shouldn't need to customize their entrypoint
to this degree. For more information and if you'd like to provide use cases for your
scenarios, see GitHub: .NET SDK container builds discussions .
ContainerUser
The user configuration property controls the default user that the container runs as. This
is often used to run the container as a non-root user, which is a best practice for
security. There are a few constraints for this configuration to be aware of:
It can take various forms: a username, a Linux user ID, a group name, a Linux group ID,
username:groupname , and other ID variants.
There's no verification that the user or group specified exists on the image.
Changing the user can alter the behavior of the app, especially with regard to things
like file system permissions.
The default value of this field varies by project TFM and target operating system:
If you're targeting .NET 8 or higher and using the Microsoft runtime images, then:
on Linux the rootless user app is used (though it's referenced by its user ID)
on Windows the rootless user ContainerUser is used
Otherwise, no default ContainerUser is used
XML
<PropertyGroup>
<ContainerUser>my-existing-app-user</ContainerUser>
</PropertyGroup>
Tip
The APP_UID environment variable is used to set user information in your container.
This value can come from environment variables defined in your base image (as the
Microsoft .NET images do), or you can set it yourself via the
ContainerEnvironmentVariable syntax.
To configure your app to run as a root user, set the ContainerUser property to root . In
your project file, add the following:
XML
<PropertyGroup>
<ContainerUser>root</ContainerUser>
</PropertyGroup>
Alternatively, you can set this value when calling dotnet publish from the command
line:
.NET CLI
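The exact command isn't shown here; passing the property on the command line would look something like:
dotnet publish -t:PublishContainer -p ContainerUser=root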
For more information, see Implement conventional labels on top of existing label
infrastructure .
Clean up resources
In this article, you published a .NET worker as a container image. If you want, delete this
resource. Use the docker images command to see a list of installed images.
Console
docker images
Tip
Image files can be large. Typically, you would remove temporary containers you
created while testing and developing your app. You usually keep the base images
with the runtime installed if you plan on building other images based on that
runtime.
To delete the image, copy the image ID and run the docker image rm command:
Console
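The exact command isn't shown here; with the image ID copied from the docker images output, it would be along these lines (<image-id> is a placeholder):
docker image rm <image-id>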
Next steps
Announcing built-in container support for the .NET SDK
Tutorial: Containerize a .NET app
.NET container images
Review the Azure services that support containers
Read about Dockerfile commands
Explore the container tools in Visual Studio
.NET container images
Article • 08/30/2024
.NET provides various container images for different scenarios. This article describes the
different types of images and how they're used. For more information about official
images, see the Docker Hub: Microsoft .NET repository.
Tagging scheme
Starting with .NET 8, container images are more pragmatic in how they're differentiated.
The following characteristics are used to differentiate images:
Alpine
Mariner distroless
Ubuntu chiseled
These images are smaller, as they don't include globalization dependencies such as ICU
or tzdata. These images only work with apps that are configured for globalization
invariant mode. To configure an app for invariant globalization, add the following
property to the project file:
XML
<PropertyGroup>
<InvariantGlobalization>true</InvariantGlobalization>
</PropertyGroup>
Tip
SDK images aren't produced for *-distroless or *-chiseled image types.
Composite images are the smallest aspnet offering for Core CLR.
runtime-deps:8.0-jammy
runtime-deps:8.0-bookworm-slim
This globalization tactic is used by runtime , aspnet , and sdk images with the same tag.
) Important
Adding tzdata to Debian bookworm images has no practical effect, unless there's
an update to tzdata (that isn't yet included in Debian), at which point .NET images
would include a newer tzdata.
Some packages are still optional, such as Kerberos, LDAP, and msquic. These packages
are only required in niche scenarios.
Scenario-based images
The runtime-deps images have significant value, particularly since they include a
standard user and port definitions. They're convenient to use for self-contained and
native AOT scenarios. However, solely providing runtime-deps images that are needed
by the runtime and sdk images isn't sufficient to enable all the imaginable scenarios
or generate optimal images.
The need for runtime-deps extends to native AOT, *-distroless , and *-chiseled image
types as well. For each OS, three image variants are provided (all in runtime-deps ).
Consider the following example using *-chiseled images:
In terms of scenarios:
The 8.0-jammy-chiseled images are the base for runtime and aspnet images of the
same tag. By default, native AOT apps can use the 8.0-jammy-chiseled-aot image, since
it's optimized for size. Native AOT apps and Core CLR self-contained/single file apps that
require globalization functionality can use 8.0-jammy-chiseled-extra .
7 Note
Alpine
Mariner
Ubuntu
Image repository Image
sdk mcr.microsoft.com/dotnet/sdk
aspnet mcr.microsoft.com/dotnet/aspnet
runtime mcr.microsoft.com/dotnet/runtime
runtime-deps mcr.microsoft.com/dotnet/runtime-deps
monitor mcr.microsoft.com/dotnet/monitor
aspire-dashboard mcr.microsoft.com/dotnet/aspire-dashboard
samples mcr.microsoft.com/dotnet/samples
Image repository Image
nightly-aspnet mcr.microsoft.com/dotnet/nightly/aspnet
nightly-monitor mcr.microsoft.com/dotnet/nightly/monitor
nightly-runtime-deps mcr.microsoft.com/dotnet/nightly/runtime-deps
nightly-runtime mcr.microsoft.com/dotnet/nightly/runtime
nightly-sdk mcr.microsoft.com/dotnet/nightly/sdk
nightly-aspire-dashboard mcr.microsoft.com/dotnet/nightly/aspire-dashboard
Image repository Image
framework mcr.microsoft.com/dotnet/framework
framework-aspnet mcr.microsoft.com/dotnet/framework/aspnet
framework-runtime mcr.microsoft.com/dotnet/framework/runtime
framework-samples mcr.microsoft.com/dotnet/framework/samples
framework-sdk mcr.microsoft.com/dotnet/framework/sdk
framework-wcf mcr.microsoft.com/dotnet/framework/wcf
See also
What's new in .NET 8: Container images
New approach for differentiating .NET 8+ images
Visual Studio Container Tools for Docker
Article • 07/23/2024
The tools included in Visual Studio for developing with Docker containers are easy to
use, and greatly simplify building, debugging, and deployment for containerized
applications. You can work with a container for a single project, or use container
orchestration with Docker Compose or Service Fabric to work with multiple services in
containers.
Prerequisites
Docker Desktop
Visual Studio 2022 with the Web Development, Azure Tools workload, and/or
.NET desktop development workload installed
To publish to Azure Container Registry, an Azure subscription. Sign up for a free
trial .
The support for Docker in Visual Studio has changed over a number of releases in
response to customer needs. There are several options to add Docker support to a
project, and the supported options vary by the type of project and the version of Visual
Studio. With some supported project types, if you just want a container for a single
project, without using orchestration, you can do that by adding Docker support. The
next level is container orchestration support, which adds appropriate support files for
the particular orchestrator you choose.
With Visual Studio 2022 version 17.9 and later, when you add Docker support to a .NET
7 or later project, you have two container build types to choose from for adding Docker
support. You can choose to add a Dockerfile to specify how to build the container
images, or you can choose to use the built-in container support provided by the .NET
SDK.
Also, with Visual Studio 2022 and later, when you choose container orchestration, you
can use Docker Compose or Service Fabric as container orchestration services.
7 Note
If you are using the full .NET Framework console project template, the supported
option is Add Container Orchestrator support after project creation, with options
to use Service Fabric or Docker Compose. Adding support at project creation and
Add Docker support for a single project without orchestration are not available
options.
In Visual Studio 2022, the Containers window is available, which lets you view running
containers, browse available images, view environment variables, logs, and port
mappings, inspect the filesystem, attach a debugger, or open a terminal window inside
the container environment. See Use the Containers window.
7 Note
For .NET Framework projects (not .NET Core), only Windows containers are
available.
You can add Docker support to an existing project by selecting Add > Docker Support
in Solution Explorer. The Add > Docker Support and Add > Container Orchestrator
Support commands are located on the right-click menu (or context menu) of the project
node for an ASP.NET Core project in Solution Explorer, as shown in the following
screenshot:
Container Image Distro specifies which OS image your containers use as the base
image. This list changes if you switch between Linux and Windows as the container type.
Windows:
Windows Nano Server (recommended, only available for 8.0 and later, not supported for
Native Ahead-of-time (AOT) deployment projects)
Windows Server Core (only available 8.0 and later)
Linux:
7 Note
Containers based on the Chiseled Ubuntu image and that use Native Ahead-of-
time (AOT) deployment can only be debugged in Fast Mode. See Customize
Docker containers in Visual Studio.
Docker Build Context specifies the folder that is used for the Docker build. See Docker
build context . The default is the solution folder, which is strongly recommended. All
the files needed for a build need to be under this folder, which is usually not the case if
you choose the project folder or some other folder.
If you choose Dockerfile, Visual Studio adds the following to the project:
a Dockerfile file
a .dockerignore file
a NuGet package reference to the
Microsoft.VisualStudio.Azure.Containers.Tools.Targets package
The Dockerfile you add will resemble the following code. In this example, the project
was named WebApplication-Docker , and you chose Linux containers:
Dockerfile
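The generated file isn't reproduced here. The following is only a representative multi-stage sketch, not the exact file Visual Studio generates; it assumes the project name WebApplication-Docker mentioned above and a .NET 8 target.
# Representative sketch; the exact stages and ports vary by Visual Studio and .NET version.
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
WORKDIR /app
EXPOSE 8080

FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY ["WebApplication-Docker.csproj", "./"]
RUN dotnet restore "WebApplication-Docker.csproj"
COPY . .
RUN dotnet build "WebApplication-Docker.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "WebApplication-Docker.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WebApplication-Docker.dll"]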
Here, choose .NET SDK as the container build type to use .NET SDK's container
management instead of a Dockerfile.
Container Image Distro specifies which OS image your containers use as the base
image. This list changes if you switch between Linux and Windows as the container. See
the previous section for a list of available images.
The .NET SDK container build entry in launchSettings.json looks like the following code:
JSON
The .NET SDK manages some of the settings that would have been encoded in a
Dockerfile, such as the container base image, and the environment variables to set. The
settings available in the project file for container configuration are listed at Customizing
your container . For example, the Container Image Distro is saved in the project file as
the ContainerBaseImage property. You can change it later by editing the project file.
XML
<PropertyGroup>
<ContainerBaseImage>mcr.microsoft.com/dotnet/runtime:8.0-alpine-
amd64</ContainerBaseImage>
</PropertyGroup>
Open the Containers window by using the quick launch (Ctrl+Q) and typing containers .
You can use the docking controls to put the window somewhere. Because of the width
of the window, it works best when docked at the bottom of the screen.
Select a container, and use the tabs to view the information that's available. To check it
out, run your Docker-enabled app, open the Files tab, and expand the app folder to see
your deployed app on the container.
For more information, see Use the Containers window.
To add container orchestrator support using Docker Compose, right-click on the project
node in Solution Explorer, and choose Add > Container Orchestrator Support. Then
choose Docker Compose to manage the containers.
After you add container orchestrator support to your project, you see a Dockerfile added
to the project (if there wasn't one there already) and a docker-compose folder added to
the solution in Solution Explorer, as shown here:
If docker-compose.yml already exists, Visual Studio just adds the required lines of
configuration code to it.
Repeat the process with the other projects that you want to control using Docker
Compose.
If you work with a large number of services, you can save time and computing resources
by selecting which subset of services you want to start in your debugging session. See
Start a subset of Compose services.
7 Note
Note that remote Docker hosts are not supported in Visual Studio tooling.
Visual Studio 2019 and later support developing containerized microservices using
Windows containers and Service Fabric orchestration.
For a detailed tutorial, see Tutorial: Deploy a .NET application in a Windows container to
Azure Service Fabric.
For Service Fabric, see Tutorial: Deploy your ASP.NET Core app to Azure Service Fabric
by using Azure DevOps Projects.
Next steps
For further details on the services implementation and use of Visual Studio tools for
working with containers, read the following articles:
As .NET 5 (and .NET Core) and later versions become available on more and more
platforms, it's useful to learn how to package, name, and version apps and libraries that
use it. This way, package maintainers can help ensure a consistent experience no matter
where users choose to run .NET. This article is useful for users that are:
Disk layout
When installed, .NET consists of several components that are laid out as follows in the
file system:
(0) {dotnet_root} is a shared root for all .NET major and minor versions. If multiple
runtimes are installed, they share the {dotnet_root} folder, for example,
{dotnet_root}/shared/Microsoft.NETCore.App/6.0.11 and
{dotnet_root}/shared/Microsoft.NETCore.App/7.0.0 .
(1) dotnet The host (also known as the "muxer") has two distinct roles: activate a
runtime to launch an application, and activate an SDK to dispatch commands to it.
The host is a native executable ( dotnet.exe ).
While there's a single host, most of the other components are in versioned directories
(2,3,5,6). This means multiple versions can be present on the system since they're
installed side by side.
(2) host/fxr/<fxr version> contains the framework resolution logic used by the
host. The host uses the latest hostfxr that is installed. The hostfxr is responsible for
selecting the appropriate runtime when executing a .NET application. For example,
an application built for .NET 7.0.0 uses the 7.0.5 runtime when it's available.
Similarly, hostfxr selects the appropriate SDK during development.
(3) sdk/<sdk version> The SDK (also known as "the tooling") is a set of managed
tools that are used to write and build .NET libraries and applications. The SDK
includes the .NET CLI, the managed languages compilers, MSBuild, and associated
build tasks and targets, NuGet, new project templates, and so on.
(4) sdk-manifests/<sdk feature band version> The names and versions of the
assets that an optional workload installation requires are maintained in workload
manifests stored in this folder. The folder name is the feature band version of the
SDK. So for an SDK version such as 7.0.102, this folder would still be named
7.0.100. When a workload is installed, the following folders are created as needed
for the workload's assets: library-packs, metadata, and template-packs. A
distribution can create an empty /metadata/workloads/<sdkfeatureband>/userlocal
file if workloads should be installed under a user path rather than in the dotnet
folder. For more information, see GitHub issue dotnet/installer#12104 .
The shared folder contains frameworks. A shared framework provides a set of libraries at
a central location so they can be used by different applications.
(17) templates contains the templates used by the SDK. For example, dotnet new
finds project templates here.
(18) Microsoft.NETCore.App.Runtime.<rid>/<runtime
version>,Microsoft.AspNetCore.App.Runtime.<rid>/<aspnetcore version> These
files enable building self-contained applications. These directories contain
symbolic links to files in (2), (5) and (6).
The folders marked with (*) are used by multiple packages. Some package formats (for
example, rpm ) require special handling of such folders. The package maintainer must
take care of this.
Recommended packages
.NET versioning is based on the runtime component [major].[minor] version numbers.
The SDK version uses the same [major].[minor] and has an independent [patch] that
combines feature and patch semantics for the SDK. For example: SDK version 7.0.302 is
the second patch release of the third feature release of the SDK that supports the 7.0
runtime. For more information about how versioning works, see .NET versioning
overview.
Some of the packages include part of the version number in their name. This allows you
to install a specific version. The rest of the version isn't included in the package name.
This allows the OS package manager to update the packages (for example, automatically
installing security fixes). Supported package managers are Linux specific.
[major].[minor] , netstandard-targeting-pack-[netstandard_major].
[netstandard_minor] , dotnet-apphost-pack-[major].[minor] , dotnet-templates-
[major].[minor]
dotnet-hostfxr-[major].[minor] - dependency
dotnet-host - dependency
dotnet-apphost-pack-[major].[minor] - dependency
runtime
Version: <aspnetcore runtime version>
Contains: (11)
netstandard-targeting-pack-[netstandard_major].[netstandard_minor] - Allows
dotnet-templates-[major].[minor]
The following two meta packages are optional. They bring value for end users in that
they abstract the top-level package (dotnet-sdk), which simplifies the installation of the
full set of .NET packages. These meta packages reference a specific .NET SDK version.
When package content is under a versioned folder, the package name [major].[minor]
matches the versioned folder name. For all packages, except the netstandard-targeting-
pack-[netstandard_major].[netstandard_minor] , this also matches with the .NET version.
Multiple dotnet-sdk packages may provide the same files for the NuGetFallbackFolder .
To avoid issues with the package manager, these files should be identical (checksum,
modification date, and so on).
Debug packages
Debug content should be packaged in debug-named packages that follow the .NET
package split described previously in this article. For instance, debug content for the
dotnet-sdk-[major].[minor] package should be included in a package named dotnet-
sdk-dbg-[major].[minor] . You should install debug content to the same location as the
binaries.
In the {dotnet_root}/sdk/<sdk version> directory, the following two files are expected:
package
[minor] packages
While all debug content is available in the debug tarball, not all debug content is equally
important. End users are mostly interested in the content of the
shared/Microsoft.AspNetCore.App/<aspnetcore version> and shared/Microsoft.NETCore.App/<runtime version> folders.
The SDK content under sdk/<sdk version> is useful for debugging .NET SDK toolsets.
The debug tarball also contains some debug content under packs , which represents
copies of content under shared . In the .NET layout, the packs directory is used for
building .NET applications. There are no debugging scenarios, so you shouldn't package
the debug content under packs in the debug tarball.
Building packages
The dotnet/source-build repository provides instructions on how to build a source
tarball of the .NET SDK and all its components. The output of the source-build
repository matches the layout described in the first section of this article.