Sentry is great for crashes!


Update 27th September: Added information about uploading symbols for symbolication of exceptions

This blog post is not sponsored or paid. It is just me being a bit excited about a product I’ve tried, and it reflects my point of view at the time of writing. I had also written a section about Metrics in Sentry, which was a preview feature. However, they pulled this support to provide a better version of it later based on Spans, so I will omit it here. It does show, though, that new features in Sentry are thought through.

With the announcement that AppCenter is retiring (March 31st, 2025), I have been looking around a bit for a good replacement. Currently, I use AppCenter for crashes, analytics and App distribution, and it would be great if there was a single product out there which could replace it fully.

It seems like Firebase is the closest to a 100% replacement for AppCenter. However, there is a catch: crashes do not work on iOS when running on .NET iOS. You will only ever end up with a native stack trace without any of the managed C# frames. This is not great, and there doesn’t seem to be any traction on getting it to work. Hence the search for a solution that works excellently for crashes on both .NET Android and iOS, which I primarily target in the Apps I work on.

Having looked at Sentry and other competitors a couple of times before, and played around with Sentry a bit too, revisiting it now it seems like a very solid choice for crashes and some of its other features.

What is cool about Sentry?

What I really like about Sentry is that it truly excels as a product for crash reporting. If you have used AppCenter, you will know that when you get a crash, you end up with a stack trace with limited information about the device it crashed on. If you want extra information, you have to enrich it yourself when the App restarts and upload file attachments to the crash. In my Apps, I’ve been using this to upload logs from the App on a crash, to help understand how the user ended up in the situation that led to the crash.

This is where Sentry, by far, beats AppCenter! Where Sentry excels is a feature called Breadcrumbs. This is a way to enrich the crash with events, log entries, navigation and much more. It essentially leaves little traces from different sources as the App runs. This feature is really powerful, as it can help you understand much better what happened before the crash, so you can work out what went on and reproduce it.

Some of the things I’ve tried using breadcrumbs for are:

  • Adding log entries
  • Adding key events, such as user logged in
  • Navigation, such as user navigated to a specific screen
  • Button presses
  • App lifecycle
  • HTTP requests
  • Android ANRs (AppCenter doesn’t support these!)

Out of the box you also get a plethora of device information, such as storage, memory, ABI, display size, density and so much more. All this extra information helps a lot in understanding what happened before the App crashed and which state it was in.
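
Breadcrumbs are also easy to record from your own code. Here is a minimal sketch using SentrySdk.AddBreadcrumb; the messages, categories and data values are just illustrative:

// A sketch of manually recording breadcrumbs around key user actions.
// The messages, categories and data values here are illustrative.
SentrySdk.AddBreadcrumb(
    message: "User logged in",
    category: "auth",
    data: new Dictionary<string, string> { ["method"] = "oauth" },
    level: BreadcrumbLevel.Info);

SentrySdk.AddBreadcrumb(
    message: "Navigated to CheckoutPage",
    category: "navigation");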

Getting started

Depending on your App you might want to consume Sentry differently. I will cover .NET Android and iOS Apps here, but Sentry supports so many platforms and frameworks that you will most definitely be able to add support for it in your code.

MvvmCross Apps

Currently MvvmCross still uses its own IoC container, so you cannot use some of the handy extension methods for Microsoft.Extensions.DependencyInjection. However, don’t fear, it is still possible to get it working!

The way I do it is to use the two NuGet packages Sentry and Sentry.Serilog. The Sentry package gives me the out-of-the-box support for Android and iOS. To get logs as breadcrumbs, and to automatically report errors when you use LogError(exception, "message"), I use the Sentry.Serilog NuGet package.

<PackageReference Include="Sentry" Version="4.11.0" />
<PackageReference Include="Sentry.Serilog" Version="4.11.0" />

In MainApplication.OnCreate() on Android, and in AppDelegate.FinishedLaunching() on iOS (make sure to do this before calling base.FinishedLaunching() if you inherit from MvxAppDelegate), you add your Sentry setup like:

SentrySdk.Init(options =>
{
    options.Dsn = "<your DSN here>";
    options.ProfilesSampleRate = 1.0;
    options.TracesSampleRate = 1.0;
#if RELEASE
    options.Environment = "prod";
#else
    options.Environment = "dev";
#endif
});

On Android and iOS you can customize some of the platform specifics by enabling settings in the options.Native property. For instance, if your Application has permission to read logcat, you can have Sentry pull it and attach it to crashes with:

options.Android.LogCatIntegration = LogCatIntegrationType.Errors;

Explore that property, because there might be a bunch of interesting things in there for you.

To get the logs integration, using the handy extension method from Sentry.Serilog you can add a few lines to your Serilog configuration:

.WriteTo.Async(l => l.Sentry(options =>
{
    options.InitializeSdk = false;
    options.MinimumBreadcrumbLevel = LogEventLevel.Information;
}))

Since I am initializing Sentry myself, I’ve chosen to tell it not to do so here. MinimumBreadcrumbLevel and MinimumEventLevel help you control what is added as breadcrumbs and what becomes events in Sentry. If you don’t want logged errors to become events in Sentry, you can control that with MinimumEventLevel, which defaults to Error, while MinimumBreadcrumbLevel defaults to Verbose.
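
For context, here is a minimal sketch of how the full Serilog setup could look with the sink wired in. It assumes the Serilog, Serilog.Sinks.Async and Sentry.Serilog packages and is just one way to do it:

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.Async(l => l.Sentry(options =>
    {
        // SentrySdk.Init() is called elsewhere, so don't initialize here
        options.InitializeSdk = false;
        // Information and above become breadcrumbs
        options.MinimumBreadcrumbLevel = LogEventLevel.Information;
        // Only errors and above become Sentry events (the default)
        options.MinimumEventLevel = LogEventLevel.Error;
    }))
    .CreateLogger();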

What about MAUI Apps?

For the MAUI framework, there is a dedicated Sentry.Maui package, which you enable by calling UseSentry() on your MauiAppBuilder, making the setup much easier than the one above. It also hooks into HttpClient requests and Microsoft.Extensions.Logging out of the box. So consider using the package that most closely matches the framework you are using.
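
A minimal sketch of what that could look like in MauiProgram.cs, assuming the Sentry.Maui package and your own App class; the DSN is a placeholder:

public static MauiApp CreateMauiApp()
{
    var builder = MauiApp.CreateBuilder();
    builder
        .UseMauiApp<App>()
        // Wires up crash reporting, breadcrumbs and the HttpClient
        // and Microsoft.Extensions.Logging integrations
        .UseSentry(options =>
        {
            options.Dsn = "<your DSN here>";
            options.TracesSampleRate = 1.0;
        });

    return builder.Build();
}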

There are many more for ASP.NET, Google Cloud, Hangfire, Log4Net and EntityFramework.

Uploading symbols for symbolication

To get nicely symbolicated stack traces, you can use the sentry-cli command-line tool to upload symbols. I add it to my pipeline like so:

- script: |
    sentry-cli debug-files upload --org $(sentryOrg) --project $(sentryProject) --auth-token $(sentryToken) MyProject/bin/Release/net8.0-android
  displayName: Upload Debug Symbols to Sentry

You will need to run sentry-cli login to get a token. Once you have done that, you are good to go!

Differences from AppCenter

A few differences in usage compared to AppCenter are in how extra information is attached to events. In AppCenter, when you reported an error or an analytics event, you would provide a Dictionary<string, string> with the extra information you wanted added.

This is different in Sentry, where you instead create a scope. It supports both sync and async scope creation. So something like:

// local scope only for this exception
SentrySdk.CaptureException(exception, scope =>
{
    // adding key/value like in AppCenter
    foreach (var (key, value) in properties)
    {
        scope.SetExtra(key, value);
    }
});

This will create a local scope for the specific exception. You can also set global values with:

SentrySdk.ConfigureScope(scope =>
{
    scope.SetTag("my-tag", "my value");
    scope.User = new SentryUser { Id = userId };
});

I find this much nicer, especially being able to set global things for the session.
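
As mentioned, scope configuration also comes in an async flavor. A small sketch, where userRepository is a hypothetical helper for fetching the current user:

await SentrySdk.ConfigureScopeAsync(async scope =>
{
    // userRepository is a hypothetical async source for the current user
    var userId = await userRepository.GetCurrentUserIdAsync();
    scope.User = new SentryUser { Id = userId };
});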

Overall, Sentry beats AppCenter hands down and goes way beyond what AppCenter Crashes supports. I am looking forward to some of the other App features the Sentry team comes out with.

Adding Resilience to Refit and your own code


You may already be using Refit in your App today, or you may want to start. It is a great little REST API client library that lets you quickly start communicating with an API through interfaces, without having to write a bunch of client code yourself.

An example of this looks like so:

public interface IMyUserApi
{
    [Get("/users/{userId}")]
    Task<User> GetUser(string userId);
}

Then you can use the client like so:

var userApi = RestService.For<IMyUserApi>("https://api.myusers.com");
var user = await userApi.GetUser("abcdefg123");

Super easy and no need to write any HttpClient code to call GetAsync.
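
For contrast, here is a rough sketch of the hand-written equivalent Refit saves you from, using System.Net.Http.Json for deserialization:

// Roughly what Refit generates for you behind the interface
using var client = new HttpClient { BaseAddress = new Uri("https://api.myusers.com") };

var response = await client.GetAsync($"/users/{userId}");
response.EnsureSuccessStatusCode();

// Deserialize the JSON body into the same User model
var user = await response.Content.ReadFromJsonAsync<User>();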

Refit also integrates nicely with Microsoft.Extensions.DependencyInjection’s IServiceCollection, leveraging HttpClientFactory, which most modern Applications should be using. This also allows configuring additional HTTP message handlers for more instrumentation and, as I describe later, for some resiliency:

services
    .AddRefitClient<IMyUserApi>()
    .ConfigureHttpClient(c => c.BaseAddress = new Uri("https://api.myusers.com"));

OK, so with a Refit client registered in the service collection, how can we add some resiliency to it? Perhaps you are already familiar with using Polly directly or through Microsoft.Extensions.Http.Polly; the latter is not recommended anymore, so let me show you how to set that up using the nice and shiny Microsoft.Extensions.Resilience and Microsoft.Extensions.Http.Resilience packages.

Adding the latter package, Microsoft.Extensions.Http.Resilience, gives you a nice extension method to add a resilience handler to your HttpClient. So the example above becomes:

services
    .AddRefitClient<IMyUserApi>()
    .ConfigureHttpClient(c => c.BaseAddress = new Uri("https://api.myusers.com"))
    .AddStandardResilienceHandler();

Just by adding this one line you get retries set up for you with the defaults described in the README for the Microsoft.Extensions.Http.Resilience package, which as of writing this post are:

  • The total request timeout pipeline applies an overall timeout to the execution, ensuring that the request including hedging attempts, does not exceed the configured limit.
  • The retry pipeline retries the request in case the dependency is slow or returns a transient error.
  • The rate limiter pipeline limits the maximum number of requests being sent to the dependency.
  • The circuit breaker blocks the execution if too many direct failures or timeouts are detected.
  • The attempt timeout pipeline limits each request attempt duration and throws if it is exceeded.

If you want to tweak any of these behaviors, you can do so with:

.AddStandardResilienceHandler(options =>
{
    // options is HttpStandardResilienceOptions
    options.Retry = new HttpRetryStrategyOptions
    {
        MaxRetryAttempts = 2,
        Delay = TimeSpan.FromSeconds(1),
        UseJitter = true,
        BackoffType = DelayBackoffType.Exponential
    };

    options.TotalRequestTimeout = new HttpTimeoutStrategyOptions { Timeout = TimeSpan.FromSeconds(30) };
});

You are fully in control!

If you want to add resiliency to something else, you can also add resilience pipelines directly to your service collection using the Microsoft.Extensions.Resilience package:

services.AddResiliencePipeline("install-apps", builder =>
{
    builder.AddTimeout(TimeSpan.FromMinutes(3));
});

Then resolve and use it with a construction looking something like this:

public sealed class AppInstaller(
    ILogger<AppInstaller> logger,
    ResiliencePipelineProvider<string> pipelineProvider)
{
    private readonly ILogger<AppInstaller> _logger = logger;
    private readonly ResiliencePipeline _pipeline = pipelineProvider.GetPipeline("install-apps");

    public async Task<InstallStatus> InstallApp(string downloadPath, CancellationToken cancellationToken)
    {
        // Get a pooled context for cancellation and for passing along state
        ResilienceContext context = ResilienceContextPool.Shared.Get(cancellationToken);

        try
        {
            // Execute the async method, passing state and the cancellation token
            Outcome<InstallStatus> outcome = await _pipeline.ExecuteOutcomeAsync(
                async (ctx, state) =>
                    Outcome.FromResult(
                        await InstallAppInternal(state, ctx.CancellationToken).ConfigureAwait(false)),
                context,
                downloadPath)
                .ConfigureAwait(false);

            // Handle errors from the outcome
            if (outcome.Exception != null)
            {
                _logger.LogError(outcome.Exception, "Something went wrong installing app from {DownloadPath}", downloadPath);
            }

            return outcome.Result ?? InstallStatus.Failed;
        }
        finally
        {
            // Contexts are pooled, so hand it back when done
            ResilienceContextPool.Shared.Return(context);
        }
    }

    // more code here...
}

Hope this helps you understand a bit how the resilience libraries work. Just like Polly, you can use them with anything you want to retry, and you get some nice defaults for HTTP requests.
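
As an example of that, here is a minimal sketch of a standalone pipeline built with Polly v8’s ResiliencePipelineBuilder, no dependency injection required; DoSomethingFlakyAsync is a hypothetical method you want retried:

ResiliencePipeline pipeline = new ResiliencePipelineBuilder()
    .AddRetry(new RetryStrategyOptions
    {
        MaxRetryAttempts = 3,
        Delay = TimeSpan.FromMilliseconds(500),
        BackoffType = DelayBackoffType.Exponential,
        UseJitter = true
    })
    .AddTimeout(TimeSpan.FromSeconds(10))
    .Build();

// DoSomethingFlakyAsync is a hypothetical method you want retried
await pipeline.ExecuteAsync(
    async token => await DoSomethingFlakyAsync(token),
    cancellationToken);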

If you want to read more about resilience, Milan also wrote a really nice blog post, which you might find interesting.

Renovate Bot Shareable Configurations


If you haven’t already noticed by the amount of blog posts about Renovate Bot, I am really loving it and its feature set.

One very cool feature that was pointed out to me is the ability to have defaults, or configurations to extend from, shared from a repository. I put mine in the repository my Renovate Bot lives in and share it from there.

So, if you find yourself applying the same configuration in multiple repositories, this is maybe something you want to look into.

Defining a default config

In the repository where my Renovate Bot lives, I created a defaults.json. You can actually call it almost anything you want; you just need to remember the name for when you extend it in the config of the repositories you are scanning. In this file I put something like the following, as these are things I keep applying in most places:

{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": [
    "config:recommended"
  ],
  "prHourlyLimit": 0,
  "prConcurrentLimit": 0,
  "automerge": true,
  "azureWorkItemId": 123456,
  "labels": [
    "dependencies"
  ]
}

Using default configs in your configs for your repositories

To use the above defaults.json, simply remove the configuration entries that now come from the defaults and add a line such as this to the renovate.json config in the scanned repository:

{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["local>MyProjectName/RenovateBot:defaults"],
  ...

So here I use local>, which means self-hosted Git. If you are on GitHub, GitLab or some other hosted Git, please refer to the Preset Hosting documentation. For Azure DevOps Repositories, local> works.

Otherwise, I just specify the project name in Azure DevOps and the repository the configuration I want to extend lives in, followed by the preset file name without its extension. That is more or less it.

Next time your Renovate Bot runs, it will pull those config items.

There is a lot more you can do, along with some recommended presets by the Renovate Bot team which you can apply. Read more about it in their documentation about Shareable Config Presets.

Cool Renovate Bot Features


Last month I wrote about how cool Renovate Bot is, updating the dependencies in your repositories. It works anywhere; I got it to work in Azure DevOps Pipelines, running every night at 3 AM while everyone is sleeping and no one else is using the build agents for important stuff.

I initially did a couple of things that I changed later on, after getting a bit more familiar with how Renovate Bot works. In this blog post I will go through some of the discoveries I made and, at the end, how I configure it now. Hopefully you find this useful 😀

Configuration and pipeline can live in its own repo

Initially I had the Renovate Bot configuration and pipeline living in the same repository as the code I wanted it to run against. This is not necessary at all; it can live in its own repository, together with a base configuration holding things such as authentication.

So now I have a repository called RenovateBot with two files in it:

  • azure-pipelines.yml the pipeline for running the bot (see the previous post on how I set that up)
  • config.js the configuration for the bot

When running, Renovate already knows how to check out files in the repositories you tell it to scan in the config file. So you don’t need to run it in the code repositories you want to scan for updates.

In the config.js file I now simply have something like:

module.exports = {
    hostRules: [
        {
            hostType: 'nuget',
            matchHost: 'https://pkgs.dev.azure.com/myorg/',
            username: 'user',
            password: process.env.NUGET_TOKEN
        },
    ],
    repositories: [
        'myorg/repo1',
        'myorg/repo2',
        'myorg/repo3'
    ]
};

It will scan all those repositories defined in the repositories collection.

Neat!

You can have repository specific configs

For each repository you define, apart from the basic configuration you provide for Renovate, you can add additional configuration. I use this to add tags and to group Pull Requests made by Renovate for dependencies that belong together, for instance Unit Test dependencies.

So in each repository you can add a renovate.json file with additional configuration. This is the same file that Renovate creates initially on a repository on the first Pull Request it makes.

Here is an example of what a configuration for one of my repositories looks like:

{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "azureWorkItemId": 123456,
  "prHourlyLimit": 0,
  "labels": ["dependencies"],
  "automerge": true,
  "packageRules": [
    {
      "matchPackagePatterns": [
        "Cake.*",
        "dotnet-sonarscanner",
        "dotnet-reportgenerator-globaltool"
      ],
      "groupName": "CI",
      "addLabels": [
        "CI"
      ]
    },
    {
      "matchPackagePatterns": [
        ".*xunit.*",
        "Moq.*",
        "AutoFixture.*",
        "liquidtestreports.*",
        "Microsoft.NET.Test.SDK",
        "Microsoft.Reactive.Testing",
        "MvvmCross.Tests",
        "Xamarin.UITest",
        "coverlet.*",
        "MSTest.*",
        "System.IO.Abstractions.*"
      ],
      "groupName": "Unit Test",
      "addLabels": [
        "Unit Test"
      ]
    }
  ]
}

Let’s go through some of the options here.

  • azureWorkItemId will add the specific work item Id to every Pull Request it creates. This is especially useful if you have a policy set on your Pull Request to always link a work item
  • prHourlyLimit I’ve set this one to 0, such that Renovate Bot can create as many Pull Requests as it wants on a repository. Otherwise, I think the default is 2. So if you wonder why it didn’t update all dependencies, this could be why
  • labels This option lets you set default labels on pull requests, so for each of my Pull Requests made by Renovate it will have the dependencies label on it
  • automerge This option will set Auto Complete in Azure DevOps on a Pull Request using the default merge strategy, such that you can have Pull Requests automatically merge when all checks are completed
  • packageRules is super powerful. Here you can control which packages are grouped together; in the case above I have two groups, Unit Test and CI, which look for specific regex patterns of package names to include in the groups. I also add additional labels for these two groups using addLabels and assign a groupName, such that when Renovate creates a Pull Request for a group, the title will be Update <group name>. There are many more options you can set on packageRules; refer to the docs if you want more info.

You can scan many types of projects

So far I have scanned .NET and Kotlin projects with Renovate Bot, and it handles these very well without any issues. I simply add additional repositories in the config.js file, and on the next run (or when I run the pipeline manually) it adds a renovate.json file to the repository and it is good to go.

Some Azure DevOps annoyances

When using System.AccessToken as your Renovate token, the Pull Requests are opened by the user Project Collection Build Service (myorg). This user is built into Azure DevOps, does not have any e-mail assigned to it, and you cannot change that either. If you have “Commit author email validation” enabled on a repo, you will need to add both the renovate bot email (or the one you’ve defined in your config) and the Project Collection user, like so: [email protected]; Project Collection Build Service (myorg), to the allowed commit author email patterns. Otherwise auto-completion on Pull Requests will not work, as it will violate one of the repository policies.

Using Renovate Bot in Azure DevOps


I have been spoiled by dependabot on GitHub, which helps keep NuGet and other packages up to date. However, dependabot is not easily available in Azure DevOps. Again, the Open Source Community to the rescue! After asking around on social media, my friends Martin Björkström, Mattias Karlsson and Pascal Berger let me know about Renovate bot. Its purpose is to periodically update the dependencies that you use in your projects. It has loads of plugins for all sorts of package systems, like NPM, NuGet, PIP and many more. Probably anything you can think of is supported, or it can be configured to work with it.

Pascal conveniently let me know of a Docker image you can use in your pipelines to run Renovate. This Docker image comes with the packages pre-installed, such that you just need to execute renovate. This is nice, because you then do not need to install the renovate npm package on every pipeline run.

Configuration

To configure Renovate, you will want to create a config.js file. Here you can add stuff like private NuGet feeds, rules about which labels to apply on PRs and much more. For my usage, I need access to a private NuGet feed, and I want to apply a dependencies label and a work item on every PR that Renovate creates:

module.exports = {
  hostRules: [
    {
      hostType: 'nuget',
      matchHost: 'https://pkgs.dev.azure.com/<org-name>/',
      username: 'user',
      password: process.env.NUGET_TOKEN
    }
  ],
  repositories: ['<project>/<repository>'],
  azureWorkItemId: 12345,
  labels: ['dependencies']
};

For private NuGet feeds, you need to add hostRules to let Renovate know how to authenticate with the NuGet feed. For Azure DevOps Artifacts, unfortunately you cannot just use the System.AccessToken from the pipeline, so you need to create a Personal Access Token (PAT) with permission to read the package feed.

You can have Renovate create PRs for one or more repositories; just provide a list of the repositories you want it to run on. You can quickly deduce these from the URLs of your repos, which are in the format https://dev.azure.com/<organization>/<project>/_git/<repository>. Each repository you want scanned is added as <project>/<repository>.

On my repositories, I have branch protection enabled and have a rule that work items must be linked to each PR. So for this I have created a work item, which I simply use for each renovate bot Pull Request.

That is it for the configuration.

Pipeline definition

With the configuration in place, you can now set up a pipeline to run Renovate on a schedule. I have used the example Renovate suggests, running every night at 3.

This pipeline uses the Docker container Pascal Berger let me know exists. So every step after specifying container will run inside of it.

The env variable NUGET_TOKEN is what the password for the hostRule for the NuGet feed above will be replaced with. In my case, it is a Personal Access Token (PAT) that only has access to the private NuGet feed. The GITHUB_COM_TOKEN is used to fetch release notes for Pull Request descriptions when Renovate creates them.

schedules:
- cron: '0 3 * * *'
  displayName: 'Every day at 3am (UTC)'
  branches:
    include: [develop]
  always: true

trigger: none

pool:
  vmImage: 'ubuntu-latest'

container: swissgrc/azure-pipelines-renovate:latest

steps:
- bash: |
    npx renovate
  env:
    NUGET_TOKEN: $(NUGET_PAT)
    GITHUB_COM_TOKEN: $(GITHUB_TOKEN)
    RENOVATE_PLATFORM: azure
    RENOVATE_ENDPOINT: $(System.CollectionUri)
    RENOVATE_TOKEN: $(System.AccessToken)

With this, you should be good to go! The first time Renovate runs, it will create a pull request with a renovate.json file. Merge it, and from then on it will create Pull Requests with dependency updates! Neat!

Here is a screenshot of how this looks.


This works in many environments. Refer to the renovate documentation for more info.