Spending more time with Linux and random freezes at idle...

    A few weeks ago, I decided to start dual booting with Linux. This was something I had considered a while back when I built my new desktop, but I never actually followed through with it. However, I recently installed Fedora on an old MacBook Air that just wasn’t handling macOS well anymore, and that got me thinking about it again. I tend to be partial to Debian because I have used it a lot in the past. I ran into some issues with it on the MacBook Air though. I don’t remember exactly what now, but I vaguely recall something about non-free wireless chipset drivers, so it was probably that. I’m sure it is not too difficult to solve, and there are plenty of resources out there for people running Linux on Apple hardware. But I was looking to avoid any extra work, and it was probably also an excuse to try out other distros, so I tried Fedora next. No problems there, and I have stuck with it on the MacBook Air.

    On the desktop, I went with Debian first again. That was going fine, and initially I don’t think I had any problems. I believe there was a firmware install step for my AMD GPU, but that was just a minor hiccup. In fact, I was pretty happy with things until, a few days later, the computer started freezing completely at random. It didn’t happen very often. Once a day or so, sometimes twice in a day. I must have just gotten lucky the first few days. Strangely, it almost always happened while I wasn’t using the computer. I would come back to it and find the lock screen displaying the time from an hour or so ago, with no response to any inputs. Eventually I decided to try Fedora on the desktop too, hoping I would get lucky again. This seemed to do the trick at first. And then a couple of days later, it started happening again.

    I am kind of late to the game here, since most people were looking at this issue over two years ago. The one benefit of that is that the issue has been solved; the downside is that there is a lot of information to sift through, and it’s hard to know what is correct. After some research, I finally tracked down a solution. It appears this is a fairly low-level bug, and there is a BIOS-based fix available. Initially, I ran into a lot of discussion around different CPU power states - especially the C6 state. It sounds like some people had early success with disabling the C6 state through a Python script. Similarly, you can find a lot of talk about kernel settings (CONFIG_RCU_NOCB_CPU) and boot parameters (rcu_nocbs). It is possible that with changes to the kernel you could work around the issue, but it appears that eventually AMD released a fix through BIOS updates provided by motherboard manufacturers. From what I have read, most boards seem to have these settings under an AMD CBS section, and what you’re looking for is the Power Supply Idle Control setting. Setting that to Typical Current Idle fixes the issue. On my ASRock AB350M Pro4 it can be found under Advanced -> AMD CBS -> Zen Common Options. By default, it was set to Auto. With Typical Current Idle, I have had a solid week of uptime so far. Hopefully that continues!

    I’m not sure if this is an issue on newer Ryzen processors. I haven’t seen it mentioned, so I am assuming it was something that was fixed before their release. I specifically have a Ryzen 1700 with an ASRock AB350M Pro4. Hope that can help anyone else searching for information on system freezes under Linux with that combination.

    Document and automate as you go

    I have recently found myself setting up a new front end project that utilizes an API which has already been in development for a while. Starting a new project is fairly rare for our development team, as most of our efforts are focused on the ongoing development of software that has been around for a number of years (decades in some cases). However, in the not so distant past, we began migrating some of the oldest parts of the codebase over to C#. Only a few people worked on this at first, but of course that number grew over time. Unfortunately, all of the steps necessary to get things up and running in a local development environment were never documented well. Actually… they weren’t documented at all initially.

    For a simple project, this may be fine as it could just be a matter of grabbing everything from source control and building. Larger projects, however, aren’t going to last long in that state. Even from the start, you may have certain software dependencies or OS configuration that require extra steps. Thankfully there are a lot more tools now to make this less and less of an issue. Containers in particular are great for addressing these challenges.

    In this particular case, I was unable to take advantage of something like Docker for a variety of reasons. What I did do is immediately start documenting everything needed. If I needed some special IIS functionality, then I made a note to enable that under Windows features. Doing all of this right away makes it much harder to forget about things. This is especially important because sometimes the lack of a dependency or some missing configuration results in strange behavior or incoherent error messages that may be difficult to diagnose later on. It will save you a ton of time when onboarding new developers and will probably help them feel a lot more comfortable with the codebase as well. Spending a bunch of time just getting a development environment set up is always a frustrating experience.

    If you take this a step further and automate all the necessary steps, the results are even better. This gets you a lot closer to the turnkey ideal of just getting the source and building. With the initialization script(s) committed to source control along with the code, you only need to get the repository, run the scripts, and then build. Depending on how your builds are done, you could probably even have them run the scripts for you, written in such a way that running them more than once doesn’t waste time or cause other issues.

    In the end, I created a PowerShell script for this project that takes care of all the initial setup work a developer would otherwise have to do manually to get things up and running. With appropriate comments explaining what is happening and, most importantly, why, the script doubles as documentation. Going forward, I plan to use this approach as much as possible. Whether it’s through a Dockerfile or shell script(s), automating these types of tasks is a powerful tool. You save time, provide consistency, and just make life easier overall.

    This isn’t a new idea by any means. After all, at times programming is all about automation. One of my favorite software development books, The Pragmatic Programmer, emphasizes the importance of automating software development work. If you haven’t read it, I highly recommend it to all software developers.

    Dapper and Npgsql with .NET Core

    The most popular posts on this blog have been those discussing Dapper and Npgsql. The blog has always been more or less a personal journal that I don’t really promote, so the majority of those views have come from people searching for information related to the two. Obviously there’s an interest in using them together. As I started playing around with .NET Core 2.0 recently, I realized it was a good time to revisit the topic and update the previous example to use .NET Core. Since you tend to find PostgreSQL running on Linux servers, the ability to run .NET code on Linux with .NET Core opens up a lot more options for your application’s infrastructure. Of course, this has also been possible with Mono, but I expect a lot more organizations to pursue it now that Microsoft officially supports it. It is especially attractive if you want to leverage containers, since the Linux ecosystem around those is a bit more mature.

    Unfortunately it looks like Dapper hasn’t had a release with .NET Core 2.0 support just yet. So for now the example code is targeting .NET Core 1.1. Hopefully a new release shows up on NuGet soon. UPDATE: It looks like a prerelease package is now up on NuGet.

    I won’t go into much detail explaining the code. You can find it on GitHub. It is pretty much the same as the old example, which was covered in this post. The one big difference is that configuration handling has changed. The System.Configuration namespace isn’t present in .NET Core, so you no longer have the option of using the ConfigurationManager class to read connection strings out of .config files. I have replaced that code with something equally simple that takes advantage of the Microsoft.Extensions.Configuration package to read the connection string out of an environment variable. There are lots of other possibilities, and for a larger application you would probably choose another approach. ASP.NET Core applications, for example, use an appsettings.json file, and the Microsoft.Extensions.Configuration code helps with that too. The ASP.NET Core documentation on the subject is pretty helpful if you’d like more information.
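    To give a rough idea of what that looks like, here is a minimal sketch of reading the connection string from an environment variable and running a Dapper query over an Npgsql connection. The environment variable name, table, and Widget class are placeholders for illustration; the actual example on GitHub differs in the details.

    using System;
    using Dapper;
    using Microsoft.Extensions.Configuration;
    using Npgsql;

    public static class Program
    {
        public static void Main()
        {
            // Pull configuration from environment variables instead of a .config file.
            var configuration = new ConfigurationBuilder()
                .AddEnvironmentVariables()
                .Build();

            // "DAPPER_EXAMPLE_DB" is a placeholder name for the variable holding the connection string.
            var connectionString = configuration["DAPPER_EXAMPLE_DB"];

            using (var connection = new NpgsqlConnection(connectionString))
            {
                // Dapper's Query<T> extension maps each row of the result set to a Widget.
                var widgets = connection.Query<Widget>("SELECT id, name FROM widgets");
                foreach (var widget in widgets)
                {
                    Console.WriteLine($"{widget.Id}: {widget.Name}");
                }
            }
        }
    }

    // Simple POCO matching the columns selected above (again, just for illustration).
    public class Widget
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }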

    AWS SDK for .NET - General Availability for .NET Core Support

    Lately I have been working on a new side project, so the blog has unfortunately been neglected. While I have been planning on writing about integration testing with ASP.NET Core, that’s going to have to wait until I wrap up the first phase of work on my project. I am hoping that won’t be too far away.

    In the meantime, I’m going to write a little about my new side project. I have been interested in working with Amazon Web Services (AWS) for some time, and I’m finally starting on a project to fill a personal need of backing up pictures from my phone on S3. Two birds with one stone sort of thing. As you can probably tell from all the previous posts, I’m also pretty interested in .NET Core and ASP.NET Core, so I decided that I would use those for any necessary backend work.

    At first, I was a little hesitant since I wasn’t sure of the status of the AWS SDK’s support of .NET Core. You can, of course, always use the REST API directly for S3, but I was hoping to leverage the SDK to make things go a little more smoothly. After spending some time manually creating an authenticated HTTP request, I decided that I was better off focusing on other efforts. I did get it working and it was a good learning experience. Just not really the point of the project. Although if you are interested in how the authentication of requests works, I’d definitely recommend going through the same exercise. Łukasz Adamczak has a great blog post that walks you through it.

    In the end, my fears over .NET Core support were misplaced. The beta version of the AWS SDK for .NET available at the time worked just fine for my needs, which amounted to fairly basic interaction with S3. I was able to create presigned URLs for GET and PUT requests without any issues, and I was surprised at how smoothly everything went. So far I have been pretty happy with the AWS documentation. I also found the posts by Norm Johanson on the AWS .NET Development blog pretty helpful. In particular, I was lucky to come across the Configuring AWS SDK with .NET Core post, which answered a lot of questions I would have had about specifying credentials and instantiating an AmazonS3 object.
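    For a sense of what that looks like in code, here is a minimal sketch of generating a presigned PUT URL with the SDK. The bucket name, key, and expiration are made-up values, and the region and credential setup will vary; treat this as an illustration rather than the exact code from my project.

    using System;
    using Amazon;
    using Amazon.S3;
    using Amazon.S3.Model;

    public static class PresignedUrlExample
    {
        public static string CreateUploadUrl()
        {
            // Credentials are resolved from the SDK's usual sources (credentials file,
            // environment variables, etc.); only the region is specified here.
            var client = new AmazonS3Client(RegionEndpoint.USEast1);

            var request = new GetPreSignedUrlRequest
            {
                BucketName = "my-photo-backup-bucket",  // placeholder bucket name
                Key = "photos/example.jpg",             // placeholder object key
                Verb = HttpVerb.PUT,                    // HttpVerb.GET for downloads
                Expires = DateTime.UtcNow.AddMinutes(15)
            };

            // Returns a URL that allows the holder to upload the object until it expires.
            return client.GetPreSignedURL(request);
        }
    }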

    While writing this post, I discovered a new piece of good news on the AWS .NET blog. The .NET Core support in the AWS SDK for .NET is now out of beta. Today marks the release of version 3.3.0.0, which is the general availability release of .NET Core support.

    .NET Core - ASP.NET Core and WebAPI - Dependency Injection

    Last time around, I mentioned an issue with how the DieController directly instantiated a new Die instance for each incoming request. There are two main problems with this. In general, it’s a good idea to have your dependencies provided to a consuming class instead of having it instantiate them directly. This provides a number of benefits that I won’t get into here; if you’re not familiar with Dependency Injection, it is worth reading up on. The other problem is specific to our particular application and its implementation. We are simulating dice rolls using System.Random, and I previously mentioned that the documentation recommends against creating a new instance for each random number generated because of how it’s seeded. Since we instantiate a new Die object in each request to the DieController.Get() method, and it in turn instantiates a new System.Random, the initial code was in direct conflict with that recommendation. What we need is a way to reuse the same Die object across all of the requests coming in to our API.

    Thankfully, this is really easy in ASP.NET Core. One of the new features is built-in DI (Dependency Injection) container functionality. The DI container is exposed through the IServiceProvider interface, and the types it manages are referred to as “services”. The service terminology isn’t anything special; a service could be almost anything. A pretty standard example for DI is a Repository class that implements IRepository. You instruct the DI container that when a type of IRepository is requested, it should provide an instance of Repository. In this case, your Repository class can be thought of as a “service”.

    So how does this help with the need to have a single instance of the Die class used for all API requests? When types are registered with the DI container, you can specify a service lifetime. For a small object that doesn’t have any state to persist, the Transient lifetime is a good choice. In this case, you get a new instance each time one is requested. This is fairly typical and is probably what you would expect; I know it’s the default option for Ninject and Unity, and probably others as well. There are other lifetime options, and it just so happens there is a Singleton lifetime that is perfect for our use case. We can register our Die class with the Singleton lifetime. Once the first request is made, a new instance is created, and any subsequent requests will use that same instance. Exactly what we need.
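    As a quick illustration of what those registrations look like (using the hypothetical IRepository/Repository pair from above - you would pick exactly one lifetime for a given service), the built-in container exposes an extension method for each lifetime:

    services.AddTransient<IRepository, Repository>(); // new instance every time one is requested
    services.AddScoped<IRepository, Repository>();    // one instance per request (scope)
    services.AddSingleton<IRepository, Repository>(); // one shared instance for the application's lifetime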

    To implement this, a few changes are necessary. Most obviously, we now need to provide the Die instance to DieController through its constructor. We’ll add a readonly property to DieController so that we can access it in the Get() method. Instead of dealing directly with the Die class within DieController, we’ll add an IRollable interface to DiceApi.Core. Using Die itself would work, but it’s more common practice to depend on an interface that provides the functionality you need. That way implementations can be swapped out more easily, you can mock the dependency for unit tests, and so on. If you go back to the IRepository/Repository example, the benefits are a bit more obvious: your Repository may involve database details, while a mock of IRepository can make unit testing much more approachable.
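    Sketched out, the injection side of DieController ends up looking something like this (the names follow the description above, but the exact attributes and signatures in the repository may differ slightly):

    public class DieController : Controller
    {
        // Set once from the constructor and used by Get(); the DI container supplies
        // the registered IRollable implementation when it creates the controller.
        private IRollable Die { get; }

        public DieController(IRollable die)
        {
            Die = die;
        }

        // Get() will use the injected instance instead of newing up a Die itself.
    }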

    The ASP.NET Core framework instantiates our DieController instances for each request, so how does it go about passing in an appropriate IRollable implementation? Well, right now it doesn’t. The DI container must be configured so that it knows about the mapping between IRollable and Die. This is where the Startup class’s ConfigureServices() method comes in, and it is where we register our type with the Singleton lifetime. Adding the following line will take care of that.

    services.AddSingleton<IRollable, Die>();

    We’re most of the way there, but there is still an issue. The DI container knows that we want an instance of Die when an IRollable is needed, but how does it go about creating that instance of Die? If Die had a default constructor, it could just use that. Right now, there is only one constructor, Die(int sides), and the container has no way of knowing what to provide for that integer parameter. Looking back at DieController.Get(), the method previously used this constructor parameter to dictate the number of sides on the die. Since we’re now using the Singleton lifetime for this object, that doesn’t really make sense. What if the next API request were to roll a die with a different number of sides? If specifying it on the constructor isn’t going to work, then where should it go? We could deal with it as a property, but then we’re mixing in state on something that’s potentially shared across requests, and there is really no need for that state. Modifying the Roll() method to accept the number of sides as a parameter looks like the best choice.
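    Putting that together, the interface and the reworked Die look roughly like the following sketch (the shape described above, not necessarily line-for-line what is in the repository):

    using System;

    public interface IRollable
    {
        int Roll(int sides);
    }

    public class Die : IRollable
    {
        // A single Random instance lives as long as the singleton Die does,
        // which is what the System.Random documentation recommends.
        private readonly Random _random = new Random();

        public int Roll(int sides)
        {
            // Next's upper bound is exclusive, so this yields values from 1 to sides.
            return _random.Next(1, sides + 1);
        }
    }

    DieController.Get() then simply delegates to the injected instance with the requested number of sides.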

    A few other supporting changes are needed. DieTest requires some refactoring due to moving the number of sides into the parameter on Roll(). The DieControllerTest class must now provide an IRollable when instantiating DieController. This is where you could potentially mock IRollable in order to focus on testing just the controller-specific code. To keep things simple, I’m just instantiating new instances of Die directly. And finally, since we’re now using Die in DiceApi.WebApi.Tests, its project.json file needs a reference to DiceApi.Core added to its dependencies.
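    For example, the controller test construction becomes something along these lines (I’m assuming an xUnit-style test here purely for illustration; the actual assertions in DieControllerTest may differ):

    using Xunit;

    public class DieControllerTest
    {
        [Fact]
        public void Get_ReturnsValueInRange()
        {
            // Keeping it simple: pass a real Die rather than a mock of IRollable.
            var controller = new DieController(new Die());

            var result = controller.Get(6);

            Assert.InRange(result, 1, 6);
        }
    }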

    For more information on dependency injection in ASP.NET Core, check out the documentation page on docs.asp.net. I was able to get up and running with this really quickly just by looking it over. As usual, all of the code is provided on GitHub. If you’re looking for the changes specific to this post, you can find them in this commit.