Limit Yourself (PR 2698).

Well that was a long, difficult trip.

I wrote earlier about debugging tests in PR 2698 (and messed up the URL, but that’s life). Since that time, Patrick ran the tests at least a thousand times in a couple of loops, and we’re pretty sure that both:

  • I didn’t break anything that was already working
  • the new behavior I added behaves like we think it should

What more can you ask?

As Paul Harvey might say, here’s the rest of the story.

What Should You Build? What People Ask For!

Everything started way back, almost seven months ago, when a shibe named Jamiereno asked about a way to increase the maximum number of connections allowed to a node without restarting the Core.

How can you tell I’m an optimist? I wrote “That might not be too difficult.”

To my credit, I did write that I’d start with a new RPC endpoint, then move on to the UI. I haven’t made it to the UI yet.

My point isn’t to make fun of my optimism, although you can. It’s kinda funny. My point is that this was a great feedback interaction. We were having a discussion about running nodes, improving the network, getting more shibes involved in making things stronger (and getting more people access to reduced transaction fees in the 1.14 series).

Then along came a request to make things easier for node operators.

If this gets your Spidey-sense tingling too, good! These types of questions are always good. The right answer might not be to do exactly what the requester asks, but it’s useful to understand the request and think about what’s important and why.

Why RPC First?

I’ll get into the technical details of the implementation in subsequent posts, because there’s a lot going on here, but it’s important to understand the implementation goals Patrick and I sketched out in that very short Reddit subthread.

The Dogecoin RPC mechanism, inherited from Bitcoin, is a very low-level, bare-bones way to communicate with a running node. If you use the dogecoin-cli program or open the Debug console in the GUI (or make HTTP requests with curl or a library), you’re interacting with the core via this mechanism.

RPC stands for “remote procedure call”, which you don’t have to understand beyond “making requests of something running somewhere else”. That somewhere else can be a machine in the cloud or a little Linux box under your desk in the corner or a different process on the same machine you’re using now. That’s not as important as the fact that you’re interacting directly with the node itself.
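
For example, here’s one way to poke at a running node from a program of your own. This is a sketch of mine, not anything from the Core: it assumes a local node with RPC enabled (server=1 and rpcuser/rpcpassword set in dogecoin.conf), the default mainnet RPC port of 22555, and libcurl installed. It calls the existing getconnectioncount command, which reports how many peers you have right now, the very number this whole adventure is about limiting.

    // Minimal JSON-RPC request to a local node over HTTP, using libcurl.
    // Assumes rpcuser=user, rpcpassword=pass, and the default RPC port 22555.
    #include <curl/curl.h>
    #include <iostream>
    #include <string>

    // libcurl write callback: append the response body to a std::string.
    static size_t write_cb(char* ptr, size_t size, size_t nmemb, void* userdata) {
        static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
        return size * nmemb;
    }

    int main() {
        const std::string url = "http://127.0.0.1:22555/";
        const std::string body =
            R"({"jsonrpc":"1.0","id":"demo","method":"getconnectioncount","params":[]})";

        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        if (!curl) return 1;

        std::string response;
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_USERPWD, "user:pass");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

        CURLcode rc = curl_easy_perform(curl);
        if (rc == CURLE_OK)
            std::cout << response << std::endl;  // e.g. {"result":8,"error":null,"id":"demo"}
        else
            std::cerr << curl_easy_strerror(rc) << std::endl;

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }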

Because this is the lowest-level way to interact with the Core as it’s running, it’s the plumbing of the Core itself in many ways. Think of it like an unfinished staircase in a house under construction, just some plywood and nails and braces. It’s not necessarily pretty, and it’ll look better with balusters and nice finish and paint and maybe a lovely stair runner, but you can get up and down. Just don’t lean too far one direction, or you might find yourself taking a tumble and waking up with a goose egg.

In other words, doing the RPC feature first would let me prove that it works, especially with automated tests, and then I could worry about making it really easy to use for people who prefer things a little more polished.

Maybe “polish” is the wrong word; there’s nothing inherently better or worse about the GUI or RPC. They’re just different.

What Needs to Work?

At its heart, this code had to do two things:

  • provide a way for users to change a value that the code already supported
  • make sure that change actually took effect

My confidence was high when I first volunteered to write this code, because the Core already supported this limit as an option (maxconnections) in the configuration file it reads at startup. That meant that the system as a whole already had an idea of this limit. I wasn’t introducing anything new by way of a limit; the only thing I had to introduce was the idea of that limit changing over time.

The only thing.

I had the advantage of being able to read the relevant networking code which already checked this limit every time a new node tried to connect.

What would I need to finish the job? Not much!

  • add a new RPC command
  • update the existing value, after making sure that that value could actually change
  • prove that it worked
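
Stripped of everything that makes them real, those three steps look something like this toy sketch. To be clear, this is my illustration, not code from the Core or from PR 2698; every name and number in it is made up for the example.

    // A toy model of the job: expose a setter for the limit, and make sure
    // the code that accepts new peers honors whatever the limit is now.
    #include <atomic>
    #include <iostream>
    #include <stdexcept>

    std::atomic<int> g_max_connections{125};   // illustrative default
    std::atomic<int> g_current_connections{0};

    // The "new RPC command": validate the requested limit, then store it.
    void set_max_connections(int requested) {
        if (requested < 0)
            throw std::invalid_argument("maximum connections cannot be negative");
        g_max_connections.store(requested);
    }

    // The check that already existed, run every time a peer tries to connect.
    bool accept_new_connection() {
        if (g_current_connections.load() >= g_max_connections.load())
            return false;                      // over the limit; turn the peer away
        ++g_current_connections;
        return true;
    }

    int main() {
        set_max_connections(2);                // pretend an RPC request just arrived
        for (int i = 1; i <= 3; ++i)
            std::cout << "peer " << i << ": "
                      << (accept_new_connection() ? "accepted" : "refused") << "\n";
    }

The real version has to make that update safe and visible to networking code already running on other threads, which is where the interesting details hide.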

Everything sounds easy when you ignore the important details, doesn’t it?

What Did I Learn?

The first 80% of the work actually was easy. Somewhere in the second 80%, I realized there were several things I didn’t know.

First, what are the allowable values for this setting? Obviously you shouldn’t be able to set the maximum number of connections to a negative value (although the code seems to allow that, which seems very wrong).

Can you set zero? Not much of a node you have there if so.

Can you set a million? Your computer will probably complain.

What happens if you increase the number? That’s easy. The code already does the right thing.

What happens if you decrease the number? That was less easy, not just in figuring out what the right thing is but in proving that the code does the thing we decided was right.
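
To make the decrease case concrete, here’s a toy illustration of the choice involved. Again, this is my sketch rather than the PR’s actual logic; it just contrasts the two obvious policies for a limit that falls below the current peer count, without claiming which one we picked.

    // Illustrative only: what it might mean when the limit drops below the
    // number of peers already connected. Neither policy here is claimed to
    // be what PR 2698 does; they are simply the two obvious choices.
    #include <algorithm>
    #include <iostream>
    #include <vector>

    struct ToyNode {
        int max_connections = 8;
        std::vector<int> peers{1, 2, 3, 4, 5, 6, 7, 8};   // pretend peer IDs

        void set_max_connections(int requested, bool drop_excess_now) {
            max_connections = std::max(requested, 0);
            if (drop_excess_now && static_cast<int>(peers.size()) > max_connections) {
                // Aggressive policy: disconnect peers until we're under the limit.
                peers.resize(max_connections);
            }
            // Gentle policy (drop_excess_now == false): keep existing peers and
            // refuse newcomers until natural churn brings the count down.
        }

        bool accept_new_peer(int id) {
            if (static_cast<int>(peers.size()) >= max_connections)
                return false;
            peers.push_back(id);
            return true;
        }
    };

    int main() {
        ToyNode gentle, aggressive;
        gentle.set_max_connections(4, /*drop_excess_now=*/false);
        aggressive.set_max_connections(4, /*drop_excess_now=*/true);
        std::cout << "gentle: " << gentle.peers.size() << " peers, new peer "
                  << (gentle.accept_new_peer(9) ? "accepted" : "refused") << "\n";
        std::cout << "aggressive: " << aggressive.peers.size() << " peers, new peer "
                  << (aggressive.accept_new_peer(9) ? "accepted" : "refused") << "\n";
    }

Either policy is easy to write; proving that a running node actually does the one you chose is where the tests earned their keep.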

Even with all of that, for now you still have to use an RPC command to change this value, but the point stands: you can change it while your node is running.

Don’t lose sight of that. We’ll get the GUI in place (though you can use the RPC console from the Debug menu if you really want right now). With the upcoming 1.14.6 release, you can adapt your node to your own specific conditions of bandwidth and memory and CPU usage without taking your node offline, waiting for it to restart, and waiting for other nodes to reconnect. This makes your life easier and makes the network as a whole stronger!

That’s the most important thing I learned: there are plenty of things we developers can do to make the network more robust and easy and pleasant to use. Keep your eyes open for opportunities like this and keep asking questions and suggesting features and improvements.