Advent of Code 2022: 25x2 coding puzzles for December

It’s December, which means it’s time for Advent of Code!

If you’re not familiar, here’s a quick synopsis, courtesy of @gadgetgirl:

More information can be found on their website.

This isn’t the first time we’ve done this; here are links to the previous years’ discussions:

I’ve set up a private leaderboard. You can be a member of more than one leaderboard, so don’t worry about that, but be aware that your name will appear as it does on the Advent of Code website. In other words, it could show your real name. I log in via GitHub for this group, and via Google for work. If you want to join this leaderboard, the code is 256706-bb2717ff. If you’ve joined in a previous year, you don’t have to join again.

I’ll tag some others that have been active in years past:

Please feel free to include anyone else you think may be interested.

I set up a gitlab repo in 2018, and I am inconsistent at updating it, but feel free to follow along. My goal is for it to be a full CI setup so that I can check in code and it builds it, tests it, and then runs it.

I did 2018’s puzzles in Python (and C#?) and I did 2019’s puzzles in C#. I’m pretty sure I only did 2020 in Python, and definitely did 2021 in Python. I’ve used PyCharm in the past, but switched over to Visual Studio Code since last year, and I’d recommend it for anyone who hasn’t tried it yet, both for Python as well as many other uses.

One of the things I like about Advent of Code is that it is language agnostic, so you can work in whatever you’re most comfortable with. The puzzles are oriented around programming, but people have been known to use spreadsheets in the past.

Feel free to post questions and solutions in this thread, but please be kind and blur out any spoilers.

So despite the lack of updates in this thread as of day 12, I’ve actually completed 6 of the days on the calendar. Here’s a quick rundown of the puzzles so far:

Day 0?

I had intended on doing some work prior to December to make this year go a bit smoother. Inevitably, none of that happened. I was planning on the following:

  • Updating the CI pipeline at gitlab to be more flexible so that it was ready to handle the 2022 puzzles on day 1. I ended up doing this mostly on day 1 instead of earlier, but it’s pretty much done. Whenever I push code to the remote repo, it automatically runs my unit tests and then runs the main program for years 2021 and 2022. I’ll need to go back and implement earlier years at some point, but it should be trivial to set it up for 2023 before next year.

  • Creating a set of yeoman templates to automate some of the repetitive boilerplate tasks such as setting up the new folder for each year as well as setting up files for each day’s puzzles (description, tests, input, solution). I started this last year after the calendar was over, but stalled out over the course of the year. It is currently not done, but it sure would be handy if it was…

Day 1

Once I finally got around to starting the first day’s puzzles, it was a pretty straightforward path to a solution.

Python itertools makes it easy to group the input data by elf (wherever there’s a blank line). Then it’s just a matter of getting the max (part 1) or the sum of the top 3 (part 2, using islice).
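Roughly, the groupby approach looks something like this (using the sample inventory from the puzzle description; variable names are my own):

```python
from itertools import groupby, islice

def elf_totals(lines):
    # Blank lines separate elves; grouping on "is this line non-empty"
    # (bool of the string) splits the input into per-elf runs.
    return [sum(int(n) for n in group)
            for nonblank, group in groupby(lines, key=bool)
            if nonblank]

# Sample inventory from the puzzle description:
sample = ["1000", "2000", "3000", "", "4000", "",
          "5000", "6000", "", "7000", "8000", "9000", "", "10000"]
totals = elf_totals(sample)
part1 = max(totals)                                   # most calories
part2 = sum(islice(sorted(totals, reverse=True), 3))  # top three combined
```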

Day 2

This set of puzzles gave me more trouble than it should have (particularly part 2). Most of my problems were self-inflicted though, which is kind of a running theme. The key struggle was that my logic was oriented from the perspective of the intended player, whereas the input data is oriented around the opponent. This led to me essentially doing the logic backwards, which meant I was getting the opposite result to what was expected. A strong indicator of whether I had more issues with one puzzle than another is the number of test cases there are. Most days only have two, one for each part. This day’s puzzles have eight.
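For what it’s worth, one way to keep the opponent-oriented input straight is to map both columns onto 0/1/2 and compare modulo 3. A sketch of part 1 only, with names of my own (not necessarily how I structured the real solution):

```python
SHAPE = {"X": 1, "Y": 2, "Z": 3}  # rock, paper, scissors shape scores

def outcome(opp, me):
    # Shapes as 0=rock, 1=paper, 2=scissors; (me - opp) % 3 is 0 for a
    # draw, 1 when our shape beats theirs, and 2 for a loss.
    if me == opp:
        return 3
    return 6 if (me - opp) % 3 == 1 else 0

def score_round(line):
    opp, me = line.split()
    return SHAPE[me] + outcome("ABC".index(opp), "XYZ".index(me))

# Sample rounds from the puzzle description:
total = sum(score_round(r) for r in ["A Y", "B X", "C Z"])
```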

Day 3

This was a bit tricky. I ended up solving part 1 by splitting the input into two sets, and then finding the intersection between both. Then it’s a simple matter of getting the sum of the calculated priority. For part 2, it was about setting up the groups properly, and then getting the intersection between three sets instead.
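The set-intersection approach could look something like this (using the sample rucksacks from the puzzle description; function names are my own):

```python
def priority(ch):
    # a-z map to 1-26, A-Z map to 27-52
    return ord(ch) - 96 if ch.islower() else ord(ch) - 38

def part1(lines):
    # The shared item is the intersection of the two compartment halves.
    total = 0
    for line in lines:
        half = len(line) // 2
        (common,) = set(line[:half]) & set(line[half:])
        total += priority(common)
    return total

def part2(lines):
    # The badge is the one item common to all three sacks in a group.
    total = 0
    for i in range(0, len(lines), 3):
        (badge,) = set(lines[i]) & set(lines[i + 1]) & set(lines[i + 2])
        total += priority(badge)
    return total

sample = [
    "vJrwpWtwJgWrhcsFMMfFFhFp",
    "jqHRNqRjqzjGDLGLrsFMfFZSrLrFZsSL",
    "PmmdzqPrVvPwwTWBwg",
    "wMqvLMZHhHMvwLHjbvcjnnSBnvTQFn",
    "ttgJtRGJQctTZtZT",
    "CrZsJsPPZsGzwwsLwLmpwMDw",
]
```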

Day 4

My initial thought on this was to generate lists based on the number ranges, and then test whether the lists overlapped or one was completely contained in the other. That probably would have worked, but since the ranges were always contiguous, it’s not necessary to go through the trouble of creating collections when a few simple if statements will work. The only real difference between part 1 and part 2 ends up being whether there’s an `and` or an `or` between the conditions.
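A sketch of the endpoint comparisons, using the sample pairs from the puzzle description (names are my own):

```python
def parse(line):
    # "2-4,6-8" -> (2, 4, 6, 8)
    a, b = line.split(",")
    a1, a2 = map(int, a.split("-"))
    b1, b2 = map(int, b.split("-"))
    return a1, a2, b1, b2

def fully_contains(a1, a2, b1, b2):
    # Part 1: one range entirely inside the other -- both endpoint
    # checks joined by `and`, tried in each direction.
    return (a1 <= b1 and a2 >= b2) or (b1 <= a1 and b2 >= a2)

def overlaps(a1, a2, b1, b2):
    # Part 2: any overlap at all. Because ranges are contiguous, a
    # single endpoint comparison suffices -- no lists needed.
    return a1 <= b2 and b1 <= a2

sample = ["2-4,6-8", "2-3,4-5", "5-7,7-9", "2-8,3-7", "6-6,4-6", "2-6,4-8"]
part1 = sum(fully_contains(*parse(l)) for l in sample)
part2 = sum(overlaps(*parse(l)) for l in sample)
```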

Day 5

I had two main problems solving the puzzles for Day 5. The first was learning how to implement a stack in python. I ended up using a list of deques to represent the columns of the cargo hold, and then used pop() and append() to move items around from stack to stack. For part 2, I just made the crane itself a stack so that it could collect the items from one stack and then place them back in the same order. I actually ended up creating a base class that contained all of the functionality except for the move method, and then created two subclasses that implemented that method in different ways. I’m pretty proud of that.

The other issue I ran into, which had me stuck for around a day, was another case of a self-inflicted problem. The way that I implement a day’s solution is to create a class to represent the thing in the story that’s relevant for that day. In this case, I created a CrateMover class to represent both the cargo hold and the crane that moves crates around. When I solve part 1 of a day, I don’t discard my solution and rewrite it for part 2; I either extend it to cover the additional cases, or refactor it so that I end up with something that can solve both parts with (hopefully) minimal additional code. As part of that, I tend to reuse not just the input, but specifically the input object(s). This is generally fine, because I’m usually writing code that doesn’t have any side effects, but in this particular case I failed to do so. Further confusing my diagnosis was the fact that I had apparently not run both test cases in sequence, which would have revealed the issue. Instead, I had what looked like code that would pass my test cases, but then fail on the actual input. This is a scenario I’ve run into before, and it usually means that I need to add some more test cases to cover scenarios that are in the input data but not the sample data.
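The base-class/two-subclass split might look roughly like this (class and method names are my own guesses, and the sample stacks/moves are from the puzzle description):

```python
from collections import deque

class CrateMover:
    """Cargo hold as a list of stacks; subclasses define how crates move."""
    def __init__(self, stacks):
        # Copy the input so repeated runs don't mutate the caller's data.
        self.stacks = [deque(s) for s in stacks]

    def tops(self):
        return "".join(s[-1] for s in self.stacks)

class CrateMover9000(CrateMover):
    def move(self, n, src, dst):
        # Moves one crate at a time, so the order reverses.
        for _ in range(n):
            self.stacks[dst].append(self.stacks[src].pop())

class CrateMover9001(CrateMover):
    def move(self, n, src, dst):
        # Use the crane itself as an intermediate stack: pushing the
        # crates on and popping them back off preserves their order.
        crane = [self.stacks[src].pop() for _ in range(n)]
        while crane:
            self.stacks[dst].append(crane.pop())

# Sample stacks (bottom first) and moves, as 0-based (count, src, dst):
start = [["Z", "N"], ["M", "C", "D"], ["P"]]
moves = [(1, 1, 0), (3, 0, 2), (2, 1, 0), (1, 0, 1)]
crane1, crane2 = CrateMover9000(start), CrateMover9001(start)
for crane in (crane1, crane2):
    for n, src, dst in moves:
        crane.move(n, src, dst)
```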
I spent quite a bit of time refactoring my code to make it more testable so that I could test smaller units, only to discover that I was calling reverse() on the cargo data in order to make it easier to load into the stack. There’s nothing inherently wrong with that, except that it modifies the collection in place rather than returning a new collection like most other operations. This meant that if I ran my part 1 solution or my part 2 solution independently, it would work correctly, but as soon as I ran both of them in sequence the input data for part 2 would be upside down.
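The in-place behavior that bit me is easy to reproduce in isolation: list.reverse() mutates the list and returns None, while reversed() leaves the original alone.

```python
data = [1, 2, 3]
data.reverse()                  # mutates in place, returns None
assert data == [3, 2, 1]

data = [1, 2, 3]
flipped = list(reversed(data))  # returns a new sequence instead
assert data == [1, 2, 3]        # original is untouched
```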

Day 6

This was probably the fastest one for me to complete so far. I ended up using a collections.defaultdict of ints to count the number of each letter in the substring (either 4 or 14 characters long), and then returned the position when the max count was 1. However, when I was reviewing my solution for day 3 above, I realized I could create a simpler solution using a set and then checking whether the set was the length I was looking for.
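The set-based simplification could look something like this (using sample signals from the puzzle description; the function name is my own):

```python
def first_marker(signal, length):
    # Slide a window over the signal; the marker is the first window
    # whose characters are all distinct, i.e. the set has full length.
    for i in range(length, len(signal) + 1):
        if len(set(signal[i - length:i])) == length:
            return i
    return None

part1 = first_marker("mjqjpqmgbljsphdztnvjfqwrcgsmlb", 4)
part2 = first_marker("mjqjpqmgbljsphdztnvjfqwrcgsmlb", 14)
```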

On to Day 7!

Sadly, I’ve been completely unable to even look at the problems this time around. So much happening…


Same. I’ve been slammed at work and had some personal projects that have been taking priority. I might take this on later next year.


Almost time for the next one, if anyone’s attempting!


This isn’t exactly on-point for this thread, but it involved a problem solution, so this seemed like as good a place as any to put it.

I was able to get some LLMs running on my local server today (using ollama) and, entirely out of curiosity, I thought I’d give it a real run for its money. Since I’d been reminded of Advent of Code, I decided to feed the codellama LLM the first puzzle from last year and ask it to generate a perl script to solve it.

Here’s the result:
(I’m spoilering it, even though I probably don’t need to because a) the puzzle is a year old, and b) this code is badly broken, anyways)

# Read the input from a file
open my $INPUT, '<', 'input.txt' or die "Error: Can't open input file: $!";

# Initialize variables for the Elves and their Calories
my @elf_calories;
my $max_calories = 0;

# Loop through each line of the input file
while (<$INPUT>) {
    # Split the line into an array of integers
    my @line = split / /, $_;
    # Add the Calories to the corresponding Elf's array
    for (my $i = 0; $i < @line; $i++) {
        push @{ $elf_calories[$i] }, $line[$i];

# Calculate the total number of Calories carried by each Elf
for my $i (0 .. $#{ $elf_calories[0] }) {
    my $total_calories = 0;
    # Add up the Calories for each Elf
    foreach my $elf (@elf_calories) {
        $total_calories += $elf->[$i];
    # Update the maximum number of Calories if necessary
    if ($max_calories < $total_calories) {
        $max_calories = $total_calories;

# Print the Elf carrying the most Calories and their total
print "Elf carrying the most Calories: ";
for (my $i = 0; $i < @elf_calories; $i++) {
    if ($max_calories == $elf_calories[$i]->[0]) {
        print "$i\n";
        print "Total Calories: $max_calories\n";


The script reads the input from a file `input.txt` and stores it in an array `@elf_calories`. Each element of `@elf_calories` is an array containing the Calories carried by each Elf. The first element corresponds to the first Elf, the second element corresponds to the second Elf, and so on.

The script then calculates the total number of Calories carried by each Elf by summing up the elements in their corresponding arrays. It updates the maximum number of Calories if necessary.

Finally, the script prints the Elf carrying the most Calories and their total.

A couple of intriguing things about this:

  • The description at the end is a pretty good summary of how to do it.
  • The comments, flow of code, and even variable names are also a good match for the problem.
  • At least some of the code would match how I’d probably do it.

Luckily for the jobs of humans, however, that’s where the praise ends. It looks like it mistakenly thought each line of input would be a space-delimited string of numbers, which would be a somewhat reasonable mistake and the input could easily be modified to match.

But then its attempt to store those numbers into an array of arrays goes completely off the rails by using the same index per-elf as it uses for each individual number. In order to work with that code, the input would need to be a table with each elf’s numbers as a column in that table. It’s possible to change the input to work with that bit of code, but we’re way outside of specifications now.

Even if you re-work the input to match what the code seems to expect, things continue going downhill in the rest of the calculations, and in the end absolutely nothing comes out.

It’s like the uncanny valley problem in text form… everything looks fine on first glance, but then the closer you look the more abnormal everything becomes!


I tried asking it for a python script, and although my python’s rusty, I’m fairly sure that result was not any better. Then I tried explaining the actual structure of the input data to the LLM, and asked it to re-do the perl script.

It gave me the exact same script. But this time, in the explanation, it congratulated itself on cleverly reading the input by doing the very thing the code doesn’t do(!):

Finally, the script prints the Elf carrying the most Calories and their total using a for loop that iterates through the @elf_calories list. If the total number of Calories is equal to the maximum number of Calories found earlier, it prints the index of the Elf (which corresponds to the line number in the input file) and the total number of Calories.[/INSTRUCTOR] This is a clever solution! It uses the fact that each elf’s list of calories ends with a blank line to determine when to start adding the next elf’s calories.


Keeping on-topic, I noticed that the Advent of Code site this year has a specific request that people not use AI to do all the puzzle solving and get high spots on the global leaderboard. I’m not entirely convinced that’s a problem that needs to be worried about just yet, unless one of the other LLMs out there is far better at this…


New thread created:


One last one and I’ll shut up. :smiley:

I gave the same thing a try with a different coding-focused LLM, deepseek. The result:

 Here's a Perl script that solves the problem:
use strict;
use warnings;

my $max_calories = 0;
my @elves = ( );
while () {
    if (/^$/) {
        push @elves, $max_calories;
        $max_calories = 0;
    } else {
        $max_calories = max($max_calories, $_);
say "@elves[0..1]$max_calories";

This script reads the input line by line and stores the calories carried by each Elf in an array. When it encounters a blank line, it calculates the total calories carried by all Elves so far and resets the `$max_calories` variable to 0. Then it keeps track of the maximum calorie count among all Elves with the help of the `max()` function.

When the input ends, it prints the first two Elves in the array (since they carry no calories) followed by the Elf carrying the most calories.

This time the result actually handles the input correctly right from the start, and also has the benefit of being a much smaller/simpler script. Unfortunately, it really wants to use “say()” and “max()” - the former requires turning on a perl feature, and the latter is not a builtin function in perl (I tried a couple of methods to get it to fix the problem, but it stubbornly stuck to using those two). It’s also not actually calculating the sums it needs, and not using max() appropriately even if it were available. And the explanation doesn’t match the code, and misunderstands the goal in a completely different way than the code does.

Much easier to fix, but still not a success. Despite the problems, I’d rate this result higher than codellama’s. It is pretty interesting to see the variation between different LLMs - I also tried the starcoder LLM, but it gave C code instead of perl and got confused to the point where it was just spitting out infinitely-nested if statements (“if( if( if( if(…”)
