Years ago I was working on a project that had a sort of “service locator” pattern in it. This is a memory about how it got replaced with dependency injection.
I put “service locator” in quotes because a normal service locator pattern has a central registry that services are registered in under known keys; then, in order to locate a service, you just pass the service key to the locator.
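A minimal sketch of the classic form (all names below are illustrative, not from any particular system) looks something like:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A classic service locator: services are registered under string keys
// and looked up by the same key wherever they are needed.
final class ServiceLocator {
    private static final Map<String, Object> SERVICES = new ConcurrentHashMap<>();

    static void register(String key, Object service) {
        SERVICES.put(key, service);
    }

    static Object locate(String key) {
        Object service = SERVICES.get(key);
        if (service == null) {
            throw new IllegalStateException("No service registered for key: " + key);
        }
        return service;
    }
}
```

A lookup is then just ServiceLocator.locate("userService"), with a cast to the expected type; that unchecked cast is one reason people build “type safe” variants.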
When services depend on each other you try to keep it out of the initialize step, or at a minimum keep it as a directed acyclic graph inside the initialize step so that they won’t deadlock.
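One way to honor that constraint is to model the initialize step explicitly as a dependency graph and initialize in topological order, failing fast on cycles. This is a sketch of the general technique, not code from the system:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Initialize services in dependency order: each service lists the keys it
// depends on, and we topologically sort so that nothing initializes before
// its dependencies. A cycle means the graph is not a DAG, and we fail fast
// instead of deadlocking at runtime.
final class InitOrder {
    static List<String> order(Map<String, List<String>> deps) {
        List<String> sorted = new ArrayList<>();
        Set<String> done = new HashSet<>();
        Set<String> inProgress = new HashSet<>();
        for (String key : deps.keySet()) {
            visit(key, deps, done, inProgress, sorted);
        }
        return sorted;
    }

    private static void visit(String key, Map<String, List<String>> deps,
                              Set<String> done, Set<String> inProgress,
                              List<String> sorted) {
        if (done.contains(key)) return;
        if (!inProgress.add(key)) {
            throw new IllegalStateException("Dependency cycle at: " + key);
        }
        for (String dep : deps.getOrDefault(key, List.of())) {
            visit(dep, deps, done, inProgress, sorted);
        }
        inProgress.remove(key);
        done.add(key);
        sorted.add(key);
    }
}
```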
This is almost, but not quite, entirely unlike what this system did.
The System’s Design
While the system was clearly inspired by a service locator pattern, in an effort to make it “type safe” it had made certain allowances in its design:
Initialization happened entirely in the constructor and with interdependencies between the classes:
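The shape of the problem, reduced to a sketch with invented names, was something like this:

```java
// A sketch of the anti-pattern: everything is built in Root's constructor,
// and services reach back into the half-built Root.
final class Root {
    final ConfigService config;
    final UserService users;

    Root() {
        // Order matters enormously here: if UserService were constructed
        // first, it would see a null ConfigService and blow up.
        this.config = new ConfigService();
        this.users = new UserService(this); // hands out 'this' before Root is done
    }
}

final class ConfigService {
    String get(String key) { return "value-for-" + key; }
}

final class UserService {
    UserService(Root root) {
        // Reaching through the partially constructed Root: this works only
        // because ConfigService happens to be assigned first.
        root.config.get("user.table");
    }
}
```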
Services could (and did) refer to other services outside of their constructors, which may or may not be initialized. These methods would be called by other services inside their constructors.
Among other consequences:
It meant that the order in which the services were created in the constructor was extremely fragile.
You had no way to know, when you added a new call to a service, whether any of its dependencies were still unresolved.
It was completely untestable and made new code difficult to test. This was made worse by constructors engaging in behavior like file IO or starting threads.
This company at the time was very anti-testing for a variety of reasons, but part of it was that writing tests with a system like this led to brittle tests that were difficult to write and provided minimal value.
Purpose of the Design
The (unfulfilled) idea was that you could have various types of system all living together not just in one code base, but one binary and spin up a server with radically different characteristics through configuration alone.
This was never followed through with, but it did mean that a lot of Root code would end up in sometimes bizarre tools that really didn’t need to be running, say, the production user management system.
This was not something that anyone really wanted to keep: it was a vision that never came to fruition, and no one really thought it would ever come to be, but a lot of the design choices were predicated on this assumption.
So I Set Out To Fix It
Fixing this “service locator” became a major focus for me. I wanted to convert us to a dependency injection model and had chosen Guice as a framework. Guice had several advantages over other systems:
It required no mucking about with the build, which at that point was something incredibly perilous to change (another team was working on this problem).
It had support for a lot of weird edge cases, such as multibinding, circular dependencies, and things of that nature. These aren’t necessarily desirable, but they were already extant in the code and we didn’t know where all of them were.
It worked off the standard JSR injection patterns, which facilitate doing manual injection and make testing easier.
I got a few people together who were on board, got buy-in from different parts of the org, and off we went.
Can we start from the bottom up?
The original idea was to start at the bottom and work my way up:
Where possible, it would also help to bring pieces into a test harness, even if we couldn’t write any tests for them yet.
This approach, which focused on the degree of coupling, had several advantages:
Minimally Invasive Starting with the lowest pieces of the puzzle and working up meant that, especially at first, the components did not need excess dependencies and the modules could be kept relatively simple. It also meant that each individual code change would be small and would be unlikely to impact anyone who wasn’t working on that exact class.
Fast Benefit It could be done in small pieces—two hours here, a day there—rather than requiring a large amount of work up front.
Isolated Most of the classes that fit this description were not in the critical path per se, or were in the critical path but their instantiation and teardown were not in the critical path. This meant that we could make the changes without radically altering the performance or memory characteristics of the system, proving that we wouldn’t impact things too badly before moving to the more critical components.
The limitations, however, quickly became apparent as well:
Because of the mass of interrelated dependencies, we ended up needing multiple injectors that we would build on in later steps.
It turned out that very few classes actually met these criteria. Adding additional cases got more and more difficult, particularly as we started encountering classes with 15+ dependencies.
It was actually fairly difficult to map how many dependencies a given class had. Because A would call B during the instantiation of C, and because there was a lot of code that would do root.getA().getB().getC().doSomething(), finding which class to pick next was relatively challenging.
But the real kicker was this:
People who were doing development would often reach for a tool that was on Root but then get confused on “how to add it the right way.” They would add dependencies that were not ready to be brought in, and this meant that the entire class would need to be revisited.
This also meant that rather than doing dependency injection, a lot of classes would end up with a hodgepodge of dependency injection and service lookups… and we were inadvertently encouraging developers to make more of a hodgepodge going forward rather than encouraging them to clean things up.
We had created a situation where there were pretty much two people who could actually improve the health of the code and drive the project forward.
This was, suffice it to say, undesirable.
Enter Project Root Canal
Instead of going at it from the bottom up, what if we went at it from the top down, focusing on the order of initialization rather than the degree of coupling?
Basically:
Because most of these components needed explicit initialization, this also gave us an opportunity to wrap a lot of the tooling in Guava services, which gave us better error handling and lifecycle management.
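The essential move was separating construction from startup. Guava’s Service types (AbstractIdleService and friends) provide this along with error handling; the core idea, sketched here without the Guava dependency, is just:

```java
// The idea behind wrapping tooling in lifecycle-managed services: the
// constructor only wires dependencies, while expensive or risky work
// (file IO, threads, sockets) happens in an explicit start(). In the
// real migration this role was played by Guava's Service types.
interface Lifecycle {
    void start();
    void stop();
    boolean isRunning();
}

final class IndexService implements Lifecycle {
    private volatile boolean running = false;

    IndexService() {
        // Wiring only -- nothing observable happens here, so the class
        // is safe to construct in tests.
    }

    @Override public void start() { running = true; /* open files, spawn threads here */ }
    @Override public void stop()  { running = false; }
    @Override public boolean isRunning() { return running; }
}
```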
Dependencies that were not ready to be converted to DI wholesale could still inject Root and use that object, giving a stopgap measure to keep us from having to convert the entire system in one throw.
Basically:
Rather than start at the bottom and work our way up, we were starting at the top and working our way down.
Rather than attempting to eliminate the Root object per se, we were hollowing it out and turning it into a wrapper for the initialization.
The outer structure of Root would remain exactly the same, but the inner structure of it became something simpler.
Having decided on a course of action, the initial work to get the basic framework in place took around two weeks. This kicked off the iterative process, which took months, of moving every component over to the new system.
Making it Readable
Guice has a reputation for making code hard to read. This is especially true when it is used in highly complex ways, with architectures that involve lots of options in their module configurations (what is Foo bound to? who can say!). We adopted several principles in this design to make sure that we didn’t end up in a worse position than where we started:
Every package should contain exactly one (1) public module. No bindings should reach outside of their module if it is possible to avoid it (no impl packages). There was some flexibility on this point for multibindings. By convention this public module was named <Package>Module.
Every package was responsible, within its public module, for injecting the modules of child packages. There were a handful of exceptions to this, but we tried to make them as explicit as possible. What this meant was that, combined with (1), even if the classes had dependencies all over the place, the modules would form a tree that was identical to the package hierarchy.
Modules should contain no conditional logic, with only a handful of exceptions.
Everything should be as explicit as possible: no implicit bindings, always use @Inject on constructors, and try to follow the Law of Demeter[1].
All binding logic should be inside of the modules and their associated Providers. No using @ImplementedBy or @ProvidedBy.
The goal here was to simplify what we could in Guice and make it as simple as possible to find and diagnose problems. You should never have to wonder “where was this bound.” If you wanted to know the implementation for com.example.foo.Bar you looked for com.example.foo.FooModule as a starting point, and you knew it had to be bound somewhere within a Module inside of com.example.foo. This limited the search scope to a handful of classes, usually only one or two.
Using these rules, you could pretty much always find the implementation of a given object with grep or by hand, if need be.
It also simplified reasoning about the module layout if you always knew that com.example.foo.FooModule would always inject com.example.foo.bar.BarModule and com.example.foo.baz.BazModule.
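Under these conventions a package’s public module ends up small and predictable. A sketch of what com.example.foo.FooModule might look like (illustrative names; this is a configuration fragment, not runnable standalone code):

```java
package com.example.foo;

import com.google.inject.AbstractModule;
import com.example.foo.bar.BarModule;
import com.example.foo.baz.BazModule;

// The single public module for com.example.foo: it binds this package's
// own types and installs exactly the modules of its child packages, so
// the module tree mirrors the package tree.
public final class FooModule extends AbstractModule {
    @Override
    protected void configure() {
        // All bindings for this package live here (or in package-private
        // sibling modules) -- never in another package, and never via
        // @ImplementedBy or other implicit mechanisms.
        bind(Bar.class).to(DefaultBar.class);

        // Child packages are installed by their parent, and only here.
        install(new BarModule());
        install(new BazModule());
    }
}
```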
Getting Others On Board
There’s a lot more to making a change of this scale and scope than just the technical work of migrating the code. A lot of other things had to start happening simultaneously, and ultimately the success of a project like this depends on more than a small group working on it. We needed to get everyone to join in on the process.
Many of the developers had started their careers with this system and only had peripheral knowledge of dependency injection as a set of principles. This necessitated a lot of different approaches to try to make sure that the ideas were widely disseminated:
Holding small group workshops with practice and discussion elements.
Larger sessions that were more theory focused.
Providing resources in the form of books, articles, and the like.
One-on-one work with individuals, especially with help debugging or tracking down problems.
Copious code reviews.
It also required clearing up some misconceptions:
That “dependency injection” is a framework. Essentially thinking that DI = Spring, Guice, Dagger, etc.
That merely being instantiated with guice would significantly alter the runtime performance characteristics.
That this was the end of the journey and after this we would be in a magical place.
Now for the Test Harness
Meanwhile, a senior SDET had been working on trying to get the entire thing into an integration test harness that could run against a known database.
To do this, we made sure that there were tools in place that allowed them to swap out the injector in test, and we prioritized the pieces that did things that were really undesirable in a testing situation. Things like interacting with a database or opening a network socket inside of a constructor. As much as possible we tried to move these into Guava services that we could initialize separately from their construction.
Now whenever someone would instantiate the service, a series of steps took place that most programmers never had to worry about or deal with:
With that done we could add a few static methods that allowed us to swap out the modules and we were good to go.
Well, Almost
Turns out that people have a strong reflex when they see a class that looks like this:
```java
Foo(Root root) {
    this.root = root;
}
```
Become this:
```java
@Inject
Foo(DependencyA a,
    DependencyB b,
    DependencyC c,
    DependencyD d,
    // 12 more dependencies
    DependencyQ q) {
    // ...
}
```
They have some understandable twitch reactions and second thoughts.
The thing was that, by and large, those dependencies were preexisting. It’s just that previously they were pulled in deeper in the code, when root.getQ() was called. The change wasn’t adding dependencies; it was revealing the dependencies that already existed.
Refinements and Iterations
Along the way we had several missteps and learning experiences that forced us to reevaluate, backtrack, or do things differently.
Multiple Injectors
Our early efforts involved building multiple, chained injectors. This was done because of the early design philosophy of starting at the bottom and working our way up, as we often had to deal with situations where a (non-injected) object needed to use the result of injection before something else (that was injected) could be built.
This proved to be moderately disastrous as an approach:
This made it extremely difficult to test. The construction process became fragile, and any change to ordering or system construction would cause the test harness to break.
Because some steps depended on the initialization of previous steps, it meant that you almost couldn’t use a ServiceManager and were instead stuck with initializing every service manually.
It became very difficult for engineers who weren’t extremely familiar with the ins and outs of Guice to modify the system in any way, which was the opposite of what was desired.
PrivateModule
When we first started, we used PrivateModules to strictly segment the code areas. This way the packages very rarely impacted each other. Over time, it was recognized that this was not ideal.
PrivateModule did not work with a lot of the tooling that makes Guice easier to debug, visualize, or test.
It ended up making it harder to share certain key, central resources that were already shared. It became challenging to know whether it was more appropriate to declare a resource in the private module or in a higher level module, or to find resources that might be useful that had already been bound somewhere.
It added another layer of conceptual complexity to an already complex system.
In the end, we did away with virtually all of the private modules, moving to binding annotations/qualifiers instead.
Testing Moving Ahead of Refactoring
In several cases testing moved ahead of refactoring, requiring us to break and fix a large body of tests that had been written around the system’s old quirks. This was especially true where we had started from the bottom up, and it remained a problem until the changes were finalized.
Luckily, the testing group was completely on board with our work, which made it an easy sell in a lot of ways, but it still meant there was some avoidable churn in how we went about it.
Is That a Singleton?
Guice likes to deemphasize the use of singletons—for good reasons—when they aren’t absolutely required. This interacted oddly with some elements of the system that depended on something being a singleton and never documented it.
This was particularly challenging for sets of objects that depended on shared locks (yes, this was a thing), where an object had some slow memory leak that hadn’t been previously detected, or where a singleton was being used as a form of memory control. In these cases there was no obvious reason why a singleton was the right choice—and sometimes it was in fact the wrong choice—but Bad Things™ would happen when that conceit was removed from the system.
This was also challenging because these sorts of conceits could produce very subtle bugs. To catch them we had to do a lot of load testing, because the problems would only manifest under load.
Conclusion
Ultimately the project was what I’d term successful: we got the system into a stable, testable state.
So that’s me getting the story of how we tore apart the central conceit of a system out of my system.
References
[1] K. J. Lieberherr, I. Holland, and A. J. Riel, “Object-Oriented Programming: An Objective Sense of Style,” SIGPLAN Notices, vol. 23, no. 11, pp. 323–334, Jan. 1988, doi: 10.1145/62084.62113. [Online; accessed on 25 April 2022]
Pretty much every time anything that vaguely looks like a union exists, there is a group of consumers who start calling for a boycott. They will circulate this message widely and make it seem like “the way to support the union.”
But in truth these efforts can actually do serious damage to the actual organizing.
Types of Boycotts
There are essentially four broad categories of things that often get called boycotts (there are other models, but this is one that I like):
Consumer Boycotts
Solidarity Boycotts
Symbolic Boycotts
Moral Purchasing
All four of these get called “boycotts” but in truth refer to very different things. What it comes down to is “what are you trying to accomplish” and “with what power.”
Sometimes things also involve elements of multiple types, but that could be another essay.
Consumer Boycotts
When a man takes a farm from which another has been evicted, you must shun him on the roadside when you meet him – you must shun him in the streets of the town – you must shun him in the shop – you must shun him on the fair green and in the market place, and even in the place of worship, by leaving him alone, by putting him in moral Coventry, by isolating him from the rest of the country, as if he were the leper of old – you must show him your detestation of the crime he committed. [Charles Parnell at Ennis in County Clare, which became the foundation of the pressure campaign against Charles Boycott]
A consumer boycott is the “classical” model of a boycott where a group of consumers organize to not engage with a business or set of businesses so long as some state or condition holds true.
There are variations, but the basic form is:
Find people who are consumers of the product
Get them to agree to not purchase the product until after a situation is resolved
Keep up the pressure and continue to organize until the company relents
Resume normal purchasing behavior once victory is won
For example, there have been a few successful boycott campaigns around advertising on some show or another. These work by looking for advertisers on the show, finding a group of people who might normally buy those products—regardless of whether they watch the show in question—and getting them to promise not to purchase until the company stops advertising.
Campaigns like this depend on people who are purchasing the product withholding their purchases for a period of time until the boycott has ended. Building a consumer boycott requires finding people who use the service or buy the product and convincing them to forgo a short-term good in order to pressure the company in question.
I can’t boycott men’s soccer because I don’t watch men’s soccer and am not going to start pretty much no matter what they do. This form of a boycott relies on consumer purchasing power, and if I’m not willing to purchase the product even if things change, then I can’t exactly threaten them by removing what I am already not giving them. If I wanted to build a boycott to pressure them, I would have to find a way to convince people who are true fans of men’s soccer, not just people like me.
Solidarity Boycotts
Solidarity boycotts are where you act at the behest of the workers and refrain from purchasing a product at their request. Usually while they are striking.
The idea here is that you are amplifying the union’s power: by not purchasing, you are demonstrating to the company that the union has teeth and that failure to acquiesce will be disastrous to the company.
There have been many successful solidarity boycotts. Both primary boycotts (boycotting a grocery store because the workers are striking over their working conditions) and secondary boycotts (boycotting a grocery store because they sell, e.g., grapes and the grape workers are striking over their working conditions).
But it isn’t always the right strategy. A boycott is an escalation, something that unions want to keep in their pocket; jumping to it right off the bat means they don’t have that escalation tool for later. There are even some situations where not purchasing when you normally would can act like a scab by reducing the amount of work that needs to get done. It is difficult to know for sure unless you work at the company, so it works best to listen to the guidance of the union to know what to do.
Sometimes it is also more nuanced than simply “don’t buy”:
It should go without saying that you aren’t supporting the workers if you do this of your own volition, and boycotting without the union’s request can seriously hurt their organizing efforts.
Symbolic Boycotts
It’s more “symbolic than substantial,” but that doesn’t mean it’s consequence-free.
For a lot of boycotts, the intent is not actually to put pressure on the company per se, or perhaps only indirect pressure for the future. The goal is to get people activated about some other issue. By calling for a boycott, the hope is to raise awareness about that issue and bring focus onto something that is being neglected.
Symbolic boycotts are mostly good for something other than applying pressure to a company: bringing public attention to a much-ignored issue, often one that is shared between groups. They can be used by unions for structure testing, but for the most part their goals aren’t about worker power; they are about shining a light in the darkness.
In this way of thinking, #BoycottMulan wasn’t about Mulan at all. Not really.
It was about Hong Kong.
The point was not reducing support for Mulan. The point wasn’t to make Mulan crash. The point wasn’t to put the screws to Disney or make Disney change their behavior (though if it made things sufficiently uncomfortable for them, that was a nice perk). The point wasn’t what the actress said specifically or why she said it. The point was that China’s treatment of Hong Kong is reprehensible, and what the actress said and what happened was a vehicle to shine a light on it. That doesn’t mean that what she said was remotely okay, but it does mean that the reason why she said it is kind of irrelevant: it doesn’t matter if she must say things like that to be successful in China, because the point isn’t about her, or Disney per se; it’s about China.
I’m not asking you to boycott this series so much as I’m asking you to CARE ABOUT GENOCIDE.
Similarly, Cixin Liu’s comments and fame are being used, via calls to boycott Three Body Problem, to bring attention to the genocide happening right now, right this second. It doesn’t matter why he said it; that’s not the point. The problem is that not enough people are aware of the atrocity happening.
With symbolic boycotts it often isn’t relevant if you ultimately end up seeing it, reading it, etc. Because the goal was much greater.
Moral Purchasing
The way we spend our money can help to change the world.
Moral purchasing—typified by movements and groups such as Ethical Consumer—is not attempting to influence corporate behavior so much as individual behavior. Moral purchasing is often conflated with a boycott but is something else entirely: an attempt to make the most moral choices possible given the capitalist dystopia.
To the extent that these can be considered “boycotts” they are qualitatively different from consumer boycotts: the goal is not to get back to eating veal (or meat), and the companies that produce veal are not being boycotted (usually). By spending money and/or putting attention on “more ethical” causes, the goal is to shape the world accordingly.
Supporting Unions
A lot of people like to insert their own views about what “should” support a union, or they want to “show solidarity” with a union, and so they reach for the tool they have seen used in the past: the boycott.
Sometimes they’ll even put the union logo or branding onto it, making it look to all the world as if it is actually coming from the union.
This came up pronouncedly when a group of people decided to “boycott Amazon” during a union drive, which was exactly the wrong time to do so:
"A boycott like this really just plays into management’s hands by giving ammunition to the idea that this is going to be about conflict all the time and that you’re going to have outside people interfering in your life, which isn’t what a union is," said Connor Lewis, editor for the labor publication Strikewave.
This also came up during the IATSE strike authorization:
Strike strategy is all about building to a crisis for the employer. Most (if not all) strike strategies and plans include a series of escalations before and after a strike.
It is incredibly important to listen to the union in these situations. Not just individual members—who do not speak for the union as a whole—but to the union itself. You aren’t helping the union if you aren’t doing what they ask and are speaking over them.
The idea of “consumers should never cross a picket line” is a good starting point, but it is just that: a starting point. When there is no picket line, you aren’t “crossing the picket line” by going to a company.
Another variation on “speaking over the union” is when people insert their own views over what the union is promoting. For example, a relatively common pattern is: a union makes a specific, limited request, and someone says in response “Yes! Never buy Apple products! They are anti-consumer!”
Regardless of your position on Apple’s market position, this is speaking over the workers. It isn’t doing what the union has requested, it is substituting your own agenda for theirs.
This isn’t to say that you should purchase from Apple, or Amazon, or whoever if that doesn’t fit within your morals. That is instead to say that you aren’t supporting the union by telling people “never purchase” when the request is not that.
Similarly, it is important to communicate within the domain of the requested action. So if the request is “don’t buy made-in-Mexico Nabisco products” then the request isn’t “don’t buy Nabisco products” without qualifier.
It’s really incredibly frustrating to see eight million twitter messages that all read “BOYCOTT TO SUPPORT THE UNION” and calling people “scabs” if they don’t when there was no request to boycott and the union does have specific requests that aren’t being shared.
It sucks the air out of the room.
Conclusion
This is a basic rundown of how I think and talk about types of boycotts. Some main takeaways:
Not all things that are called boycotts are actually boycotts.
The moral calculus on breaking ranks varies with the circumstances. Not participating in a symbolic boycott is a very different situation from not participating in a solidarity boycott, and the infrastructure of support looks very different too. In a symbolic boycott there is very rarely any infrastructure to support people who might be impacted; in a solidarity boycott you can often find resources and guides to help ease the pain, at least for a little while.
Listen to the organizers on the ground. You don’t have to agree with them, but they are usually the ones with the most context and understanding, so let them lead.
References
[1] T. E. Hachey, J. M. Hernon, and L. J. McCaffrey, The Irish Experience: A Concise History. Armonk, N.Y. : M.E. Sharpe, 1996.
[2] O. P. I. R. G. Toronto, “Boycott Nestle,” Alternative Toronto. 1977. [Online; accessed on 16 April 2022]
[3] J. Kirby, “What the US’s diplomatic boycott of the 2022 Beijing Olympics does — and doesn’t — mean,” Vox, 10 Dec. 2021, [Online; accessed on 16 April 2022]
[4] J. Hunt, “Why Shop Ethically?,” Ethical Consumer, 06 Apr. 2021, [Online; accessed on 16 April 2022]
[5] A. Mak, “The Bizarre Amazon Boycott That Its Unionizing Workers Never Asked For,” Slate, 09 Mar. 2021, [Online; accessed on 17 April 2022]
One of the skills that I use, teach, and consider fundamental to being a good engineer is the idea of a “rule out.”
The idea is basically that, when debugging, you reverse the question from “what could be happening” to “how do I demonstrate that this set of things is not happening.” Not asking “what will prove this theory correct” but rather “what will rule out these other possibilities?”
The term comes from medicine, where it is a critical component of differential diagnosis.
A Definition With Wordle
By now everyone has at least a passing familiarity with Wordle. It’s a game where you try to guess a word in six tries, with it letting you know, at each guess:
Which letters you get in the correct position
Which letters appear in the word but are in the incorrect position
Which letters do not appear in the word
In a real way, each guess you take in Wordle is a theory about which words it can’t be. From the start you have no information with which to make a guess; with each guess you gain more information, so the best strategy is often to focus on eliminating as many words as possible.
For the first few guesses especially, you are more focused on what it can’t be rather than trying to guess what it is.
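This elimination view can be made concrete: each piece of Wordle feedback is a predicate that rules out candidate words. A sketch (not a full solver; the word lists are invented):

```java
import java.util.List;
import java.util.stream.Collectors;

// Each piece of Wordle feedback is a rule out: a predicate that removes
// candidate words, narrowing the space rather than confirming a guess.
final class WordleRuleOuts {
    static List<String> ruleOut(List<String> candidates, char letter, int position, String feedback) {
        return candidates.stream().filter(word -> switch (feedback) {
            case "green"  -> word.charAt(position) == letter;    // letter fixed at this position
            case "yellow" -> word.indexOf(letter) >= 0
                             && word.charAt(position) != letter; // present, but elsewhere
            case "gray"   -> word.indexOf(letter) < 0;           // absent entirely
            default       -> true;
        }).collect(Collectors.toList());
    }
}
```

A gray ‘c’ in position one, for instance, instantly eliminates every candidate containing a ‘c’ anywhere.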
This is a rule out.
An Example
We had a problem at work that had occupied three developers over two days. The compiler was hanging and they couldn’t figure out why.
It took me thirty minutes to solve it and come up with a solution.
Not because I am all that, but because I knew the system well enough to start doing rule outs immediately.
Everyone in this group was convinced up one side and down the other that it had something to do with Guice—a library we were using. They couldn’t figure out how, but they knew this was the problem, because it was the element in all of this that they had no experience with.
I knew from the symptom—the compiler hanging—that it almost certainly couldn’t be Guice. Guice doesn’t affect the compiler and is largely downstream of the compiler. I devised a test to double check this case and hit the compiler hang without the guice annotations.
Great.
What else could it be?
I know that the Java compiler has quirks, but is usually pretty solid and doesn’t randomly hang. So what is changing the compiler?
I run the compiler and I note that we aren’t getting to the test phase, so it isn’t jacoco—a library that is frequently responsible for this sort of thing because it munges bytecode.
We also have a static analysis tool that runs its own version of the compiler as part of the compile step.
I disable the static analysis tool. Rerun it. No hanging. Culprit found. A little more diagnosis and I find the specific rule and fire a commit to disable it then file a bug with the team that handles the static analysis tool. I quickly do differential diagnosis and also determine how we can circumvent the problem in the meantime (it has to do with a final check, so we can either disable the final check or just… not trip the final value check).
You are paying for the 20 seconds of pushing the button and the 20 years of experience to know which button to push.
The Core Loop
The process is essentially one of making rule outs:
Identify the apparent error.
Determine what it can’t be.
Determine what it might be and start proving each one as Not the Problem, starting with the easiest ones and the ones that will clear the most possibilities.
Goto 1.
In any given situation with software, an error could be caused by an almost (countably) infinite number of things. So being able to quickly prune entire trees, and to know how to test in a way that rules out significant parts of those trees, becomes a critical skill.
Why This Works
We have several biases as humans that work against us for debugging:
We tend to view the things we are less familiar with as more likely culprits, and we tend to overestimate the likelihood of causes tied to recent events. This is a form of the availability heuristic.
We tend to, on finding a possibility, try to find answers that confirm our hypothesis and reinforce our beliefs, rather than answers that eliminate the hypothesis. This is a form of confirmation bias.
We tend to look mostly to the tools we know in order to diagnose and fix the problem, which is a form of anchoring bias.
Performing rule outs is a way of breaking these (and other, related) biases. It forces our brain to consider alternatives and, by considering the alternatives, allows us to more quickly eliminate possibilities and narrow down on an answer.
Basically: Before we can accept something as true, we must first/also prove that other possibilities are false.
Another Illustrative Example
A while back I was working with an intern who had been banging their head on a problem for about half a day. They were getting an error when passing a file to a parser. It looked basically like this:
They had an error and the parse wasn’t working. So they jumped to the piece they were least familiar with: the internal library.
They started setting breakpoints inside of the third-party library. They started evaluating the source code and trying to read it. They found not just the error coming out of it, but the error inside of the library that was leading to it. They were hypothesizing about whether it mattered that the input was a YAML file while the parser seemed geared for JSON.
All questions you need to be able to ask. Eventually.
Instead, we start with the error: An illegal argument exception that is caused by some sort of parsing error. It’s coming from a shallow place in the code too.
Then my brain immediately went to the possibilities:
It can be that the parser itself has a bug, but since it’s a pretty solid and stable parser that seems unlikely. We can validate this is the case later by directly passing in the file if all else fails.
It could be a bug in the interface between their tooling and the library, e.g., calling the wrong method, needing a flag to be set, or asking it for an object type it doesn’t know how to work with.
It could be a problem with what they are giving the library. Either because it is not being properly loaded or because it isn’t in a format that the library expects.
Confirm: They got the file from the library. This is one of their example files. It’s a known-good file. So it isn’t the second half of (3). This also gives us some confidence that it isn’t (1) (Core Loop Step 2).
Okay, so how can I rule out that it is a problem with what’s being passed to the library? (Core Loop Step 3) Let’s load it out and print it as a first step. This will tell us if what is being passed in is what we expect.
Tada. Found the problem.
The contents of the file weren’t being loaded into the object, so an empty file was being passed along. The parser didn’t know what to make of that and couldn’t figure out how to fit it into the object type it was being asked to work with, so it was dying.
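The check itself is almost embarrassingly small, which is the point of a good rule out: it is cheap relative to the possibilities it eliminates. A sketch of that first step (the real code was internal; names here are invented):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// The cheapest possible rule out for "are we passing the parser what we
// think we are": load the file and look at it before the parser runs.
final class InputCheck {
    static String loadAndInspect(Path path) throws IOException {
        String contents = Files.readString(path);
        // Zero characters here rules out the parser as the first suspect:
        // the problem is upstream, in how the file is being loaded.
        System.out.println("Loaded " + contents.length() + " chars from " + path);
        return contents;
    }
}
```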
Conclusion
This is a difficult skill that takes practice and time, but it has helped me tremendously in my career. It’s a systematic approach to problem solving that I’ve personally found useful and that people who I’ve taught it to also seem to have found useful.