During my first couple of weeks at work, I was obsessed with finishing tickets in as few days as possible. To be completely honest, I actually counted the number of tickets I completed each week and compared it to previous weeks. After bringing myself down once or twice, I quickly realised that this unhealthy exercise was flawed: some tickets are intrinsically more complex and take longer. With fresh enthusiasm, I changed the metric from tickets per week to lines of code per day.
The reasoning was that the difficulty of a ticket is directly proportional to the number of lines of code required to solve it. But any function of the line count, such as summing the changed lines, is still flawed. Like the previous metric, its value is heavily influenced by the task. For instance, if I changed a unit test with hundreds of lines of expected output, I would appear enormously productive, even though it might have taken less time than changing a few lines in the body of a core function.
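To make the flaw concrete, here is a toy sketch with entirely invented numbers: two hypothetical tickets, one mechanical and one genuinely hard, scored by lines changed per hour.

```python
# Toy illustration with invented numbers: lines changed vs. hours spent.
tickets = {
    "regenerate test fixtures": {"lines_changed": 800, "hours": 1},
    "fix core function":        {"lines_changed": 12,  "hours": 16},
}

for name, t in tickets.items():
    rate = t["lines_changed"] / t["hours"]  # lines per hour
    print(f"{name}: {rate:g} lines/hour")

# The mechanical fixture regeneration scores 800 lines/hour, the hard
# fix only 0.75 -- the metric inverts the actual difficulty.
```

The line-count metric ranks the easy ticket roughly a thousand times higher, which is exactly backwards.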
Naturally, my mind remained fixed on finding the optimal metric for evaluating my performance. Thinking about it at a meta level, this whole matter is probably linked to the constant feedback I received during my degree through the marking of problem sheets and exams. I am now of the opinion that, unlike uni, real-world work has no ultimate system for measuring progress and understanding; at least not one that defines a one-dimensional scale. The feedback does not come from an absolute source of truth but from many, possibly overlapping, sources. The surprising fact for a humble junior is that these sources are imprecise, and frankly, some may be entirely so.
In turn, one of the most important jobs of a software engineer is to gather as much information as possible and to come up with a function that weights each source individually. The weighting could be based on trustworthiness, for example, considering the experience and role of each team member. The output should be multi-dimensional: one's progress with the company-specific libraries, improved cooperation with others, better prioritisation skills, and so on. Personally, I cannot define such a function; it is impractical to keep track of the different inputs and to constantly readjust the weights.
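If one insisted on writing such a function down anyway, it might look something like this sketch. Every name, trust weight and score below is invented purely for illustration: feedback sources weighted by trustworthiness, producing a score per dimension rather than a single number.

```python
# Hypothetical sketch: combine feedback sources into a multi-dimensional
# self-assessment. All sources, trust weights and scores are invented.
feedback = [
    # (source, trust weight, scores per dimension)
    ("mentor",      0.9, {"libraries": 0.7, "cooperation": 0.8, "prioritisation": 0.6}),
    ("peer review", 0.6, {"libraries": 0.5, "cooperation": 0.9, "prioritisation": 0.4}),
    ("self",        0.3, {"libraries": 0.4, "cooperation": 0.6, "prioritisation": 0.7}),
]

def weighted_progress(feedback):
    """Weight each source's scores by its trust and average per dimension."""
    totals, weights = {}, {}
    for _, trust, scores in feedback:
        for dim, score in scores.items():
            totals[dim] = totals.get(dim, 0.0) + trust * score
            weights[dim] = weights.get(dim, 0.0) + trust
    return {dim: totals[dim] / weights[dim] for dim in totals}

print(weighted_progress(feedback))
```

Even this trivial version hints at the real problem: the weights and dimensions would need constant, subjective readjustment, which is precisely why the function stays undefined in practice.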
In this seemingly uncharted territory, the only thing left to do is relax. Requesting feedback and assessments every other day does not get you very far, since half the time is then spent on assessment rather than on the job itself. In the end, the most relevant piece of advice I gave myself is to be more confident, worry less about proving myself, and trust that my team will speak up when needed. Of course, the situation is not the same everywhere. I can imagine many managers and mentors not expressing their thoughts clearly, leaving their peers in perpetual oblivion. Having experienced this in the past, I can say that those cases require the forgotten peers to muster the courage to step up and request feedback. Not daily feedback, but frequent enough to get the ball rolling and the discussion started.