This is one of two Recall-Rate reports. Two methods are provided because, as a design matter, we want to avoid imposing on users the burden of having to make a human-based judgment, in every potential instance, as to whether a job should properly be classified as a recall. We think that is (or would be, if required) a nasty burden, particularly since it’s fraught with the potential for time-consuming (and emotionally taxing) argument with technicians, upset, and so on. We think it’s better to have a system that allows valid comparison of recall rates between techs, even while knowing the absolute numbers will likely include some percentage of instances that are charged as recalls inappropriately.
This particular report gives the total quantity of jobs for each tech, the total that are classified as recalls, and a resulting recall-percent figure. It also provides graphs to give an at-a-glance sense of comparison. The horizontal aspect of these graphs is obvious. The varying thickness of each graph may not be. The thickness varies, simply, to provide a visual indicator of the comparative quantities of work being done. If Tech A has a slightly higher recall rate than Tech B, but is doing twice as much work, his figure may not seem so bad as it otherwise would.
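For those who like to see the arithmetic spelled out, here is a minimal sketch (in Python, with hypothetical job records and field names such as tech and is_recall; this is illustrative only, not ServiceDesk’s internal code) of how the per-tech figures and graph dimensions could be derived:

```python
# Hypothetical sketch of the per-tech figures behind the report.
# Job records and field names (tech, is_recall) are illustrative only.
from collections import defaultdict

def summarize(jobs, max_thickness=20):
    totals, recalls = defaultdict(int), defaultdict(int)
    for job in jobs:
        totals[job["tech"]] += 1
        if job["is_recall"]:
            recalls[job["tech"]] += 1

    busiest = max(totals.values())  # assumes at least one job
    report = {}
    for tech, total in totals.items():
        report[tech] = {
            "jobs": total,
            "recalls": recalls[tech],
            # bar length reflects the recall rate
            "recall_pct": round(100 * recalls[tech] / total, 1),
            # bar thickness reflects comparative volume of work
            "thickness": round(max_thickness * total / busiest, 1),
        }
    return report
```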
The report’s underlying theory is that, if your company was called back to service the same underlying machine again, within 30 days of a previous (and supposedly complete) job on that machine, there’s a pretty good chance the earlier work was not sufficiently complete or perfect. We automatically classify any such within-30-days-of-a-previous-completed-job situation as a recall, even though we know some are not. The thinking is, sure, the figures might be a little higher than the actual level of guilt in our technicians’ work. However, in the absence of any good reason why one tech should suffer greater such inflation than another, the figures remain totally valid for comparison purposes.
To illustrate, let’s suppose (simply for argument’s sake) “2 percent” happens to be the rate at which you get new jobs on the same machine within 30 days of a previous completed job, for reasons not related to any inadequacy in a tech’s prior work. In other words, that’s the “innocent” rate. Tech A shows on the report with a 4 percent recall rate, and Tech B shows with 5 percent. As far as true/guilty rates are concerned (i.e., jobs where you had to go back because of inadequacies in prior work), it’s easy (given our assumption of an innocent base at 2 percent) to deduce real numbers for both at 2 and 3 percent, respectively (i.e., after subtracting the innocent base).
However, the subtraction is not necessary (and you likely don’t know the innocent base regardless). Looking solely at the raw numbers, it’s apparent that Tech A is performing better than Tech B in terms of getting it right the first time. That comparative basis, really, is what you most need.
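To make that point concrete with the assumed numbers above (a 2 percent innocent base, with 4 and 5 percent observed), here is a tiny illustration; the figures are the hypothetical ones from this example, not real data:

```python
# Why a shared "innocent" base does not affect the tech-to-tech comparison.
innocent_base = 2                       # percent, assumed for argument's sake
observed = {"Tech A": 4, "Tech B": 5}   # percent, as shown on the report

true_rates = {tech: pct - innocent_base for tech, pct in observed.items()}
print(true_rates)                       # {'Tech A': 2, 'Tech B': 3}

# The gap between techs is identical either way, so the raw figures
# remain valid for comparison even without knowing the innocent base.
assert (observed["Tech B"] - observed["Tech A"]
        == true_rates["Tech B"] - true_rates["Tech A"])
```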
Methodology for producing this report is as follows:
If you used this report prior to ServiceDesk Ver 4.4.49, please note the present strategy is virtually the opposite of what preceded it.
In the present strategy, the system looks at jobs closed within your specified date range and, for each, looks downward (forward) in the data, seeking to find whether there was a subsequent job on the same machine within the specified number of days. In the old strategy, it looked for jobs that originated within your specified date range and, for those, looked upward (backward) in the data, seeking to find whether there was a prior job on the same machine within the specified number of days.
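As a rough sketch of the present strategy only (with hypothetical field names like machine_id, closed, and originated; this is not ServiceDesk’s actual code), the logic amounts to something like this:

```python
# Sketch of the present strategy described above. Field names
# (machine_id, closed, originated, tech) are hypothetical.
from datetime import timedelta

def charged_recalls(jobs, range_start, range_end, window_days=30):
    """For each job closed within the date range, look forward for a
    subsequent job on the same machine within the window; if one is
    found, the closed job's tech is charged with a recall."""
    window = timedelta(days=window_days)
    charged = []
    for job in jobs:
        if not (range_start <= job["closed"] <= range_end):
            continue
        for later in jobs:
            if (later is not job
                    and later["machine_id"] == job["machine_id"]
                    and job["closed"] < later["originated"] <= job["closed"] + window):
                charged.append(job)
                break
    return charged
```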
This distinction has significant consequences. With the old method, the guilty work being reported (i.e., jobs performed by techs where new work was needed thereafter) was actually offset up to 30 days earlier in time than your specified date range. Thus, you were essentially determining how your techs performed, recall-wise, in the 30 days prior to your date range. Though slightly weird, it was an inherent consequence of how the method was structured. One benefit was that there was no impediment to picking a date range that included dates right up to the present.
With the new structure, that offset is eliminated. It produces results showing guilt as pertaining quite precisely to your specified date range. But again, there’s a downside. Here it is not practical to pick a date range that ends less than 30 days prior to the present. The simple reason is that, for jobs closed so recently, that quantity of days has not yet elapsed, so there is no way to see whether a new job comes up within the period.
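A small illustration of that practical constraint, assuming a 30-day window (again, just a sketch, not ServiceDesk’s own check):

```python
# The end of the chosen date range should be at least the recall
# window before today; otherwise the window has not fully elapsed.
from datetime import date, timedelta

def range_end_is_usable(range_end, window_days=30, today=None):
    today = today or date.today()
    return range_end <= today - timedelta(days=window_days)

# Example: with a 30-day window, a range ending yesterday is too recent.
print(range_end_is_usable(date.today() - timedelta(days=1)))   # False
print(range_end_is_usable(date.today() - timedelta(days=45)))  # True
```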
Please note that after this report compiles, a button appears in the form (labeled ‘Export Check Data’) that allows you to create a file listing the jobs being charged, to each respective tech, as recalls. This is needed when you have that particular tech who denies there is any possibility he’s had so many recalls. For that situation, you can use the list to go through each and every item with him, proving that each fits the design criteria. Sometimes you have to prove to a tech that he needs improvement before he believes he needs improvement.
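Purely to illustrate the idea, here is a sketch of turning such a charged-recall list into a reviewable file; the column layout and file format here are hypothetical, not the actual ‘Export Check Data’ format:

```python
# Hypothetical illustration of writing the charged-recall list to a CSV
# a tech can review item by item; not ServiceDesk's actual export format.
import csv

def export_check_data(charged_jobs, path="recall_check_data.csv"):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["tech", "machine_id", "job_closed"])
        for job in charged_jobs:
            writer.writerow([job["tech"], job["machine_id"], job["closed"]])
```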
Please further note the methodology in this report fundamentally depends on faithful creation and attachment of UnitInfo sheets, as applicable to each job. If it is not your practice to do this, the entire basis of this report fails—and you’d better consider using its alternative, instead.