ServiceDesk produces a plethora of reports. In early 2010 we realized there was a need to comprehensively catalog and describe each of these in a single document. This is the current state of that resulting work. At present this document is not yet comprehensive (though that is the vision for it). In other words, it's a work-in-progress, and significant further work is needed before it can claim the distinction of being a one-place catalog and description of all ServiceDesk reports.
In the meantime, we can at minimum claim this document describes several of the major reports. In particular, it describes (likely) most of the reports that involve analytics.
For context, many of ServiceDesk's reports are scattered among several contextually relevant operational venues. None of those scattered reports has yet had a description added to this document (that remains a future project).
By contrast, there is a significant collection of reports accessed via a particular form whose sole purpose is to be a locale from which to access that collection. It's called (without any intended irony) the Reports form (accessed via shortcut F11).
A general note about reports in the Reports form is that, for virtually every one, there is an opportunity to export the raw, record-by-record data details on which the analytical summary figures are based. These exports can be helpful if you wish to perform your own extended analysis or reporting, or perhaps wish solely to check the integrity of the analytical results as otherwise presented to you. To produce those exports, after a report displays, look for an Export button in the Reports form's bottom-right corner.
Please bear in mind there is a counterpart to this document whose design is to be a review/description of all the exports in ServiceDesk. It may be found via a dedicated button in ServiceDesk's Export Miscellaneous Data form (Shift-F3), or via this link.
Exports are distinguished from reports in that they simply output selected elements of data for you (typically via an Excel file or similar). By contrast, reports are designed to analyze data, compiling sums and ratios, making comparisons, and so on, to give you digested analytics.
Aside from a few of its supplementary figures, all data in this report (the SalesSummary) comes directly from the SalesJournal (accessible, in terms of its raw entries, via the SalesRead form; quickkey shortcut is F4). In other words, to produce this report, ServiceDesk reads directly from applicable records within that file/journal, tabulates the results, and displays them to you.
The main body of the report is divided into four columns. The first column tabulates the total of Paycode 1 and Paycode 2 SalesEntries, and is intended to display totals for work actually completed during the period (regardless of whether paid or not). The second column tabulates the Paycode 1 and Paycode 3 SalesEntries, and is intended to show the total of work paid for in the period (regardless of when actually completed). The third column is, simply, the total of Paycode 2s, and the fourth, of Paycode 3s (these columns are there for review if wanted, but it's typically the first or second that you'll pay most attention to).
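To make the column logic concrete, here is a minimal sketch (in Python, purely illustrative) of the tabulation just described. The record fields ("paycode", "amount") are hypothetical names of ours, not ServiceDesk's actual SalesJournal layout.

    # Illustrative sketch of the four-column tabulation described above.
    # "entries" stands in for the SalesJournal records read for the period,
    # each with hypothetical "paycode" (1, 2 or 3) and "amount" fields.
    def tabulate_sales_summary(entries):
        col1 = sum(e["amount"] for e in entries if e["paycode"] in (1, 2))  # work completed in the period
        col2 = sum(e["amount"] for e in entries if e["paycode"] in (1, 3))  # work paid for in the period
        col3 = sum(e["amount"] for e in entries if e["paycode"] == 2)       # total of Paycode 2s
        col4 = sum(e["amount"] for e in entries if e["paycode"] == 3)       # total of Paycode 3s
        return col1, col2, col3, col4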
The second section applies a series of adjustments to work out such matters as the true total of money received, changes in A/Rs, and so on. This adjustment process is needed because the SalesJournal, in and of itself, only reflects money received when the entirety of a sale is paid. A/Rs, in turn, may be partially paid, but that fact does not show in the face of any SalesJournal entry. To make the adjustments as applicable in this section (and arrive at the figures provided), ServiceDesk augments its reading of entries in the SalesJournal with a reading of entries in the Applications Journal (quickkey shortcut is Alt-F9).
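As a purely conceptual illustration of that idea (not ServiceDesk's actual formula, which also accounts for changes in A/R balances and other adjustments), the core of the money-actually-received adjustment can be pictured like this; both variable names are ours, for illustration only:

    # Conceptual sketch only: fully paid sales (from the SalesJournal) plus
    # partial payments applied against A/Rs during the period (from the
    # Applications Journal, Alt-F9) approximate money actually received.
    def money_received(paid_sales_total, ar_payments_total):
        return paid_sales_total + ar_payments_total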
You’ll note there is an option to either display this report on-screen or print it. If you take the option to print, you'll have a further option to include line-by-line entries concerning each sale that went into the report. That option is simply not present if you elect to display on-screen.
There are also several Export options associated with this report—accessible via buttons that appear when the report is displayed.
One matter of occasional confusion concerns the section in the report where there is a distinction between “ServiceCalls” and “Tickets.” Basically, each entry in the SalesJournal represents a “ticket,” so far as any applicable column of display is concerned. The intent is to classify the entry also as a “service call” if it is an entry reflecting in-field service (as opposed to POS activity), and if it's the first (typically only) such entry as applicable on a given job. In other words, we want to exclude from the count of “service calls” any entries that involve going back (for recall or continuation work) after the initial work was supposedly completed.
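As a rough illustration of that intent (the actual method ServiceDesk uses is described next), the classification amounts to something like the following sketch; the field names ("is_field_service", "job_id") are hypothetical:

    # Illustrative sketch only: every SalesJournal entry counts as a "ticket";
    # it additionally counts as a "service call" if it reflects in-field work
    # (not POS activity) and is the first such entry on its job.
    def classify(entries):
        tickets = 0
        service_calls = 0
        jobs_seen = set()
        for e in entries:
            tickets += 1
            if e["is_field_service"] and e["job_id"] not in jobs_seen:
                service_calls += 1
                jobs_seen.add(e["job_id"])
        return tickets, service_calls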
The actual method used, in the effort to achieve the above outcome, is as follows:
This report (the Commissions report) is provided for those who pay their technicians on a commission basis. Like the SalesSummary, it reads (finds its data) directly from your SalesJournal. It will apply whatever commission basis you have established in the EarningsRates form (quickkey is Alt-F2), as applicable to the tech on whom you are creating the report.
The screen-displayed version of this report contains summary data only, and in two columns.
Like the first two columns in the SalesSummary, these reflect: first, figures pertaining to work actually performed during the period (i.e., the total of applicable Paycode 1 and 2 entries); and second, figures pertaining to jobs paid for during the period (the total of Paycode 1 and 3 entries). You'll want to pay the tech on whichever column reflects your payment policy.
Also (and still much like the SalesSummary), if you elect to print the report, you'll have the option to include the line-by-line entries that went into producing the summary figures. Usually, this is useful for allowing techs to verify they are indeed being paid on each of their jobs.
This report reads its data, simply, from the TimeLog.XX file as applicable to the employee on whom the report is being created. Such a file is created for any employee who uses the ServiceDesk ClockIn and ClockOut functions, with the “XX” extension on the filename being the two-letter abbreviation applicable to the employee in question.
Like the Commissions Report, it applies whatever wage rate is established, for the employee, in the EarningsRates form (quickkey is Alt-F2). It does not calculate withholdings (raw earnings only)—meaning it’s up to you (or a payroll service) to independently do the latter.
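As a simple illustration of the calculation (a sketch only; the list of clock-in/clock-out pairs is our hypothetical stand-in for whatever is read from the TimeLog.XX file), the raw-earnings figure is essentially total clocked hours multiplied by the established rate:

    # Sketch: gross (raw) earnings from clocked time; no withholdings applied.
    # "shifts" is a hypothetical list of (clock_in, clock_out) datetime pairs
    # representing entries from the employee's TimeLog.XX file; "hourly_rate"
    # is the rate established in the EarningsRates form (Alt-F2).
    def raw_earnings(shifts, hourly_rate):
        total_hours = sum((out - inn).total_seconds() / 3600 for inn, out in shifts)
        return total_hours * hourly_rate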
This report is available in two forms. One tabulates all your A/Rs as a group:
The other provides an individual breakdown for each of your HighVolumeClients.
Regardless of which type of A/R report you choose, the underlying machinery reads its data directly from your A/R file, the same data that may be reviewed, on a record-by-record basis, in the A/R-Read form (quickkey is F3).
This report lists each job completed within a specified time frame, and for each shows three simple figures (revenue, job cost, and resulting margin). The cost figure is based on a combination of parts used and labor inputs (the latter figured via user-provided inputs for per-trip and hourly costs for technicians).
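In formula terms, and purely as a sketch (the variable names are ours; the actual mechanics are described below), each job line amounts to something like this:

    # Sketch of the per-job figures described above.
    # parts_cost: cost of parts used on the job; trip_cost and hourly_cost:
    # the user-provided inputs for technician per-trip and hourly costs.
    def job_margin(revenue, parts_cost, trips, hours, trip_cost, hourly_cost):
        job_cost = parts_cost + trips * trip_cost + hours * hourly_cost
        return revenue - job_cost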
The underlying mechanics are as follows:
Please note that the underlying file, simultaneously created when this report is compiled for you, has added data breakdowns (separating parts cost factors from labor cost factors, for example). For such added detail, simply click on the button to open that file.
This report produces a series of figures (amounts involved in sales, averages per job, recall rates, etc.) that help you assess the level of work being done for each of your HighVolumeClients, to compare these parameters between such parties, and to compare them against your non-HighVolumeClient work, both at a group level and individually.
The underlying mechanics, as involved in producing the report, are as follows:
This report produces a series of figures (quantity of trips, quantity of time, etc., as compared to revenue) designed to help you assess the profitability of work connected to each of your HighVolumeClients, comparing between such parties, and against your non-HighVolumeClient work (again, both at a group level and individually).
The underlying mechanics are as follows:
Please note that, for margin figures to be accurate, the user-query-provided trip-cost and hourly-cost figures must, in turn, be accurate. That raises the question of how you arrive at such figures. Our suggestion is to run the report once using whatever seat-of-the-pants guess you wish for these figures. Once run, the report will provide total trips for the period and total on-the-job hours for the period. Go to your financial accounting and find what your total expenses were for the period. Figure half the total expense as trip cost, and divide by the quantity of trips to get per-trip cost. Figure the other half as hourly/time cost, and divide by total hours to get hourly cost. Then run the report again with these figures.
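Expressed as a small worked sketch (the fifty-fifty split of total expense between trip-related and time-related cost is simply the rule of thumb suggested above):

    # Sketch of the suggested method for deriving the cost-basis figures.
    # total_expenses: total operating expense for the period (from your accounting);
    # total_trips, total_hours: figures reported by the first, rough run of the report.
    def derive_cost_basis(total_expenses, total_trips, total_hours):
        per_trip_cost = (total_expenses / 2) / total_trips  # half of expense, spread over trips
        hourly_cost = (total_expenses / 2) / total_hours    # other half, spread over on-the-job hours
        return per_trip_cost, hourly_cost

For example, $120,000 of period expense with 800 trips and 1,500 on-the-job hours would yield $75 per trip and $40 per hour; run the report again with those figures.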
As another note, please observe that even if your provided cost-basis figures are not accurate, you'll likely still have valid comparisons between one HighVolumeClient and another, and between HighVolumeClients and non-HVC work.
This is one of our newer Technician Productivity reports (introduced January 2012). Its purpose is to provide some fairly raw numbers showing just what each tech has done with the dispatches given him (i.e., how many resulted in completions, how many in part orders, etc.).
The report’s output loads into Excel, and on that basis takes advantage of greater width availability than can display well in this manual. Nevertheless, here’s a shrunken image to give you some idea of what’s involved:
The report’s methodology is as follows:
This report produces a set of figures that help you assess, on a comparison basis between technicians, how well each is doing in terms of completing on the first visit, versus second, versus third, versus requiring four or more trips for completion. It also allows comparison of averages, total quantity of jobs completed, etc.
Please note how the leftward graph allows you to compare, at a glance, how your techs stack up in regard to needing more than one trip or not (you can easily see via the comparative yellow/red/blue bands, for example, that BB is comparing poorly). The two rightward graphs (cyan and violet) similarly allow at-a-glance comparison of how the techs compare on average trips-per-job and average days start-to-completion.
In the case of all graphs, it's also easy to compare with company-wide averages and numbers, as shown in the top/red section. Indeed, you can think of the company-wide values as providing a “par” figure against which each tech can be compared.
Methodology for this report is as follows:
In respect of Average-Completes-Per-Business-Day, please note this figure is derived on the basis of a five-day work week. In other words, the system looks at the date range involved and figures how many standard weekdays (Monday through Friday) fall within that period. That count of standard “business” days is then used as the denominator when calculating completes (for the technician who did the complete) per such standard business day.
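A minimal, purely illustrative sketch of that business-day count (weekdays only, within the chosen date range) might look like this:

    from datetime import date, timedelta

    # Sketch: count standard business days (Mon-Fri) within the report's date range;
    # this count serves as the denominator for completes-per-business-day.
    def business_days(start: date, end: date) -> int:
        count = 0
        day = start
        while day <= end:
            if day.weekday() < 5:  # Monday=0 ... Friday=4
                count += 1
            day += timedelta(days=1)
        return count

    # e.g., completes_per_business_day = tech_completes / business_days(start, end)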
In regard to Average-Completes-Per-Day-Worked, by contrast, the system's method is to count, as a day worked by any particular tech, any day on which there's an entry in an applicable JobRecord showing that he completed a work visit on a job that day. No other days are counted. For the operation as a whole (i.e., “ALL” in the tally), any day on which any tech worked (according to the above-described criteria) is counted as a work day.
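In sketch form (the record fields "tech" and "visit_date" are hypothetical names of ours), the days-worked denominator is simply a count of distinct dates with a completed work visit:

    # Sketch: a "day worked" is any date on which the tech has a completed work
    # visit recorded in an applicable JobRecord; pass tech=None for the "ALL" row,
    # where any day on which any tech worked counts.
    def days_worked(completed_visits, tech=None):
        return len({v["visit_date"] for v in completed_visits
                    if tech is None or v["tech"] == tech})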
Based on the above, you may note some interesting comparisons. Some of your techs may show higher completes per business day than per day worked. For those, it's evident they must have worked more days than are involved in the measure of standard business days. Other techs may show higher completes per day worked than per business day. For those, it should be evident that, for the period in question, they worked fewer days than are involved in the measure of standard business days. Another interesting factor: you'll likely find less variation, among the techs, in completes per day worked than in completes per business day.
This report produces a set of figures that help you assess, on a comparison basis between technicians, how well each is doing in terms of arriving at and departing from jobs within (and preferably toward the front portion of) scheduled time frames. It further allows a comparison of total time spent on jobs.
Please note the graphic provided in each section. The red rectangle is intended to denote average window size (i.e., of the appointment window for which the tech is scheduled). The green box denotes the average amount of time spent by the tech, per job, and where it fits, time-wise, within the larger appointment window.
You can tell at a glance, for example, that CP (above) is doing badly, in terms of where he’s positioning his on-site times compared to appointment windows. BB, on the other hand, is doing much better, and he’s very quick (short time on each job) as well. EB is doing the best, in terms of having his on-site time toward the front of appointment windows. On the other hand, he is being given much larger appointment windows (as compared to the other guys) to work with.
Please also again note that the top/red section shows figures and graphs for the operation as a whole. In part, this provides a “par” standard against which individuals may be assessed, but it's also a useful measure of the company as a whole. In this particular case, indeed, a quick glance at the company-wide/top-section graphic shows that, overall, tech on-site times are quite late as compared to appointment windows. Knowing how much customers appreciate having techs on-site early within their appointment windows (and hate it otherwise), this is something that, as an owner/manager, I'd want to strongly address.
Methodology for this report is as follows:
This report produces a set of figures to help you determine, on a comparison basis between technicians, how well each is doing in terms of producing revenue.
There is, again, a top/red section for company-wide values, and a selection of graphics to help with at-a-glance comparisons.
In particular, the top section provides company-wide (or “par”) geometries for each measure, with each tech's particular measure purposely arranged to allow easy direct comparison. At a glance, for example (green graphs), you can see that three of the techs (RA especially) are performing well above par in regard to total sales. Their average totals-per-work-day (yellow graphs) mirror the same fact. However, one of the techs who's strong in total sales is not so strong in average total-per-ticket (AV, blue graph).
The leftward purple/violet graph is particularly interesting in its ratio-type comparison between labor and materials sold. Glancing at this graph in AV’s section may give an immediate clue as to why his average total-per-ticket is below par. It appears, simply, that in comparison to others he is underselling on parts. Perhaps that is all he needs to amend.
Methodology for this report is as follows: