Academic advancement is built on explicit and implicit standards that define the contributions of faculty, typically divided into three categories: Research, Teaching, and University/Public Service. At a large public university, such as my own institution, contributions in research and teaching are often quantified using metrics like publication output or teaching effectiveness scores to guide evaluations for merit and promotion.

While increasingly important, service and non-traditional contributions have consistently proven difficult to evaluate. These include vital work such as public and citizen science, community-engaged research, and high-level administrative service, all of which are recognized in governing documents but often lack consistent conventions for evaluating their worth. We can compare the publication records, impact factors, h-indices, and student evaluations of two faculty members within the same unit and at the same rank, and calibrate our decisions accordingly. But comparing service (sitting on the board of a regional non-profit, serving on a committee of the local academic senate, or leading a departmental initiative) is less straightforward. Counting hours of service is nearly impossible, making most evaluations dependent on local institutional cultures that are often opaque.

The Challenge of Undervalued Service

The problem is not that this kind of work is unimportant, but that faculty struggle to translate its impact into the narratives that go into their merit or promotion files. Without clear conventions for assessment, these contributions can be inadvertently minimized or overlooked, harming faculty morale and career progression. In extreme cases, they can amplify inequalities within units by under-compensating the faculty who perform the lion's share of service, leading to higher rates of separation and burnout over time.

To address this, I am currently working on a proposal to develop a bespoke Large Language Model (LLM)-based system to act as a “leveling-up” aid for faculty.

A Light-Touch LLM Solution

The core idea is to use a secure, AI-powered system to provide faculty with rank-adjusted suggestions and prompts that guide them to better document their service and non-traditional work. Trained on documents that explicitly state the standards and rationale for advancement, as well as a combination of successful and unsuccessful cases from the past, such a system could provide additional guidance for faculty writing their personal statements, calibrated by rank and field. (I stress the additional part, since it is not intended to replace peer mentorship.)

What would this look like? In a light-touch pilot using a commercial LLM, a primitive version of the system was able to transform a simple description of a service role into a richer narrative. The system does not offer text to copy-paste, but rather questions that the faculty member can answer if feasible. For a faculty member serving as a program director (who could that be?!), the LLM generated a prompt inviting them to reflect on "the measurable result of his tenure as Director? (e.g., 'Secured a new line for a Visiting Professor,' 'Oversaw a 25% increase in minors,' or 'Successfully guided the program through its 5-year external review.')" This information was missing from the original text, which incorrectly assumed that readers knew the role and thus lacked the contextual detail needed to represent the degree of this individual's contributions.
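To make the mechanism concrete, here is a minimal sketch of how such a rank- and field-calibrated prompt could be assembled before being sent to an LLM. Everything here is a hypothetical illustration: the function name, the policy excerpt, and the instruction wording are assumptions for the sketch, not part of any existing institutional platform.

```python
# Hypothetical sketch of prompt assembly for a "leveling-up" aid.
# The policy excerpt below is an invented placeholder, not real policy text.
POLICY_EXCERPT = (
    "Advancement evaluations value documented, measurable impact of "
    "service roles, judged relative to the candidate's rank and field."
)

def build_reflection_prompt(service_description: str, rank: str, field: str) -> str:
    """Build an LLM prompt that asks for reflective questions,
    not ready-made text, about a faculty service role."""
    return (
        "You are an advancement-file writing aid.\n"
        f"Institutional policy excerpt: {POLICY_EXCERPT}\n"
        f"Candidate rank: {rank}. Field: {field}.\n"
        f"Service description: {service_description}\n"
        "Do NOT draft text for the candidate. Instead, ask 3-5 questions "
        "that prompt them to add measurable results and missing context "
        "(e.g., outcomes, scale, duration, external reviews)."
    )

# Example use, mirroring the program-director case above:
prompt = build_reflection_prompt(
    "Served as program director for an interdisciplinary minor.",
    rank="Associate Professor",
    field="Sociology",
)
print(prompt)
```

The design choice worth noting is that the calibration lives entirely in the prompt (rank, field, and the relevant policy excerpt), so the same scaffold could be pointed at an internal model without retraining.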

This type of guidance can help faculty better frame their contributions by inviting them to include the kind of quantitative and qualitative data that highlights effective management and concrete results along the specific dimensions and criteria set out in the institution's policy documents.

Ensuring Confidentiality and Oversight

While the initial test used a commercial model, the proposal I am working on emphasizes that any formal expansion must prioritize confidentiality. This would be achieved by using the institution's internal AI platforms rather than commercial providers, ensuring better oversight of the system's use and efficacy.

This LLM-based system is not a substitute for expert evaluation or the faculty member’s effort in writing their file. Instead, it serves as an invaluable instrument to help faculty better frame and convey the full breadth of their service and non-traditional contributions, ensuring their essential work is recognized and fairly evaluated.

In any case, if admin doesn’t take it up, this might be my first start-up. Any angel investors out there?