Why documents in the Active Learning queue aren't counted toward the relevance rate in Relativity project management

In Relativity project management, documents in the Active Learning queue aren't counted toward the relevance rate. The rate relies only on documents with a definitive judgment. This distinction helps teams focus reviews, tune models, and keep the math clean while the queue guides selection.

Relativity and the math behind your review queue: what really counts toward relevance

If you’ve ever wrangled big document sets, you know the drill: you flag what matters, your model learns from the feedback, and you keep refining until the signal stands out from the noise. In Relativity, this dance often involves something called the Active Learning queue. A lot of teams ask a simple, practical question about it: Do the documents sitting in that queue count toward the relevance rate calculation? The quick answer is no—but there’s a bit more nuance worth unpacking. Let me explain why this distinction matters and how to keep your metrics honest and usable in real-world work.

What is Active Learning in Relativity, anyway?

Think of Active Learning as a feedback loop for your review process. You don’t just dump a mountain of documents on reviewers and call it a day. Instead, you create a small “seed set” of documents that are labeled for relevance. A learning engine then studies those labels, tries to predict relevance for the rest of the set, and presents to human reviewers the documents that are most informative for improving the model. As reviewers label more documents as relevant or not, the model updates and the cycle continues.

The key idea is focus and efficiency: instead of lining up every single document for initial judgment, you direct attention to the items the model is most uncertain about. This helps you reach high-quality classifications faster and with fewer total reviews. The Active Learning queue is the cockpit for that loop: it holds the documents currently awaiting a human judgment or a model-influenced review decision.
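To make the loop concrete, here is a minimal, self-contained sketch of an uncertainty-sampling cycle in Python. This is not Relativity's implementation; the toy data, the round count, and names like review_batch_size are assumptions chosen purely to illustrate the seed-set, train, surface, label, retrain rhythm described above.

```python
# Conceptual sketch of an active-learning loop with uncertainty sampling.
# NOT Relativity's implementation; toy data and parameter names are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Toy corpus: 5,000 "documents" as random feature vectors, plus the hidden
# ground-truth call a human reviewer would make.
X = rng.normal(size=(5000, 20))
true_labels = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

labeled = set(rng.choice(5000, size=200, replace=False).tolist())  # seed set
review_batch_size = 100  # assumed batch size per round

for _ in range(3):  # a few learning rounds
    idx = sorted(labeled)
    model = LogisticRegression(max_iter=1000).fit(X[idx], true_labels[idx])

    # Score the unreviewed documents and surface the ones the model is least
    # certain about (predicted probability closest to 0.5) -- the "queue".
    unlabeled = np.array(sorted(set(range(5000)) - labeled))
    proba = model.predict_proba(X[unlabeled])[:, 1]
    queue = unlabeled[np.argsort(np.abs(proba - 0.5))[:review_batch_size]]

    # "Reviewers" give the queued documents final labels; the enlarged labeled
    # set feeds the next training round. Queued-but-unreviewed documents stay
    # out of any finalized metric until this step happens.
    labeled.update(queue.tolist())

print(f"finalized labels after 3 rounds: {len(labeled)}")  # 200 + 3 * 100 = 500
```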

How is relevance rate calculated, really?

Relevance rate is a performance metric, not a running total of everything you touch. In practice, it’s built from documents that have a definitive, final label—relevant or not relevant—after a review decision has been made. Those labels usually come from a completed round of assessment, not from items that are mid-run in a queue or still awaiting a verdict.

To put it plainly: the relevance rate looks at a defined, finalized set of documents. It’s the bedrock you use to gauge how well your review strategy is pinpointing the truly pertinent material. Documents that are in flux—sitting in the Active Learning queue, flagged for further review, or pending classifier feedback—don’t get folded into that final tally until someone has given them a conclusive determination.
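As a sketch of that idea, the function below computes a relevance rate only from documents that carry a final designation. The field names ("designation", "Relevant", "Not Relevant") are assumptions for illustration, not Relativity's actual schema; the point is that anything without a conclusive call simply never enters the math.

```python
# Minimal sketch: relevance rate over finalized documents only.
# Field names are assumptions, not a real Relativity schema.

def relevance_rate(documents):
    """Relevant docs divided by all docs that carry a final designation."""
    finalized = [d for d in documents
                 if d.get("designation") in ("Relevant", "Not Relevant")]
    if not finalized:
        return None  # nothing finalized yet, so there is no rate to report
    relevant = sum(1 for d in finalized if d["designation"] == "Relevant")
    return relevant / len(finalized)


docs = [
    {"id": 1, "designation": "Relevant"},
    {"id": 2, "designation": "Not Relevant"},
    {"id": 3, "designation": None},  # still sitting in the Active Learning queue
]
print(relevance_rate(docs))  # 0.5 -- the queued document is ignored
```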

Why the queue doesn’t count toward the rate (and why that’s a good thing)

  • Finality matters. A document in the Active Learning queue isn’t deemed final. It’s part of an ongoing learning process, not a finished data point. Counting it would smear the metric with items whose status is temporarily ambiguous.

  • Clarity for stakeholders. When you report a relevance rate, you want a clean snapshot of proven relevance. Mixing in items that could flip later would create noise and invite misinterpretation.

  • Model feedback vs. measurement. The whole point of Active Learning is to improve the model quickly. The queue is a feed for training, not a fixed statistic. Separating the two keeps the analytics actionable: your team can see both how the model is learning and how well the end results hold up on verified judgments.

That distinction matters more than you might think. If you try to squeeze queue items into the rate, you risk overestimating the model’s precision or masking gaps in the labeled set. In projects of any size, honest metrics are the quickest way to spot where you need more review, where the model might need more tuning, and where the workflow could use a nudge.
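Here is a tiny illustration of that distortion, with made-up numbers: queued documents only have provisional model guesses, so any "rate" that counts them moves with whatever the classifier happens to predict rather than with confirmed decisions.

```python
# Toy numbers (assumptions, not from any real project) showing how a rate that
# counts queued documents drifts with unverified model guesses.

finalized_relevant, finalized_not_relevant = 80, 40
queued_total = 50
queued_predicted_relevant = 10   # classifier guesses; any of these could flip

clean_rate = finalized_relevant / (finalized_relevant + finalized_not_relevant)
muddied_rate = (finalized_relevant + queued_predicted_relevant) / (
    finalized_relevant + finalized_not_relevant + queued_total
)

print(f"finalized only:  {clean_rate:.1%}")    # 66.7% -- backed by human decisions
print(f"queue included:  {muddied_rate:.1%}")  # 52.9% -- shifts with model guesses
```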

A practical look at the numbers

Imagine you’re reviewing 5,000 documents in a project. You start with a seed set of 200 documents that your team labels as relevant or not, and the model begins its learning cycle. Over several iterations, reviewers reach final decisions on 2,100 documents in total, while the system surfaces another 1,400 for labeling that are still pending final decisions. The finalized set might end up looking like this:

  • Total documents with final relevance labels: 2,100

  • Documents labeled relevant: 1,260

  • Documents labeled not relevant: 840

  • Relevance rate (relevant / total finalized): 60% (1,260 / 2,100)

The 1,400 items in the Active Learning queue aren’t part of that calculation. They’re in the process of being judged and fed back into the model. Once those have final judgments, they’ll join the finalized set, and the relevance rate can be recalculated if you’re maintaining a running metric. But until a document has a firm label, it doesn’t contribute to the rate.
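If you keep a running metric, the recalculation described above is nothing more than redoing the same division over the larger pool of confirmed decisions. A quick sketch, using the scenario's figures plus an illustrative, assumed breakdown for a newly finalized batch:

```python
# Running-metric sketch: the rate is recomputed as queued documents receive
# final judgments. Figures below are illustrative assumptions.

relevant, not_relevant = 1_260, 840          # finalized so far
print(f"before: {relevant / (relevant + not_relevant):.1%}")   # 60.0%

# Suppose reviewers finish 400 of the 1,400 queued documents:
# 150 come back relevant, 250 not relevant (made-up numbers).
relevant += 150
not_relevant += 250
print(f"after:  {relevant / (relevant + not_relevant):.1%}")   # 56.4%
```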

This approach isn’t about hiding complexity; it’s about keeping measurement honest and traceable. When you present a metric to a client or a cross-functional team, they’ll appreciate that the number reflects confirmed outcomes, not potential outcomes.

Why this influences project management decisions

  • Resource planning. If you know the rate is calculated on finalized judgments, you can estimate how many reviewers you need to push a batch from queue to completion. You’ll avoid overcommitting or underutilizing staff waiting on uncertain results.

  • Confidence in progress. Stakeholders want to see progress that feels tangible. A clear split—Active Learning activity vs. finalized labeling—helps everyone understand where the project stands and where bottlenecks lie.

  • Quality control. Separating the queue from the rate makes it easier to spot anomalies: a sudden drop in final relevance, or an unexpectedly high rate that might signal bias in the seed set or a misinterpretation of guidelines.

Quick tips for keeping metrics meaningful

  • Define “final label” up front. Agree on what counts as a finalized determination and ensure the team applies it consistently. Document the definition so future dashboards stay aligned.

  • Maintain two dashboards. One tracks the Active Learning queue activity (documents surfaced, labeled, excluded, retracted). The other tracks finalized judgments and the relevance rate. Clear separation reduces confusion.

  • Use sampling for validation. Periodically pull a random sample of finalized documents to verify labels and ensure consistency across reviewers (see the sketch after this list). It’s a safety net against drift.

  • Track time-to-label. Beyond the rate, monitor how long it takes to move a document from queue to final label. Speed matters for throughput, but speed without accuracy won’t help you in the long run.

  • Audit trails matter. Keep an auditable record of why a document got a certain label. If questions ever arise about the rate, you’ll have the reasoning at hand.
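For the sampling tip above, a minimal sketch is below. The sample size, the seed, and the finalized_docs structure are assumptions; the idea is simply to hand a random slice of finalized documents to a second reviewer and compare designations.

```python
# Sketch of validation sampling: pull a random subset of finalized documents
# for second-pass QC. Structure and sizes are illustrative assumptions.
import random

def qc_sample(finalized_docs, sample_size=50, seed=None):
    """Return a random sample of finalized documents for label verification."""
    rng = random.Random(seed)
    k = min(sample_size, len(finalized_docs))
    return rng.sample(finalized_docs, k)

finalized_docs = [{"id": i, "designation": "Relevant" if i % 3 else "Not Relevant"}
                  for i in range(2_100)]
for doc in qc_sample(finalized_docs, sample_size=5, seed=7):
    print(doc)  # hand these to a second reviewer and compare designations
```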

A few related ideas that fit nicely alongside this topic

  • Precision and recall in context. When you’re looking at relevance, it’s natural to pair it with precision (how many of the labeled relevant items truly are relevant) and recall (how many of the truly relevant items you found); a minimal calculation sketch follows this list. In a Relativity workflow, you’ll often balance these by adjusting seed sets, tweaking review guidelines, or refining the model features used by the classifier.

  • The human-in-the-loop advantage. There’s real value in keeping human judgment in the loop. The Active Learning queue isn’t a bottleneck; it’s a strategic surface where human insight and machine learning collaborate to improve outcomes over time.

  • Tooling awareness. If you’re using Relativity Analytics or similar TAR workflows, you might see terms like seed set, training set, and classifier score. Recognizing what each piece represents helps prevent misinterpretation of metrics and keeps your team aligned.
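For the precision and recall pairing mentioned above, here is a small sketch using the standard definitions, computed only over documents that have a final human designation. The dict structure and the boolean "predicted" field are assumptions standing in for a thresholded classifier score.

```python
# Standard precision and recall over finalized documents.
# "predicted" stands in for a classifier score thresholded into a yes/no call.

def precision_recall(docs):
    tp = sum(1 for d in docs if d["predicted"] and d["actual"])
    fp = sum(1 for d in docs if d["predicted"] and not d["actual"])
    fn = sum(1 for d in docs if not d["predicted"] and d["actual"])
    precision = tp / (tp + fp) if (tp + fp) else None
    recall = tp / (tp + fn) if (tp + fn) else None
    return precision, recall

docs = [
    {"predicted": True,  "actual": True},
    {"predicted": True,  "actual": False},
    {"predicted": False, "actual": True},
    {"predicted": False, "actual": False},
]
print(precision_recall(docs))  # (0.5, 0.5)
```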

A little analogy to ground the idea

Picture a librarian who’s cataloging a vast archive. The cataloger starts with a core set of clearly labeled books (the seed set). A smart system watches how those labels are applied and then suggests other volumes the librarian should check next—those suggestions form the Active Learning queue. The cataloger labels some more books, the system learns from those labels, and the cycle continues. The library’s “relevance rate” is like the percentage of books that, after careful review, are tagged as essential. Books sitting in the queue aren’t part of that percentage until the librarian makes a final decision on them. It’s a clean, understandable way to measure accuracy without being misled by items still under review.

A short reflection: keeping the focus where it belongs

In projects where data is plentiful and time is precious, it’s easy to conflate what’s being worked on with what’s already decided. The distinction between queue activity and finalized relevance is more than a bookkeeping rule; it’s a principle that keeps teams honest about what’s known, what’s assumed, and what still might change. When you present results, you want clarity, not ambiguity. And that clarity comes from separating the learning process from the outcome you report.

If you’re shaping a workflow or refining a dashboard, use this simple truth as your north star: the relevance rate shines when it’s built on solid, final judgments, not the buzzing activity of tasks in progress. The Active Learning queue is where the model learns; the final labels are where the metric lives. Treat them as distinct, and your project management metrics will be easier to explain, easier to defend, and ultimately more useful for making informed decisions.

To wrap it up, here’s the bottom line: documents in the Active Learning queue are not included in the relevance rate calculation. They’re part of the ongoing learning narrative, not the finished score. Keep the two streams separate, and you’ll maintain a clearer picture of both what the model is learning and how well you’re identifying the material that truly matters. That balance—between process and result—is what good project work looks like in practice, day in and day out.
