Turn Off Family Propagation to Keep Active Learning Coding Focused

Turning off family propagation lets active learning judge each document on its own terms, reducing bias and improving coding accuracy. This one setting produces cleaner training data for machine learning in legal document review, improving both reliability and speed. It's a small change, but one that ripples through a team's results and collaboration.

Outline:

  • Hook: a practical tip that quietly changes how you code documents in Relativity

  • What “family propagation” means in a coding panel

  • Why turning it off matters for active learning and data quality

  • Step-by-step vibe: how to flip the switch without drama

  • Real-world consequences: biases, noise, and cleaner training data

  • Quick tips to keep the process honest and efficient

  • A few related ideas that connect to the bigger workflow

  • Wrap-up: small change, big impact

Turning off family propagation: a small switch with big impact

Let me start with a simple scene from the world of document review. You’re inside Relativity, eyes glued to a single document, trying to decide its category, label, or confidentiality status. You know what you’re seeing in front of you—just that one file. But if the panel is set to propagate decisions to family members, your verdict might start nudging similar-looking documents as well, even if those siblings have different stories to tell. It’s like grading one essay and letting that grade automatically echo through its classmates. For active learning to work well, that echo can be a problem.

What is “family propagation,” really?

In Relativity’s coding panels, “family propagation” is a feature that can spread a decision you make on one document to its related items: think parent and child relationships, or other connected documents in a family group. The idea is to save time and keep things consistent. On the surface, that sounds convenient. But here’s the rub: those related documents aren’t identical twins. They often carry different contexts, dates, or pieces of sensitive information. If your coding decision follows them without a fresh look, you risk letting a bias from one document bleed into others.
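
To make the mechanics concrete, here’s a minimal Python sketch of what propagation does conceptually. The Document class and code_document function are illustrative stand-ins, not Relativity’s actual data model or API:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Document:
    doc_id: str
    family_id: str              # documents sharing a family_id are related
    label: Optional[str] = None

def code_document(doc: Document, corpus: List[Document], label: str,
                  propagate_to_family: bool) -> None:
    """Apply a reviewer's decision; optionally echo it across the family."""
    doc.label = label
    if propagate_to_family:
        for other in corpus:
            if other.family_id == doc.family_id and other is not doc:
                # Siblings inherit the label without a fresh human look
                other.label = label
```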

Active learning is all about building a reliable signal from human judgments. You want the model to learn from carefully labeled, representative examples. When decisions propagate, you can unintentionally dilute that signal. The result? The machine learning model trains on data that’s skewed, not because your reviewers are lazy, but because the process nudges outcomes in a direction that doesn’t reflect the nuance across the family.

Why this matters for active learning in practice

Think of active learning as a collaboration between humans and machines. A reviewer labels a handful of documents with clean, deliberate judgments. The system uses those labels to suggest the next batch of documents to review, aiming to maximize learning with as little labeling as possible. If every time you label one document, you’re also stamping its cousins with the same label, you lose the chance to understand where a slightly different context should lead to a different decision. You end up with a training set that’s biased toward the most dominant pattern in a family, not the full spectrum of possibilities.
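
Relativity doesn’t expose its model internals in the coding panel, but the loop described above is recognizable as uncertainty sampling. Here’s a rough sketch, assuming a scikit-learn-style classifier and vectorizer (all names hypothetical), that surfaces the documents the model is least sure about, which is exactly the diversity signal propagated labels tend to flatten:

```python
import numpy as np

def select_next_batch(unlabeled_texts, model, vectorizer, batch_size=10):
    """Uncertainty sampling: pick the documents the model is least sure about."""
    X = vectorizer.transform(unlabeled_texts)  # e.g., TF-IDF features
    probs = model.predict_proba(X)             # shape: (n_docs, n_classes)
    uncertainty = 1.0 - probs.max(axis=1)      # low top probability = uncertain
    ranked = np.argsort(uncertainty)[::-1]     # most uncertain first
    return ranked[:batch_size].tolist()        # indices to route to reviewers
```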

If you care about training data that generalizes, keeping family propagation off during coding is a straightforward, practical choice. It helps ensure that each document is assessed on its own terms, and that related documents don’t carry over decisions by default. The payoff is better precision, cleaner training signals, and a more trustworthy model downstream.

A quick, practical how-to

If you’re wondering how to flip this switch without turning your workflow into a scavenger hunt, here’s a plain-language path you can follow:

  • Open the coding panel or the review workspace in Relativity where you do the labeling.

  • Look for the setting related to propagation or family decisions. It might be labeled something like “Propagate to Family” or “Family Propagation.”

  • Change the setting from On to Off (or disable propagation) before you start a new batch of documents for active labeling.

  • Save the configuration and, if possible, run a quick check: label a single document and confirm that its decision does not automatically apply to its related documents (see the sketch after this list).

  • Document this choice in your workflow notes. It’s a small step, but it helps your team stay aligned on why you’re treating each document individually.
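
That quick check can also be expressed as a tiny smoke test. This sketch reuses the hypothetical Document and code_document from the earlier example; in Relativity itself you would confirm the same thing by eye in the coding panel:

```python
def verify_propagation_off(corpus):
    """Smoke test: label one document, then confirm its family is untouched."""
    target = corpus[0]
    siblings = [d for d in corpus
                if d.family_id == target.family_id and d is not target]
    before = [d.label for d in siblings]

    # "Responsive" is an illustrative label, not a required value
    code_document(target, corpus, label="Responsive",
                  propagate_to_family=False)

    after = [d.label for d in siblings]
    assert before == after, "Propagation still on: sibling labels changed"
    print("OK: coding one document left its family members unchanged.")
```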

If you’re a frequent user, you might also see a related option like “Propagate to related items only after peer review” or “Propagate with review.” In those cases, you can tailor the level of propagation to different phases of your process, but for the core active-learning loop, keeping it off for the initial labeling tends to yield clearer signals.

What happens if you don’t?

Skipping this step isn’t catastrophic, but it is risky. Here are some concrete issues to watch for:

  • Bias creep: decisions get pulled into a family’s shared pattern, even when a single document’s context says otherwise.

  • Reduced variability in the training set: if the same label lands across siblings, the diversity of examples decreases, which can hamper model learning (a quick audit for this is sketched below).

  • Fewer meaningful active-learning prompts: the system may suggest documents that are too similar to already labeled ones, slowing progress.

  • Harder audit trails: when you revisit a case, you’ll want to know why a label was given to one document but not its relatives. Propagation can blur that line.

A little analogy helps here: imagine grading a set of recipe cards that come from the same cookbook family. If you copy the same score to every card because of a “family propagation” setting, you miss the subtle differences in flavor, technique, or ingredients that matter to a chef who’s trying to learn the nuances.
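
To put a rough number on the reduced-variability risk, you can audit how often multi-document families come back with identical labels. Here is a sketch using the same hypothetical Document model; a high score is not proof of bias, since some families legitimately share a label, but it is a useful flag:

```python
from collections import defaultdict

def family_label_homogeneity(docs):
    """Share of multi-member families whose members all carry the same label."""
    families = defaultdict(list)
    for doc in docs:
        if doc.label is not None:
            families[doc.family_id].append(doc.label)

    multi = [labels for labels in families.values() if len(labels) > 1]
    if not multi:
        return 0.0  # no multi-member families labeled yet
    uniform = sum(1 for labels in multi if len(set(labels)) == 1)
    return uniform / len(multi)
```

Comparing this score before and after turning propagation off gives you a trend line rather than a gut feeling.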

Connecting the dots with your bigger workflow

Relativity isn’t just a labeling tool; it’s a workflow engine. Active learning sits at the sweet spot between human judgment and machine efficiency. When you disable family propagation during the labeling phase, you’re nudging your process toward cleaner data and more precise model input. That translates into:

  • More reliable training sets for downstream models.

  • Clearer audit trails that show why a decision was made on an individual document.

  • Better control over edge cases where a related document’s context would mislead a reviewer.

  • A workflow that’s easier to scale because you’re not constantly wrestling against unintended cross-document influence.

If you’re juggling multiple phases—collection, deduplication, coding, QA—remember that a well-tuned active-learning loop depends on the quality of each labeled example. A simple setting like turning off propagation can ripple through the entire cycle, often in a positive way.

A few practical tips to keep the momentum

  • Start small: test the off setting on a small project or a narrow document set first. Watch how it affects labeling tempo and the variety of labels you receive.

  • Keep notes: maintain a brief rationale for turning propagation off in that session. It saves mental energy later when someone asks why.

  • Periodically sample and compare: after a few hundred documents, re-check whether labeling quality looks healthier with or without propagation. If needed, reintroduce propagation in a controlled way for the specific parts of the process where consistency is justified.

  • Pair with random checks: have a colleague review a random subset to spot any drift or bias that might sneak back in, especially if you’re dealing with sensitive categories.

  • Use machine-assisted checks: run an occasional audit to see if the model’s recommendations align with human labels, which is a good guardrail against drift (a simple version is sketched below).
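
For that last machine-assisted check, a chance-corrected agreement score makes a simple guardrail. Here is a sketch using scikit-learn’s cohen_kappa_score; the audit cadence and any alert threshold are assumptions you would tune to your matter:

```python
from sklearn.metrics import cohen_kappa_score

def audit_model_agreement(human_labels, model_labels):
    """Chance-corrected agreement between human labels and model suggestions."""
    kappa = cohen_kappa_score(human_labels, model_labels)
    raw = sum(h == m for h, m in zip(human_labels, model_labels)) / len(human_labels)
    print(f"Raw agreement: {raw:.2%}  |  Cohen's kappa: {kappa:.3f}")
    return kappa
```

A sudden drop in kappa between audits is a reasonable drift signal worth investigating before it contaminates the training set.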

A tangent worth considering: how this fits with broader data governance

While you’re hammering out a clean labeling approach, it’s worth pausing to connect this choice to the bigger governance picture. Data relevance, access controls, and confidentiality are always in play in legal or regulatory matters. Keeping a single-document focus during labeling helps ensure that decisions aren’t inadvertently colored by related content that sits behind stronger access restrictions. On the flip side, there are times when propagation might be appropriate—when a family truly shares a common, justified label and when your review policy calls for consistency across the group. The key is to know when and why you’re enabling or disabling it, not just defaulting to a setting because it’s easy.

A few words on tone and technique

If you’re sharing this with teammates, a calm, practical tone tends to win. People respond to concrete steps, clear reasons, and a sense that a small change can keep the process honest. You can pepper in a light analogy or two—like the classroom grading image above—just enough to keep the concept memorable without turning the piece into a lecture. And yes, it’s okay to admit that no setting is perfect for every scenario. The best approach is often a thoughtful blend: turn off propagation during the initial active-learning pass, then adjust based on outcomes and governance requirements.

Wrapping up: small switch, meaningful improvement

Turning off family propagation in a coding panel isn’t flashy, but it’s one of those practical choices that quietly strengthens the entire workflow. It helps ensure that each document gets a fair, independent assessment, improves the quality of training data for models, and keeps your audit trail tidy. If you’re building an active-learning loop, this is the kind of adjustment that pays dividends over time—without adding friction to your day-to-day.

If you’re curious to test this idea in your own setup, start with a single project, switch off propagation before labeling, and watch how the data pattern evolves. You might just discover that a small, deliberate constraint leads to clearer insights, sharper decisions, and a more confident team all around. After all, in a field where precision matters, a mindful tweak can make the difference between noise and insight.
