Speaking Engagement
Legalweek 2026 Panel — Leveraging Legal Data Intelligence to Streamline AI Governance: Practical Strategies for Legal Teams
March 10, 2026
Bobby Malhotra, chair of Winston’s eDiscovery and Information Governance Practice, moderated a panel at Legalweek exploring how Legal Data Intelligence (LDI) can empower legal teams to create scalable, transparent, and risk-aware AI governance frameworks that address the evolving challenges of AI in organizations. The discussion focused on practical strategies legal professionals can use to streamline policy development, compliance, risk management, and data governance throughout the entire lifecycle of AI systems.
Key Takeaways
- Treat AI governance as a lifecycle, not a one-time approval. Use LDI to continuously track how AI systems evolve (data inputs, learning, outputs, and downstream decisions) so controls adjust as risk changes—not after problems appear.
- Start with visibility: you can’t govern what you can’t see. Make AI inventorying foundational by using data signals (e.g., usage patterns, adoption logs, workflow touchpoints) to identify what tools are in use, where, and for what purposes.
- Govern “data behavior,” not just “tool names.” LDI moves teams beyond a static inventory toward understanding how legal and business data moves through AI over time—what is ingested, retained, shared, and repurposed.
- Build a risk-tiering intake to scale reviews. Use consistent data-driven triage so low-risk use cases move quickly, while higher-risk uses trigger added legal, privacy, security, and human-review controls.
- Don’t reinvent the wheel on risk and impact assessments. Convert ad hoc assessments into repeatable, documented workflows (Initiate → Investigate → Implement) to improve consistency, speed, and defensibility across teams and business units.
- Apply the SUN/ROT lens to AI data and governance work. Focus assessments, policies, and controls on what is Sensitive, Useful, and Necessary and filter out “ROT” (redundant, obsolete, trivial) data that increases exposure without increasing value.
- Close the gap between policy and practice with feedback loops. Use LDI to compare stated policies against actual usage behavior, then refresh policies on a set cadence so they don’t become shelfware.
- Make training contextual and measurable. Effective AI training is role-based and tied to real workflows; LDI helps measure behavior change (e.g., escalations, exceptions, risk-tier accuracy) rather than attendance alone.
- Integrate AI governance with cybersecurity and information governance. LDI connects AI governance to acceptable data use, access controls, and security requirements—especially as organizations expand GenAI use into sensitive legal and privileged domains.
- Protect privilege with auditable, defensible processes. Use LDI to document why data was protected, how access decisions were made, and what safeguards were in place—creating an evidence trail if privilege is challenged or regulators/auditors ask for proof.
- Make vendor due diligence continuous, not a checkbox. Because GenAI risk depends on data ingestion, retention, training use, and ecosystem subcontractors, LDI supports ongoing validation of vendor behavior and controls over time.
- Build governance that’s “audit-ready” by design. Lifecycle-based LDI produces records of what was used, with what permissions, and what safeguards at each stage—supporting regulatory readiness and standards-aligned programs (e.g., ISO/IEC 42001).
- Prioritize three early capabilities: visibility, validated controls, and material risk focus. Inventory and risk classification first; then define access and approved uses; then implement controls that scale with risk to enable responsible innovation without slowing the business.
Bobby is a founding member of Legal Data Intelligence, a framework launched to provide guidelines that help legal departments identify and use data to complete legal tasks more efficiently.
