Responsible Use of AI

This statement describes how the Transformation Management Institute uses artificial intelligence in research, writing, education, and operations.

For the purposes of this statement, AI refers to model-based systems that generate, transform, classify, rank, summarize, recommend, or predict content.

The Institute treats AI use as a question of boundaries and responsibility. The relevant issue is whether a given use remains purpose-bounded, intelligible to readers, and accountable to a responsible human agent. This statement governs Institute conduct and does not constitute research or regulatory guidance.

Institute position

The Institute permits AI use only where purpose, scope, and responsibility remain clear and stable. AI use is not permitted where it substitutes for human judgment, obscures authorship, or weakens accountability for claims, decisions, or consequences. Where these conditions cannot be maintained, non-use is the responsible choice.

Responsible use requires that AI be applied for a specific, bounded purpose; that a human remain fully responsible for all claims and outcomes; that use not reduce a reader’s ability to understand what is asserted or on what basis; and that judgment, risk, or authority not be delegated without appropriate disclosure. In high-consequence or vulnerable contexts, restraint takes priority over convenience. Material AI use is disclosed whenever it affects trust, authorship, or accountability.

The Institute treats third-party AI systems as external services and does not provide them with confidential, personal, or sensitive information. Limited exceptions require explicit permission, where applicable, and tightly bounded use.

Relationship to Institute research

This page is a conduct statement. It does not define the Institute’s analytical claims. For the Institute’s research on AI and interpretation, see C1: AI as a Meaning System.