Responsible Use of AI
This page describes how the Transformation Management Institute approaches the use of artificial intelligence in research, writing, education, and operations.
In this statement, AI refers to any model-based system used to generate, transform, classify, rank, summarize, or recommend content. This includes both generative and predictive systems.
AI can be helpful in limited, well-bounded uses. It can also produce errors, misattribution, confusion about responsibility, or inappropriate reliance when boundaries are unclear, especially in domains involving judgment, care, trust, or decision responsibility. The Institute therefore treats AI use as a question of boundaries and responsibility: not whether the technology exists, but whether its use is bounded to a clear purpose and whether responsibility remains clear.
This statement applies to Institute conduct and publications. It is not a research publication and it does not extend or restate the Institute’s analytical work on AI and interpretation. The Institute does not claim regulatory authority.
Our stance
The Institute permits AI use where it is appropriate and bounded, where it helps people think, learn, communicate, or reduce avoidable burden, and where responsibility remains human.
The Institute does not permit AI use where it replaces human judgment, obscures responsibility, or makes it harder for people to understand what the Institute is asserting, what has been decided, and who is accountable.
Responsible use is defined by boundaries that remain understandable and stable over time.
What responsible use requires
Responsible AI use requires that the following conditions remain true in the setting where the tool is used:
A clear purpose. The reason for using AI is specific and limited. The tool is not used as a default substitute for thinking.
Human responsibility remains intact. A person remains responsible for decisions, claims, and outcomes. AI output does not carry authority on its own. Accountability includes the ability to defend a claim or decision without appealing to the tool as the reason.
The work stays understandable to people affected by it. When AI is used, people who rely on the result must be able to understand, in plain language:
what the tool was used for
what role its output played
what the human verified or decided
the limits of the output, including any material uncertainty or dependence on missing information
No hidden delegation. AI is not used to quietly shift judgment, risk, or consequential work onto others without their awareness. Delegation is permitted. Undisclosed delegation of judgment or risk is not.
Appropriate use in sensitive contexts. Extra care is required in settings that combine high consequence, vulnerability, and limited recourse, including health, safety, legal exposure, employment decisions, or interactions with emotionally vulnerable people. In sensitive contexts, the Institute prioritizes clear responsibility and informed participation over convenience.
Disclosure where it matters. The Institute discloses AI use when it is material to trust, accountability, or expected authorship. This includes:
when AI materially shapes a claim, recommendation, or evaluative judgment
when AI is used on, derived from, or informed by someone else’s private material
when a reasonable reader would otherwise assume the work reflects direct human performance, judgment, or verification
When boundaries cannot remain clear, non-use is the responsible choice.
A way to stop. If use begins to create confusion, inappropriate reliance, or unintended harm, it can be reduced or discontinued without dependency.
Privacy and confidential information
AI services are treated as third-party systems unless operated entirely within Institute-controlled infrastructure. Private or sensitive material is not appropriate for AI tools by default. The Institute assumes AI services are not a secure channel unless assessed and approved for the specific use and data class.
By default, the Institute does not input confidential, personal, or sensitive information into AI tools that are not under Institute control. Exceptions require a clearly defined purpose, explicit permission where applicable, minimal necessary content, and constraints that keep the use limited to that setting.
Such information includes health information, employment information, identifying details, financial account information, non-public organizational data, and private communications.
The Institute also does not apply AI tools to private information belonging to others without their clear permission.
Environmental impact
Computing has real-world resource costs. The Institute applies the same boundary discipline to computational use that it applies elsewhere: purpose-bounded use, avoidance of unbounded repetition, and a preference for reuse when it meets the same need.
What the Institute will not use AI for
The Institute will not use AI in ways that blur or replace human responsibility.
In particular, the Institute does not treat AI as a decision-maker, authority, or moral agent. AI may assist, but it may not substitute.
The Institute does not use AI to:
make high-stakes determinations (health, safety, legal, or employment) or present tool output as the basis for such determinations
impersonate individuals or present simulated interpersonal interaction as personal care or professional counsel
generate content intended to mislead about authorship or origin
present AI output as verified fact, or treat it as a primary source, without independent confirmation or citation appropriate to the type of claim and context
pressure others into relying on AI against their comfort or judgment
Use and non-use
The Institute does not assume AI should be used everywhere. Many tasks are best done without it.
In some contexts, careful use can reduce avoidable burden or error. In other contexts, use can create confusion or inappropriate reliance. Non-use is sometimes the responsible choice. The Institute does not treat AI use as a default expectation.
The Institute’s position is conditional: AI use is appropriate where boundaries are clear and responsibility remains human. Where those conditions cannot be maintained, restraint is the responsible choice.
Clear boundaries make beneficial use sustainable.
How this relates to Institute publications
This page is not part of the monograph series. It does not make scientific claims and it does not serve as a reference text. It is a public conduct statement intended to make Institute practices understandable to readers.
For the Institute’s analytical work on how AI participation changes interpretation, see C1 · AI as a Meaning System.
Institute commitments
To reduce confusion and prevent misuse, the Institute commits to the following:
Clarity about role. AI may support drafting, summarization, translation, outlining, option generation, and consistency checks, always as draft material for human evaluation; responsibility for claims remains human.
No concealment. The Institute will not misrepresent AI-generated work as personal testimony, professional judgment, or verified research.
Boundary discipline. The Institute will avoid uses that encourage dependency, simulate care, or blur human accountability, including uses whose output could reasonably be mistaken for human judgment or verified analysis.
Extra review for sensitive use. For work involving health, employment, legal risk, or personal vulnerability, the Institute applies additional review and prefers non-use unless the purpose and boundaries remain clear.
Internal check. For Institute publications and public materials, the Institute applies a brief internal check, appropriate to the work, to ensure that AI use stayed within the boundaries on this page.
Ongoing review. As common uses and risks change, the Institute revises this stance only when material change requires the boundaries above to be clarified.
Closing
The Institute permits AI use that helps people work, learn, and communicate more clearly, where responsibility remains human.
It does not permit AI use that confuses responsibility, replaces judgment, or makes it harder for people to understand what is being asserted and what has been decided.
Responsible use is a practice of clear boundaries that protect people over time.