TMI Research Library
Interpretation Field Studies · IFS-4 (2025)
Disinformation Systems
Authors: Jordan Vallejo and the Transformation Management Institute™ Research Group
Status: IFS-4 | December 2025
Scope and boundary
This paper is descriptive and diagnostic rather than prescriptive. It does not provide political commentary, election guidance, media fact-checking services, legal advice, platform compliance checklists, or counter-influence playbooks. It analyzes disinformation as an interpretation system: how adversarial claims are encoded into signals and artifacts, decoded under constraint, routed through role-governed response protocols, and resolved through closure or extended through explicit non-closure. Throughout this paper, “reference condition” means the bounded state of affairs a claim targets under a declared scope and evidence regime, not a universal claim about truth.
This paper treats AI-assisted disinformation as a scaling condition, including synthetic media, automated claim generation, and personalized variation that increases relay speed and dispute load.
Disinformation is treated here as a system class that appears across domains, including fraud and scams, crisis communications, public health information environments, organizational reporting and reputation disputes, and online platform dynamics. State messaging is one instance of this class and is not used as the primary organizing frame.
IFS-3 studies identity decisions: whether an actor is the same subject across interactions. IFS-4 studies reality decisions: whether a contested account becomes usable for coordinated action under adversarial signaling.
Abstract
Disinformation is a high-constraint interpretive environment where engineered signals compete with verification capacity and closure authority is distributed across multiple interfaces. It is an extreme-case field site for studying how interpretation remains usable when evidentiary form can be manufactured, relayed, and iterated at scale.
IFS-4 introduces the Disinformation Meaning Event (DMEv) as the unit of analysis. A DMEv is a complete Interpretive Event spanning: (1) a contested reference condition, (2) adversarial encoding into claims and artifacts, (3) receiver decoding and credibility assignment, (4) response protocol selection, and (5) closure outcomes that determine whether the claim is corrected, contained, treated as operationally decisive, or remains contested.
The study maps DMEv dynamics onto the MSS variable set: truth fidelity (T), signal alignment (P), structural coherence (C), drift (D), and affective regulation (A). It identifies recurrent failure signatures including credibility inversion, evidence-threshold mismatch, authority-routing failure, protocol mismatch, closure instability, and drift-rate increase across repeated event series.
IFS-4 provides a domain map of disinformation subsystems (encoding, amplification, decoding, response, closure) and measurement candidates suitable for field observation using existing artifacts. By formalizing disinformation as a repeatable event structure with measurable stability conditions, IFS-4 extends the IFS method to adversarial signaling environments and provides an accessible on-ramp to MSS for civic and organizational contexts.
1. Introduction
Disinformation is often described as “false information” circulating online. In practice, the more consequential phenomenon is interpretive: how systems decide what is happening when verification is limited, when actors introduce engineered signals, and when audiences must act before disputes can be conclusively resolved.
Interpretation in this domain is not a private mental activity; it is a multi-role system process. Claims move across people, platforms, institutions, and time. Each movement includes credibility assignment, evidence-threshold decisions, authority routing, response protocol selection, and closure mechanisms. When these operators remain consistent, systems can maintain a shared operational account. When they do not, competing accounts persist and recur.
Disinformation becomes durable when credibility assignment and closure outcomes shift faster than verification and correction can update shared operational accounts.
Scale illustration
A useful way to see disinformation as a system, rather than as “bad content,” is to track how a single claim changes shape as it crosses interfaces.
In small-group environments, a contested claim routes through a limited set of roles. An assertion is made, credibility is assigned, and closure can occur through direct discussion, shared evidence, or a mutually recognized authority. Verification can be slow, but relay volume is limited and the closure boundary is visible.
At scale, the same claim becomes a distributed event series. The claim is copied across channels, summarized without source conditions, attached to new artifacts, and routed through closure authorities that do not share evidence thresholds. Receivers do not encounter a single claim; they encounter a repeated signal environment in which reach functions as credibility and “what people are saying” substitutes for “what can be verified.” In that environment, correction does not compete with one message; it competes with the relay network and with the social cost of changing stance after public endorsement.
AI intensifies this shift by increasing output volume and surface-form variation. A single contested account can be rendered into many versions: different headlines, captions, screenshots, summaries, and narrative framings tuned to distinct audiences. The reference condition does not change, but the signal environment does. Receivers then encounter not one claim, but a recurring family of variants that produces repeated decision demand across channels.
This is the system-level shift IFS-4 targets. Disinformation becomes durable when amplification exceeds adjudication capacity and when closure authority remains distributed across interfaces that do not converge on a common decisive pathway.
2. Research foundations from disinformation and information integrity practice
IFS-4 is not a comprehensive literature review. The foundations below are used as structural lineage for operators that appear in a DMEv cycle, including diffusion and amplification dynamics, credibility operations, response protocol families, provenance constraints, and closure stability. The purpose of this section is to anchor those operators in established research and doctrine without converting the paper into a survey of the field.
Information disorder frameworks distinguish misinformation (false content shared without intent to mislead), disinformation (false content shared with intent to mislead), and malinformation (genuine information used in an adversarial manner). This distinction supports DMEv typology and prevents actor-based labeling from substituting for operator-level analysis.
Empirical work on online diffusion indicates that false claims can move through networks differently than verified claims. For IFS-4, the relevant implication is measurement: reach, relay depth, time-to-uptake, and recurrence are observable properties of the signal ecology that shape response selection and closure probability.
Research on coordinated inauthentic behavior and automation shows that distribution can be engineered through bot activity, synthetic personas, and account networks. For IFS-4, the relevance is role realism: distributor and witness roles can be simulated at scale, which changes how social proof functions as a credibility cue.
Research on accuracy attention and related interventions indicates that small shifts in evaluation cues can change sharing behavior. In IFS-4 terms, credibility assignment is an operator with measurable levers rather than a purely ideological trait.
Inoculation-based approaches treat resilience as an upstream capacity: exposure to manipulation tactics paired with explanation can improve later recognition. In IFS-4, inoculation is treated as a response protocol family that operates before a DMEv reaches high-amplification states.
Public health guidance treats harmful falsehoods and information overload as a population-level management problem with monitoring, response coordination, and trust maintenance as explicit responsibilities. This provides governance lineage for studying disinformation as a system-stability problem rather than as partisan content.
As synthetic media becomes easier to produce, the evidentiary status of images, audio, and video changes. Provenance standards and transparency initiatives treat authenticity, integrity, and origin assertions as technical and governance objects. In IFS-4, provenance is treated as a closure-relevant constraint: when provenance is absent or disputed, decisive-pathway requirements increase and non-closure becomes more common. Guidance on synthetic media and generative AI describes a dual pressure: realistic artifact supply increases, and dispute ambiguity increases because fabricated evidence and genuine evidence can share similar surface form.
Governance frameworks increasingly describe disinformation as a systemic risk with measurable indicators and response obligations, including transparency, process consistency, and cross-boundary coordination. For IFS-4, the central point is operator visibility: protocol families exist and can be observed, compared, and audited, even when outcomes differ by interface.
3. Domain boundary and system object
3.1 System object
A disinformation system is the bounded interpretive environment in which contested reference conditions generate decision demand, claims and artifacts are encoded and distributed under adversarial incentives, receivers decode and assign credibility under constraint, responses are selected through role-governed rules and authority routing, and closure outcomes treat the event as operationally decisive, correct it, contain it, or leave it contested.
3.2 Roles
Minimum roles:
Originator: entity that creates, coordinates, or seeds an adversarial claim
Distributor: accounts, communities, channels, media, or recommender pathways that relay the claim
Receiver: individual or team interpreting and deciding how to respond
Adjudicator: party with closure authority in a specific interface (platform moderation, newsroom standards, public health authority, institutional leadership, courts, compliance functions)
Optional roles include witnesses who shape credibility and protocol selection (experts, trusted community figures, fact-checkers, subject-matter reviewers) and targets that the event is attempting to influence or damage.
3.3 Interfaces treated in this field study
IFS-4 treats these as interfaces with distinct evidence thresholds and closure mechanisms:
Online social platforms and public channels
Organizational environments (workplaces, institutions, professional communities)
Crisis environments (public health, disasters, safety events)
High-consequence environments (finance, identity-related disputes, regulated operations)
4. Unit of analysis: Disinformation Meaning Event (DMEv)
4.1 Canonical definition
A Disinformation Meaning Event (DMEv) is a complete Interpretive Event spanning a contested reference condition, adversarial encoding into claims and artifacts, receiver decoding and credibility assignment, selection of a response protocol under role constraints, and a closure outcome that treats the event as operationally decisive, corrects it, contains it, or leaves it contested. DMEv is the domain-specific Interpretive Event type for disinformation systems.
4.2 DMEv phases
A DMEv includes five phases (a minimal record sketch follows the list):
Reference condition: the target condition in the world that the claim purports to describe. Reference condition is a bounded referent, not a metaphysical claim about truth. It names the specific state of affairs the claim points to, under declared scope (who or what, when, where) and under the evidence categories an interface treats as admissible for verification and adjudication. In many DMEv instances the reference condition is only partially observable, contested, or distributed across records and witnesses. The analytic target in IFS-4 is whether the system can constrain credibility assignment, protocol selection, and closure outcomes to that referent under the interface’s own rules. Section 9 describes how unequal access and legibility conditions change closure probability even when the reference condition is the same.
Encoding under adversarial incentives: claims, images, video, audio, screenshots, documents, narrative packaging, timing, and channel selection
Decoding and credibility assignment: receiver inference, credibility priors, evidence thresholds, and acceptance, rejection, or ambiguity outcomes
Response protocol selection: ignore, monitor, correct, contextualize, contain, remove, escalate, adjudicate, inoculate
Closure outcome: resolved versus open, contested versus operationally decisive, and closure stability across time
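To make the phase structure concrete for field coding, the sketch below models a single DMEv as a plain record. The field names and label values are illustrative assumptions for this paper, not a TMI schema; they follow the five phases above and the protocol families in Section 7.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Illustrative DMEv record; field names and label values are assumptions, not a TMI schema.
@dataclass
class DMEv:
    claim_family_id: str                    # links reactivations of the same claim family
    reference_condition: str                # bounded referent under declared scope
    encoding_artifacts: list[str]           # e.g., "screenshot", "edited_video", "document"
    interface: str                          # e.g., "platform", "newsroom", "organization"
    credibility_outcome: str                # "accepted" | "rejected" | "ambiguous"
    response_protocol: str                  # e.g., "correct", "contain", "escalate" (Section 7.2)
    closure_outcome: Optional[str] = None   # "corrected" | "contained" | "decisive" | "contested"
    opened_at: datetime = field(default_factory=datetime.now)
    closed_at: Optional[datetime] = None

    @property
    def is_open(self) -> bool:
        # Non-closure: the event remains in an explicit open state (Section 8.3).
        return self.closure_outcome is None or self.closure_outcome == "contested"
```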
4.3 DMEv boundaries
A DMEv begins when a claim is treated as decision-relevant and initiates relay, assessment, or response. A DMEv ends when a closure operator occurs (Section 8) or when the system enters an explicit open state (non-closure) that governs recurrence. A reactivated claim family constitutes a new DMEv linked to the same reference-condition family.
4.4 DMEv typology (minimal)
High-consequence versus low-consequence events
Fast-cycle versus slow-cycle events
Coordinated versus organic distribution patterns
Synthetic-artifact supported versus non-synthetic events, including AI-assisted variant generation versus single-form claims
Single-interface versus cross-interface events (platform only versus platform plus institutional plus organizational)
5. Credibility assignment and evidence thresholds
Credibility assignment is the receiver’s decision about whether a claim and its supporting signals are treated as a reliable indicator of the reference condition. In disinformation contexts, credibility assignment is unusually central because evidence is frequently incomplete, artifacts can simulate evidentiary form, and closure authority is distributed.
Evidence thresholds differ across roles and interfaces. Individuals and communities may treat social proof and narrative fit as sufficient. Organizations may require internal verification or trusted sources. Platforms may rely on policy criteria, automated signals, and review pathways. Institutions may require documentation standards and auditable decision records. Mismatch produces predictable outcomes: the same encoding can be treated as decisive evidence in one interface and treated as insufficient in another.
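A minimal way to see the mismatch is to score the same claim against per-interface evidence thresholds. The thresholds and the single evidence score below are invented for illustration; real interfaces apply policy criteria and review pathways, not one scalar.

```python
# Illustrative only: one evidence score compared against per-interface thresholds.
EVIDENCE_THRESHOLDS = {
    "community_channel": 0.3,     # social proof and narrative fit may suffice
    "organization": 0.6,          # internal verification or trusted sources
    "platform_policy": 0.7,       # policy criteria plus review pathways
    "institutional_record": 0.9,  # documentation standards, auditable decisions
}

def decode_outcomes(evidence_score: float) -> dict[str, str]:
    """Return per-interface credibility outcomes for the same encoding."""
    return {
        interface: ("decisive" if evidence_score >= threshold else "insufficient")
        for interface, threshold in EVIDENCE_THRESHOLDS.items()
    }

print(decode_outcomes(0.65))
# The same artifact is treated as decisive in some interfaces and insufficient in others.
```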
Credibility-related failure signatures recur in predictable forms. Low-quality artifacts can achieve high uptake through salience, repetition, or authority substitution. Prior episodes can raise evidence thresholds and increase non-closure probability in later events, while urgency and social pressure can lower thresholds and increase misclassification risk. Under either shift, identity cues or group affiliation can become the dominant routing signal.
Gaslighting as a credibility and closure operator
In everyday usage, “gaslighting” is treated as a psychological label. In IFS-4, it is treated as an interpretive operator: a credibility move designed to destabilize the receiver’s access to the reference condition by contesting the receiver’s capacity to know what occurred.
In DMEv terms, the operator shifts the dispute from the claim to the observer, converting “What happened?” into “Is the observer competent to know what happened?” This increases non-closure probability by preventing the receiver’s evidence from functioning as decisive within the event, and by increasing dependence on the actor or interface that is contesting the reference condition.
At small scale, this operator can persist within relationships or organizations when adjudication routes through contested authority. At larger scale, related operators appear as denial plausibility strategies that increase closure burden by saturating an environment with competing accounts until verification becomes socially or practically non-viable for many receivers.
AI-mediated authority substitution
A common credibility move in contemporary disinformation is authority substitution through AI output. Model-generated summaries, confident explanations, and citation-like formatting can function as perceived verification even when source quality is weak or absent. In DMEv terms, the operator reduces evidence inspection by presenting plausibility as auditability. This can lower evidence thresholds for receivers and accelerate uptake before adjudicators can route the claim into a decisive closure pathway.
6. Signal ecology: encoding, channels, amplification, and provenance
6.1 Encoding channels
Disinformation encoding commonly uses text claims, images and screenshots that remove or alter context, edited or synthetic video and audio, fabricated or altered documents presented as records, and timing or channel selection that maximizes reach and credibility signaling.
6.2 Amplification pathways
Amplification is the system process by which a claim gains reach through repetition, relay through high-visibility accounts, community reinforcement, algorithmic recommendation, and cross-channel reposting. Amplification changes the perceived credibility environment and can function as an evidence substitute in many interfaces.
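For field coding, relay depth and breadth (used again in Section 11.1) can be computed from a parent-to-child relay map. The toy relay structure below is an assumption for illustration.

```python
from collections import defaultdict, deque

def relay_depth_and_breadth(edges: list[tuple[str, str]], seed: str) -> tuple[int, int]:
    """Maximum relay depth and maximum per-level breadth from (parent, child) relay edges."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    depth, max_breadth = 0, 1
    level = deque([seed])
    while level:
        next_level = deque()
        for node in level:
            next_level.extend(children[node])
        if next_level:
            depth += 1
            max_breadth = max(max_breadth, len(next_level))
        level = next_level
    return depth, max_breadth

# Toy relay tree: originator -> two distributors -> further reposts (invented).
edges = [("origin", "acct_a"), ("origin", "acct_b"),
         ("acct_a", "acct_c"), ("acct_b", "acct_d"), ("acct_b", "acct_e")]
print(relay_depth_and_breadth(edges, "origin"))  # (2, 3)
```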
6.3 Synthetic content pressure and denial plausibility
Synthetic media increases the supply of realistic artifacts and also increases the plausibility of denial in disputes. When “it is AI-generated” becomes a usable denial, decisive-pathway requirements increase because adjudicators require stronger provenance or cross-corroboration to reach an outcome treated as decisive.
6.4 AI-assisted variation and audience targeting
AI enables rapid claim variation without changing the underlying account. This includes paraphrase floods, caption swaps, screenshot recontextualization, and localized narrative framing. The system effect is not only increased content volume. It is increased pathway diversity for a claim family to reach different receivers with different credibility cues. This increases recurrence and complicates correction because the same claim family does not present as a single stable object in the field.
6.5 Signal-related failure patterns
Signal failure patterns are often identifiable as evidentiary form substitution (artifacts that resemble proof functioning as proof without verification), context stripping (removing source conditions that change meaning), and cross-interface mismatch (evidence categories differing by platform, newsroom, organization, or authority).
7. Response protocol families
7.1 Protocol definition
A response protocol is the receiver’s selected action class and stance in response to a decoded claim, given role constraints, authority routing, and evidence thresholds.
7.2 Protocol families (minimal)
Ignore / deprioritize protocol: deliberate non-engagement paired with monitoring criteria
Correction protocol: counter-claim with evidence and a public reference record
Context protocol: restoration of missing reference conditions (source context, timestamps, scope constraints)
Containment protocol: reduction of reach within a boundary (throttling, friction, limiting distribution pathways)
Removal / restriction protocol: enforcement actions under declared policy
Escalation / adjudication protocol: routing to a closure authority (review teams, experts, institutional decision owners)
Inoculation protocol: pre-exposure training and tactic explanation designed to reduce uptake in later events
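One of the field measures in Section 11.1 is the protocol routing distribution: the share of coded responses by protocol family. A minimal tally, assuming responses have been coded with the labels above, looks like this:

```python
from collections import Counter

# Illustrative coded responses for one claim family; labels follow the families above.
coded_responses = ["correct", "contextualize", "ignore", "contain",
                   "correct", "escalate", "correct", "remove"]

def protocol_routing_distribution(responses: list[str]) -> dict[str, float]:
    """Share of responses by protocol family."""
    counts = Counter(responses)
    total = sum(counts.values())
    return {protocol: count / total for protocol, count in counts.items()}

print(protocol_routing_distribution(coded_responses))
```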
7.3 Protocol mismatch patterns
Protocol mismatch is common when correction occurs without reach reduction in high-velocity events, when enforcement occurs without a rationale recipients recognize as decisive, when escalation routes to hierarchy rather than domain authority, or when inoculation is deployed after high amplification.
8. Closure, non-closure, recurrence, and drift
8.1 Closure definition
Closure is the event-level outcome in which the system reaches an interpretation usable for action, the selected protocol is executed to completion, the event transitions to a next state that reduces immediate recurrence, and a record exists that supports later review and cross-interface coordination.
Closure does not require universal agreement. It requires a stable operational account within a declared boundary and a closure mechanism recognized by the relevant roles.
8.2 Closure operators (examples by interface)
Closure operators include provenance or source records accepted as decisive within an interface, authoritative adjudication with enforceable consequence, correction records that become the default reference in later relays, platform actions paired with rationales and appeal pathways, and organizational confirmation with documentation sufficient for internal decision needs.
8.3 Non-closure states
Non-closure occurs when credibility remains contested, response protocols vary across interfaces, or closure authority is not recognized. Common non-closure states include persistent dispute loops with no decisive evidence pathway, parallel closure outcomes across communities that do not converge, and repeated reactivation of the same claim family after partial enforcement or partial correction.
8.4 Drift as a rate across repeated DMEv sequences
Within MSS, drift (D) is treated as the rate at which misalignment accumulates when truth fidelity, signal alignment, or structural coherence cannot keep pace with system demands. In disinformation contexts, drift can increase when recurrence frequency rises, adjudication capacity saturates, closure stability declines, and audiences shift toward identity cues as evidence substitutes.
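One drift proxy listed in Section 11.2 is the growth rate of recurrence across time windows. A minimal sketch, assuming reactivation counts per window for a single claim family:

```python
def recurrence_growth_rate(reactivations_per_window: list[int]) -> list[float]:
    """Window-over-window growth in reactivations for one claim family.
    A sustained positive rate is a drift (D) warning sign, not a verdict."""
    rates = []
    for prev, curr in zip(reactivations_per_window, reactivations_per_window[1:]):
        if prev == 0:
            rates.append(float("inf") if curr > 0 else 0.0)
        else:
            rates.append((curr - prev) / prev)
    return rates

# Reactivations per 30-day window (illustrative counts).
print(recurrence_growth_rate([4, 6, 9, 15]))  # [0.5, 0.5, 0.667]
```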
9. Structural distortion layers and unequal closure
Unequal closure is a measurable system output: the same claim family can be evaluated and closed differently for different populations even when the reference condition is the same. In IFS-4 this does not require treating truth as subjective. A reference condition is the bounded state of affairs a claim targets under a declared scope and an interface’s admissible evidence categories. What varies is access to that referent, the ability to convert it into admissible form, and which signals an interface treats as credible.
Disinformation does not need to replace reality to succeed. It can increase dispute load by exploiting existing gaps in access, legibility, and credibility. When those gaps are present, the same encoding tactics yield different closure probabilities across groups, producing parallel outcomes that do not transfer across interfaces.
Three distortion layers recur in field settings:
Access distortion. Verification capacity differs because resources differ. Connectivity constraints, device limits, paywalls, time scarcity, and uneven local institutional support change whether verification is feasible inside the event’s time window. Broadband and device reliance disparities are documented and matter here as boundary conditions on verification.
Legibility distortion. Evidence can exist but remain unusable inside an interface. Some systems privilege specific formats, dialects, credential signals, and documentation norms. When a group’s language or documentary practices do not match those norms, the conversion cost rises. In digital pipelines, differential model performance can add friction by treating some speech patterns as lower-quality input, raising error rates in credibility and adjudication pathways.
Credibility distortion. Evidence thresholds and routing can differ by identity cues, institutional history, or prior harm. The same artifact can be treated as decisive in one context and treated as suspect in another. These differences are visible in outcomes even when intent is unknown: enforcement rates, appeal outcomes, time-to-adjudication, and recurrence can diverge by group.
IFS-4 treats these distortions as observable differences in routing and closure probability. A useful descriptive parallel is adverse impact screening in selection systems: when outcomes differ systematically across groups, the first diagnostic question is which operator produced the difference. Employment guidance uses the four-fifths rule as a screening heuristic for disparate selection rates. In IFS-4, a comparable approach can be used to compare closure outcomes across groups, then locate the operator driving divergence: access, legibility, routing, protocol, or the closure mechanism itself.
This is also where “truth is subjective” arguments often enter the domain. When groups repeatedly experience different closure outcomes, many infer that reality itself is optional. IFS-4 treats that inference as a system symptom: when closure mechanisms do not transfer across interfaces, interpretation becomes identity-bound, and closure becomes community-local rather than system-recognized. Improving interpretive equality in this paper means making evidence conversion less group-dependent and closure pathways more consistent, not imposing agreement.
Unequal closure also interacts with synthetic content pressure. When realistic artifacts are cheap to generate and provenance is unevenly available, receivers with fewer verification resources carry higher dispute burden and higher misclassification risk. Provenance and transparency initiatives matter here because they lower the cost of converting a reference condition into auditable form and reduce the advantage of manufactured evidentiary surfaces.
10. MSS variable mapping for DMEv
This section maps DMEv dynamics onto the MSS variable set.
10.1 Truth Fidelity (T)
In DMEv, T concerns whether interpretive outputs remain constrained by the reference condition and by declared evidence standards rather than by engineered salience.
Candidate observables include prevalence of verifiable sourcing in dominant narratives, persistence of corrections in later relays, and proportion of high-reach claims with checkable provenance.
10.2 Signal Alignment (P)
P concerns alignment between the signals produced and how they are interpreted across interfaces.
Candidate observables include cross-interface disagreement rate on evidence categories, provenance readability rate, and rate of context-restoration changes that alter interpretation.
10.3 Structural Coherence (C)
C concerns stability and clarity of roles, authority routing, protocol selection rules, and closure mechanisms.
Candidate observables include time-to-adjudication, appeal-path consistency, protocol selection consistency across similar events, and clarity of enforcement rationales.
10.4 Drift (D)
D concerns the rate of accumulated misalignment across repeated DMEv event series.
Candidate observables include recurrence growth rate for the same claim family, increasing adjudication load per event, and closure stability decline across time windows.
10.5 Affective Regulation (A)
A concerns regulation conditions that affect credibility assignment, relay behavior, and protocol selection.
Candidate observables include volatility in sharing rates during crises, escalation markers in community channels, and fatigue indicators such as reduced verification behavior and increased reliance on heuristics.
11. Measurement candidates
11.1 Field-observable DMEv measures
Time-to-uptake (first high-credibility relay)
Amplification depth and breadth (relay tree properties)
Correction latency (first correction, first authoritative adjudication)
Protocol routing distribution (share of responses by protocol family)
Closure stability rate (percentage not reactivated within 7, 30, 90 days)
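A minimal computation sketch for three of these measures, assuming timestamped event records with the illustrative field names below:

```python
from datetime import datetime, timedelta

# Illustrative event records; field names and timestamps are assumptions for this sketch.
events = [
    {"claim_family": "cf_01", "seeded": datetime(2025, 3, 1, 9, 0),
     "first_high_reach_relay": datetime(2025, 3, 1, 11, 30),
     "first_correction": datetime(2025, 3, 2, 16, 0),
     "closed": datetime(2025, 3, 3, 10, 0), "reactivated": None},
    {"claim_family": "cf_02", "seeded": datetime(2025, 3, 5, 8, 0),
     "first_high_reach_relay": datetime(2025, 3, 5, 8, 40),
     "first_correction": datetime(2025, 3, 8, 9, 0),
     "closed": datetime(2025, 3, 9, 12, 0), "reactivated": datetime(2025, 3, 20, 7, 0)},
]

def time_to_uptake(e: dict) -> timedelta:
    """Time from seeding to first high-credibility relay."""
    return e["first_high_reach_relay"] - e["seeded"]

def correction_latency(e: dict) -> timedelta:
    """Time from first high-reach relay to first correction."""
    return e["first_correction"] - e["first_high_reach_relay"]

def closure_stability_rate(events: list[dict], window_days: int) -> float:
    """Share of closed events not reactivated within the window."""
    closed = [e for e in events if e["closed"] is not None]
    stable = [e for e in closed
              if e["reactivated"] is None
              or e["reactivated"] - e["closed"] > timedelta(days=window_days)]
    return len(stable) / len(closed) if closed else 0.0

print([time_to_uptake(e) for e in events])
print(closure_stability_rate(events, window_days=30))  # 0.5 for these invented records
```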
11.2 Drift proxies (series-level)
Growth rate of recurring claim families
Adjudication load growth (review volume, time-to-decision)
Closure stability decline by interface
Cross-interface divergence rate (incompatible closure outcomes across communities)
11.3 AI pressure indicators
Variant proliferation rate for a claim family
Proportion of relays showing templated paraphrase patterns
Prevalence of synthetic media flags in dispute narratives
Growth rate of “AI as denial” claims during adjudication and appeal
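Variant proliferation can be approximated in the field by grouping near-duplicate texts within a claim family. The sketch below uses a simple character-level similarity from the standard library; production systems typically use embeddings or hashing, which this sketch does not attempt.

```python
from difflib import SequenceMatcher

def group_variants(texts: list[str], threshold: float = 0.8) -> list[list[str]]:
    """Greedy grouping of near-duplicate texts; a rough proxy for templated paraphrase."""
    groups: list[list[str]] = []
    for text in texts:
        for group in groups:
            if SequenceMatcher(None, text.lower(), group[0].lower()).ratio() >= threshold:
                group.append(text)
                break
        else:
            groups.append([text])
    return groups

# Illustrative claim variants (invented examples).
claims = [
    "Officials confirmed the plant leaked toxins last night.",
    "Officials confirmed the plant leaked toxins overnight!",
    "BREAKING: plant leak confirmed by officials last night",
    "Unrelated post about a community fundraiser.",
]
groups = group_variants(claims)
print(len(groups), "variant groups from", len(claims), "relays")
```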
11.4 Unequal-closure indicators (disparity auditing)
Unequal closure can be measured by comparing closure probability and closure stability across cohorts (by language, region, role, channel, and where lawful and appropriate, demographic categories). A practical approach is disparity auditing: measuring whether a cohort’s closure rate falls materially below a baseline rate, analogous to adverse-impact style monitoring used in other institutional domains.
Candidate observables: closure stability rate by cohort, time-to-adjudication by cohort, appeal success rate by cohort, false-positive enforcement rate by language or dialect, correction reach and uptake by cohort, and recurrence rate differences for the same claim family across communities.
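A minimal disparity-audit sketch follows, assuming per-cohort closure counts. The 0.8 screening ratio mirrors the four-fifths heuristic described in Section 9; it is a screening threshold that points to an operator worth examining, not a judgment.

```python
def closure_disparity_screen(cohort_outcomes: dict[str, tuple[int, int]],
                             ratio_threshold: float = 0.8) -> dict[str, dict]:
    """Screen cohorts whose closure rate falls below ratio_threshold x the best cohort rate.

    cohort_outcomes maps cohort -> (closed_events, total_events). A flag identifies where
    to look for the operator driving divergence (access, legibility, routing, protocol,
    or the closure mechanism itself); it is not a finding by itself.
    """
    rates = {c: closed / total for c, (closed, total) in cohort_outcomes.items() if total}
    baseline = max(rates.values())
    return {
        cohort: {"closure_rate": round(rate, 3),
                 "ratio_to_baseline": round(rate / baseline, 3),
                 "flagged": rate / baseline < ratio_threshold}
        for cohort, rate in rates.items()
    }

# Illustrative cohorts by language of the relayed claim (counts invented).
print(closure_disparity_screen({"lang_a": (72, 100), "lang_b": (44, 100), "lang_c": (68, 100)}))
```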
11.5 Artifact sources
Public posts, reposts, screenshots, link references, media artifacts; platform enforcement artifacts (labels, removals, restrictions), where accessible; organizational communications and decision logs; newsroom correction records, where accessible; public health infodemic monitoring reports and advisories; provenance metadata and content credentials, where present.
11.6 Limits
No metric directly measures the full reference condition. The measurement target here is interpretive stability: credibility operations, protocol selection, closure probability, closure stability, and drift patterns across repeated event series.
12. Generalization beyond disinformation
Disinformation systems illustrate a broader class of meaning problems: adversarial signal conditions in which engineered claims compete with verification capacity and with closure authority. The DMEv approach is designed to transfer to other domains with similar structure, including fraud operations, crisis rumor environments, reputation disputes, compliance narratives, and internal organizational conflict where verification is limited and incentives shape encoding.
Institute Signature
Disinformation is a definitive case for Meaning System Science because the target is not “belief.” The target is the interpretive infrastructure a system uses to decide what is happening. Engineered claims do not need to outperform reality in content; they need to outrun institutions in throughput, pushing credibility assignment and closure demands faster than verification and adjudication can respond.
AI intensifies that imbalance by lowering the marginal cost of artifact production, claim variation, and audience targeting. The same contested account can be rendered into many surface forms across many channels, so receivers experience a claim family, not a single claim. In that environment, repetition and reach become credibility cues, and dispute load rises even when the reference condition has not changed.
Disinformation also scales through unequal closure. The referent can remain bounded and unchanged, while access to verification, the ability to convert evidence into admissible form, and the credibility assigned to the same signals vary across populations and interfaces. Under those conditions, engineered claims do not need to replace reality. They can exploit gaps in access, legibility, and credibility to produce divergent closure outcomes that then feed relay and recurrence.
The DMEv model makes this legible. A contested reference condition enters shared reality through signals optimized for salience and relay, then moves across interfaces with incompatible evidence thresholds and incompatible decisive pathways. Each interface performs its own credibility assignment, authority routing, and response selection. The result is not simple disagreement. It is parallel outcomes that do not transfer across the relay network.
This is why correction is often structurally late. Correction competes with an event series, not a single message. Once a claim family has been copied, summarized, and endorsed inside identity-bound communities, the question often shifts from “Is it true?” to “Which closure does my interface recognize?” In that setting, denial plausibility and gaslighting-style credibility moves extend non-closure by shifting the dispute from the reference condition to the receiver’s capacity to know.
IFS-4 closes with a governance implication that is not partisan and not optional. In adversarial signal environments, interpretive stability is a civic capability. If a system cannot produce closure mechanisms that later relays treat as decisive, and cannot keep those pathways consistently available across groups, it will act on whatever achieves reach with sufficient credibility cues, regardless of reference fidelity.
The most damaging disinformation events are the ones that convert dispute into consequence.
Citation
Vallejo, J. (2025). Disinformation Systems (IFS-4). Transformation Management Institute.
References
Wardle, C., & Derakhshan, H. (2017). Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making. Council of Europe.
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science.
Pennycook, G., et al. (2021). Research on attention-to-accuracy interventions and sharing behavior (selected publications).
Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest.
Roozenbeek, J., & van der Linden, S. (2019–2022). Inoculation and prebunking research on resistance to misinformation (selected publications).
World Health Organization (WHO). (2020 onward). Infodemic management resources and related publications.
National Institute of Standards and Technology (NIST). (2024). Reducing Risks Posed by Synthetic Content: An Overview of Technical Approaches to Digital Content Transparency (NIST AI 100-4). U.S. Department of Commerce.
Coalition for Content Provenance and Authenticity (C2PA). Content Credentials specification and related materials.
UNESCO. Guidance and reports on online information integrity and platform governance (selected documents).
Pew Research Center. (2025). Internet/Broadband Fact Sheet. Pew Research Center.
U.S. Equal Employment Opportunity Commission, U.S. Department of Labor, U.S. Department of Justice, & U.S. Civil Service Commission. (1978). Uniform Guidelines on Employee Selection Procedures (29 C.F.R. Part 1607).
National Institute of Standards and Technology (NIST). (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). U.S. Department of Commerce.
Koenecke, A., et al. (2020). Racial disparities in automated speech recognition. Proceedings of the National Academy of Sciences, 117(14), 7684–7689.
United Nations. (2024). United Nations Global Principles for Information Integrity.