Proctorio Blog

Two Conferences, Two Languages: What ATP and ICAI Reveal About the Future of Testing Integrity

Recently I attended two conferences focused on testing and assessment.

  • The first was ATP (Association of Test Publishers), which largely represents credentialing bodies, certification programs, and private-sector assessment providers.
  • The second was ICAI (International Center for Academic Integrity), a community that primarily focuses on higher education and K–12 institutions.

Both groups are grappling with the same fundamental challenge: how to ensure assessments remain fair and authentic in the age of AI.

What stood out most was not simply the differences in the solutions being discussed. It was the differences in how each community talks about the problem itself. The vocabulary, assumptions, and priorities often sounded noticeably different, even when the underlying concerns were very similar.

It is important to note that these observations come from attending a range of sessions and tracks at each conference, not every presentation or participant. The patterns described here reflect themes that emerged across many conversations rather than rigid divisions between the communities. There were ATP speakers who emphasized prevention and thoughtful assessment design, and there were ICAI participants who spoke strongly about the importance of stronger security controls. Still, the overall tone and language used in each community often reflected different professional traditions and priorities.

Understanding those differences reveals something important: these two communities have a great deal to learn from one another.

The Language of the Problem

One of the clearest contrasts between the conferences was the language used to describe the same underlying challenge.

| Topic | ATP Language | ICAI Language |
| --- | --- | --- |
| Core concern | Exam security, fraud prevention, impersonation | Academic integrity, learning culture, student behavior |
| Focus of discussion | Threat actors, vulnerabilities, attack vectors | Motivation, fairness, pressure, student success |
| Test taker framing | Candidates, bad actors, proxies | Students, learners |
| Solutions discussed | Security layers, biometrics, detection algorithms | Honor codes, expectations, institutional culture |
| Measurement terms | Score validity, psychometrics, statistical forensics | Learning integrity, ethical behavior |

In simple terms, ATP conversations often sound similar to cybersecurity discussions, while ICAI conversations sound closer to educational policy or student development discussions.

Neither perspective is wrong. They simply reflect the environments in which each community operates. Credentialing organizations must protect the value of professional certifications that often have financial or regulatory consequences. Universities, on the other hand, must balance enforcement with their broader mission of education and student development.

Different Assumptions About Test Takers

These language differences reflect deeper philosophical assumptions about how assessments should be protected and what role institutions should play.

| Dimension | ATP (Credentialing / Private Sector) | ICAI (Education) |
| --- | --- | --- |
| Default assumption | Systems must prevent fraud | Systems must protect fairness |
| Orientation | Protect certification value and brand | Support student learning |
| Enforcement mindset | Prevent and detect violations | Ensure due process and education |
| Speed of change | Rapid experimentation | Institutional consensus and policy |

During many ATP sessions, the framing often centered on questions like:

“How do we stop increasingly sophisticated cheating operations?”

At ICAI, the discussion was more likely to focus on questions such as:

“Why are students cheating, and how do we change that behavior?”

Both perspectives are responding to real challenges, but they approach the problem from different starting points.

ATP: Security in an AI Arms Race

At ATP, one theme appeared repeatedly across sessions: AI has dramatically accelerated the threat landscape for testing organizations.

Speakers described how cheating has become cheaper, faster, and more scalable than ever before. AI tools can now answer questions for candidates in real time, help impersonate test takers, or harvest exam content for future distribution.

Because of this rapidly evolving landscape, many ATP conversations focused heavily on security infrastructure and detection systems.

A common theme was the move toward multi-layered security, where no single control is expected to solve the problem on its own.

Post-exam analytics and statistical forensics complement these layers, allowing testing organizations to detect compromised exams or coordinated cheating networks even after the testing session has ended.

ICAI: Integrity as a Cultural Problem

At ICAI, the conversations were generally less technical and focused more on student motivation and institutional culture.

Many sessions explored the reasons students choose to cheat in the first place. Research presented at the conference highlighted several common drivers, including pressure to gain admission to competitive programs, heavy course loads, time constraints, and the perception that “everyone else is cheating.”

Because of this focus, ICAI discussions often emphasized solutions such as honor codes, clear expectations around the use of AI tools, consistent enforcement across courses, and programs designed to teach academic integrity.

The underlying belief is that culture and expectations play a critical role in shaping student behavior. While enforcement remains important, many ICAI discussions emphasized prevention through education, clarity, and shared institutional norms.

Where the Two Communities Converge

Despite these differences, the two conferences also revealed important areas of agreement.

| Shared Insight | Implication |
| --- | --- |
| AI permanently changes testing | Traditional approaches alone are no longer sufficient |
| Solutions must be layered | No single tool, design, or policy can solve the problem |
| Assessment design matters | Security cannot simply be added afterward |
| Collaboration is necessary | Testing organizations must learn from one another |

Both communities increasingly recognize that the challenges created by AI cannot be solved by technology alone, nor by policy and culture alone.

Technology without cultural expectations can create adversarial environments between institutions and test takers. At the same time, culture without security controls can leave assessments vulnerable to increasingly sophisticated cheating methods.

A Bridge Between Two Worlds

The most striking takeaway from attending both conferences is that the testing ecosystem sometimes feels like it is splitting into two parallel conversations.

One speaks the language of security, risk management, and psychometrics.

The other speaks the language of education, student development, and integrity culture.

Yet both communities are ultimately trying to answer the same fundamental question:

How can we ensure that the work submitted truly represents the person being assessed?

Solutions increasingly need to operate at the intersection of these two perspectives. The future of testing integrity will likely require systems that combine layered security, behavioral analytics, thoughtful assessment design, and clear expectations for learners.

In other words, technology, culture, and collaboration must work together.

Neither conference, and neither approach, can solve the problem alone.
