TRENDS & INSIGHTS

Kevin Rockmael · April 20, 2026
Recently I attended two conferences focused on testing and assessment: the Association of Test Publishers (ATP) conference and the International Center for Academic Integrity (ICAI) conference.
Both groups are grappling with the same fundamental challenge: how to ensure assessments remain fair and authentic in the age of AI.
What stood out most was not simply the differences in the solutions being discussed. It was the differences in how each community talks about the problem itself. The vocabulary, assumptions, and priorities often sounded noticeably different, even when the underlying concerns were very similar.
It is important to note that these observations come from attending a range of sessions and tracks at each conference, not every presentation or participant. The patterns described here reflect themes that emerged across many conversations rather than rigid divisions between the communities. There were ATP speakers who emphasized prevention and thoughtful assessment design, and there were ICAI participants who argued forcefully for stronger security controls. Still, the overall tone and language used in each community often reflected different professional traditions and priorities.
Understanding those differences reveals something important: these two communities have a great deal to learn from one another.
The Language of the Problem
One of the clearest contrasts between the conferences was the language used to describe the same underlying challenge.
| Topic | ATP Language | ICAI Language |
|---|---|---|
| Core concern | Exam security, fraud prevention, impersonation | Academic integrity, learning culture, student behavior |
| Focus of discussion | Threat actors, vulnerabilities, attack vectors | Motivation, fairness, pressure, student success |
| Test taker framing | Candidates, bad actors, proxies | Students, learners |
| Solutions discussed | Security layers, biometrics, detection algorithms | Honor codes, expectations, institutional culture |
| Measurement terms | Score validity, psychometrics, statistical forensics | Learning integrity, ethical behavior |
In simple terms, ATP conversations often sound similar to cybersecurity discussions, while ICAI conversations sound closer to educational policy or student development discussions.
Neither perspective is wrong. They simply reflect the environments in which each community operates. Credentialing organizations must protect the value of professional certifications that often have financial or regulatory consequences. Universities, on the other hand, must balance enforcement with their broader mission of education and student development.
Different Assumptions About Test Takers
These language differences reflect deeper philosophical assumptions about how assessments should be protected and what role institutions should play.
| Dimension | ATP (Credentialing / Private Sector) | ICAI (Education) |
|---|---|---|
| Default assumption | Systems must prevent fraud | Systems must protect fairness |
| Orientation | Protect certification value and brand | Support student learning |
| Enforcement mindset | Prevent and detect violations | Ensure due process and education |
| Speed of change | Rapid experimentation | Institutional consensus and policy |
During many ATP sessions, the framing often centered on questions like:
“How do we stop increasingly sophisticated cheating operations?”
At ICAI, the discussion was more likely to focus on questions such as:
“Why are students cheating, and how do we change that behavior?”
Both perspectives are responding to real challenges, but they approach the problem from different starting points.
ATP: Security in an AI Arms Race
At ATP, one theme appeared repeatedly across sessions: AI has dramatically accelerated the threat landscape for testing organizations.
Speakers described how cheating has become cheaper, faster, and more scalable than ever before. AI tools can now answer questions for candidates in real time, impersonate test takers, or harvest exam content for future distribution.
Because of this rapidly evolving landscape, many ATP conversations focused heavily on security infrastructure and detection systems.
A common theme was the move toward multi-layered security, where no single control is expected to solve the problem on its own.
Detection algorithms and statistical forensics, for example, allow testing organizations to identify compromised exams or coordinated cheating networks even after a testing session has ended.
ICAI: Integrity as a Cultural Problem
At ICAI, the conversations were generally less technical and focused more on student motivation and institutional culture.
Many sessions explored the reasons students choose to cheat in the first place. Research presented at the conference highlighted several common drivers, including pressure to gain admission to competitive programs, heavy course loads, time constraints, and the perception that “everyone else is cheating.”
Because of this focus, ICAI discussions often emphasized solutions such as honor codes, clear expectations around the use of AI tools, consistent enforcement across courses, and programs designed to teach academic integrity.
The underlying belief is that culture and expectations play a critical role in shaping student behavior. While enforcement remains important, many ICAI discussions emphasized prevention through education, clarity, and shared institutional norms.
Where the Two Communities Converge
Despite these differences, the two conferences also revealed important areas of agreement.
| Shared Insight | Implication |
|---|---|
| AI permanently changes testing | Traditional approaches alone are no longer sufficient |
| Solutions must be layered | No single tool, design, or policy can solve the problem |
| Assessment design matters | Security cannot simply be added afterward |
| Collaboration is necessary | Testing organizations must learn from one another |
Both communities increasingly recognize that the challenges created by AI cannot be solved by technology alone, nor by policy and culture alone.
Technology without cultural expectations can create adversarial environments between institutions and test takers. At the same time, culture without security controls can leave assessments vulnerable to increasingly sophisticated cheating methods.
A Bridge Between Two Worlds
The most striking takeaway from attending both conferences is that the testing ecosystem sometimes feels like it is splitting into two parallel conversations.
One speaks the language of security, risk management, and psychometrics.
The other speaks the language of education, student development, and integrity culture.
Yet both communities are ultimately trying to answer the same fundamental question:
How can we ensure that the work submitted truly represents the person being assessed?
Solutions increasingly need to operate at the intersection of these two perspectives. The future of testing integrity will likely require systems that combine layered security, behavioral analytics, thoughtful assessment design, and clear expectations for learners.
In other words, technology, culture, and collaboration must work together.
Neither conference, and neither approach, can solve the problem alone.