By Whitney Coggeshall
In licensure and credentialing, we often use test scores as a proxy for whether someone is ready to do the job. A passing score is meant to reassure employers, clients, and regulators that a candidate can walk into a role and perform. But the reality is that most assessments were not designed with that goal in mind.
Traditional formats such as multiple-choice and essay questions are efficient, scalable, and familiar. They are also low-fidelity. These methods capture whether someone knows about a concept, not whether they can apply it in a real-world context. Even when test questions aim at higher levels of Bloom’s taxonomy, that does not necessarily mean they assess practical readiness. Yet we are increasingly expecting these formats to tell us whether someone is job-ready. That creates a mismatch between what a test measures and what the job actually demands.
If the goal is to understand whether someone is ready to perform on the job, then we need to assess whether they can actually do the work. That means aligning our assessments with the tasks, decisions, and constraints they are likely to face in practice.
This approach is often called authentic assessment. In practice, it means going beyond asking about a skill to having learners demonstrate it in context. It might involve analyzing data, making judgments based on limited information, prioritizing conflicting demands, or communicating with others in realistic scenarios.
These are not just knowledge-based challenges. They are judgment and performance-based, and they are often shaped by the tools, timelines, and pressures of real work. To measure those skills meaningfully, we need to put learners in situations that resemble how those skills are actually used.
The mistake: Wrapping a case in low-fidelity item types
One common workaround is to take a traditional test format and wrap it in a case study: presenting a scenario, then asking a series of multiple-choice or short-answer questions to simulate a real-world task. In assessment design circles, this is often called a "testlet" or "item set." These formats can be useful for evaluating structured reasoning or conceptual understanding, but they fall short when the goal is to assess practice readiness.
While this approach may introduce some context, the core mechanics of the assessment remain the same. It still feels like a test, not like work. The interaction pattern—read, recall, select or write—does not mirror how professionals actually engage with problems on the job. Delivering low-fidelity item types in a lightly contextualized wrapper is not the same as assessing performance, so it is hard to claim we are measuring job readiness in any meaningful way.
There are understandable reasons why this happens. Traditional assessment systems, including item banks and test delivery platforms, were built around standard item types for efficiency, security, and comparability. These systems are optimized for formats like multiple choice and short answer because those formats are easier to score, deliver at scale, and standardize. So it's natural to try using the tools we already have at our fingertips.
Creating a higher-fidelity experience prioritizes authenticity over standardization, which means many existing systems aren't built to support the kinds of assessment tasks it requires. As demand grows for assessments that better reflect how work is actually done, our infrastructure must evolve.
The better way: Immersive tasks
A more powerful approach is to build assessments that mirror real tasks, not just in content but in structure and flow. These are immersive tasks, in which learners navigate a realistic scenario, make decisions, interact with tools, and see the consequences of their actions.
These kinds of tasks create a higher-fidelity experience because they feel more like the work itself. Learners engage in a sequence of actions that resemble the way tasks unfold on the job, rather than responding to isolated test items.
Immersive tasks offer a way for learners to demonstrate that they are practice-ready, not just familiar with underlying concepts. For example, instead of asking someone to recall the steps of a financial analysis, a higher-fidelity assessment could ask the learner to perform the analysis within a simulated dashboard, drawing on real-time data and navigating tradeoffs under pressure.
For clarity, when I refer to immersive tasks, I do not mean visual flash or advanced hardware. I mean the experience of being placed in a realistic work scenario and engaging with the task in a way that mirrors how it would unfold on the job. In fact, I try to use the term "immersiveness" rather than "immersion" because I think of it not as a binary condition, but a spectrum. An experience can be more or less immersive depending on how closely it reflects the task, environment, and pressures of the real world. The real challenge is finding the level of immersiveness that allows learners to demonstrate what they can do, without adding noise that obscures their performance.
For example, the finance industry doesn't necessarily need cutting-edge VR headsets to achieve immersiveness. An effective task can be delivered via standard computers or web platforms, using realistic scenarios, domain-relevant data, and interactive decision-making to engage learners in a way that conceptually resembles the work.
This kind of task supports stronger assessment by narrowing the gap between what is being measured and how the skill is used in practice. When learners feel like they are in a real situation, they are more likely to apply their skills the way they would on the job. That gives us a clearer picture of what they can actually do.
In the previous section, I mentioned that traditional assessment systems often don't support this type of assessment. Now you can probably see why. Immersive tasks often require complex UX that reflects domain-specific tools and workflows. That makes it difficult to build a single assessment platform that works across industries and use cases. These experiences also complicate scoring, but advances in AI and well-aligned measurement frameworks are helping to address that. Despite the increased complexity, we’ve reached a point where the benefits to stakeholders outweigh the implementation hurdles.
Closing thought: Getting closer to the work
If the goal is to know whether someone is ready to perform, then the assessment should look and feel like the work they are being prepared to do. A multiple-choice question inside a case stem might hint at context, but it is still a test-taking experience. And learners treat it that way.
We have the opportunity to design assessments that capture more than what someone knows. We can build experiences that reveal how they think, what they prioritize, and how they perform under realistic constraints. That means moving beyond legacy item types and investing in higher-fidelity workflows that better align with practice. Will these complicate things like scoring? Yes. But that problem is surmountable.
At the same time, immersiveness is not one-size-fits-all. It should be scaled to the audience, the context, and the skill. We should be aiming for assessment that feels less like a test, and more like doing the work, without overbuilding beyond what the use case requires. This might sound like an easy task, but it’s one of the hardest problems I wrestle with day in and day out.
As I continue to explore this space, I am thinking about how to build assessments that are not just efficient to deliver and scale, but meaningful to interpret. That starts with treating immersiveness and authenticity as essential features of high-quality assessment, not as optional enhancements.
If you are working on assessments that aim to reflect real-world readiness, I would love to connect. The more we share thinking on immersive, task-based design, the better equipped we will all be to build assessments that actually serve the goals we care about.