
The promise and complexity of skills-based learning and assessment

Published 16 Oct 2025

By Whitney Coggeshall 

Everyone’s talking about skills. Here’s why getting them right is so hard.

Have you noticed how job postings these days often emphasize skills over degrees? Employers are asking for things like “data storytelling,” “project management,” or “relationship building” instead of, or in addition to, traditional credentials. Large education companies have noticed too. Many are presenting on skills-based learning and assessment at conferences, updating their product descriptions with this language, and promising to help people build and showcase the exact skills employers want. It sounds simple: focus on what people can do, not just what is on their resume. Unfortunately, work experience or degrees that sound like they should come with a particular set of skills are no guarantee that someone actually has them.

But behind the scenes, there is a reason companies in education, learning, and assessment are so focused on this shift: it is not as straightforward as it seems. Defining what a skill actually is, teaching it in a meaningful way, and proving someone truly has it are enormously complex challenges. The promise of skills-based learning and assessment is huge, but so are the hurdles in making it real.

Take me, for example. One of my key skills is synthesizing information into actionable next steps and presenting it in a way that gets buy-in from others. On the surface, “synthesizing information” sounds like a tidy skill that you either have or you do not. But in practice, it is not universal. In product management or in the world of assessment and measurement, I can do this well because I know the problems, the vocabulary, and the context. If you asked me to synthesize market trends and investment data into a portfolio strategy, I would be completely lost. The label “synthesizing information” stays the same, but the way it is demonstrated depends entirely on the field and situation.

And that is the crux of the challenge. Skills that sound broad and transferable are often deeply tied to context. What counts as effective in one domain may look entirely different in another. This makes it far more complicated to define, measure, or validate skills than most people realize, because we are not measuring just the skill in isolation, but the interplay between the skill and the environment where it is practiced.

Additionally, what counts as “effective” in many domains, especially when we are talking about non-technical skills like communication, leadership, or problem solving, is a bit squishy. What looks like great communication in one organization or culture may come across as ineffective in another. Even within the same domain, what works in one situation might fall flat in another.

There is excellent research on these topics, but research always has limits when it comes to real-world applicability. Controlled studies can identify patterns, but the messy, dynamic nature of actual workplaces means that skills are expressed in highly variable ways. This makes it incredibly challenging to build assessments or learning experiences that are both valid, meaning they measure the right thing, and useful, meaning they apply across contexts.
 

The messy reality of measuring skills

Once you get past the buzz, the first big challenge is figuring out how to measure skills in the first place. With knowledge, it is relatively straightforward. If you want to know whether someone understands financial formulas or historical events, you can write multiple-choice or short-answer questions that test recall or application. Skills, however, do not fit neatly into a test bank.

Think about a skill like communication. Does “good communication” mean giving a polished presentation? Writing a clear email? Leading a team meeting? Listening well in a one-on-one? All of those could count, but they require very different behaviors and evidence. This fuzziness makes it difficult to define exactly what should be measured, let alone how.

Even if you agree on a definition, observing a skill, or even setting up the conditions where a learner can demonstrate it, is far more complicated than it seems. Many skills are demonstrated through performance tasks, like group projects, simulations, role plays, or case studies. These are much closer to real-world practice, but they are time consuming to design and difficult to scale. It is one thing to have a trained facilitator watch five students in a classroom. It is another to provide that kind of observation and feedback to thousands of learners online.

And then there is fairness. Two people might approach the same task differently because of their cultural background, prior work experience, or comfort with the format. If an assessment favors one style over another, you risk measuring confidence or familiarity instead of the actual skill. This is why companies and researchers spend so much time trying to validate their assessments — to make sure they are truly measuring what they claim to measure.
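
To see what that looks like in practice, here is a minimal Python sketch of the kind of first-pass check a validation team might run, using invented scores for two hypothetical groups. A large gap by itself proves nothing; it simply flags where deeper techniques, such as differential item functioning analysis, are needed.

# First-pass fairness check on invented data, standard library only.
from statistics import mean, stdev

# Hypothetical task scores (0-100) for two groups of candidates.
scores = {
    "group_a": [72, 65, 80, 77, 70, 68, 74],
    "group_b": [61, 58, 66, 70, 59, 63, 60],
}

def standardized_gap(a: list[float], b: list[float]) -> float:
    """Cohen's d: the mean score gap in pooled-standard-deviation units."""
    pooled_var = (
        ((len(a) - 1) * stdev(a) ** 2 + (len(b) - 1) * stdev(b) ** 2)
        / (len(a) + len(b) - 2)
    )
    return (mean(a) - mean(b)) / pooled_var ** 0.5

d = standardized_gap(scores["group_a"], scores["group_b"])
print(f"Standardized score gap (Cohen's d): {d:.2f}")
# A large gap is a flag to investigate, not proof of bias: the task may
# be rewarding familiarity with the format rather than the skill itself.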

In other words, the complexity is not just technical. It is human. Skills are lived, contextual, and variable, which makes capturing them reliably a far trickier puzzle than it first appears.

The complexity of teaching skills

If measuring skills is hard, teaching them might be even harder. Knowledge can often be taught through readings, lectures, or videos. You can explain a formula, define a term, or outline a process, and someone can reasonably learn it through study. Skills, on the other hand, rarely develop without practice, feedback, and time.

Take problem solving as an example. You cannot just tell someone, “Be a better problem solver,” and expect it to stick. They need opportunities to wrestle with messy challenges, try out different approaches, and reflect on what worked and what did not. That cycle of practice and reflection is where skill growth happens, and it requires carefully designed learning experiences.

The environment matters too. Skills are context dependent, and practicing them in realistic situations is key. A leadership workshop might give you scenarios on paper, but managing an actual team with competing priorities and personalities is a completely different level of complexity. The closer the learning environment mirrors reality, the more transferable the skill becomes. Creating those authentic environments, whether through live simulations, case studies, or even AI-driven practice tools, is resource intensive.

And then there is feedback. Skills are honed when someone can see where they excelled and where they need to adjust. That means instructors, coaches, or peers have to observe and respond in real time. Scaling that kind of personalized support to large groups is one of the toughest challenges in education today.

In short, teaching skills is not a one-and-done event. It is an ongoing process of practice, application, and feedback, often tied to very specific contexts. That is what makes skills development so powerful, but also so challenging to deliver at scale.

Is AI our savior?

Artificial intelligence is rapidly changing the conversation about skills-based learning and assessment. Tools that once seemed impossible to scale, such as personalized feedback, immersive simulations, or dynamic practice environments, are suddenly more attainable with AI. Imagine an AI coach that can watch you deliver a presentation, highlight where you lost your audience, and suggest improvements. Or picture an adaptive simulation that changes in real time to challenge your problem-solving skills at just the right level. These possibilities make AI look like a potential savior for teaching and measuring skills.
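
To make the adaptive idea concrete, here is a toy Python sketch of one simple approach: a staircase rule that nudges difficulty up after a success and down after a miss. The simulate_response function is just a stand-in for a real learner, and production platforms typically rely on richer models such as item response theory.

# Toy adaptive-difficulty loop: keeps challenge near the learner's level.
import math
import random

def simulate_response(difficulty: float, ability: float = 0.6) -> bool:
    """Stand-in learner: success is more likely when ability exceeds difficulty."""
    p_success = 1 / (1 + math.exp(-6 * (ability - difficulty)))
    return random.random() < p_success

difficulty, step = 0.5, 0.1  # start mid-range, adjust in small steps
for task in range(10):
    correct = simulate_response(difficulty)
    # Staircase rule: harder after a success, easier after a miss.
    difficulty += step if correct else -step
    difficulty = max(0.0, min(1.0, difficulty))
    print(f"task {task + 1}: {'correct' if correct else 'missed'}, "
          f"next difficulty {difficulty:.1f}")
# Over repeated tasks the difficulty hovers near the learner's level,
# keeping practice challenging without becoming discouraging.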

But there are important caveats. AI can create assessments and generate feedback, but that does not mean it is measuring the right thing. Just because a tool produces data does not mean the data reflects the skill we actually care about. Without evidence that the feedback truly reflects the skill being assessed, AI risks creating an illusion of precision rather than real insight. Overreliance on AI can also reduce the human elements that are essential for learning, such as empathy, mentorship, and nuanced judgment. And while AI can make certain aspects of skills development more scalable, it does not erase the fact that skills are still deeply contextual and shaped by real-world practice.

So, is AI our savior? Probably not. But I would say it is a game changer. It may be one of the most powerful tools we have for expanding access, making practice and assessment more authentic, and lowering costs if we use it carefully, thoughtfully, and with an eye on both its strengths and its limitations. I am, in fact, a strong proponent of using AI for skills-based learning and assessment. To see why, consider the experience of new graduates entering the workforce.

Imagine two new graduates with no prior real-world experience, equally matched in technical skills and knowledge, except that one has trained in an AI simulator built around authentic tasks they would actually face on the job. I would confidently pick that candidate. All humans need to start somewhere. We could throw new graduates straight into the deep end and let them figure things out by interacting with clients, but that comes with real risks for an organization and its brand. Alternatively, giving them the chance to practice first through AI-based training creates a safer environment where they can make mistakes, learn from them, and build confidence before stepping into real-world situations. In this way, AI helps de-risk those early experiences while still preparing people for the complexity of actual work.

Examples like this show how AI can play an important role, while still falling short of replacing the practice, feedback, and judgment that only come from real-world experience.

What’s next in skills-based learning and assessment

The conversation about skills is not slowing down any time soon. In fact, it is only becoming more central as industries change and the workforce evolves. So where might we be headed?

One clear direction is innovation in assessment. AI and digital technology are opening doors to tools like interactive simulations, adaptive practice platforms, and automated yet personalized feedback. These approaches move beyond static tests and create opportunities to demonstrate skills in more authentic, real-world ways.

Credentialing is also shifting. Microcredentials, digital badges, and portable skills “passports” are gaining traction as ways to signal specific capabilities to employers. Instead of relying only on a degree or a job title, learners may carry a digital record of the skills they have demonstrated across different contexts. But all of this only works if there is trust. If companies and hiring managers do not believe that a microcredential actually reflects skill, then the signal is meaningless. Without trust, the entire system collapses into noise.
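
As a purely hypothetical illustration of what such a portable record might carry, here is a minimal Python sketch. Every field name is invented, and no existing credentialing standard is implied; note the final field, which points directly at the trust problem.

# Hypothetical shape of one entry in a portable skills record.
from dataclasses import dataclass, field

@dataclass
class SkillCredential:
    # All field names are invented for illustration; no standard implied.
    skill: str                      # e.g. "data storytelling"
    context: str                    # where the skill was demonstrated
    evidence_type: str              # simulation, work sample, exam, ...
    issuer: str                     # who vouches for the evidence
    issued_on: str                  # ISO date string
    validity_evidence: list[str] = field(default_factory=list)

record = SkillCredential(
    skill="data storytelling",
    context="financial services case simulation",
    evidence_type="AI-scored simulation with human review",
    issuer="Hypothetical Credentialing Body",
    issued_on="2025-10-16",
    validity_evidence=["criterion study", "expert content review"],
)
print(f"{record.skill} credential from {record.issuer}")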

One of the most powerful tools for building that trust is validity. In the measurement world, validity is the way we show that an assessment really measures what it claims to measure. A credential that rests on valid evidence sends a signal employers can rely on, and without that foundation the credential has little value. Validity is essentially the bridge between what an assessment claims to measure and the trust people place in that claim.
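
For a concrete flavor of what validity evidence can look like, here is a minimal Python sketch of one common strand, a criterion-related validity coefficient, computed on invented data. A full validity argument draws on many more sources of evidence, such as content reviews and construct studies.

# Does the assessment predict an outcome employers care about?
from statistics import correlation  # available in Python 3.10+

# Invented data: assessment scores and later supervisor ratings
# for the same eight people.
assessment_scores = [62, 75, 80, 58, 90, 71, 84, 66]
performance_ratings = [3.1, 3.8, 4.0, 2.9, 4.5, 3.5, 4.2, 3.2]

r = correlation(assessment_scores, performance_ratings)
print(f"Criterion-related validity coefficient: r = {r:.2f}")
# The closer r is to 1, the stronger the evidence that the assessment
# tracks an outcome employers actually care about.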

At the same time, we cannot ignore the balancing act. Valid measurement requires rigor, but broad adoption requires accessibility and affordability. Pushing too far in one direction risks losing credibility, while pushing too far in the other risks leaving learners behind. The challenge ahead is to design systems that are trustworthy, scalable, and widely recognized by the audiences that matter most.

Ultimately, the future of skills-based learning and assessment will depend on thoughtful design and collaboration. Educators, employers, and technology providers will need to work together to define what matters, how to teach it, and how to prove it fairly. Done well, this shift has the potential to make education more relevant, hiring more equitable, and career pathways more transparent. Done poorly, it risks becoming just another buzzword.

Bringing it all together

Everyone is talking about skills these days, and for good reason. They promise a more flexible, fair, and practical way of connecting what people know with what the world needs. But as we have seen, turning that promise into reality is far from simple. Defining skills, teaching them in meaningful ways, and measuring them fairly are all enormously complex challenges.

The real key to making skills matter is trust. Learners need to trust that the time and money they invest in earning a credential will be recognized. Employers need to trust that when a candidate presents evidence of a skill, it actually means they can perform it. And society at large needs to trust that skills-based systems are not just buzzwords, but reliable signals that open doors to opportunity.

This is where validity comes in. In the measurement world, validity is the technical proof that an assessment measures what it says it does. It is our way of translating rigor into trust. Without it, skills-based credentials risk becoming noise in an already crowded marketplace. With it, they can become powerful, portable signals that truly change the way we learn, hire, and grow.

So the next time you hear about skills-based learning and assessment, remember that the buzz is justified, but the work is real. The future will not be built on slogans. It will be built on systems that earn and sustain trust. And that is where the real challenge — and opportunity — lies.

 

Whitney Coggeshall, PhD

Director of Product Management, Skills-Based Learning and Assessment

Whitney leads the strategy and development of innovative educational products at CFA Institute, with a focus on skills-based learning and assessment. She develops scalable, authentic learning and assessment solutions, leveraging emerging technologies such as artificial intelligence. Her work ensures that offerings align with the evolving needs of students and professionals in the financial services industry. With a career spanning psychometrics, applied research, and product management, she brings a unique blend of expertise to advancing educational innovation. Whitney holds a PhD in Educational Research and Measurement from the University of South Carolina and is currently pursuing an MBA with a specialization in entrepreneurship and finance from the University of North Carolina.

