AI Ready University (21): Personalized Learning — Separating the Hype from What Actually Works

March 31, 2026

Let me be blunt about something: personalized learning might be the single most over-promised, under-delivered concept in education technology over the past decade. And I say that as someone who genuinely believes in its potential.

If you’re an investor planning to launch a new college, university, or career school, you’ve almost certainly encountered the pitch. An ed-tech vendor slides a deck across the table showing adaptive algorithms that "meet every student where they are," platforms that "eliminate the one-size-fits-all classroom," and dashboards that "predict student success before the first exam." The global adaptive learning market grew from roughly $2.87 billion in 2024 to an estimated $4.39 billion in 2025—a staggering 52.7% year-over-year surge—and every vendor in that space wants a piece of your institutional budget.

Some of those promises are real. Many are not. And the difference between the two matters enormously when you’re building a school from the ground up, choosing technology platforms that will shape your students’ experience for years, and answering to accreditors who want evidence that your tools actually improve outcomes.

I’ve helped over two dozen institutions evaluate, procure, and implement adaptive learning platforms over the past eighteen months. What I’ve seen is a landscape where a few approaches are genuinely backed by solid research, a larger number are promising but unproven at scale, and a disturbing number are little more than marketing gloss over basic functionality. This post is my attempt to give you the honest breakdown—the kind your vendor won’t.

We’ll cover the evidence base for the major platforms, the genuine risks that come from over-personalization, which data points actually predict student success versus which ones are noise, what role your faculty play when algorithms enter the classroom, and the equity implications that most vendor pitches conveniently skip. If you’re spending six figures on adaptive technology—and you probably will be—you deserve the full picture before you sign.

What Personalized Learning Actually Means (And What It Doesn’t)

Before we evaluate the evidence, let’s define our terms. Personalized learning is a broad umbrella covering any instructional approach that tailors the pace, content, sequence, or method of instruction to individual student needs. That definition is so wide it can include everything from a professor adjusting their lecture after seeing puzzled faces to a sophisticated AI engine rerouting a student through prerequisite material in real time.

Within that umbrella, three distinct approaches dominate the current market, and mixing them up is one of the most expensive mistakes I see founders make:

Adaptive learning platforms use algorithms to adjust the difficulty, sequencing, or type of content a student receives based on their performance. These systems collect data on every interaction—correct answers, incorrect answers, time spent, hint usage—and use that data to build a learner model that guides what comes next. ALEKS, Knewton Alta, Realizeit, and DreamBox are examples in this category.
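If you want a feel for what is happening under the hood, here is a deliberately simplified sketch of a learner model in Python. The topics, thresholds, and update rule are hypothetical placeholders rather than any vendor's actual algorithm (real engines such as ALEKS use far richer knowledge-space models), but the core loop is the same: record a response, update a mastery estimate, and serve the next topic whose prerequisites are satisfied.

```python
from dataclasses import dataclass, field

# Hypothetical prerequisite map: a topic unlocks once its prerequisites are mastered.
PREREQS = {
    "fractions": [],
    "linear_equations": ["fractions"],
    "quadratics": ["linear_equations"],
}

MASTERY_THRESHOLD = 0.8   # illustrative cutoff, not any vendor's actual value
LEARNING_RATE = 0.3       # how strongly a single response moves the estimate

@dataclass
class LearnerModel:
    """Tracks an estimated probability of mastery for each topic."""
    mastery: dict = field(default_factory=lambda: {t: 0.2 for t in PREREQS})

    def record_response(self, topic: str, correct: bool) -> None:
        # Nudge the estimate toward 1.0 on a correct answer, toward 0.0 on a miss.
        target = 1.0 if correct else 0.0
        self.mastery[topic] += LEARNING_RATE * (target - self.mastery[topic])

    def next_topic(self):
        # Serve the first unmastered topic whose prerequisites are all mastered.
        for topic, prereqs in PREREQS.items():
            if self.mastery[topic] < MASTERY_THRESHOLD and all(
                self.mastery[p] >= MASTERY_THRESHOLD for p in prereqs
            ):
                return topic
        return None  # everything mastered

model = LearnerModel()
model.record_response("fractions", correct=True)
print(model.next_topic())  # still "fractions" until the estimate clears the threshold
```

Every choice in that loop, from how fast estimates move to where the mastery cutoff sits, is a pedagogical judgment someone made on your students' behalf. That is worth remembering when we get to instructional design later in this post.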

Learning analytics dashboards aggregate student data for faculty and advisors to review, typically showing patterns like login frequency, assignment completion rates, time on task, and early warning indicators for at-risk students. These don’t directly personalize instruction; they give humans the information to do so. Civitas Learning and platforms built into major LMS systems like Canvas and Blackboard fall here.

AI-powered tutoring systems use large language models or specialized AI to provide Socratic-style dialogue, answer student questions, and guide problem-solving in real time. Khanmigo from Khan Academy and various GPT-powered tools represent this newer wave. We covered AI tutoring in depth in Post 20 of this series, so I’ll focus here primarily on the first two categories.

The critical distinction for investors: adaptive platforms and AI tutors do different things, solve different problems, and have very different evidence bases. Vendors routinely conflate them, partly because the term “personalized learning” has become so elastic it’s nearly meaningless as a marketing differentiator.

The Evidence Base: What Actually Works, What Doesn’t, and What We Still Don’t Know

This is the section I wish every ed-tech salesperson would read before their next pitch, because the evidence picture is far more nuanced than any vendor deck suggests.

Where the Research Is Strongest: Math and Gateway Courses

The most robust evidence for adaptive learning comes from mathematics, and specifically from one platform: ALEKS (Assessment and Learning in Knowledge Spaces), now owned by McGraw Hill. A meta-analysis published in Investigations in Mathematics Learning reviewed 56 independent effect sizes across more than 9,200 students in K–12 and higher education. The headline finding? When ALEKS was used as the sole method of instruction, outcomes were comparable to traditional teaching (Hedges’ g = 0.05). But when ALEKS supplemented traditional instruction—used alongside a real professor in a blended model—the effect was meaningful (g = 0.43).
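For readers who want to interpret those numbers: Hedges’ g is simply a standardized mean difference with a small-sample correction, so g = 0.43 means the supplemented sections scored roughly four-tenths of a pooled standard deviation higher than the comparison sections. The standard formula is:

```latex
g = J \cdot \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}},
\qquad
J \approx 1 - \frac{3}{4(n_1 + n_2) - 9}
```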

That distinction is everything. The platform works best not as a replacement for human instruction, but as a complement to it. Mississippi State University’s ALEKS pilot reported higher pass rates and improved student confidence in College Algebra. San Antonio College used ALEKS to improve placement accuracy and reduce time students spent in developmental math. Arizona State University’s partnership with adaptive platforms showed early promise in reducing course withdrawal rates.

The pattern is consistent: adaptive platforms in math and quantitative fields, deployed alongside faculty instruction, tend to produce measurable gains. The keyword there is “alongside.”

Where the Research Gets Murky: Beyond Math

Once you move beyond math and structured STEM courses, the evidence thins considerably. Adaptive platforms rely on well-defined knowledge maps—clear prerequisite chains where topic A must be mastered before topic B. Math has this structure naturally. Nursing, business communications, creative writing, and most social science fields do not.

A comprehensive review of 127 studies on AI-enabled adaptive learning platforms, published in 2025 in Computers & Education Open, found that the real-world success of these systems was consistently tied to pedagogical foundations—mastery learning, spaced practice, formative assessment—rather than algorithms alone. The technology is only as good as the instructional design underneath it.

A 2025 user experience study comparing Khan Academy, Coursera, and Codecademy found something that should give every investor pause: participants described the AI-driven adaptive features on all three platforms as “subtle and minimally impactful.” The core platform interactivity—not the AI personalization—was the dominant factor in engagement and satisfaction. In plain language: the “adaptive” label was more marketing than reality for many users.

The Cautionary Tale of Knewton

No discussion of adaptive learning evidence would be complete without the Knewton story, and it’s one every education investor should know cold.

Knewton, founded in 2008, raised over $180 million in venture capital on the promise of building a “robot tutor in the sky” that could “semi-read your mind and figure out what your strengths and weaknesses are, down to the percentile.” The company partnered with Pearson, Arizona State University, Houghton Mifflin Harcourt, and others. The vision was seductive: a universal adaptive engine that would personalize every piece of educational content for every student on earth.

By 2019, Knewton was sold for a fraction of its valuation. What went wrong? Several things. The adaptive technology wasn’t sufficiently differentiated from competitors. The research evidence for Knewton’s specific impact was, by independent assessment, mixed to inconclusive. A U.S. Department of Education study found Knewton’s adaptive math course had no significant impact on student achievement compared to a traditional course. And the company’s reliance on partner publishers for content limited its ability to innovate.

The Knewton name lives on as Knewton Alta, now owned by Wiley, operating as a more modest adaptive courseware product. But the cautionary tale is clear: $180 million in venture funding couldn’t overcome the gap between the promise of personalization and the reality of learning.

And Knewton wasn’t an isolated case. Smart Sparrow was acquired by Pearson. DreamBox went through ownership changes. The pattern is consistent: when venture capital demands rapid scaling and marketing outpaces evidence, the product eventually collapses under the weight of its own promises. The survivors—ALEKS, Realizeit, a handful of others—focused on narrow, well-defined use cases where the technology actually works, rather than trying to be everything to everyone.

The lesson for founders: never confuse a compelling pitch deck with peer-reviewed evidence. If a vendor can’t show you independent studies demonstrating improved student outcomes with their specific platform, treat every claim as aspirational until proven otherwise.

Evidence Summary: What the Research Actually Supports

| Approach | Evidence Strength | Best Use Case | Key Caveat |
|---|---|---|---|
| Adaptive math platforms (ALEKS, DreamBox) | Strong | Supplementing faculty instruction in gateway math courses | Works best blended with human teaching, not as standalone replacement |
| Adaptive STEM courseware | Moderate | Structured science and quantitative courses with clear prerequisite chains | Requires well-defined knowledge maps; less effective in unstructured domains |
| AI-powered tutoring (Khanmigo, GPT-based) | Emerging (promising RCTs) | Real-time Socratic dialogue, personalized feedback, practice generation | Newer evidence; concerns about dependency and metacognitive effects |
| Learning analytics dashboards | Moderate | Early warning systems for at-risk students; advising support | Effectiveness depends entirely on human response to data; technology alone changes nothing |
| Adaptive platforms for humanities/social sciences | Weak | Limited applications in structured skill-building (grammar, vocabulary) | Domain structure doesn’t support fine-grained adaptation; mostly aspirational |
| Full-spectrum “hyper-personalization” | Very Weak | Marketing decks | No independent evidence supports the claim that any current platform delivers meaningful hyper-personalization across all domains |


The Metacognition Problem: When Personalization Makes Students Worse Learners

This is the risk almost nobody talks about in the adaptive learning sales pitch, and it’s the one that keeps me up at night as an advisor.

Metacognition is the ability to think about your own thinking—to monitor your understanding, recognize when you’re confused, and adjust your learning strategies accordingly. It’s arguably the most important skill a student can develop, and there’s a growing body of research suggesting that poorly designed adaptive systems can actively undermine it.

Here’s the mechanism. A well-designed adaptive platform identifies that a student is struggling with a concept and immediately routes them to remedial content. The student masters the remedial material, returns to the original concept, and moves forward. Sounds great, right?

The problem is what the student didn’t do: they didn’t sit with the confusion. They didn’t struggle productively. They didn’t develop the metacognitive muscle of recognizing “I’m lost, and here’s what I need to do about it.” The algorithm did that work for them. Over time, students who rely on adaptive systems that manage their learning for them may develop what researchers have termed “cognitive laziness”—a diminished capacity for self-monitoring and self-regulation.

A 2025 study published in Frontiers in Psychology examining the cognitive paradox of AI in education found that while AI-based tools enhance accessibility and personalized delivery, excessive reliance correlates with decreased cognitive engagement and reduced long-term retention. Another study involving undergraduate students found that while pretesting before using AI tools improved retention and engagement, prolonged AI exposure led to measurable memory decline. Professor Rose Luckin’s research team at EDUCATE Ventures reported that recent studies show AI can reduce learning anxiety and support adaptive strategies, but may also diminish self-monitoring and metacognitive awareness.

A 2025 review published by IntechOpen mapped the interaction between metacognitive learning strategies and AI systems, identifying three emerging risk patterns: cognitive laziness (students stop self-monitoring because the algorithm does it), cognitive overconfidence (students assume AI-guided learning is sufficient and skip deeper engagement), and unconditional reliance on algorithmic guidance (students lose the habit of making their own learning decisions).

For you as a founder, the practical implication is this: if you deploy adaptive platforms without intentional pedagogical design around metacognitive development, you risk producing graduates who can perform well within the platform’s environment but struggle the moment they encounter a learning challenge without algorithmic scaffolding. That’s the opposite of what employers want.

How to Mitigate the Metacognition Risk

The institutions getting this right are doing several things deliberately. They’re building reflection requirements into adaptive assignments—asking students to write brief analyses of what they found difficult, what strategies they used, and what they’d do differently next time. They’re incorporating periodic “AI-free” assessments where students must demonstrate skills without algorithmic support. They’re training faculty to use adaptive platform data not as a replacement for instructional judgment, but as one input among many. And they’re designing capstone experiences that require students to transfer learning from the platform environment to unstructured, real-world problems.

One allied health program I consulted with solved this elegantly. Students used an adaptive platform for clinical knowledge review, but every clinical skills assessment was conducted live, in person, with no technology assistance. The student had to demonstrate not just that they knew the material, but that they could retrieve it, apply it, and self-correct without a system telling them what to review next. Their accreditation evaluator specifically praised this balance.

Learning Analytics: What Data Actually Predicts Student Success

If adaptive platforms are the “student-facing” side of personalized learning, learning analytics is the “institution-facing” side. And frankly, it’s where I see the most underappreciated value for new institutions.

Learning analytics is the measurement, collection, analysis, and reporting of data about learners and their contexts, used to understand and optimize learning and the environments in which it occurs. In practice, this means tracking what students do—when they log in, how long they spend on materials, which assignments they complete, how they perform on formative assessments—and using that data to identify patterns that predict success or failure.

The evidence here is more mature than many people realize. Georgia State University’s pioneering predictive analytics system tracks over 800 risk factors for more than 40,000 students daily, triggering approximately 90,000 targeted interventions annually. Their approach contributed to meaningful improvements in retention while narrowing achievement gaps across demographic groups. That’s not a vendor claim—it’s been independently documented and widely studied.

A 2025 Stanford study found that data from just two to five hours of student activity within an intelligent tutoring system or learning platform could meaningfully predict whether a student would fall in the bottom or top quintile on delayed assessments months later. The key predictive features were straightforward: the percentage of problems the student answered correctly (success rate) and the average number of attempts per problem. These aren’t exotic AI metrics—they’re basic performance indicators that any platform can track.
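To show just how simple those two features are, here is a minimal sketch of computing them from a platform's interaction log. The column names and the toy data are hypothetical, and this is not the Stanford team's code; it only illustrates that the predictive signal comes from basic counting rather than exotic modeling.

```python
import pandas as pd

# Hypothetical interaction log: one row per problem attempt.
log = pd.DataFrame({
    "student_id": ["s1", "s1", "s1", "s2", "s2", "s2"],
    "problem_id": ["p1", "p1", "p2", "p1", "p2", "p2"],
    "correct":    [0, 1, 1, 1, 1, 0],
})

# Feature 1: success rate, here computed as the share of problems eventually answered correctly.
success_rate = (
    log.groupby(["student_id", "problem_id"])["correct"].max()
       .groupby("student_id").mean()
       .rename("success_rate")
)

# Feature 2: average number of attempts per problem.
avg_attempts = (
    log.groupby(["student_id", "problem_id"]).size()
       .groupby("student_id").mean()
       .rename("avg_attempts")
)

# Students near the bottom of the distribution on these features were the ones
# the study found most likely to land in the bottom quintile on delayed assessments.
features = pd.concat([success_rate, avg_attempts], axis=1)
print(features)
```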

For a new institution, the practical value of learning analytics is enormous—and it’s often overlooked because it’s less flashy than adaptive platforms. You don’t need a sophisticated AI engine to identify that a student hasn’t logged into the LMS in ten days. You don’t need machine learning to notice that assignment completion in a particular section dropped 40% between weeks three and four. What you need is a system that surfaces these patterns and a process for responding to them. The technology side is relatively simple; the institutional response is the hard part.

I advised a small career college that implemented a basic early-warning analytics dashboard—nothing fancy, just automated alerts when students missed two consecutive assignments or showed a significant drop in quiz performance. The dean of students assigned an advisor to follow up with each flagged student within 48 hours. First-to-second term retention improved by nearly eight percentage points that year. Total technology cost: under $12,000. Total impact: significant. That’s the kind of ROI story that resonates with both investors and accreditors.
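Here is a sketch of the kind of alert rule that dashboard ran, with hypothetical field names and thresholds; in practice the records would come from your LMS gradebook export or API rather than a hand-built list.

```python
# Hypothetical gradebook snapshot for one student (one entry per week).
records = [
    {"week": 3, "submitted": True,  "quiz_score": 0.85},
    {"week": 4, "submitted": False, "quiz_score": None},
    {"week": 5, "submitted": False, "quiz_score": None},
]

QUIZ_DROP_THRESHOLD = 0.20  # illustrative: flag a 20-point drop between consecutive quizzes

def needs_outreach(records) -> bool:
    """Flag a student who missed two consecutive assignments or whose quiz performance dropped sharply."""
    # Rule 1: two consecutive missed assignments.
    for prev, curr in zip(records, records[1:]):
        if not prev["submitted"] and not curr["submitted"]:
            return True
    # Rule 2: a significant drop between consecutive quiz scores.
    scores = [r["quiz_score"] for r in records if r["quiz_score"] is not None]
    for prev, curr in zip(scores, scores[1:]):
        if prev - curr >= QUIZ_DROP_THRESHOLD:
            return True
    return False

if needs_outreach(records):
    print("Flag for advisor follow-up within 48 hours")
```

The rule itself is trivial. The eight-point retention gain came from the 48-hour human follow-up, not from the code.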

The Data That Actually Matters

Based on the current research and what I’ve seen work in practice, here’s what the data consistently tells us predicts student success—and what doesn’t:

| Predictive Factor | Evidence Strength | Practical Implication |
|---|---|---|
| Early assignment completion patterns | Strong | Students who fall behind in the first two weeks are significantly more likely to withdraw or fail; early alerts save retention |
| LMS login frequency and consistency | Moderate to Strong | Regular engagement correlates with success; sudden drops signal risk before grades do |
| Formative assessment performance trends | Strong | Declining performance on low-stakes quizzes is a better predictor than midterm grades, which arrive too late |
| Time on task (with caveats) | Moderate | More time spent isn’t always better—it can indicate struggle as easily as engagement; context matters |
| Financial aid disbursement timing | Moderate | Delayed financial aid correlates with attrition; not a “learning” metric, but a powerful institutional one |
| Prior academic preparation | Strong | Incoming GPA and standardized test data remain among the strongest predictors, though adaptive systems can help close preparation gaps |
| Learning style preferences | Weak to None | Despite persistent marketing claims, the “learning styles” model has been widely debunked; platforms that claim to adapt to “visual vs. auditory learners” are selling a myth |

That last row deserves emphasis. If a vendor tells you their platform adapts to individual learning styles, that’s a red flag, not a feature. The concept of distinct learning styles (visual, auditory, kinesthetic) as a basis for instructional design has been thoroughly dismantled by decades of cognitive science research. Knewton was criticized for relying on precisely this kind of discredited framework. Don’t let the next vendor repeat that mistake with your money.

Faculty Roles in AI-Personalized Learning Environments

Here’s the part most people get wrong about personalized learning: they assume it reduces the need for faculty. The opposite is true. Well-implemented adaptive platforms create more work for instructors, not less—but it’s different work, and it’s more valuable work.

In a traditional lecture model, the professor’s primary role is content delivery. In an adaptive learning environment, the platform handles a significant portion of content delivery and formative practice. This frees faculty to focus on what algorithms cannot do: facilitating discussion, providing nuanced feedback on complex work, mentoring students through ambiguous problems, and making pedagogical judgment calls that require human context.

But here’s the catch: most faculty haven’t been trained for this role shift. They were hired as subject matter experts, not data analysts or learning coaches. Moving a professor from “I deliver content” to “I interpret analytics and intervene strategically” requires intentional professional development—and that’s an investment many institutions skip.

What Effective Faculty Development Looks Like

For a new institution building from scratch, you have the advantage of hiring faculty who are ready for this model. In your job descriptions, your interview process, and your orientation program, make it explicit that instructors will be expected to work alongside adaptive tools, not compete with them. Then invest in training that covers at least these areas:

Interpreting learning analytics. Faculty need to understand what dashboard data actually tells them and—equally important—what it doesn’t. A student logging in frequently but scoring poorly on quizzes tells a different story than a student who logs in rarely but aces every assessment. Training faculty to read patterns, not just numbers, is essential.

Designing blended experiences. The most effective adaptive learning implementations use a blended model: platform-based practice for skill building, and face-to-face or synchronous sessions for application, analysis, and discussion. Faculty need practical skills in designing this interplay so the two modes reinforce each other rather than creating a disjointed experience.

Maintaining pedagogical authority. There’s a real risk that faculty begin to defer to the platform’s algorithm instead of exercising their own professional judgment. If the analytics say a student is “on track” but the instructor senses something is off during a class discussion, the instructor’s instinct should matter. Adaptive platforms are tools, not oracles.

Supporting metacognitive development. As discussed above, this is perhaps the most important faculty role in a personalized learning environment. Instructors need strategies for helping students develop self-awareness as learners, even when an algorithm is managing much of their learning path.

Budget for 30–50 hours of professional development per faculty member in the first year, with quarterly refreshers. This is non-negotiable for any institution making a serious adaptive learning investment. I’ve seen two institutions launch adaptive platforms without faculty training—one abandoned the platform within a semester, and the other had faculty working around the tool rather than with it.

The Equity Question: Can Algorithms Serve Everyone Fairly?

This is where the conversation about personalized learning gets uncomfortable, and it should.

Adaptive learning platforms are trained on data. If that data reflects historical patterns of educational inequality—and in the United States, it overwhelmingly does—the algorithms risk perpetuating those patterns. A platform trained primarily on data from well-resourced universities may not serve community college students, ESL learners, or first-generation students the same way. A system that interprets slower response times as “lower ability” may disadvantage students for whom English is a second language, students with disabilities, or students who simply think more carefully before answering.

Research published in the World Journal of Advanced Research and Reviews in 2025 found that algorithmic bias in educational systems operates through multiple channels—from data collection and algorithm design to implementation practices and institutional policies—and can systematically disadvantage students from marginalized communities. The OECD’s Digital Education Outlook specifically examined educational AI fairness and found that affect-detection models performed more poorly for students in rural communities than for those in urban or suburban settings. That’s not a theoretical risk; it’s a documented one.

For institutions serving diverse populations—which describes nearly every career college, trade school, and community-oriented program I work with—this issue demands proactive attention. Here’s what that looks like in practice:

Equity Safeguards for Adaptive Platform Deployment

| Safeguard | Implementation | Why It Matters |
|---|---|---|
| Vendor transparency audit | Require vendors to disclose training data demographics, bias testing results, and fairness metrics before contract signing | You can’t address bias you can’t see; many vendors don’t test for demographic fairness unless required to |
| Disaggregated outcome monitoring | Track platform outcomes by race, ethnicity, gender, language background, disability status, and socioeconomic indicators quarterly (a minimal sketch follows this table) | Aggregate data can hide disparities; a platform that works “on average” may be failing specific student populations |
| Alternative assessment pathways | Ensure students can demonstrate competency through non-platform means (oral exams, portfolio reviews, practical demonstrations) | Platform-dependent assessment disadvantages students who don’t perform well in digital environments for reasons unrelated to knowledge |
| Accessibility compliance verification | Verify WCAG 2.1 AA compliance, screen reader compatibility, and accommodations for timed assessments | Adaptive platforms that aren’t accessible to students with disabilities violate Section 504 and ADA requirements |
| Digital literacy scaffolding | Provide orientation and support for students unfamiliar with platform interfaces | Assuming platform fluency penalizes students from under-resourced educational backgrounds |
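To make the disaggregated-monitoring safeguard concrete, here is a minimal quarterly check in Python. The group labels, toy data, and ten-point disparity threshold are hypothetical, and a real review should also account for small group sizes and statistical uncertainty before drawing conclusions.

```python
import pandas as pd

# Hypothetical end-of-term outcomes for one course using the adaptive platform.
outcomes = pd.DataFrame({
    "student_id":    ["s1", "s2", "s3", "s4", "s5", "s6"],
    "language_bkgd": ["English", "ESL", "ESL", "English", "ESL", "English"],
    "passed":        [1, 0, 1, 1, 0, 1],
})

overall_pass = outcomes["passed"].mean()
by_group = outcomes.groupby("language_bkgd")["passed"].mean()

DISPARITY_THRESHOLD = 0.10  # illustrative: flag any group more than 10 points below the overall rate

print(f"Overall pass rate: {overall_pass:.0%}")
for group, rate in by_group.items():
    flag = "  <-- review" if overall_pass - rate > DISPARITY_THRESHOLD else ""
    print(f"{group}: {rate:.0%}{flag}")
```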

I worked with a trade school in the Southeast that deployed an adaptive platform for its pharmacy technician program without doing this equity analysis. Within the first semester, pass rates for ESL students dropped compared to the previous cohort that had used traditional instruction. The platform’s timed assessments and English-centric interface were creating barriers the school hadn’t anticipated. It took a full semester of adjustments—adding time extensions, providing bilingual glossaries, and offering platform orientation sessions—before ESL student outcomes recovered. That’s a semester of student harm that could have been prevented with upfront equity planning.

There’s another dimension to the equity issue that’s less discussed but equally important: the digital divide in platform access itself. Adaptive platforms are data-heavy, require reliable internet, and are typically designed for modern devices. Students accessing materials from older phones, shared family computers, or unstable rural broadband connections don’t get the same adaptive experience as students on campus Wi-Fi with current laptops. If your institution serves populations with mixed technology access—and nearly every career college and community-oriented program does—you need a technology access plan alongside your adaptive learning plan. Loaner device programs, on-campus computer labs with extended hours, and mobile-optimized platform requirements should all be part of the conversation before you select a vendor.

One more equity consideration: the cultural assumptions embedded in adaptive content. A platform designed for traditional American higher education students may include examples and assessment contexts that feel unfamiliar to international students, immigrant communities, or students from indigenous backgrounds. Content customization isn’t just about aligning to your curriculum—it’s about ensuring the learning experience feels accessible to your specific student body. This is one reason I always recommend piloting adaptive platforms with a representative sample of your actual student population before committing to institutional deployment.

The Cost Reality: What Personalized Learning Platforms Actually Cost

Vendors aren’t always upfront about total costs, so let me give you the ranges I’m seeing across client implementations in 2025 and 2026. These are real numbers, not vendor list prices.

| Cost Category | Small Institution (3–5 Programs) | Mid-Size Institution (8–15 Programs) | Notes |
|---|---|---|---|
| Platform licensing | $15,000–$45,000/year | $50,000–$150,000/year | Per-student pricing ranges from $25–$80/student/course depending on vendor and volume |
| Implementation and integration | $10,000–$25,000 (one-time) | $25,000–$75,000 (one-time) | LMS integration, SSO configuration, data migration; higher if legacy systems are involved |
| Faculty professional development | $10,000–$25,000/year | $25,000–$60,000/year | 30–50 hours per faculty member in year one; ongoing quarterly refreshers |
| Content customization | $5,000–$15,000 (one-time) | $15,000–$40,000 (one-time) | Aligning platform content to your specific curriculum; often underestimated |
| Ongoing technical support | $3,000–$8,000/year | $8,000–$20,000/year | Vendor support tiers plus internal IT staff time |
| Equity and accessibility review | $3,000–$8,000 (one-time) | $8,000–$15,000 (one-time) | Bias audit, WCAG compliance verification, disaggregated outcome analysis setup |
| Total Year One | $46,000–$126,000 | $131,000–$360,000 | Year two and beyond drops 30–40% as one-time costs are absorbed |

The ROI calculation is straightforward in theory but tricky in practice. If adaptive platforms improve retention by even 3–5 percentage points, the tuition revenue retained typically exceeds the platform cost within a year. But that “if” is doing a lot of heavy lifting. Unless you’re measuring outcomes rigorously—with pre-implementation baselines and disaggregated data—you won’t know whether the platform is driving the improvement or whether other factors (a stronger cohort, better advising, improved marketing) deserve the credit.
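To make that "if" visible, here is a back-of-envelope version of the calculation. Every figure below is hypothetical; swap in your own enrollment, net tuition, and platform costs before drawing any conclusions.

```python
# Hypothetical institution; replace every figure with your own.
enrolled_students  = 300
annual_net_tuition = 9_000      # per student, after discounting
retention_gain_pp  = 0.04       # a 4-percentage-point retention improvement attributed to the platform
platform_year_one  = 60_000     # within the small-institution range in the cost table above

students_retained = enrolled_students * retention_gain_pp    # 12 additional students retained
revenue_retained  = students_retained * annual_net_tuition   # $108,000
net_first_year    = revenue_retained - platform_year_one     # $48,000

print(f"Revenue retained: ${revenue_retained:,.0f}")
print(f"Net after year-one platform costs: ${net_first_year:,.0f}")
```

The arithmetic is trivial. The fragile line is retention_gain_pp: unless your measurement design can attribute that gain to the platform, the whole return evaporates, which is why the next paragraph matters.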

Here’s what I recommend to every client: treat your adaptive platform deployment as a research project, not just a technology purchase. Establish baseline data for at least one full academic term before launch. Define the specific metrics you’ll track. Plan for comparison groups where possible—sections using the adaptive platform alongside sections using traditional instruction for the same course. Collect faculty and student satisfaction data alongside outcome data. And commit to reviewing results quarterly, not annually. If the platform isn’t moving the needle by the end of the second term, you need to know that before you’ve committed to a multi-year contract.

A quick note on pricing: most vendors offer per-student-per-course pricing, which scales with your enrollment. Avoid flat-rate enterprise licenses until you have enough students to make the economics work. And always negotiate a pilot period—three to six months at a discounted rate—before signing a full contract.

A Decision Framework for Evaluating Adaptive Platforms

When clients ask me which platform they should buy, my first answer is always: “It depends on what problem you’re trying to solve.” An adaptive math platform solves a different problem than an early-warning analytics system, which solves a different problem than an AI tutoring tool. Conflating them is how institutions end up spending $100,000 on technology that doesn’t address their actual challenge.

Here’s the evaluation framework I use with clients. Before you look at a single product demo, answer these five questions:

1. What specific student outcome are you trying to improve? Pass rates in gateway courses? Retention from first to second year? Time to degree completion? Clinical skills competency? Each of these requires a different technology approach. If you can’t name the outcome, you’re not ready to buy.

2. What does your evidence base look like today? You need pre-implementation data to measure against. If you don’t have baseline pass rates, retention numbers, and disaggregated outcome data, start collecting that before you sign a contract.

3. Is the domain well-structured enough for adaptive learning? Math, introductory chemistry, accounting fundamentals, basic coding—these work well because they have clear prerequisite structures. Creative writing, leadership development, clinical reasoning—these don’t. Be honest about whether your programs are actually suited for algorithmic adaptation.

4. Does the vendor’s evidence match your student population? A platform that shows impressive results at large state universities may not transfer to your 300-student career college serving working adults. Ask vendors specifically about outcomes with populations comparable to yours.

5. What’s your faculty readiness level? The best platform in the world fails if faculty resist it, ignore it, or work around it. Assess your team’s technology comfort and pedagogical flexibility before committing to a tool that will fundamentally change how they teach.

What Actually Happened: Lessons from the Field

The Community College That Started Small and Scaled Smart

A community college in the Southwest was hemorrhaging students in gateway math. Roughly 40% of students enrolled in College Algebra were failing or withdrawing, and for developmental math the numbers were even worse. The administration was under pressure to do something dramatic. The initial proposal from leadership was to replace all math instruction with an adaptive platform—go fully online, cut sections, and let the algorithm handle it.

We pushed back hard. The evidence doesn’t support replacing faculty with algorithms in math—it supports supplementing them. The college launched a pilot: two sections of College Algebra used ALEKS as a supplement alongside the existing instructor, while two sections continued with traditional instruction only. Same instructors, same textbook, same exams. The only difference was the adaptive platform layer.

After one semester, the ALEKS sections showed a pass rate increase of about nine percentage points and a withdrawal rate that dropped meaningfully. Student satisfaction surveys were mixed—some students loved the adaptive practice, others found it repetitive. Faculty reported spending less time on basic skill remediation and more on problem-solving and conceptual discussion. The college expanded the pilot to all gateway math sections the following term, with full faculty training and ongoing support. They did not eliminate any faculty positions—that was a condition we insisted on from the start.

The Online Business School That Learned the Vendor Lesson

A fully online business school contracted with an adaptive platform vendor promising “personalized learning across all business disciplines.” The vendor’s demo was polished. The ROI projections were compelling. The school signed a three-year enterprise license worth over $200,000.

Within the first year, faculty in the accounting and finance courses reported that the adaptive features worked reasonably well—the content was structured, prerequisite chains were clear, and students received useful remediation. But faculty teaching management, marketing, organizational behavior, and business communications found the adaptive features essentially useless. The content in those courses didn’t have the kind of clear knowledge architecture that adaptive algorithms need. Students were being routed through “remediation” that felt random and unhelpful. Several instructors simply stopped assigning the platform.

The school eventually renegotiated to limit the platform to quantitative courses only and added a learning analytics dashboard for other programs. The lesson: one platform rarely fits all programs. If someone tells you their tool personalizes everything, they’re overselling.

What Accreditors Want to See When You Deploy Personalized Learning

Accreditors aren’t anti-technology. But they’re deeply skeptical of technology that isn’t supported by evidence of student learning. If you’re deploying adaptive platforms and presenting that deployment as a strength in your accreditation materials—which you should—here’s what reviewers want to see:

Evidence of improved student outcomes. Not vendor-supplied data. Your data. Pre- and post-implementation comparisons, disaggregated by student demographics. If you can show that adaptive platforms improved pass rates for underprepared students without widening equity gaps, that’s a powerful accreditation narrative.

Documented assessment practices. Accreditors need to see that student learning is being assessed through multiple methods, not solely through the adaptive platform. If your entire grade in a course comes from ALEKS assignments, that’s a problem. Blended assessment strategies that include human-evaluated components are essential.

Faculty training documentation. Show that your instructors aren’t just using the platform—they’re trained in interpreting its data, integrating it into their teaching, and maintaining pedagogical oversight.

Data governance and privacy compliance. FERPA-compliant vendor agreements, data processing addenda, student notification practices, and clear policies about how student interaction data is stored, used, and protected. We covered FERPA in detail in Post 6 of this series—those requirements apply fully here.

Continuous improvement evidence. Accreditors love seeing that you’ve adjusted your approach based on data. If first-semester analytics showed that ESL students were struggling with a platform and you made specific changes—that story is gold in a self-study narrative.

Key Takeaways

1. The strongest evidence for adaptive learning is in structured math and STEM courses, used to supplement (not replace) human instruction. Outside these domains, the evidence base is much thinner.

2. The $180 million Knewton collapse is a cautionary tale every education investor should study. Hype is not evidence. Demand independent, peer-reviewed research for any platform you consider.

3. Over-personalization carries real metacognitive risks. Students who rely on algorithms to manage their learning may struggle when the scaffolding is removed. Design assignments that develop self-regulation alongside platform-based skills.

4. Learning analytics—data dashboards that inform human decision-making—may offer better ROI for many institutions than student-facing adaptive platforms, especially in the early years of operation.

5. Equity must be audited, not assumed. Adaptive platforms trained on non-representative data can systematically disadvantage ESL students, students with disabilities, and students from under-resourced backgrounds.

6. Faculty development is not optional. Expect to invest 30–50 hours per faculty member in year one. Without trained instructors, adaptive technology becomes expensive shelf-ware.

7. Total first-year costs for adaptive platform deployment range from $46,000–$126,000 for small institutions to $131,000–$360,000 for mid-size institutions, with meaningful ROI contingent on rigorous outcome measurement.

8. Start with the problem, not the product. Define the student outcome you’re trying to improve before evaluating any vendor.

Glossary of Key Terms

| Term | Definition |
|---|---|
| Adaptive Learning Platform | Software that uses algorithms to adjust the content, difficulty, sequencing, or pace of instruction based on individual student performance data in real time |
| Learning Analytics | The measurement, collection, analysis, and reporting of data about learners and their contexts to understand and optimize learning and the environments in which it occurs |
| Metacognition | The awareness and regulation of one’s own thinking processes, including the ability to monitor comprehension, evaluate strategies, and adjust learning approaches |
| Knowledge Map | A structured representation of the relationships and prerequisites among concepts within a subject domain, used by adaptive platforms to sequence learning |
| Formative Assessment | Low-stakes evaluations conducted during the learning process to monitor progress and inform instruction, as opposed to summative assessments that evaluate final achievement |
| Learner Model | A data-driven profile of an individual student’s knowledge state, strengths, weaknesses, and learning patterns, maintained by an adaptive system to guide personalization |
| Blended Learning | An instructional approach that combines face-to-face (or synchronous) instruction with technology-mediated learning, designed so the two modes complement each other |
| Disaggregated Data | Outcome data broken down by demographic categories (race, ethnicity, gender, language, disability status, etc.) to reveal disparities hidden in aggregate averages |
| FERPA | Family Educational Rights and Privacy Act—federal law governing the privacy of student education records at institutions receiving federal funding |
| WCAG 2.1 | Web Content Accessibility Guidelines version 2.1—internationally recognized standards for making web content accessible to people with disabilities |
| Hedges’ g | A statistical measure of effect size used in meta-analyses to quantify the difference between two groups, corrected for small sample sizes |
| Cognitive Offloading | The use of external tools (including AI systems) to reduce the mental effort required for a task, which can diminish the development of internal cognitive capacity if overused |


Frequently Asked Questions

Q: How much does it cost to implement adaptive learning for a new institution?

A: Total first-year costs range from roughly $46,000–$126,000 for a small institution with 3–5 programs, and $131,000–$360,000 for mid-size institutions with 8–15 programs. These figures include licensing, implementation, faculty training, content customization, and equity review. Year two and beyond typically drops 30–40% as one-time costs are absorbed. The most commonly underestimated costs are faculty professional development and content customization to align platforms with your specific curriculum.

Q: Does adaptive learning actually improve student outcomes?

A: In structured, well-defined subjects like mathematics and introductory sciences, yes—with caveats. The strongest evidence shows that adaptive platforms supplementing traditional instruction produce meaningful gains (effect sizes around 0.43 in meta-analyses). However, adaptive platforms used as standalone replacements for instruction show outcomes comparable to traditional teaching, not superior. Outside math and structured STEM, the evidence base is considerably thinner. Any vendor who claims universal improvement across all subjects should be treated with skepticism.

Q: Which adaptive learning platform is best?

A: There’s no single best platform. ALEKS has the deepest evidence base for math and chemistry. Knewton Alta (now owned by Wiley) offers affordable adaptive courseware. Realizeit provides more flexibility for custom content in various disciplines. For AI tutoring specifically, Khanmigo is emerging as a strong option. The right platform depends on your specific programs, student population, budget, and what problem you’re trying to solve. Start with the outcome you want, then evaluate platforms against that specific goal.

Q: What’s the risk of over-personalizing instruction?

A: The primary risk is reduced metacognition—students who rely on algorithms to manage their learning may lose the ability to self-monitor, self-regulate, and persevere through productive struggle. Research from multiple studies in 2025 documents that excessive AI reliance can lead to cognitive laziness, cognitive overconfidence, and diminished critical thinking. Mitigate this by building reflection requirements into adaptive assignments, incorporating AI-free assessments, and training faculty to develop students’ self-regulation skills alongside platform-based learning.

Q: Do accreditors care about how we use adaptive learning?

A: Yes, increasingly so. While no regional accreditor has mandated adaptive learning specifically, accreditors evaluate whether your programs produce evidence of student learning, whether your assessment methods are rigorous and varied, whether you protect student data privacy, and whether your approach reflects continuous improvement. A well-documented adaptive learning implementation strengthens your accreditation narrative. A poorly documented one—or one where the platform replaces rather than supplements human instruction—can raise red flags.

Q: How do I know if a vendor’s claims are legitimate?

A: Ask three questions. First: Can you show me independent, peer-reviewed studies of outcomes with your platform? (Not case studies written by your marketing team.) Second: Have those studies been conducted with student populations similar to mine? Third: What’s your platform’s performance with ESL students, students with disabilities, and students from under-resourced backgrounds specifically? If the vendor can’t answer all three, their evidence base isn’t sufficient for a serious institutional investment.

Q: Can adaptive platforms work for trade schools and career programs?

A: They can, but selectively. Adaptive platforms work well for the didactic (knowledge-based) components of career programs—pharmacy calculations, anatomy terminology, electrical code requirements, medical billing codes. They don’t work for hands-on clinical or trade skills, which require performance-based assessment with human evaluators. The best career programs use adaptive technology for knowledge mastery and reserve face-to-face time for applied skills training. This blended approach actually strengthens the overall program design.

Q: What should my data governance look like for adaptive platform data?

A: At minimum: a FERPA-compliant vendor agreement with a data processing addendum prohibiting student data use for model training, clear student notification about data collection, defined data retention periods, documented access controls specifying who at your institution can see which data, and breach notification procedures. Your AI governance policy (covered in Post 9 of this series) should specifically address adaptive platform data. Don’t rely on the vendor’s standard terms—negotiate data protections explicitly before signing.

Q: How should I budget for adaptive learning in my business plan?

A: Include adaptive learning technology as a line item in your Year One budget, distinct from general IT infrastructure. Plan for approximately 3–5% of your total technology budget going to adaptive platform costs, with an additional 1–2% for associated faculty development. Build in a pilot phase—one or two programs in the first year—before committing to institution-wide deployment. This allows you to gather outcome data, refine your implementation approach, and build a stronger case for scaling. Investors and accreditors both appreciate measured, evidence-driven expansion over blanket deployment.

Q: Are learning analytics dashboards a better investment than adaptive platforms for new schools?

A: For many new institutions, yes—at least in the first year or two. Learning analytics dashboards are less expensive, less complex to implement, and provide institutional intelligence that informs human decision-making across all programs. They help you identify at-risk students early, allocate advising resources strategically, and build the data infrastructure you’ll need if you later deploy adaptive platforms. Starting with analytics gives you the baseline data to measure any adaptive platform’s impact when you do deploy one.

Q: What does the research say about “learning styles” in adaptive platforms?

A: The concept that students learn better when instruction matches their preferred learning style (visual, auditory, kinesthetic) has been extensively tested and debunked by cognitive science research over several decades. There is no credible evidence that adapting instructional delivery to match self-reported learning styles improves outcomes. Platforms that market themselves as adapting to learning styles are relying on discredited science. What does work is adapting to demonstrated knowledge gaps, prior performance, and proficiency levels—which is what legitimate adaptive platforms actually do.

Q: How do I handle faculty resistance to adaptive technology?

A: Faculty resistance usually stems from fear of replacement, frustration with tools they weren’t trained to use, or legitimate pedagogical skepticism. Address each directly. Clarify that adaptive platforms augment teaching, not replace it. Invest in hands-on training that lets faculty experience the platform as learners first. And take pedagogical objections seriously—a faculty member who argues that an adaptive platform doesn’t suit their discipline may be right. The goal is integration where it works, not universal mandates.

Q: How fast is the personalized learning market growing, and what does that mean for my institution?

A: The global adaptive learning market is growing at over 50% annually, reaching an estimated $4.39 billion in 2025. This growth means more options, but also more noise. The market is flooded with products of wildly varying quality. For your institution, this means being a discerning buyer matters more than ever. The institutions that benefit are the ones that evaluate platforms on evidence, not marketing.

Q: What’s the most common mistake institutions make with personalized learning?

A: Buying the technology before defining the problem. I’ve watched institutions spend six figures on adaptive platforms because a board member saw a compelling demo, only to discover the platform doesn’t align with their programs and their faculty weren’t consulted. Start with the student outcome you want to improve. Gather baseline data. Evaluate platforms against that specific outcome. Pilot before scaling. And never let a vendor demo substitute for an evidence review.

Current as of March 2026. Regulatory guidance, accreditation standards, and technology platforms evolve rapidly. Consult current sources and expert advisors before making institutional decisions.

If you’re ready to explore how EEC can de-risk your AI-integrated launch, reach out at sandra@experteduconsult.com or +1 (925) 208-9037.
