
What Learning Can Learn From Machine Learning

Over the years, this biweekly letter has provided me with the opportunity to fully and fairly document just how much free time college students can have if they try. My college roommates tried really hard. They found time to make prank calls to the campus literary magazine, create enough frost in our fridge to throw snowballs out the window on 90-degree days, leave old pizza in the entryway for the stated purpose of growing penicillin for a roommate who couldn’t afford antibiotics, and organize campus recruiting events for fake investment banks. When these time-wasting activities required a fake identity, the persona of choice was John W. Moussach Jr., an alumnus turned successful Midwestern industrialist. (We don’t hear enough about successful industrialists these days – another downside of digital transformation.)

Last week I looked online for remnants of John W. Moussach Jr. and came upon neither the Wikipedia page my roommates built after graduating nor the Moussach aphorism that somehow made it onto Wikiquote (“We have all heard the Will Rogers quote ‘I never met a man I did not like.’ In my youth, I met a World War I veteran who had met Will Rogers. The veteran told me, ‘I never met a man I did not like until I met Will Rogers’”), but rather an article on something called Study Sive, which purports to feature higher education news. The article mentioned John W. Moussach Jr. in the second line, but then devolved into Moussach babble:

By famous acclaim, Moussach’s excellent quote is subsequent. Creating John W. Moussach Jr. I took a ton of work for no obvious purpose. The equal is alas proper of the kingdom of online getting to know. Sure, tens of many online degree programs within the U.S. Have made better training more available than ever earlier. But in stark assessment to the effect of online transport on each other service, to date, online studying has didn’t make American higher schooling greater affordable.
It turns out that the Study Sive article, ostensibly written by Cindy G. Fryer, an unusually attractive “social media evangelist and certified beer guru,” and dated January 9, 2022, was actually a 2019 Gap Letter mangled by some bad algorithm that substituted a synonym for every other word – an attempt to avoid detection that can fully and fairly be called Moussachian.

[Image: “Cindy Fryer” – Study Sive]

“Cindy Fryer,” who “writes” all the “articles” on Study Sive, is an example of bad artificial intelligence (AI), or perhaps artificial stupidity. But in my daily digital encounters, “Cindy” is the exception, not the rule. We’re all experiencing more and more good AI. 52% of companies have accelerated AI adoption due to Covid-19, and 86% agree that AI is becoming mainstream. In higher education, AI is powering online discussion boards and chatbots that improve student outcomes. Last summer Times Higher Education postulated that AI “will soon be able to research and write essays as well as humans can.” Last month Google/Alphabet announced that its DeepMind subsidiary had built an AI algorithm that can read and respond to questions at a high school level.
While what or who powers “Cindy” may never be uncovered, what powers good AI is machine learning. Machine learning initially comprised human-constructed algorithms that parsed data and predicted outcomes. If an outcome was incorrect, a human had to adjust the algorithm. But that all seems as ancient as Will Rogers. Machines now learn via software that mimics the networks of neurons in our brains. Data goes in, outcomes come out, same as before. But in between are layer upon layer of digital “neurons.” Today’s machine learning involves constructing giant mathematical models by pairing massive amounts of input data with correct outcomes, then training the software to adjust the connections among its “neurons” – descending the steepest gradient of error – until inputs reliably yield correct outcomes. The resulting model or network is ultimately able to accurately recognize, classify, or predict outcomes, or make correct decisions, without any human programming – or even human understanding of how it works.
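For the technically curious, here’s what that loop looks like in miniature: a toy Python sketch (every number and name here is purely illustrative, not any production system) that trains a tiny one-hidden-layer network by gradient descent to learn XOR, a classic outcome no single-layer model can produce. Scale the eight “neurons” up to billions and this is, conceptually, the loop that powers good AI.

```python
# Toy illustration of the loop described above: data goes in, a layered
# model produces an outcome, and the weights are nudged down the error
# gradient until the outcomes come out right. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs paired with correct outcomes (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of digital "neurons" between input and outcome.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate: how far to step down the gradient
for step in range(5000):
    # Forward pass: input -> hidden "neurons" -> predicted outcome.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of the squared error with respect to
    # each weight, then a step in the steepest direction of improvement.
    grad_p = (p - y) * p * (1 - p)
    grad_W2 = h.T @ grad_p
    grad_b2 = grad_p.sum(axis=0)
    grad_h = grad_p @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(np.round(p, 2))  # typically approaches the correct outcomes [0, 1, 1, 0]
```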

If that sounds a bit tricky, it’s because it is. GPT-3, the natural language processing (NLP) engine released by OpenAI in 2020, was trained on 45 terabytes of data and produced a model with 175 billion parameters. Within about a year, successor models like Wu Dao 2.0 boasted 1.75 trillion parameters – 10x growth. As machine learning advances correlate with the volume of available data, Covid’s acceleration of digital transformation is further speeding up a field that was already progressing at an unfathomable pace.
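To make those parameter counts concrete, some back-of-envelope arithmetic (this sketch assumes each parameter is stored as a 2-byte number, a common but not universal choice, so treat the figures as rough orders of magnitude):

```python
# Rough scale of the models mentioned above, assuming 2 bytes per
# parameter (16-bit weights) -- storage formats vary in practice.
for name, params in [("175 billion", 175e9), ("1.75 trillion", 1.75e12)]:
    gigabytes = params * 2 / 1e9
    print(f"{name} parameters ~ {gigabytes:,.0f} GB just to store the weights")
# 175 billion parameters ~ 350 GB just to store the weights
# 1.75 trillion parameters ~ 3,500 GB just to store the weights
```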

This is why, in the pantheon of skills gaps, the data/machine learning/AI skills gap is the most consequential, even existential. And why, just before Christmas, the Senate voted unanimously to pass the AI Training Act, a bill focused on educating government civilian leaders on AI. And why, just after Christmas, President Biden signed into law the National Defense Authorization Act for 2022, which includes a number of provisions on AI training, among them the establishment of a new community college for the Navy.
An AI-for-dummies explanation won’t close this gap. But the basics of machine learning yield a few lessons (or gradients) for K-12 and postsecondary education, which haven’t changed nearly enough since the days of our parents and grandparents, and which may bear some responsibility for your inability to understand machine learning. (So don’t think of this as AI-for-dummies, but rather AI-for-smart-people-failed-by-the-education-establishment.)
While machines have made remarkable progress when it comes to learning, humans need help. Here are a few lessons for schools from the bright new star in the learning firmament:
1. Importance of Clear Learning Outcomes
Machines only learn when the desired outcome is clear, i.e., when a clear output or objective function can be defined: what exactly are we trying to get the subject to learn? In contrast, the vast majority of degree programs and courses for humans don’t start with clear learning outcomes, and neither do individual classes. They start with what faculty want to teach (typically what they’ve always taught, and often took themselves as students). When’s the last time you heard of an instructor starting a class with a clear expression of a learning outcome? To the extent four-year colleges and universities have learning outcomes, they’re an accreditation-process-driven afterthought, expressed in terms so broad they’d be as fruitless for machine learning as they are for human learning (see, e.g., English 2700 at Cal State L.A.: “analyze a text’s relationships to its cultural contexts” and “read intratextually and intertextually, making comparative connections within the texts themselves and with other literary works”).
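In machine learning terms, a “clear learning outcome” is literally a function – something that returns a score for how close the learner came. A hypothetical Python sketch of the contrast (names and numbers illustrative):

```python
# What "a clear output or objective function" means in practice: the
# outcome must be scorable. Hypothetical contrast:
#
# Vague outcome -- nothing can learn toward it, because nothing can score it:
#   "analyze a text's relationships to its cultural contexts"
#
# Clear outcome -- an objective function that returns a number to improve:
def objective(predicted: list[float], correct: list[float]) -> float:
    """Mean squared error: 0.0 means every outcome was exactly right."""
    return sum((p - c) ** 2 for p, c in zip(predicted, correct)) / len(correct)

print(objective([0.9, 0.1], [1.0, 0.0]))  # 0.01 -- close to the goal
print(objective([0.5, 0.5], [1.0, 0.0]))  # 0.25 -- farther away
```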
While K-12 does better in this regard (public schools are required to meet state standards), few standards would pass machine learning muster in terms of clarity. And although new higher ed platforms like eLumen aim to reorient curriculum development around outcomes (in eLumen’s case, starting with the institutions facing the greatest urgency, i.e., community colleges), we’re unlikely to see major changes for a few more years; as students continue to vote with their feet, non-selective institutions will have no choice but to unbundle and simplify current programs into linked series of skills-based learning experiences.
2. Primacy of Assessment
Only once we can assess that a given algorithm, model, or network is producing correct classifications or predictions can a machine begin to learn. So the twin quasar of a clear learning outcome is assessment. And while every K-12 and postsecondary course incorporates assessments, and while programs and courses with clear(er) learning outcomes are less likely to shy away from rigorous summative assessments, it’s relatively rare to find assessments closely tied to learning outcomes.
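The machine learning analogue of a rigorous summative assessment is evaluation on held-out data the model never trained on. A minimal Python sketch, with purely illustrative names and a trivial stand-in “model” (real systems vary the split ratios and metrics):

```python
# The machine-learning version of "assessment tied to the outcome":
# grade the model on held-out examples it never saw during training.
def train_test_split(examples, holdout_fraction=0.2):
    cut = int(len(examples) * (1 - holdout_fraction))
    return examples[:cut], examples[cut:]

def accuracy(model, test_set):
    correct = sum(1 for x, label in test_set if model(x) == label)
    return correct / len(test_set)

# Usage with a trivial stand-in "model" that predicts whether x is even:
examples = [(n, n % 2 == 0) for n in range(100)]
train, test = train_test_split(examples)
print(accuracy(lambda x: x % 2 == 0, test))  # 1.0 -- passes the assessment
```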
If the combination of learning outcomes and assessments rings a bell, that bell is probably competency-based learning. Commencing not with curriculum but rather with the competencies graduates are expected to exhibit (as expressed by employers, for example), competency-based programs and courses are architected around assessments that test for desired competencies. Then and only then do we turn to the task of developing curricula that best prepare students for these assessments. Ballyhooed 20 years ago, and even more 10 years ago, competency-based education has – with the notable exception of online everyday-low-pricing leader Western Governors University – been a bust. Employers don’t understand competency-based programs and haven’t aligned hiring systems and processes accordingly. And students don’t care; in the absence of paired competency-based hiring, they see little difference between (online) competency-based programs and run-of-the-mill online programs.
3. Iterative Improvement
Machine learning is hard to effect on a human scale. Imagine if a supersized school district took a million similarly situated students and saw who performed best on an assessment tightly linked to a clear learning outcome. And then did the same thing across a thousand learning outcomes. And then repeated the assessment a million times, each time iterating curricula and delivery to teach all students the way the highest performing students were taught. One can imagine a thousand (or million) angry school board meetings.
But machine learning’s most important lesson for learning is simply to watch what works and do more of that. There are many proven instructional practices that teachers and faculty simply disregard. Practices like active learning, peer learning, and frequent formative assessments and small assignments (scaffolding) are supported by a great deal of evidence. The acceleration of online learning and the data collected by learning management systems are making it easier to determine which practices correlate with student engagement and performance. But you’d be hard pressed to learn about any of these from observing what’s happening in classrooms – real or virtual.
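“Watch what works and do more of that” is, almost verbatim, how the simplest learning algorithms operate: try a small variation, keep it only if the assessment score improves. A toy Python sketch (names illustrative; real training uses gradients, but the keep-what-works logic is the same):

```python
# "Watch what works and do more of that," as a machine would do it:
# propose a small change, keep it only if the assessment improves.
import random

random.seed(0)
target = [3.0, -2.0, 5.0]          # the "correct outcomes"
practice = [0.0, 0.0, 0.0]         # the current way of teaching

def score(p):                      # the assessment: less error is better
    return -sum((a - b) ** 2 for a, b in zip(p, target))

best = score(practice)
for _ in range(10_000):
    trial = [v + random.gauss(0, 0.1) for v in practice]
    if (s := score(trial)) > best:           # did the change work?
        practice, best = trial, s            # ...then do more of that
print([round(v, 1) for v in practice])       # converges toward target
```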
The Chronicle of Higher Education recently ran a piece titled “Why the Science of Teaching Is Often Ignored,” concluding that resistance to evidence-based teaching stems from a combination of skepticism that learning can truly be measured, a lack of incentives to improve teaching, and the small scale of educational research. (The U.S. spends about $1 billion each year on all education research. That’s 0.07% of the $1.5 trillion spent each year on education and training – much less than comparable ratios for other major economic sectors.) Because research into what works isn’t taken seriously, faculty training isn’t either. To the extent that colleges and universities provide systematic feedback to faculty at all, the medium is course evaluations, which can relate as much to whether professors “dress like a cougar or a vagabond or like your grandpa” as to teaching and learning. One thing we can say for machines: you don’t see them drawing gradients to improve their wardrobes.
“You people are just vectors of disease to me.”
– Professor Barry Mehler to his students
On January 9, Ferris State University humanities professor Barry Mehler kicked off spring semester with a recorded video discouraging students from showing up to a class the university was requiring him to teach in person. First he claimed that, like a machine, he randomly assigned grades: “I just look at the number and I assign a grade.” Then he argued there’s “no benefit whatsoever from coming to class”: “Everything you need to earn an A is available to you on… Canvas.” If he hadn’t been placed on administrative leave, his profanity-laced rant would have yielded some interesting comments in end-of-semester course evaluations.
Holding up machine learning as a mirror to our beleaguered systems of education reminds us that when it comes to learning outcomes, schools may be serving students better than Professor Mehler, but in many cases not by much. Assessments linked to clear learning outcomes are rare, evidence-based teaching practices are widely ignored, and the system overall is more loosey-goosey/anything-goes than any learning model a data scientist would recognize.
There are many reasons K-12 and higher education can’t run like a machine. Impracticality aside, teaching children and young adults like we teach machines would be disastrous for soft skills like self-awareness, empathy, and communication, not to mention creativity or ethics. But even if professors can stop swearing at students, there’s ample reason to doubt that our learning systems as currently architected will keep us ahead of the machines. And that includes dumb machines like Cindy Fryer. So taking a few bits and bytes from machine learning could go a long way to addressing the harder, more easily defined skills sought by all employers. And according to my college roommates, that still includes successful Midwestern industrialist John W. Moussach Jr.
Source: https://www.forbes.com/sites/ryancraig/2022/01/21/what-learning-can-learn-from-machine-learning/